\section{Introduction}
The production of a photon (or an isolated photon) in association with
hadrons in $e^+e^-$ annihilation is a useful process to learn about the
differences in the properties of $q\bar{q}\gamma$ and $q\bar{q}g$ final
states, to measure the parton-photon fragmentation function and to test
QCD predictions in a channel crossed to photon-photon annihilation. The
corresponding theoretical problems are well understood in the case of
prompt photon production at hadron colliders, photo-production of jets
and heavy flavor and photon-photon scattering. It is an important
development that experiments at LEP give us high statistics data and
open the possibility to study even photon plus multijet final states
\cite{LEP,OPAL}. The better data call for a quantitative QCD
description.
The QCD description of inclusive photon production has a
simple, but important feature: the photon has a hadronic component. In the
perturbative treatment this fact is reflected by the appearance of
collinear photon-quark singularities. In order to obtain well defined
cross sections in perturbative QCD\ in all orders of the running coupling $\alpha_s$,
these singularities are to be subtracted and absorbed into the photon
fragmentation functions (factorization theorem) \cite{Aletal,FP}. The
fragmentation functions of the photon satisfy an inhomogeneous evolution
equation; they grow with $Q^2$ and are therefore called ``anomalous''
\cite{W,Detal,WZ}.
It is also interesting
to study the case of an isolated photon. Physical isolation means that we
isolate the photon from hadrons, and so we cannot make a distinction
between quarks and gluons. Gluons, however, cannot be
isolated completely from the photon without destroying the cancellation
of soft gluon singularities between the virtual and real gluon
corrections.
Therefore, a physical isolation cannot eliminate
completely the collinear photon-quark singularities,
and so, even in the case of isolated photon production,
the cross section contains an ``anomalous'' (non-perturbative) piece.
This problem has been recognized clearly in the
next-to-leading order QCD study of isolated photon production at hadron
colliders \cite{ABF,BQ}. The theoretical subtleties of defining an
isolated photon cross section in perturbative QCD, however, have not been clearly
formulated in previous studies in the case of $e^+e^-$ annihilation
\cite{KL,KS}.
In section 2 we review the next-to-leading order description of
the inclusive (non-isolated) photon production.
In section 3 we outline the change in the formalism due to
the introduction of isolation cuts for the photon production.
We point out that isolation cannot completely eliminate
the non-perturbative fragmentation contribution, although it can
reduce its size. In section 4 a detailed perturbative study is
given for the cross section of isolated photon plus jet
production up to order ${\cal O}(\alpha\alpha_s)$.
We review the mechanisms of the cancellation of the infrared
singularities and point out that in perturbation theory for processes
containing a photon in the final state the definition of
a finite hard scattering cross section requires a counter-term which
necessarily introduces an unphysical parameter.
Section 5 contains our numerical results for the isolated photon plus
$n$-jet production at LEP. To demonstrate the flexibility of our
numerical program to calculate any jet shape parameters, we calculate
the distribution of the photon transverse momentum with respect to the
thrust axis as well. The last section contains our conclusions.
\section{Inclusive photon production in $e^+ e^-$ annihilation}
According to the factorization theorem, the physical cross section
of inclusive photon production is obtained by folding
the fragmentation functions $D_{\gamma /a}(x,\mu_f)$ with the finite
hard-scattering cross sections ${\rm d} \hat{\sigma}_a$:
\begin{equation}
\label{inclusive}
\frac{{\rm d} \sigma_\gamma}{{\rm d} E_\gamma}=\sum_a
\int_0^{\sqrt{s}/2}{\rm d} E_a\,\int_0^1{\rm d} x\,D_{\gamma /a}(x,\mu_f)
\frac{{\rm d} \hat{\sigma}_a}{{\rm d} E_a}(E_a,\mu, \mu_f, \alpha_s(\mu))
\delta(E_\gamma- xE_a),
\end{equation}
where $\alpha_s(\mu)$ is the strong coupling constant at the ultraviolet
renormalization scale $\mu$ and $\mu_f$ is the factorization scale.
It is instructive to investigate the decomposition of this generally
valid expression up to next-to-leading order. First we remark that
\begin{equation}
\label{dff}
D_{\gamma/\gamma}(x)=\delta(1-x)+{\cal O}(\alpha^2),
\end{equation}
therefore, to leading order in the electromagnetic coupling, the term in
eq.\ (\ref{inclusive}) given by $a=\gamma$ is a purely perturbative
contribution. We use this equation to eliminate $D_{\gamma/\gamma}(x)$
from eq.\ (\ref{inclusive}). The hard scattering cross section
$d\hat{\sigma}_{\gamma}/dE_{\gamma}$ is of order $\alpha$ in comparison
to the leading order annihilation cross section $\sigma_0$.\footnote{
In the following analysis, when the order of a contribution is given, it
is always understood in comparison to the leading order annihilation
cross section $\sigma_0$.} The leading non-perturbative part given by
the fragmentation function, however, is of order $\alpha/\alpha_s$.
This contribution is the ``anomalous'' photon component. Its
enhanced order is due to the fact that the scale dependence of the
fragmentation functions $D_{\gamma/a}(x,\mu_f)$, $a=q,\bar{q},g$, is
given by the inhomogeneous renormalization group equations \cite{KWZ,O}:
\begin{equation}
\mu\frac{\partial}{\partial\mu}D_{\gamma/a}(x,\mu)=
\frac{\alpha}{\pi} P_{\gamma/a}(x)
+ \frac{\alpha_s}{\pi} \sum_{b}
\int\frac{{\rm d} y}{y}\,D_{\gamma/b}\left(\frac{x}{y},\mu\right)P_{b/a}(y),
\end{equation}
where $P_{b/a}(x)$ denote the Altarelli-Parisi splitting functions.
To order $\alpha\alpha_s$ the inhomogeneous terms have the expressions
\cite{FP}
\begin{equation}
P_{\gamma/a}(x)=P^{(0)}_{\gamma/a}(x)+
\frac{\alpha_s}{2\pi}P^{(1)}_{\gamma/a}(x),
\end{equation}
where
\begin{equation}
\label{FPq}
P^{(0)}_{\gamma/q}(x)=e_q^2\frac{1+(1-x)^2}{x},
\quad\quad
P^{(0)}_{\gamma/g}(x)=0,
\end{equation}
and after trivial replacement of the color factors in eq.\ (12) of
ref.\ \cite{FP}, we have
\begin{eqnarray}
\lefteqn{P^{(1)}_{\gamma/q}(x)=} \\
& & e_q^2 C_F\left\{-\frac{1}{2}+\frac{9}{2}x+
\left(-8+\frac{1}{2}x\right)\log{x}+2x\log{(1-x)}+
\left(1-\frac{1}{2}x\right) \log^2{x}\right. \nonumber \\
& & \left.+\left[\log^2{(1-x)}+4\log{x}\log{(1-x)}+8\,{\rm Li}_2(1-x)-
\frac{4}{3}\pi^2\right]P^{(0)}_{\gamma/q(\bar{q})}(x)\right\},\nonumber
\end{eqnarray}
\begin{eqnarray}
\label{FPg}
P^{(1)}_{\gamma/g}(x)&=&\langle e_q^2\rangle T_R
\left\{-4+12x-\frac{164}{9}x^2+\frac{92}{9}x^{-1}\right.\nonumber \\
& &\left.+\left(10+14x+\frac{16}{3}x^2+\frac{16}{3}x^{-1}\right)\log{x}
+2(1+x)\log^2{x}\right\}. \nonumber
\end{eqnarray}
In the last equation,
\begin{equation}
\langle e_q^2 \rangle \equiv \sum_{q=1}^{N_F} e_q^2,
\end{equation}
where $N_F$ is the number of flavors.\footnote{We assume $e^+e^-$
annihilation via virtual photon. In order to obtain formulas valid at
the $Z^0$ peak, trivial modifications of charge factors are required.}
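As a numerical cross-check, the leading-order splitting functions of eq.\ (\ref{FPq}) and the gluon-to-photon term of eq.\ (\ref{FPg}) are simple enough to transcribe directly. The following sketch is illustrative only (function names are ours; the charge factors $e_q^2$, $\langle e_q^2\rangle$ and $T_R$ are passed in explicitly, and the dilogarithm-bearing quark term $P^{(1)}_{\gamma/q}$ is omitted):

```python
import math

def P0_gamma_q(x, e_q2):
    """LO quark-to-photon splitting function e_q^2 (1 + (1-x)^2)/x."""
    return e_q2 * (1.0 + (1.0 - x) ** 2) / x

def P0_gamma_g(x):
    """The gluon-to-photon splitting function vanishes at leading order."""
    return 0.0

def P1_gamma_g(x, e2avg, TR=0.5):
    """NLO gluon-to-photon splitting function of eq. (FPg);
    e2avg is the charge sum <e_q^2> defined in the text."""
    L = math.log(x)
    return e2avg * TR * (
        -4.0 + 12.0 * x - (164.0 / 9.0) * x ** 2 + (92.0 / 9.0) / x
        + (10.0 + 14.0 * x + (16.0 / 3.0) * x ** 2 + (16.0 / 3.0) / x) * L
        + 2.0 * (1.0 + x) * L ** 2
    )
```

An elementary check is that $P^{(1)}_{\gamma/g}$ vanishes at the endpoint $x=1$: the logarithms vanish there and $-4+12-164/9+92/9=0$.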
The unique solution of these inhomogeneous equations requires
non-perturbative input\footnote{In the literature it is usually called
Vector Meson Dominance (VMD) contribution \cite{GGReya,O,Aletal}.} at a
certain initial scale
$\mu$. At asymptotically large values of $\mu$, however, the solutions
are independent of the initial values and one obtains
\begin{eqnarray}
\label{asym1}
\lim_{\mu\rightarrow\infty}D_{\gamma/q}(x,\mu)&=&
\frac{\alpha}{2\pi}\log\frac{\mu^2}{\Lambda^2} a_{\gamma/q}(x),\\
\label{asym2}
\lim_{\mu\rightarrow\infty}D_{\gamma/g}(x,\mu)&=&
\frac{\alpha}{2\pi}\log\frac{\mu^2}{\Lambda^2} a_{\gamma/g}(x).
\end{eqnarray}
Exact analytic expressions for the Mellin transforms of the
$a_{a/\gamma}(x)$ functions have been found in refs.\ \cite{W,Detal}.
These are related to the $a_{\gamma/a}$ functions via crossing. It is
useful, however, to have a parametrization in $x$-space. Formulas which
accurately reproduce the exact leading logarithmic solutions were given
in ref.\ \cite{DukeO}:
\begin{eqnarray}
a_{\gamma/q}(x)&=&e_q^2\frac{1}{x}\left[
\frac{2.21-1.28x+1.29x^2}{1-1.63\log(1-x)}x^{0.049}
+0.002(1-x)^2x^{-1.54}\right],\\
a_{\gamma/g}(x)&=&\frac{1}{x}[0.0243(1-x)^{1.03}x^{-0.97}].
\end{eqnarray}
A new parametrization of the photon fragmentation functions
is described in ref.\ \cite{Auetal}.
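For orientation, the Duke-Owens fits quoted above are straightforward to transcribe; a minimal sketch for numerical work (function names are ours, and the quark charge $e_q^2$ is passed in explicitly):

```python
import math

def a_gamma_q(x, e_q2):
    """Duke-Owens x-space fit for the quark coefficient a_{gamma/q}(x)."""
    return (e_q2 / x) * (
        (2.21 - 1.28 * x + 1.29 * x ** 2)
        / (1.0 - 1.63 * math.log(1.0 - x)) * x ** 0.049
        + 0.002 * (1.0 - x) ** 2 * x ** (-1.54)
    )

def a_gamma_g(x):
    """Duke-Owens x-space fit for the gluon coefficient a_{gamma/g}(x)."""
    return (1.0 / x) * 0.0243 * (1.0 - x) ** 1.03 * x ** (-0.97)
```

Both coefficients are positive on $(0,1)$, and the $1/x$ prefactor makes the quark coefficient grow strongly towards small $x$.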
The most striking feature of these solutions is that they
increase as $1/\alpha_s$ with increasing evolution scale.
Therefore, at high energy the contribution from the quark fragmentation
into a photon gives the leading order ($\alpha/\alpha_s$) term
\begin{equation}
\frac{ {\rm d}\sigma_{\gamma}^{(0)} }{ {\rm d} E_{\gamma}}=
\sigma_0 \frac{4}{\sqrt{s}}\sum_q \frac{e_q^2}{\langle e_q^2\rangle}
D_{\gamma/q}\left(\frac{2E_\gamma}{\sqrt{s}},\mu\right)
+ {\cal O}(\alpha).
\end{equation}
In next-to-leading order, the $\mu$ dependence of $D_{\gamma/q}$ has to
be calculated with the next-to-leading order evolution equation and we
should also add the order $\alpha$ hard scattering cross section
\begin{eqnarray}
\lefteqn{\frac{{\rm d}\sigma_\gamma}{{\rm d} E_\gamma}=
\sigma_0\frac{4}{\sqrt{s}}\sum_q \frac{e_q^2}{\langle e_q^2\rangle}
D_{\gamma/q}\left(\frac{2E_{\gamma}}{\sqrt{s}},\mu\right)}\nonumber \\
&& + \sum_{a}\int_0^{\sqrt{s}/2}{\rm d} E_a\,\int_0^1{\rm d} x\,
D_{\gamma /a}(x,\mu_f)\frac{{\rm d} \hat{\sigma}^{(1)}_a}{{\rm d} E_a}
(E_a,\mu, \mu_f, \alpha_s(\mu))\delta(E_\gamma- xE_a) \nonumber \\
&& + \ \frac{{\rm d} \hat{\sigma}^{(0)}_{\gamma}}{{\rm d} E_{\gamma}}
(E_{\gamma},\mu_f, \alpha_s(\mu))\ + \ {\cal O}(\alpha\alpha_s),
\end{eqnarray}
where ${\rm d} \hat{\sigma}^{(1)}_a/{\rm d} E_a$ denotes the order
$\alpha_s$ cross section of quark and gluon production.
The ${\cal O}(\alpha )$ hard-scattering cross section
${\rm d}\hat{\sigma}_\gamma^{(0)}/{\rm d} E_\gamma$ is defined by
subtracting the photon-quark collinear singularity in the $\overline{{\rm MS}}$\ scheme
\begin{equation}
\frac{{\rm d}\hat{\sigma}_\gamma^{(0)}}{{\rm d} E_\gamma}=
\lim_{\varepsilon\rightarrow 0}\left(\frac{{\rm d}\tilde{\sigma}^{(0)}}
{{\rm d} E_\gamma} +\frac{{\rm d}\sigma^{(0)}_{\rm CT}}{{\rm d} E_\gamma}\right),
\end{equation}
where the first term on the right hand side is the partonic cross
section in $4-2\varepsilon$ dimensions as defined by Feynman diagrams
\begin{equation}
\label{bare}
\frac{{\rm d}\tilde{\sigma}^{(0)}}{{\rm d} E_\gamma}=
\sigma_0\sum_q \frac{e_q^4}{\langle e_q^2 \rangle} \frac{\alpha}{2\pi}
\frac{2}{\sqrt{s}}H\left(\frac{4\pi\mu^2}{s}\right)^\varepsilon
\frac{1}{\Gamma(1-\varepsilon)}\int{\rm d} y_{12}\,{\rm d} y_{13}\,{\rm d} y_{23}\,
\theta(1- y_{13}-y_{23})(y_{12}y_{13}y_{23})^{-\varepsilon}
\end{equation}
\[ \times\left[(1-\varepsilon)\left(\frac{y_{23}}{y_{13}}
+\frac{y_{13}}{y_{23}}\right)
+\frac{2y_{12}-\varepsilon y_{13}y_{23}}{y_{13}y_{23}}\right]
\delta(1-y_{12}-y_{13}-y_{23})
\delta\left(1-y_{12}-\frac{2E_\gamma}{\sqrt{s}}\right),\]
where $H=1+{\cal O}(\varepsilon)$, while the second term
is the $\overline{{\rm MS}}$\ counter-term
\begin{equation}
\label{CT}
\frac{{\rm d}\sigma^{(0)}_{\rm CT}}{{\rm d} E_\gamma}=\frac{\alpha}{2\pi}
\frac{(4\pi)^\varepsilon}{\varepsilon\Gamma(1-\varepsilon)}
\sum_q \int_0^{\sqrt{s}/2}{\rm d} E_q\,\int_0^1{\rm d} x\,
P^{(0)}_{\gamma/q}(x)\frac{{\rm d}\hat{\sigma}^{(0)}_q}{{\rm d} E_q}(E_q)
\delta(E_\gamma-xE_q).
\end{equation}
The integrations in eqs.\ (\ref{bare}), (\ref{CT}) are easily performed.
The collinear poles cancel in their sum. Setting $\varepsilon=0$,
one obtains
\begin{equation}
\label{hardf}
\frac{{\rm d}\hat{\sigma}_\gamma^{(0)}}{{\rm d} E_\gamma} =
\sigma_0\frac{\alpha}{2\pi}\frac{2}{\sqrt{s}}
\sum_{q=1}^{2N_F}
\frac{e^2_q}{\langle e_q^2\rangle}
P^{(0)}_{\gamma/q}(x_\gamma)\log\left(
\frac{s(1-x_\gamma)x^2_\gamma}{\mu^2}\right),
\end{equation}
where $x_\gamma=2E_\gamma/\sqrt{s}$.
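To get a feeling for eq.\ (\ref{hardf}), one can tabulate the per-flavour factor numerically. The sketch below (our naming; the overall normalization $\sigma_0$, $\alpha/2\pi$, $2/\sqrt{s}$ and the charge average are omitted) shows in particular that the $\overline{{\rm MS}}$\ expression is driven negative as $x_\gamma\to1$, where $\log(1-x_\gamma)\to-\infty$:

```python
import math

def hard_factor_msbar(x_gamma, s, mu2, e_q2):
    """Per-flavour factor P0_{gamma/q}(x) * log(s (1-x) x^2 / mu^2)
    of the MS-bar hard cross section (normalization omitted)."""
    P0 = e_q2 * (1.0 + (1.0 - x_gamma) ** 2) / x_gamma
    return P0 * math.log(s * (1.0 - x_gamma) * x_gamma ** 2 / mu2)
```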
The ${\cal O}(\alpha_s)$ corrections to the ${\rm d}\hat{\sigma}_{q,g}$
hard-scattering cross sections are defined by the Feynman diagrams
of fig.\ 1. First we note that ${\rm d}\hat{\sigma}^{(1)}_g$ can be
obtained from ${\rm d}\hat{\sigma}^{(0)}_\gamma$ by modifying the
charge factors:
\begin{equation}
\frac{{\rm d}\hat{\sigma}^{(1)}_g}{{\rm d} E_g}=C_F\sigma_0\frac{\alpha_s}{2\pi}
\frac{4}{\sqrt{s}}N_FP^{(0)}_{g/q}(x_g)
\log\left(\frac{s(1-x_g)x^2_g}{\mu^2}\right).
\end{equation}
The cross section ${\rm d}\hat{\sigma}_q^{(1)}$ receives both real and
virtual corrections. The loop correction can be written as
\begin{equation}
\frac{{\rm d}\sigma_{\rm loop}}{{\rm d} E_q}=
C_F\sigma_0 \frac{\alpha_s}{2\pi}
\frac{2}{\sqrt{s}}H\left(\frac{\mu^2}{s}\right)^\varepsilon
\frac{(4\pi)^\varepsilon}{\varepsilon\Gamma(1-\varepsilon)}
\left[-\frac{2}{\varepsilon}-3+(\pi^2-8)\varepsilon\right]
\delta\left(\frac{\sqrt{s}}{2}-E_q\right).
\end{equation}
The Bremsstrahlung contribution has an expression similar to
${\rm d}\tilde{\sigma}^{(0)}/{\rm d} E_\gamma$ (\ref{bare}):
\begin{equation}
\frac{{\rm d}\sigma_{\rm real}}{{\rm d} E_q}=
C_F\sigma_0\frac{e_q^2}{\langle e_q^2\rangle}\frac{\alpha_s}{2\pi}
\frac{2}{\sqrt{s}}H\left(\frac{4\pi\mu^2}{s}\right)^\varepsilon
\frac{1}{\Gamma(1-\varepsilon)}\int{\rm d} y_{12}\,{\rm d} y_{13}\,{\rm d} y_{23}\,
\theta(1- y_{13}-y_{23})(y_{12}y_{13}y_{23})^{-\varepsilon}
\end{equation}
\[ \times\left[(1-\varepsilon)\left(\frac{y_{23}}{y_{13}}
+\frac{y_{13}}{y_{23}}\right)
+\frac{2y_{12}-\varepsilon y_{13}y_{23}}{y_{13}y_{23}}\right]
\delta(1-y_{12}-y_{13}-y_{23})
\delta\left(1-y_{23}-\frac{2E_q}{\sqrt{s}}\right).\]
The sum of the loop and Bremsstrahlung contributions has the simple
expression
\begin{eqnarray}
\lefteqn{\frac{{\rm d}\tilde{\sigma}^{(1)}_q}{{\rm d} E_q}=
C_F\sigma_0 \frac{\alpha_s}{2\pi}\frac{2}{\sqrt{s}}H
\frac{(4\pi)^\varepsilon}{\varepsilon\Gamma(1-\varepsilon)}}\\
&\times &\nonumber \left\{-P^{(0)}_{q/q}(x_q)
+\varepsilon\left[P^{(0)}_{q/q}(x_q)\log\left(\frac{s}{\mu^2}\right)
+\left(\frac{2}{3}\pi^2-\frac{9}{2}\right)\delta(1-x_q)
+2\log x_q\frac{1+x_q^2}{1-x_q}\right.\right. \\
& &\nonumber \left.\left.
+(1+x_q^2)\left(\frac{\log(1- x_q)}{1-x_q}\right)_+
-\frac{3}{2}\left(\frac{1}{1-x_q}\right)_+-\frac{3}{2}x_q
+\frac{5}{2}\right]\right\},
\end{eqnarray}
where the index + denotes the usual ``+ prescription'' of regularizing
singular behavior at $x_q=1$. The remaining single pole is cancelled
when one adds the $\overline{{\rm MS}}$\ counter-term ${\rm d}\sigma^{(1)}_{\rm CT}$, which is
defined as
\begin{equation}
\frac{{\rm d}\sigma^{(1)}_{\rm CT}}{{\rm d} E_q}=
C_F\sigma_0\frac{\alpha_s}{2\pi}\frac{2}{\sqrt{s}}
\frac{(4\pi)^\varepsilon}{\varepsilon\Gamma(1-\varepsilon)}
P^{(0)}_{q/q}\left(\frac{2E_q}{\sqrt{s}}\right).
\end{equation}
The final result is obtained after setting $\varepsilon=0$:
\begin{eqnarray}
\lefteqn{\frac{{\rm d}\hat{\sigma}^{(1)}_q}{{\rm d} E_q}=
C_F\sigma_0 \frac{\alpha_s}{2\pi}\frac{2}{\sqrt{s}}}\\
&\times &\nonumber
\left\{P^{(0)}_{q/q}(x_q)\log\left(\frac{s}{\mu^2}\right)
+\left(\frac{2}{3}\pi^2-\frac{9}{2}\right)\delta(1-x_q)
+2\log x_q\frac{1+x_q^2}{1-x_q}\right. \\
& &\nonumber \left.
+(1+x_q^2)\left(\frac{\log(1- x_q)}{1-x_q}\right)_+
-\frac{3}{2}\left(\frac{1}{1-x_q}\right)_+-\frac{3}{2}x_q
+\frac{5}{2}\right\},
\end{eqnarray}
where $x_q=2E_q/\sqrt{s}$. This result can also be deduced, after a
trivial replacement of color factors, from the coefficient functions of
inclusive single hadron production calculated in ref.\ \cite{Aletal}.
The theoretical input described in this section is sufficient to
extract the photon fragmentation functions from experimental data
to next-to-leading order accuracy. A complete analysis requires
the measurement of the inclusive photon production cross section at
various energies. The recent LEP data give information at the $Z$-pole.
Unfortunately, the data obtained at PETRA, PEP and TRISTAN suffer from
low statistics. Needless to say, such an experimental study would
give very important complementary information on the fragmentation
functions of the photon obtained at hadron colliders.
\section{Inclusive isolated photon production in $e^+ e^-$ annihilation}
Let us now consider the inclusive photon cross section with photon
isolation. One can argue that due to isolation cuts the fragmentation
contribution is suppressed. As a consequence, isolation changes the
relative importance of the different contributions. It is reasonable to
consider the effect of isolation typically as an order $\alpha_s$ effect.
After imposing the isolation cuts, the fragmentation
contribution will be of order $\alpha$, i.e., the same order as the
pointlike perturbative cross section
${\rm d} \hat{\sigma}^{(0)}_{\gamma}/{\rm d} E_{\gamma}$.
Isolation in practice can only be made with finite energy resolution.
Therefore, we require that in a cone of half angle $\delta_c$
around the photon three momentum the deposited energy be less than a
fraction $\epsilon_c$ of the photon energy. In experiments this
parameter $\epsilon_c$ typically has a value of about 0.1.
Calculating ${\rm d}\hat{\sigma}^{(0)}_{\gamma,\:{\rm iso}}/{\rm d} E_{\gamma}$,
we should insert a combination of $\theta$ functions into the phase space
integrals as follows:
\begin{eqnarray}
\label{sfc}
S(\epsilon_c,\delta_c)&=&
\theta(\vartheta_{q\gamma}-\delta_c)
\theta(\vartheta_{\bar{q}\gamma}-\delta_c) \nonumber \\
&+&\theta(\vartheta_{q\gamma}-\delta_c)
\theta(\delta_c-\vartheta_{\bar{q}\gamma})
\theta(\epsilon_c E_{\gamma}-E_{\bar{q}}) \\ \nonumber
&+&\theta(\vartheta_{\bar{q}\gamma}-\delta_c)
\theta(\delta_c-\vartheta_{q\gamma})
\theta(\epsilon_c E_{\gamma}-E_q).
\end{eqnarray}
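The weight $S(\epsilon_c,\delta_c)$ is just a product of step functions and is straightforward to implement; an illustrative sketch (angles in radians, variable names are ours):

```python
def step(x):
    """Theta function: 1 for a positive argument, 0 otherwise."""
    return 1.0 if x > 0.0 else 0.0

def S_isolation(th_qg, th_qbarg, E_q, E_qbar, E_gam, eps_c, delta_c):
    """Phase-space weight of eq. (sfc): 1 if the photon passes the
    cone/energy isolation cuts, 0 otherwise."""
    return (
        step(th_qg - delta_c) * step(th_qbarg - delta_c)
        + step(th_qg - delta_c) * step(delta_c - th_qbarg)
        * step(eps_c * E_gam - E_qbar)
        + step(th_qbarg - delta_c) * step(delta_c - th_qg)
        * step(eps_c * E_gam - E_q)
    )
```

The photon passes either if both quark momenta lie outside the cone, or if the one parton inside the cone is soft enough.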
Let us require that
\[\epsilon_c < \frac{1}{2} \quad {\rm and }\quad
\sin^2\frac{\delta_c}{2} < \frac{1}{2} \]
and choose integration variables
\[ x_\gamma=\frac{2E_{\gamma}}{\sqrt{s}},\quad
y=\frac{y_{13}}{x_\gamma}.\]
We define the hard scattering cross section again with a collinear
counter-term
\begin{equation}
\frac{{\rm d}\hat{\sigma}_{\gamma,\:{\rm iso}}^{(0)}}{{\rm d} x_\gamma}=
\lim_{\varepsilon\rightarrow 0}
\left(\frac{{\rm d}\tilde{\sigma}^{(0)}_{\rm iso}}{{\rm d} x_\gamma}
+\frac{{\rm d}\sigma^{(0)}_{\rm CT,\:iso}}{{\rm d} x_\gamma}\right),
\end{equation}
where the first term on the right hand side is calculated as given by
Feynman diagrams in $4-2\varepsilon$ dimensions and for the
counter-term, we use the $\overline{{\rm MS}}$-type expression
\begin{equation}
\label{CTiso}
\frac{{\rm d}\sigma^{(0)}_{\rm CT, iso}}{{\rm d} x_\gamma}=
2\sigma_0\frac{\alpha}{2\pi}\sum_q\frac{e^2_q}{\langle e_q^2\rangle}
\frac{(4\pi)^\varepsilon}{\varepsilon\Gamma(1-\varepsilon)}
P^{(0)}_{\gamma/q}(x_\gamma)\theta(x_\gamma-\frac{1}{1+\epsilon_c}).
\end{equation}
After performing the integration over $y$ and setting $\varepsilon=0$,
one obtains
\begin{equation}
\label{sig0iso}
\frac{{\rm d}\hat{\sigma}_{\gamma,\:{\rm iso}}^{(0)}}{{\rm d} x_\gamma}=
2\sigma_0\frac{\alpha}{2\pi}\sum_q \frac{e_q^2}{\langle e_q^2\rangle}
\left\{\left[P^{(0)}_{\gamma/q}(x_\gamma)
\log\frac{s(1-x_\gamma)x_\gamma^2y_m}{\mu^2(1-y_m)}
+ e_q^2x_\gamma(1-2y_m)\right]
\theta\left(x_\gamma-\frac{1}{1+\epsilon_c}\right)\right.
\end{equation}
\[
{}~~~~~+\left.P^{(0)}_{\gamma/q}(x_\gamma)\log\frac{1-y_c}{y_c}
-e_q^2x_\gamma(1-2y_c)\right\},
\]
where $y_c$ and $y_m$ are defined as follows:
\[
y_c= \frac{1-x_\gamma}{1-x_\gamma \sin^2\frac{\delta_c}{2}}
\sin^2\frac{\delta_c}{2},\qquad
y_m={\rm min}\left\{y_c, 1+\epsilon_c - \frac{1}{x_\gamma}\right\}.
\]
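The cut variables are simple kinematic functions of $x_\gamma$ and the cone parameters; a small sketch for numerical work (function names are ours):

```python
import math

def y_c(x_gamma, delta_c):
    """Cone boundary in the scaled invariant, as defined in the text."""
    s2 = math.sin(delta_c / 2.0) ** 2
    return (1.0 - x_gamma) * s2 / (1.0 - x_gamma * s2)

def y_m(x_gamma, delta_c, eps_c):
    """y_m = min(y_c, 1 + eps_c - 1/x_gamma)."""
    return min(y_c(x_gamma, delta_c), 1.0 + eps_c - 1.0 / x_gamma)
```

Note that $y_c$ vanishes at $x_\gamma=1$, and that the energy-resolution branch of $y_m$ takes over only close to the threshold $x_\gamma=1/(1+\epsilon_c)$.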
One can make several comments on this result.
\begin{itemize}
\item The unisolated case can be recovered in the limit $\epsilon_c
\rightarrow \infty$ (cf.\ eq.\ (\ref{hardf})).
\item Imperfect isolation allows for a contribution from
the fragmentation: the photon looks isolated since the relatively soft
fragments surrounding it are not counted.
\item Assuming perfect energy resolution
($\epsilon_c=0$), we obtain a vanishing counter-term. In higher order,
however, we cannot isolate the photon from the soft gluons completely
(we shall discuss this point in great detail in the next section);
therefore, one cannot set the value of $\epsilon_c$ to zero.
\item In the leading logarithmic approximation one can define
a fragmentation function with isolation satisfying a modified
inhomogeneous evolution equation
\begin{eqnarray}
\lefteqn{\mu\frac{\partial}{\partial\mu}
D_{\gamma/a}(x,\mu,\epsilon_c)=} \\ \nonumber
&&\frac{\alpha}{\pi} P_{\gamma/a}(x)
\theta\left(x-\frac{1}{1+\epsilon_c}\right)
+ \frac{\alpha_s}{\pi} \sum_{b}\int\frac{{\rm d} y}{y}\,
D_{\gamma/b}\left(\frac{x}{y},\mu,\epsilon_c\right)P_{b/a}(y).
\end{eqnarray}
Clearly, if $D_{\gamma/a}(x,\mu)$ is a solution of the evolution
equation without isolation then
\begin{equation}
D_{\gamma/a}(x,\mu,\epsilon_c)=
D_{\gamma/a}(x,\mu)\theta\left(x-\frac{1}{1+\epsilon_c}\right)
\end{equation}
will be the solution of the evolution equation with isolation. In
next-to-leading logarithmic approximation and/or choosing a different
counter-term (for example completely subtracting the contribution of
the singular region as defined by the third term of eq.\ (\ref{sfc})),
the isolated fragmentation function can also depend on $\delta_c$;
therefore, in general, one cannot simply identify the isolated
fragmentation function with the non-isolated one in the high-$x$ region.
\end{itemize}
In next-to-leading order, the physical cross section of isolated photon
production is given by the following terms:
\begin{eqnarray}
\label{isolated}
\lefteqn{\frac{{\rm d}\sigma_{\gamma}^{\rm iso}}
{{\rm d} E_{\gamma}}(\epsilon_c,\delta_c) =
\frac{{\rm d} \hat{\sigma}^{(0)}_{\rm iso}}{{\rm d} E_{\gamma}}
+ \frac{{\rm d} \hat{\sigma}^{(1)}_{\rm iso}}{{\rm d} E_{\gamma}}+
2\sigma_0 \frac{\alpha}{2\pi}
D_{\gamma/q}^{\rm iso}(\frac{2E_{\gamma}}{\sqrt{s}},\mu)
\theta(\frac{2E_{\gamma}}{\sqrt{s}} - \frac{1}{1+\epsilon_c})}
\nonumber \\
&+&\sum_a
\int_0^{\sqrt{s}/2}{\rm d} E_a\,\int_{\frac{1}{1+\epsilon_c}}^1
{\rm d} x\,D_{\gamma /a}^{\rm iso}(x,\mu_f)
\frac{{\rm d} \hat{\sigma}_a}{{\rm d} E_a}(E_a,\mu,\mu_f,\alpha_s(\mu))
\delta(E_\gamma- xE_a).
\end{eqnarray}
This decomposition is scheme dependent. The first term on the right hand
side of this equation has been calculated in the $\overline{{\rm MS}}$\ scheme (see eq.\
(\ref{sig0iso})). It also appears useful to calculate the next-to-leading
order perturbative cross section ${\rm d}\hat{\sigma}^{(1)}/dE_{\gamma}$ in
the $\overline{{\rm MS}}$\ scheme. This requires the calculation of the next-to-leading
order splitting function $P_{\gamma/a}^{(1)}$ in the presence of
isolation cuts and a corresponding local subtraction term has to be found.
This is a complex but feasible calculation. Since such a result is not
yet available, in the next section we carry out the calculation of
${\rm d} \hat{\sigma}^{(1)}_{\gamma}$ in a different subtraction scheme where
the photon is completely isolated from the quarks but not from soft
gluons (``cone subtraction''). In this scheme, in leading order, the
counter-term is vanishing and the cross section becomes independent of
$\epsilon_c$:
\begin{equation}
\label{sig0isocone}
\frac{{\rm d}\hat{\sigma}^{(0)}}
{{\rm d} E_{\gamma}} = 2\sigma_0\frac{\alpha}{2\pi}
\sum_q \frac{e_q^2}{\langle e_q^2\rangle}\left\{ P^{(0)}_{\gamma/q}
(x_{\gamma}) \ln\frac{1-y_c}{y_c} - e_q^2x_{\gamma}(1-2y_c)\right\}.
\end{equation}
We note that the logarithmic divergence at $x_{\gamma}=1$ is the usual
soft singularity. Contrary to the case of the $\overline{{\rm MS}}$\ scheme, with cone
subtraction the cross section is continuous and always positive (see
fig.\ 2 for comparison). One may argue that in this scheme the
perturbative part is separated more efficiently, consequently the
contributions of the non-perturbative terms (proportional to
$D^{\rm iso}_{\gamma/a}$) become relatively smaller.
In general we find that the non-perturbative terms contribute mainly
in the region $x_{\gamma} > 1/(1+\epsilon_c)$ thus we conclude that
the perturbative predictions appear to be reliable for
$E_{\gamma} < \sqrt{s}/(2(1+\epsilon_c))$.
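The positivity of the cone-subtraction result is easy to verify numerically for the braced factor of eq.\ (\ref{sig0isocone}); a per-flavour sketch (our naming; we keep the charge factor $e_q^2$ on both terms, as in eq.\ (\ref{sig0iso}), and omit the overall normalization):

```python
import math

def hard_factor_cone(x_gamma, delta_c, e_q2):
    """Braced factor of the cone-subtraction LO cross section for one
    flavour: P0(x) log((1-y_c)/y_c) - e_q^2 x (1 - 2 y_c)."""
    s2 = math.sin(delta_c / 2.0) ** 2
    yc = (1.0 - x_gamma) * s2 / (1.0 - x_gamma * s2)
    P0 = e_q2 * (1.0 + (1.0 - x_gamma) ** 2) / x_gamma
    return P0 * math.log((1.0 - yc) / yc) - e_q2 * x_gamma * (1.0 - 2.0 * yc)
```

For a 15-degree cone the factor stays positive over the whole $x_\gamma$ range, diverging logarithmically at $x_\gamma=1$, the soft singularity noted above.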
In the next section we present the results of our next-to-leading
order perturbative calculation of ${\rm d}\hat{\sigma}^{(1)}_{\rm iso}$
for isolated photon plus $n$-jet production. We conjecture
that a jet algorithm applied to the isolated photon hard scattering
cross section (eq.\ (\ref{isolated})) provides an infrared safe isolated
photon plus $n$-jet cross section. This is supported by the
fact that our isolation prescription does not influence the soft-gluon
structure of the cross section. If we can define a jet algorithm,
\begin{equation}
\frac{{\rm d}\hat{\sigma}_\gamma^{\rm iso}}{{\rm d} E_\gamma}(\delta_c,\epsilon_c)=
\frac{{\rm d}\hat{\sigma}^{\rm iso}_{\gamma+{\rm 1\:jet}}}{{\rm d} E_\gamma}
(\delta_c,\epsilon_c)+
\frac{{\rm d}\hat{\sigma}^{\rm iso}_{\gamma+{\rm 2\:jets}}}{{\rm d} E_\gamma}
(\delta_c,\epsilon_c)+
\frac{{\rm d}\hat{\sigma}^{\rm iso}_{\gamma+{\rm 3\:jets}}}{{\rm d} E_\gamma}
(\delta_c,\epsilon_c)+\ldots,
\end{equation}
such that every term on the right hand side is finite and we count every
particle only once, then the isolated photon plus $n$-jet cross section
appears to be infrared safe.
We shall see that the non-perturbative (``anomalous'') contributions
are important only in the case of photon plus 1 jet production when
the cross section is dominated by the $x_{\gamma}>1/(1+\epsilon_c)$
region.
\section{Isolated photon plus $n$-jet production}
In QCD, the differential cross section at ${\cal O}(\alpha_s^2)$ is
a sum of the real and virtual corrections:
\begin{equation}
{\rm d}\sigma=|M_4|^2 {\rm d} S^{(4)}+|M_3|^2 {\rm d} S^{(3)},
\end{equation}
where ${\rm d} S^{(n)}$ is the $n$-body phase space element with the flux
factor included.
For infrared safe quantities both terms on the right hand side are
separately divergent, but the sum is finite. It is very difficult to
handle this cancellation numerically. Fortunately, at least at one
loop, the divergences can be cancelled analytically. There are two
commonly used algorithms to achieve such a cancellation --- the
subtraction method \cite{ERT,KN} and the phase space slicing method
\cite{FSKS,GG}. They both rely on the fact that after partial fraction
decomposition $|M_4|^2$ can be written as a sum of terms with single pole
singularities. Focusing our attention on the case of the $q \bar{q} \gamma g$
final state, we find four such terms:
\begin{equation}
\label{partdec}
|M_4|^2=C_F\alpha\alpha_s\left(
\frac{M_{gq}}{y_{gq}}+\frac{M_{g\bar{q}}}{y_{g\bar{q}}}+
\frac{M_{\gamma q}}{y_{\gamma q}}
+\frac{M_{\gamma\bar{q}}}{y_{\gamma\bar{q}}}\right),
\end{equation}
where
\begin{equation}
y_{ij}=(p_i+p_j)^2/s,\qquad (s=M_Z^2).
\end{equation}
The pole part of each term is defined as
\begin{equation}
\frac{P_{ij}}{y_{ij}},\qquad {\rm where} \quad
P_{ij}=\lim_{y_{ij}\rightarrow 0}M_{ij}.
\end{equation}
It can be integrated analytically over either the whole or a part of the
phase space. In this way, in general, we obtain analytical expressions
for the regularized divergences of ${\rm d}\sigma^{(4)}$ which cancel
against the divergences of the virtual corrections, ${\rm d}\sigma^{(3)}$
(KLN theorem). When a photon in the final state is observed, the
cancellation mechanism described above does not apply to the $y_{\gamma
q(\bar{q})}$ poles. The reason for this is that the process is exclusive
in the photon and the virtual corrections with the photon in the loop
cannot contribute for kinematical reasons.
To make the discussion more transparent, let us consider contributions
from the region where only $y_{qg}$ is small. The virtual corrections
can also be split into three terms
\begin{equation}
\label{m3}
|M_3|^2=C_F\alpha\alpha_s\left(
M_{gq}^{(3)}+M_{g\bar{q}}^{(3)}+M_f^{(3)}\right),
\end{equation}
such that $M_{gq}^{(3)}$ contains one half of the singularities, the
second term contains the other half and the third is the finite
part.\footnote{For the reader's convenience, we give the explicit
expressions for $M_{ij}$, $M_{kl}^{(3)}$ and $M_f^{(3)}$ in the
appendix.} (Notice that there are no $M_{\gamma q}^{(3)}$, $M_{\gamma
{\bar q}}^{(3)}$ terms.) Then we shall concentrate on
\begin{equation}
\label{cs}
\frac{M_{ij}}{y_{ij}}{\rm d} S^{(4)}+M_{ij}^{(3)}{\rm d} S^{(3)}
\end{equation}
parts of the cross section.
In the subtraction method, one considers the combination
\begin{equation}
\label{subtraction}
\frac{M_{gq}}{y_{gq}}{\rm d} S^{(4)}-\frac{P_{gq}}{y_{gq}}
{\rm d} S^{(4\rightarrow 3)}+
\left(\int \frac{P_{gq}}{y_{gq}}{\rm d} S^{(g)}\right){\rm d} S^{(3)}+
M_{gq}^{(3)}{\rm d} S^{(3)},
\end{equation}
where the integration over ${\rm d} S^{(g)}$ is meant to be an integral over
the gluon variables. ${\rm d} S^{(4\rightarrow 3)}$ means the factorized
four-body phase space element in the limit when the gluon is soft or
collinear to the quark: ${\rm d} S^{(4\rightarrow 3)}={\rm d} S^{(g)}{\rm d} S^{(3)}$.
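The mechanics of formula (\ref{subtraction}) can be illustrated on a one-dimensional toy model (a sketch of ours, not the actual four-body integrand). With a smooth test function $f$, the finite remainder of $\int_0^1 {\rm d} y\, f(y)/y$ after the $1/\varepsilon$ pole has cancelled against a ``virtual'' term is $\int_0^1 {\rm d} y\,[f(y)-f(0)]/y$, which plain quadrature handles without difficulty:

```python
def subtracted_integral(f, n=100000):
    """Toy subtraction method: integrate (f(y) - f(0))/y over [0, 1].
    The subtracted integrand is regular at y = 0, so midpoint
    quadrature converges; the pole term f(0)/eps is assumed to have
    cancelled analytically against the virtual correction."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        total += (f(y) - f(0.0)) / y
    return total * h
```

For $f(y)=(1+y)^2$ the subtracted integrand is $2+y$, so the answer is exactly $5/2$.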
In the phase space slicing method, formula (\ref{cs}) is written as
\begin{equation}
\frac{M_{gq}}{y_{gq}}\theta(y_{gq}-y_0){\rm d} S^{(4)}+
\frac{M_{gq}}{y_{gq}}\theta(y_0-y_{gq}){\rm d} S^{(4)}+M_{gq}^{(3)}{\rm d} S^{(3)}.
\end{equation}
If $y_0$\ is chosen small enough ($y_0\leq 10^{-4}$), then
\begin{equation}
\label{slice}
\frac{M_{gq}}{y_{gq}}\theta(y_{gq}-y_0){\rm d} S^{(4)}+
\frac{P_{gq}}{y_{gq}}\theta(y_0-y_{gq}){\rm d} S^{(4\rightarrow 3)}+
M_{gq}^{(3)}{\rm d} S^{(3)}
\end{equation}
is a good approximation. The first and second terms depend on $y_0$\
strongly, but their sum is independent of this unphysical parameter.
The strong $y_0$\ dependence originates mainly from the slicing of the
soft gluon region.
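A one-dimensional toy model (ours, with test function $f(y)=(1+y)^2$) also illustrates formula (\ref{slice}) and its residual $y_0$ dependence: above $y_0$ the full integrand is integrated numerically, while below $y_0$ the pole approximation $f(0)/y$ is integrated analytically, leaving the finite remainder $f(0)\log y_0$ once the virtual pole has cancelled. Here $y_0=10^{-3}$ is used for speed, larger than the $y_0\leq 10^{-4}$ quoted in the text:

```python
import math

def sliced_integral(f, y0, n=200000):
    """Toy phase-space slicing: numerically integrate f(y)/y above y0
    and add the analytic finite remainder f(0)*log(y0) of the
    sliced-off region (its 1/eps pole cancels against the virtual
    correction). Both pieces depend strongly on y0; their sum should
    not, up to the O(y0) error of the pole approximation."""
    h = (1.0 - y0) / n
    total = 0.0
    for i in range(n):
        y = y0 + (i + 0.5) * h
        total += f(y) / y
    return total * h + f(0.0) * math.log(y0)
```

For this $f$ the exact ($y_0\to0$) answer is $5/2$; at finite $y_0$ the result differs by ${\cal O}(y_0)$, which is the accuracy price of the slicing approximation.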
If one wishes to calculate isolated photon production, one has to make
sure that the restriction of the phase space does not disturb the
cancellation mechanism of soft and collinear gluons. At hadron level
the meaning of photon isolation is well-defined. At parton level
however, one has to be careful because the isolation prescription is
different for states with different numbers of partons.\footnote{One
may object to isolation at parton level, arguing that the fragmentation
process inevitably scatters hadronic matter into the isolation cone.
For a purely perturbative analysis, this objection is not valid. To
understand the reason for this, let us consider the same process at
higher energies, say $\sqrt{s}=10\;{\rm TeV}$, in which energy region
perturbation theory is expected to give an even better description.
Clearly, at this energy, fragmentation does not alter the energy flow,
therefore isolation at parton level corresponds to isolation at hadron
level.} If the photon is isolated from the partons with $y_\gamma$, we
should include isolation cuts with respect to all partons:
\begin{equation}
\theta(y_{\gamma q}-y_\gamma)\theta(y_{\gamma \bar{q}}-y_\gamma)
\theta(y_{\gamma g}-y_\gamma).
\end{equation}
The isolation from the gluon can be implemented only in the first term
of formula (\ref{subtraction}). However, if we cut the soft gluons in
the first term, then the cancellation of singularities between the first
and second terms breaks down. One can maintain the cancellation by
introducing an energy resolution parameter $\epsilon$ such that a
gluon is isolated from the photon only if its energy is greater than
$\epsilon E_\gamma$. Accordingly, isolation for the first term means
multiplication with
\begin{equation}
\label{iso}
\theta(y_{\gamma q}-y_\gamma)\theta(y_{\gamma \bar{q}}-y_\gamma)
(1-\theta(y_\gamma-y_{\gamma g})\theta(E_g-\epsilon E_\gamma)).
\end{equation}
Clearly, this criterion is ``not physical'' in the sense that one cannot
implement it at hadron level since we apply different cuts to quarks and
gluons.
If we introduce photon isolation in the slicing method, from formula
(\ref{slice}) we obtain
\begin{equation}
\label{isoslice}
\frac{M_{gq}}{y_{gq}}\theta(y_{gq}-y_0){\rm d} S^{(4)}
\theta(y_{\gamma q}-y_\gamma)\theta(y_{\gamma \bar{q}}-y_\gamma)
\theta(y_{\gamma g}-y_\gamma)+
\end{equation}
\[
\left(\frac{P_{gq}}{y_{gq}}\theta(y_0-y_{gq}){\rm d} S^{(4\rightarrow 3)}+
M_{gq}^{(3)}{\rm d} S^{(3)}\right)
\theta(y_{\gamma q}-y_\gamma)\theta(y_{\gamma \bar{q}}-y_\gamma).
\]
Usually, $y_\gamma\gg y_0$. This means that when changing $y_0$\ at fixed
$y_\gamma$, the contribution from the soft gluons will be cut independently of
$y_0$\ and, consequently, the $y_0$\ dependence is damped in the first term.
On the other hand, in the second term the $y_0$\ dependence is not damped
by the gluon-photon isolation condition. The conclusion is that the
$y_0$\ dependence will be different in the two terms and, therefore, the
cross section will depend on the unphysical parameter $y_0$. It is
important to notice that if $y_{gq}>y_0$, then there exists an
$\epsilon'$ such that if $\epsilon<\epsilon'$ then
\begin{equation}
\label{equivalence}
1-\theta(y_\gamma-y_{\gamma g})\theta(E_g-\epsilon E_\gamma)=
\theta(y_{\gamma g}-y_\gamma),
\end{equation}
therefore (\ref{isoslice}) defines a finite cross section, but $y_0$\
plays in a sense the role of $\epsilon$ used in formula (\ref{iso}).
To demonstrate the $y_0$\ dependence of the isolated photon cross section
explicitly, we calculated the isolated photon plus 1- and 2-jet cross
sections using the isolation criterion given by formula
(\ref{isoslice}). First we make two technical remarks about the slicing
method.
Choosing large $y_0$, the pole approximation is not precise enough in the
singular region; one has to take into account the non-singular terms in
the same region, i.\ e., one should add the
\begin{equation}
\label{correction}
\left(\frac{M_{gq}}{y_{gq}}{\rm d} S^{(4)}-
\frac{P_{gq}}{y_{gq}}{\rm d} S^{(4\rightarrow 3)}\right)\theta(y_0-y_{gq})
\end{equation}
correction term.
The calculation becomes analogous to the subtraction method and one has
to introduce the $\epsilon$ energy resolution parameter.
It is a practical question to establish at what value of $y_0$\ the
correction term (\ref{correction}) becomes important. The most
straightforward way to calculate the finite terms in (\ref{isoslice}) is
to perform the integration by a Monte Carlo method, which leaves
sufficient flexibility to calculate any jet shape parameter one wishes
to obtain. The Monte Carlo calculation has a finite statistical error
which, of course, can be reduced by generating more points. The
criterion which determines the importance of the correction term
(\ref{correction}) is then to require that the systematic error introduced by
neglecting (\ref{correction}) be smaller than the statistical
one. Clearly, this critical value of $y_0$\ depends on the jet resolution
parameter $y_J$\ as well as on $y_\gamma$. For the case of 3-jet production
without photon in the final state, an analysis was carried out in
ref.\ \cite{GG} to determine the critical
value of $y_0$\ above which the systematic error dominates. They found
that choosing $y_0/y_J\leq 0.01$ removes the systematic error.
The number of isolated photon plus $n$-jet events can be conveniently
parametrized in the form
\begin{eqnarray}
\frac{1}{\sigma_0}\sigma_{\gamma+n\:{\rm jets}}(y_J,y_0)&=&
\frac{\alpha}{2\pi}\sum_q\frac{e_q^4}{\langle e_q^2\rangle}
\left(g_n^{(0)}(y_J,y_0)
+\frac{\alpha_s}{2\pi}g_n^{(1)}(y_J,y_0)\right) \\ \nonumber
&\equiv&\frac{\alpha}{2\pi}\sum_q\frac{e_q^4}{\langle e_q^2\rangle}
g_n^{(0)}(y_J,y_0)(1+\alpha_s R_n(y_J,y_0)),
\end{eqnarray}
where $\sigma_0$ is the leading order cross section of the reaction
$e^+e^-\rightarrow$ hadrons and the $R_n(y_J,y_0)$ functions are defined
by this equation. In figs.\ 3 and 4, we show the $y_0$\ dependence of the
${\cal O}(\alpha_s)$ QCD corrections, $R_n(y_J,y_0)$, to the isolated
photon plus 1-jet and the isolated photon plus 2-jet cross sections. To
obtain the corrections, we used the following algorithm:
\begin{enumerate}
\item select isolated $\gamma+n$-jet events by requiring the invariant
mass of the photon with {\em any particle} in the event to be larger
than $y_\gamma$\ (see formula (\ref{isoslice}));
\item apply E0 cluster algorithm to the hadronic part of the event;
\item separate $\gamma+$ 1-, 2-, and 3-jet events by the number of
remaining clusters of hadrons.
\end{enumerate}
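The clustering step of the algorithm can be sketched as follows; this is a plain JADE-type invariant-mass recombination with four-momentum addition, which only approximates the E0 scheme used in the text (E0 additionally rescales the spatial momentum of the pseudoparticle), and the momenta are illustrative.

```python
import itertools

def pair_y(p, q, s):
    # Scaled pair invariant mass y_ij = (p_i + p_j)^2 / s,
    # with four-momenta given as (E, px, py, pz).
    e  = p[0] + q[0]
    px = p[1] + q[1]
    py = p[2] + q[2]
    pz = p[3] + q[3]
    return (e * e - px * px - py * py - pz * pz) / s

def count_jets(momenta, s, y_J):
    # Repeatedly merge the pair with the smallest y_ij until all
    # remaining pairs are resolved (y_ij >= y_J).
    parts = [list(p) for p in momenta]
    while len(parts) > 1:
        pairs = list(itertools.combinations(range(len(parts)), 2))
        i, j = min(pairs, key=lambda ij: pair_y(parts[ij[0]], parts[ij[1]], s))
        if pair_y(parts[i], parts[j], s) >= y_J:
            break
        merged = [parts[i][k] + parts[j][k] for k in range(4)]
        parts = [p for n, p in enumerate(parts) if n not in (i, j)] + [merged]
    return len(parts)
```

Collinear or soft pairs give small $y_{ij}$ and are merged first; the number of remaining clusters defines the jet multiplicity.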
We used $y_\gamma=y_J$. As expected, the $y_0$\ dependence in
$R_n(y_J,y_0)$ is strong up to $y_0=y_\gamma$. As explained before, in
formula (\ref{isoslice}) the $y_0$\ cut plays the role of the
$\epsilon$ parameter of formula (\ref{iso}). Therefore, the
(apparently) physical cut (\ref{isoslice}) is in fact unphysical, because
$y_0$\ is no longer a dummy variable of the cross section.
In order to demonstrate that we control the numerical evaluation of the
integrals at small $y_0$\ values, we calculated $R_n(y_J,y_0)$ with
$\theta(y_{\gamma g}-y_\gamma)$ in (\ref{isoslice}) removed. We denote
the corresponding quantity by $\tilde{R}_n(y_J,y_0)$. According to
the discussion after formula (\ref{isoslice}), this alteration should
remove the $y_0$\ dependence. The explicit calculation shows that this
indeed happens.\footnote{In fact, we can see weak $y_0$\ dependence in
$\tilde{R}_n(y_J,y_0)$. The origin of this dependence is the use of the
pole approximation. It can be observed for values $y_0>10^{-3}$ (this
value depends on $y_J$) in accordance with the observation made in ref.\
\cite{GG}. The results are shown for $y_J=0.06$. For other values of
$y_J$\ the dependence is similar.} We see that in order to obtain a $y_0$\
independent result, we have to use an unphysical cut: different cuts
are applied to quarks and gluons.
\bigskip
We conclude from this discussion that if we want to define a finite
isolated photon plus $n$-jet cross section we have to make a subtraction
which depends on some unphysical parameter no matter which algorithm ---
the subtraction or slicing one --- is used (see formulas (\ref{iso}),
(\ref{isoslice}) and (\ref{equivalence})). In other words, the physical
isolated photon plus $n$-jet cross section always contains some
non-perturbative (``anomalous'') contribution which is expected to give
contributions comparable to or somewhat smaller than the ${\cal
O}(\alpha_s)$ QCD corrections. For the separation of perturbative and
non-perturbative parts of the cross section, one must introduce an
unphysical (non-zero) parameter. Of course, the sum of the perturbative
and non-perturbative pieces is independent of this parameter. In the
previous section we pointed out that the non-perturbative contribution
is expected to be small for $x_\gamma<1/(1+\epsilon_c)$.
\section{Numerical results}
As advocated in section 4, we carry out our calculation with the
subtraction method. The event definition is the following:
\begin{enumerate}
\item isolate the photon;
\item apply E0 cluster algorithm to the hadronic part of the event;
\item separate $\gamma+$ 1-, 2-, and 3-jet events by the number of
remaining clusters of hadrons.
\end{enumerate}
Photon isolation can be achieved either by isolating the photon in a
cone (cone isolation) or by requiring the invariant mass of the photon
with any particle in the event to be larger than an invariant mass cut
$y_\gamma$. From an experimental point of view, cone isolation is more natural.
Unfortunately, the results published by OPAL \cite{OPAL} are experimental
values corrected so that the measured rates can be compared with the matrix
element calculation of ref.\ \cite{KL}, where invariant mass isolation was
used (with $y_\gamma=y_J$). We give results for both isolation schemes. Since
the QCD corrections are
very sensitive to the event definition we give explicitly how formula
(\ref{subtraction}) is modified in the case of cone isolation:
\begin{eqnarray}
\label{ci}
\lefteqn{
\theta(\vartheta_{q\gamma}-\delta_c)
\theta(\vartheta_{\bar{q}\gamma}-\delta_c)} \nonumber \\
&\times& \left\{
(1-\theta(\delta_c-\vartheta_{g\gamma})
\theta(E_g-\epsilon_c E_\gamma))
\frac{M_{gq}}{y_{gq}}{\rm d} S^{(4)}\right. \\ \nonumber
& &\left. -\frac{P_{gq}}{y_{gq}} {\rm d} S^{(4\rightarrow 3)}+
\left(\int \frac{P_{gq}}{y_{gq}}{\rm d} S^{(g)}\right){\rm d} S^{(3)}
+M_{gq}^{(3)}\right\};
\end{eqnarray}
and in the case of invariant mass isolation:
\begin{eqnarray}
\label{imi}
\lefteqn{
\theta(y_{q\gamma}-y_J) \theta(y_{\bar{q}\gamma}-y_J)}\nonumber \\
&\times& \left\{
(1-\theta(y_J-y_{g\gamma})\theta(E_g-\epsilon_c E_\gamma))
\frac{M_{gq}}{y_{gq}}{\rm d} S^{(4)}\right. \\ \nonumber
& &\left.-\frac{P_{gq}}{y_{gq}} {\rm d} S^{(4\rightarrow 3)}+
\left(\int \frac{P_{gq}}{y_{gq}}{\rm d} S^{(g)}\right){\rm d} S^{(3)}
+M_{gq}^{(3)}\right\}.
\end{eqnarray}
To obtain the isolated photon plus $n$-jet rates, the formulas above are
multiplied with $\theta$ functions as follows:
\begin{itemize}
\item One photon plus 3-jet:
\begin{equation}
\theta(y_{qg}-y_J) \theta(y_{\bar{q}g}-y_J) \theta(y_{q\bar{q}}-y_J)
\theta(y_{q\gamma}-y_J) \theta(y_{\bar{q}\gamma}-y_J)
\theta(y_{g\gamma}-y_J).
\end{equation}
\item One photon plus 2-jet:\\
Denote by $i$ and $j$ the partons which, when combined, have the smallest
invariant mass in the hadronic part of the event, so that they are combined
into the pseudoparticle $c$. Denote by $k$ the third parton. In the three-body
phase space the momentum of the $j$ particle is identically zero. Then
we use
\begin{equation}
\theta(y_J-y_{ij})\theta(y_{ck}-y_J)
\theta(y_{c\gamma}-y_J) \theta(y_{k\gamma}-y_J).
\end{equation}
\item One photon plus 1-jet:
\begin{equation}
\label{1g1j}
\theta(y_J-y_{qg}) \theta(y_J-y_{\bar{q}g})
\theta(y_J-y_{q\bar{q}}) \theta(y_J-y_{ck}).
\end{equation}
\end{itemize}
In the case of cone isolation, we also required the energy of the
photon to be larger than 7.5\,GeV. The half-opening angle of the
cone is $15^\circ$.
We shall give the results of our calculation for the partial widths
$\Gamma(Z\rightarrow\gamma+n\:{\rm jets})$ as ratios to the hadronic
width:
\begin{equation}
\frac{\Gamma(Z\rightarrow\gamma+n\:{\rm jets})}
{\Gamma(Z\rightarrow{\rm hadrons})}=
\frac{\left(\frac{\displaystyle 8}{\displaystyle 9}c_u+\frac{\displaystyle 1}{\displaystyle 3}c_d\right)
\frac{\displaystyle \alpha}{\displaystyle 2\pi}}{(2c_u+3c_d)
\left(1+\frac{\displaystyle \alpha_s}{\displaystyle \pi}
+1.42\left(\frac{\displaystyle \alpha_s}{\displaystyle \pi}\right)^2\right)}g_n(y_J),
\end{equation}
where
\begin{equation}
c_f=v_f^2+a_f^2
\end{equation}
and $v_f$ and $a_f$ are the weak vector and axial vector couplings:
\begin{equation}
v_f=2I_{3,f}-4e_f\sin^2\theta_W
\end{equation}
\begin{equation}
a_f=2 I_{3,f},
\end{equation}
so with $\sin^2\theta_W=0.23$, $v_u=0.39$, $v_d=-0.69$, $a_u=+1$ and
$a_d=-1$. The $g_n(y_J)$ functions can be expanded in $\alpha_s$:
\begin{equation}
g_n(y_J)=g_n^{(0)}(y_J)+\frac{\alpha_s}{2\pi}g_n^{(1)}(y_J)
\equiv g_n^{(0)}(y_J)(1+\alpha_s R_n(y_J)),
\end{equation}
and our aim is to compute the $g_n^{(i)}$ functions (of course,
$g_3^{(0)}(y_J)=0$).
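As a numerical cross-check of the coupling values quoted above, the definitions can be evaluated directly (a sketch; only $\sin^2\theta_W=0.23$ and the standard quark charges and isospins enter):

```python
def couplings(i3, e_f, sin2w=0.23):
    # v_f = 2 I_{3,f} - 4 e_f sin^2(theta_W),  a_f = 2 I_{3,f},
    # c_f = v_f^2 + a_f^2, as in the text.
    v = 2.0 * i3 - 4.0 * e_f * sin2w
    a = 2.0 * i3
    return v, a, v * v + a * a

v_u, a_u, c_u = couplings(+0.5, +2.0 / 3.0)   # up-type quarks
v_d, a_d, c_d = couplings(-0.5, -1.0 / 3.0)   # down-type quarks

# Charge-weighted combination entering the partial-width ratio:
prefactor = (8.0 / 9.0 * c_u + 1.0 / 3.0 * c_d) / (2.0 * c_u + 3.0 * c_d)
```

This reproduces $v_u\approx 0.39$, $v_d\approx -0.69$, $a_u=+1$ and $a_d=-1$.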
It is interesting to study the photon energy spectrum of the jet cross
sections
\begin{equation}
\frac{{\rm d} \sigma^{\rm iso}_{\gamma + 1jet}}{{\rm d} E_{\gamma}}
(\delta_c,\epsilon_c), \quad
\frac{{\rm d} \sigma^{\rm iso}_{\gamma + 2jet}}{{\rm d} E_{\gamma}}
(\delta_c,\epsilon_c)
\end{equation}
at some realistic values of the isolation parameters $\delta_c,
\epsilon_c$. In fig.\ 5 the photon energy distributions of one jet
and two jet production are shown in the Born approximation, while in
fig.\ 6 the same curves are plotted but including the next-to-leading
order corrections. For obvious kinematical reasons, one jet
production is completely dominated by the hard photon region
$x_{\gamma}>1/(1 + \epsilon_c)$. In this region, as we pointed out,
one may get a substantial (but not overwhelmingly large) ``anomalous''
photon contribution. It is difficult to estimate the ``anomalous''
contribution since we do not have yet enough phenomenological input.
Certainly a combined study of the hadron collider and LEP data would
help to understand its size better. We note that in the high $x$ region
the application of perturbative QCD by itself requires some care due
to the appearance of large logarithms of the type $\log(1-x)$. Indeed, the
QCD corrections are larger for one jet than for two jet production.
It is interesting to compare the one jet data to the perturbative
QCD prediction, but one should not be surprised if one does not find
perfect agreement.
The non-perturbative corrections appear, however, to be negligible
in the case of 2-jet production since it is dominated by
the complementary region $x_{\gamma} < 1/(1+\epsilon_c)$. Requiring
that
$x_{\gamma} < 1/(1+\epsilon_c)$, the 2-jet results remain practically
unaffected, while this cut largely eliminates the 1-jet production.
This is illustrated by the numbers given in Table 1.
There is a tendency that the perturbative contribution increases
if we shrink the isolation region.
{}From figs.\ 5 and 6, we can also see that
the total one jet and two jet rates should depend
only weakly on $\epsilon_c$. The reason for this is that
the photon energy distribution changes little
if we change $\epsilon_c$ in the physically interesting
region of 0.06--0.2.
In addition to the ambiguities due to ``anomalous''
photon production there are also the usual scale ambiguities.
In fig.\ 7 we present the predicted values of the $\Gamma(Z\rightarrow
\gamma+n\:{\rm jets})$ ($n=1,2$) partial widths for the cone isolation
with $\epsilon_c=0.1$. The bands between the dashed lines represent
the scale dependence between the scales $M_Z/2$ and
$2M_Z$. We used $\alpha_s(M_Z)=0.12$ and $\alpha=1/137$. The
$\epsilon_c$ dependence is so weak for experimentally feasible values
that the uncertainty introduced by the $\epsilon_c$ dependence is
much smaller than the scale dependence and therefore we did not show it.
The scale dependence of the 1-jet rate is rather large. This is a
reflection of the fact that the QCD corrections are large.
In figs.\ 8 and 9 the same curves as in
fig.\ 7 are depicted in the case of invariant mass isolation
with $y_{\gamma}=y_J$ for the 1-jet and 2-jet rates, respectively.
In the same figures, we show the enhancement induced by
the choice of smaller isolation region. In accordance with our previous
discussion, the enhancement is larger for the 1-jet rate than for the
2-jet rate. We note, however, that when comparison is made to the data
at a given isolation it is important to use exactly the same
isolation and event definition both in the experimental
and theoretical analysis. Therefore one cannot simply change
the value of $y_{\gamma}$ so that the prediction fits
the data better. In particular, one is not
allowed to use different values of $y_{\gamma}$ for one jet
and two jet production. In a given subtraction scheme with
well defined experimental isolation cuts all the parameters of the
perturbative part are fixed.
In particular the discrepancy between the measured
$\gamma$+ 1-jet rate and the perturbative prediction at
$y_{\gamma}=y_J$ may indicate non-negligible
anomalous contributions.
As mentioned in section 4, the Monte Carlo approach is useful
because it leaves sufficient flexibility to calculate any jet shape
parameter. To demonstrate this feature of our work we present the
result of matrix element calculation for the distribution of the photon
transverse momentum with respect to the thrust axis (fig.\ 10). The
thrust axis has been calculated with all particles taken into account,
including the photon. We used invariant mass isolation (with
$y_\gamma=0.005$ and 0.06) to isolate the photon from the partons. We
also required the photon to be more energetic than 7.5\,GeV. For
small $p_T$, configurations with thrust value close to one may occur.
The histogram is normalized to one, therefore the
uncertainty in the small $p_T$ region influences the behaviour in the
large $p_T$ region. We note, however, that requiring
$x_{\gamma} < 1/(1+\epsilon_c)$ the small $p_T$ region will be
suppressed.
Finally, in fig.\ 11, we present the predicted values of the
$\Gamma(Z\rightarrow \gamma+n\:{\rm jets})$ ($n=1,2$) partial widths for
the cone isolation with $\epsilon_c=0.1$ when the Durham clustering
algorithm is used \cite{D}. In this algorithm, two jets are combined
into a single jet if
\begin{equation}
y_{Dij}=\frac{2{\rm min}(E_i^2,E_j^2)(1-\cos\theta_{ij})}{s}
\end{equation}
is smaller than the jet resolution parameter $y_J$. For pure QCD events,
this algorithm tends to emphasize 2-jet events as compared to other
algorithms and is better suited for resummation purposes \cite{CDW}. When a
photon in the final state is observed, we find higher 1-jet rate and
lower 2-jet rate and the QCD corrections are smaller as compared to the
E0 cluster algorithm.
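The Durham measure itself is a one-line function of the particle energies and opening angle (a sketch with illustrative numbers):

```python
def y_durham(E_i, E_j, cos_theta_ij, s):
    # y_Dij = 2 min(E_i^2, E_j^2) (1 - cos theta_ij) / s
    return 2.0 * min(E_i * E_i, E_j * E_j) * (1.0 - cos_theta_ij) / s
```

Because the softer energy enters quadratically, a soft particle is cheap to merge even at a sizable angle, which is why the algorithm tends to produce fewer resolved soft jets.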
\section{Conclusions}
Photon production in association with hadrons in $e^+e^-$ annihilation
provides us with interesting information on the non-perturbative
component of the photon and new possibilities to test the underlying
structure of perturbative QCD.
In this paper we paid special attention to the importance of the correct
treatment of the collinear photon-quark region. It was shown that
at next-to-leading and higher orders the perturbative part can only be
defined using some non-physical parameter, no matter whether
non-isolated or isolated photon production is considered. The physical
cross section defined as the sum of the perturbative and non-perturbative
part is, of course, independent of such a parameter.
We briefly reviewed the theoretical description of the inclusive
non-isolated photon production in $e^+e^-$ annihilation. It was pointed
out that the LEP data can be used to constrain the parametrization of
the fragmentation functions of the photon, $D_{\gamma/q}(x,\mu)$,
$D_{\gamma/g}(x,\mu)$. The measurement of these fragmentation functions
would give important input information for the other inclusive photon
production measurements at hadron colliders and at HERA. Furthermore,
one could test the anomalous $\mu$-dependence at asymptotically large
$\mu$ values predicted by perturbative QCD.
The case of isolated photon production was studied as well. Under well
defined circumstances, isolation can suppress the numerical contribution
of the non-perturbative contributions. We pointed out that the
non-perturbative (``anomalous'') contribution can be sizable only for
$E_{\gamma} > \sqrt{s}/[2(1+\epsilon_c)]$, where $\epsilon_c$ is the
energy fraction in the isolation cone with respect to the photon energy.
When a jet algorithm is used, then the non-perturbative contribution is
expected to be further suppressed for isolated photon plus $n$-jet
cross section for $n>1$, but not for $n=1$.
We demonstrated the difficulty due to the quark-photon collinear
singularity with a careful calculation of the next-to-leading order QCD
corrections to isolated photon plus one or two jets. We argued
that in the case of isolated photon plus 2-jet
production indeed, as suggested by Kramer and Lampe, the
perturbative contribution dominates the physical cross section.
The next-to-leading order corrections are calculated
by developing a Monte Carlo program
which can be used to calculate the perturbative corrections
to any physical quantity.
\bigskip\bigskip
\noindent {\bf \large Acknowledgement}\ \ We thank P. M\"attig and
C. Markus for helpful correspondence. One of us (Z.K.) is grateful to
R. K. Ellis and G. Sterman for illuminating discussions.
\bigskip\bigskip
\Large
{\bf Appendix A}
\normalsize
\bigskip
In this appendix we give the explicit expressions for the $M_{ij}$ and
$M^{(3)}_{ij}$ functions used in formulas (\ref{partdec}) and
(\ref{m3}), respectively. We shall make the following renaming:
\begin{eqnarray}
&q& \rightarrow {\rm particle}\;1 \nonumber\\
&\bar{q}& \rightarrow {\rm particle}\;2 \\ \nonumber
&g& \rightarrow {\rm particle}\;3 \\ \nonumber
&\gamma& \rightarrow {\rm particle}\;4.
\end{eqnarray}
Then the following relations are valid:
\begin{equation}
M_{23}=M_{13}(1\leftrightarrow 2),\qquad
M_{14}=M_{13}(3\leftrightarrow 4),\qquad
M_{24}=M_{13}(1\leftrightarrow 2,\quad 3\leftrightarrow 4),
\end{equation}
therefore, it is sufficient to spell out $M_{13}$. The corresponding
expression can be obtained from the four-parton matrix element given in
Appendix B of ref.\ \cite{ERT} by setting $N_C=0$ and $T_R=0$. After
performing partial fractioning one obtains
\begin{eqnarray*}
M_{13}&=&\frac{2}{4\pi^2}\left[
\frac{2y_{12} y_{123} (1+y_{34})}{y_{134}y_{234}(y_{13}+y_{23})}
+\frac{2y_{14}(1-y_{24})}{y_{234} (y_{13}+y_{24})}
+\frac{2(1-y_{13}) y_{23}}{y_{134} (y_{13}+y_{24})}\right. \\
& &+\frac{1}{y_{134}^2}
(y_{24} y_{34}+y_{12} y_{34}+y_{13} y_{24}-y_{14} y_{23}+y_{12} y_{13})
+\frac{y_{34}}{y_{13}+y_{24}} \\
& & +\frac{y_{12} y_{24} y_{34}+y_{12} y_{14} y_{34}-y_{13}
y_{24}^2+y_{13} y_{14} y_{24}+2y_{12} y_{14} y_{24}}
{y_{134}(y_{13}+y_{14}+y_{23})}\left(
\frac{1}{y_{13}+y_{23}}+\frac{1}{y_{13}+y_{14}}\right) \\
& &+\frac{y_{12} y_{24} y_{34}+y_{12} y_{14} y_{34}-y_{14}^2
y_{23}+y_{14} y_{23} y_{24}+2y_{12} y_{14} y_{24}}
{y_{234}(y_{13}+y_{23}+y_{24})}\left(
\frac{1}{y_{13}+y_{23}}+\frac{1}{y_{13}+y_{24}}\right) \\
& &+\frac{y_{12} y_{23} y_{34}+y_{12} y_{13} y_{34}-y_{14}
y_{23}^2+y_{13} y_{14} y_{23}+2y_{12} y_{13} y_{23}}
{y_{134}(y_{13}+y_{14}+y_{24})}\left(
\frac{1}{y_{13}+y_{14}}+\frac{1}{y_{13}+y_{24}}\right)
\end{eqnarray*}
\begin{eqnarray*}
{}~~~~~~~+\frac{2y_{12} y_{123} y_{124}}{y_{13}+y_{23}+y_{14}+y_{24}}
& &\left(\frac{1}{y_{13}+y_{23}+y_{24}}
\left(\frac{1}{y_{13}+y_{24}}+\frac{1}{y_{13}+y_{23}}\right)\right.\\
& &+\frac{1}{y_{13}+y_{14}+y_{24}}
\left(\frac{1}{y_{13}+y_{14}}+\frac{1}{y_{13}+y_{24}}\right)\\
& &+\left.\frac{1}{y_{13}+y_{14}+y_{23}}
\left(\frac{1}{y_{13}+y_{14}}+\frac{1}{y_{13}+y_{23}}\right)\right) \\
+\frac{1}{y_{134} y_{234} (y_{13}+y_{24})}& &
(y_{12} y_{34}^2-y_{13} y_{24} y_{34}+y_{14} y_{23} y_{34}+3y_{12}
y_{23} y_{34} \\
& &+3y_{12} y_{14} y_{34}+4y_{12}^2 y_{34}
-y_{13} y_{23}y_{24}+2y_{12} y_{23} y_{24} \\
& &-y_{13} y_{14} y_{24}-2y_{12} y_{13} y_{24}
+2y_{12}^2 y_{24}+y_{14} y_{23}^2 \\
& &+2y_{12} y_{23}^2
+y_{14}^2 y_{23}+4y_{12} y_{14} y_{23}+4y_{12}^2y_{23} \\
& &+2y_{12} y_{14}^2
+2y_{12} y_{13} y_{14}+4y_{12}^2y_{14}+2y_{12}^2y_{13}+2y_{12}^3) \\
-\frac{1}{y_{134}(y_{13}+y_{14})}
& &(y_{14} y_{24}+2y_{14} y_{23}+2y_{12} y_{14}+y_{13}^2 \\
& &+y_{13} y_{23}+2y_{13} y_{24}+2y_{12} y_{13}+y_{14}^2)
\left.\frac{~}{~}\right]
\end{eqnarray*}
Due to partial fractioning, this expression is finite if a single
$y_{ij}\rightarrow 0$ (and for the same reason the expression is
lengthy.)
The virtual corrections can also be obtained easily from eq. (2.20) of
ref.\ \cite{ERT} by setting $N_C=0$ and $T_R=0$. In our decomposition
\begin{equation}
M_{gq}^{(3)}=M_{g{\bar q}}^{(3)}=
\frac{1}{4\pi^2}\left[-\frac{1}{\varepsilon^2}-
\frac{1}{2\varepsilon}(3-2\log{y_{12}})\right],
\end{equation}
and the finite part is
\begin{eqnarray}
\label{m3finite}
M_f^{(3)}&=&\frac{1}{4\pi^2}\left[
\frac{y_{12}}{y_{12}+y_{14}}+\frac{y_{12}}{y_{12}+y_{24}}+\right.
\frac{y_{12}+y_{24}}{y_{14}}+\frac{y_{12}+y_{14}}{y_{24}} \\ \nonumber
& &+\log{y_{14}}
\left[\frac{4y_{12}^2+2y_{12}y_{14}+4y_{12}y_{24}+y_{14}y_{24}}
{(y_{12}+y_{24})^2}\right] \\ \nonumber
& &+\log{y_{24}}
\left[\frac{4y_{12}^2+2y_{12}y_{24}+4y_{12}y_{14}+y_{14}y_{24}}
{(y_{12}+y_{14})^2}\right] \\ \nonumber
& &-2\left[
\frac{y_{12}^2+(y_{12}+y_{14})^2}{y_{14}y_{24}}R(y_{12},y_{24})
+\frac{y_{12}^2+(y_{12}+y_{24})^2}{y_{14}y_{24}}R(y_{12},y_{14})\right.\\
\nonumber & &~~~~~+\frac{y_{14}^2+y_{24}^2}{y_{14}y_{24}(y_{14}+y_{24})}-
2\log{y_{12}}\left.\left(
\frac{y_{12}^2}{(y_{14}+y_{24})^2}+\frac{2y_{12}}{y_{14}+y_{24}}\right)
\right]\\ \nonumber
& &\left.+\left(\frac{y_{24}}{y_{14}}+\frac{y_{14}}{y_{24}}+
\frac{2y_{12}}{y_{14}y_{24}}\right)
\left(\frac{2}{3}\pi^2-\log^2y_{12}-8\right)\right],
\end{eqnarray}
where
\begin{eqnarray}
\lefteqn{R(x,y)=} \\ \nonumber
& &\log x\log y-\log x\log (1-y)-\log y\log (1-x)
+\frac{1}{6}\pi^2-{\rm Li}_2(x)-{\rm Li}_2(y)
\end{eqnarray}
and
\begin{equation}
{\rm Li}_2(x)=-\int_0^x{\rm d} z\,\frac{\log (1-z)}{z}.
\end{equation}
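For numerical work, $\mathrm{Li}_2$ and $R(x,y)$ can be coded directly from these definitions; the truncated power series below is only an illustrative sketch, adequate for $0<x<1$ away from $x=1$, and is not the evaluation used in our program.

```python
import math

def li2(x, terms=400):
    # Li_2(x) = sum_{k>=1} x^k / k^2, equivalent to the integral above.
    return sum(x ** k / k ** 2 for k in range(1, terms + 1))

def R(x, y):
    # R(x,y) as defined in the text, for 0 < x, y < 1.
    return (math.log(x) * math.log(y)
            - math.log(x) * math.log(1.0 - y)
            - math.log(y) * math.log(1.0 - x)
            + math.pi ** 2 / 6.0 - li2(x) - li2(y))
```

A convenient check is $\mathrm{Li}_2(1/2)=\pi^2/12-\log^2 2/2$, which the series reproduces, together with $R(1/2,1/2)=0$.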
\newpage
The magnetic fields in spiral galaxies are an important component, but
their basic three-dimensional topology remains largely unknown. Two
of their main characteristics are, however, known. First, the fields in relatively
face-on spiral galaxies are seen to follow the spiral pattern traced
in the optical morphology. Second, in the handful of more edge-on galaxies
that have been imaged to date, the field distributions are seen to
extend into the halo regions, and have a characteristic {\sf X}-shaped
morphology \citep[eg.][]{heesen_etal_2009}. Apart from these
basic properties, the details of the magnetic field topology are unknown.
Observations of polarized flux, polarization vector orientations, and
Faraday rotation measures all provide information about the magnetic
field associated with different electron populations and at
different projections with respect to the line of sight. Synchrotron
emission originates in ultrarelativistic electrons spiralling around
magnetic field lines, is beamed in the direction of motion of the
electron, and is polarized perpendicular to the orientation of the
field line. Polarized synchrotron radiation and polarization vector
orientation are thus direct tracers of the magnetic fields
perpendicular to the line-of-sight (LOS), $B_{\perp}$, within the
region where both ordered magnetic fields and relativistic electrons
are maximized. The Faraday rotation measure (RM), or more generally
the Faraday depth, $\Phi$, that pertains to a given component of
polarized emission, is sensitive to the integrated product of magnetic field
component parallel to the LOS ($B_{\parallel}$) and the thermal
electron density in the foreground of a polarized emission component:
\begin{equation}
\Phi\,\propto\,\int_{\mathrm{source}}^{\mathrm{telescope}}n_e\vec{B}{\cdot}d\vec{l}.
\end{equation}
The Faraday depth is defined to be positive when $\vec{B}$ points
toward the observer, and negative when $\vec{B}$ points away. When
assessing the magnetic field geometry traced by these observational
characteristics, it is essential to keep in mind that the observable
attributes may originate in distinct regions of space. The classical
Faraday rotation measure (RM) is an observable quantity derived from
the polarization angle difference(s) $\Delta\chi$ between two (or
more) frequency bands as
$RM=\Delta\chi/(\lambda_1^2-\lambda_2^2)$. The empirically determined
RM is only equivalent to the Faraday depth $\Phi$ for a simple
background emitter plus foreground dispersive screen geometry.
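In the usual astrophysical units the integral reads $\Phi \approx 0.81\int n_e B_{\parallel}\,{\rm d}l$ rad m$^{-2}$ (with $n_e$ in cm$^{-3}$, $B_{\parallel}$ in $\mu$G, and $l$ in pc), and the distinction between $\Phi$ and the two-band RM is easy to make concrete; the toy slab below is invented purely for illustration.

```python
def faraday_depth(n_e, b_par, dl_pc):
    # Phi = 0.81 * integral(n_e * B_parallel dl)   [rad m^-2]
    # n_e in cm^-3, B_parallel in microgauss (positive toward the observer),
    # path elements dl in pc.
    return 0.81 * sum(n * b * dl_pc for n, b in zip(n_e, b_par))

def classical_rm(chi_1, chi_2, lam_1, lam_2):
    # Two-band rotation measure RM = (chi_1 - chi_2) / (lam_1^2 - lam_2^2).
    return (chi_1 - chi_2) / (lam_1 ** 2 - lam_2 ** 2)

# Uniform slab: n_e = 0.03 cm^-3, B_par = 1 muG over 1 kpc in 100 steps,
# giving Phi = 0.81 * 0.03 * 1.0 * 1000 = 24.3 rad m^-2.
phi = faraday_depth([0.03] * 100, [1.0] * 100, 10.0)
```

For a simple background source seen through this screen, the two-band RM recovers $\Phi$; for emission mixed into the rotating medium it generally does not.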
Polarized emission can become depolarized in a number of ways: beam
depolarization can arise because the spatial resolution element is
large relative to the size of significant variations in the field
orientation or the thermal electron content, while Faraday
depolarization can arise because synchrotron emission and Faraday
rotation take place in the same extended volume along the
LOS. Polarized emission from different locations (either separated
spatially or along the LOS) is affected by different amounts of Faraday
rotation, such that at a given wavelength there may be orthogonal
polarization angles that cancel, yielding no net polarization at that
wavelength. Beam depolarization can be circumvented in principle by
using higher angular resolution, although the brightness sensitivity
may then be insufficient to detect the extended emission at
all. Faraday depolarization can be circumvented in principle by
achieving a sufficiently complete sampling of the $\lambda^2$
measurement space (relevant for measurements of RM and $\Phi$), since
cancellation effects are confined to discrete wavelengths or ranges of
wavelength.
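A concrete, if idealized, example of Faraday depolarization is the classic uniform slab that emits and rotates along the same LOS, for which $p(\lambda^2)=p_0\,|\sin(\Phi\lambda^2)/(\Phi\lambda^2)|$, with $\Phi$ the Faraday depth through the slab; the sketch below uses invented parameter values and is not a model fitted to our data.

```python
import math

def burn_slab_depol(phi, lam2, p0=0.7):
    # Fractional polarization of a uniform emitting-and-rotating slab:
    # p = p0 |sin(phi * lam2) / (phi * lam2)|, phi in rad m^-2, lam2 in m^2.
    x = phi * lam2
    if x == 0.0:
        return p0
    return p0 * abs(math.sin(x) / x)
```

The polarization vanishes entirely wherever $\Phi\lambda^2$ is a multiple of $\pi$, which is why dense $\lambda^2$ coverage is needed to avoid mistaking such nulls for an absence of ordered field.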
All of these observables can be used to constrain the likely magnetic
field topology in the galaxies observed. In a previous paper
(\citet{heald_etal_2009}, hereafter Paper II), we presented our
polarimetric results for a large sample of nearby galaxies observed to
a comparable sensitivity limit of about 10~$\mu$Jy beam$^{-1}$ RMS. In
this paper, we begin by briefly summarizing the observations and data
reduction steps of Paper II in Sect.~\ref{section:reductions}. Trends
noted in the data are described in Sect.~\ref{section:trends}. In
Sect.~\ref{section:Bdist}, we then explore how particular magnetic field
geometries might relate to the observations. We conclude
the paper in Sect.~\ref{section:disc}.
\section{Summary of observations and data reduction\label{section:reductions}}
The observational parameters and data reduction techniques of the
WSRT-SINGS survey were presented in detail both by
\citet{braun_etal_2007}, and specifically regarding the polarization
data in Paper II. Here we recap the most important details. For
more information, the reader is referred to Sect.~2 and Appendix~A of
Paper II.
The data used in this analysis were obtained using the Westerbork Synthesis
Radio Telescope (WSRT). Two observing bands were used, $1300-1432$~MHz
and $1631-1763$~MHz (centered on 22 and 18~cm, respectively), with 512 channels in each
band and in all four polarization products. Each galaxy
in the WSRT-SINGS sample (refer to Paper II) was observed for 12~hr in the
Maxi-short configuration of the WSRT. During each 12~hr synthesis, the observing
frequency was switched between the two bands every 5~min. This provided an
effective observation time of 6~hr per band, and good $uv$
coverage in both bands.
The data for both bands were analyzed using the Rotation Measure
Synthesis (RM-Synthesis) technique
\citep[][see also Paper II]{brentjens_debruyn_2005}. This allows the
reconstruction of the intrinsic polarization vectors along each LOS,
within the constraints set by the observing frequencies. The output of the
RM-Synthesis procedure was deconvolved along the Faraday depth ($\Phi$) axis,
as described in Paper II. Polarized fluxes, polarization angles, and Faraday
depths were extracted from these data and are discussed for
each target galaxy in Paper II. In that paper, we also estimate the
contribution to the RM from the Milky Way foreground using only
background radio sources in the observed fields, rather than the target
galaxies themselves. With this collection
of data, we noted several patterns in the target galaxies, and found that the
basic patterns were common to the sample galaxies collectively. In this
paper, we seek to explain these patterns using a common global magnetic
field topology.
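The RM-Synthesis step is essentially a Fourier-type sum over the observed channels, $F(\Phi)\propto\sum_k P(\lambda_k^2)\,e^{-2{\rm i}\Phi(\lambda_k^2-\lambda_0^2)}$; a minimal, uniformly weighted sketch (with invented channel values) is given below, whereas the actual reduction also deconvolves the result along $\Phi$ as described in Paper II.

```python
import cmath

def rm_synthesis(P, lam2, phi_axis):
    # P: complex polarization Q + iU per channel; lam2: lambda^2 per channel.
    # Returns the (dirty) Faraday dispersion function sampled on phi_axis.
    lam2_0 = sum(lam2) / len(lam2)        # reference lambda^2
    w = 1.0 / len(lam2)                   # uniform channel weights
    return [w * sum(p * cmath.exp(-2j * phi * (l2 - lam2_0))
                    for p, l2 in zip(P, lam2))
            for phi in phi_axis]
```

A Faraday-thin test source placed at $\Phi=20$ rad m$^{-2}$ produces a peak of unit polarized amplitude at exactly that depth.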
\section{Observational trends\label{section:trends}}
Several interesting patterns emerge from our study of a large sample
of galaxy types. Polarized emission is found to originate both in the
disks of actively star-forming galaxies and in what appear to
be either AGN or star-formation driven (circum-)nuclear or galactic
wind outflows. ``Disk'' emission is relatively planar and
detected out to large radii, whereas apparent ``outflow'' components extend
much further from the plane and are only detectable at small radii. The
magnetic field orientations are in all cases simply related to the
morphology of these components (as shown in Fig. 4 of Paper II).
Disk fields have a spiral morphology that is strongly correlated with
the orientation and pitch angle of traditional tracers of spiral arms,
such as massive stars and dust lanes. This is despite the
polarized emission possibly being both coincident with,
and independent of, other spiral arm tracers. Classic examples of
disk-dominated fields can be seen in galaxies such as NGC 628, 5194, and
6946, although they are present in almost all of our targets with
varying detectability. On the other hand, outflow-related fields are
typically oriented along the periphery of the bipolar lobes, and are often
brightest close to the disk. The contribution of outflow-related
components may be apparent in NGC 4569, 4631, and possibly 4736.
Here, we illustrate and describe some of the trends present in the
WSRT-SINGS dataset. We consider the polarized flux distribution
(Sect.~\ref{subsection:polaz}), the Faraday depth distribution
(Sect.~\ref{subsection:fdaz}), and the effects of depolarization
(Sect.~\ref{subsection:depol} and Sect.~\ref{subsection:2disk}). In
Sect.~\ref{section:Bdist}, we explore the predictions of a variety of
global magnetic field topologies that
can be used to model these trends.
\subsection{Polarized flux\label{subsection:polaz}}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{fdo1.eps}}
\caption{Azimuthal variation in peak polarized intensity (center
panel) and associated Faraday depth (left panel) for galaxies with
extended polarized emission (illustrated in the right panel from
Fig.~5 of Paper II). The mean values in azimuthal wedges, each
subtending 10$^\circ$ within the galaxy disk, are plotted with error
bars giving the wedge RMS. Galaxies are arranged in order of
increasing inclination (top to bottom) from face-on. Azimuth is
measured counter-clockwise from the receding major axis.}
\label{figure:fdpofaz}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{fdo2.eps}}
\caption{(continued) Azimuthal variation in peak polarized intensity (center
panel) and associated Faraday depth (left panel) for galaxies with
extended polarized emission (illustrated in right panel).}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{fdo3.eps}}
\caption{(continued) Azimuthal variation in peak polarized intensity (center
panel) and associated Faraday depth (left panel) for galaxies with
extended polarized emission (illustrated in right panel).}
\end{figure*}
A remarkable pattern emerges for our sample relating to the basic
distribution of polarized intensity in galaxy disks at GHz
frequencies. For small inclination angles, there is a general
gradient in the average polarized intensity that is approximately
aligned with the major axis of the target galaxy. This
gradient from high to low polarized intensity has the same sign in all
well-detected cases, from high values on the kinematically
approaching major axis to low values on the receding major axis. This
effect cannot be explained by a symmetric planar field
geometry. As the inclination of the target galaxy increases,
a pair of local maxima in polarized intensity begins to separate from
the approaching major axis and propagates toward the minor axis. This
is simply a geometrical effect because maximum polarized emission
emerges from magnetic fields perpendicular to the line of sight, which,
in the case of a planar field geometry, are strongest near the minor
axis \citep{stil_etal_2009}. Even when the inclination has become
quite substantial, and the two local maxima of polarized intensity are
near the minor axis, there is still a systematic tendency for the
receding major axis to have the overall minimum polarized brightness.
This pattern in the distribution of polarized flux in our target
galaxies is visible first of all in Fig.~4 of Paper II, which shows
maps of the polarized flux in each galaxy. We quantify this
trend in Fig.~\ref{figure:fdpofaz} by plotting the average peak
polarized intensity, $<P>$, and the associated Faraday depth,
$<\Phi>$, within inclination-corrected wedges spanning 10 degrees of
azimuth in the disk of each galaxy in which extended polarized
emission was detected. Azimuth is measured counter-clockwise from the
kinematically receding major axis (see Table~1 of Paper~II for basic
data on each target). Error bars on the points in the plots represent
the RMS variation within each wedge. The estimated foreground
contribution to $<\Phi>$ and its error caused by our own Galaxy toward
each target is indicated by the horizontal lines. The pairs of panels
in the figure have been ordered by increasing galaxy inclination (top
to bottom) extending from less than 10 degrees for NGC~628 to about 85
degrees in NGC~4631.
The simple trend described above is clear: the polarized intensity
shows a global minimum toward the receding major axis, together with a
systematic progression from one broad maximum at the approaching major
axis to a pair of maxima that move toward the minor axes as the
inclination increases. The latter effect is due to the geometry of a
planar field, while the former has no current explanation. We note that
the polarization asymmetry along the major axis is not as pronounced
at frequencies of 5~GHz and higher (see e.g. \citet{beck_2007}).
\subsection{Faraday depth\label{subsection:fdaz}}
The variation in Faraday depth with azimuth (shown in
Fig.~\ref{figure:fdpofaz}) also shows systematic trends with
increasing inclination, although not as cleanly as those seen in
polarized intensity. One complication is that each Faraday
depth distribution can be either positive or negative, so both options
need to be considered in assessing a possible trend with
inclination. Another complication is the uncertainty in the
foreground contribution to $\Phi$, which is critical in distinguishing
peaks (be they positive or negative) in the modulation patterns from
minima (consistency with that foreground level). What is immediately
clear from an assessment of the $<\Phi(\phi)>$ patterns is that they
are in no case consistent with being symmetric sinusoids (either of
period $2\pi$ or $\pi$) in their excursions about the estimated
foreground Faraday depth. The implication is that a thin axisymmetric
or bisymmetric spiral disk is not a viable model for the medium
responsible for the Faraday rotation of the polarized emission
detected at these frequencies \citep[cf.][]{krause_1990}.
The $<\Phi(\phi)>$ pattern that applies to a large fraction of our
sample has a minimum Faraday depth (consistent with the estimated
foreground) that occurs close to the approaching major axis (at an azimuth of
180$^\circ$) and a single maximum excursion near the receding major
axis at low inclinations. This same pattern applies to many of the
eight lowest inclination galaxies and would also apply to NGC~6946 if
the previously published estimate of the Galactic foreground value,
$\Phi_{FG} = \rpms{40}$ by \citet{beck_2007}, were the correct one,
rather than the $\Phi_{FG} = \rpms{23}\pm2$ we estimate in Paper~II. A
similar consideration applies to NGC~5194, for which
\citet{horellou_etal_1992} estimated $\Phi_{FG} = \rpms{-5}\pm12$
rather than our estimate of $\Phi_{FG} = \rpms{+12}\pm2$. We
plot these alternate estimates of $\Phi_{FG}$ in the figure with
horizontal lines. We also note here a typographical error in the
value of $\Phi_{FG}$ for NGC~4321 in Table~1 of Paper~II, which is
$\rpms{-7}$ and not $\rpms{-17}$ as stated there. This
pattern of maximum and minimum is not a simple sinusoid but has a
clear asymmetry about an azimuth of 180$^\circ$ that is most obvious
in NGC~4321 and 6946. When the inclination exceeds 65$^\circ$, there
is a sudden change to a pattern of two peaks near the minor axis that
is found in all four of the highly inclined galaxies in our sample. It
may well be of further significance that several (and possibly all) of
these highly inclined galaxies had already been identified as having a
morphology suggestive of a circum-nuclear or galactic wind outflow.
Another critical observation is the magnitude of the Faraday depth
excursions from the foreground value, which is in all cases very
modest, typically between 10 and 30~rad~m$^{-2}$. This is
substantially less than the Faraday depth variations measured through
the entire disk of the Large Magellanic Cloud by
\citet{gaensler_etal_2005} using distant background sources that show
average excursions of plus and minus 50~rad~m$^{-2}$ and peak
excursions of $+$245 and $-$215~rad~m$^{-2}$. Since the LMC is in no
way a remarkable galaxy in terms of its likely field strength (based
on the synchrotron surface brightness) or thermal electron populations
(based on the star-formation-rate density) relative to our sample
galaxies, the implication is that the medium responsible for the
Faraday rotation of the diffuse polarized emission in our targets only
extends over a small fraction of the complete line-of-sight. A similar
conclusion was reached by \citet{berkhuijsen_etal_1997} for NGC~5194
based on the smaller Faraday depths seen at 1.4-1.8 GHz relative to
5-10~GHz for that target.
An important conclusion that follows directly from a comparison
of the $<\Phi(\phi)>$ and $<P(\phi)>$ plots is that the polarized
intensity and the Faraday rotation in the highly inclined galaxies of
our sample must originate in distinct regions along the
line-of-sight. This is because $\Phi$ is proportional to $B_\parallel$,
while $P$ is proportional to $B^{1+\alpha}_{\perp}$. For any given
field geometry, a peak in $B_\parallel$ will be accompanied by a
minimum in $B^{1+\alpha}_{\perp}$ and vice versa. The observation that
maximal excursions of both $<\Phi>$ and $<P>$ are seen in all four of
the most inclined galaxies cannot be achieved with any co-extensive
geometry of the emitting and dispersing media.
\subsection{Depolarization\label{subsection:depol}}
In addition to detecting a systematic pattern of Faraday depth
excursions across the full LMC disk, \citet{gaensler_etal_2005} also
document the very substantial depolarizing effect of the LMC disk on
background polarized sources observed at 1.4~GHz. Despite a likely
mean angular source size of only about 6 arcsec, which projects to
1.5~pc at the LMC disk, these sources are more depolarized
(by a factor of more than two) when the LOS is associated with thermal
electrons in the LMC disk exceeding an emission measure of about
50~pc~cm$^{-6}$. The implication appears to be that significant RM
fluctuations are present at the 1.4~GHz observing frequency on scales
$\ll$~1.5~pc. Since even the diffuse ionized gas that permeates galaxy
disks has an emission measure in excess of about 20~pc~cm$^{-6}$ in
moderately face-on systems \citep[e.g.,][]{greenawalt_etal_1998}, we can
expect a significant degree of depolarization for any ``backside''
emission in our galaxy sample. This will be true in particular for the
diffuse polarized emission originating in the target galaxy itself,
since the relevant scale is then that of the observing beam,
$\ge$15~arcsec, which projects across more than 700~pc at the typical
galaxy distance of 10~Mpc.
It therefore seems likely that the polarized intensity from low
inclination galaxy disks observed at GHz frequencies is dominated by
emission from that portion of the galaxy disk/halo that faces
us. Corresponding structures from the far side of the galaxy would be
dispersed and depolarized by turbulent magneto-ionic
structures in the star-forming mid-plane. We recall that it is only in
those regions dominated by a regular rather than a turbulent magnetic
field that a net polarized emissivity is expected at all. Given
the strong concentration of massive-star formation and its associated
turbulent energy injection into the mid-plane, there may well be two
zones of enhanced net polarized emissivity that are offset above and
below the mid-plane. Each region of polarized emissivity will then
experience the dispersive effects of its own line-of-sight
foreground, which is likely to be dominated by {\it thermal\ } rather
than {\it relativistic\ } plasma. For the near-side polarized emission,
this is likely to be caused by the extended thermal halo of the host
galaxy, while for the far-side polarized emission there will be
the additional contribution of the dense (depolarizing and dispersive)
mid-plane.
\subsection{The ``second'' polarized disk\label{subsection:2disk}}
Because of the likely Faraday depth of the mid-plane medium, it is
conceivable that the near- and far-side polarized emission zones in
moderately face-on galaxies would experience very different Faraday
dispersion, making it possible to distinguish the two components along
each line-of-sight using the RM synthesis technique, which we employed
in our study. We conducted a deep search for multiple Faraday depth
components along the line-of-sight to detect disk emission from each
of our sample galaxies, after applying a spatial smoothing to the
$P(\Phi)$ cubes that results in an angular beamsize of 90
arcsec. Solid detections of faint secondary emission components were
made in the brightest face-on galaxies of our sample, as shown in
Figs.~\ref{figure:n628smo}--\ref{figure:n6946smo}. In addition to the
bright polarized emission that typically resides at a Faraday depth
within only a few tens of rad~m$^{-2}$ of the Galactic foreground
value, much fainter polarized emission (by a factor of 4 -- 5) is
detected in NGC~628, 5194, and 6946, which is offset to both positive
and negative Faraday depths by about 200 rad~m$^{-2}$. We first
considered whether these faint secondary components might be
instrumental in nature, since they are similar to the Faraday depth
side-lobes of our instrumental response (as described in Paper II),
but concluded that they are likely reliable. Indicators of the
reliability of these faint components are that (1) they do not occur
toward the brightest low dispersion components; (2) the Faraday
depth separation of the secondary components varies from source to
source, while the instrumental sidelobe response does not; and (3)
the faint positive- and negative-shifted components form a
complementary distribution with each other, rather than merely
repeating themselves in detail.
Detection of this highly dispersed and probably depolarized ``second
disk'' supports the emerging model in which the polarized
intensity observed at GHz frequencies from nearby galaxy disks is
dominated by a region of emissivity located above the mid-plane on the
near-side, which subsequently experiences Faraday rotation within the
extended halo of the galaxy amounting to only a few tens of
rad~m$^{-2}$. Significantly fainter polarized emission (by a factor of
4 -- 5) is detected from the far side of the mid-plane, which displays
the additional dispersive effects of that mid-plane zone amounting to
plus and minus 150 -- 200 rad~m$^{-2}$ within the three galaxies where
this could be detected.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{n0628_PCF-213.eps},\includegraphics{n0628_PCF-30.eps},\includegraphics{n0628_PCF+145.eps}}
\caption{Polarized intensity at distinct Faraday depths toward NGC
628. The dominant Faraday depth component, centered near $\rpms{-30}$
is shown in the center panel, while the two secondary components
centered near $-$213 and $\rpms{+145}$ are shown on the left and
right. The greyscale varies as indicated. The contours begin at
0.29~mJy~beam$^{-1}$ and increase by factors of 1.1 for the
secondary components and 1.3 for the primary component.}
\label{figure:n628smo}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{n5194_PCF-180.eps},\includegraphics{n5194_PCF+13.eps},\includegraphics{n5194_PCF+200.eps}}
\caption{Polarized intensity at distinct Faraday depths toward NGC
5194. The dominant Faraday depth component, centered near $\rpms{+13}$
is shown in the center panel, while the two secondary components
centered near $-$180 and $\rpms{+200}$ are shown on the left and
right. The greyscale varies as indicated. The contours begin at
0.6~mJy~beam$^{-1}$ and increase by factors of 1.1 for the secondary
components and 1.3 for the primary component.}
\label{figure:n5194smo}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{n6946_PCF-162.eps},\includegraphics{n6946_PCF+38C.eps},\includegraphics{n6946_PCF+228.eps}}
\caption{Polarized intensity at distinct Faraday depths toward NGC
6946. The dominant Faraday depth component, centered near $\rpms{+38}$
is shown in the center panel, while the two secondary components
centered near $-$162 and $\rpms{+228}$ are shown on the left and
right. The greyscale varies as indicated. The contours begin at
0.7~mJy~beam$^{-1}$ and increase by factors of 1.1 for the secondary
components and 1.3 for the primary component.}
\label{figure:n6946smo}
\end{figure*}
\subsection{Small-scale RM fluctuations\label{subsection:RMfluc}}
Extremely small-scale RM fluctuations within our own
Galaxy were discovered in our sample data for the field of
NGC~7331. This target is observed through the Galaxy near $(l,
b)~\sim~(94, -21)$, within an extended ($30^\circ\times30^\circ$)
region of particularly large negative RMs near $\rpms{-200}$
\citep[e.g.,][]{johnston-hollitt_etal_2004}. A corresponding region of
large positive Galactic RMs is centered near $(l, b)~\sim~(250,
-10)$. These two regions correspond to the directions where we look
directly along what is likely to be the axisymmetric spiral field of the
Galaxy \citep[e.g.,][]{sun_etal_2008}. Diffuse polarized emission from
the lobes of a background head-tail radio galaxy, the disk of NGC~7331,
and even the Galactic synchrotron itself display the remarkable
oscillatory behavior of the Faraday depth of the polarized emission
with position in the field. As shown in the Faraday depth versus
declination slices of Figs.~\ref{figure:n7331FDD1} and
\ref{figure:n7331FDD2}, there are two dominant Faraday depths present
in this field, one near $\rpms{-180}$ and the other near
$\rpms{0}$. Depending on the exact location along the indicated
declination slice, either one or the other of these RMs is
encountered. In some regions, the transition from one RM to the other
is well-resolved and a single RM value is observed over several
beam-widths (of about 20 arcsec or 0.1~pc at a distance of 1~kpc),
while in other regions the transition is unresolved, such that both
RMs overlap spatially with only the peak polarization showing the
oscillation between the two values. When completely unresolved
extra-galactic sources are observed in this field, they display only
one or the other of these two possible foreground RMs (compare Table 2
of Paper II), but more extended sources show an oscillation between
the two values, $\rpms{-180}$ occurring 3 -- 4 times as often as
$\rpms{0}$. The regular Galactic magnetic field appears to be
organized into discrete filamentary components with transverse sizes
significantly less than a parsec.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{n7331_FD-Dec_E.eps},\includegraphics{n7331_FD-Dec_D.eps},\includegraphics{n7331_FD-Dec_F.eps}}
\caption{Polarized intensity as a function of Faraday depth and
declination in the field of NGC 7331. The two left hand panels are
at right ascensions that intersect the diffuse lobe of a background
head-tail radio galaxy. The right-hand panel intersects a background
double-lobed radio galaxy as well as a region of diffuse Galactic
polarized emission.}
\label{figure:n7331FDD1}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{n7331_FD-Dec_C.eps},\includegraphics{n7331_FD-Dec_B.eps},\includegraphics{n7331_FD-Dec_A.eps}}
\caption{Polarized intensity as a function of Faraday depth and
declination in the field of NGC 7331. Slices are presented at three
right ascensions that intersect the disk of NGC~7331 near the major
axis.}
\label{figure:n7331FDD2}
\end{figure*}
\section{Magnetic field distributions\label{section:Bdist}}
The observational results summarized in Sect.\,\ref{section:trends} suggest
a magnetic field geometry dominated by an in-plane
field described by an axisymmetric spiral structure (ASS) and/or possibly
a bi-symmetric spiral structure (BSS). These topologies assume a planar
field configuration (in the $(x,y)$ plane) given by a family of
logarithmic spirals defined in radial coordinates $(r,\phi)$
by the form
\begin{equation}
r = a e^{b(\phi+c)},
\end{equation}
for a radial scaling constant, $a$, an angular scaling
constant, $b$, which is related to the spiral pitch angle, $\psi^\prime_{xy}$, by,
\begin{equation}
\psi^\prime_{xy} = \tan^{-1}(b)
\end{equation}
and an angular offset, $c$, which defines each curve in the family. We
find it convenient to use the complement of the spiral pitch
angle, $\psi_{xy}$ given by,
\begin{equation}
\psi_{xy} = 90^\circ - \psi^\prime_{xy} = \tan^{-1}(1/b).
\label{eqn:pxy}
\end{equation}
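As a concrete check of these relations, the following Python sketch
(the function and variable names are our own and purely illustrative)
evaluates the logarithmic spiral and confirms that $\psi_{xy}$ is the
complement of the pitch angle:

```python
import math

# Illustrative check of the spiral geometry above; names are ours.
def spiral_radius(phi, a=5.0, b=None, c=0.0):
    """r = a * exp(b * (phi + c)) for a logarithmic spiral; phi in radians."""
    if b is None:
        b = math.tan(math.radians(20.0))  # pitch angle psi'_xy = 20 deg
    return a * math.exp(b * (phi + c))

# Pitch angle psi'_xy = arctan(b) and its complement psi_xy = arctan(1/b).
b = math.tan(math.radians(20.0))
psi_prime = math.degrees(math.atan(b))      # 20 deg
psi_xy = math.degrees(math.atan(1.0 / b))   # 90 - 20 = 70 deg
print(round(psi_prime, 6), round(psi_xy, 6))  # 20.0 70.0
```

The choice of a 20$^\circ$ pitch angle matches the sample-average value
adopted later in this section.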
The right-handed cartesian components of an inward directed planar ASS
logarithmic spiral are then given by,
\begin{eqnarray}
B_x & = & B\cos(\phi+\psi_{xy}) \\
B_y & = & B\sin(\phi+\psi_{xy}) \\
B_z & = & 0.
\end{eqnarray}
For the corresponding case of a planar BSS magnetic field with the
usually assumed sinusoidal modulation of $B$ with $\phi$, we have,
\begin{eqnarray}
B_x & = & B\cos(2\phi+\psi_{xy}+\mu) \\
B_y & = & B\sin(2\phi+\psi_{xy}+\mu) \\
B_z & = & 0,
\end{eqnarray}
where $\mu$ is used to track the positive peak in the field modulation
pattern from an initial value $\mu_0$ such that,
\begin{equation}
\mu = \mu_0 + [\ln(r/a)]/b
\end{equation}
at radius $r$, in terms of the spiral scaling constants defined above,
$a$ and $b$.
The out-of-plane field topology is not well-constrained by previous
observations, but might be expected to have either an even or odd
configuration of the symmetry about the mid-plane \citep[see Fig.~2
of][]{widrow_2002}. An even configuration corresponds to the case
where the azimuthal (toroidal) component of the field has the same
sign both above and below the mid-plane. The resulting field geometry
has a quadrupole structure in the poloidal field and is the one
predicted to emerge most naturally from an $\alpha\omega$ dynamo
operating at intermediate to large radii in a galactic disk, where
differential rotation is important \citep{elstner_etal_1992}. An
odd-parity configuration has the opposite signs of the azimuthal field
above and below the mid-plane. The associated structure of the
poloidal field is that of a dipole. This topology may be associated
with the $\alpha^2$ dynamo process, which may be dominant within the
circum-nuclear regions of galaxies where solid-body rotation prevails
\citep{elstner_etal_1992}. The $\alpha\omega$ dynamo in
galactic halos may also generate dipolar fields
\citep{sokoloff_shukurov_1990}.
Of greater relevance is the expected
height above and below the mid-plane at which the polarized
synchrotron emission might originate. From the discussion in
Sect.\,\ref{section:trends}, we expect that polarized emissivity may peak
on either side of the galaxy mid-plane, but that the near-side
component will dominate the detected polarized intensity at GHz
frequencies because of depolarization in the turbulent mid-plane,
which intervenes along the line-of-sight to the far-side emission. The
Faraday depth distribution will reflect all of the relevant
propagation effects that the emission has experienced. For the
near-side component this will reflect the extended near-side halo of
the target galaxy, while for the far-side component, the additional
dispersion in the mid-plane region will also contribute.
We extend the usual planar ASS and BSS field topology with the
addition of an out-of-plane component taken from a linear combination
of dipole and quadrupole topologies. The basic equations describing
dipole and quadrupole fields and magnetic flux functions can be found
in \citet{long_etal_2007}. For the simple case under consideration,
there is cylindrical symmetry about the galaxy rotation axis, z. In
terms of the angle from the rotation axis, $\theta$, and the distance
from the origin, $\rho$, the two perpendicular components of the
poloidal magnetic field with a dipole moment, D, and quadrupole
moment, Q, (each with a non-trivial sign) are given by,
\begin{eqnarray}
B_\rho & = & { 2 D \cos(\theta) \over \rho^3} + {3 Q [ 3 \cos^2(\theta) - 1] \over 4 \rho^4},\\
B_\theta & = & {D \sin(\theta) \over \rho^3} + {3 Q \sin(\theta) \cos(\theta) \over 2 \rho^4}.
\label{eqn:dq}
\end{eqnarray}
The corresponding magnetic flux function, $\Psi$, is given by,
\begin{eqnarray}
\Psi & = & {D \sin^2(\theta) \over \rho} + {3 Q \over 4 \rho^2} \sin^2(\theta) \cos(\theta).
\end{eqnarray}
The surfaces defined by $\Psi = \mathrm{constant}$ represent example surfaces
on which the magnetic field lines reside. The field components out of
and within the plane are given by,
\begin{eqnarray}
B_z & = & B_\rho \cos(\theta) - B_\theta \sin(\theta),\\
B_r & = & B_\rho \sin(\theta) + B_\theta \cos(\theta),
\end{eqnarray}
which yield the total field strength and local orientation angle from,
\begin{eqnarray}
B^2 & = & B^2_z + B^2_r\\
\psi_{z} & = & \tan^{-1}(B_z/B_r).
\label{eqn:pz}
\end{eqnarray}
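A minimal numerical sketch of these poloidal components (our own
illustrative helper, with the moments $D$ and $Q$ taken as
dimensionless) shows, for example, that in the mid-plane a pure dipole
field is purely vertical while a pure quadrupole field is purely
radial:

```python
import math

def poloidal_field(rho, theta, D=1.0, Q=0.0):
    """Poloidal field for dipole moment D plus quadrupole moment Q;
    theta is measured from the rotation axis (illustrative helper)."""
    ct, st = math.cos(theta), math.sin(theta)
    B_rho = 2.0 * D * ct / rho**3 + 3.0 * Q * (3.0 * ct**2 - 1.0) / (4.0 * rho**4)
    B_th = D * st / rho**3 + 3.0 * Q * st * ct / (2.0 * rho**4)
    B_z = B_rho * ct - B_th * st   # out-of-plane component
    B_r = B_rho * st + B_th * ct   # in-plane (radial) component
    psi_z = math.degrees(math.atan2(B_z, B_r))
    return B_z, B_r, psi_z

# In the mid-plane (theta = 90 deg): a pure dipole gives (B_z, B_r)
# approximately (-D, 0), a pure quadrupole approximately (0, -3Q/4).
print(poloidal_field(1.0, math.pi / 2, D=1.0, Q=0.0)[:2])
print(poloidal_field(1.0, math.pi / 2, D=0.0, Q=1.0)[:2])
```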
The poloidal field topologies are shown in a plane that includes the
rotation axis in Fig.~\ref{figure:geom} for pure dipole, quadrupole,
and a mixed dipole plus quadrupole configuration. The units of the two
axes in the plot are arbitrary, since the topologies are
self-similar. A three dimensional depiction of the pure dipole and
quadrupole field topologies is given in Fig.~\ref{figure:topol}, where
several surfaces of constant magnetic flux are shown for each
case.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqshow_1_0.eps},\includegraphics{dqshow_0_1.eps},\includegraphics{dqshow_01_1.eps}}
\caption{Depiction of the assumed poloidal modification to the out-of-plane field
topology. A planar logarithmic spiral is modified by the local
orientation of a dipole or quadrupole field that is symmetric about
the rotation axis. The panels are labelled with the relative
strengths of dipole (D) and quadrupole (Q) moments and illustrate a
pure dipole, pure quadrupole, and a 1:100 mix of dipole and
quadrupole from left to right. Dashed horizontal lines illustrate
the heights for which model distributions are shown in
Fig.~\ref{figure:sims}. }
\label{figure:geom}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dip_quad.nb.eps}}
\caption{Depiction of the magnetic flux function for a pure dipole
(left) and quadrupole (right) field. In our modeling we assume that
a planar logarithmic spiral field, as illustrated with the solid
lines in the Figure defines the angle $\psi_{xy}$, while the local orientation of
a dipole or quadrupole field that is symmetric about the rotation
axis defines the angle, $\psi_z$, out of the plane. }
\label{figure:topol}
\end{figure*}
Combining the planar (Eq.~\ref{eqn:pxy}) and out-of-plane
(Eq.~\ref{eqn:pz}) geometries permits the three right-handed
cartesian components of the modified ASS magnetic field in the frame
of the galaxy to be written as,
\begin{eqnarray}
B_x & = & B\cos(\phi+\psi_{xy})\cos(\psi_z), \\
B_y & = & B\sin(\phi+\psi_{xy})\cos(\psi_z),\\
B_z & = & B\sin(\psi_z).
\label{eqn:ass}
\end{eqnarray}
When viewed at an
inclination, $i$, this yields the observer's frame components,
\begin{eqnarray}
B_{x^\prime} & = & B\cos(\phi+\psi_{xy})\cos(\psi_z), \\
B_{y^\prime} & = & B\sin(\phi+\psi_{xy})\cos(\psi_z)\cos(i)-B\sin(\psi_z)\sin(i), \\
\label{eqn:pass}
B_{z^\prime} & = & B\sin(\phi+\psi_{xy})\cos(\psi_z)\sin(i)+B\sin(\psi_z)\cos(i),
\end{eqnarray}
where the $(x^\prime,y^\prime,z^\prime)$ are the major axis, minor axis, and
line of sight, respectively. If the spiral is a trailing
one (as demonstrated in every kinematically studied galaxy
with a well-defined near side) then the positive $x^\prime$ axis so defined
corresponds to the receding major axis. The projected parallel and
perpendicular components of the magnetic field and the
orientation angle of $B_{\perp}$ are given by,
\begin{eqnarray}
B_{\parallel} & = & B_{z^\prime} \\
B_{\perp} & = & \sqrt{B^2_{x^\prime}+B^2_{y^\prime}}\\
\chi^\prime_0 & = & \arctan\left(\frac{B_{y^\prime}}{B_{x^\prime}}\right).
\label{eqn:opass}
\end{eqnarray}
We recall that the Faraday depth caused by a magneto-ionic medium is
proportional to $B_{\parallel}$, the polarized intensity is
proportional to $B^{1+\alpha}_{\perp}$, and the intrinsic polarization angle
(giving the E-field orientation) is $\chi_0 = \chi^\prime_0 -
90^\circ$.
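The projection chain above is compact enough to verify numerically.
The sketch below (an illustrative helper of our own, with $B$ and the
angular defaults chosen arbitrarily) forms the modified ASS
components, rotates them into the observer's frame at inclination $i$,
and recovers the planar-case behavior described below: a single
$B_{\parallel}$ peak offset from the major axis by the pitch angle.

```python
import math

def ass_observables(phi, incl, psi_xy=math.radians(70.0), psi_z=0.0, B=1.0):
    """Observer-frame proxies for a modified ASS field (angles in radians):
    B_par traces Faraday depth, B_perp polarized intensity (illustrative)."""
    bx = B * math.cos(phi + psi_xy) * math.cos(psi_z)
    by = B * math.sin(phi + psi_xy) * math.cos(psi_z)
    bz = B * math.sin(psi_z)
    bxp = bx                                         # major axis
    byp = by * math.cos(incl) - bz * math.sin(incl)  # minor axis
    bzp = by * math.sin(incl) + bz * math.cos(incl)  # line of sight
    B_par = bzp
    B_perp = math.hypot(bxp, byp)
    chi0p = math.atan2(byp, bxp)                     # plane-of-sky orientation
    return B_par, B_perp, chi0p

# For a planar field (psi_z = 0) the B_par peak sits one pitch angle
# (90 deg - psi_xy = 20 deg here) from the major axis:
phis = [math.radians(p) for p in range(360)]
peak = max(phis, key=lambda p: ass_observables(p, math.radians(45.0))[0])
print(round(math.degrees(peak)))  # 20
```

Reversing the field sense ($B \to -B$) flips only the sign of
\texttt{B\_par} while leaving \texttt{B\_perp} unchanged, matching the
behavior noted for all field geometries considered here.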
In the corresponding case of a modified BSS magnetic field, we have
\begin{eqnarray}
B_x & = & B\cos(2\phi+\psi_{xy}+\mu)\cos(\psi_z), \\
B_y & = & B\sin(2\phi+\psi_{xy}+\mu)\cos(\psi_z), \\
B_z & = & B\sin(\psi_z).
\label{eqn:bss}
\end{eqnarray}
When viewed at an
inclination, $i$, this yields the observer's frame components,
\begin{eqnarray}
B_{x^\prime} & = & B\cos(2\phi+\psi_{xy}+\mu)\cos(\psi_z), \\
B_{y^\prime} & = & B\sin(2\phi+\psi_{xy}+\mu)\cos(\psi_z)\cos(i)-B\sin(\psi_z)\sin(i), \\
B_{z^\prime} & = & B\sin(2\phi+\psi_{xy}+\mu)\cos(\psi_z)\sin(i)+B\sin(\psi_z)\cos(i),
\end{eqnarray}
and the corresponding $B_{\parallel}$, $B_{\perp}$, and $\chi^\prime_0$
as above.
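The same exercise for a planar BSS field (again a sketch under
arbitrary normalizations, with $\mu$ held fixed at a single radius)
confirms the doubled azimuthal symmetry of the $m = 2$ mode:
$B_{\parallel}$ crosses zero four times per turn, twice as often as in
the ASS case.

```python
import math

def bss_b_par(phi, incl, psi_xy=math.radians(70.0), mu=0.0, B=1.0):
    """Line-of-sight component for a planar BSS spiral (psi_z = 0) at
    azimuth phi (radians), viewed at inclination incl (illustrative)."""
    by = B * math.sin(2.0 * phi + psi_xy + mu)
    return by * math.sin(incl)

# Count sign changes of B_par around one full turn in azimuth.
vals = [bss_b_par(math.radians(p), math.radians(45.0)) for p in range(360)]
crossings = sum(1 for a, b in zip(vals, vals[1:] + vals[:1]) if a * b < 0)
print(crossings)  # 4
```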
With these definitions in place, it is possible to explore the
parameter space of modified ASS and BSS field topologies and produce
both images and azimuthal traces of the expected distributions of
$B_{\parallel}$ and $B_{\perp}$. We use these measures as proxies
of the Faraday depth and polarized intensity, respectively. In the
absence of a model for the spatial distributions of cosmic-ray and
thermal electrons, we do not attempt to reproduce the observables
directly, but merely the systematic modulations
of $P$ and $\Phi$ with azimuth.
We present images and traces of the ASS and BSS predictions in
Figs.~\ref{figure:psims}--\ref{figure:rhsims}. Each group of $2\times2$
panels in the figure represents a contour plot of $B_{\parallel}$ and
$B_{\perp}$ (at top) and a series of traces of $B_{\parallel}(\phi)$
and $B_{\perp}(\phi)$ at fixed radii (below). The relative weights of
a dipole (D) and quadrupole (Q) field (each including a nontrivial
sign), galaxy inclination, $i$, spiral pitch angle, $\psi_{xy}$
(positive for counter-clockwise and negative for clockwise), and
distance(s), Z or (Z1 and Z2), from the mid-plane (positive toward the
observer and negative away) are indicated at the top of the contour
plots, together with an ASS or BSS designation. Two representative
spirals are drawn for reference, both directed inwardly for the ASS plots
and one inward and the other outward for the BSS plots. The inclination is
defined such that for kinematically trailing spirals, the receding and
approaching ends of the major axis are as indicated. The ``near'' and
``far'' designations in the plots also aid in demonstrating the
spatial orientation of the disk unambiguously. An ellipse that is
offset from the disk by the distance Z is drawn to demonstrate its
location relative to the mid-plane. The radii at which the azimuth
plots were made are marked by the same linetype on the contour
plots. The azimuth angle increases counter-clockwise from the receding
major axis.
We also plot the projected angle of $B_{\perp}$,
$\chi^\prime_0(\phi)$, within the same panel that presents the trace
of $B_{\perp}(\phi)$ using the right-hand scale. The
$\chi^\prime_0(\phi)$ traces are only plotted for the ASS cases, since
the BSS cases vary so dramatically with radius.
For simplicity, we begin with a fixed spiral pitch angle,
$\psi^\prime_{xy}~=~20^\circ$, since this corresponds well with the
average measured value for our sample galaxies
\citep[cf.][]{kennicutt_1981}, which vary from about 15--25$^\circ$,
and we begin by considering counter-clockwise (CCW) spirals. Other
values will also be considered below. First we
consider the predictions for the simple planar ASS and BSS
spirals in Fig.~\ref{figure:psims}. A radial spiral scaling constant, $a~=~5$,
and a galaxy disk radius, $r_{max}~=~35$, are assumed for
illustration. These choices have no effect on the results.
\subsection{Planar models\label{subsection:planarmod}}
Planar, axisymmetric field topologies yield projected field
components (as shown in the left-hand groups of
Fig.~\ref{figure:psims}) that are independent of radius and have very
simple symmetries, $B_{\parallel}(\phi)$ having positive and negative
excursions that are fully symmetric about zero. The single positive
peak occurs at $\phi~\sim~\psi^\prime_{xy}$ offset from the
approaching major axis, the negative peak being opposite to this
\citep[as also demonstrated by][]{krause_1990}. The $B_{\perp}(\phi)$
component has
a complementary behavior with two equal minima slightly offset from
both major axes, and two equal maxima slightly offset from both minor
axes. As the inclination increases from face-on toward edge-on, the
amplitude of the $B_{\parallel}(\phi)$ and $B_{\perp}(\phi)$
modulation increases, although there is no change in the azimuthal
location of either the maxima or minima. The projected orientation of the
plane-of-sky field, $\chi^\prime_0(\phi)$, has the expected linear
variation with azimuth for a nearly face-on geometry, which becomes
increasingly non-linear as the inclination increases.
Changing the sense of the field to be outwardly directed, rather than
inwardly directed, changes only the sign of $B_{\parallel}(\phi)$, and
leaves $B_{\perp}(\phi)$ unchanged. This is true for all of the
field geometries we consider.
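These planar ASS symmetries can be checked with a short numerical sketch. The projection used below, $B_{\parallel} = B\sin i\,\cos(\phi-\psi)$ and $B_{\perp} = B\sqrt{\sin^2(\phi-\psi) + \cos^2 i\,\cos^2(\phi-\psi)}$ for an inwardly-directed planar spiral of pitch $\psi$, is our own reconstruction of the geometry rather than the equations of the text, and the unit field strength is an illustrative assumption.

```python
import math

def planar_ass(phi_deg, incl_deg, pitch_deg=20.0, b=1.0):
    """Project a planar ASS spiral field at azimuth phi_deg (measured in
    the disk plane from the major axis) for a disk inclined by incl_deg.
    Returns (B_par, B_perp): line-of-sight and plane-of-sky components."""
    phi, i, psi = (math.radians(a) for a in (phi_deg, incl_deg, pitch_deg))
    # In-plane Cartesian field of an inwardly-directed spiral of pitch psi:
    bx = -b * math.sin(phi - psi)          # along the major axis
    by = b * math.cos(phi - psi)           # along the in-plane minor axis
    b_par = by * math.sin(i)               # only the minor-axis part tilts
    b_perp = math.hypot(bx, by * math.cos(i))
    return b_par, b_perp

# Single positive B_par peak at phi ~ pitch angle; B_perp minima slightly
# offset from the major axes; modulation amplitude grows with inclination.
trace = [(phi, *planar_ass(phi, 40.0)) for phi in range(0, 360, 5)]
peak_phi = max(trace, key=lambda t: t[1])[0]
dip_phi = min(trace, key=lambda t: t[2])[0]
```

The single $B_{\parallel}$ peak at $\phi \sim \psi^\prime_{xy}$, the $B_{\perp}$ minima near the major axes, and the growth of the modulation with inclination all follow directly from this projection.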
The basic BSS projected field patterns and their variation with galaxy
inclination are illustrated in the right-hand groups of
Fig.~\ref{figure:psims}. Because of the modulation of field sense with
azimuth along the family of spirals, there is a strong radial
dependence of the projected field components and their variation with
azimuth. $B_{\parallel}(\phi)$ has positive and negative excursions
that are still fully symmetric about zero, but it exhibits two
equal maxima and two equal minima (rather than only one for ASS),
which are located at different azimuth depending on the
radius. $B_{\perp}(\phi)$ displays four equal maxima at the
azimuthal angles where $B_{\parallel}(\phi)$ has its zero crossings,
and four equal minima in-between (at the locations of maximum and
minimum $B_{\parallel}$). As with the basic ASS fields, a higher
inclination yields an increased fractional modulation of
$B_{\parallel}(\phi)$ and $B_{\perp}(\phi)$, while not affecting
the location of those excursions. The strong radial dependence of
these azimuthal patterns implies that if there were significant
averaging of radii (by even as little as 20\% in $\Delta r/r$ for a
pitch angle of $\psi^\prime_{xy}~\sim~20^\circ$) then most of the
predicted modulation would disappear. This is particularly true of
$B_{\perp}(\phi)$, for which the predicted modulation is both
intrinsically smaller and twice as rapid as that of
$B_{\parallel}(\phi)$. The strong radial dependence of BSS models
severely limits their predictive power for the general patterns of
azimuthal modulation that are observed in galaxies and so will not be
considered further.
The azimuthal modulation of $B_{\parallel}$ and $B_{\perp}$ in the
planar ASS and BSS spiral topologies is apparently inadequate for
describing the general patterns noted above in
Sect.\,\ref{section:trends}. Neither of these topologies predicts a clear
distinction between the receding and approaching kinematic major axis
in terms of either the intrinsic brightness of polarized intensity
or the Faraday depth that it may encounter.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dq_0_1_20_0.eps}\includegraphics{dqb_0_1_20_0.eps}}
\resizebox{\hsize}{!}{\includegraphics{dq_0_1_40_0.eps}\includegraphics{dqb_0_1_40_0.eps}}
\caption{Simulated field distributions. Each group of $2\times2$
panels presents contour plots (top) and azimuthal traces (bottom) of
$B_\parallel$ (left) and $B_\perp$ (right). The dipole and
quadrupole moments (D, Q), galaxy inclination (I), spiral pitch
angle (P), and height above the galaxy mid-plane (Z) are indicated at
the top of each contour plot. Contours are drawn at $\pm$10,
$\pm$20, $\pm$30, $\dots$ $\pm$90\% of $|B|$. Two reference spirals
are drawn, both with field directed inward ($B~=~+1$) for the
axisymmetric (ASS) case and one in and the other out for the
bisymmetric (BSS) case. The approaching and receding major axis of
the galaxy are indicated for trailing spirals. Azimuth is measured
CCW from the receding major axis. Azimuth traces are drawn for the
radii indicated in the contour plots. A trace of the orientation of
$B_\perp$, $\chi^\prime_0(\phi)$, is given in the $B_\perp$ panel
for ASS models with a dashed linetype using the right-hand scale in
degrees. Here we compare the ASS (left hand groups) and BSS (right hand
groups) planar models (Z~=~0). The galaxy inclination is 20$^\circ$
in the upper series of groups and 40$^\circ$ in the lower
series. The spiral pitch angle is 20$^\circ$.}
\label{figure:psims}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dq_0_1_60_0.eps}\includegraphics{dqb_0_1_60_0.eps}}
\resizebox{\hsize}{!}{\includegraphics{dq_0_1_80_0.eps}\includegraphics{dqb_0_1_80_0.eps}}
\caption{(continued) Simulated field distributions. Here we compare
ASS (left hand groups) and BSS (right hand groups) planar models
(Z~=~0). The galaxy inclination is 60$^\circ$ in the upper series of
groups and 80$^\circ$ in the lower series. }
\end{figure*}
\subsection{Thick disk models\label{subsection:thickmod}}
In Figs.~\ref{figure:sims} -- \ref{figure:rhsims}, we explore the
extension of the basic ASS topology with either a dipole (D~=~1,
left-hand groups) or quadrupole (Q~=~1, right-hand groups)
modification to the out-of-plane field, as given by
Eq.~\ref{eqn:ass}. In keeping with the discussion of
Sect.\,\ref{section:trends}, we calculate a projected distribution that
is populated within a specified range of mid-plane heights extending
from Z1 to Z2 on the near-side of the galaxy and from $-$Z2 to $-$Z1
on the far-side. For lines-of-sight that intersect the mid-plane disk
within the nominal galaxy radius, only the near-side region (Z~$>$~0)
contributes to the integral, to approximate the effect of mid-plane
depolarization. For other lines-of-sight, out to 1.2 times the nominal
galaxy radius, both the near-side and far-side zones are
integrated. The integral is determined by first evaluating
$B_{\parallel}$ and $B_{\perp}$ at a sequence of finely-sampled
line-of-sight depths and then forming the average. The two lower
panels of each group were modified to allow more direct
comparison with the data of Fig.~\ref{figure:fdpofaz} by plotting the
mean modeled quantity within azimuthal wedges rather than the
azimuthal traces at discrete radii. The error bars represent the RMS
variation within each wedge, which is principally caused by a dependence
on radius.
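As a concrete illustration, the wedge statistics used in these panels can be sketched as follows; the $30^\circ$ wedge width and the toy radial traces are illustrative assumptions, not values taken from the models.

```python
import math
from collections import defaultdict

def wedge_stats(samples, wedge_deg=30.0):
    """Bin (azimuth_deg, value) samples into azimuthal wedges and return
    {wedge_center_deg: (mean, rms)}, where rms is the scatter about the
    wedge mean and plays the role of the 'error bar'."""
    bins = defaultdict(list)
    for az, val in samples:
        bins[int((az % 360.0) // wedge_deg)].append(val)
    out = {}
    for k, vals in sorted(bins.items()):
        mean = sum(vals) / len(vals)
        rms = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        out[(k + 0.5) * wedge_deg] = (mean, rms)
    return out

# Toy traces at three radii: the same azimuthal pattern with a
# radius-dependent amplitude, so the wedge RMS reflects the radial spread.
samples = [(az, a * math.cos(math.radians(az)))
           for a in (0.8, 1.0, 1.2) for az in range(0, 360, 5)]
stats = wedge_stats(samples)
```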
We first consider the dipole and quadrupole models that are
integrated over mid-plane heights of $|z|~=~2 \rightarrow 4$ (about 10\% of the
disk radius) in Fig.~\ref{figure:sims}. Although the azimuthal traces
of the projected field modulation show some variation in
amplitude with radius (as indicated by the error bars), there are
well-defined maxima and minima. The local modification to the planar
spiral field is illustrated in Fig.~\ref{figure:geom} by the vector
orientations along the horizontal line drawn at Z~=~$+$4. The addition
of a poloidal field component has introduced several notable
differences from the planar case. Although $B_{\parallel}(\phi)$
exhibits maxima and minima at the same azimuth as previously (near the
major axes), it no longer has excursions that are symmetric about
$B_{\parallel}~=~0$, $B_{\parallel}(\phi)$ being offset to negative
values of $B_{\parallel}$ (for an inwardly-directed spiral field) and
showing complementary modulation in the dipole and quadrupole cases. The
character of $B_{\perp}(\phi)$ is more substantially modified by the
addition of a poloidal field. We first consider the quadrupole
case (Q~=~1) shown in the right-hand groups of Fig.~\ref{figure:sims}.
At low inclinations, a single minimum in $B_{\perp}$ occurs
near the receding major axis at $\phi~\sim~\psi^\prime_{xy}$, and a
broad single peak is centered near the approaching major axis. As the
inclination increases, the single global minimum in $B_{\perp}$
remains close to the receding major axis, while the single broad maximum
divides into two maxima that separate and shift toward the two ends
of the minor axis. This is exactly the trend shown by the data in
Fig.~\ref{figure:fdpofaz}.
The other noteworthy attribute of these distributions is that the
largest excursion in $|B_{\parallel}|$ is found on the receding major
axis at all inclinations, representing the greatest Faraday depth at this
azimuth for coextensive emitting and dispersive media. For low
inclinations, there is a broad minimum in $|B_{\parallel}|$ at the
approaching major axis, which becomes a secondary peak at this
location (of opposite sign) as the inclination increases. This
roughly agrees with the patterns seen for the majority
of lower inclination galaxies in Fig.~\ref{figure:fdpofaz}, these
models still being more symmetric than the data, which exhibit a much
clearer nearside/farside asymmetry.
For the dipole case (D~=~1), shown in the left-hand groups of
Fig.~\ref{figure:sims}, the receding and approaching major axes are
reversed, since the poloidal field component of the dipole at positive
mid-plane offsets is directed outward and not inward (as shown in
Fig.~\ref{figure:geom}). For this case, it is the {\it approaching\ }
major axis that is predicted to have a global minimum in $B_{\perp}$,
and hence also the minimum intrinsic polarized intensity. In a similar way, it is the
{\it approaching\ } major axis for which the greatest value
of $|B_{\parallel}|$, hence the largest associated Faraday
depth is found. The degree of modulation with azimuth is rather modest for the
dipole case integrated over these heights.
From this comparison, it is clear that a large-scale quadrupole ASS
field topology provides an excellent model for explaining the
modulation of polarized intensity with azimuth, while the dipole
clearly does not. The modulation of Faraday depth with azimuth is also
explained qualitatively for the low inclination systems, but is not
reproduced in detail. The crucial attributes of the quadrupole model
in matching the observed azimuthal modulation pattern, $<P(\phi)>$,
are the magnitude and particularly the sign of the angle $\psi_z$
as given by Eq.~\ref{eqn:pz}. For typical values of the spiral pitch
angle, $|\psi^\prime_{xy}| < 25^\circ$, the projected field component,
$B_{y^\prime}$ of Eq.~\ref{eqn:pass} has minima and maxima near $\phi~=~0$
and $180^\circ$. The ``hour-glass'' shape of the quadrupole field yields
a positive sign of $\psi_z$ for small positive values of $z$,
i.e. toward the observer, and this is what yields a minimum of
$B_{\perp}$ near $\phi~=~0$ for the nearside emission. The ``donut''
shape of the dipole field, on the other hand, yields a negative sign of
$\psi_z$ for small positive values of $z$, resulting in a maximum of
$B_{\perp}$ near $\phi~=~0$.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_20_20_2_4.eps}\includegraphics{dqi_0_1_20_20_2_4.eps}}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_40_20_2_4.eps}\includegraphics{dqi_0_1_40_20_2_4.eps}}
\caption{Simulated field distributions (as in
Fig.~\ref{figure:psims}). Here we compare dipole- (left hand
groups) and quadrupole- (right hand groups) ASS models integrated
over a zone from Z1 to Z2 ($|$Z$|$~=~2--4). For those lines-of-sight
that intersect the mid-plane of the disk, only positive values of Z
are included in the integral. The galaxy inclination is 20$^\circ$
in the upper series of groups and 40$^\circ$ in the lower series. }
\label{figure:sims}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_60_20_2_4.eps}\includegraphics{dqi_0_1_60_20_2_4.eps}}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_80_20_2_4.eps}\includegraphics{dqi_0_1_80_20_2_4.eps}}
\caption{(continued) Simulated field distributions. The galaxy inclination
is 60$^\circ$ in the upper series of groups and 80$^\circ$ in the
lower series.}
\end{figure*}
\subsection{Halo models\label{subsection:halomod}}
We now consider dipole and quadrupole models that extend to more
substantial mid-plane heights in an attempt to reproduce the azimuthal
modulations of Faraday depth seen in the data. In
Fig.~\ref{figure:hsims}, we illustrate the result of integrating the
same models just considered over mid-plane heights of $|z|~=~4
\rightarrow 10$ (or about 30\% of the disk radius). By considering
first the quadrupole models in the right-hand panels, we see
substantial similarity between these models and those originating
closer to the mid-plane (Fig.~\ref{figure:sims}). The modulation
pattern in $B_{\perp}(\phi)$ has a slightly smaller amplitude. The
modulation pattern of the halo $B_{\parallel}(\phi)$, while similar to
its thick disk counterpart, has a clear asymmetry in the approaching
major axis minimum about an azimuth of 180$^\circ$, which is
reminiscent of what is seen in the data of
Fig.~\ref{figure:fdpofaz}. The pattern shown is for a positive pitch
angle of 20$^\circ$ (CCW spiral), while for a negative pitch angle (CW
spiral) it is mirrored about an azimuth of 180$^\circ$. We recall that
changing the sign of the spiral field from inward directed (as shown)
to outward directed changes only the sign of
$B_{\parallel}(\phi)$. Qualitative agreement of the halo predictions
for the quadrupole with the basic $<\Phi(\phi)>$ patterns is
reasonable for many of the low inclination galaxies of our sample. The
dipole halo models (shown in the left-hand panels) with their maximal
excursion near an azimuth of 180$^\circ$ reproduce the observations
less successfully.
Although the models considered to this point can reproduce the
general pattern of $<P(\phi)>$ modulation at all galaxy inclinations
and the $<\Phi(\phi)>$ modulations for many low inclination targets,
they have not provided any agreement with the distinctive doubly
peaked pattern of $<\Phi(\phi)>$ around the minor axes seen in our
four highest inclination galaxies. We now consider an additional variant
of our halo models in an attempt to reproduce those $<\Phi(\phi)>$
patterns. In Fig.~\ref{figure:rhsims} we consider dipole and
quadrupole models for an extremely large pitch angle of 85$^\circ$, to
illustrate the impact of a planar field component that is essentially
radial. The distinctive doubly-peaked pattern of $<\Phi(\phi)>$ at
large inclinations can be reasonably reproduced in this case, but only
for the dipole topology.
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_20_20_4_10.eps}\includegraphics{dqi_0_1_20_20_4_10.eps}}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_40_20_4_10.eps}\includegraphics{dqi_0_1_40_20_4_10.eps}}
\caption{Simulated field distributions (as in
Fig.~\ref{figure:psims}). Here we compare dipole- (left hand
groups) and quadrupole- (right hand groups) ASS models integrated
over a zone from Z1 to Z2 ($|$Z$|$~=~4--10). For those
lines-of-sight that intersect the mid-plane of the disk, only
positive values of Z are included in the integral. The galaxy
inclination is 20$^\circ$ in the upper series of groups and
40$^\circ$ in the lower series. }
\label{figure:hsims}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_60_20_4_10.eps}\includegraphics{dqi_0_1_60_20_4_10.eps}}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_80_20_4_10.eps}\includegraphics{dqi_0_1_80_20_4_10.eps}}
\caption{(continued) Simulated field distributions. The galaxy inclination
is 60$^\circ$ in the upper series of groups and 80$^\circ$ in the
lower series.}
\end{figure*}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_20_85_4_10.eps}\includegraphics{dqi_0_1_20_85_4_10.eps}}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_40_85_4_10.eps}\includegraphics{dqi_0_1_40_85_4_10.eps}}
\caption{Simulated field distributions (as in
Fig.~\ref{figure:psims}). Here we compare dipole- (left hand
groups) and quadrupole- (right hand groups) ASS models integrated
over a zone from Z1 to Z2 ($|$Z$|$~=~4--10) with a large pitch angle
(P~=~85$^\circ$). For those lines-of-sight that intersect the
mid-plane of the disk, only positive values of Z are included in the
integral. The galaxy inclination is 20$^\circ$ in the upper series
of groups and 40$^\circ$ in the lower series. }
\label{figure:rhsims}
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_60_85_4_10.eps}\includegraphics{dqi_0_1_60_85_4_10.eps}}
\resizebox{\hsize}{!}{\includegraphics{dqi_1_0_80_85_4_10.eps}\includegraphics{dqi_0_1_80_85_4_10.eps}}
\caption{(continued) Simulated field distributions. The galaxy inclination
is 60$^\circ$ in the upper series of groups and 80$^\circ$ in the
lower series.}
\end{figure*}
\section{Discussion\label{section:disc}}
Our analysis of the projected three-dimensional magnetic field topologies
presented in Sect.~\ref{section:Bdist} and their predicted observable consequences
for the azimuthal modulation, $B_{\parallel}(\phi)$ and
$B_{\perp}(\phi)$, has provided a plausible explanation of the very
general observed trends noted in Sect.~\ref{section:trends}. A
self-consistent scenario has emerged that accounts for the polarized
intensity and its Faraday dispersion observed at GHz frequencies from
galaxy disks. The detected polarized intensity is dominated by a zone
of emissivity above the mid-plane on the side of the galaxy facing the
observer (at a height of perhaps 5 -- 10 \% of the disk radius). This
thick disk emission arises in a region that is dominated by an
axisymmetric spiral with an out-of-disk quadrupole topology, which is
responsible for a distinctive modulation of $B_{\perp}(\phi)$ and its
variation with galaxy inclination. This emission is affected by only a modest
amount of Faraday dispersion, of a few tens of rad~m$^{-2}$, within
the nearside halo of the galaxy in its subsequent propagation. For
the majority of low to modest inclination galaxies ($\le60^\circ$),
the dispersive foreground topology is consistent with an extension of
the thick disk ASS quadrupole out to larger heights above the
mid-plane (of perhaps 30\% of the disk radius).
The most highly inclined galaxies of our sample require an alternative
halo field topology, in the form of a radially-dominated
dipole, which yields a distinctive doubly-peaked modulation of
$\Phi(\phi)$. It seems noteworthy that in many or possibly all of the
highly inclined galaxies of our sample there is evidence of a
significant circum-nuclear outflow component to the polarized emission, in
addition to that of the disk. This circum-nuclear component would
quite naturally be expected to be associated with a dipole, rather
than a quadrupole field, in view of the likely dominance of the
$\alpha^2$ over the $\alpha\omega$ dynamo mechanism at small galactic
radii \citep[e.g.][]{elstner_etal_1992}. This circum-nuclear dipole
field would also be less likely to have any association with the
spiral pitch angle of the disk given its origin. Because of the
shallower roll-off with radius of a dipole compared to the quadrupole
field (see Eq.~\ref{eqn:dq}), a dipole component may
come to dominate the halo field of the associated galaxy when both are
present. \citet{sokoloff_shukurov_1990} also argued that an
$\alpha\omega$ dynamo operating in the halo would directly
produce a dipole field. Non-stationary global halo models
\citep[e.g.][]{brandenburg_etal_1992} may also provide a natural
explanation of the dipole signature on the largest scales.
In addition to the bright polarized emission originating in the
nearside, the corresponding rear-facing region of polarized
emissivity of the thick disk can also be detected in relatively
face-on galaxies if sufficient sensitivity is available. This
emission is substantially weaker, by a factor of 4 -- 5, and consistent
with depolarization caused by fluctuations in the magneto-ionic medium of
the mid-plane on scales smaller than a pc. This fainter polarized
component is affected by much greater Faraday dispersion, corresponding to
both plus and minus 150 -- 200~rad~m$^{-2}$ in its propagation
through the dense mid-plane plasma, as well as the near-side
halo. These two maxima (one positive and one negative) in Faraday
depth are aligned approximately with the major axes of each galaxy,
and have approximately symmetric excursions about the Galactic
foreground value. This pattern is consistent with the
expectation for a simple planar ASS field in the galaxy disk.
Future observations of nearby galaxy disks at frequencies below 200
MHz, such as with the upcoming LOFAR facility, will likely detect
net polarized emission from even larger heights above the galaxy
mid-plane and exclusively from regions unobstructed by the mid-plane
in projection. A good indication of the predicted observables is
given in Fig.~\ref{figure:hsims} in which we present model
integrations of the upper halo (out to 30\% of the disk radius).
\begin{acknowledgements}
The Westerbork Synthesis Radio Telescope is operated by ASTRON (The
Netherlands Institute for Radio Astronomy) with support from the
Netherlands Foundation for Scientific Research (NWO).
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
Recent years have witnessed a great success of Deep Learning with deep Convolutional Networks (ConvNets)~\cite{lecun1989backpropagation, krizhevsky2012imagenet} in several visual tasks. Originally mainly used for image classification~\cite{krizhevsky2012imagenet, simonyan2015very, he2016deep}, they are now widely used for other tasks such as object detection~\cite{girshick2014rich, girshick2015fast, dai2016r, zagoruyko2016a, lin2017feature} or semantic segmentation~\cite{long2015fully, chen2015semantic, li2017fully}. In particular for detection, region-based deep ConvNets~\cite{girshick2014rich, girshick2015fast, dai2016r} are currently the leading methods. They exploit region proposals~\cite{ren2015faster, pinheiro2016learning, gidaris2016attend} as a first step to focus on interesting areas within images, and then classify and finely relocalize these regions at the same time.
Although they yield excellent results, region-based deep ConvNets still present a few issues that need to be solved. Networks are usually initialized with models pre-trained on the ImageNet dataset~\cite{russakovsky2015imagenet} and are therefore prone to suffer from mismatches between classification and detection tasks. As an example, pooling layers bring invariance to local transformations and help learn more robust features for classification, but they also reduce the spatial resolution of feature maps and make the network less sensitive to the positions of objects within regions~\cite{dai2016r}, both of which are bad for accurate localization. Furthermore, the use of rectangular bounding boxes limits the representation of objects, as boxes may contain a significant fraction of background, especially for non-rectangular objects.
Before the introduction of Deep Learning into object detection with~\cite{girshick2014rich}, the state of the art was led by approaches exploiting Deformable Part-based Models (DPMs)~\cite{felzenszwalb2010object}. These methods are in contrast with region-based deep ConvNets: while the latter rely on strong features learned directly from pixels and exploit region proposals to focus on interesting areas of images, DPM explicitly takes into account the geometry of objects by optimizing a graph-based representation and is usually applied in a sliding window fashion over images. Both approaches exploit different hypotheses and therefore seem complementary.
\begin{figure}[t]
\begin{center}
\includestandalone[width=\textwidth,mode=image]{images/figure_intro}
\end{center}
\caption{\textbf{Architecture of DP-FCN.} It is composed of a FCN to extract dense feature maps with high spatial resolution (Section~\ref{sec:fcn}), a deformable part-based RoI pooling layer to compute a representation aligning parts (Section~\ref{sec:dproi}) and two sibling classification and localization prediction branches (Section~\ref{sec:pred}). Initial rectangular region is deformed to focus on discriminative elements of object. Alignment of parts brings invariance for classification and geometric information refining localization \textit{via} a deformation-aware localization module.}
\label{fig:archi}
\end{figure}
In this paper, we propose Deformable Part-based Fully Convolutional Network (DP-FCN), an end-to-end model integrating ideas from DPM into region-based deep ConvNets for object detection, as an answer to the aforementioned issues. It learns a part-based representation of objects and aligns these parts to enhance both classification and localization. Training is done with box-level supervision only, \emph{i.e}\bmvaOneDot without part annotation. It improves upon existing object detectors with two key contributions.
The first one is the introduction of a new deformable part-based RoI pooling layer, which explicitly selects discriminative elements of objects around region proposals by simultaneously optimizing latent displacements of all parts (middle of Fig.~\ref{fig:archi}). Using a fixed box geometry is bound to be sub-optimal, especially when objects are not rigid and parts can move relative to each other. Through alignment of parts, deformable part-based RoI pooling increases the limited invariance to local transformations brought by pooling, which is beneficial for classification.
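The behavior of such a layer can be illustrated with a minimal sketch; the single-channel score map, the $2\times2$ part grid, the displacement range and the quadratic deformation cost with weight \texttt{lam} are illustrative assumptions rather than the exact formulation of DP-FCN.

```python
import numpy as np

def deformable_part_pool(score_map, roi, parts=2, max_disp=2, lam=0.1):
    """Pool a single-class score map over a parts x parts grid for a RoI
    (x0, y0, x1, y1). Each part shifts independently by up to max_disp
    pixels to maximise its mean score minus a quadratic deformation cost.
    Returns the pooled scores and the chosen (dy, dx) per part."""
    x0, y0, x1, y1 = roi
    ph, pw = (y1 - y0) // parts, (x1 - x0) // parts
    H, W = score_map.shape
    pooled = np.full((parts, parts), -np.inf)
    disp = np.zeros((parts, parts, 2), dtype=int)
    for i in range(parts):
        for j in range(parts):
            for dy in range(-max_disp, max_disp + 1):
                for dx in range(-max_disp, max_disp + 1):
                    ys, xs = y0 + i * ph + dy, x0 + j * pw + dx
                    if ys < 0 or xs < 0 or ys + ph > H or xs + pw > W:
                        continue  # displaced part would leave the map
                    s = score_map[ys:ys + ph, xs:xs + pw].mean()
                    s -= lam * (dy * dy + dx * dx)  # deformation cost
                    if s > pooled[i, j]:
                        pooled[i, j], disp[i, j] = s, (dy, dx)
    return pooled, disp
```

Each part thus optimizes the same score-minus-deformation trade-off as in DPM, here on top of dense score maps.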
Aligning parts also gives access to their configuration (\emph{i.e}\bmvaOneDot their positions relative to each other), which brings important geometric information about objects, \emph{e.g}\bmvaOneDot their shapes, poses or points of view. The second improvement is the design of a deformation-aware localization module (right of Fig.~\ref{fig:archi}), a specific module exploiting configuration information to refine localization. It improves bounding box regression by explicitly modeling displacements of parts within the localization branch, in order to tightly fit boxes around objects.
By integrating the previous ideas into Fully Convolutional Networks (FCNs)~\cite{he2016deep, dai2016r} (left of Fig.~\ref{fig:archi}), we obtain state-of-the-art results on the standard PASCAL VOC 2007 and 2012 datasets~\cite{everingham2015the} with VOC data only. We show that those architectures are amenable to an efficient computation of parts and their deformations, and offer natural solutions to keep spatial resolution. In particular, the application of deformable part-based approaches depends severely on the availability of rather fine feature maps~\cite{savalle2014deformable, girshick2015deformable, wan2015end}.
\section{Related work}
\label{sec:rw}
\paragraph{Region-based object detectors.}
Region-based deep ConvNets are currently the leading approach in object detection. Since the seminal works of R-CNN~\cite{girshick2014rich} and Fast R-CNN~\cite{girshick2015fast}, most object detectors exploit region proposals or directly learn to generate them~\cite{ren2015faster, gidaris2016attend, pinheiro2016learning}. Compared to the sliding window approach, the use of region proposals allows the model to focus computation on interesting areas of images and to balance positive and negative examples to ease learning. Other improvements are now commonly used, \emph{e.g}\bmvaOneDot using intermediate layers to refine feature maps~\cite{bell2016inside, kong2016hypernet, zagoruyko2016a, lin2017feature} or selecting interesting regions for building mini-batches~\cite{shrivastava2016training, dai2016r}.
\vspace{-1em}
\paragraph{Deformable Part-based Models.}
The core idea of DPM~\cite{felzenszwalb2010object} is to represent each class by a root filter describing the global appearance of objects and a set of part filters to finely model local parts. Each part filter is assigned to an anchor point, defined relative to the root, and moves around during detection to model deformations of objects and best fit them. A regularization is further introduced in the form of a deformation cost penalizing large displacements. Each part then optimizes a trade-off between maximizing detection score and minimizing deformation cost. The final output combines the scores from the root and all parts. Accurate localization is done with a post-processing step.
Several extensions have been proposed to DPM, \emph{e.g}\bmvaOneDot using a second hierarchical level of parts to finely describe objects~\cite{zhu2010latent}, sharing part models between classes~\cite{ott2011shared}, learning from strongly supervised annotations (\emph{i.e}\bmvaOneDot at the part level) to get a better model~\cite{azizpour2012object}, exploiting segmentation cues to improve detection~\cite{fidler2013bottom}.
\vspace{-1em}
\paragraph{Part-based deep ConvNets.}
The first attempts to use deformable parts with deep ConvNets simply exploited deep features learned by an AlexNet~\cite{krizhevsky2012imagenet} to use them with DPMs~\cite{savalle2014deformable, girshick2015deformable, wan2015end}, but without region proposals. However, tasks involving spatial predictions (\emph{e.g}\bmvaOneDot detection, segmentation) require fine feature maps in order to have accurate localization~\cite{lin2017feature}. The fully connected layers were therefore discarded to keep enough spatial resolution, which lowered results. We solve this issue by using a FCN, well suited for these kinds of applications as it naturally preserves spatial resolution.
Thanks to several tricks easily integrable into FCNs (\emph{e.g}\bmvaOneDot dilated convolutions~\cite{chen2015semantic, long2015fully, yu2016multi} or skip pooling~\cite{bell2016inside, kong2016hypernet, zagoruyko2016a}), FCNs have recently been successful in various tasks, \emph{e.g}\bmvaOneDot image classification~\cite{he2016deep, zagoruyko2016wide, xie2017aggregated}, object detection~\cite{dai2016r}, semantic segmentation~\cite{li2017fully}, weakly supervised learning~\cite{durand2017wildcat}.
\cite{zhang2014part} introduces parts for detection by learning part models and combining them with geometric constraints for scoring. It is learned in a strongly supervised way, \emph{i.e}\bmvaOneDot with part annotations. Although manually defining parts can be more interpretable, it is likely sub-optimal for detection as they might not correspond to the most discriminative elements.
Parts are often used for fine-grained recognition. \cite{lin2015deep} proposes a module for localizing and aligning parts with respect to templates before classifying them, \cite{simon2015neural} finds part proposals from activation maps and learns a graphical model to recognize objects, \cite{zhang2016spda} uses two sub-networks for detection and classification of parts, \cite{sicre2017unsupervised} considers parts as a vocabulary of latent discriminative features decoupled from the task and learns them in an unsupervised way.
Usage of parts is also common in semantic segmentation, \emph{e.g}\bmvaOneDot~\cite{wang2015joint, dai2016instance, li2017fully}.
The work closest to ours is R-FCN~\cite{dai2016r}, which also uses a FCN to achieve a great efficiency. We improve upon it by learning more flexible representations than with fixed box geometry.
It allows our model to align parts of objects to bring invariance into classification and to exploit geometric information from positions of parts to refine localization.
\section{Deformable Part-based Fully Convolutional Networks}
\label{sec:dpfcn}
In this section, we present Deformable Part-based Fully Convolutional Network (DP-FCN), a deep network for object detection.
It represents regions with several parts that it aligns by explicitly optimizing their positions. This alignment improves both classification and localization: the part-based representations are more invariant to local transformations and the configurations of parts give important information about the geometry of objects.
This idea can be inserted into most state-of-the-art network architectures. The model is end-to-end trainable without part annotation and adds only a small computational overhead.
The complete architecture is depicted in Fig.~\ref{fig:archi} and is composed of three main modules: \begin{enumerate*}[label=(\roman*)]
\item a Fully Convolutional Network (FCN) applied on whole images,
\item a deformable part-based RoI pooling layer, and
\item two sibling prediction layers for classification and localization\end{enumerate*}.
We now describe all three parts of our model in more detail.
\subsection{Fully convolutional feature extractor}
\label{sec:fcn}
Our model relies on a FCN (\emph{e.g}\bmvaOneDot~\cite{he2016deep, zagoruyko2016wide, xie2017aggregated}) as backbone architecture, as this kind of network enjoys several practical advantages, leading to several successful models, \emph{e.g}\bmvaOneDot \cite{dai2016r, li2017fully, durand2017wildcat}. First, it allows sharing most of the computation on whole images and reduces per-RoI layers, as noted in R-FCN~\cite{dai2016r}. Second, and most important to our work, it directly provides feature maps linked to the task at hand (\emph{e.g}\bmvaOneDot detection heatmaps, as illustrated in the middle of Fig.~\ref{fig:archi} and on the left of Fig.~\ref{fig:dproi}) from which final predictions are simply pooled, as done in~\cite{dai2016r,durand2017wildcat}. Within DP-FCN, inferring the positions of parts for a region is done with a particular kind of RoI pooling that we describe in Section~\ref{sec:dproi}.
The fully convolutional structure is therefore suitable for computing responses of all parts for all classes as a single map for each of them. A corresponding structure is used for localization. The complete representation for a whole image (classification and localization maps for each part of each class) is obtained with a single forward pass and is shared between all regions of the same image, which is very efficient.
Since relocalization of parts is done within feature maps, the resolution of those maps is of practical importance. FCNs contain only spatial layers and are therefore well suited for preserving spatial resolution, as opposed to networks ending with fully connected layers, \emph{e.g}\bmvaOneDot~\cite{krizhevsky2012imagenet, simonyan2015very}. Specifically, if the stride is too large, deformations of parts might be too coarse to describe objects correctly.
We reduce the stride by using dilated convolutions~\cite{chen2015semantic, long2015fully, yu2016multi} on the last convolution block and skip pooling~\cite{bell2016inside, kong2016hypernet, zagoruyko2016a} to combine the last three blocks.
\subsection{Deformable part-based RoI pooling}
\label{sec:dproi}
\begin{figure}[t]
\begin{center}
\begin{tabular}{@{}lr@{}}
\parbox[c][1em][c]{.73\textwidth}{\includestandalone[width=.73\textwidth,mode=image]{images/figure_dproi}} & \begin{tabular}{@{}c@{}}\includegraphics[width=.235\textwidth]{images/image/cat_box_parts2.png}\\\includegraphics[width=.235\textwidth]{images/image/cat_box_def_parts2.png}\end{tabular}\\
\end{tabular}
\end{center}
\caption{\textbf{Deformable part-based RoI pooling (left).} Each input feature map corresponds to a part of a class (or background). Positions of parts are optimized separately within detection maps with deformation costs as regularization, and values are pooled within parts at the new locations. Output includes a map for each class and the computed displacements of parts, to be used for localization. \textbf{Illustration of deformations (right).} Parts are moved from their initial positions to adapt to the shape of the object and better describe it.}
\label{fig:dproi}
\end{figure}
The aim of this layer is to divide each region proposal $R$ into several parts and to locally relocalize them to best match the shapes of objects (see Fig.~\ref{fig:dproi}).
Each part then models a discriminative local element and is to be aligned at the corresponding location within the image.
This deformable part-based representation is more invariant to transformations of objects because the parts are positioned accordingly and their local appearances are stable~\cite{felzenszwalb2010object}. This is especially useful for non-rigid objects, where a box-based representation must be sub-optimal.
The separation into parts is done with a regular grid of size $k \times k$ fitted to regions~\cite{girshick2015fast,dai2016r}. Each cell $(i,j)$ is then interpreted as a distinct part $R_{i,j}$. This strategy is simple yet effective~\cite{zhu2010latent, wan2015end}. Since the number of parts (\emph{i.e}\bmvaOneDot $k^2$) is fixed as a hyper-parameter, it is easy to have a complete detection heatmap $z_{i,j,c}$ already computed for each part $(i,j)$ of each class $c$ (left of Fig.~\ref{fig:dproi}).
Parts then only need to be optimized within corresponding maps.
The deformation of parts draws ideas from the original DPM~\cite{felzenszwalb2010object}: it allows parts to slightly move around their reference positions (partitions of the initial regions), selects the optimal latent displacements, and pools values from selected locations.
The pooled score $p^R_c(i,j)$ for part $(i,j)$ and class $c$ is a trade-off between maximizing the score on the feature map and minimizing the displacement $(dx,dy)$ from the reference position (see Fig.~\ref{fig:dproi}):
\begin{equation}
p^R_c(i,j) = \max_{dx, dy} \left[ \Pool_{(x,y) \in R_{i,j}} z_{i,j,c}(x+dx,y+dy) - \lambda^{def} \left( dx^{2} + dy^{2} \right) \right]
\label{eq:dproi}
\end{equation}
where $\lambda^{def}$ represents the strength of the regularization (small deformations), and $\Pool$ is an average pooling as in~\cite{dai2016r}, but any pooling function could be used instead. The deformation cost is here the squared distance of the displacement on the feature map, but other functions could be used equally.
Implementation details can be found in Appendix~\ref{sec:impl_dproi}.
During training, deformations are optimized without part-level annotations. Displacements computed during the forward pass are stored and used to backpropagate gradients at the same locations.
We further note that the deformations are computed for all parts and classes independently.
However, no deformation is computed for the \textit{background} class: they would not bring any relevant information as there is no discriminative element for this class.
The same displacements of parts are used to pool values from the localization maps.
$\lambda^{def}$ is directly linked to the magnitudes of the displacements of parts, and therefore to the deformations of RoIs too, by controlling the squared distance regularization (\emph{i.e}\bmvaOneDot the preference for small deformations). Increasing it puts a higher weight on the regularization and effectively reduces displacements of parts, but setting it too high prevents parts from moving and removes the benefits of our approach. It is noticeable that this deformable part-based RoI pooling is a generalization of the position-sensitive RoI pooling of~\cite{dai2016r}. Setting $\lambda^{def} = +\infty$ clamps $dx$ and $dy$ to 0, leading to the formulation of position-sensitive RoI pooling:
\begin{equation}
p^R_c(i,j) = \Pool_{(x,y) \in R_{i,j}} z_{i,j,c}(x,y).
\label{eq:psroi}
\end{equation}
On the other hand, setting $\lambda^{def} = 0$ removes the regularization and parts are then free to move. With $\lambda^{def}$ too low, results degrade, indicating that the regularization is important in practice. However, the results appear stable within a large range of values of $\lambda^{def}$.
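To make Eq.~(\ref{eq:dproi}) concrete, the following NumPy sketch pools the score of a single part. It is an illustration only: the map size, the displacement search radius and the box convention are assumptions, not the exact implementation (which is detailed in Appendix~\ref{sec:impl_dproi}).

```python
import numpy as np

def deformable_part_pool(z, part_box, lambda_def, max_disp=3):
    """Sketch of Eq. (1): pool one part's score from its detection map.

    z          -- 2-D detection map z_{i,j,c} for this part and class
    part_box   -- (x0, y0, x1, y1), reference cell of the part (inclusive)
    lambda_def -- deformation cost weight (large values recover Eq. (2))
    max_disp   -- illustrative bound on the latent displacement search
    """
    x0, y0, x1, y1 = part_box
    best_score, best_disp = -np.inf, (0, 0)
    for dx in range(-max_disp, max_disp + 1):
        for dy in range(-max_disp, max_disp + 1):
            xs, ys = x0 + dx, y0 + dy
            xe, ye = x1 + dx, y1 + dy
            if xs < 0 or ys < 0 or xe >= z.shape[1] or ye >= z.shape[0]:
                continue  # displaced part must stay inside the map
            pooled = z[ys:ye + 1, xs:xe + 1].mean()  # average pooling, as in R-FCN
            score = pooled - lambda_def * (dx ** 2 + dy ** 2)
            if score > best_score:
                best_score, best_disp = score, (dx, dy)
    return best_score, best_disp  # the displacement is reused for localization
```

With a large $\lambda^{def}$ the selected displacement collapses to $(0,0)$ and the function behaves like position-sensitive RoI pooling; with $\lambda^{def}=0$ the part moves freely to the best-scoring location.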
\subsection{Classification and localization predictions with deformable parts}
\label{sec:pred}
\begin{figure}[t]
\begin{center}
\includestandalone[width=.75\textwidth,mode=image]{images/figure_refine}
\end{center}
\caption{\textbf{Deformation-aware localization refinement.} Relocalizations of bounding boxes obtained by averaging pooled values from localization maps (upper path) do not benefit from deformable parts. To do so, displacements of parts are forwarded through two fully connected layers (lower path) and are element-wise multiplied with previous output to refine it, separately for each class. Localization is done with 4 values per class, following~\cite{girshick2014rich, girshick2015fast}.}
\label{fig:loc_ref}
\end{figure}
Predictions are performed with two sibling branches for classification and relocalization of region proposals as is common practice~\cite{girshick2015fast}. The classification branch is simply composed of an average pooling followed by a SoftMax layer. This is the strategy employed in R-FCN~\cite{dai2016r}, however the deformations introduced before (with deformable part-based RoI pooling) bring more invariance to transformations of objects and boost classification.
Regarding localization, we also use an average pooling to compute a first localization output from the corresponding features. However, the configuration of parts (\emph{i.e}\bmvaOneDot their positions relative to each other) is obtained as a by-product of the alignment of parts performed before. It gives rich geometric information about the appearances of objects, \emph{e.g}\bmvaOneDot their shapes or poses, which can be used to enhance localization accuracy.
To that end we introduce a new deformation-aware localization refinement module (see Fig.~\ref{fig:loc_ref}). For each region $R$, we extract the feature vector $d_c^{R}$ of displacements $(dx,dy)$ for all parts of class $c$ (as shown on Fig.~\ref{fig:dproi}) and use it to refine previous output for the same class. $d_c^{R}$ is forwarded through two fully connected layers and is then element-wise multiplied with the first values to yield the final localization output for this class.
Since refinement is mainly geometric, it is done for all classes separately and parameters are shared between classes.
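The refinement path of Fig.~\ref{fig:loc_ref} can be sketched as follows for a single class; the hidden size and the ReLU non-linearity are assumptions made for illustration, not specified details of the model.

```python
import numpy as np

def refine_localization(loc, d, W1, b1, W2, b2):
    """Sketch of the deformation-aware localization refinement (lower path).

    loc -- (4,) first localization output for one class (average pooling)
    d   -- (2*k*k,) displacements (dx, dy) of all parts for that class
    W1, b1, W2, b2 -- the two fully connected layers, shared across classes
    """
    h = np.maximum(W1 @ d + b1, 0.0)  # first FC layer + ReLU (assumed)
    g = W2 @ h + b2                   # second FC layer -> 4 gating values
    return loc * g                    # element-wise product refines the output
```

Note that if the second layer outputs all ones, the first localization output passes through unchanged, so the module can learn to refine only when the part configuration is informative.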
\section{Experiments}
\subsection{Ablation study}
\label{sec:ablation}
\paragraph{Experimental setup.}
We perform this analysis with the fully convolutional backbone architecture ResNet-50~\cite{he2016deep} and exploit the region proposals computed by AttractioNet~\cite{gidaris2016locnet, gidaris2016attend} released by the authors. We use $k \times k = 7 \times 7$ parts, as advised by the authors of R-FCN~\cite{dai2016r}.
The setting of all other hyper-parameters can be found in Appendix~\ref{sec:xpsetup_ab}.
All experiments in this section are conducted on the PASCAL VOC 07+12 dataset~\cite{everingham2015the}: training is done on the union of the 2007 and 2012 trainval sets and testing on the 2007 test set. In addition to the standard [email protected] (\emph{i.e}\bmvaOneDot PASCAL VOC style) metric, results are also reported with the [email protected] and mAP@[0.5:0.05:0.95] (\emph{i.e}\bmvaOneDot MS COCO style) metrics to thoroughly evaluate the effects of proposed improvements.
\vspace{-1em}
\paragraph{Comparison with R-FCN.}
\begin{table}[b]
\begin{center}
\begin{tabular}{@{}lcc|ccc@{}}
\toprule
Model & Deformations & \begin{tabular}{@{}c@{}}Localization\\refinement\end{tabular} & \begin{tabular}{@{}c@{}}mAP@\\0.5\end{tabular} & \begin{tabular}{@{}c@{}}mAP@\\0.75\end{tabular} & \begin{tabular}{@{}c@{}}mAP@\\{[}0.5:0.95]\end{tabular} \\
\midrule
R-FCN & & & 73.7 & 38.3 & 39.8 \\
& \checkmark & & 75.8 & 38.8 & 40.4 \\
DP-FCN & \checkmark & \checkmark & \textbf{76.1} & \textbf{40.9} & \textbf{41.3} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Ablation study of DP-FCN} on PASCAL VOC 2007 test in average precision (\%). Without deformable part-based RoI pooling nor localization refinement module, it is equivalent to R-FCN (the reported results are those of our implementation with the given setup).}
\label{tab:rfcn}
\end{table}
\begin{figure}[t]
\centering
\null
\hfill
\includegraphics[width=.379\textwidth]{images/pr_rfcn.pdf}
\hfill
\includegraphics[width=.379\textwidth]{images/pr_dpfcn.pdf}
\hfill
\null
\caption{\textbf{Precision-recall curves for R-FCN (left) and DP-FCN (right).} Detailed analysis of false positives on unseen VOC07 test images averaged over all categories.}
\label{fig:pr}
\end{figure}
Performance of our implementation of R-FCN~\cite{dai2016r} with the given setup is shown in the first row of Tab.~\ref{tab:rfcn}. Adding the deformable part-based RoI pooling to R-FCN (second row of Tab.~\ref{tab:rfcn}) improves [email protected] by 2.1 points.
Indeed, this metric is rather permissive so the localization does not need to be very accurate: we see that the gain on [email protected] is much smaller. The improvements are therefore mainly due to a better recognition, thus validating the role of deformable parts.
With the localization refinement module (third row of Tab.~\ref{tab:rfcn}), [email protected] improves only slightly, because localization accuracy is not an issue at this threshold. However, the module further improves [email protected] by 2.1 points (\emph{i.e}\bmvaOneDot 2.6 points with respect to R-FCN), validating the need for such a module. This confirms that aligning parts brings geometric information useful for localization.
\begin{figure}[H]
\centering
\includegraphics[height=3.3cm]{images/image/class01_im003574.jpg}
\hfill
\includegraphics[height=3.3cm]{images/image/class15_im001437.jpg}
\hfill
\includegraphics[height=3.3cm]{images/image/class03_im000115.jpg}\\
\medskip
\includegraphics[height=3.2cm]{images/image/class08_im001128.jpg}
\hfill
\includegraphics[height=3.2cm]{images/image/class04_im000558.jpg}
\hfill
\includegraphics[height=3.2cm]{images/image/class17_im004827.jpg}
\caption{\textbf{Example detections of R-FCN (red) and DP-FCN (green).} DP-FCN tightly fits objects (first row) and separates close instances (second row) better than R-FCN.}
\label{fig:examples}
\end{figure}
Detailed breakdowns of false positives are provided in Fig.~\ref{fig:pr} for R-FCN and DP-FCN.\footnote{See \url{http://mscoco.org/dataset/\#detections-eval} for full details of metrics.}
We see that the biggest gain comes from reduced localization errors (C75 and C50 metrics), and the corresponding curves are higher for DP-FCN. Ignoring those errors, recognition accuracy is consistently around 1 point better (Loc and Oth metrics). However, both models produce roughly the same number of background false positives (BG metric).
Examples of detection outputs are illustrated in Fig.~\ref{fig:examples} to visually evaluate proposed improvements. It appears that R-FCN can more easily miss extremal parts of objects (see first row, \emph{e.g}\bmvaOneDot the right wing of the plane) and that DP-FCN is better at separating close instances (see second row, \emph{e.g}\bmvaOneDot the two sheep one behind the other), thanks to deformable parts.
\vspace{-1em}
\paragraph{Interpretation of parts.}
As in the original DPM~\cite{felzenszwalb2010object}, the semantics of parts is not explicit in our model. Part positions are instead automatically learned to optimize detection performance, in a weakly supervised manner. The interpretation in terms of semantic parts is therefore not systematic, especially because our division of regions into parts is finer than in DPM, leading to smaller part areas. Some deformed parts are displayed in Fig.~\ref{fig:parts} with a $3\times3$ part division for easier visualization. It is noticeable that DP-FCN fits objects better with deformable parts than with simple bounding boxes.
\begin{figure}[t]
\centering
\includegraphics[height=4cm]{images/image/img0016_rp001_model3.jpg}
\hfill
\includegraphics[height=4cm]{images/image/img0337_rp001_model3.jpg}
\hfill
\includegraphics[height=4cm]{images/image/img0183_rp003_model3.jpg}
\caption{\textbf{Examples of deformations of parts.} Initial region proposals are shown in yellow and deformed parts in red. Only $3\times3$ parts are displayed for clarity.}
\label{fig:parts}
\end{figure}
\vspace{-1em}
\paragraph{Network architecture.}
We compare DP-FCN with several FCN backbone architectures in Tab.~\ref{tab:net}, in particular the 50- and 101-layer versions of ResNet~\cite{he2016deep}, Wide ResNet~\cite{zagoruyko2016wide} and ResNeXt~\cite{xie2017aggregated}.
We see that the detection mAP of DP-FCN can be significantly increased by using better networks.
ResNeXt-101 (64x4d) gives the best results among the tested ones, with large improvements in all metrics, despite not using dilated convolutions.
\begin{table}[h]
\begin{center}
\begin{tabular}{@{}l|ccc@{}}
\toprule
FCN architecture for DP-FCN & [email protected] & [email protected] & mAP@[0.5:0.95] \\
\midrule
ResNet-50~\cite{he2016deep} & 76.1 & 40.9 & 41.3 \\
ResNeXt-50 (32x4d)~\cite{xie2017aggregated}$^\star$ & 76.3 & 40.8 & 41.4 \\
Wide ResNet-50-2~\cite{zagoruyko2016wide} & 77.9 & 43.3 & 42.9 \\
ResNet-101~\cite{he2016deep} & 78.1 & 44.2 & 43.6 \\
ResNeXt-101 (32x4d)~\cite{xie2017aggregated}$^\star$ & 78.6 & 45.2 & 44.4 \\
ResNeXt-101 (64x4d)~\cite{xie2017aggregated}$^\star$ & \textbf{79.5} & \textbf{47.8} & \textbf{45.7} \\
\bottomrule
\end{tabular}
\end{center}
\caption{\textbf{Comparison of DP-FCN with different FCN architectures} on PASCAL VOC 2007 test in average precision (\%). Entries marked with $^\star$ do not use dilated convolutions.}
\label{tab:net}
\end{table}
\vspace{-1em}
\subsection{PASCAL VOC results}
\label{sec:xp}
\paragraph{Experimental setup.}
We bring the following improvements to the setup of Section~\ref{sec:ablation}, the details of which are in Appendix~\ref{sec:xpsetup_voc}: we use ResNeXt-101 (64x4d)~\cite{xie2017aggregated} and increase the number of iterations.
We include common tricks: color data augmentations~\cite{krizhevsky2012imagenet}, bounding box voting~\cite{gidaris2015object}, and averaging of detections between original and flipped images~\cite{bell2016inside, zagoruyko2016a}. We set the relative weight of the multi-task (classification/localization) loss~\cite{girshick2015fast} to 7 and enlarge input boxes by a factor 1.3 to include some context.
\vspace{-1em}
\paragraph{PASCAL VOC 2007 and 2012.}
Results of DP-FCN along with those of recent methods are reported in Tab.~\ref{tab:res_voc07} for VOC 2007 and in Tab.~\ref{tab:res_voc12} for VOC 2012. For fair comparisons we only report results of methods trained on VOC07+12 and VOC07++12 respectively, but using additional data, \emph{e.g}\bmvaOneDot COCO images, usually improves results~\cite{he2016deep, dai2016r}. DP-FCN achieves 83.1\% and 80.9\% on these two datasets, yielding large gaps with all competing methods. In particular, DP-FCN outperforms R-FCN~\cite{dai2016r}, the work closest to ours, by significant margins (2.6\% and 3.3\% respectively). We note that these results could be further improved with additional common enhancements, \emph{e.g}\bmvaOneDot multi-scale training and testing~\cite{he2015spatial} or OHEM~\cite{shrivastava2016training}.
\begin{table}[t]
\begin{center}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{@{}l|c|cccccccccccccccccccc@{}}
\toprule
Method & mAP & {\footnotesize aero} & {\footnotesize bike} & {\footnotesize bird} & {\footnotesize boat} & {\footnotesize bottle} & {\footnotesize bus} & {\footnotesize car} & {\footnotesize cat} & {\footnotesize chair} & {\footnotesize cow} & {\footnotesize table} & {\footnotesize dog} & {\footnotesize horse} & {\footnotesize mbike} & {\footnotesize person} & {\footnotesize plant} & {\footnotesize sheep} & {\footnotesize sofa} & {\footnotesize train} & {\footnotesize tv} \\
\midrule
FRCN~\cite{girshick2015fast} & 70.0 & 77.0 & 78.1 & 69.3 & 59.4 & 38.3 & 81.6 & 78.6 & 86.7 & 42.8 & 78.8 & 68.9 & 84.7 & 82.0 & 76.6 & 69.9 & 31.8 & 70.1 & 74.8 & 80.4 & 70.4 \\
HyperNet~\cite{kong2016hypernet} & 76.3 & 77.4 & 83.3 & 75.0 & 69.1 & 62.4 & 83.1 & 87.4 & 87.4 & 57.1 & 79.8 & 71.4 & 85.1 & 85.1 & 80.0 & 79.1 & 51.2 & 79.1 & 75.7 & 80.9 & 76.5 \\
Faster R-CNN~\cite{ren2015faster} & 76.4 & 79.8 & 80.7 & 76.2 & 68.3 & 55.9 & 85.1 & 85.3 & 89.8 & 56.7 & 87.8 & 69.4 & 88.3 & 88.9 & 80.9 & 78.4 & 41.7 & 78.6 & 79.8 & 85.3 & 72.0 \\
SSD~\cite{liu2016ssd} & 76.8 & 82.4 & 84.7 & 78.4 & 73.8 & 53.2 & 86.2 & 87.5 & 86.0 & 57.8 & 83.1 & 70.2 & 84.9 & 85.2 & 83.9 & 79.7 & 50.3 & 77.9 & 73.9 & 82.5 & 75.3 \\
MR-CNN~\cite{gidaris2015object} & 78.2 & 80.3 & 84.1 & 78.5 & 70.8 & 68.5 & 88.0 & 85.9 & 87.8 & 60.3 & 85.2 & 73.7 & 87.2 & 86.5 & 85.0 & 76.4 & 48.5 & 76.3 & 75.5 & 85.0 & 81.0 \\
LocNet~\cite{gidaris2016locnet} & 78.4 & 80.4 & 85.5 & 77.6 & 72.9 & 62.2 & 86.8 & 87.5 & 88.6 & 61.3 & 86.0 & 73.9 & 86.1 & 87.0 & 82.6 & 79.1 & 51.7 & 79.4 & 75.2 & 86.6 & 77.7 \\
FRCN OHEM~\cite{shrivastava2016training} & 78.9 & 80.6 & 85.7 & 79.8 & 69.9 & 60.8 & 88.3 & 87.9 & 89.6 & 59.7 & 85.1 & \textbf{76.5} & 87.1 & 87.3 & 82.4 & 78.8 & 53.7 & 80.5 & 78.7 & 84.5 & 80.7 \\
ION~\cite{bell2016inside} & 79.4 & 82.5 & 86.2 & 79.9 & 71.3 & 67.2 & 88.6 & 87.5 & 88.7 & 60.8 & 84.7 & 72.3 & 87.6 & 87.7 & 83.6 & 82.1 & 53.8 & 81.9 & 74.9 & 85.8 & 81.2 \\
R-FCN~\cite{dai2016r} & 80.5 & 79.9 & 87.2 & 81.5 & 72.0 & 69.8 & 86.8 & 88.5 & 89.8 & \textbf{67.0} & \textbf{88.1} & 74.5 & 89.8 & 90.6 & 79.9 & 81.2 & 53.7 & 81.8 & \textbf{81.5} & 85.9 & 79.9 \\
DP-FCN~[ours] & \textbf{83.1} & \textbf{89.8} & \textbf{88.6} & \textbf{85.2} & \textbf{73.9} & \textbf{74.7} & \textbf{92.1} & \textbf{90.4} & \textbf{94.4} & 58.3 & 84.9 & 75.2 & \textbf{93.4} & \textbf{93.1} & \textbf{87.4} & \textbf{85.9} & \textbf{53.9} & \textbf{85.3} & 80.0 & \textbf{90.4} & \textbf{85.9} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\caption{\textbf{Detailed detection results on PASCAL VOC 2007 test} in average precision (\%). For fair comparisons, the table only includes methods trained on PASCAL VOC 07+12.}
\label{tab:res_voc07}
\end{table}
\begin{table}[t]
\begin{center}
\begin{adjustbox}{max width=\textwidth}
\begin{tabular}{@{}l|c|cccccccccccccccccccc@{}}
\toprule
Method & mAP & {\footnotesize aero} & {\footnotesize bike} & {\footnotesize bird} & {\footnotesize boat} & {\footnotesize bottle} & {\footnotesize bus} & {\footnotesize car} & {\footnotesize cat} & {\footnotesize chair} & {\footnotesize cow} & {\footnotesize table} & {\footnotesize dog} & {\footnotesize horse} & {\footnotesize mbike} & {\footnotesize person} & {\footnotesize plant} & {\footnotesize sheep} & {\footnotesize sofa} & {\footnotesize train} & {\footnotesize tv} \\
\midrule
FRCN~\cite{girshick2015fast} & 68.4 & 82.3 & 78.4 & 70.8 & 52.3 & 38.7 & 77.8 & 71.6 & 89.3 & 44.2 & 73.0 & 55.0 & 87.5 & 80.5 & 80.8 & 72.0 & 35.1 & 68.3 & 65.7 & 80.4 & 64.2 \\
HyperNet~\cite{kong2016hypernet} & 71.4 & 84.2 & 78.5 & 73.6 & 55.6 & 53.7 & 78.7 & 79.8 & 87.7 & 49.6 & 74.9 & 52.1 & 86.0 & 81.7 & 83.3 & 81.8 & 48.6 & 73.5 & 59.4 & 79.9 & 65.7 \\
Faster R-CNN~\cite{ren2015faster} & 73.8 & 86.5 & 81.6 & 77.2 & 58.0 & 51.0 & 78.6 & 76.6 & 93.2 & 48.6 & 80.4 & 59.0 & 92.1 & 85.3 & 84.8 & 80.7 & 48.1 & 77.3 & 66.5 & 84.7 & 65.6 \\
SSD~\cite{liu2016ssd} & 74.9 & 87.4 & 82.3 & 75.8 & 59.0 & 52.6 & 81.7 & 81.5 & 90.0 & 55.4 & 79.0 & 59.8 & 88.4 & 84.3 & 84.7 & 83.3 & 50.2 & 78.0 & 66.3 & 86.3 & 72.0 \\
FRCN OHEM~\cite{shrivastava2016training} & 76.3 & 86.3 & \textbf{85.0} & 77.0 & 60.9 & 59.3 & 81.9 & 81.1 & 91.9 & 55.8 & 80.6 & \textbf{63.0} & 90.8 & 85.1 & 85.3 & 80.7 & 54.9 & 78.3 & 70.8 & 82.8 & 74.9 \\
ION~\cite{bell2016inside} & 76.4 & 88.0 & 84.6 & 77.7 & 63.7 & 63.6 & 80.8 & 80.8 & 90.9 & 55.5 & 81.9 & 60.9 & 89.1 & 84.9 & 84.2 & 83.9 & 53.2 & 79.8 & 67.4 & 84.4 & 72.9 \\
R-FCN~\cite{dai2016r} & 77.6 & 86.9 & 83.4 & 81.5 & 63.8 & 62.4 & 81.6 & 81.1 & 93.1 & 58.0 & 83.8 & 60.8 & \textbf{92.7} & 86.0 & 84.6 & 84.4 & 59.0 & 80.8 & 68.6 & 86.1 & 72.9 \\
DP-FCN~[ours]\footnotemark & \textbf{80.9} & \textbf{89.3} & 84.2 & \textbf{85.4} & \textbf{74.4} & \textbf{70.0} & \textbf{84.0} & \textbf{86.2} & \textbf{93.9} & \textbf{62.9} & \textbf{85.1} & 62.7 & \textbf{92.7} & \textbf{87.4} & \textbf{86.0} & \textbf{86.8} & \textbf{61.3} & \textbf{85.1} & \textbf{74.8} & \textbf{88.2} & \textbf{78.5} \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{center}
\caption{\textbf{Detailed detection results on PASCAL VOC 2012 test} in average precision (\%). For fair comparisons, the table only includes methods trained on PASCAL VOC 07++12.}
\label{tab:res_voc12}
\end{table}
\section{Conclusion}
In this paper, we propose DP-FCN, a new deep model for object detection. While traditional region-based detectors extract features from generic bounding boxes, DP-FCN is more flexible and focuses on discriminative elements of objects, which it aligns. It learns a part-based representation of objects in an efficient way, with a natural integration into FCNs and without any additional annotations during training. This improves both recognition, by building invariance to local transformations, and localization, thanks to a dedicated module explicitly leveraging the computed positions of parts to refine predictions with geometric information. Experimental validation shows significant gains on several common metrics. As future work, we will test our model on a larger-scale dataset, such as MS COCO~\cite{lin2014microsoft}.\footnotetext{\url{http://host.robots.ox.ac.uk:8080/anonymous/QNUYVS.html}}
\section{Introduction}
Clique-width is, like tree-width, an integer graph invariant that is an
appropriate parameter for the contruction of many fixed-parameter
tractable\ algorithms ([4, 6, 7, 11]). It is thus important to know that the
graphs of a particular type have bounded tree-width or clique-width. The
article [13] is a survey of clique-width bounded classes. Gurski has reviewed
in [9] how clique-width behaves under different graph operations. He asks
whether, for each $k$, the class of graphs of clique-width at most $k$ is
stable under edge contractions.\ This is true for $k=2$, \emph{i.e.}, for
cographs, and we prove that this is false for $k=3$.\ (For each $k$, this
stability property is true for graphs of tree-width at most $k$.\ It is thus
natural to ask the question for clique-width.)
Gurski proves that contracting one edge can at most double the
clique-width.\ The conjecture is made in [14] (Conjecture 8) that contracting
several edges in a graph of clique-width $k$ yields a graph of clique-width at
most $f(k)$ for some fixed function $f$. We disprove this conjecture and
answer Gurski's question by proving the following proposition.
\bigskip
\textbf{Proposition 1} : The graphs obtained by edge contractions from graphs
of clique-width 3 or of linear clique-width 4, have unbounded clique-width.
\bigskip
The graphs of clique-width at most 2 (they are the cographs) are preserved
under edge contractions. The validity of the conjecture of [14] would have
implied that the \emph{restricted vertex multicut problem} is fixed-parameter
tractable\ if the parameter is the clique-width of a certain graph describing
the input in a natural way.\ This problem consists in finding a set of
vertices of given size that meets every path between the two vertices of each
pair of a given set and does not contain any vertex of these pairs. Without
Conjecture 8, this problem is fixed-parameter tractable under the additional
condition that no two vertices from different pairs are adjacent.\
\bigskip
For the sake of comparison, we also consider contractions of edges, one end of
which has degree 2. We say in this case that we \emph{erase a vertex}: we
erase $x$ if it has exactly two neighbours; to do so, we add an edge between
them (unless they are already adjacent, since we only consider graphs without
parallel edges) and we delete $x$ and its two incident edges. The graphs obtained from
a graph by erasing and deleting vertices are its \emph{induced topological
minors}.
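Erasing a vertex is easy to express directly on the edge set of a simple graph; the following Python sketch (given for illustration only, with edges represented as 2-element frozensets) implements it:

```python
def erase(edges, x):
    """Erase a degree-2 vertex x: join its two neighbours (fusing any
    parallel edge, since graphs are simple) and delete x together with
    its two incident edges."""
    nbrs = {v for e in edges if x in e for v in e if v != x}
    assert len(nbrs) == 2, "x must have exactly two neighbours"
    rest = {e for e in edges if x not in e}
    rest.add(frozenset(nbrs))  # no effect if the edge already exists
    return rest
```

On the path $a-x-b$ this yields the single edge $a-b$; if $a$ and $b$ were already adjacent, erasing $x$ amounts to deleting it.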
\bigskip
\textbf{Proposition 2:}\ The induced topological minors of the graphs of
clique-width $k$ have clique-width at most $2^{k+1}-1$.
\bigskip
\section{Some basic facts}
\bigskip
Graphs are finite, undirected, loop-free and without parallel edges. To keep
this note as short as possible, we refer the reader to any of [3, 11, 13, 15,
16] for the definitions of \emph{clique-width }and\emph{ rank-width}.\ Other
references for clique-width are [1, 5, 8, 14]. A variant of clique-width
called \emph{linear clique-width} is studied in [3, 10]. We denote by $cwd(G)$
and $rwd(G)$ the clique-width and, respectively, the rank-width of a graph
$G$.\ We recall from [16] that we have $rwd(G)\leq cwd(G)\leq2^{rwd(G)+1}-1$.
Proving that $cwd(G)>k$ for given $G$ and $k$ is rather difficult in most
cases.\ (See for instance the computation of the exact clique-width of a
square grid in [8].\ The computation of its rank-width in [12] is not
easier.)\ We overcome this difficulty by using \emph{monadic second-order
transductions} : they are graph transformations specified by formulas of
monadic second-order logic. The (technical) definition is in [2,3].\ We will
only need the fact that the graphs defined by a monadic second-order
transduction $\tau$\ from graphs of clique-width at most $k$ have clique-width
at most $f_{\tau}(k)$ for some computable function $f_{\tau}$ that can be
determined from the formulas forming the definition of $\tau$ (Corollary
7.38(2), [3]). However, we also give an alternative proof based on rank-width
and vertex-minors that does not use monadic second-order transductions.
\bigskip
A \emph{vertex-minor} of a graph is obtained by deleting vertices\ (and the
incident edges) and performing \emph{local complementations}. (Local
complementation exchanges edges and non-edges between the neighbours of a
vertex.)\ These transformations do not increase rank-width [15].\ Erasing\ a
vertex $x$ yields a vertex-minor of the considered graph: let $y$ and $z$ be
its neighbours; if they are adjacent, erasing $x$ is the same as deleting it
because we fuse parallel edges; if they are not, erasing $x$ is the same as
performing a local complementation at $x$ (which creates an edge between $y$
and $z$), and then deleting $x$.\ Hence, by transitivity, every induced
topological minor is a vertex-minor.
\section{Proofs}
\bigskip
\bigskip\textbf{Definitions and notation }
(a) We denote by $H/F$\ the graph obtained from a graph $H$ by contracting the
edges of a set $F$. (Parallel edges are fused, no loops are created.) If
$\mathcal{H}$ is a set of graphs, we denote by $EC(\mathcal{H})$ the set of
graphs $H/F$ such that $H\in\mathcal{H}$ and $F$ is a set of edges of $H$.\
(b) We denote by $\mathcal{R}$ the set of graphs having a \emph{proper edge
coloring} with colors in $\{1,...,4\}$: every two adjacent edges have
different colors.\ These graphs have unbounded tree-width and clique-width as
they include the square grids (the $n\times n$ grid has clique-width $n+1$ if
$n\geq2$ by [8]).
(c) For $n\geq2,$ we define a graph $G_{n}$.\ Its vertices are
$x_{1},...,x_{n},y_{1},...,y_{n}$ and its edges are $x_{i}-y_{i},y_{i}-y_{j}$ for
all $i,j\neq i.$ (The notation $x-y$ designates an edge between $x$ and $y$).
We let $D$ consist of 4 vertices and no edge, and we let $H_{n}$ be obtained
from $G_{n}$ by substituting a disjoint copy of $D$ for each vertex $y_{i}$.\ Hence,
and more precisely, $H_{n}$ has the $5n$ vertices
$x_{1},...,x_{n},y_{1}^{1},y_{1}^{2},y_{1}^{3},y_{1}^{4},y_{2}^{1},...,y_{n}^{4}$
and the $8n^{2}-4n$ edges $x_{i}-y_{i}^{c},y_{i}^{c}-y_{j}^{d}$ for all
$i,j\neq i$ and $c,d=1,...,4$. We denote by $\mathcal{H}$ the set of graphs
$H_{n}$. It is easy to construct expressions of $G_{n}$ and $H_{n}$ showing
that they have clique-width at most 3 and linear clique-width at most 4. If
$n\geq3,$ they have clique-width 3 because they contain, as an induced
subgraph, the path with 4 vertices, so that they do not have clique-width 2,
and linear clique-width 4 because they contain, as an induced subgraph, the
graph $G_{3}$ that is not a cocomparability graph, so that they do not have
linear clique-width at most 3 by Proposition 14\ of [10].
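As a sanity check on the counts above, the following Python fragment (an illustration, not part of the proof) builds the vertex and edge sets of $H_{n}$:

```python
import itertools

def build_H(n):
    """Vertices and edges of H_n: x_1..x_n plus four copies y_i^1..y_i^4
    of each y_i; edges x_i - y_i^c and y_i^c - y_j^d for i != j."""
    V = [("x", i) for i in range(1, n + 1)]
    V += [("y", i, c) for i in range(1, n + 1) for c in range(1, 5)]
    E = {frozenset({("x", i), ("y", i, c)})
         for i in range(1, n + 1) for c in range(1, 5)}
    E |= {frozenset({("y", i, c), ("y", j, d)})
          for i, j in itertools.combinations(range(1, n + 1), 2)
          for c in range(1, 5) for d in range(1, 5)}
    return V, E
```

For $n=3$ this gives the expected $5n=15$ vertices and $8n^{2}-4n=60$ edges.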
(d) We define a monadic second-order transduction $\alpha$ with one parameter
$X$.\ If $G$ is a graph and $X$ is a set of vertices, then the graph
$\alpha(G,X)$ is defined if $X$ is stable (no two vertices are adjacent); its
vertex set is $X$ and it has an edge between $x$ and $y$ if and only if these
vertices are at distance 2 in $G$.\ We denote by $\alpha(G)$ the set of all
such graphs, and by $\alpha(\mathcal{G)}$ the union of the sets $\alpha(G)$
for $G$ in a set $\mathcal{G}$.
\bigskip
\textbf{Lemma 3 }: We have $\alpha(EC(\mathcal{H}))\supseteq\mathcal{R}.$\
\bigskip
\textbf{Proof}: Let $R$ be a graph in $\mathcal{R}$ with vertices
$x_{1},...,x_{n}$ and a proper edge coloring with colors 1 to 4.\ The set
$X=\{x_{1},...,x_{n}\}$ is also a subset of the vertex set of $H_{n}$. The
four neighbours of $x_{i}$ in $H_{n}$ are $y_{i}^{1},y_{i}^{2},y_{i}^{3}$
\ and $y_{i}^{4}.$
Let $F$ be the set of edges of the form $y_{i}^{c}-y_{j}^{c}$ such that
$x_{i}-x_{j}$ is an edge of $R$ colored by $c$, $c\in\{1,...,4\}$
(hence,\ $x_{i}$ and $x_{j}$ are at distance 3 in $H_{n}$). The graph
$K=H_{n}/F$ belongs to $EC(\mathcal{H})$ and $X$ is stable in this graph (the
vertices $x_{1},...,x_{n}$ are not affected by the contractions of edges). It
is clear that $x_{i}-x_{j}$ is an edge of $R$ if and only if there is in $K$ a
path\ $x_{i}-z-x_{j}$ where $z$ results from the contraction of the edge
$y_{i}^{c}-y_{j}^{c}$ and $c$ is the color of $x_{i}-x_{j}$ in $R$.\ It
follows that $R=\alpha(K,X)$. $\square$
\bigskip
\textbf{Proof of Proposition 1: }By Lemma 3, the set $\alpha(EC(\mathcal{H}))$
has unbounded clique-width.\ Hence, so has $EC(\mathcal{H})$ by Corollary
7.38(2) of [3] recalled above. This concludes the proof because the graphs
$H_{n}$ have clique-width 3 and linear clique-width 4 for $n\geq3$. $\square$
\bigskip
\emph{NLC-width} and clique-width are linearly related (see [9]). Hence, the
graphs obtained by edge contractions from graphs of NLC-width at most 3 have
unbounded NLC-width.\
\textbf{Remark: }For each $n,$ the graph $H_{m^{2}}$ of clique-width 3 having
$5m^{2}$\ vertices where $m=f_{\alpha}(n)$ yields by edge contractions a graph
of clique-width at least $n+1$.\ Here, $f_{\alpha}$ is the computable function
of Section 2 that can be assumed monotone.\ To prove this, we let $F$ be a set
of edges such that $\alpha(H_{m^{2}}/F)$ contains the $m\times m$ grid $R_{m}$
and $k=cwd(H_{m^{2}}/F)$. Then, $m+1=cwd(R_{m})\leq f_{\alpha}(k)$, hence
$f_{\alpha}(n)+1\leq f_{\alpha}(k),$\ and so, $k>n$.\ The function $f_{\alpha
}$ is very fast growing.\ A much better upper-bound will be obtained from the
alternative proof we give next.
\bigskip
Edge contractions can increase rank-width because the same sets of graphs have
bounded rank-width and bounded clique-width [16]. The following proof shows
this directly.
\bigskip
\textbf{Alternative proof of Proposition 1}\ : The construction is similar and
we use the same notation. We construct $H_{n}^{\prime}$ from $G_{n}$ by
substituting disjoint copies of $K_{4}$ to each vertex $y_{i}$ and by adding a
vertex $y_{0}$ adjacent to all vertices $y_{1}^{1},y_{1}^{2},y_{1}^{3
,y_{1}^{4},y_{2}^{1},...,y_{n}^{4}$. Hence, $H_{n}^{\prime}$ has $5n+1$
vertices.We denote by $\mathcal{H}^{\prime}$ the set of graphs $H_{n}^{\prime
}$. They have clique-width 3 and linear clique-width 4 (by the same argument
as for $H_{n}$).
Let us fix $n$ and let $R$ be the $n\times n$ grid with vertices
$x_{1},...,x_{n^{2}}$. To prove that it is a vertex-minor of
$H_{n^{2}}^{\prime}$, we take a proper edge-coloring of $R$ with colors
$1,...,4$ and we contract the edges $y_{i}^{c}-y_{j}^{c}$ of
$H_{n^{2}}^{\prime}$ such that $x_{i}-x_{j}$ is an edge of $R$ colored by
$c$.\ This gives a graph $R^{\prime}$ that has $R$ as vertex-minor.\ To prove
this, we delete the vertices $y_{i}^{c}$ such that $x_{i}$ has no incident
edge colored by $c$, we take a local complementation at $y_{0}$, we delete
$y_{0}$ and finally, we erase the vertices resulting from the contraction of
the edges $y_{i}^{c}-y_{j}^{c}$ after taking local complementations at them.
The rank-width of $R$ is $n-1$ by [12], that of $R^{\prime}$ is thus at least
$n-1,$ and so is its clique-width. Hence, by contracting edges in a graph of
clique-width 3 (and linear clique-width 4) that has $5n^{2}+1$ vertices, one
can get a graph of clique-width at least $n-1$.$\square$
\bigskip
\textbf{Remark:} An algorithm can determine a graph of clique-width 3 that
yields a graph of clique-width more than 3 by the contraction of a single
edge. It performs an exhaustive search until some graph is obtained: for each
$n=2,3,\ldots$ it considers the finitely many sets $F$ of\ pairwise nonadjacent
edges of $H_{n}$.\ By using the polynomial-time algorithm of [1] to check if a
graph has clique-width at most 3, it can look for a set\ $F$ and an edge $f\in
F$ such that $H_{n}/(F-\{f\})$ has clique-width 3 and $H_{n}/F$ has
clique-width more than 3 (actually 4, 5 or 6 by Theorem 4.8 of [9]). By
Proposition 1, one must find some $n$ and such $F$ and $f$. However, we have
not implemented this algorithm.
\bigskip
Gurski has proved that erasing a\ vertex of degree 2 can increase (or
decrease) the clique-width by at most 2.\ In Proposition 2, we consider the
effect of erasing several vertices and taking induced subgraphs.\
\bigskip
\textbf{Proof of Proposition 2}\ : As noted above, an induced topological
minor is a vertex-minor. The result follows since, for every graph $G$, we
have $rwd(G)\leq cwd(G)\leq2^{rwd(G)+1}-1$.$\square$
\bigskip
This proof leaves open the question of improving the upper bound\ $2^{k+1}-1$,
possibly to a polynomial in $k$ or even to $k$.\
\bigskip
\textbf{Acknowledgement:} I thank the anonymous referee who suggested the
alternative proof of Proposition 1, and M.\ Kant\'{e} and D.\ Meister for
useful comments.
\bigskip
{\large References}
\bigskip
[1] D.\ Corneil, M. Habib, J.-M. Lanlignel, B. Reed and U. Rotics:
Polynomial-time recognition of clique-width at most 3 graphs. \emph{Discrete
Applied Mathematics} \textbf{160} (2012) 834-865.
\bigskip
[2] B.\ Courcelle, Monadic Second-Order Definable Graph Transductions: A
Survey. \emph{Theor. Comput. Sci.} \textbf{126} (1994) 53-75.
\bigskip
[3] \ B. Courcelle and J. Engelfriet, \emph{Graph structure and monadic
second-order logic, a language theoretic approach}, Cambridge University
Press, 2012.
\bigskip
[4] B. Courcelle, J. Makowsky and U. Rotics, Linear Time Solvable Optimization
Problems on Graphs of Bounded Clique-Width. \emph{Theory Comput. Syst.}
\textbf{33} (2000) 125-150.
\bigskip
[5] B. Courcelle and S. Olariu, Upper bounds to the clique width of graphs.
\emph{Discrete Applied Mathematics }\textbf{101 }(2000) 77-114.
\bigskip
[6] R.\ Downey and M.\ Fellows, \emph{Parameterized Complexity}, Springer, 1999.
\bigskip
[7] J.\ Flum and M. Grohe, \ \emph{Parameterized Complexity Theory}, Springer, 2006.
\bigskip
[8] \ M.\ Golumbic and U.\ Rotics: On the Clique-Width of Some Perfect Graph
Classes. \emph{Int. J. Found. Comput. Sci.} \textbf{11} (2000) 423-443.
\bigskip
[9] F.\ Gurski, Graph operations on clique-width bounded graphs, 2007, CoRR abs/cs/0701185.
\bigskip
[10] P. Heggernes, D. Meister and C. Papadopoulos, Graphs of linear
clique-width at most 3, \emph{Theoretical Computer Science} \textbf{412}
(2011) 5466-5486.
\bigskip
[11] P.\ Hlin\v{e}n\'y, S. Oum, D.\ Seese and G. Gottlob: Width Parameters
Beyond Tree-width and their Applications. \emph{Comput. J}. \textbf{51 }(2008) 326-362.
\bigskip
[12] V. Jel\'{\i}nek, The rank-width of the square grid, \emph{Discrete
Applied Mathematics} \textbf{158} (2010) 841-850.
\bigskip
[13] \ M.\ Kaminski, V. Lozin and M. Milanic: Recent developments on graphs of
bounded clique-width. \emph{Discrete Applied Mathematics} \textbf{157} (2009) 2747-2761.
\bigskip
[14] M.\ Lackner, R. Pichler, S. R\"{u}mmele and S. Woltran: Multicut on
Graphs of Bounded Clique-Width, in "Proceedings of COCOA'12",
\emph{Lec.\ Notes Comp.\ Sci.}, \textbf{7402} (2012) 115-126.
\bigskip
[15] S.\ Oum: Rank-width and vertex-minors, \emph{J. Comb. Theory, Ser. B
}\textbf{95} (2005) 79-100.
\bigskip
[16] S. Oum and P. Seymour: Approximating clique-width and branch-width.
\emph{J. Comb. Theory, Ser. B} \textbf{96} (2006) 514-528.
\bigskip
\end{document}
\section{Introduction}
In statistical mechanics, the term {\em phase transition} originally refers to the emergence of multiple Gibbs states for the Hamiltonian associated with a collective system \cite{R69}, most notably accompanied by symmetry breaking, as in the Ising model. While this notion has been formulated for equilibrium measures, from the dynamical viewpoint, it conveys some breakdown of ergodicity in a related dynamical process \cite{vEvH84}. Using out-of-equilibrium procedures, such ergodicity failures have been rigorously proved for suitably designed Markov processes and probabilistic cellular automata (PCA) \cite{H00,L05}. More precisely, Metropolis rules, Glauber dynamics and the like have been designed to account for relaxation to pre-constructed equilibrium states, including the case when several such states coexist \cite{M81}.
Notwithstanding the success of this dynamical approach to phase transitions, the fact that classical mechanics is ruled by deterministic laws of motion calls for evidences in purely deterministic systems, irrespective of any consideration of random processes. Barring the use of out-of-equilibrium procedures, can chaotic attractors in deterministic analogues of random models of interacting particles systems exhibit ergodicity breaking (associated with symmetry breaking)? Despite having generated considerable attention, this question still eludes a satisfactory response that would exclude numerical flaws or unverifiable theoretical assumptions, even in basic examples such as networks of coupled expanding or hyperbolic maps.
Indeed, some examples of infinite lattices of interacting chaotic maps have been designed so that their dual dynamics acting on measures consist of PCA exhibiting phase transitions \cite{GM00,M05}. Thus, out-of-equilibrium approaches can in principle be lifted to deterministic dynamical systems. However, this operation requires explicit knowledge of a coupling-intensity-independent Mar\-kov partition, in particular one that ensures a pre-selected Markov chain/PCA for the dual measure evolution. Yet, such a complete understanding of Markovian properties is rare for realistic deterministic systems. This is especially the case of chaotic collective systems with homogenizing interactions \cite{B05}, which typically fail to fit the standard assumptions of the theory of dynamical systems, such as being diffeomorphisms. In general, very little is known about the symbolic dynamics, and, if at all, only for weak interaction intensity \cite{JP98,CF13}. This shortcoming calls for verifications of non-ergodicity in a purely deterministic setting, that are independent of any knowledge of the associated symbolic dynamics.
In this setting, various studies have reported changes in the global dynamics of coupled chaotic maps as their interaction strength is increased. Infinite lattice examples have been provided, for which the invariant densities associated with the Perron-Frobenius operator undergo bifurcations. In particular, a transition analogous to the one in the Curie-Weiss model of statistical mechanics, which only affects the dynamics at the thermodynamic limit, has been proved for the model in \cite{BKZ09}. Moreover, convergence to a point mass for strong coupling has been established in \cite{BKST18} for a mean-field model acting on measures on the circle.
This convergence is reminiscent of phenomenological changes in finite systems. However, such alterations have always been observed to be preceded by a reduction in the Lyapunov dimension. Further, these changes have repeatedly resulted in stationary or periodic behaviors of the spatial averages of symmetry-related observables, see e.g.\ \cite{BBCFP95,CM93,J95,MH93}. This phenomenology is comparable with synchronization-like scenarios in which trajectories asymptotically shrink to lower dimensional subspaces. Hardly compatible with a coupling-independent symbolic grammar, these scenarios cast doubt on the nature of phase transitions in dynamical systems: do such transitions only take place at the thermodynamic limit? If not, do they necessarily require a lowering of the dimension of the attractor, or can they take place while hyperbolicity properties remain unchanged, and in the absence of any knowledge of a Markov partition?
To address these issues, we consider a family of piecewise affine mappings $F_{N,\epsilon}$ of the $N$-dimensional torus $\mathbb{T}^N$, which mimic interacting particle dynamics driven by chaotic individual stirring and homogenizing interactions. Interactions are of mean-field type (all-to-all coupling) with adjustable strength $\epsilon$ (Section 2).\footnote{Details of the considerations in this paragraph and the following ones will be given below.} Moreover, the mappings $F_{N,\epsilon}$ have many symmetries, most notably, they commute with flipping the sign of all coordinates. For $\epsilon=0$, the units are decoupled and evolve independently. For $\epsilon\in [0,\tfrac12)$, all mappings are expanding and must support (Lebesgue) {\bf absolutely continuous invariant measures} (ACIM). Moreover, each ACIM must be ergodic - and hence invariant under the action of any symmetry - when $\epsilon$ is small enough. For such mappings, it is therefore natural to
(mathematically) examine ergodicity persistence or its failure in this expanding regime, together with accompanying symmetry features.
Preliminary evidence, based on simulations of trajectories, is presented in Section \ref{S-EMPIRIC}. Completing previous findings in \cite{F14}, it reveals that, for each $N>2$, ergodicity is broken via symmetry breaking for $\epsilon$ sufficiently large.
For $N=3$ and 4, this phenomenology has been confirmed by analytic demonstrations \cite{F14,S18,SB16}. For convenience, the proofs actually deal with $D=(N-1)$-dimensional mappings (denoted by $G_{D,\epsilon}$ below) whose ergodicity and its failure coincide with $F_{N,\epsilon}$. While these proofs seem to have captured essential information about the dynamics, their approach, which relies on inspiration from direct observations of trajectories in phase space, is hardly applicable when $D$ is large.
In order to address large $D$ (or $N$), we propose instead to use a computer-assisted proof. To that goal, an algorithm is introduced (Section 4) that aims to rigorously construct asymmetric $G_{D,\epsilon}$-invariant sets containing ACIM. The algorithm relies on empirical input from simulations and is based on two important properties.\footnote{Likewise, see the extended section 4.2 and additional information in the appendices for details and numerical implementation.} One characteristic is that the dynamics of the polytopes suitable for the proof can be encoded into a dynamics on vectors in $\mathbb{R}^{D(D+1)}$. In other words, the dynamics of suitable sets can be captured by a limited number of real variables. The other one is that computations can be designed so that, when $\epsilon$ is itself a rational number, they involve rational numbers only.
Results of the algorithmic construction are given for $D$ up to 5 (Section 4.1). However, they clearly indicate that ergodicity breaking should be provable for arbitrary $D$. A discussion about possible improvements and alternative proofs is provided in Section 5.
\section{Expanding systems of piecewise affine globally coupled maps}
The mappings $F_{N,\epsilon}$ are defined as follows \cite{F14}
\[
(F_{N,\epsilon}(u))_i=2u_i+\frac{2\epsilon}{N}\sum_{j=1}^{N}g(u_j-u_i)\ \text{mod}\ 1,\ 1\leq i\leq N,
\]
where $g$ represents pairwise elastic interactions on the circle \cite{KY10} and is defined by $g(u)=u-h(u)$ for all $u\in\mathbb{T}^1$ with
\[
h(u)=\left\{\begin{array}{ccl}
\lfloor u+\frac12\rfloor&\text{if}&u\not\in\frac12+\mathbb{Z}\\
0&\text{if}&u\in\frac12+\mathbb{Z} .
\end{array}\right.
\]
Here, $\lfloor \cdot \rfloor$ is the floor function. Hence $g$ is piecewise affine, with slope 1 and discontinuities at all points of $\frac12+\mathbb{Z}$.
Mean field coupling implies $F_{N,\epsilon}\circ \sigma =\sigma\circ F_{N,\epsilon}$ for every $\sigma\in \Pi_N$, where $\Pi_N$ is the group of permutations of $\{1,\dots,N\}$.
The symmetry $g(-u)=-g(u)\ \text{mod}\ 1$ implies commutativity with the {\bf sign flip} $-\text{\rm Id}_{\mathbb{T}^{N}}$.
The mappings $F_{N,\epsilon}$ are non-singular. Therefore, considerations about ergodicity of their ACIM can ignore the dynamics of sets of vanishing Lesbegue measure, and in particular, the orbits of discontinuity sets.
Away from these discontinuities, $F_{N,\epsilon}$ is piecewise affine with constant derivative
\[
(DF_{N,\epsilon}v)_i=2(1-\epsilon)v_i+\frac{2\epsilon}{N}\sum_{j=1}^{N}v_j,\ 1\leq i\leq N,
\]
from which it follows that $F_{N,\epsilon}$ is expanding for $\epsilon<\frac12$. As a consequence,
its Milnor attractor \cite{Buescu97,M85} in this regime must consist of a finite union of Lebesgue ergodic components \cite{T01}, {\sl viz.}\ the attractor of almost every trajectory must be a set of positive Lebesgue measure (thereby excluding any dimension reduction).
Focusing on this expanding regime, as mentioned above, we aim to address ergodicity of the attractor, that is whether the Lebesgue ergodic components are unique or not. Up to semi-conjugacy, this question can be examined in a more convenient family of piecewise affine mappings of the $D=(N-1)$-dimensional torus. The reduced mappings $G_{D,\epsilon}$ (defined below) have equivalent features to the originals. Namely, their symmetry group is isomorphic to $\mathbb{Z}_2\times \Pi_{D+1}$ and in particular, we have $G_{D,\epsilon}\circ -\text{\rm Id}_{\mathbb{T}^{D}}=-\text{\rm Id}_{\mathbb{T}^{D}}\circ G_{D,\epsilon}$. Furthermore, their asymptotic dynamics for $\epsilon\in [0,\frac12)$ must also lie in finitely many ergodic components of positive Lebesgue measure.
More precisely, the transformation $\pi_N$ of $\mathbb{T}^N$ defined by \cite{SB16}
\[
(\pi_Nu)_i=\left\{\begin{array}{ccl}
u_i-u_{i+1}\ \text{mod}\ 1&\text{if}&1\leq i\leq N-1\\
\sum_{j=1}^Nu_j\ \text{mod}\ 1&\text{if}&i=N
\end{array}\right.
\]
(semi-)conjugates $F_{N,\epsilon}$ to the direct product $G_{N-1,\epsilon}\times F_{1,\epsilon}$ ({\sl viz.} $\pi_N\circ F_{N,\epsilon}=(G_{N-1,\epsilon}\times F_{1,\epsilon})\circ \pi_N$). The mapping $F_{1,\epsilon}(u)=2u\ \text{mod}\ 1$ (which acts on the sum coordinate $(\pi_Nu)_N$) does not depend on $\epsilon$ and is ergodic with respect to the Lebesgue measure on $\mathbb{T}^1$. Therefore, any failure of ergodicity for $F_{N,\epsilon}$ has to be concomitant with the same phenomenon for $G_{N-1,\epsilon}$.
The mapping $G_{N-1,\epsilon}$ does not involve the sum coordinate. Moreover, its (constant) derivative conveniently turns out to be a multiple of the identity in $\mathbb{T}^{N-1}$, {\sl i.e.}\ $DG_{N-1,\epsilon}=2(1-\epsilon)\text{\rm Id}_{\mathbb{T}^{N-1}}$. Explicitly, we have $G_{D,\epsilon}=2(1-\epsilon)\text{\rm Id}_{\mathbb{T}^{D}}+\tfrac{2\epsilon}{D+1}B_D\ \text{mod}\ 1$, where for $i\in\{1,\dots,D\}$
\[
(B_{D}(x))_i=2h(x_i)+\sum_{j=1}^{i-1}h(\sum_{k=j}^{i}x_k)-h(\sum_{k=j}^{i-1}x_k)+\sum_{j=i+1}^{D}h(\sum_{k=i}^{j}x_k)-h(\sum_{k=i+1}^{j}x_k).
\]
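As a consistency check of the semi-conjugacy $\pi_N\circ F_{N,\epsilon}=(G_{N-1,\epsilon}\times F_{1,\epsilon})\circ \pi_N$ and of the formula for $B_D$, the identity can be verified exactly on rational points away from the discontinuities. The following Python sketch, written for this note (it is not part of the proofs), implements $F_{N,\epsilon}$, $\pi_N$ and $G_{D,\epsilon}$ with exact rational arithmetic and checks the identity coordinate-wise:

```python
import math
from fractions import Fraction

def h(u):
    return math.floor(u + Fraction(1, 2))  # arguments below avoid 1/2 + Z

def g(u):
    return u - h(u)

def F(u, eps):
    N = len(u)
    return tuple((2 * ui + 2 * eps * sum(g(uj - ui) for uj in u) / N) % 1
                 for ui in u)

def pi(u):
    # the (semi-)conjugacy pi_N: consecutive differences, then the sum coordinate
    N = len(u)
    return tuple((u[i] - u[i + 1]) % 1 for i in range(N - 1)) + (sum(u) % 1,)

def G(x, eps):
    # G_{D,eps} = 2(1-eps) x + (2 eps / (D+1)) B_D(x)  mod 1, 0-based indices
    D = len(x)
    out = []
    for i in range(D):
        b = 2 * h(x[i])
        for j in range(i):
            b += h(sum(x[j:i + 1])) - h(sum(x[j:i]))
        for j in range(i + 1, D):
            b += h(sum(x[i:j + 1])) - h(sum(x[i + 1:j + 1]))
        out.append((2 * (1 - eps) * x[i] + 2 * eps * b / (D + 1)) % 1)
    return tuple(out)

N, eps = 4, Fraction(2, 5)
u = tuple(Fraction(k, 101) for k in (7, 23, 58, 90))
x = pi(u)
# pi o F = (G x F_1) o pi: the first D coordinates follow G ...
assert pi(F(u, eps))[:N - 1] == G(x[:N - 1], eps)
# ... and the sum coordinate follows the doubling map F_1
assert pi(F(u, eps))[N - 1] == (2 * x[N - 1]) % 1
```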
Perturbative arguments at the uncoupled limit $\epsilon=0$, applied to the related transfer operator acting on measure densities \cite{KK92}, demonstrate ergodicity for $\epsilon>0$ sufficiently small, for each integer $D$. By contrast, the dynamics is not expanding at the limit $\epsilon=\tfrac12$. All mappings $G_{D,\frac12}$ consist of piecewise isometries and their global dynamics can hardly be described. In any case, continuation arguments appear inapplicable at this limit. In order to evaluate ergodicity or its failure at the upper end of the expanding regime, we therefore opted to collect numerical evidence.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=27mm]{TrajecD2Eps03}
\includegraphics*[width=27mm]{TrajecD2Eps0405}
\includegraphics*[width=27mm]{TrajecD2Eps0416}
\includegraphics*[width=27mm]{TrajecD2Eps0418}
\vspace{0.2cm}
\includegraphics*[width=27mm]{TrajecD3Eps0392}
\includegraphics*[width=27mm]{TrajecD3Eps0394}
\includegraphics*[width=27mm]{TrajecD3Eps0436}
\includegraphics*[width=27mm]{TrajecD3Eps0438}
\end{center}
\caption{Direct empirical evidence of ergodicity/symmetry breaking for the maps $G_{2,\epsilon}$ (top row) and $G_{3,\epsilon}$ (bottom row). Superimposed plots of $n_1$ consecutive points of orbits (one color for each orbit), after projection onto $(0,1)^D$ and discarding of the first $n_0$ iterates ($n_0=15\times 10^3$ and $n_1=5\times 10^3$ for $D=2$, $n_0=40\times 10^3$ and $n_1=10\times 10^3$ for $D=3$). In each image, one orbit is started from a representative initial condition and the other ones follow by applying symmetry. For $\epsilon<\epsilon_D$ (where $\epsilon_2\simeq 0.417$ and $\epsilon_3\simeq 0.393$), all orbits appear to cover the same set, suggesting that ergodicity holds. However, for $\epsilon>\epsilon_D$, no two points of distinct orbits overlap, suggesting the existence of multiple Lebesgue ergodic components. For $D=2$, discontinuity lines $x_{1,2}=\frac12$ and $x_1+x_2=\frac12,\frac32$ are also shown. For $D=3$, several color schemes are used to differentiate groups of distinct symmetry type: black for the symmetric trajectory and rainbow (resp.\ neon) colors for the 6 (resp. 8) element group, which exist for $\epsilon>\epsilon_3$ (resp.\ $\epsilon>0.437$).}
\label{PHASPA}
\end{figure}
\section{Empirical results from numerical trajectories}\label{S-EMPIRIC}
This section presents evidence of ergodicity breaking obtained from numerical simulations of trajectories. The hints are of two types: renderings of trajectories when $D\leq 3$ and order parameter estimates for arbitrary $D\geq 2$.\footnote{For $D=1$, no ergodicity failure occurs since the Milnor attractor of $G_{\epsilon,1}$ is transitive, and hence ergodic, for every $\epsilon\in [0,\frac12)$ \cite{F14,SB16}.} We begin by presenting evidence of the first type.
In low dimension, direct visualization of asymptotic-orbit traces in phase space offers a straightforward evaluation of ergodicity/symmetry and their failure. In Fig.\ \ref{PHASPA}, late iterates of orbits started from representative initial conditions, and their images under symmetries, are plotted for $\epsilon$ across $[0,\frac12)$. Initial conditions were selected by random sampling of phase space, so as to capture, if not all, at least the most essential attractor features, with an emphasis on detecting asymmetry.
\begin{figure}[ht]
\begin{center}
\includegraphics*[width=28mm]{OneCoordOPDim2}
\includegraphics*[width=28mm]{OneCoordOPDim4}
\includegraphics*[width=28mm]{OneCoordOPDim10}
\includegraphics*[width=28mm]{OneCoordOPDim18}
\vspace{0.2cm}
\includegraphics*[width=28mm]{AlterOPDim3}
\includegraphics*[width=28mm]{AlterOPDim5}
\includegraphics*[width=28mm]{AlterOPDim11}
\includegraphics*[width=28mm]{AlterOPDim19}
\vspace{0.2cm}
\includegraphics*[height=17mm]{OneCoordOPDim30}
\includegraphics*[height=17mm]{AlterOPDim31}
\includegraphics*[height=17mm]{OneCoordOPDim86}
\includegraphics*[height=17mm]{OneCoordOPDim150}
\vspace{0.2cm}
\includegraphics*[height=17mm]{OneCoordOPDim70}
\includegraphics*[height=17mm]{AlterOPDim71}
\includegraphics*[height=17mm]{OneCoordOPDim100}
\includegraphics*[height=17mm]{OneCoordOPDim200}
\end{center}
\caption{Empirical estimates of the sign-flip asymmetry observable. Superimposed plots of the order parameter $\left|\frac1{T}{\displaystyle\sum_{t=1}^{T}}x^t_{\lceil\frac{D}2\rceil}-\tfrac12\right|$ for the projected points $x^t= \pi_{(0,1)^D}\circ G_{D,\epsilon}^t(x)$ in $(0,1)^D$ of orbits started from random initial conditions $x$ drawn from the uniform distribution. In each picture, 100 values, corresponding to 100 initial conditions, are plotted for each value of $\epsilon$ and each color; first blue ($T=10^4$), then green ($T=10^5$) and finally red ($T=10^6$). The process is repeated for 100 values of $\epsilon$. For $D\geq 70$, fluctuations of the values for $T=10^4$ are too large to assert symmetry breaking. For the sake of clarity, these estimates are not reported on the pictures for $D\geq 86$. Black segments: linear interpolation of maximum order parameter in the non-ergodic regime.}
\label{ORDERPAR}
\end{figure}
Although the bifurcation scenarios differ for the two cases, the pictures reveal that when increasing $\epsilon$ from 0, ergodicity persists for $\epsilon$ up to some $\epsilon_D$ and then fails beyond that threshold. In the $D=2$ case, the transitive and fully symmetric attractor continuously splits at $\epsilon=\epsilon_2$ into 6 disjoint and asymmetric invariant pieces. Each emerging piece breaks all map symmetries except one (Appendix \ref{SymBreakG2}). In the $D=3$ case, a fully symmetric invariant set exists throughout the expanding domain. In addition, 6 asymmetric invariant components discontinuously appear at $\epsilon=\epsilon_3$, away from the symmetric set, and persist thereafter as $\epsilon$ continues to increase. Then, at $\epsilon\simeq 0.437$, this group of partly asymmetric orbits is augmented in a similarly discontinuous way by an additional analogously persisting group, composed of 8 asymmetric orbits (see the involved symmetries in Appendix \ref{SymBreakG3}). In both cases, the asymmetric components always appear to be disjoint from their image under the sign flip, {\sl i.e.}\ the $\mathbb{Z}_2$-symmetry generated by $-\text{Id}_{\mathbb{T}^D}$ is systematically broken.
The analytic proofs of ergodicity breaking in \cite{F14,S18,SB16} established the existence of so-called InAsUP (see Definition 1 below) for all $\epsilon$ larger than thresholds that are remarkably close to the $\epsilon_D$ above. These InAsUP were guessed from trajectory renderings such as those above.
For large(r) $D$, trajectory renderings are obviously more involved and not as readily usable for detecting ergodicity/symmetry breaking. Following standard diagnostics in statistical physics, one can instead use order parameter (OP) empirical estimates. An empirical OP consists of unsigned averages over consecutive iterates of an asymmetry-related observable, which is designed to suggest the existence of asymmetric ergodic components when positive. In order to establish failure of sign-flip symmetry, we use the ``central'' coordinate $x_{\lceil\frac{D}2\rceil}$ as an observable (Fig.\ \ref{ORDERPAR}); other estimates based on different quantifiers, {\sl e.g.}\ any single coordinate, two/all coordinate mean values, etc., all yield similar plots with identical bifurcation values (data not shown). Finite-time effects are accounted for by superimposing results from averages over increasing numbers of iterates. Similarly, dependence on the initial condition is evaluated using multiple runs based upon randomly drawn inputs.
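The OP estimation just described can be sketched as follows. This is a minimal, non-rigorous floating-point simulation written for this note: it iterates $F_{N,\epsilon}$ and reads the coordinate $x_{\lceil D/2\rceil}=u_{\lceil D/2\rceil}-u_{\lceil D/2\rceil+1}\ \text{mod}\ 1$ off the projected orbit; parameters are illustrative and much smaller than those of Fig.\ \ref{ORDERPAR}.

```python
import math
import random

def F(u, eps):
    # one step of the coupled map; plain floating point (non-rigorous)
    N = len(u)
    g = lambda v: v - math.floor(v + 0.5)
    return [(2 * ui + 2 * eps / N * sum(g(uj - ui) for uj in u)) % 1.0
            for ui in u]

def order_parameter(N, eps, T, burn=10**4, seed=0):
    """|time average of x_{ceil(D/2)} - 1/2|, with x = pi_N(u), D = N - 1."""
    rng = random.Random(seed)
    u = [rng.random() for _ in range(N)]
    for _ in range(burn):           # discard the transient
        u = F(u, eps)
    i = math.ceil((N - 1) / 2)      # 'central' coordinate x_i = u_i - u_{i+1} mod 1
    acc = 0.0
    for _ in range(T):
        u = F(u, eps)
        acc += (u[i - 1] - u[i]) % 1.0
    return abs(acc / T - 0.5)

# weak sanity check only: the OP lies in [0, 1/2];
# positive limit values are expected for eps > eps_D
for seed in range(3):
    assert 0.0 <= order_parameter(N=3, eps=0.44, T=2 * 10**4, seed=seed) <= 0.5
```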
In agreement with ergodicity, the OP in Fig.\ \ref{ORDERPAR} vanishes at small coupling, for every $D$. However, as soon as $\epsilon$ exceeds some $\epsilon_D$, this quantity takes on positive values for a positive fraction of initial conditions. This was observed for all investigated values of the dimension, from $D=2$ up to $D=200$. For $D=2$ and $3$, the data are consistent with the phase space plots of Fig.\ \ref{PHASPA}; the emergence of positive OP coincides with the appearance of asymmetric ergodic components.\footnote{Surprisingly, for $D=3$, the additional asymmetric group emerging at $\epsilon\simeq 0.437>\epsilon_3$ in Fig.\ \ref{PHASPA} -- signs of the corresponding transition are barely visible on Fig.\ \ref{ORDERPAR} -- shows OP values that are very close, if not identical, to the primary asymmetric group.}
In addition to clear evidence of ergodicity breaking, Fig.\ \ref{ORDERPAR} reveals various interesting phenomenological features of the dynamics of the maps $G_{D,\epsilon}$. First, the bifurcation diagrams show characteristics that are specific to the parity of $D$. For $D$ odd, the bifurcation values $\epsilon_D\sim 0.42$ appear to be almost insensitive to $D>3$, and OP estimates are localized around a single ($\epsilon$-dependent) value. Moreover, the fraction of initial conditions that yield non-zero estimates decreases with $D$, making it more difficult to collect marked evidence of ergodicity breaking. For $D$ even, $\epsilon_D$ increases with $D$ and seems to approach a limit value $\epsilon_{\ast}<\frac12$.
OP estimates appear to be uniformly distributed between 0 and the ($\epsilon$-dependent) maximal value, making it difficult to discriminate limit values from finite size fluctuations.
Furthermore,
maximal OP values for $\epsilon>\epsilon_D$ show a linear $\epsilon$-dependence of tiny slope, which eventually vanishes for large $D$. Also, these maxima decrease as $D$ increases, and asymptotically behave as $\frac1{2D}$ for $D$ even (resp.\ $\frac1{D}$ for $D$ odd), see
Fig.\ \ref{MAXOP}.
Accordingly, unambiguously distinguishing asymmetry from short-time fluctuations requires longer averages when $D$ increases.\footnote{In particular, $T=10^4$ averages do not suffice to identify the emergence of positive OP for dimensions $D\geq 70$.} This issue, combined with the linear increase of the dimension of the variables in the iterations, substantially increases the computation time required for conclusive evidence. For instance, obtaining the $100\times 100$ averages over $T=10^6$ iterates in each of the pictures for $D\geq 70$ required running times of $\sim 100\text{h}$ on a $2.4$ GHz multi-processor computer (compared to $\sim 1\text{min}$ for $D=2$). Therefore, showing ergodicity breaking for dimensions that are commensurate with realistic physical systems appears to be a considerable challenge.
\begin{figure}[ht]
\centering
\includegraphics*[height=30mm]{ScalingOP}
\caption{Maximal order parameter range$\times D$ vs.\ $D$, respectively for even and odd values of $D$.}
\label{MAXOP}
\end{figure}
\section{Computer proof of ergodicity breaking}
The evidence presented in the previous section for the emergence of asymmetric Lebesgue ergodic components
calls for rigorous confirmation, in particular to exclude transient effects and other computer round-off shortcomings that may impact numerical simulations and finite-time estimates. A computer-based rigorous proof of ergodicity breaking is presented in this section.
For piecewise affine and expanding maps such as $G_{D,\epsilon}$, any forward invariant finite union of (convex) polytopes must support an absolutely continuous invariant measure \cite{T01}. Therefore, in order to prove existence of asymmetric Lebesgue ergodic components,
it suffices to obtain such unions of polytopes that are disjoint from their image under $-\text{\rm Id}_{\mathbb{T}^{D}}$.\footnote{Throughout this section, a set $S$ is said to be {\bf asymmetric} iff $S\cap -\text{\rm Id}_{\mathbb{T}^{D}}(S)=\emptyset$.} Accordingly, we shall rely on the following notion.
\begin{Def}
Let $D,\epsilon$ and a finite union of (non-empty) polytopes $P\subset \mathbb{T}^D$ be given. Then $P$ is said to be an {\bf InAsUP} (acronym for Invariant Asymmetric Union of Polytopes) if it satisfies the following conditions
\[
G_{D,\epsilon}(P)\subset P\quad\text{and}\quad P\cap -\text{\rm Id}_{\mathbb{T}^{D}}(P)=\emptyset.
\]
\end{Def}
As already mentioned, analytic proofs of existence of InAsUP have been established for $D=2,3$. However, the proofs rely on observations of trajectories and would be hardly generalisable to large values of $D$. Instead, a fully computational approach is proposed below, which consists of an algorithm designed to generate InAsUP for arbitrary $D$.
\subsection{Principles of the algorithm and exact computer results}
In a few words, the algorithm simply consists in applying repeated iterations of $G_{D,\epsilon}$ to an asymmetric initial polytope.\footnote{See subsection \ref{S-ALGO} below for more details; a pseudo-code is given in Appendix \ref{A-ALGO}.} It
terminates when the resulting union set becomes $G_{D,\epsilon}$-invariant, or prematurely stops if the set under construction happens to intersect its image under $-\text{\rm Id}_{\mathbb{T}^{D}}$.
As analytic proofs did, the algorithm optimizes its input by using dynamical information from simulations. Initial polytopes are chosen among cylinder sets that are given by symbolic codes associated with empirical trajectories of positive OP (subsection \ref{S-ADAPT}).
That initial polytopes are cylinder sets is crucial when $\epsilon\in\mathbb{Q}$ because all algorithmic calculations then involve rational numbers only (subsection \ref{S-POLDYN}). With exact computer arithmetic on such numbers\footnote{In particular, we use the GNU arithmetic library GMP \cite{GMP}.}, this implies that, when the construction completes, the resulting asymmetric invariant set must be a genuine InAsUP. In other words, upon completion, the computer provides a rigorous proof of ergodicity breaking for the pair $(D,\epsilon)$ under consideration.
Employing exact arithmetic is an important feature of the InAsUP algorithmic construction. Indeed, the analytic proofs for $D=2,3$ have revealed that some polytope facets are exactly mapped onto other ones, even when $\epsilon$ is an irrational number. Hence, some analytic cancellations must take place in the construction, which the algorithm would have to carefully monitor if it dealt instead with floating-point arithmetic. In short, with floating-point arithmetic, control of round-off errors does not suffice to rigorously assert invariance; hence the choice of exact arithmetic here.
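To make the point concrete, here is a minimal Python sketch (not the actual C/GMP implementation of \cite{F19}) of exact rational iteration of one affine branch of $G_{D,\epsilon}$, using the standard \texttt{fractions} module; the branch vector \texttt{B} is hypothetical, and the mod-1 projection to $S_D$ is omitted (it also preserves rationality).

```python
from fractions import Fraction

def affine_branch(x, eps, B, D):
    """One affine branch of G_{D,eps}: x -> 2(1-eps) x + (2 eps / (D+1)) B,
    computed coordinate-wise in exact rational arithmetic (no round-off)."""
    a = 2 * (1 - eps)
    return [a * xi + 2 * eps / (D + 1) * bi for xi, bi in zip(x, B)]

D = 3
eps = Fraction(2, 5)                                  # eps = 0.4, exactly
x = [Fraction(1, 7), Fraction(-1, 3), Fraction(0)]    # rational initial point
B = [Fraction(1), Fraction(0), Fraction(-1)]          # hypothetical branch vector

for _ in range(50):
    x = affine_branch(x, eps, B, D)

# Every coordinate is still an exact Fraction after 50 iterations.
all_rational = all(isinstance(c, Fraction) for c in x)
```

After fifty iterations every coordinate is still a genuine rational number, so the invariance test at completion is exact rather than approximate.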
Results on exact computer InAsUP constructions are summarized in the following formal statement.
\begin{Claim}
For all values of the pair $(D,\epsilon)$ in Table \ref{ALGOTAB}, the map $G_{D,\epsilon}$ has an InAsUP.
\label{COMPCLAIM}
\end{Claim}
\begin{table}
\caption{Data summary of InAsUP's exact numerical construction}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\multicolumn{7}{|c|}{Main results}\\
\hline
D&$N_D$&$\epsilon$&$|$Cyl$|$&Succ.\ ratio&$\#$ InAsUP&CPU time\\
\hline
2&3&0.44&5&24/24&22 (43)&1.5ms (3.5ms)\\
\hline
3&13&0.4&9&11584/11706&1250 (9100)&0.6s (13.9s)\\
\hline
4&75&0.47&11&355/373$^\ast$&3.6 (6.2)$\times10^5$&1.4h (5.2h)\\
\hline
5&541&0.44&7&3/3$^\ast$&6 (10)$\times 10^6$&120h (200h)\\
\hline
\end{tabular}
\end{center}
\noindent
\begin{tabular}{p{12cm}}
{\sl Legend:}\\
$N_D=$ cardinality of atomic partition ({\sl ie.}\ numb. of atoms in $S_D$)\\
$|$Cyl$|$ = length $\ell+1$ of words $a_0\cdots a_\ell$ that define initial cylinders\\
Succ.\ ratio = $\frac{\text{numb.\ of cylinders for which the construction succeeded}}{\text{numb.\ of cylinders for which the construction ran}}$. The construction ran for each cylinder generated by some empirical trajectory with positive OP ($^\ast$ = the construction ran for fewer cylinders, for the sake of computation time.)\\
$\#$ InAsUP = typical (resp.\ maximal) number of polytopes in InAsUP\\
CPU time = typical (resp.\ maximal) computation time for InAsUP completion (CPU frequency = $2.2$ GHz)
\end{tabular}
\label{ALGOTAB}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Additional InAsUP success ratios}\\
\hline
D&$\epsilon$&$|$Cyl$|$&Succ. ratio\\
\hline
2&0.44&2&2/6\\
&&3&8/10\\
\hline
3&0.4&2&12/61\\
&&4&398/585\\
\hline
4&0.47&5&0/29$^\ast$\\
&&6&7/43$^\ast$\\
\hline
5&0.44&4&3/27$^\ast$\\
&&5&1/6$^\ast$\\
\hline
\end{tabular}
\end{center}
\end{table}
To the best of our knowledge, these results provide the first proof of ergodicity breaking of $G_{D,\epsilon}$ for $D\geq 4$, and hence of the coupled maps $F_{N,\epsilon}$ for $N\geq 5$. For $D=2,3$, the results confirm the conclusions of the previous analytic proofs. For $D\leq 4$, InAsUP have also been obtained for other values of $\epsilon$ (data not shown). As Fig.\ \ref{ORDERPAR} suggests, we expect InAsUP to exist for every $\epsilon>\epsilon_D$.
In addition, Table \ref{ALGOTAB} provides success ratio statistics against the length of the initial cylinder.
Clearly, the larger the length, the higher the success ratio. This suggests that it suffices to choose a sufficiently small neighborhood of any point of some asymmetric trajectory to generate an InAsUP. Not only is ergodicity breaking not a spurious numerical illusion, but transient behaviors and round-off errors in simulations (Fig.\ \ref{ORDERPAR}) do not impact this phenomenon.
For completeness, Table \ref{ALGOTAB} also provides statistics about InAsUP cardinality and the related CPU computation times ({\sl viz.}\ typical value and maximum).\footnote{As explained at the end of subsection \ref{S-ADAPT}, polytopes in an InAsUP may overlap. InAsUP cardinality thus makes no obvious sense other than justifying the measured CPU times.} These quantities show substantial variations from one initial cylinder to another. However, their variation ranges barely vary with the cylinder length. Furthermore, Table \ref{ALGOTAB} reveals rapid growth in these statistics -- which accompanies an exponential growth of the atomic partition cardinality -- as the dimension $D$ increases. Therefore, even though the construction could in principle operate for any $D$, these exploding features, which imply a similar demand on computational resources, may prevent the construction from terminating for large $D$.\footnote{In particular, for $D=6$ ($N_6=4683$), no construction has terminated, due to insufficient available RAM and CPU time.}
\subsection{Details of the algorithm for InAsUP construction and its numerical implementation}
A copy of the source files of the InAsUP construction code (in C) is available online \cite{F19} (and, again, see Appendix \ref{A-ALGO} for a pseudo-code). Most of the procedures in the algorithm may appear to be standard. However, manipulating polytopes in arbitrary dimension and applying geometric and topological operations to them (such as testing inclusion, testing intersection, computing intersection sets, etc.) seems less common. Thanks to the isotropic nature of $G_{D,\epsilon}$, a convenient, dimension-independent approach to polytope manipulation and operations can be developed in our setting. The purpose of this section is to provide insights into these specific procedures and their numerical implementation.
For convenience, we regard $G_{D,\epsilon}$ as a mapping from $S_D=(-\tfrac12,\tfrac12)^D$ into $\mathbb{T}^D$ (to be combined with the canonical projection from $\mathbb{T}^D$ to $S_D$), and we denote by
\[
\bigcup_{a=1}^{N_D}A_a\ \text{mod}\ 0,
\]
the partition of $S_D$ into polytopes $A_a$ -- called {\bf atoms} -- on which the mapping is affine (or, equivalently, on which the vector function $B_D$ is constant). Importantly for future purposes, every atom is a polytope whose facets are included in discontinuity planes (or in parallel planes). Given the expression of $B_D$, every discontinuity plane is characterized by the condition $h(\sum_{k=i}^jx_k)\in\tfrac12+\mathbb{Z}$ for some $i<j$.
\subsubsection{Algorithmic procedure}\label{S-ALGO}
The InAsUP construction algorithm can be sketched as follows:
\begin{itemize}
\item[$\bullet$] Choose an initial polytope $P^0\subset\mathbb{T}^D$ such that $P^0\cap -\text{\rm Id}_{\mathbb{T}^{D}} (P^0)=\emptyset$.
\item[$\bullet$] Compute subsequent iterates $P^{t+1}=G_{D,\epsilon}(P^t)$ for $t\in\mathbb{N}$ -- which consist of unions of polytopes\footnote{More precisely, $P^{t+1}$ consists of a single polytope as long as ($t$ is such that) $P^t\subset A_a$ for a single $a$. However, as soon as we have $P^t\cap A_{a_i}\neq \emptyset$ for several $i$, then
\[
G_{D,\epsilon}(P^t)=\bigcup_i G_{D,\epsilon}|_{A_{a_i}}(P^t\cap A_{a_i})\ \text{mod}\ 0
\]
{\sl viz.} $G_{D,\epsilon}(P^t)$ becomes the union of several polytopes (mod 0), and therefore so must be all subsequent images $P^{t'}$ for $t'>t$.} -- until either $P^{t+1}\subset \bigcup_{k=0}^{t}P^k\ \text{mod}\ 0$, or some polytope in $-\text{\rm Id}_{\mathbb{T}^{D}} (P^{t+1})$ intersects $\bigcup_{k=0}^{t+1}P^k$.
\end{itemize}
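The control flow of this loop can be isolated from the polytope geometry. The following Python sketch is a deliberately simplified stand-in: polytopes are replaced by finite subsets of $\mathbb{Z}_{11}$, $G_{D,\epsilon}$ by the hypothetical expanding affine map $x\mapsto 3x+1 \bmod 11$, and $-\text{\rm Id}$ by $x\mapsto -x \bmod 11$; only the two terminating conditions are reproduced faithfully.

```python
N = 11                                   # toy phase space: Z_11

def image(S):                            # stand-in for G_{D,eps} on a union set
    return {(3 * x + 1) % N for x in S}

def symmetric(S):                        # stand-in for -Id
    return {(-x) % N for x in S}

def construct(P0, max_iter=100):
    """Skeleton of the InAsUP loop: accumulate images until the union is
    invariant ('success') or intersects its symmetric copy ('failure')."""
    union, P = set(P0), set(P0)
    if union & symmetric(union):
        return "failure", union
    for _ in range(max_iter):
        P = image(P)
        if P <= union:                   # P^{t+1} already covered: invariance
            return "success", union
        union |= P
        if union & symmetric(union):     # asymmetry lost: premature stop
            return "failure", union
    return "undecided", union
```

In this toy, $\{5\}$ is a fixed point of the map and yields immediate success, whereas the orbit of $\{1\}$ eventually meets its symmetric copy, so the construction stops prematurely.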
As explained before, the purpose of the second terminating condition is to interrupt the construction when the guarantee that the constructed sets are asymmetric fails. Otherwise, when the algorithm terminates through the first condition, the resulting sets $\bigcup_{k=0}^{t}P^k$ and $-\text{\rm Id}_{\mathbb{T}^{D}} (\bigcup_{k=0}^{t}P^k)$ must be disjoint InAsUPs, as desired.
The map $G_{D,\epsilon}$ does not have any basic property, such as a Markov partition \cite{KH95}, that would ensure that the algorithm always terminates in finite time. However, termination occurred for all results in Table \ref{ALGOTAB}, {\sl viz.}\ when the construction does not succeed, intersection with the symmetric image is always the cause.
\subsubsection{Adapted polytopes and their vector representation}\label{S-ADAPT}
A major aspect of the numerical implementation is to establish a suitable representation of polytopes and their dynamics. This is the purpose of this subsection.
Since the affine part of $G_{D,\epsilon}$ is a multiple of the identity, the map $G_{D,\epsilon}$ itself preserves the orientation of polytope facets. A moment of reflection then shows that the algorithm may deal exclusively with convex polytopes whose facets are aligned with discontinuity planes (because the corresponding family of polytopes is invariant under both forward and backward dynamics). In particular, this is the case when the initial polytope is a {\bf cylinder set}, {\sl viz.}\ $P^0=A_{a_0\dots a_\ell}$ for some word $\{a_k\}_{k=0}^{\ell}$ with letters $a_k\in\{1,\dots,N_D\}$, where $A_{a_0\dots a_\ell}$ is defined by \cite{KH95}
\[
A_{a_0\dots a_\ell}=\bigcap_{k=0}^{\ell}G_{D,\epsilon}^{-k}(A_{a_k}).
\]
In practice, candidate words $\{a_k\}_{k=0}^{\ell}$ for non-empty cylinders are obtained from empirical trajectories (whose OP is positive), by recording successive labels of visited atoms.
Besides, following standards \cite{BV04}, polytopes in arbitrary dimension can be characterized as intersections of half-spaces, using inequality constraints on coordinates. Half-spaces in our context are delimited by discontinuity planes. Hence polytopes in the algorithm will be characterized using inequality constraints on the sums $\sum_{k=i}^{j}x_k$ of coordinates.
Accordingly, given a vector
\[
m=\left((\underline{m}_{i,j},\overline{m}_{i,j})_{i=1}^j\right)_{j=1}^D\in\mathbb{R}^{D(D+1)}\ \text{for which}\ \underline{m}_{i,j}<\overline{m}_{i,j}\ \forall i\leq j,
\]
we define the polytope $P_{m}$ by\footnote{Strict inequalities in this definition are chosen on purpose. Indeed, excluding polytope facets is a convenient way to exclude discontinuities from InAsUP construction.}
\begin{equation}
P_m=\left\{x\in\mathbb{R}^D:\underline{m}_{i,j}<\sum_{k=i}^jx_k<\overline{m}_{i,j}, 1\leq i\leq j\leq D\right\},
\label{DEFPOLY}
\end{equation}
(provided that this set is not empty). Overdetermination in this expression is a convenient way to capture, in a single formal expression, all possible types of polytopes that can occur in the construction. In particular, while the number of atom facets may vary, every atom can be expressed as a polytope $P_m$, {\sl ie.}\ for every $A_a\subset S_D$, we have $A_a=P_{m_a}$ for some $m_a\in (\frac12+\mathbb{Z})^{D(D+1)}$.
Not only can polytopes be represented by vectors (so that the polytope dynamics reduces to that of a $D(D+1)$-dimensional dynamical system), but some of their operations can also be expressed in terms of vectors. The following operations are of special interest for our purpose. In all lines below, the indices $i,j$ run over all pairs such that $1\leq i\leq j\leq D$.
\begin{itemize}
\item[$\bullet$] Inclusion: $P_m\subset P_{m'}$ if $\underline{m'}_{i,j}\leq \underline{m}_{i,j}$ and $\overline{m}_{i,j}\leq \overline{m'}_{i,j}$.
\item[$\bullet$] Intersection:
$P_m\cap P_{m'}=P_{m\cap m'}$ where $\underline{(m\cap m')}_{i,j}=\max\{\underline{m}_{i,j},\underline{m'}_{i,j}\}$ and $\overline{(m\cap m')}_{i,j}=\min\{\overline{m}_{i,j},\overline{m'}_{i,j}\}$.
\item[$\bullet$] Dilation: For every $a>0$, we have $a\text{Id}_{\mathbb{T}^D}(P_m)=P_{m'}$, where $\overline{\underline{m'}}_{i,j}=a\overline{\underline{m}}_{i,j}$.
\item[$\bullet$] Translation: For every $x\in\mathbb{R}^D$, we have $P_m+x=P_{m'}$ where $\overline{\underline{m'}}_{i,j}=\overline{\underline{m}}_{i,j}+\sum_{k=i}^jx_k$.
\item[$\bullet$] Symmetry: $-P_m=P_{\Sigma(m)}$ where $\underline{\Sigma(m)}_{i,j}=-\overline{m}_{i,j}$ and $\overline{\Sigma(m)}_{i,j}=-\underline{m}_{i,j}$.
\end{itemize}
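These vector operations translate directly into code. The sketch below is an illustrative Python transcription (not the C implementation), storing a constraint vector as a dictionary keyed by the pairs $(i,j)$, with exact rational bounds.

```python
from fractions import Fraction as F

# A constraint vector m is a dict {(i, j): (lo, hi)} with lo < hi, encoding
# the polytope lo < x_i + ... + x_j < hi, as in the definition of P_m.

def included(m, mp):
    """Sufficient test for P_m subset of P_{m'}: all constraints of m' looser."""
    return all(mp[k][0] <= m[k][0] and m[k][1] <= mp[k][1] for k in m)

def intersect(m, mp):
    """Constraint vector of the intersection of P_m and P_{m'}
    (the resulting set may turn out to be empty)."""
    return {k: (max(m[k][0], mp[k][0]), min(m[k][1], mp[k][1])) for k in m}

def symmetric(m):
    """Constraint vector of -P_m: bounds are swapped and negated."""
    return {k: (-hi, -lo) for k, (lo, hi) in m.items()}

m  = {(1, 1): (F(0), F(1)), (2, 2): (F(0), F(1)), (1, 2): (F(0), F(2))}
mp = {(1, 1): (F(1, 2), F(2)), (2, 2): (F(-1), F(2)), (1, 2): (F(-1), F(3))}
inter = intersect(m, mp)
```

Dilation and translation are equally simple coordinate-wise updates of the bounds and are omitted here.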
The union operation is, however, not so convenient. In particular, depending on $m$ and $m'$, there might not exist $m''\in \mathbb{R}^{D(D+1)}$ such that
\[
P_m\cup P_{m'}=P_{m''}\ \text{mod}\ 0.
\]
Therefore, when testing that $P^{t+1}\subset \bigcup_{k=0}^{t}P^k\ \text{mod}\ 0$, the algorithm can only test if for every $P_m\subset P^{t+1}$, there exists $P_{m'}$ in the collection $\bigcup_{k=0}^{t}P^k$ such that $P_m\subset P_{m'}$.
While the intersection property ensures immediate detection of asymmetry failure, a consequence of this constrained inclusion test is that the algorithm may continue to run even though the InAsUP construction has already been completed. In particular, polytopes in the final union may overlap and the corresponding cardinality might be unnecessarily large.
Polytope overlap can, however, be reduced by chopping off pieces. Indeed, one can test whether a polytope $P_m\subset P^{t+1}$ can be written as $P_m=\bigcup_\ell P_{m_\ell}\ \text{mod}\ 0$ with $P_{m_\ell}\subset P_{m'}\subset \bigcup_{k=0}^{t}P^k$ for some $\ell$, so that the construction retains only the complementary pieces $P_{m_\ell}$ that do not belong to any $P_{m'}$ in the existing union. Retaining smaller polytopes makes it more likely that the inclusion $P^{t+1}\subset \bigcup_{k=0}^{t}P^k\ \text{mod}\ 0$ holds for smaller $t$, and thus that the final union cardinality is smaller. Such a reduction has been observed in practice, accompanied by a substantial reduction of computer resources when $D>3$.
For simplicity, we opted to test whether $P_m\subset P^{t+1}$ decomposes as the union of two pieces, namely
\[
P_m=P_{m_\text{in}}\cup P_{m_\text{out}}\ \text{mod}\ 0
\]
with $P_{m_\text{in}}=P_m\cap P_{m'}$ for some $P_{m'}\subset \bigcup_{k=0}^{t}P^k$ (so that it suffices to retain $P_{m_\text{out}}$); see Appendix \ref{A-CHOP} for mathematical foundations and criteria.\footnote{Actually, a more elaborate version of the algorithm also tests the three-piece decomposition
\[
P_m=P_{m_\text{in}}\cup P_{m_{\text{out},1}}\cup P_{m_{\text{out},2}}\ \text{mod}\ 0
\]
in the simplest case where $P_m$ intersects $P_{m'}$ transversally, so that the $P_{m_{\text{out},i}}$ result from chopping off along parallel faces.}
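As an illustration of the two-piece decomposition, the following Python sketch implements a toy chopping criterion: it only handles the case where exactly one bound of $P_m$ sticks out of $P_{m'}$ (the actual criteria, which also account for redundant constraints, are those of Appendix \ref{A-CHOP}).

```python
def chop_two_piece(m, mp):
    """Toy two-piece test: if exactly one bound of m sticks out of m', split
    P_m into P_in = P_m intersected with P_{m'} plus one complementary piece
    P_out (mod 0), returned as (m_in, m_out); otherwise return None."""
    cuts = [(k, side) for k in m for side in (0, 1)
            if (m[k][0] < mp[k][0] if side == 0 else mp[k][1] < m[k][1])]
    if len(cuts) != 1:
        return None                      # outside the scope of this toy test
    k, side = cuts[0]
    m_in, m_out = dict(m), dict(m)
    if side == 0:                        # lower bound sticks out of m'
        m_in[k] = (mp[k][0], m[k][1])
        m_out[k] = (m[k][0], mp[k][0])
    else:                                # upper bound sticks out of m'
        m_in[k] = (m[k][0], mp[k][1])
        m_out[k] = (mp[k][1], m[k][1])
    return m_in, m_out

# One-constraint example: the interval (0, 2) overlapping (1, 3) splits at 1.
m_in, m_out = chop_two_piece({(1, 1): (0, 2)}, {(1, 1): (1, 3)})
```

Only $P_{m_\text{out}}$ needs to be kept in the union, since $P_{m_\text{in}}$ is already covered by $P_{m'}$.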
\subsubsection{Polytope constraint optimization}
While expression \eqref{DEFPOLY} is convenient, it implies neither that $P_m$ is not empty, nor uniqueness of the constraining vector $m$. Indeed, $P_m$ can be unambiguously specified and yet as many as $D(D-1)$ constraints may remain inactive (Fig.\ \ref{POLYTOPCONST}). Inactive constraints \cite{BV04} are problematic in an automated numerical implementation because they may yield spurious polytopes.
\begin{figure}[h]
\begin{center}
\includegraphics*[height=35mm]{PolytopeConstraints1}
\includegraphics*[height=35mm]{PolytopeConstraints2}
\end{center}
\caption{Illustrations of polygons $P_m$ for $D=2$. {\sl Left.} Hexagonal example for which all constraints in expression \eqref{DEFPOLY} must be active. {\sl Right.} Rectangular example for which the constraints $\underline{m}_{1,2}\leq x_1+x_2\leq \overline{m}_{1,2}$ need not be active. Any vector $\left((\underline{m}_{i,j},\overline{m}_{i,j})_{i=1}^j\right)_{j=1}^2$ for which $\underline{m}_{1,2}\leq \underline{m}_{1,1}+\underline{m}_{2,2}$ and $\overline{m}_{1,1}+\overline{m}_{2,2}\leq \overline{m}_{1,2}$ defines the same $P_m$.}
\label{POLYTOPCONST}
\end{figure}
These issues call for constraint optimization. For the sake of notation, we present the optimization procedure in a slightly more general setting. The procedure relies on standard Linear Programming arguments.
Let $D\leq E$ be arbitrary integers. Suppose that a collection $\alpha=\left((\alpha_{ij})_{j=1}^D\right)_{i=1}^E$ of distinct vectors with non-negative elements is given.\footnote{That is to say, $\alpha_{ij}\geq 0$ and $(\alpha_{ij})_{j=1}^D\neq (\alpha_{i'j})_{j=1}^D$ when $i\neq i'$.} For any constraint vector $m=(\underline{m}_i,\overline{m}_i)_{i=1}^{E}$, consider the polytope $P_m^\alpha$ defined by
\[
P_m^\alpha=\left\{x\in \mathbb{R}^D\ :\ \underline{m}_i<\sum_{j=1}^D\alpha_{ij}x_j<\overline{m}_i,\ 1\leq i\leq E\right\}.
\]
For each $i\in\{1,\dots ,E\}$, consider the set $\Lambda_{i}$ of vectors $(\lambda_{k})_{k=1}^{E}$ with at least $E-D$ vanishing coordinates,
which uniquely solve the system of equations
\begin{equation}
\sum_{k=1}^{E}\lambda_{k}\alpha_{kj}=\alpha_{ij},\ 1\leq j\leq D.
\label{LAGRANGE}
\end{equation}
More precisely, given any $s\in \{1,\dots,D\}$ and $S\subset\{1,\dots,E\}$ of cardinality $s$, if it exists, let $\lambda^{S}$ be the unique solution of the equations obtained from \eqref{LAGRANGE} by letting $\lambda_k=0$ for $k\in \{1,\dots,E\}\setminus S$. The set $\Lambda_i$ is made of all such solutions $\lambda^{S}$ when $S$ and $s$ vary.\footnote{In particular, $\Lambda_{i}$ is a finite set which can be obtained by listing all possible sets $S$ and computing, when they exist, the unique solutions $\lambda^{S}$ of the corresponding systems. Moreover, $\Lambda_{i}$ cannot be empty because it contains the canonical vector $(\lambda_k)_{k=1}^{E}=(\delta_{k,i})_{k=1}^{E}$, where $\delta_{k,i}$ is the Kronecker symbol.}
Independently, given $\lambda\in\mathbb{R}$ and a constraint vector $m$, consider the vectors $(\underline{e}_k(\lambda,m))_{k=1}^{E}$ and $(\overline{e}_k(\lambda,m))_{k=1}^{E}$ respectively defined by
\[
\underline{e}_{k}(\lambda,m)=\left\{\begin{array}{ccl}
\underline{m}_{k}&\text{if}&\lambda\geq 0\\
\overline{m}_{k}&\text{if}&\lambda< 0
\end{array}\right.
\quad\text{and}\quad
\overline{e}_{k}(\lambda,m)=\left\{\begin{array}{ccl}
\overline{m}_{k}&\text{if}&\lambda\geq 0\\
\underline{m}_{k}&\text{if}&\lambda< 0
\end{array}\right.
\]
Optimized constraint vectors, together with an existence condition, are given in the following statement.
\begin{Lem}
Given any constraint vector $m=(\underline{m}_i,\overline{m}_i)_{i=1}^{E}$, consider $O(m)=(\underline{O(m)}_{i},\overline{O(m)}_{i})_{i=1}^{E}$ defined by
\begin{equation*}
\underline{O(m)}_i=\max_{(\lambda_k)\in\Lambda_i}\sum_{k=1}^{E}\lambda_k\underline{e}_k(\lambda_k,m)\quad\text{and}\quad
\overline{O(m)}_i=\min_{(\lambda_k)\in\Lambda_i}\sum_{k=1}^{E}\lambda_k\overline{e}_k(\lambda_k,m).
\end{equation*}
Then
\begin{itemize}
\item[(i)] $P_m^\alpha$ is not empty iff $\underline{O(m)}_i<\overline{O(m)}_i$ for all $1\leq i\leq E$.
\item[(ii)] If $P_m^\alpha$ is not empty, then $P_{O(m)}^\alpha=P_m^\alpha$ and all constraints in the definition of $P_{O(m)}^\alpha$ are active.
\item[(iii)]
The plane $\sum_{j=1}^D\alpha_{ij}x_j=\underline{O(m)}_i$ (resp.\ $\sum_{j=1}^D\alpha_{ij}x_j=\overline{O(m)}_i$) defines a face of $P_m^\alpha$ iff
\[
\max_{(\lambda_k)\in\Lambda_i\atop (\lambda_k)\neq (\delta_{k,i})}\sum_{k=1}^{E}\lambda_k\underline{e}_k(\lambda_k,O(m))<\underline{O(m)}_i\
\left(\text{resp.}\ \overline{O(m)}_i<\min_{(\lambda_k)\in\Lambda_i\atop (\lambda_k)\neq (\delta_{k,i})}\sum_{k=1}^{E}\lambda_k\overline{e}_k(\lambda_k,O(m)).\right)
\]
\end{itemize}
\label{OPTIVEC}
\end{Lem}
The proof, given in Appendix \ref{A-OPTIM}, is inspired by the Karush-Kuhn-Tucker (KKT) approach to linear programming, an extension of the method of Lagrange multipliers to the case of inequality constraints \cite{BV04}.
The algorithm extensively relies on Lemma \ref{OPTIVEC} and in particular, replaces $m$ by $O(m)$ every time the intersection of two polytopes is tested or computed.
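For $D=2$, the sets $\Lambda_i$ contain two elements each (cf.\ Table \ref{CARDICOMB}) and can be written down by hand, so the optimization of Lemma \ref{OPTIVEC} reduces to a few max/min operations. The Python sketch below (an illustration, not the C implementation) tightens the redundant diagonal bounds of the rectangular example of Fig.\ \ref{POLYTOPCONST}.

```python
from fractions import Fraction as F

# Constraint order for D = 2: index 0 -> x1, 1 -> x2, 2 -> x1 + x2.
# Each Lambda_i has exactly two elements here:
LAMBDA = {
    0: [(1, 0, 0), (0, -1, 1)],   # x1 = (x1 + x2) - x2
    1: [(0, 1, 0), (-1, 0, 1)],   # x2 = (x1 + x2) - x1
    2: [(0, 0, 1), (1, 1, 0)],    # x1 + x2 = x1 + x2
}

def optimize(m):
    """O(m) of the lemma for this hardcoded D = 2 collection; m is a list of
    (lo, hi) pairs, one per constraint; returns the tightened pairs."""
    def lo_cand(lam):        # e_k convention: lo if lambda_k >= 0, else hi
        return sum(l * (m[k][0] if l >= 0 else m[k][1]) for k, l in enumerate(lam))
    def hi_cand(lam):
        return sum(l * (m[k][1] if l >= 0 else m[k][0]) for k, l in enumerate(lam))
    return [(max(lo_cand(l) for l in LAMBDA[i]),
             min(hi_cand(l) for l in LAMBDA[i])) for i in range(3)]

# Unit square with deliberately loose diagonal bounds, as in the figure:
om = optimize([(F(0), F(1)), (F(0), F(1)), (F(-10), F(10))])
```

The loose bounds $-10<x_1+x_2<10$ are tightened to the active ones $0<x_1+x_2<2$, while the already active bounds on $x_1$ and $x_2$ are left unchanged; non-emptiness then amounts to checking that every tightened pair satisfies lo $<$ hi.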
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
D&2&3&4&5&6\\
\hline
$\#\Lambda_i$&2&5&16&65&326\\
\hline
\end{tabular}
\end{center}
\caption{Cardinality of the sets $\Lambda_i$ involved in the optimisation procedure for the collection $\alpha$ involved in \eqref{DEFPOLY}.}
\label{CARDICOMB}
\end{table}
For the collection $\alpha$ involved in the definition \eqref{DEFPOLY}, the cardinality of the sets $\Lambda_i$ does not depend on $i$ but increases exponentially with $D$, see Table \ref{CARDICOMB}. According to the GNU performance analysis tool {\tt gprof}, the optimization procedure $m\leftarrow O(m)$ is the most resource-consuming task of the overall execution in terms of CPU time.
In order to speed up the process, the results in Table \ref{ALGOTAB} have actually been obtained using the semi-optimization procedure $O'$, which maximises/minimises over the subsets $\Lambda'_i\subset\Lambda_i$ of vectors having at most two non-vanishing coordinates. Naturally, the resulting vectors $O'(m)$ are no longer optimal (hence some of the $P_{O'(m)}$ may be spurious). Therefore, the cardinality of the constructed InAsUP may be larger than when using $O$ only. However, this deficiency does not seem to affect successful InAsUP completion in practice, especially because the chopping procedure of Appendix \ref{A-CHOP} still applies. More importantly, computation times appear to be reduced by a factor $\sim\tfrac{\#\Lambda_i}{\#\Lambda'_i}$ which, since $\#\Lambda'_i=D$, is a substantial gain for $D\geq 4$.
\subsubsection{Polytope related vector dynamics}\label{S-POLDYN}
Let $B_a=B_D|_{A_a}$ be the expression in atom $A_a$ of the vector involved in the constant part of $G_{D,\epsilon}$. In each atom, $G_{D,\epsilon}$ regarded as acting on polytopes $P_m$ induces a mapping $\Gamma_{D,\epsilon,a}$ on vectors $m$, via the relation $G_{D,\epsilon}|_{A_a}(P_m)=P_{\Gamma_{D,\epsilon,a} (m)}$. Explicitly, we have
\[
\overline{\underline{\Gamma_{D,\epsilon,a} (m)}}_{i,j}=2(1-\epsilon)\overline{\underline{m}}_{i,j}+\frac{2\epsilon}{D+1}\sum_{k=i}^jB_{a,k},\ \forall 1\leq i\leq j\leq D.
\]
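In code, $\Gamma_{D,\epsilon,a}$ is a one-line affine update of each bound pair. The Python sketch below (with a hypothetical branch vector \texttt{B}) illustrates the rationality-preservation property exploited by the exact construction: rational inputs yield rational outputs.

```python
from fractions import Fraction as F

def gamma(m, eps, B, D):
    """Vector map Gamma_{D,eps,a}: each bound pair (lo, hi) of the constraint
    (i, j) is mapped affinely, with shift proportional to B_i + ... + B_j."""
    a = 2 * (1 - eps)
    out = {}
    for (i, j), (lo, hi) in m.items():
        shift = 2 * eps / (D + 1) * sum(B[i - 1:j])   # partial sum of B
        out[(i, j)] = (a * lo + shift, a * hi + shift)
    return out

D, eps = 2, F(11, 25)                        # eps = 0.44, exactly
B = [F(1), F(-1)]                            # hypothetical branch vector B_a
m = {(1, 1): (F(0), F(1, 2)), (2, 2): (F(-1, 2), F(0)),
     (1, 2): (F(-1, 4), F(1, 4))}
gm = gamma(m, eps, B, D)
```

All output bounds are again exact rationals, in line with the invariance of $\mathbb{Q}^{D(D+1)}$ stated below.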
Moreover, the atomic decomposition
\[
P_m=\bigcup_{a}P_{m\cap m_a}\ \text{mod}\ 0,
\]
where $P_{m\cap m_a}=P_m\cap P_{m_a}$ and $P_{m_a}=A_a$, induces an extension of the vector map $\Gamma_{D,\epsilon}$ to arbitrary polytopes $P_m\subset S_D$, {\sl ie.}\ we have
\[
\Gamma_{D,\epsilon} (m)=\bigcup_a\Gamma_{D,\epsilon,a} (m\cap m_a)
\]
where the label index $a$ runs over all labels for which $P_{m\cap m_a}$ is not empty. Furthermore, in relation with the previous section, notice that the vector dynamics commute with the optimization procedure, {\sl ie.}\ we have
\[
O\circ \Gamma_{D,\epsilon,a}=\Gamma_{D,\epsilon,a}\circ O.
\]
Notice finally that, since every $B_a\in\mathbb{Q}^D$, when $\epsilon\in\mathbb{Q}$, every map $\Gamma_{D,\epsilon,a}$ has the properties
\[
\Gamma_{D,\epsilon,a} (\mathbb{Q}^{D(D+1)})\subset \mathbb{Q}^{D(D+1)}\ \text{and}\ \Gamma_{D,\epsilon,a}^{-1} (\mathbb{Q}^{D(D+1)})\subset \mathbb{Q}^{D(D+1)},
\]
{\sl viz.}\ the polytope-related vector dynamics effectively involves only rational numbers when $\epsilon\in \mathbb{Q}$.
\section{Discussion}
The literature on the deterministic dynamics of collective systems mostly describes phenomenological changes related to pattern formation or transition to synchrony. These changes typically correspond to bifurcations of steady states or periodic solutions \cite{ABP-VRS05,DF18,CH93,PRK01}. These transitions -- which may also imply a reduction in the phase space to lower dimensional subspaces \cite{Buescu97} -- can be regarded as the breaking of ergodicity of an atomic or singular measure. Instead, analogues of phase transitions with spontaneous symmetry breaking should involve ACIM, in order to preserve all degrees of freedom of the corresponding chaotic attractor. Rigorously articulated examples of this are few, especially when appeal to an underlying {\sl ad hoc} random process possessing the desired property is excluded.
In this paper, we have first provided (complementary) numerical evidence for symmetry breaking of a chaotic attractor of full dimension, in a model of a collective system of interacting individuals (whose Markov partition remains elusive). Furthermore, we have developed and benchmarked an algorithm for exact computer proof of the corresponding breaking of ACIM ergodicity. Even though the algorithm could only terminate for systems with a limited number of individuals (due to limitations of computational resources), it indicates that phase transitions can be rigorously proven in a purely deterministic setting, without any reference to statistical mechanics.
In order to improve their physical relevance, such features should be confirmed for systems with larger numbers of individuals and for more realistic models. For our systems, such confirmation might require algorithmic improvements. For instance, one may rely on parallel implementation -- even though the iterative construction of an invariant set is an intrinsically sequential process -- on automated decomposition into unions of non-overlapping polytopes, and on the use of optimized libraries for rational arithmetic, as well as dedicated computational resources.
Besides the existence of InAsUP, alternative proofs of the breaking of ACIM ergodicity could be obtained using spectral properties of the transfer operator associated with the coupled map system. Since this operator governs the dynamics of the densities associated with measures, it suffices to prove that it acquires multiple fixed points \cite{KL09} as the coupling intensity increases. Likewise, one could aim at delineating (coupling-dependent) Markov partitions, which are compatible with ergodicity at small coupling and simultaneously imply splitting into asymmetric ergodic components as interactions become strong. To our best knowledge, these approaches have not been considered in the literature.
In any case, while transitions in statistical mechanics only occur in the thermodynamic limit $N\to +\infty$ -- especially because irreducible Markov chains, which govern the dynamics for finite $N$, must be uniquely ergodic -- this paper, together with \cite{F14,S18,SB16}, shows that, without Markov chain considerations, deterministic models do not require this limit and can display similar non-ergodic behaviors and symmetry breaking in finite dimension. This feature is particularly interesting for the modelling of real particle systems, which must be finite-dimensional.
\subsection*{Acknowledgments}
I am grateful to P. B\'alint, J. Buzzi, V. Perchet, F. S\'elley and L-S. Young for scientific discussions, to Y. Legrandg\'erard, L. Ollivier and D. Simon for computational insights and to P.\ B\'alint, N.\ Cuneo and G.\ Francfort for a critical reading of the manuscript and thoughtful suggestions of improvements.
\section{Introduction}
We consider fixed points of the Feigenbaum (period-doubling)
operator~\cite{feig2} whose orders tend to infinity.
It has been shown in~\cite{LSw2},~\cite{LSw3} that the hyperbolic
dimension of their Julia sets goes to $2$. In this paper we prove that
the Lebesgue measure (area) of these Julia sets tends to zero.
The question whether the area is indeed zero for finite orders
remains open. For the measure problem for maps with Fibonacci combinatorics,
see~\cite{SN},
and for quadratic Julia sets with positive area, see~\cite{BC}.
\paragraph{Outline of the proof.}
We follow the path known as ``the random walk argument''.
In Sect. 3 we build a Markov partition
by modifying the partition that we used in~\cite{LSw2}.
This partition defines a ``level function'' on the phase space
which tends to $+\infty$ at $\infty$ and to $-\infty$ at $0$.
With respect to the level function, the dynamics of the tower of the
limit map defines a random process. We then study the
probability distribution for this process and finally show that for
almost every point the process oscillates between $-\infty$ and
$+\infty$. The last step uses a martingale argument in the spirit
of~\cite{BKNS}.
The process we study has transition probabilities that are
asymptotically symmetric with respect to a change of sign and
whose magnitude is $\sim n^{-2}$. This, of course, makes them
non-integrable. There has been considerable interest in such processes
in probability theory. The simplest case is a Markov process
$X_n$
with independent increments with a distribution law of this type. That
case was studied in~\cite{Linnik} with a further development
in~\cite{Russian, Aronson1}. The main result is that $\frac{X_n}{n} - c$ tends
in probability to an analytic limit distribution law. From this, it is
easy to conclude that almost every orbit oscillates between $-\infty$
and $+\infty$. This was then extended by~\cite{Aronson2} to the
dynamical context of iterated function systems with distortion, that
is, the case in which the increments are no longer independent. The
result about a limit distribution law, under suitable assumptions,
remains the same.
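As a purely illustrative aside (a toy simulation, not part of any proof in this paper or in the cited works), a random walk with independent symmetric steps of this kind is easy to sample: taking $|{\rm step}|=\lfloor 1/U\rfloor$ for $U$ uniform on $(0,1]$ gives $P(|{\rm step}|=k)=\frac1k-\frac1{k+1}\sim k^{-2}$. Typical samples exhibit the large excursions of both signs that underlie oscillation between $-\infty$ and $+\infty$.

```python
import random

def heavy_step(rng):
    """Symmetric step with P(|step| = k) ~ k^{-2}: |step| = floor(1/U)."""
    u = rng.random() or 1e-12            # guard against U = 0
    return rng.choice((-1, 1)) * int(1 / u)

rng = random.Random(0)                   # fixed seed for reproducibility
steps, walk, x = [], [], 0
for _ in range(100_000):
    s = heavy_step(rng)
    steps.append(s)
    x += s
    walk.append(x)

largest_jump = max(abs(s) for s in steps)
spread = max(walk) - min(walk)           # heavy tails force huge excursions
```

The spread of the sampled walk is dominated by a handful of very large jumps, in contrast with a diffusive $\sqrt{n}$ scaling; the dynamical process studied below has dependent increments, so this picture is only heuristic.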
One important corollary from our results, which was announced
previously~\cite{LSw3}, p.424, is that the Julia set of the limit
transcendental map has area $0$, see Theorem~\ref{A}.
\paragraph{Acknowledgment.} The authors thank
Yuval Peres, Benjamin Weiss,
Jon Aaronson, and Michel Zinsmeister for valuable suggestions
and discussions on different stages of present work.
\subsection{Main results}
\paragraph{Notations and basic facts.}
We will write unimodal mappings of an interval,
$H :\: [0,1] \rightarrow [0,1]$ in the following
non-standard form:
\[ H(x) = |E(x)|^{\ell} \]
where $\ell>1$ is a real number and
$E$ is an analytic mapping with strictly negative
derivative on $[0,1]$ which maps $0$ to $1$ and $1$ to a point inside
$(-1,0)$. Then $H$ is unimodal with the minimum at some
$x_{0} = E^{-1}(0) \in (0,1)$ and $x_0$ is the critical
point of order $\ell$.
For $\ell$ which are
even integers there exists a unique pair $H:=H^{(\ell)}(x) =
|E_{\ell}(x)|^{\ell}$ and $\tau:=\tau_{\ell}>1$ which provides a
solution to the {\em Feigenbaum functional equation}
\begin{equation}\label{equ:14fa,1}
\tau H^2\tau^{-1}(x) = H(x)
\end{equation}
for $x \in [0,1]$.
As $\ell$ goes to $\infty$, mappings $H^{(\ell)}$ converge to a
non-trivial analytic limit denoted by $H$~\cite{oldwit, LSw1}.
It satisfies the Feigenbaum equation with $\tau=\lim\tau_\ell>1$.
According to~\cite{LSw1}, the limit map $H$ extends
to an infinite unbranched cover
of either of the two topological discs $U_{-}$ and $U_{+}$
onto a punctured round disc
$D_*=D(0, R)\setminus \{0\}$. Here
$U=U_{-}\cup U_+$ is compactly contained in the disc
$D(0, R)$ and $U_{\pm}$ touch each other at a single
point $x_0$, which is the limit
of the critical points for $H^{(\ell)}$.
In particular, the (filled) Julia set
$J(H)$ of $H$ is well-defined as the closure
of non-escaping points of $H: U\to D_*$.
$J(H)$ has no interior.
\paragraph{Statements.}
\begin{theo}\label{A}
The Julia set $J(H)$ of $H$ has area zero.
\end{theo}
Note that by~\cite{LSw2},~\cite{LSw3} the hyperbolic
(in particular, Hausdorff) dimension of $J(H)$ is $2$.
A stronger result is presented in Theorem~\ref{rec},
which provides an additional property of the
corresponding tower dynamics, roughly that almost every point visits
every neighborhood of both $0$ and $\infty$.
\begin{coro}\label{finite}
The area of the Julia set of the map
$H^{(\ell)}$ tends to zero
as the order $\ell$ grows.
\end{coro}
Theorem~\ref{A} together with Theorem 7 of~\cite{LSw1} and
~\cite{LSw2}
immediately imply
\begin{coro}\label{example}
There exist real parameters $a,c>0$, such that the map
\[f(z)=a\exp{(-(z-c)^{-2})}\]
has the following properties:
(a) $f$ is quasi-conformally conjugate to $H$ on the entire domain of $H$,
(b) the set of points in the plane whose $\omega$-limit sets under $f$
are contained in the $\omega$-limit set of $0$ has Hausdorff dimension $2$,
(c) the hyperbolic dimension of the Julia set $J(f)$
of $f$ is equal to $2$,
(d) the area of $J(f)$ is equal to zero.
\end{coro}
\section{Induced Dynamics}
We will build on~\cite{LSw2} adopting the notations of that paper.
\subsection{Limit Feigenbaum map}
The following statement proved in~\cite{LSw1},~\cite{LSw2} describes a maximal
dynamical extension of the map $H: U\to D_*$ and related facts.
\begin{theo}\label{prop}
(1) On the interval $[0,1]$, $H^{(\ell)}$ converge uniformly to a unimodal
map $H$ with a critical point at $x_0$ which satisfies the Feigenbaum
fixed point equation~(\ref{equ:14fa,1}) with some $\tau > 1$.
(2) There is an analytic map $h$ defined on the union of two open
topological disks
$\Omega_- \ni 0$ and $\Omega_+$, both symmetric with respect to the
real axis, whose closures intersect exactly at $x_0$.
(3) $\Omega_+$ and $\Omega_-$ are bounded and their boundaries
are Jordan curves with
Hausdorff dimension $1$.
Moreover, $\overline\Omega_{\pm}\cap \mathbb{R}=\overline{\Omega_{\pm}\cap \mathbb{R}}$.
(4) $h$ is
univalent on $\Omega_-$ and maps it onto $C_h := \mathbb{C} \setminus \{ x\in \mathbb{R}
:\: x \geq 2\log\tau\}$ and also univalent on $\Omega_+$ mapping it
onto $\mathbb{C} \setminus \{ x \in \mathbb{R} :\: x \geq\log\tau\}$.
(5) On any compact subset of $\Omega_+ \cup \Omega_-$, $H^{(\ell)}$ are
defined and analytic for all $\ell$ large enough and converge
uniformly to $H:=\exp(h)$, which is an analytic extension
of the map $H: U\to D_*$ previously introduced.
(6) If $G=H\circ \tau^{-1}$, then $h\circ G=h-\log \tau$ on $\Omega_{\pm}$.
That is to say, the map $h/(-2\log \tau)$ is an attracting
Fatou coordinate for $G^2$ at $x_0$.
$G^{-1}(\Omega_+) = \Omega_-$ and $G^{-1}(\Omega_- \setminus [y,0]) =
\Omega_+$ where $G^{-1}$
is an inverse branch of $G$ defined on $\mathbb{C}
\setminus \left( (-\infty,0] \cup [\tau x_0, + \infty)\right)$ which fixes
$x_0$ and $y<0$ is chosen so that
$G^2$ maps $(y, x_0)$ monotonically onto $(0, x_0)$.
(7) $\Omega_+ \subset \tau \Omega_-$ and $\Omega_- \subset \tau \Omega_-$.
\end{theo}
\begin{figure}
\psfig{figure=pic3.ps,width=528pt,height=396pt}
\caption{This is a schematic drawing of $\Omega_- \cap \mathbb{H}_+$ and regions
inside it. Areas delineated with thicker lines represent
$\tau^{-1}\Omega_-$ and $\tau^{-1}\Omega_+$. Shaded areas correspond
under $G^2$.}\label{fig:16fa,1}
\end{figure}
\paragraph{The geometry of $H$.}
See Figure~\ref{fig:16fa,1} for an illustration and explanation of
some notations.
Let us define $B = \Omega_- \setminus \tau^{-1}\overline{\Omega_-}$
and $B_{\pm} := B \cap \mathbb{H}_{\pm}$. Then define $D_{\pm} = B_{\pm} \setminus
\tau^{-1}\overline{\Omega_+}$.
A convenient parametrization of the set $\Omega_-$ is given
by the map $h^{-1}$
from a slit plane $C_h$, as described by item~(4) of Theorem~\ref{prop}. If
we write $w=h(z)$, then the map $H(z)$ corresponds to $\exp(w)$ and,
more strikingly, $z\rightarrow G^2(z)$ is conjugate to $w\rightarrow w -
2\log \tau$. Geometrically, it is worth noting that the beginning of
the slit at $w=2\log\tau$ corresponds to the point $y$ in
Figure~\ref{fig:16fa,1} where the boundaries of $\Omega_+$ and
$\Omega_-$ which follow the real line to the right of $y$, split.
Following~\cite{LSw2}, connected sets $V_{k,k'}$, $k,k'\in \mathbb {Z}$,
are chosen in $\Omega_-$ so that
each is mapped by $H$
onto $\tau^{k'+1}B_{\pm}$.
More explicitly, in the
$w$-coordinate
\[ h(V_{k,k'}) = \{ w\in C_h :\: \exp(w-k'\log\tau) \in \tau B_{\pm},\, k\pi < \Im
w < (k+1)\pi \} \; .\]
Now $\tau^{-1}\Omega_- \subset \overline{V_{0,0} \cup V_{-1,0}}$.
Hence, for $k=0,-1$
and $k'$ even and non-negative, $V_{k,k'}$ contains the preimage of
$\tau^{-1}\Omega_{-}$ by $G^{k'}$. To exclude this preimage, if $k=0$ or $k=-1$
and $k'$ is even and non-negative, we define $W_{k,k'} = V_{k,k'} \setminus
G^{-k'}(\tau^{-1}\Omega_-)$. For all other pairs $(k,k')$, $W_{k,k'} = V_{k,k'}$.
\paragraph{Rescaled map.}
Let us define the ``rescaled map'' $\tilde{H}$ as follows.
\begin{itemize}
\item
if $z\in W_{k,k'}$, $(k,k') \in \mathbb {Z} \times \mathbb {Z}$, then $\tilde{H}(z) =
\tau^{-k'-1}H(z)$,
\item
if $z\in V_{k,k'} \setminus W_{k,k'}$, $(k,k') \in \{0,-1\} \times
2\mathbb {Z}_+$,
then first consider $Z = G^{k'}(z)$. Then $Z\in
\overline{\tau^{-1}\Omega_-}$, which consists of the rescaled copies of
$B_{\pm}$. Then $\tilde{H}(z)$ is defined if and only if
$Z\in\tau^{-p}B_{\pm}$ for some $p>0$ and then $\tilde{H}(z)=\tau^p Z$.
\end{itemize}
Notice that this definition ensures that $\tilde{H}$ maps every
connected component of its domain univalently onto one of the four possible
pieces $D_{\pm}, B_{\pm}$. The image is $B_{\pm}$ in the
second case of the definition of the rescaled map and also in the
first case whenever $W_{k,k'}=V_{k,k'}$.
On the other hand $\tilde{H}$ on $W_{0,0}$ is
$\tau^{-1} H = \tau^{-1} G \tau$. It maps $W_{0,0}$
univalently onto $D_-$ since $G$ maps $\Omega_-$ onto $\Omega_+$.
On $W_{-1,0}$,
$\tilde{H}$ is the mirror reflection of this map, so the same formula
actually holds. We will refer to
$W_{0,0}, W_{-1,0}$ as the {\em central pieces}.
On $W_{0,2p}$, $p>0$, $\tilde{H}=\tilde{H}_{|W_{0,0}} \circ G^{2p}$,
so it also maps onto $D_-$ and similarly $W_{-1,2p}$ is mapped onto
$D_+$.
Distortion properties of $\tilde{H}$ are given by Proposition 1
of~\cite{LSw2}. As it turns out, most branches of $\tilde{H}^n$ can
be continued as univalent maps onto fixed neighborhoods of $B_{\pm},
D_{\pm}$, fixed meaning independent of a branch or $n$, with the
exception of those branches whose domains are sent to the central
pieces by $\tilde{H}^{n-1}$.
\paragraph{Towers.}
The following is a trivial application of the concept of a tower
used in~\cite{profesorus}.
\begin{defi}\label{defi:20ga,1}
Suppose we have a pair $(H,\tau)$ which satisfies the
equation~(\ref{equ:14fa,1}). For every $k\in \mathbb {Z}$, $H$ gives rise to a
rescaled mapping $H_k(z) = \tau^k H (\tau^{-k} z)$. The set $\{ H_k
:\: k\in\mathbb {Z} \}$ will be called the {\em tower of $H$}.
The set of all possible compositions of maps from a tower will be
referred to as {\em tower dynamics}.
\end{defi}
Towers will be used where $H$ may be either the limiting map discussed in
the previous paragraph or one of the fixed point transformations $H^{(\ell)}$ of
finite degree.
Tower dynamics forms a
dynamical system; namely, it defines an action of the additive semi-group of
non-negative dyadic
rational numbers under which integers correspond to ordinary iterates
of $H$ and $2^{-k}$ acts as $H_k$. This follows from the following
lemma.
\begin{lem}\label{lem:20ga,1}
For every $k\in\mathbb {Z}$, $H_k^2 = H_{k-1}$.
\end{lem}
\begin{proof}
Based on the functional equation~(\ref{equ:14fa,1}),
\[ H_k \circ H_k(x) = \tau^{k-1} \tau H^2 (\tau^{-1}(\tau^{-k+1}x)) =
\tau^{k-1} H(\tau^{-k+1}x) = H_{k-1}(x) \; .\]
\end{proof}
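The computation in Lemma~\ref{lem:20ga,1} uses nothing about $H$ beyond the fixed point equation. As a numerical sketch (an illustrative stand-in, not the paper's map $H$, which has a flat critical point), one can verify the tower identity for the classical quadratic-case Feigenbaum--Cvitanovi\'c fixed point, using its standard truncated power series:

```python
# A numerical sanity check of the tower identity H_k o H_k = H_{k-1}
# (Lemma lem:20ga,1). The identity only uses the fixed point equation
# H(x) = tau*H(H(x/tau)). As a stand-in for the paper's map H, we use the
# classical quadratic-case Feigenbaum-Cvitanovic fixed point g, with
# tau = -2.5029... and a standard truncated even power series; the residual
# below is limited only by the truncation of the series.

TAU = -2.502907875  # rescaling constant (minus Feigenbaum's alpha)
COEF = [1.0, -1.527633, 0.104815, 0.026706, -0.003527]  # x^0, x^2, x^4, ...

def g(x):
    """Truncated power series of the Feigenbaum-Cvitanovic fixed point."""
    return sum(c * x ** (2 * i) for i, c in enumerate(COEF))

def g_k(k, x):
    """Tower element g_k(x) = TAU^k g(TAU^{-k} x)."""
    return TAU ** k * g(TAU ** (-k) * x)

# Compare g_k(g_k(x)) with g_{k-1}(x) on sample points for a few levels k.
samples = [i / 100.0 for i in range(-40, 41)]
err = max(abs(g_k(k, g_k(k, x)) - g_k(k - 1, x))
          for k in (0, 1, 2) for x in samples)
print("max deviation of g_k o g_k from g_{k-1}:", err)
```

The deviation is small and shrinks as more terms of the series are kept, in line with the fact that the identity is exact for the true fixed point.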
\subsection{Further inducing}
The map $\tilde{H}$ has satisfactory properties from the
combinatorial point of view, since $B_+$ and $B_-$ are cut into countably
many topological disks, each of which is mapped univalently back
onto $B_+$ or $B_-$. However, we would like it to have bounded
distortion and that is generally not so. A standard approach to
obtaining bounded distortion is by inducing and we will follow that route
now.
Start by introducing new pieces $K_{\pm} = B_{\pm}\setminus
\overline{W_{0,0}\cup W_{-1,0}}$ and $L_{\pm} = D_{\pm} \setminus
\overline{W_{0,0} \cup W_{-1,0}}$.
We will next define the map $\tilde{C}$ almost everywhere on the union
of the central pieces, induced by $\tilde{H} = \tau^{-1} G \tau$,
for which every branch maps onto $L_{\pm}$. This will allow us to build
the map $\tilde{J}$, defined on the domain of $\tilde{H}$ to be equal to
$\tilde{H}$ except on $\tilde{H}^{-1}(W_{0,0}\cup W_{-1,0})$, where it is
given by $\tilde{C} \circ \tilde{H}$.
\paragraph{The mapping $\tilde{C}$.}
We will only consider $\tilde{C}$ on $W_{0,0}$. The mapping on $W_{-1,0}$
will be the mirror reflection.
By Lemma 2.13 in~\cite{LSw2}, no point will stay forever in $W_{0,0}$
under the iteration by $\tilde{H}$. Hence, $\tilde{C}$ is simply
defined as the first entry map into $L_{+}$ under the iteration by
$\tilde{H}$.
\begin{lem}\label{lem:22xp,1}
Every branch of $\tilde{C}$ has a univalent extension onto a simply
connected neighborhood $U_L$ of $L_{+}$. $U_L$ is the same for all
branches of $\tilde{C}$ and its preimage by any branch is contained
in the set $S := \{ z :\: \Re z < 0\; \mbox{or}\; \Im z > 0\}$.
\end{lem}
\begin{proof}
Since $\tilde{H}$ on $W_{0,0}$ is $G_{-1} = \tau^{-1} G \tau$
and $\overline{L_+}$
only intersects $\mathbb{R}_+$ at $x_0$, which is not a critical
point of $\tilde{H}$, we can define an inverse branch on a
neighborhood of $L_+$. Since the preimage of $\overline{L_+}$ by
$G_{-1}$ now only
intersects the real line at $y$, that neighborhood $U_L$ can be chosen to
fit into $S$. This proves the needed extension for the branch of
$\tilde{C}$ which is the first iterate of $\tilde{H}$. To examine
further branches, continue mapping by the inverse branch of
$\tilde{H}$ defined on $S$. From the properties of $G$, that inverse
branch sends $S$ into itself, or even into the upper half plane.
\end{proof}
\paragraph{Bounded distortion for $\tilde{J}$.}
Now define the map $\tilde{J}$, which equals $\tilde{H}$
everywhere on the domain of $\tilde{H}$ except on preimages of the central
pieces, where it equals $\tilde{C}\circ \tilde{H}$.
Each branch of $\tilde{J}$ maps onto one of the pieces $K_{\pm},
L_{\pm}$.
\begin{prop}\label{prop:22xp,1}
There are fixed neighborhoods of sets $\overline{K_{\pm}}$ and
$\overline{L_{\pm}}$ such that for any $n$ any branch of $\tilde{J}^n$
which maps onto one of those sets can also be extended univalently so
that it maps onto the corresponding neighborhood.
\end{prop}
The map $\tilde{J}^n$ can be expanded as a composition of $\tilde{H}$
and $\tilde{C}$ in which $\tilde{C}$ cannot be followed by another
$\tilde{C}$. Since $\tilde{C}$ is also induced by $\tilde{H}$, we can
use Proposition 1 from~\cite{LSw2}. It asserts that if $\tilde{H}$ is
the last mapping applied in this composition, then the claim of
Proposition~\ref{prop:22xp,1} holds. So consider the situation when
$\tilde{C}$ is applied last. By Lemma 2.10 from~\cite{LSw2}, the
entire composition that comes before it can be continued so that it
maps onto $\mathbb{C} \setminus [0,+\infty)$. This obviously contains the set
$S$ mentioned in Lemma~\ref{lem:22xp,1}, so again we get a univalent
extension mapping over a fixed neighborhood of $\overline{L_{\pm}}$.
This proves Proposition~\ref{prop:22xp,1}.
By Koebe's Lemma we now know that all branches of $\tilde{J}^n$ have
distortion bounded uniformly with respect to $n$.
\subsection{Tower Dynamics}
By Lemma 2.14 from~\cite{LSw2}, for every branch $\zeta$ of $\tilde{H}$ there
exists an integer $p$ such that $\tau^p \zeta$ belongs to the tower of
$H$, see Definition~\ref{defi:20ga,1}. Let us call $p$ the {\em combinatorial displacement} of $\zeta$.
This leads to the following definition.
\begin{defi}\label{defi:2ja,1}
A mapping defined on an open set contained in the fundamental ring
$\Omega_- \setminus \tau^{-1}\overline{\Omega_-}$ is called {\em
tower-induced} if on each connected component of its domain it has the
form $\tau^{q} h$ where $q$ is an integer and $h$ belongs to the tower
of $H$.
\end{defi}
For a tower-induced mapping, the choice of $q$ and $h$ is unique.
\begin{lem}\label{lem:2ja,1}
If $\tau^q h = \tau^{q'} h'$ on a connected open set, with
$q,q'\in\mathbb {Z}$ and $h,h'$ in the tower of $H$, then $q=q'$ and $h=h'$.
\end{lem}
\begin{proof}
Both $h$ and $h'$ are iterates of the same $h_0$ in the tower, say
$h=h_0^m, h'=h_0^{m'}$, $m,m' > 0$. Without loss of generality, $m'
\geq m$. Then
\[ \tau^{q-q'} = h_0^{m'-m} \]
on an open set, which is impossible unless $m'=m$ and $q=q'$, since no
positive iterate of $h_0$ is a linear map.
\end{proof}
\begin{defi}\label{defi:21ga,1}
Given a tower-induced map $\Phi$ on a subset $U$ of the fundamental
ring, we can define its {\em associated map} as follows.
On $U$, wherever $\Phi = \tau^q h$, the associated map is just $h$.
On $\tau^p U$, where $p\in \mathbb {Z}$, the associated map is $\tau^p h
\tau^{-p}$.
\end{defi}
In this way, the associated map belongs to the tower.
\begin{lem}\label{lem:2ja,2}
If the combinatorial displacement of a tower induced map is $q$ at
some point $x$, then for any $p\in \mathbb {Z}$ the associated map sends
$\tau^p x$ into $\tau^{p+q} (\Omega_- \setminus
\tau^{-1}\overline{\Omega_-})$.
\end{lem}
\begin{proof}
It is a direct consequence of the definitions.
\end{proof}
\begin{lem}\label{lem:22xp,2}
If $\zeta_1$ and $\zeta_2$ are two tower-induced mappings with
associated maps $\Theta_1,\Theta_2$, respectively, then
$\zeta_1\circ\zeta_2$ is also a tower-induced map with the associated
map $\Theta_1 \circ \Theta_2$.
\end{lem}
\begin{proof}
Denote $\zeta_1 = \tau^{q_1} h_1, \zeta_2 = \tau^{q_2} h_2$. Without
loss of generality, the domain of $\zeta_2$ is connected and so $q_2$
is constant, while $q_1$ is only piecewise constant and $h_1$ is only
piecewise a map from the tower.
Then
\[ \zeta_1 \circ \zeta_2 = \tau^{q_1} h_1 \tau^{q_2} h_2 =
\tau^{q_1+q_2} (\tau^{-q_2} h_1 \tau^{q_2}) h_2 \; .\]
Mappings $h_2$ and $\tau^{-q_2} h_1 \tau^{q_2}$ both belong to the
tower and so does their composition. Thus, the composition $\zeta_1
\circ \zeta_2$ is tower-induced and its associated map is
$(\tau^{-q_2} h_1 \tau^{q_2}) h_2$ on the domain of $\zeta_2$.
On the other hand, $h_2$ maps the domain of $\zeta_2$ into
$\tau^{-q_2} (\Omega_- \setminus \tau^{-1} \Omega_-)$. So, the
composition of the associated maps is indeed
\[ (\tau^{-q_2} h_1 \tau^{q_2}) h_2 \]
on the domain of $\zeta_2$. So, the associated map of the composition
is equal to the composition of the associated maps on the domain of
$\zeta_2$. When considered on rescaled images of the domain of
$\zeta_2$, both $\Theta_1 \circ \Theta_2$ and the associated map of
$\zeta_1\circ\zeta_2$ are equivariant with respect to such rescalings,
so the equality holds everywhere.
\end{proof}
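The algebra in this proof (conjugation by a power of $\tau$ shifts tower indices, and displacements add) can be checked mechanically. The sketch below uses a generic stand-in tower $f_k(x)=t^k f(t^{-k}x)$ built from an arbitrary nonlinear $f$ and scale $t$ (assumptions for illustration only); the identity $\zeta_1\circ\zeta_2=\tau^{q_1+q_2}\,(\tau^{-q_2}h_1\tau^{q_2})\circ h_2$ is pure algebra and does not use the Feigenbaum equation:

```python
import math

# Mechanical check of the composition rule for tower-induced maps
# (Lemma lem:22xp,2): if zeta_i = t^{q_i} h_i with h_i = f_{k_i} from a
# tower, then zeta_1 o zeta_2 = t^{q_1+q_2} (t^{-q_2} h_1 t^{q_2}) o h_2,
# where conjugation shifts tower indices: t^{-q} f_k t^{q} = f_{k-q}.
# f and t below are arbitrary stand-ins; the identity holds exactly.

T = 2.0                # stand-in for the scaling constant tau
f = math.sin           # any nonlinear function works here

def f_k(k, x):
    """Tower element f_k(x) = T^k f(T^{-k} x)."""
    return T ** k * f(T ** (-k) * x)

def zeta(q, k, x):
    """A tower-induced branch: T^q composed with the tower element f_k."""
    return T ** q * f_k(k, x)

q1, k1, q2, k2 = 3, -1, -2, 2
dev = max(abs(zeta(q1, k1, zeta(q2, k2, x))                 # zeta_1 o zeta_2
              - T ** (q1 + q2) * f_k(k1 - q2, f_k(k2, x)))  # associated form
          for x in [i / 10.0 for i in range(-20, 21)])
print("max deviation:", dev)
```

In particular the displacement of the composition is $q_1+q_2$, as recorded after the lemma.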
As a consequence of Lemma~\ref{lem:2ja,2} and Lemma~\ref{lem:22xp,2},
combinatorial displacements are additive under the composition of
tower-induced maps.
\paragraph{Dynamical interpretation of $\tilde{J}$.}
Let us recall the mapping $\tilde{J}$ defined previously. The map
$\Lambda$ is equal to $\tilde J$ except on $\tau^{-1}\Omega_+$, where
we modify the definition to ${\tilde J} \circ {\tilde J}$.
\begin{prop}\label{prop:2jp,1}
If $x$ belongs to the Julia set of $H$ and to the domain of
$\Lambda^p$, $p>0$, then the map associated to $\Lambda^p$ is equal
to an iterate of $H$ on a neighborhood of $x$.
\end{prop}
Throughout this proof we assume that $x$ belongs to the Julia set of
$H$.
We split the proof depending on whether $x$ belongs to $\Omega_+$ or
$\Omega_-$.
The first case to consider is $x\in \Omega_+$. To determine the map
associated to $\Lambda$ on a neighborhood of $x$, we need to look at
$\Lambda$ on a neighborhood of $\tau^{-1}x \in \tau^{-1} \Omega_+$.
By the modification we just described, $\Lambda$ is ${\tilde J}^2$ on
a neighborhood of $\tau^{-1}x$ and so the associated map at
$\tau^{-1}x$, as well as $x$, is the associated map of ${\tilde J}$
composed with itself.
On $\Omega_+$ the map associated to $\tilde J$ is $H_1 = \tau H
\tau^{-1}$. Then we know that $H_1(H_1(x)) = H(x)$ is in
$\Omega_-\cup\Omega_+ \subset \tau\Omega_-$, i.e. $H_1(x) \in H_1^{-1}(\tau\Omega_-) \cap
\tau\Omega_-$, or $\tau^{-1} H_1(x) \in H^{-1}(\Omega_-) \cap
\Omega_-$. It follows that ${\tilde J}$ on a neighborhood of ${\tilde
J}(\tau^{-1}x) = \tau^{-1}H_1(x)$ is $H$, and therefore its associated
map at $H_1(x)$ is $H_1$ again. So, by Lemma~\ref{lem:22xp,2}, the
associated map of $\Lambda$ is $H_1 \circ H_1 = H$ in a neighborhood
of $x$.
Let us now consider $x\in\Omega_-$.
Since ${\tilde J} = {\tilde C} \circ {\tilde H}$, with both $\tilde C$
and $\tilde H$ tower-induced maps, we have an analogous decomposition
of the map $\Phi_J$ associated to $\tilde J$ into the composition of
$\Phi_H$ associated to $\tilde H$ and $\Phi_C$ associated to $\tilde
C$.
\begin{lem}\label{lem:2jp,1}
Suppose $x\in \Omega_-$. Then $\Phi_C$ on a neighborhood of $x$ is an
iterate of $H_{-1}$.
\end{lem}
\begin{proof}
$\tilde C$ is induced by the map $G_{-1} = \tau^{-1} H$. So, the
associated map is $H$ on the fundamental ring $\Omega_- \setminus
\tau^{-1}\overline{\Omega_-}$. However, the combinatorial displacement
of $G_{-1}$ is $1$, so by Lemma~\ref{lem:22xp,2} the map associated
to ${\tilde C}^2$ is $H_1 \circ H$ and, inductively, the map
associated to ${\tilde C}^k$, $k\geq 1$, is $H_{k-1} \circ \cdots
\circ H$ wherever ${\tilde C}^k$ is defined on the fundamental ring.
Observe that $H$ maps any point in the domain of ${\tilde C}$ outside
of $\Omega_- \cup \Omega_+$. Hence, no $x$ from the Julia set of $H$
can be found there. However, we may encounter points from the Julia
set on the domain of ${\tilde C}$ rescaled by $\tau^{-p}$, $p>0$. By
the equivariance with respect to the rescaling by $\tau$, the map
associated to ${\tilde C}^k$ on a neighborhood of such a point is
$H_{k-1-p} \circ \cdots \circ H_{-p}$. Again, this composition cannot
contain $H_0 = H$ which would eject the point out of the Julia set,
hence $k-1-p<0$, hence $\Phi_C$ is generated by $H_{-1}$ in the
neighborhood of $x$.
\end{proof}
In light of Lemma~\ref{lem:2jp,1}, in order to conclude that
$\Phi_C \circ \Phi_H$ is an iterate of $H$ in a neighborhood of $x$ it
will be enough to show that $\Phi_H$ is an iterate of $H$ on such a
neighborhood. Then, $\Phi_H(x)$ is in the Julia set of $H$ and
Lemma~\ref{lem:2jp,1} is applicable.
${\tilde H}$ is simply $\tau^q H$ on most of its domain, with the sole
exception of domains $G^{-2k}(\tau^{-1}\Omega_-)$ where the inverse
branch of $G$ which fixes $x_0$ is used. On any such domain,
$\tilde{H}$ is $\tau^q G^{2k}$. Since $G=\tau^{-1} H_1$, its associated
map is $H_1$ and the combinatorial displacement is $1$. Hence, the map
associated to $\tilde{H}$ on such a domain is
\begin{equation}\label{equ:2jp,1}
H_{2k} \circ H_{2k-1} \circ \cdots \circ H_1 \; .
\end{equation}
Also,
\[ H(G^{-2k}(\tau^{-1}\Omega_-)) = \tau^{2k} H\tau^{-1}(\Omega_-) =
\tau^{2k} G(\Omega_-) = \tau^{2k}\Omega_+ \; .\]
Now take $x$ in the Julia set and in $\tau^{-p}(\Omega_-\setminus
\tau^{-1}\overline{\Omega_-})$. Without loss of generality $p\geq 0$
since the case of $x\in \Omega_+$ was already considered. If $x$ is
not in the rescaled image of one of the exceptional domains discussed
in the previous paragraph, then the map associated to $\tilde{H}$ is
just $H_{-p}$.
If $x$ is in $\tau^{-p}G^{-2k}(\tau^{-1}(\Omega_-))$, then $H_{-p}$ maps
it into $\tau^{2k-p}\Omega_+$. But $H_{-p}$ is an iterate of
$H$, so it has to map $x$ into the Julia set of $H$ and thus
$2k-p\leq 0$, i.e. $2k\leq p$. By formula~(\ref{equ:2jp,1}), the associated map is
given by
\[ \tau^{-p} H_{2k} \circ \cdots \circ H_1 \tau^{p} \]
which is clearly generated by $\tau^{-p} H_{2k} \tau^{p} = H_{2k-p}$,
thus by $H_0$ in view of the inequality $2k\leq p$.
What we have now proved is that if $x\in \Omega_-$, then the map associated
to ${\tilde J}$ is an iterate of $H$ on a neighborhood of $x$. This is the
same as the map associated to $\Lambda$ unless $x\in \tau^{-1-p}
\Omega_+$ for $p\geq 0$. If that happens, $\Lambda = {\tilde
J}\circ{\tilde J}$ and the map associated to $\tilde J$ is $H_{-p}$ on
a neighborhood of $x$ and therefore maps $x$ into $\tau^{-p}\Omega_-$.
Then, again the map associated to the second iterate of $\tilde J$ is
generated by $H$ on a neighborhood of $H_{-p}(x)$.
Proposition~\ref{prop:2jp,1} has been demonstrated.
\section{Drift Estimates}
\subsection{Martingale estimates}
We will be using the following abstract probabilistic
statement. Its stronger form under stronger conditions
can be found in the literature, see the discussion and references in
the Introduction.
Define $\gamma_k(x) = x \chi_{[-k,k]}(x)$ for $k>0$.
\begin{prop}\label{prop:14za,1}
On a certain probability space $\Omega$ with measure $\mu$ consider an integer-valued stochastic process
$(Z_n)_{n=0}^{\infty}$. Let ${\cal F}_n$ denote the $\sigma$-algebra
generated by $Z_0,\cdots, Z_n$. For $n\geq 1$, let $F_n = Z_n -
Z_{n-1}$. Assume that for each $n\geq 1$ we have a decomposition $F_n =
\Delta_n + I_n$, with $\Delta_n$ and $I_n$ both integer-valued. Moreover, assume that
positive constants $K_1, K_2$ and $p>1$ exist with which the following estimates
hold for every $n\geq 1$:
\begin{itemize}
\item
for every $k\in\mathbb {Z}, k\neq 0$
\[ K_1^{-1} k^{-2} \leq P(\Delta_n = k \,|\, {\cal F}_{n-1})(\omega) \leq K_1 k^{-2} \]
for $\mu$-almost all $\omega$,
\item
for every positive $k$,
\[ |E(\gamma_k(\Delta_n) |{\cal F}_{n-1})(\omega)| \leq K_2 \] almost surely,
\item
\[ E(|I_n|^p \,|\, {\cal F}_{n-1})(\omega) \leq K_2^p \] almost surely.
\end{itemize}
Then, $\mu$-almost surely $\limsup_{n\rightarrow\infty} \frac{Z_n}{n} =
+\infty$ and $\liminf_{n\rightarrow\infty} \frac{Z_n}{n} =
-\infty$.
\end{prop}
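As a toy illustration of the hypotheses (an artificial example, not a process arising in the paper), one may take $\Delta_n$ symmetric with $P(\Delta_n=\pm k)$ proportional to $k^{-2}$, truncated at a large $K$, and $I_n=0$; the first condition then holds with a modest $K_1$, and the truncated means vanish by symmetry:

```python
# A toy step distribution (artificial, for illustration only) satisfying the
# hypotheses of the proposition: Delta symmetric, P(Delta = +-k) ~ k^{-2}
# for 0 < |k| <= K, and I_n = 0. The heavy k^{-2} tails are what makes
# limsup Z_n / n = +infinity possible despite the vanishing truncated means.

K = 10 ** 5
norm = 2.0 * sum(1.0 / k ** 2 for k in range(1, K + 1))   # total mass
p = {k: (1.0 / k ** 2) / norm for k in range(1, K + 1)}   # P(Delta = +-k)

# First hypothesis with K_1 = 4: K_1^{-1} k^{-2} <= P(Delta = k) <= K_1 k^{-2}.
K1 = 4.0
tail_ok = all(1.0 / (K1 * k ** 2) <= p[k] <= K1 / k ** 2
              for k in range(1, K + 1))

# Second hypothesis: the truncated means E(gamma_m(Delta)) vanish by symmetry.
trunc_mean = max(abs(sum(j * p[j] + (-j) * p[j] for j in range(1, m)))
                 for m in (10, 100, 1000))
print(tail_ok, trunc_mean)
```

The third hypothesis is trivial here since $I_n=0$; any process with conditional increments of this type falls under the proposition.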
Let us define $\log^+(x)$, $x\in \mathbb{R}$ to be $\log(x)$ if $x>1$ and
$0$ otherwise.
\begin{lem}\label{lem:12za,1}
Consider a probability space $P$ with measure $\mu$. Let $\Delta$
and $I$ be integer-valued random variables and $F=\Delta+I$. Assume
that for some $Q', Q''>0$, $p>1$ and every $k\neq 0$:
\begin{itemize}
\item
\[ (Q' k^2)^{-1} \leq \mu(\{\Delta=k\}) \leq Q' k^{-2} \; ,\]
\item
if $k>0$, then
\[ |E(\gamma_k(\Delta))| \leq Q'' \; ,\]
\item
\[ E(|I|^p) \leq (Q'')^p\]
\end{itemize}
There exists $Q_0 > 1$ which only depends on $Q',Q'',p$ such that for
every $Q \geq Q_0$
\[ E(\log^+ (Q+F)) < \log Q \; .\]
\end{lem}
\begin{proof}
Without loss of generality we can replace $I(x)$ with $\max
(I(x),0)$, i.e. assume that $I(x)$ is a non-negative function.
Assume $Q>800$ and distinguish sets $X_Q := \{ x\in P :\: Q+\Delta(x) > 1\}$
and $Y_Q := \{ x\in P :\: |\Delta(x)|\leq Q\log Q - Q\}$.
\[ \int_{P\setminus Y_Q} \log^+(Q+F(x))\, d\mu(x) \leq \int_{X_Q \setminus
Y_Q} \log^+ (Q+F(x))\, d\mu(x) + \]
\[ + \int_{P\setminus (Y_Q\cup X_Q)}
\log^+ \left[ I(x)- Q(\log Q -2) \right]\, d\mu(x) \]
since on the complement of $X_Q \cup Y_Q$ we have $\Delta(x)< -Q\log Q+Q$.
To estimate that last term, denote $S=\{ x\in P\setminus (Y_Q\cup
X_Q) :\: I(x)\geq Q(\log Q-2) + 1\}$.
Using Jensen's inequality for conditional expectations
\[ \int_{P\setminus (Y_Q\cup X_Q)}
\log^+ \left[ I(x)-Q(\log Q - 2)\right]\, d\mu(x) \leq \int_S \log I(x)\, d\mu(x) =
\]
\[ = \mu(S) E(\log I(x)| x\in S) \leq \mu(S) \log E(I(x)|x\in S)\; .\]
Furthermore,
\[ E(I(x) | x\in S) \leq \frac{E(I)}{\mu(S)} \leq
\frac{Q''}{\mu(S)} \]
and
\[ \mu(S) \log E(I(x)|x\in S) \leq \mu(S) \log Q'' - \mu(S)\log(\mu(S))
\; .\]
Since $Q>800$, we have
\[ \frac{Q}{2}\log Q \cdot\mu(S) < Q(\log Q-2)\mu(S) < \int_S I(x)\,
d\mu(x) \leq Q''\]
which implies $\mu(S) < \frac{2Q''}{Q\log Q}$ and from the previous estimate
\[ \int_{P\setminus (Y_Q\cup X_Q)} \log^+ \left[ I(x)-Q(\log Q - 2)
\right]\,
d\mu(x) \leq \]
\[ \leq \frac {2Q''\log Q''}{Q\log Q} + \frac{2Q''}{Q\log Q}(\log Q +
\log\log Q - \log (2Q'')) \leq \frac{Q'_1}{Q} \]
for an appropriately chosen constant $Q'_1$ which only depends on
$Q''$.
\[\int_{P\setminus Y_Q} \log^+(Q+F(x))\, d\mu(x) \leq \frac{Q'_1}{Q}
+ \int_{X_Q \setminus Y_Q} \log (Q+\Delta(x))\, d\mu(x)
+\]
\[ +\int_{X_Q\setminus Y_Q} \frac{I(x)}{Q+\Delta(x)}\, d\mu(x) \leq
\frac{Q'_1}{Q} + \sum_{n> Q\log Q} Q' \log (Q + n) n^{-2} +
\frac{Q''}{Q} \]
\begin{equation}\label{equ:12za,1}
\leq \frac{Q'_1}{Q} + \frac{Q''}{Q} + 2Q' \sum_{n\geq Q\log Q} \log (n) n^{-2} \leq Q_1
Q^{-1}
\end{equation}
where the final estimate arises from an explicit integration of the
function $x^{-2}\log x$ and $Q_1$ only depends on $Q'$ and $Q''$.
Let $\lambda_Q$ denote the affine function tangent to $\log x$ at
$Q$, i.e. $\lambda_Q(x) = \log Q + \frac{x}{Q} -1$. Then
\begin{equation}\label{equ:12za,2}
\int_{Y_Q} \log^+(Q+F(x))\, d\mu(x) =
\end{equation}
\[ = \int_{Y_Q}
\lambda_Q(Q+F(x))\; d\mu(x) - \int_{Y_Q}
(\lambda_Q-\log^+)(Q+F(x))\, d\mu(x) \; .\]
As to the first term, we estimate
\[ \int_{Y_Q} \lambda_Q(Q+F(x))\; d\mu(x) = \]
\[ = \log(Q)\mu(Y_Q) +
\frac{1}{Q} \int_{Y_Q} F(x) \, d\mu(x) < \log(Q) + (2Q'') Q^{-1} \]
Taking this into account together with
estimates~(\ref{equ:12za,1}) and~(\ref{equ:12za,2}), we get
\begin{equation}\label{equ:12za,3}
\int_{P} \log^+ (Q + F(x)) \, d\mu(x) - \log Q < \frac{Q_1+
2Q''}{Q} -
\end{equation}
\[ - \int_{Y_Q}\left(\lambda_Q - \log^{+}\right)(Q + F(x)) \, d\mu(x)\; .\]
The rest of the proof will consist in estimating the final negative
term in~(\ref{equ:12za,3}) to show that it goes to $0$ as
$Q\rightarrow\infty$ more slowly than $O(Q^{-1})$ and hence prevails
for sufficiently large $Q$.
The values of $\lambda_Q(x)$ remain above $\log^+ (x)$ for $x>-Q\log Q+Q$.
Since $I(x)$ is non-negative and $\Delta(x) \geq -Q\log Q+Q$ on $Y_Q$,
$(\lambda_Q -\log^+)(Q+F(x))$ is non-negative on $Y_Q$. Choose $Z_Q :=
\{ x\in P :\: -3Q < \Delta(x) < -2Q \}$. Since $Q>800$, $Z_Q \subset
Y_Q$ and
\begin{equation}\label{equ:23fa,1}
\int_{Y_Q}\left(\lambda_Q - \log^{+}\right)(Q + F(x)) \, d\mu(x) \geq
\int_{Z_Q}\left(\lambda_Q - \log^{+}\right)(Q + F(x)) \, d\mu(x)
\end{equation}
For $Q>800$ and $x\in Z_Q$,
\[ \lambda_Q(Q+\Delta(x)) \geq \log Q - 3 > \frac{\log
Q}{2}\; .\]
At the same time, for $x\in Z_Q$,
\[ Q+F(x) = Q+\Delta(x)+I(x) < I(x)-Q\ .\]
Hence,
\[ \int_{Z_Q} \left(\lambda_Q - \log^+\right) (Q + F(x))\, d\mu(x) >
\]
\[ > \frac{\log Q}{2}\mu(Z_Q) + \int_{Z_Q} \left[ \frac{I(x)}{Q} -
\log^+(I(x)-Q)\right] \, d\mu(x) \;
.\]
By the hypothesis of the lemma, $\mu(Z_Q) > 2Q_2/Q$ for some
positive $Q_2$ where $Q_2$ depends only on $Q'$ and so
\[ \int_{Z_Q} \left(\lambda_Q - \log^+\right) (Q + F(x))\, d\mu(x) > \]
\[ > Q_2 \frac{\log
Q}{Q} + \int_{Z_Q} \left[ \frac{I(x)}{Q} - \log^+(I(x)-Q)\right]\, d\mu(x) \; .\]
In the integral term, the integrand is non-negative if $I(x)\leq Q+1$ or
$I(x) \geq Q^2$, keeping in mind that $Q>800$. For other values of
$x$, the lower bound by $-\log Q^2$ holds.
It follows that
\[ \int_{Z_Q} \left[ \frac{I(x)}{Q} - \log^+(I(x)-Q)\right]\, d\mu(x)
> - \log Q^2 \mu(\{ x :\: Q<I(x)<Q^2\}) \; .\]
Since
\[ \int_{Q<I(x)<Q^2} I^p(x)\, d\mu(x) >
Q^p \mu(\{ x :\: Q<I(x)<Q^2\})\; ,\]
while the left-hand side is at most $(Q'')^p$, one gets
\[ \mu(\{ x :\: Q<I(x)<Q^2\}) < \frac{(Q'')^p}{Q^p} \; .\]
Thus,
\[ \int_{Z_Q} \left(\lambda_Q - \log^+\right) (Q + F(x))\, d\mu(x) > Q_2 \frac{\log
Q}{Q} - 2 \frac{(Q'')^p}{Q^{p-1}} \frac{\log Q}{Q} > \frac{Q_2}{2} \frac{\log
Q}{Q}\]
for $Q\geq Q_0 = (\frac{4 (Q'')^p}{Q_2})^{\frac{1}{p-1}}$.
Hence for $Q\geq Q_0$, in view of~(\ref{equ:23fa,1}),
the negative term on the right-hand side of
estimate~(\ref{equ:12za,3}) dominates and that proves the assertion of
Lemma~\ref{lem:12za,1}.
\end{proof}
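The mechanism behind Lemma~\ref{lem:12za,1} can also be seen numerically: $\log^+$ clips the heavy negative tail to $0$ (a loss of roughly $\log Q$ per unit of mass), while the positive tail only gains logarithmically, so the expectation falls strictly below $\log Q$. A sketch with the toy distribution $P(\Delta=\pm k)\propto k^{-2}$ and $I=0$ (illustrative assumptions, not the paper's setting):

```python
import math

# Numerical illustration of Lemma lem:12za,1 with a toy distribution:
# Delta symmetric, P(Delta = +-k) ~ k^{-2} truncated at K, and I = 0.
# Every term of the sum below with k <= Q-2 contributes log(1 - k^2/Q^2) < 0,
# and every term with k >= Q-1 contributes log(Q+k) - 2 log Q < 0, so
# E(log^+(Q + Delta)) ends up strictly below log Q.

K = 10 ** 5
norm = 2.0 * sum(1.0 / k ** 2 for k in range(1, K + 1))

def log_plus(x):
    return math.log(x) if x > 1.0 else 0.0

def mean_log_plus(Q):
    """E(log^+(Q + Delta)) for the truncated symmetric k^{-2} distribution."""
    s = 0.0
    for k in range(1, K + 1):
        pk = (1.0 / k ** 2) / norm
        s += pk * (log_plus(Q + k) + log_plus(Q - k))
    return s

Q = 1000
val = mean_log_plus(Q)
print(val, "<", math.log(Q), ":", val < math.log(Q))
```

The gap is of order $(\log Q)/Q$, matching the size of the negative term extracted over $Z_Q$ in the proof.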
Lemma~\ref{lem:12za,1} will be used with $Q'=K_1$ and $Q''=K_2$ from
Proposition~\ref{prop:14za,1}. This defines a constant $Q_0$.
\paragraph{Supermartingale construction.}
Choose $N\geq 0$. Under the hypothesis of
Proposition~\ref{prop:14za,1}, define a stochastic process
$(\zeta_n^{(N)})_{n\geq N}$ as follows. If for some $N\leq k\leq n$,
$Z_k(x)<Q_0$, then pick the smallest such $k$ and set $\zeta_n^{(N)}(x) =
\log^+ Z_k(x)$. Otherwise, let $\zeta_n^{(N)}(x)=\log Z_n(x)$. In other
words, $\zeta_n^{(N)}$ is the process $\log^+ Z_n$ starting at $N$ and
stopped when $Z_n$ first dips below $Q_0$.
\begin{lem}\label{lem:23fa,1}
For every $N\geq 0$, $\zeta_n^{(N)}$ is a supermartingale with respect
to the filtration $({\cal F})_n$ and converges almost surely to a
finite limit.
\end{lem}
\begin{proof}
If $\zeta_{n-1}^{(N)} < \log Q_0$, then the process is stopped and its
conditional increment is $0$. Otherwise, if $\zeta_{n-1}^{(N)} = \log
Q \geq \log Q_0$, Lemma~\ref{lem:12za,1} can be applied to the
conditional increments. That is, we put $F,\Delta,I$ equal to $F_n,
\Delta_n, I_n$, respectively, and the probability space is the set
$S=\{ \omega :\: \zeta_{n-1}^{(N)}(\omega) = \log Q\}$ with normalized
measure $\mu$. Then the Lemma says that
$E(\zeta_{n}^{(N)}-\log Q|{\cal F}_{n-1})(\omega) < 0$ almost surely
on $S$.
Since $\zeta_{n}^{(N)}$ is non-negative by definition, it converges
almost surely by martingale theory.
\end{proof}
\paragraph{Proof of Proposition~\ref{prop:14za,1}.}
We will first show that $\lim_{n\rightarrow\infty} Z_n = +\infty$ with
probability $0$. Suppose otherwise. Then there is $N$ such that with
positive probability $Z_n(x) > Q_0$ for all $n\geq N$ and
$\lim_{n\rightarrow\infty} Z_n(x) = +\infty$. Considering
$\zeta_n^{(N)}$ we see that on this set $\zeta_n^{(N)}(x) = \log
Z_n(x)$ for all $x$ and thus diverges to $\infty$ contrary to the
assertion of Lemma~\ref{lem:23fa,1}.
Now pick an arbitrary $M>0$ and consider the process $\tilde{Z}_n =
Z_n + nM$. It is measurable with respect to the same filtration $({\cal
F})_n$ and evidently satisfies the hypothesis of
Proposition~\ref{prop:14za,1}, since we can just set $\tilde{I}_n = I_n
+ M$ for all $n$. The hypothesis of Proposition~\ref{prop:14za,1} is
satisfied with the same $K_1$ and with $K_2$ replaced by $K_2+M$.
Hence, the conclusion that
$\lim_{n\rightarrow\infty} \tilde{Z}_n = +\infty$ with probability $0$
remains valid.
But that means $Z_n < -Mn/2$ infinitely often almost surely, and so
\[ \liminf_{n\rightarrow\infty} \frac{Z_n}{n} \leq -M/2 \; .\]
Since $M$ was arbitrary, we further conclude that
\[ \liminf_{n\rightarrow\infty} \frac{Z_n}{n} = -\infty \]
almost surely and by considering the process $(-Z_n)$ instead of
$(Z_n)$, we also get that the upper limit of $\frac{Z_n}{n}$ is
$+\infty$ almost surely.
\subsection{The drift function}
Based on Lemma~\ref{lem:22xp,2} we can define combinatorial
displacements for all branches induced by $\tilde{H}$ by simply adding
the displacements for all branches of $\tilde{H}$ that occur in the
composition. It will then remain true that if a branch $\zeta$ of the
induced map has combinatorial displacement $p$, then $\tau^p\zeta$
belongs to the tower.
\begin{defi}\label{defi:22xp,1}
Given a map ${\cal J}$ induced by $\tilde{H}$, define its {\em drift
function} $\Delta_{\cal J}$ to be equal on the domain of any branch of
$\cal J$ to the combinatorial displacement of that branch.
\end{defi}
Define
\[ \gamma_n(x) := \left\{\begin{array}{ccc} 0 & \mbox{if} & x\geq n\\
x & \mbox{if} & -n < x < n\\
0 & \mbox{if} & x\leq -n \; .
\end{array} \right. \]
Fix one of the four pieces $K_{\pm}, L_{\pm}$ and denote it $P$. The
set $M_P$ consists of all probability measures $\mu$ on $P$ which can be
obtained as $\mu = \zeta_*( Q\lambda )$ where $\zeta$ is a branch of
$\tilde{J}^n$, for any $n\geq 1$, which
maps onto $P$, $\lambda$ is the
Lebesgue measure and $Q$ a normalizing constant equal to the
reciprocal of the area of the domain of $\zeta$.
Define the function $\Delta^0$ as follows: $\Delta^0(z) = n$ if $z\in
V_{k,n}$ for $k\neq 0,-1$ and $n\in \mathbb {Z}$, and $0$ otherwise. Then
$\Delta^0$ coincides with $\Delta_{\tilde{H}}$ except on the ``central
rows'' $V_{k,n}$, $k=0,-1$. The idea of the proposition to follow is
that $\Delta^0$ is a good approximation of the much more complicated
function $\Delta_{\cal J}$ and that $\Delta^0$ has certain helpful
properties.
\begin{prop}\label{prop:22xp,2}
If $P$ is one of $K_{\pm}, L_{\pm}$, then there exist positive $Q_1,
Q_2, Q_3$ so that for every $\mu\in M_P$:
\begin{itemize}
\item
\[ \int_P |\Delta_{\tilde{J}} - \Delta^0|^{\frac{3}{2}}\, d\mu < Q_1 \; ,\]
\item
for every $n\neq 0$,
\[ Q_2^{-1} |n|^{-2} < \mu(\{ x\in P :\: \Delta^0 = n\}) < Q_2 |n|^{-2} \; ,\]
\item
for all $n$
\[ |\int_P \gamma_n \circ \Delta^0\, d\mu | \leq Q_3\; .\]
\end{itemize}
\end{prop}
Observe that it would be enough to prove the first two properties for the Lebesgue
measure instead of $\mu$, since the densities $\frac{d\mu}{d\lambda}$
are bounded for all $\mu\in M_P$ in view of bounded distortion.
The last property deserves attention. Although
$\Delta^0$ is non-integrable in view of the second claim, its
integrals in a certain principal value sense remain bounded. Also,
this property would not be enough to prove for the Lebesgue measure,
as it involves cancellations.
\paragraph{Proof of Proposition~\ref{prop:22xp,2}.}
Let us start with the following general Lemma.
\begin{lem}\label{lem:24fp,1}
Let $\Phi$ be a holomorphic function defined on a neighborhood of $0$,
with the power series expansion at $0$ in the form
\[ \Phi(z) = z + az^3 + O(|z|^4) \]
with some complex $a\neq 0$. Choose $\tilde h$ to be its Fatou coordinate, so that
\[ \tilde h\circ\Phi(z) = \tilde h(z)+1 \] for all $z$ in an attracting petal of
$0$. Let $f,g$ be continuous $1$-periodic functions defined for $x>r$, for some
$r>0$, such that
$f(x) > g(x)$ for all $x$.
There exists $K$ so that for any $n>r$ the area of the set
\[ \tilde h^{-1} ( \{ x+iy :\: n<x<n+1, g(x) < y < f(x)\} ) \]
is bounded above by $Kn^{-3}$.
\end{lem}
\begin{proof}
It is well known (see also the proof of Lemma~\ref{lem:24fp,5})
that
\[ |(\tilde h^{-1})'(z)| = \tilde L |z|^{-3/2} + o(|z|^{-3/2})\; .\]
Hence, the preimage by $\tilde h$ of any square \[\{ x+iy :\: n<x<n+1,
c<y<c+1\}\] for $n$ large has area bounded by $K_1 n^{-3}$. By the
hypotheses of continuity and $1$-periodicity of $f,g$, any region of
the form $ \{ x+iy :\: n<x<n+1, g(x) < y < f(x) \} $ is contained in
the union of $m$ such squares with $m$ independent of $n$.
\end{proof}
Observe that under $\tilde h^{-1}$, the graphs of $f$ and $g$ are mapped to
curves invariant under $\Phi$ and tangent to the attracting direction
of $\Phi$ at $0$ and, conversely, any two such curves give rise to
functions $f,g$ which satisfy the hypotheses of
Lemma~\ref{lem:24fp,1}.
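As a quick numerical sanity check of the $n^{-3}$ bound (an illustration, not part of the argument), one can take the leading-order model $\tilde h^{-1}(w)=(-2w)^{-1/2}$ for $\Phi(z)=z+z^3$ and grid-integrate $|(\tilde h^{-1})'|^2$ over unit squares; the exact Fatou coordinate differs from this model by lower-order terms.

```python
# Numerical illustration of the n^{-3} area scaling in the lemma, for
# the leading-order model h(z) = -1/(2 z^2) of the Fatou coordinate of
# Phi(z) = z + z^3, whose inverse is h^{-1}(w) = (-2w)^{-1/2}, so that
# |(h^{-1})'(w)|^2 = (2|w|)^{-3}.  This model is an assumption made
# purely for the sake of the experiment.

def area_of_preimage(n, c=0.0, steps=200):
    """Grid-integrate |(h^{-1})'(w)|^2 over the unit square
    [n, n+1] x [c, c+1]; this equals the area of its h^{-1}-image."""
    total = 0.0
    dw = 1.0 / steps
    for i in range(steps):
        for j in range(steps):
            w = complex(n + (i + 0.5) * dw, c + (j + 0.5) * dw)
            total += (2 * abs(w)) ** -3 * dw * dw
    return total

a1, a2 = area_of_preimage(50), area_of_preimage(100)
print(a1 / a2)  # close to 2^3 = 8, matching the K * n^{-3} scaling
```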
\begin{lem}\label{lem:24fp,2}
Function $\Delta_{\tilde{C}}^{\frac{3}{2}}$ is integrable with respect to the
Lebesgue measure on $W_{0,0}$.
\end{lem}
\begin{proof}
By the definition of $\tilde{C}$, $\Delta_{\tilde{C}}(z)$ is equal to
the number of iterates of $\tilde{H} = \tau^{-1} G \tau$ needed to map
$z$ outside of $W_{0,0} \cup W_{-1,0}$, which is bounded above by
twice the number of iterates of $\tilde{H}^2$ needed to map $z$
outside of $W_{0,0}$. The map $\tilde{H}$ has a degenerate neutral fixed point
at $\tau^{-1}x_0$; in a neighborhood of the fixed point, $W_{0,0}$ is
just the complement of $\tau^{-1}(\Omega_-\cup\Omega_+)$, whose
boundary is invariant under $G^2$ if the neighborhood is small
enough. Once
$z$ leaves that fixed neighborhood of the fixed point, it will leave
$W_{0,0}$ after a bounded number of further iterations. If we apply
Lemma~\ref{lem:24fp,1} to $\Phi := \tilde{H}^{-1}$, we get that the
measure of the set $S_n$ of points $z$ which stay in the neighborhood for
exactly $n$ iterates of $\tilde{H}^2$ is bounded by $Kn^{-3}$. Since
$\Delta_{\tilde{C}}$ on $S_n$ is bounded by $n$ plus a constant $C$,
the integral of $\Delta_{\tilde{C}}^{\frac{3}{2}}$ over $W_{0,0}$ is bounded by
\[ 2C |W_{0,0}| + 2K \sum n^{-\frac{3}{2}} < \infty \; .\]
\end{proof}
\begin{lem}\label{lem:24fp,3}
For a certain $Q_4$
\[ \int_P |\Delta_{\tilde {\cal J}} - \Delta_{\tilde{H}}|^{\frac{3}{2}} \, d\lambda < Q_4 \; .\]
\end{lem}
\begin{proof}
Since $\tilde{\cal J}$ is either $\tilde{H}$, or $\tilde{C} \circ \tilde{H}$
if $\tilde{H}$ maps into $W_{0,0}\cup W_{-1,0}$,
\[ \Delta_{\tilde{\cal J}} = \Delta_{\tilde{H}} + \Delta_{\tilde{C}} \]
where we put $\Delta_{\tilde{C}}$ equal to $0$ outside the domain of
$\tilde{C}$.
\[ \int_P |\Delta_{\tilde{\cal J}} - \Delta_{\tilde{H}}|^{\frac{3}{2}}\, d\lambda =
\int_P |\Delta_{\tilde{C}}(\tilde{H}(z))|^{\frac{3}{2}}\, d\lambda(z) = \]
\[ =\int_{(W_{0,0}\cup W_{-1,0}) \cap P} |\Delta_{\tilde{C}}(w)|^{\frac{3}{2}}
|(\tilde{H}^{-1})'(w)|^2 \, d\lambda(w) \; .\]
The derivative of $\tilde{H}^{-1}$ is bounded on the central
pieces, since $\tilde{H}$ is univalent and maps onto a neighborhood of
their closure. Thus, $\Delta_{\tilde{C}}$ is multiplied by a bounded
factor and hence, in view of
Lemma~\ref{lem:24fp,2}, the integral is finite.
\end{proof}
\begin{lem}\label{lem:24fp,4}
There is $Q_5$ so that
\[ \int_P |\Delta_{\tilde{H}} - \Delta^0|^{\frac{3}{2}}\, d\lambda < Q_5 \; .\]
\end{lem}
\begin{proof}
Let $\chi_0$ be the characteristic function of the ``central rows'',
i.e. the union of pieces $V_{k,n}$, $k=0,-1$, $n\in \mathbb {Z}$.
Clearly,
\[ \Delta_{\tilde{H}} - \Delta^0 = \chi_0\Delta_{\tilde{H}}\; .\]
Recall that on $W_{0,k}, W_{-1,k}$, the combinatorial displacement is
just $k$. When $k$ is positive and even, then $G^{k}$ is used to map
$V_{0,k} \setminus W_{0,k}$ onto $\tau^{-p} B_+$ and the combinatorial
displacement is $k-p$. The dynamics on $V_{-1,k}$ is the mirror image
of this. On $V_{0,k}$,
$\Delta_{\tilde{H}}$ is $k-p(z)$ where $p(z)$ is zero unless $k$ is
positive and even, in which case it is given by the condition
$z \in G^{-k} \tau^{-p(z)} B_+$. Now $G^{k-1}$ maps $W_{0,k}$ with
bounded distortion into a neighborhood of $\tau x_0$, which is the
critical point of $G$. Since $G^k$ is univalent on $W_{0,k}$, it follows
that
the area of the set of $z\in V_{0,k}$ such that $p(z) = p$ is bounded by
$Q_1 |V_{0,k}| |p|^{-3}$. It follows that the integral of
$|\Delta_{\tilde{H}}|^{\frac{3}{2}}$ over $V_{0,k}$ is bounded by
$|V_{0,k}|(|k|^{\frac{3}{2}}+10 Q_1)$. By Lemma~\ref{lem:24fp,1},
$|V_{0,k}| \leq K k^{-3}$, so that the integral of $|\Delta_{\tilde{H}}|^{\frac{3}{2}}$
over the union of all pieces $V_{0,k}$, $k\in \mathbb {Z}$, is finite. The same
reasoning applies to the pieces $V_{-1,k}$.
\end{proof}
From Lemmas~\ref{lem:24fp,3} and~\ref{lem:24fp,4}, we derive the first
claim of Proposition~\ref{prop:22xp,2}.
We will now deal with the remaining two claims which are only
concerned with the function $\Delta^0$.
Start by defining sets $V_n = \bigcup_{k\not=0,-1} V_{k,n}$.
\begin{lem}\label{lem:24fp,5}
There exist $C,K_1>0$ such that for all $n\neq 0$
\[ \bigl| \lambda(V_n) - C |n|^{-2} \bigr| \leq K_1 |n|^{-5/2} \; .\]
\end{lem}
\begin{proof}
Consider the map $h^{-1}$ from the slit plane
$C_h$ onto $\Omega_-$ as described by item (4) of Theorem~\ref{prop}.
The measure of $V_n$ is equal to the integral of $|(h^{-1})'|^2$ over the set
$S_{n}\cup \bar S_n$, where
$\bar S_n=\{z: \bar z\in S_n\}$ is the mirror image of $S_n$,
and $S_n$ is a set in the upper half plane $\mathbb{H}^+$, which is a
``half-strip''
bounded by the horizontal line $\Im z=\pi$ and two transversal curves
$\log(\partial \Omega_-)+(n-1)\log\tau$,
$\log(\partial \Omega_-)+n\log\tau$.
To estimate the integral as $n\to \pm\infty$ we use the parabolic
fixed point theory applied to the map
$G^2(z)=z - A(z-x_0)^3 + \cdots$, where $A>0$.
The map $h_a:=(-2\log\tau)^{-1}h$ is an attracting Fatou
coordinate of the neutral fixed point $x_0$ of $G^2$:
$h_a\circ G^2(z)=\sigma\circ h_a(z)$,
for $z\in \Omega$ where $\sigma(w)=w+1$ is the shift.
According to the general theory,
\[h_a(z) = \phi_a(L(z-x_0)^{-2})\]
where
$L=(2A)^{-1/2}$
and
$\phi_a(w)=w+O(|w|^{1/2})$, as $|w|$ tends to $\infty$
in some sector $\Sigma_a=\{w: \Re w>c-\Im w\}$, $c>0$.
Similarly, there exists a repelling Fatou coordinate
$h_r$, such that
$h_r\circ G^2(z)=\sigma\circ h_r(z)$
for $z\in G^{-2}(\mathbb{H}^{\pm})$, and
\[h_r(z) = \phi_r(L(z-x_0)^{-2})\]
with the same constant $L$ as for $h_a$, and
$\phi_r(w)=w+O(|w|^{1/2})$, as $|w|$ tends to $\infty$
in a sector $\Sigma_r=\{w: \Re w<-c+\Im w\}$.
We have:
\[|(h_{a}^{-1})'(w)|^2=L/4 |w|^{-3}(1+O(|w|^{-1/2}))\]
as $|w|\to +\infty$ in $\Sigma_a$, and similarly
\[|(h_{r}^{-1})'(w)|^2=L/4 |w|^{-3}(1+O(|w|^{-1/2}))\]
as $|w|\to +\infty$ in $\Sigma_r$.
Note that the picture is mirror symmetric w.r.t. the real axis.
In particular, $h_a(\bar z)=\overline{h_a(z)}$ etc.
Since we apply $h^{-1}(w)$ as $\Re w\to \pm\infty$, we
introduce a pasting map (also called a horn map) $\Psi=h_r\circ h_a^{-1}$.
The map $\Psi$ has an analytic extension from $\Sigma_a\cap \Sigma_r$ to the
upper and lower half planes, it commutes with the shift $\sigma$,
and $\Psi(w)=w+O(|w|^{1/2})$ as $|\Im w|\to \infty$. It follows that
\begin{equation}\label{assy}
\Psi(w)=w+v_{\pm}+O(\exp(-\pi |\Im w|))
\end{equation}
uniformly in half-planes compactly contained in
$\mathbb{H}^{\pm}$, where $v_{\pm}$ are two complex conjugate vectors.
By the symmetry, the area $|V_n|$ of $V_n$
is twice the area of
\[ h^{-1}(S_n)=h_{a}^{-1}(S_n/(-2\log\tau))\; .\]
Notice that $S_{n}=S_0+n\log\tau=\cup_{m=0}^{+\infty} (P+(n\log \tau + i\pi
m))$ where
$P$ is a ``rectangle'' bounded by the curves
$\Im z=\pi$, $\Im z=2\pi$ and
$\log(\partial \Omega)-\log\tau$,
$\log(\partial \Omega)$.
We will denote $\hat S_n=(-2\log\tau)^{-1}S_n$ etc.
The sets $\hat S_n$, $\hat S_0$, $\hat P$ switch the half planes,
i.e. lie in $\mathbb{H}^-$.
Thus,
\[|V_n|=2\int\int_{\hat S_n} |(h_{a}^{-1})'(w)|^2 d\sigma_w
=2 \int\int_{\hat P}\sum_{m=0}^\infty
|(h_{a}^{-1})'(t-\frac{n}{2}-\frac{i\pi m}{2\log\tau})|^2 d\sigma_t \]
where $d\sigma_z$ denotes the area element of a complex variable $z$.
First, let $n\to -\infty$, so that
$\Re (t-\frac{n}{2}-\frac{i\pi m}{2\log\tau})\to +\infty$.
By the asymptotics of $(h_a^{-1})'(w)$ in $\Sigma_a$,
\[|V_n|=\frac{2L}{4}\, \times \]
\[ \times \int\int_{\hat P}
\sum_{m=0}^\infty\left[ \Bigl|t+\frac{|n|}{2}-\frac{i\pi m}{2\log\tau}\Bigr|^{-3} +
O\Bigl(\Bigl|t+\frac{|n|}{2}-\frac{i\pi m}{2\log\tau}\Bigr|^{-7/2}\Bigr)\right]\, d\sigma_t.\]
Since $t$ belongs to a bounded domain $\hat P$,
one can replace the sums by corresponding integrals
and arrive at the following asymptotic formula:
\[|V_n|=\frac{4L |\hat P| I \log\tau}{\pi}\frac{1}{|n|^{2}} + \Delta_1(n),\]
where $I=\int_{0}^\infty dx/(1+x^2)^{3/2}$,
and
$|\hat P|$ is the area of the
bounded domain $\hat P$, and $|\Delta_1(n)|<K_1|n|^{-5/2}$, for some
$K_1$ and all negative $n$.
As for $n$ positive, we can write
(assuming for definiteness that $n$ is even)
\[h_a^{-1}(\hat S_n)=h_{a}^{-1}(\hat S_0-n/2)=
h_{a}^{-1}\circ \sigma^{-n/2}(\hat S_0)= \]
\[ =G^{-n}\circ h_{a}^{-1}(\hat S_0)=
h_r^{-1}\circ \sigma^{-n/2}\circ \Psi(\hat S_0)=
h_r^{-1}(\Psi(\hat S_0)-n/2).\]
As $n\to +\infty$,
using the asymptotics for $(h_r^{-1})'(w)$ in $\Sigma_r$
and (\ref{assy}) for $\Psi$,
\[|V_n|=\frac{2L}{4}\, \times \]
\[ \times \int\int_{\hat P}
\sum_{m=0}^\infty \Bigl\{ \Bigl|t-\frac{n}{2}-\frac{i\pi m}{2\log\tau} +v_-
+O\bigl(\exp(-\tfrac{\pi m}{\log \tau})\bigr)\Bigr|^{-3} + \]
\[ + O\Bigl(\Bigl|t-\frac{n}{2}-\frac{i\pi m}{2\log\tau}+v_-
+O\bigl(\exp(-\tfrac{\pi m}{\log \tau})\bigr)\Bigr|^{-7/2}\Bigr)\Bigr\}
\Bigl( 1+O\bigl(\exp(-\tfrac{\pi m}{\log\tau})\bigr)\Bigr)\, d\sigma_t.\]
One rewrites it as
\[|V_n|=\frac{L}{2}\, \times \]
\[ \times \int\int_{\hat P}
\sum_{m=0}^\infty \Bigl\{ \Bigl|t-\frac{n}{2}-\frac{i\pi m}{2\log\tau}+v_-\Bigr|^{-3} +
O\Bigl(\Bigl|t-\frac{n}{2}-\frac{i\pi m}{2\log\tau}\Bigr|^{-7/2}\Bigr)\Bigr\}
\Bigl( 1+O\bigl(\exp(-\tfrac{\pi m}{\log\tau})\bigr)\Bigr)\, d\sigma_t \]
\[=
\frac{L}{2}\, \times \]
\[ \times \int\int_{\hat P}
\sum_{m=0}^\infty \Bigl\{ \Bigl|t-\frac{n}{2}-\frac{i\pi m}{2\log\tau}+v_-\Bigr|^{-3}
\Bigl( 1+O\bigl(\exp(-\tfrac{\pi m}{\log\tau})\bigr)\Bigr)
+ O\Bigl(\Bigl|t-\frac{n}{2}-\frac{i\pi m}{2\log\tau}\Bigr|^{-7/2}\Bigr)\Bigr\}\, d\sigma_t. \]
Now we use
the invariance of the Lebesgue measure under shifts and get the same
asymptotic formula as for $n\to -\infty$.
\end{proof}
Now Lemma~\ref{lem:24fp,5} implies the second claim of
Proposition~\ref{prop:22xp,2}.
To address the last claim, first define
\[ c_n(\mu) = \frac{\mu(V_n)}{\lambda(V_n)} \; .\]
Then
\begin{equation}\label{equ:25fp,1}
\int_P \gamma_N \circ \Delta^0 d\mu = \sum_{n=-N}^N n c_n(\mu)
\lambda(V_n) \; .
\end{equation}
To bound this quantity uniformly, we will need certain properties of the
coefficients $c_n(\mu)$ for $\mu \in M_P$. As a result of bounded
distortion, $|\log c_n(\mu)|$ can be bounded independently of $\mu$,
but we need stronger properties.
\begin{lem}\label{lem:25fp,1}
There exists a constant
$Q$ so that for any integer $n$ with $|n| > 1$ and any $\mu\in M_P$:
\begin{itemize}
\item
\[ |c_n(\mu) - c_{n+1}(\mu)| < Q |n|^{-3/2} \]
\item
\[ |c_n(\mu) - c_{-n}(\mu)| < Q |n|^{-1/2} \; .\]
\end{itemize}
\end{lem}
\begin{proof}
The basic fact we will use, which follows from
Proposition~\ref{prop:22xp,1}, is that the functions $\log
\frac{d\mu}{d\lambda}(z)$ are bounded and Lipschitz-continuous,
uniformly for all $\mu\in M_P$.
For any $k>0$ the set $V_{k,n}\cup V_{k,n+1}$ has diameter bounded
by $Q_1 |n|^{-3/2}$. This follows since the derivative of the Fatou
coordinate $h^{-1}(w)$ is asymptotically $|w|^{-3/2}$. By the uniform
Lipschitz property, $\log \frac{d\mu}{d\lambda}$ differs by no more
than $O(|n|^{-3/2})$ between any two points of this set and hence
\[ (1 - Q_2 |n|^{-3/2} ) \frac{
\mu(V_{k,n+1})}{\lambda(V_{k,n+1})} \leq
\frac{\mu(V_{k,n})}{\lambda(V_{k,n})} \leq (1 + Q_2 |n|^{-3/2})
\frac{\mu(V_{k,n+1})}{\lambda(V_{k,n+1})} \; .\]
Since $c_n(\mu), c_{n+1}(\mu)$ are just averages of these quantities
for various $k$,
\[ (1-Q_2 |n|^{-3/2}) \leq \frac{c_{n+1}(\mu)}{c_n(\mu)} \leq (1+Q_2
|n|^{-3/2}) \; .\]
Since $c_n(\mu)$ are uniformly bounded above, the first claim
follows.
To see the second claim, observe that $V_{n}$ and $V_{-n}$ are in
a disk centered at the fixed point with radius $O(|n|^{-1/2})$. This
follows again from the asymptotics $|w|^{-\frac{1}{2}}$ for the Fatou
coordinate $h^{-1}(w)$.
The uniform Lipschitz estimate then says that
\[ |\frac{d\mu}{d\lambda}(z_1) - \frac{d\mu}{d\lambda}(z_2)| \leq Q_3
|n|^{-\frac{1}{2}} \]
if $z_1 \in V_n$ and $z_2\in V_{-n}$. Since $c_n$ can be bounded above
and below by the extrema of $\frac{d\mu}{d\lambda}(z_1)$ for $z_1\in
V_n$ and $c_{-n}$ can be expressed in an analogous fashion, the
second claim follows.
\end{proof}
Let us now denote
\[ B_N = \sum_{n=1}^N n \lambda(V_n) \]
for $N>0$, $B_N = \sum_{n=N}^{-1} n \lambda(V_n)$ for $N<0$, and
$B_0=0$.
Applying Abel's transformation to the series in
Equation~(\ref{equ:25fp,1}), we get
\[ \int_P \gamma_N \circ \Delta^0 d\mu = \]
\[ = \sum_{n=1}^{N-1}
B_n(c_{n+1}(\mu) - c_n(\mu)) +
\sum_{n=-N+1}^{-1} B_n (c_{n-1}(\mu)-c_n(\mu)) +
B_N c_N(\mu) + B_{-N} c_{-N}(\mu) \; .\]
The first sum can be bounded by
\[ Q_1 \sum_{n=1}^{N-1} |B_n| n^{-3/2} \]
by Lemma~\ref{lem:25fp,1}. Since $|B_n| < Q_2 \log n$ by
Lemma~\ref{lem:24fp,5}, the sum is uniformly bounded for all $N$ and
$\mu$. The second sum is dealt with in the same way.
Then
\[ c_{-N}(\mu) B_{-N} + c_N(\mu) B_N = (c_{-N}(\mu)-c_N(\mu)) B_{-N} +
c_N(\mu)(B_{-N}+B_{N}) \; .\]
By Lemma~\ref{lem:25fp,1}, $(c_{-N}(\mu)-c_N(\mu)) B_{-N}$ goes to $0$ with
$N$. At the same time, $B_{-N}+B_{N}$ are bounded independently of $N$
by Lemma~\ref{lem:24fp,5}, since the leading terms $C |n|^{-2}$ in
$\lambda(V_n)$ give rise to exactly canceling contributions and the
$O(|n|^{-5/2})$ corrections after multiplying by $n$ result in
convergent series.
This ends the proof of Proposition~\ref{prop:22xp,2}.
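The cancellation just used can be illustrated numerically with a model sequence satisfying the conclusion of Lemma~\ref{lem:24fp,5}; the constants $0.3$ and $0.1$ below are arbitrary choices within the allowed $O(|n|^{-5/2})$ corrections.

```python
# Model check of the cancellation: lam(n) = |n|^{-2} + c*|n|^{-5/2},
# with different constants c on the positive and negative side
# (c = 0.3 and c = 0.1 are arbitrary illustrative values, and the
# leading constant C is normalized to 1).

def lam(n):
    c = 0.3 if n > 0 else 0.1
    return abs(n) ** -2 + c * abs(n) ** -2.5

def B(N):
    """Partial sums B_N as in the text (B_0 = 0)."""
    if N > 0:
        return sum(n * lam(n) for n in range(1, N + 1))
    if N < 0:
        return sum(n * lam(n) for n in range(N, 0))
    return 0.0

# B(N) alone grows like log N, but the leading C*|n|^{-2} terms cancel
# in the symmetric combination, which therefore stays bounded:
sums = [B(N) + B(-N) for N in (10, 100, 1000)]
print(sums)
```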
\section{Main results: Proofs}
\paragraph{The level process.}
For $x\in K_+$ and $n>0$ let us define $Z_n$ to be the combinatorial
displacement of the branch of ${\tilde {\cal J}}^n$ whose domain contains
$x$. For Lebesgue-a.e. $x$, $Z_n$ are thus defined for all positive
$n$. We may set $Z_0$ to be $0$ everywhere. If ${\tilde {\cal J}}^n$ maps $x$
into a piece $P$ (where $P$ may be any of the four pieces $K_{\pm},
L_{\pm}$), then clearly $Z_{n+1} = Z_{n} + \Delta_{\tilde{\cal J}}({\tilde {\cal J}}^n(x))$.
The sequence $(Z_n)_{n\geq 0}$ may be viewed as a stochastic process
on a probabilistic space $K_+$ with probability given by the Lebesgue
measure on $K_+$ normalized to total mass $1$.
To this process we can apply Proposition~\ref{prop:14za,1}, because
its hypotheses are satisfied in view of
Proposition~\ref{prop:22xp,2}.
\paragraph{The combinatorial displacements for the iterates of
$\Lambda$.}
Recall the map $\Lambda$, which is equal to ${\tilde J}$ or ${\tilde J}^2$ on
various pieces of its domain. At almost every point $z$ of $K_+$, we
have a sequence $n_m$ where $\Lambda^m = {\tilde J}^{n_m}$ on a
neighborhood of $z$. In particular, the combinatorial displacement of
$\Lambda^m$ is $Z_{n_m}$. Also, $n_{m+1} - n_{m} \leq 2$.
\begin{prop}\label{prop:2jp,2}
For almost every $x\in K_+$ both $\lim\inf_{m\rightarrow\infty} Z_{n_m}(x) =
-\infty$ as well as $\lim\sup_{m\rightarrow\infty} Z_{n_m}(x) = +\infty$
hold true.
\end{prop}
Suppose this is not the case and the first statement fails. Then for a
set $S$ of positive measure, $Z_{n_m}(x) \geq M$ for all $m$ and $x\in
S$. Let $x_0$ be a density point of $S$; by
Proposition~\ref{prop:14za,1},
$\lim\inf_{n\rightarrow\infty} Z_{n}(x_0) = -\infty$. Choose $n$ so that
$Z_n(x_0) < M$. Let $U_n$ be the domain of the branch of ${\tilde J}^n$
which contains $x_0$. By the bounded distortion of ${\tilde J}$,
$U_n$ for all such $n$ form a basis of
neighborhoods of $x_0$ such that $|U_n| \geq \kappa (\mbox{diam}\, U_n)^2$ for
a constant $\kappa > 0$. By the bounded distortion of $\tilde J$, each
$U_n$ contains a fixed proportion of points $x$ for which $Z_{n+1}(x)
< Z_n(x)$. But for all such $x$ either $n$ or $n+1$ is in the
subsequence $n_m$, so none of them belongs to $S$ and $x_0$ is not a
density point.
$\lim\sup_{m\rightarrow\infty} Z_{n_m}(x) = +\infty$ is proved by
contradiction in the same way.
\paragraph{Theorem~\ref{A} and the symmetry of the tower}
Recall that $H$ is a limiting map introduced in Theorem~\ref{prop}.
Here we prove a statement which is stronger than Theorem~\ref{A}:
\begin{theo}\label{rec}
There is a map $\Phi$ defined on a countable union of disjoint
open topological disks whose complement in $\mathbb{C}$ has measure $0$,
and such that on each connected component of its domain $\Phi$ belongs
to the tower dynamics of $H$, with the following property:
\begin{itemize}
\item almost every point in the plane
visits any neighborhood of zero and infinity under the iterates of
$\Phi$,
\item
for any point $x$ of the Julia set of $H$ which is in the domain of
$\Phi^p$, $p>0$, $\Phi^p$ is an iterate of $H$ on a neighborhood of
$x$.
\end{itemize}
\end{theo}
{\bf Remark.} It seems to be natural to call the dynamics of $H$ with
such properties {\it metrically symmetric}.
The map $\Phi$ is defined to be associated, in the sense of
Definition~\ref{defi:21ga,1}, to the induced map $\Lambda$
introduced in Proposition~\ref{prop:2jp,1}.
Proposition~\ref{prop:14za,1} asserts that for almost every point its
combinatorial displacements vary from $-\infty$ to $+\infty$. Recalling
Lemma~\ref{lem:22xp,2}, for almost every point $z$ there is a sequence of
iterates in the maximal tower which map $z$ into $\tau^{k_n} P_{k_n}$
where $k_n \rightarrow +\infty$ and each $P_{k_n}$ is one of the four
pieces $K_{\pm}, L_{\pm}$. Since all $P_{k_n}$ are contained in a
fixed ring centered at $0$, this means that the images of $z$ under those
iterates tend to $\infty$. But similarly there is a sequence $l_n
\rightarrow -\infty$ with the same property, and the images of $z$ under
those iterates tend to $0$.
\paragraph{Finite order Feigenbaum maps: Corollary~\ref{finite}}
We use mainly Theorem~\ref{prop}, see also~\cite{LSw1}.
The Julia set $J(H)$ of $H$ is a compact set.
Fix a neighborhood $V$ of $J(H)$.
To show that the area $|J(H^{(\ell)})|$ tends to zero,
it is enough to show that $J(H^{(\ell)})\subset V$
for all $\ell$ large enough.
To this end, for any point $w$ outside of $V$ there is
a minimal $j\ge 0$, such that $H^j(w)$ is outside of
the closure of $\Omega$. Since $H^{(\ell)}$ converges to $H$
uniformly on compact sets in $\Omega$, for $\ell$ large enough
$(H^{(\ell)})^j(w)$ is also outside of the closure
of $\Omega$. On the other hand, for every $\ell$,
there is a maximal polynomial-like extension
of $H^{(\ell)}$ to a domain $\Omega_\ell$ onto a slit
complex plane~\cite{Epstein}.
The boundary of $\Omega_\ell$ is invariant
under $G_\ell^{-1}$, where $G_\ell=H^{(\ell)}\circ \tau_\ell^{-1}$.
Then $G_\ell^{-1}$ converges to $G^{-1}$ in $\mathbb{H}^{\pm}$ uniformly
on compacts. It follows that the boundaries of
$\Omega_\ell$ converge uniformly to the boundary of $\Omega$.
Therefore, $(H^{(\ell)})^j(w)$ is outside of
$\Omega_\ell$ for $\ell$ large enough, i.e. $w$
is not in the Julia set $J(H^{(\ell)})$.
This proves Corollary~\ref{finite}. However, on the question of
whether maps of finite order have Julia sets of zero measure, our
method sheds little light, since it is based on the infinite variance of
the drift function, which does not hold in any finite order case.
\section{Introduction}\label{sec_01}
Let us start with a worked-out example that motivates the simultaneous linearization theorem.
\paragraph{Cauliflowers.}
In the family of quadratic maps, the simplest parabolic fixed point is given by $g(w)=w+w^2$. (Its Julia set is called the \textit{cauliflower}.) Now we consider its perturbation of the form $f(w)=\lambda w +w^2$ with $\lambda \nearrow 1$. According to \cite[\S 8 and \S 10]{MiBook}, we have the following fact:
\begin{prop}[K\"onigs and Fatou coordinates]\label{prop_Kf_Kg}
Let $K_f$ and $K_g$ be the filled Julia sets of $f$ and $g$. Then we have the following:
\begin{enumerate}
\item There exists a unique holomorphic branched covering map $\phi_f:K_f^\circ \to \mathbb{C}$ satisfying the Schr\"oder equation $\phi_f(f(w))=\lambda \phi_f(w)$ and $\phi_f(0)=\phi_f(-\lambda/2)-1=0$. $\phi_f$ is univalent near $w=0$.
\item There exists a unique holomorphic branched covering map $\phi_g:K_g^\circ \to \mathbb{C}$ satisfying the Abel equation $\phi_g(g(w))=\phi_g(w)+1$ and $\phi_g(-1/2)=0$. $\phi_g$ is univalent on a disk $|w+r|<r$ with small $r>0$.
\end{enumerate}
\end{prop}
Note that $-\lambda/2$ and $-1/2$ are the critical points of $f$ and $g$ respectively.
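As a numerical aside (not part of the original text), the K\"onigs coordinate of $f$ can be approximated by the classical limit $f^n(w)/\lambda^n$, normalized here so that $\phi_f'(0)=1$ rather than $\phi_f(-\lambda/2)=1$ (dividing by the value at the critical point would recover the normalization of the proposition), and the Schr\"oder equation can be checked directly:

```python
# Numerical check of the Schroder equation for f(w) = lam*w + w^2,
# using the classical Konigs limit phi(w) = lim_n f^n(w)/lam^n.
# Note: this phi is normalized by phi'(0) = 1, not by the value at
# the critical point -lam/2 as in the proposition.

def f(w, lam):
    return lam * w + w * w

def konigs(w, lam, n=200):
    """Approximate the Konigs coordinate by f^n(w)/lam^n."""
    scale = 1.0
    for _ in range(n):
        w = f(w, lam)
        scale *= lam
    return w / scale

lam = 0.9
w = 0.05 + 0.02j            # a point well inside the basin of 0
lhs = konigs(f(w, lam), lam)
rhs = lam * konigs(w, lam)
print(abs(lhs - rhs))       # should be tiny
```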
\paragraph{Observation.}
Set $\tilde{w}=\phi_f(w)$. The proposition above claims that the action of $f|_{K_f^\circ}$ is semiconjugated to $\tilde{w} \mapsto \lambda \tilde{w}$ by $\phi_f$. Let us consider the M\"obius map $W=S_f(\tilde{w})=\frac{\lambda(\tilde{w}-1)}{(\lambda-1)\tilde{w}}$ that sends $\skakko{0, 1, \lambda}$ to $\skakko{\infty,0,1}$ respectively. Under conjugation by $S_f$, the action of $\tilde{w} \mapsto \lambda \tilde{w}$ becomes $W \mapsto W/\lambda+1$. Let us set $W=\Phi_f(w):=S_f \circ \phi_f(w)$. Now we have
$$
\Phi_f(f(w)) ~=~ \Phi_f(w)/\lambda + 1 \text{~~~and~~~} \Phi_f(-\lambda/2) ~=~ 0.
$$
On the other hand, by setting $W=\Phi_g(w):=\phi_g(w)$, we can view the action of $g|_{K_g^\circ}$ as $W \mapsto W+1$. Thus we have
$$
\Phi_g(g(w)) ~=~ \Phi_g(w) + 1 \text{~~~and~~~} \Phi_g(-1/2) ~=~ 0.
$$
If $\lambda$ tends to $1$, that is, $f \to g$, the semiconjugated action in the $W$-coordinate converges uniformly on compact sets. It would therefore be natural to expect that $\Phi_f$ tends to $\Phi_g$. However, as one can see by referring to the proofs in \cite[\S 8 and \S 10]{MiBook}, $\phi_f$ and $\phi_g$ are constructed in completely different ways, so we cannot conclude a priori that $\Phi_f \to \Phi_g$.
\begin{figure}[htbp]
\centering{\vspace{4cm}
}
\caption{Semiconjugation inside the filled Julia sets of cauliflowers.}
\label{fig_Phi}
\end{figure}
But there is another piece of evidence that supports this observation. Figure 1 shows the equipotential curves of $\phi_f$ and $\phi_g$ in the filled Julia sets. We can find similar patterns, and it seems that one converges to the other. Actually, we have the following:
\begin{thm}\label{thm_convergence}
For any compact set $E \subset K_g^\circ$,
\begin{enumerate}[(1)]
\item $E \subset K_f^\circ$ for all $f \approx g$; and
\item $\Phi_f \to \Phi_g$ uniformly on $E$ as $f \to g$.
\end{enumerate}
\end{thm}
Here $f \approx g$ means that $f$ is sufficiently close to $g$; equivalently, $\lambda$ is sufficiently close to $1$. (See \cite[Theorem 5.5]{Ka} for a more general version of this proposition.) The proof of this theorem is given in Section 5, by using the \textit{simultaneous linearization theorem}.
\section{Simultaneous linearization}\label{sec_02}
In this section we state the simultaneous linearization theorem. We first generalize the cauliflower setting above:
\paragraph{Perturbation of parabolics.}
Let $f$ be an analytic map defined on a neighborhood of $0$ in $\bar{\mathbb{C}}$ which is tangent to identity at $0$. That is, $f$ near $0$ is of the form
$$
f(w) ~=~ w + A w^{m+1}+O(w^{m+2})
$$
where $A \neq 0$ and $m \in \mathbb{N}$. By the linear coordinate change $w \mapsto A^{1/m}w$, we may assume that $A=1$. In the theory of complex dynamics such a germ appears when we consider the iteration of local dynamics near parabolic periodic points, and it plays a very important role. (See \cite[\S 10]{MiBook} and \cite{Sh} for example.) Now we consider a perturbation $f_\epsilon \to f$ of the form
$$
f_\epsilon(w) ~=~ \varLambda_\epsilon w \, (1+ w^{m}+O(w^{m+1}))
$$
with $\varLambda_\epsilon \to 1$ as $\epsilon \to 0$. By taking the branched coordinate change $z=-\varLambda_\epsilon^{m}/(m w^m)$ and setting $\tau_\epsilon:=\varLambda_\epsilon^{-m}$, we have
\begin{align*}
f_\epsilon(z) & ~=~ \tau_\epsilon z +1 +O(|z|^{-1/m}) \\
\longrightarrow
f_0(z) & ~=~ z +1 +O(|z|^{-1/m})
\end{align*}
uniformly near $z=\infty$ on the Riemann sphere $\hat{\mathbb{C}}$. The simultaneous linearization theorem will give partially linearizing coordinates of $f_\epsilon$ that depend continuously on $\epsilon$ when $\tau_\epsilon \to 1$ non-tangentially to the unit circle.
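The leading-order computation behind this coordinate change can be sketched as follows (only the displayed orders are tracked):
\begin{align*}
f_\epsilon(w)^m & ~=~ \varLambda_\epsilon^m w^m\,(1+w^{m}+O(w^{m+1}))^m
~=~ \varLambda_\epsilon^m w^m\,(1+m w^{m}+O(w^{m+1})),\\
-\frac{\varLambda_\epsilon^{m}}{m\, f_\epsilon(w)^m} & ~=~ -\frac{1}{m w^m}\,\bigl(1-m w^{m}+O(w^{m+1})\bigr)
~=~ \tau_\epsilon z + 1 + O(w),
\end{align*}
where we used $-1/(m w^m) = \varLambda_\epsilon^{-m} z = \tau_\epsilon z$; finally $w=O(|z|^{-1/m})$ by the definition of $z$.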
Let us formalize non-tangential access to $1$ in the complex plane. Following C.~McMullen, for a variable $\tau \in \mathbb{C}$ converging to $1$, we say $\tau \to 1$ \textit{radially} (or more precisely, \textit{$\alpha$-radially}) if $\tau$ satisfies $|\arg(\tau-1)| \le \alpha$ for some fixed $\alpha \in [0, \pi/2)$.
\paragraph{Ueda's modulus.}
Let us consider a continuous family of complex numbers $\skakko{\tau_\epsilon}$ with $\epsilon \in [0,1]$ such that $|\tau_\epsilon|\ge 1$ and $\tau_\epsilon ~\to~ 1$ $\alpha$-radially as $\epsilon \to 0$. For simplicity we assume that $\tau_\epsilon = 1$ iff $\epsilon=0$. Set $\ell_\epsilon(z):=\tau_\epsilon z +1$, which is an isomorphism of the Riemann sphere $\hat{\mathbb{C}}$. If $\epsilon>0$, then $b_\epsilon:=1/(1-\tau_\epsilon)$ is the repelling fixed point of $\ell_\epsilon$ with $\ell_\epsilon(z)-b_\epsilon=\tau_\epsilon(z-b_\epsilon)$. Thus the function
$$
N_\epsilon(z) ~:=~ |z-b_\epsilon|-|b_\epsilon|
$$
satisfies the uniformly increasing property
$$
N_\epsilon(\ell_\epsilon(z)) ~=~ |\tau_\epsilon| N_\epsilon(z)+ \frac{|\tau_\epsilon|-1}{|\tau_\epsilon-1|} ~\ge~ |\tau_\epsilon| N_\epsilon(z) + \cos \alpha.
$$
Similarly, if $\epsilon=0$, the function
$$
N_0(z)~:=~ \sup\skakko{\mathrm{Re}\, (e^{i\theta}z)~:~|\theta|<\alpha}
$$
also has the corresponding property
$$
N_0(\ell_0(z)) ~\ge~ N_0(z) + \cos \alpha.
$$
In both cases, set
$$
\V_\epsilon(R) ~:=~ \skakko{z \in \mathbb{C}: N_\epsilon(z) \ge R}
$$
for $R > 0$. One can check that $N_\epsilon(z) \le |z|$ and $\V_\epsilon(R) \subset \mathbb{B}(R):=\skakko{z \in \mathbb{C}:|z| \ge R}$ for all $\epsilon \in [0,1]$.
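A direct numerical check of the uniformly increasing property is straightforward; the sample values of $\tau$ and $\alpha$ below are illustrative choices, not taken from the text.

```python
import cmath
import math

# Check of the uniformly increasing property of N_eps for the model
# map ell(z) = tau*z + 1.  The sample tau and alpha are arbitrary
# illustrative values with |tau| > 1 and |arg(tau-1)| <= alpha.

alpha = 0.3
tau = 1 + 0.1 * cmath.exp(0.25j)   # arg(tau-1) = 0.25 <= alpha

b = 1 / (1 - tau)                  # repelling fixed point of ell
def N(z):
    return abs(z - b) - abs(b)

def ell(z):
    return tau * z + 1

z = 5 - 2j
lhs = N(ell(z))
rhs_exact = abs(tau) * N(z) + (abs(tau) - 1) / abs(tau - 1)
print(abs(lhs - rhs_exact))        # the identity: essentially 0
print(lhs - (abs(tau) * N(z) + math.cos(alpha)))   # nonnegative margin
```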
We will establish:
\begin{thm}[Simultaneous linearization]\label{thm_ueda}
Let $\skakko{f_\epsilon: \epsilon \in [0,1] }$ be a family of holomorphic maps on $\mathbb{B}(R)$ such that, as $\epsilon \to 0$, we have uniform convergence on compact sets of maps of the form
\begin{align*}
f_\epsilon(z) & ~=~ \tau_\epsilon z +1 +O(1/|z|^\sigma) \\
\longrightarrow
f_0(z) & ~=~ z +1 +O(1/|z|^\sigma)
\end{align*}
for some $\sigma \in (0,1]$ and $\tau_\epsilon \to 1$ $\alpha$-radially. If $R \gg 0$, then:
\begin{enumerate}[(1)]
\item For any $\epsilon \in [0,1]$ there exists a holomorphic map $u_\epsilon:\mathbb{V}_\epsilon(R) \to \mathbb{C}$ such that
$$
u_\epsilon(f_\epsilon(z)) ~=~ \tau_\epsilon u_\epsilon(z)+1.
$$
\item For any compact set $K$ contained in $\V_\epsilon(R)$ for all $\epsilon \in [0,1]$, $u_\epsilon \to u_0$ uniformly on $K$.
\end{enumerate}
\end{thm}
This theorem is a mild generalization of Ueda's theorem in \cite{Ue}, which deals with the case $\sigma=1$. (See also \cite{Ue2}.)
It plays a crucial role in showing the continuity of the tessellation of the filled Julia set for hyperbolic and parabolic quadratic maps; see \cite{Ka}. C.~McMullen showed that there exist quasiconformal linearizations with a much wider domain of definition. Indeed, $\tau_\epsilon \to 1$ may be tangent to the unit circle (\textit{horocyclic} in his terminology). See \cite[\S 8]{Mc}.
\paragraph{Remark on the domain of convergence.}
We can take such a compact subset $K$ above in
\begin{align*}
\Pi(R) &~:=~ \mathbb{C}-\skakko{e^{\theta i}z : \mathrm{Re}\, z < R,~|\theta|\le \alpha} \\
&~~=~ \skakko{z \in \mathbb{C}: \mathrm{Re}\,(z-R') \ge |z-R'| \sin\alpha},
\end{align*}
which is a closed sector with vertex at $z=R'=R/(\cos \alpha)>0$. In fact, for any $R>0$ and $\epsilon \in [0,1]$, $\Pi(R)$ is contained in $\V_\epsilon(R)$. One can check this as follows: the complement of $\V_\epsilon(R)$ is contained in $\skakko{e^{\theta i}z : \mathrm{Re}\, z < R,~\theta=\arg(-b_\epsilon)}$. Since $|\arg(-b_\epsilon)| \le \alpha$, we have the claim.
In the next section we give a proof of this theorem, which also serves as an alternative proof of Ueda's simultaneous linearization when $\sigma=1$. His original proof given in \cite{Ue} uses a technical difference equation, which makes the proof beautiful and the statement a little more detailed. Here we present a simplified proof based on the argument of \cite[Lemma 10.10]{MiBook} (whose idea can be traced back at least to Leau's work on the Abel equation \cite{L}) and an estimate on polylogarithm functions given in Section 4.
\section{Proof of the theorem}\label{sec_03}
Let us start with a couple of lemmas. Set $\delta := (\cos \alpha)/2>0$. We first check:
\begin{lem}\label{lem_ueda1}
If $R \gg 0$, there exists $M>0$ such that $|f_\epsilon(z)-(\tau_\epsilon z+1)| \le M/|z|^\sigma$ on $\mathbb{B}(R)$ and $N_\epsilon(f_\epsilon(z)) \ge N_\epsilon(z)+ \delta$ on $\mathbb{V}_\epsilon(R)$ for any $\epsilon \in [0,1]$.
\end{lem}
\begin{pf}
The first inequality and the existence of $M$ are obvious. By replacing $R$ with a larger value, we have $|f_\epsilon(z)-(\tau_\epsilon z+1)| \le M/R^\sigma < \delta$ on $\mathbb{B}(R)$. Then
$$
N_\epsilon(f_\epsilon(z)) ~\ge~ N_\epsilon (\ell_\epsilon(z)) -\delta ~\ge~ N_\epsilon(z)+\delta.
$$
\hfill $\blacksquare$
\end{pf}
Let us fix such an $R\gg 0$. Then the lemma above implies that $f_\epsilon(\V_\epsilon(R)) \subset \V_\epsilon(R)$. Moreover, since $N_\epsilon(z) \le |z|$, we have
\begin{equation}\label{eq_1}
|f_\epsilon^n(z)| ~\ge~ N_\epsilon(f_\epsilon^n(z)) ~\ge~ N_\epsilon(z)+n\delta ~\ge~ R+n\delta
~\to~ \infty. \tag{2.1}
\end{equation}
Thus $\V_\epsilon(R)$ is contained in the basin of infinity and uniformly attracted to $\infty$ in spherical metric of $\hat{\mathbb{C}}$. In particular, this convergence to $\infty$ is uniform on $\Pi(R)$ for any $\epsilon$.
Next we show a key lemma for the theorem:
\begin{lem}\label{lem_key}
There exists $C >0$ such that for any $\epsilon \in [0,1]$ and $z_1,~z_2 \in \mathbb{B}(2S)$ with $S > R$, we have:
$$
\abs{\frac{f_\epsilon(z_2)-f_\epsilon(z_1)}{z_2-z_1}-\tau_\epsilon} ~\le~ \frac{C}{S^{1+\sigma}}.
$$
\end{lem}
\paragraph{Proof.}
Set $g_\epsilon(z):=f_\epsilon(z)-(\tau_\epsilon z+1)$. For any $z \in \mathbb{B}(2S)$ and $w \in \mathbb{D}(z,S):=\skakko{w:|w-z|<S}$, we have $|w|>S$. This implies $|g_\epsilon(w)| \le M/|w|^\sigma < M/S^\sigma$ and thus $g_\epsilon$ maps $\mathbb{D}(z,S)$ into $\mathbb{D}(0,M/S^\sigma)$. By the Cauchy integral formula (or the Schwarz lemma), it follows that $|g_\epsilon'(z)| \le (M/S^\sigma)/S=M/S^{1+\sigma}$ on $\mathbb{B}(2S)$.
Let $[z_1,z_2]$ denote the oriented line segment from $z_1$ to $z_2$. If $[z_1,z_2]$ is contained in $\mathbb{B}(2S)$, the inequality easily follows by
$$
|g_\epsilon(z_2)-g_\epsilon(z_1)| ~=~ \abs{\int_{[z_1,z_2]} g_\epsilon'(z) dz }
~\le~ \int_{[z_1,z_2]} |g_\epsilon'(z)| |dz|
~\le~ \frac{M}{S^{1+\sigma}}|z_2-z_1|
$$
with $C:=M$. Otherwise we have to take a roundabout way to get the estimate. Let us consider the circle with diameter $[z_1,z_2]$. Then $[z_1,z_2]$ cuts the circle into two semicircular arcs, and at least one of them is contained in $\mathbb{B}(2S)$. Let $\skakko{z_1,z_2}$ denote that arc. Then
$$
|g_\epsilon(z_2)-g_\epsilon(z_1)| ~=~ \abs{\int_{\skakko{z_1,z_2}} g_\epsilon'(z) dz }
~\le~ \int_{\skakko{z_1,z_2}} |g_\epsilon'(z)| |dz|
~\le~ \frac{M}{S^{1+\sigma}}\cdot \frac{\pi}{2}|z_2-z_1|
$$
and the lemma holds by setting $C:=M \pi/2~(>M)$ for any $z_1, ~z_2 \in \mathbb{B}(2S)$.
\hfill $\blacksquare$
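The conclusion of the lemma can be checked on the model map $f(z)=\tau z+1+c/z$ (so $\sigma=1$ and $M=c$; all numerical values below are illustrative choices), including in a case where the segment $[z_1,z_2]$ leaves $\mathbb{B}(2S)$:

```python
import math

# Check of the difference-quotient estimate for the model map
# f(z) = tau*z + 1 + c/z, where g(z) = c/z satisfies |g(z)| <= M/|z|
# with M = c (sigma = 1).  All numerical values are illustrative.

tau = 1.05 + 0.02j
c = 0.5
S = 10.0

def f(z):
    return tau * z + 1 + c / z

z1, z2 = 25j, -21.0         # both in B(2S); the segment dips inside |z| < 2S
quotient = (f(z2) - f(z1)) / (z2 - z1)
lhs = abs(quotient - tau)    # equals c/|z1*z2| for this model map
bound = (c * math.pi / 2) / S ** 2   # C/S^{1+sigma} with C = M*pi/2
print(lhs, bound)
```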
\paragraph{Proof of \thmref{thm_ueda}.}
Set $z_n:=f_\epsilon^n(z)$ for $z \in \V_\epsilon(2R)$. Note that such $z_n$ satisfies $|z_n| \ge N_\epsilon(z_n) \ge 2R+n\delta$ by (\ref{eq_1}). Now we fix $a \in \V_\epsilon(2R)$ and define $\phi_{n,\epsilon}=\phi_n:\V_\epsilon(2R) \to \mathbb{C}~(n \ge 0)$ by
$$
\phi_n(z) ~:=~ \frac{z_n -a_n}{\tau_\epsilon^n}.
$$
For example, one can take such an $a$ in $\Pi(2R)$ independently of $\epsilon$. Then we have
$$
\abs{\frac{\phi_{n+1}(z)}{\phi_n(z)}-1}
~=~ \abs{ \frac{z_{n+1} - a_{n+1}}{\tau_\epsilon(z_{n} - a_{n})} -1} \\
~=~ \frac{1}{|\tau_\epsilon|}
\cdot \abs{ \frac{f_\epsilon(z_{n}) - f_\epsilon(a_{n})}{z_{n} - a_{n}} -\tau_\epsilon}.
$$
We apply \lemref{lem_key} with $2S=2R+n\delta$. Since $z_n,~a_n \in \V_\epsilon(2S) \subset \mathbb{B}(2S)$, we have
$$
\abs{\frac{\phi_{n+1}(z)}{\phi_n(z)}-1} ~\le~ \frac{C}{|\tau_\epsilon|(R+n\delta/2)^{1+\sigma}} ~\le~ \frac{C'}{(n+1)^{1+\sigma}},
$$
where $C'=2^{1+\sigma}C/\delta^{1+\sigma}$ and we may assume $R>\delta/2$. Set $P:=\prod_{n \ge 1}(1+C'/n^{1+\sigma})$. Since $|\phi_{n+1}(z)/\phi_n(z)|\le 1+C'/(n+1)^{1+\sigma}$, we have
$$
|\phi_n(z)| ~=~
\abs{
\frac{\phi_{n}(z)}{\phi_{n-1}(z)}
} \cdots \abs{
\frac{\phi_{1}(z)}{\phi_0(z)}
} \cdot |\phi_0(z)| ~\le~ P|z-a|.
$$
Hence
$$
|\phi_{n+1}(z)-\phi_n(z)| ~=~
\abs{\frac{\phi_{n+1}(z)}{\phi_n(z)}-1} \cdot |\phi_n(z)|
~\le~ \frac{C'P}{(n+1)^{1+\sigma}}\cdot |z-a|.
$$
This implies that $\phi_\epsilon=\phi_0+(\phi_1-\phi_0)+\cdots=\lim \phi_n$ converges uniformly on compact subsets of $\V_\epsilon(2R)$ and for all $\epsilon \in [0,1]$. The univalence of $\phi_\epsilon$ is shown in the same way as \cite[Lemma 10.10]{MiBook}.
Next we claim that $\phi_\epsilon(f_\epsilon(z))=\tau_\epsilon \phi_\epsilon(z)+B_\epsilon$ with $B_\epsilon \to 1$ as $\epsilon \to 0$. One can easily check that $\phi_n(f_\epsilon(z))=\tau_\epsilon \phi_{n+1}(z)+B_n$ where
$$
B_n ~=~ \frac{a_{n+1}-a_n}{\tau_\epsilon^n}
~=~
\frac{(\tau_\epsilon-1)a_n}{\tau_\epsilon^n}+
\frac{1+g_\epsilon(a_n)}{\tau_\epsilon^n}.
$$
When $\tau_\epsilon=1$, $B_n$ tends to $1$ since
$$
|g_\epsilon(a_n)| ~\le~ \frac{M}{|a_n|^\sigma} ~\le~ \frac{M}{(2R+n\delta)^\sigma}
~\le~ \frac{M}{(n\delta)^\sigma} ~\to~ 0.
$$
When $|\tau_\epsilon| >1$, the last term in the expression for $B_n$ above tends to $0$. For $n \ge 1$, we have
$$
a_n ~=~ \tau_\epsilon^n a ~+~\frac{\tau_\epsilon^n-1}{\tau_\epsilon-1}~+~\sum_{k=0}^{n-1}\tau_\epsilon^{n-1-k}g_\epsilon(a_{k}).
$$
Thus
$$
\frac{(\tau_\epsilon-1)a_n}{\tau_\epsilon^n} ~=~
(\tau_\epsilon -1) \kakko{a+ \frac{g_\epsilon(a)}{\tau_\epsilon}
+\sum_{k=1}^{n-1}\frac{g_\epsilon(a_{k})}{\tau_\epsilon^{k+1}}
}+1-\frac{1}{\tau_\epsilon^n}.
$$
By the inequality on $|g_\epsilon(a_n)|$ above, we have
$$
\abs{(\tau_\epsilon -1) \sum_{k=1}^{n-1}\frac{g_\epsilon(a_{k})}{\tau_\epsilon^{k+1}} }
~\le~ \frac{M}{\delta^\sigma} \frac{|\tau_\epsilon-1|}{|\tau_\epsilon|}\sum_{k=1}^{n-1} \frac{1}{k^\sigma |\tau_\epsilon|^k}
~\le~ \frac{M}{2\delta^{1+\sigma}} (1-\frac{1}{|\tau_\epsilon|})\mathrm{Li}_\sigma(\frac{1}{|\tau_\epsilon|})
$$
where we used the inequality
$$
|\tau_\epsilon -1| ~\le~ \frac{\mathrm{Re}\, \tau_\epsilon -1}{\cos \alpha} ~\le~ \frac{|\tau_\epsilon|-1}{2\delta}
$$
that comes from the radial convergence. By \propref{prop_polylog} in the next section, $B_n$ converges to some $B_\epsilon$. More precisely, if we set $|\tau_\epsilon|=e^{L}$, then $\tau_\epsilon-1=O(L)$ and one can check that $B_\epsilon=1+O(L^{\sigma/(1+\sigma)})$.
Finally, $u_\epsilon(z):=\phi_\epsilon(z)/B_\epsilon$ gives a desired holomorphic map (with $R$ in the statement replaced by $2R$). \hfill $\blacksquare$
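In the parabolic case $\tau_\epsilon=1$ the construction of $\phi_n$ in the proof is easy to test numerically. The following sketch uses an illustrative model map with $\sigma=1$ (not one from the text) and checks both that $\phi_n$ is Cauchy and that its limit satisfies the Abel-type relation $\phi(f(z))=\phi(z)+B$, where $B=1$ here:

```python
def f(z, a0=0.3):
    """Illustrative model map with tau_eps = 1 (parabolic case) and sigma = 1."""
    return z + 1 + a0 / z

def phi_n(z, a, n):
    """phi_n(z) = (f^n(z) - f^n(a)) / tau^n, with tau = 1 here."""
    zn, an = z, a
    for _ in range(n):
        zn, an = f(zn), f(an)
    return zn - an

a, z = 10.0 + 0j, 12.0 + 5.0j       # base point a and test point z in Pi(2R)
p1, p2 = phi_n(z, a, 2000), phi_n(z, a, 4000)
assert abs(p1 - p2) < 1e-2          # the sequence phi_n is Cauchy
# Abel equation: phi(f(z)) = phi(z) + B with B = 1 in the parabolic case
assert abs(phi_n(f(z), a, 4000) - phi_n(z, a, 4000) - 1) < 1e-3
print("phi_n converges and conjugates f to the translation z -> z + 1")
```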
\paragraph{Remarks.}
\begin{itemize}
\item
When $\sigma=1$, we have
$$
\abs{\sum_{k=1}^{n-1}\frac{g_\epsilon(a_{k})}{\tau_\epsilon^{k+1}} }
~\le~ \frac{M}{\delta|\tau_\epsilon|} \sum_{k=1}^{n-1} \frac{1}{k|\tau_\epsilon|^k}
~\le~ -\frac{M}{\delta} \log (1-\frac{1}{|\tau_\epsilon|})
$$
and this implies that $B_\epsilon=1+O(L|\log L|)$ if we set $|\tau_\epsilon|=e^{L}$. This fact is consistent with the result in \cite{Ue}.
\item By this proof, if $\skakko{f_\epsilon(z)}$ depends analytically on $\epsilon$, then so do $\skakko{B_\epsilon}$ and $\skakko{u_\epsilon(z)}$ for fixed $a$ in $\Pi(2R)$.
\item It is not difficult to check that $u_\epsilon(z)=z(B_\epsilon^{-1}+o(1))$ as $z \to \infty$ within $\V_\epsilon(R)$. (It is well-known that if $f_0(z)=z+1+a_0/z+\cdots$ then the Fatou coordinate is of the form $u_0(z)=z-a_0 \log z+O(1)$. See \cite{Sh}.)
\end{itemize}
\section{An estimate on polylogarithm functions}\label{sec_04}
We define the \textit{polylogarithm function} of exponent $s \in \mathbb{C}$ by
$$
\mathrm{Li}_s(z) ~:=~ \sum_{n=1}^\infty \frac{z^n}{n^s}.
$$
This function makes sense when $|z| < 1$ and $\sigma:=\mathrm{Re}\, s >0$ and it is a holomorphic function of $z$. In particular, if $\mathrm{Re}\, s >1$ the function tends to $\zeta(s)$ as $z \to 1$ within the unit disk. In the following we consider the behavior of $\mathrm{Li}_s(z)$ as $z \to 1$ within the unit disk when $0<\sigma \le 1$. We claim:
\begin{prop}\label{prop_polylog}
Suppose $0<\mathrm{Re}\, s=\sigma \le 1$ and $z \to 1$ with $|z|<1$. Set $\epsilon:=1-|z| $. Then there exists a uniform constant $C$ independent of $s$ such that
$$
|\mathrm{Li}_s(z)| ~\le~ C \epsilon^{-\frac{1}{1+\sigma}}
$$
as $z \to 1$. In particular, we have
$$
|(z-1) \, \mathrm{Li}_s(z)| ~\le~ C \epsilon^{\frac{\sigma}{1+\sigma}} ~\to~ 0
$$
if $z \to 1-0$ along the real axis.
\end{prop}
\newcommand{\sum_{n=1}^\infty}{\sum_{n=1}^\infty}
\newcommand{\sum_{k=1}^n}{\sum_{k=1}^n}
\paragraph{Proof.}
Clearly
$
|\mathrm{Li}_s(z)| ~\le~ \sum_{n=1}^\infty |z|^n/n^\sigma
$
so it is enough to consider the sum
$$
S ~:=~ \sum_{n=1}^\infty \frac{1}{n^\sigma}\cdot \lambda^n
$$
where $\lambda:=|z|=1-\epsilon$. Let $S_n$ be the partial sum to the $n$th term. By the H\"older inequality, we have
$$
S_n ~\le~
\kakko{
\sum_{k=1}^n \frac{1}{k^{\sigma p}}
}^{\frac{1}{p}}
\kakko{
\sum_{k=1}^n \lambda^{kq}
}^{\frac{1}{q}}
$$
for any $p, q > 1$ with $1/p+1/q=1$. Now let us set $p:=1/\sigma+1 \ge 2$ (then $1<q=1+\sigma \le 2$). Since $\sigma p=1+\sigma>1$, the first sum is uniformly bounded as follows:
$$
\sum_{k=1}^n \frac{1}{k^{\sigma p}} ~\le~ 1+\int_1^\infty \frac{1}{x^{1+\sigma}} dx
~=~ 1+\frac{1}{\sigma} = p.
$$
On the other hand, for the second sum, we still have $0<\lambda^q<1$ and thus
$$
\sum_{k=1}^n \lambda^{kq} ~\le~ \frac{\lambda^q}{1-\lambda^q}
~=~ \frac{1}{q \epsilon}(1+o(1)) ~\le~ \frac{2}{q \epsilon}
$$
when $\epsilon \ll 1$. Hence we have the following uniform bound:
$$
S_n ~\le~ p^{\frac{1}{p}}
\kakko{\frac{2}{q \epsilon}}^{\frac{1}{q}}
~\le~ 2 \kakko{\frac{p^{\frac{1}{p}}}{q^{\frac{1}{q}}}} \epsilon^{-\frac{1}{q}}.
$$
One can easily check that $1 \le x^{\frac{1}{x}} \le e^{\frac{1}{e}}=1.44467\cdots$ for $x \ge 1$. Thus
$$
S ~\le~ 2 e^{\frac{1}{e}} \epsilon^{-\frac{1}{q}} ~=~ 2 e^{\frac{1}{e}} \epsilon^{-\frac{1}{1+\sigma}}
$$
when $\epsilon \ll 1$ and we have the desired estimate with $C=2 e^{\frac{1}{e}}< 3$. The last inequality of the statement follows by:
$$
|(z-1) \, \mathrm{Li}_s(z)| ~\le~ C \epsilon^{1-\frac{1}{q}} ~=~ C \epsilon^{\frac{1}{p}}
~=~ C \epsilon^{\frac{\sigma}{1+\sigma}}.
$$
(Indeed, $|(z-1) \, \mathrm{Li}_s(z)| = O(\epsilon^{\frac{\sigma}{1+\sigma}})$ if $z \to 1$ radially.)
\hfill $\blacksquare$
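Since the constant $C=2e^{1/e}<3$ is explicit, the estimate can be probed numerically by truncating the majorant series $\sum_{n\ge1}\lambda^n/n^\sigma$; the parameter values below are illustrative:

```python
import math

def S(sigma, eps, terms=None):
    """Truncated majorant sum of lam^n / n^sigma with lam = 1 - eps."""
    lam = 1.0 - eps
    terms = terms or int(40 / eps)   # lam^terms ~ e^-40, so the tail is negligible
    s, p = 0.0, 1.0
    for n in range(1, terms + 1):
        p *= lam
        s += p / n**sigma
    return s

C = 2 * math.e**(1 / math.e)         # the uniform constant from the proof, < 3
for sigma in (0.2, 0.5, 1.0):
    for eps in (1e-2, 1e-3):
        assert S(sigma, eps) <= C * eps**(-1 / (1 + sigma))
print("polylog bound verified with C =", round(C, 4))
```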
\section{Application: Proof of \thmref{thm_convergence}}\label{sec_05}
As an application of \thmref{thm_ueda}, we give a proof of \thmref{thm_convergence}. Though \thmref{thm_convergence} only deals with the simplest parabolic fixed point and its simplest perturbation, one can easily extend the result to general parabolic cycles with multiple petals and their ``non-tangential" perturbations.
\paragraph{Proof of \thmref{thm_convergence}.}
Let us take the general expression $f_\lambda (w)=\lambda w+w^2$ with $0<\lambda \le 1$ (thus $f_1=g$). Viewing the action of $f_\lambda$ in the new coordinate $z=\chi_\lambda(w)=-\lambda^2/w$, we have
$$
\chi_\lambda\circ f_\lambda \circ \chi_\lambda^{-1}(z) ~=~ z/\lambda + 1 + O(1/z)
$$
near $\infty$. Now we can set $\tau_\epsilon:=1/\lambda=1+\epsilon$ and $f_\epsilon :=\chi_\lambda\circ f_\lambda \circ \chi_\lambda^{-1}$ to have the same setting as \thmref{thm_ueda}. We regard $f$ and $g$ as parameterized by either $\lambda$ or $\epsilon$. (It is convenient to use both parameterizations.) Note that $\Pi(R)=\skakko{\mathrm{Re}\, z \ge R}$ in this case. By the same argument as \lemref{lem_ueda1}, we can check that $\mathrm{Re}\, f_\epsilon(z) \ge \mathrm{Re}\, z + 1/2$ if $z \in \Pi(R)$ and $R \gg 0$. In particular, we have $f_\epsilon(\Pi(R)) \subset \Pi(R)$ for $R \gg 0$.
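In fact, a direct computation gives $\chi_\lambda\circ f_\lambda\circ\chi_\lambda^{-1}(z)=z/\lambda+1+\lambda/(z-\lambda)$, so the error term is exactly $\lambda/(z-\lambda)=O(1/z)$. A quick numerical confirmation (the sample points are arbitrary):

```python
lam = 0.9                             # 0 < lambda <= 1; lam = 1 is the parabolic map g
def f(w):       return lam * w + w**2
def chi(w):     return -lam**2 / w    # coordinate change z = chi(w)
def chi_inv(z): return -lam**2 / z

for z in (100 + 50j, -200 + 80j, 500j):
    F = chi(f(chi_inv(z)))            # the map expressed in the z-coordinate
    err = F - (z / lam + 1)
    # exact identity: F(z) = z/lam + 1 + lam/(z - lam), an O(1/z) error term
    assert abs(err - lam / (z - lam)) < 1e-9
    assert abs(err) < 2 / abs(z)
print("chi conjugates f_lambda to z/lam + 1 + O(1/z)")
```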
Let us show (1): For any compact $E \subset K_g^\circ$ and small $r>0$, there exists $N \gg 0$ such that $g^N(E) \subset P_r=\skakko{|w+r| \le r}$. (For instance, one can show this fact by the existence of the Fatou coordinate.) By uniform convergence, we have $f^N(E) \subset P_r$ for all $f \approx g$. To show $E \subset K_f^\circ$, it is enough to show that $f(P_r) \subset P_r$ for all $f \approx g$. Since $\chi_\lambda(P_r)=\Pi(R)$ for some $R \gg 0$, we have $f_\epsilon(\Pi(R)) \subset \Pi(R)$ independently of $\epsilon$. This is equivalent to $f_\lambda(P_r) \subset P_r$ in the original coordinate. Thus we have (1).
Next let us check (2): Set $\Phi_\epsilon :=\Phi_f$ and $\Phi_0:=\Phi_g$. Then we have $\Phi_\epsilon(f_\lambda(w))=\tau_\epsilon \Phi_\epsilon(w)+1$. On the other hand, by simultaneous linearization, we have uniform convergence $u_\epsilon \to u_0$ on $\Pi(R)$, where $u_\epsilon$ satisfies $u_\epsilon(f_\epsilon(z))=\tau_\epsilon u_\epsilon(z)+1$. Setting $\Psi_\epsilon(w):=u_\epsilon \circ \chi_\lambda(w)$, we have $\Psi_\epsilon \to \Psi_0$ uniformly on compact subsets of $P_r$, and $\Psi_\epsilon(f_\lambda(w))=\tau_\epsilon \Psi_\epsilon(w)+1$.
We need to adjust the images of the critical orbits under $\Phi_\epsilon$ and $\Psi_\epsilon$. Since $g^n(-1/2) \to 0$ along the real axis, there is an $M \gg 0$ such that $g^M(-1/2)=:a_0 \in P_r$. By uniform convergence, we also have $f^M(-\lambda/2)=: a_\epsilon \in P_r$ and $a_\epsilon \to a_0$ as $\epsilon \to 0$. Set $b_\epsilon:=\Psi_\epsilon(a_\epsilon)$ and $c_\epsilon:=\Phi_\epsilon(a_\epsilon)$ for all $\epsilon \ge 0$. Setting $\ell_\epsilon(W):=\tau_\epsilon W +1$, we have $c_\epsilon=\ell_\epsilon^M(0)=\tau_\epsilon^{M-1}+\cdots+\tau_\epsilon+1$ and $c_\epsilon \to c_0=M$ as $\epsilon \to 0$. When $\epsilon >0$, we take an affine map $T_\epsilon$ that fixes $1/(1-\tau_\epsilon)$ and sends $b_\epsilon$ to $c_\epsilon$. When $\epsilon=0$, we take $T_0$ to be the translation by $c_0-b_0$. Then one can check that $T_\epsilon \to T_0$ uniformly on compact subsets of the plane and that $\tilde{\Phi}_\epsilon:=T_\epsilon \circ \Psi_\epsilon$ satisfies $\tilde{\Phi}_\epsilon \to \tilde{\Phi}_0$ on any compact subset of $P_r$. Moreover, $\tilde{\Phi}_\epsilon$ still satisfies $\tilde{\Phi}_\epsilon(f_\lambda(w))=\tau_\epsilon \tilde{\Phi}_\epsilon(w)+1$, and the images of the critical orbit under $\Phi_\epsilon$ and $\tilde{\Phi}_\epsilon$ agree. Finally, by uniqueness of $\phi_f$ and $\phi_g$, one can check that $\Phi_\epsilon=\tilde{\Phi}_\epsilon$ on $P_r$.
Since
$$
\Phi_f(w) ~=~ \ell_\epsilon^{-N}\circ \tilde{\Phi}_\epsilon(f^N(w))
~\longrightarrow ~ \ell_0^{-N}\circ \tilde{\Phi}_0(g^N(w)) ~=~ \Phi_g(w)
$$
uniformly on $E$, we have (2). \hfill $\blacksquare$
\paragraph{Acknowledgement.}
I would like to thank T.~Ueda for correspondence. This research is partially supported by the Inamori Foundation and JSPS.
\section{Introduction}
One of the most important questions that cosmology faces today is the origin of structure in the universe. The generally accepted paradigm is that of inflation~\cite{Infl1,Infl2,Infl3,Infl4} which
produces small adiabatic perturbations that evolve into the observed structure. The inflationary paradigm is extremely powerful as it remedies most of the problems of
the original Big Bang scenario and also has a set of predictions that are well confirmed by current observations. On the other hand, although the generic predictions
of inflation are quite clear, the nature of the specific physical processes that govern inflation is still poorly understood.
The major obstacle in understanding inflation is that it cannot be directly observed either in the laboratory or with telescopes. This problem is at the same time a
virtue of inflation, as it allows us to probe indirectly physics at energies and time-scales that are far beyond the reach of current facilities. By comparing
astrophysical observations with predictions of various inflationary models one can expect to distinguish between different extensions of the Standard Model of
particle physics~\cite{LythReview}. Understanding the reheating phase of inflation can provide a link between the scalar fields driving inflation and the observable Universe that
consists of dark and baryonic matter.
One of the many possible ways to understand inflation more deeply is to study the primordial density fluctuations. The usual inflationary model of a slowly-rolling
inflaton field requires that the perturbations are highly Gaussian~\cite{Gaus1,Gaus2,Gaus3,Gaus4} and hence the detection of non-Gaussianity in either the cosmic
microwave background (CMB) spectrum or the large scale structure (LSS) distribution would be a clear evidence that the physics driving inflation is more complicated
than the standard inflaton scenario.
Non-Gaussianity naturally arises in inflationary models with more than one field~\cite{MField1,MField2,MField3,MField4}. One of the most studied models is the curvaton model~\cite{MField4,Curv1,Curv2,Curv3,Curv4,Curv5}, in which initial perturbations
are generated by the curvaton field after inflation is over. In this model significant non-Gaussianity can be generated since the predicted
curvature perturbation is proportional to the square of the curvaton field (as distinct from single-field inflation, where the required smoothness of the inflaton
potential renders the curvature perturbation very nearly linear in the field fluctuations).
Most attempts to constrain non-Gaussianity have used the so-called ``local-type'' or $f_{\rm NL}$ parameterization~\cite{Komatsu00} in which one includes a quadratic
term into the
primordial potential, $\Phi = \phi + f_{\rm NL}\phi^2$. In this parametrization both linear and quadratic terms in the potential originate from the same Gaussian
field, e.g. a curvaton field, and the contributions from perturbations in other fields (e.g. the inflaton field responsible for inflation itself) are negligible.
The signature of local-type non-Gaussianity in the CMB has been described at length \cite{CMBNG1}.
It has also been established that $f_{\rm NL}$ has an effect on the galaxy bispectrum \cite{Verde00,Sefusatti07,Jeong09}.
The effect on the large-scale galaxy power spectrum has
been considered only recently \cite{Dalal08, Slosar08, Carbone08, Afshordi08, McDonald08}, but it rapidly became clear that the method was competitive, stimulating
work on $N$-body simulations of halo formation in non-Gaussian cosmologies \cite{Desjacques09, Grossi, P10, Reid}.
Recent constraints have been derived from the CMB bispectrum as measured by WMAP \cite{Komatsu03, Spergel07, Yadav08, Komatsu08, Komatsu10, Smith09,i3}
and from large scale structure in the
Sloan Digital Sky Survey (SDSS) \cite{Slosar08}. Recently, $\sim3\sigma$ evidence for excess clustering consistent with non-Gaussianity has been
identified in the NRAO VLA Sky Survey (NVSS) \cite{XiaNVSS}.
In this paper we extend the formalism to include the case where both the inflaton and curvaton contribute significantly to the curvature perturbation. Perturbations generated by the inflaton field are
purely Gaussian, while curvaton fluctuations can result in non-Gaussianity if the conversion from curvaton fluctuation $\delta\sigma$ to primordial potential $\Phi$ contains quadratic terms. The ratio
of inflaton to curvaton contributions $\xi$ is arbitrary: the framework of the curvaton model allows it to take on any positive value. Usually one takes $\xi\ll 1$ since in the opposite limit ($\xi\gg
1$) the curvaton has no observable effect on the primordial perturbations. Here we investigate the consequences of general $\xi$ -- including values of order unity -- for the CMB and LSS. The type of
non-Gaussianity generated could be called ``local-stochastic,'' in that it results from local nonlinear evolution of the inflaton and curvaton fields (and thus the primordial bispectrum will have the
local-type configuration dependence), but that the full nonlinear potential $\Phi$ is not a deterministic function of the linear potential.
Studying non-Gaussianity is particularly important in the face of the current generation of CMB projects~\cite{CMBTF} such as the {\slshape Planck} satellite as well
as ongoing and future LSS projects. To fully exploit the potential of the future probes it is imperative to investigate theoretically the range of types of
non-Gaussianities that can be produced in unconventional inflation (e.g. multi-field models), and understand what effect they have on the CMB and LSS.
The rest of the paper is organized as follows. In Sec.~\ref{sec:theory} we discuss the generation of non-Gaussian primordial perturbations in the inflationary model
with both inflaton and curvaton fields contributing to the curvature perturbation.
In Sec.~\ref{sec:cmb} we describe the effect of two-field models on the CMB bispectrum.
In Sec.~\ref{sec:halo} we derive the halo power spectrum using the peak-background
split formalism \cite{Cole}, and in Sec.~\ref{sec:lss} we consider the angular
power spectrum of galaxies. Section~\ref{sec:constraints} provides the constraints on the two-field model from existing data, and we conclude in Sec.~\ref{sec:disc}.
\section{Non-Gaussian initial perturbations in two-field inflationary models}
\label{sec:theory}
We consider a model of inflation where both the inflaton {\em and} the curvaton contribute to the primordial density perturbations. This configuration can exhibit a rich phenomenology, including
both non-Gaussianity and various mixtures of adiabatic and isocurvature perturbations \cite{a0403258, a0407300, h0409335, LVW, 1004.0818}. In this paper, we will restrict ourselves to the
case where the dark matter decouples after the curvaton decays and its energy density is thermalized. This ensures that no dark matter isocurvature perturbation is produced, and the only observable
perturbation is the curvature perturbation $\zeta$ that is conserved between curvaton decay and horizon entry.
The simplest case is that of two non-interacting scalar fields: the inflaton $\varphi$ and the curvaton $\sigma$. The latter is taken to have a quadratic potential,
\beq
V_\sigma(\sigma) =
\frac12m^2\sigma^2.
\label{eq:Vsigma}
\eeq
During inflation, the inflaton dominates the energy density of the Universe, whereas the curvaton is effectively massless ($m\ll H$) and pinned by Hubble friction to a fixed value $\bar\sigma$ (aside
from perturbations to be described later). Quantum fluctuations generate a spectrum of perturbations
$\delta\varphi$ and $\delta\sigma$ in both the inflaton and curvaton fields:
\beq
P_{\delta\varphi}(k) = \frac{H_*^2}{2k^3} \qquad {\rm and} \qquad P_{\delta\sigma}(k) = \frac{H_*^2}{2k^3},
\eeq
where $H_*$ is the Hubble rate evaluated at the horizon crossing for a given mode, i.e. when $k = aH$, and the post-horizon-exit field perturbations are defined on the uniform total density slice.
The $\varphi$ and $\sigma$ perturbations are nearly Gaussian and uncorrelated.
The inflaton perturbation is parallel to the unperturbed trajectory in $(\varphi,\sigma)$-space and hence is an adiabatic perturbation; indeed it behaves the same way that perturbations behave in
single-field inflation.
The curvaton perturbation
however is an isocurvature perturbation and can have complicated dynamics. In the
simplest version of the curvaton scenario, the curvaton begins to
oscillate after the end of inflation when the Hubble rate drops to $H\sim m$. As a massive scalar with zero spatial momentum, its energy density subsequently redshifts as $\rho_\sigma\propto a^{-3}$.
Of interest to us is the fact that for quadratic potentials (Eq.~\ref{eq:Vsigma}) this energy density is also proportional to the square of the curvaton field $\sigma=\bar\sigma + \delta\sigma$, i.e.
\beq
\delta_\sigma \equiv
\frac{\delta\rho_{\sigma}}{\bar\rho_{\sigma}}
= 2\frac{\delta\sigma}{\bar\sigma} + \frac{\delta\sigma^2-\langle\delta\sigma^2\rangle}{\bar\sigma^2},
\eeq
where the subtraction of the variance arises from the ${\cal O}(\delta\sigma^2)$ expansion of $\bar\rho_{\sigma}\propto\bar\sigma^2+\langle\delta\sigma^2\rangle$.
The quadratic term allows the curvaton to generate a non-Gaussian density perturbation. In the radiation-dominated era, the curvaton's contribution to the energy density increases as
$\Omega_\sigma\propto a$, thereby enhancing the importance of $\delta_\sigma$. The decay of the curvaton and the thermalization of its energy density result in a non-Gaussian adiabatic perturbation.
The $\delta N$ formalism \cite{dN1} extended into the nonlinear regime \cite{Gaus4} quantitatively provides the curvature perturbation to second order in the field
perturbations
$(\delta\varphi,\delta\sigma)$; this is (Eq.~26 of Langlois {\slshape et~al.}~\cite{LVW})
\beq
\zeta = -\frac{H_* \delta\varphi}{\dot\varphi_*} + \frac{2r}{3}\frac{\delta\sigma}{\sigma_*}
+ \frac{2r}{9}\left(\frac32-2r-r^2\right)\frac{\delta\sigma^2}{\sigma_*^2},
\eeq
where the subscript $_*$ denotes evaluation at horizon exit, and $r$ is related to the fraction of the energy density in the curvaton when it decays:
\beq
r = \left. \frac{3\rho_\sigma}{4\rho_{\rm rad} + 3\rho_\sigma} \right|_{\rm decay}.
\label{eq:rdef}
\eeq
The primordial potential perturbation in the Newtonian gauge is then given by the usual expression $\Phi=\frac35\zeta$
(e.g. \cite{LL}; but note that in large scale structure non-Gaussianity studies the opposite sign
convention is adopted, so that $\Phi>0$ corresponds to overdensities).
We may put the primordial potential in a form more closely related to that of large-scale structure non-Gaussianity studies:
\beq
\Phi({\bf x}) = \phi_1({\bf x}) + \phi_2({\bf x}) + \fnl[ \phi_2^2({\bf x})-\langle\phi_2^2\rangle ],
\label{eq:Phi}
\eeq
where $\phi_1$ and $\phi_2$ are the parts of the linear primordial potential corresponding to the inflaton and curvaton fields respectively.
Their power spectra are given by
\beq
\frac{k^3}{2\pi^2} P_{\phi_1}(k) = \frac9{25} \left( \frac {H_*^2}{2\pi\dot\varphi_*} \right)^2
\label{phi1}
\eeq
and
\beq
\frac{k^3}{2\pi^2} P_{\phi_2}(k) = \frac{4r^2}{25} \left( \frac{H_*}{2\pi\sigma_*} \right)^2.
\eeq
The non-Gaussianity parameter is
\beq
\fnl = \frac{5}{6r}\left(\frac32-2r-r^2\right).
\label{eq:fnl}
\eeq
(We use the tilde since the label ``$f_{\rm NL}$'' is usually used to denote the non-Gaussianity parameter appearing in the primordial bispectrum.)
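For orientation, Eq.~(\ref{eq:fnl}) gives a large positive $\fnl\approx 5/(4r)$ for $r\ll1$, vanishes at $r=\sqrt{5/2}-1\approx0.58$, and gives $\fnl=-5/4$ at full curvaton domination $r=1$. A minimal sketch (the helper name is ours):

```python
def fnl_tilde(r):
    """Non-Gaussianity parameter of Eq. (fnl); r = curvaton energy fraction at decay."""
    return 5.0 / (6.0 * r) * (1.5 - 2.0 * r - r * r)

assert abs(fnl_tilde(1.0) + 1.25) < 1e-12     # full curvaton domination: -5/4
assert fnl_tilde(0.01) > 100                  # small r: fnl ~ 5/(4r), large and positive
r0 = 2.5**0.5 - 1                             # root of 3/2 - 2r - r^2 = 0
assert abs(fnl_tilde(r0)) < 1e-9
print("fnl_tilde behaves as expected over 0 < r <= 1")
```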
It is convenient to specify the relative contribution of the inflaton and curvaton fields to the primordial potential using the
ratio of standard deviations $\xi = \sigma(\phi_1)/\sigma(\phi_2)$. Thus a fraction $\xi^2/(1+\xi^2)$ of the power comes from the
inflaton and a fraction $1/(1+\xi^2)$ from the curvaton. This ratio is
\beq
\xi(k) = \left| \frac{3\sigma_*H_*}{2r\dot\varphi_*} \right| = \frac3{2r} \left| \frac{\sigma_*}{(d\varphi/dN)_*} \right|,
\label{eq:xi}
\eeq
where $N$ is the number of $e$-folds remaining in inflation.
Thus the observable features of this model are specified by the primordial power spectrum $P_\Phi(k)$ and by the two new parameters $\fnl$ and $\xi$ (in principle $\xi$ will have a scale dependence
$d\ln\xi/d\ln k$ of order the slow roll parameters, but unless non-Gaussianity is detected at high statistical significance this cannot be measured). We will work with these parameters from here
forward.
\section{The CMB bispectrum}
\label{sec:cmb}
The effect of local-type non-Gaussianity on the CMB bispectrum has a long history, both in purely adiabatic models as considered here, and in locally non-Gaussian isocurvature models \cite{i1,i2}.
We evaluate the CMB bispectrum using our set of parameters here.
The CMB constraints on primordial non-Gaussianity come from the measurements of the CMB angular bispectrum~\cite{Komatsu00},
\begin{equation}
\label{eq:blllmmm}
B_{\ell_1\ell_2\ell_3}^{m_1m_2m_3}\equiv
\left\langle a_{\ell_1m_1}a_{\ell_2m_2}a_{\ell_3m_3}\right\rangle,
\end{equation}
where $a_{\ell m}$ is the CMB temperature fluctuation expanded in spherical harmonics:
\begin{equation}
a_{\ell m}\equiv \int d^2\hat{\mathbf n}\frac{\Delta T(\hat{\mathbf n})}{\bar T}
Y_{\ell m}^*(\hat{\mathbf n}).
\end{equation}
If the primordial fluctuations are adiabatic scalar fluctuations, then $a_{\ell m}$ can be easily expressed in terms of the primordial potential $\Phi$ and the radiation transfer function $g_\ell(k)$:
\begin{equation}
a_{\ell m}=4\pi(-i)^\ell
\int\frac{d^3{\mathbf k}}{(2\pi)^3}\Phi({\mathbf k})g_\ell(k)
Y_{\ell m}^*(\hat{\mathbf k}).
\end{equation}
From Eq.~(\ref{eq:Phi}) it follows that in Fourier space the primordial potential can be decomposed into
parts associated with the linear potential perturbations $\phi_1$ and $\phi_2$, and with the nonlinear coupling $\fnl$:
\beq
\Phi({\mathbf k})=\phi_1({\mathbf k}) + \phi_2({\mathbf k}) + \phi_{NL}({\mathbf k}).
\eeq
Here $\phi_{NL}({\mathbf k})$ is the $\fnl$-dependent part,
\begin{equation}
\label{eq:nonlinear}
\phi_{NL}({\mathbf k})\equiv
\fnl
\int \frac{d^3{\mathbf p}}{(2\pi)^3}
\phi_2({\mathbf k}+{\mathbf p})\phi^\ast_2({\mathbf p}).
\end{equation}
Using Wick's theorem we calculate the bispectrum of the total potential, $B_\Phi(k_1,k_2,k_3)$. It contains one contribution from allowing each of
the $\Phi({\mathbf k}_i)$ to have a contribution from $\fnl$; in the case where this is $k_3$:
\beqa
\nonumber
\left\langle[\phi_1({\mathbf k}_1) + \phi_2({\mathbf k}_1)]
[\phi_1({\mathbf k}_2) + \phi_2({\mathbf k}_2)]
\phi_{NL}({\mathbf k}_3)\right\rangle = \\
2(2\pi)^3\delta^{(3)}({\mathbf k}_1+{\mathbf k}_2+{\mathbf k}_3)
\frac{\fnl}{(1 + \xi^2)^2}P_\Phi(k_1)P_\Phi(k_2),
\eeqa
where $P_\Phi(k)$ is the power spectrum of the total potential. The total bispectrum is the sum of this and the similar contributions where $\fnl$
contributes to ${\mathbf k}_1$ and to ${\mathbf k}_2$:
\beqa
B_\Phi(k_1,k_2,k_3) &=& 2\frac{\fnl}{(1 + \xi^2)^2} [
P_\Phi(k_1)P_\Phi(k_2) +
\nonumber \\ &&
\hspace{-1.5cm} P_\Phi(k_1)P_\Phi(k_3)+ P_\Phi(k_2)P_\Phi(k_3)].
\label{eq:bphi}
\eeqa
For constraining non-Gaussianity it is convenient to introduce two new variables: $x_1 = \fnl/(1+\xi^2)^2$ and $x_2 = 1/(1 + \xi^2)$. Using the
bispectrum of $\Phi({\mathbf k})$,
we can finally write the CMB angular bispectrum (via a calculation similar to Ref.~\cite{Komatsu00}):
\begin{eqnarray}
\nonumber
B_{\ell_1\ell_2\ell_3}^{m_1m_2m_3} &=&
2{\cal G}_{\ell_1\ell_2\ell_3}^{m_1m_2m_3}
\int_0^\infty r^2 dr
\nonumber \\ && \times
[b^L_{\ell_1}(r)b^L_{\ell_2}(r)b^{NL}_{\ell_3}(r)
+ b^L_{\ell_1}(r)b^{NL}_{\ell_2}(r)b^{L}_{\ell_3}(r)
\nonumber \\ &&
+ b^{NL}_{\ell_1}(r)b^L_{\ell_2}(r)b^{L}_{\ell_3}(r)],
\end{eqnarray}
where ${\cal G}_{\ell_1\ell_2\ell_3}^{m_1m_2m_3}$ is the Gaunt integral, and we use
\begin{equation}
\label{eq:bLr}
b^L_{\ell}(r) \equiv
\frac2{\pi}\int_0^\infty k^2 dk\, P_\Phi(k)g_\ell(k)j_\ell(kr)
\end{equation}
and
\begin{equation}
\label{eq:bNLr}
b^{NL}_{\ell}(r) \equiv
\frac{2x_1}{\pi}\int_0^\infty k^2 dk\, g_\ell(k)j_\ell(kr).
\end{equation}
We see from Eq.~(\ref{eq:bphi}) that CMB bispectrum measurements of $f_{\rm NL}^{\rm loc}$ that assume a purely curvaton contribution ($\xi=0$) are actually measuring
$x_1$ in the more general case.
Clearly, when the curvaton field dominates the perturbation power ($x_2 \approx 1$), we have $f_{\rm NL}=\fnl$, and
our $\fnl$ constraints are identical to the constraints on models with negligible inflaton perturbation. However, as the
contribution from the inflaton field increases ($x_2 \rightarrow 0$), the contribution of the curvaton field to the primordial curvature perturbation becomes negligible and the statistics of the density distribution become very nearly Gaussian. In that case $\fnl$ becomes completely unconstrained (Fig.~\ref{fnl1}) and we can no longer make robust predictions regarding the presence of the second field on the basis of non-Gaussianity observations alone. The CMB bispectrum is therefore not capable of breaking the degeneracy between $\fnl$ and $\xi$, and other constraints are necessary. In this paper we will use large-scale structure, although we note that in principle the CMB trispectrum might also be useful for this purpose, since it scales as $\fnl^2/(1+\xi^2)^3 = x_1^2/x_2$ and thus in combination with the bispectrum would allow the $(\fnl,\xi)$ degeneracy to be broken.
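A toy illustration of this degeneracy and its breaking (the parameter values are hypothetical): two models tuned to the same bispectrum amplitude $x_1$ but different curvaton power fractions $x_2$ have trispectrum amplitudes $\propto x_1^2/x_2$ that differ:

```python
def x1(fnl, xi):  # bispectrum amplitude fnl/(1+xi^2)^2
    return fnl / (1 + xi**2)**2

def x2(xi):       # curvaton fraction of the power, 1/(1+xi^2)
    return 1 / (1 + xi**2)

# pure-curvaton model vs. a mixed model tuned to the same bispectrum amplitude
fnl_a, xi_a = 30.0, 0.0
xi_b = 1.0
fnl_b = fnl_a * (1 + xi_b**2)**2          # = 120: same x1, hence the same bispectrum
assert abs(x1(fnl_a, xi_a) - x1(fnl_b, xi_b)) < 1e-12

# but the trispectrum ~ x1^2/x2 differs, so it can break the degeneracy
t_a = x1(fnl_a, xi_a)**2 / x2(xi_a)
t_b = x1(fnl_b, xi_b)**2 / x2(xi_b)
assert abs(t_b / t_a - 2.0) < 1e-12
print("same bispectrum, trispectra differ by a factor of", round(t_b / t_a, 3))
```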
\begin{figure}
\vspace{20pt}
\includegraphics[width=3.4in]{fnl1.eps}
\caption{\label{fnl1}The allowed range ($2\sigma$) of $\fnl$ as a function of $x_2$, derived from the WMAP data~\cite{Komatsu08}. As discussed in the text, $\fnl$ becomes unconstrained as $x_2 \rightarrow 0$ because in this limit the density distribution is dominated by the inflaton contribution and is nearly Gaussian.}
\end{figure}
\section{Local non-Gaussianity in peak-background split formalism}
\label{sec:halo}
In this section we outline the inflationary scenario in which both inflaton and curvaton fields are contributing to the initial curvature perturbation and derive the
power spectrum of galaxies using the peak-background split formalism~\cite{Cole}.
We decompose the density field into long-wavelength and short-wavelength pieces:
\beq
\rho(\mathbf{x}) = \bar{\rho}(1 + \delta_l + \delta_s).
\eeq
The linear density field here is a sum of two independent Gaussian components $\delta = \delta^{(1)} + \delta^{(2)}$ originating from the inflaton and curvaton fields, respectively.
The local Lagrangian density of halos now depends on the large-scale matter perturbations of both Gaussian fields:
\beq
n(\mathbf{x}) = \bar{n}[1 + b_1\delta_l^{(1)} + b_2\delta_l^{(2)}].
\label{eq:nb1}
\eeq
We will see that $b_i$ can be scale-dependent ($k$-dependent), which means that in position space it should be thought of as a (possibly nonlocal)
operator acting on the density field,
i.e. $b_2\delta_l^{(2)}(\mathbf{x})$ is the convolution of $\delta_l^{(2)}(\mathbf{x})$ with the Fourier transform of $b_2(\mathbf{k})$.
We can express the bias parameters in terms of the number density function
\beq
b_i = \bar{n}^{-1}\frac{\partial n}{\partial \delta_l^{(i)}}.
\eeq
It is easy to check that the bias for the $\delta^{(1)}$ field is just the usual Lagrangian bias that applies to Gaussian cosmologies;
for example, in the Press-Schechter model~\cite{PS} it is $b_1 =
b_{\rm g} \equiv \delta_c/\sigma_{\delta}^2-\delta_c^{-1}$ with $\delta_c=1.686$ quantifying the linear overdensity for spherical collapse. To calculate $b_2$
we note that
short-wavelength
modes $\delta_s$ in an overdense region determined by $\delta_l$ can be written as:
\beqa
\delta_s = \alpha \left[ (1+2\fnl\phi_l^{(2)})\phi_s^{(2)} + \fnl(\phi_s^{(2)})^2 + \phi_s^{(1)} \right]
\nonumber\\
\equiv \alpha \left[ X_1\phi_s^{(2)} + X_2(\phi_s^{(2)})^2 + \phi_s^{(1)} \right], \label{eq:ds}
\eeqa
where $X_1=1+2\fnl\phi_l^{(2)}$ and $X_2=\fnl$. Here $\alpha$ is the transfer function that converts the potential into the density
field, $\delta(k) = \alpha(k)\Phi(k)$. In general one may think of $\alpha$ as an operator defined by its action in Fourier space, i.e. when
applied to a real-space function such as $\phi({\bf x})$, we have
\beq
\alpha\phi({\bf x}) \equiv \int \frac{d^3{\bf k}}{(2\pi)^3} \alpha(k) e^{i{\bf k}\cdot{\bf x}} \int d^3y\,e^{-i{\bf k}\cdot{\bf y}} \phi({\bf y}).
\eeq
The specific function $\alpha(k)$ is given by Eq.~(7) of Slosar {\it et~al.} \cite{Slosar08}:
\beq
\alpha(k;z) = \frac{2c^2k^2T(k)D(z)}{3\Omega_mH_0^2},
\eeq
where $T(k)$ is the linear transfer function with conventional normalization $T(0)=1$, and $D(a)$ is the growth function normalized to $D(z)=
(1+z)^{-1}$ at high redshift. The inverse operator $\alpha^{-1}$ is obtained by the replacement $\alpha(k)\rightarrow\alpha^{-1}(k)=1/\alpha(k)$.
This shows that the local number density in the non-Gaussian case depends not only on $\delta_l$, but also on $X_1$ and $X_2$, and hence $b_2$ becomes
\begin{equation}
b_2 = \bar n^{-1} \left[ \frac{\partial n}{\partial \delta_l^{(2)}({\bf x})}
+ 2\fnl \alpha^{-1} \frac{\partial n}{\partial X_1}
\right],\label{eq:hb1}
\end{equation}
where the derivative is taken at the mean value $X_1=1$.
We can further simplify this expression by considering a rescaling of the power spectrum on
the small scales
due to the presence of non-Gaussianity. In a ``local'' region of some size $R$, and for small-scale Fourier modes $k\gg R^{-1}$ within this region,
there is a local power spectrum
\beqa
P_s^{\rm local}(k) &=& \frac{\xi^2 + (1 + 2\fnl\alpha^{-1}\delta_l^{(2)})^2}{1+\xi^2} P_s^{\rm global}(k)
\nonumber \\ &\equiv& X_0P_s^{\rm global}(k),
\eeqa
from which we obtain the rescaling of $\sigma_8$:
\beq
\sigma_8^{\rm local} = \sigma_8 \sqrt{X_0}.
\eeq
Using these expressions we can change the derivatives in Equation (\ref{eq:hb1}) to finally obtain
\begin{equation}
b_2(k) = b_{\rm g} + \frac{2\fnl}{1+\xi^2} \alpha^{-1}(k) \frac{\partial \ln{n}}{\partial \ln{\sigma_8^{\rm local}}}.
\label{eq:hb2}
\end{equation}
For further calculations we assume the mass function to be universal, i.e. we assume that it depends only on the significance function $\nu(M)\equiv
\delta_{\rm c}^2/\sigma^2(M)$:
\begin{equation}
n(M,\nu)= M^{-2} \nu f(\nu) \frac{\rmd\ln\nu}{\rmd\ln M}.
\end{equation}
This assumption is much more general than the Press-Schechter picture, e.g. it holds for the Sheth-Tormen mass function \cite{S-T} as well. Universality implies that
\beq
\frac{\partial \ln{n}}{\partial \ln{\sigma_8^{\rm local}}} = \delta_{\rm c}b_{\rm g},
\eeq
from which we derive
\beq
b_2(k) = b_{\rm g} + \frac{2\delta_{\rm c}\fnl}{1+\xi^2}b_{\rm g} \alpha^{-1}(k).
\label{eq:b2}
\eeq
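A minimal numerical sketch of Eq.~(\ref{eq:b2}); the $\alpha(k)\propto k^2$ normalization and the adopted values of $b_{\rm g}$, $\fnl$ and $\xi$ are illustrative assumptions, not values used elsewhere in this work.

```python
# Minimal sketch of the scale-dependent Lagrangian bias of Eq. (b2):
#   b2(k) = b_g + 2 delta_c f_NL b_g / [(1 + xi^2) alpha(k)].
# The alpha(k) ~ k^2 normalization and the b_g, f_NL, xi values are
# illustrative assumptions.

DELTA_C = 1.686

def b2_of_k(k, b_g=1.0, f_nl=100.0, xi=1.0, alpha_norm=1.0e5):
    alpha_k = alpha_norm * k**2      # toy alpha(k), valid at low k where T(k) ~ 1
    return b_g + 2.0 * DELTA_C * f_nl * b_g / ((1.0 + xi**2) * alpha_k)

for k in (1e-3, 1e-2, 1e-1):         # wavenumbers [Mpc^-1]
    print(k, b2_of_k(k))             # correction grows as 1/k^2 toward large scales
```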
The standard Gaussian bias in Eulerian space is given by $b\equiv b_{\rm g}+1$. The halo overdensity in Eulerian space in the non-Gaussian case is then obtained by
multiplying Eq.~(\ref{eq:nb1}) by $1+\delta_l$; to first order,
\beqa \label{deltah}
\delta_{\rm h} &\equiv& \frac{n({\mathbf x})}{\bar n}-1 = \delta_l + b_1\delta_l^{(1)} + b_2\delta_l^{(2)}
\nonumber \\
&=& [1+b_1(k)]\delta_l^{(1)} + [1+b_2(k)]\delta_l^{(2)}.
\eeqa
We can now write down the halo power spectrum in the form:
\beq
P_{\rm hh}(k) = \frac{(1+b_1)^2\xi^2 + [1+b_2(k)]^2}{1+\xi^2} P^{\rm lin}(k).
\eeq
Finally, plugging in $b_1(k)=b$ and using Eq.~(\ref{eq:b2}) for $b_2(k)$, we obtain
\beqa \label{Pgeq}
P_{\rm hh}(k) &=&
\frac{\xi^2b^2 + [b + 2(b-1)\fnl\delta_c(1+\xi^2)^{-1}\alpha^{-1}(k)]^2 }{1+\xi^2}
\nonumber \\ && \times P^{\rm lin}(k).
\eeqa
In the limit of $\xi \rightarrow 0$, i.e. when the contribution from the inflaton field is negligible,
we recover the standard expression for the power spectrum with the curvaton generated non-Gaussianity (Eq.~32 of~\cite{Slosar08}).
(It should be noted that the non-Gaussianity also introduces small corrections to the scale-independent part of the bias, because the small-scale
fluctuations that must collapse to form a massive halo have a non-Gaussian density distribution. This effect has been seen in simulations with
$f_{\rm NL}$-type non-Gaussianity, where it is negative for $f_{\rm NL}>0$, resulting in a slight reduction of the non-Gaussian bias enhancement at
large $k$, and even a change in sign of the $f_{\rm NL}$ effect at very small scales \cite{Desjacques09,P10}. However, since current studies of
non-Gaussianity using LSS allow the scale-independent bias to be a free parameter, they are not sensitive to this effect; it would only be important
if the Gaussian contribution to the bias were inferred independently, e.g. from measurements of halo mass and the mass-bias relation.)
A further consequence of this model that does not arise in the case with only curvaton perturbations is large-scale stochasticity. In particular, the squared correlation coefficient
\beq
\chi(k) = \frac{P_{\rm hm}^2(k)}{P_{\rm hh}(k)P_{\rm mm}(k)}
\eeq
deviates from unity on the largest scales. We can see this by writing
the cross power spectrum $P_{\rm hm}(k)$ as
\beq
P_{\rm hm}(k) = \frac{(1+b_1)\xi^2 + [1+b_2(k)]}{1+\xi^2} P^{\rm lin}(k).
\eeq
In the linear Gaussian theory, one would have $\chi=1$, whereas in our case we have
\beq
\chi(k) = \frac{\{(1+b_1)\xi^2 + [1+b_2(k)]\}^2}{(1+\xi^2)\{(1+b_1)^2\xi^2 + [1+b_2(k)]^2\}}.
\label{eq:chi-k}
\eeq
Note that if $\fnl\neq0$, on large scales $|b_2(k)|\gg 1,b_1$ and hence
\beq
\lim_{k\rightarrow 0^+} \chi(k) = \frac1{1+\xi^2} = x_2.
\eeq
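The limiting behavior of $\chi(k)$ in Eq.~(\ref{eq:chi-k}) can be checked with a short numerical sketch; the values of $b_1$ and $\xi$ below, and the toy $b_2$ sequence mimicking the low-$k$ growth of the non-Gaussian bias, are illustrative.

```python
# Sketch of the squared correlation coefficient of Eq. (chi-k):
#   chi = [(1+b1) xi^2 + (1+b2)]^2 / { (1+xi^2) [ (1+b1)^2 xi^2 + (1+b2)^2 ] }.
# b1 and xi are illustrative; b2 mimics the low-k growth of the non-Gaussian bias.

def chi(b1, b2, xi):
    num = ((1.0 + b1) * xi**2 + (1.0 + b2)) ** 2
    den = (1.0 + xi**2) * ((1.0 + b1) ** 2 * xi**2 + (1.0 + b2) ** 2)
    return num / den

xi = 1.0                           # equal contributions: x2 = 1/(1+xi^2) = 0.5
b1 = 1.0
print(chi(b1, b1, xi))             # b2 = b1 (Gaussian/linear case): chi = 1.0
for b2 in (10.0, 1e3, 1e6):        # b2(k) grows without bound as k -> 0
    print(chi(b1, b2, xi))         # approaches x2 = 0.5
```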
An example of the onset of scale-dependent bias and stochasticity is shown in Figure~\ref{fig:stoch}; note that this type of stochasticity effect exists only for $x_2\neq1$.
It is important to note that stochasticity can arise even for $x_2=1$ in two ways. One is that on small scales, there is a breakdown of linear biasing. However, since our constraints on non-Gaussianity arise from the largest scales in the survey (mainly the $l<25$ quasar data points), this effect can be neglected. The other is halo shot noise (e.g. \cite{Seljak}), which arises from the fact that haloes containing multiple galaxies (or quasars) can produce a large ``1-halo'' contribution to the correlation function at small separations. When transformed to Fourier space at large scales (small $k$), this results in an additional contribution to the power spectrum of
\begin{equation}
\lim_{k\rightarrow 0^+} P_{\rm 1~halo}(k) =
\int 4\pi r^2\xi_{\rm 1~halo}(r)\,dr,
\label{eq:corrfuncint}
\end{equation}
where $\xi_{\rm 1~halo}(r)$ is the one-halo correlation function. In principle, since $P(k)\propto k$ on large scales, the halo shot noise term ($\propto k^0$) may become important; since it is random and not determined by the underlying long-wavelength modes of the density field, it also produces stochasticity.
However, the halo shot noise is expected to be a very small contribution for our data sets. A simple way to see this is to note that the ratio of the halo shot noise to the usual shot noise is equal to twice the ratio of 1-halo pairs to the number of galaxies (this follows from Eq.~(\ref{eq:corrfuncint}) and the definition of the correlation function). For the quasar sample, a simple counts-in-pixels analysis of the catalog suggests that 0.6\% of the quasars are in pairs (the Healpix pixels \cite{Gorski} used are 3.5 arcmin in size, i.e. much larger than haloes at $z>1$), suggesting that the halo shot noise term is $C_{l,{\rm 1 halo}}\sim 0.012/\bar n_{\rm 2D}$. This is two orders of magnitude smaller than the error bars on the lowest-$l$ quasar autopower point displayed in Slosar {\slshape et~al.} \cite{Slosar08} and hence negligible.
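The shot-noise estimate above is simple enough to verify directly; the 0.6\% pair fraction is the value quoted above for the quasar sample.

```python
# Back-of-envelope check of the halo shot-noise estimate: the ratio of the
# 1-halo shot noise to the ordinary shot noise is twice the fraction of
# objects found in 1-halo pairs (0.6% for the quasar sample, as quoted above).

pair_fraction = 0.006
noise_ratio = 2.0 * pair_fraction
print(noise_ratio)       # 0.012, i.e. C_l,1halo ~ 0.012 / n_2D
```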
\begin{figure}
\includegraphics[angle=-90,width=3.2in]{stoch}
\caption{\label{fig:stoch}The bias and stochasticity of galaxies at $z=1$ in a model with $x_1=30$ and $x_2=0.5$ ($\fnl=120$).
The solid lines show a tracer with $b=2$ and the dashed lines a tracer with $b=3$.
The background cosmology and power spectrum are those of WMAP5.}
\end{figure}
\section{Galaxy angular power spectrum}
\label{sec:lss}
Additional constraints on the primordial non-Gaussianity come from observations of the Large Scale Structure (LSS), and in particular from large galaxy surveys such as SDSS. These constraints are primarily driven by extremely large scales, corresponding to wave numbers $k < 0.01\,{\rm Mpc}^{-1}$. The data sets used for LSS constraints include spectroscopic and photometric Luminous Red Galaxies (LRGs) from SDSS, photometric quasars from SDSS, and the cross-correlation between galaxies and dark matter via the integrated Sachs-Wolfe effect. A detailed description of the data used for LSS constraints can be found in~\cite{Ho,Slosar08}; here we only emphasize the redshift ranges of the most important datasets we used.
The photometric LRG dataset was constructed and discussed in detail in~\cite{Paddy05}. The sample was sliced into two redshift bins with $0.2 < z < 0.4$ and $0.4 < z < 0.6$. Our spectroscopic LRG power spectrum comes from~\cite{Tegmark93} and is based on a galaxy sample that covers $4000$ square degrees of sky over the redshift range $0.16 < z < 0.47$. Quasars for our constraints were photometrically selected from the SDSS quasar catalog consisting of UVX objects~\cite{Richards} and the DR3 catalog~\cite{Richards2}. Quasars in our sample fall into two redshift bins with $0.65 < z < 1.45$ and $1.45 < z < 2.0$.
The tightest LSS constraints on primordial non-Gaussianity involve purely photometric samples (where one observes the two-dimensional projection of the
galaxy distribution with a set of color cuts) since this allows the largest volume to be probed with the highest number density. At low $k$, where the effects of primordial non-Gaussianity on the power spectrum are largest, there is less of an
advantage to having the large number of modes ($\propto k_{\rm max}^3$ instead of $k_{\rm max}^2$) achievable via spectroscopy.
In order to obtain constraints on the $\fnl$ and $\xi$ parameters from LSS we need to modify Eq.~(\ref{Pgeq}) to give the
angular power spectrum.
To obtain the angular power spectrum, we project the galaxy density field $\delta_{g,3D}$ along the line of sight ${\nhat}$ and take into account redshift distortions~\cite{AngPk1, AngPk2, Padmanabhan91}:
\beq
1+\delta_{g}(\nhat) = \int \, dy\, f(s) [1+\delta_{g,3D}(y, y\nhat)] \,\,.
\label{eq:deltag_red}
\eeq
Here, $s = y + H^{-1}{\mathbf v}\cdot\nhat$ is the redshift distance, $f(s)$ is the normalized radial selection function, and we have explicitly written the mean contribution to the density field. We
further note that peculiar velocities are generally small compared to the size of the redshift slice, and hence we can Taylor expand the selection function as:
\beq
f(s) \approx f(y) + \frac1{aH} \frac{df}{dy} \,{\mathbf v}(y\nhat)\cdot\nhat \,\,.
\label{eq:fs}
\eeq
At large scales where the density perturbations are $\ll1$, we may ignore second-order terms, i.e. we may ignore the product of the velocity term in Eq.~(\ref{eq:fs}) with the $\delta_{g,3D}(y, y\nhat)$
term in Eq.~(\ref{eq:deltag_red}).
This allows us to split $\delta_g$ into two terms as $\delta_{g} = \delta_{g}^{0} + \delta_{g}^{r}$. In terms of the Fourier transformed fields, we can write $\delta_g^0$ and $\delta_g^r$ as:
\beq
\delta_{g}^0(\nhat) = \int \, dy \, f(y) \int\frac{d^{3}\kvec}{(2\pi)^3} \delta_{g,3D}(y, \kvec) e^{-i \kvec \cdot \nhat y} \,\,
\label{eq:deltagl1}
\eeq
and
\beq
\delta_{g}^{r}(\nhat) = \int \, dy\, \frac{df}{dy} \int \frac{d^{3} \kvec}{(2 \pi)^{3}}
\frac1{aH}{\mathbf v}(y,\kvec) \cdot \nhat e^{-i \kvec \cdot \nhat y} \,\,.
\label{eq:deltag_v}
\eeq
The velocity can be related to the density perturbation using the linearized continuity equation:
\beq
H^{-1}{\mathbf v}(y,\kvec) = -i \frac{\Omega_{m}^{0.6}}{b} \delta_{g}(y,\kvec) \frac{\kvec}{k^{2}}\,\,.
\label{eq:velocity}
\eeq
We can further transfer the redshift dependence of $\delta_g$ into a growth function $D(y)$ and expand the $\delta$'s in Legendre polynomials $P_{\ell}(x)$ using the following identity:
\beq
e^{-i \kvec \cdot \nhat y} = \sum_{\ell=0}^{\infty} (2\ell+1) i^{\ell} j_{\ell}(ky) P_{\ell}(\khat \cdot \nhat) \,\,,
\eeq
where $j_\ell(s)$ is the spherical Bessel function of order $\ell$. We obtain
\beq
\delta_{g}^0(\nhat) = \int\frac{d^{3}\kvec}{(2\pi)^3} \sum_{\ell=0}^{\infty} (2\ell+1) P_{\ell}(\khat \cdot \nhat)\delta_{g,\ell}^0 \,\,,
\eeq
where $\delta_{g,\ell}^0$ is the observed galaxy transfer function [analogous to the CMB radiation transfer function $g_\ell(k)$]:
\beq
\delta_{g,\ell}^0 = i^{\ell} \int dy \, f(y) \delta_{g,3D}(\kvec)D(y)
j_{\ell}(ky)\,\,.
\eeq
Similarly, we can write $\delta_{g,\ell}^r$ as
\beq
\delta_{g,\ell}^r = i^{\ell} \int dy \, \frac{df}{dy} \delta_{g,3D}(\kvec)D(y)
\frac{\Omega_{m}^{0.6}}{kb} j'_{\ell}(ky) \,\,.
\eeq
Now we use Eq.~(\ref{deltah}) to express $\delta_{g,\ell}$ in terms of the overdensities generated by inflaton and curvaton fields:
\beqa
\delta_{g,\ell}^0 = i^{\ell} \int dy \, f(y)D(y)( [1+b_1(k)]\delta_\ell^{(1)}
\nonumber \\
+ [1+b_2(k)]\delta_\ell^{(2)} ) j_{\ell}(ky)\hspace{1cm}
\eeqa
and
\beq
\delta_{g,\ell}^r = i^{\ell} \int dy \, \frac{df}{dy}D(y)\left(\delta_l^{(1)} + \delta_l^{(2)} \right)
\frac{\Omega_{m}^{0.6}}{k} j'_{\ell}(ky) \,\,.
\eeq
Using these expressions together with equation~(\ref{Pgeq}) it is straightforward to calculate angular power spectrum $C_\ell$, which can be conveniently divided into
three terms:
\beq
C_\ell = C^{gg}_\ell + C^{gv}_\ell +C^{vv}_\ell,
\eeq
corresponding to the pure real-space galaxy density contribution ($gg$), the cross-correlation ($gv$), and the redshift-space distortion term ($vv$).
These components of the angular power spectrum are expressed in terms of the 3D linear matter power spectrum
$\Delta_k^2$ and $x_1$, $x_2$ parameters, introduced in the previous section:
\begin{eqnarray}
C^{gg}_\ell &=& 4\pi\int \frac{\rmd k}k \,\Delta_k^2 (|W_\ell^0(k)|^2(1-x_2) + |W_\ell^1(k)|^2x_2),
\nonumber \\
C^{gv}_\ell &=& 8\pi\int \frac{\rmd k}k \,\Delta_k^2 \Bigl( \Re \left[ W_\ell^{0\ast}(k) W_\ell^r(k)\right](1 -x_2) +
\nonumber \\
&+& \Re \left[ W_\ell^{1\ast}(k) W_\ell^r(k) \right]x_2 \Bigr), {\rm ~~and}
\nonumber \\
C^{vv}_\ell &=& 4\pi\int \frac{\rmd k}k \,\Delta_k^2 |W_\ell^r(k)|^2,
\label{eq:cgv}
\end{eqnarray}
where the window functions are given by
\begin{eqnarray}
W^0_\ell(k) &=& \int bD(y)f(y) j_\ell(ky)\,\rmd y,
\nonumber \\
W^1_\ell(k) &=& \int (b+\Delta b)D(y)f(y) j_\ell(ky)\,\rmd y, {\rm ~~and}
\nonumber \\
W_\ell^r(k) &=& \int \Omega_m^{0.6}(r) D(y)
f(y)
\Bigl[ \frac{2\ell^2+2\ell-1}{(2\ell-1)(2\ell+3)} j_{\ell}(ky)
\nonumber \\ & &
- \frac{\ell(\ell-1)}{(2\ell-1)(2\ell+1)}j_{\ell-2}(ky)
\nonumber \\ & &
- \frac{(\ell+1)(\ell+2)}{(2\ell+1)(2\ell+3)}j_{\ell+2}(ky)
\Bigr]\,\rmd y.
\end{eqnarray}
Here $b$ is the standard Gaussian bias, while $\Delta b$ is a correction that applies to contributions from the curvaton field {\em only}:
\beq
\Delta b = 3 \frac{x_1}{x_2} (b-1) \delta_c \frac{\Omega_m}{k^2T(k)D(r)}
\left(\frac{H_0}{c}\right)^2.
\label{eq:DB}
\eeq
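As an illustration of how these window functions are evaluated in practice, the simplest one, $W^0_\ell(k)$, can be computed by direct quadrature. The Gaussian radial selection function, constant bias and unit growth factor below are illustrative assumptions only.

```python
import numpy as np
from scipy.special import spherical_jn

# Sketch of W^0_ell(k) = int b D(y) f(y) j_ell(ky) dy for a toy Gaussian
# radial selection function f(y), constant bias b and D(y) = 1.  The slice
# position/width and the bias value are illustrative only.

def window0(ell, k, b=2.0, y0=1000.0, sigma_y=100.0, ny=4001):
    y = np.linspace(y0 - 5.0 * sigma_y, y0 + 5.0 * sigma_y, ny)
    dy = y[1] - y[0]
    f = np.exp(-0.5 * ((y - y0) / sigma_y) ** 2)
    f /= f.sum() * dy                       # normalize: int f(y) dy = 1
    return (b * f * spherical_jn(ell, k * y)).sum() * dy

print(window0(ell=0, k=1e-8))   # long-wavelength limit: j_0 -> 1, so W -> b
print(window0(ell=10, k=0.01))
```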
\section{Constraints}
\label{sec:constraints}
To constrain $x_1$ and $x_2$ parameters using large scale structure we use the code developed and first implemented in Slosar {\slshape et~al.} \cite{Slosar08}.
We included the same data: the 5-year WMAP bispectrum $x_1=51\pm 30$ (1$\sigma$) \cite{Komatsu08}; and the SDSS data (spectroscopic and
photometric luminous red galaxies, the photometric quasar sample, and the integrated Sachs-Wolfe effect via cross-correlation).
We further included ancillary data to constrain the background cosmological model and break degeneracies with the non-Gaussianity
parameters $(x_1,x_2)$: the CMB power spectrum \cite{p1,p2,p3,p4,p5} and supernova data \cite{p6}.
The Markov chain results are displayed in Fig.~\ref{fig:2dplot}, where the probability density distribution is plotted in the $(x_1,x_2)$ plane. Dark regions have the highest likelihood, and
the contours outline the $68\%$, $95\%$ and $99.7\%$ confidence levels. As the role of the curvaton field in the primordial density perturbation decreases, i.e. $x_2 \rightarrow 0$ ($\xi\rightarrow\infty$),
the upper limit on $x_1$
decreases. This is because at small $x_2$, LSS becomes much more sensitive to $x_1$, as one may see from the $x_2$ in the denominator of Eq.~(\ref{eq:DB}); the $|W_\ell^1(k)|^2$ term in
Eq.~(\ref{eq:cgv}) scales as $x_1^2/x_2$. In particular, if a local-type CMB bispectrum is ever robustly detected ($x_1\neq 0$) then the non-detection of excess large-scale clustering in SDSS would
immediately set a lower limit on $x_2$.
\section{Discussion}
\label{sec:disc}
This paper has extended the analysis of non-Gaussianity constraints to two-field inflationary models. In most previous studies of non-Gaussianity it was assumed that primordial density perturbations
were generated either by the inflaton field, in which case they are perfectly Gaussian, or only by a second field (for example the curvaton), which contains a quadratic part and generates non-Gaussian initial
conditions. It is important, however, to realize the possibility of an intermediate case where part of the curvature perturbation is derived from quantum fluctuations of the inflaton field, while an
additional part is associated with a second field and converted to an adiabatic perturbation upon its decay. This results in a peculiar type of non-Gaussian initial condition (which we may call
``local-stochastic'' since the field $\phi_2$ entering in the nonlinear term is correlated with but not identical to the linear potential) that is both observable and distinguishable from the
curvaton-only ``local-deterministic'' or $f_{\rm NL}$ form. This type of non-Gaussianity has two parameters: a nonlinear
coupling coefficient $\fnl$, and the ratio $\xi$ of inflaton to curvaton contributions to the primordial density perturbation spectrum. We connect these parameters with parameters characterizing
inflationary fields in Eqs.~(\ref{eq:fnl}) and (\ref{eq:xi}).
\begin{figure}
\includegraphics[width=3.3in]{fnl4.eps}
\caption{\label{fig:2dplot}Constraints in the $(x_1,x_2)$ plane, including both the CMB bispectrum and the galaxy power spectrum.}
\end{figure}
Using the power spectrum and bispectrum constraints from SDSS and WMAP we are able to constrain these parameters. Adding two sets of constraints together allows us to break the degeneracy in the
$(\fnl, \xi)$ parameters
that exists with the CMB bispectrum alone. If non-Gaussianity in the CMB is ever detected, and the bispectrum has the local configuration dependence, this will enable us to measure the relative
contributions of the inflaton and curvaton.
We have found that in contrast to the local-deterministic non-Gaussianity, whose main effect on large scale structure is a scale-dependent increase in
the bias, the local-stochastic non-Gaussianity can
introduce stochasticity between the matter and halo distributions. It can also lead to relative stochasticity between haloes of different masses,
since Eq.~(\ref{eq:chi-k}) depends on the Gaussian bias $b_{\rm
g}$ of the haloes (e.g. $\chi\rightarrow 1$ if $b_{\rm g}\rightarrow 0$). The potential use of these effects to directly test the hypothesis of multiple fields contributing to the primordial
perturbations is left to future work.
\section*{Acknowledgments}
The authors are grateful to Shirley Ho for providing the large scale structure samples used in this study.
D. T. and C. H. are supported by the U.S. Department of Energy (DE-FG03-92-ER40701) and the National Science Foundation (AST-0807337).
C. H. is supported by the Alfred P. Sloan Foundation.
This work was supported in part by the U.S. Department of Energy under
Contract No. DE-AC02-98CH10886.
\bibliographystyle{prsty}
\section{Introduction}
Blazars are a class of radio-loud active galactic nuclei (AGNs) with relativistic jets oriented close to the observer's line of sight \citep{1995PASP..107..803U}. The non-thermal emission from blazars extends from radio to $\gamma$-ray energies. Blazars are characterized by their extreme properties, such as a luminous core, rapid variability, high polarization, superluminal motion, and occasional spectacular flares. These extreme properties are usually attributed to Doppler boosting of the emission from the relativistic regions in the jet.
Blazars are broadly classified into two sub-classes, namely, BL\,Lac objects and Flat Spectrum Radio Quasars (FSRQs). The difference between these two sub-classes is based on the presence or absence of emission/absorption line features in their optical spectrum, such that FSRQs show strong emission lines whereas for BL\,Lacs, the emission lines are weak/absent \citep{1995PASP..107..803U, 2003ApJ...585L..23F}.
The non-thermal emission originating from blazar jet produces a double humped spectral energy distribution (SED) \citep{1998MNRAS.299..433F}, with the low energy component peaking at optical/UV/soft X-ray energies, while the high energy component peaks at GeV energies. The low energy component is well established to be produced by synchrotron emission from relativistic electrons gyrating in the magnetic field of the jet, whereas the high energy component is usually attributed either to the inverse Compton (IC) scattering of low energy photons \citep{1992ApJ...397L...5M, 1992A&A...256L..27D, 2017MNRAS.470.3283S} or to the hadronic cascades initiated in the jet \citep{2007Ap&SS.307...69B}. Based on the location of the peak frequency in the low energy component, blazars are further classified into three sub-classes namely, high energy peaked BL\,Lac (HBL; $\nu_{p}>10^{15.3}$ Hz; \citep{1995ApJ...444..567P}), intermediate energy peaked BL\,Lac (IBL; $10^{14} < \nu_{p} {\leq} 10^{15.3}$ Hz), and low energy peaked BL\,Lac (LBL; $\nu_{p} {\leq} 10^{14}$ Hz) \citep{2016ApJS..226...20F}.
The light curves of blazars show unpredictable variations over a broad range of time-scales, ranging from minutes to years, across the entire electromagnetic spectrum. A clue to the mechanism causing such variations can be obtained by studying their long-term flux distributions. Typically, a Gaussian distribution of fluxes suggests additive processes, which is obtained when the flux variation is stochastic and linear. However, if the stochastic flux variation is non-linear, a Gaussian distribution in logarithmic flux values is expected; such distributions are known as log-normal and are often found in galactic and extra-galactic sources, like X-ray binaries, gamma-ray bursts, and AGNs \citep{lognorm_xrb, 2002PASJ...54L..69N, 1997MNRAS.292..679L, 2002A&A...385..377Q, 2009A&A...503..797G}. In the case of AGNs, the log-normal behavior is observed on time-scales ranging from minutes to days \citep{2004ApJ...612L..21G}, whereas for X-ray binaries such behavior is seen on sub-second time-scales \citep{2005MNRAS.359..345U}. Among blazars, BL\,Lac is the first blazar in which a log-normal distribution of X-ray flux was observed \citep{2009A&A...503..797G}. Subsequently, such log-normal behavior in fluxes, with the excess r.m.s.\ correlating linearly with the average flux, has been found in different energy bands for the HBL PKS\,2155-304 \citep{2017A&A...598A..39H}. Recently, a log-normal distribution of flux was shown in PKS\,2155-304 at X-ray and optical bands using ten years of data \citep{2019MNRAS.484..749C}.
In the very high energy (VHE) $\gamma$-ray band, the log-normal behavior of flux distribution has been detected in the well known high synchrotron peaked BL\,Lac objects Mrk\,501 \citep{2010A&A...524A..48T, 2018Galax...6..135R} and PKS 2155-304 \citep{2009A&A...502..749A, 2010A&A...520A..83H}.
The brightest blazar sources seen by \emph{Fermi}-LAT show a similar trend in their long term monthly binned $\gamma$-ray light curves \citep{2018RAA....18..141S}. Also, using the \emph{Fermi}-LAT observations, the long-term $\gamma$-ray flux variability of the VHE source 1ES 1011+496 is independently confirmed to be log-normal by \citealp{my1011}. In addition to single log-normal distribution, the double log-normal profile has also been revealed at multi-wavelength bands for some blazars \citep{pankaj_ln, 2018RAA....18..141S}.
The log-normal behavior of these astrophysical sources is generally explained in terms of variations from the accretion disk, which are multiplicative in nature \citep{2005MNRAS.359..345U, 2010LNP...794..203M}. However, fluctuations in the accretion disk may not produce the minute-scale variability seen in most blazar sources \citep{1996Natur.383..319G, vhe501, vaidehi421}. Therefore, as an alternative to the accretion-disk model, the log-normal flux distribution in blazars has also been explained in terms of a sum of emission from a large collection of mini-jets which are randomly oriented inside the relativistic jet \citep{minijet}. Moreover, it has recently been shown that a linear Gaussian perturbation in the particle acceleration time scale can produce a log-normal flux distribution \citep{2018MNRAS.480L.116S}: a perturbation in the acceleration time scale produces a Gaussian distribution in the index, which in turn results in a log-normal distribution of the flux, whereas a perturbation in the particle cooling rate produces neither a Gaussian nor a log-normal flux distribution.
In this work, we study the flux and photon index distribution properties of blazar sources having statistically significant light curves in the 16 years of RXTE observations. The sample selection criteria from the RXTE AGN catalog are described in Section 2, the characterization of the flux/index distributions and the correlation study between the flux and photon index are described in Section 3, and the physical interpretations of the obtained flux distributions are discussed in Section 4.
\section{RXTE archive and sample selection}
The Rossi X-ray Timing Explorer (RXTE) public database provides systematically analyzed long-term light curves and spectral information for AGN sources over the period from January 1996 to January 2012. RXTE has two co-aligned instruments, the Proportional Counter Array (PCA; 2-60 keV) \citep{2006ApJS..163..401J} and the High-Energy X-ray Timing Experiment (HEXTE; 15-250 keV) \citep{1998ApJ...496..538R}. It provides light curves in the energy range 2--10 keV. In these light curves, the sampling of data is uneven, as different periods had been proposed for various scientific goals. However, despite the time gaps in the light curves, the sixteen years of RXTE observations provide a large data set, which is sometimes statistically suitable to investigate the variability behavior.
We first selected all the blazars from the RXTE AGN Timing \& Spectral Database (RATSD \footnote{\url{https://cass.ucsd.edu/~rxteagn/}}) for which the flux and photon spectral index light curves are available with more than 90 flux/index points. The spectra of these blazars in RATSD are mostly fitted with a simple power-law model, except for a few BL\,Lac sources which are better fitted with a broken power-law model with break energy below $\approx$ 10 keV \citep{2013ApJ...772..114R}. The RATSD provides light curves in the energy ranges 2-10 keV, 2-4 keV, 4-7 keV and 7-10 keV. However, in order to have flux light curves with good statistics, we consider flux light curves obtained in the full energy range of 2--10 keV. Further, to select the light curves with good statistics, we define the light curve significance fraction as
\begin{equation}
R=\frac{\overline{\sigma_{err}^2}}{\sigma^2}
\end{equation}
where $\overline{\sigma_{err}^2}$ is the mean square error of the flux/index distribution and $\sigma^2$ is the variance of the flux/index distribution. From the selected blazars, only those sources which have $R<0.2$ in both the flux and index light curves are considered for further analysis. However, this condition was not satisfied in the unbinned index light curves (light curves taken directly from RATSD) of the blazars except Mkn\,501; therefore, for each selected blazar, we binned the flux/index light curves by combining the flux/index points from 2 days up to a maximum of 10 days. The upper limit of a 10-day bin is chosen to ensure that the minimum number of flux/index points in the binned light curve is $\geq90$.
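The significance-fraction cut can be sketched as follows; the synthetic light curves below are illustrative and are not RXTE data.

```python
import numpy as np

# Sketch of the light-curve significance fraction R = <sigma_err^2> / sigma^2.
# R << 1 means the intrinsic variability dominates over the measurement noise.
# The synthetic light curves below are illustrative, not RXTE data.

def significance_fraction(flux, flux_err):
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    return np.mean(flux_err**2) / np.var(flux)

rng = np.random.default_rng(0)
errors = np.full(500, 0.05)

variable = rng.lognormal(mean=0.0, sigma=0.5, size=500) + rng.normal(0.0, errors)
noise_only = 1.0 + rng.normal(0.0, errors)     # constant source, pure noise

print(significance_fraction(variable, errors) < 0.2)     # passes the R < 0.2 cut
print(significance_fraction(noise_only, errors) < 0.2)   # fails the cut
```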
After implementing these conditions, i.e., length of the light curve $\geq90$ and $R<0.2$, we are restricted to two BL\,Lacs, viz. Mkn\,501 and Mkn\,421, and one FSRQ, viz. 3C\,273. The above conditions are met with a 2-day time bin in 3C\,273 and a 10-day time bin for Mkn\,421, while in Mkn\,501 these conditions are satisfied in the unbinned and 2-day time binned light curves (see Table \ref{tab:R}). However, in order to obtain evenly sampled light curves in all three sources, we have used the 2-day time binned light curve for Mkn\,501. The lengths of the light curves and the `R' values of 3C\,273, Mkn\,501 and Mkn\,421 are given in Table \ref{tab:R}. \\
3C\,273 is a well studied FSRQ located at a redshift $z\sim0.158$. The RXTE X-ray spectrum of this source is fitted with a power-law model with an average power-law index of $\Gamma=$ 1.70$\pm$0.01 \citep{2013ApJ...772..114R}. The obtained RXTE flux/index light curves of 3C\,273 are shown in Fig. \ref{fig:3c279}. The two BL\,Lac objects, viz. Mkn\,501 and Mkn\,421 at redshifts of 0.033 and 0.031 respectively, are very well known high synchrotron peaked BL\,Lac (HBL) objects, with the low energy SED component peaking at frequency $\nu_{p}> 10^{15.3}$ Hz. The RXTE spectra of these sources are better fitted with a broken power-law model \citep{2013ApJ...772..114R}. For Mkn\,501, the average values of the power-law photon index before the break energy ($\Gamma1$) and after the break energy ($\Gamma2$) are obtained as 1.97$\pm$0.02 and 2.02$\pm$0.01 respectively, with a break energy at 6.9$\pm$1.2 keV, whereas for Mkn\,421, the average values of $\Gamma1$ and $\Gamma2$ are obtained as 2.41$\pm$0.09 and 2.75$\pm$0.01 with a break energy of 6.6$\pm$0.4 keV. The RXTE flux and $\Gamma1$ light curves of Mkn\,501 and Mkn\,421 are shown in Figs \ref{fig:mkn501} and \ref{fig:mkn421}.
Moreover, since RATSD provides light curves from a pointing instrument (PCA), there is a possibility of bias being introduced in the flux distributions if the sources were preferentially observed in high flux states. To check for this, we looked for correlation between the flux and the time-gap (time between two observations) with Spearman's rank correlation test. The null hypothesis probability values (P-values) for 3C\,273, Mkn\,501 and Mkn\,421 are obtained as 0.07, 0.15 and 0.08 respectively. In the case of Spearman's correlation, the null hypothesis ($H_{0}$) is that there is no correlation between the two variables, and we reject the null hypothesis if $P<0.01$. Therefore, the obtained P-values indicate that there is no significant correlation between flux and time-gap.
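The flux versus time-gap test described above can be sketched with synthetic data (not the RXTE light curves): with observing gaps drawn independently of the flux, the Spearman rank correlation should be consistent with zero.

```python
import numpy as np
from scipy.stats import spearmanr

# Sketch of the flux vs. time-gap correlation test.  Observing gaps are
# drawn independently of the flux, so the Spearman rank correlation should
# be consistent with zero (synthetic data, not RXTE).

rng = np.random.default_rng(1)
flux = rng.lognormal(mean=0.0, sigma=0.5, size=300)
time_gaps = rng.exponential(scale=5.0, size=300)      # days between observations

rho, p_value = spearmanr(flux, time_gaps)
print(rho, p_value)   # the null hypothesis of no correlation is rejected only if p < 0.01
```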
\begin{table}
\centering
\caption{Light curve significance fraction $R$ values for the unbinned/binned flux and index light curves. Col:- \,1: Selected blazars satisfying the conditions $R<0.2$ and length of binned flux/index light curve $\geq 90$, 2: Number of data points in the distributions, 3: $R$-value for the index light curve, and 4: $R$-value for the flux light curve.}
\begin{tabular}{lccc}
\hline
\hline
Blazar name & \multicolumn{1}{c}{Number of } & \multicolumn{1}{c}{$R_\Gamma$} & \multicolumn{1}{c}{$R_{Flux}$}\\
& data points & & \\
\hline
3C\,273 & 1960 (unbinned) & 0.22 & $2.0\times10^{-3}$\\
& 1151 (2-days binned) & 0.19 & $1.5\times10^{-3}$\\
\hline
Mkn\,501 & 496 (unbinned) & 0.05 & $1.0\times10^{-4}$\\
& 188 (2-days binned) & 0.06 & $1.0\times10^{-4}$\\
\hline
Mkn\,421 & 1182 (unbinned) & 0.75 & $6.8\times10^{-5}$\\
& 93 (10-days binned) & 0.17 & $9.6\times10^{-6}$\\
\hline
\end{tabular}
\label{tab:R}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.44,angle=-90]{3c273.eps}
\vspace{-0.4cm}
\caption{The X-ray flux/index light curves of 3C\,273 obtained by using the sixteen years of RXTE archive data. Top panel: 2-days time binned flux light curve obtained in the energy range 2-10\,keV, bottom panel: 2-days time binned index light curve.}
\label{fig:3c279}
\end{figure}
\begin{figure}
\vspace{-0.4cm}
\centering
\includegraphics[scale=0.44,angle=-90]{mkn501.eps}
\vspace{-0.4cm}
\caption{The X-ray flux and $\Gamma1$ light curves of Mkn\,501 obtained by using the sixteen years of RXTE archive data. Top panel and bottom panel are the same as in Fig. \ref{fig:3c279}.}
\label{fig:mkn501}
\end{figure}
\begin{figure}
\vspace{-0.4cm}
\centering
\includegraphics[scale=0.44,angle=-90]{mkn421.eps}
\vspace{-0.4cm}
\caption{The X-ray flux and $\Gamma1$ light curves of Mkn\,421 obtained by using the sixteen years of RXTE archive data. Top panel: 10-days time binned flux light curve obtained in the energy range 2-10\,keV, bottom panel: 10-days time binned $\Gamma1$ light curve.}
\label{fig:mkn421}
\end{figure}
\section{Distribution study for blazars}
\subsection{AD test} \label{sec:tau_a_pert}
Generally, the flux distribution of blazar light curves shows an asymmetric/tailed trend. The Anderson--Darling (AD) test statistic is a useful tool for testing normality and is sensitive to the tails of a distribution \citep{1992nrfa.book.....P}. The null hypothesis ($H_0$) of the AD test is that the data sample is drawn from a particular distribution (a normal distribution in our case). The AD test yields the null-hypothesis probability value (p--value), such that a p--value $<0.01$ indicates a deviation of the sample from normality.
The AD test for 3C\,273 shows that the p--values for the flux in log-scale and the index in linear-scale are larger than 0.01, which suggests that the flux distribution is consistent with a log-normal and the index distribution with a normal one. However, for the two BL\,Lac objects, viz. Mkn\,501 and Mkn\,421, the p--values for the flux in both linear- and log-scale are much smaller than 0.01, indicating that the flux distribution is neither normal nor log-normal. Moreover, the p--values of the index distributions also suggest a non-normal distribution of the index in Mkn\,501 and Mkn\,421. The AD test results for 3C\,273, Mkn\,501 and Mkn\,421 are summarized in Table \ref{tab:AD}.
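As a concrete illustration, the AD normality test can be run with SciPy's \texttt{anderson} routine. The sketch below uses synthetic log-normally distributed fluxes (the sample size, seed and parameters are illustrative, not the actual 3C\,273 measurements); note that SciPy reports the test statistic against tabulated critical values rather than a p--value, with the entry at \texttt{significance\_level == 1.0} corresponding to $p = 0.01$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic stand-in for a 2-10 keV light curve: fluxes drawn log-normally,
# so log10(flux) is Gaussian by construction (parameters are illustrative).
flux = 10.0 ** rng.normal(loc=-9.9, scale=0.12, size=1151)

# AD test against the normal family, in linear and in log scale.
ad_linear = stats.anderson(flux, dist='norm')
ad_log = stats.anderson(np.log10(flux), dist='norm')

# scipy reports critical values instead of a p-value; the entry at
# significance_level == 1.0 corresponds to p = 0.01.
crit_1pct = ad_log.critical_values[ad_log.significance_level == 1.0][0]
print(ad_linear.statistic, ad_log.statistic, crit_1pct)
```

With such data the linear-scale statistic greatly exceeds the log-scale one, mirroring the behavior seen for 3C\,273.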
\subsection{Histogram of Flux and Index}
Histogram fitting is also a helpful tool to characterize the nature of the distribution. Here, we construct normalized histograms of the logarithm of flux and of the photon index in linear-scale for the three selected blazars. The bin-widths of the histograms are chosen such that each bin carries an equal number of data points. In the case of the FSRQ 3C\,273, the single peak in the flux and index histograms suggests a single distribution; hence we fitted these histograms with the probability density function (PDF) given by
\begin{equation}\label{eq:gauss}
\rm f(x) = \frac{1}{\sqrt{2\pi \sigma^2}} e^\frac{-(x-\mu)^2}{2\sigma^2}
\end{equation}
where $\rm \mu$ and $\rm \sigma$ are the centroid and width of the distribution.
The flux and index histograms along with best fitted PDF (equation \ref{eq:gauss}) are shown in the multiplot (Fig. \ref{fig:273_corr}) and the best fit parameters are summarized in Table \ref{tab:2distpar_fsrq}. The fit parameters confirm that the flux distribution of 3C\,273 is log-normal while its index is normally distributed. These results are consistent with the results obtained from the AD test statistic. On the other hand, the flux and index histograms of the two BL\,Lacs (Mkn\,501 and Mkn\,421) show a double-peaked structure. Also, the AD test results of these two BL\,Lacs show that the flux distribution is neither Gaussian nor log-normal, and the index distribution is not Gaussian. We therefore analyzed these distributions by fitting their histograms with double PDF
\begin{equation}\label{eq:dpdf}
\rm d(x) = \frac{a}{\sqrt{2\pi \sigma_1^2}} e^\frac{-(x-\mu_1)^2}{2\sigma_1^2} \
+ \frac{(1-a)}{\sqrt{2\pi \sigma_2^2}} e^\frac{-(x-\mu_2)^2}{2\sigma_2^2}
\end{equation}
where $\rm a$ is the normalization fraction, and $\rm \mu_1$ and $\rm \mu_2$ are the centroids of the distributions with widths $\rm \sigma_1$ and $\rm \sigma_2$, respectively. Fitting the flux histogram in log-scale and the index histogram in linear-scale with equation \ref{eq:dpdf} results in a double log-normal fit of the flux distribution and a double normal fit of the index distribution. During the double log-normal fit of the flux histogram, we kept all parameters free, viz. $\mu_1$, $\mu_2$, $\sigma_1$, $\sigma_2$ and a. However, in the double normal fit of the index histogram, we fixed the normalization fraction `a' to the best fit value obtained in the double log-normal flux histogram fit and carried out the fitting with $\mu_1$, $\mu_2$, $\sigma_1$ and $\sigma_2$ as free parameters. The same normalization fraction in the flux and index double PDFs ensures a similar contribution of the respective components to the flux and index distributions. The best fit parameter values from fitting the double-distribution flux/index histograms with equation \ref{eq:dpdf} are given in Table \ref{tab:2distpar} and the corresponding plots are shown in Figs \ref{fig:501_corr} and \ref{fig:421_corr}. In the case of Mkn\,421, the normalization fraction `a' is not well constrained due to poor data statistics in the flux histogram; its best fit value ranges from 0.05--0.95. In this case, we fixed the normalization fraction `a' for both the flux and index distributions to 0.3 (Table \ref{tab:2distpar}). The double log-normal fit to the flux histograms of Mkn\,501 and Mkn\,421 gave ${\chi^2}/{dof}$ values of ${38.28}/{33}$ and ${11.85}/{15}$, respectively, while the other double distribution functions gave: a combination of log-normal and Gaussian, ${\chi^2}/{dof}$ of ${38.38}/{33}$ for Mkn\,501 and ${12.75}/{15}$ for Mkn\,421; a combination of Gaussian and log-normal, ${\chi^2}/{dof}$ of ${50.36}/{33}$ for Mkn\,501 and ${14.10}/{15}$ for Mkn\,421; and a double Gaussian, ${\chi^2}/{dof}$ of ${41.58}/{33}$ for Mkn\,501 and ${14.70}/{15}$ for Mkn\,421.
These reduced $\chi^2$ values suggest that the double log-normal fit and the lognormal+Gaussian fit to the flux histograms of the two BL\,Lacs are equally good. The best fit parameter values obtained by fitting the flux histograms with the lognormal+Gaussian PDF are given in Table \ref{tab:2distpar_lg}. Further, we find that the photon index distributions of both Mkn\,501 and Mkn\,421 are well fitted with a double Gaussian distribution function, with ${\chi^2}/{dof}$ of ${39.1}/{34}$ and $16.94/14$, respectively.
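The double-PDF histogram fit described above can be sketched with \texttt{scipy.optimize.curve\_fit} on synthetic data that roughly mimic the Mkn\,501 best-fit values; the seed, bin count, starting values and bounds below are illustrative assumptions, not the actual analysis settings:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_pdf(x, mu1, s1, mu2, s2, a):
    """Equation (2): weighted sum of two normalized Gaussians."""
    g1 = np.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / np.sqrt(2 * np.pi * s1 ** 2)
    g2 = np.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / np.sqrt(2 * np.pi * s2 ** 2)
    return a * g1 + (1 - a) * g2

rng = np.random.default_rng(1)
# Synthetic bimodal log10(flux) sample, roughly mimicking the Mkn 501
# best-fit values of Table 4 (188 points, 83% in the brighter component).
n = 188
bright = rng.random(n) < 0.83
logf = np.where(bright, rng.normal(-9.62, 0.10, n), rng.normal(-10.02, 0.14, n))

# Equal-count bins, as in the text, then a normalized histogram.
edges = np.quantile(logf, np.linspace(0.0, 1.0, 16))
dens, _ = np.histogram(logf, bins=edges, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])

p0 = [-9.6, 0.1, -10.0, 0.15, 0.8]
bounds = ([-11, 0.01, -11, 0.01, 0.0], [-9, 1.0, -9, 1.0, 1.0])
popt, _ = curve_fit(double_pdf, centers, dens, p0=p0, bounds=bounds)
print(dict(zip(["mu1", "sigma1", "mu2", "sigma2", "a"], np.round(popt, 2))))
```

The same routine, with `a` fixed, would serve for the index histogram fit described in the text.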
\begin{table*}
\caption{AD test results for the flux/index distributions of the three selected blazars, viz. 3C\,273, Mkn\,501 and Mkn\,421. Columns: 1: Selected blazars satisfying the conditions $R<0.2$ and length of binned flux/index light curve $\geq90$; 2: Number of data points in the distributions; 3,4: AD statistics for the flux and logarithm-of-flux distributions; 5: AD statistics for the index distribution.}
\begin{tabular}{c c c c c}
\hline
\hline
Blazar name & \multicolumn{1}{c}{Number of } & \multicolumn{1}{c}{Normal (Flux)} & \multicolumn{1}{c}{Log-normal (Flux)} & \multicolumn{1}{c}{Normal (Spectral index)}\\
& data points & AD(p--value) & AD(p--value) & AD(p--value) \\ \hline
3C\,273 & 1151 (2-days binned) & 13.02 (<$2.2\times10^{-16}$) & 0.76 (0.06) & 0.57 (0.14) \\
\hline
Mkn\,501 & 188 (2-days binned) & 15.89 (<$2.2\times10^{-16}$) & 2.78 ($4.96\times10^{-7}$) & 1.24 ($3.0\times10^{-3}$)\\
\hline
Mkn\,421 & 93 (10-days binned) & 2.29 ($7.44\times10^{-6}$) & 1.27 ($2.5\times10^{-3}$) & 1.09 ($7.0\times10^{-3}$) \\
\hline
\hline
\end{tabular}
\label{tab:AD}
\end{table*}
\begin{table*}
\caption{Best fit parameter values of the PDF (equation \ref{eq:gauss}) fitted to the logarithm-of-flux and index histograms. Columns: 2: Histogram obtained from the logarithm of flux or the linear index distribution; 3,4: Best fit values of $\mu$ and $\sigma$; 5: Degrees of freedom; 6: Reduced $\chi^{2}$.}
\begin{tabular}{c c c c c c }
\hline
\hline
Blazar name & Histogram & $\mu$ & $\sigma$ & dof & $\rm \chi^2/dof$ \\
\hline
3C\,273 & log10(Flux) & -9.922$\pm$0.004 & 0.124$\pm$0.003 & 21 & 0.92\\
&Index & 1.642$\pm$0.003 & 0.088$\pm$0.002 & 21 & 1.22\\
\hline
\end{tabular}
\label{tab:2distpar_fsrq}
\end{table*}
\begin{table*}
\caption{Best fit parameter values of the double PDF (equation \ref{eq:dpdf}) fitted to the logarithm-of-flux and index histograms. Columns: 2: Histogram obtained from the logarithm of flux or the linear index distribution; 3--6: Best fit values of $\mu_1$, $\sigma_1$, $\mu_2$ and $\sigma_2$; 7: Normalization fraction; 8: Degrees of freedom; 9: Reduced $\chi^{2}$.}
\begin{tabular}{c c c c c c c c c}
\hline
\hline
Blazar name & Histogram & $\mu_1$ & $\sigma_1$ & $\mu_2$ & $\sigma_2$ & \multicolumn{1}{c}{a} & dof & $\rm \chi^2/dof$ \\
\hline
Mkn\,501 & log10(Flux) & -9.62$\pm$0.04 & 0.10$\pm$0.03 & -10.02$\pm$0.02 & 0.14$\pm$0.02 & 0.83$\pm$0.06 & 33 & 1.16\\
& Index & 1.74$\pm$0.03 & 0.09$\pm$0.02 & 2.19$\pm$0.02 & 0.15$\pm$0.01 & 0.83 & 34 & 1.15\\
\hline
Mkn\,421 & log10(Flux) & -9.36$\pm$0.05 & 0.26$\pm$0.05 & -10.10$\pm$0.08 & 0.29$\pm$0.08 & 0.3 & 15 & 0.79\\
& Index & 2.54$\pm$0.05 & 0.21$\pm$0.04 & 3.09$\pm$0.48 & 0.51$\pm$0.31 & 0.3 & 14 & 1.21\\
\hline
\end{tabular}
\label{tab:2distpar}
\end{table*}
\begin{table*}
\caption{Best fit parameter values of the double PDF (lognormal+Gaussian) fitted to the logarithm-of-flux histogram. Columns: 2: Histogram obtained from the logarithm of flux distribution; 3--6: Best fit values of $\mu_1$, $\sigma_1$, $\mu_2$ and $\sigma_2$; 7: Normalization fraction; 8: Degrees of freedom; 9: Reduced $\chi^{2}$.}
\begin{tabular}{c c c c c c c c c}
\hline
\hline
Blazar name & Histogram & $\mu_1$ & $\sigma_1$ & $\mu_2$ & $\sigma_2$ & \multicolumn{1}{c}{a} & dof & $\rm \chi^2/dof$ \\
\hline
Mkn\,501 & log10(Flux) & -10.03$\pm$0.03 & 0.14$\pm$0.02 & $2.26\times10^{-10}\pm3.998\times10^{-11}$ & $6.51\times10^{-11}\pm2.10\times10^{-11}$ & 0.79$\pm$0.09 & 33 & 1.163\\
\hline
Mkn\,421 & log10(Flux) & -10.10$\pm$0.08 & 0.28$\pm$0.08 & $4.07\times10^{-10}\pm5.20\times10^{-11}$ & $2.48\times10^{-10}\pm4.43\times10^{-11}$ & 0.3 & 15 & 0.85\\
\hline
\end{tabular}
\label{tab:2distpar_lg}
\end{table*}
\subsection{Correlation study}
The three blazars selected from the RXTE catalog have a sufficient number of data points to study the correlation between the photon index and the flux in the 2-10 keV energy band. We used Spearman's rank correlation method to assess the correlation between the index and the flux. The obtained Spearman's rank correlation coefficient ($r_{s}$), its chance correlation probability (P) and the correlation slope (A) are summarized in Table \ref{tab:corrpar}. The correlation parameters show a significant negative correlation between the flux and the index in all three blazars, which is the usual trend blazars show across the electromagnetic spectrum \citep{2015A&A...573A..69W, 2017ApJ...841..123P, 2011MNRAS.413.2785B}.
In the correlation plots (top left panel in Figs \ref{fig:273_corr}, \ref{fig:501_corr} and \ref{fig:421_corr}), the gray bands represent the 1-$\sigma$ errors on the centroids of the logarithm-of-flux and index distributions. In the bottom panel of Fig. \ref{fig:421_corr}, the error on the higher index centroid is large, so a single vertical line is shown instead of a gray band.
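A minimal version of this correlation analysis can be written with SciPy's \texttt{spearmanr} and a least-squares line from \texttt{numpy.polyfit}; the anti-correlated sample below is synthetic and all numbers are illustrative, not the measured 3C\,273 values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Synthetic "harder-when-brighter" sample: photon index anti-correlated
# with log10(flux), plus scatter (all numbers illustrative).
logf = rng.normal(-9.9, 0.12, 300)
index = 1.64 - 0.6 * (logf + 9.9) + rng.normal(0.0, 0.05, 300)

# Spearman's rank correlation coefficient and its chance probability.
r_s, p_chance = stats.spearmanr(logf, index)

# Slope A of the best-fit line in the index vs log10(flux) plane.
coef, cov = np.polyfit(logf, index, 1, cov=True)
A, A_err = coef[0], float(np.sqrt(cov[0, 0]))
print(f"r_s = {r_s:.2f}, P = {p_chance:.1e}, A = {A:.2f} +/- {A_err:.2f}")
```

A significant negative $r_s$ with a tiny chance probability, as recovered here, is the signature reported in Table \ref{tab:corrpar}.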
\begin{table}
\centering
\caption{Spearman correlation results obtained by comparing the flux and index distributions of the three selected blazars. Columns: 1: Selected blazar sources; 2: Spearman's rank correlation coefficient $(r_{s})$; 3: Chance probability of the correlation (P); 4: Slope of the best-fitted line in the correlation plots (A).}
\begin{tabular}{ c c c c }
\hline
\hline
Blazar name & \multicolumn{1}{c}{$r_{s}$ } & \multicolumn{1}{c}{P} & \multicolumn{1}{c}{A}\\
\hline
3C\,273 & -0.60 & $2.0\times10^{-4}$ & -0.61$\pm$0.22\\
Mkn\,501 & -0.65 & $2.96\times10^{-24}$ & -0.94$\pm$0.06\\
Mkn\,421 & -0.86 & $7.25\times10^{-29}$ & -1.21$\pm$0.07\\
\hline
\hline
\end{tabular}
\label{tab:corrpar}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=-90]{fig2.eps}
\caption{Multi-plot for the characterization of the flux/index distribution of 3C\,273. Top left panel: the logarithm of flux vs index scatter plot along with the best fit line (dotted line). Top right panel: histogram of the logarithm of the flux distribution. Bottom panel: histogram of the index distribution. The solid curves in the top right and bottom panels indicate the best fitted PDF (equation \ref{eq:gauss}). The vertical and horizontal gray bands indicate the 1-$\sigma$ error range on the centroid ($\mu$) of the PDF (equation \ref{eq:gauss}) fitted to the index distribution and the logarithm of the flux distribution, respectively.}
\label{fig:273_corr}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=-90]{fig3.eps}
\caption{Multi-plot for the characterization of the flux/index distribution of Mkn\,501. Top left panel: the logarithm of flux vs index scatter plot along with the best fit line (dotted line). Top right panel: histogram of the logarithm of the flux distribution. Bottom panel: histogram of the index distribution. The solid curves in the top right and bottom panels indicate the best fitted double PDF (equation \ref{eq:dpdf}). The vertical and horizontal gray bands indicate the 1-$\sigma$ error ranges on the centroids ($\mu_1$, $\mu_2$) of the double PDF (equation \ref{eq:dpdf}) fitted to the index distribution and the logarithm of the flux distribution, respectively.}
\label{fig:501_corr}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35,angle=-90]{fig4.eps}
\caption{Multi-plot for the characterization of the flux/index distribution of Mkn\,421. The three panels in this plot are the same as described in Fig. \ref{fig:501_corr}. A single vertical line represents the centroid value of the higher index distribution (Section 3.3).}
\label{fig:421_corr}
\end{figure}
\section{Discussion}\label{sect:disc}
After selecting the three blazars, viz. 3C\,273, Mkn\,501 and Mkn\,421, from the sample of blazars in the RXTE catalog, we used the AD test and histogram fitting to characterize their flux and index distributions.
We found that the flux distribution of the FSRQ 3C\,273 follows a log-normal distribution while its index is Gaussian distributed. The log-normal distribution of the flux of 3C\,273 is also observed in the monthly binned $\gamma$-ray light curve \citep{2018RAA....18..141S}. Since variations in the index are related to fluctuations in the acceleration and escape time scales of the emitting particles in the acceleration region, the observed Gaussian distribution of the index therefore indicates linear, normal fluctuations in the intrinsic time scales of the acceleration region. This result is consistent with the study by \citet{2018MNRAS.480L.116S}, who showed that a linear normal fluctuation in the intrinsic particle acceleration time scale in the acceleration region can produce a log-normal flux distribution. The connection of the log-normal flux distribution with the Gaussian index variation indicates that the source of the X-ray flux variations in 3C\,273 is not the accretion disc; instead, the perturbations are mostly local to the jet.
Using the AD test, we found that the flux and index distributions of the two BL\,Lacs, viz. Mkn\,501 and Mkn\,421, are not consistent with a single distribution, and their histograms can be fitted with a double distribution. Interestingly, the overlap of the centroids (within the 1-$\sigma$ errors -- gray bands in Figs \ref{fig:501_corr} and \ref{fig:421_corr}) of the double distributions in the logarithm of flux and in the index with the best fitted correlation line (blue dotted line in the correlation plots), together with the same normalization fraction values for both distributions (Section 3.3, Table \ref{tab:2distpar}), implies that the double distribution in the photon index is connected to the double distribution in the flux. The reduced $\chi^2$ test shows that the flux distributions of Mkn\,501 and Mkn\,421 are either double log-normal or a combination of log-normal and Gaussian, while their index distributions are double Gaussian. However, using the interpretation of \citet{2018MNRAS.480L.116S}, the double Gaussian distribution in the index would preferably indicate a double log-normal distribution in the flux. \citet{2018MNRAS.480L.116S} showed that a Gaussian distribution in the index can be initiated through linear fluctuations in the particle acceleration rate; hence, the log-normal flux distribution may carry information regarding the acceleration processes in blazar jets.
In the study of multi-wavelength flux variations in PKS\,1510-089, the two distinct log-normal profiles found in the flux distribution at near-infrared (NIR), optical and $\gamma$-ray energies are connected to two possible flux states of the source \citep{2016ApJ...822L..13K}. In our work, the observation of a double Gaussian distribution in the index along with a bi-lognormal flux distribution in Mkn\,501 and Mkn\,421 further supports the two-flux-states hypothesis. Moreover, contrary to the double log-normal X-ray flux distributions of Mkn\,501 and Mkn\,421, the study of the $\gamma$-ray flux distributions of the brightest \emph{Fermi} blazars shows a single log-normal flux distribution at $\gamma$-ray energies \citep{2018RAA....18..141S}. However, it should be noted that the results of Shah et al. (2018) were obtained from light curves with a bin size of a month, which is longer than those used in this work; a longer binning might remove the second component of the distribution. In the case of HBL sources like Mkn\,501 and Mkn\,421, the X-ray spectrum lies beyond the synchrotron peak and hence is mainly emitted by the high-energy end of the electron distribution, while in FSRQs the X-ray emission is mainly due to the low-energy tail of the electron distribution. Further, in the case of HBL sources, the low-energy $\gamma$-ray spectrum occurs before the break energy of the inverse Compton component. Therefore, the differences observed between the X-ray flux distributions of the FSRQ 3C\,273 and the HBLs (Mkn\,501 and Mkn\,421) may possibly be related to the energy of the emitting particles. Thus, it seems that low-energy emitting particles produce a single log-normal flux distribution while the high-energy tail of the electron distribution produces a double log-normal flux distribution. However, such an inference can only be confirmed by carrying out a detailed flux distribution study for a sample of sources with more statistically significant light curves.
In this direction, systematic, regular long-term monitoring of blazars with MAXI would be important for probing such behavior. It will be interesting and important to quantify the flux distributions of these sources in other wave-bands, as that may strengthen the results presented here. However, continuous, significant and reliable detection of the flux and index in other bands is not available at present. We note that the upcoming Cherenkov Telescope Array \citep{2019EPJWC.20901038C} may provide such high quality light curves in $\gamma$-rays.
\section*{Acknowledgements}
We thank the anonymous referee for valuable comments and suggestions. This work has made use of light curves provided by the University of California, San Diego Center for Astrophysics and Space Sciences, X-ray Group (R.E. Rothschild, A.G. Markowitz, E.S. Rivers, and B.A. McKim), obtained at \url{https://cass.ucsd.edu/~rxteagn/}. R. Khatoon and R. Gogoi would like to thank CSIR, New Delhi (03(1412)/17/EMR-II) for financial support. R. Khatoon would like to thank IUCAA, Pune for the hospitality. R. Gogoi would like to thank IUCAA for associateship.
\bibliographystyle{mnras}
\section{Introduction}
Recently, much effort has been devoted to understanding, mapping, and modeling
large-scale human and animal trajectories~\cite{BrockmannA,Gonzalez,Viswanathan,Sims, Benichou3,Benichou2,Rhee,Koren,BrockmannB,Fedreschi,Lee,BrockmannC}. Examples include models that describe agents searching a space for a target of unknown position \cite{Benichou2,Benichou3,Koren}; mobility tracking in cellular environments~\cite{Fedreschi,Das,KimB}; human infectious disease dynamics and mobile phone viruses~\cite{BrockmannB,BrockmannC,Lloyd,Kleinberg,Pu}; and traffic, transportation, and urban planning \cite{Gonzalez}, and references therein. Understanding spatiotemporal patterns and characterizing human mobility and social interactions can now be achieved thanks to the extensive and widespread use of wireless communication devices~\cite{Fedreschi}.
There is growing experimental evidence that the movement of many species, including humans, can be described by a class of non-trivial random walk models known as L\'{e}vy flights~\cite{BrockmannA,Gonzalez,Klafter}. A L\'{e}vy flight can be considered a generalization of Brownian motion \cite{Klafter} and belongs to the class of scale-invariant, fractal random processes. L\'{e}vy flights are Markovian stochastic processes in which step lengths $\lambda$ are drawn from a power law distribution:
\begin{equation}
\lambda(x)\simeq1/|x|^{\alpha+1},
\end{equation}
where $0 < \alpha \leq 2$ is the L\'{e}vy exponent. This implies that the second moment of the step-length distribution diverges and extremely long jumps are possible. For a review, see \cite{Chechkin}.
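Step lengths with this power-law tail can be generated by inverse-transform sampling of the survival function $S(x)=(x/x_0)^{-\alpha}$; the sketch below assumes a lower cutoff $x_0$, which any finite simulation requires:

```python
import numpy as np

def levy_steps(n, alpha, x0=1.0, rng=None):
    """Draw n step lengths from p(x) ~ x^-(alpha+1), x >= x0 (a Pareto
    law), via inverse-transform sampling of the survival function."""
    rng = np.random.default_rng() if rng is None else rng
    u = 1.0 - rng.random(n)            # U ~ Uniform(0, 1]
    return x0 * u ** (-1.0 / alpha)    # S(x) = (x/x0)^-alpha  =>  x = x0 * U^(-1/alpha)

rng = np.random.default_rng(0)
steps = levy_steps(100_000, alpha=1.5, rng=rng)

# For 0 < alpha <= 2 the second moment diverges: the sample variance is
# dominated by the few largest jumps and does not settle as n grows.
print(steps.min(), steps.max())
```

The largest sampled step grows roughly as $n^{1/\alpha}$, which is what makes L\'{e}vy trajectories qualitatively different from Brownian ones.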
L\'{e}vy statistics, somewhat controversially, have been found in the search behavior of many species including human hunter-gatherers~\cite{Brown}. L\'{e}vy flight-like movement patterns have been observed while tracking dollar bills \cite{BrockmannA}, mobile phone users \cite{Gonzalez}, and GPS trajectory traces obtained from taxicabs and volunteers in various outdoor settings \cite{Rhee,Jiang}.
The mobile phone users studied in \cite{Gonzalez} reveal behavior similar to L\'{e}vy patterns, but individual trajectories show a high degree of temporal and spatial regularity. That investigation analyzed the trajectories of $10^5$ anonymized phone users, randomly selected from more than six million subscribers. The unique mobility patterns found in~\cite{Gonzalez} show a time-independent characteristic travel distance and a high probability of frequently returning to a few locations. Additionally, real users' mobility patterns may be approximated by L\'{e}vy flights, but only up to a distance characterized by the radius of gyration $r_{g}$. This quantity represents the characteristic distance travelled by a user $a$ observed up to time $t$, defined as
\begin{equation}
r_{g}^{a}(t)=\sqrt{\frac{1}{n_{c}^{a}(t)}\sum_{i=1}^{n_{c}^{a}}(\mathbf{r}_{i}^{a}-\mathbf{r}_{\mathrm{CM}}^{a})^{2}},
\end{equation}
where $\mathbf{r}_{i}^{a}$ represents the $i=1,\ldots,n_{c}^{a}(t)$ positions recorded for user $a$ and $\mathbf{r}_{\mathrm{CM}}^{a}=1/{n_{c}^{a}}(t)\sum_{i=1}^{n_{c}^{a}}\mathbf{r}_{i}^{a}$ is the center of mass of the trajectory~\cite{Gonzalez}. Interestingly, it has been observed that the radius of gyration increases only logarithmically in time, which cannot be explained by traditional L\'{e}vy models; therefore, we must return to the data for further analysis.
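Equation (2) translates directly into code; the toy trajectory below (90 calls at one site, 10 calls ten distance units away) illustrates how a few long-range visits dominate $r_g$:

```python
import numpy as np

def radius_of_gyration(positions):
    """Equation (2): r_g of the recorded positions (an n_c x 2 array)."""
    r_cm = positions.mean(axis=0)                     # center of mass
    return np.sqrt(np.mean(((positions - r_cm) ** 2).sum(axis=1)))

# Toy trajectory: 90 calls at "home" and 10 calls 10 units away at "work".
home = np.zeros((90, 2))
work = np.tile([10.0, 0.0], (10, 1))
r_g = radius_of_gyration(np.vstack([home, work]))
print(r_g)  # → 3.0
```

Even though only 10\% of the calls occur at the distant site, they contribute 90\% of the mean squared deviation from the center of mass.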
\subsection{Motivation and Open Questions}
The work in~\cite{Gonzalez} showed that human mobility patterns are well characterized by the radius of gyration $r_g$. This is a static measure, in the sense that the order of movement between locations is irrelevant, and it can therefore characterize time-invariant properties of the trajectory. Mobile phone data only sample the actual underlying trajectory, however, with a user-driven, heterogeneous sampling rate~\cite{Gonzalez}, and this complicates the study of the user's real mobility. In the same way that $r_g$ avoids these sampling problems by ``integrating'' over time, an appropriate spatial coarse-graining can provide a basic picture of the time-dependent, evolving characteristics of a subject's mobility pattern.
In this paper, we apply a simple clustering algorithm to the spatial locations of a user's trajectory. Finding clusters of frequently visited locations (such as home and work) and collapsing them to a single entity reduces the complexity of the full trajectory while allowing for simple statistics to capture properties relating to how users move between locations. Interesting questions include:
\begin{enumerate}
\item How spatially separated are such clusters?
\item How often are clusters (re-)visited? How long do users dwell within clusters?
\item Are larger clusters (more recorded calls over time) more spatially dispersed (as quantified by the $r_g$ of the cluster's elements) than smaller clusters? How do the $r_g$'s of clusters relate to the total $r_g$?
\end{enumerate}
The remainder of this paper is organized as follows. We first briefly discuss the dataset (Sec.~\ref{subsec:data}) and the clustering algorithm used to analyze it (Sec.~\ref{subsec:clustering}). We then introduce several important statistics, calculate them from the dataset at hand, and discuss their implications (Sec.~\ref{sec:results}). A summary and discussion of future work follows (Sec.~\ref{sec:conc}).
\subsection{Data Set}\label{subsec:data}
As in~\cite{Gonzalez,Pu} we analyze data from a European mobile phone carrier. The data contains the date and time of phone calls and text messages from 6 million anonymous users as well as the spatial location of the phone towers routing these communications. User locations within a tower's service area are not known. From this full dataset, we select a random subset of 60\,000 users that make or receive at least one phone call during June -- August 2007. The call history of each user was then used to reconstruct their trajectory of motion during that time period.
\subsection{Clustering}\label{subsec:clustering}
Our analysis is based on $k$-means clustering which, in general, divides $N$-dimensional populations (or observations) into $k$ distinct sets or clusters by minimizing the intra-cluster sum of squares $w^2$ of an appropriately defined distance metric. Using the notation in \cite{MacQueen} (see also \cite{Gan}):
\begin{equation}
w^2(S)=\sum_{i=1}^{k}\int_{S_i}\left|z-\mu_i\right|^2 \mathrm{d} p(z),
\end{equation}
where $p$ is the probability mass function for the observations $S=\{S_1,S_2,\ldots,S_k\}$, and $\mu_i$ $(i=1, \ldots, k)$ is the conditional mean of $p$ over the set $S_i$. For this work, $\mu_i$ is the center of mass of cluster $i$, and we seek the locations $\mu_i$ that minimize the squared Euclidean distance between $\mu_i$ and that cluster's recorded call locations.
For simplicity, we assume $k=2$ clusters throughout this work. This is a serious assumption, and identifying the correct number of clusters on a per-user basis remains important future work. However, we have found that the majority of users are well clustered with $k=2$. To show this, we compute the mean \emph{silhouette value}~\cite{silhouette} for each user. Define $A(\mathbf{r}_i)$ as the average square (Euclidean) distance from $\mathbf{r}_i$ to all other points in the same cluster, and define $B(\mathbf{r}_i)$ as the average square distance from $\mathbf{r}_i$ to all points in the other cluster. The silhouette value $s(\mathbf{r}_i)$ for point $\mathbf{r}_i$ is
\begin{equation}
s(\mathbf{r}_i) \equiv \frac{B(\mathbf{r}_i) - A(\mathbf{r}_i)}{\max\left\{B(\mathbf{r}_i), A(\mathbf{r}_i)\right\}},
\end{equation}
and takes values between $-1$ and 1, with larger values indicating that $\mathbf{r}_i$ is increasingly well separated from the other cluster. Taking the average $\left<s\right> \equiv \left<s(\mathbf{r}_i)\right>_i$ over all points then provides a single statistic measuring how well the whole dataset is clustered. Poor choices of $k$, for example, lead to smaller $\left<s\right>$~\cite{silhouette}. We find that 91.8\% of users have $\left<s\right> > 0.8$ and 80.8\% have $\left<s\right> > 0.9$, indicating that the majority of users are well clustered with just $k=2$.
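A self-contained sketch of the $k=2$ clustering and the squared-distance silhouette defined above is given below (a minimal Lloyd's-algorithm implementation on synthetic data, not the production pipeline; it assumes both clusters stay non-empty during the iteration):

```python
import numpy as np

def kmeans2(X, iters=100, seed=0):
    """Minimal Lloyd's algorithm for k = 2 (squared Euclidean distance)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=2, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        new = np.array([X[labels == k].mean(axis=0) for k in (0, 1)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def mean_silhouette_sq(X, labels):
    """Mean silhouette value, using the squared-distance convention above."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    s = np.empty(len(X))
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False                      # exclude the point itself from A
        A = d2[i, same].mean()
        B = d2[i, labels != labels[i]].mean()
        s[i] = (B - A) / max(A, B)
    return s.mean()

rng = np.random.default_rng(3)
# Toy trajectory: a dense primary cluster and a distant secondary one.
X = np.vstack([rng.normal([0.0, 0.0], 1.0, (80, 2)),
               rng.normal([50.0, 0.0], 3.0, (20, 2))])
labels, centers = kmeans2(X)
print(mean_silhouette_sq(X, labels))         # near 1 for well-separated clusters
```

For geometries like this, with a compact primary cluster far from a sparser secondary one, the mean silhouette is close to 1, matching the high $\left<s\right>$ values reported for most users.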
\section{Results}\label{sec:results}
For each user, we apply the $k$-means algorithm to their trajectory, partitioning the call locations into $k=2$ sets.
The number of calls in clusters 1 and 2 for user $a$ are $N_1(a)$ and $N_2(a)$, respectively (we identify $N_1(a) \geq N_2(a)$ such that cluster 1 is the primary cluster) and $N_T(a) = N_1(a)+N_2(a)$ is the total number of calls that user $a$ makes during the sample period. The distribution $P(N)$ of the number of calls is shown in Fig.~\ref{fig:cluster}.
Using the spatial distribution of calls we compute each cluster's center of mass, $\mathbf{r}_{\mathrm{CM}}^{(1)}$ and $\mathbf{r}_{\mathrm{CM}}^{(2)}$, and radius of gyration, $r_g^{(1)}$ and $r_g^{(2)}$, as well as the total center of mass and radius of gyration for all points, $\mathbf{r}_{\mathrm{CM}}^{(T)}$, and $r_g^{(T)}$, respectively. The distributions of these quantities are shown in Fig.~\ref{fig:spatial}.
To quantify relationships between the two clusters, we compute the separation between their centers of mass, $d_{\mathrm{CM}}$:
\begin{equation}
d_{\mathrm{CM}} = \left\|\mathbf{r}_{\mathrm{CM}}^{(1)}-\mathbf{r}_{\mathrm{CM}}^{(2)}\right\|,
\end{equation}
where $\left\|\ldots\right\|$ is the Euclidean norm, and we also count how often a user ``jumps'' between the two clusters. A user who makes $N_T$ calls will have $N_T-1$ jumps between locations (including remaining at the current location). We define $F_\mathrm{CC}$ as the fraction of cross-cluster jumps, those that begin and end in different clusters. The distributions of $F_\mathrm{CC}$ and $d_{\mathrm{CM}}$ are shown in the insets of Figs.~\ref{fig:cluster} and \ref{fig:spatial}, respectively.
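The two statistics $d_\mathrm{CM}$ and $F_\mathrm{CC}$ follow directly from the time-ordered, labeled call sequence; the toy example below is illustrative:

```python
import numpy as np

def cluster_stats(positions, labels):
    """Separation of the two cluster centroids and the fraction of
    cross-cluster jumps along the time-ordered call sequence."""
    cm = [positions[labels == k].mean(axis=0) for k in (0, 1)]
    d_cm = np.linalg.norm(cm[0] - cm[1])
    jumps = labels[1:] != labels[:-1]         # the N_T - 1 consecutive transitions
    return d_cm, jumps.mean()

# Time-ordered toy sequence: mostly cluster 0 with one round trip to cluster 1.
labels = np.array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
positions = np.where(labels[:, None] == 0, [0.0, 0.0], [100.0, 0.0])
d_cm, f_cc = cluster_stats(positions, labels)
print(d_cm, f_cc)  # → 100.0 0.2222...
```

Here the single round trip contributes two of the nine transitions, so $F_\mathrm{CC} = 2/9$, while $d_\mathrm{CM}$ is the centroid separation.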
The number of calls per cluster indicates that users spend the majority of their time in one cluster and visit the other more rarely. The fraction of cross-cluster jumps is small, $\left<F_\mathrm{CC}\right>_\mathrm{users} < 0.1$ (where $\left<\ldots\right>_\mathrm{users}$ is an average over all sampled users), indicating that the primary cluster provides a stable location in which the user dwells. Likewise, $d_\mathrm{CM}$ is relatively large, $\left<d_\mathrm{CM}\right>_\mathrm{users} = 157.8$ km, indicating that we are finding a semi-frequent but long-distance destination. It would be interesting to examine temporal dependencies of the clusters' occupation probabilities: for example, are users more likely to be in the secondary cluster on weekends?
The cluster's radius of gyration summarizes how compact or dispersed user movement is within that cluster. We find that the larger cluster (in terms of the number of calls) tends to be slightly more spatially compact than the smaller cluster, $r_g^{(1)} < r_g^{(2)}$. Both $r_g^{(1)}$ and $r_g^{(2)}$ are much smaller than $r_g^{(T)}$, which is to be expected when $d_\mathrm{CM}$ is large. This means that much of the user's total radius of gyration is generated by movement between two well-separated clusters, as opposed to homogeneous motion over a large space.
\begin{figure*}[!t]
\centering
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[height=\textwidth,angle=-90]{plot_cluster}
\caption{Temporal properties of clusters. Shown is the distribution $P(N)$ of the number of phone calls inside each cluster and the total number of calls, $N_1$, $N_2$, and $N_T=N_1+N_2$, respectively. The majority of calls take place in cluster 1, but a non-negligible number occur in cluster 2. (inset) The fraction of jumps $F_\mathrm{CC}$ from one cluster to another, quantifying how often users travel between their clusters. The primary cluster tends to contain the majority of calls and users tend to move between clusters somewhat rarely, $\left<F_\mathrm{CC}\right>_\mathrm{users}=0.098$, though some move much more frequently.\label{fig:cluster}}
\end{minipage}
\hfill
\begin{minipage}[t]{0.48\linewidth}
\centering
\includegraphics[height=\textwidth,angle=-90]{plot_spatial}
\caption{Spread and separation of trajectory clusters. Shown is the distribution $P(r_g)$ of the radii of gyration for both clusters and over all points, $r_g^{(1)}$, $r_g^{(2)}$, and $r_g^{(T)}$, respectively. The secondary cluster tends to be slightly more spatially dispersed than the primary cluster. Lines are truncated power laws of the form $\left(r_g + r_g^0\right)^{-\beta_r}e^{-r_g / \kappa}$, characterized by parameters $\Theta \equiv \left(r_g^0,\beta_r, \kappa\right)$. For the above curves, $\Theta_1 = \left(5.5,1.5,70\right)$, $\Theta_2 = \left(0.75,0.9,70\right)$, and $\Theta_T = \left(15,1.4,260\right)$, for cluster 1, cluster 2, and both, respectively. (inset) The distribution $P(d_\mathrm{CM})$ of distances between cluster's centers of mass, over all users. The straight line is an exponential distribution with mean $\lambda^{-1} = 157.8$ km, indicating that clusters are often well separated, but distances fall off rather quickly. \label{fig:spatial}}
\end{minipage}
\end{figure*}
\section{Conclusions and Future Work}\label{sec:conc}
We have applied a simple $k$-means clustering algorithm to a large sample of human trajectories generated from mobile phone records. This characterizes how users move within their set of visited locations, and we find that people tend to have one dense, primary cluster and one secondary, dispersed cluster. Coarse-graining a user's trajectory into clusters also quantifies how often users move between clusters, and we find that users spend the majority of their time in the primary cluster but visit the secondary cluster semi-frequently. The clusters themselves tend to be well separated, indicating that the secondary cluster is a long-range destination, but the distribution of these distances over all users falls off exponentially quickly, compared to the total radius of gyration.
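The coarse-graining step can be sketched in a few lines: a two-means clustering of a user's call locations, followed by the fraction $F_\mathrm{CC}$ of consecutive calls that jump between clusters. The trajectory below is invented for illustration, not drawn from the data:

```python
import math

def kmeans2(points, iters=50):
    """Minimal two-means clustering; returns a 0/1 label per point."""
    c0 = points[0]
    c1 = max(points, key=lambda p: math.dist(p, c0))  # farthest-point init
    for _ in range(iters):
        labels = [0 if math.dist(p, c0) <= math.dist(p, c1) else 1
                  for p in points]
        groups = ([p for p, l in zip(points, labels) if l == 0],
                  [p for p, l in zip(points, labels) if l == 1])
        # farthest-point initialization keeps both groups non-empty here
        c0, c1 = [(sum(x for x, _ in g) / len(g),
                   sum(y for _, y in g) / len(g)) for g in groups]
    return labels

# Invented call sequence: mostly a compact primary region near the origin,
# with occasional visits to a distant secondary region around x ~ 120 km.
trajectory = [(0, 0), (1, 1), (0, 1), (120, 2), (1, 0), (0, 0),
              (119, 0), (121, 1), (1, 1), (0, 0)]
labels = kmeans2(trajectory)

# F_CC: fraction of consecutive calls made in different clusters.
jumps = sum(1 for a, b in zip(labels, labels[1:]) if a != b)
f_cc = jumps / (len(labels) - 1)
```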
The most important avenue for future work involves relaxing the assumption of $k=2$ clusters. While mean silhouette values have shown that the data are well characterized by two clusters, it remains to be seen if introducing more clusters improves the picture. Furthermore, since so much of a typical user's time is spent in the primary cluster, there remains the tantalizing possibility that further sub-structure is present within it. In other words, the secondary cluster may represent infrequent long-range trips while the primary cluster may represent the union of home and work clusters, or home and school. Information about important routines such as daily commuting may be contained within the primary cluster.
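The silhouette criterion mentioned above admits a compact from-scratch implementation that could be used to compare $k=2$ against larger $k$; the points below are toy values, and single-member clusters are not handled:

```python
import math

def mean_silhouette(points, labels):
    """Mean silhouette over all points (clusters must have >= 2 members)."""
    total = 0.0
    for i, p in enumerate(points):
        same = [q for j, q in enumerate(points)
                if labels[j] == labels[i] and j != i]
        a = sum(math.dist(p, q) for q in same) / len(same)  # cohesion
        b = min(  # separation: mean distance to the nearest other cluster
            sum(math.dist(p, q) for j, q in enumerate(points)
                if labels[j] == k) / labels.count(k)
            for k in set(labels) if k != labels[i])
        total += (b - a) / max(a, b)
    return total / len(points)

# Two well-separated toy clusters: the k = 2 labeling scores near 1.
points = [(0, 0), (1, 0), (0, 1), (100, 0), (101, 1), (100, 1)]
labels = [0, 0, 0, 1, 1, 1]
s2 = mean_silhouette(points, labels)
```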
\section*{Acknowledgment}
The authors would like to thank Marta Gonz\'alez and Albert-L\'aszl\'o Barab\'asi for fruitful discussions, and Pu Wang and Dashun Wang for providing data. This work was supported by the James S. McDonnell Foundation 21st Century Initiative in Studying Complex Systems; NSF within the Dynamic Data Driven Applications Systems (CNS-0540348), Information Technology Research (DMR-0426737), and IIS-0513650 programs; the Defense Threat Reduction Agency Award HDTRA1-08-1-0027; and the U.S. Office of Naval Research Award N00014-07-C. JPB gratefully acknowledges support from DTRA grant BRBAA07-J-2-0035.
\subsubsection*{\small Article Type:}
Overview
\hfill \break
\subsubsection*{Abstract}
\begin{flushleft}
AI-based systems are widely employed nowadays to make decisions that have far-reaching impacts on individuals and society. Their decisions might affect everyone, everywhere and anytime,
entailing concerns about potential human rights issues.
Therefore, it is necessary to move beyond traditional AI algorithms optimized for predictive performance and embed ethical and legal principles in their design, training and deployment to ensure social good while still benefiting from the huge potential of the AI technology.
The goal of this survey is to provide a broad multi-disciplinary overview of the area of bias in AI systems, focusing on technical challenges and solutions as well as to suggest new research directions towards approaches well-grounded in a legal frame.
In this survey, we focus on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features like race, sex, etc.
\end{flushleft}
\end{center}
\section*{\sffamily \Large Introduction}
Artificial Intelligence (AI) algorithms are widely employed
by businesses, governments, and other organisations in order to make decisions that have far-reaching impacts on individuals and society.
Their decisions might influence everyone, everywhere and anytime, offering solutions to
problems faced in different disciplines or in daily life, but at the same time entailing risks like being denied a job or a medical treatment.
The discriminatory impact of AI-based decision making on certain population groups has already been observed in a variety of cases.
For instance, the COMPAS system for predicting the risk of re-offending was found to predict higher risk values for black defendants (and lower for white ones) than their actual risk \cite{angwin2016machine} (\textit{racial-bias}). In another case, Google’s Ads tool for targeted advertising was found to serve significantly fewer ads for high paid jobs to women than to men \cite{datta2015automated} (\textit{gender-bias}).
Such incidents have led to an ever increasing public concern about the impact of AI in our lives.
Bias is not a new problem; rather, \textit{\q{bias is as old as human civilization}} and \textit{\q{it is human nature for members of the dominant majority to be oblivious to the experiences of other groups}}.\footnote{Fei-Fei Li, Chief-Scientist for AI at Google and Professor at Stanford (\url{http://fortune.com/longform/ai-bias-problem/}).}
However, AI-based decision making may magnify pre-existing biases and evolve new classifications and criteria with huge potential for new types of biases.
These constantly increasing concerns have led to a reconsideration of AI-based
systems towards new approaches that also address the fairness of their decisions.
In this paper, we survey recent technical approaches on bias and fairness in AI-based decision making systems, we discuss their legal ground\footnote{The legal discussion in this paper refers primarily to the EU situation. We acknowledge the difficulty of mapping the territory between AI and the law as well as the extra complexity that country-specific legislation brings upon and therefore, we believe this is one of the important areas for future work.} as well as open challenges and directions towards AI-solutions for societal good.
We divide the works into three broad categories:
\begin{itemize}
\item \textit{Understanding bias}. Approaches that help understand how bias is created in the society and enters our socio-technical systems, is manifested in the data used by AI algorithms, and can be modelled and formally defined.
\item \textit{Mitigating bias.} Approaches that tackle bias in different stages of AI-decision making, namely, pre-processing,
in-processing and post-processing methods focusing on data inputs, learning algorithms and model outputs, respectively.
\item \textit{Accounting for bias.} Approaches that account for bias proactively, via bias-aware data collection,
or retroactively, by explaining AI-decisions in human terms.
\end{itemize}
Figure \ref{fig:overview} provides a visual map of the topics discussed in this survey.
\begin{figure}[t]
\includegraphics[width=\textwidth, trim={1cm 6.5cm 1cm 6cm}, clip]{bias-survey-overview.pdf}
\caption{Overview of topics discussed in this survey.}
\label{fig:overview}
\end{figure}
This paper complements existing surveys that either have a strong focus on machine ethics, such as \cite{yu2018building}, study a specific sub-problem, such as explaining black box models \cite{DBLP:journals/csur/GuidottiMRTGP19,atzmueller2017declarative},
or focus on specific contexts, such as the Web~\cite{baeza2018bias}, by providing a broad categorisation of the technical challenges and solutions, a comprehensive coverage of the different lines of research as well as their legal grounds.
We are aware that the problems of bias and discrimination are not limited to AI and that the technology can be deployed (consciously or unconsciously) in ways that reflect, amplify or distort real world perception and status quo. Therefore, as the roots to these problems are not only technological, it is naive to believe that technological solutions will suffice.
Rather, more than technical solutions are required including socially acceptable definitions of fairness and meaningful interventions to ensure the long-term well-being of all groups. These challenges require multi-disciplinary perspectives and a constant dialogue with the society as bias and fairness are multifaceted and volatile.
Nevertheless, as the AI technology penetrates in our lives, it is extremely important for technology creators to be aware of bias and discrimination and to ensure responsible usage of the technology, keeping in mind that a technological approach on its own is not a panacea for all sort of bias and AI problems.
\section*{\sffamily \Large Understanding Bias}
\label{sec:understanding}
Bias is an old concept in Machine Learning (ML), traditionally referring to the assumptions made by a specific model (\emph{inductive bias})~\cite{Mitchell:1997:ML:541177}. A classical example is Occam's razor preference for the simplest hypothesis. With respect to human bias, its many facets have been studied by disciplines including psychology, ethnography, law, etc.
In this survey, we consider as bias the \textit{inclination or prejudice of a decision made by an AI system which is for or against one person or group, especially in a way considered to be unfair}. Given this definition, we focus on how bias enters AI systems
and how it is manifested in the data comprising the input to AI algorithms.
Tackling bias entails answering the question how to define fairness such that it can be considered in AI systems; we discuss different fairness notions employed by existing solutions.
Finally, we close this section with legal implications of data collection and bias definitions.
\subsection*{\sffamily \large Socio-technical causes of bias}
\label{sec-bias-society}
AI relies heavily on data generated by humans (e.g., user-generated content)
or collected via systems created by humans. Therefore, whatever biases exist in humans enter our systems and even worse, they are amplified due to the complex sociotechnical systems, such as the Web.\footnote{The Web Science Manifesto - By Web Scientists. Building Web Science. For a Better World, \url{https://www.webscience.org/manifesto/}}
As a result, algorithms may reproduce (or even increase) existing inequalities or discrimination \cite{karimi2018homophily}. Within societies, certain social groups may be disadvantaged, which usually results in \q{institutional bias}
where there is a tendency for the procedures and practices of particular institutions to operate in ways in which some social groups are being advantaged and others disadvantaged.
This need not be the result of conscious discrimination but rather of the majority following existing norms. Institutional racism and sexism are common examples \cite{chandler2011dictionary}.
Algorithms are part of existing (biased) institutions and structures, but they may also amplify or introduce bias as they favour those phenomena and aspects of human behaviour that are easily quantifiable over those which are hard or even impossible to measure. This problem is exacerbated by the fact that certain data may be easier to access and analyse than others, which has caused, for example, the role of Twitter for various societal phenomena to be overemphasized \cite{tufekci2014big}. Once introduced, algorithmic systems encourage the creation of very specific data collection infrastructures and policies, for example, they may suggest tracking and surveillance \cite{introna2004picturing} which then change or amplify power relations. Algorithms thus shape societal institutions and potential interventions, and vice versa.
It is currently not entirely clear, how this complex interaction between algorithms and structures plays out in our societies. Scholars have thus called for \q{algorithmic accountability} to improve understanding of the power structures, biases, and influences that algorithms exercise in society \cite{diakopoulos2015algorithmic}.
\subsection*{\sffamily \large How is bias manifested in data?}
\label{sec:bias_in_data}
Bias can be manifested in (multi-modal) data through sensitive features and their causal influences, or through under/over-representation of certain groups.
\subsubsection*{\sffamily \normalsize Sensitive features and causal influences}
Data encode a number of people characteristics in the form of feature values. Sensitive characteristics that identify grounds of discrimination or bias may be present or not. Removing or ignoring such sensitive features does not prevent learning biased models, because other correlated features (also known as \textit{redundant encodings}) may be used as proxies for them. For example, neighbourhoods in US cities are highly correlated with race, and this fact has been used for systematic denial of services such as bank loans or same-day purchase delivery.\footnote{Amazon doesn’t consider the race of its customers. Should It? \url{https://www.bloomberg.com/graphics/2016-amazon-same-day/}} Rather, including sensitive features in data may be beneficial in the design of fair models \cite{DBLP:journals/ail/ZliobaiteC16}. Sensitive features may also be correlated with the target feature that classification models want to predict. For example, a minority’s preference for red cars may induce bias against the minority in predicting accident rate if red cars are also preferred by aggressive drivers. Higher insurance premium may then be set for red car owners, which disproportionately impacts on minority members. Simple correlation between apparently neutral features can then lead to biased decisions.
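The redundant-encoding effect described above is easy to reproduce on synthetic data: a proxy feature correlated with a hidden sensitive attribute lets even a trivial rule recover group membership, so dropping the sensitive column alone offers little protection. All names and rates below are invented for illustration:

```python
import random

random.seed(1)
records = []
for _ in range(1000):
    group = random.random() < 0.5          # sensitive attribute (hidden)
    # Proxy: group members live in neighbourhood 'A' 90% of the time,
    # non-members only 10% of the time.
    hood = 'A' if (group ^ (random.random() < 0.1)) else 'B'
    records.append((group, hood))

# A rule that never sees the sensitive attribute, only the proxy:
predicted = [hood == 'A' for _, hood in records]
accuracy = sum(p == g for (g, _), p in zip(records, predicted)) / len(records)
# The proxy alone reconstructs group membership with ~90% accuracy.
```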
Discovering and understanding causal influences among variables is a fundamental tool for dealing with bias, as recognised in the legal circles \cite{Foster2004} and in medical research \cite{Grimes2002}. The interested reader is referred to the recent survey on causal approaches to fairness in classification models
\cite{DBLP:journals/corr/abs-1805-05859}.
\subsubsection*{\sffamily \normalsize Representativeness of data}
Statistical (including ML) inferences require that the data from which the model was learned be representative of the data on which it is applied.
However, data collection often suffers from biases that lead to the over- or under-representation of certain groups,
especially in big data, where many
data sets have not been created with the rigour of a
statistical study,
but they are the by-product of other activities with different, often operational, goals \cite{BarocasSelbst16}.
Frequently occurring biases
include {\em selection
bias} (certain individuals are more likely to be selected for study),
often as {\em self-selection bias}, and the reverse {\em exclusion bias};
{\em reporting bias}
(observations of a certain kind are more likely to be reported, which leads to a sort of selection bias on observations);
and
{\em detection bias}
(a phenomenon is more likely to be observed for a particular set of subjects).
Analogous biases can lead to under- or over-representations of properties of individuals, e.g., \cite{boyd2012critical}.
If the mis-represented groups coincide with social groups against which there already exists social bias such as prejudice or discrimination, even ``unbiased computational processes can lead to discriminative decision procedures'' \cite{DBLP:series/sapere/CaldersZ13}.
Mis-representation in the data can lead to vicious cycles that perpetuate discrimination and disadvantage \cite{BarocasSelbst16}.
Such ``pernicious feedback loops''
\cite{O'Neil:2016:WMD:3002861}
can occur with both under-representation of historically disadvantaged groups, e.g., women and people of colour in IT developer communities and image datasets \cite{buolamwini2018gender}, and
with over-representation, e.g., black people in drug-related arrests \cite{LumIsaac}.
\subsubsection*{\sffamily \normalsize Data modalities and bias}
Data comes in different modalities (numerical, textual, images, etc.) as well as in multimodal representations (e.g., audio-visual content).
Most of the fairness-aware ML approaches refer to structured data represented in some fixed feature space.
Data modality-specific approaches also exist, especially for textual data and images.
Bias in language has attracted a lot of recent interest with many studies exposing a large number of offensive associations related to gender and race on publicly available word embeddings~\cite{bolukbasi2016man} as well as how these associations have evolved over time~\cite{C18-1117}.
The situation is similar in the computer vision community, where standard image collections like MNIST are exploited for training, or off-the-shelf pre-trained models are used as feature extractors, assuming the collections comprise representative samples of the real world. In reality, though, the collections can be biased, as many recent studies have indicated.
For instance, \cite{buolamwini2018gender} have found that commercial facial recognition services perform much better on lighter male subjects than darker female ones.
Overall, the additional layer of feature extraction that is typically used within AI-based multimodal analysis systems makes it even more challenging to trace the source of bias in such systems.
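The embedding-association findings mentioned above can be illustrated with a toy computation: project words onto a he-she direction via cosine similarity. The 2-d vectors below are invented stand-ins for real pretrained embeddings (e.g., word2vec), so only the sign pattern, not the magnitudes, is meaningful:

```python
import math

def cos(u, v):
    """Cosine similarity between two 2-d vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

emb = {  # hypothetical embeddings, invented for illustration
    'he':       (1.0, 0.2),
    'she':      (-1.0, 0.2),
    'engineer': (0.7, 0.6),
    'nurse':    (-0.6, 0.7),
}
gender_dir = tuple(a - b for a, b in zip(emb['he'], emb['she']))
bias = {w: cos(emb[w], gender_dir) for w in ('engineer', 'nurse')}
# A positive value leans toward 'he', a negative one toward 'she'.
```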
\label{fairness_measures}
\subsection*{\sffamily \large How is fairness defined?}
\label{sec:definitions}
More than 20 different definitions of fairness have appeared thus far in the computer science literature~\cite{DBLP:conf/icse/VermaR18,DBLP:journals/datamine/Zliobaite17}; and some of these definitions and others were proposed and investigated in work on formalizing fairness from other disciplines, such as education, over the past 50 years~\cite{DBLP:conf/fat/HutchinsonM19}.
Existing fairness definitions can be categorized into: (i) ``predicted outcome", (ii) ``predicted and actual outcome", (iii) ``predicted probabilities and actual outcome", (iv) ``similarity based" and (v) ``causal reasoning"~\cite{DBLP:conf/icse/VermaR18}.
``Predicted outcome" definitions solely rely on a model's predictions (e.g., \textit{demographic parity} checks the percentage of protected and non-protected groups in the positive class). ``Predicted and actual outcome" combine a model's predictions with the true labels
(e.g., \textit{equalized odds} requires false positive and negative rates to be similar amongst protected and non-protected groups). ``Predicted probabilities and actual outcome" employ the predicted probabilities instead of the predicted outcomes (e.g., \textit{good calibration} requires the true positive probabilities between protected and non-protected groups to be the same). Contrary to definitions (i)-(iii) that only consider the sensitive attribute, ``similarity based" definitions also employ non-sensitive attributes (e.g., \textit{fairness through awareness} states that similar individuals must be treated equally).
Finally, ``causal reasoning" definitions are based on directed acyclic graphs that capture relations between features and their impact on the outcomes by structural equations (e.g., \textit{counterfactual fairness}~\cite{kusner2017counterfactual} constructs a graph that verifies whether the attributes defining the outcome are correlated to the sensitive attribute).
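Several of the group-based notions above reduce to comparisons of simple rates. A minimal sketch computing the demographic-parity gap and the equalized-odds gaps from predictions, ground-truth labels, and a binary sensitive attribute (all values below are toy data for illustration):

```python
def rate(values):
    """Fraction of positive entries; 0 for an empty list."""
    return sum(values) / len(values) if values else 0.0

def group_metrics(y_true, y_pred, sensitive):
    """Per-group positive rate, TPR, and FPR (sensitive coded as 0/1)."""
    out = {}
    for g in (0, 1):
        yt = [t for t, s in zip(y_true, sensitive) if s == g]
        yp = [p for p, s in zip(y_pred, sensitive) if s == g]
        out[g] = {
            'positive_rate': rate(yp),
            'tpr': rate([p for t, p in zip(yt, yp) if t == 1]),
            'fpr': rate([p for t, p in zip(yt, yp) if t == 0]),
        }
    return out

y_true    = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred    = [1, 0, 1, 0, 0, 1, 1, 0]
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]

m = group_metrics(y_true, y_pred, sensitive)
# Demographic parity compares positive rates; equalized odds compares
# TPR and FPR between the two groups.
dp_gap  = abs(m[0]['positive_rate'] - m[1]['positive_rate'])
eo_gaps = (abs(m[0]['tpr'] - m[1]['tpr']), abs(m[0]['fpr'] - m[1]['fpr']))
```

On these toy values the positive rates coincide (zero demographic-parity gap) while the equalized-odds gaps are nonzero, illustrating that the criteria can disagree.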
Despite the many formal, mathematical definitions of fairness proposed in recent years, the problem of formalising fairness remains open, as does the discussion about the merits and demerits of the different measures. \cite{corbett2018measure} show the statistical limitations of prevailing mathematical definitions of fairness and the (negative) effect of enforcing such fairness-measures on group well-being and urge the community to explicitly focus on consequences of potential interventions.
\subsection*{\sffamily \large Legal issues of bias and fairness in AI}
\label{sec:legal_on_bias_in_data}
Taking into account the variety of bias creation in AI systems and its impact on society, the question arises whether the law should provide regulations for non-discriminatory AI-based decision making.
Generally speaking, existing EU regulation comes into play when (discriminatory) decisions have been taken, while provisions tackling the quality of selected data are rare. For the former, the control of discriminatory decisions, the principle of equality and the prohibition of discrimination (Art. 20, 21 EU Charter of Fundamental Rights, Art. 4 Directive 2004/113 and other directives) apply. However, these provisions only address discrimination on the basis of specific criteria and require prima facie evidence of a less favourable treatment on grounds of a prohibited criterion, which will often be difficult to establish~\cite{hacker2018teaching}. For the latter, the control of the quality of the selected data, with respect to “personal data” Art. 5 (1) GDPR,\footnote{The General Data Protection Regulation (EU) 2016/679 (GDPR), \url{https://gdpr-info.eu/}} stipulates “the principle of data accuracy” which, however, does not hinder wrongful or disproportionate selection.
With respect to automated decision-making (Art. 22 GDPR), recital 71 only points out that appropriate mathematical or statistical procedures shall be used and that discriminatory effects shall be prevented. While the effectiveness of Art. 22 GDPR is uncertain~\cite{zuiderveen2018discrimination},
it provides some safeguards, such as restrictions on the use of automated decision-making, and, where it is used, a right to transparency, to obtain human intervention and to contest the decision. Finally, some provisions in area-specific legislation can be found, e.g., Art. 12 Regulation (EC) No 223/2009 for European statistics.
\section*{\sffamily \Large Mitigating Bias}
\label{sec:mitigating}
Approaches for bias mitigation can be categorised into: i) pre-processing methods focusing on the data, ii) in-processing methods focusing on the ML algorithm, and iii) post-processing methods focusing on the ML model.
We conclude the section with a discussion on the legal issues of bias mitigation.
\subsection*{\sffamily \large Pre-processing approaches}
\label{sec:preprocessing}
Approaches in this category focus on the data, the primary source of bias, aiming to produce a ``balanced'' dataset that can then be fed into any learning algorithm. The intuition behind these approaches is that the fairer the training data, the less discriminatory the resulting model will be.
Such methods modify the original data distribution by altering class labels of carefully selected instances close to the decision boundary~\cite{kamiran2009classifying} or in local neighbourhoods~\cite{DBLP:conf/kdd/ThanhRT11},
by assigning different weights to instances based on their group membership~\cite{DBLP:conf/icdm/CaldersKP09} or by carefully sampling from each group.
These methods use heuristics aiming to balance the protected and unprotected groups in the training set; however, their impact is not well controlled despite their efforts for minimal data interventions.
Recently, \cite{DBLP:conf/nips/CalmonWVRV17} proposed a probabilistic fairness-aware framework that alters the data distribution towards fairness while controlling the per-instance distortion and by preserving data utility for learning.
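To make the instance-weighting idea concrete, the classical reweighing scheme assigns each (group, label) cell the weight expected-frequency over observed-frequency, so that group and label become statistically independent in the weighted data. The sketch below is a simplified rendering of that idea on toy data, not the exact procedure of any one cited paper:

```python
def reweigh(sensitive, labels):
    """Weight w(s, y) = P(s) * P(y) / P(s, y) per instance."""
    n = len(labels)
    w = {}
    for s in set(sensitive):
        for y in set(labels):
            p_s = sum(1 for v in sensitive if v == s) / n
            p_y = sum(1 for v in labels if v == y) / n
            p_sy = sum(1 for v, l in zip(sensitive, labels)
                       if v == s and l == y) / n
            w[(s, y)] = (p_s * p_y / p_sy) if p_sy else 0.0
    return [w[(s, y)] for s, y in zip(sensitive, labels)]

# Toy data: the protected group (s = 1) is under-represented in the
# positive class, so its positive instances receive weights > 1.
sensitive = [0, 0, 0, 0, 1, 1, 1, 1]
labels    = [1, 1, 1, 0, 1, 0, 0, 0]
weights = reweigh(sensitive, labels)
```

After reweighing, the weighted positive rates of the two groups coincide, which is exactly the balance the downstream learner then sees.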
\subsection*{\sffamily \large In-processing approaches}
\label{sec:inprocessing}
In-processing approaches reformulate the classification problem by explicitly incorporating the model's discrimination behaviour in the objective function through regularization or constraints, or by training on latent target labels.
For example, \cite{DBLP:conf/icdm/KamiranCP10} modify the splitting criterion of decision trees to also consider the impact of the split w.r.t. the protected attribute.
\cite{kamishima2012fairness} integrate a regularizer to reduce the effect of ``indirect prejudice'' (mutual information between the sensitive features and class labels).
\cite{dwork2012fairness} redefine the classification problem by minimizing an arbitrary loss function subject to the \emph{individual fairness-constraint} (similar individuals are treated similarly).
\cite{zafar2017fairness} propose a constraint-based approach for \emph{disparate mistreatment} (defined in terms of misclassification rates) which can be incorporated into logistic-regression and SVMs.
In a different direction, \cite{krasanakis2018adaptive} assume the existence of latent fair classes and propose an iterative training approach towards those classes which alters the in-training weights of the instances.
\cite{IosNto19} propose a sequential fair ensemble, AdaFair, that extends the weighted distribution approach of AdaBoost by also considering the cumulative fairness of the learner up to the current boosting round and moreover, it optimises for balanced error instead of overall error to account for class imbalance.
While most of the in-processing approaches refer to classification, approaches for the unsupervised case have also emerged recently, for example, the fair-PCA approach of \cite{DBLP:conf/nips/SamadiTMSV18} that forces equal reconstruction errors for both protected and unprotected groups. \cite{chierichetti2017fair} formulate the problem of fair clustering as having approximately equal representation for each protected group in every cluster and define fair-variants of classical $k$-means and $k$-medoids algorithms.
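The regularization idea can be sketched end-to-end: gradient descent on the logistic log-loss plus a penalty $\lambda\cdot\mathrm{gap}^2$ on the difference of the groups' mean linear scores, a demographic-parity-style regularizer. This is a generic illustration on toy data, not the exact objective of any paper cited above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, s, lam, lr=0.05, epochs=4000):
    """Logistic regression with a group-score-gap penalty; returns the
    absolute gap in mean predicted probability between the groups."""
    d, n = len(X[0]), len(X)
    w, b = [0.0] * d, 0.0
    g0 = [i for i, si in enumerate(s) if si == 0]
    g1 = [i for i, si in enumerate(s) if si == 1]
    for _ in range(epochs):
        z = [sum(wj * xj for wj, xj in zip(w, X[i])) + b for i in range(n)]
        p = [sigmoid(zi) for zi in z]
        gap = sum(z[i] for i in g0) / len(g0) - sum(z[i] for i in g1) / len(g1)
        for j in range(d):
            gll = sum((p[i] - y[i]) * X[i][j] for i in range(n)) / n
            gfair = 2 * lam * gap * (
                sum(X[i][j] for i in g0) / len(g0)
                - sum(X[i][j] for i in g1) / len(g1))
            w[j] -= lr * (gll + gfair)
        b -= lr * sum(p[i] - y[i] for i in range(n)) / n
    p = [sigmoid(sum(wj * xj for wj, xj in zip(w, X[i])) + b)
         for i in range(n)]
    return abs(sum(p[i] for i in g0) / len(g0)
               - sum(p[i] for i in g1) / len(g1))

# Toy data: the single feature is a strong proxy for group membership.
X = [[0.0], [0.1], [0.2], [0.1], [1.0], [0.9], [1.1], [1.0]]
s = [0, 0, 0, 0, 1, 1, 1, 1]
y = [0, 0, 0, 1, 1, 1, 1, 0]
gap_plain = train(X, y, s, lam=0.0)
gap_fair = train(X, y, s, lam=10.0)
# The regularizer shrinks the between-group gap at some cost in fit.
```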
\subsection*{\sffamily \large Post-processing approaches}
\label{sec:postprocessing}
The third strategy is to post-process the classification model once it has been learned from data. This consists of altering the model's internals (white-box approaches) or its predictions (black-box approaches). Examples of the white-box approach consist of correcting the confidence of CPAR classification rules \cite{DBLP:conf/sdm/PedreschiRT09}, probabilities in Na\"ive Bayes models \cite{DBLP:journals/datamine/CaldersV10}, or the class label at leaves of decision trees \cite{DBLP:conf/icdm/KamiranCP10}.
White-box approaches have not been further developed in recent years, being superseded by in-processing methods.
Examples of the black-box approach aim at keeping proportionality of decisions among protected vs unprotected groups by promoting or demoting predictions close to the decision boundary \cite{DBLP:journals/isci/KamiranMKZ18}, by differentiating the decision boundary itself over groups \cite{DBLP:conf/nips/HardtPNS16},
or by wrapping a fair classifier on top of a black-box base classifier \cite{DBLP:conf/icml/AgarwalBD0W18}. An analysis of how to post-process group-wise calibrated classifiers under fairness constraints is given in \cite{Canetti:2019:SCH:3287560.3287561}.
While the majority of approaches are concerned with classification models, bias post-processing has been deemed as relevant when interpreting clustering models as well \cite{Lorimer2017}.
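The simplest black-box variant keeps the scoring model fixed and selects one decision threshold per group so that the groups' positive rates match a common target, in the spirit of the decision-boundary adjustments above. Scores and the target rate below are invented:

```python
def positive_rate(scores, threshold):
    """Fraction of scores at or above the threshold."""
    return sum(1 for v in scores if v >= threshold) / len(scores)

def equalize_positive_rate(scores_g0, scores_g1, target):
    """Pick per-group thresholds whose positive rates best match target."""
    def best(scores):
        cands = sorted(set(scores)) + [max(scores) + 1.0]
        return min(cands,
                   key=lambda t: abs(positive_rate(scores, t) - target))
    return best(scores_g0), best(scores_g1)

# The (hypothetical) model scores group 1 systematically lower.
g0 = [0.9, 0.8, 0.7, 0.4, 0.3]
g1 = [0.6, 0.5, 0.45, 0.2, 0.1]
t0, t1 = equalize_positive_rate(g0, g1, target=0.4)
# A single shared threshold would accept fewer group-1 members; the
# group-specific thresholds restore equal positive rates.
```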
\subsection*{\sffamily \large Legal issues of mitigating bias}
\label{subsec:legalMitigation}
Pertinent legal questions involve whether modifications of data as envisaged by the pre-and in-processing approaches, as well as altering the model in the post-processing approach, could be considered lawful. Besides intellectual property issues that might occur, there is no general legal provision dealing with the way data is collected, selected or (even) modified. Provisions are in place mainly if such training data would (still) be personal data. Modifications (as well as any other processing) would need a legal basis. However, legitimation could derive from informed consent (provided that specific safeguards are met), or could rely on contract or legitimate interest. Besides, data quality could be relevant in terms of warranties, if a data provider sells data. A specific issue arises when \lq{}debiasing\rq{} involves sensitive data, as under Art. 9 GDPR special category data such as ethnicity often requires explicit consent~\cite{kilbertus2018blind}. A possible solution could be Art. 9(2)(g) GDPR which permits processing for reasons of substantial public interest, which arguably could be seen in ‘debiasing’. The same grounds of legitimation apply when altering the model.
However, contrary to data modification, data protection law would arguably not be applicable here, as the model would not contain personal data, unless the model is vulnerable to confidentiality attacks such as model inversion and membership inference~\cite{veale2018algorithms}.
\section*{\sffamily \Large Accounting for Bias}
\label{sec:accounting}
Algorithmic accountability refers to the assignment of responsibility for how an algorithm is created and its impact on society \cite{algoaccountability18}.
In the case of AI algorithms the problem is aggravated: we do not codify the solution; rather, the solution is inferred by machine learning algorithms from complex data.
AI accountability has many facets; we focus below on the most prominent ones that account for bias either \textit{proactively}, via bias-aware data collection, or \textit{retroactively}, by explaining AI decisions in human terms; furthermore, we discuss the importance of describing and documenting bias by means of formalisms like ontologies.
\subsection*{\sffamily \large Proactively: Bias-aware data collection}
\label{sec:data-collection}
A variety of methods are adopted for data acquisition to serve diverse needs; these may be prone to introducing bias at the data collection stage itself, e.g., \cite{morstatter2014biased}. Proposals have been made for a structured approach to bias elicitation in evidence synthesis, including bias checklists and elicitation tasks that can be performed either by individual assessors with mathematical pooling, by group elicitation and consensus building, or by hybrid approaches \cite{turner:08}. However, bias elicitations have themselves been found to be biased even when high quality assessors are involved, and remedies have been proposed \cite{manzi:18}.
Among other methods, crowdsourcing is a popular approach that relies on large-scale acquisition of human input for dealing with data and label scarcity in ML.
Crowdsourced data and labels may be subject to bias at different stages of the process:
task design and experimental setup, task decomposition and result aggregation, selection of workers and the entailing human factors~\cite{hube2019understanding,DBLP:conf/nips/KargerOS11,kamar2015identifying}.
Mitigating biases in crowdsourced data becomes harder in subjective tasks, where the presence of varying ideological and cultural backgrounds of workers means that it is possible to observe biased labels with complete agreement among the workers.
\subsection*{\sffamily \large Describing and modeling bias using ontologies}
\label{sec:describingBias}
Accounting for bias not only requires understanding of the different sources, i.e., data, knowledge bases, and algorithms, but more importantly, it demands the interpretation and description of the meaning, potential side effects, provenance, and context of bias.
Usually unbalanced categories are understood as bias and considered as sources of negative side effects. Nevertheless, skewed distributions may simply hide features or domain characteristics that, if removed, would hinder the discovery of relevant insights. This situation can be observed, for instance, in populations of lung cancer patients. As highlighted in diverse scientific reports, e.g., \cite{Garrido2018}, lung cancer in women and men has significant differences such as aetiology, pathophysiology, histology, and risk factors, which may impact cancer occurrence, treatment outcomes, and survival. Furthermore, there are specific organizations that collaborate in lung cancer prevention and in the battle against smoking; some of these campaigns are oriented to particular focus groups and the effects of these initiatives are observed in certain populations. All these facts impact on the gender distribution of the population and could be interpreted as bias. However, in this context, imbalance reveals domain-specific facts that need to be preserved in the population, and a formal description of these uneven distributions should be provided to avoid misinterpretation. Moreover, as any type of data source, knowledge bases and ontologies can also suffer from various types of bias or knowledge imbalance. For example, the description of the existing mutations of a gene in a knowledge base like COSMIC,\footnote{\url{https://cancer.sanger.ac.uk/cosmic}} or the properties associated with a gene in the Gene Ontology (GO),\footnote{\url{http://geneontology.org/}} may be biased by the amount of research that has been conducted in the diseases associated with these genes. Expressive formal models are demanded in order to describe and explain the characteristics of a data source and under which conditions or context, the data source is biased.
Formalisms like description and causal logics, e.g., \cite{BesnardCM14,DehaspeR96,Krotzsch0OT18,LeBlancBV19}, allow for measuring and detecting bias in data collections of diverse types, e.g., online data sets \cite{PitouraTFFPAW17} and recommendation systems \cite{SerbosQMPT17}. They also enable the annotation of statements with trustworthiness \cite{SonPB15} and temporality \cite{OzakiKR19}, as well as causation relationships between them \cite{LeBlancBV19}. Ontologies also play a relevant role as knowledge representation models for describing universes of discourse in terms of concepts such as classes, properties, and subsumption relationships, as well as contextual statements about these concepts.
NdFluents~\cite{Gimenez-GarciaZ17} and the Context Ontology Language (CoOL) \cite{StrangLF03} are exemplary ontological formalisms able to express and combine diverse contextual dimensions and their interrelations (e.g., locality and vicinity). Albeit expressive, existing logic-based and ontological formalisms are not tailored to representing contextual bias or to differentiating unbalanced categories that consistently correspond to instances of a real-world domain. Therefore, expressive ontological formalisms are needed to represent the contextual dimensions of various types of sources, e.g., data collections, knowledge bases, or ontologies, as well as annotations denoting the causality and provenance of the represented knowledge. These formalisms will equip bias detection algorithms with reasoning mechanisms that not only enhance accuracy but also enable explainability of the meaning, conditions, origin, and context of bias. Thus, domain modelling using ontologies will support context-aware bias description and interpretability.
\subsection*{\sffamily \large Retroactively: Explaining AI decisions}
\label{sec:explainability}
Explainability refers to the extent to which the internal mechanics of a learning model can be explained in human terms. It is often used interchangeably with interpretability, although the latter refers to whether one can predict what will happen given a change in the model's input or parameters.
Although attempts to tackle \emph{interpretable} ML have existed for some time
\cite{DBLP:journals/expert/HoffmanK17},
there has been exceptional growth in the research literature in recent years, with emerging keywords such as \emph{explainable AI} \cite{DBLP:journals/access/AdadiB18} and \emph{black box explanation}~\cite{DBLP:journals/csur/GuidottiMRTGP19}.
Many papers propose approaches for understanding the \textit{global} logic of a model by building an interpretable classifier able to mimic the obscure decision system.
Generally, these methods are designed for explaining specific models, e.g.,~deep neural networks~\cite{DBLP:journals/dsp/MontavonSM18}.
Only a few are agnostic to the black box model~\cite{DBLP:journals/datamine/HeneliusPBAP14}.
The difficulties in explaining black boxes and complex models \textit{ex post} have motivated proposals of transparent classifiers which are interpretable on their own and exhibit predictive accuracy close to that of obscure models. These include Bayesian models \cite{DBLP:conf/kdd/0013H17}, generalized additive models \cite{DBLP:conf/kdd/LouCGH13}, supersparse linear models \cite{DBLP:journals/ml/UstunR16}, rule-based decision sets \cite{DBLP:conf/kdd/LakkarajuBL16}, optimal classification trees \cite{DBLP:journals/ml/BertsimasD17}, model trees \cite{broelemann2019modeltrees} and neural networks with interpretable layers~\cite{zhang2018interpretable}.
A different stream of approaches focuses on the \textit{local} behavior of a model, searching for an explanation of the decision made for a specific instance \cite{DBLP:journals/csur/GuidottiMRTGP19}. Such approaches are either \textit{model-dependent}, e.g., Taylor approximations~\cite{Kasneci2016}, saliency masks (the image regions that are mainly responsible for the decision) for neural network decisions~\cite{DBLP:conf/cyberc/MaYY15}, and attention models for recurrent networks~\cite{choi2016retain},
or \textit{model-agnostic}, such as those started by the LIME method~\cite{DBLP:conf/kdd/Ribeiro0G16}.
The main idea is to derive a local explanation for a decision outcome on a specific instance by learning an interpretable model from a randomly generated neighbourhood of the instance.
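
The local-surrogate recipe just described can be made concrete in a few lines. The sketch below is our own minimal illustration (the Gaussian sampling, the exponential proximity kernel, its width, and the toy black box are assumptions chosen for exposition), not the reference LIME implementation:

```python
import numpy as np

def local_surrogate(black_box, x, n_samples=500, kernel_width=0.75, seed=0):
    """LIME-style sketch: fit a weighted linear model around instance x.

    black_box maps an (n, d) array to scores in [0, 1]; the returned
    vector holds one local linear coefficient per feature.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # 1. Randomly generate a neighbourhood of the instance.
    Z = x + rng.normal(scale=1.0, size=(n_samples, d))
    # 2. Query the black box on the perturbed points.
    y = black_box(Z)
    # 3. Weight samples by proximity to x (exponential kernel).
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / kernel_width ** 2)
    # 4. Weighted least squares yields an interpretable linear model.
    A = np.hstack([Z, np.ones((n_samples, 1))])   # intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                              # drop the intercept

# Toy black box: only the first of four features matters near x = 0.
f = lambda Z: 1.0 / (1.0 + np.exp(-3.0 * Z[:, 0]))
weights = local_surrogate(f, np.zeros(4))
# weights[0] dominates the (near-zero) coefficients of the other features.
```

The local coefficients then play the role of a per-instance explanation; a faithful implementation would additionally restrict the surrogate to a sparse set of interpretable features.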
A third stream aims at bridging the local and the global ones by defining a strategy for combining local models in an incremental way \cite{pedreschi2019}.
More recent work has asked the fundamental question \emph{What is an explanation?} \cite{Mittelstadt:2019:EEA:3287560.3287574} and rejects such usage of the term `explanation', criticizing that it might be appropriate for a modelling expert but not for a layperson, and that, for example, the humanities or philosophy have an entirely different understanding of what explanations are.
We speculate that there are computational methods that will allow us to find some middle ground. For instance,
some approaches in ML,
statistical relational learning in particular \cite{raedt2016statistical}, take the perspective of knowledge representation and reasoning into account when developing ML models on more formal logical and statistical
grounds.
AI knowledge representation has developed a rich theory of argumentation over the last 25 years \cite{DBLP:journals/ai/Dung95}, which recent approaches \cite{DBLP:conf/comma/CocarascuT16} try to leverage to generalize the reasoning aspect of ML towards the use of computational models of argumentation. The outcome is models of arguments and counterarguments for certain classifications that can be inspected by a human user and might serve as formal grounds for explanations in the manner that \cite{Mittelstadt:2019:EEA:3287560.3287574} called for.
\subsection*{\sffamily \large Legal issues of accounting for bias}
While data protection rules affect both the input (data) and the output (automated decision) level of AI decision-making, anti-discrimination laws, as well as consumer and competition rules, address discriminatory policies primarily from the perspective of the (automated) decision and the actions based on it. However, the application of these rules to AI-based decisions is largely unclear. Under present law and the principle of private autonomy, decisions by private parties normally do not have to include reasons or explanations. Therefore, a first issue will be how existing rules can be applied to algorithmic decision-making. Given that a decision will often not be reasoned (hence the reasons will be unknown), it will be difficult to establish that it was made on the basis of a biased decision-making process \cite{Mittelstadt:2019:EEA:3287560.3287574}.
Even if bias can be proven, a second issue is the limited scope of anti-discrimination law. Under present law, only certain transactions between private parties fall under the EU anti-discrimination directives \cite{LiddellOFlaherty2018}. Moreover, in most cases AI decision-making instruments will not directly use an unlawful criterion (e.g., gender) as a basis for their decision, but rather a ``neutral'' one (e.g., residence) which in practice leads to a less favourable treatment of certain groups. This raises the difficult concept of indirect discrimination, i.e., a scenario where an ``apparently neutral rule disadvantages a person or a group sharing the same characteristics'' \cite{LiddellOFlaherty2018}. Finally, most forms of differential treatment can be justified where they pursue a legitimate aim and where the means to pursue that aim are appropriate and necessary. It is unclear whether the argument that AI-based decision-making systems produce decisions which are economically sound can be sufficient as justification.
\section*{\sffamily \Large Future directions and conclusions}
There are several directions that can impact this field going forward.
First, despite the large number of methods for mitigating bias, there are still no conclusive results regarding the state-of-the-art method for each category, which fairness-related interventions perform best, or whether category-specific interventions perform better compared to holistic approaches that tackle bias at all stages of the analysis process.
We believe that a systematic evaluation of the existing approaches is necessary to understand their capabilities and limitations, and is also a vital part of proposing new solutions.
The difficulty of the evaluation lies in the fact that different methods work with different fairness notions and are applicable to different AI models.
To this end, benchmark datasets should be made available that cover different application areas and manifest real-world challenges.
Finally, standard evaluation procedures and measures covering both model performance and fairness-related aspects should be followed, in accordance with international standards such as IEEE P7003 developed by the Algorithmic Bias Working Group\footnote{\url{https://standards.ieee.org/project/7003.html}}.
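
As a minimal illustration of such a joint evaluation (the two group-fairness gaps and the toy data below are our own choices, not prescribed by any standard), one can report predictive performance and fairness measures side by side:

```python
import numpy as np

def fairness_report(y_true, y_pred, group):
    """Accuracy plus two common group-fairness gaps for a binary classifier.

    group is a boolean array, True for the protected group.
    Statistical parity difference: P(yhat=1 | protected) - P(yhat=1 | rest).
    Equal opportunity difference: TPR(protected) - TPR(rest).
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[g & (y_true == 1)].mean()
    return {
        "accuracy": np.mean(y_true == y_pred),
        "stat_parity_diff": y_pred[group].mean() - y_pred[~group].mean(),
        "equal_opp_diff": tpr(group) - tpr(~group),
    }

# Toy example: positives are granted less often to the protected group.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([True, True, True, True, False, False, False, False])
report = fairness_report(y_true, y_pred, group)
# report: accuracy 0.75, both fairness gaps equal to -0.5.
```

A benchmark run would compute such a report per method and dataset, making accuracy/fairness trade-offs directly comparable.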
Second, we recognize that
``fairness cannot be reduced to a simple self-contained mathematical definition'', ``fairness is dynamic and social and not a statistical issue''.\footnote{\url{https://www.wired.com/story/ideas-joi-ito-insurance-algorithms/}}
Also, ``fair is not fair everywhere'' \cite{schafer2015fair} meaning that the notion of fairness varies across countries, cultures and application domains. Therefore, it is important to have realistic and applicable fairness definitions for different contexts as well as domain-specific datasets for method development and evaluation. Moreover, it is important to move beyond the typical training-test evaluation setup and to consider the consequences of potential fairness-related interventions to ensure long-term wellbeing of different groups.
Finally, given the temporal changes of fairness perception, the question of whether one can train models on historical data and use them for current fairness-related problems becomes increasingly pressing.
Third, the related work thus far focuses mainly on supervised learning. In many cases, however, direct feedback on the data (i.e., labels) is not available. Therefore, alternative learning tasks should be considered, like unsupervised learning or Reinforcement Learning (RL), where only intermediate feedback is provided to the model. Recent works have emerged in this direction; e.g., \cite{jabbari2017fairness} examines fairness in the RL context, where one needs to reconsider the effects of short-term actions on long-term rewards.
Fourth, there is a recent general trend in the ML community of generating plausible data from existing data using Generative Adversarial Networks (GANs), in an attempt to cover the high data demand of modern methods, especially DNNs. Recently, such approaches have also been used in the context of fairness~\cite{xu2018fairgan}, i.e., to generate synthetic fair data that are similar to the real data. Still, however, the problem of the representativeness of the training data and its impact on the representativeness of the generated data might aggravate issues of fairness and discrimination.
On the same topic, recent work revealed that DNNs are vulnerable to adversarial attacks, i.e., intentional perturbations of the input examples, and therefore there is a need for methods to enhance their resilience \cite{song2018mat}.
Fifth, AI scientists and everyone involved in the decision-making process should be aware of bias-related issues and the effects of their design choices and assumptions.
For instance, studies show that representation-related biases creep into development processes because development teams are not aware of the importance of distinguishing between certain categories \cite{buolamwini2018gender}.
Members of a privileged group may not even be aware of the existence of (e.g., racial) categories in the sense that they often perceive themselves as ``just people'', and the interpretation of this as an unconscious default requires the voice of individuals from underprivileged groups, who persistently perceive their being ``different''.
Two strategies appear promising for addressing this cognitive bias: try to improve diversity in development teams, and subject algorithms to outside and as-open-as-possible scrutiny, for example by permitting certain forms of reverse engineering for algorithmic accountability.
Finally, from a legal point of view, apart from data protection law, general provisions with respect to data quality or selection are still missing. Recently, an ISO standard on data quality (ISO 8000) was published, though it is neither binding nor directed at decision-making techniques.
Moreover, first important steps have been made, e.g., the Draft Ethics Guidelines for Trustworthy AI from the European Commission's High-Level Expert Group on AI
or the European parliament resolution containing recommendations to the Commission on Civil Law Rules on Robotics.
However, these resolutions are still generic. Further interdisciplinary research is needed to define specifically what is needed to meet the balance between the fundamental rights and freedoms of citizens by mitigating bias, while at the same time considering the technical challenges and economical needs. Therefore, any legislative procedures will require a close collaboration of legal and technical experts.
As already mentioned, the legal discussion in this paper refers to the EU where despite the many recent efforts, there is still no consensus for algorithmic fairness regulations across its countries. Therefore, there is still a lot of work to be done on analysing the legal standards and regulations at a national and international level to support globally legal AI designs.
To conclude, the problem of bias and discrimination in AI-based decision making systems has attracted a lot of attention recently from science, industry, society and policy makers, and there is an ongoing debate on the AI opportunities and risks for our lives and our civilization. This paper surveys technical challenges and solutions as well as their legal grounds in order to advance this field in a direction that exploits the tremendous power of AI for solving real world problems but also considers the societal implications of these solutions.
As a final note, we want to stress again that biases are deeply embedded in our societies and it is an illusion to believe that the AI and bias problem will be eliminated only with technical solutions.
Nevertheless, as the technology reflects and projects our biases into the future, it is a key responsibility of technology creators to understand its limits and to propose safeguards to avoid pitfalls.
Of equal importance is also for the technology creators to realise that technical solutions without any social and legal ground cannot thrive and therefore multidisciplinary approaches are required.
\section*{Acknowledgement}
This work is supported by the project \textit{``NoBias -- Artificial Intelligence without Bias''}, which has received funding from the
European Union's Horizon 2020 research and innovation programme, under the Marie Skłodowska-Curie (Innovative Training Network) grant agreement no. 860630.
\section{Introduction}
Through the recent discovery of the Higgs particle the Standard Model (SM)
of strong and electroweak interactions is now complete, with the masses of
all its particles being below $200\, {\rm GeV}$, corresponding to scales above one
attometer ($10^{-18}~{\rm m}$). With the help of the Large Hadron Collider (LHC), the second half of this
decade, together with the next decade, should allow us to probe directly the
existence of other particles present in nature with masses
up to a few TeV. Many models considered in the literature predict new gauge bosons, new fermions
and new scalars in this mass range, but until now no clear signal of these new particles has
been seen at the LHC.
It is still possible that with the increased energy at the LHC new discoveries will
be made in the coming years. But what if the lightest new particle in nature is in the multi-TeV range
and out of the direct reach of the LHC?
The past successes of flavour physics in predicting new particles prior
to their discovery may again help us in such a case, in particular in
view of significant improvements on the precision of experiments and
significant reduction of hadronic uncertainties through lattice QCD.
But the question arises whether we will ever reach the energy scales as high as $200\, {\rm TeV}$ corresponding to short distances
in the ballpark of $10^{-21}$ m -- the {\it Zeptouniverse} -- in this manner and learn about the nature of New Physics (NP) at these very short
distances.\footnote{We consider scales in the same ballpark, for example 50~TeV and 1000~TeV, which correspond respectively to 4 and 0.2 zeptometers and also belong to the Zeptouniverse.} The scale of $200\, {\rm TeV}$ is given here only
as an example, and learning about NP at any scale above the LHC scale in this manner would be very important. Recent reviews on flavour physics beyond the
SM can be found in \cite{Buras:2013ooa,Isidori:2014rba}.
Some readers may ask why we are readdressing this question
in view of the comprehensive analyses in the framework of effective theories in
\cite{Bona:2007vi,Isidori:2010kg,Charles:2013aka}.
These analyses, which dealt dominantly with $\Delta F=2$ observables, have
already shown that in the presence of left-right operators one could be in
principle sensitive to scales as high as $10^4\, {\rm TeV}$, or even higher scales. Here we would like to
point out that the study of such processes alone will not really give
us significant information about the particular nature of this NP.
To this end also $\Delta F=1$ processes, in particular rare $K$ and
$B_{s,d}$ decays, have to be considered. As left-right operators involving
four quarks are not the driving force in these decays, which generally contain
operators built out of one quark current and one lepton current, it is not
evident that these decays can help us in reaching the Zeptouniverse even in
the flavour precision era.
In fact, as will be evident from our analysis
below, NP at scales well above 1000 TeV cannot be probed by rare meson decays.\footnote{In principle this could be achieved in the future with the help of lepton flavour
violating decays such as $\mu\to e \gamma$ and $\mu\to 3e$, $\mu\to e$
conversion in nuclei, and electric dipole moments
\cite{Hewett:2012ns,Engel:2013lsa,McKeen:2013dma,Moroi:2013sfa,Moroi:2013vya,Eliaz:2013aaa,Kronfeld:2013uoa,deGouvea:2013zba,Bernstein:2013hba,Altmannshofer:2013lfa}.}
In this paper we address this question primarily in the context of one of the simplest
extensions of the SM, a $Z^\prime$ model in which a heavy neutral gauge boson
mediates FCNC processes in the quark sector at tree-level and
has left-handed (LH) and/or right-handed (RH) couplings to quarks and leptons. This model
has been studied recently for the general case in \cite{Buras:2012jb,Buras:2013qja} and in \cite{Buras:2012dp,Buras:2013dea,Buras:2014yna} in the context
of 331 models.
However, in these papers $M_{Z^\prime}$ has been chosen in the reach of the LHC, typically in the ballpark of $3\, {\rm TeV}$.
Here the philosophy will be to focus on the highest mass scales possibly accessible through flavour measurements.
It is evident from \cite{Buras:2014yna} that in 331 models NP effects for $M_{Z^\prime}\ge 10\, {\rm TeV}$ are too small to be measured in rare $K$ and $B_{s,d}$ decays even in the
flavour precision era. On the other hand, as we will see, this is still possible
in a general $Z^\prime$ model. References to other analyses in $Z^\prime$ models
are collected in \cite{Buras:2013ooa}.
The $Z^\prime$ model that we will analyze is only one possible NP scenario and should thereby be considered as a useful concrete example in which our questions can be answered in explicit terms.
It is nevertheless important to investigate whether
other NP scenarios could also give sufficiently strong signals from
very short distance scales so that they could be detected in future
measurements. In fact we find that tree-level scalar exchanges could also give us information about these very short scales through $B_{s,d}\to\mu^+\mu^-$ decays.
Our paper is organised as follows. In Section~\ref{sec:2} we outline the
strategy for finding the maximal possible resolution of short distance scales with the help of rare meson decays. This
depends on the {\it maximal value} of the $Z^\prime$ couplings to fermions that are allowed by perturbativity and present experimental
constraints. It also depends on the {\it minimal} deviations from SM expectations that in the flavour precision era could be
considered as a clear signal of NP.
In Section~\ref{sec:3} we perform the analysis for
$Z^\prime$ scenarios with only LH or only RH flavour violating couplings to
quarks.
In Section~\ref{sec:4} the case of $Z^\prime$ with LH and RH
flavour violating couplings to quarks is analysed. In Section~\ref{sec:4a}
we repeat the analysis of previous sections for tree-level (pseudo-)scalar
contributions restricting the discussion to the decays $B_{s,d}\to\mu^+\mu^-$.
In Section~\ref{sec:5} we
discuss briefly other NP scenarios. In Section~\ref{sec:5a} we present
a simple idea for a rough indirect determination of $M_{Z^\prime}$ by means of the next linear $e^+e^-$ or $\mu^+\mu^-$ collider and flavour data.
We conclude in Section~\ref{sec:6}.
\section{Setup and strategy}\label{sec:2}
The virtue of the $Z^\prime$ scenarios is the paucity of their parameters
that enter all flavour observables in a given meson system, which should be
contrasted with most NP scenarios outside the Minimal Flavour Violation (MFV) framework. Indeed,
the $\Delta F=2$ and $\Delta F=1$ transitions in the $K$, $B_d$ and $B_s$
systems are fully described by the following ratios of the $Z^\prime$ couplings
to SM fermions over its mass $M_{Z^\prime}$,
\begin{align}\label{C1}
&\Delta_{L,R}^{sd}/M_{Z^\prime},&
&\Delta_{L,R}^{bd}/M_{Z^\prime},&
&\Delta_{L,R}^{bs}/M_{Z^\prime},
\end{align}
and
\begin{align}\label{C2}
&\Delta_L^{\nu\bar\nu}/M_{Z^\prime},&
&\Delta_A^{\mu\bar\mu}/M_{Z^\prime}, &
\Delta_{V}^{\mu\bar\mu} &= 2\Delta_L^{\nu\bar\nu} + \Delta_A^{\mu\bar\mu},
\end{align}
where the last formula follows from the $SU(2)_L$ symmetry relation
$\Delta_{L}^{\nu\bar\nu} =\Delta_{L}^{\mu\bar\mu}$.
These couplings are defined as in~\cite{Buras:2012jb,Buras:2013qja} through
\begin{equation}\label{equ:Lquarks}
{\mathcal L}_{\rm FCNC}^{\rm quarks}=\left[\bar q_i\, \gamma_\mu\, P_L\, q_j \,\Delta_L^{ij}
+\bar q_i\,\gamma_\mu\, P_R \,q_j \,\Delta_R^{ij}+h.c.\right] Z^{\prime \mu},
\end{equation}
with $i,j = d,s,b$ and $i\neq j$ throughout the rest of the paper. The analogous definition applies to the lepton sector where only flavour conserving couplings are considered,
\begin{equation}\label{equ:Lleptons}
{\mathcal L}^{\rm leptons}=\left[\bar\mu\,\gamma_\mu\, P_L \,\mu \,\Delta_L^{\mu\bar\mu}
+\bar\mu\,\gamma_\mu\, P_R \,\mu\,\Delta_R^{\mu\bar\mu}+
\bar\nu\,\gamma_\mu\, P_L\,\nu \,\Delta_L^{\nu\bar\nu}\right] Z^{\prime \mu}\,.
\end{equation}
We recall that the couplings $\Delta_{A,V}^{\mu\bar\mu}$ are defined as
\begin{align}\label{DeltasVA}
\Delta_V^{\mu\bar\mu} &= \Delta_R^{\mu\bar\mu} + \Delta_L^{\mu\bar\mu}, & \Delta_A^{\mu\bar\mu} &= \Delta_R^{\mu\bar\mu} - \Delta_L^{\mu\bar\mu}.
\end{align}
Other definitions and normalisation of couplings can be found in \cite{Buras:2012jb}. The quark couplings are in general complex whereas the leptonic ones are
assumed to be real.
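
For completeness, the last relation in (\ref{C2}) follows in one line from (\ref{DeltasVA}) together with the $SU(2)_L$ relation quoted above:
\begin{equation}
\Delta_V^{\mu\bar\mu}=\Delta_R^{\mu\bar\mu}+\Delta_L^{\mu\bar\mu}
=\left(\Delta_R^{\mu\bar\mu}-\Delta_L^{\mu\bar\mu}\right)+2\Delta_L^{\mu\bar\mu}
=\Delta_A^{\mu\bar\mu}+2\Delta_L^{\nu\bar\nu}\,.
\end{equation}
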
It is evident from these expressions that in order to find out the maximal
value of $M_{Z^\prime}$ for which measurable NP effects in $\Delta F=2$ and
$\Delta F=1$ exist one has to know the maximal values of the couplings $\Delta_{L,R}^{ij}$ and $\Delta_{L,R}^{\mu\bar\mu}$ allowed by
perturbativity. From the
$\Delta F=2$ analyses in \cite{Bona:2007vi,Isidori:2010kg,Charles:2013aka} it follows that by choosing these couplings to be $\mathcal{O}(1)$ the lower bound on the scale of new physics $\Lambda_\text{NP}$ could be
in the range of $10^5\, {\rm TeV}$ for the case of $K^0-\bar K^0$ mixing. On the
other hand, choosing sufficiently small couplings by means of a suitable flavour symmetry it is possible to suppress the FCNCs related to NP with the NP
scale $\Lambda_\text{NP}$ in the ballpark of a few TeV \cite{Chivukula:1987py,Hall:1990ac,D'Ambrosio:2002ex,Barbieri:2011ci,Barbieri:2012uh,Barbieri:2012bh,Barbieri:2012tu,Feldmann:2006jk}.
In view of the fact that flavour physics in the rest of this decade and in
the next decade will be dominated by new precise measurements of rare $K$ and rare $B_{s,d}$ decays
and not $\Delta F=2$ transitions, our strategy will differ from the one
in \cite{Bona:2007vi,Isidori:2010kg,Charles:2013aka}.
We will assume that future measurements will be precise enough to identify conclusively the presence of NP in rare decays when the deviations from SM predictions for various branching ratios are larger than 10\,--\,30\% of the SM branching ratio. The precise value of the detectable deviation will depend on the decay considered and will be smaller for the ones
with smaller experimental, hadronic and parametric uncertainties. We will be more specific about this in the next section. The framework considered here
goes beyond MFV, where even for $\Lambda_\text{NP}$ in the ballpark of a few TeV only moderate departures from the SM in $\Delta F=1$ observables
are predicted. A model independent analysis of $b\to s$ transitions in this framework can be found in \cite{Hurth:2008jc} and in a recent review in \cite{Isidori:2012ts}.
In order to proceed we have to make assumptions about the size of the couplings involved.
There is in general a lot of freedom here, but as we are searching
for the maximal values of $M_{Z^\prime}$ which could still provide measurable
NP effects in rare meson decays, we will choose maximal couplings that are consistent with perturbativity.
Subsequently we will check whether such couplings are also consistent with $\Delta F=2$ constraints for a given $M_{Z^\prime}$.
An estimate of the perturbativity upper bound
on $\Delta_{L,R}^{sd}$ was made in \cite{Buras:2014sba}, in the context of a study of the isospin amplitude $A_0$ in $K\to\pi\pi$ decays,
by considering the loop expansion
parameter
\begin{equation}\label{loop}
L=N_c\left(\frac{\Delta_{L,R}^{sd}}{4\pi}\right)^2,
\end{equation}
where $N_c=3$ is the number of colours.
For $\Delta_{L,R}^{sd}=3.0$ we find $L=0.17$, a
coupling strength that is certainly allowed.
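
The quoted number is straightforward to reproduce; the short snippet below (ours, purely for bookkeeping) simply evaluates (\ref{loop}):

```python
import math

def loop_parameter(delta, n_c=3):
    """Loop expansion parameter L = N_c * (Delta / (4*pi))**2."""
    return n_c * (delta / (4.0 * math.pi)) ** 2

L = loop_parameter(3.0)  # Delta_{L,R}^{sd} = 3.0
# L comes out close to 0.17, matching the value quoted in the text.
```
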
The same estimate can be made for other LH and RH couplings considered by us.
However, as we will see below, the correlation of $\Delta F=1$ and $\Delta F=2$ processes in
the case of $Z^\prime$ exchange, derived in \cite{Buras:2012jb}, will give
some additional insight on the allowed size of the quark couplings and
will generally not allow us to reach the perturbativity bounds on quark couplings. On the other hand, large values of the {leptonic} couplings
$\Delta_{L}^{\nu\bar\nu}$ and $\Delta_{V,A}^{\mu\bar\mu}$ at the perturbativity upper bound will
give an estimate of the maximal $M_{Z^\prime}$ for which measurable
effects in rare $K$ and $B_{s,d}$ decays could be obtained.
In the case of a $U(1)$ gauge symmetry with large gauge couplings at a given scale it is difficult to avoid a Landau pole at still higher scales. However, for the coupling values used in
our paper, this happens at much higher scales than $M_{Z^\prime}$.
Moreover, if $Z^\prime$ is associated
with a non-abelian gauge symmetry that is asymptotically free this problem
does not exist.
\subsection*{Projections for the coming years}
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\scalebox{0.85}{
\begin{tabular}{|c|c|ccc|}
\hline
Observable & 2014 & $2019$ & $2024 $ & $2030$ \\
\hline
$\mathcal{B}(K^+ \to \pi^+ \nu\bar\nu)$ & $\left(17.3^{+11.5}_{-10.5}\right)\times 10^{-11}$\hfill\cite{Artamonov:2008qb} & \hfill 10\%\hfill\cite{Anelli:2005ju} & \hfill $5\%$\hfill\cite{Butler:2013kdw} & \\
$\mathcal{B}(K_{\rm L} \to \pi^0 \nu\bar\nu)$ & $<2.6\times 10^{-8}\ {\rm (90\%\, CL)}$\hfill\cite{Ahn:2009gb} & & \hfill $5\%$\hfill\cite{Butler:2013kdw} & \\
\hline
$\mathcal{B}(B^+ \to K^+\nu\bar\nu)$ & $<1.3\times 10^{-5}\ {\rm (90\%\, CL)}$\hfill\cite{delAmoSanchez:2010bk} & & \hfill 30\%\hfill\cite{Aushev:2010bq} & \\
$\mathcal{B}(B^0_d \to K^{*0}\nu\bar\nu)$ & $<5.5\times 10^{-5}\ {\rm (90\%\, CL)}$\hfill\cite{Lutz:2013ftz} & & \hfill 35\%\hfill\cite{Aushev:2010bq} & \\
\hline
$\overline{\mathcal{B}}(B_s\to \mu^+\mu^-)$ & $\left(2.9\pm 0.7\right)\times 10^{-9}$\hfill\cite{Aaij:2013aka,Chatrchyan:2013bka,CMSandLHCbCollaborations:2013pla} & \hfill15\%\hfill\cite{CMS:2013vfa,Bediaga:2012py} & \hfill12\%\hfill\cite{CMS:2013vfa} & \hfill 10--12\%\hfill\cite{CMS:2013vfa,Bediaga:2012py} \\
$\mathcal{B}(B_d\to \mu^+\mu^-)$ & $\left(3.6^{+1.6}_{-1.4}\right)\times 10^{-10}$~$^{\dagger}$\hfill\cite{Aaij:2013aka,Chatrchyan:2013bka,CMSandLHCbCollaborations:2013pla} & \hfill 66\%\hfill\cite{CMS:2013vfa} & \hfill 45\%\hfill\cite{CMS:2013vfa} & 18\%~\cite{CMS:2013vfa} \\
$\mathcal{B}(B_d\to \mu^+\mu^-)/\overline{\mathcal{B}}(B_s\to \mu^+\mu^-)$ & & \hfill 71\%\hfill\cite{CMS:2013vfa} & \hfill 47\%\hfill\cite{CMS:2013vfa} & \hfill 21--35\%\hfill\cite{CMS:2013vfa,Bediaga:2012py} \\
\hline
\end{tabular}
}
\caption{\it
The current best experimental measurements (2014) together with the precision expected in 5, 10 and 15 years for the rare decay observables studied in this paper.
The percentages are relative to SM predictions.
$^\dagger$The statistical significance of this measurement is less than $3\sigma$ i.e.\ there is still no {\it evidence} for this process. $\overline{\mathcal{B}}(B_s\to \mu^+\mu^-)$ denotes
the corrected branching ratio as defined in Appendix~\ref{app:Bsmumu}.
\label{tab:rareProjections}
}
\end{center}
\end{table}
Clearly, the outcome of our strategy depends sensitively on the precision
of future measurements and the reduction of hadronic and CKM uncertainties.
In Table~\ref{tab:rareProjections} we give the precision expected in the next 5, 10 and 15 years for the rare decay observables that we study in this paper.
In Table~\ref{tab:lattProjections} we do the same for the lattice and CKM matrix parameters that contribute with sizeable errors in our numerical analysis.
We also list the current experimental precision for these quantities.
The chosen years of 2019, 2024 and 2030 correspond approximately to the integrated luminosity milestones of the relevant experiments.
For Belle-II the years 2019 and 2024 correspond to 5~ab$^{-1}$ and 50~ab$^{-1}$, respectively.
For LHCb the years 2019, 2024 and 2030 correspond to 6~fb$^{-1}$, 15~fb$^{-1}$ and 50~fb$^{-1}$, respectively.
For CMS the years 2019, 2024 and 2030 correspond to 100~fb$^{-1}$, 300~fb$^{-1}$ and 3000~fb$^{-1}$, respectively. Needless to say, all these projections
can change in the future, yet the collected numbers show that the coming
years indeed deserve the label of the {\it flavour precision era}. In view of
these prospects we will keep in mind throughout this paper that NP effects
that are at least as large as 10\,--\,30\% of the SM branching ratios could
one day be resolved in rare meson decays. We will be more explicit about this in the next section.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\scalebox{0.85}{
\begin{tabular}{|c|c|ccc|}
\hline
& 2014 & $2019$ & $2024$ & $2030$ \\
\hline
$F_{B_s}$ & $(227.7\pm 4.5)\ {\rm MeV}$\hfill\cite{Aoki:2013ldr} & \hfill$<1\%$\hfill\cite{Blum:2013xxx} & & \\
$F_{B_d}$ & $(190.5\pm 4.2)\ {\rm MeV}$\hfill\cite{Aoki:2013ldr} & \hfill$<1\%$\hfill\cite{Blum:2013xxx} & &\\
$F_{B_s}\sqrt{\hat{B}_{B_s}}$ & $(266\pm 18)\ {\rm MeV}$\hfill\cite{Aoki:2013ldr} & \hfill$2.5\%$\hfill\cite{Blum:2013xxx} & \hfill$<1\%$\hfill\cite{Blum:2013mhx} & \\
$F_{B_d}\sqrt{\hat{B}_{B_d}}$ & $(216\pm 15)\ {\rm MeV}$\hfill\cite{Aoki:2013ldr} & \hfill$2.5\%$\hfill\cite{Blum:2013xxx} & \hfill$<1\%$\hfill\cite{Blum:2013mhx} & \\
$\hat{B}_K$ & $0.766\pm 0.010$\hfill\cite{Aoki:2013ldr} & \hfill$<1\%$\hfill\cite{Blum:2013xxx} & & \\
\hline
$|V_{ub}|_{\rm incl}$ & $(4.40\pm 0.25)\times 10^{-3}$\hfill\cite{Aoki:2013ldr} & \hfill 5\%\hfill\cite{Aushev:2010bq} & \hfill 3\%\hfill\cite{Aushev:2010bq} & \\
$|V_{ub}|_{\rm excl}$ & $(3.42\pm 0.31)\times 10^{-3}$\hfill\cite{Aoki:2013ldr} & \hfill 12\%~$^{\dagger\dagger}$\hfill\cite{Aushev:2010bq} & \hfill 5\%~$^{\dagger\dagger}$\hfill\cite{Aushev:2010bq} & \\
$|V_{cb}|_{\rm incl}$ & $(42.4\pm 0.9)\times 10^{-3}$\hfill\cite{Gambino:2013rza} & \hfill 1\%\hfill\cite{Ricciardi:2013cda} & \hfill$<1\%$\hfill\cite{Ricciardi:2013cda} & \\
$|V_{cb}|_{\rm excl}$ & $(39.4\pm 0.6)\times 10^{-3}$\hfill\cite{Aoki:2013ldr} & \hfill 1\%\hfill\cite{Ricciardi:2013cda} & \hfill$<1\%$\hfill\cite{Ricciardi:2013cda} & \\
$\gamma$ & $(70.1\pm7.1)^\circ$~$^\dagger$\hfill\cite{UTfit} & \hfill 6\%\hfill\cite{Aushev:2010bq} & \hfill$1.5\%$\hfill\cite{Aushev:2010bq} & \hfill$1.3\%$\hfill\cite{Bediaga:2012py} \\
$\phi_d^{\rm SM} = 2\beta$ & $(43.0^{+1.6}_{-1.4})^\circ$\hfill\cite{Barberio:2007cr} & \hfill$\sim 1^\circ$~$^{\ddagger}$\hfill\cite{Faller:2008zc,Ciuchini:2011kd} & & \\
$\phi_s^{\rm SM} = -2\beta_s$ & $(0\pm 4)^\circ$\hfill\cite{Barberio:2007cr} & \hfill$1.4^\circ$\hfill\cite{Bediaga:2012py} & \hfill$\sim 1^\circ$~$^{\ddagger}$\hfill\cite{Faller:2008gt} & \\
\hline
\end{tabular}
}
\caption{\it Current best determinations and future forecasts for the precision of lattice and CKM matrix parameters that contribute with sizeable errors in our numerical analysis. $^\dagger$Combined fit from charmed B decay modes. $^{\dagger\dagger}$These predictions assume dominant lattice errors. $^\ddagger$At this precision the theoretical uncertainty due to penguin pollution in the dominant decay modes used to extract these phases starts to dominate.}
\label{tab:lattProjections}
\end{center}
\end{table}
\begin{table}[t]
\renewcommand{\arraystretch}{1.2}
\centering%
\scalebox{0.85}{
\begin{tabular}{|l|l|}
\hline
$|\epsilon_K|= 2.228(11)\times 10^{-3}$\hfill\cite{Nakamura:2010zzi} & $\alpha_s(M_Z)= 0.1185(6) $\hfill\cite{Beringer:1900zz}
\\
$\Delta M_K= 0.5292(9)\times 10^{-2} \,\text{ps}^{-1}$\hfill\cite{Nakamura:2010zzi} & $m_s(2\, {\rm GeV})=93.8(24) \, {\rm MeV}$ \hfill\cite{Aoki:2013ldr}
\\
$\Delta M_d = 0.507(4)\,\text{ps}^{-1}$\hfill\cite{Amhis:2012bh} & $m_c(m_c) = 1.279(13) \, {\rm GeV}$ \hfill\cite{Chetyrkin:2009fv}
\\
$\Delta M_s = 17.72(4)\,\text{ps}^{-1}$\hfill\cite{Amhis:2012bh} & $m_b(m_b)=4.19^{+0.18}_{-0.06}\, {\rm GeV}$\hfill\cite{Nakamura:2010zzi}
\\
$|V_{us}|=0.2252(9)$\hfill\cite{Amhis:2012bh} & $m_t(m_t) = 163(1)\, {\rm GeV}$\hfill\cite{Laiho:2009eu,Allison:2008xk}
\\\cline{2-2}
$\Delta\Gamma_s/\Gamma_s=0.123(17)$\hfill\cite{Amhis:2012bh} & $F_K = 156.1(11)\, {\rm MeV}$\hfill\cite{Laiho:2009eu}
\\
$m_K= 497.614(24)\, {\rm MeV}$\hfill\cite{Nakamura:2010zzi} & $F_{B^+} =185(3)\, {\rm MeV}$\hfill \cite{Dowdall:2013tga}
\\
$m_{B_d}= m_{B^+}=5279.2(2)\, {\rm MeV}$\hfill\cite{Beringer:1900zz} & $\kappa_\epsilon=0.94(2)$\hfill\cite{Buras:2008nn,Buras:2010pza}
\\
$m_{B_s} = 5366.8(2)\, {\rm MeV}$\hfill\cite{Beringer:1900zz} & $\eta_{cc}=1.87(76)$\hfill\cite{Brod:2011ty}
\\
$\tau_{B^\pm}= 1.642(8)\,\text{ps}$\hfill\cite{Amhis:2012bh} & $\eta_{tt}=0.5765(65)$\hfill\cite{Buras:1990fn}
\\
$\tau_{B_d}= 1.519(7) \,\text{ps}$\hfill\cite{Amhis:2012bh} & $\eta_{ct}= 0.496(47)$\hfill\cite{Brod:2010mj}
\\
$\tau_{B_s}= 1.509(11)\,\text{ps}$\hfill\cite{Amhis:2012bh} &
$\eta_B=0.55(1)$\hfill\cite{Buras:1990fn,Urban:1997gw}\\
\hline
\end{tabular} }
\caption{\it Values of other experimental and theoretical
quantities used as input parameters. For future
updates see PDG~\cite{Beringer:1900zz}, FLAG~\cite{Aoki:2013ldr} and HFAG~\cite{Amhis:2012bh}.}\label{tab:input}
\end{table}
\section{Left-handed and right-handed $Z^\prime$ scenarios}\label{sec:3}
\subsection{Left-handed scenario}
It will be useful to begin our analysis with the case of $Z^\prime$ having
only LH flavour violating couplings to quarks $\Delta_L^{ij}$. In this scenario NP effects from $Z^\prime$ can be compactly summarised through the flavour non-universal shifts in
the basic functions $X$, $Y$ and $S$, as defined in \cite{Buchalla:1995vs,Buras:2013ooa,Buras:2012jb}, which are flavour universal in the SM:
\begin{align}
X_L(M)&=X^\text{SM}+\Delta X_L(M),\\
Y_A(M)&=Y^\text{SM}+ \Delta Y_A(M),\\
S(M)&=S^\text{SM}+\Delta S(M),
\label{DeltaFunDefns}
\end{align}
with $M=K,B_d,B_s$. $X_L(M)$ and $Y_A(M)$ enter the amplitudes
for decays with $\nu\bar\nu$ and $\mu\bar\mu$ final states, respectively; $S(M)$
enters $\Delta F=2$ transitions. We recall that the functions $X^\text{SM}$, $Y^\text{SM}$ and $S^\text{SM}$ enter the top quark contributions to the
corresponding amplitudes in the SM.
We suppressed here for simplicity
the functions related to vector ($V$) couplings. We will return to them later
on.
In what follows we will concentrate our discussion mainly on the functions $\Delta X_L(M)$,
since in the left-handed scenario (LHS) $\Delta Y_A(M)$ are given by~\cite{Buras:2012jb}
\begin{equation}\label{REL3}
\Delta Y_A(K)=\Delta X_L(K) \frac{\Delta_A^{\mu\bar\mu}}{\Delta_L^{\nu\bar\nu}}, \qquad \Delta Y_A(B_q)=\Delta X_L(B_q) \frac{\Delta_A^{\mu\bar\mu}}{\Delta_L^{\nu\bar\nu}},
\end{equation}
as follows from the definitions of these functions given in Appendix~\ref{app:A}.
The fundamental equations for the next steps of our analysis are the correlations in the LHS between
$\Delta X(M)$ and $\Delta S(M)$ derived in \cite{Buras:2012jb}. Rewriting them in a form suitable for our applications
we find
\begin{equation}\label{REL1}
\frac{\Delta X_L(K)}{\sqrt{\Delta S(K)}}=\frac{\Delta X_L(B_q)}{\sqrt{\Delta S(B_q)^*}}=
\frac{\Delta_L^{\nu\bar\nu}}{2M_{Z^\prime}g_{\rm SM}\sqrt{\tilde r}}=0.25
\left[\frac{\Delta_L^{\nu\bar\nu}}{3.0}\right]\left[\frac{15\, {\rm TeV}}{M_{Z^\prime}}\right],
\end{equation}
where $\tilde r$ is a QCD correction which depends on the $Z^\prime$ mass \cite{Buras:2012jb} ($\tilde r\approx 0.90$ for $M_{Z^\prime} = 50\, {\rm TeV}$, but its dependence on $M_{Z^\prime}$ is very weak), and
\begin{equation}\label{gsm}
g_{\text{SM}}^2=
4 \frac{M_W^2 G_F^2}{2 \pi^2} = 1.78137\times 10^{-7} \, {\rm GeV}^{-2}\,,
\end{equation}
where $G_{F}$ is the Fermi constant.
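As a quick numerical cross-check (a sketch, not part of the original analysis; it assumes the PDG values $G_F = 1.1663787\times 10^{-5}\, {\rm GeV}^{-2}$ and $M_W = 80.385\, {\rm GeV}$), the quoted value of $g_{\rm SM}^2$ and the prefactor $0.25$ in (\ref{REL1}) can be reproduced as follows:

```python
import math

# Assumed PDG-like inputs (not taken from the text)
G_F = 1.1663787e-5   # Fermi constant [GeV^-2]
M_W = 80.385         # W boson mass [GeV]

# Eq. (gsm): g_SM^2 = 4 M_W^2 G_F^2 / (2 pi^2)
g_sm2 = 4 * M_W**2 * G_F**2 / (2 * math.pi**2)
print(f"g_SM^2 = {g_sm2:.5e} GeV^-2")  # close to the quoted 1.78137e-7

# Eq. (REL1): Delta_L^{nunu} / (2 M_Z' g_SM sqrt(r_tilde))
Delta_L_nn = 3.0
M_Zp = 15e3          # M_Z' = 15 TeV, in GeV
r_tilde = 0.90       # QCD correction quoted in the text
prefactor = Delta_L_nn / (2 * M_Zp * math.sqrt(g_sm2) * math.sqrt(r_tilde))
print(f"prefactor = {prefactor:.3f}")  # -> 0.250

# The benchmark Delta_L = 1, M_Z' = 5 TeV has the same ratio Delta_L/M_Z',
# which is why it gives virtually identical rare-decay effects.
assert abs(Delta_L_nn / M_Zp - 1.0 / 5e3) < 1e-12
```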
Now comes an important observation: in the limit where the $Z^\prime$ coupling $\Delta_{L}^{sd}$ is approximately real and the $\varepsilon_K$ constraint is
easily satisfied, the allowed range for $\Delta S(K)$ can be much larger
than the ones for $\Delta S(B_q)$ even if the ratios in (\ref{REL1}) are
flavour universal. Indeed the $\Delta S(B_q)$ are directly constrained by the
$B^0_q-\bar B_q^0$ mass differences $\Delta M_q$, because the function $S^\text{SM}$ enters the
top quark contribution to $\Delta M_q$, which is by far dominant in the SM. On the other hand
$\Delta M_K$ is dominated in the SM by the charm quark contribution and the function
$S$ is multiplied there by small CKM factors. Consequently, the shift $\Delta S(K)$ is allowed to be much larger than the shifts in $\Delta S(B_q)$, with interesting consequences for rare $K$ decays as discussed below.
Of course this assumes that the SM gives a good description of the experimental values of $\varepsilon_K$ and $\varepsilon'/\varepsilon$. We will relax this assumption later.
Let us first illustrate the case of $\Delta S(B_s)$ in the simplified scenario where $\Delta_{L}^{bs}$ is real, in accordance with the small CP violation
observed in the $B_s$ system. Assuming then that a NP contribution to $\Delta M_s$
at the level of $15\%$ is still allowed, and taking into account all experimental and hadronic uncertainties, we find that present data allow only $|\Delta S(B_s)|\le 0.36$. This gives
\begin{equation}\label{REL2}
|\Delta X_L(B_q)| \le
0.16 \sqrt{\frac{|\Delta S(B_q)|}{0.36}}
\left[\frac{\Delta_L^{\nu\bar\nu}}{3.0}\right]\left[\frac{15\, {\rm TeV}}{M_{Z^\prime}}\right].
\end{equation}
Since $X^\text{SM}\approx 1.46$, the shift $|\Delta X_L(B_q)|=0.16$ amounts to about $11\%$
at the level of the amplitude and $22\%$ for the branching ratios. Such NP
effects could in principle one day be measured in $b\to s\nu\bar\nu$ transitions such as
$B_d\to K(K^*)\nu\bar\nu$ and
$B\to X_s\nu\bar\nu$, and can still be increased by increasing slightly
$\Delta_L^{\nu\bar\nu}$ or lowering $M_{Z^\prime}$. However, this analysis
shows that with the help of a $Z^\prime$ with only LH couplings
one cannot reach the Zeptouniverse using $B_s$ decays, although distance scales in the
ballpark of $10^{-20}$m, corresponding to $15\, {\rm TeV}$, could be resolved.
A similar analysis can be performed for the function $Y_A(B_s)$ relevant
for $B_s\to\mu^+\mu^-$: as $Y^\text{SM}\approx 0.96$, a shift of $|\Delta Y_A(B_s)| = 0.16$ results
in a $33\%$ modification in the branching ratio.
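The percentages quoted in the last two paragraphs follow from elementary arithmetic; a minimal check, using only the numbers quoted above and linearising the branching-ratio shift as $\delta\mathcal{B}/\mathcal{B}\approx 2\,\delta A/A$:

```python
# Amplitude-level and (linearised) branching-ratio shifts quoted in the text
X_SM, Y_SM, shift = 1.46, 0.96, 0.16

amp_X = shift / X_SM     # shift of the b -> s nunu amplitude
br_X  = 2 * amp_X        # linearised shift of the corresponding branching ratios
br_Y  = 2 * shift / Y_SM # linearised shift of B(Bs -> mu+ mu-)

print(f"{amp_X:.0%} {br_X:.0%} {br_Y:.0%}")  # -> 11% 22% 33%
```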
For $B_d$ the discussion is complicated by the significant phase of $V_{td}$.
Because $|V_{td}|\approx 0.25|V_{ts}|$, one may at first sight expect rare $B_d$ decays to resolve energy scales about two times higher than rare $B_s$ decays. But, as seen in (\ref{REL1}), for fixed lepton couplings only $M_{Z^\prime}$ and the $\Delta F = 2$ constraints on $S$ determine the maximal size of $\Delta F = 1$ effects, independently of the CKM matrix elements. Similar effects to the ones allowed for rare $B_s$ decays are therefore also expected for rare $B_d$ decays in the LHS, for the same values of $M_{Z^\prime}$.
Slightly lower scales than $15\, {\rm TeV}$ can however be reached in this case, as is shown in our analysis below, because of the lower experimental precision expected for rare $B_d$ decays (see Table~\ref{tab:rareProjections}).
\begin{figure}
\centering%
\includegraphics[width=0.49\textwidth]{Bs_15TeV.pdf}\hfill%
\includegraphics[width=0.49\textwidth]{Bd_15TeV.pdf}
\caption{\it Prospects for observing new physics in $B_s$ (left) and $B_d$ (right) decays. The green regions show the 68\% C.L. and 95\% C.L. allowed regions in the $\Delta F = 2$ fit.
The black lines show the $3\sigma$ (solid) and $5\sigma$ (dashed) contours for $\mathcal{\bar B}(B_s\to\mu^+\mu^-)$ and $\mathcal{B}(B_d\to \mu^+\mu^-)$ expected in 2019; the red lines show the same projections for 2024.
In both figures $M_{Z^\prime} = 15\,{\rm TeV}$ and $\Delta_A^{\mu\bar\mu} = -3$.\label{figBd}}
\end{figure}
The prospects for the observation of NP in $B_{d,s}\to\mu^+\mu^-$ are shown in Fig.~\ref{figBd} for the following benchmark scenario:
\begin{itemize}
\item $M_{Z^\prime} = 15 $ TeV, which corresponds approximately to the highest accessible scale, and $\Delta_A^{\mu\bar\mu} = -3$; the negative sign of $\Delta_A^{\mu\bar\mu}$ is compatible with \eqref{C2} and perturbativity for $\Delta_L^{\nu\bar\nu}=3.0$ (to be discussed in Section~\ref{Js}, see in particular
(\ref{leptonhigh})).
\end{itemize}
Virtually identical results are obtained for $M_{Z^\prime} = 5$ TeV, which is in the reach of direct detection at the LHC~\cite{Weiler,Weiler2}, and $\Delta_A^{\mu\bar\mu} = -1$, which is compatible with
the LEP-II \cite{Schael:2013ita} and LHC \cite{Aad:2014cka,CMS:2013qca} bounds on lepton couplings.\footnote{Flavour-conserving quark couplings of similar size, for the same values of the $Z^\prime$ mass, are also allowed by the present LHC constraints \cite{deVries:2014apa}.}
The $\Delta F = 2$ constraints on the flavour-violating quark couplings, obtained by a
global maximal-likelihood fit to the input parameters given in Table~\ref{tab:lattProjections}
and \ref{tab:input}, are shown in the $\Delta_L^{bq}$--$\phi_L^{bq}$ plane\,\footnote{With a slight abuse of notation we write here $\Delta_{L}^{bq} = \Delta_L^{bq}e^{i\phi_L^{bq}}$, with $\Delta_L^{bq}$ real on the right-hand side.} (the green regions are
the 68\% and 95\% C.L. current allowed regions). In this fit the CKM matrix elements are determined
solely by the tree-level constraints, which are not affected by NP. All the hadronic parameters with
sizeable uncertainties are treated as nuisance parameters and are marginalised over. The continuous
and dashed lines show, in the same plane, the projected sensitivity for NP in $B_{d,s}\to \mu^+\mu^-$
at $3\sigma$ and $5\sigma$ as foreseen in 2019 (black) and 2024 (red), using the estimates of Table~\ref{tab:rareProjections}.
In all these projections we assume no deviations in the $\Delta F = 2$ observables in order to give the most optimistic prediction
for the sensitivity of rare decays. We therefore use the future errors also for the
CKM matrix elements and for the hadronic parameters, assuming SM-like central values. The impact of this choice on the $\Delta F = 1$ projections is however moderate.
These figures show that already in five years from now it could be possible to probe scales of 15 TeV with rare $B_s$ decays by observing deviations from the SM predictions at the level of $3\sigma$, and reaching a $5\sigma$ discovery with more data in the following years.
On the other hand, for $B_d$ a $3\sigma$ effect can be achieved only with the full sensitivity in about ten years from now, for the same value of $M_{Z'}$.
The corrections from NP to the Wilson coefficients $C_9$ and $C_{10}$, which weight the semileptonic operators in the effective Hamiltonian relevant for $b\to s\mu^+\mu^-$ transitions (see Appendix~\ref{app:bsll}) as used in the recent literature (see e.g. \cite{Buras:2013qja,Buras:2013dea,Buras:2014fpa,Altmannshofer:2011gn,Altmannshofer:2013foa,Descotes-Genon:2013wba,Beaujean:2013soa}) are given as follows \cite{Buras:2012jb}
\begin{align}
\sin^2\theta_W C^{\rm NP}_9 &=-\frac{1}{g_{\text{SM}}^2M_{Z^\prime}^2}
\frac{\Delta_L^{sb}\Delta_V^{\mu\bar\mu}} {V_{ts}^* V_{tb}} ,\label{C9}\\
\sin^2\theta_W C^{\rm NP}_{10} &= -\frac{1}{g_{\text{SM}}^2M_{Z^\prime}^2}
\frac{\Delta_L^{sb}\Delta_A^{\mu\bar\mu}}{V_{ts}^* V_{tb}}=-\Delta Y_A(B_s)\label{C10},
\end{align}
where $C^{\rm NP}_9$ involves the leptonic vector coupling
of $Z^\prime$ and $C^{\rm NP}_{10}$ the axial-vector one. $C^{\rm NP}_9$ plays a crucial role in
$B_d\to K^*\mu^+\mu^-$ transitions, $C^{\rm NP}_{10}$ for $B_s\to\mu^+\mu^-$ transitions and
both coefficients are relevant for $B_d\to K\mu^+\mu^-$.
The $SU(2)_L$ relation between the leptonic couplings in (\ref{C2})
implies the following important relation \cite{Buras:2013qja}
\begin{equation}\label{SU2L}
-\sin^2\theta_W C^{\rm NP}_{9}= 2\Delta X_L(B_s)+\Delta Y_A(B_s),
\end{equation}
which leads to a triple correlation between $b\to s \nu\bar\nu$ transitions,
$B_s\to\mu^+\mu^-$ and the coefficient $C^{\rm NP}_{9}$ or equivalently
$B_d\to K^*\mu^+\mu^-$. Thus even if
$\Delta_L^{\nu\bar\nu}$ and $\Delta_A^{\mu\bar\mu}$ are independent of each other, once they are fixed the values of the coupling $\Delta_{V}^{\mu\bar\mu}$ and of $C^{\rm NP}_{9}$ are known. We will use these relations in the next section.
Our study of the $K$ system is eased by the analysis in \cite{Buras:2014sba}, where an upper bound on the coupling $\Delta_{L}^{sd}$ from $\Delta M_K$
has been derived, assuming conservatively that the NP contribution is at most as
large as the short distance SM contribution to $\Delta M_K$.
Assuming that the NP contribution to $\Delta M_K$ is at most $30\%$ of
its SM value, and rescaling the formula (70) in \cite{Buras:2014sba}, we find
the upper limit
\begin{equation}
|\Delta_{L}^{sd}|\le 0.1 \left[\frac{M_{Z^\prime}}{100\, {\rm TeV}}\right],
\end{equation}
which is clearly in the perturbative regime, and remains so even for $M_{Z^\prime}$ as large as $2000\, {\rm TeV}$. With
$|V_{td}|=8.5\times 10^{-3}$ and $|V_{ts}|=0.040$ this corresponds to $|\Delta S(K)| \leq 137$. Then, again from \eqref{REL1}, one has, for real $\Delta_L^{sd}$,
\begin{equation}
|\Delta X_{\rm L}(K)|\le 0.44\sqrt{\frac{|\Delta S(K)|}{137}} \left[\frac{\Delta_L^{\nu\bar\nu}}{3.0}\right] \left[\frac{100\, {\rm TeV}}{M_{Z^\prime}}\right].
\end{equation}
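The prefactor $0.44$ is again fixed by (\ref{REL1}), now evaluated at $M_{Z^\prime}=100\, {\rm TeV}$ with $|\Delta S(K)|=137$; a numerical sketch (assuming the same $g_{\rm SM}$ and $\tilde r\approx 0.90$ as before):

```python
import math

g_sm2 = 1.78137e-7          # GeV^-2, from eq. (gsm)
r_tilde = 0.90              # QCD correction quoted earlier
Delta_L_nn, M_Zp = 3.0, 100e3   # coupling and M_Z' = 100 TeV, in GeV

# |Delta X_L(K)| <= sqrt(|Delta S(K)|) * Delta_L / (2 M_Z' g_SM sqrt(r_tilde))
bound = math.sqrt(137) * Delta_L_nn / (2 * M_Zp * math.sqrt(g_sm2) * math.sqrt(r_tilde))
print(f"|Delta X_L(K)| <= {bound:.2f}")  # -> 0.44
```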
This shift for $M_{Z^\prime}$ in the ballpark of $100\, {\rm TeV}$ implies a correction of approximately $50\%$ to the branching ratio for
$K^+\rightarrow\pi^+\nu\bar\nu$ but no contribution to $K_{L}\rightarrow\pi^0\nu\bar\nu$ since we are assuming $\Delta_L^{sd}$
to be real. This clearly shows a non-MFV structure of NP because in models with
MFV the branching ratio for $K_{L}\rightarrow\pi^0\nu\bar\nu$ is automatically modified when the one
for $K^+\rightarrow\pi^+\nu\bar\nu$ is modified. If on the other hand $\Delta_L^{sd}$ is made complex, significant
NP contributions to $K_{L}\rightarrow\pi^0\nu\bar\nu$ are in general subject to severe constraints from $\varepsilon_K$ and $\varepsilon'/\varepsilon$, unless $\Delta_L^{sd}$ is purely imaginary, in which case the NP contributions to $\varepsilon_K$ vanish and the effects in $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ are correlated as in MFV.
We will perform a more detailed analysis of these two decays and their correlation in Section~\ref{Js}.
Let us discuss here just $K^+\rightarrow\pi^+\nu\bar\nu$, as this decay will be the first to be measured precisely.
\begin{figure}
\centering%
\includegraphics[width=0.49\textwidth]{K_5TeV.pdf}\hfill%
\includegraphics[width=0.49\textwidth]{K_50TeV.pdf}
\caption{\it Prospects for observing new physics in $K$ decays. The green regions show the 68\% C.L. and 95\% C.L. allowed regions in the $\Delta F = 2$ fit.
The contours show the $3\sigma$ and $5\sigma$ projections for $\mathcal{B}(K^+\to\pi^+\nu\bar\nu)$ in 2019 and 2024, the colours are as in Fig.~\ref{figBd}.
Left: $M_{Z^\prime}~=~5~{\rm TeV}$ and $\Delta_L^{\nu\bar\nu} = 1$. Right: $M_{Z^\prime} = 50~{\rm TeV}$ and $\Delta_L^{\nu\bar\nu} = 3$.\label{figK}}
\end{figure}
Fig.~\ref{figK} shows the prospects for $K^+\to\pi^+\nu\bar\nu$, together with the $\Delta S = 2$ constraints, in the $\Delta_L^{sd}$--$\phi_L^{sd}$ plane. We show two different scenarios:
\begin{itemize}
\item a beyond-LHC scale of $M_{Z^\prime} = 50$ TeV with $\Delta_L^{\nu\bar\nu} = 3$;
\item an LHC scale of $M_{Z^\prime} = 5$ TeV with $\Delta_L^{\nu\bar\nu} = 1$.
\end{itemize}
The conventions and colours are the same as in Fig.~\ref{figBd}. Notice the strong bound from $\epsilon_K$ for
large values of the phase $\phi_L^{sd}$, which implies that for NP at high scales with generic CP structure
at most a $3\sigma$ effect can be expected with the precision attainable at the end of
the next decade. For real or imaginary couplings, on the contrary, it is evident that scales of 50--100 TeV or even higher may be accessible through $K$ decays.
The overall message that emerges from the plots in Figs.~\ref{figBd} and \ref{figK} is that through rare meson decays one can resolve energy scales beyond those directly accessible
at the LHC: at least in the LHS with suitable values of the $Z^\prime$ couplings one can still expect deviations from the SM at the level of 3\,--\,$5\,\sigma$ with the experimental
progress of the next few years that are consistent with perturbativity and the meson mixing constraints, for $M_{Z^\prime}$ in the ranges described above.
We want to stress once more that the results discussed here correspond to the most optimistic
scenarios and to the largest couplings compatible with all considered constraints. Needless to say,
in the case of smaller couplings, or in the presence of some approximate flavour symmetry, the scales that may eventually be accessible through rare meson decays are much lower.
\subsection{Right-handed scenario}
If only RH couplings are present the results of the $\Delta F=2$ LHS analysis remain unchanged as the relevant hadronic matrix elements
-- calculated in lattice QCD -- are insensitive to the sign of $\gamma_5$.
Therefore, as far as $\Delta F=2$ processes are concerned, it is
impossible to state whether in the presence of couplings of only one chirality the deviations from SM expectations are caused by LH or RH currents \cite{Buras:2012jb}. In order to make this distinction
one has to study $\Delta F=1$ processes. In particular in the right-handed scenario (RHS) the relations (\ref{REL3}) are modified to
\begin{equation}\label{REL4}
\Delta Y_A(K)=-\Delta X_R(K) \frac{\Delta_A^{\mu\bar\mu}}{\Delta_L^{\nu\bar\nu}}, \qquad \Delta Y_A(B_q)=-\Delta X_R(B_q) \frac{\Delta_A^{\mu\bar\mu}}{\Delta_L^{\nu\bar\nu}},
\end{equation}
where the sign flip plays a crucial role. The functions $\Delta X_R(M)$
are obtained from $\Delta X_L(M)$ by replacing the LH quark couplings by the
RH ones.
We also find for the coefficient of the primed operator $C_9^\prime$
\begin{equation}\label{SU2La}
-\sin^2\theta_W C^{\prime}_{9}=2\Delta X_R(B_s)+\Delta Y_A(B_s).
\end{equation}
We refer to the Appendix~\ref{app:A} for explicit formulae for all the involved
functions.
Therefore the correlations between decays with $\nu\bar\nu$ and $\mu\bar\mu$
in the final state are different in LH and RH scenarios. In particular
angular observables in $B_d\to K^*\mu^+\mu^-$ and also the decay
$B_d\to K\mu^+\mu^-$ can help in the distinction between LHS and RHS, as the presence of RH currents is signalled by
the effects of primed operators. In the future the correlation between the
decays $B_d\to K^*\nu\bar\nu$
and $B_d\to K\nu\bar\nu$ will be able by itself to identify RH currents
at work \cite{Colangelo:1996ay,Buchalla:2000sk,Altmannshofer:2009ma,Buras:2010pz,Biancofiore:2014uba,Buras:2014fpa,Girrbach-Noe:2014kea}. We will show this explicitly in the following sections.
\subsection{Numerical analysis}\label{Js}
We will now perform a numerical study of the $\Delta F = 1$ effects that can be expected for $M_{Z^\prime}$ close to its maximal value, and of their correlations.
As already
indicated by our preceding analysis, the $\Delta F=2$
constraints in these scenarios will not allow large $Z^\prime$ couplings to
quarks, but the lepton couplings could be significantly larger than the SM $Z$ boson couplings, which read\,\footnote{For these modified $Z$ couplings
we use the same definition as in~(\ref{equ:Lleptons}) and~(\ref{DeltasVA}), with $Z^\prime$ replaced by $Z$.}
\begin{equation}
\Delta_L^{\nu\bar\nu}(Z)=-0.372,\qquad \Delta_A^{\mu\bar\mu}(Z)=0.372, \qquad \Delta_V^{\mu\bar\mu}(Z)=-0.028 \,.
\end{equation}
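These numbers can be cross-checked against the tree-level $Z$ couplings. The sketch below assumes the normalisation $|\Delta| = g_Z\times({\rm coupling~factor})$ with $g_Z = e/(s_W c_W)$, and compares magnitudes only, since the overall signs depend on conventions fixed in Appendix~\ref{app:A}:

```python
import math

# Assumed electroweak inputs (effective leptonic mixing angle, alpha(M_Z))
sw2 = 0.23126
alpha_em = 1 / 127.9
e = math.sqrt(4 * math.pi * alpha_em)
g_Z = e / math.sqrt(sw2 * (1 - sw2))

# |Delta_L^{nunu}(Z)| = g_Z |T3_nu|,        T3_nu = +1/2
# |Delta_A^{mumu}(Z)| = g_Z |T3_mu|,        T3_mu = -1/2
# |Delta_V^{mumu}(Z)| = g_Z |T3_mu - 2 Q_mu sw2|, Q_mu = -1
d_nn = g_Z * 0.5
d_A  = g_Z * 0.5
d_V  = g_Z * abs(-0.5 + 2 * sw2)
print(f"{d_nn:.3f} {d_A:.3f} {d_V:.3f}")
```

The magnitudes come out close to the quoted $0.372$, $0.372$ and $0.028$.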
Working with $M_{Z^\prime}\ge 15\, {\rm TeV}$ we will set
\begin{equation}\label{leptonhigh}
\Delta_L^{\nu\bar\nu} =\pm 3.0,\qquad \Delta_A^{\mu\bar\mu} =\mp 3.0, \qquad \Delta_V^{\mu\bar\mu} =\pm 3.0~,
\end{equation}
where the signs are chosen in order to satisfy the $SU(2)_L$ relation (\ref{C2}) in
the perturbativity regime. At $M_{Z^\prime}=15\, {\rm TeV}$, as well as for the higher masses
considered below, these lepton couplings are still
consistent with the constraints from LEP-II \cite{Schael:2013ita} and the LHC \cite{Aad:2014cka,CMS:2013qca}.
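One can also verify that the benchmark couplings in (\ref{leptonhigh}) obey the triple correlation: combining (\ref{C9}), (\ref{C10}) and (\ref{SU2L}), and assuming (as suggested by Appendix~\ref{app:A}) that $\Delta X_L(B_s)$ and $\Delta Y_A(B_s)$ carry the same quark-coupling prefactor, the relation reduces to $\Delta_V^{\mu\bar\mu} = 2\Delta_L^{\nu\bar\nu} + \Delta_A^{\mu\bar\mu}$:

```python
# Benchmark lepton couplings from eq. (leptonhigh), upper-sign choice
Delta_L_nn = +3.0   # Z' coupling to left-handed neutrinos
Delta_A_mm = -3.0   # axial-vector Z' coupling to muons
Delta_V_mm = +3.0   # vector Z' coupling to muons

# SU(2)_L relation implied by eqs. (C9), (C10) and (SU2L)
assert Delta_V_mm == 2 * Delta_L_nn + Delta_A_mm

# The sign-flipped (lower-sign) choice satisfies it as well
assert -Delta_V_mm == 2 * (-Delta_L_nn) + (-Delta_A_mm)
print("benchmark couplings satisfy the SU(2)_L relation")
```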
\begin{figure}[!tb]
\centering%
\includegraphics[width = 0.45\textwidth]{KLvsKpLHS50.png}
\caption{\it $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ versus
$\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ for $M_{Z^\prime} = 50~{\rm TeV}$ in the LHS. The colours are as in (\ref{sa})--(\ref{sf}). The four red points correspond to
the SM central values of the four CKM scenarios, respectively. The black line corresponds to the Grossman-Nir bound. The gray region shows the
experimental range of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)_\text{exp}=(17.3^{+11.5}_{-10.5})\times 10^{-11}$.}\label{fig:KLvsKpLHS50}
\end{figure}
In our analysis of $\Delta F=2$ processes we proceed as follows:
\begin{itemize}
\item
We set all non-perturbative parameters at their central values. The most
important ones are given in Table~\ref{tab:lattProjections}. The
remaining input can be found in \cite{Buras:2013ooa}. In
order to incorporate effectively the present uncertainties in these parameters
we proceed as explained below. See in particular (\ref{C3}), (\ref{DF2c}) and (\ref{DF2d}).
For future updates see PDG~\cite{Beringer:1900zz}, FLAG~\cite{Aoki:2013ldr} and HFAG~\cite{Amhis:2012bh}.
\item
For the elements $|V_{ub}|$ and $|V_{cb}|$ we use four scenarios corresponding
to different determinations from inclusive and exclusive decays with the
lower ones corresponding to exclusive determinations. They are given
in (\ref{sa})--(\ref{sf}) below, where we also give the colour coding used for these scenarios in some of the plots. The quoted errors are future projections.
It has recently been argued that a NP explanation of the difference
between the exclusive and inclusive determinations is ruled out
\cite{Crivellin:2014zpa}, so that the difference must be due to underestimated theoretical errors in the form factors and/or in the inclusive experimental determination.
Finally we use $\gamma=68^\circ$.
\end{itemize}
\begin{figure}[t]
\centering%
\includegraphics[width = 0.45\textwidth]{KstarvsK15.png}
\includegraphics[width = 0.45\textwidth]{KvsBsmu15.png}
\includegraphics[width = 0.45\textwidth]{ReC9NPvsBsmuLHS15.png}
\includegraphics[width = 0.45\textwidth]{ReC9PrimevsBsmuRHS15.png}
\caption{\it Correlations in the $B_s$ system for $M_{Z^\prime} = 15~{\rm TeV}$ in LHS (darker colours) and RHS (lighter colours) with colours as in (\ref{sa})--(\ref{sf}).
Since this system is independent of $|V_{ub}|$, purple lies under green and cyan under blue. The gray region shows the experimental 1$\sigma$ range $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm0.7)\times 10^{-9}$.}
\label{fig:BsLHSRHS}
\end{figure}
The four scenarios for $|V_{ub}|$ and $|V_{cb}|$ are given as follows:
\begin{align}
a)&\qquad |V_{ub}| = (3.4\pm 0.1)\times 10^{-3}\qquad |V_{cb}| = (39.0\pm 0.5)\times 10^{-3}\qquad ({\rm purple)} \label{sa}\\
b)& \qquad |V_{ub}| = (3.4\pm0.1)\times 10^{-3}\qquad |V_{cb}| = (42.0\pm 0.5)\times 10^{-3}\qquad ({\rm cyan)}\\
c)& \qquad |V_{ub}| = (4.3\pm 0.1)\times 10^{-3}\qquad |V_{cb}| = (39.0\pm 0.5)\times 10^{-3}\qquad ({\rm green)}\\
d)& \qquad |V_{ub}| = (4.3\pm 0.1)\times 10^{-3}\qquad |V_{cb}| = (42.0\pm 0.5)\times 10^{-3}\qquad ({\rm blue)}\label{sf}
\end{align}
In Fig.~\ref{fig:KLvsKpLHS50} we show the correlation between $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ and $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ in the LHS for the four scenarios $a)-d)$ for $(|V_{cb}|,|V_{ub}|)$. To this end
we set
\begin{equation}
\Delta_L^{\nu\bar\nu} =3.0,\qquad\qquad\qquad M_{Z^\prime}=50\, {\rm TeV},
\end{equation}
and impose the constraints from $\Delta M_K$ and $\varepsilon_K$ by demanding that they are in the ranges
\begin{equation}\label{C3}
0.75\le \frac{\Delta M_K}{(\Delta M_K)_{\rm SM}}\le 1.25,\qquad
2.0\times 10^{-3}\le |\varepsilon_K|\le 2.5 \times 10^{-3}.
\end{equation}
These ranges take into account all other uncertainties beyond CKM parameters such as long distance effects,
QCD corrections and the value of $\gamma$, which here we keep fixed.
The plot in Fig.~\ref{fig:KLvsKpLHS50} is familiar from other NP scenarios in
which the phase of the NP contribution to $\varepsilon_K$ is twice the one
of the NP contribution to $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ \cite{Blanke:2009pq},
as is the case in the scenario considered here.
$\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ can be strongly enhanced along one of the branches, as a consequence of which $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ will also be enhanced. But $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ can
also be enhanced without modifying $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$. The last feature
is not possible within the SM and any model with minimal flavour
violation, in which these two branching ratios are strongly correlated. The two branches correspond to the regions where the coupling $\Delta_L^{sd}$ is approximately real or purely imaginary, and the $\varepsilon_K$ constraint becomes irrelevant, which was already evident in Fig.~\ref{figK}. For
a better analytic understanding of this two branch structure we refer also to \cite{Blanke:2009pq}.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c||c|c|c|c|}
\hline
$\Delta_L^{\nu\bar\nu}$ & $\Delta_A^{\mu\bar\mu}$ & $\Delta_V^{\mu\bar\mu}$ & $(1,1)$& $(1,2)$ & $(2,1)$ & $(2,2)$\\
\hline
\hline
\parbox[0pt][1.6em][c]{0cm}{} $+$ & $+$ & $+$ & $+(-)$ & $+(-)$ & $-$ &$+$\\
\parbox[0pt][1.6em][c]{0cm}{} $+$ & $-$ & $+$ & $+(-)$ & $-(+)$ & $+$ &$-$\\
\parbox[0pt][1.6em][c]{0cm}{} $+$ & $-$ & $-$ & $+(-)$ & $-(+)$ & $-$ &$+$\\
\hline
\end{tabular}
\caption{\it Correlations $(+)$ and anti-correlations $(-)$ between various observables for different signs of the couplings. $(n,m)$ denotes the entry in the
$2\times 2$ matrix in Fig.~\ref{fig:BsLHSRHS}. For the elements $(1,1)$ and $(1,2)$ the signs
correspond to LHS (RHS). Flipping simultaneously the signs of all couplings does not change the correlations.}
\label{tab:signs}
\end{table}
In presenting these results we impose the constraint from $K_L\to\mu^+\mu^-$
in (\ref{eq:KLmm-bound}), which can only have an impact on $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ on
the horizontal branch and not on $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$. Because
in this scenario the couplings $\Delta_L^{\nu\bar\nu}$ and $\Delta_A^{\mu\bar\mu}$ have opposite signs, in the LHS $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ and
$\mathcal{B}(K_L\to \mu^+\mu^-)$ are anti-correlated so that the
constraint in (\ref{eq:KLmm-bound}) has no impact on the upper bound on
$\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$.
On the other hand, for the chosen signs of leptonic couplings
these two branching ratios are correlated in the RH scenario and the maximal values of $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ on the horizontal branch could in principle be smaller than the ones shown
in Fig.~\ref{fig:KLvsKpLHS50} due to the bound in (\ref{eq:KLmm-bound}). However, for
the chosen parameters this turns out not to be the case.
As far as the second branch is concerned,
as recently analysed in \cite{Buras:2014sba} and known from previous literature, the ratio $\varepsilon'/\varepsilon$ can in principle have a large impact on the largest allowed
values of $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ and $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ on the branch
where these branching ratios are correlated. Unfortunately, the present
large uncertainties in QCD penguin contributions to $\varepsilon'/\varepsilon$ do not allow
for firm conclusions and we do not show this constraint here.
We observe that large deviations from the SM can be measured
even at such high scales. Increasing $M_{Z^\prime}$ to $100\, {\rm TeV}$ would
reduce NP effects by a factor of two, which could still be measured in
the flavour precision era. We conclude therefore that $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$
decays can probe the Zeptouniverse even if only LH or RH $Z^\prime$ couplings
to quarks are present.
In Fig.~\ref{fig:BsLHSRHS} we show the correlations for decays sensitive to $b\to s$
transitions. To this end we set in accordance with the signs in
(\ref{leptonhigh})
\begin{equation}\label{leptonhigh1b}
\Delta_L^{\nu\bar\nu} =3.0,\qquad \Delta_A^{\mu\bar\mu} =-3.0, \qquad \Delta_V^{\mu\bar\mu} = 3.0, \qquad M_{Z^\prime}=15\, {\rm TeV}\, .
\end{equation}
The $\Delta F=2$ constraint has been incorporated through the conditions
\begin{equation}\label{DF2c}
-8^\circ \le \phi_s \le 8^\circ, \qquad 0.9\le C_{B_s}\equiv
\frac{\Delta M_s}{\Delta M_s^{\text{SM}}}\le 1.1\,.
\end{equation}
As we have already shown, measurable NP effects are still present at $15\, {\rm TeV}$ provided the lepton couplings are as large as assumed here, but for larger values of $M_{Z^\prime}$ the detection of NP would be hard.
We consider therefore $M_{Z^\prime}=15\, {\rm TeV}$ as an approximate upper value in LHS and RHS that can still be probed in the flavour precision era.
It
will be interesting to monitor the development of the values of $\phi_s$ and $C_{B_s}$ in the future. If they depart significantly from
their SM values, $\phi_s\approx -2^\circ$ and $C_{B_s}=1.0$,
NP effects could be observed in rare decays.
In presenting these results we have chosen the leptonic couplings in
(\ref{leptonhigh1b}), but (\ref{leptonhigh}) admits a second possibility in
which all the couplings are reversed. It is an easy exercise to convince oneself
that the correlations presented by us are invariant under this change. On
the other hand, for smaller leptonic couplings there are other combinations of the signs of the three leptonic couplings involved that are consistent with perturbativity while satisfying the $SU(2)_L$ relation in (\ref{C2}). As $\Delta F=2$
constraints are independent of leptonic couplings it is not difficult
to translate our results into these different possibilities, even if
a decrease of the leptonic couplings would suppress NP effects. Moreover, if
they were not all decreased by a common factor, the slopes in our plots
would change. This freedom will be important once the experimental data relevant for our
plots become available. We collect various possibilities in Table~\ref{tab:signs}.
Finally in Fig.~\ref{fig:BdLHS} we show the branching ratio $\mathcal{B}(B_d\to\mu^+\mu^-)$ in the LHS as a function of $|\Delta_L^{bd}|$ for $M_{Z^\prime}=15\, {\rm TeV}$, imposing the constraints
\begin{equation}\label{DF2d}
40^\circ \le \phi_d \le 46^\circ, \qquad 0.9\le C_{B_d}=
\frac{\Delta M_d}{\Delta M_d^{\text{SM}}}\le 1.1\,.
\end{equation}
{As expected, there is a sizeable dependence on the CKM matrix elements. Even though $B_d^0-\bar B_d^0$ mixing in the SM is strongly suppressed relative to
$B_s^0-\bar B_s^0$ mixing, after the present experimental constraints from
$\Delta F=2$ observables are imposed the $B_d$ system allows us to explore
approximately the same scales as the $B_s$ system. The situation could change
if the constraints in (\ref{DF2c}) and (\ref{DF2d}) are modified in
different manners in the future.}
\begin{figure}[!tb]
\centering
\includegraphics[width = 0.45\textwidth]{Bdmumuvss1315.png}
\caption{\it $\mathcal{B}(B_d\to\mu^+\mu^-)$ versus $|\Delta_L^{bd}|$ for
$M_{Z^\prime}=15\, {\rm TeV}$ in LHS, with colours as in (\ref{sa})--(\ref{sf}).}
\label{fig:BdLHS}
\end{figure}
\section{Left-Right operators at work}\label{sec:4}
\subsection{Basic idea}
As seen in (\ref{REL1}), when the constraints from $\Delta F=2$ processes are
taken into account the $Z^\prime$ contributions to $\Delta F=1$ observables decrease with increasing $M_{Z^\prime}$. The reason is simple \cite{Buras:2012jb}: a tree-level $Z^\prime$ contribution to $\Delta F=2$
observables depends quadratically on $\Delta_{L,R}^{ij}/M_{Z^\prime}$.
For any high value of $M_{Z^\prime}$, even beyond the reach of the
LHC, it is possible to find couplings $\Delta_{L,R}^{ij}$ which are not
only consistent with the existing data but can even remove certain
tensions found within the SM. The larger $M_{Z^\prime}$, the larger
couplings are allowed.
Once $\Delta_{L,R}^{ij}$ are fixed in this manner, they can be used to
predict $Z^\prime$ effects in $\Delta F=1$ observables. However here
NP contributions to the amplitudes are proportional to
$\Delta_{L,R}^{ij}/M^2_{Z^\prime}$ and with the couplings proportional to $M_{Z^\prime}$, the $Z^\prime$ contributions to $\Delta F=1$ observables decrease with increasing
$M_{Z^\prime}$.
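This scaling can be illustrated with a short numerical sketch (illustrative only; the reference values $M_0$ and $\Delta_0$ are arbitrary normalisations, not fits to data). Fixing the size of the $\Delta F=2$ amplitude, which scales as $(\Delta/M_{Z^\prime})^2$, makes the allowed coupling grow linearly with $M_{Z^\prime}$, so that the $\Delta F=1$ amplitude, proportional to $\Delta/M_{Z^\prime}^2$, falls like $1/M_{Z^\prime}$:

```python
# Toy sketch of the scaling argument: the Delta F = 2 bound fixes
# (Delta / M)^2, so the allowed coupling grows linearly with M.
def coupling(M, M0=5.0, Delta0=0.5):
    """Largest coupling consistent with a fixed Delta F = 2 amplitude."""
    return Delta0 * M / M0          # Delta / M stays constant

def dF1_amplitude(M):
    """Delta F = 1 NP amplitude, proportional to Delta / M^2."""
    return coupling(M) / M**2       # falls like 1 / M

for M in (5.0, 10.0, 20.0):         # M_Z' in TeV
    print(M, dF1_amplitude(M))      # halves each time M doubles
```

Doubling $M_{Z^\prime}$ halves the rare-decay amplitude, which is the origin of the decreasing $\Delta F=1$ effects with increasing $M_{Z^\prime}$ discussed above.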
But this stringent correlation is only present in the LHS and RHS
considered until now. If both couplings are present this correlation can
be broken, simply because we then have four parameters instead of two in the $Z^\prime$ couplings to quarks of each meson system. As we will soon see,
this will allow us to increase the resolution of short distance scales
and to reach Zeptouniverse sensitivity also with the help of $B_{s,d}$ decays
while satisfying their $\Delta F=2$ constraints.
\subsection{L+R scenario}
In the presence of both LH and RH
couplings of a $Z^\prime$ gauge boson to SM quarks, left-right (LR) $\Delta F=2$ operators are generated whose contributions to the mixing amplitudes $M_{12}^{bq}$ and $M_{12}^{sd}$ in all three mesonic systems are enhanced through renormalisation group effects relative to left-left (VLL) and right-right (VRR) operators. Moreover, in the
case of $M_{12}^{sd}$ additional chiral enhancements of the hadronic matrix elements of LR operators are present. As pointed out in \cite{Buras:2014sba} this fact can be used to suppress
NP contributions to $\Delta M_K$ through some fine-tuning between VLL, VRR and
LR contributions, thereby allowing for larger contributions to $K\to\pi\pi$ amplitudes while satisfying the $\Delta M_K$ constraint in the limit of
small NP phases. Here we generalise this idea to all three systems and NP phases
in $Z^\prime$ contributions. While the fine-tuning required in the case of
$K\to\pi\pi$ turned out to be rather large, it will be more modest in
the case at hand.\footnote{In order to distinguish this more general scenario from the LRS and ALRS in \cite{Buras:2012jb}, where the LH and RH couplings were either
equal or differed by sign, we denote it simply by L+R.}
To this end we write the $Z^\prime$ contributions to the mixing amplitudes as follows \cite{Buras:2012jb}:
\begin{equation}\label{ZpnewK}
(M_{12}^*)_{Z^\prime}^{sd} = \frac{(\Delta_L^{sd})^2}{2M_{Z^\prime}^2} \langle \hat Q_1^\text{VLL}(M_{Z^\prime})\rangle^{sd} z_{sd},
\end{equation}
and
\begin{equation}\label{Zpnewbq}
(M_{12}^*)_{Z^\prime}^{bq} = \frac{(\Delta_L^{bq})^2}{2M_{Z^\prime}^2} \langle \hat Q_1^\text{VLL}(M_{Z^\prime})\rangle^{bq} z_{bq},
\end{equation}
where $z_{sd}$ and $z_{bq}$ are generally complex. We have
\begin{equation}\label{deltasupp}
z_{sd}=\left[1+\left(\frac{\Delta_R^{sd}}{\Delta_L^{sd}}\right)^2+2\kappa_{sd}\frac{\Delta_R^{sd}}{\Delta_L^{sd}}\right],
\qquad \kappa_{sd}=\frac{\langle \hat Q_1^\text{LR}(M_{Z^\prime})\rangle^{sd}}{\langle \hat Q_1^\text{VLL}(M_{Z^\prime})\rangle^{sd}}
\end{equation}
with analogous expressions for $z_{bq}$.
Here using the technology of \cite{Buras:2001ra,Buras:2012fs} we have expressed $z_{sd}$ in terms of the renormalisation scheme independent
matrix elements
\begin{align}
&\langle\hat Q_1^\text{VLL}(M_{Z^\prime})\rangle^{sd} = \langle Q_1^\text{VLL}(M_{Z^\prime})\rangle^{sd}\left(1+\frac{11}{3}\frac{\alpha_s(M_{Z^\prime})}{4\pi}\right),\label{Q1VLL}\\
&\langle \hat Q_1^\text{LR}(M_{Z^\prime})\rangle^{sd} =\langle Q_1^\text{LR}(M_{Z^\prime})\rangle^{sd}\left(1-\frac{1}{6}\frac{\alpha_s(M_{Z^\prime})}{4\pi}\right) -\frac{\alpha_s(M_{Z^\prime})}{4\pi}\langle Q_2^\text{LR}(M_{Z^\prime})\rangle^{sd}\,.\label{Q1LR}
\end{align}
$\langle Q_1^\text{VLL}(M_{Z^\prime})\rangle^{sd}$ and $\langle Q_{1,2}^\text{LR}(M_{Z^\prime})\rangle^{sd}$, which are defined in Appendix~\ref{app:operators}, are the matrix elements evaluated at $\mu=M_{Z^\prime}$ in the $\overline{\rm MS}$-NDR scheme, and the presence of $\mathcal{O}(\alpha_s)$ corrections removes
the scheme dependence. $\alpha_s(M_{Z^\prime})$ is the value of the strong coupling at $M_{Z^\prime}$. The corresponding formulae for $B_q$ mesons are obtained by
simply changing $sd$ to $bq$ without changing the $\alpha_s$ corrections.
In Table~\ref{tab:QME} we give the central values of the matrix elements in (\ref{Q1VLL}) and
(\ref{Q1LR}) for the three meson
systems considered and for different values of $M_{Z^\prime}$. For the $K^0-\bar K^0$ system we have
used weighted averages of the relevant $B_i$ parameters obtained in lattice QCD
in \cite{Boyle:2012qb,Bertone:2012cu}; for the
$B_{d,s}^0-\bar B^0_{d,s}$ systems we have used the ones in \cite{Carrasco:2013zta}.
As the values of the relevant $B_i$ parameters in these papers have been
evaluated at $\mu=3\, {\rm GeV}$ and $\mu = 4.29\, {\rm GeV}$, respectively, we have used the
formulae in \cite{Buras:2001ra} to obtain the values of the matrix
elements in question at $M_{Z^\prime}$.\footnote{For simplicity we choose the renormalisation scale to
be $M_{Z^\prime}$, but any scale of this order would give the same results for
the physical quantities up to NNLO QCD corrections that are negligible
at these high scales.} The renormalisation scheme dependence of the
matrix elements is canceled by the one of the Wilson coefficients as mentioned
above.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\scalebox{0.85}{
\begin{tabular}{|c|cccccc|}
\hline
$M_{Z^\prime}$ & 5 TeV & 10 TeV & 20 TeV & 50 TeV & 100 TeV & 200 TeV\\
\hline
$\langle\hat{Q}_1^{\rm VLL}(M_{Z^\prime})\rangle^{sd}$& 0.00158 & 0.00156 & 0.00153 & 0.00150 & 0.00148 & 0.00146 \\
$\langle\hat{Q}_1^{\rm LR}(M_{Z^\prime})\rangle^{sd}$& $-0.183$ & $-0.197$ & $-0.211$ & $-0.230$ & $-0.244$ & $-0.259$ \\
$\kappa_{sd}(M_{Z^\prime})$& $-115.46$ & $-126.51$ & $-137.84$ & $-153.24$ & $-165.20$ & $-177.41$ \\
\hline
$\langle\hat{Q}_1^{\rm VLL}(M_{Z^\prime})\rangle^{bd}$& 0.0423 & 0.0416 & 0.0409 & 0.0401 & 0.0395 & 0.0390 \\
$\langle\hat{Q}_1^{\rm LR}(M_{Z^\prime})\rangle^{bd}$& $-0.183$ & $-0.195$ & $-0.206$ & $-0.222$ & $-0.234$ & $-0.246$ \\
$\kappa_{bd}(M_{Z^\prime})$& $-4.33$ & $-4.68$ & $-5.04$ & $-5.53$ & $-5.92$ & $-6.30$ \\
\hline
$\langle\hat{Q}_1^{\rm VLL}(M_{Z^\prime})\rangle^{bs}$& 0.0622 & 0.0611 & 0.0601 & 0.0589 & 0.0581 & 0.0573 \\
$\langle\hat{Q}_1^{\rm LR}(M_{Z^\prime})\rangle^{bs}$& $-0.268$ & $-0.284$ & $-0.301$ & $-0.323$ & $-0.340$ & $-0.357$ \\
$\kappa_{bs}(M_{Z^\prime})$& $-4.31$ & $-4.66$ & $-5.01$ & $-5.48$ & $-5.85$ & $-6.23$ \\
\hline
\end{tabular}}
\end{center}
\caption{\it Central values of the scheme-independent hadronic matrix elements evaluated at different values of $M_{Z^\prime}$. $\langle \hat Q_1^{\rm VLL}\rangle^{ij}$ and $\langle \hat Q_1^{\rm LR}\rangle^{ij}$ are in units of ${\rm GeV}^3$.}
\label{tab:QME}
\end{table}
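As a consistency check of the table (a small sketch; the inputs are copied from the $5\, {\rm TeV}$ column above), $\kappa_{ij}$ is simply the ratio of the two tabulated matrix elements, cf. (\ref{deltasupp}):

```python
# Consistency check: kappa_ij = <Q1_LR> / <Q1_VLL> (see eq. (deltasupp)),
# using the 5 TeV column of the table (matrix elements in GeV^3).
Q1_VLL = {"sd": 0.00158, "bd": 0.0423, "bs": 0.0622}
Q1_LR  = {"sd": -0.183,  "bd": -0.183, "bs": -0.268}
kappa_table = {"sd": -115.46, "bd": -4.33, "bs": -4.31}

for ij in ("sd", "bd", "bs"):
    ratio = Q1_LR[ij] / Q1_VLL[ij]
    # agrees with the tabulated kappa up to the rounding of the inputs
    print(f"kappa_{ij}: {ratio:.2f} (table: {kappa_table[ij]})")
```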
Now, as seen in Table~\ref{tab:QME}, both $\kappa_{sd}$ and $\kappa_{bq}$ are negative, implying that with the same sign of LH and RH couplings the last term in (\ref{deltasupp}) could suppress the contribution of NP to $\Delta F=2$ processes. We also note that for $M_{Z^\prime}\ge 10\, {\rm TeV}$ one has
$|\kappa_{sd}|\ge 126$ and $|\kappa_{bq}|\ge 4.7$ implying that for $z_{sd}$ and $z_{bq}$ to be significantly below unity
the RH couplings must be much smaller than the LH ones.
This in turn implies that the second term in the expression for $z_{sd}$
in (\ref{deltasupp}) can be neglected in first approximation, and we obtain the following hierarchy
between LH and RH couplings necessary to suppress NP contributions to $\Delta F=2$ observables:
\begin{equation}\label{finetuning1}
\frac{\Delta_R^{sd}}{\Delta_L^{sd}}\simeq-\frac{a_{sd}}{2\kappa_{sd}}, \qquad\qquad\qquad \frac{\Delta_R^{bq}}{\Delta_L^{bq}}\simeq-\frac{a_{bq}}{2\kappa_{bq}}\,.
\end{equation}
The parameters $a_{sd}$ and $a_{bq}$ must be close to unity in order to make the suppression effective. How close they should be to unity depends on present and future results for hadronic and CKM parameters in $\Delta F=2$ observables.
Unfortunately the present errors on the hadronic matrix elements are quite large, and do not allow a precise determination of the level of fine-tuning required.
An estimate is however possible: in Fig.~\ref{tuning} we show the deviation of the $a_{ij}$ from 1, $\delta a_{ij}$, allowed by the $\Delta F = 2$ fit at $68\%$ and $95\%$ C.L. -- or, equivalently, the precision up to which the right-handed couplings have to be determined -- as a function of $M_{Z^\prime}$.
In these plots we have fixed the matrix elements in the NP contributions to their central values of Table~\ref{tab:QME}, while we included their errors in the SM part. This is justified by our assumption that the SM contribution is the dominant one and gives a good description of data. A shift in the matrix elements $\kappa_{ij}$ will change the values of $\Delta_R^{ij}/\Delta_L^{ij}$ that cancel $z_{ij}$ in \eqref{deltasupp}, but the allowed relative deviation from that value, parametrised by $a_{ij}$, mainly depends on the error in the SM prediction.
In Fig.~\ref{tuning} for concreteness we have taken maximal phases of $\pi/4$ for all the couplings and set $\Delta_L^{ij} = 3$.
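The cancellation mechanism of (\ref{deltasupp}) and (\ref{finetuning1}) can also be checked numerically (a sketch using the $M_{Z^\prime}=10\, {\rm TeV}$ values of $\kappa_{ij}$ from Table~\ref{tab:QME} and setting $a_{ij}=1$):

```python
def z(r, kappa):
    """Suppression factor z_ij of eq. (deltasupp), with r = Delta_R / Delta_L."""
    return 1 + r**2 + 2 * kappa * r

# kappa_ij at M_Z' = 10 TeV (Table tab:QME)
kappa = {"sd": -126.51, "bd": -4.68, "bs": -4.66}

for ij, k in kappa.items():
    r = -1.0 / (2 * k)            # eq. (finetuning1) with a_ij = 1
    # the residual equals r^2 exactly: the quadratic term neglected in the text
    print(f"{ij}: Delta_R/Delta_L = {r:.4f}, z = {z(r, k):.2e}")
```

The required hierarchy is $\Delta_R/\Delta_L\approx 4\times 10^{-3}$ in the $K$ system but only $\approx 0.11$ in the $B_{s,d}$ systems, reflecting the much larger $|\kappa_{sd}|$.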
\begin{figure}
\centering%
\includegraphics[width=0.5\textwidth]{tuning_K.pdf}\hfill%
\includegraphics[width=0.5\textwidth]{tuning_Bs.pdf}
\caption{\it Level of fine-tuning in the couplings $\Delta_R^{sd}$ (left) and $\Delta_R^{bs}$ (right) required,
taking maximal phases and $\Delta_L^{ij} = 3$, in order to suppress NP effects in $\Delta F = 2$ observables in the $K$ and $B_s$ systems,
respectively, as a function of $M_{Z^\prime}$. The dashed and solid lines represent the 68\% and 95\% C.L. contours.\label{tuning}}
\end{figure}
In any case the fact that $a_{sd}$ and $a_{bq}$ introduce in each case two
new parameters allows us to describe the $\Delta F=2$ observables independently
of rare decays as opposed to the LHS and RHS. On the other hand,
due to the hierarchy of couplings and the absence of LR operators in the
rare decays considered by us, rare decays are governed again by LH couplings as in the LHS,
with the bonus that now the constraint from $\Delta F=2$
observables can be ignored.
As $\kappa_{sd}\gg\kappa_{bq}$ the
hierarchy of couplings in this scenario
must be much larger in the $K$ system than in the
$B_{s,d}$ systems.
{It is evident from (\ref{ZpnewK}) and (\ref{deltasupp})
that our discussion above remains true if $L$ and
$R$ are interchanged because the hadronic matrix elements of $\Delta F=2$ operators do not depend on the sign of $\gamma_5$. In particular the values in Table~\ref{tab:QME} remain unchanged, except that now they apply to the matrix elements of $Q_1^{\rm VRR}$ that equal the ones of $Q_1^{\rm VLL}$. In turn $L$ and $R$ are
interchanged in (\ref{finetuning1}) and consequently rare decays are governed
by RH couplings in this case. While these two opposite hierarchies cannot be distinguished through $\Delta F=2$ observables they can be distinguished through
rare decays as we will demonstrate below.}
This picture of short distances should be contrasted with the
LR and ALR scenarios analysed in \cite{Buras:2012sd,Buras:2012jb,Buras:2012dp,Buras:2013uqa,Buras:2013rqa,Buras:2013raa,Buras:2013qja,Buras:2013dea}, in
which the LH and RH couplings were of the same size. In
that case the LR operators dominate NP contributions to $\Delta F=2$ observables, which implies significantly smaller allowed couplings, and in turn
stronger constraints on the $\Delta F=1$ observables.
Even if also there the signals from LH or
RH currents could in principle be observed in rare $K$ and $B_{s,d}$ decays, their effects will only be measurable for scales below $10\, {\rm TeV}$.
The main message of this section is the following one: by appropriately
choosing the hierarchy between LH and RH flavour violating $Z^\prime$
couplings to quarks one can eliminate to a large extent the constraints
from $\Delta F=2$ transitions even in the presence of large CP-violating phases, and in this manner increase the resolution
of short distance scales, which now would be probed solely by rare $K$ and
$B_{s,d}$ decays. While in the $B_{d,s}$ systems this can be done at the price of a mild fine-tuning, and allows one to reach the Zeptouniverse, in the $K$ system it requires a fine-tuning of the couplings at the level of 1\% -- 1\permil\ because of the strong $\varepsilon_K$ constraint (see Fig.~\ref{tuning}). Notice however that $K$ decays already allowed us to reach 100 TeV in the LHS without the need of right-handed couplings.
The implications of this are rather profound. Even if
in the future the SM agreed perfectly with all $\Delta F=2$ observables,
this would not necessarily
imply that no NP effects can be seen in rare decays, even if the $Z^\prime$
is very heavy. The maximal value of the $Z^\prime$ mass, $M_{Z^\prime}^{\rm max}$, for which
measurable effects in rare decays could in principle still be found, and perturbativity of
couplings is respected, is again rather different in different systems,
and depends
on the assumed perturbativity upper bounds on $Z^\prime$ couplings and
on the sensitivity
of future experiments.
In Appendix~\ref{app:basic_formulae} we give expressions for the rare decay branching ratio observables ${\cal B}$ given in Table~\ref{tab:rareProjections}, which depend on the functions $X_{L,R}$ and $Y_{L,R}$ listed in Appendix~\ref{app:A}.
Combining these formulae gives the following relation for a non-zero $\Delta X_L(M)$ (as defined in \eqref{DeltaFunDefns})
\begin{equation}\label{MZprimebound}
M_{Z^\prime}^\text{max}=K(M)\sqrt{\left|\frac{\Delta_L^{\nu\bar\nu}}{3.0}\right|}\sqrt{\left|\frac{\Delta_L^{ij}}{3.0}\right|}
\sqrt{\left|\frac{10\%}{\delta_{\rm exp}(M)}\right |},
\end{equation}
where $ij=sd,db,sb$ for $M=K,B_d,B_s$, respectively, and $\delta_{\rm exp}(M) \equiv \delta\mathcal{B}/\mathcal{B}$ is the experimental sensitivity that can be reached in $M$ decays, as listed in Table~\ref{tab:rareProjections}. For the present CKM parameters
the factors $K(M)$ are as follows:
\begin{equation}\label{KKBB}
K(K)\approx 1400\, {\rm TeV}, \qquad K(B_d)\approx 280\, {\rm TeV}, \qquad K(B_s)\approx 140\, {\rm TeV}.
\end{equation}
One has similar formulae for $Y_A(M)$, but as $Y^\text{SM}\approx 0.65 X^\text{SM}$ one can reach slightly higher values of $M_{Z^\prime}$ for the same experimental sensitivity. We note that this time there is a difference between the $B_d$ and $B_s$
system, which was not the case in Section~\ref{sec:3}.
We also note that, although these maximal values depend on the assumed maximal
values of the $Z^\prime$ couplings to SM fermions and the assumed sensitivity to
NP,
this is not a strong dependence due to the square roots involved.
Using the projections for 2024 in Table~\ref{tab:rareProjections}, we get
\begin{equation}
M_{Z^\prime}^{\rm max}(K) \approx 2000\, {\rm TeV},\qquad M_{Z^\prime}^{\rm max}(B_s)\approx M_{Z^\prime}^{\rm max}(B_d) \approx 160\, {\rm TeV}\,,
\end{equation}
so that $M_{Z^\prime}^{\rm max}$ in $B_s$ and $B_d$ systems are comparable in
spite of the difference in the factors $K(M)$ in (\ref{KKBB}).
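Relation (\ref{MZprimebound}) is simple enough to evaluate directly (a sketch; the $K(M)$ factors are taken from (\ref{KKBB}), while the couplings and sensitivity are free inputs):

```python
from math import sqrt

K_factor = {"K": 1400.0, "Bd": 280.0, "Bs": 140.0}   # TeV, eq. (KKBB)

def MZp_max(M, delta_nu=3.0, delta_ij=3.0, delta_exp=0.10):
    """Maximal testable Z' mass (TeV) from eq. (MZprimebound)."""
    return (K_factor[M]
            * sqrt(abs(delta_nu) / 3.0)
            * sqrt(abs(delta_ij) / 3.0)
            * sqrt(0.10 / delta_exp))

print(MZp_max("K"))                    # 1400.0: maximal couplings, 10% sensitivity
print(MZp_max("Bs", delta_exp=0.025))  # 280.0: a 4x better sensitivity doubles the reach
```

Because of the square roots, even an order-of-magnitude change in the sensitivity or in the coupling product shifts $M_{Z^\prime}^{\rm max}$ by only a factor of a few.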
\subsection{Numerical analysis}
\begin{figure}[t]
\centering
\includegraphics[width = 0.45\textwidth]{KLvsKptuned500.png}
\caption{\it $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ versus
$\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ for $M_{Z^\prime} = 500~{\rm TeV}$ in L+R scenario. The colours are as in (\ref{sa})--(\ref{sf}). The four red points correspond to
the SM central values of the four CKM scenarios, respectively. The black line corresponds to the Grossman-Nir bound. The gray region shows the experimental range $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)_\text{exp}=(17.3^{+11.5}_{-10.5})\times 10^{-11}$.}\label{fig:KLvsKptuned500}
\end{figure}
Our analysis of this scenario follows the one of Section~\ref{Js} except
that now we may ignore the $\Delta F=2$ constraints and increase all
left-handed quark couplings (in the case of the dominance of left-handed currents)
to
\begin{equation}
\Delta_L^{sd} =3.0\, e^{i\phi_L^{sd}}, \qquad\qquad \Delta_L^{bd} =3.0 \,e^{i\phi_L^{bd}},\qquad\qquad \Delta_L^{bs} = 3.0\, e^{i\phi_L^{bs}}
\end{equation}
with arbitrary phases $\phi_L^{ij}$. For the lepton couplings we use the values given in (\ref{leptonhigh}).
\begin{figure}[t]
\centering%
\includegraphics[width = 0.45\textwidth]{KstarvsKtuned80.png}
\includegraphics[width = 0.45\textwidth]{KvsBsmutuned80.png}
\includegraphics[width = 0.45\textwidth]{ReC9NPvsBsmutuned80.png}
\includegraphics[width = 0.45\textwidth]{ReC9PrimevsBsmutunedRHS80.png}
\caption{\it Correlations in the $B_s$ system for $M_{Z^\prime} = 80~{\rm TeV}$ in L+R scenario (colours as in (\ref{sa})--(\ref{sf}) but with much overlap, due to
the very weak dependence on $|V_{ub}|$, i.e.
purple is under green and cyan is under blue).
Darker colours correspond to the scenario where LH couplings dominate over RH and vice versa for lighter colours. The gray region shows the experimental 1$\sigma$ range in $\overline{\mathcal{B}}(B_s\to\mu^+\mu^-) = (2.9\pm0.7)\times 10^{-9}$.}\label{fig:Bstuned}\end{figure}
In Fig.~\ref{fig:KLvsKptuned500} we show the correlation between $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ and $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$ for the four scenarios $a)-d)$ for $(|V_{cb}|,|V_{ub}|)$ and
$M_{Z^\prime}=500\, {\rm TeV}$.
The pattern of correlations in Fig.~\ref{fig:KLvsKptuned500} is very different from the one in Fig.~\ref{fig:KLvsKpLHS50} as
now the phase of the NP contribution to $\varepsilon_K$ is generally not
twice the one of the NP contribution to $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$.
Therefore, as already discussed in general terms in \cite{Blanke:2009pq}
the two branch structure seen in Fig.~\ref{fig:KLvsKpLHS50} is absent here. In particular, it is
possible to obtain values for $\mathcal{B}(K_{L}\rightarrow\pi^0\nu\bar\nu)$ and $\mathcal{B}(K^+\rightarrow\pi^+\nu\bar\nu)$
that are outside the two branches seen in Fig.~\ref{fig:KLvsKpLHS50} and
that differ from the SM predictions. This feature could allow us to distinguish these two scenarios. It should also be added that without $\Delta F=2$ constraints
NP effects at the level of the amplitude decrease quadratically with increasing $M_{Z^\prime}$ so that for $M_{Z^\prime}=1000\, {\rm TeV}$ NP would contribute only
at the $15\%$ level. While such small effects are impossible to detect in other decays considered by us, the exceptional theoretical cleanness of $K^+\rightarrow\pi^+\nu\bar\nu$ and
$K_{L}\rightarrow\pi^0\nu\bar\nu$ could in principle allow one to study such effects one day.
On the other hand for $M_{Z^\prime}=200\, {\rm TeV}$ the enhancements of both branching ratios could be much larger than
shown in Fig.~\ref{fig:KLvsKptuned500}. This would require higher fine-tuning in the $\Delta F=2$ sector as seen in Fig.~\ref{tuning}.
As we fixed the absolute values of the couplings in this example, the
different values of branching ratios on the circles correspond
to different values of the phase $\phi_L^{sd}$ as it is varied from $0$ to
$2\pi$. Measuring these two branching ratios would determine this
phase uniquely.
Most importantly, we observe that even at such high
scales NP effects are sufficiently large to be measured in the future.
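The quadratic decrease quoted above is easy to quantify (a sketch normalised to the $\sim 15\%$ amplitude-level effect quoted in the text for $M_{Z^\prime}=1000\, {\rm TeV}$):

```python
def np_over_sm(M, M_ref=1000.0, frac_ref=0.15):
    """NP/SM amplitude ratio for fixed couplings, scaling as 1/M^2,
    normalised to ~15% at M_Z' = 1000 TeV as quoted in the text."""
    return frac_ref * (M_ref / M)**2

for M in (500.0, 1000.0, 2000.0):    # TeV
    print(M, np_over_sm(M))          # 0.6, 0.15, 0.0375
```

At $500\, {\rm TeV}$ the NP amplitude can thus still reach the $60\%$ level, consistent with the sizeable effects visible in Fig.~\ref{fig:KLvsKptuned500}.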
In Fig.~\ref{fig:Bstuned} we show various correlations sensitive to the
$\Delta_{L,R}^{bs}$ couplings in L+R scenario for $M_{Z^\prime} = 80$~TeV. The choice of lepton couplings is as in (\ref{leptonhigh1b}).
We observe the following features:
\begin{itemize}
\item
The correlations this time have a structure very similar to the one found in Fig.~\ref{fig:BsLHSRHS} for $M_{Z^\prime}=15\, {\rm TeV}$, but due to larger quark couplings and the absence of $\Delta F=2$ constraints NP effects can be sizeable even at
$M_{Z^\prime}=80\, {\rm TeV}$.
\item
As expected, a clear distinction between LH and RH couplings can be made provided
NP effects in $\mathcal{B}(B_s\to\mu^+\mu^-)$ are sufficiently large to
allow measurable NP effects in the other four observables shown in
Fig.~\ref{fig:Bstuned}.
\end{itemize}
Due to the similarity of the plots in Figs.~\ref{fig:BsLHSRHS} and
\ref{fig:Bstuned} the question arises how we could distinguish these two
scenarios through future measurements. While some ideas for this distinction will be developed in Section~\ref{sec:5a}, here we just want to make the following
observation.
Once the values of $S_{\psi\phi}$ and $C_{B_s}$ are much more precisely
known than assumed in (\ref{DF2c}), the range of allowed values for
the observables in Fig.~\ref{fig:BsLHSRHS} will be significantly
reduced, possibly ruling out this scenario through rare decay
measurements. On the other hand this progress in the determination of $\Delta F=2$ observables will have no impact on the plots in Fig.~\ref{fig:Bstuned}, as the L+R scenario can always accommodate these constraints.
Finally, in Fig.~\ref{fig:Bdtuned} we show $\mathcal{B}(B_d\to\mu^+\mu^-)$ as a function
of $M_{Z^\prime}$ together with the SM prediction and the experimental range.
We observe that even for $M_{Z^\prime}=200\, {\rm TeV}$ there are visible departures
from the SM prediction. For $M_{Z^\prime}=50\, {\rm TeV}$ even the present $1\sigma$
experimental range can be reached. This plot shows that for smaller
values of $M_{Z^\prime}$ interesting results can be obtained even with smaller
couplings.
\begin{figure}[t]
\centering%
\includegraphics[width = 0.45\textwidth]{BdvsMZptuned.png}
\caption{\it $\mathcal{B}(B_d\to\mu^+\mu^-)$ versus $M_{Z^\prime}$ in the L+R scenario. The red line corresponds to the SM central value and the grey area is the
experimental region: $\left(3.6^{+1.6}_{-1.4}\right)\times 10^{-10}$.}\label{fig:Bdtuned}
\end{figure}
\section{The case of a neutral scalar or pseudoscalar}\label{sec:4a}
\subsection{Preliminaries}
Tree-level neutral scalar and pseudo-scalar exchanges\,\footnote{In what follows, unless specified, we will use the name {\it scalar} for both scalars and pseudo-scalars.} can
give large contributions to $\Delta F=2$ and $\Delta F=1$ processes. Prominent
examples are supersymmetric theories at large $\tan\beta$, two-Higgs doublet models (2HDMs) and left-right
symmetric models. In the case of $\Delta F=2$ transitions new scalar operators
are generated and in the presence of $\mathcal{O}(1)$ flavour-violating couplings
one can be sensitive to scales as high as $10^4\, {\rm TeV}$, or even more \cite{Bona:2007vi,Isidori:2010kg,Charles:2013aka}. The question then arises
which distance scales can be probed by $\Delta F=1$ processes mediated by tree-level scalar exchanges. In order to
answer this question in explicit terms we will concentrate here on the decays
$B_{s,d}\to\mu^+\mu^-$, which, as we will momentarily show, allow one to reach the Zeptouniverse without
any fine-tuning in the presence of new heavy scalars with large couplings to quarks and leptons. As we have seen in Section~\ref{sec:3} this was not
possible in the case of a heavy $Z^\prime$. We have checked that other decays
analysed in the previous sections cannot compete with $B_{s,d}\to\mu^+\mu^-$ in
probing very short distance scales in the presence of neutral heavy scalars
with flavour-violating couplings. In fact, as we will see, $B_{s,d}\to\mu^+\mu^-$ play a prominent role in testing very short distance scales in this case,
as $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ play in $Z^\prime$ NP scenarios.
A very detailed analysis of generic scalar tree-level contributions to $\Delta F=2$ and $\Delta F=1$ processes has been presented in \cite{Buras:2013rqa,Buras:2013uqa}. In particular in \cite{Buras:2013rqa} general formulae for various
observables have been presented. We will not repeat these formulae here but
we will use them to derive a number of expressions that will allow a direct
comparison of this NP scenario with the $Z^\prime$ one.
Our goal then is to find out what is the highest energy scale which can be
probed by $B_{s,d}\to\mu^+\mu^-$ when the dominant NP contributions are
tree-level scalar exchanges subject to present $\Delta F=2$ constraints and
perturbativity. We will first present general expressions and subsequently
we will discuss in turn the cases analogous to the $Z^\prime$ scenarios of
Sections~\ref{sec:3} and \ref{sec:4}.
\subsection{General formulae}
Denoting by $H$ a neutral scalar with mass $M_H$ the mixing amplitudes
are given as follows ($q=s,d$)
\begin{equation}
\label{Snewbq}
(M_{12}^*)_{H}^{bq} = -\left[\frac{(\Delta_L^{bq}(H))^2}{2M_{H}^2}+\frac{(\Delta_R^{bq}(H))^2}{2M_{H}^2} \right]\langle \hat Q_1^\text{SLL}(M_{H})\rangle^{bq}
-\frac{\Delta_L^{bq}(H)\Delta_R^{bq}(H)}{M_{H}^2}\langle \hat Q_2^\text{LR}(M_{H})\rangle^{bq}.
\end{equation}
Here $\Delta_{L,R}^{bq}(H)$ are the left-handed and right-handed scalar couplings
and the renormalisation scheme independent
matrix elements are given as follows
\cite{Buras:2012fs}
\begin{align}
\langle\hat Q_1^\text{SLL}(M_H)\rangle^{bq} &= \langle Q_1^\text{SLL}(M_{H})\rangle^{bq}\left(1+\frac{9}{2}\frac{\alpha_s(M_{H})}{4\pi}\right)+
\frac{1}{8}\frac{\alpha_s(M_{H})}{4\pi}\langle Q_2^\text{SLL}(M_{H})\rangle^{bq}
,\label{Q1SLL}\\
\langle \hat Q_2^\text{LR}(M_{H})\rangle^{bq} &= \langle Q_2^\text{LR}(M_{H})\rangle^{bq}\left(1-\frac{\alpha_s(M_{H})}{4\pi}\right) -\frac{3}{2}\frac{\alpha_s(M_{H})}{4\pi}\langle Q_1^\text{LR}(M_{H})\rangle^{bq}\,.\label{Q2LR}
\end{align}
The operators $Q_{1,2}^\text{SLL}$ are defined in Appendix~\ref{app:operators}. The operators
$Q_{1,2}^\text{LR}$ were already present in the case of $Z^\prime$ but now, as seen
from (\ref{Q2LR}), the operator $Q_{2}^\text{LR}$ plays the dominant role.
In writing (\ref{Snewbq}) we have used the fact that the matrix elements
of the RH scalar operators $Q_{1,2}^\text{SRR}$ equal those of $Q_{1,2}^\text{SLL}$ operators. The Wilson coefficients of $Q_{1,2}^\text{SRR}$ are represented
in (\ref{Snewbq}) by the term involving $(\Delta_R^{bq}(H))^2$.
In analogy to (\ref{Zpnewbq}) we can rewrite (\ref{Snewbq})
\begin{equation}\label{Snewbq1}
(M_{12}^*)_{H}^{bq} = -\frac{(\Delta_L^{bq}(H))^2}{2M_{H}^2} \langle \hat Q_1^\text{SLL}(M_{H})\rangle^{bq} \tilde z_{bq}(M_H),
\end{equation}
where $\tilde z_{bq}(M_H)$ is generally complex, and is given by
\begin{align}
\label{deltasuppS}
\tilde z_{bq}(M_H) &=\left[1+\left(\frac{\Delta_R^{bq}(H)}{\Delta_L^{bq}(H)}\right)^2+2\tilde \kappa_{bq}(M_H)\frac{\Delta_R^{bq}(H)}{\Delta_L^{bq}(H)}\right],\\
\quad \tilde\kappa_{bq}(M_H) &=\frac{\langle \hat Q_2^\text{LR}(M_{H})\rangle^{bq}}{\langle \hat Q_1^\text{SLL}(M_{H})\rangle^{bq}}.
\end{align}
In Table~\ref{tab:QMES} we give the central values of the renormalization scheme independent matrix elements of
(\ref{Q1SLL}) and (\ref{Q2LR}) for the three meson
systems and for different values of $M_{H}$, using the lattice results of \cite{Boyle:2012qb,Bertone:2012cu,Carrasco:2013zta} as in Table~\ref{tab:QME}. For simplicity we set the renormalisation scale to
$M_{H}$, but any scale of this order would give the same results for
the physical quantities up to NNLO QCD corrections that are negligible
at these high scales. We also give the values of $\tilde\kappa_{bq}(M_H)$ and of
$m_b(M_H)$ that we will need below. The results for the $K$ system are given here only
for completeness but we will not study rare $K$ decays in this section as they
are not as powerful as $B_{s,d}\to\mu^+\mu^-$ in probing short distance scales
in the scalar NP scenarios.
\begin{table}[t]
\begin{center}
\renewcommand{\arraystretch}{1.2}
\scalebox{0.85}{
\begin{tabular}{|c|cccccccc|}
\hline
$M_{H}$ & 5 TeV & 10 TeV & 20 TeV & 50 TeV & 100 TeV & 200 TeV & 500 TeV & 1000 TeV\\
\hline
$\langle\hat{Q}_1^{\rm SLL}(M_{H})\rangle^{sd}$& -0.089 & -0.093 & -0.096 & -0.101 & -0.105 & -0.108 & -0.113 & -0.116 \\
$\langle\hat{Q}_2^{\rm LR}(M_{H})\rangle^{sd}$& 0.291 & 0.312 & 0.334 & 0.362 & 0.384 & 0.405 & 0.434 & 0.456 \\
$\tilde{\kappa}_{sd}(M_{H})$& -3.27 & -3.37 & -3.46 & -3.58 & -3.66 & -3.75 & -3.86 & -3.94 \\
\hline
$\langle\hat{Q}_1^{\rm SLL}(M_{H})\rangle^{bd}$& -0.095 & -0.099 & -0.103 & -0.108 & -0.112 & -0.116 & -0.120 & -0.124 \\
$\langle\hat{Q}_2^{\rm LR}(M_{H})\rangle^{bd}$& 0.245 & 0.262 & 0.280 & 0.304 & 0.322 & 0.340 & 0.365 & 0.383 \\
$\tilde{\kappa}_{bd}(M_{H})$& -2.57 & -2.64 & -2.72 & -2.81 & -2.88 & -2.95 & -3.03 & -3.09 \\
\hline
$\langle\hat{Q}_1^{\rm SLL}(M_{H})\rangle^{bs}$& -0.140 & -0.146 & -0.152 & -0.159 & -0.164 & -0.170 & -0.177 & -0.182 \\
$\langle\hat{Q}_2^{\rm LR}(M_{H})\rangle^{bs}$& 0.348 & 0.373 & 0.399 & 0.432 & 0.458 & 0.484 & 0.519 & 0.545 \\
$\tilde{\kappa}_{bs}(M_{H})$& -2.48 & -2.56 & -2.63 & -2.72 & -2.79 & -2.85 & -2.93 & -2.99 \\
\hline
$m_b(M_H) {\rm [GeV]}$& 2.27 & 2.19 & 2.12 & 2.03 & 1.97 & 1.92 & 1.85 & 1.81 \\
\hline
\end{tabular}
}
\end{center}
\caption{\it Central values of the scheme-independent hadronic matrix elements evaluated at different values of $M_{H}$. $\langle \hat Q_1^{\rm SLL}\rangle^{ij}$ and $\langle \hat Q_2^{\rm LR}\rangle^{ij}$ are in units of ${\rm GeV}^3$.}
\label{tab:QMES}
\end{table}
We have summarised the formulae for the branching ratio observables of $B_{s,d}\to\mu^+\mu^-$ decays in Appendix~\ref{app:Bsmumu}.
In the case of tree-level scalar and pseudo-scalar exchanges, the Wilson coefficients of the corresponding effective Hamiltonian (see e.g. \cite{Buras:2013rqa}), which vanish in the SM, are given as follows
\begin{align}
m_b(\mu_H)\sin^2\theta_W C^{(\prime)}_S &= \frac{1}{g_{\text{SM}}^2}\frac{1}{M_H^2}\frac{\Delta_{R(L)}^{bq}(H)\Delta_S^{\mu\bar\mu}(H)}{V_{tq}^* V_{tb}},\\
m_b(\mu_H)\sin^2\theta_W C^{(\prime)}_P &= \frac{1}{g_{\text{SM}}^2}\frac{1}{M_H^2}\frac{\Delta_{R(L)}^{bq}(H)\Delta_P^{\mu\bar\mu}(H)}{V_{tq}^* V_{tb}},
\end{align}
where $\Delta_{S,P}^{\mu\bar\mu}(H)$ are given by
\begin{align}\begin{split}\label{equ:mumuSPLR}
&\Delta_S^{\mu\bar\mu}(H)= \Delta_R^{\mu\bar\mu}(H)+\Delta_L^{\mu\bar\mu}(H),\\
&\Delta_P^{\mu\bar\mu}(H)= \Delta_R^{\mu\bar\mu}(H)-\Delta_L^{\mu\bar\mu}(H),\end{split}
\end{align}
such that the corresponding Lagrangian reads~\cite{Buras:2013rqa}
\begin{equation}
\mathcal{L}=\frac{1}{2}\bar\mu \big[\Delta_S^{\mu\bar\mu}(H)+\gamma_5\Delta_P^{\mu\bar\mu}(H)\big]\mu H\,.
\end{equation}
$\Delta^{\mu\bar\mu}_{S}$ is real and $\Delta^{\mu\bar\mu}_{P}$ purely
imaginary as required by the hermiticity of the Hamiltonian. See \cite{Buras:2013rqa} for properties of these couplings.
It should be noted that $C_S$ and $C_P$ involve the scalar right-handed quark couplings, whereas
$C_S^\prime$ and $C_P^\prime$ the left-handed ones.
An important feature to be stressed here is that for the same values of the couplings $\Delta_S^{\mu\bar\mu}(H)$ and $\Delta_P^{\mu\bar\mu}(H)$
the pseudoscalar contributions play a more important role because they interfere with the SM contributions (see \eqref{PP}).
Therefore, in order to find the maximal values of $M_H$ that can be tested by $B_{s,d}\to\mu^+\mu^-$, it is in principle
sufficient to consider only the pseudoscalar contributions $P$. But for completeness we will also show the results for the scalar case.
\subsection{Left-handed and right-handed scalar scenarios}\label{sec:SLL}
These two scenarios correspond to the ones considered in Section~\ref{sec:3}
and involve respectively either only LH scalar currents (SLL scenario) or RH ones (SRR scenario). In
these simple cases it is straightforward to derive the correlations between
pseudoscalar contributions to $\Delta F=2$ observables and the values
of the Wilson coefficients $C_P$ and $C_P^\prime$. One finds
\begin{align}
m_b(\mu_H)\sin^2\theta_W\frac{C^{(\prime)}_{P}(B_q)}{\sqrt{[\Delta S(B_q)]_\text{RR(LL)}^\star}} &=
\frac{\Delta_{P}^{\mu\bar\mu}(H)}{2\,M_{H}\,g_{\rm SM}}
\sqrt{\frac{\langle Q_1^\text{VLL}(m_t)\rangle^{bq}}{-\langle\hat{Q}^\text{SLL}_{1}(M_H)\rangle^{bq}}} \notag\\
&=0.0015\,\Delta_{P}^{\mu\bar\mu}(H)\left[\frac{500\, {\rm TeV}}{M_{H}}\right],
\end{align}
where
$[\Delta S(B_q)]_{LL}$ and $[\Delta S(B_q)]_{RR}$ are the shifts in the SM one-loop $\Delta F=2$ function $S^\text{SM}$ caused by the pseudoscalar tree-level
exchanges in SLL and SRR scenarios respectively.
The matrix elements $\langle\hat{Q}^\text{SLL}_{1}(M_H)\rangle^{bq}$ are given for various values of $M_H$ in Table~\ref{tab:QMES}, while the $\langle Q_1^\text{VLL}(m_t)\rangle^{bq}$ evaluate to 0.046~GeV$^3$ and 0.067~GeV$^3$ for $q=d$ and $q=s$, respectively.
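As an illustrative cross-check of the numerical coefficient above (our addition, not part of the original analysis), one can combine the matrix elements quoted for $q=s$ at $M_H=500\, {\rm TeV}$ with the normalisation $g_{\rm SM}^2 \approx 1.78\times 10^{-7}\,{\rm GeV}^{-2}$ commonly used in this context; the value of $g_{\rm SM}$ is an assumption here:

```python
import math

# Cross-check of the 0.0015 coefficient for q = s at M_H = 500 TeV.
# Assumption: g_SM^2 = 4 (G_F/sqrt(2)) * alpha/(2 pi sin^2(theta_W)) ~ 1.78e-7 GeV^-2.
g_sm = math.sqrt(1.78e-7)   # GeV^-1 (assumed normalisation)
M_H = 500e3                 # GeV
Q1_VLL = 0.067              # <Q_1^VLL(m_t)>^{bs} in GeV^3 (quoted above)
Q1_SLL = -0.177             # <Q_1^SLL(500 TeV)>^{bs} in GeV^3 (from the Table)

coef = math.sqrt(Q1_VLL / -Q1_SLL) / (2 * M_H * g_sm)
print(round(coef, 4))       # 0.0015
```

The $q=d$ matrix elements give essentially the same coefficient, consistent with a single value being quoted for both systems.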
In order to find the maximal values of $M_H$ that can be tested by future
measurements we assume
\begin{equation}
\Delta_{P}^{\mu\bar\mu}(H)=3.0\,i, \qquad \big|[\Delta S(B_q)]_{LL}\big|\le 0.36.\label{mixCons}
\end{equation}
Then by using the formulae listed above we can calculate the ratio $\overline{R}_q$ of $\mathcal{\bar B}(B_q\to\mu^+\mu^-)$ to its SM expectation, given in
(\ref{Rdef}), as a function of $M_H$.
From Table~\ref{tab:rareProjections} we see that in 2024 a deviation of $3\sigma$ from the SM estimate of $\overline{\cal B}(B_s\to\mu^+\mu^-)$ will correspond to a 30\% deviation in $\overline{R}_s$ from one.
In the left panel of Fig.~\ref{fig:scalarMH} we show the dependence of $\overline{R}_s$ on $M_H$ for the case of pseudo-scalar and scalar exchanges.
We observe that measurable effects of pseudo-scalar exchanges can be obtained at $M_H$ as high as $600$\,--\,$700\, {\rm TeV}$ for the large couplings considered, depending also on whether the interference with the SM contribution is constructive or destructive.
Because scalars do not interfere with the SM contributions, they only just approach the Zeptouniverse scale of $200\, {\rm TeV}$.
In the right panel of Fig.~\ref{fig:scalarMH} we show the result of a fit of all the $\Delta F = 2$ constraints for an arbitrary phase of
the $\Delta_L^{bs}(H)$ coupling -- i.e. allowing for CP violation in the scalar sector -- together with the projections for $\mathcal{\bar B}(B_s\to\mu^+\mu^-)$ in 2019 and 2024,
in the plane\,\footnote{Writing the $\Delta_L^{bq}(H)$ coupling as $i\Delta_{L}^{bq}(H) e^{i\phi_{L}^{bq}(H)}$.} $\Delta_L^{bs}(H)$--$\phi_L^{bs}(H)$. The notation is the same as in Fig.~\ref{figBd},
with the green regions being allowed by the $\Delta F = 2$ fit at 68\% and 95\% C.L., and the continuous and dashed lines indicating $3\sigma$ and $5\sigma$ effects in $B_s\to\mu^+\mu^-$,
respectively. We fixed $M_H = 500$ TeV and $|\Delta_P^{\mu\mu}| = 3$. The effects are maximal for real, positive values of the coupling, where there is maximal constructive interference
with the SM contribution.
Lower precision is expected for $\overline{\cal B}(B_d\to\mu^+\mu^-)$ in the LHC era, with a $3\sigma$ effect corresponding to a 60\% deviation in $\overline{R}_d$ by 2030.
Therefore, with constraints on $B_d$ mixing equivalent to those given in \eqref{mixCons}, the scales that can be probed in the SLL or SRR scenarios are lower, yet still within the Zeptouniverse.
The maximal effects given here are of course lower for smaller values of the scalar lepton couplings $\Delta_{S,P}^{\mu\mu}$, as expected in most motivated concrete models.
\begin{figure}
\centering%
\raisebox{1cm}{\includegraphics[width=0.49\textwidth]{BsmumuProbe_comb.pdf}}\hfill%
\includegraphics[width=0.47\textwidth]{BsH500.pdf}
\caption{\it Left: dependence of $\overline{R}_s$ on the heavy scalar mass $M_H$, showing the pure LH
(or RH) scenario and the combined L+R scenario (see text for details). Right: analogous situation to Figure~\ref{figBd}, but for a heavy pseudo-scalar with $M_H=500\, {\rm TeV}$ and $|\Delta_P^{\mu\mu}(M_H)|=3$ in the LHS.}\label{fig:scalarMH}
\end{figure}
\subsection{L+R scalar scenario}\label{sec:SLL+LR}
In the presence of both LH and RH couplings the $\Delta F=2$ constraints can be
loosened so that higher values of $M_H$ can be probed.
Let us again set $|\Delta_L^{bq}| = 3$, consistent with perturbativity bounds.
In order for NP effects in $B_q$ mixing to be negligible, we require $\Delta_R^{bq}$ to be such that $\tilde{z}_{bq}(M_H)$ given in \eqref{deltasuppS} approximately vanishes.
This happens when
\begin{equation}
\Delta_{R}^{bq} \approx -\left(\tilde{\kappa}_{bq}(M_H) \pm \sqrt{\tilde{\kappa}_{bq}(M_H)^2-1} \right)\Delta_{L}^{bq} \sim \frac{1}{5}\Delta_{L}^{bq},\label{sol}
\end{equation}
where in the last expression we have kept only the ``+'' solution in order to be consistent with perturbativity. As we already discussed in the previous sections, interchanging $L$ and $R$ in (\ref{sol}) and setting $|\Delta_R^{bq}|=3$ is also a solution.
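As a quick numerical illustration of (\ref{sol}) (our addition), taking $\tilde{\kappa}_{bq}(M_H)\approx -2.6$ as a representative Table value, the ``+'' root gives $\Delta_R^{bq}\approx\Delta_L^{bq}/5$, while the ``$-$'' root would require $|\Delta_R^{bq}|\approx 5|\Delta_L^{bq}|$ and is therefore excluded by perturbativity for $|\Delta_L^{bq}|=3$:

```python
import math

# Representative value from the Table (b-q systems around 100-500 TeV)
kappa = -2.6

r_plus = -(kappa + math.sqrt(kappa**2 - 1))   # Delta_R / Delta_L, "+" root
r_minus = -(kappa - math.sqrt(kappa**2 - 1))  # "-" root

print(round(r_plus, 2), round(r_minus, 2))    # 0.2 5.0
```

Note that the two roots multiply to one, as follows from the quadratic they solve.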
In the left panel of Fig.~\ref{fig:scalarMH} we show the dependence of $\overline{R}_s$ on $M_H$ for the case of pseudo-scalar and scalar exchanges also for this scenario.
We observe that, for measurable effects in $\overline{R}_s$ greater than 30\%, scales of $700$\,--\,$750\, {\rm TeV}$ can be probed, which are only slightly higher as compared to the pure SLL or SRR cases. This is easily understood in terms of the fact that flavour-violating couplings of order 2\,--\,3, close to their perturbativity bound, were allowed by the $\Delta F = 2$ constraints already in the pure SLL and SRR cases (see the right panel of Fig.~\ref{fig:scalarMH}), giving also there large effects in $B_{s,d}$ decays.
In contrast, for NP effects that give a $\overline{R}_d$ greater than 60\%, which could be observable at $3\sigma$ in 2030, the additional smallness of the $\overline{\cal B}(B_d\to\mu^+\mu^-)$ SM estimate (due to $|V_{td}| \ll |V_{ts}|$) allows scales up to $1200\, {\rm TeV}$ to be probed for the large couplings we consider.
\section{Other New Physics scenarios}\label{sec:5}
\subsection{Preliminaries}
We would like now to address the question whether our findings can be generalised to other NP scenarios while keeping in mind that we would like to obtain the
highest possible resolution of short distance scales with the help of
$\Delta F=1$ processes and staying consistent with the constraints from
$\Delta F=2$ processes and perturbativity. After all, our NP scenarios up to now have been very simple: one heavy gauge boson or (pseudo-)scalar contributing to both $\Delta F=1$ and $\Delta F=2$ transitions at tree-level. In general one could have several new particles and, moreover, there is the possibility of a GIM mechanism at work protecting against FCNCs at tree-level. Before discussing various
possibilities let us make a few general observations:
\begin{itemize}
\item
If a gauge boson or scalar (pseudoscalar) contributes at tree-level to $\Delta F=1$ transitions it will necessarily contribute also to $\Delta F=2$ transitions.
\item
On the other hand a gauge boson or a scalar (pseudoscalar) can contribute
to $\Delta F=2$ transitions at tree-level without having any impact on
$\Delta F=1$ transitions. This is the case, for instance, for a heavy gluon $G^\prime$, which, carrying colour, does not couple to leptons, or for a leptophobic $Z^\prime$. In the case of a scalar (pseudoscalar)
this could be realised if the coupling of these bosons to leptons
is suppressed through small lepton masses, which is the case if
these bosons take part in electroweak symmetry breaking.
\end{itemize}
We will now briefly discuss two large classes of NP models, reaching the following conclusions:
\begin{itemize}
\item
In order to achieve a high resolution of short-distance
scales in the presence of tree-level FCNCs that satisfy $\Delta F=2$ constraints, one generally has to break the correlation between $\Delta F=1$ and $\Delta F=2$ transitions. In the case of a single $Z^\prime$ or (pseudo-)scalar
this can be done via the L+R scenario, or by the introduction of multiple such NP
particles.\footnote{
In special cases such as the decays $K^+\rightarrow\pi^+\nu\bar\nu$ and $K_{L}\rightarrow\pi^0\nu\bar\nu$ in $Z^\prime$ scenarios, and $B_{s,d}\to \mu^+\mu^-$ in the scalar case, the Zeptouniverse can be reached even in the presence of $\Delta F=2$ constraints.}
\item
If the GIM mechanism is at work and there are no tree-level FCNCs
the pattern of correlations between $\Delta F=1$ and $\Delta F=2$
transitions could differ from the case of tree-level FCNCs. Yet, as we will show, in this case the energy scales which can be explored by rare $K$
and $B_{s,d}$ decays are significantly lower than the ones found by us
in the previous sections.
\end{itemize}
\subsection{The case of two gauge bosons}
Let us assume that there are two gauge bosons $Z_1^\prime$ and $Z_2^\prime$
but only $Z_1^\prime$ couples to leptons, i.e.\ $Z_2^\prime$ could be colourless
or an octet of $SU(3)_c$. In such a model it is possible to reach very high
scales with only LH or RH couplings to quarks. Indeed, let us assume that
these two bosons have only LH flavour violating couplings. Only $Z_1^\prime$
is relevant for $\Delta F=1$ transitions and if $Z^\prime_2$ were absent
we would have the LH scenario of Section~\ref{sec:3}, which does not allow
measurable effects in $B_{s,d}$ decays above $20\, {\rm TeV}$ due to $\Delta F=2$
constraints.
In contrast, with two gauge bosons we can suppress NP contributions to $\Delta F=2$
transitions by choosing their couplings and masses such that their contributions to $\Delta M_{s,d}$ approximately cancel. This is clearly a tuned scenario.
Assuming that the masses of these bosons are of the same order so that we
can ignore the differences in RG QCD effects, a straightforward calculation
allows us to derive the relation
\begin{equation}
\left[\frac{\Delta^{ij}_L(Z^\prime_1)}{\Delta^{ij}_L(Z^\prime_2)}\right]^2=-\frac{1}{N_c}
\left[\frac{M_{Z_1^\prime}}{M_{Z_2^\prime}}\right]^2
\end{equation}
which should be approximately satisfied.
Here $N_c$ is equal to 3 or 1 for $Z^\prime_2$ with or without colour,
respectively. This in turn implies
\begin{equation}\label{cancelling}
\Delta^{ij}_L(Z^\prime_2)=i \sqrt{N_c}\,\Delta_L^{ij}(Z^\prime_1) \left[\frac{M_{Z_2^\prime}}{M_{Z_1^\prime}}\right]
\end{equation}
so that the phases of these couplings must differ by $\pi/2$.
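To make the cancellation explicit, here is a small numerical sketch (our addition; the relative colour factor $1/N_c$ between the two $\Delta F=2$ contributions is taken as implied by the first relation above, and the coupling and mass values are arbitrary):

```python
import cmath

# Two Z' bosons with purely LH couplings; Z'_2 carries colour (N_c = 3).
Nc = 3
M1, M2 = 10.0, 25.0              # arbitrary masses (same units)
D1 = 2.0 * cmath.exp(0.7j)       # arbitrary complex coupling of Z'_1
D2 = 1j * cmath.sqrt(Nc) * D1 * (M2 / M1)   # relation (cancelling)

# Tree-level Delta F = 2 contributions, each proportional to Delta^2 / M^2,
# with the relative factor 1/N_c implied by the first displayed relation:
total = D1**2 / M1**2 + (1 / Nc) * D2**2 / M2**2
print(abs(total))                # ~ 0: the two contributions cancel
```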
The same argument can be made for RH couplings. Moreover, it is not
required that both gauge bosons have LH or RH couplings and the relation
in (\ref{cancelling}) assures cancellation of NP contributions to $\Delta F=2$
processes for the four possibilities of choosing different couplings. The
two scenarios for $Z_1^\prime$ can be distinguished by rare decays. One can of course
also consider the L+R scenario, but it is not necessary here.
In the case of two gauge bosons with comparable masses, scenarios could
also be considered in which these bosons have LH and RH couplings of
roughly the same size, properly tuned to minimise constraints from $\Delta F=2$ observables. However, if perturbativity of their couplings is assumed, the
highest resolution of short distance scales will still be comparable to
the one found in the previous section. On the other hand with two gauge bosons
having LH and RH couplings of the same size, the correlations between $\Delta F=1$ observables could be modified with respect to the ones presented in our
paper. We will return to this possibility in the future.
\subsection{The case of a degenerate scalar and pseudo-scalar pair}
We proceed to consider a model consisting of a scalar $H^0$ and a pseudo-scalar $A^0$ with equal (or nearly degenerate) mass $M_{H^0} = M_{A^0} = M_H$. This is, for example, essentially realised for 2HDMs in a decoupling regime, where $H^0$ and $A^0$ are much heavier than the SM Higgs $h^0$ and almost degenerate in mass.
Allowing for a scalar $H^0$ and pseudo-scalar $A^0$ with equal couplings to quarks, i.e.\
\begin{equation}
{\cal L} \ni \overline{D}_L \tilde{\Delta} D_R (H^0 + i A^0) + {\rm h.c.},
\end{equation}
where $D=(d,s,b)$ and $\tilde{\Delta}$ is a matrix in flavour space, gives the couplings
\begin{equation}
\Delta_R^{qb}(H^0) = \tilde{\Delta}^{qb}, \quad \Delta_R^{qb}(A^0) = i \tilde{\Delta}^{qb}, \quad \Delta_L^{qb}(H^0) = \big(\tilde{\Delta}^{bq}\big)^*, \quad
\Delta_L^{qb}(A^0) = -i\big(\tilde{\Delta}^{bq}\big)^*.
\end{equation}
Restricting the couplings to be purely LH or RH and assuming a degenerate mass, we see from inspection of \eqref{Snewbq} that the contributions to $(M_{12}^*)^{bq}_H$ will automatically cancel,
without fine-tuning in the couplings.
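The mechanism is transparent at the level of the couplings above: for purely LH (or RH) couplings each contribution to $(M_{12}^*)^{bq}_H$ is proportional to $\Delta^2/M^2$, and $\Delta(A^0)=i\,\Delta(H^0)$ implies $\Delta(A^0)^2=-\Delta(H^0)^2$. A minimal numerical sketch (our addition, with an arbitrary coupling value):

```python
# Degenerate scalar/pseudoscalar pair with purely RH couplings.
M = 500e3                  # common mass in GeV (arbitrary choice)
D_H = 3.0 * (0.3 + 0.95j)  # arbitrary complex coupling of H^0
D_A = 1j * D_H             # coupling of A^0, as dictated by the Lagrangian

total = D_H**2 / M**2 + D_A**2 / M**2
print(abs(total))          # 0: automatic cancellation, no tuning needed
```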
However, if both LH and RH couplings are present, the LR operator will contribute to the mixing.
In 2HDMs with MFV, for example, the $\Delta_L^{qb}$ couplings are suppressed by $m_q/m_b$ relative to $\Delta_R^{qb}$, which will give small but non-zero contributions to the mixing even in the limit of a degenerate heavy neutral scalar and pseudoscalar.
Let us consider the case of only LH (or RH) couplings, and set $|\Delta_L^{sb}| = |\Delta_P^{\mu\mu}| = 3.0$ as before.
Then, for observable deviations in $\overline{R}_s$ greater than 30\%, we find that this model can probe scales up to $M_H=850\, {\rm TeV}$, which is comparable to the two scenarios discussed in Sections~\ref{sec:SLL} and \ref{sec:SLL+LR}.
\subsection{GIM case}
If there are no FCNCs at tree-level, then new particles contributing to various
box and penguin diagrams enter the game, making the correlations between
$\Delta F=1$ and $\Delta F=2$ processes more difficult to analyse. However,
it is evident that for the same couplings NP effects in this case will be
significantly suppressed relative to the scenarios discussed until now.
This is good for suppressing NP contributions at relatively low scales but it
does not allow us to reach energy scales as high as in the case of FCNCs at the
tree level.
Assuming that the involved one-loop functions are $\mathcal{O}(1)$ and comparing
tree-level expressions for $\Delta F=2$ and $\Delta F=1$ effective Hamiltonians with those one
would typically get by calculating box and penguin diagrams, we find that
NP contributions from loop diagrams are suppressed relative to tree diagrams
by the additional factors
\begin{equation}
\kappa(\Delta F=2)=\frac{\Delta^2_{L,R}}{32\pi^2}, \qquad \kappa(\Delta F=1)=\frac{\Delta^2_{L,R}}{8\pi^2}.
\end{equation}
For couplings $\Delta_{L,R}\approx3$ these suppressions amount approximately to $1/40$
and $1/10$ respectively. This in turn implies that at the same precision
as in the previous sections the maximal scales at which NP could be
studied are reduced by roughly factors of $6$ and $3$ for $\Delta F=2$
and $\Delta F=1$, respectively. For smaller couplings this reduction is larger. Detailed numbers are not possible without
the study of a concrete model.
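The arithmetic behind these estimates is elementary (our illustration; the only assumption is the $1/\Lambda^2$ scaling of the effective amplitudes, so that the maximal probed scale shrinks by $1/\sqrt{\kappa}$):

```python
import math

Delta = 3.0
kappa_dF2 = Delta**2 / (32 * math.pi**2)   # ~ 0.029, quoted as roughly 1/40
kappa_dF1 = Delta**2 / (8 * math.pi**2)    # ~ 0.11,  quoted as roughly 1/10

# Reduction of the maximal resolvable scale:
print(round(1 / math.sqrt(kappa_dF2), 1))  # 5.9 -> roughly 6 for Delta F = 2
print(round(1 / math.sqrt(kappa_dF1), 1))  # 3.0 -> roughly 3 for Delta F = 1
```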
\boldmath
\section{Can we determine $M_{Z^\prime}$ beyond the LHC scales?}\label{sec:5a}
\unboldmath
We have seen that all observables considered in $Z^\prime$ scenarios depend on the ratios
of the $Z'$ couplings over the $Z'$ mass $M_{Z^\prime}$, as listed in (\ref{C1}) and (\ref{C2}). By assuming
the largest couplings consistent with perturbativity we have succeeded in giving an
idea of the highest values of $M_{Z^\prime}$ that could still allow us to study
the structure of the NP involved. However, it is not guaranteed that the $Z^\prime$
couplings are that large, and $M_{Z^\prime}$ could also be smaller, yet still significantly higher than the LHC scales.
Let us therefore assume that in the future all observables considered in our paper
have been measured with high precision and all CKM and hadronic uncertainties
have been reduced to a few percent level. Moreover, let us assume significant departures from
SM predictions have been identified with the pattern of deviations from the
SM pointing towards the existence of a heavy $Z^\prime$. We then ask the
question whether in this situation we could determine at least approximately the value of $M_{Z^\prime}$ on the basis of flavour observables.
Before we answer this question let us recall that the masses of the SM gauge
bosons $Z$ and $W^\pm$ were predicted in the 1970's, several years before
their discovery, due to the knowledge of $G_F$, $\alpha_{\text{em}}$ and
$\sin^2\theta_W$, all determined in low energy processes. Similarly,
the masses of the charm quark and the top quark could be approximately predicted.
Yet, this was only possible because it was done within a concrete
theory, the SM, which allowed one to use all measured low energy processes
at that time.
Thus within a specific theory with not too many free parameters, one could
imagine that also the mass of $Z^\prime$ could be indirectly determined. But what if
the only information about $Z^\prime$ comes from the processes considered
by us?
Here we would like to point out the possibility of determining $M_{Z^\prime}$ from flavour observables
provided the next $e^+e^-$ or $\mu^+\mu^-$ collider, still with center of mass
energies well below $M_{Z^\prime}$, could determine indirectly the leptonic ratios in
(\ref{C2}). This will only be possible if in these collisions some
departures from SM expectations are also found. Only the determination of the ratios involving muon couplings is necessary, as the one involving neutrino couplings could be obtained through the $SU(2)_L$ relation in (\ref{C2}). These ratios could of course be obtained from the upgraded LHC, but the presence of protons in
the initial state will complicate this determination.
Knowing the values of the ratios in (\ref{C2}), one could determine all ratios
in (\ref{C1}) through rare $K$ and $B_{s,d}$ decays. Here the decays
governed by $b\to s$ transitions are superior to the other decays as there
are many of them, yet if the decays $K_L\to\pi^0\ell^+\ell^-$ could be
measured and the hadronic matrix elements entering $\varepsilon'/\varepsilon$ brought under control,
the $K$ system would also be of interest here.
What is crucial for the idea that follows is that $\Delta F=2$ transitions have
not yet been used for the determination of the ratios in (\ref{C1}), that both LH and RH couplings are present, and that both are relevant for rare decays to the extent that the ratios in (\ref{C1}) can be measured. This would not allow the
resolution of the highest scales but would still provide interesting results.
Now for the main point. With the ratios in (\ref{C1}) determined
by rare decays, the dependence of the right-hand sides of (\ref{ZpnewK}) and (\ref{Zpnewbq}) on
$M_{Z^\prime}$ is only through the hadronic matrix elements of the involved operators.
Although this dependence, as given in Table~\ref{tab:QME}, is only logarithmic,
it is sufficiently strong in the presence of LR operators to allow one to
estimate the value of $M_{Z^\prime}$ with the help of
$\Delta F=2$ observables. To this end
precise knowledge of the relevant hadronic matrix elements
is necessary. This also applies to CKM parameters entering SM contributions.
In principle the same discussion applies to scalars, but it is unlikely that they could play a prominent role in $e^+ e^-$ collisions.
\section{Conclusions}\label{sec:6}
In this paper we have addressed the question of whether
we could learn something about the very short distance scales that are beyond the
reach of the LHC on the basis of quark flavour observables alone. Certainly
this depends on the size of NP, its nature and in particular on the available precision
of the SM predictions for flavour observables. The latter precision depends on the extraction of CKM parameters from the data and on the
theoretical uncertainties. Both are expected to be reduced in this decade
down to 1\,--\,2\%, which should allow NP to be identified even if it contributed only at the level of 10\,--\,30\% to the branching ratios.
Answering this question in the context of $Z^\prime$ models and assuming that
all its couplings to SM fermions take values of at most $3.0$, our main findings
are as follows:
\begin{itemize}
\item
$\Delta F=2$ processes alone cannot give us any concrete information about
the nature of NP at short distance scales beyond the reach of the LHC. In particular, if some deviations
from SM expectations are observed, it will not be possible to find
out whether they come from LH currents, RH currents or both.
\item
On the other hand future precise measurements of several $\Delta F=1$ observables
and in particular correlations between them can distinguish between LH and
RH currents, but the maximal resolution consistent with perturbativity
strongly depends on whether only LH or only RH or both LH and
RH flavour changing $Z^\prime$ couplings to quarks are present in nature.
\item
If only LH or RH couplings are present in nature we can in principle reach
scales of $200\, {\rm TeV}$ and $15\, {\rm TeV}$ for $K$ and $B_{s,d}$, respectively.
These numbers depend on the room left for NP in $\Delta F=2$ observables, which have an important impact on the resolution available in these NP scenarios.
\item
Smaller distance scales can only be resolved if both RH and LH couplings are present in order to cancel the NP effects on the $\Delta F=2$ observables.
Moreover, to achieve the necessary tuning, the couplings should differ considerably from each other. This large hierarchy of couplings is dictated primarily by the ratio of hadronic matrix elements of
LR $\Delta F=2$ operators and those for LL and RR operators and by
the room left for NP in $\Delta F=2$ processes. We find that in this case
scales as high as $2000\, {\rm TeV}$ and $160\, {\rm TeV}$ for the $K$ and $B_{s,d}$ systems, respectively, could in principle be resolved.
\item
A study of tree-level (pseudo-)scalar exchanges shows that $B_{s,d}\to \mu^+\mu^-$ can probe scales close to $1000\, {\rm TeV}$, both for scenarios with purely LH or RH scalar couplings to quarks and for scenarios allowing for both
LH and RH couplings.
In the limit of a degenerate scalar and pseudoscalar, NP effects in $\Delta F=2$ observables can cancel even without imposing a tuning on the couplings.
\item
We have discussed models with several gauge bosons. Also in
this case the basic strategy for being able to explore very high energy scales is to break the stringent correlation between $\Delta F=1$ and
$\Delta F=2$ processes and to suppress NP contributions to the latter without suppressing NP contributions to rare decays. The presence of a second heavy neutral gauge boson allows us to achieve the goal with only LH or RH currents
by applying an appropriate tuning.
\item
While the highest achievable resolution in the presence of several gauge bosons is comparable to the case of a single gauge boson because of the perturbativity bound, the correlations between $\Delta F=1$ observables could
differ from the ones presented here. This would be in particular the case if
LH and RH couplings of these bosons were of similar size. A detailed study
of such scenarios would require the formulation of concrete models.
\item
If FCNCs only occur at one loop level the highest energy scales that
can be resolved for maximal couplings are typically reduced relative to the case of tree-level FCNCs by a factor
of at least $3$ and $6$ for $\Delta F=1$ and $\Delta F=2$ processes, respectively.
\item
We have also presented
a simple idea for a rough indirect determination of $M_{Z^\prime}$ by means of the next linear $e^+e^-$ or $\mu^+\mu^-$ collider and precision flavour data. It uses
the fact that the LR operators present in $\Delta F=2$ transitions have
large anomalous dimensions so that $M_{Z^\prime}$ can be determined through
renormalisation group effects provided it is well above the LHC scales.
\end{itemize}
In summary we have demonstrated that NP with a particular pattern of dynamics
could be investigated through rare $K$ and $B_{s,d}$ decays even if
the scale of this NP would be close to the Zeptouniverse. As
expected from other studies it is in principle easier to reach the Zeptouniverse
with the help of rare $K$ decays than $B_{s,d}$ decays. However, this assumes
the same maximal couplings in these three systems, which need not be the
case. Moreover, in the presence of tree-level pseudoscalar exchanges very
short distance scales can be probed by $B_{s,d}\to\mu^+\mu^-$ decays.
We should also emphasise that although our main goal was to reach the
highest energy scales with the help of rare decays, it will of course be exciting
to explore any scale of NP above the LHC scales in this decade. Moreover, we
still hope that high energy proton-proton collisions at the LHC will exhibit at least some footprints of new particles and forces. This would greatly facilitate flavour analyses such as the one presented here.
\section*{Acknowledgements}
This research was done and financed in the context of the ERC Advanced Grant project ``FLAVOUR''(267104) and was partially
supported by the DFG cluster
of excellence ``Origin and Structure of the Universe''.
Physics educators have investigated student learning in introductory physics over the last three decades \cite{McDermott:1999tz,Meltzer:2012eg}. These investigations were driven in part by data collected from research-based assessment instruments such as the Force Concept Inventory \cite{1992PhTea..30..141H}. Such assessments have been instrumental in helping to identify common student difficulties. Furthermore, results from these instruments have supported curricular and pedagogical transformations in introductory physics and have provided evidence of the success of these transformations. We lack such research-based assessments for our upper-level classical mechanics courses. Research into student learning in middle- and upper-division physics has begun \cite{Meltzer:2012eg, ambrose2004investigating, christensen2009student, Caballero:2012wr,2012arXiv1207.1283W, pepper1289our,Singh:2006tv,Smith:2010wx}, but is far less mature than similar research in introductory physics \cite{McDermott:1999tz, Meltzer:2012eg}.
At CU Boulder (CU), we are transforming the first half of our two-semester classical mechanics sequence (CM 1), including developing consensus learning goals, investigating student learning, and creating student-centric instructional materials \cite{Pollock:2012uy,CMweb}. In order to assess the transformed course and to help further investigate student difficulties at this level, we have begun to develop an instrument that probes student learning. Here, we present the development of the Colorado Classical Mechanics/Math Methods Instrument (CCMI) and initial investigations into its validity and reliability.
\vspace*{-14pt}
\section{\label{sec:ccmi}The CCMI}
The CCMI is a 9-question open-ended test that focuses on topics taught in the first half of a two-semester classical mechanics sequence. This first course concludes before a discussion of the calculus of variations; hence, the Lagrangian and Hamiltonian formulations of mechanics are absent from the test. The CCMI focuses on core skills and commonly encountered problems. Students solve a variety of problems such as: determining the general solution to common differential equations (e.g., $\ddot{x}=-A^2x$); finding equilibria and sketching net forces on a potential energy contour map; and decomposing vectors in Cartesian and plane-polar coordinates. We have designed the CCMI to be given in a standard 50-minute lecture period. To accompany the longer post-test, we have developed a short (15-20 minute) pre-test that contains a subset of three problems gleaned from the post-test. Figure \ref{fig:mass} shows a sample CCMI question that appears on both the pre- and the post-test.
\begin{figure*}
\centering
\begin{minipage}{2.1\columnwidth}
\begin{mdframed}
{\bf Learning goals, {\it students should be able to}}:\\
$\cdot$ choose appropriate area and volume elements to integrate over a given shape.\\
$\cdot$ translate the physical situation into an appropriate integral to calculate the gravitational force at a particular point away from some simple mass distribution.
\end{mdframed}
\vspace*{5pt}
\begin{minipage}{0.72\linewidth}
{\bf Q9} Consider an infinitely thin cylindrical shell with non-uniform mass per unit area of $\sigma (\phi, z)$. The shell has height $h$ and radius $a$, and is not enclosed at the top or bottom.
\medskip
(a) What is the area, $dA$, of the small dark gray patch of the shell which has height $dz$ and subtends an angle $d\phi$ as shown to the right?\\
(b) Write down (BUT DO NOT EVALUATE) an integral that would give you the MASS of the entire shell. Include the limits of integration.
\end{minipage}
\begin{minipage}{0.25\linewidth}
\flushright
\includegraphics[clip,trim=70mm 40mm 70mm 30mm,width=0.75\linewidth]{cylinder.pdf}
\end{minipage}
\end{minipage}
\caption{Certain topic-scale learning goals are evaluated by the CCMI questions. The sample question appears on the CCMI pre- and post-tests; vector calculus is a prerequisite for CM 1. This question constitutes 9\% of the total post-test score.}\label{fig:mass}
\end{figure*}
\vspace*{-14pt}
\section{\label{sec:dev}Instrument Development}
{\bf Writing Questions}: As the initial step towards transforming CM 1, a series of faculty meetings were held to develop consensus course-scale learning goals and to articulate the topical content coverage of the course \cite{pepper2012facilitating}. After the development of course-scale learning goals, a set of specific, topical learning goals were drafted. To develop these learning goals, we utilized field notes collected during lectures, weekly homework help sessions, and faculty meetings. A further set of faculty meetings were held in which the topical learning goals were agreed upon. In these meetings, several topical learning goals were selected to be assessed on the CCMI. These course-scale and topical-scale learning goals are available online \cite{CMweb}. Based on these topical learning goals determined by the faculty, sixteen open-ended questions were initially written. Some of these questions were adapted from exam or clicker questions written by CU faculty in previous semesters. All questions were informed by observed student difficulties \cite{Pollock:2012uy}.
{\bf Expert Validation}: Initially, two CU faculty members who had recently taught the course reviewed the sixteen CCMI questions for clarity and content. In working to establish the validity of the CCMI, these faculty aimed to answer two questions: (1) Does each question address the concepts and skills that my students should master? (2) Is each question written in a clear and concise manner? Over the course of the next two years, faculty meetings were held to discuss CCMI questions and students' responses to each question were scrutinized. Additional individual feedback was solicited from faculty at CU and elsewhere. Questions were iteratively improved between each administration of the instrument, and some questions were cut altogether. Over twenty faculty members provided input leading to a 9-question version of the CCMI that most students are able to complete in a 50-minute period. The latest version of the CCMI (v3.0) is available online \cite{CMweb}.
{\bf Student Validation}: In parallel to incorporating feedback from physics faculty, several rounds of student interviews were conducted to ensure the instrument probed persistent challenges and to determine if students were interpreting questions as we intended. After solving the full set of problems on the CCMI in a think-aloud setting, interviewees were asked to discuss their reasoning for specific answers in more detail. This process provided additional information about which questions should remain and which should be removed from each version of the CCMI. In addition, student interviews were critical to establishing concise wording and a clear focus for each question. Muddled responses or clarifying questions from interviewees were strong indicators that the wording or focus of a particular question needed to be reconsidered. After each round of interviews, an updated version of the CCMI was constructed. In total, fourteen physics students at CU were interviewed while solving the CCMI.
\vspace*{-14pt}
\section{\label{sec:grading}Grading Rubric Construction}
Once questions were finalized, we developed a detailed grading rubric for the CCMI that was informed by student work. For example, common student responses to question Q9(a) (Figure \ref{fig:mass}) included: (1) $a\,d\phi\,dz$ , (2) $r\,d\phi\,dz$, (3) $d\phi\,dz$, and (4) $r\,dr\,d\phi\,dz$. Full credit is awarded to the first response. Partial credit is earned for the second response. Here, students neglected to use the length-scale given in the problem statement and substituted the radial coordinate. However, interviews suggest that students treated $r$ as a constant, and were likely to construct an appropriate two-dimensional integral. No credit is awarded for the final two responses. The third has incorrect units and the last is a volume (not area) element. In this way, the rubric emphasizes concept mastery; partial credit is only awarded for minor mistakes.
For other questions, student responses are more varied; there were over 40 unique responses to the latest version of Q9(b) (Figure \ref{fig:mass}). The development of a common rubric for such questions can quickly become overly complex. For example, the grading rubric developed for the Colorado Upper-division Electrostatics diagnostic (CUE) \cite{Chasteen:2012fl} requires formal training. By emphasizing concept mastery, we avoided constructing a complicated grading rubric. For this question, the solution was decomposed into its constituent parts (i.e., a double integral over $d\phi$ and $dz$, correct limits on each integral, the inclusion of the mass density, an appropriate kernel, etc.). We were then able to develop a simple rubric in which constituents were graded.
With a mastery-focused rubric, we aim to decouple two important purposes for these types of assessments: (1) evaluating performance and (2) gaining insight into student learning. We are developing a complementary coding scheme that captures student difficulties on each question. Coding student work on the CCMI has helped identify common and persistent difficulties in key areas of classical mechanics \cite{Caballero:2012wr}. By separating the two roles, educators and researchers can choose the lens through which they want to view students' responses to CCMI questions based on their own interests and time. Results from this coding scheme will be the subject of a future publication.
\vspace*{-14pt}
\section{\label{sec:ruler}Results \& Test Statistics}
The most recent version of the CCMI was administered at CU for the last three semesters (N=167). One PER faculty member (SJP) and two traditional research faculty taught CM 1 using a variety of pedagogical techniques, which were developed as part of the larger course transformation project \cite{Pollock:2012uy,CMweb}. Table \ref{tab:cu} briefly summarizes these pedagogies along with the number of students who took the CCMI post-test and the mean score earned in each class.
\begin{table}
\begin{tabular}{cllcc}\hline\hline
{\bf Sem.} & {\bf Faculty} & {\bf Pedagogy} & {\bf N} & {\bf CCMI (\%)} \\\hline
1 & PER & CQ, T, GP, L & 62 & 59.7 $\pm$ 2.8 \\
2 & TRAD & CQ, T, GP, L & 41 & 46.1 $\pm$ 3.0 \\
3 & TRAD & CQ, L & 67 & 51.0 $\pm$ 2.5 \\\hline
\end{tabular}
\caption{Courses taught at CU all involved different forms of engagement including Clickers (CQ), in-class Tutorials (T), Group Problem sessions (GP), and Lecture (L).}\label{tab:cu}
\end{table}
Ultimately, we aim to develop an instrument that can help evaluate instructional strategies and curricular transformations like those presented above. To that end, we must develop a valid, reliable, and internally consistent instrument. Score distributions for each class were normal (or nearly so), the variances of each distribution were similar, and all course instructors made use of some transformed materials, which justifies pooling the data to consider these issues. For the pooled data, the mean score over all three semesters was 52.9 $\pm$ 1.6 \%. The CCMI is a challenging test, and the grading rubric is strict. However, CM 1 students earned a wide range of scores (Fig.\ \ref{fig:dist}); some outperformed a sample of first-year graduate students at CU (avg. 74.5 $\pm$ 3.4 \%, N=5).
Looking at the three courses individually, scores in semesters 1 and 2 were normally distributed, while those in semester 3 were slightly non-normal ($A^2=0.76$, $p<0.05$) \cite{anderson1954test}. A Kruskal-Wallis test detected a group difference ($H=10.3$, $p<0.05$) \cite{kruskal1952use}, and a series of pairwise Mann-Whitney tests \cite{mann1947test} with family-wise error control ($\alpha = 0.017$) \cite{dunnett1955multiple} demonstrated that students taught by the PER faculty member (who was actively developing CM 1 course materials) outperformed students in both other courses. Additional testing in transformed and non-transformed courses at CU and elsewhere is needed to form clear conclusions about the effect of pedagogy and instructors on CCMI post-test scores. For example, in semester 2, which is the off-semester for CM 1, we found that students earned pre-test scores that were 9.5 and 6.7 points lower than students earned during on-semesters 1 and 3, respectively.
\begin{figure}
\includegraphics[width=0.95\linewidth, clip, trim=0mm 0mm 10mm 10mm]{ccmi_dist.pdf}
\caption{Distribution of CCMI post-test scores (N=167).}\label{fig:dist}
\end{figure}
{\bf Criterion Validity}: We have established, at least at CU, the face and content validity of the CCMI (Sec.\ \ref{sec:dev}), but establishing the instrument's criterion validity is equally important. Students' exams are the most similar measure to the CCMI. Like exams, the CCMI is completed individually in timed and controlled environments. But, unlike exams, it does not affect students' grades. Each class took three exams: two regular hour exams and a final. The averages of those three exams were z-scored to allow comparisons of different instructors. CCMI post-test scores were strongly correlated with these z-scored exam averages ($r=0.71$, $p<0.05$); a linear model can thus account for $50$\% of the variance in exam scores associated with CCMI scores. Similarly high correlations were observed on the CUE \cite{Chasteen:2012fl}.
\begin{figure}
\includegraphics[width=0.95\linewidth, clip, trim=0mm 0mm 10mm 10mm]{itemPerformance.pdf}
\caption{Performance on each CCMI item (N=167).}\label{fig:item}
\end{figure}
{\bf Item-test Correlation}: In addition to overall CCMI scores connecting well to external measures, it is desirable for individual items to connect well to the rest of the test. Individual items on the CCMI pose different challenges to students, and performance varies from 30\% to 65\% (Fig.\ \ref{fig:item}). Even so, we find that performance on individual items generally correlates well with the test overall. The item-test correlation for each question varies between 0.45 and 0.53 with the exception of question 2 ($r=0.31$). While there is no widely accepted cutoff for item-test correlation, a common criterion is $r\geq0.2$ \cite{Ding:2006kq}, which all CCMI items achieve. The low correlation of question 2 is likely due to generally lower performance on this item. Question 2 covers Taylor series, a challenging topic for our sophomore students \cite{Caballero:2012wr}.
{\bf Internal Consistency}: Items must also give consistent results with one another. Cronbach's alpha measures the degree to which test items measure related constructs, that is, the degree of internal consistency of the test. Cronbach's alpha for the CCMI is 0.77, which is just below the low-stakes testing cutoff of 0.80. Cronbach's alpha depends strongly on the sample used to compute it \cite{wallace2010concept} and, thus, future data will provide a better estimate of the true alpha. Moreover, the CCMI does not measure a single construct, which violates the underlying assumptions of Cronbach's alpha. Our computed alpha therefore underestimates the true value, and the internal consistency of the CCMI is higher than the reported figure suggests \cite{graham2006congeneric}.
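For readers who wish to apply the same statistic to their own course data, Cronbach's alpha is straightforward to compute from an item-score matrix. The sketch below uses the standard formula $\alpha = \frac{K}{K-1}\left(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{\mathrm{total}}}\right)$ on a small hypothetical score matrix (not CCMI data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of students' total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: two items that track each other perfectly give alpha = 1
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # -> 1.0
```

With real data, rows would be students and columns the nine CCMI items.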
\vspace*{-14pt}
\section{\label{sec:closing}Concluding remarks}
While the CCMI is still under development and refinement, preliminary results suggest it is a valid and reliable instrument for investigating student learning in middle-division classical mechanics / math methods at CU. The mastery-focused grading philosophy provides a relatively simple rubric, which requires no formal training. The complementary coding scheme helps separate the dual roles of the assessment; it is both a tool for evaluation and a window into student thinking. At CU, student performance on individual items on the CCMI has helped set the research agenda for investigations into student thinking around Taylor series.
While presenting the results from the CCMI, we avoided a discussion of the data collected from eight partner institutions. This is first because most partner institutions have low enrollment in their classical mechanics courses; between 3 and 14 students took the post-test at these schools. Second, CM 1 is a combined classical mechanics / math methods course, which is an uncommon combination. Most of these partner institutions offer a single semester course that covers through Hamiltonian dynamics or do not explicitly teach mathematical methods in the first half of their two semester sequence. The CCMI represents the learning goals emphasized in our combined classical mechanics / math methods course. Therefore, detailed discussions with partner faculty are needed to help frame the results from their institutions. Future research will address these concerns as we work to assist these instructors in evaluating their classical mechanics courses.
\vspace*{-14pt}
\begin{theacknowledgments}
We gratefully acknowledge the generous contributions of CU faculty, especially A.D. Marino, J.L. Bohn, K.P. McElroy, and collaborating faculty elsewhere. Particular thanks to the members of PER@C, including former member R.E. Pepper who designed the original CCMI. We also greatly appreciate the help of our student participants. This work was supported by University of Colorado's Science Education Initiative.
\end{theacknowledgments}
\bibliographystyle{aipproc}
\section{Semi-linear Poisson Mediated Flocking}
\label{Sec:BVP}
\subsection{Conversion to a system of PDEs}
We think of the function $\psi$ as a Green's function, i.e.,
as the impulse response of a linear differential equation,
represented by the operator $\mathcal{L}_x$,
such that
\begin{equation}
\mathcal{L}_x y(t,x) = g(t,x)
\end{equation}
implies
\begin{equation}
y(t,x) = \int_{\mathbb{R}^d} \psi(x,s) g(t,s) ds
\label{eq:greens}
\end{equation}
so that the integral operator $\mathcal L_\psi$ with kernel $\psi$ satisfies
\begin{equation}
\mathcal L^{-1}_\psi = \mathcal L_x
\end{equation}
for all $t\geq 0 $, where
\begin{align}
\mathcal{L}_x\psi(x,s) = \delta(x-s),\ x,s\in\mathbb{R}^d.
\end{align}
Then the following proposition holds:
\begin{proposition}
Suppose $\psi$ is a Green's function
with respect to a linear differential operator $\mathcal{L}_x$.
Then system (\ref{eq:euler})
is equivalent to the augmented system
of ($2d+2$) partial differential equations:
\begin{equation}
\begin{cases}
\partial_t{\rho} + \nabla_x \cdot (\rho u) = 0 \\
\mathcal{L}_x y = \begin{bmatrix} \rho u & \rho \end{bmatrix}^T\\
\partial_t{(\rho u)} + \nabla_x \cdot (\rho u \otimes u) =
\sum_{i=1}^{d} (\rho y_i - \rho u_i y_{d+1}) \cdot \hat e_i
\end{cases}
\label{eq:pdes}
\end{equation}
where $\cbra{\hat e_i}_{i=1}^d$ is the standard basis in $\mathbb{R}^d$.
\end{proposition}
\subsection{The Boundary Value Problem}
Due to the time-dependence of the center of mass (\ref{eq:center_of_mass}),
$x_i$, $i=1,\ldots,N$, will escape any fixed, open, and bounded domain $\Omega\subset\mathbb{R}^d$,
except in the trivial case where $v_c(0)=0$.
Because of the flocking behavior (Definition \ref{def:flocking}),
the position fluctuations with respect to the center of mass are uniformly bounded, i.e.,
\begin{equation}
\sup_{0\leq t \leq\infty} \sum_{i=1}^N \norm{x_i(t)-x_c(t)}^2 <\infty
\end{equation}
and, therefore we can define a Boundary Value Problem (BVP) in the moving domain
\begin{equation}
\Omega_c(t)=\cbra{x+x_c(t):x\in\Omega}
\end{equation}
where it is assumed that $0_d\in\Omega$, $0_d$ being the origin of $\mathbb{R}^d$.
We notice that solving system (\ref{eq:pdes})
for $(x,u)$, $x\in\Omega_c$ is equivalent to solving it
for the fluctuation variables $(\hat x,\hat u)$ (\ref{eq:fluctuations}),
with $\hat x\in\Omega$.
We note that the boundedness of the domain
has an effect on both the Green's function
and the flocking behavior of the system of interacting particles,
which should satisfy
\begin{equation}
x_i(t)-x_c(t) \in\Omega,\ i=1,\ldots,N,\ t\geq 0 .
\end{equation}
\section{One-Dimensional Case}
\label{Sec:Computations}
The BVP of the augmented system of PDEs (\ref{eq:pdes}) for $d=1$,
on $\Omega = [-\frac{L}{2},\frac{L}{2}]$ reads as:
\begin{equation}
\begin{cases}
\partial_t{\rho} + \partial_{\hat x}(\rho u) = 0 \\
\mathcal{L}_{\hat x} y = \begin{bmatrix} \rho u & \rho \end{bmatrix}^T \\
\partial_t(\rho u) + \partial_{\hat{x}} (\rho u^2)=
\rho y_1 - \rho u y_2
\end{cases}
\label{eq:pdes1}
\end{equation}
with homogeneous Dirichlet boundary conditions and initial conditions
\begin{equation}
\rho(0, \hat{x}) = \rho_0(\hat x), \quad
u(0, \hat{x}) = u_0(\hat x)
\label{eq:bvp_pdes}
\end{equation}
which are smooth functions.
We select the linear partial differential operator
\begin{equation}\label{op}
\mathcal{L}_x= -\frac{1}{2k}(\frac{\partial^2}{\partial x^2} - \lambda^2)
\end{equation}
with $k\neq 0$ and $\lambda \neq 0$, for which the associated parametric family of Green's functions with homogeneous Dirichlet boundary conditions on $[0,L]$ reads as:
\begin{equation}
\hat\psi(x,s) =
\begin{cases}
c_1(s) ( e^{\lambda x} - e^{-\lambda x}) & s\geq x \\
c_2(s) ( e^{\lambda(x - 2L)} - e^{-\lambda x} ) & s<x
\end{cases}
\label{eq:psi}
\end{equation}
\begin{equation}
\begin{aligned}
c_1(s) &= \frac {k}{\lambda(e^{-2L\lambda}-1)} (e^{\lambda(s-2L)} - e^{-\lambda s}) \\
c_2(s) &= \frac {k}{\lambda(e^{-2L\lambda}-1)}(e^{\lambda s} - e^{-\lambda s})
\end{aligned}
\label{eq:psi_coef}
\end{equation}
The solution over any interval of length $L$ can be obtained by a simple translation of coordinates.
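The Green's function of $\mathcal{L}_x$ on $[0,L]$ with homogeneous Dirichlet conditions also admits the compact symmetric closed form $\hat\psi(x,s) = \frac{2k}{\lambda\sinh(\lambda L)}\,\sinh(\lambda \min(x,s))\,\sinh(\lambda(L-\max(x,s)))$. As a sanity check, the sketch below (illustrative parameters $k=4$, $\lambda=1$, $L=1$) verifies numerically that $y(x)=\int_0^L \hat\psi(x,s)\,g(s)\,ds$ solves $\mathcal{L}_x y = g$ for a right-hand side with a known exact solution:

```python
import numpy as np

k, lam, L = 4.0, 1.0, 1.0   # illustrative parameters

def psi_hat(x, s):
    # Symmetric closed form of the Green's function of
    # L_x = -(1/(2k)) (d^2/dx^2 - lam^2), with psi_hat(0,s) = psi_hat(L,s) = 0
    lo, hi = np.minimum(x, s), np.maximum(x, s)
    return 2 * k * np.sinh(lam * lo) * np.sinh(lam * (L - hi)) / (lam * np.sinh(lam * L))

# For g(s) = sin(pi s/L) the BVP has the exact solution
#   y(x) = 2k sin(pi x/L) / ((pi/L)^2 + lam^2)
x = np.linspace(0.0, L, 201)
s = np.linspace(0.0, L, 4001)
integrand = psi_hat(x[:, None], s[None, :]) * np.sin(np.pi * s / L)[None, :]
y_quad = ((integrand[:, :-1] + integrand[:, 1:]) / 2 * np.diff(s)).sum(axis=1)
y_exact = 2 * k * np.sin(np.pi * x / L) / ((np.pi / L) ** 2 + lam ** 2)
err = np.abs(y_quad - y_exact).max()
print(err)
```

The error is at the level of the trapezoid rule, and $y$ vanishes at both endpoints, as the Dirichlet conditions require.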
The profile of the Green's function $\hat \psi$ and the effect of the bounded domain on it are
illustrated in Fig. \ref{fig:psi}, where,
for different fixed values of $x$,
$\hat\psi(x,s)$ is compared to the function
\begin{equation}
\psi(x,s) = \frac{k}{\lambda}e^{-\lambda\|x-s\|}
\end{equation}
which is the Green's function corresponding to $\mathcal{L}_x$ in an infinite domain.
We note that the parameters $(k,\lambda,L)$ generate a family of interaction functions
(see also \citep{mavridis2020learning})
that can simulate widely used interaction functions as the one found in the original Cucker-Smale model
\citep{cucker2007emergent}:
\begin{equation}
G(x,s) = \frac{K}{(1+\|x-s\|^2)^{\gamma}}
\end{equation}
for given parameters $(K,\gamma)$.
\begin{figure}[ht!]
\centering
\includegraphics[trim=5 5 5 40, clip,width=.4\textwidth]{KernelExp.eps}
\caption{Illustration of $\hat\psi(x,\cdot)$ (\ref{eq:psi}) for different values of $x$, and for
$\lambda=1$, $k=4$ on $[-\pi, \pi]$.
The function $ \psi(x,s)= \frac{k}{\lambda}e^{-\lambda\|x-s\|}$, which is the
Green's function for $\mathcal{L}_x$ in infinite domain, is depicted in the dashed-dotted lines.}
\label{fig:psi}
\end{figure}
\subsection{Asymptotic Flocking}
Next we provide sufficient conditions such that the solution $\cbra{(x_i(t),v_i(t))}_{i=1}^N$, $t\geq 0$,
of system (\ref{eq:cs}) with interaction function $\hat\psi$
as defined in (\ref{eq:psi}), (\ref{eq:psi_coef}), satisfy the
flocking conditions in Definition \ref{def:flocking}, with
$\hat x_i(t)\in\Omega$, for all $t\geq 0$.
Similar to Section \ref{sSec:flocking},
we notice that
\begin{equation}
\oder{|\hat x|}{t} \leq |\hat v|
\label{eq:bound_x1}
\end{equation}
From (\ref{eq:xixj_bound}) and the fact that
$\|\hat x_i\|\leq \max_{1\leq i,j\leq N}\|\hat x_i-\hat x_j\|$, we get
\begin{equation}
|\hat x|\leq \frac{\hat x_M}{2} \implies \|\hat x_i\| \leq \hat x_M,\ i=1,\ldots,N
\end{equation}
Therefore, we are interested in showing asymptotic flocking with
$|\hat x(t)|\in[0,\frac{\hat x_M}{2}]$, for all $t\geq 0$.
For any given initial conditions $|\hat x_i(0)|$,
there is a large enough value of $L$ such that there exists an $\hat x_M\in[0,\frac L 2)$
for which
\begin{equation}
\hat x_M > 2|\hat x(0)|
\end{equation}
From (\ref{eq:psi}), (\ref{eq:psi_coef}) it follows that
for $|\hat x| \leq \frac{\hat x_M}{2}$,
\begin{equation}
\begin{aligned}
\hat\psi(\hat x_j,\hat x_i) &\geq \hat\psi(-\hat x_M, \|\hat x_j-\hat x_i\|) \\
&\geq \hat\psi(-\hat x_M, 2|\hat x|)
\end{aligned}
\end{equation}
which implies that
\begin{equation}
\begin{aligned}
\oder{|\hat v|^2}{t} &=-\frac 1 N \sum_{1\leq i,j\leq N} \hat\psi(\hat x_j, \hat x_i)\|\hat v_j-\hat v_i\|^2 \\
&\leq -\frac 2 N \hat\psi(-\hat x_M, 2|\hat x|) |\hat v|^2
\end{aligned}
\end{equation}
and
\begin{equation}
\oder{|\hat v|}{t} \leq -\frac 2 N \hat\psi(-\hat x_M, 2|\hat x|) |\hat v| :=-\phi(|\hat x|)|\hat v|
\label{eq:bound_v1}
\end{equation}
Next we notice that the Lyapunov function
\begin{equation}
V(|\hat x|,|\hat v|):=|\hat v|+ \int_{\alpha}^{|\hat x|} \phi(s) ds,\ \alpha\geq 0
\end{equation}
is non-increasing along the solutions of $(|\hat x(t)|,|\hat v(t)|)$
of the system of dissipative differential inequalities
(\ref{eq:bound_x1}) and (\ref{eq:bound_v1}),
for $|\hat x(t)| \leq \frac{\hat x_M}{2}$, since
\begin{equation}
\begin{aligned}
\oder{}{t}V(|\hat x|,|\hat v|) & = \oder{|\hat v|}{t} + \phi(|\hat x|)\oder{|\hat x|}{t} \\
&\leq \phi(|\hat x|)
\pbra{ -|\hat v| + \oder{|\hat x|}{t}} \\
&\leq 0
\end{aligned}
\end{equation}
which implies that
\begin{equation}
|\hat v(t)|+ \int_{|\hat x(0)|}^{|\hat x(t)|} \phi(s) ds
\leq |\hat v(0)|,\ |\hat x(t)| \leq \frac{\hat x_M}{2}
\label{eq:contradiction1}
\end{equation}
Choosing the initial velocity $|\hat v(0)|$
such that $|\hat v(0)|<\int_{|\hat x(0)|}^{\hat x_M/2} \phi(s)ds $,
and, since $\phi$ is non-negative for $|\hat x(t)| \leq \frac{\hat x_M}{2}$,
there exists a $\bar x \in [|\hat x(0)|,\frac{\hat x_M}{2}]$ for which
\begin{equation}
|\hat v(0)|=\int_{|\hat x(0)|}^{\bar x} \phi(s)ds
\end{equation}
Suppose there exists a $t^*\geq 0$, such that $\hat x^*:=|\hat x(t^*)|\in (\bar x,\frac{\hat x_M}{2}]$.
Then
\begin{equation}
\int_{|\hat x(0)|}^{\hat x^*} \phi(s)ds > |\hat v(0)|
\end{equation}
which contradicts (\ref{eq:contradiction1}).
Therefore
\begin{equation}
|\hat x(t)| \leq \bar x \leq \frac{\hat x_M}{2},\ t\geq 0
\end{equation}
and from (\ref{eq:bound_v}) and the Grönwall-Bellman inequality
\begin{equation}
|\hat v(t)| \leq |\hat v(0)|e^{-\phi(\bar x)t},\ t\geq 0.
\end{equation}
\subsection{Conservation of Mass and Momentum}
\begin{lemma} The operator $\mathcal L_x$ in (\ref{op}), acting on $C^{\infty}_{\mathbb R, C}(\Omega)$, the space
of compactly supported test functions, is self-adjoint
and invertible, and therefore has a self-adjoint inverse
$\mathcal L_x^{-1}$ on $C^{\infty}_{\mathbb R, C}(\Omega)$.
\end{lemma}
\begin{proof} Self-adjointness of the inverse follows immediately from self-adjointness of $\mathcal L_x$ and the existence of the inverse \citep{taylor2010partial}. It is clear that $\mathcal L_x$ has an inverse since the Green's function is nontrivial.
We shall now show that the operator $\mathcal L_x$ is self-adjoint on $C^{\infty}_{\mathbb R, C}(\Omega)$.
Consider two functions $u,w \in C^{\infty}_{\mathbb R, C}(\Omega)$, $u\neq w$,
and the associated
$f_u, f_w \in C^{\infty}_{\mathbb R, C}(\Omega)$, $f_u := \mathcal L_x u$, $f_w := \mathcal L_x w$.
Let $\Omega := [-\frac{L}{2},\frac{L}{2}]$.
We have
\begin{equation}
\int_\Omega (w \mathcal L_x u - u \mathcal L_x w)dx =
-\frac{1}{2k}\int_\Omega (w \partial_x^2 u - u \partial_x^2 w)dx.
\end{equation}
since the semi-linear term drops out. Using Green's second identity, and the compact support of $u,w$, we have that
\begin{equation}
\int_\Omega (w \partial_x^2 u - u \partial_x^2 w)dx=
\int_{\partial \Omega} (w \partial_{\mathbf n}u - u\partial_{\mathbf n}w)dx=0.
\end{equation}
Thus, $\mathcal L_x$ is self-adjoint and has a self-adjoint inverse, i.e.
\begin{equation}\label{conserved}
\int_\Omega (f_w \mathcal L_x^{-1} f_u - f_u \mathcal L_x^{-1} f_w) dx=
\int_{\Omega } (f_w u - f_u w)dx = 0.
\end{equation}
\end{proof}
\begin{proposition}
If $y$ is compactly supported and $\psi$ is as given, then mass
and momentum are conserved, i.e.
\begin{equation}\label{cons2}
\frac{d}{dt}\int_\Omega \begin{bmatrix} \rho & \rho u \end{bmatrix}^T d\hat x =
\int_\Omega \begin{bmatrix} 0 & \rho y_1 - \rho u y_2 \end{bmatrix}^T d \hat x = 0.
\end{equation}
\end{proposition}
\begin{proof}We obtain (\ref{cons2}) by integrating the conservation
laws in (\ref{eq:pdes1}) over the entire domain and applying the Leibniz rule.
The conclusion follows directly from the self
adjointness of the inverse in (\ref{conserved}). The proposition holds for any self-adjoint alignment operator.
\end{proof}
\subsection{Computational Methods}
For compactness, we re-write the PDEs (\ref{eq:pdes1}) as
\begin{equation}
\begin{cases}
\partial_t U + \partial_{\hat x} F(U) = S(U, Y) \\
\mathcal L_{\hat x} Y = U \\
\end{cases}
\label{eq:1aux}
\end{equation}
with $U = \begin{bmatrix}\rho & \rho u \end{bmatrix}^T$, $Y = \begin{bmatrix}y_2 & y_1 \end{bmatrix}^T$, $F = \begin{bmatrix} \rho u & \rho u^2 \end{bmatrix}^T$,
and $S = \begin{bmatrix}0 & \rho y_1 - \rho u y_2 \end{bmatrix}^T$. Recall the transformation $m = \rho u$.
From this, the flux Jacobian is given by
\begin{equation}
\mathbf D_U F :=\begin{bmatrix}
0 & 1 \\ -u^2 & 2u
\end{bmatrix}
\end{equation}
which is not diagonalizable, and thus the system is only weakly hyperbolic. Its only eigenvalue is $u$, with algebraic multiplicity two.
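The defect can be confirmed symbolically; a quick check with sympy shows a single eigenvalue of algebraic multiplicity two but only one independent eigenvector:

```python
import sympy as sp

u = sp.symbols('u', real=True, nonzero=True)
DF = sp.Matrix([[0, 1], [-u**2, 2*u]])

eigenvalues = DF.eigenvals()    # one eigenvalue u with algebraic multiplicity 2
eigenvectors = DF.eigenvects()  # a single independent eigenvector -> defective matrix
print(eigenvalues, len(eigenvectors[0][2]))
```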
With these notations established, we now detail the numerical solution of the PDEs.
\subsubsection{Hyperbolic Solver.}
To solve the hyperbolic system, we apply the finite volume method \citep{leveque_2002}. To begin, we define the sequence of points $\{\hat x_0, \ldots, \hat x_i, \ldots, \hat x_N \}$
which are the centers of the cells
$I_i := [\hat x_{i-\frac{1}{2}}, \hat x_{i+\frac{1}{2}})$. Then, we average the PDE
over these cells, which gives
\begin{equation}
\frac{1}{\lambda(I_i)}\frac{d}{dt}\int_{I_i} U d\hat x =
-\frac{1}{\lambda(I_i)}\int_{I_i} \partial_{\hat x} F d\hat x +
\frac{1}{\lambda(I_i)}\int_{I_i} S d\hat x
\end{equation}
where $\lambda(\cdot)$ denotes the length of an interval. Suppose these lengths are
identical, so $\Delta \hat x :=\lambda(I_i)$ for all $i$. Then, using the divergence
theorem, and replacing the integrals of $U,F,S$ with their cell averages,
i.e., their midpoint values $\bar U, \bar F, \bar S$, we obtain
\begin{equation}
\frac{d}{dt} \bar U_i =
-\frac{1}{\Delta \hat x}(\bar F_{i+\frac{1}{2}} - \bar F_{i-\frac{1}{2}}) +
\bar S_i
\end{equation}
where $\bar U_i := \bar U(\hat x_i)$, $\bar F_i := \bar F(\hat x_i)$, and $\bar S_i := \bar S(\hat x_i)$.
In this work, we employ the second-order strong stability preserving Runge-Kutta scheme \citep{KURGANOV2000241} for time integration. For the fluxes, we assume piecewise linearity and use the Kurganov-Tadmor flux \citep{KURGANOV2000241}. The fluxes are given by
\begin{equation}
\begin{split}
\bar F_{i+\frac{1}{2}}&:=\frac{1}{2}[F^*_{i} + F^*_{i+1} - \max\{|u^*_i|, |u^*_{i+1}|\}(U^*_{i+1}-U^*_{i})]\\
U^*_{i+1} &:= U_{i+1} - \frac{\Delta \hat x}{2}minmod(\frac{U_{i+2} - U_{i+1}}{\Delta \hat x},\frac{U_{i+1} - U_{i}}{\Delta \hat x})\\
U^*_{i} &:= U_{i} + \frac{\Delta \hat x}{2}minmod(\frac{U_{i+1} - U_{i}}{\Delta \hat x},\frac{U_{i} - U_{i-1}}{\Delta \hat x})\\
\end{split}
\end{equation}
where $\operatorname{minmod}(a,b) := \frac{1}{2}(\operatorname{sign}(a)+\operatorname{sign}(b))\min(|a|, |b|)$.
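The minmod limiter translates directly to code; the sketch below exercises its three regimes (agreeing positive slopes, opposing slopes at a local extremum, and agreeing negative slopes):

```python
import numpy as np

def minmod(a, b):
    # (sign(a)+sign(b))/2 * min(|a|,|b|): returns the smaller slope when the
    # signs agree, and zero at a local extremum (opposing signs)
    return 0.5 * (np.sign(a) + np.sign(b)) * np.minimum(np.abs(a), np.abs(b))

print(minmod(1.0, 2.0), minmod(-1.0, 2.0), minmod(-3.0, -2.0))  # 1.0 0.0 -2.0
```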
\subsubsection{Elliptic Solver.}
To solve the elliptic equations, we apply the classical second-order finite difference method, which is
\begin{equation}
\frac{y^j_{i+1} - 2y^j_i + y^j_{i-1}}{\Delta \hat x^2} - \lambda^2 y_i^j = -2kU^j_i
\end{equation}
Over the interior points, this yields linear equations
\begin{equation}\label{linsys}
(\frac{1}{\Delta \hat x^2}\mathbf A - \lambda^2 \mathbf I)y^j_{int} = -2kU^j_{int} -
\frac{1}{\Delta \hat x^2}\begin{bmatrix} y^j_{0} & 0 & \hdots & 0 & y^j_{N}\end{bmatrix}^T,
\end{equation}
\begin{equation}
\mathbf A = \begin{bmatrix}-2 & 1 & 0 & \hdots & \hdots & 0 \\
1 & -2 & 1 & 0& \hdots & 0 \\
0 & 1 & -2 & 1& \hdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & 0 & 0 & 1 & -2\end{bmatrix}
\end{equation}
The matrix in (\ref{linsys}) is tridiagonal, so banded matrix algorithms \citep{golub} can be used to solve the corresponding system of equations.
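For completeness, a minimal Thomas-algorithm solver for the tridiagonal system (\ref{linsys}) might look as follows (grid size and parameters are illustrative); it runs in $O(n)$ and can be checked against a dense solve:

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(n): forward elimination, back-substitution."""
    n = len(diag)
    c, d = np.zeros(n), np.zeros(n)
    c[0], d[0] = upper[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / m
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m
    y = np.zeros(n)
    y[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        y[i] = d[i] - c[i] * y[i + 1]
    return y

# Interior system (1/dx^2) A - lam^2 I with a constant right-hand side U = 1
n, lam, k = 99, 1.0, 4.0
dx = 1.0 / (n + 1)
diag = np.full(n, -2.0 / dx**2 - lam**2)
off = np.full(n - 1, 1.0 / dx**2)
rhs = -2.0 * k * np.ones(n)
y = thomas(off, diag, off, rhs)
```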
As shown in Fig. \ref{fig:speed}, using finite differences is much faster than a convolution (Riemann) sum, even when the embarrassingly parallel nature of the sum is exploited.
\subsubsection{Particle Solver.}
We solve the system of particle equations using the velocity Verlet algorithm \citep{karniadakis_flockingDynamicsfractionalPDEs_2018}. Given
a system of ODEs of the form
\begin{equation}
\begin{cases}
\frac{dx}{dt} &= v \\
\frac{dv}{dt} &= a(x,v,t),
\end{cases}
\end{equation}
with appropriate initial conditions
and a time-discretization at steps $\{0,1,\ldots,i,\ldots\}$ with increment
$\Delta t$, the discretization is
\begin{equation}
\begin{split}
v_{i+\frac{1}{2}} &= v_i + \frac{1}{2}a(x_i,v_i, t_i)\Delta t\\
x_{i+1} &= x_i + \Delta t v_{i+\frac{1}{2}} \\
v_{i+1} &= v_i + \frac{\Delta t}{2}[a(x_i,v_i,t_i) + a(x_{i+1}, v_{i+\frac{1}{2}}, t_{i+1})].
\end{split}
\end{equation}
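A direct implementation of this scheme, exercised here on a harmonic oscillator ($a=-x$, exact period $2\pi$) rather than on the flocking dynamics, might read:

```python
import numpy as np

def velocity_verlet(a, x0, v0, dt, n_steps):
    """Velocity Verlet for dx/dt = v, dv/dt = a(x, v, t), as discretized above."""
    x, v, t = x0, v0, 0.0
    for _ in range(n_steps):
        ai = a(x, v, t)
        v_half = v + 0.5 * ai * dt
        x_next = x + dt * v_half
        v = v + 0.5 * dt * (ai + a(x_next, v_half, t + dt))
        x, t = x_next, t + dt
    return x, v

# Harmonic oscillator a = -x: after one period 2*pi the state returns to (1, 0)
n = 1000
x_T, v_T = velocity_verlet(lambda x, v, t: -x, 1.0, 0.0, 2 * np.pi / n, n)
print(x_T, v_T)
```

The velocity-dependent acceleration of the flocking system is passed through `a(x, v, t)` in exactly the same way.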
\begin{figure}[ht!]
\centering
\includegraphics[trim=0 0 0 40,clip,width=0.4\textwidth]{CompTime.eps}
\caption{Computation Times for Nonlocal Terms using Finite Differences and Riemann Sum.}
\label{fig:speed}
\end{figure}
\section{Mathematical Models}
\label{Sec:CS}
In this section we introduce the Cucker-Smale dynamics under
general interaction functions, define time-asymptotic flocking,
and present the mean-field macroscopic equations.
\subsection{The Cucker-Smale Model}
Consider an interacting system of $N$ identical autonomous agents
with unit mass in $\mathbb{R}^d$, $d \in \{1,2,3\}$.
Let $x_i(t),\ v_i(t)\in\mathbb{R}^d$ represent the position and velocity
of the $i^{th}$-particle at each time $t\geq 0$, respectively, for $1\leq i\leq N$.
Then the general Cucker-Smale dynamical system \citep{cucker2007emergent} of $(2N)$ ODEs
reads as:
\begin{equation}
\begin{cases}
\oder{x_i}{t} &= v_i \\
\oder{v_i}{t} &= \frac{1}{N}\sum_{j=1}^{N}\psi(x_j,x_i)(v_j-v_i)
\end{cases}
\label{eq:cs}
\end{equation}
where $x_i(0)$ and $v_i(0)$ are given for all $i = 1,\ldots,N$,
and ${\psi:\mathbb{R}^d \times \mathbb{R}^d \rightarrow \mathbb{R}}$
represents the interaction function between each pair of particles.
The center of mass system $(x_c,v_c)$ of $\cbra{(x_i,v_i)}_{i=1}^N$
is defined as
\begin{equation}
x_c = \frac 1 N \sum_{i=1}^N x_i, \quad v_c = \frac 1 N \sum_{i=1}^N v_i
\end{equation}
When $\psi$ is symmetric, i.e., $\psi(x,s)=\psi(s,x)$,
system (\ref{eq:cs}) implies
\begin{equation}
\oder{x_c}{t}=v_c,\quad \oder{v_c}{t}=0
\end{equation}
which gives the explicit solution
\begin{equation}
x_c(t) = x_c(0) + t v_c(0),\ t\geq 0
\label{eq:center_of_mass}
\end{equation}
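Both conservation properties are easy to confirm numerically; the sketch below integrates (\ref{eq:cs}) in $d=1$ with forward Euler and a symmetric interaction function of Cucker-Smale type (all parameters illustrative), checking that $v_c$ is invariant up to floating-point accuracy while the velocity fluctuations contract:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 20, 0.01, 2000
x = rng.normal(size=N)          # initial positions (d = 1)
v = rng.normal(size=N)          # initial velocities

def psi(xj, xi, K=1.0, gamma=0.25):
    # symmetric Cucker-Smale-type interaction; parameters are illustrative
    return K / (1.0 + np.abs(xj - xi) ** 2) ** gamma

vc0 = v.mean()
spread0 = np.sum((v - vc0) ** 2)
for _ in range(steps):
    W = psi(x[None, :], x[:, None])                    # W[i, j] = psi(x_j, x_i)
    dv = (W * (v[None, :] - v[:, None])).sum(axis=1) / N
    x, v = x + dt * v, v + dt * dv

print(abs(v.mean() - vc0), np.sum((v - v.mean()) ** 2) / spread0)
```

The mean velocity is conserved at each Euler step because the symmetric double sum cancels exactly, while the velocity spread decays, consistent with the flocking estimates below.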
\subsection{Asymptotic Flocking}
\label{sSec:flocking}
We investigate the additional assumptions on the initial conditions and
the interaction function $\psi$, such that
system (\ref{eq:cs}) converges to a velocity consensus,
a phenomenon known in the literature as \emph{time-asymptotic flocking},
defined in terms of the center of mass system as
\begin{definition}[Asymptotic Flocking]
An $N-$body interacting system $\mathcal{G}=\cbra{(x_i,v_i)}_{i=1}^N$
exhibits time-asymptotic flocking if and only if the following two relations hold:
\begin{itemize}
\item(Velocity alignment):
$\lim_{t\rightarrow\infty}\sum_{i=1}^N \norm{v_i(t)-v_c(t)}^2=0$ ,
\item(Spatial coherence):
$\sup_{0\leq t \leq\infty} \sum_{i=1}^N \norm{x_i(t)-x_c(t)}^2 <\infty$ .
\end{itemize}
\label{def:flocking}
\end{definition}
We consider the new variables
\begin{equation}
(\hat x_i, \hat v_i):=(x_i-x_c, v_i-v_c)
\label{eq:fluctuations}
\end{equation}
which correspond to the fluctuations around the center of mass system,
and define
$\hat x:=(\hat x_1,\ldots,\hat x_N)$, $\hat v:=(\hat v_1,\ldots,\hat v_N)$,
$|\hat x| = \pbra{\sum_{i=1}^N \|\hat x_i\|^2}^{1/2}$, and
$|\hat v| = \pbra{\sum_{i=1}^N \|\hat v_i\|^2}^{1/2}$,
where $\|\cdot\|$ represents the standard $l_2$-norm
in $\mathbb{R}^d$.
Based on Definition \ref{def:flocking},
asymptotic flocking is achieved if
\begin{equation}
|\hat x(t)|<\infty, t\geq 0,\ \text{and }
\lim_{t\rightarrow\infty} |\hat v(t)|=0
\end{equation}
We first notice that
\begin{equation}
\oder{|\hat x|^2}{t}=2\abra{\oder{\hat x}{t},\hat x}\leq 2|\hat x||\hat v|
\end{equation}
which implies
\begin{equation}
\oder{|\hat x|}{t} \leq |\hat v|
\label{eq:bound_x}
\end{equation}
Suppose the interaction function $\psi$ is chosen such that
$\psi(x,s)=\tilde\psi(\|x-s\|)$,
with $\tilde\psi:\mathbb{R}_+\rightarrow\mathbb{R}_+$ being a
non-negative and non-increasing function.
Then $(\hat x_i, \hat v_i)$ are governed by the dynamical system (\ref{eq:cs}), and
\begin{equation}
\begin{aligned}
\oder{|\hat v|^2}{t} &=-\frac 1 N \sum_{1\leq i,j\leq N} \tilde\psi(\|\hat x_j-\hat x_i\|)\|\hat v_j-\hat v_i\|^2 \\
&\leq -\frac 1 N \tilde\psi(2|\hat x|) \sum_{1\leq i,j\leq N} \|\hat v_j-\hat v_i\|^2 \\
&= -\frac 2 N \tilde\psi(2|\hat x|) |\hat v|^2
\end{aligned}
\end{equation}
which implies
\begin{equation}
\oder{|\hat v|}{t} \leq -\frac 2 N \tilde\psi(2|\hat x|) |\hat v| :=-\phi(|\hat x|)|\hat v|
\label{eq:bound_v}
\end{equation}
where we have used the fact that
$\sum_{i=1}^N \hat v_i(t)=0$, $t\geq 0$, and
\begin{equation}
\max_{1\leq i,j\leq N}\|\hat x_i-\hat x_j\|\leq 2|\hat x|
\label{eq:xixj_bound}
\end{equation}
The following theorem \citep{ha2009simple} provides
sufficient conditions for time-asymptotic flocking.
\begin{theorem}
Suppose $(|\hat x|,|\hat v|)$ satisfy the system of dissipative differential inequalities
(\ref{eq:bound_x}), (\ref{eq:bound_v}) with $\phi\geq 0$.
Then if $|\hat v(0)|<\int_{|\hat x(0)|}^\infty \phi(s) ds $,
there is an $x_M\geq 0$ such that
$|\hat v(0)|=\int_{|\hat x(0)|}^{x_M} \phi(s) ds$,
and for every $t\geq 0$, $|\hat x(t)|\leq x_M$, and
$|\hat v(t)|\leq |\hat v(0)|e^{-\phi(x_M)t}$.
\label{thm:flocking}
\end{theorem}
The following is an immediate consequence of Theorem \ref{thm:flocking}.
\begin{proposition}
Let $\mathcal{G}=\cbra{(x_i,v_i)}_{i=1}^N$ be an $N$-body interacting system
with dynamics given by (\ref{eq:cs}).
Suppose
$\psi(x,s)=\tilde\psi(\|x-s\|)$,
with $\tilde\psi:\mathbb{R}_+\rightarrow\mathbb{R}_+$ being a
non-negative and non-increasing function.
Then if $|v(0)-v_c(0)|<\int_{|x(0)-x_c(0)|}^\infty \frac 2 N \tilde\psi(2s) ds $,
$\mathcal{G}$ exhibits time-asymptotic flocking.
\label{pro:flocking}
\end{proposition}
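As a numerical illustration of Proposition \ref{pro:flocking}, the following one-dimensional sketch integrates the Cucker-Smale dynamics, assumed here to take the form $\dot x_i=v_i$, $\dot v_i=\frac{\lambda}{N}\sum_j \tilde\psi(\|x_j-x_i\|)(v_j-v_i)$, with the hypothetical non-increasing kernel $\tilde\psi(r)=1/(1+r^2)$ (an illustrative choice, not one of the kernels introduced later), and checks that the velocity fluctuation $|\hat v|$ decays in the centroid-fixed frame:

```python
import numpy as np

# Minimal 1-D sketch of the Cucker-Smale dynamics, assuming the form
# dv_i/dt = (lambda/N) * sum_j psi(|x_j - x_i|) * (v_j - v_i);
# the kernel psi(r) = 1/(1 + r^2) is an illustrative, non-increasing choice.
def cucker_smale_step(x, v, dt, lam=1.0, psi=lambda r: 1.0 / (1.0 + r**2)):
    N = len(x)
    w = psi(np.abs(x[:, None] - x[None, :]))      # pairwise weights psi(|x_i - x_j|)
    dv = (lam / N) * np.sum(w * (v[None, :] - v[:, None]), axis=1)
    return x + dt * v, v + dt * dv

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 50)
v = rng.uniform(-1.0, 1.0, 50)
v -= v.mean()                                     # work in the centroid-fixed frame
v0_norm = np.linalg.norm(v)
for _ in range(2000):                             # explicit Euler up to t = 20
    x, v = cucker_smale_step(x, v, dt=0.01)
```

Since the alignment term is antisymmetric under $i\leftrightarrow j$, the centroid velocity is preserved and the velocity fluctuation decreases monotonically, in line with (\ref{eq:bound_v}).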
\subsection{The Mean-Field Limit}
Consider the empirical joint probability distribution of the particle positions and velocities $\{x_i,v_i\}_{i=1}^N$
\begin{equation}
F_{xv}^N(t,x,v) := \frac{1}{N}\sum_{i=1}^N \delta(x-x_i(t))\,\delta(v-v_i(t))
\end{equation}
where $\delta(\cdot)$ denotes the Dirac measure on $\mathbb R^{d}$.
As the number of particles $N \rightarrow \infty$, we can use
McKean-Vlasov arguments to show that the empirical distribution
converges weakly to a distribution whose density $f_{xv}$ evolves according to the
forward Kolmogorov
equation \citep{carrillo2010particle}
\begin{equation}\label{fpke}
\begin{split}
&\partial_t f_{xv} + \nabla_x\cdot (vf_{xv})+\nabla_v\cdot (Af_{xv}) = 0\\
&A := \int_{\mathbb{R}^{2d}}\psi(x,s)(w-v)f_{xv}(t,s,w)dsdw.
\end{split}
\end{equation}
We define
\begin{equation}
\begin{split}
\rho(t,x) &:= \int_{\mathbb R^d} f_{xv}(t,x,v) dv \\
m(t,x)&:=\rho(t,x)u(t,x) := \int_{\mathbb R^d} v f_{xv}(t,x,v) dv.
\end{split}
\end{equation}
which are the marginal probability and momentum density functions.
Substituting these into (\ref{fpke}) yields the following $(d+1)$ compressible Euler
equations with non-local forcing:
\begin{equation}
\begin{cases}
\partial_t{\rho} + \nabla_x \cdot (\rho u ) = 0 \\
\partial_t{(\rho u)} + \nabla_x \cdot (\rho u \otimes u) =
\rho \mathcal L_\psi (\rho u) - \rho u \mathcal L_\psi \rho
\end{cases}
\label{eq:euler}
\end{equation}
where $u$ is the mean velocity, $\rho(0,x)$ and $u(0,x)$ are given and
\begin{equation}
\mathcal L_\psi f (t,x):= \int_{\mathbb{R}^d} \psi(x,s)f(t,s) ds.
\label{eq:convolution}
\end{equation}
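Although the interaction kernels are only specified later, the computational point can be illustrated in one dimension with the (assumed) exponential kernel $\psi(r)=\frac{\sqrt k}{2\lambda}e^{-\lambda r}$, which is the Green's function of $\mathcal L=-k^{-1/2}(d^2/dx^2-\lambda^2)$: applying $\mathcal L_\psi$ by $O(n^2)$ quadrature of (\ref{eq:convolution}) agrees with a single $O(n)$ tridiagonal solve of $\mathcal L u=f$.

```python
import numpy as np

# 1-D illustration (hypothetical kernel): psi(r) = sqrt(k)/(2*lam) * exp(-lam*r)
# is the Green's function of L = -k^(-1/2) (d^2/dx^2 - lam^2), so the nonlocal
# term L_psi f = psi * f can be computed by solving L u = f with a tridiagonal
# matrix instead of an O(n^2) quadrature of the convolution integral.
k, lam = 4.0, 1.0
n = 2001
x = np.linspace(-20.0, 20.0, n)
h = x[1] - x[0]
f = np.exp(-x**2)                               # smooth, rapidly decaying source

# (a) O(n^2) Riemann-sum quadrature of the convolution
kernel = np.sqrt(k) / (2.0 * lam) * np.exp(-lam * np.abs(x[:, None] - x[None, :]))
u_conv = kernel @ f * h

# (b) O(n) solve of -k^(-1/2) (u'' - lam^2 u) = f via the Thomas algorithm
off = -1.0 / (np.sqrt(k) * h**2)                # constant off-diagonal entry
dia = (2.0 / h**2 + lam**2) / np.sqrt(k)        # constant diagonal entry
cp = np.empty(n); dp = np.empty(n)
cp[0] = off / dia; dp[0] = f[0] / dia
for i in range(1, n):                           # forward elimination
    m = dia - off * cp[i - 1]
    cp[i] = off / m
    dp[i] = (f[i] - off * dp[i - 1]) / m
u_tri = np.empty(n)
u_tri[-1] = dp[-1]
for i in range(n - 2, -1, -1):                  # back substitution
    u_tri[i] = dp[i] - cp[i] * u_tri[i + 1]

err = np.max(np.abs(u_conv - u_tri))
```

The two results agree up to discretization error; replacing the dense quadrature with a banded solve is the computational advantage exploited by the parametric interaction functions discussed in this work.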
\section{INTRODUCTION}
\label{Sec:Introduction}
Collective motion of autonomous agents is a widespread phenomenon
appearing in numerous applications
ranging from animal herding to complex networks and social dynamics
\citep{okubo1986dynamical, cucker2007emergent, giardina2008collective}.
In general, there are two broad approaches when investigating the underlying dynamics for flocks or swarms:
the microscopic, particle models described by ordinary differential equations (ODEs) or stochastic differential equations,
and the macroscopic continuum models, described by partial differential
equations (PDEs).
Agent-based models assume behavioral rules at the individual level, such as
velocity alignment, attraction, and repulsion
\citep{cucker2007emergent, giardina2008collective, ballerini2008interaction}
and are often used in numerical simulations
and in learning schemes where the interaction rules are inferred
\citep{Matei2019}.
As the number of interacting agents gets large,
agent-based models become computationally expensive \citep{carrillo2010particle}: with pairwise interactions, the cost grows as $O(N^2)$, where $N$ is the number of agents.
As we approach the mean-field limit, it is useful to consider the probability density of the agents. Using Vlasov-like arguments
\citep{carrillo2010particle}, we can construct an equation analogous to the Fokker-Planck-Kolmogorov equation. We can then define momentum and density and construct a system of compressible hydrodynamic PDEs
\citep{carrillo2010particle, shvydkoy2017eulerian}.
In flocking dynamics
\citep{cucker2007emergent, carrillo2010particle},
the velocity alignment term is not only nonlocal but can also be nonlinear \citep{shvydkoy2017eulerian, karniadakis_flockingDynamicsfractionalPDEs_2018}.
The computation of the corresponding hydrodynamic equations with nonlocal forces
becomes quite costly due to
the approximation of the convolution integrals or integral transforms using
the various quadrature methods. The simplest `quadrature' method is the Riemann sum, whose complexity is $O(n^2)$, where $n$ is the number of grid points, when estimating a convolution integral as a convolution sum in one dimension. On the other hand, an equivalent solution may be obtained using finite differences if the interaction kernel is associated with a differential operator. If that operator can be put into a sparse form, ideally a tridiagonal form, a solution can be obtained efficiently.
In this work, we modify the classical Cucker-Smale model of nonlocal particle interaction for velocity consensus
\citep{cucker2007emergent, ha2009simple}.
We propose a family of parametric interaction functions in $\mathbb{R}^d$, $d \in \{ 1,2,3\}$,
that are Green's functions for appropriately defined linear partial
differential operators, which allow us to speed up computation of the nonlocal interaction terms.
We investigate the conditions under which time-asymptotic flocking is achieved in the microscopic formulation in a centroid-fixed frame.
We solve the macroscopic formulation using the Kurganov-Tadmor MUSCL finite volume method \citep{KURGANOV2000241} and a second-order finite difference discretization of our chosen differential operator. The method is compared to bulk variables computed from the microscopic formulation for validation.
The rest of the manuscript is organized as follows:
Section \ref{Sec:CS} introduces the agent-based Cucker-Smale flocking dynamics and
the macroscopic Euler equations.
Section \ref{Sec:BVP} describes the conversion of the Euler equations
to an augmented system of PDEs, and the formulation of the boundary value problem.
In Section \ref{Sec:Computations} a family of interaction functions is proposed and
the computation process is explained.
Finally, Section \ref{Sec:Results} compares the numerical results and
Section \ref{Sec:Conclusion} concludes the paper.
\section{Conclusion}
\label{Sec:Conclusion}
A family of compactly supported
parametric interaction functions in the general
Cucker-Smale flocking dynamics was proposed such that the
macroscopic system of mass and momentum balance
equations with non-local damping terms
can be converted to an augmented system of coupled PDEs
in a compact set.
We approached the computation of the non-local damping
using the standard finite difference treatment of the chosen differential operator, which was solved using banded matrix algorithms.
The expressiveness of the proposed interaction functions
may be utilized for parametric learning from trajectory data.
\section{Numerical Results and Higher Dimensions}
\label{Sec:Results}
In this section we present numerical simulations of one-dimensional nonlocal flocking dynamics, by solving $(a)$ the agent-based Cucker-Smale model using the velocity Verlet method, and
$(b)$ the macroscopic model with initial conditions whose support is the
interval $[-\pi, \pi]$. Our aim is to verify that the agent-based and continuum-based approaches to the flocking problem produce similar results.
In the following, the initial density and velocity are given by
\begin{align}
\rho_0(\hat x) &= \frac{\pi}{2L} \cos\left(\frac{\pi \hat x}{L}\right), \\
u_0(\hat x) &= -c \sin\left(\frac{\pi \hat x}{L}\right),\ \hat x\in[-\frac L 2,\frac L 2],
\end{align}
i.e. it is assumed that $\rho_0(\hat x) = u_0(\hat x) = 0,\ \forall \hat x\notin [-\frac L 2,\frac L 2]$,
where we have used $L=2\pi$.
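A quick numerical check (illustrative, not part of the original computation) confirms that $\rho_0$ is normalized to unit mass on its support and vanishes at the support boundary:

```python
import numpy as np

# Check that rho_0(x) = pi/(2L) * cos(pi x / L), with L = 2*pi, integrates to
# one over its support [-L/2, L/2] (i.e. it is a probability density) and
# vanishes at the boundary of the support.
L = 2.0 * np.pi
x = np.linspace(-L / 2.0, L / 2.0, 100001)
rho0 = np.pi / (2.0 * L) * np.cos(np.pi * x / L)
mass = np.sum(0.5 * (rho0[1:] + rho0[:-1]) * np.diff(x))   # trapezoid rule
```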
\subsection{Cucker-Smale Model Simulation}
In all simulations, we take $\lambda=1$, $k=4$.
For the particle simulation, we use $N=10^4$ particles. For the macro-scale
simulation, we use $\Delta \hat x = \frac{2\pi}{600}$ as the spatial increment. In
both simulations, we take $\Delta t = 0.001$ as the time increment.
In both cases, the support of the initial profile shrinks as the bulk comes
together. The semi-linear Poisson-forced Euler system is highly dissipative, and the momentum profile
is damped until it flattens (although it is conserved over the domain), and the system attains an equilibrium
distribution.
Fig. \ref{fig:macros_micro} shows the agreement between
the particle model and the macro-scale model.
\begin{figure}[ht!]
\centering
\includegraphics[trim=10 20 10 80, clip,width=0.35\textwidth]{ParticleAndMacro.eps}
\caption{Evolution of the Probability Densities $\rho(t,\hat x)$ and Momentum Densities $m(t,\hat x)$ as computed by solving the macro-scale model and the particle model (dashed-line).}
\label{fig:macros_micro}
\end{figure}
\subsection{Higher Dimensions}
In higher dimensions, the radial symmetry of the interaction function $\psi$
suggests the use of a \emph{singular kernel}.
Singular kernels have been extensively studied in the literature and, under
mild assumptions on the initial conditions, have been shown to result in flocking behavior
while, at the same time, avoiding collisions \citep{Ahn2012OnCI}.
In the BVP of the augmented system of PDEs (\ref{eq:pdes})
with the initial and boundary conditions (\ref{eq:bvp_pdes}),
we select the linear differential operator
(see also \citep{mavridis2020learning}):
\begin{equation}
\mathcal{L}_x= - k^{-d/2} ( \nabla_x^2 - \lambda^2)
\end{equation}
and $\Omega = B_d(0,r):=\cbra{x\in\mathbb{R}^d:\|x\| < r}$,
which results in a Green's function of the form
\begin{align}
\hat\psi(x,s) = \psi(x-s) + \phi(x,s)
\end{align}
where $\psi$ is given by
\begin{equation}
\begin{aligned}
\psi(x,s) &= \tilde\psi(\|x-s\|) \\
&= \pbra{\frac{k}{2\pi}}^{d/2} \pbra{\frac{\lambda}{\|x-s\|}}^{d/2-1}
K_{d/2-1}(\lambda \|x-s\|)
\end{aligned}
\end{equation}
with $K_\alpha(\cdot)$ being the modified Bessel function of the second kind of order $\alpha$,
and $\phi$ is a function such that
\begin{equation}
\begin{aligned}
&\mathcal{L}_s \phi(x,s) = 0,\ s\in B_d(0,r) \\
&\phi(x,s) = -\psi(x,s),\ s\in\partial B_d(0,r)
\end{aligned}
\end{equation}
For $s\in \partial B_d(0,r)$ we have
\begin{equation}
\begin{aligned}
\|x-s\|^2 &= \|x\|^2 - 2 \abra{x,s} + \|s\|^2 \\
&= \|x\|^2 \left\| \frac{s}{r} - \frac{r x}{\|x\|^2}\right\|^2
\end{aligned}
\end{equation}
and it can be shown that
\begin{equation}
\phi(x,s) = -\tilde\psi\left(\frac 1 r \|x\|\left\| s - r^2 \frac{x}{\|x\|^2}\right\|\right).
\end{equation}
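As a sanity check (illustrative, pure-\texttt{numpy}), the reflection identity derived above, $\|x-s\| = \frac{\|x\|}{r}\,\| s - r^2 x/\|x\|^2\|$ for $\|s\|=r$, can be verified numerically; it is exactly what makes $\phi(x,s)$ cancel $\psi(x,s)$ on $\partial B_d(0,r)$:

```python
import numpy as np

# Numerical check of the reflection identity: for ||s|| = r,
# ||x - s|| = (||x||/r) * || s - r^2 x / ||x||^2 ||,
# tested at random boundary points of the disk B_2(0, r).
rng = np.random.default_rng(1)
r = 1.0
x = np.array([0.0, 0.5])                        # interior point, as in the figure
max_dev = 0.0
for _ in range(100):
    s = rng.normal(size=2)
    s *= r / np.linalg.norm(s)                  # random point on the sphere ||s|| = r
    lhs = np.linalg.norm(x - s)
    rhs = (np.linalg.norm(x) / r) * np.linalg.norm(s - r**2 * x / np.linalg.norm(x)**2)
    max_dev = max(max_dev, abs(lhs - rhs))
```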
The interaction function $\hat\psi$
is affected by the bounded domain in the same way as in the
one-dimensional case, and depends on
the parameter values $k$ and
$\lambda$ as illustrated in
Fig.~\ref{fig:psi_param_high} for the $2$-dimensional case.
\begin{figure}[b]
\centering
\includegraphics[trim=60 20 110 60,clip,width=0.17\textwidth]{bessel2D105.eps}
\includegraphics[trim=60 20 110 60,clip,width=0.17\textwidth]{bessel2D210.eps}
\caption{The effect of the parameters $k$, $\lambda$ on the
profile of the interaction function $\hat\psi((0,0.5),s)$, $s\in B_2(0,1)$. Left: $(k,\lambda)=(1,0.5)$. Right: $(k,\lambda)=(2,10)$.}
\label{fig:psi_param_high}
\end{figure}
\section{Introduction}
The experimental achievement of Bose-Einstein condensation (BEC) of atomic chromium \cite{PhysRevLett.94.160401}, which has a magnetic moment
of $m=6\mu_{\rm B}$, where $\mu_{\rm B}$ stands for the Bohr magneton, and the subsequent detection of the long-range and anisotropic dipole-dipole
interaction (DDI) in that system \cite{stuhler_j_2005} paved the way for a systematic investigation of dipolar quantum systems. In chromium BECs, the
DDI is usually of secondary importance, as its relative strength to the short-range and isotropic contact interaction is only of about $15\%$.
However, by using a Feshbach resonance \cite{PhysRevLett.94.183201}, one can tune the contact interaction in order to improve the relative importance
of the DDI. Thus, in retrospect, the work with chromium has turned out to be highly valuable as many specific achievements have led to
a better understanding of bosonic dipolar systems \cite{baranov-review,citeulike:4464283,1367-2630-11-5-055009,doi:10.1021/cr2003568}. Among them,
one could emphasize the observation of a d-wave Bose-nova explosion pattern \cite{d-wave-pfau}, the strong dipolar character of the time-of-flight
analysis \cite{strong-pfau}, and the detection of the influence of the DDI in the low-lying excitations \cite{PhysRevLett.105.040404,gorceix_sound}.
Recently, a few major experimental successes were obtained, which might lead to even stronger dipolar quantum systems, consisting of both atomic
and molecular systems. Particularly promising candidates are polar molecules such as KRb \cite{K.-K.Ni10102008,citeulike:6565167,efficient_transfer}
and LiCs \cite{weidemueller1,weidemueller2},
which have a strong electric dipole moment so that the DDI may be up to ten thousand times stronger than in usual atomic
systems \cite{arXiv:0811.4618}, thus providing an ideal testing ground for strong dipolar systems. Moreover, atomic systems such as dysprosium,
the most magnetic atom with a magnetic moment of $m=10\mu_{\rm B}$, and erbium, which has $m=7\mu_{\rm B}$, are currently under intense investigation.
Indeed, the bosonic dysprosium isotope $^{164}$Dy has been Bose condensed \cite{PhysRevLett.107.190401}, while its fermionic isotope $^{161}$Dy has
been brought to quantum degeneracy \cite{PhysRevLett.108.215301}. Most recently,
Bose-Einstein condensation of erbium has been achieved \cite{PhysRevLett.108.210401}.
Concerning dipolar systems, one feature, which has received considerable attention, is the possibility of magnetostriction in momentum space,
which was first found theoretically in the fermionic case \cite{miyakawa_t_2008}, but has been intensively investigated in different contexts and
also in the
bosonic case \cite{PhysRevA.86.023605}. For fermionic dipolar gases, other interesting possibilities have been found such as
supersolid \cite{PhysRevA.83.053629}, ferronematic \cite{fregoso:205301}, and Berezinskii-Kosterlitz-Thouless phases \cite{bruun:245301} and their Fermi
liquid properties have been studied \cite{PhysRevA.81.023602}.
Much work has been devoted to the normal phase of a dipolar Fermi gas. Its equilibrium properties in the presence of a harmonic trap have been
considered at both zero \cite{rzazewski,miyakawa_t_2008,AristeuRapCom,Aristeu} and finite temperature \cite{endo,PhysRevA.81.033617}. Also the
dynamical properties of a trapped dipolar Fermi gas have been studied at both zero \cite{1367-2630-11-5-055017,AristeuRapCom,Aristeu} and finite
temperature \cite{PhysRevA.83.053628}.
Dynamical properties such as the low-lying excitations represent an important diagnostic tool for ultracold systems. Moreover, they can be
measured with high accuracy, so as to provide reliable physical information. In the case of non-dipolar unitary Fermi gases, for example, such
measurements have recently been used to discard predictions of the mean-field Bardeen-Cooper-Schrieffer theory (BCS) in favor of the predictions
of quantum Monte Carlo simulations \cite{PhysRevLett.95.030404} along the BCS-BEC crossover \cite{PhysRevLett.98.040401}. A more recent example is
that of the experimental support for the lower bound to the viscosity of a unitary gas \cite{Cao07012011}, which is conjectured to be
universal \cite{PhysRevLett.94.111601}. The aforementioned detection of the DDI through observing the hydrodynamic modes of a chromium BEC should also
be recalled in this context \cite{stuhler_j_2005}.
In the first studies, investigations of the excitations of dipolar Fermi gases concentrated on either the collisionless (CL) regime
\cite{1367-2630-11-5-055017,PhysRevA.83.053628}, where collisions can be neglected, or in the hydrodynamic (HD) regime \cite{AristeuRapCom,Aristeu},
where collisions occur so often that local equilibrium can be assumed. Recently, also the radial quadrupole mode has been studied in detail in
both the CL and the HD regime \cite{PhysRevA.85.033639}. The next natural step is the investigation of what happens, when the system undergoes a
crossover from one regime to the other. Along these lines, there has recently been a thorough investigation of quasi-two-dimensional dipolar Fermi gases.
Indeed, by considering a linearized scaling ansatz as well as numerical results, the first eight moments of the collisional Boltzmann-Vlasov equation
were analyzed. As a result, the graph of the collision rate against temperature was shown to exhibit an unexpected plateau, a unique
characteristic of dipolar systems, and also the low-lying modes in this quasi two-dimensional case were considered \cite{demler}. It is important to
remark that, additionally,
the effect of quantum correlations has been considered in this system. Building on a previous study, where quantum Monte Carlo methods
were used to investigate the Fermi liquid as well as the crystal phases in strictly two-dimensional systems \cite{matveeva_prl}, a mapping scheme could
be constructed, that allows for investigating correlations in quasi two-dimensional, spherically trapped dipolar Fermi gases \cite{demler2}.
In the present paper, we focus on a three-dimensional dipolar Fermi gas at zero temperature and investigate the transition of both the frequencies
and the damping of the low-lying modes from the CL to the HD regime by applying the relaxation-time approximation. To this end the paper
is organized as follows. In Section \ref{BoVlEq} we solve the Boltzmann-Vlasov equation (BVE)
for a harmonically trapped dipolar Fermi gas by rescaling the equilibrium distribution and
obtain ordinary differential equations for the respective scaling parameters. Afterwards, in Section \ref{equi}, we specialize them at zero temperature to
the relaxation-time approximation for the collision integral and to a concrete equilibrium distribution.
A subsequent linearization of the equations of motion for the respective scaling parameters
in Section \ref{linear}
allows us to determine both the frequency and the damping of the low-lying excitation modes. In particular, we investigate how
the properties of the monopole mode, the three-dimensional quadrupole mode, and the radial quadrupole mode change by varying the relaxation time
from the CL to the HD regime. The conclusion in Section \ref{CON} summarizes our findings and indicates possible future investigations
along similar lines. Finally, the
appendix presents a self-contained computation of the respective relevant energy integrals.
\section{Boltzmann-Vlasov Equation}\label{BoVlEq}
We start by describing the dynamic properties of a trapped dipolar Fermi gas by means of the
Wigner function $\nu({\bf x},{\bf q},t)$, which represents a
semiclassical distribution function in phase space spanned by coordinate ${\bf x}$ and wave vector ${\bf q}$ \cite{Wigner1,Wigner2}.
It allows us to determine both the particle density through $n({\bf x},t)=\int d^3q \,\nu({\bf x},{\bf q},t)/(2\pi)^3$
and the wave vector distribution by $n({\bf q},t)=\int d^3x\,\nu({\bf x},{\bf q},t)/(2\pi)^3$ as well as the expectation value of any
observable according to $\langle O \rangle= \int d^3x \int d^3q\,O({\bf x},{\bf q}) \nu({\bf x},{\bf q},t)/(2\pi)^3$.
The time evolution of this Wigner function is
determined by the Boltzmann-Vlasov equation
\begin{equation}
\frac{\partial \nu}{\partial t}+\left\{\frac{\hbar {\bf q}}{M}+\frac{1}{\hbar} \frac{\partial\left[U({\bf x})
+U_{\rm mf}({\bf x},{\bf q},t)\right]}{\partial
{\bf q}}\right\} \frac{\partial \nu}{\partial {\bf x}}-\frac{1}{\hbar}\frac{\partial\left[U({\bf x})+U_{\rm mf}({\bf x},{\bf q},t)\right]}{\partial {\bf x}}
\frac{\partial \nu}{\partial {\bf q}}=I_{\rm coll}[\nu]({\bf x},{\bf q},t) \, , \label{bve}
\end{equation}
where $U({\bf x})=M/2\sum_i \omega_i^2 x_i^2$ is a general harmonic trapping potential and $\omega_i$ denotes the trap frequency in the $i$-direction.
The mean-field potential
\begin{equation}
U_{\rm mf}({\bf x},{\bf q},t)=\int d^3x'n({\bf x'},t)V_{\rm d}({\bf x}-{\bf x'}) - \int \frac{d^3q'}{(2\pi)^3}\nu({\bf x},{\bf q'},t)
\tilde{V}_{\rm d}({\bf q}-{\bf q'})
\label{umf}
\end{equation}
contains in the first and second term the Hartree and Fock contributions, respectively, where
$V_{\rm d}({\bf x})$ represents the dipole-dipole potential and $\tilde{V}_{\rm d}({\bf q})$ its Fourier transform. We consider a system of dipolar
fermions with the point dipoles aligned along the z-direction so that $V_{\rm d}({\bf x})$ reads
\begin{equation}
V_{\rm d}({\bf x})=\frac{C_{\rm dd}}{4\pi \left|{\bf x}\right|^3}\left(1-3 \cos^2 \vartheta \right)\,,
\label{dipolepotential}
\end{equation}
with $\vartheta$ being the angle between the direction of the polarization of the dipoles and their relative position. The dipole-dipole interaction
strength is
characterized for magnetic atoms by $C_{\rm dd}=\mu_0m^2$, with $\mu_0$ being the magnetic permeability in vacuum and $m$ denoting the magnetic dipole
moment. In the case
of heteronuclear molecules with electric moment, the interaction strength is given by $C_{\rm dd}=d^2/{\rm \epsilon_0}$, with the vacuum dielectric
constant
$\epsilon_0$ and the electric dipole moment $d$. Note that
the Fourier transform of the dipole-dipole interaction potential (\ref{dipolepotential}) is given by \cite{goral}
\begin{equation}
\tilde{V}_{\rm d}({\bf k})=\int d^3x V_{\rm d}({\bf x}){\rm e}^{i{\bf k}\cdot
{\bf x}}=\frac{C_{\rm dd}}{3}\left( \frac{3 k_z^2}{{\bf k}^2}-1\right)\,.
\label{FTDDP}
\end{equation}
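Note, as a quick consistency check, that the angular average of (\ref{FTDDP}) vanishes, since $k_z^2/{\bf k}^2$ averages to $1/3$ over the unit sphere:
\begin{equation}
\frac{1}{4\pi}\int d\Omega_{\bf k}\,\tilde{V}_{\rm d}({\bf k})=\frac{C_{\rm dd}}{3}\left(3\cdot\frac{1}{3}-1\right)=0\, ,
\end{equation}
which reflects the partially attractive and partially repulsive character of the dipole-dipole interaction.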
The collision integral $I_{\rm coll}[\nu]$ represents a nonlinear functional of the distribution function and is
of second order of the dipole-dipole potential $V_{\rm d}({\bf x})$. Its concrete
form and derivation for a general two-body interaction potential can be found, for instance, in Ref.~\cite{KadanoffBaym}.
In order to find an approximate solution of the BVE in the vicinity of
equilibrium we use the scaling method from Ref.~\cite{String}. To this end, we assume that
the distribution function $\nu({\bf x},{\bf q},t)$ can be obtained from rescaling the equilibrium distribution function $\nu^0({\bf r},{\bf k})$,
which satisfies
\begin{equation}
\left\{\frac{\hbar {\bf q}}{M}+\frac{1}{\hbar} \frac{\partial\left[U({\bf x})+U_{\rm mf}({\bf x},{\bf q})\right]}{\partial {\bf q}}\right\}
\frac{\partial \nu^0}{\partial {\bf x}}-\frac{1}{\hbar}\frac{\partial\left[U({\bf x})+U_{\rm mf}({\bf x},{\bf q})\right]}{\partial {\bf x}}
\frac{\partial \nu^0}{\partial {\bf q}}=0\,,
\label{BVEE}
\end{equation}
according to
\begin{equation}
\nu({\bf x},{\bf q},t) = \Gamma \nu^0\left({\bf r}(t),{\bf k}(t)\right)\,.
\label{scalingWigner}
\end{equation}
Thereby we have introduced the scaling parameters $b_i$ and $\Theta_i$ via
\begin{eqnarray}
r_i &=&{\displaystyle \frac{x_i}{b_i(t)}}\,,
\label{b}\\
k_i &=&{\displaystyle \frac{1}{\sqrt{\Theta_i (t)}}\left( q_i-\frac{m\dot{b}_i(t) x_i}{\hbar b_i(t)}\right)}\,,
\label{scalingwignerk}
\end{eqnarray}
where the second term in Eq.~(\ref{scalingwignerk}) describes a transformation of the ${\bf q}$-vector resulting in a vanishing
local velocity field \cite{CastinDum}. As the scaling parameters
$b_i$ and $\Theta_i$ denote the time-dependent deviation from equilibrium, their equilibrium values are given by $b_i^0=\Theta_i^0=1$.
Furthermore, the term
\begin{equation}
\Gamma={\displaystyle \frac{1}{\prod_j b_j \sqrt{\Theta_j}}}
\end{equation}
ensures the normalization of the distribution function $\nu$.
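This can be checked explicitly: the substitution (\ref{b}), (\ref{scalingwignerk}) carries the Jacobian $\prod_j b_j \sqrt{\Theta_j}$, so that
\begin{equation}
\int \frac{d^3x\,d^3q}{(2\pi)^3}\,\nu({\bf x},{\bf q},t)
=\Gamma \prod_j b_j \sqrt{\Theta_j} \int \frac{d^3r\,d^3k}{(2\pi)^3}\,\nu^0({\bf r},{\bf k})
=\int \frac{d^3r\,d^3k}{(2\pi)^3}\,\nu^0({\bf r},{\bf k})
\end{equation}
holds for all times.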
Ordinary differential equations for the scaling parameters $b_i$ and $\Theta_i$ can be obtained by taking moments of the BVE, i.e.~by integrating it
with a
prefactor over the whole phase space. Using the prefactor $r_ik_i$, i.e.~performing the operation $\int d^3rd^3k/(2\pi)^3r_ik_i \,\times$ (\ref{bve}),
leads to the following coupled differential equations for the spatial scaling parameters $b_i$
\begin{align}
&\ddot{b}_i+ \omega_i^2 b_i-\frac{\hbar^2 \left\langle k_i^2 \right\rangle^0 \Theta_i}{m^2 b_i \left\langle r_i^2 \right\rangle^0}
+\frac{1}{2m b_i \left\langle r_i^2
\right\rangle^0}\left[\int \frac{d^3k}{(2\pi)^3}\tilde{W}_i(b,{\bf k}) \tilde{n}^0({\bf k}) \tilde{n}^0(-{\bf k})\right. \nonumber \\
&\hspace{1cm}\left. -\frac{1}{\prod_j b_j}\int \frac{d^3rd^3kd^3k'}{(2\pi)^6}\nu^0({\bf r},{\bf k})\nu^0({\bf r},{\bf k'}) \tilde{W}_i(\Theta,{\bf k}
-{\bf k'})\right]=0\,,
\label{diffglb}
\end{align}
where $\left\langle \bullet \right\rangle^0=\int d^3rd^3k\,\bullet \nu^0({\bf r},{\bf k})/(2\pi)^3$ denotes the phase-space average in equilibrium
and $\tilde{n}^0$ is the
Fourier transform of the spatial density in equilibrium $n^0({\bf r})=\int d^3q\,\nu^0({\bf r},{\bf q})/(2\pi)^3$.
Furthermore $\tilde{W}_i(b,{\bf k})$ represents an abbreviation for
\begin{equation}
\tilde{W}_i(b,{\bf k})=F\left[r_i \frac{\partial V_{\rm d}(b,{\bf r})}{\partial r_i}\right]\,,
\label{fouriertransformedhartreepotential}
\end{equation}
where $V_{\rm d}(b,{\bf r})$ denotes the rescaled dipole-dipole potential and $F[\bullet]$ the Fourier transform. The other abbreviation function
$\tilde{W}_i(\Theta,{\bf k})$ is defined by
\begin{equation}
\tilde{W}_i(\Theta,{\bf k}-{\bf k'})=F\left[ r_i\frac{\partial V_{\rm d}({\bf r})}{\partial r_i}\right]\left(\Theta_x^{\frac{1}{2}}(k_x-k_x'),
\Theta_y^{\frac{1}{2}}(k_y-k_y'),\Theta_z^{\frac{1}{2}}(k_z-k_z')\right)\,.
\label{fouriertransformedfockpotential}
\end{equation}
We observe that Eq.~(\ref{diffglb}) has a Newtonian
form containing trapping, kinetic, and Hartree-Fock mean-field energy terms. Note that the contribution of the collision integral in (\ref{diffglb})
vanishes,
because the quantity $r_ik_i$ is conserved under collisions \cite{String}.
The effect of collisions is only contained in the differential equations for the
momentum scaling parameters $\Theta_i$, which can be obtained by taking moments of (\ref{bve}) with the prefactor $k_i^2$, leading to
\begin{equation}
\frac{\dot{\Theta}_i}{\Theta_i}+2\frac{\dot{b}_i}{b_i}=\frac{1}{\Gamma
\left\langle k_i^2 \right\rangle^0}\int \frac{d^3rd^3k}{(2\pi)^3}k_i^2I_{\rm coll}[\nu]\,.
\label{diffglT}
\end{equation}
We remark that taking moments of the BVE weighted with a prefactor of the form $r_ir_j$ does not provide new constraints between
the scaling parameters $b_i$ and $\Theta_i$
\cite{Dusling}.
\section{Equilibrium}\label{equi}
Before solving the coupled set of
Eqs.~(\ref{diffglb}) and (\ref{diffglT}) we have to simplify them by explicitly evaluating the respective integrals.
This can be done analytically by determining
the equilibrium distribution function $\nu^0({\bf r},{\bf k})$ in a self-consistent way within a variational ansatz. In the low-temperature regime
it is appropriate to
choose an ansatz which resembles the Fermi-Dirac distribution of a non-interacting Fermi gas at zero temperature:
\begin{equation}
\nu^0({\bf r},{\bf k})=\Theta \left(1-\sum_j\frac{r_j^2}{R_j^2}-\sum_j \frac{k_j^2}{K_j^2}\right)\,.
\label{ansatzwigner}
\end{equation}
Here $\Theta(\bullet)$ denotes the Heaviside step function, and the variational parameters $R_i$ and $K_i$ represent the Thomas-Fermi radii and momenta, respectively, which describe the extension of the
equilibrium Fermi surface
in both coordinate
and momentum space. The dipole-dipole interaction stretches both the particle density \cite{rzazewski} and the momentum distribution
\cite{miyakawa_t_2008} in the direction of the polarization, which is taken into account by a
possible anisotropy of the parameters $R_i$ and $K_i$. With this ansatz, the normalization of (\ref{ansatzwigner}) to $N$ fermions leads to
\begin{equation}
48N=\overline{R}^3\overline{K}^3\,,
\label{particleconservation}
\end{equation}
where the bar denotes geometric averaging, i.e.~$\overline{R}=\left(R_xR_yR_z\right)^{\frac{1}{3}}$. In order to be physically
self-consistent the ansatz Eq.~(\ref{ansatzwigner}) has
to minimize the total Hartree-Fock energy of the system as the collision integral vanishes in equilibrium. Hence the total energy
$E_t$ of the system consists only of the kinetic term
\begin{equation}
E_{\rm k}=\int d^3r\int \frac{d^3k}{(2\pi)^3}\frac{\hbar^2 {\bf k}^2}{2m}\nu^0({\bf r},{\bf k})\,,
\label{Ekin}
\end{equation}
the trapping term
\begin{equation}
E_{\rm tr}=\int d^3r \int \frac{d^3k}{(2\pi)^3} \frac{m}{2}\left(\sum_j\omega_j^2r_j^2\right)\nu^0({\bf r},{\bf k})\,,
\label{Etrapf0}
\end{equation}
the direct Hartree term
\begin{equation}
E_{\rm d}=\frac{1}{2}\int d^3r\int d^3r' \int \frac{d^3k}{(2\pi)^3}\int \frac{d^3 k'}{(2\pi)^3}V_{\rm d}({\bf r}-{\bf r'}) \nu^0({\bf r},{\bf k})
\nu^0({\bf r'},{\bf k'})\,,
\label{Edf0}
\end{equation}
and the Fock exchange term
\begin{equation}
E_{\rm ex} =-\frac{1}{2}\int d^3r \int d^3r' \int \frac{d^3k}{(2\pi)^3}\int \frac{d^3 k'}{(2\pi)^3} V_{\rm d}({\bf r'}) {\rm e}^{i({\bf k}-{\bf k'})
\cdot {\bf r'}} \nu^0({\bf r},{\bf k}) \nu^0({\bf r},{\bf k'})\,.
\label{Eex}
\end{equation}
Inserting the variational ansatz Eq.~(\ref{ansatzwigner}) for the equilibrium distribution
into the respective energy contributions (\ref{Ekin})--(\ref{Eex}) leads to various phase space integrals.
Whereas both kinetic energy (\ref{Ekin}) and trapping energy (\ref{Etrapf0}) yield elementary solvable integrals,
the computation of the Hartree-Fock integrals (\ref{Edf0}) and (\ref{Eex}) turns out to be more elaborate and is, therefore,
relegated to Appendix A, see also Ref.~\cite{miyakawa_t_2008,Aristeu}. The resulting total energy reads
\begin{equation}
E_{\rm t}=\frac{N}{8}\sum_j\frac{\hbar^2K_j^2}{2m}+\frac{N}{8}\frac{m}{2}\sum_j\omega_j^2R_j^2-\frac{48N^2c_0}{8\overline{R}^3}f\left(
\frac{R_x}{R_z},\frac{R_y}{R_z}
\right)+\frac{48N^2c_0}{8\overline{R}^3}f\left( \frac{K_z}{K_x},\frac{K_z}{K_y} \right)
\label{Etotequilib}
\end{equation}
with the constant
\begin{equation}
c_0=\frac{2^{10}C_{\rm dd}}{3^4\cdot 5\cdot 7\cdot \pi^3}\, .
\label{C0}
\end{equation}
The Hartree and Fock terms in (\ref{Etotequilib}) depend on the aspect ratio
of the Thomas-Fermi radii and momenta via
the anisotropy function $f(x,y)$, which is defined through the integral
\begin{equation}
f(x,y)=-\frac{1}{4\pi}\int_0^{2\pi}d\phi \int_0^{\pi}d\vartheta \, {\rm sin}\vartheta \left[ \frac{3x^2y^2{\rm cos}^2\vartheta}{\left(y^2{\rm cos}^2
\phi+x^2{\rm sin}^2\phi\right) {\rm sin}^2\vartheta+x^2y^2{\rm cos}^2\vartheta}-1\right]
\label{definitionanistropyfunction}
\end{equation}
and which can also be represented as follows \cite{Aristeu,Aristeudoktor}
\begin{equation}
f(x,y)=1 + 3 x y \, \frac{E(\varphi,q) - F(\varphi,q)}{(1-y^2)\sqrt{1-x^2}} \, ,
\end{equation}
where $F(\varphi,q)$ and $E(\varphi,q)$ are the elliptic integrals of the first and second kind, respectively, with $\varphi= \arcsin \sqrt{1-x^2}$
and $q^2=(1-y^2)/(1-x^2)$.
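A direct numerical quadrature of (\ref{definitionanistropyfunction}) can be used to confirm the characteristic values of the anisotropy function, namely $f(1,1)=0$ for a spherical distribution and the limits $f\to 1$ for $x,y\to 0$ and $f\to -2$ for $x,y\to\infty$; the following sketch is illustrative and not part of the original computation:

```python
import numpy as np

# Midpoint quadrature of the double integral defining the anisotropy function
# f(x, y): f(1, 1) = 0 (the dipolar interaction averages out for a spherical
# distribution), f > 0 for cigar-shaped (x, y < 1) and f < 0 for
# pancake-shaped (x, y > 1) configurations.
def f_aniso(x, y, n=400):
    phi = (np.arange(n) + 0.5) * 2.0 * np.pi / n
    th = (np.arange(n) + 0.5) * np.pi / n
    P, T = np.meshgrid(phi, th)
    num = 3.0 * x**2 * y**2 * np.cos(T)**2
    den = (y**2 * np.cos(P)**2 + x**2 * np.sin(P)**2) * np.sin(T)**2 \
        + x**2 * y**2 * np.cos(T)**2
    integrand = (num / den - 1.0) * np.sin(T)
    return -integrand.sum() * (2.0 * np.pi / n) * (np.pi / n) / (4.0 * np.pi)
```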
The variational parameters $R_i$, $K_i$ are now
determined by minimizing Eq.~(\ref{Etotequilib}) under the constraint of the particle conservation (\ref{particleconservation}).
Note that from this result it turns out that the distribution function $\nu^0$ is deformed from a sphere to an ellipsoid
in momentum space due to the Fock term as was first clarified by
Ref.~\cite{miyakawa_t_2008}. The corresponding effect in the particle density had already been obtained before
by means of a Gaussian ansatz in real space
\cite{rzazewski}. A detailed discussion how the Thomas-Fermi radii and momenta depend on the trap frequencies for both cylinder-symmetric and tri-axial
traps as well as on the dipole-dipole interaction strength $C_{\rm dd}$ can be found, for instance, in
Refs.~\cite{miyakawa_t_2008,1367-2630-11-5-055017,AristeuRapCom,Aristeu}.
On the basis of the variational ansatz Eq.~(\ref{ansatzwigner}) for the equilibrium distribution
the integrals in Eqs.~(\ref{diffglb}) for the scaling parameters $b_i$ can now be calculated analytically, yielding
\begin{align}
\ddot{b}_i+\omega_i^2b_i-\frac{\hbar^2 K_i^2 \Theta_i}{m^2b_i R_i^2 }+ \frac{48N c_0}{ mb_i R_i^2 \prod_j b_j R_j}\left[ f\left( \frac{b_xR_x}{b_zR_z},
\frac{b_yR_y}{b_zR_z}\right)-b_iR_i\frac{\partial}{\partial b_i R_i}f\left( \frac{b_xR_x}{b_zR_z},\frac{b_yR_y}{b_zR_z}\right)\right] \nonumber \\
-\frac{48N c_0}{ mb_i R_i^2 \prod_j b_j R_j}\left[ f\left(\frac{\Theta_z^{\frac{1}{2}}K_z}{\Theta_x^{\frac{1}{2}}K_x},
\frac{\Theta_z^{\frac{1}{2}}K_z}{\Theta_y^{\frac{1}{2}}K_y}\right)+\Theta_i^{\frac{1}{2}}K_i \frac{\partial}{\partial \Theta_i^{\frac{1}{2}}K_i}
f\left(\frac{\Theta_z^{\frac{1}{2}}K_z}{\Theta_x^{\frac{1}{2}}K_x}, \frac{\Theta_z^{\frac{1}{2}}K_z}{\Theta_y^{\frac{1}{2}}K_y}\right)\right]=0\,.
\label{bequationcomplete}
\end{align}
The corresponding differential equations for the momentum scaling parameters $\Theta_i$ in Eq.~(\ref{diffglT}) still contain the collision integral.
In order to simplify the calculation, we model this collision integral within the widely used relaxation-time approximation
\cite{String,rt1,Dusling}
\begin{equation}
\label{rta}
I_{\rm coll} [\nu ]\approx -\frac{\nu-\nu^{\rm le}}{\tau}\,.
\end{equation}
Here the phenomenological parameter $\tau$ denotes the relaxation time, which corresponds to the average time between two collisions. Furthermore, we have introduced the local equilibrium
distribution function
$\nu^{\rm le}$, which is defined by the condition $I_{\rm coll}[\nu^{\rm le}]=0$ and represents the limiting function of the relaxation process for infinitely large times.
We assume that the collisions only change the
momentum distribution of $\nu^{\rm le}$ \cite{String}. This is justified by deriving the collision integral within a gradient expansion of the
distribution function and by considering only the first term \cite{KadanoffBaym}. Similar arguments have been used before
in the context of the local density approximation for bosonic dipolar gases \cite{quantumfluctuations,beyondmeanfield}. Therefore,
$\nu^{\rm le}$ is determined from rescaling the equilibrium distribution $\nu^0$
via an ansatz similar to Eq.~(\ref{scalingWigner}), i.e.
\begin{equation}
\nu^{\rm le}({\bf x},{\bf q},t)=\Gamma^{\rm le} \nu^0({\bf r}(t),{\bf k}^{\rm le} (t)) \, ,
\label{localscaling}
\end{equation}
with the old scaling parameters $b_i$ in real space according to (\ref{b}), but new scaling parameters $\Theta_i^{\rm le}$ in momentum space
\begin{equation}
k_i^{\rm le} =\frac{1}{\sqrt{\Theta_i^{\rm le} (t)}}\left( q_i-\frac{m\dot{b}_i(t) x_i}{\hbar b_i(t)}\right)\,,
\label{scalingwignerkle}
\end{equation}
thus yielding the corresponding normalization
\begin{equation}
\label{con}
\Gamma^{\rm le}=\frac{1}{\prod_j b_j \sqrt{\Theta_j^{\rm le}} } \,.
\end{equation}
Inserting the ansatz (\ref{rta}) into Eq.~(\ref{diffglT}) finally leads to
\begin{equation}
\dot{\Theta}_i+2\frac{\dot{b}_i}{b_i}\Theta_i=-\frac{1}{\tau}\left( \Theta_i-\Theta_i^{\rm le}\right) \, .
\label{thetaequationcomplete}
\end{equation}
The physical meaning of this equation is that dissipation occurs whenever the system is outside of local equilibrium
and collisions take place, i.e.~as long as the relaxation time $\tau$ remains finite.
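For frozen spatial dynamics ($b_i \equiv 1$, $\dot{b}_i=0$) and a constant $\Theta_i^{\rm le}$, Eq.~(\ref{thetaequationcomplete}) reduces to a pure exponential relaxation, $\Theta_i(t)=\Theta_i^{\rm le}+[\Theta_i(0)-\Theta_i^{\rm le}]\,{\rm e}^{-t/\tau}$. The following minimal Python sketch (with illustrative parameter values that are not taken from any experiment) integrates this limiting case and checks it against the analytic solution:

```python
import numpy as np

# illustrative parameters: frozen spatial scaling b = 1, so the momentum
# scaling obeys d(Theta)/dt = -(Theta - Theta_le)/tau and relaxes exponentially
tau, theta_le, theta0 = 0.5, 1.2, 1.0
dt, t_end = 1e-4, 2.0
theta, t = theta0, 0.0
while t < t_end - 0.5 * dt:
    # explicit midpoint (RK2) step
    k1 = -(theta - theta_le) / tau
    k2 = -((theta + 0.5 * dt * k1) - theta_le) / tau
    theta += dt * k2
    t += dt
analytic = theta_le + (theta0 - theta_le) * np.exp(-t_end / tau)
```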
In order to obtain a closed set of equations, we still have to find additional equations which determine the momentum scaling parameters $\Theta_i^{\rm le}$.
To this end, we evaluate the respective energy contributions Eqs.~(\ref{Ekin})--(\ref{Eex}) for the local equilibrium distribution (\ref{localscaling}),
for which the collision integral vanishes by definition. Note that, due to the presence of the DDI, the Fermi sphere is deformed,
thus reducing the symmetry of the momentum scaling from spherical to cylindrical. With this we obtain
\begin{align}
E_{\rm t}=&-\frac{N}{8}\frac{m}{2}\sum_i \dot{b}_i^2 R_i^2+\frac{N}{8}\sum_i \frac{\hbar^2 K_i^2 \Theta_i^{\rm le}}{2m}+\frac{N}{8} \frac{m}{2} \sum_i
\omega_i^2 b_i^2 R_i^2-\frac{48N^2c_0}{8\prod_j b_j R_j}f\left( \frac{b_x R_x}{b_z R_z},\frac{b_y R_y}{b_z R_z} \right)\nonumber \\
&+\frac{48N^2c_0}{8\prod_j b_j R_j}f\left( \frac{(\Theta_z^{\rm le})^{\frac{1}{2}} K_z}{(\Theta_x^{\rm le})^{\frac{1}{2}} K_x},
\frac{(\Theta_z^{\rm le})^{\frac{1}{2}} K_z}{(\Theta_y^{\rm le})^{\frac{1}{2}} K_y} \right)\,.
\label{Etotloc}
\end{align}
However, when determining the momentum scaling parameters $\Theta_i^{\rm le}$ by minimizing Eq.~(\ref{Etotloc}), we have to take into account that they
are not independent of one another. Summing all three Eqs.~(\ref{thetaequationcomplete}) yields in local equilibrium the constraint
\begin{equation}
\label{conn}
\prod_j b_j \sqrt{\Theta_j^{\rm le}} =1 \,.
\end{equation}
With this the minimization of the energy (\ref{Etotloc}) leads to
\begin{align}
\Theta_x^{\rm le}&=\Theta_y^{\rm le}\, ,
\label{thetaxthetay} \\
\frac{\hbar^2 \Theta_z^{\rm le} K_z^2}{2m}-\frac{\hbar^2 \Theta_x^{\rm le} K_x^2}{2m}&=-\frac{3}{2}\frac{48N c_0}{\prod_j b_j R_j}\left\{ -1+\frac{
\left( 2\Theta_x^{\rm le} K_x^2+\Theta_z^{\rm le} K_z^2 \right) f_{\rm s}\left[ \frac{\left(\Theta_z^{\rm le}\right)^{\frac{1}{2}}
K_z}{\left(\Theta_x^{\rm le}\right)^{\frac{1}{2}} K_x} \right]}{2\left( \Theta_x^{\rm le} K_x^2-\Theta_z^{\rm le} K_z^2 \right)} \right\}
\label{couplinglocalequ}
\end{align}
with the symmetric anisotropy function $f_{\rm s}(x)=f(x,x)$ \cite{odell,AristeuRapCom,glaumpfau,bosetempdip}.
These two equations show the deformation of the local equilibrium distribution function in momentum space, which disappears if we set the dipole-dipole
interaction to zero. Furthermore, we note that
the right-hand side of Eq.~(\ref{couplinglocalequ}) originates from the Fock term, hence the momentum distribution retains
cylinder symmetry even in
the case of an anisotropic harmonic trap. In the absence of the DDI, or, more precisely, in the absence of its Fock exchange term,
the momentum scaling parameters in local equilibrium assume the same values
in all three directions. This resembles the case of a two-component Fermi gas featuring contact interaction only \cite{String}.
Finally, we remark that a solution in the hydrodynamic regime, where the relaxation time goes to zero, i.e.~$\tau \rightarrow 0$,
has the same momentum symmetry as the local equilibrium. Hence
the momentum scaling parameters in this regime are also given by Eqs.~(\ref{conn})--(\ref{couplinglocalequ}) in accordance with
Refs.~\cite{AristeuRapCom,Aristeu}.
\section{Low-Lying Excitation Modes}\label{linear}
The equations of motion (\ref{bequationcomplete}), (\ref{thetaequationcomplete}), and (\ref{conn})--(\ref{couplinglocalequ}) for the scaling parameters $b_i$, $\Theta_i$,
and $\Theta_i^{\rm le}$ allow us to calculate the properties of the low-lying excitation modes via a linearization around the
respective equilibrium values.
\subsection{Linearization}
In view of a linearization of the equations of motion
we decompose all spatial and momentum scaling parameters according to $b_i=b_i^0+\delta b_i$,
$\Theta_i=\Theta_i^0+\delta \Theta_i$, and $\Theta_i^{\rm le}=\Theta_i^{{\rm le},0}+\delta \Theta_i^{\rm le}$ with the equilibrium values
$b_i^0=\Theta_i^0=\Theta_i^{{\rm le},0}=1$
for all $i$. We start with linearizing the local equilibrium conditions Eqs.~(\ref{conn})--(\ref{couplinglocalequ}),
which leads to
\begin{eqnarray}
\delta \Theta_x^{\rm le}- \delta \Theta_y^{\rm le}&=&0\, ,\label{linequ1}\\
\sum_j \delta b_j+\delta \Theta^{\rm le}_x+\frac{1}{2}\delta \Theta_z^{\rm le}&=&0\, ,\\
A \sum_j \delta b_j-B\delta \Theta_x^{\rm le}+C \delta \Theta_z^{\rm le}&=&0\,,
\label{linequ}
\end{eqnarray}
where $A$, $B$, and $C$ represent the following abbreviations
\begin{eqnarray}
A&=&-\frac{48Nc_0}{2\prod_j R_j} \left[ 2\frac{K_z}{K_x}f_1\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{K_z}{K_y}f_2\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right) \right]\,,\\
B&=&\frac{\hbar^2K_x^2}{2m}+\frac{48Nc_0}{2\prod_j R_j}\left[ \frac{K_z}{K_x}f_1\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+ \frac{K_z^2}{K_x^2}f_{11}\left(
\frac{K_z}{K_x},\frac{K_z}{K_y}\right)\nonumber \right. \\
&&
\left. +\frac{1}{2}\frac{K_z^2}{K_xK_y}f_{21}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+\frac{K_z^2}{K_xK_y}f_{12}\left(\frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{1}{2}\frac{K_z}{K_y}f_{2}\left(\frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{1}{2}\frac{K_z^2}{K_y^2}f_{22}\left(\frac{K_z}{K_x},\frac{K_z}{K_y}\right)\right]\,,\\
C&=&\frac{\hbar^2 K_z^2}{2m}+\frac{48Nc_0}{2 \prod_j R_j}\left[ \frac{K_z}{K_x}f_1\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+ \frac{K_z^2}{K_x^2}f_{11}\left(
\frac{K_z}{K_x},\frac{K_z}{K_y}\right)+\frac{K_z^2}{K_xK_y}f_{12}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
\right. \nonumber \\&&\left.
+\frac{1}{2}\frac{K_z}{K_y}f_2\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{1}{2}\frac{K_z^2}{K_xK_y}f_{21}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{1}{2}\frac{K_z^2}{K_y^2}f_{22}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)\right]\,.
\end{eqnarray}
Here, we have introduced the short-hand notations $f_1(x,y)=\partial f(x,y) / \partial x$ and $f_2(x,y)=\partial f(x,y) / \partial y$, while
$f_{ij}$ denotes the corresponding second derivative, taken first with respect to the $i$th and then with respect to the $j$th argument. From the linearized
local equilibrium conditions (\ref{linequ1})--(\ref{linequ}) we obtain formulas which show how the elongations of the scaling parameters
$\delta \Theta_i^{\rm le}$ and $\delta b_i$ are related to each other
\begin{eqnarray}
\delta \Theta_x^{\rm le}&=&\frac{A-2C}{B+2C}\left(\sum_j \delta b_j\right)\,,
\label{thetaxle}\\
\delta \Theta_z^{\rm le}&=&-2\frac{A+B}{B+2C}\left(\sum_j \delta b_j\right)\,.
\label{thetayle}
\end{eqnarray}
Notice that, in the absence of the momentum deformation induced by the Fock exchange term, one would have $A=0$ and $B=C$, thus yielding $\delta \Theta_x^{\rm le}=\delta \Theta_z^{\rm le}$.
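As a consistency check, the linear system (\ref{linequ1})--(\ref{linequ}) can be solved symbolically and compared with Eqs.~(\ref{thetaxle}) and (\ref{thetayle}). The following sketch uses SymPy (assumed to be available); the symbol $S$ abbreviates $\sum_j \delta b_j$:

```python
import sympy as sp

A, B, C, S = sp.symbols('A B C S')   # S stands for sum_j delta b_j
tx, tz = sp.symbols('tx tz')         # delta Theta_x^le and delta Theta_z^le

# second and third linearized local-equilibrium conditions; the first one
# merely sets delta Theta_y^le = delta Theta_x^le
sol = sp.solve([S + tx + sp.Rational(1, 2) * tz,
                A * S - B * tx + C * tz], [tx, tz], dict=True)[0]
```

One checks directly that the solution reproduces the quoted coefficients and that, for $A=0$ and $B=C$, both elongations coincide.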
Inserting Eqs.~(\ref{thetaxle}) and (\ref{thetayle}) into the linearization of
Eqs.~(\ref{bequationcomplete}) and (\ref{thetaequationcomplete}), the six parameters $\delta b_i$
and $\delta \Theta_i$ are determined by six equations. We first consider
the equations for the elongations of the momentum scaling
parameters $\delta \Theta_i$. Due to the cylinder symmetry in momentum space we have $\delta \Theta_x= \delta \Theta_y$,
and the equation for $\delta \Theta_x$ is given by
\begin{equation}
\delta \dot{\Theta}_x+2 \delta \dot{b}_x=-\frac{1}{\tau}\left\{\delta \Theta_x-\left[\frac{A-2C}{B+2C}\left(\sum_j \delta b_j\right)\right]\right\}\,, \label{deltadotthetax}
\end{equation}
and the equation for $\delta \Theta_z$ reads
\begin{equation}
\delta \dot{\Theta}_z+2 \delta \dot{b}_z=-\frac{1}{\tau}\left[\delta \Theta_z+2\frac{A+B}{B+2C}\left(\sum_j \delta b_j\right)\right]\,. \label{deltadotthetay}
\end{equation}
The equations for the elongations of the spatial scaling parameters $\delta b_i$ finally read
\begin{equation}
\delta \ddot{b}_i+\sum_j O_{ij}\delta b_j+\sum_j D_{ij}\delta \Theta_j=0\,,
\label{deltadotdotb}
\end{equation}
where we have defined
\begin{eqnarray}
O_{xx}&=&\omega_x^2+\frac{\hbar^2 K_x^2}{m^2 R_x^2}-\frac{48Nc_0}{mR_x^2 \prod_j R_j} E_1 \quad
O_{xy}=-\frac{48Nc_0}{mR_x^2\prod_j R_j}F_{12} \quad
O_{xz}=-\frac{48Nc_0}{mR_x^2\prod_j R_j} G_{12} \nonumber\\
O_{yx}&=&-\frac{48Nc_0}{mR_y^2\prod_jR_j} F_{21} \quad
O_{yy}=\omega_y^2+\frac{\hbar^2K_y^2}{m^2R_y^2}
-\frac{48Nc_0}{mR_y^2\prod_j R_j}E_2\nonumber\\
O_{yz}&=&-\frac{48Nc_0}{mR_y^2 \prod_j R_j} G_{21} \quad
O_{zx}=O_{zy}=-\frac{48Nc_0}{mR_z^2\prod_j R_j}I_{12} \quad
O_{zz}= \omega_z^2+\frac{\hbar^2 K_z^2}{m^2 R_z^2}
-\frac{48Nc_0}{mR_z^2 \prod_j R_j}J\nonumber\\
D_{xx}&=&-\left[ \frac{\hbar^2K_x^2}{m^2 R_x^2}+\frac{48Nc_0}{mR_x^2 \prod_j R_j} \frac{1}{2} \frac{K_z^2}{K_x^2} f_{11}
\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)\right] \quad
D_{xy}=-\frac{48Nc_0}{mR_x^2 \prod_j R_j} H_{12}\nonumber\\
D_{xz}&=&\frac{48Nc_0}{mR_x^2 \prod_j R_j}\left[\frac{1}{2}\frac{K_z^2}{K_x^2}f_{11}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+H_{12}\right] \quad
D_{yx}=-\frac{48Nc_0}{mR_y^2\prod_j R_j} H_{21}\nonumber\\
D_{yy}&=&-\left[ \frac{\hbar^2 K_y^2}{m^2R_y^2}+\frac{48Nc_0}{mR_y^2\prod_j R_j}
\frac{1}{2} \frac{K_z^2}{K_y^2}f_{22}\left(
\frac{K_z}{K_x},\frac{K_z}{K_y}\right) \right] \quad
D_{yz}=+\frac{48Nc_0}{mR_y^2\prod_j R_j}\left[\frac{1}{2}\frac{K_z^2}{K_y^2} f_{22}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+H_{21}\right] \nonumber\\
D_{zx}&=&\frac{48Nc_0}{mR_z^2 \prod_j R_j}M_{12} \quad
D_{zy}=\frac{48Nc_0}{mR_z^2 \prod_j R_j}M_{21} \quad
D_{zz}=\frac{\hbar^2K_z^2}{m^2R_z^2}+\frac{48Nc_0}{mR_z^2
\prod_j R_j}\left( M_{12}+M_{21} \right)\,.
\end{eqnarray}
Here, we have introduced the following abbreviations
\begin{eqnarray}
E_{i}&=&2f\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-2\frac{R_i}{R_z}f_i
\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-2f\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+2\frac{K_z}{K_i}f_i\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+\frac{R_i^2}{R_z^2}f_{ii}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)\,,\nonumber\\
F_{ij}&=&f\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-\frac{R_i}{R_z}f_i
\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-f\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{K_z}{K_i}f_i\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)-\frac{R_j}{R_z}f_j\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)\nonumber \nonumber\\
&&+\frac{R_iR_j}{R_z^2}f_{ij}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right),\nonumber\\
G_{ij}&=&f\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-\frac{R_i}{R_z}f_i
\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-f\left( \frac{K_z}{K_x},\frac{K_z} {K_y}\right)
+\frac{K_z}{K_i}f_i\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+\frac{R_j}{R_z}f_j\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right) \nonumber\\
& &-\frac{R_i^2}{R_z^2}f_{ii}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-\frac{R_iR_j}{R_z^2}f_{ij}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)\, ,\nonumber\\
H_{ij}&=&-\frac{1}{2}\frac{K_z}{K_j}f_j\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
+\frac{1}{2}\frac{K_z^2}{K_iK_j}f_{ij}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)\,,\nonumber\\
I_{ij}&=&f\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)+\frac{R_j}{R_z}f_j
\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-f\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
-\frac{K_z}{K_i}f_i\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right) \nonumber \\
&&-\frac{K_z}{K_j} f_j\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)
-\frac{R_i}{R_z}f_i\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)-\frac{R_i^2}{R_z^2}f_{ii}\left(
\frac{R_x}{R_z},\frac{R_y}{R_z}\right)-\frac{R_jR_i}{R_z^2}f_{ji}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)\,,\nonumber\\
J&=&2f\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)+4\frac{R_x}{R_z}
f_1\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)+4\frac{R_y}{R_z}f_2\left( \frac{R_x}{R_z},
\frac{R_y}{R_z}\right) \nonumber \\
& &-2f\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)-2\frac{K_z}{K_x}f_1\left(
\frac{K_z}{K_x},\frac{K_z}{K_y}\right)-2\frac{K_z}{K_y}f_2\left( \frac{K_z}{K_x},
\frac{K_z}{K_y}\right)+\frac{R_x^2}{R_z^2}f_{11}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right) \nonumber \\
& &+\frac{R_xR_y}{R_z^2}f_{12}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)+\frac{R_xR_y}{R_z^2}f_{21}
\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)+\frac{R_y^2}{R_z^2}
f_{22}\left( \frac{R_x}{R_z},\frac{R_y}{R_z}\right)\,,\nonumber\\
M_{ij}&=&\frac{K_z}{K_i}f_i\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+\frac{1}{2}
\frac{K_z^2}{K_i^2}f_{ii}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)+\frac{1}{2}
\frac{K_z^2}{K_iK_j}f_{ji}\left( \frac{K_z}{K_x},\frac{K_z}{K_y}\right)\, ,
\end{eqnarray}
where $R_1=R_x$, $R_2=R_y$, $K_1=K_x$, $K_2=K_y$, and $i,j \in \{ 1,2\}$.
\subsection{Cylinder-Symmetric System}
This linearized set of differential equations~(\ref{deltadotthetax})--(\ref{deltadotdotb}) allows us to determine the respective frequencies $\Omega$ of the low-lying excitations.
Due to the inclusion of the collisional term within the relaxation-time approximation, however, we obtain complex frequencies $\Omega$,
whose real parts represent the eigenfrequencies
of the low-lying modes of the system and whose imaginary parts describe the corresponding damping.
For simplicity we restrict ourselves from now on to a cylinder-symmetric system with $\omega_x=\omega_y=\omega_{\rho}$ and $\omega_z=\lambda \omega_{\rho}$, where we have used
\begin{equation}
\lim_{y \rightarrow x}{xf_1(x,y)}=\lim_{y \rightarrow x}{yf_2(x,y)}=-1+\frac{(2+x^2)f_{\rm s}(x)}{2(1-x^2)}\,,
\end{equation}
so that all derivatives of the anisotropy function can be reexpressed as algebraic functions containing $f_{\rm s}$.
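This limit can be verified numerically by differentiating the double-integral representation of $f(x,y)$ across the diagonal $y=x$. The following Python sketch (NumPy/SciPy assumed; the function names are ours) does so for the exemplary value $x=0.6$:

```python
import numpy as np
from scipy.integrate import dblquad

def f_integral(x, y):
    # anisotropy function f(x, y) from its double-integral definition
    def integrand(theta, phi):
        num = 3.0 * x**2 * y**2 * np.cos(theta)**2
        den = ((y**2 * np.cos(phi)**2 + x**2 * np.sin(phi)**2) * np.sin(theta)**2
               + x**2 * y**2 * np.cos(theta)**2)
        return np.sin(theta) * (num / den - 1.0)
    val, _ = dblquad(integrand, 0.0, 2.0 * np.pi,
                     lambda phi: 0.0, lambda phi: np.pi)
    return -val / (4.0 * np.pi)

x0, h = 0.6, 1e-3
# central difference for f_1(x0, x0) = df/dx on the diagonal
f1 = (f_integral(x0 + h, x0) - f_integral(x0 - h, x0)) / (2.0 * h)
lhs = x0 * f1
fs0 = f_integral(x0, x0)                   # f_s(x0) = f(x0, x0)
rhs = -1.0 + (2.0 + x0**2) * fs0 / (2.0 * (1.0 - x0**2))
```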
The differential equations (\ref{deltadotthetax})--(\ref{deltadotdotb}) are solved by assuming that all deviations from equilibrium oscillate with
one and the same frequency $\Omega$:
\begin{eqnarray}
\delta b_i=\xi_i {\rm e}^{\imath \Omega t}\,,\hspace{1cm}
\delta \Theta_i=\chi_i {\rm e}^{\imath \Omega t}\,.
\label{PW}
\end{eqnarray}
Eliminating the momentum-space amplitudes $\chi_i$ with the help of (\ref{deltadotthetax}) and (\ref{deltadotthetay}) turns (\ref{deltadotdotb}) into
three coupled equations for the amplitudes $\xi_i$ in position space
\begin{equation}
-\Omega^2 \xi_i+\sum_j\left( O_{ij}-\frac{2\imath \Omega \tau}{1+\imath \Omega \tau}D_{ij}+\frac{\alpha_i}{1+\imath \Omega \tau}\right) \xi_j=0\,,
\label{LA}
\end{equation}
where we have used
\begin{eqnarray}
\alpha_x&=& \frac{A-2C}{B+2C} (D_{xx}+D_{xy})-2\frac{A+B}{B+2C}D_{xz}\,,\\
\alpha_z&=& 2\frac{A-2C}{B+2C} D_{zx}-2\frac{A+B}{B+2C}D_{zz}\,.
\end{eqnarray}
The set of linear algebraic equations (\ref{LA}) has non-trivial solutions provided the corresponding determinant vanishes:
\begin{eqnarray}
0&=&\left[ O_{xx}-O_{xy}-\frac{2\imath \Omega \tau}{1+\imath \Omega \tau}\left(D_{xx}-D_{xy} \right)-\Omega^2 \right] \left\{ -2\left( O_{xx}
-\frac{2\imath \Omega \tau D_{xz}-\alpha_x}{1+\imath \Omega \tau} \right) \left( O_{zz}-\frac{2\imath \Omega \tau D_{zx}-\alpha_z}{1+\imath \Omega \tau} \right) \right. \nonumber \\
&&\left.+\left[ O_{xx}+O_{xy}-\frac{2\imath \Omega \tau \left( D_{xx}+D_{xy}\right)-2\alpha_x}{1+\imath \Omega \tau}-\Omega^2 \right]
\left( O_{zz}-\frac{2\imath \Omega \tau D_{zz}-\alpha_z}{1+\imath \Omega \tau} -\Omega^2 \right)\right\}\,. \label{determinantequation}
\end{eqnarray}
Although it is not possible to determine the complex frequencies $\Omega$ of the low-lying excitation modes analytically from (\ref{determinantequation}) for an arbitrary relaxation time $\tau$,
explicit results can be obtained for the limiting cases in which the relaxation time either diverges, i.e.~in the collisionless regime
\cite{1367-2630-11-5-055017,PhysRevA.83.053628}
\begin{eqnarray}
\Omega_{\rm rq;CL}^2&=&O_{xx}-O_{xy}+2\left( D_{xy}-D_{xx} \right)\,,\label{OmegarqCL}\\
\Omega_{+;{\rm CL}}^2&=&\frac{1}{2}\left( -2D_{xx}-2D_{xy}-2D_{zz}+O_{xx}+O_{xy}+O_{zz}+ \sqrt{R_1} \right) \,,\label{OmegamoCL}\\
\Omega_{-;{\rm CL}}^2&=&\frac{1}{2}\left( -2D_{xx}-2D_{xy}-2D_{zz}+O_{xx}+O_{xy}+O_{zz}- \sqrt{R_1} \right) \,,\label{OmegatqCL}
\end{eqnarray}
or vanishes, i.e. in the hydrodynamic regime \cite{AristeuRapCom,Aristeu}
\begin{eqnarray}
\Omega_{\rm rq;HD}^2&=&O_{xx}-O_{xy}\,,\label{Omegarqhy}\\
\Omega_{+;{\rm HD}}^2&=&\frac{1}{2}\left(2\alpha_x+\alpha_z+O_{xx}+O_{xy}+O_{zz}+\sqrt{R_2} \right)\,, \label{Omegamohy}\\
\Omega_{-;{\rm HD}}^2&=&\frac{1}{2}\left(2\alpha_x+\alpha_z+O_{xx}+O_{xy}+O_{zz}-\sqrt{R_2} \right)\,. \label{Omegatqhy}
\end{eqnarray}
Here the subscripts (rq), $(+)$, $(-)$ denote the radial quadrupole mode, the monopole mode, and the three-dimensional quadrupole
mode, respectively, and we have introduced the abbreviations
\begin{eqnarray}
R_1&=&\left( 2D_{xx}+2D_{xy}+2D_{zz}-O_{xx}-O_{xy}-O_{zz} \right)^2-4\left( -8D_{xz}D_{zx}+4D_{xx}D_{zz}+4D_{xy}D_{zz}-2D_{zz}O_{xx}\right. \nonumber \\
&&\left.-2D_{zz}O_{xy}+4D_{zx}O_{xz}+4D_{xz}O_{zx}-2O_{xz}O_{zx}-2D_{xx}O_{zz}-2D_{xy}O_{zz}+O_{xx}O_{zz}+O_{xy}O_{zz} \right), \\
R_2&=&\left( 2\alpha_x+\alpha_z+O_{xx}+O_{xy}+O_{zz} \right)^2-4\left( \alpha_zO_{xx}+\alpha_zO_{xy}-2\alpha_zO_{xz}\right.\nonumber \\
&&\left.-2\alpha_xO_{zx}-2O_{xz}O_{zx}+2\alpha_xO_{zz}+O_{xx}O_{zz}+O_{xy}O_{zz} \right)\,.
\end{eqnarray}
The limiting frequencies (\ref{OmegarqCL})--(\ref{Omegatqhy}) are useful to rewrite the characteristic equation (\ref{determinantequation})
in a compact form which is similar to the one for a Bose gas with contact interaction \cite{String}
\begin{equation}
\left(P[\Omega]+\frac{1}{\imath \Omega \tau}Q[\Omega]\right) \left(S[\Omega]+\frac{1}{\imath \Omega \tau}T[\Omega]\right)=0\,,
\label{determinant}
\end{equation}
where the respective polynomials are given by
\begin{eqnarray}
P[\Omega]&=&(\Omega^2-\Omega_{+;{\rm CL}}^2)(\Omega^2-\Omega_{-;{\rm CL}}^2)\,, \hspace{0.5cm}
Q[\Omega]=(\Omega^2-\Omega_{+;{\rm HD}}^2)(\Omega^2-\Omega_{-;{\rm HD}}^2)\,,\\
S[\Omega]&=&\Omega^2-\Omega_{\rm rq;CL}^2, \hspace{0.5cm} T[\Omega]=\Omega^2-\Omega_{\rm rq;HD}^2\, .
\end{eqnarray}
Note that the characteristic equation (\ref{determinant}) represents our main result for the investigation of the low-lying modes in a cylinder-symmetric trap.
It has three physical solutions, corresponding to the above-mentioned monopole mode, three-dimensional quadrupole mode, and radial quadrupole mode,
and describes their dependence on the relaxation time $\tau$, thus interpolating monotonically between the collisionless regime
\cite{1367-2630-11-5-055017,PhysRevA.83.053628} and the hydrodynamic regime \cite{AristeuRapCom,Aristeu}.
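The interpolation encoded in Eq.~(\ref{determinant}) can be made explicit for the radial quadrupole branch: multiplying $S[\Omega]+T[\Omega]/(\imath\Omega\tau)=0$ by $\imath\Omega\tau$ yields a cubic polynomial in $\Omega$. The following Python sketch (NumPy assumed) solves it numerically and checks the two limits; for illustration we use the interaction-free limiting values $\Omega_{\rm rq;HD}=\sqrt{2}\,\omega_\rho$ and $\Omega_{\rm rq;CL}=2\,\omega_\rho$:

```python
import numpy as np

def rq_root(tau, om_cl, om_hd):
    # radial-quadrupole factor S[Om] + T[Om]/(i*Om*tau) = 0, multiplied
    # by i*Om*tau:  i*tau*Om^3 + Om^2 - i*tau*om_cl^2*Om - om_hd^2 = 0
    roots = np.roots([1j * tau, 1.0, -1j * tau * om_cl**2, -om_hd**2])
    # the oscillating physical branch is the root with the largest real part
    return roots[np.argmax(roots.real)]

om_hd, om_cl = np.sqrt(2.0), 2.0   # interaction-free limits in units of omega_rho
```

For $\tau \to 0$ the real part approaches $\Omega_{\rm rq;HD}$, for $\tau \to \infty$ it approaches $\Omega_{\rm rq;CL}$, and in between the imaginary part is positive, i.e.~the mode is damped for the ansatz (\ref{PW}).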
\setlength{\unitlength}{1cm}
\begin{center}
\begin{figure}[t]
\includegraphics[scale=0.98]{figure1.eps}
\caption{(Color online) Low-lying collective excitation frequencies within a cylinder-symmetric trap in the collisionless regime according to (\ref{OmegarqCL})--(\ref{OmegatqCL})
(upper red curves) and the hydrodynamic regime according to (\ref{Omegarqhy})--(\ref{Omegatqhy}) (lower blue curves) as a function
of the dimensionless dipolar interaction strength $\epsilon_{\rm dd}$ for different trap aspect ratios $\lambda$.}
\label{fig1}
\end{figure}
\end{center}
\subsection{Results}
In view of a numerical solution of Eq.~(\ref{determinant})
it is advantageous to introduce dimensionless variables. The
noninteracting case provides us with adequate units for all quantities of interest
in this paper, namely the Thomas-Fermi radius
\begin{equation}
R_i^0=\sqrt{\frac{2E_F}{m\omega_i^2}}
\end{equation}
and the corresponding Fermi momentum
\begin{equation}
K_F=\sqrt{\frac{2mE_F}{\hbar^2}}\,,
\end{equation>
which depends on the Fermi energy
\begin{equation}
E_F=\hbar \overline{\omega}(6N)^{\frac{1}{3}}
\end{equation}
with the geometric mean of the trap frequencies
$\overline{\omega}=\omega_\rho \lambda^{1/3}$, where $\lambda= \omega_z/\omega_{\rho}$ denotes the trap aspect ratio.
Using these physical dimensions leads to the dimensionless
dipole-dipole interaction strength
\begin{equation}
\epsilon_{\rm dd} =\frac{C_{\rm dd}}{4\pi}\left( \frac{m^3 \overline{\omega}}{\hbar^5} \right)^{\frac{1}{2}}N^{\frac{1}{6}} \, .
\end{equation}
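The unit system introduced above can be summarized in a few lines of Python; all parameter values below are hypothetical and serve only to illustrate the definitions (NumPy assumed):

```python
import numpy as np

hbar = 1.054571817e-34                   # J s
m = 9.6e-26                              # kg; hypothetical mass of a heavy fermionic atom
N = 4.0e4                                # hypothetical particle number
om_rho, lam = 2.0 * np.pi * 100.0, 5.0   # hypothetical omega_rho (rad/s), aspect ratio
om = np.array([om_rho, om_rho, lam * om_rho])   # omega_z = lambda * omega_rho
om_bar = om.prod()**(1.0 / 3.0)          # geometric mean of the trap frequencies
E_F = hbar * om_bar * (6.0 * N)**(1.0 / 3.0)    # Fermi energy
K_F = np.sqrt(2.0 * m * E_F) / hbar             # Fermi momentum
R0 = np.sqrt(2.0 * E_F / (m * om**2))           # noninteracting Thomas-Fermi radii
C_dd = 3.9e-51    # hypothetical dipolar coupling in SI units (order of magnetic dipoles)
eps_dd = C_dd / (4.0 * np.pi) * np.sqrt(m**3 * om_bar / hbar**5) * N**(1.0 / 6.0)
```

The definitions are mutually consistent: $E_F=\hbar^2K_F^2/(2m)$ and $E_F=m\omega_i^2 (R_i^0)^2/2$ hold by construction.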
At first, we discuss the frequencies of all three low-lying modes in the limiting cases of the collisionless regime ($\tau \rightarrow \infty$) and the hydrodynamic regime
($\tau \rightarrow 0$), which are depicted in Fig.~\ref{fig1} as a function of $\epsilon_{\rm dd}$ for the trap aspect ratios $\lambda=0.8, 4,$ and $10$. We observe that the
collisionless frequencies always turn out to be larger than the corresponding hydrodynamic frequencies due to an additional kinetic energy term \cite{String}.
This can be further analyzed for the example of the radial quadrupole mode, where
the frequencies (\ref{OmegarqCL}) and (\ref{Omegarqhy}) even reduce to the concise expressions
\begin{eqnarray}
\label{RQHY}
\Omega_{\rm rq;HD}^2&=&2\omega_x^2\left[1+\frac{3 \lambda^2 \epsilon_{\rm dd} c_{\rm d}}{8 \prod_j \tilde{R}_j}\frac{2\left( \tilde{R}_z^2-\lambda^2
\tilde{R}_x^2\right)-\left(4\tilde{R}_z^2+\lambda^2\tilde{R}_x^2\right)
f_s\left(\frac{\lambda\tilde{R}_x}{\tilde{R}_z}\right)}{\left(\tilde{R}_z^2-\lambda^2\tilde{R}_x^2\right)^2}\right] \, ,\\
\Omega_{\rm rq;CL}^2&=&4\omega_x^2\left\{1+\frac{\epsilon_{\rm dd} c_{\rm d}}{\prod_j \tilde{R}_j}\left[\frac{8\tilde{R}_z^4
-10\lambda^2\tilde{R}_x^2\tilde{R}_z^2-24\lambda^2\tilde{R}_x^2\tilde{R}_z^2 f_s\left(\frac{\lambda \tilde{R}_x}{\tilde{R}_z}\right)
+2\lambda^4\tilde{R}_x^4+9\lambda^4\tilde{R}_x^4f_s\left(\frac{\lambda \tilde{R}_x}{\tilde{R}_z}\right)}{16\left(\tilde{R}_z^2- \lambda^2 \tilde{R}_x^2\right)^2} \right. \right.\nonumber \\
& &\left.\left. +\frac{8\tilde{K}_x^4-10\tilde{K}_x^2\tilde{K}_z^2
-24\tilde{K}_x^2\tilde{K}_z^2f_s\left(\frac{\tilde{K}_z}{\tilde{K}_x}\right)+2\tilde{K}_z^4+9\tilde{K}_z^4f_s
\left(\frac{\tilde{K}_z}{\tilde{K}_x}\right)}{16\left( \tilde{K}_z^2-\tilde{K}_x^2 \right)^2} \right]\right\} \, ,
\label{RQCL}
\end{eqnarray}
where the dimensionless Thomas-Fermi radii and momenta read $R_i=\tilde{R}_iR_i^{0}$ and $K_i=\tilde{K}_iK_F$. Note, however, that our results (\ref{RQHY}) and (\ref{RQCL}) differ
significantly from the corresponding ones of Ref.~\cite{String}, where the Fock exchange term, and thus the deformation of the Fermi sphere into an ellipsoid, is not
taken into account.
Furthermore, we have verified by solving the characteristic equation (\ref{determinant})
that, for all possible values of both the interaction strength $\epsilon_{\rm dd}$ and the
trap anisotropy parameter $\lambda$, both the real and the imaginary parts of
the collective excitation frequencies $\Omega$ depend in the same qualitative way on the relaxation time $\tau$.
As a representative example, let us therefore consider
the frequencies of the low-lying modes for the trap anisotropy parameter $\lambda= 5$, which corresponds to a
pancake-like cloud, and for the relative dipolar strength $\epsilon_{\rm dd}=1.33$ as shown in Fig.~\ref{fig2}. We observe that
all eigenfrequencies have smaller values in the
hydrodynamic than in the collisionless regime. The frequencies of all three modes increase monotonically with the relaxation time
and, eventually, reach
a plateau for large values of the relaxation time, where the system can be well described as completely collisionless.
Beyond these qualitative features, which are common to all three modes, it is interesting to remark that the passage from the hydrodynamic to the collisionless regime with increasing
relaxation time occurs differently for the respective modes.
Indeed, by comparing the graphs in Fig.~\ref{fig2}, we find that the transition from the hydrodynamic to the collisionless regime with
increasing relaxation time $\tau$ occurs fastest for the monopole mode, moderate for the three-dimensional quadrupole mode and slowest for the radial quadrupole
mode. Moreover, the monopole mode experiences the largest frequency change during this transition, while the three-dimensional quadrupole
mode exhibits the smallest one.
This overall picture of how the transition from the hydrodynamic to the collisionless regime occurs
is confirmed if one also analyzes the imaginary parts of the complex
frequencies $\Omega$, which represent
the damping rates of the low-lying collective modes. First of all we note that they
vanish in the limiting cases of the hydrodynamic and collisionless regime, as depicted
in Fig.~\ref{fig3}. Furthermore, from comparing
the real with the imaginary parts of the frequencies, we read off two important conclusions.
First, both the position and the width of the damping peaks
reveal the respective regions where the main change of the real part of the frequencies occurs.
This makes it possible to determine the crossover regions as well as
the regions in which the system behaves mainly either hydrodynamically or collisionlessly. Accordingly, one recognizes in Fig.~\ref{fig3}
that the transition for the monopole mode is the most abrupt one, i.e.~
its oscillation frequency changes by a large amount, while the transitions for the other two modes take place at larger values of the
relaxation time and their frequency changes are not as large.
The second conclusion is that
the imaginary part exhibits a peak for an intermediate relaxation time.
A quantitative analysis reveals that
the height of this peak is, to a good approximation, proportional to the difference between the real parts of the frequencies
in the hydrodynamic and the collisionless regime \cite{Stringari}. However, detailed numerical studies show small deviations from this behaviour.
Therefore, we analyzed the dependence of the peak height on the limiting frequencies analytically for the radial quadrupole mode, whose frequency follows
according to (\ref{determinant}) from solving
\begin{equation}
\Omega^2-\Omega^2_{\rm rq;CL}+\frac{\Omega^2-\Omega^2_{\rm rq;HD}}{\imath \Omega \tau}=0\,.
\end{equation}
Splitting the complex frequency $\Omega$ into its respective
real and imaginary parts allows us to derive an analytic formula for the peak height, which turns out to depend
on the limiting frequencies as follows
\begin{equation}
{\rm Im} \Omega(\tau^*)=\frac{1}{4}\left( \frac{\Omega_{\rm rq;CL}}{\Omega_{\rm rq;HD}}+1 \right)
\left( \Omega_{\rm rq;HD}-\Omega_{\rm rq;CL} \right)\,,
\label{peakhightanalytic}
\end{equation}
where $\tau^*$ denotes the relaxation time at the peak in the imaginary part, which is determined from
$\frac{d {\rm Im} \Omega(\tau)}{d\tau}|_{\tau=\tau^*}=0$. Equation (\ref{peakhightanalytic}) shows that the peak height of the imaginary part of the radial quadrupole mode
depends approximately linearly on the difference between the limiting frequencies.
However, the prefactor in Eq.~(\ref{peakhightanalytic}) leads to a small deviation from this linear dependence for the radial quadrupole mode.
In order to reveal this deviation graphically, one has to choose a large value for the relative dipolar interaction strength
$\epsilon_{\rm dd}$, which, however, excludes a cigar-like cloud due to the instability of the dipolar interaction \cite{AristeuRapCom,Aristeu}.
According to Fig.~\ref{fig4} the ratio of the dissipative peak height to the difference of the collisionless and hydrodynamic frequencies decreases by 5.7\%, once the
trap aspect ratio $\lambda$ increases from 2 to 9 for $\epsilon_{\rm dd}=1.33$.
Numerically we find that also the other two modes reveal a similarly small deviation from the
linear dependence of the imaginary peak height on the difference of the limiting frequencies, which turns out to be most pronounced for the three-dimensional
quadrupole mode.
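The crossover behaviour discussed above can be reproduced with a short numerical scan over the relaxation time for the radial quadrupole branch of Eq.~(\ref{determinant}); as before, we use the interaction-free limiting frequencies purely for illustration (NumPy assumed):

```python
import numpy as np

def rq_root(tau, om_cl, om_hd):
    # radial-quadrupole factor of the characteristic equation, multiplied
    # by i*Om*tau:  i*tau*Om^3 + Om^2 - i*tau*om_cl^2*Om - om_hd^2 = 0
    roots = np.roots([1j * tau, 1.0, -1j * tau * om_cl**2, -om_hd**2])
    return roots[np.argmax(roots.real)]   # oscillating physical branch

om_hd, om_cl = np.sqrt(2.0), 2.0          # illustrative limiting frequencies
taus = np.logspace(-2.0, 2.0, 400)        # scan of the relaxation time
Om = np.array([rq_root(t, om_cl, om_hd) for t in taus])
k = int(np.argmax(Om.imag))               # damping peak at intermediate tau
```

The real part rises from close to $\Omega_{\rm rq;HD}$ to close to $\Omega_{\rm rq;CL}$, while the damping ${\rm Im}\,\Omega$ exhibits its maximum at an intermediate relaxation time $\omega_\rho \tau^* = {\cal O}(1)$, in qualitative agreement with Figs.~\ref{fig2} and \ref{fig3}.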
\setlength{\unitlength}{1cm}
\begin{center}
\begin{figure}[t]
\includegraphics[scale=1.0]{figure2.eps}
\caption{(Color online) Low-lying excitation mode frequencies within a cylinder-symmetric trap with respect to the relaxation time $\tau$
for the trap aspect ratio $\lambda=5$ and the dimensionless interaction strength $\epsilon_{\rm dd}=1.33$;
a) monopole mode (red curve), b) radial quadrupole (lower green curve) and three-dimensional quadrupole mode (upper blue curve).}
\label{fig2}
\end{figure}
\end{center}
\begin{figure}[t]
\includegraphics[scale=0.9]{figure3.eps}
\caption{(Color online) Damping of monopole mode (upper red curve), radial-quadrupole mode (middle green curve), and three-dimensional quadrupole mode (lower blue curve)
plotted with the same values for the trap aspect ratio and the dimensionless interaction strength as in Fig.~\ref{fig2}.
\label{fig3}
\end{figure}
\begin{figure}[t]
\includegraphics[scale=0.9]{figure4.eps}
\caption{(Color online) Ratio of dissipative peak height and difference of collisionless and hydrodynamic frequency for the radial quadrupole mode (green upper curve)
according to (\ref{peakhightanalytic}), the three-dimensional quadrupole mode (blue middle curve) and the monopole mode (red lower curve)
as a function of the trap aspect ratio $\lambda$ at the dimensionless interaction strength $\epsilon_{\rm dd}=1.33$.
\label{fig4}
\end{figure}
\section{Conclusion} \label{CON}
We studied the low-lying excitations of a harmonically trapped three-dimensional Fermi gas featuring the long-range and
anisotropic dipole-dipole interaction all the way from the collisionless to the hydrodynamic regime. Within the realm of the
relaxation-time approximation, we were able to include the effects of collisions in the Boltzmann-Vlasov equation.
In particular, we introduced
the local equilibrium distribution, which corresponds to the hydrodynamic regime \cite{AristeuRapCom,Aristeu}, and we treated the relaxation
time as a phenomenological parameter. Furthermore, we followed Ref. \cite{String} and solved the BV equation by
rescaling appropriately both space and momentum variables of the equilibrium distribution. With this, we obtained equations of motion
for the scaling parameters, whose linearization yields both the frequencies
and the damping rates of the oscillations from the real and imaginary parts of complex frequencies, respectively.
In order to access the radial quadrupole mode in addition to the monopole and three-dimensional quadrupole mode, we
started our calculation with a tri-axial configuration, which was later on specialized to the case of a cylinder-symmetric trap.
The values of the frequencies that we found interpolate, as expected, between the values obtained previously in both the
hydrodynamic regime \cite{AristeuRapCom,Aristeu} and the collisionless regime \cite{1367-2630-11-5-055017,PhysRevA.83.053628}, by increasing the
relaxation time from zero to infinity.
By considering different values of the relaxation time, which could be achieved experimentally by means of different interaction
strengths, for example, our analysis was able to identify qualitative and quantitative features of the transition from the
hydrodynamic to the collisionless regime. For particular values of the trap anisotropy and of the interaction strength, the
transition might be smooth for one mode while being abrupt for another one. In view of the great precision with which measurements
of the excitation frequencies in cold atomic systems are carried out nowadays, the present theoretical analysis could provide
important information on the collisional properties of such systems.
A few questions remain open, which could be addressed with the help of the present theoretical framework. For example, the
influence of the collisional term on the time-of-flight dynamics of the system could be considered, as this is a major
diagnostic tool for cold atomic gases. To this end, however, it would be of major importance to determine the phenomenologically introduced relaxation time microscopically.
Moreover, the inclusion of finite-temperature effects on the analysis could also
be of interest, as actual experiments are always performed at some finite, though very low temperature.
\section*{Acknowledgements}
We acknowledge inspiring discussions with Sandro Stringari.
One of us (A. R. P. L.) acknowledges financial support from the Coordena\c{c}\~ao de Aperfei\c{c}oamento de Pessoal de N\'ivel Superior (CAPES) at previous stages
of this work as well as the hospitality of the Departments of Physics of the Federal University of Cear\'a (Brazil)
and of the Technische Universit\"at Kaiserslautern (Germany), where part of this work
was carried out. Furthermore, we thank the support from the German Research Foundation (DFG) via the Collaborative
Research Center SFB/TR49 Condensed Matter Systems with Variable Many-Body Interactions.
\begin{appendix}
\section{Computation of the Energy Integrals}
In order to make the paper self-contained, we present
in the appendix the relevant steps for the evaluation of the Hartree-Fock energy integrals Eqs.~(\ref{Edf0}) and (\ref{Eex})
for the equilibrium distribution (\ref{ansatzwigner}).
This will lead to explicit expressions which depend only on the Thomas-Fermi radii and momenta
$R_i$ and $K_i$, see also Ref.~\cite{miyakawa_t_2008,Aristeu}.
\subsection{Hartree Energy}
The basic idea behind the calculation of the Hartree integral (\ref{Edf0}) is to decouple the distribution functions and the
interaction potential with respect to their spatial arguments via Fourier transforms.
Thus, we rewrite the Hartree term by using the Fourier transform of the potential $\tilde{V}_{\rm int}({\bf k})$ according to
\begin{equation}
E_{\rm d} =\frac{1}{2} \int \frac{d^3k''}{(2\pi)^3} \tilde{V}_{\rm int}({\bf k''}) \int \frac{d^3k}{(2\pi)^3}
\tilde{\nu}^0({\bf -k''},{\bf k}) \int \frac{d^3k'}{(2\pi)^3} \tilde{\nu}^0({\bf k''},{\bf k'})\,.
\label{unterteiltEd}
\end{equation}
In a first step we now have to compute the Fourier transform of the equilibrium Wigner function (\ref{ansatzwigner})
\begin{equation}
\label{firstFT}
\tilde{\nu}^0(-{\bf k''},{\bf k})=\int d^3x {\rm e}^{i{\bf k''} {\bf x}}\Theta \left[h({\bf k})-\sum_j\frac{x_j^2}{R_j^2}\right]\,,
\end{equation}
where $h({\bf k})=1-\sum_j k_j^2/K_j^2$ is a suitable abbreviation. The Fourier-transformed distribution function yields
\begin{equation}
\tilde{\nu}^0(-{\bf k}'',{\bf k})=\frac{(2\pi)^{\frac{3}{2}}\overline{R}^3
h({\bf k})^{\frac{3}{4}}\Theta[h({\bf k})]}{(k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^{\frac{3}{4}}}J_{\frac{3}{2}}\left[
h({\bf k})^{\frac{1}{2}}\left(k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2\right)^{\frac{1}{2}} \right] \,,
\label{FTW1}
\end{equation}
where $J_{i}(x)$ is the $i$-th Bessel function of first kind. The next step is to calculate the ${\bf k}$-integral over the Fourier-transformed Wigner function (\ref{FTW1})
in view of Eq.~(\ref{unterteiltEd}).
To this end we perform the substitution $k_i=K_i u_i$ and use the spherical symmetry of the integrand which leads to
\begin{align}
\int \frac{d^3k}{(2\pi)^3}\tilde{\nu}^0({\bf k}'',{\bf k})=&\frac{4 \pi \overline{R}^3
\overline{K}^3}{(2\pi)^{\frac{3}{2}}(k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^{\frac{3}{4}}} \int_0^1du u^2
(1-u^2)^{\frac{3}{4}} \nonumber \\
&\times J_{\frac{3}{2}}\left[ (1-u^2)^{\frac{1}{2}}(k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^{\frac{1}{2}} \right] \, .
\end{align}
This integral can be calculated after substituting $u={\cos}\,\vartheta$ and using the identity \cite[(6.683)]{Grad}
\begin{equation}
\int_0^{\pi/2}d\vartheta J_{\mu}(a \,{\rm sin}\,\vartheta){\rm sin}^{\mu+1}\vartheta \,{\rm cos}^{2\rho+1}
\vartheta=2^{\rho}\Gamma(\rho+1)a^{-\rho-1}J_{\rho+\mu+1}(a) \, ,
\end{equation}
which is valid for ${\rm Re}~\rho,{\rm Re}~\mu>-1$. Thus,
the integral over the Fourier transformed Wigner function reads
\begin{equation}
\int \frac{d^3k}{(2\pi)^3}\tilde{\nu}^0({\bf k}'',{\bf k})=\frac{\overline{R}^3
\overline{K}^3}{(k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^{\frac{3}{2}}}
J_3\left[ (k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^{\frac{1}{2}} \right]\,.
\label{FTD}
\end{equation}
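As a sanity check of this step, the identity (6.683) with the values $\mu=3/2$ and $\rho=1/2$ appearing here can be verified numerically; the following sketch uses SciPy, with the argument $a$ an arbitrary test value:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, jv

# Gradshteyn-Ryzhik (6.683) for mu = 3/2, rho = 1/2 (the values used above):
# int_0^{pi/2} J_mu(a sin x) sin^(mu+1)(x) cos^(2 rho+1)(x) dx
#   = 2^rho Gamma(rho+1) a^(-rho-1) J_(rho+mu+1)(a)
mu, rho, a = 1.5, 0.5, 2.7   # a is an arbitrary test argument

lhs, _ = quad(lambda x: jv(mu, a * np.sin(x)) * np.sin(x)**(mu + 1)
              * np.cos(x)**(2 * rho + 1), 0.0, np.pi / 2)
rhs = 2**rho * gamma(rho + 1) * a**(-rho - 1) * jv(rho + mu + 1, a)

print(abs(lhs - rhs))   # agrees to numerical-integration accuracy
```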
With the Fourier transform of the dipole-dipole interaction potential (\ref{FTDDP}) the Hartree energy (\ref{unterteiltEd}) then reduces to
\begin{equation}
E_{\rm d}=\int \frac{d^3k''}{(2\pi)^3}\frac{C_{dd}}{6}\left( \frac{3k_z''^2}{k''^2}-1\right)
\frac{\overline{R}^6\overline{K}^6}{(k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^3}
J_3^2\left[ (k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2)^{\frac{1}{2}} \right]\,.
\end{equation}
After substituting $k_i''R_i=u_i$ and switching into spherical coordinates, this integral yields
\begin{align}
E_{\rm d}=&\frac{C_{dd}\overline{R}^3 \overline{K}^6}{6 (2\pi)^3}\int_0^{\pi}
d\vartheta\int_0^{2\pi}d\phi \,{\rm sin}\,\vartheta \nonumber \\
&\times \left( \frac{3{\rm cos}^2\vartheta}{(R_z/R_x)^2{\rm cos}^2\phi{\rm sin}^2
\vartheta+(R_z/R_y)^2{\rm sin}^2\phi{\rm sin}^2\vartheta+{\rm cos}^2\vartheta}-1\right) \int_0^{\infty}du u^{-4}J_{3}^2(u)\,.
\end{align}
With the definition of the anisotropy function in Eq.~(\ref{definitionanistropyfunction}) and the identity \cite[(6.574.2)]{Grad}
\begin{align}
\int_0^{\infty}dt J_{\nu}(\alpha t)J_{\mu}(\alpha t)t^{-\lambda}=\frac{\alpha^{\lambda-1}\Gamma(\lambda)
\Gamma \left(\frac{\nu+\mu-\lambda+1}{2}\right)}{2^{\lambda} \Gamma\left( \frac{-\nu+\mu+\lambda+1}{2} \right)
\Gamma\left( \frac{\nu+\mu+\lambda+1}{2} \right) \Gamma\left( \frac{\nu-\mu+\lambda+1}{2} \right)} \, ,
\end{align}
which is valid for ${\rm Re}~(\nu+\mu+1)>{\rm Re}~(\lambda)>0,\alpha>0$, it can be cast in the final form
\begin{equation}
E_{\rm d}=\frac{-48N^2 c_0}{8 \overline{R}^3}f\left( \frac{R_x}{R_z},\frac{R_y}{R_z} \right),
\end{equation}
with the constant (\ref{C0}).
\subsection{Fock Energy}
It is possible to compute the Fock integral (\ref{Eex}) along similar lines.
Thus, the Fock term can be rewritten in the following form
\begin{equation}
E_{\rm ex}=-\frac{1}{2}\int d^3x' \int \frac{d^3k'}{(2\pi)^3}\int \frac{d^3k''}{(2\pi)^3}
\overline{\tilde{\nu}}^0({\bf k''},{\bf x'}) \overline{\tilde{\nu}}^0(-{\bf k''},-{\bf x'})\tilde{V}_{\rm int}({\bf k'})
{\rm e}^{i{\bf x'}\cdot {\bf k'}} \, ,
\label{generalfockequation}
\end{equation}
where $\tilde{\nu}^0({\bf k}'',{\bf k})$ denotes the Fourier transform of $\nu^0({\bf x},{\bf k})$ with respect to the first
variable and $\overline{\nu}^0({\bf x},{\bf x}')$ the Fourier transform with respect to the second variable.
The integral over the interaction potential leads
to the anisotropy function exactly as in the calculation of the Hartree energy.
In order to solve the integrals, the two Fourier transforms
of the Wigner function (\ref{ansatzwigner}) have to be determined. As the first one (\ref{firstFT})
was already calculated in Eq.~(\ref{FTW1}), we use this result for solving the second one
\begin{eqnarray}
\overline{\tilde{\nu}}^0(-{\bf k}'',{\bf x})
=\int \frac{d^3k}{(2\pi)^3}{\rm e}^{i {\bf k}\cdot {\bf x}}
\tilde{\nu}^0(-{\bf k}'',{\bf k})\, .
\end{eqnarray}
With this we get
\begin{eqnarray}
\overline{\tilde{\nu}}^0(-{\bf k}'',{\bf x})
=\int \frac{d^3k}{(2\pi)^{\frac{3}{2}}} {\rm e}^{i {\bf k}\cdot {\bf x}} \frac{\overline{R}^3\Theta\left(1-\sum_j
\frac{k_j^2}{K_j^2}\right)}{g({\bf k}'')^{\frac{3}{4}}}\left( 1-\sum_l \frac{k_l^2}{K_l^2} \right)^{\frac{3}{4}}
J_{\frac{3}{2}}\left[ \left(1-\sum_m \frac{k_m^2}{K_m^2} \right)^{\frac{1}{2}} g({\bf k}'')^{\frac{1}{2}}\right]\,,
\label{secondfouriertrafo}
\end{eqnarray}
where $g({\bf k}'')=k_x''^2R_x^2+k_y''^2R_y^2+k_z''^2R_z^2$ is a suitable
abbreviation. The three ${\bf k}$-integrals can all be treated in the same way,
so it is only necessary to demonstrate the computation
of one of them. By using the symmetry of the integrand and the substitution
$k_z=K_z\sqrt{1-\frac{k_x^2}{K_x^2}-\frac{k_y^2}{K_y^2}}\cos \vartheta$, we can rewrite Eq.~(\ref{secondfouriertrafo}) as
\begin{eqnarray}
\overline{\tilde{\nu}}^0(-{\bf k}'',{\bf x})&=&\frac{\overline{R}^3}{(2\pi)^{\frac{3}{2}}}\frac{2}{g({\bf k}'')^{\frac{3}{4}}}
\int dk_x dk_y {\rm e}^{ixk_x+iyk_y}\Theta \left( 1-\frac{k_x^2}{K_x^2}-\frac{k_y^2}{K_y^2}\right)
\left( 1-\frac{k_x^2}{K_x^2}-\frac{k_y^2}{K_y^2}\right)^{\frac{5}{4}} \\
&& \times \int_0^{\frac{\pi}{2}} d\vartheta \sin^{\frac{5}{2}}\vartheta K_z \cos\left( zK_z \sqrt{1-\frac{k_x^2}{K_x^2}
-\frac{k_y^2}{K_y^2}}\cos \vartheta \right) J_{\frac{3}{2}}\left[ g({\bf k}'')^{\frac{1}{2}} \left( 1-\frac{k_x^2}{K_x^2}
-\frac{k_y^2}{K_y^2} \right)^{\frac{1}{2}} \cos \vartheta \right].\nonumber
\end{eqnarray}
The remaining $\vartheta$-integral can be calculated using the identity \cite[(6.688.2)]{Grad}
\begin{align}
\int_0^{\pi/2}dx\, {\rm sin}^{\nu+1}x\,{\rm cos}(\beta {\rm cos}x)\,J_{\nu}(\alpha {\rm sin}x)=2^{-\frac{1}{2}}\sqrt{\pi}
\alpha^{\nu} \left( \alpha^2+\beta^2 \right)^{-\frac{1}{2}\nu-\frac{1}{4}} J_{\nu+\frac{1}{2}}\left[ \left( \alpha^2
+\beta^2\right)^{\frac{1}{2}}\right] \, ,
\label{gradshteynformel6.688.2}
\end{align}
which is valid for ${\rm Re}~\nu>-1$. The other two $k$-integrals can be treated in the same way leading finally to
\begin{equation}
\overline{\tilde{\nu}}^0(-{\bf k}'',{\bf x})=\frac{\overline{R}^3 \overline{K}^3}{\left[g({\bf k}'')
+z^2K_z^2+y^2K_y^2+x^2K_x^2\right]^{\frac{3}{2}}}J_3\left\{ [g({\bf k}'')+z^2K_z^2+y^2K_y^2+x^2K_x^2]^{\frac{1}{2}} \right\}\,.
\end{equation}
It is clear that $\overline{\tilde{\nu}}^0({\bf k}'',{\bf x})$ is an even function with respect to
${\bf k}''$, which simplifies further calculations.
The next step is to calculate the ${\bf x}'$-integral in Eq.~(\ref{generalfockequation}).
In order to avoid a quadratic Bessel function,
we use the integral representation \cite[(6.519.2.2)]{Grad}
\begin{equation}
\int_0^{\frac{\pi}{2}}J_{2\nu}(2z \sin t)dt=\frac{\pi}{2}J_{\nu}^2(z) \, ,
\end{equation}
which is valid for ${\rm Re}~\nu>-1/2$, thus leading to an integral over a Bessel function
\begin{equation}
J_3^2\left\{\left[x^2K_x^2+y^2K_y^2+z^2K_z^2+g({\bf k}'')\right]^{\frac{1}{2}}\right\}=\frac{2}{\pi}\int_0^{\frac{\pi}{2}} dt
J_6 \left\{ 2\sin t \left[ x^2K_x^2+y^2K_y^2+z^2K_z^2+g({\bf k}'') \right]^{\frac{1}{2}} \right\} \, .
\label{Besselidentity}
\end{equation}
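This identity, with $\nu=3$ as used here, can likewise be checked numerically (a sketch with arbitrary test arguments):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

# Gradshteyn-Ryzhik (6.519.2.2) with nu = 3, as used for the Fock term:
# int_0^{pi/2} J_6(2 z sin t) dt = (pi/2) J_3(z)^2
zs = (0.5, 2.0, 7.3)                     # arbitrary test arguments
dev = max(abs(quad(lambda t: jv(6, 2 * z * np.sin(t)), 0.0, np.pi / 2)[0]
              - (np.pi / 2) * jv(3, z)**2) for z in zs)
print(dev)   # agrees to numerical-integration accuracy
```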
We will treat the three spatial integrals separately, starting with the $z$-integral.
After the substitution $u_z=z K_z$, we use the identity \cite[(6.726.2)]{Grad}
\begin{eqnarray}
&&\hspace{-2cm}\int_0^{\infty}(x^2+b^2)^{-\frac{1}{2}\nu}J_{\nu}\left(a\sqrt{x^2+b^2}\right) \cos (cx) dx \nonumber \\
&& =\left\{
\begin{array}{cc}
\sqrt{\frac{\pi}{2}}a^{-\nu}
b^{-\nu+\frac{1}{2}}(a^2-c^2)^{\frac{1}{2}\nu-\frac{1}{4}}J_{\nu-\frac{1}{2}}\left( b\sqrt{a^2-c^2} \right)\, ;
& 0<c<a,b>0,{\rm Re}~\nu>- 1/2\\
0 \, ;& 0<a<c,b>0,{\rm Re}~\nu>- 1/2
\end{array}
\right.
\label{gradformel67262}
\end{eqnarray}
and obtain
\begin{align}
\int d^3x' \overline{\tilde{\nu}}^0({\bf k}'',{\bf x'})^2 {\rm e}^{i{\bf k'}\cdot {\bf x'}} =&\overline{R}^6\overline{K}^6\int
dx'dy' {\rm e}^{ix'k_x'+iy'k_y'}\frac{2 \sqrt{2}}{\sqrt{\pi} K_z}
\int_0^{\frac{\pi}{2}}dt \frac{\left(4\sin ^2t-\frac{k_z'^2}{K_z^2}
\right)^{\frac{11}{4}}\Theta\left( 2\sin t-\sqrt{\frac{k_z'^2}{K_z^2}} \right)}{(2 \sin t)^6\left[
x'^2 K_x^2+y'^2K_y^2+g({\bf k}'') \right]^{\frac{11}{4}}} \nonumber \\
&\times J_{\frac{11}{2}}\left\{ \left( 4\sin ^2t-\frac{k_z'^2}{K_z^2} \right)^{\frac{1}{2}}
\left[ x'^2 K_x^2+y'^2K_y^2+g({\bf k}'') \right]^{\frac{1}{2}} \right\}\,.
\label{heav}
\end{align}
The Heaviside function in Eq.~(\ref{heav}) ensures that both possible solutions of Eq.~(\ref{gradformel67262}) are covered.
The other two integrals will be calculated in the same way. The solution of the ${\bf x}'$-integral reads
\begin{align}
\int d^3x' \overline{\tilde{\nu}}^0({\bf k}'',{\bf x'})^2 {\rm e}^{i{\bf k'}\cdot {\bf x'}}=\frac{2(2\pi)^{\frac{3}{2}}}{\pi}
\overline{R}^6\overline{K}^3\int_0^{\frac{\pi}{2}}\frac{dt}{(2\sin t)^6}\frac{\left( 4\sin^2 t -\frac{k_z'^2}{K_z^2}
-\frac{k_y'^2}{K_y^2}-\frac{k_x'^2}{K_x^2}\right)^{\frac{9}{4}}}{g({\bf k}'')^{\frac{9}{4}}} \nonumber \\
\times J_{\frac{9}{2}}\left[ g({\bf k}'')^{\frac{1}{2}}\left( 4\sin^2 t -\frac{k_z'^2}{K_z^2}-\frac{k_y'^2}{K_y^2}
-\frac{k_x'^2}{K_x^2}\right)^{\frac{1}{2}} \right] \Theta\left(2\sin t-\sqrt{\frac{k_z'^2}{K_z^2}+\frac{k_y'^2}{K_y^2}
+\frac{k_x'^2}{K_x^2}}\,\right)\,.
\end{align}
The next step is to perform the ${\bf k}''$-integral. Using the spherical symmetry of the integrand, this integral can be calculated by
substituting $u_i=k_i''R_i$ and by then transforming the new integration variables into spherical coordinates.
This enables us to use the identity \cite[(6.561.17)]{Grad}
\begin{equation}
\int_0^{\infty}\frac{J_{\nu}(ax)}{x^{\nu-q}}dx=\frac{\Gamma\left( \frac{1}{2}q+\frac{1}{2} \right)}{2^{\nu-q}a^{\nu-q+1}
\Gamma\left( \nu-\frac{1}{2}q+\frac{1}{2} \right)} \, ,
\end{equation}
which is valid for $-1<{\rm Re}~ q<{\rm Re}~ \nu-1/2$, and leads to
\begin{align}
\int d^3k''\int d^3x' \overline{\tilde{\nu}}^0({\bf k}'',{\bf x'})^2 {\rm e}^{i{\bf k'}\cdot {\bf x'}}=&\frac{\pi^2}{3}\int_0^{\frac{\pi}{2}}
\frac{dt}{(2 \sin t)^6}\left( 4\sin ^2t-\frac{k_z'^2}{K_z^2}-\frac{k_y'^2}{K_y^2}-\frac{k_x'^2}{K_x^2} \right)^3 \nonumber \\
&\times \Theta\left( 2\sin t-\sqrt{\frac{k_z'^2}{K_z^2}+\frac{k_y'^2}{K_y^2}+\frac{k_x'^2}{K_x^2}} \,\right)\,.
\label{doubleintdoublefourexp}
\end{align}
The last step of the calculation of the Fock term is to solve the ${\bf k}'$-integral. To this end we substitute
$u_i=k_i'/K_i$ and switch to spherical coordinates. With this the Fock term reads
\begin{align}
E_{\rm ex}=&-\frac{1}{8 \cdot 3^2 \cdot (2\pi)^4}
\int_0^{2\pi}d\phi \int_0^{\pi} d\vartheta \sin \vartheta \left( \frac{3 \cos ^2 \vartheta}{\frac{K_x^2}{K_z^2}
\sin ^2 \vartheta \cos ^2 \phi +\frac{K_y^2}{K_z^2} \sin ^2 \vartheta \sin ^2 \phi +\cos ^2 \vartheta} -1\right) \nonumber \\
&\times \overline{K}^6\overline{R}^3\int_0^{\frac{\pi}{2}}\frac{dt}{(2\sin t)^6} \int_0^{2 \sin t}du u^2 \left(
4\sin ^2 t-u^2 \right)^3\,.
\end{align}
The $\vartheta$- and $\phi$-integrals lead to the anisotropy function
(\ref{definitionanistropyfunction}), and the $u$- and $t$-integrals are elementary, thus leading to the final result
\begin{equation}
E_{\rm ex}=\frac{48N^2c_0}{8\overline{R}^3}f\left( \frac{K_z}{K_x},\frac{K_z}{K_y} \right)
\end{equation}
with the constant (\ref{C0}).
\end{appendix}
\section{Introduction}
\label{sec:intro}
The standard description of quantum mechanics considers the time-evolution of isolated quantum systems whose unitary dynamics are governed by the Schr\"{o}dinger equation. Measurement is treated as an instantaneous non-unitary process through which a quantum system is projected into an eigenstate of the measured observable with a probability given by Born's rule. In reality, no system is completely isolated from its environment, and measurements are never truly instantaneous, but occur over some finite timescale determined by the details of the interaction between the measured system and its environment. The theory of quantum trajectories \cite{carmbook, gardinerbook} considers measurement as a continuous process in time, describing how the state of the quantum system evolves during measurement.
Due to the intrinsic quantum fluctuations of the environment, measurement is an inherently stochastic process. If a quantum system starts in a known quantum state $\ket{\psi(0)}$, then by accurately monitoring the fluctuations of its environment it is possible to reconstruct single quantum trajectories $\ket{\psi(t)}$, which describe the evolution of the quantum state in an individual experimental iteration.
The concept of quantum trajectories was first developed in the early 1990's as a theoretical tool to model continuously monitored quantum emitters \cite{carmbook, dali92,gard92}. For the next decade, quantum trajectories were used primarily in the quantum optics community, as a theoretical tool for numerical simulations of the ensemble behavior of open quantum systems \cite{dali92,scha95}. Typically, the master equation of an open quantum system cannot be solved analytically, and thus numerical solutions are often necessary. For a Hilbert space of dimension $N$, the density matrix $\rho$ consists of $N^2$ real numbers, and the computational time required to solve for its time evolution through the master equation scales as $N^4$ \cite{wisebook}. In contrast, the pure quantum state $\ket{\psi(t)}$ of an individual quantum trajectory can be described by $N$ complex numbers. Therefore, it is often advantageous to simulate an ensemble of stochastic quantum trajectories, which can be averaged together to recover the evolution of the density matrix, $\rho(t)$. Although the formalism of quantum trajectories is constructed from standard quantum mechanics \cite{brun02}, it can provide insight into foundational questions such as the quantum measurement problem \cite{gisi84, dios88, gisi92} and bears a close resemblance to the consistent histories interpretation of quantum mechanics \cite{grif84}.
Despite widespread theoretical use, quantum trajectories have only been investigated in a handful of experiments, due in part to the difficulty of performing highly efficient continuous quantum measurements. The earliest experiments to continuously monitor individual quantum systems were in the regime of strong measurement, where the system is quickly projected into an eigenstate of measurement, destroying any information about the phase of a coherent superposition. In such experiments, it is possible to track the `quantum jumps' between eigenstates \cite{ nago86,saut86,berg86, vija11}. Cavity quantum electrodynamics (CQED) experiments with Rydberg atoms have explored the weak measurement regime, tracking the quantum trajectories of a cavity field as it collapses from a coherent state into a photon number eigenstate \cite{guer07}. Other CQED experiments have used a cavity probe to continuously track the position of individual Cesium atoms \cite{hood00}. Quantum trajectories were first considered for solid state systems in the context of a quantum dot qubit monitored in real time by a quantum point contact charge sensor \cite{koro99, goan01}. In 2007, the conditional measurement dynamics of a quantum dot were investigated experimentally \cite{sukh07}. More recently, quantum trajectory theory has been used to solve for the conditional evolution of a continuously monitored superconducting qubit \cite{gamb08,koro11}. These results, when combined with recent advances in nearly-quantum-limited parametric amplifiers, which can be used to achieve highly efficient qubit readout, have enabled a detailed investigation of measurement backaction \cite{hatr13, camp14}.
In this article, we review recent experiments \cite{murc13,webe14,roch14,tan14} which, by weakly probing the field of a microwave frequency cavity containing a superconducting qubit, track the individual quantum trajectories of the system. These are the first experiments, on any system, which use quantum state tomography at discrete times along the trajectory to verify that the qubit state has been faithfully tracked. From the perspective of quantum information technology, these experiments demonstrate the great extent to which the process of measurement is understood in this system, and may inform future efforts in measurement-based feedback \cite{sayr11, vija12, dela14, groe13} for state stabilization \cite{blok14} and error correction.
The review is organized as follows. In section 2, we present a physical picture for continuous quantum measurement of superconducting qubits. In section 3, we demonstrate how to use a measurement result to reconstruct the conditional qubit state after measurement. In section 4, we explain how to reconstruct and tomographically verify individual quantum trajectories. Then, in section 5, we examine ensembles of quantum trajectories to gain insight into qubit state dynamics under measurement. In section 6, we discuss time-symmetric evolution under quantum measurement, and in section 7 we demonstrate quantum trajectories of a two-qubit system. Finally, in section 8 we explore potential applications in measurement-based feedback control and continuous quantum error correction.
\section{Continuous measurement of superconducting qubits}
\label{cont}
The experiments discussed in this review use artificial atoms formed from superconducting circuits. We focus in particular on the transmon circuit \cite{koch07} (Fig. 1A) which is composed of the non-linear inductance of a Josephson junction and a parallel shunting capacitance $C_{\Sigma}$. This circuit is characterized by the Josephson energy scale $E_J \equiv \hbar I_0/2e$ and the capacitive energy scale $E_C \equiv e^2/2C_\Sigma$, where $I_0$ is the junction critical current and $e$ is the elementary charge. A typical transmon circuit, with $E_J/h \sim 20$ GHz and $E_C/h \sim 200$ MHz, has several bound eigenstates (Fig. 1B) with energies $E_m$, where $m$ is a whole number that indexes the states. The lowest two levels form a qubit subspace, with transition frequency $\omega_{01}/2\pi \equiv (E_1-E_0)/h \sim 5$ GHz, and the difference in frequency between transitions to successively higher levels is given by the anharmonicity $ \alpha \approx E_C$. Due to the large $E_J/E_C$ ratio the transmon qubit is insensitive to charge noise, which, when combined with low-loss materials \cite{megr12, chan13} and designs that minimize the participation of surface dielectric loss \cite{paik11}, has allowed for planar qubits with coherence times of many tens of microseconds.
\begin{figure*}
\begin{center}
\includegraphics[angle = 0, width = .8\textwidth]{fig1v5}
\end{center}
\caption{\label{fig:fig1} Dispersive measurements of a superconducting qubit. (A) A transmon qubit couples dispersively to a microwave frequency cavity. Signals that reflect off of the cavity are amplified by a nearly quantum-limited lumped-element Josephson parametric amplifier. (B) Schematic representation of the transmon potential and corresponding energy levels. The two lowest energy levels form the qubit subspace. (C) Reflected signal as a function of probe frequency. The resonance frequency is shifted by $2\chi$ depending on whether the qubit is prepared in the $\ket{0}$ or $\ket{1}$ state. (D,E) The cavity is probed with a coherent microwave tone at a frequency $\omega_m$, initially aligned along the $X_1$ quadrature. After leaving the cavity, the tone acquires a qubit-state-dependent phase shift. (F) Phase-sensitive amplification along the $X_2$ (top) and the $X_1$ (bottom) quadratures. }
\end{figure*}
In order to control the qubit's interaction with its external environment, it is coupled to the fundamental mode of a three-dimensional waveguide cavity of frequency $\omega_c$ at a rate $g$, realizing a cavity quantum electrodynamics (CQED) architecture. In the dispersive regime, where the qubit-cavity detuning $\omega_q-\omega_c$ is large compared to $g$, the system is described by the Hamiltonian \cite{webe14,blai04} $H = H_0 + H_{\text{int}}+ H_{\text{R}}$, where
\begin{align}
&H_{\text{int}} = - \hbar \chi a^{\dagger}a \sigma_z \\
&H_{\text{R}} = \hbar \frac{\Omega}{2}\sigma_y.
\end{align}
\noindent Here $H_0$ describes the uncoupled qubit and cavity energies and decay terms, $\hbar$ is the reduced Planck constant, $\chi$ is the dispersive coupling rate, $a^{\dagger}$ and $a$ are the creation and annihilation operators for the cavity mode, and $\sigma_z$ is the qubit Pauli operator that acts on the qubit state in its energy eigenbasis. $H_{\text{int}}$ is an interaction term which equivalently describes a qubit-state-dependent frequency shift of the cavity of $-\chi \sigma_z$ (with the $\ket{0}$ state defined as $\sigma_z = +1$) and a qubit frequency that depends on the intracavity photon number $\hat{n} = a^{\dagger}a$ (an a.c. Stark shift). $H_\text{R}$ describes the effect of an optional microwave drive at the qubit frequency which causes the qubit state to rotate about the $y$ axis of the Bloch sphere at the Rabi frequency $\Omega$.
Because $H_{\text{int}}$ commutes with $\sigma_z$, the qubit-state-dependent phase shift can be used to perform a continuous quantum non-demolition (QND) measurement of the qubit state in its energy eigenbasis \cite{blai04,bragbook}. Figure 1C illustrates the phase of the reflected signal as a function of frequency. If we choose to measure at a frequency $\omega_m = (\omega_{\ket{0}} + \omega_{\ket{1}})/2$, where $\omega_{\ket{0}}$ and $\omega_{\ket{1}}$ are the cavity frequencies when the qubit is in the ground and excited states, respectively, then the phase difference in the internal cavity field for the two qubit states is given by $\Delta \theta = 4|\chi|/\kappa$, where $\kappa$ is the cavity decay rate. In the experiments presented here, we work in the small phase shift limit, with $|\chi|/\kappa \sim 0.05$.
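The QND condition can be made concrete with a small numerical check (a schematic sketch in a truncated Fock space with $\hbar = \chi = 1$, not the full experimental model; the truncation $n_{\max}$ is an arbitrary choice): the dispersive coupling commutes exactly with $\sigma_z$, whereas a Rabi drive along $\sigma_y$ does not.

```python
import numpy as np

# Dispersive interaction H_int = -hbar*chi* a^dag a (x) sigma_z in a
# truncated Fock space; hbar = chi = 1 for simplicity.
n_max = 10
a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
n_op = a.conj().T @ a                            # photon-number operator
sz = np.diag([1.0, -1.0])                        # |0> has sigma_z = +1

H_int = -np.kron(n_op, sz)
Sz = np.kron(np.eye(n_max), sz)

comm = H_int @ Sz - Sz @ H_int
print(np.max(np.abs(comm)))   # 0.0: sigma_z is conserved under measurement

# in contrast, a Rabi drive ~ sigma_y rotates sigma_z:
sy = np.array([[0, -1j], [1j, 0]])
H_R = np.kron(np.eye(n_max), sy)
print(np.max(np.abs(H_R @ Sz - Sz @ H_R)))   # 2.0: nonzero commutator
```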
We probe the cavity by applying a measurement tone at frequency $\omega_m$ initially aligned along the $X_1$ quadrature (Fig. 1D). Due to the vacuum fluctuations of the electromagnetic field, the quadrature amplitudes $X_1$ and $X_2$ of this field will fluctuate in time. The circle in Figure 1D represents the Gaussian variance of the input signal time-averaged for a time $\Delta t$. The area of the circle is inversely proportional to $\Delta t$. After reflecting off of the cavity, the measurement signal acquires a qubit-state-dependent phase shift, as depicted in Figure 1E. In the small $|\chi|/\kappa$ limit, the $X_2$ quadrature of the reflected signal contains information about the cavity phase, which is proportional to the qubit state. Likewise, the $X_1$ quadrature contains information about the amplitude of the cavity field and thus the fluctuating intracavity photon number.
In order to track the qubit state through an individual measurement, we need to accurately monitor the quantum fluctuations of the measurement signal, which are typically much smaller than the thermal fluctuations of the room temperature electronics that are needed to record the measurement result. Therefore, we must first amplify the signal above this noise floor. State-of-the-art commercial low-noise amplifiers, which are based on high electron mobility transistors (HEMTs) and can be operated at $4$ K, add tens of photons of noise to the measurement signal. Therefore, a more sensitive pre-amplifier is needed in order to overcome the added noise of the HEMT amplifier.
Over the past few years, Josephson junction based superconducting parametric amplifiers have emerged as an effective tool for realizing nearly-quantum-limited amplification. Phase-preserving amplifiers such as the Josephson parametric converter \cite{berg10} amplify both quadrature amplitudes evenly by a factor of $\sqrt{G}$, where $G$ is the power gain of the phase-preserving amplifier, and add at least a half photon of noise \cite{cave82} to the signal. Here, we focus instead on phase-sensitive amplification from a lumped-element Josephson parametric amplifier \cite{hatr11}, where one quadrature is amplified by a factor of $2\sqrt{G}$ and the other quadrature is de-amplified by the same factor.
When we apply a coherent measurement tone, characterized by an average intracavity photon number $\bar{n}$, its quantum fluctuations will cause the phase coherence of a qubit superposition state to decay at the ensemble dephasing rate $\Gamma = 8\chi^2\bar{n}/\kappa$. The ensemble dephasing rate will be the same regardless of how we choose to process the measurement signal after it leaves the cavity. However, the backaction of an \emph{individual} measurement will depend significantly on our choice of amplification scheme.
As depicted in Figure 1F, after a measurement tone initially aligned along the $X_1$ quadrature reflects off of the cavity and acquires a qubit-state-dependent phase shift and is then displaced to the origin of the $X_1\text{-}X_2$ plane by a coherent tone, we can choose to either amplify the $X_2$ quadrature (top panel) which contains qubit-state information or the $X_1$ quadrature (bottom panel) which contains information about the fluctuating intracavity photon number. Consider a qubit initially prepared in an equal superposition of $\sigma_z$ eigenstates, say $\sigma_x = +1$. If we perform ideal phase-preserving amplification of $X_1$, we also de-amplify the photon number information, and the measurement backaction drives the qubit state along a meridian of the Bloch sphere toward one of the poles. We refer to this case as a $z-$measurement, because we acquire information about the qubit state in the $\sigma_z$ basis. If instead we amplify $X_2$, we also de-amplify the qubit-state information, and measurement backaction drives the qubit state along the equator of the Bloch sphere. We refer to this case as a $\phi-$measurement, because we can track the phase of a qubit superposition state over the course of an individual measurement. For phase-preserving amplification both types of backaction are present.
We first focus on the case of the $z-$measurement. After the measurement tone leaves the parametric amplifier and passes through further stages of amplification we demodulate the signal and record the $X_2$ quadrature amplitude as a digitizer voltage $V(t)$. For a measurement of duration $\Delta t$, the measurement outcome $V_m$ is given by the time-average of $V(t)$. Depending on whether the qubit is initially prepared in the ground or the excited state, the Gaussian distribution describing the probability of attaining a particular measurement outcome will be shifted by a voltage $\Delta V \propto \Delta \theta$. In this review, we define a dimensionless measurement outcome $r = 2 V_m / \Delta V$, such that the ground and excited state distributions are centered about $r = \pm 1$, respectively, as illustrated in Figure 2A,D. From these distributions, we define the dimensionless measurement strength $S \equiv (2/a)^2$, where $a$ is the standard deviation of the dimensionless measurement distributions, which scales as $(\Delta t)^{-1/2}$. We also define the characteristic timescale $\tau$ over which the qubit state is projected as the amount of time required for the measurement histograms to be separated by twice their standard deviation, $\tau \equiv 4 \Delta t / S$. When $S$ is large ($\Delta t \gg \tau$), the ground and excited state histograms are well separated (Fig. 2A), and it is possible to determine the qubit state with high fidelity in an individual measurement. The measurement projects the qubit into an energy eigenstate, where it will remain after measurement. Instead, if $S$ is small ($\Delta t \lesssim \tau$), the ground and excited state histograms overlap (Fig. 2D), and an individual measurement only partially projects the qubit state.
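These two regimes can be mimicked with a few lines of code (a sketch based on the definitions above; the parameters are the illustrative values from the caption of Fig. 2). The outcome distributions are Gaussians centered at $r=\pm 1$ with standard deviation $a=\sqrt{\tau/\Delta t}$, so that $S = (2/a)^2 = 4\Delta t/\tau$.

```python
import numpy as np

# Dimensionless measurement outcomes r: Gaussians centered at r = +/-1
# for the ground/excited state, with standard deviation a = sqrt(tau/dt).
rng = np.random.default_rng(0)
n = 100_000
fidelities = []
for dt, tau in [(200.0, 50.0), (20.0, 150.0)]:   # strong vs. partial (ns)
    a = np.sqrt(tau / dt)
    r_ground = rng.normal(+1.0, a, n)
    r_excited = rng.normal(-1.0, a, n)
    # single-shot discrimination fidelity with a threshold at r = 0:
    fid = 0.5 * ((r_ground > 0).mean() + (r_excited < 0).mean())
    fidelities.append(fid)

print(fidelities)   # well-separated vs. strongly overlapping histograms
```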
\begin{figure*}
\begin{center}
\includegraphics[angle = 0, width = 1\textwidth]{fig2v2}
\end{center}
\caption{\label{fig:fig2} Continuous quantum measurement. Illustrative measurement histograms for a single time-step (A,D) along with simulated measurement records (B,E) and corresponding quantum trajectories (C,F). In the top panels, $\Delta t = 200$ ns, $\tau = 50$ ns, and $\Omega/2\pi = 8$ MHz, illustrating the quantum jumps regime. In the bottom panels, $\Delta t = 20$ ns, $\tau = 150$ ns, and $\Omega = 0$, illustrating the diffusive regime.}
\end{figure*}
Formally, general quantum measurements (partial or projective) are described by the theory of positive operator-valued measures (POVM) which yield the probability $P(m)=\mathrm{Tr}(\Omega_m \,\rho\, \Omega_m^\dagger)$ for obtaining an outcome $m$, and the associated back action on the quantum state, $\rho \rightarrow \Omega_m \,\rho\, \Omega_m^\dagger/P(m)$, where the operators $\Omega_m$ obey $\sum_m \Omega^\dagger_m \Omega_m = \hat{I}$. For example, for the projective measurements $\Omega_{\pm z} = (\hat{I} \pm \sigma_z)/2$, the probability of a measurement yielding the qubit in the $+z$ state is $P(+z) = \mathrm{Tr}(\Omega_{+z} \rho \Omega_{+z})= (1+\langle \sigma_z\rangle)/2$. The partial measurements discussed in this review are described by the POVM \cite{wisebook, jaco06},
\begin{align}
\Omega_r = \left(2 \pi a^2 \right)^{-1/4} e^{-(r- \sigma_z)^2/4a^2}
\label{povm}
\end{align}
\noindent where $1/4a^2 = \Delta t/ 4 \tau$. The $\sigma_z$ term in $\Omega_r$ causes the back action on the qubit degree of freedom, $\rho \rightarrow \Omega_r \rho \Omega_r^\dagger/P(r)$, due to the readout of the measurement result $r$, resulting in the measurement dynamics discussed in this review.
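The POVM back action is simple to implement numerically. The following sketch (our own minimal illustration, working in the $\sigma_z$ eigenbasis with $\sigma_z = \mathrm{diag}(1,-1)$ and $a^2 = \tau/\Delta t$) applies $\Omega_r$ to a density matrix and verifies the completeness relation by integrating $\Omega_r^\dagger \Omega_r$ over $r$:

```python
import numpy as np

def povm_element(r, dt, tau):
    """Omega_r of Eq. (povm): diagonal in the sigma_z basis, with a^2 = tau/dt
    so that the exponent is -(r - sigma_z)^2 * dt / (4 tau)."""
    a2 = tau / dt
    sz_eigs = np.array([1.0, -1.0])
    return (2 * np.pi * a2) ** -0.25 * np.diag(np.exp(-(r - sz_eigs) ** 2 / (4 * a2)))

def measure(rho, r, dt, tau):
    """Back action rho -> Omega_r rho Omega_r^dagger / P(r)."""
    O = povm_element(r, dt, tau)
    new = O @ rho @ O.conj().T
    return new / np.trace(new).real

rho_plus = np.full((2, 2), 0.5)               # qubit prepared along +x
rho_after = measure(rho_plus, r=0.8, dt=20e-9, tau=150e-9)

# Completeness: integrating Omega_r^dag Omega_r over r recovers the identity
rs = np.linspace(-40, 40, 8001)
dr = rs[1] - rs[0]
total = sum(povm_element(r, 20e-9, 150e-9).conj().T @ povm_element(r, 20e-9, 150e-9)
            for r in rs) * dr
```

A positive outcome $r$ biases the state toward the $r = +1$ (ground) pole, as expected from the Gaussian weights.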
Our ability to reconstruct the qubit state after an individual partial measurement is determined by the measurement quantum efficiency $\eta_m$. We have established that the noisy measurement tone contains information about the qubit state and causes ensemble dephasing at a rate $\Gamma$. In general, only a fraction, $\eta_m$, of this information is experimentally accessible, and the remainder is lost to environmental degrees of freedom. The measurement efficiency can be reduced from its ideal value of $\eta_m =1$ by losses between the cavity and the parametric amplifier, described by the collection efficiency $\eta_{col}$, and by added noise in the amplification chain, described by the amplification efficiency $\eta_{amp}$. In the experiments presented here, $\eta_m = \eta_{col} \eta_{amp} \sim 0.4$. The measurement strength depends linearly on $\eta_m$, and for dispersive measurements in the small phase shift limit is given by $S = 64 \chi^2 \bar{n} \eta_m \Delta t/ \kappa$.
We now turn our attention to continuous quantum measurement. In Figure 2 we illustrate quantum trajectories in the limiting cases of strong and weak measurement. We consider a sequence of $n$ measurements occurring at times $\{t_k = k \Delta t\}$ for $k = 0,1,...,n-1$, which result in a set of dimensionless measurement results $\{r_k\} = \{r_0,r_1,...,r_{n-1}\}$. If the qubit is prepared in a known initial state, then we can use the measurement results to track the qubit state as it evolves under measurement, computing the set of conditional qubit states $\{q_k\} = \{q_0,q_1,...,q_{n-1}\}$ corresponding to an individual measurement record $\{r_k\}$. Here the Bloch vector $q = (x,y,z)$ describes a general mixed single-qubit state in terms of the components $x \equiv \text{tr}[\rho\hat{\sigma}_x]$, $y \equiv \text{tr}[\rho\hat{\sigma}_y]$, and $z \equiv \text{tr}[\rho\hat{\sigma}_z]$, where $\rho$ is the qubit density matrix. In the limit where $\tau \lesssim \Delta t$, each time-step constitutes a (nearly) projective measurement. In the absence of any non-measurement dynamics, after the first time-step subsequent measurements will continue to project the qubit into the eigenstate corresponding to the initial measurement result. However, any additional dynamics, such as energy relaxation or Rabi driving, which occur on a timescale faster than $\Delta t$ will result in discontinuous jumps in the measurement record corresponding to quantum jumps \cite{vija11} of the qubit state (Fig. 2B,C).
In the opposite limit, where $\Delta t \ll \tau$, each measurement will only slightly perturb the qubit state. However, by performing a sequence of repeated partial measurements such that $t_{n-1} \gg \tau$, we can realize a projective measurement. In this case, the noisy detector signal can be used to reconstruct the diffusive trajectory of the qubit state as it is gradually projected toward a measurement eigenstate (Fig. 2E,F).
\section{Reconstructing the conditional quantum state}
\label{cond}
In this section, we discuss in detail how to reconstruct the qubit state conditioned on an individual measurement outcome. One approach, taken in references \cite{roch14, tan14}, is to solve a stochastic master equation for the conditional qubit state. Here, we instead describe a phenomenological approach based on Bayesian statistics \cite{koro11}, which provides a particularly simple route to single-qubit trajectories and was used in references \cite{murc13} and \cite{webe14}. As recently demonstrated in reference \cite{tan14}, for a single qubit under weak measurement and weak Rabi driving, both approaches yield similar results.
Consider a qubit prepared in the initial state $\rho(t = 0)$, which is weakly measured for a time $\Delta t$, yielding a measurement result $r$. Here, we show how to apply Bayes' rule of conditional probabilities to update our knowledge of the qubit state after the measurement. We first focus on the case of a $z-$measurement, with $\Omega = 0$. From Bayes' rule, we have
\begin{align}
\label{prob}
P(i|r) =&\, \dfrac{P(r|i)P(i)}{P(r)},
\end{align}
\noindent where $i$ describes the basis states $\{\ket{0},\ket{1}\}$. Here, the initial probabilities for finding the qubit in the ground or excited states are given by $P(0) = \rho_{00}(t\! =\!0)$ and $ P(1) = \rho_{11}(t\! = \!0)$. By expressing the measurement distributions $P(r|i)$ explicitly and taking the ratio of the conditional probabilities in equation \eqref{prob} for both basis states, we find that
\begin{eqnarray}
\label{diag}
\frac{\rho_{11}(\Delta t)}{\rho_{00}(\Delta t)} = \frac{\rho_{11}(0)}{\rho_{00}(0)}\frac{\text{exp}[-(r+1)^2/2a^2]}{\text{exp}[-(r-1)^2/2a^2]}.
\end{eqnarray}
\noindent For a qubit initially prepared in the state $q_I = (1,0,0)$ we find that
\begin{eqnarray}
\label{zz}
z^z= \text{tanh}\left(\frac{r\Delta t}{\tau}\right).
\end{eqnarray}
\noindent where the superscript `$z$' denotes a $z-$measurement.
Note that thus far we have used a classical rule of conditional probabilities to determine how the qubit populations evolve under measurement. Following reference \cite{koro11}, we account for the qubit coherence through the phenomenological assumption that
\begin{eqnarray}
\label{xz}
x^z = \sqrt{1-(z^z)^2}e^{-\gamma \Delta t}
\end{eqnarray}
\noindent Here the first term enforces normalization and the second term reflects our imperfect knowledge of the environment and leads to qubit dephasing characterized by the rate $\gamma = \Gamma - 1/2\tau$, where $\Gamma = 1/(2\tau\eta_m) + 1/T_2^*$ is the ensemble dephasing rate and $T_2^*\sim 20 \,\mu$s is the characteristic timescale for extra environmental dephasing.
For the case of a $\phi-$measurement, $z$ remains zero, and $x$ and $y$ are periodic in the accumulated qubit phase shift, and are given by \cite{koro11}
\begin{align}
\label{xp}&x^{\phi} = \text{cos}\left(\frac{r \Delta t}{\tau}\right)e^{-\gamma \Delta t}, \\
\label{yp}&y^{\phi} = -\text{sin}\left(\frac{r \Delta t}{\tau}\right)e^{-\gamma \Delta t},
\end{align}
\noindent where the superscript `$\phi$' denotes a $\phi-$measurement. Figure 3 illustrates the conditional quantum state as a function of $r$ for a $z-$measurement (panel A) and a $\phi-$measurement (panel B), with $\tau = 600$ ns and $\Delta t = 400$ ns. Note that the dephasing rate $\gamma$ due to the inaccessible part of the measurement signal is the same regardless of our choice of amplification axis. In both cases, a measurement outcome of $r = 0$ will leave $y$ and $z$ unchanged, but $x$ is reduced by a factor of $e^{-\gamma \Delta t}$.
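Equations \eqref{zz}--\eqref{yp} are straightforward to evaluate. The sketch below (our own helper names; parameters taken from the Figure 3 caption) computes the conditional state after a single partial measurement for both measurement types:

```python
import numpy as np

DT, TAU, GAMMA = 400e-9, 600e-9, 1.3e6    # parameters of Fig. 3

def conditional_state_z(r):
    """z-measurement, Eqs. (zz) and (xz), for the initial state (1, 0, 0)."""
    z = np.tanh(r * DT / TAU)
    x = np.sqrt(1 - z**2) * np.exp(-GAMMA * DT)
    return x, 0.0, z

def conditional_state_phi(r):
    """phi-measurement, Eqs. (xp) and (yp); z remains zero."""
    phase = r * DT / TAU
    damp = np.exp(-GAMMA * DT)
    return np.cos(phase) * damp, -np.sin(phase) * damp, 0.0

x_z, y_z, z_z = conditional_state_z(0.0)     # r = 0: x shrinks by exp(-gamma*dt)
x_p, y_p, z_p = conditional_state_phi(0.0)   # same reduction, as noted above
```

For $r = 0$ both update rules leave $y$ and $z$ untouched and reduce $x$ by the same factor $e^{-\gamma\Delta t}$, and the conditional Bloch vector always stays inside the unit sphere.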
A useful feature of dispersive CQED measurements is the ability to rapidly tune the measurement strength by changing the amplitude of the measurement tone. Therefore, it is straightforward to implement experimental sequences which combine partial and projective measurement. The sequence shown in Figure 3C is used to implement conditional quantum state tomography to verify that we can accurately account for the backaction of an individual measurement. The qubit is prepared in the initial state $(1,0,0)$, and weakly measured for a time $\Delta t$. Then, we perform an optional qubit rotation (of $\pi/2$ about the $\hat{y}$ axis to reconstruct $x$, $\pi/2$ about $-\hat{x}$ to reconstruct $y$, and no pulse to reconstruct $z$) followed by a projective measurement. For a given measurement outcome $r$, we perform a tomographic state reconstruction on the sub-ensemble of experimental iterations with similar measurement outcomes, in the range $r\pm\epsilon$, where $\epsilon \ll 1$. For superconducting qubits, this technique was first introduced in reference \cite{hatr13}, which considers the case of phase-preserving amplification. Shortly thereafter, this technique was demonstrated for phase-sensitive amplification \cite{murc13}.
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = .5\textwidth]{fig3v1}
\end{center}
\caption{\label{fig:fig3} Reconstructing the conditional quantum state. (A,B) The conditional quantum state after a measurement result $r$ for a qubit initially prepared in the state $(1,0,0)$, with $\Delta t = 400$ ns, $\tau = 600$ ns, and $\gamma = 1.3\times 10^6 \,\text{s}^{-1}$. Panel A depicts a $z-$measurement, and panel B depicts a $\phi-$measurement. (C) Experimental sequence for reconstructing the $x$ component of the conditional quantum state. (D) To perform quantum state tomography conditioned on the measurement result $r=1.7$, we average together the projective measurement outcomes for the sub-ensemble of measurement outcomes where $r = 1.7 \pm \epsilon$.}
\end{figure}
\section{Tracking individual quantum trajectories}
\label{traj}
Consider a qubit initially prepared in a known state $q_I$, which undergoes a sequence of $n$ partial measurements with outcomes $\{r_k\}$, as described in section \ref{cont}. In the limit where the duration $\Delta t$ of each measurement approaches zero, the set of conditional states $\{q_k\}$ describes the quantum trajectory $q(t)$. For simplicity, from here on we restrict our discussion to the case of a $z-$measurement. When $\Omega = 0$, we can calculate the conditional quantum state at each time-step $t_k = k \Delta t$ from equations \eqref{zz} and \eqref{xz} using only the initial state and the time-averaged measurement signal $\bar{r} = \frac{1}{k}\sum_{j=0}^{k-1}r_j$. However, when $\Omega > 0$ the measurement dynamics do not commute with the Rabi drive, and therefore the order of the measurement outcomes matters, and $\bar{r}$ no longer contains sufficient information to reconstruct the quantum trajectory. Instead, if $\Omega \Delta t \ll 1$ we can perform a sequential two-step state update procedure introduced in reference \cite{webe14}. For each time-step $t_k$, we calculate $q_k$ by first applying a Bayesian update to the state $q_{k-1}$ to account for the measurement result $r_k$, and then by applying a unitary rotation to account for the Rabi drive during the time $\Delta t$. Example quantum trajectories are shown in Figure 4C,D for $\Omega/2\pi = 0$ and $0.4$ MHz, respectively, and $\tau = 1.28 \, \mu$s. The corresponding ensemble average evolution is shown in panels A and B.
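The two-step update can be written compactly. The sketch below is our own illustration: it uses the Korotkov-style Bayesian update for an arbitrary prior in the $y = 0$ plane (which reduces to Eqs. \eqref{zz} and \eqref{xz} for the initial state $(1,0,0)$), followed by a rotation about the $y$ axis whose sign convention is an assumption on our part:

```python
import numpy as np

def bayes_step(x, z, r, dt, tau, gamma):
    """Bayesian update for one outcome r: z from the population ratio of
    Eq. (diag); x rescaled to preserve the coherence ratio, then damped."""
    t = np.tanh(r * dt / tau)
    z_new = (z + t) / (1 + z * t)
    x_new = x * np.sqrt((1 - z_new**2) / (1 - z**2)) * np.exp(-gamma * dt)
    return x_new, z_new

def rabi_step(x, z, omega, dt):
    """Unitary rotation in the x-z plane by angle omega*dt (convention assumed)."""
    c, s = np.cos(omega * dt), np.sin(omega * dt)
    return c * x - s * z, s * x + c * z

def trajectory(records, dt, tau, gamma, omega, x0=1.0, z0=0.0):
    """Sequential two-step reconstruction: Bayes update, then Rabi rotation."""
    xs, zs = [x0], [z0]
    for r in records:
        x, z = bayes_step(xs[-1], zs[-1], r, dt, tau, gamma)
        x, z = rabi_step(x, z, omega, dt)
        xs.append(x)
        zs.append(z)
    return np.array(xs), np.array(zs)
```

For $\gamma = 0$ and a null record ($r_k = 0$) the Bayesian step is the identity, so the reconstruction reduces to pure Rabi precession on the great circle $y = 0$.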
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = .5\textwidth]{fig4v2}
\end{center}
\caption{\label{fig:fig4} Reconstructing individual quantum trajectories. Here, $\tau = 1.28 \, \mu$s, $\gamma = 2.7\times 10^{-7}s^{-1}$, and $\Omega/2\pi = 0$ (A, C) and $0.4$ MHz (B,D). Panels A and B depict the ensemble average evolution. Panels C and D display simulated individual quantum trajectories ending at $t_n = 2 \, \mu$s, with the $x, y$, and $z$ components depicted in blue, red, and black, respectively. The orange regions represent a matching window of $\epsilon = 0.05$ at $t = 0.66 \, \mu$s (C) and $1.48 \, \mu$s (D). Sample trajectories that end within the matching window are shown in other colors. }
\end{figure}
While previous experiments in other systems have reconstructed individual diffusive quantum trajectories \cite{guer07}, reference \cite{murc13} was the first to use conditional quantum state tomography to verify that the trajectories were reconstructed accurately. Here, we present a brief outline of the tomographic validation procedure. We perform a large number of experimental iterations ending at different times $t_f$, which are followed by a qubit rotation and a projective measurement. We use a single full-length experimental iteration (with $t_f = (n-1) \Delta t$) to generate a target trajectory, denoted $\tilde{q}(t) \equiv (\tilde{x}(t),\tilde{y}(t), \tilde{z}(t))$. Then, for each experimental sequence of total measurement duration $t_k$ (and a given orientation of tomography pulse), we compute the quantum trajectory $q(t)$. We perform conditional quantum state tomography separately at each time $t_k$ using the subset of experimental iterations with $x(t_k) = \tilde{x}(t_k)\pm\epsilon$ and $z(t_k) = \tilde{z}(t_k)\pm\epsilon$ where $\epsilon \ll 1$, and we have assumed that $y=0$. The orange shaded regions in panels C and D of Figure 4 represent matching windows at $t_k = 0.66 \, \mu$s and $1.48 \, \mu$s, respectively. Trajectories which fall within the matching window at $t_k$ are used in the tomographic reconstruction of $q(t_k)$.
\section{Distributions of trajectories}
\label{dist}
By tomographically reconstructing individual quantum trajectories, as discussed above and initially demonstrated in reference \cite{murc13}, we have proven that we can accurately track the qubit state over the course of any individual measurement. In this section, we consider how quantum trajectory experiments are useful for building an intuition for how the qubit state is most likely to evolve under measurement. As discussed in reference \cite{webe14}, distributions of quantum trajectories offer a convenient qualitative tool for visualizing the interplay between measurement dynamics and unitary evolution.
The greyscale histograms in panels A and B of Figure 5 display the simulated distribution of quantum trajectories for $\tau = 1.28 \, \mu$s, $\gamma = 2.7\times 10^{-7}s^{-1}$, and $\Omega/2\pi = 0.4$ MHz. Note that due to the Rabi drive, the measurement initially projects the qubit preferentially toward the excited state ($z = -1$). At intermediate times a wide range of qubit states are possible, and after half a Rabi period the qubit is preferentially projected toward the ground state. In experiments with superconducting qubits, $\tau$ and $\Omega$ can be readily tuned, and distributions of quantum trajectories are experimentally accessible for a wide range of parameters.
It is also possible to consider the conditional quantum dynamics of the sub-ensemble of trajectories which end in a particular quantum state, i.e., post-selection. Panels C and D of Figure 5 display the distribution of trajectories which end in the final state $x_F = 0.1 \pm 0.08$, $z_F = 0.55 \pm 0.08$. By analyzing the statistical properties of such distributions, it is possible to answer questions of broad interest in the field of quantum control. The experiments of reference \cite{webe14} focus on one such question: what is the most probable path through quantum state space connecting an initial state $\ket{\psi_i}$ and a final state $\ket{\psi_f}$ in a given time $T$?
One straightforward theoretical approach to this problem would be to solve the stochastic master equation (SME) numerically for a large ensemble of repetitions, and then to perform statistical analysis on the sub-ensemble of trajectories which end in $\ket{\psi_f}$ at time $T$. An alternative approach based on an action principle for continuous quantum measurement was developed in reference \cite{chan13b}. The action principle naturally incorporates post-selection and yields a set of ordinary (non-stochastic) differential equations for the most probable path which are simpler to solve numerically than the full SME. In the limit of no post-selection, this approach is consistent with the SME formulation. The results of reference \cite{webe14} experimentally verify the predictions of this action principle for the case of a single qubit under simultaneous measurement and Rabi drive. However, the action principle is a general theory which can be applied to a wide variety of quantum systems and may prove useful in designing optimal quantum control protocols.
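For the simplest case (a $z-$measurement with no Rabi drive, ideal efficiency, and $\gamma = 0$), an ensemble of diffusive trajectories can be generated without a full SME solver by sampling each outcome from the state-dependent mixture of Gaussians and applying the Bayesian update; post-selection is then a simple filter on the final state. This is a simplified Monte-Carlo sketch of our own, not the analysis pipeline of the cited experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

def diffusive_trajectory(n, dt, tau):
    """n weak z-measurement steps from z = 0 (eta_m = 1, no drive, no T2*)."""
    a = np.sqrt(tau / dt)
    z, zs = 0.0, np.empty(n)
    for k in range(n):
        center = 1.0 if rng.random() < (1 + z) / 2 else -1.0   # Born-rule branch
        r = rng.normal(center, a)                              # noisy outcome
        t = np.tanh(r * dt / tau)
        z = (z + t) / (1 + z * t)                              # Bayesian update
        zs[k] = z
    return zs

trajs = np.array([diffusive_trajectory(20, dt=20e-9, tau=150e-9)
                  for _ in range(2000)])
# Post-selection: keep only trajectories ending near z_F = 0.9
post = trajs[np.abs(trajs[:, -1] - 0.9) < 0.08]
```

Averaging the post-selected sub-ensemble mimics the conditional distributions of Figure 5C,D, while the full-ensemble average of $z$ remains zero, as it must for an unbiased measurement.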
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = .48\textwidth]{fig5v1}
\end{center}
\caption{\label{fig:fig5} Greyscale histograms of quantum trajectories based on $5\times 10^{4}$ simulated trajectories. Here, $\tau = 1.28 \, \mu$s, $\gamma = 2.7\times 10^{-7}s^{-1}$, and $\Omega/2\pi = 0.4$ MHz. Histograms are normalized such that the most frequent value at each time point is $1$. Panels A and B depict the full distribution of $x$ (A) and $z$ (B) trajectories. Panels C and D display the sub-ensemble of trajectories which end in the final state $x_F = 0.1 \pm 0.08$, $z_F = 0.55 \pm 0.08$.}
\end{figure}
\section{Time-symmetric state estimation}
\label{time}
We have so far focused on the use of the quantum state as a predictive tool; the quantum trajectories presented in the previous sections describe the evolution of the expectation values of the observables $\sigma_x,\sigma_y,\sigma_z$, which relate in a straightforward way to the probability of obtaining a certain outcome in subsequent projective tomography measurements. However, it is also possible to follow the qubit state evolution \emph{backward} in time to predict an unknown measurement result from the past.
Consider the following guessing game: Two experimenters can perform measurements on the same quantum system. At a time $t$ the first experimenter makes a measurement of some observable $\Omega_m$ and hides the result. The second experimenter then must guess the outcome $m$ that the first experimenter received. If the second experimenter only has access to the quantum system's state before the first experimenter's measurement, then the theory of POVMs provides the second experimenter with the probability of each outcome $m$ and the ability to make the best possible guess. However, if the second experimenter is allowed to probe the quantum system after the first experimenter has conducted her measurement, can he make a better prediction for the hidden result? Indeed, since more information about the system is available at a later time, the second experimenter can make more confident predictions. It can be shown \cite{wise02, gamm13} that the probability of the outcome $m$ is given by,
\begin{eqnarray}
P_p(m) = \frac{\mathrm{Tr}(\Omega_m \rho_t \Omega_m^\dagger E_t)}{\sum_m \mathrm{Tr}(\Omega_m \rho_t \Omega_m^\dagger E_t)}, \label{eq:pqs}
\end{eqnarray}
where $\rho_t$ is the system density matrix at time $t$, conditioned on previous measurement outcomes and propagated forward in time until time $t$, while $E_t$ is a matrix which is propagated backwards in time in a similar manner and accounts for the time evolution and measurements obtained after time $t$. The subscript $p$ denotes ``past'', and in \cite{gamm13} it was proposed that, if $t$ is in the past, the pair of matrices $(\rho_t,E_t)$, rather than only $\rho_t$, is the appropriate object to associate with the state of a quantum system at time $t$. It is worth noting that information from before the first experimenter's measurement, which is encoded in $\rho_t$, and information from after this measurement (encoded in $E_t$) play a formally equivalent role in the prediction for the experimenter's result. It is natural that full measurement records would contain more information about the system and several precision probing theories \cite{mank09,mank09pra,mank11,arme09,whea10} have incorporated full measurement records.
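Equation (\ref{eq:pqs}) is easy to evaluate for a minimal, hypothetical example of the guessing game: a qubit prepared along $+x$, a hidden projective $\sigma_z$ measurement at time $t$, and a later projective $\sigma_z$ readout that fixes $E_t$ (all names below are our own):

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

def past_probabilities(Omegas, rho_t, E_t):
    """Eq. (pqs): P_p(m) proportional to Tr(Omega_m rho_t Omega_m^dag E_t)."""
    w = np.array([np.trace(O @ rho_t @ O.conj().T @ E_t).real for O in Omegas])
    return w / w.sum()

Omegas = [proj(ket0), proj(ket1)]        # hidden projective sigma_z measurement
rho_t = proj(plus)                       # forward-propagated state: |+x>

# Prior-only guess (ordinary Born rule): 50/50
p_prior = np.array([np.trace(O @ rho_t @ O.conj().T).real for O in Omegas])
# With a later sigma_z outcome +z, E_t = |0><0| makes the hidden result certain
p_past = past_probabilities(Omegas, rho_t, proj(ket0))
```

The prior-only prediction is 50/50, while the past quantum state makes the hidden outcome certain; for $E_t \propto \hat{I}$ (no later information) Eq.~(\ref{eq:pqs}) reduces to the ordinary Born rule, as it should.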
Recent experiments have applied Eq.(\ref{eq:pqs}) to systems with Rydberg atoms \cite{ryba14} and superconducting qubits \cite{tan14}, confirming how full measurement records allow more confident predictions for measurements performed in the past. This applies to both projective and weak (weak value) measurements and in the case of weak measurements the experiments reveal how the orthogonality of initial ($\rho_t$) and final ($E_t$) states leads to the occurrence of anomalous weak values \cite{tan14}.
\section{Two-qubit trajectories}
\label{two}
To this point, we have focused on the trajectories of a single-qubit system. However, the trajectory formalism is readily extensible for studying the dynamics of a multiple-qubit system. Such systems are of interest because they allow us to directly observe and study the generation of entanglement on a single-shot basis \cite{will08b}. In particular, we study a cascaded CQED system comprised of two superconducting qubits housed in two separate cavities that are sequentially probed in reflection by a single coherent state (Figure \ref{fig:fig6}). The quantum trajectory formalism for such a system was first developed in 1993 \cite{carmbook}, and was observed experimentally in 2014 \cite{roch14}.
The Hamiltonian for this cascaded quantum system is given by
\begin{equation}
H = H_0 + \chi_1 a^\dagger a \sigma_z^1 + \chi_2 b^\dagger b \sigma_z^2 - i \frac{\sqrt{\kappa_1\kappa_2 \eta_{loss}}}{2} (a^\dagger b - b^\dagger a),
\end{equation}
where $H_0$ comprises the uncoupled Hamiltonians for the two cavities and qubits; $\chi_i$ are the dispersive shifts; $a$ $(b)$ and $a^\dagger$ $(b^\dagger)$ are the annihilation and creation operators for the first (second) cavity; $\kappa_i$ are the decay rates of the cavities; and $\eta_{loss} \approx 0.8$ represents the transmission efficiency between the two cavities.
\begin{figure*}
\begin{center}
\includegraphics[angle = 0, width = .78\textwidth]{fig6v3}
\end{center}
\caption{\label{fig:fig6} Two-qubit trajectories (A) Schematic of the measurement setup. (B) The cascaded cavities are probed with a coherent microwave tone, initially aligned along the $X_1$ quadrature. After probing both cavities, the tone can acquire three different phase shifts: $+\Delta\theta$, $-\Delta\theta$ and $0$, corresponding to the states $\ket{11}$, $\ket{00}$ and $\ket{01}/\ket{10}$, respectively. (C) Conditional quantum state after a measurement result $r$ of length $t_k=k\Delta t=0.8\,\mu$s, for both qubits initially prepared in the maximally superposed state $(\ket{0}+\ket{1})/\sqrt{2}\otimes(\ket{0}+\ket{1})/\sqrt{2}$, with $\tau=0.75\,\mu$s. (D) Single quantum trajectory of the cascaded two-qubit system. The color code is similar to the one used in panel C. The inset shows the corresponding concurrence.}
\end{figure*}
In the multiple-qubit regime, it is more convenient to work in the measurement basis ($\ket{00}$, $\ket{01}$, $\ket{10}$ and $\ket{11}$) rather than the Pauli basis set ($\sigma_i \otimes \sigma_j$), since the measurement in the multi-qubit case does not project along a single-qubit Pauli operator. In the experiments described in reference \cite{roch14}, the sequential measurement realizes a half-parity operation (Fig. 6A,B): the measurement tone acquires distinct phase shifts of $\pm \Delta\theta$ for the even-parity states $\ket{00}$ and $\ket{11}$, and an identical (null) phase shift in the odd parity subspace ($\ket{01}$ and $\ket{10}$).
In cascaded quantum systems, the effect of the losses between the systems is of primary importance and needs to be fully taken into account when a quantitative description is needed \cite{roch14}. However, in this review we choose to set $\eta_{loss}=1$ (zero losses) for the sake of simplicity. In addition, we assume that the dispersive shifts are equal for the two cavities ($\chi_1=\chi_2=\chi$), as are the decay rates ($\kappa_1=\kappa_2=\kappa$).
Similarly to the single-qubit case, we can define the measurement outcome $V_k$ as the time-average of the $X_2$ quadrature voltage: $V_k=1/(k\Delta t)\int_0^{k\Delta t}V_{X_2}(t) dt$. The dimensionless measurement outcome is thus given by $r_k=2V_k/\Delta V$, where $\Delta V \propto \Delta\theta$ is defined as the distance between the measured Gaussian histogram centers for $\ket{00}$ and $\ket{01}/\ket{10}$. The measurement realizes a projection on a timescale $\tau \equiv 4\Delta t/S$ with the dimensionless measurement strength $S=64\chi^2\bar{n}\eta_m\Delta t/\kappa$.
The formalism for generating a joint qubit trajectory is quite similar to that of the single qubit case. We collect a series of measurements $\{r_k\}$ at times $\{t_k\}$, and use these measurements to calculate the conditional density matrices $\{\rho_k^{ij,lm}\}$, where $\{ij, lm\}$ index the computational states. The diagonal density matrix elements can be calculated using a Bayes' rule, for example:
\begin{equation}
\frac{\rho_k^{00,00} }{\rho_k^{11,11}}= \frac{\rho_0^{00,00}\, e^{\left[ - \left(r_k +2\right)^2/2\sigma^2\right]}}{\rho_0^{11,11}\, e^{\left[ - \left(r_k -2\right)^2/2\sigma^2\right]}}.
\label{diags}
\end{equation}
Here, $\sigma$ is the width of the Gaussian histograms, which decreases as $1/\sqrt{\Delta t}$. The off-diagonal density matrix elements can also be calculated within the same Bayesian formalism. Neglecting internal losses in the cavity and $T_1$ relaxation, the off-diagonal density matrix terms $\rho_k^{ij,lm}$ are given by:
\begin{eqnarray}
&& \label{offdiags} \hspace{-0.7cm} |\rho^{ij,lm}_{k}|= |\rho^{ij,lm}_{0}| \,
\frac{\sqrt{\rho^{ij,ij}_{k}\rho^{lm, lm}_{k}}} {\sqrt{\rho^{ij,ij}_{0}\rho^{lm,lm}_{0}}} e^{-\gamma_{ij,lm} k\Delta t} \\
&&\hspace{-0.7cm} \times \text{ exp} \left[-\frac{k\Delta t}{2} \left( 1- \eta_m\right) |V_k^{ij} - V_k^{lm}|^2 \right] \nonumber,
\end{eqnarray}
\noindent where $V^{ij}_k$ is the average voltage corresponding to the qubits prepared in the state $\ket{ij}$. Here, the first term represents the Bayesian update and includes intrinsic $T_2^*$ dephasing of the matrix element at the rate $\gamma_{ij,lm}$; the second term accounts for partial dephasing due to uncollected measurement photons. Notice that in the case where $V_k^{01} = V_k^{10}$, there is no dephasing of the $\rho^{01,10}$ off-diagonal term due to nonunity $\eta_m$: the odd-parity subspace becomes protected with respect to added noise, and we expect the measurement to probabilistically generate entanglement in the odd-parity subspace.
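A minimal numerical sketch of one Bayesian step for the half-parity measurement, with the simplifications adopted above ($\eta_m = 1$, no $T_2^*$ dephasing) plus our own assumption $\sigma^2 = \tau/\Delta t$ by analogy with the single-qubit case, tracking only the populations and the protected $\ket{01}\text{-}\ket{10}$ coherence:

```python
import numpy as np

CENTERS = np.array([2.0, 0.0, 0.0, -2.0])   # r centers for |00>, |01>, |10>, |11>

def half_parity_step(rho, r, dt, tau):
    """Diagonal update per Eq. (diags); the |01><10| coherence is rescaled by
    sqrt(p01' p10' / (p01 p10)), the surviving factor of Eq. (offdiags) when
    eta_m = 1 and gamma_{ij,lm} = 0.  Other coherences are not tracked."""
    sigma2 = tau / dt
    w = np.exp(-(r - CENTERS) ** 2 / (2 * sigma2))
    p_old = np.diag(rho).real
    p_new = p_old * w
    p_new /= p_new.sum()
    out = np.diag(p_new).astype(complex)
    out[1, 2] = rho[1, 2] * np.sqrt(p_new[1] * p_new[2] / (p_old[1] * p_old[2]))
    out[2, 1] = np.conj(out[1, 2])
    return out

rho = np.full((4, 4), 0.25, dtype=complex)   # both qubits prepared along +x
for _ in range(30):                          # a record hovering near r = 0
    rho = half_parity_step(rho, 0.0, dt=20e-9, tau=150e-9)
```

A record near $r = 0$ projects the pair into the odd-parity subspace while the tracked coherence grows in step with the populations, so the state converges toward the Bell state $(\ket{01}+\ket{10})/\sqrt{2}$; a record near $r = \pm 2$ would instead project onto $\ket{00}$ or $\ket{11}$.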
Equations (\ref{diags}) and (\ref{offdiags}) provide a mapping $\{r_k\} \mapsto \{\rho_k\}$ at each measurement time $t_k=k\Delta t$. We can also reconstruct the trajectories experimentally using conditional tomography, as described in Section \ref{cond} and elaborated in reference \cite{roch14}. Figure 6C shows such a Bayesian mapping. However, as mentioned earlier, the losses between the cavities need to be accounted for, so a more refined formalism is used in practice, as explained in reference \cite{roch14}. From this mapping, one can reconstruct the trajectory of a single iteration of the experiment.
As mentioned before, the main advantage of this cascaded system is its ability to generate entanglement between remote qubits. To quantify this entanglement, we can reconstruct the quantum trajectory of the concurrence, which is an entanglement monotone. A simplified definition is given by:
\begin{equation}
C_k=2\max(0,|\rho^{01,10}_k|-\sqrt{\rho_k^{00,00}\rho_k^{11,11}}).
\end{equation}
Concurrence reaches a maximum value of 1 for a maximally-entangled Bell state, and is zero for joint qubit states that cannot be distinguished from separable or from classically mixed states. An exemplar quantum trajectory of the joint qubit state, and of the concurrence, is shown in Figure 6D. We see an initial transient during which $C$ is zero, followed by a non-monotonic increase in $C$ as the joint qubit state stochastically projects towards the entangled manifold, reaching an eventual concurrence of $C\sim0.55$, indicating a highly nonclassical state.
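The simplified concurrence above is a one-liner to evaluate. The sketch below (our own helper, valid only for density matrices of the form tracked here, with coherence confined to the odd-parity subspace) checks the two limiting cases:

```python
import numpy as np

def concurrence(rho):
    """Simplified concurrence C = 2 max(0, |rho_{01,10}| - sqrt(rho_{00,00} rho_{11,11})),
    for states whose only coherence is between |01> and |10>."""
    return 2 * max(0.0, abs(rho[1, 2]) - np.sqrt(rho[0, 0].real * rho[3, 3].real))

# Odd-parity Bell state (|01> + |10>)/sqrt(2): maximal entanglement, C = 1
bell = np.zeros((4, 4), dtype=complex)
bell[1, 1] = bell[2, 2] = bell[1, 2] = bell[2, 1] = 0.5

# Incoherent mixture of the four computational states: no entanglement, C = 0
mixed = np.diag([0.25, 0.25, 0.25, 0.25]).astype(complex)
```

The $\sqrt{\rho^{00,00}\rho^{11,11}}$ term is what prevents classically correlated populations from registering as entanglement.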
\section{Outlook}
The experiments presented in this review demonstrate precise control and a detailed understanding of the process of continuous quantum measurement of a superconducting qubit. This knowledge may benefit a wide range of future research directions in quantum control and multi-qubit state estimation \cite{silb05, smit13}. In this final section we highlight one such research direction: measurement-based quantum feedback.
A continuous measurement record, which contains information about how the qubit state evolves in real time, can be incorporated into a feedback loop for a number of applications including state preparation, state stabilization, and continuous quantum error correction. Without feedback, a combination of projective measurement and unitary rotation can be used to probabilistically prepare an arbitrary qubit state \cite{john12}. Using feedback, the measurement result can be used to control a subsequent qubit rotation, allowing for deterministic state-reset protocols \cite{rist12,camp13} which can be repeated on a timescale much faster than $T_1$.
In the case of weak measurement, it is possible to prepare arbitrary states by measurement alone, without applying any subsequent qubit rotations. Without feedback, state preparation is probabilistic: one simply post-selects an ensemble of trajectories which end up in the desired state. With feedback, it is possible to prepare an arbitrary qubit state deterministically through adaptive measurement, as recently demonstrated using nitrogen vacancy centers \cite{blok14}.
In addition to state preparation, quantum feedback can also be used to stabilize a qubit state or trajectory. In reference \cite{vija12}, we used weak measurements and continuous quantum feedback to stabilize Rabi oscillations of a superconducting qubit. Reference \cite{camp13} demonstrated that stroboscopic projective measurements and feedback can be used to stabilize an arbitrary trajectory, such as Rabi or Ramsey oscillations. Furthermore, with the two-qubit setup from reference \cite{roch14} it should be possible to use feedback to stabilize an entangled state.
Looking forward, proposals for fault-tolerant quantum computing rely on quantum error correction (QEC) protocols in which a single logical qubit is composed of many physical qubits. While many QEC schemes such as surface codes rely on discrete projective measurements of syndrome qubits \cite{brav98, chow14, bard14}, a wide body of QEC proposals is based instead on continuous measurement-based quantum feedback \cite{wisebook}. In these techniques, a single logical qubit is encoded in several physical qubits, and an error syndrome is detected by processing (or `filtering') a continuous measurement signal. The error signal is used to generate a suitable feedback Hamiltonian which corrects for errors in real time.
By tomographically validating individual quantum trajectories, the experiments presented in this review have demonstrated the ability to correctly `filter' a measurement signal for one- and two-qubit systems. A sensible next step is to build a system of several qubits and attempt to correctly filter an error syndrome. The following step would be to feed back on this error syndrome to realize a single logical qubit whose lifetime exceeds that of its constituent physical qubits. Realizing this goal will require a robust multi-qubit architecture and improvements in the measurement quantum efficiency. Recent experiments \cite{bard14, kell14} have demonstrated that it is possible to individually measure and control $5\text{-}9$ qubits in a planar cQED architecture, and efforts to improve the measurement quantum efficiency are currently underway in a number of different research groups. Although there are still formidable challenges to overcome, and while the ultimate utility of measurement-based QEC in comparison to other methods remains an open question, it seems that an initial demonstration of measurement-based QEC may lie on the horizon.
\section*{Acknowledgements}
This research was supported in part by the Army Research Office, Office of
Naval Research and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the US government. MES acknowledges support from the Fannie and John Hertz Foundation.
\label{out}
\bibliographystyle{elsarticle-num}
\section{Introduction}
\label{sec:intro}
The standard description of quantum mechanics considers the time-evolution of isolated quantum systems whose unitary dynamics are governed by the Schr\"{o}dinger equation. Measurement is treated as an instantaneous non-unitary process through which a quantum system is projected into an eigenstate of the measured observable with a probability given by Born's rule. In reality, no system is completely isolated from its environment, and measurements are never truly instantaneous, but occur over some finite timescale determined by the details of the interaction between the measured system and its environment. The theory of quantum trajectories \cite{carmbook, gardinerbook} considers measurement as a continuous process in time, describing how the state of the quantum system evolves during measurement.
Due to the intrinsic quantum fluctuations of the environment, measurement is an inherently stochastic process. If a quantum system starts in a known quantum state $\ket{\psi(0)}$, then by accurately monitoring the fluctuations of its environment it is possible to reconstruct single quantum trajectories $\ket{\psi(t)}$, which describe the evolution of the quantum state in an individual experimental iteration.
The concept of quantum trajectories was first developed in the early 1990's as a theoretical tool to model continuously monitored quantum emitters \cite{carmbook, dali92,gard92}. For the next decade, quantum trajectories were used primarily in the quantum optics community, as a theoretical tool for numerical simulations of the ensemble behavior of open quantum systems \cite{dali92,scha95}. Typically, the master equation of an open quantum system cannot be solved analytically, and thus numerical solutions are often necessary. For a Hilbert space of dimension $N$, the density matrix $\rho$ consists of $N^2$ real numbers, and the computational time required to solve for its time evolution through the master equation scales as $N^4$ \cite{wisebook}. In contrast, the pure quantum state $\ket{\psi(t)}$ of an individual quantum trajectory can be described by $N$ complex numbers. Therefore, it is often advantageous to simulate an ensemble of stochastic quantum trajectories, which can be averaged together to recover the evolution of the density matrix, $\rho(t)$. Although the formalism of quantum trajectories is constructed from standard quantum mechanics \cite{brun02}, it can provide insight into foundational questions such as the quantum measurement problem \cite{gisi84, dios88, gisi92} and bears a close resemblance to the consistent histories interpretation of quantum mechanics \cite{grif84}.
Despite widespread theoretical use, quantum trajectories have only been investigated in a handful of experiments, due in part to the difficulty of performing highly efficient continuous quantum measurements. The earliest experiments to continuously monitor individual quantum systems were in the regime of strong measurement, where the system is quickly projected into an eigenstate of measurement, destroying any information about the phase of a coherent superposition. In such experiments, it is possible to track the `quantum jumps' between eigenstates \cite{ nago86,saut86,berg86, vija11}. Cavity quantum electrodynamics (CQED) experiments with Rydberg atoms have explored the weak measurement regime, tracking the quantum trajectories of a cavity field as it collapses from a coherent state into a photon number eigenstate \cite{guer07}. Other CQED experiments have used a cavity probe to continuously track the position of individual Cesium atoms \cite{hood00}. Quantum trajectories were first considered for solid state systems in the context of a quantum dot qubit monitored in real time by a quantum point contact charge sensor \cite{koro99, goan01}. In 2007, the conditional measurement dynamics of a quantum dot were investigated experimentally \cite{sukh07}. More recently, quantum trajectory theory has been used to solve for the conditional evolution of a continuously monitored superconducting qubit \cite{gamb08,koro11}. These results, when combined with recent advances in nearly-quantum-limited parametric amplifiers, which can be used to achieve highly efficient qubit readout, have enabled a detailed investigation of measurement backaction \cite{hatr13, camp14}.
In this article, we review recent experiments \cite{murc13,webe14,roch14,tan14} which, by weakly probing the field of a microwave frequency cavity containing a superconducting qubit, track the individual quantum trajectories of the system. These are the first experiments, on any system, which use quantum state tomography at discrete times along the trajectory to verify that the qubit state has been faithfully tracked. From the perspective of quantum information technology, these experiments demonstrate the great extent to which the process of measurement is understood in this system, and may inform future efforts in measurement-based feedback \cite{sayr11, vija12, dela14, groe13} for state stabilization \cite{blok14} and error correction.
The review is organized as follows. In section 2, we present a physical picture for continuous quantum measurement of superconducting qubits. In section 3, we demonstrate how to use a measurement result to reconstruct the conditional qubit state after measurement. In section 4, we explain how to reconstruct and tomographically verify individual quantum trajectories. Then, in section 5, we examine ensembles of quantum trajectories to gain insight into qubit state dynamics under measurement. In section 6, we discuss time-symmetric evolution under quantum measurement, and in section 7 we demonstrate quantum trajectories of a two-qubit system. Finally, in section 8 we explore potential applications in measurement-based feedback control and continuous quantum error correction.
\section{Continuous measurement of superconducting qubits}
\label{cont}
The experiments discussed in this review use artificial atoms formed from superconducting circuits. We focus in particular on the transmon circuit \cite{koch07} (Fig. 1A) which is composed of the non-linear inductance of a Josephson junction and a parallel shunting capacitance $C_{\Sigma}$. This circuit is characterized by the Josephson energy scale $E_J \equiv \hbar I_0/2e$ and the capacitive energy scale $E_C \equiv e^2/2C_\Sigma$, where $I_0$ is the junction critical current and $e$ is the elementary charge. A typical transmon circuit, with $E_J/h \sim 20$ GHz and $E_C/h \sim 200$ MHz, has several bound eigenstates (Fig. 1B) with energies $E_m$, where $m$ is a whole number that indexes the states. The lowest two levels form a qubit subspace, with transition frequency $\omega_{01}/2\pi \equiv (E_1-E_0)/h \sim 5$ GHz, and the difference in frequency between transitions to successively higher levels is given by the anharmonicity $ \alpha \approx E_C$. Due to the large $E_J/E_C$ ratio the transmon qubit is insensitive to charge noise, which, when combined with low-loss materials \cite{megr12, chan13} and designs that minimize the participation of surface dielectric loss \cite{paik11}, has allowed for planar qubits with coherence times of many tens of microseconds.
\begin{figure*}
\begin{center}
\includegraphics[angle = 0, width = .8\textwidth]{fig1v5}
\end{center}
\caption{\label{fig:fig1} Dispersive measurements of a superconducting qubit. (A) A transmon qubit couples dispersively to a microwave frequency cavity. Signals that reflect off of the cavity are amplified by a nearly quantum-limited lumped-element Josephson parametric amplifier. (B) Schematic representation of the transmon potential and corresponding energy levels. The two lowest energy levels form the qubit subspace. (C) Reflected signal as a function of probe frequency. The resonance frequency is shifted by $2\chi$ depending on whether the qubit is prepared in the $\ket{0}$ or $\ket{1}$ state. (D,E) The cavity is probed with a coherent microwave tone at a frequency $\omega_m$, initially aligned along the $X_1$ quadrature. After leaving the cavity, the tone acquires a qubit-state-dependent phase shift. (F) Phase-sensitive amplification along the $X_2$ (top) and the $X_1$ (bottom) quadratures. }
\end{figure*}
In order to control the qubit's interaction with its external environment, it is coupled to the fundamental mode of a three dimensional waveguide cavity of frequency $\omega_c$ at a rate $g$, realizing a cavity quantum electrodynamics (CQED) architecture. In the dispersive regime, where the qubit-cavity detuning $\omega_q-\omega_c$ is large compared to $g$, the system is described by the Hamiltonian \cite{webe14,blai04} $H = H_0 + H_{\text{int}}+ H_{\text{R}}$, where
\begin{align}
&H_{\text{int}} = - \hbar \chi a^{\dagger}a \sigma_z \\
&H_{\text{R}} = \hbar \frac{\Omega}{2}\sigma_y.
\end{align}
\noindent Here $H_0$ describes the uncoupled qubit and cavity energies and decay terms, $\hbar$ is the reduced Planck constant, $\chi$ is the dispersive coupling rate, $a^{\dagger}$ and $a$ are the creation and annihilation operators for the cavity mode, and $\sigma_z$ is the qubit Pauli operator that acts on the qubit state in its energy eigenbasis. $H_{\text{int}}$ is an interaction term which equivalently describes a qubit-state-dependent frequency shift of the cavity of $-\chi \sigma_z$ (with the $\ket{0}$ state defined as $\sigma_z = +1$) and a qubit frequency that depends on the intracavity photon number $\hat{n} = a^{\dagger}a$ (an a.c. Stark shift). $H_\text{R}$ describes the effect of an optional microwave drive at the qubit frequency which causes the qubit state to rotate about the $y$ axis of the Bloch sphere at the Rabi frequency $\Omega$.
Because $H_{\text{int}}$ commutes with $\sigma_z$, the qubit-state-dependent phase shift can be used to perform a continuous quantum non-demolition (QND) measurement of the qubit state in its energy eigenbasis \cite{blai04,bragbook}. Figure 1C illustrates the phase of the reflected signal as a function of frequency. If we choose to measure at a frequency $\omega_m = (\omega_{\ket{0}} + \omega_{\ket{1}})/2$, where $\omega_{\ket{0}}$ and $\omega_{\ket{1}}$ are the cavity frequencies when the qubit is in the ground and excited states, respectively, then the phase difference in the internal cavity field for the two qubit states is given by $\Delta \theta = 4|\chi|/\kappa$, where $\kappa$ is the cavity decay rate. In the experiments presented here, we work in the small phase shift limit, with $|\chi|/\kappa \sim 0.05$.
We probe the cavity by applying a measurement tone at frequency $\omega_m$ initially aligned along the $X_1$ quadrature (Fig. 1D). Due to the vacuum fluctuations of the electromagnetic field, the quadrature amplitudes $X_1$ and $X_2$ of this field will fluctuate in time. The circle in Figure 1D represents the Gaussian variance of the input signal time-averaged for a time $\Delta t$. The area of the circle is inversely proportional to $\Delta t$. After reflecting off of the cavity, the measurement signal acquires a qubit-state-dependent phase shift, as depicted in Figure 1E. In the small $|\chi|/\kappa$ limit, the $X_2$ quadrature of the reflected signal contains information about the cavity phase, which is proportional to the qubit state. Likewise, the $X_1$ quadrature contains information about the amplitude of the cavity field and thus the fluctuating intracavity photon number.
In order to track the qubit state through an individual measurement, we need to accurately monitor the quantum fluctuations of the measurement signal, which are typically much smaller than the thermal fluctuations of the room temperature electronics that are needed to record the measurement result. Therefore, we must first amplify the signal above this noise floor. State-of-the-art commercial low-noise amplifiers, which are based on high electron mobility transistors (HEMTs) and can be operated at $4$ K, add tens of photons of noise to the measurement signal. Therefore, a more sensitive pre-amplifier is needed in order to overcome the added noise of the HEMT amplifier.
Over the past few years, Josephson junction based superconducting parametric amplifiers have emerged as an effective tool for realizing nearly-quantum-limited amplification. Phase-preserving amplifiers such as the Josephson parametric converter \cite{berg10} amplify both quadrature amplitudes evenly by a factor of $\sqrt{G}$, where $G$ is the power gain of the phase-preserving amplifier, and add at least a half photon of noise \cite{cave82} to the signal. Here, we focus instead on phase-sensitive amplification from a lumped-element Josephson parametric amplifier \cite{hatr11}, where one quadrature is amplified by a factor of $2\sqrt{G}$ and the other quadrature is de-amplified by the same factor.
When we apply a coherent measurement tone, characterized by an average intracavity photon number $\bar{n}$, its quantum fluctuations will cause the phase coherence of a qubit superposition state to decay at the ensemble dephasing rate $\Gamma = 8\chi^2\bar{n}/\kappa$. The ensemble dephasing rate will be the same regardless of how we choose to process the measurement signal after it leaves the cavity. However, the backaction of an \emph{individual} measurement will depend significantly on our choice of amplification scheme.
As depicted in Figure 1F, after the measurement tone (initially aligned along the $X_1$ quadrature) reflects off of the cavity, acquires a qubit-state-dependent phase shift, and is displaced to the origin of the $X_1\text{-}X_2$ plane by a coherent tone, we can choose to amplify either the $X_2$ quadrature (top panel), which contains qubit-state information, or the $X_1$ quadrature (bottom panel), which contains information about the fluctuating intracavity photon number. Consider a qubit initially prepared in an equal superposition of $\sigma_z$ eigenstates, say $\sigma_x = +1$. If we perform ideal phase-sensitive amplification of $X_2$, we also de-amplify the photon number information, and the measurement backaction drives the qubit state along a meridian of the Bloch sphere toward one of the poles. We refer to this case as a $z-$measurement, because we acquire information about the qubit state in the $\sigma_z$ basis. If instead we amplify $X_1$, we also de-amplify the qubit-state information, and measurement backaction drives the qubit state along the equator of the Bloch sphere. We refer to this case as a $\phi-$measurement, because we can track the phase of a qubit superposition state over the course of an individual measurement. For phase-preserving amplification, both types of backaction are present.
We first focus on the case of the $z-$measurement. After the measurement tone leaves the parametric amplifier and passes through further stages of amplification we demodulate the signal and record the $X_2$ quadrature amplitude as a digitizer voltage $V(t)$. For a measurement of duration $\Delta t$, the measurement outcome $V_m$ is given by the time-average of $V(t)$. Depending on whether the qubit is initially prepared in the ground or the excited state, the Gaussian distribution describing the probability of attaining a particular measurement outcome will be shifted by a voltage $\Delta V \propto \Delta \theta$. In this review, we define a dimensionless measurement outcome $r = 2 V_m / \Delta V$, such that the ground and excited state distributions are centered about $r = \pm 1$, respectively, as illustrated in Figure 2A,D. From these distributions, we define the dimensionless measurement strength $S \equiv (2/a)^2$, where $a$ is the standard deviation of the dimensionless measurement distributions, which scales as $(\Delta t)^{-1/2}$. We also define the characteristic timescale $\tau$ over which the qubit state is projected as the amount of time required for the measurement histograms to be separated by twice their standard deviation, $\tau \equiv 4 \Delta t / S$. When $S$ is large ($\Delta t \gg \tau$), the ground and excited state histograms are well separated (Fig. 2A), and it is possible to determine the qubit state with high fidelity in an individual measurement. The measurement projects the qubit into an energy eigenstate, where it will remain after measurement. Instead, if $S$ is small ($\Delta t \lesssim \tau$), the ground and excited state histograms overlap (Fig. 2D), and an individual measurement only partially projects the qubit state.
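The relations between $a$, $S$, and $\tau$ can be illustrated with a short Python sketch that draws simulated dimensionless outcomes in the two regimes of Figure 2. The parameter values match those quoted in the figure caption; the sampling model (Gaussians of standard deviation $a=\sqrt{\tau/\Delta t}$ centered at $r=\pm 1$) is the idealization described above, not experimental data.

```python
import math, random

def measurement_outcome(z_true, dt, tau, rng):
    """Draw one dimensionless outcome r for a measurement of duration dt.
    Outcomes are Gaussian, centered at r = +1 (ground) or r = -1 (excited),
    with standard deviation a = sqrt(tau/dt), so that S = (2/a)^2 = 4*dt/tau."""
    a = math.sqrt(tau / dt)
    return rng.gauss(z_true, a)

rng = random.Random(0)
# strong measurement (Fig. 2A regime): dt = 200 ns >> tau = 50 ns
strong = [measurement_outcome(+1, 200e-9, 50e-9, rng) for _ in range(4000)]
# weak measurement (Fig. 2D regime): dt = 20 ns << tau = 150 ns
weak = [measurement_outcome(+1, 20e-9, 150e-9, rng) for _ in range(4000)]

S_strong = 4 * 200e-9 / 50e-9   # = 16: histograms well separated, nearly projective
S_weak = 4 * 20e-9 / 150e-9     # ~ 0.53: histograms overlap, partial measurement
print(S_strong, round(S_weak, 2))
```

Histogramming `strong` and `weak` reproduces the qualitative separation and overlap of the distributions sketched in Figure 2A,D.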
\begin{figure*}
\begin{center}
\includegraphics[angle = 0, width = 1\textwidth]{fig2v2}
\end{center}
\caption{\label{fig:fig2} Continuous quantum measurement. Illustrative measurement histograms for a single time-step (A,D) along with simulated measurement records (B,E) and corresponding quantum trajectories (C,F). In the top panels, $\Delta t = 200$ ns, $\tau = 50$ ns, and $\Omega/2\pi = 8$ MHz, illustrating the quantum jumps regime. In the bottom panels, $\Delta t = 20$ ns, $\tau = 150$ ns, and $\Omega = 0$, illustrating the diffusive regime.}
\end{figure*}
Formally, general quantum measurements (partial or projective) are described by the theory of positive operator-valued measures (POVM) which yield the probability $P(m)=\mathrm{Tr}(\Omega_m \,\rho\, \Omega_m^\dagger)$ for obtaining an outcome $m$, and the associated back action on the quantum state, $\rho \rightarrow \Omega_m \,\rho\, \Omega_m^\dagger/P(m)$, where the operators $\Omega_m$ obey $\sum_m \Omega^\dagger_m \Omega_m = \hat{I}$. For example, for the projective measurements $\Omega_{\pm z} = (\hat{I} \pm \sigma_z)/2$, the probability of a measurement yielding the qubit in the $+z$ state is $P(+z) = \mathrm{Tr}(\Omega_{+z} \rho \Omega_{+z})= (1+\langle \sigma_z\rangle)/2$. The partial measurements discussed in this review are described by the POVM \cite{wisebook, jaco06},
\begin{align}
\Omega_r = \left(2 \pi a^2 \right)^{-1/4} e^{-(r- \sigma_z)^2/4a^2}
\label{povm}
\end{align}
\noindent where $1/4a^2 = \Delta t/ 4 \tau$. The $\sigma_z$ term in $\Omega_r$ causes the back action on the qubit degree of freedom, $\rho \rightarrow \Omega_r \rho \Omega_r^\dagger$, due to the readout of the measurement result $r$, resulting in the measurement dynamics discussed in this review.
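A minimal numerical sketch of how this POVM acts on a density matrix is given below, using the weak-measurement parameters of Figure 2 (illustrative values only). The Gaussian prefactor of $\Omega_r$ cancels in the normalization and is omitted; for a $\sigma_x = +1$ input state the resulting $\langle\sigma_z\rangle$ reproduces the Bayesian result $\tanh(r\Delta t/\tau)$ derived in the next section.

```python
import numpy as np

def povm_update(rho, r, dt, tau):
    """Back action of reading out outcome r: rho -> Omega_r rho Omega_r^dag / P(r).
    Omega_r is diagonal in the sigma_z basis (|0> has sigma_z = +1); the
    Gaussian normalization prefactor cancels and is dropped."""
    a2 = tau / dt   # variance of the dimensionless measurement histograms
    omega = np.diag(np.exp(-(r - np.array([1.0, -1.0]))**2 / (4 * a2)))
    rho_new = omega @ rho @ omega
    return rho_new / np.trace(rho_new).real

sz = np.diag([1.0, -1.0])
rho = 0.5 * np.ones((2, 2))                    # sigma_x = +1 superposition state
rho = povm_update(rho, r=1.0, dt=20e-9, tau=150e-9)
z = np.trace(rho @ sz).real                    # partial collapse toward |0>
print(round(z, 4))
```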
Our ability to reconstruct the qubit state after an individual partial measurement is determined by the measurement quantum efficiency $\eta_m$. We have established that the noisy measurement tone contains information about the qubit state and causes ensemble dephasing at the rate $\Gamma$. In general, only a fraction, $\eta_m$, of this information is experimentally accessible, and the remainder is lost to environmental degrees of freedom. The measurement efficiency can be reduced from its ideal value of $\eta_m =1$ by losses between the cavity and the parametric amplifier, described by the collection efficiency $\eta_{col}$, and by added noise in the amplification chain, described by the amplification efficiency $\eta_{amp}$. In the experiments presented here, $\eta_m = \eta_{col} \eta_{amp} \sim 0.4$. The measurement strength depends linearly on $\eta_m$, and for dispersive measurements in the small phase shift limit is given by $S = 64 \chi^2 \bar{n} \eta_m \Delta t/ \kappa$.
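Combining $S = 64 \chi^2 \bar{n} \eta_m \Delta t/\kappa$ with $\tau = 4\Delta t/S$ gives $\tau = \kappa/(16\chi^2\bar{n}\eta_m)$ and hence $\Gamma\tau = 1/(2\eta_m)$: the ensemble dephases at least twice as fast as the accessible record projects the state, with equality only at unit efficiency. The small check below verifies this relation numerically; the parameter values are assumed, order-of-magnitude numbers for illustration only.

```python
import math

# Assumed, order-of-magnitude parameters (illustrative, not from a specific device)
chi   = -2 * math.pi * 0.2e6   # dispersive shift, chi/2pi = -0.2 MHz
kappa =  2 * math.pi * 4.0e6   # cavity linewidth, kappa/2pi = 4 MHz
nbar  = 1.0                    # mean intracavity photon number
eta_m = 0.4                    # total measurement quantum efficiency

Gamma = 8 * chi**2 * nbar / kappa            # ensemble dephasing rate
tau   = kappa / (16 * chi**2 * nbar * eta_m) # characteristic projection time
print(Gamma * tau)   # = 1/(2 eta_m): dephasing outpaces accessible information gain
```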
We now turn our attention to continuous quantum measurement. In Figure 2 we illustrate quantum trajectories in the limiting cases of strong and weak measurement. We consider a sequence of $n$ measurements occurring at times $\{t_k = k \Delta t\}$ for $k = 0,1,...,n-1$, which result in a set of dimensionless measurement results $\{r_k\} = \{r_0,r_1,...,r_{n-1}\}$. If the qubit is prepared in a known initial state, then we can use the measurement results to track the qubit state as it evolves under measurement, computing the set of conditional qubit states $\{q_k\} = \{q_0,q_1,...,q_{n-1}\}$ corresponding to an individual measurement record $\{r_k\}$. Here the Bloch vector $q = (x,y,z)$ describes a general mixed single-qubit state in terms of the components $x \equiv \text{tr}[\rho\hat{\sigma}_x]$, $y \equiv \text{tr}[\rho\hat{\sigma}_y]$, and $z \equiv \text{tr}[\rho\hat{\sigma}_z]$, where $\rho$ is the qubit density matrix. In the limit where $\tau \lesssim \Delta t$, each time-step constitutes a (nearly) projective measurement. In the absence of any non-measurement dynamics, after the first time-step subsequent measurements will continue to project the qubit into the eigenstate corresponding to the initial measurement result. However, any additional dynamics, such as energy relaxation or Rabi driving, which occur on a timescale faster than $\Delta t$ will result in discontinuous jumps in the measurement record corresponding to quantum jumps \cite{vija11} of the qubit state (Fig. 2B,C).
In the opposite limit, where $\Delta t \ll \tau$, each measurement will only slightly perturb the qubit state. However, by performing a sequence of repeated partial measurements such that $t_{n-1} \gg \tau$, we can realize a projective measurement. In this case, the noisy detector signal can be used to reconstruct the diffusive trajectory of the qubit state as it is gradually projected toward a measurement eigenstate (Fig. 2E,F).
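A pure-Python sketch of this diffusive regime is given below (simulated, not measured, data; no Rabi drive and no extra environmental dephasing). Each step samples an outcome from the state-conditioned mixture of Gaussians and applies the Bayesian population update of the following section; after a total time much longer than $\tau$, the state is gradually but fully projected.

```python
import math, random

def weak_z_step(z, r, dt, tau):
    """Bayesian update of z = <sigma_z> for one weak measurement outcome r:
    the populations are reweighted by the Gaussian likelihoods of r."""
    g0 = (1 + z) * math.exp(-(r - 1)**2 * dt / (2 * tau))
    g1 = (1 - z) * math.exp(-(r + 1)**2 * dt / (2 * tau))
    return (g0 - g1) / (g0 + g1)

rng = random.Random(1)
dt, tau, z = 20e-9, 150e-9, 0.0
for _ in range(500):                       # total time 10 us >> tau
    # sample r from the state-conditioned mixture of Gaussians
    mu = 1.0 if rng.random() < (1 + z) / 2 else -1.0
    r = rng.gauss(mu, math.sqrt(tau / dt))
    z = weak_z_step(z, r, dt, tau)
print(round(z))   # the qubit has been projected: z is (very nearly) +1 or -1
```

Recording $z$ at every step instead of only at the end traces out a single diffusive trajectory of the kind sketched in Figure 2F.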
\section{Reconstructing the conditional quantum state}
\label{cond}
In this section, we discuss in detail how to reconstruct the qubit state conditioned on an individual measurement outcome. One approach, taken in references \cite{roch14, tan14}, is to solve a stochastic master equation for the conditional qubit state. Here, we instead describe a phenomenological approach based on Bayesian statistics \cite{koro11}, which provides a particularly simple treatment of single-qubit trajectories and was used in references \cite{murc13} and \cite{webe14}. As recently demonstrated in reference \cite{tan14}, for a single qubit under weak measurement and weak Rabi driving, both approaches yield similar results.
Consider a qubit prepared in the initial state $\rho(t = 0)$, which is weakly measured for a time $\Delta t$, yielding a measurement result $r$. Here, we show how to apply Bayes rule of conditional probabilities to update our knowledge of the qubit state after the measurement. We first focus on the case of a $z-$measurement, with $\Omega = 0$. From Bayes rule, we have
\begin{align}
\label{prob}
P(i|r) =&\, \dfrac{P(r|i)P(i)}{P(r)},
\end{align}
\noindent where $i$ describes the basis states $\{\ket{0},\ket{1}\}$. Here, the initial probabilities for finding the qubit in the ground or excited states are given by $P(0) = \rho_{00}(t\! =\!0)$ and $ P(1) = \rho_{11}(t\! = \!0)$. By expressing the measurement distributions $P(r|i)$ explicitly and taking the ratio of the conditional probabilities in equation \eqref{prob} for both basis states, we find that
\begin{eqnarray}
\label{diag}
\frac{\rho_{11}(\Delta t)}{\rho_{00}(\Delta t)} = \frac{\rho_{11}(0)}{\rho_{00}(0)}\frac{\text{exp}[-(r+1)^2/2a^2]}{\text{exp}[-(r-1)^2/2a^2]}.
\end{eqnarray}
\noindent For a qubit initially prepared in the state $q_I = (1,0,0)$ we find that
\begin{eqnarray}
\label{zz}
z^z= \text{tanh}\left(\frac{r\Delta t}{\tau}\right).
\end{eqnarray}
\noindent where the superscript `$z$' denotes a $z-$measurement.
Note that thus far we have used a classical rule of conditional probabilities to determine how the qubit populations evolve under measurement. Following reference \cite{koro11}, we account for the qubit coherence through the phenomenological assumption that
\begin{eqnarray}
\label{xz}
x^z = \sqrt{1-(z^z)^2}e^{-\gamma \Delta t}
\end{eqnarray}
\noindent Here the first term enforces normalization and the second term reflects our imperfect knowledge of the environment and leads to qubit dephasing characterized by the rate $\gamma = \Gamma - 1/2\tau$, where $\Gamma = 1/2\tau\eta_m + 1/T_2^*$ is the ensemble dephasing rate and $T_2^*\sim 20 \,\mu$s is the characteristic timescale for extra environmental dephasing.
For the case of a $\phi-$measurement, $z$ remains zero, and $x$ and $y$ are periodic in the accumulated qubit phase shift, and are given by \cite{koro11}
\begin{align}
\label{xp}&x^{\phi} = \text{cos}\left(\frac{r \Delta t}{\tau}\right)e^{-\gamma \Delta t}, \\
\label{yp}&y^{\phi} = -\text{sin}\left(\frac{r \Delta t}{\tau}\right)e^{-\gamma \Delta t},
\end{align}
\noindent where the superscript `$\phi$' denotes a $\phi-$measurement. Figure 3 illustrates the conditional quantum state as a function of $r$ for a $z-$measurement (panel A) and a $\phi-$measurement (panel B), with $\tau = 600$ ns and $\Delta t = 400$ ns. Note that the dephasing rate $\gamma$ due to the inaccessible part of the measurement signal is the same regardless of our choice of amplification axis. In both cases, a measurement outcome of $r = 0$ will leave $y$ and $z$ unchanged, but $x$ is reduced by a factor of $e^{-\gamma \Delta t}$.
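Equations \eqref{zz}--\eqref{yp} are simple enough to evaluate directly. The sketch below computes the conditional Bloch vector for both measurement types using the parameter values quoted for Figure 3; it is an illustration of the update formulas, not an analysis of experimental data.

```python
import math

def conditional_state(r, dt, tau, gamma, kind):
    """Conditional Bloch vector (x, y, z) after a single outcome r, for a
    qubit prepared in (1,0,0): Eqs. (zz)/(xz) for kind='z', (xp)/(yp) for 'phi'."""
    decay = math.exp(-gamma * dt)
    if kind == "z":
        z = math.tanh(r * dt / tau)
        return (math.sqrt(1 - z * z) * decay, 0.0, z)
    phi = r * dt / tau              # accumulated measurement-induced phase
    return (math.cos(phi) * decay, -math.sin(phi) * decay, 0.0)

# parameters of Figure 3: dt = 400 ns, tau = 600 ns, gamma = 1.3e6 s^-1
x, y, z = conditional_state(0.0, 400e-9, 600e-9, 1.3e6, "z")
print(round(x, 3), y, z)   # r = 0 leaves z unchanged; x shrinks by exp(-gamma*dt)
```

Sweeping `r` reproduces the curves of Figure 3A,B: a sigmoidal $z(r)$ for the $z-$measurement and sinusoidal $x(r)$, $y(r)$ for the $\phi-$measurement.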
A useful feature of dispersive CQED measurements is the ability to rapidly tune the measurement strength by changing the amplitude of the measurement tone. Therefore, it is straightforward to implement experimental sequences which combine partial and projective measurement. The sequence shown in Figure 3C is used to implement conditional quantum state tomography to verify that we can accurately account for the backaction of an individual measurement. The qubit is prepared in the initial state $(1,0,0)$, and weakly measured for a time $\Delta t$. Then, we perform an optional qubit rotation (of $\pi/2$ about the $\hat{y}$ axis to reconstruct $x$, $\pi/2$ about $-\hat{x}$ to reconstruct $y$, and no pulse to reconstruct $z$) followed by a projective measurement. For a given measurement outcome $r$, we perform a tomographic state reconstruction on the sub-ensemble of experimental iterations with similar measurement outcomes, in the range $r\pm\epsilon$, where $\epsilon \ll 1$. For superconducting qubits, this technique was first introduced in reference \cite{hatr13}, which considers the case of phase-preserving amplification. Shortly thereafter, this technique was demonstrated for phase-sensitive amplification \cite{murc13}.
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = .5\textwidth]{fig3v1}
\end{center}
\caption{\label{fig:fig3} Reconstructing the conditional quantum state. (A,B) The conditional quantum state after a measurement result $r$ for a qubit initially prepared in the state $(1,0,0)$, with $\Delta t = 400$ ns, $\tau = 600$ ns, and $\gamma = 1.3\times10^6~\text{s}^{-1}$. Panel A depicts a $z-$measurement, and panel B depicts a $\phi-$measurement. (C) Experimental sequence for reconstructing the $x$ component of the conditional quantum state. (D) To perform quantum state tomography conditioned on the measurement result $r=1.7$, we average together the projective measurement outcomes for the sub-ensemble of measurement outcomes where $r = 1.7 \pm \epsilon$.}
\end{figure}
\section{Tracking individual quantum trajectories}
\label{traj}
Consider a qubit initially prepared in a known state $q_I$, which undergoes a sequence of $n$ partial measurements with outcomes $\{r_k\}$, as described in section \ref{cont}. In the limit where the duration $\Delta t$ of each measurement approaches zero, the set of conditional states $\{q_k\}$ describes the quantum trajectory $q(t)$. For simplicity, from here on we restrict our discussion to the case of a $z-$measurement. When $\Omega = 0$, we can calculate the conditional quantum state at each time-step $t_k = k \Delta t$ from equations \eqref{zz} and \eqref{xz} using only the initial state and the time-averaged measurement signal $\bar{r} = (1/k) \sum_{j=0}^{k-1}r_j$. However, when $\Omega > 0$ the measurement dynamics do not commute with the Rabi drive, so the order of the measurement outcomes matters, and $\bar{r}$ no longer contains sufficient information to reconstruct the quantum trajectory. Instead, if $\Omega \Delta t \ll 1$ we can perform a sequential two-step state update procedure introduced in reference \cite{webe14}. For each time-step $t_k$, we calculate $q_k$ by first applying a Bayesian update to the state $q_{k-1}$ to account for the measurement result $r_k$, and then by applying a unitary rotation to account for the Rabi drive during the time $\Delta t$. Example quantum trajectories are shown in Figure 4C,D for $\Omega/2\pi = 0$ and $0.4$ MHz, respectively, and $\tau = 1.28 \, \mu$s. The corresponding ensemble average evolution is shown in panels A and B.
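The two-step update can be sketched in a few lines of Python (simulated outcomes, parameters of Figure 4, $y = 0$ throughout). The Bayesian step generalizes equations \eqref{zz} and \eqref{xz} to an arbitrary initial state by reweighting the populations with the Gaussian likelihoods of $r$; the unitary step is a rotation by $\Omega\Delta t$ about the $y$ axis. With $\gamma = 0$ a pure state remains pure, which the final line checks.

```python
import math, random

def two_step_update(x, z, r, dt, tau, gamma, omega):
    """One time-step: Bayesian measurement update for outcome r, then a
    rotation by omega*dt about the y axis for the Rabi drive (y = 0 here)."""
    g0 = math.exp(-(r - 1)**2 * dt / (2 * tau))   # likelihood of r given |0>
    g1 = math.exp(-(r + 1)**2 * dt / (2 * tau))   # likelihood of r given |1>
    p0, p1 = (1 + z) / 2, (1 - z) / 2
    norm = p0 * g0 + p1 * g1
    z_m = (p0 * g0 - p1 * g1) / norm              # updated populations
    x_m = x * math.sqrt(g0 * g1) / norm * math.exp(-gamma * dt)  # coherence
    th = omega * dt                               # Rabi rotation angle
    return (x_m * math.cos(th) - z_m * math.sin(th),
            x_m * math.sin(th) + z_m * math.cos(th))

rng = random.Random(7)
x, z = 1.0, 0.0                                   # initial state (1, 0, 0)
dt, tau, omega = 20e-9, 1.28e-6, 2 * math.pi * 0.4e6
for _ in range(100):                              # 2 us of drive plus measurement
    mu = 1.0 if rng.random() < (1 + z) / 2 else -1.0
    r = rng.gauss(mu, math.sqrt(tau / dt))
    x, z = two_step_update(x, z, r, dt, tau, 0.0, omega)
print(round(x * x + z * z, 9))   # with gamma = 0 the trajectory stays pure
```

For $x = 1$, $z = 0$ a single step reduces exactly to equations \eqref{zz} and \eqref{xz}; repeating the loop with fresh seeds generates an ensemble of trajectories like those in Figure 4C,D.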
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = .5\textwidth]{fig4v2}
\end{center}
\caption{\label{fig:fig4} Reconstructing individual quantum trajectories. Here, $\tau = 1.28 \, \mu$s, $\gamma = 2.7\times 10^{-7}s^{-1}$, and $\Omega/2\pi = 0$ (A, C) and $0.4$ MHz (B,D). Panels A and B depict the ensemble average evolution. Panels C and D display simulated individual quantum trajectories ending at $t_n = 2 \, \mu$s, with the $x, y$, and $z$ components depicted in blue, red, and black, respectively. The orange regions represent a matching window of $\epsilon = 0.05$ at $t = 0.66 \, \mu$s (C) and $1.48 \, \mu$s (D). Sample trajectories that end within the matching window are shown in other colors. }
\end{figure}
While previous experiments in other systems have reconstructed individual diffusive quantum trajectories \cite{guer07}, reference \cite{murc13} was the first to use conditional quantum state tomography to verify that the trajectories were reconstructed accurately. Here, we present a brief outline of the tomographic validation procedure. We perform a large number of experimental iterations ending at different times $t_f$, which are followed by a qubit rotation and a projective measurement. We use a single full-length experimental iteration (with $t_f = (n-1) \Delta t$) to generate a target trajectory, denoted $\tilde{q}(t) \equiv (\tilde{x}(t),\tilde{y}(t), \tilde{z}(t))$. Then, for each experimental sequence of total measurement duration $t_k$ (and a given orientation of tomography pulse), we compute the quantum trajectory $q(t)$. We perform conditional quantum state tomography separately at each time $t_k$ using the subset of experimental iterations with $x(t_k) = \tilde{x}(t_k)\pm\epsilon$ and $z(t_k) = \tilde{z}(t_k)\pm\epsilon$, where $\epsilon \ll 1$ and we have assumed that $y=0$. The orange shaded regions in panels C and D of Figure 4 represent matching windows at $t_k = 0.66 \, \mu$s and $1.48 \, \mu$s, respectively. Trajectories which fall within the matching window at $t_k$ are used in the tomographic reconstruction of $q(t_k)$.
\section{Distributions of trajectories}
\label{dist}
By tomographically reconstructing individual quantum trajectories, as discussed above and initially demonstrated in reference \cite{murc13}, we have proven that we can accurately track the qubit state over the course of any individual measurement. In this section, we consider how quantum trajectory experiments are useful for building an intuition for how the qubit state is most likely to evolve under measurement. As discussed in reference \cite{webe14}, distributions of quantum trajectories offer a convenient qualitative tool for visualizing the interplay between measurement dynamics and unitary evolution.
The greyscale histograms in panels A and B of Figure 5 display the simulated distribution of quantum trajectories for $\tau = 1.28 \, \mu$s, $\gamma = 2.7\times 10^{-7}s^{-1}$, and $\Omega/2\pi = 0.4$ MHz. Note that due to the Rabi drive, the measurement initially projects the qubit preferentially toward the excited state ($z = -1$). At intermediate times a wide range of qubit states are possible, and after half a Rabi period the qubit is preferentially projected toward the ground state. In experiments with superconducting qubits, $\tau$ and $\Omega$ can be readily tuned, and distributions of quantum trajectories are experimentally accessible for a wide range of parameters.
It is also possible to consider the conditional quantum dynamics of the sub-ensemble of trajectories which end in a particular quantum state, a procedure known as post-selection. Panels C and D of Figure 5 display the distribution of trajectories which end in the final state $x_F = 0.1 \pm 0.08$, $z_F = 0.55 \pm 0.08$. By analyzing the statistical properties of such distributions, it is possible to answer questions of broad interest in the field of quantum control. The experiments of reference \cite{webe14} focus on one such question: what is the most probable path through quantum state space connecting an initial state $\ket{\psi_i}$ and a final state $\ket{\psi_f}$ in a given time $T$?
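The qualitative features of such distributions can be reproduced with a toy simulation. The sketch below propagates an ensemble of unit-efficiency diffusive trajectories by alternating Rabi rotations with a Gaussian Bayesian measurement update, then post-selects on the final $z$; the update rule and the parameter values are a simplified stand-in for the full SME used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(1)

n_traj, n_steps = 2000, 64
dt, tau = 0.02, 1.28                 # step and characteristic measurement time (us)
omega = 2 * np.pi * 0.4              # Rabi frequency (rad/us), as in the text
s = np.sqrt(tau / dt)                # width of one dimensionless weak readout

x, z = np.ones(n_traj), np.zeros(n_traj)   # ensemble starts in |+x>
z_hist = np.empty((n_steps + 1, n_traj))
z_hist[0] = z

theta = omega * dt
for k in range(n_steps):
    # coherent Rabi rotation in the x-z plane of the Bloch sphere
    x, z = x * np.cos(theta) - z * np.sin(theta), z * np.cos(theta) + x * np.sin(theta)
    # weak z-measurement outcome: double-Gaussian mixture centered at +/-1
    p0 = (1 + z) / 2
    r = rng.normal(np.where(rng.random(n_traj) < p0, 1.0, -1.0), s)
    # Gaussian Bayesian update of populations and coherence (unit efficiency)
    w0 = p0 * np.exp(-(r - 1) ** 2 / (2 * s ** 2))
    w1 = (1 - p0) * np.exp(-(r + 1) ** 2 / (2 * s ** 2))
    p0n = w0 / (w0 + w1)
    x = x * np.sqrt(p0n * (1 - p0n) / np.clip(p0 * (1 - p0), 1e-12, None))
    z = 2 * p0n - 1
    z_hist[k + 1] = z

# greyscale-histogram data: histogram z_hist[k] over the ensemble at each step k;
# post-selection keeps the sub-ensemble ending near a chosen z_F
sel = np.abs(z_hist[-1] - 0.55) < 0.08
print("post-selected", int(sel.sum()), "of", n_traj, "trajectories")
```

Histogramming `z_hist[k]` at each step reproduces the greyscale panels, and restricting to `sel` gives the post-selected distribution.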
One straightforward theoretical approach to this problem would be to solve the stochastic master equation (SME) numerically for a large ensemble of repetitions, and then to perform statistical analysis on the sub-ensemble of trajectories which end in $\ket{\psi_f}$ at time $T$. An alternative approach based on an action principle for continuous quantum measurement was developed in reference \cite{chan13b}. The action principle naturally incorporates post-selection and yields a set of ordinary (non-stochastic) differential equations for the most probable path which are simpler to solve numerically than the full SME. In the limit of no post-selection, this approach is consistent with the SME formulation. The results of reference \cite{webe14} experimentally verify the predictions of this action principle for the case of a single qubit under simultaneous measurement and Rabi drive. However, the action principle is a general theory which can be applied to a wide variety of quantum systems and may prove useful in designing optimal quantum control protocols.
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = .48\textwidth]{fig5v1}
\end{center}
\caption{\label{fig:fig4} Greyscale histograms of quantum trajectories based on $5\times 10^{4}$ simulated trajectories. Here, $\tau = 1.28 \, \mu$s, $\gamma = 2.7\times 10^{-7}s^{-1}$, and $\Omega/2\pi = 0.4$ MHz. Histograms are normalized such that the most frequent value at each time point is $1$. Panels A and B depict the full distribution of $x$ (A) and $z$ (B) trajectories. Panels C and D display the sub-ensemble of trajectories which end in the final state $x_F = 0.1 \pm 0.08$, $z_F = 0.55 \pm 0.08$.}
\end{figure}
\section{Time-symmetric state estimation}
\label{time}
We have so far focused on the use of the quantum state as a predictive tool; the quantum trajectories presented in the previous sections describe the evolution of the expected outcomes of the observables $\sigma_x,\sigma_y,\sigma_z$, which relate in a straightforward way to the probability of obtaining a certain outcome in subsequent projective tomography measurements. However, it is also possible to follow the qubit state evolution \emph{backward} in time to predict an unknown measurement result in the past.
Consider the following guessing game: two experimenters can perform measurements on the same quantum system. At a time $t$ the first experimenter makes a measurement of some observable $\Omega_m$ and hides the result. The second experimenter then must guess the outcome $m$ that the first experimenter received. If the second experimenter only has access to the quantum system's state before the first experimenter's measurement, then the theory of POVMs provides the second experimenter with a probability for each outcome $m$ and the ability to make the best possible guess. However, if the second experimenter is allowed to probe the quantum system after the first experimenter has conducted her measurement, can he make a better prediction for the hidden result? Indeed, since more information about the system is available at a later time, the second experimenter can make more confident predictions. It can be shown \cite{wise02, gamm13} that the probability of the outcome $m$ is given by
\begin{eqnarray}
P_p(m) = \frac{\mathrm{Tr}(\Omega_m \rho_t \Omega_m^\dagger E_t)}{\sum_m \mathrm{Tr}(\Omega_m \rho_t \Omega_m^\dagger E_t)}, \label{eq:pqs}
\end{eqnarray}
where $\rho_t$ is the system density matrix at time $t$, conditioned on previous measurement outcomes and propagated forward in time until time $t$, while $E_t$ is a matrix which is propagated backwards in time in a similar manner and accounts for the time evolution and measurements obtained after time $t$. The subscript $p$ denotes ``past'', and in \cite{gamm13} it was proposed that, if $t$ is in the past, the pair of matrices $(\rho_t,E_t)$, rather than $\rho_t$ alone, is the appropriate object to associate with the state of a quantum system at time $t$. It is worth noting that information from before the first experimenter's measurement, which is encoded in $\rho_t$, and information from after this measurement (encoded in $E_t$) play formally equivalent roles in the prediction for the experimenter's result. It is natural that full measurement records contain more information about the system, and several precision probing theories \cite{mank09,mank09pra,mank11,arme09,whea10} have incorporated full measurement records.
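For a single qubit, Eq.~(\ref{eq:pqs}) is straightforward to evaluate numerically. The sketch below uses projective $\sigma_z$ measurement operators and illustrative matrices for $\rho_t$ and $E_t$: a trivial $E_t$ recovers the ordinary Born rule, while an informative $E_t$ sharpens the retrodiction.

```python
import numpy as np

def past_probability(rho, E, kraus_ops):
    """P_p(m) = Tr(O_m rho O_m^dag E) / sum_m Tr(...), as in Eq. (pqs)."""
    weights = np.array([np.trace(O @ rho @ O.conj().T @ E).real for O in kraus_ops])
    return weights / weights.sum()

# Projective measurement of sigma_z: Omega_0 = |0><0|, Omega_1 = |1><1|
P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # qubit in |+x> before
E = np.eye(2, dtype=complex)                              # no later information

# With a trivial E, the past prediction reduces to the usual Born rule.
print(past_probability(rho, E, [P0, P1]))                 # -> [0.5 0.5]

# A later record pointing strongly to |0> sharpens the retrodiction.
E = np.array([[0.9, 0], [0, 0.1]], dtype=complex)
print(past_probability(rho, E, [P0, P1]))                 # -> [0.9 0.1]
```

The same function applies unchanged to weak measurements, by passing the corresponding non-projective Kraus operators.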
Recent experiments have applied Eq.(\ref{eq:pqs}) to systems with Rydberg atoms \cite{ryba14} and superconducting qubits \cite{tan14}, confirming how full measurement records allow more confident predictions for measurements performed in the past. This applies to both projective and weak (weak value) measurements and in the case of weak measurements the experiments reveal how the orthogonality of initial ($\rho_t$) and final ($E_t$) states leads to the occurrence of anomalous weak values \cite{tan14}.
\section{Two-qubit trajectories}
\label{two}
To this point, we have focused on the trajectories of a single-qubit system. However, the trajectory formalism extends readily to the dynamics of multiple-qubit systems. Such systems are of interest because they allow us to directly observe and study the generation of entanglement on a single-shot basis \cite{will08b}. In particular, we study a cascaded cQED system comprised of two superconducting qubits housed in two separate cavities that are sequentially probed in reflection by a single coherent state (Figure \ref{fig:fig6}). The quantum trajectory formalism for such a system was first developed in 1993 \cite{carmbook}, and such trajectories were observed experimentally in 2014 \cite{roch14}.
The Hamiltonian for this cascaded quantum system is given by
\begin{equation}
H = H_0 + \chi_1 a^\dagger a \sigma_z^1 + \chi_2 b^\dagger b \sigma_z^2 - i \frac{\sqrt{\kappa_1\kappa_2 \eta_{loss}}}{2} (a^\dagger b - b^\dagger a),
\end{equation}
where $H_0$ comprises the uncoupled Hamiltonians of the two cavities and qubits; $\chi_i$ are the dispersive shifts; $a$ $(b)$ and $a^\dagger$ $(b^\dagger)$ are the annihilation and creation operators for the first (second) cavity; $\kappa_i$ are the decay rates of the cavities; and $\eta_{loss} \approx 0.8$ represents the transmission efficiency between the two cavities.
\begin{figure*}
\begin{center}
\includegraphics[angle = 0, width = .78\textwidth]{fig6v3}
\end{center}
\caption{\label{fig:fig6} Two-qubit trajectories. (A) Schematic of the measurement setup. (B) The cascaded cavities are probed with a coherent microwave tone, initially aligned along the $X_1$ quadrature. After probing both cavities, the tone can acquire three different phase shifts: $+\Delta\theta$, $-\Delta\theta$ and $0$, corresponding to the states $\ket{11}$, $\ket{00}$ and $\ket{01}/\ket{10}$, respectively. (C) Conditional quantum state after a measurement result $r$ of length $t_k=k\Delta t=0.8\,\mu$s, for both qubits initially prepared in the maximally superposed state $(\ket{0}+\ket{1})/\sqrt{2}\otimes(\ket{0}+\ket{1})/\sqrt{2}$ and $\tau=0.75\,\mu$s. (D) Single quantum trajectory of the cascaded two-qubit system. The color code is similar to the one used in panel C. The inset shows the corresponding concurrence.}
\end{figure*}
In the multiple-qubit regime, it is more convenient to work in the measurement basis ($\ket{00}$, $\ket{01}$, $\ket{10}$ and $\ket{11}$) rather than the Pauli basis set ($\sigma_i \otimes \sigma_j$), since the measurement in the multi-qubit case does not project along a single-qubit Pauli operator. In the experiments described in reference \cite{roch14}, the sequential measurement realizes a half-parity operation (Fig.~\ref{fig:fig6}A,B): the measurement tone acquires distinct phase shifts of $\pm \Delta\theta$ for the even-parity states $\ket{00}$ and $\ket{11}$, and an identical (null) phase shift in the odd-parity subspace ($\ket{01}$ and $\ket{10}$).
In cascaded quantum systems, the effect of the losses between the subsystems is of primary importance and needs to be fully taken into account when a quantitative description is required \cite{roch14}. However, in this review we choose to set $\eta_{loss}=1$ (zero losses) for the sake of simplicity. In addition, we assume that the dispersive shifts of the two cavities are equal ($\chi_1=\chi_2=\chi$), as are the decay rates ($\kappa_1=\kappa_2=\kappa$).
Similarly to the single-qubit case, we can define the measurement outcome $V_k$ as the time-average of the $X_2$ quadrature voltage: $V_k=1/(k\Delta t)\int_0^{k\Delta t}V_{X_2}(t) dt$. The dimensionless measurement outcome is thus given by $r_k=2V_k/\Delta V$, where $\Delta V \propto \Delta\theta$ is defined as the distance between the measured Gaussian histogram centers for $\ket{00}$ and $\ket{01}/\ket{10}$. The measurement realizes a projection on a timescale $\tau \equiv 4\Delta t/S$ with the dimensionless measurement strength $S=64\chi^2\bar{n}\eta_m\Delta t/\kappa$.
The formalism for generating a joint qubit trajectory is quite similar to that of the single-qubit case. We collect a series of measurements $\{r_k\}$ at times $\{t_k\}$, and use these measurements to calculate the conditional density matrices $\{\rho_k^{ij,lm}\}$, where $\{ij, lm\}$ index the computational states. The diagonal density matrix elements can be calculated using Bayes' rule, for example:
\begin{equation}
\frac{\rho_k^{00,00}}{\rho_k^{11,11}}= \frac{\rho_0^{00,00}\, e^{-\left(r_k +2\right)^2/2\sigma^2}}{\rho_0^{11,11}\, e^{-\left(r_k -2\right)^2/2\sigma^2}}.
\label{diags}
\end{equation}
Here, $\sigma$ is the width of the Gaussian histograms, which decreases as $1/\sqrt{\Delta t}$. The off-diagonal density matrix elements can also be calculated within the same Bayesian formalism. Neglecting internal losses in the cavity and $T_1$ relaxation, the off-diagonal density matrix elements $\rho_k^{ij,lm}$ are given by:
\begin{eqnarray}
&& \label{offdiags} \hspace{-0.7cm} |\rho^{ij,lm}_{k}|= |\rho^{ij,lm}_{0}| \,
\frac{\sqrt{\rho^{ij,ij}_{k}\rho^{lm, lm}_{k}}} {\sqrt{\rho^{ij,ij}_{0}\rho^{lm,lm}_{0}}} e^{-\gamma_{ij,lm} k\Delta t} \\
&&\hspace{-0.7cm} \times \text{ exp} \left[-\frac{k\Delta t}{2} \left( 1- \eta_m\right) |V_k^{ij} - V_k^{lm}|^2 \right] \nonumber,
\end{eqnarray}
\noindent where $V^{ij}_k$ is the average voltage corresponding to the qubits prepared in the state $\ket{ij}$. Here, the first term represents the Bayesian update and includes the intrinsic $T_2^*$ dephasing of the matrix element at rate $\gamma_{ij,lm}$; the second term accounts for partial dephasing due to uncollected measurement photons. Notice that in the case where $V_k^{01} = V_k^{10}$, there is no dephasing of the $\rho^{01,10}$ off-diagonal term due to nonunity $\eta_m$: the odd-parity subspace becomes protected with respect to added noise, and we expect the measurement to probabilistically generate entanglement in the odd-parity subspace.
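A schematic implementation of the update defined by Eqs.~(\ref{diags}) and (\ref{offdiags}) is sketched below. For simplicity it works in dimensionless $r$ units, with Gaussian centers at $-2$, $0$, $0$, $+2$ for $\ket{00}$, $\ket{01}$, $\ket{10}$, $\ket{11}$, and uses those centers as stand-ins for the voltages $V_k^{ij}$ in the loss-induced dephasing factor; all parameter values are illustrative.

```python
import numpy as np

def bayesian_update(rho0, r, sigma, t, eta=1.0, gamma=0.0):
    """Sketch of Eqs. (diags)/(offdiags): map one dimensionless half-parity
    outcome r to a conditional two-qubit density matrix.
    Basis order 00, 01, 10, 11; Gaussian centers in r units: -2, 0, 0, +2."""
    centers = np.array([-2.0, 0.0, 0.0, 2.0])
    like = np.exp(-(r - centers) ** 2 / (2 * sigma ** 2))
    old_diag = np.real(np.diag(rho0))
    diag = old_diag * like
    diag = diag / diag.sum()          # Bayes' rule for the populations

    rho = np.zeros((4, 4), dtype=complex)
    np.fill_diagonal(rho, diag)
    for i in range(4):
        for j in range(i + 1, 4):
            if rho0[i, j] == 0:
                continue
            # Bayesian coherence update; the second factor is the extra
            # dephasing from uncollected photons, which vanishes for the
            # odd-parity pair (01, 10) since their centers coincide
            scale = np.sqrt(diag[i] * diag[j] / (old_diag[i] * old_diag[j]))
            deph = np.exp(-gamma * t - (t / 2) * (1 - eta) * (centers[i] - centers[j]) ** 2)
            rho[i, j] = rho0[i, j] * scale * deph
            rho[j, i] = np.conj(rho[i, j])
    return rho

# both qubits prepared in (|0>+|1>)/sqrt(2): every matrix element equals 1/4
rho0 = np.full((4, 4), 0.25, dtype=complex)
rho = bayesian_update(rho0, r=0.1, sigma=1.0, t=0.8, eta=0.4)
print(np.round(np.real(np.diag(rho)), 3))
```

Note how the $\rho^{01,10}$ coherence is rescaled only by the Bayesian factor, reflecting the protection of the odd-parity subspace discussed above.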
Equations (\ref{diags}) and (\ref{offdiags}) provide a mapping $\{r_k\} \mapsto \{\rho_k\}$ at each measurement time $t_k=k\Delta t$. We can also reconstruct the trajectories experimentally using conditional tomography, as described in Section \ref{cond} and elaborated in reference \cite{roch14}. Figure~\ref{fig:fig6}C shows such a Bayesian mapping. However, as mentioned earlier, the losses between the cavities need to be accounted for; we therefore use the more refined formalism explained in reference \cite{roch14}. From this mapping, one can reconstruct the trajectory of a single iteration of the experiment.
As mentioned before, the main advantage of this cascaded system is its ability to generate entanglement between remote qubits. To quantify this entanglement, we can reconstruct the quantum trajectory of the concurrence, an entanglement monotone. A simplified definition is given by:
\begin{equation}
C_k=2\max(0,|\rho^{01,10}_k|-\sqrt{\rho_k^{00,00}\rho_k^{11,11}})
\end{equation}
Concurrence reaches a maximum value of 1 for a maximally-entangled Bell state, and is zero for joint qubit states that cannot be distinguished from separable or classically mixed states. An exemplar quantum trajectory of the joint qubit state and of the concurrence is shown in Figure~\ref{fig:fig6}D. We see an initial transient during which $C$ is zero, followed by a non-monotonic increase in $C$ as the joint qubit state stochastically projects toward the entangled manifold, reaching an eventual concurrence of $C\sim0.55$, indicating a highly nonclassical state.
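The simplified concurrence formula is easy to evaluate directly; the sketch below checks it on a separable product state and on the odd-parity Bell state (basis order $\ket{00},\ket{01},\ket{10},\ket{11}$).

```python
import numpy as np

def concurrence(rho):
    """Simplified concurrence for the states generated in this scheme:
    C = 2 max(0, |rho_{01,10}| - sqrt(rho_{00,00} rho_{11,11})).
    Basis order: 00, 01, 10, 11."""
    return 2 * max(0.0, abs(rho[1, 2]) - np.sqrt(rho[0, 0].real * rho[3, 3].real))

# separable product state: C = 0
rho_sep = np.full((4, 4), 0.25, dtype=complex)
print(concurrence(rho_sep))          # -> 0.0

# odd-parity Bell state (|01> + |10>)/sqrt(2): C = 1
rho_bell = np.zeros((4, 4), dtype=complex)
rho_bell[1, 1] = rho_bell[2, 2] = 0.5
rho_bell[1, 2] = rho_bell[2, 1] = 0.5
print(concurrence(rho_bell))         # -> 1.0
```

Applying this function to each $\rho_k$ along a trajectory yields the concurrence trajectory $C_k$ shown in the figure inset.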
\section{Outlook}
The experiments presented in this review demonstrate precise control and a detailed understanding of the process of continuous quantum measurement of a superconducting qubit. This knowledge may benefit a wide range of future research directions in quantum control and multi-qubit state estimation \cite{silb05, smit13}. In this final section we highlight one such research direction: measurement-based quantum feedback.
A continuous measurement record, which contains information about how the qubit state evolves in real time, can be incorporated into a feedback loop for a number of applications including state preparation, state stabilization, and continuous quantum error correction. Without feedback, a combination of projective measurement and unitary rotation can be used to probabilistically prepare an arbitrary qubit state \cite{john12}. Using feedback, the measurement result can be used to control a subsequent qubit rotation, allowing for deterministic state-reset protocols \cite{rist12,camp13} which can be repeated on a timescale much faster than $T_1$.
In the case of weak measurement, it is possible to prepare arbitrary states by measurement alone, without applying any subsequent qubit rotations. Without feedback, state preparation is probabilistic: one simply post-selects an ensemble of trajectories which end up in the desired state. With feedback, it is possible to prepare an arbitrary qubit state deterministically through adaptive measurement, as recently demonstrated using nitrogen vacancy centers \cite{blok14}.
In addition to state preparation, quantum feedback can also be used to stabilize a qubit state or trajectory. In reference \cite{vija12}, we used weak measurements and continuous quantum feedback to stabilize Rabi oscillations of a superconducting qubit. Reference \cite{camp13} demonstrated that stroboscopic projective measurements and feedback can be used to stabilize an arbitrary trajectory, such as Rabi or Ramsey oscillations. Furthermore, with the two-qubit setup from reference \cite{roch14}, it should be possible to use feedback to stabilize an entangled state.
Looking forward, proposals for fault-tolerant quantum computing rely on quantum error correction (QEC) protocols in which a single logical qubit is composed of many physical qubits. While many QEC schemes, such as surface codes, rely on discrete projective measurements of syndrome qubits \cite{brav98, chow14, bard14}, a wide body of QEC proposals are based instead on continuous measurement-based quantum feedback \cite{wisebook}. In these techniques, a single logical qubit is encoded in several physical qubits, and an error syndrome is detected by processing (or `filtering') a continuous measurement signal. The error signal is used to generate a suitable feedback Hamiltonian which corrects errors in real time.
By tomographically validating individual quantum trajectories, the experiments presented in this review have demonstrated the ability to correctly `filter' a measurement signal for one- and two-qubit systems. A sensible next step is to build a system of several qubits and attempt to correctly filter an error syndrome. The following step would be to feed back on this error syndrome to realize a single logical qubit whose lifetime exceeds that of its constituent physical qubits. Realizing this goal will require a robust multi-qubit architecture and improvements in the measurement quantum efficiency. Recent experiments \cite{bard14, kell14} have demonstrated that it is possible to individually measure and control $5\text{-}9$ qubits in a planar cQED architecture, and efforts to improve the measurement quantum efficiency are currently underway in a number of research groups. Although there are still formidable challenges to overcome, and the ultimate utility of measurement-based QEC in comparison to other methods remains an open question, an initial demonstration of measurement-based QEC may lie on the horizon.
\section*{Acknowledgements}
This research was supported in part by the Army Research Office, Office of
Naval Research and the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), through the Army Research Office. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of IARPA, the ODNI or the US government. MES acknowledges support from the Fannie and John Hertz Foundation.
\label{out}
\bibliographystyle{elsarticle-num}
\section{Introduction}
An orientation of a graph is {\em semi-transitive} if it is acyclic, and for any directed path $v_0\rightarrow v_1\rightarrow \cdots \rightarrow v_k$ either there is no arc between $v_0$ and $v_k$, or $v_i\rightarrow v_j$ is an arc for all $0\leq i<j\leq k$. An undirected graph is {\em semi-transitive} if it admits a semi-transitive orientation. The notion of a semi-transitive orientation generalizes that of a {\em transitive orientation}; it was introduced by Halld\'{o}rsson, Kitaev and Pyatkin \cite{HKP11} in 2011 as a powerful tool to study {\em word-representable graphs} defined via alternation of letters in words and studied extensively in recent years (see \cite{K17,KL15}). The hereditary class of semi-transitive graphs is precisely the class of word-representable graphs, and its significance is in the fact that it generalizes several important classes of graphs. In particular, we have the following useful fact.
\begin{theorem}[\cite{HKP16}]\label{3-color-graphs} Any $3$-colourable graph is semi-transitive. \end{theorem}
A {\em shortcut} $C$ in a directed acyclic graph is an induced subgraph on vertices $\{v_0,v_1,\ldots, v_k\}$ for $k\geq 3$ such that $v_0\rightarrow v_1\rightarrow \cdots \rightarrow v_k$ is a directed path, $v_0\rightarrow v_k$ is an arc, and there exist $0\leq i<j\leq k$ such that there is no arc between $v_i$ and $v_j$. The arc $v_0\rightarrow v_k$ in $C$ is called the {\em shortcutting arc}, and the path $v_0\rightarrow v_1\rightarrow \cdots \rightarrow v_k$ is the {\em long path} in $C$. Thus, an orientation is semi-transitive if and only if it is acyclic and shortcut-free.
The following lemma is an easy, but very helpful observation that will be used many times in this paper. Note that it was first proved in \cite{AKM15} for the case of $m=4$.
\begin{lemma}[\cite{AKM15}]\label{lemma} Suppose that an undirected graph $G$ has a cycle $C=x_1x_2\cdots x_mx_1$, where $m\geq 4$ and the vertices in $\{x_1,x_2,\ldots,x_m\}$ do not induce a clique in $G$. If $G$ is oriented semi-transitively, and $m-2$ edges of $C$ are oriented in the same direction (i.e. from $x_i$ to $x_{i+1}$ or vice versa, where the index $m+1:=1$) then the remaining two edges of $C$ are oriented in the opposite direction.\end{lemma}
\begin{proof} Clearly, if all arcs of $C$ have the same direction then we obtain a cycle; if $m-1$ arcs of $C$ have the same direction, we obtain a shortcut. So, the direction of both remaining arcs must be opposite.
\end{proof}
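The definition can also be checked mechanically: an orientation is semi-transitive if and only if it is acyclic and every directed path whose endpoints are joined by an arc induces a transitive tournament. A brute-force Python sketch (practical only for small graphs), illustrated on orientations of a $4$-cycle in line with Lemma~\ref{lemma}:

```python
def is_semi_transitive(n, arcs):
    """Check that the digraph on vertices 0..n-1 with the given arc set
    is acyclic and shortcut-free, i.e. a semi-transitive orientation."""
    arcs = set(arcs)
    adj = {v: [w for (u, w) in arcs if u == v] for v in range(n)}

    # acyclicity via depth-first search with colouring
    state = {}
    def has_cycle(v):
        state[v] = 1                      # on the current stack
        for w in adj[v]:
            if state.get(w) == 1:
                return True
            if w not in state and has_cycle(w):
                return True
        state[v] = 2                      # fully explored
        return False
    if any(v not in state and has_cycle(v) for v in range(n)):
        return False

    # shortcut-freeness: for every directed path v0 -> ... -> vk with the
    # arc v0 -> vk present, all pairs (vi, vj), i < j, must also be arcs
    def paths_from(path):
        yield path
        for w in adj[path[-1]]:
            if w not in path:
                yield from paths_from(path + (w,))
    for v in range(n):
        for p in paths_from((v,)):
            if (p[0], p[-1]) in arcs and any(
                    (p[i], p[j]) not in arcs
                    for i in range(len(p)) for j in range(i + 1, len(p))):
                return False
    return True

print(is_semi_transitive(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # cycle -> False
print(is_semi_transitive(4, [(0, 1), (1, 2), (2, 3), (0, 3)]))  # shortcut -> False
print(is_semi_transitive(4, [(0, 1), (2, 1), (2, 3), (0, 3)]))  # -> True
```

The three examples show the content of the lemma for $m=4$: orienting three or four edges of the $4$-cycle in the same direction forces a cycle or a shortcut, while reversing two edges gives a valid semi-transitive orientation.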
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance=1cm,auto,main
node/.style={circle,draw,inner sep=1pt,minimum size=2pt}]
\node[main node] (1) {{\tiny 1}};
\node[main node] (2) [right of=1,xshift=5cm] {{\tiny 2}};
\node[main node] (3) [below of=1,yshift=0.3cm] {{\tiny 3}};
\node[main node] (4) [below of=3,yshift=0.3cm] {{\tiny 4}};
\node[main node] (5) [below of=4,yshift=0.3cm] {{\tiny 5}};
\node[main node] (6) [below of=5,yshift=0.3cm] {{\tiny 6}};
\node[main node] (7) [below of=2,yshift=0.3cm] {{\tiny 7}};
\node[main node] (8) [below of=7,yshift=0.3cm] {{\tiny 8}};
\node[main node] (9) [below of=8,yshift=0.3cm] {{\tiny 9}};
\node[main node] (10) [below of=9,yshift=0.3cm] {{\tiny 10}};
\node[main node] (11) [below of=6,yshift=0.3cm] {{\tiny 11}};
\node[main node] (13) [below of=11,yshift=0.3cm] {{\tiny 13}};
\node[main node] (14) [below of=13,yshift=0.3cm] {{\tiny 14}};
\node[main node] (12) [below of=10,yshift=0.3cm] {{\tiny 12}};
\node[main node] (15) [below of=12,yshift=0.3cm] {{\tiny 15}};
\node[main node] (16) [below of=15,yshift=0.3cm] {{\tiny 16}};
\path
(1) edge (2)
edge (3)
edge [bend right=60] (4)
edge [bend right=60] (5)
edge [bend right=60] (6);
\path
(2) edge (7)
edge [bend left=60] (8)
edge [bend left=60] (9)
edge [bend left=60] (10);
\path
(3) edge [bend right=60] (11) edge (12) edge (16);
\path
(7) edge [bend left=60] (12) edge (11) edge (14);
\path
(4) edge (8) edge (9) edge (10) edge [bend right=60] (13);
\path
(5) edge (8) edge (9) edge (10) edge (16);
\path
(6) edge (8) edge (9) edge (10) edge (12);
\path
(8) edge [bend left=60] (15);
\path
(9) edge (14);
\path
(10) edge (11);
\path
(13) edge (15) edge (16) edge (11);
\path
(14) edge (15) edge (16);
\path
(12) edge (15);
\end{tikzpicture}
\caption{A minimal non-semi-transitive subgraph of $K(8,3)$}\label{K83-fig}
\end{center}
\end{figure}
Determining if a triangle-free graph is semi-transitive is an NP-hard problem \cite{HKP16}. The existence of non-semi-transitive triangle-free graphs has been established via Erd\H{o}s' theorem \cite{Erdos} by Halld\'{o}rsson and the authors \cite{HKP11} in 2011 (also see \cite[Section 4.4]{KL15}). However, no explicit examples of such graphs were known until recent work of the first author and Saito \cite{KS19}, who have shown {\em computationally} (using the freely available, user-friendly software \cite{G}) that a certain subgraph on 16 vertices and 36 edges of the triangle-free Kneser graph $K(8,3)$ is not semi-transitive; the subgraph is shown in Fig.~\ref{K83-fig}. Thus, $K(8,3)$ itself on 56 vertices and 280 edges is non-semi-transitive. The question of the existence of smaller triangle-free non-semi-transitive graphs has been raised in~\cite{KS19}.
In Section~\ref{sec2} we prove that the Gr\"otzsch graph in Fig.~\ref{Grotzsch-graph} on 11 vertices
is a smallest (by the number of vertices) non-semi-transitive triangle-free graph, and that the Chv\'atal graph in Fig.~\ref{Chvatal-graph} is the smallest triangle-free 4-regular non-semi-transitive graph.
In Section~\ref{sec3} we address the question on the existence of triangle-free semi-transitive graphs with chromatic number 4, and prove, in particular, that
Toft's graphs and the circulant graph $C(13;1,5)$ (the same as the Toeplitz graph $T_{13}(1,5,8,12)$) are such graphs.
Finally, in Section~\ref{sec4} we discuss some open problems.
\section{Non-semi-transitive orientability of the Gr\"otzsch graph and the Chv\'atal graph}\label{sec2}
The leftmost graph in Fig.~\ref{Grotzsch-graph} is the well-known Gr\"otzsch graph (also known as the Mycielski graph). It is well known \cite{Ch}, and easy to prove, that this graph is a minimal 4-chromatic triangle-free graph (and the only such graph on 11 vertices).
\begin{figure}
\begin{center}
\begin{tiny}
\begin{tabular}{ccc}
\begin{tikzpicture}[every node/.style={draw,circle}]
\graph [clockwise,math nodes] {
subgraph C [V={ {3}, {4}, {5}, {1}, {2} }, name=A, radius=2cm];
subgraph N [V={ {3'}, {4'}, {5'}, {1'}, {2'} }, name=B, radius=1cm];
subgraph W [V={ {0} }, name=C, radius=0cm];
A 1 -- B 2;
A 1 -- B 5;
A 2 -- B 1;
A 2 -- B 3;
A 3 -- B 2;
A 3 -- B 4;
A 4 -- B 3;
A 4 -- B 5;
A 5 -- B 1;
A 5 -- B 4;
\foreach \i in {1,...,5}{
C 1 -- B \i;
}
};
\end{tikzpicture}
&
\begin{tikzpicture}[every node/.style={draw,circle}]
\graph [clockwise,math nodes] {
subgraph C [V={ {3}, {4}, {5}, {1}, {2} }, name=A, radius=2cm];
subgraph N [V={ {3'}, {4'}, {5'}, {1'}, {2'} }, name=B, radius=1cm];
subgraph W [V={ {0} }, name=C, radius=0cm];
A 1 -- B 2;
A 1 -- B 5;
A 2 -- B 1;
A 2 -- B 3;
A 3 -- B 2;
A 3 -- B 4;
A 4 -- B 3;
A 4 -- B 5;
A 5 -- B 1;
A 5 -- B 4;
A 2 -> A 1;
A 3 -> A 2;
A 3 -> A 4;
A 5 -> A 1;
A 4 -> A 5;
\foreach \i in {1,...,5}{
C 1 -- B \i;
}
};
\end{tikzpicture}
&
\begin{tikzpicture}[every node/.style={draw,circle}]
\graph [clockwise,math nodes] {
subgraph C [V={ {3}, {4}, {5}, {1}, {2} }, name=A, radius=2cm];
subgraph N [V={ {3'}, {4'}, {5'}, {1'}, {2'} }, name=B, radius=1cm];
subgraph W [V={ {0} }, name=C, radius=0cm];
A 1 -- B 2;
A 1 -- B 5;
A 2 -- B 1;
A 2 -- B 3;
A 3 -- B 2;
A 3 -- B 4;
A 4 -- B 3;
A 4 -- B 5;
A 5 -- B 1;
A 5 -- B 4;
A 2 -> A 1;
A 2 -> A 3;
A 4 -> A 3;
A 5 -> A 1;
A 4 -> A 5;
\foreach \i in {1,...,5}{
C 1 -- B \i;
}
};
\end{tikzpicture}
\end{tabular}
\end{tiny}
\end{center}
\vspace{-5mm}
\caption{The Gr\"otzsch graph and two of its partial orientations}\label{Grotzsch-graph}
\end{figure}
\begin{theorem}\label{Knes-K62-thm} The Gr\"otzsch graph $G$ is a smallest (by the number of vertices) non-semi-transitive triangle-free graph. \end{theorem}
\begin{proof} To obtain a contradiction, suppose that $G$ is oriented semi-transitively. Then, the outer cycle formed by the vertices 1--5 either contains a directed path of length 3, or the longest directed path formed by the vertices is of length 2. Thus, we have two cases to consider. \\[-3mm]
\noindent
{\bf Case 1.} Taking into account symmetries, without loss of generality we can assume that $5 \rightarrow 1 \rightarrow 2 \rightarrow 3$ is a path of length 3, so that the orientation of the remaining two arcs must be $5 \rightarrow 4 \rightarrow 3$ by Lemma~\ref{lemma} as shown in the middle graph in Fig.~\ref{Grotzsch-graph}. Moreover, Lemma~\ref{lemma} can be used to complete orientations of the subgraphs induced by the vertices in the sets $\{1, 2, 3, 2'\}$, $\{1, 2, 1', 5\}$ and $\{3, 4, 5, 4'\}$, as shown in the left graph in Fig.~\ref{Grotzsch-graph-2}. We consider two subcases here depending on orientation of the arc $02'$.\\[-3mm]
\noindent
{\bf Case 1a.} Suppose $0 \rightarrow 2'$ is an arc. By Lemma~\ref{lemma},
\begin{itemize}
\item from the subgraph induced by $0, 2', 3, 4'$, we have $0\rightarrow 4'$;
\item from the subgraph induced by $0, 1', 5, 4'$, we have $0\rightarrow 1'$;
\item from the subgraph induced by $0, 1', 2, 3'$, we have $0\rightarrow 3'$ and $3' \rightarrow 2$;
\item from the subgraph induced by $2, 3, 4, 3'$, we have $3' \rightarrow 4$;
\item from the subgraph induced by $0, 3', 4, 5'$, we have $0\rightarrow 5'$ and $5' \rightarrow 4$.
\end{itemize}
Now if $5'\rightarrow 1$ were an arc, the subgraph induced by $0, 5', 1, 2'$ would be a shortcut, while if $1\rightarrow 5'$ were an arc, the subgraph induced by $1, 5', 4, 5$ would be a shortcut; a contradiction. \\[-3mm]
\noindent
{\bf Case 1b.} Suppose $2' \rightarrow 0$ is an arc. By Lemma~\ref{lemma},
\begin{itemize}
\item from the subgraph induced by $0, 5', 1, 2'$, we have $1\rightarrow 5'$ and $5' \rightarrow 0$;
\item from the subgraph induced by $1, 5, 4, 5'$, we have $4 \rightarrow 5'$;
\item from the subgraph induced by $0, 3', 4, 5'$, we have $4\rightarrow 3'$ and $3' \rightarrow 0$;
\item from the subgraph induced by $2, 3, 4, 3'$, we have $2 \rightarrow 3'$.
\end{itemize}
The contradiction is now obtained by the fact that there is no way to orient the arc $0\rightarrow 1'$ in the subgraph formed by $0, 1', 2, 3'$ without creating a cycle or a shortcut. \\[-3mm]
\begin{figure}
\begin{center}
\begin{tiny}
\begin{tabular}{cc}
\begin{tikzpicture}[every node/.style={draw,circle}]
\graph [clockwise,math nodes] {
subgraph C [V={ {3}, {4}, {5}, {1}, {2} }, name=A, radius=2cm];
subgraph N [V={ {3'}, {4'}, {5'}, {1'}, {2'} }, name=B, radius=1cm];
subgraph W [V={ {0} }, name=C, radius=0cm];
A 3 -> B 4 -> A 5;
A 3 -> B 2 -> A 1;
A 4 -> B 5 -> A 1;
A 1 -- B 2;
A 1 -- B 5;
A 2 -- B 1;
A 2 -- B 3;
A 3 -- B 2;
A 3 -- B 4;
A 4 -- B 3;
A 4 -- B 5;
A 5 -- B 1;
A 5 -- B 4;
A 2 -> A 1;
A 3 -> A 2;
A 3 -> A 4;
A 5 -> A 1;
A 4 -> A 5;
\foreach \i in {1,...,5}{
C 1 -- B \i;
}
};
\end{tikzpicture}
&
\begin{tikzpicture}[every node/.style={draw,circle}]
\graph [clockwise,math nodes] {
subgraph C [V={ {3}, {4}, {5}, {1}, {2} }, name=A, radius=2cm];
subgraph N [V={ {3'}, {4'}, {5'}, {1'}, {2'} }, name=B, radius=1cm];
subgraph W [V={ {0} }, name=C, radius=0cm];
A 4 -> B 5 -> A 1;
A 1 -- B 2;
A 1 -- B 5;
A 2 -- B 1;
A 2 -- B 3;
A 3 -- B 2;
A 3 -- B 4;
A 4 -- B 3;
A 4 -- B 5;
A 5 -- B 1;
A 5 -- B 4;
A 2 -> A 1;
A 2 -> A 3;
A 4 -> A 3;
A 5 -> A 1;
A 4 -> A 5;
\foreach \i in {1,...,5}{
C 1 -- B \i;
}
};
\end{tikzpicture}
\end{tabular}
\end{tiny}
\end{center}
\vspace{-5mm}
\caption{Two partial orientations of the Gr\"otzsch graph}\label{Grotzsch-graph-2}
\end{figure}
\noindent
{\bf Case 2.} If the longest directed path induced by the vertices 1--5 is of length 2 then, again using the symmetries, we can assume the following orientation of the arcs: $1\rightarrow 2\rightarrow 3$, $1\rightarrow 5$, $4\rightarrow 5$ and $4\rightarrow 3$ as shown in the rightmost graph in Fig.~\ref{Grotzsch-graph}. Moreover, Lemma~\ref{lemma} can be used to complete orientations of the subgraph induced by the vertices in $\{1, 2, 3, 2'\}$, as shown in the right graph in Fig.~\ref{Grotzsch-graph-2}. We consider two subcases here depending on orientation of the arc $02'$.\\[-3mm]
\noindent
{\bf Case 2a.} Suppose $0 \rightarrow 2'$ is an arc. By Lemma~\ref{lemma},
\begin{itemize}
\item from the subgraph induced by $0, 2', 3, 4'$, we have $0\rightarrow 4'$ and $4' \rightarrow 3$;
\item from the subgraph induced by $3, 4, 5, 4'$, we have $4'\rightarrow 5$;
\item from the subgraph induced by $0, 1', 5, 4'$, we have $0\rightarrow 1'$ and $1' \rightarrow 5$;
\item from the subgraph induced by $1, 2, 1', 5$, we have $1' \rightarrow 2$;
\item from the subgraph induced by $0, 1', 2, 3'$, we have $0\rightarrow 3'$ and $3' \rightarrow 2$.
\item from the subgraph induced by $2, 3, 4, 3'$, we have $3' \rightarrow 4$;
\item from the subgraph induced by $0, 3', 4, 5'$, we have $0\rightarrow 5'$ and $5' \rightarrow 4$.
\end{itemize}
Now if $5'\rightarrow 1$ were an arc, the subgraph induced by $0, 5', 1, 2'$ would be a shortcut, while if $1\rightarrow 5'$ were an arc, the subgraph induced by $1, 5', 4, 5$ would be a shortcut. A contradiction. \\[-3mm]
\noindent
{\bf Case 2b.} Suppose $2' \rightarrow 0$ is an arc. By Lemma~\ref{lemma},
\begin{itemize}
\item from the subgraph induced by $0, 5', 1, 2'$, we have $1\rightarrow 5'$ and $5' \rightarrow 0$;
\item from the subgraph induced by $1, 5, 4, 5'$, we have $4 \rightarrow 5'$;
\item from the subgraph induced by $0, 3', 4, 5'$, we have $4\rightarrow 3'$ and $3' \rightarrow 0$;
\item from the subgraph induced by $2, 3, 4, 3'$, we have $2 \rightarrow 3'$;
\item from the subgraph induced by $0, 1', 2, 3'$, we have $2\rightarrow 1'$ and $1' \rightarrow 0$;
\item from the subgraph induced by $1, 2, 1', 5$, we have $5 \rightarrow 1'$;
\item from the subgraph induced by $0, 1', 5, 4'$, we have $5\rightarrow 4'$ and $4' \rightarrow 0$.
\end{itemize}
Now if $3\rightarrow 4'$ were an arc, the subgraph induced by $2', 3, 4', 0$ would be a shortcut, while if $4'\rightarrow 3$ were an arc, the subgraph induced by $4, 5, 4', 3$ would be a shortcut; a contradiction. \\[-3mm]
Thus, $G$ is not semi-transitive, and its minimality follows from the above-mentioned fact that all triangle-free graphs on 10 or fewer vertices are 3-colorable, and thus semi-transitive by Theorem~\ref{3-color-graphs}.
\end{proof}
\begin{figure}
\begin{center}
\begin{tikzpicture}[node distance=1cm,auto,main node/.style={circle,draw,inner sep=2.5pt,minimum size=4pt}]
\node[main node] (9) {{\tiny 9}};
\node[main node] (8) [left of=9] {{\tiny 8}};
\node[main node] (10) [below right of=9] {{\tiny 10}};
\node[main node] (7) [below left of=8] {{\tiny 7}};
\node[main node] (6) [below of=7] {{\tiny 6}};
\node[main node] (5) [below right of=6] {{\tiny 5}};
\node[main node] (11) [below of=10] {{\tiny 11}};
\node[main node] (12) [below left of=11] {{\tiny 12}};
\node[main node] (2) [above left of=8,xshift=-1cm,yshift = 0.2cm] {{\tiny 2}};
\node[main node] (3) [above right of=9,xshift=1cm,yshift = 0.2cm] {{\tiny 3}};
\node[main node] (1) [below left of=5,xshift=-1cm,yshift = -0.2cm] {{\tiny 1}};
\node[main node] (4) [below right of=12,xshift=1cm,yshift = -0.2cm] {{\tiny 4}};
\path
(5) edge (6)
edge (12)
edge (9);
\path
(7) edge (6)
edge (8)
edge (11);
\path
(9) edge (8)
edge (10);
\path
(11) edge (10)
edge (12)
edge (7);
\path
(8) edge (12);
\path
(6) edge (10);
\path
(1) edge (2)
edge (4)
edge (12)
edge (7);
\path
(3) edge (2)
edge (4)
edge (11)
edge (8);
\path
(2) edge (6)
edge (9);
\path
(4) edge (5)
edge (10);
\end{tikzpicture}
\caption{The Chv\'atal graph}\label{Chvatal-graph}
\end{center}
\end{figure}
The well-known Chv\'atal graph is presented in Fig.~\ref{Chvatal-graph}. It is the minimal 4-regular triangle-free 4-chromatic graph \cite{Ch}. Using the software \cite{G}, we found that the Chv\'atal graph is not semi-transitive. We have also found an analytical proof of this fact via a long and tedious case analysis. Even when written using the specially developed short notation introduced in \cite{AKM15}, the proof takes several pages; therefore, we put the proof of our next theorem in the Appendix for the most patient and interested Reader.
\begin{theorem}\label{Chv-thm} The Chv\'atal graph $H$ is a minimal $4$-regular triangle-free non-semi-transitive graph. \end{theorem}
As it was shown in \cite{Ch}, the Chv\'atal graph $H$ is not 4-critical: it remains 4-chromatic after removal of the edge $56$ (a graph is called 4-{\em critical}, if it is 4-chromatic, but removal of any edge makes it 3-chromatic). The software \cite{G} shows that the graph $H\setminus \{56\}$ is still non-semi-transitive. A proof of this fact is very similar to the proof of Theorem~\ref{Chv-thm}, in particular, it is also tedious, long and does not bring any new insights, so we omit it.
Note also that proving that a graph $G$ is not semi-transitive immediately implies that the whole class of graphs containing $G$ as an induced subgraph is not semi-transitive; so, Theorems~\ref{Knes-K62-thm} and~\ref{Chv-thm} indeed give two classes of non-semi-transitive graphs.
\section{Semi-transitive triangle-free 4-chromatic graphs}\label{sec3}
As a matter of fact, no explicit examples of semi-transitively orientable triangle-free graphs with chromatic number 4, or larger, have been published yet. However, as was shown in \cite{HKP11}, the existence of such graphs easily follows from two well-known classical results presented below.
\begin{theorem}[\cite{Vitaver}]\label{Vit} A graph is $k$-chromatic if and only if the minimum possible length of the longest directed path among all its acyclic orientations is $k-1$.
\end{theorem}
\begin{theorem}[\cite{Erdos}]\label{Erd} For every $k\ge 2$ and $g\ge 3$ there exists a $k$-chromatic graph of girth $g$.
\end{theorem}
Indeed, Theorem~\ref{Vit} implies that every graph whose girth is larger than its chromatic number has a semi-transitive orientation (as there is no chance for a shortcut in an acyclic orientation of such a graph), and Theorem~\ref{Erd} claims that such graphs exist.
However, the existence of 4-chromatic semi-transitive graphs of girth $4$ does not follow from Theorems~\ref{Vit} and~\ref{Erd}. Below we present two explicit examples of such graphs.
\subsection{Circulant graphs}
A {\em circulant graph} $C(n; a_1,\ldots, a_k)$ is a graph with the vertex set $\{0,\ldots , n-1\}$ and an edge set
$$E=\{ij \ |\ (i-j)\pmod n \mbox{ or } (j-i)\pmod n \mbox{ is in } \{a_1,\ldots, a_k\}\}.$$
According to \cite{BT}, such graphs were first studied in 1932 by Foster, and the name comes from circulant matrices introduced by Catalan in 1846.
Circulant graphs have applications in distributed computer networks \cite{BCH}.
Note that circulant graphs are indeed the Cayley graphs on the cyclic groups $Z_n$; so, they are
vertex-transitive (i.e.\ for every pair of vertices there is an automorphism mapping one onto the other). Circulant graphs are also a particular case of Toeplitz graphs \cite{Ghorban}. Various results on semi-transitivity of Toeplitz graphs have been obtained in \cite{CKKK}.
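As a sanity check on the definition, the edge set of a circulant graph is easy to generate mechanically. The following Python sketch (a minimal illustration of ours, unrelated to the software \cite{G}) builds $C(n;a_1,\ldots,a_k)$ and confirms that $C(13;1,5)$, discussed below, is $4$-regular and triangle-free; together with the $4$-cycle $0,1,6,5$ this gives girth $4$.

```python
def circulant_edges(n, jumps):
    """Edge set of the circulant graph C(n; a_1, ..., a_k)."""
    return {frozenset((i, j)) for i in range(n) for j in range(n)
            if i != j and ((i - j) % n in jumps or (j - i) % n in jumps)}

edges = circulant_edges(13, {1, 5})
adj = {v: set() for v in range(13)}
for e in edges:
    u, v = tuple(e)
    adj[u].add(v)
    adj[v].add(u)

assert len(edges) == 26                                          # 13 * 4 / 2 edges
assert all(len(adj[v]) == 4 for v in adj)                        # 4-regular
assert all(not (adj[u] & adj[v]) for u in adj for v in adj[u])   # triangle-free
assert 1 in adj[0] and 6 in adj[1] and 5 in adj[6] and 0 in adj[5]  # a 4-cycle, so girth 4
```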
\begin{figure}
\begin{center}
\begin{tikzpicture}[every node/.style={draw,circle}]
\graph [clockwise,math nodes] {
subgraph C [V={ {1}, {2}, {3}, {4}, {5}, {6}, {7}, {8}, {9}, {10}, {11}, {12}, {0} }, name=A, radius=2.2cm];
A 1 -> A 13;
A 2 -> A 1; A 2 -> A 3; A 2 -> A 7; A 2 -> A 10;
A 3 -> A 4; A 3 -> A 8; A 3 -> A 11;
A 4 -> A 5; A 4 -> A 12;
A 5 -> A 13;
A 6 -> A 1; A 6 -> A 5; A 6 -> A 7; A 6 -> A 11;
A 7 -> A 8; A 7 -> A 12;
A 8 -> A 13;
A 9 -> A 1; A 9 -> A 4; A 9 -> A 8; A 9 -> A 10;
A 10 -> A 5; A 10 -> A 11;
A 11 -> A 12;
A 12 -> A 13;
};
\end{tikzpicture}
\caption{A semi-transitive orientation of the circulant graph $C(13;1,5)$}\label{Toeplitz-13-1-5-8-12}
\end{center}
\end{figure}
It is well-known that the circulant graph $C(13;1,5)$ (which is the same as the Toeplitz graph $T_{13}(1,5,8,12)$) is the smallest vertex-transitive 4-chromatic triangle-free graph \cite{JT}. Of course, it would be nice to add this graph to our collection of minimal non-semi-transitive 4-chromatic triangle-free graphs in the previous section, but the graph appears to be semi-transitive, as follows from the next theorem.
\begin{theorem}\label{Circ} The circulant graph $C(13;1,5)$ is a $4$-chromatic $4$-regular semi-transitive graph of girth $4$.
\end{theorem}
\begin{proof} Let $G:=C(13;1,5)$ and consider its orientation presented in Fig.~\ref{Toeplitz-13-1-5-8-12}. It is easy to verify
by successive deletion of sources and/or sinks that this orientation is acyclic.
The following two easy observations help in checking the absence of shortcuts.\\[-3mm]
\noindent {\bf Claim $1$.} If $v$ is a source or a sink in a directed graph and either all its neighbors are sinks in $G\setminus v$ or all of them are sources in $G\setminus v$ then $v$ does not lie in any
shortcut.
Indeed, assume $v$ lies in a shortcut with a long path $v_0\rightarrow v_1\rightarrow \cdots \rightarrow v_{k-1}\rightarrow v_k$. If $v$ is a sink then $v=v_k$, and thus, $v_{k-1}$ cannot be a source in $G\setminus v$ and $v_0$ cannot be a sink in $G\setminus v$. If $v$ is a source then $v=v_0$, and thus, $v_k$ cannot be a source in $G\setminus v$ and $v_1$ cannot be a sink in $G\setminus v$. \\[-3mm]
\noindent {\bf Claim $2$.} If $v$ is a source that lies in a shortcut, then there are two directed paths $P_0,P_1$ starting at $v$ such that $P_0$ consists of a single shortcutting arc $v\rightarrow u$ and $u$ is the $k$-th vertex of $P_1$ for some $k\ge 4$.
This claim follows directly from the definition of the shortcut.
By Claim 1, $0$ is not part of any shortcut in $G$, and $1$ does not lie in a shortcut in $G\setminus 0$. In the graph $G\setminus \{0,1\}$ the paths starting at $6$ are $\{65, 678, 67(12),6(11)(12)\}$, and the paths starting at $9$ are $$\{9(10)5, 9(10)(11)(12),945, 94(12),98 \}.$$ By Claim~2, neither of these vertices lies in a shortcut. Applying Claim~1 to $G\setminus \{ 0,1,6,9\}$, we remove successively the vertices $2$, $7$, and $8$. In the obtained graph, we exclude $10$
by Claim~2 (the only paths are $(10)5$ and $(10)(11)(12)$), and afterwards remove $5$ and $12$ by Claim~1. The remaining graph on the vertex set $\{3,4,11\}$ is a tree.
So, there are no shortcuts in $G$ and the considered orientation is semi-transitive.
\end{proof}
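The argument above can also be confirmed by exhaustive search. The Python sketch below (a brute-force helper of ours, not the software \cite{G}) tests acyclicity and then, for every arc $u\rightarrow w$, enumerates all directed $u$--$w$ paths on at least four vertices and checks that each induces a transitive tournament; the arc list is transcribed from Fig.~\ref{Toeplitz-13-1-5-8-12}.

```python
from itertools import combinations

def is_semi_transitive(n, arcs):
    """Brute-force test: acyclic and shortcut-free (feasible for small digraphs)."""
    arcset = set(arcs)
    succ = {v: [] for v in range(n)}
    for u, v in arcs:
        succ[u].append(v)

    colour = {v: 0 for v in range(n)}           # 0 = new, 1 = on stack, 2 = done
    def cyclic(v):
        colour[v] = 1
        for w in succ[v]:
            if colour[w] == 1 or (colour[w] == 0 and cyclic(w)):
                return True
        colour[v] = 2
        return False
    if any(colour[v] == 0 and cyclic(v) for v in range(n)):
        return False                            # not even acyclic

    def paths(u, w, path):                      # all directed u..w paths (a DAG, so finite)
        if u == w:
            yield path
        for x in succ[u]:
            yield from paths(x, w, path + [x])
    for u, w in arcs:                           # u -> w is a candidate shortcutting arc
        for p in paths(u, w, [u]):
            if len(p) >= 4 and any((p[i], p[j]) not in arcset
                                   for i, j in combinations(range(len(p)), 2)):
                return False                    # shortcut found
    return True

# arcs of the orientation of C(13;1,5) shown in the figure
arcs = [(1, 0), (2, 1), (2, 3), (2, 7), (2, 10), (3, 4), (3, 8), (3, 11),
        (4, 5), (4, 12), (5, 0), (6, 1), (6, 5), (6, 7), (6, 11),
        (7, 8), (7, 12), (8, 0), (9, 1), (9, 4), (9, 8), (9, 10),
        (10, 5), (10, 11), (11, 12), (12, 0)]
assert is_semi_transitive(13, arcs)
```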
As it was proved in \cite{Heu}, a connected $4$-regular circulant other than $C(13;1,5)$ has chromatic number $4$ if and only if it is isomorphic to the circulant graph $C(n;1,2)$ for some $n=3t+1$ or $n=3t+2$ where $t\ge 2$. Although all such circulants contain triangles, we would like to close the question on the semi-transitivity of 4-regular circulants by proving the following result.
\begin{theorem}\label{Circ2} Each $4$-regular circulant graph is semi-transitive.
\end{theorem}
Clearly, a disjoint union of semi-transitive graphs is semi-transitive, $K_5=C(5;1,2)$ admits a transitive orientation, and every 3-colorable graph is semi-transitive by Theorem~\ref{3-color-graphs}. Hence, Theorem~\ref{Circ2} is a direct corollary of the above mentioned result in \cite{Heu},
Theorem~\ref{Circ} and the following lemma.
\begin{lemma}\label{Circs} A circulant graph $C(n;1,2)$ is semi-transitive for each $n\ge 6$.
\end{lemma}
\begin{proof}
Consider a circulant graph $G=C(n;1,2)$ with the vertex set $V=\{0,1,\ldots, n-1\}$. Orient the edges of the subgraph induced by the subset $V_0=\{0,1,\ldots, n-3\}$ from lowest to highest (i.e. $0\rightarrow 1$, $0\rightarrow2$, $1\rightarrow2$, $1\rightarrow 3$, etc) and set the orientation of the remaining seven edges as follows:
$1\rightarrow n-1,\ 0\rightarrow n-1,\ 0\rightarrow n-2,\ n-2\rightarrow n-4,\ n-2\rightarrow n-3,\ n-2\rightarrow n-1,\ n-1\rightarrow n-3.$
It is easy to see that the orientation is acyclic. Assume that there is a shortcut $v_0\rightarrow \cdots \rightarrow v_k$ with a shortcutting arc
$v_0\rightarrow v_k$ where $k\ge 3$. Clearly, the shortcut cannot lie in $V_0$ since otherwise for the shortcutting arc we have $k\le 2$ by the definition of the circulant, a contradiction with $k\ge 3$. So, $n-2$ or $n-1$ must be in the shortcut. By symmetry, we may assume that the shortcut contains $n-2$ (otherwise, reverse all arcs and swap $n-1$ with $n-2$ and $i$ with $n-3-i$ for all $i=0,\ldots,n-3$). Since the longest path outgoing from $n-2$ has length $2$, $v_0\ne n-2$. But then the shortcut must contain the arc $0\rightarrow n-2$. Since $0$ is a source, we have $v_0=0,\ v_1=n-2$. There are only two paths of length at least $3$ starting with the arc $0\rightarrow n-2$ (namely, $0 \rightarrow n-2 \rightarrow n-1 \rightarrow n-3$ and $0 \rightarrow n-2 \rightarrow n-4 \rightarrow n-3$). But in both cases $G$ does not contain the shortcutting arc $0\rightarrow n-3$. So, the presented orientation is semi-transitive.
\end{proof}
\begin{remark} If $n=5$ then the orientation in Lemma~\ref{Circs} provides a transitive orientation of $K_5=C(5;1,2)$. \end{remark}
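The orientation used in the proof of Lemma~\ref{Circs} can be generated and checked programmatically. In the sketch below (an illustration of ours), \texttt{check} verifies, for several values of $n$, exactly the facts the proof relies on: the orientation has $2n$ arcs and is acyclic, there are exactly two directed paths on at least four vertices starting with the arc $0\rightarrow n-2$, both end in $n-3$, and $0$ and $n-3$ are not adjacent.

```python
def oriented_c_n_1_2(n):
    """Arcs of the orientation of C(n;1,2) used in the proof (assumes n >= 6)."""
    arcs = {(i, j) for i in range(n - 2) for j in (i + 1, i + 2) if j <= n - 3}
    arcs |= {(1, n - 1), (0, n - 1), (0, n - 2), (n - 2, n - 4),
             (n - 2, n - 3), (n - 2, n - 1), (n - 1, n - 3)}
    return arcs

def check(n):
    arcs = oriented_c_n_1_2(n)
    assert len(arcs) == 2 * n                   # C(n;1,2) has 2n edges
    succ, indeg = {}, {v: 0 for v in range(n)}
    for u, v in arcs:
        succ.setdefault(u, []).append(v)
        indeg[v] += 1
    # Kahn's algorithm: acyclic iff all n vertices get processed
    queue, seen = [v for v in range(n) if indeg[v] == 0], 0
    while queue:
        seen += 1
        for w in succ.get(queue.pop(), []):
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    assert seen == n
    # directed paths on >= 4 vertices that start with the arc 0 -> n-2
    long_paths, stack = [], [[0, n - 2]]
    while stack:
        p = stack.pop()
        if len(p) >= 4:
            long_paths.append(p)
        stack.extend(p + [w] for w in succ.get(p[-1], []))
    assert len(long_paths) == 2                 # exactly the two paths from the proof
    assert all(p[-1] == n - 3 for p in long_paths)
    assert (0, n - 3) not in arcs and (n - 3, 0) not in arcs  # no shortcutting arc

for n in range(6, 14):
    check(n)
```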
\subsection{Toft's graphs}
Another nice example of 4-chromatic semi-transitive graphs of girth 4 is given by
Toft's graphs $T_n$, introduced in \cite{Toft} as the first instances of dense 4-critical graphs (see \cite{Peg} for various constructions of dense critical graphs).
Let $n>3$ be odd. The construction of Toft's graph $T_n$ is as follows. It has a vertex set $V=A_1\cup A_2\cup A_3\cup A_4$ of $4n$ vertices, where $A_1$ and $A_4$ induce odd cycles $C_n$ and $A_2\cup A_3$ induces the complete bipartite graph $K_{n,n}$ with parts $A_2$ and $A_3$. There is also a perfect matching, all of whose edges connect either $A_1$ with $A_2$ or $A_3$ with $A_4$.
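The construction is easy to reproduce programmatically. The sketch below (an illustration of ours; the matching $A_1[i]$--$A_2[i]$, $A_3[i]$--$A_4[i]$ is one admissible choice of the perfect matching) builds $T_n$ and checks that it has $n^2+4n$ edges and is triangle-free.

```python
def toft_graph(n):
    """Toft's graph T_n (n > 3 odd), parts A1..A4 = consecutive blocks of n vertices."""
    A = [list(range(k * n, (k + 1) * n)) for k in range(4)]
    E = set()
    for part in (A[0], A[3]):                                   # two odd cycles C_n
        E |= {frozenset((part[i], part[(i + 1) % n])) for i in range(n)}
    E |= {frozenset((u, v)) for u in A[1] for v in A[2]}        # K_{n,n} on A2, A3
    E |= {frozenset((A[0][i], A[1][i])) for i in range(n)}      # matching A1 -- A2
    E |= {frozenset((A[2][i], A[3][i])) for i in range(n)}      # matching A3 -- A4
    return E

E = toft_graph(5)
assert len(E) == 5 * 5 + 4 * 5                                  # n^2 + 4n edges
adj = {v: set() for v in range(20)}
for e in E:
    u, v = tuple(e)
    adj[u].add(v)
    adj[v].add(u)
assert all(not (adj[u] & adj[v]) for u in adj for v in adj[u])  # triangle-free
```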
\begin{theorem} Toft's graph $T_n$ is semi-transitive.
\end{theorem}
\begin{figure}
\begin{center}
\begin{tikzpicture}[->, >=stealth', shorten >=1pt, node distance=1cm,auto,main
node/.style={circle,draw,inner sep=1pt,minimum size=2pt}]
\node[main node] (1) {{\tiny 1}};
\node[main node] (2) [below of=1] {{\tiny 2}};
\node[main node] (3) [below of=2] {{\tiny 3}};
\node[main node] (4) [below of=3] {{\tiny 4}};
\node[main node] (5) [below of=4] {{\tiny 5}};
\node[main node] (6) [right of=1] {{\tiny 6}};
\node[main node] (7) [below of=6] {{\tiny 7}};
\node[main node] (8) [below of=7] {{\tiny 8}};
\node[main node] (9) [below of=8] {{\tiny 9}};
\node[main node] (10) [below of=9] {{\tiny 10}};
\node[main node] (11) [right of=6, xshift=2cm] {{\tiny 11}};
\node[main node] (12) [below of=11] {{\tiny 12}};
\node[main node] (13) [below of=12] {{\tiny 13}};
\node[main node] (14) [below of=13] {{\tiny 14}};
\node[main node] (15) [below of=14] {{\tiny 15}};
\node[main node] (16) [right of=11] {{\tiny 16}};
\node[main node] (17) [below of=16] {{\tiny 17}};
\node[main node] (18) [below of=17] {{\tiny 18}};
\node[main node] (19) [below of=18] {{\tiny 19}};
\node[main node] (20) [below of=19] {{\tiny 20}};
\path (1) edge (2)
edge (6)
edge [bend right=50] (5);
\path (2) edge (3)
edge (7);
\path (3) edge (4)
edge (8);
\path (5) edge (4);
\path (4) edge (9);
\path (5) edge (10);
\path (11) edge (16);
\path (16) edge (17)
edge [bend left=50] (20);
\path (12) edge (17);
\path (17) edge (18);
\path (13) edge (18);
\path (18) edge (19);
\path (14) edge (19);
\path (20) edge (19);
\path (15) edge (20);
\path (6) edge (11)
edge (12)
edge (13)
edge (14)
edge (15);
\path (7) edge (11)
edge (12)
edge (13)
edge (14)
edge (15);
\path (8) edge (11)
edge (12)
edge (13)
edge (14)
edge (15);
\path (9) edge (11)
edge (12)
edge (13)
edge (14)
edge (15);
\path (10) edge (11)
edge (12)
edge (13)
edge (14)
edge (15);
\end{tikzpicture}
\caption{A semi-transitive orientation of Toft's graph $T_5$}\label{Toft-T5-sem}
\end{center}
\end{figure}
\begin{proof} A semi-transitive orientation of $T_n$ can be constructed as follows. Every arc $uv$ where $u\in A_i$ and $v\in A_{i+1}$ for any $i\in \{1,2,3\}$ is directed $u\rightarrow v$.
The cycles $A_1$ and $A_4$ are oriented semi-transitively in an arbitrary way (e.g., by arranging in each of them two disjoint directed paths of lengths $2$ and $n-2$ starting at the same vertex). An example of Toft's graph $T_5$ and its orientation is shown in Fig.~\ref{Toft-T5-sem}.
Clearly, this orientation is acyclic. Assume, there is a shortcut $C$ with a long path $v_0\rightarrow
\cdots \rightarrow v_k$. Then either $v_0, v_k\in A_i$ for some $i\in \{1,2,3,4\}$ or $v_0\in A_i, v_k\in A_{i+1}$ for some $i\in \{1,2,3\}$. The first case is impossible since the sets $A_2$ and $A_3$ are independent and the orientations of $A_1$ and $A_4$ are semi-transitive. The second case cannot occur since all vertices from $A_2$ and $A_3$ have degree $1$ in the subgraphs induced by $A_1\cup A_2$ and $A_3\cup A_4$, respectively, and the subgraph induced by $A_2\cup A_3$ has no directed paths of length more than $1$. Therefore, the presented orientation is semi-transitive.
\end{proof}
\section{Open problems}\label{sec4}
In this paper we presented examples of non-semi-transitive triangle-free graphs of girth 4, namely the Gr\"otzsch graph, the Chv\'atal graph, and the Chv\'atal graph with one edge removed. However, for higher girths the analogous existence question is still open.
\begin{problem}\label{prob1} Do there exist non-semi-transitive graphs of girth $g$ for every $g\geq 5$? \end{problem}
We also presented examples of semi-transitive $k$-chromatic graphs of girth $k$ for $k=4$. Finding similar explicit instances could be of interest for larger $k$, especially in terms of minimality according to different criteria.
\begin{problem}\label{prob2} Do there exist semi-transitive $k$-chromatic graphs of girth $k$ for every $k\geq 5$? If yes, then are there regular or vertex-transitive examples? What is the minimum number of vertices and/or edges in such graphs? How dense can they be?\end{problem}
Problem~\ref{prob2} is, in a sense, a complementary question to Problem~\ref{prob1}, so at least one of these problems must have a positive answer. However, we conjecture that the answer is positive for both of them.
Finally, it would be interesting to extend the results of Lemma~\ref{Circs}. Note that, in general, circulants need not be semi-transitive; for instance, $C(14;1,3,4,5)$ is not \cite{G}. But does semi-transitivity hold for all circulants of the form $C(n;1,\ldots,k)$?
\begin{problem} Are all circulants $C(n;1,2,\ldots,k)$ semi-transitive? What about circulants $C(n; t, t+1,\ldots, k)$ for some integers $k$ and $t$ satisfying $k - t > 1$?\end{problem}
\medskip
\noindent
{\bf Acknowledgements. } The authors are grateful to the anonymous referees for their valuable comments and suggestions. The work of the second author was partially supported by the program of fundamental scientific research of the SB RAS, project 0314--2019--0014.
\section{Introduction}
Understanding the effects of electron correlation and pseudogap
phenomena
~\cite{RVB,Nature,Timusk,Marshall,Kivelson_Review,Andrea} in doped
oxides, including the cuprate superconductors is one of the most
challenging problems in condensed matter physics~\cite{Anderson}.
Although the experimental determination of various inhomogeneous
phases in cuprates is still somewhat controversial~\cite{Tallon},
the underdoped high T$_c$ superconductors (HTSCs) are often
characterized by crossover temperatures below which excitation
pseudogaps in the normal-state are seen to develop~\cite{Zachar}.
In these materials, the spectral weight begins to be strongly
suppressed below some characteristic temperature T$_s$ that
is higher than the superconducting crossover temperature
T$_c$. There are many experiments supporting a highly nonuniform
hole distribution leading to the formation of hole-rich and
hole-poor regions in doped La$_{2-x}$Sr$_x$CuO$_4$ and other
cuprate HTSCs ~\cite{Matsuda}. This electronic phase separation is
expected to be mostly pronounced at low hole concentrations.
Recently, strong experimental evidence has been found for
`electronic phase separation' in La-cuprates near optimal doping
into separate, magnetic and superconducting phases~\cite{Hashini}.
The relevance of the Hubbard model for studies of the HTSCs has
been the focus of intensive research and debated for quite some
time with no firm conclusions up to now. Even though the small
Hubbard clusters do not have the full capacity to describe the
complexity of copper ions and their ancillary oxygens detected in
the HTSC materials, it is still argued that this model can capture
the essential physics of the HTSCs~\cite{RVB}. However, beyond one and
infinite dimensions, there is no exact solution currently available
for the Hubbard Hamiltonian. It is also known that in the
optimally doped cuprates, the correlation length of dynamical spin
fluctuations is very small~\cite{Zha}, which points to the local
character of electron interactions in the cuprates. Therefore, exact
microscopic studies of pairing, crossover and pseudogaps, by using
analytical diagonalization of small Hubbard clusters, which
account accurately for short-range dynamical correlations, are
relevant and useful with regard to understanding the physics of
the HTSCs. Our exact analytical solution appears to be providing
useful insight into the physical origin of the high energy
insulator-metal and low energy antiferromagnetic crossovers,
electron pairing and spin density fluctuations in the
superconducting phase.
The following questions are central to our study: (i) When treated
exactly, what essential features can the simple Hubbard clusters
capture, that are in common with the HTSCs? (ii) Using simple
cluster studies, is it possible to obtain a mesoscopic
understanding of electron-hole/electron-electron pairing and
identify various possible phases and crossover temperatures? (iii)
Do these small clusters (coupled to a particle bath) contain
important features that are similar to large clusters and
thermodynamical systems?
Our work has uncovered important answers to the questions raised
above. This is a follow-up to our recent study,
in which rigorous criteria were
found for the existence of
microscopic quantum critical points (QCP),
Mott-Hubbard (MH) and N\'eel type bifurcations,
and
corresponding critical temperatures of crossovers and various
phases in finite-size systems ~\cite{JMMM}. Small 2- and 4-site
clusters with short range electronic correlations provide (unique)
insight into the thermodynamics and exact many body physics,
difficult to obtain from approximate methods. In particular, we
show that these small Hubbard clusters, in the absence of long
range order, exhibit particle-particle, Mott-Hubbard like
particle-hole or antiferromagnetic-paramagnetic instabilities
in the ground state and
at finite
temperatures.
In addition, the 4-site (square) cluster is the basic building
block of the CuO$_2$ planes in the HTSCs and it can be used as a
block reference to build up larger superblocks in 2D of desirable
sizes by applying Cluster Perturbation Theory (CPT)~\cite{CPT},
non-perturbative Real-Space Renormalization Group
(RSRG)~\cite{rsrg}, Contractor Renormalization Group
(CORE)~\cite{CRG}, or Dynamical Cluster Approximation (DCA) for
embedded 4-site clusters coupled to an uncorrelated
bath~\cite{Jarrell}. Similar attempts at studying small clusters
(such as the ground state studies of weakly coupled Hubbard dimers
and squares by Tsai and Kivelson~\cite{tk}) have begun recently
and the lessons learned here will be invaluable to such studies
and useful to the condensed matter community in general. Above
all, these small clusters can be synthesized and utilized for
understanding essential many-body physics at the mesoscopic level
and hence are useful in their own right.
\section{Model and methodology}
The minimal single-orbital 4-site Hubbard Hamiltonian,
\begin{eqnarray}
H = -t\sum\limits_{i \sigma}(c^{+}_{i\sigma}c_{i+1\sigma}
+ H.c.)+U\sum\limits_{i}n_{i\uparrow}
n_{i\downarrow}-h\sum\limits_{i}S^z_i,\label{2-site1}
\end{eqnarray}
with hopping $t$ and on-site interaction $U$ is the focus of this
work. Periodic boundary conditions are used for the clusters.
Unless otherwise stated, all the energies reported here are
measured in units of $t$ (i.e. $t$ has been set to $1$ in most of
the equations that follow). Our calculations are based on the
exact {\sl analytical} expressions for the eigenvalue $E_{n}$ of
the $n^{th}$ many-body eigenstate of the 4-site Hubbard
clusters~\cite{schumann}. As we show here, this model, used in
conjunction with the grand canonical and canonical ensembles,
yields valuable insight into the physics of strongly correlated
electrons.
\begin{figure}
\begin{center}
\includegraphics*[width=20pc,height=20pc]{ensemble.eps}
\end{center}
\caption {Various possible configuration mixing of electrons (below half
filling) that can be found in an ensemble of 4-site clusters. Note
that the mixing of configurations is brought about by the
temperature.}
\label{fig:ensemble}
\end{figure}
\subsection{Thermodynamics and response functions}
The complete phase diagram
of interacting electrons can be obtained with high accuracy because
the thermodynamic expressions are available in exact analytical form. In
half-filling) in the grand canonical ensemble for the 4-site
clusters are shown. The grand partition function Z$_U$ (where the
number of particles $N$ and the projection of spin $s^z$ are
allowed to fluctuate) and its derivatives are calculated (exactly)
without taking the thermodynamic limit. The exact grand canonical
potential $\Omega_U$ for many-body interacting electrons is
\begin{eqnarray}
\Omega_U=-{T}\ln \!\sum\limits_{n\leq N_H} e^{{-\frac {E_{n}-\mu
N_{n} - hs^{z}_n} T} }, \label{OmegaU}
\end{eqnarray}
where $N_n$ and $s^{z}_n$ are the number of particles and the
projection of spin in the $n^{th}$ state respectively. The Hilbert
space dimension in (\ref{OmegaU}) is $N_H=4^4$. The derivatives we
study may be labeled as
first order (such as the average spin projection/magnetization
in response to an applied magnetic field) or second order (such as fluctuations/susceptibilities).
These responses, evaluated as functions of chemical potential $\mu$,
applied field $h$, on-site Coulomb interaction $U$ and temperature $T$,
carry a wealth of information that can be used
to identify various phases and phase boundaries.
Some of these results for the 2- and 4-site clusters
were reported earlier ~\cite{JMMM}.
The (first order) responses due to
doping and external magnetic field are as follows:
\begin{eqnarray}
\left\langle N\right\rangle=-{{\frac {\partial \Omega_U} {\partial
\mu}}}, \label{number}
\end{eqnarray}
\begin{eqnarray}
\left\langle s^z\right\rangle=-{{\frac {\partial \Omega_U} {\partial
h}}}. \label{spin}
\end{eqnarray}
Analytical expressions derived for the averages $\left\langle N\right\rangle$
and $\left\langle s^z\right\rangle$ are analyzed
numerically in a wide range of $U$, $h$, $\mu$ and $T$ parameters.
The charge and spin degrees respond to an applied
magnetic field ($h$) as well as electron or hole doping levels
(i.e. chemical potential $\mu$) and display clearly identifiable,
prominent peaks, paving the way for rigorous definitions of
Mott-Hubbard (MH), antiferromagnetic (AF), spin pseudogaps and
related crossover behavior~\cite{JMMM}. The exact expressions for
charge susceptibility, ${\chi_c}={{\frac {\partial
{\left\langle {N}\right\rangle}} {\partial \mu}}}$ and spin
susceptibility, $\chi_s={{\frac {\partial \left\langle
s^{z}\right\rangle} {\partial h}}}$ can be found as a function of
$U$, $h$, $\mu$ and $T$ from,
\begin{eqnarray}
\left\langle X^2\right\rangle-\left\langle
X\right\rangle^2=T{{\frac {\partial {\left\langle
{X}\right\rangle}} {\partial x}}}, \label{fluctuation_number}
\end{eqnarray}
where $X$ corresponds to $N$ or $s^z$ and $x$ to $\mu$ or $h$.
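To illustrate how this grand canonical construction is evaluated, the following Python sketch (ours; it uses numerical rather than analytical diagonalization, and the 2-site cluster rather than the 4-site one, to keep it short) builds the cluster Hamiltonian via a Jordan-Wigner encoding, diagonalizes $K=H-\mu N$ at $h=0$, and computes $\Omega_U$ and $\left\langle N\right\rangle$. At $\mu=U/2$ particle-hole symmetry forces $\left\langle N\right\rangle=2$ exactly, which the sketch reproduces.

```python
import numpy as np

def fermion_ops(m):
    """Jordan-Wigner annihilation operators for m fermionic modes."""
    I, Z = np.eye(2), np.diag([1.0, -1.0])
    a = np.array([[0.0, 1.0], [0.0, 0.0]])
    ops = []
    for k in range(m):
        M = np.ones((1, 1))
        for p in range(m):
            M = np.kron(M, Z if p < k else a if p == k else I)
        ops.append(M)
    return ops

def two_site_hubbard(t, U):
    c = fermion_ops(4)                 # modes: (site0,up),(site0,dn),(site1,up),(site1,dn)
    n = [ck.T @ ck for ck in c]
    H = -t * (c[0].T @ c[2] + c[2].T @ c[0] + c[1].T @ c[3] + c[3].T @ c[1])
    H = H + U * (n[0] @ n[1] + n[2] @ n[3])
    return H, sum(n)

def grand_averages(H, N, mu, T):
    """Grand potential Omega_U and <N> from the spectrum of K = H - mu N."""
    eps, V = np.linalg.eigh(H - mu * N)
    w = np.exp(-(eps - eps.min()) / T)         # shifted weights for numerical stability
    Z = w.sum()
    Omega = eps.min() - T * np.log(Z)
    avgN = (np.diag(V.T @ N @ V) * w).sum() / Z
    return Omega, avgN

t, U, T = 1.0, 4.0, 0.5
H, N = two_site_hubbard(t, U)
_, avgN = grand_averages(H, N, U / 2, T)
assert abs(avgN - 2.0) < 1e-10                 # half filling at mu = U/2
```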
Using maxima and minima in spin and charge susceptibilities, phase
diagrams in a $T$ {\it vs} $\mu$ plane for any $U$ and $h$ can be
constructed. This approach also allows us to obtain
QCP
and rigorous criteria for various transitions, such as the MH
crossover at half-filling and MH like bifurcations, using the
evolution of peaks in charge or spin susceptibility~\cite{JMMM}
(see below).
\subsection{Charge (pseudo) gap}
We define canonical energies $\mu_{\pm}$,
\begin{eqnarray}
&\mu_{+}=E(M+1,M^\prime;U:T)-E(M,M^\prime;U:T)\label{mupm1} \\
&\mu_{-}=E(M,M^\prime;U:T)-E(M-1,M^\prime;U:T)
\label{mupm2}
\end{eqnarray}
where $E(M,M^\prime;U:T)$ is the canonical (ensemble) energy with
a given number of electrons $N=M+M^{\prime}$ determined by the
number of up ($M$) and down ($M^\prime$) spins. At zero
temperature the expressions (\ref{mupm1}) and (\ref{mupm2}) are
identical to those introduced in~\cite{Lieb1}. At finite
temperature, the calculated charge susceptibility is a
differentiable function of $N$ and $\mu$. The peaks (i.e. maxima)
in $\chi_c(\mu)$, {\sl which may exist in a limited range of
temperature}, are identified easily from the conditions,
$\chi^{'}_c(\mu_{\pm})=0$ with $\chi^{''}_c(\mu_{\pm}) <0$. We
define $T_c(\mu)$ to be the temperature at which a (possible) peak
is found in $\chi_c(\mu)$ at a given $\mu$.
The positive charge gap for electron-hole (excitonic) excitations,
${\Delta^{e-h}}(T)>0$, is defined by
\begin{eqnarray}
\Delta^{e-h}(T) = \left\{ \begin{array}{ll}
\mu_{+}-\mu_{-} & \mbox{if $\mu_{+} > \mu_{-}$} \\
0 & \mbox{otherwise,} \\
\end{array}
\right.
\label{charge_gap}
\end{eqnarray}
as the separation between $\mu_{+}$ and $\mu_{-}$. The
electron-hole instability
$\Delta^{e-h}(T)>0$
can exist in a limited range of temperatures and
$U>U_c(T)$,
with $\Delta^{e-h}(T)\equiv 0$ at $U\leq U_c(T)$. (In general, the
critical parameter $U_c(T)$, identified and discussed in
Sections~\ref{A} and \ref{B}, depends also on $h$.) The energy
gap, $\Delta^{e-h}(T)\ge 0$, serves as a natural order parameter
in a multidimensional parameter space ${h,U,T}$ and at $T\neq 0$
will be called a {\sl pseudogap}, since $\chi_c$ has a small, but
nonzero weight inside the gap. At $T=0$, this gap $\Delta^{e-h}_0
\equiv \Delta^{e-h}(0)$ will be labeled a {\sl true gap} since
$\chi_c$ is exactly zero inside.
The difference $\mu_{+}-\mu_{-}$ is somewhat similar to the
difference $I-A$ for a cluster, where $I$ is the ionization
potential and $A$ the electron affinity. For a single isolated
atom at half-filling and $T=0$, $I-A$ is equal to $U$ and hence
the above difference represents a screened $U$, reminiscent of
Herring's definition of $U$ in transition elements~\cite{herring}.
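For orientation, these definitions can be evaluated in closed form for the 2-site cluster at $T=0$, whose canonical ground-state energies are elementary (a side calculation of ours, not taken from the cited references): one finds $\Delta^{e-h}_0=\sqrt{U^2+16t^2}-2t$, which for the dimer stays positive even at $U=0$ because of the finite-size level spacing.

```python
from math import sqrt

def e2(n_el, t, U):
    """Canonical ground-state energies of the 2-site cluster (minimal |s^z| sector)."""
    return {0: 0.0,
            1: -t,
            2: 0.5 * (U - sqrt(U * U + 16 * t * t)),   # N = 2 singlet
            3: U - t,
            4: 2 * U}[n_el]

t, U = 1.0, 4.0
mu_plus  = e2(3, t, U) - e2(2, t, U)   # mu_+ : cost of adding an electron at N = 2
mu_minus = e2(2, t, U) - e2(1, t, U)   # mu_- : cost of the last electron added
gap = max(mu_plus - mu_minus, 0.0)     # the electron-hole gap at T = 0
assert abs(gap - (sqrt(U * U + 16.0 * t * t) - 2.0 * t)) < 1e-12
```

Note that for the dimer at half-filling $\mu_{-}<\mu_{+}$, so the electron-hole instability is realized while the pair gap of Section~\ref{C} vanishes.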
\subsection{Mott-Hubbard crossover}
The thermodynamic quantities with fixed $N$ ({\it canonical
approach}) are certainly smooth, analytical functions of $T$ and
$U$. Thus, one may naively think that at half-filling
and large $U$,
there are no real cooperative phenomena at $T\sim U$ for the transition
from localized to delocalized electrons or at
$T\sim {t^2\over U}$
for the transition from the
antiferromagnetic to the paramagnetic state. At finite temperature,
the thermodynamic quantities $\Omega_U$ and $\left\langle
N\right\rangle$ are both analytic and smooth functions of $T$ and
$\mu$. Although the charge susceptibility $\chi_c$ is also a
differentiable function of $\mu$ and $T$ at all $T>0$, $\chi_c$
{\it vs} $\mu$ exhibits a weak, fourth order singularity at some
critical temperature $T_{MH}$ ({\it saddle point})~\cite{JMMM}.
Thus the MH crossover at half-filling ($\mu = U/2$)
can be defined simply
as a critical temperature $T_{MH}$, at which two peaks merge into
one with $\mu= \mu_{+} =\mu_{-}= U/2$ and $\chi^{'}_c(\mu) =0$ and
$\chi^{''}_c(\mu) =0$, i.e. as the temperature corresponding to a
point of inflexion in $\chi_c(\mu)$ ~\cite{JMMM}. This procedure
gives a rigorous definition for the MH crossover temperature
$T_{MH}$ (from a localized into an itinerant state), at which the
electron-hole pseudogap melts, i.e. $\Delta^{e-h}(T_{MH})=0$. The
MH crossover, due to its many-body nature, is also a cooperative
effect which may occur even for a single atom.
\subsection{Spin (pseudo) gap}
Analogously, we define a (positive) spin pairing gap between
various spin configurations (projections of spin $s$) for a given
number of electrons ($N= M + M^\prime$)
in
the absence of a
magnetic field ($h=0$) as,
\begin{eqnarray}
\Delta^s(T)=E(M+1,M^\prime-1;U:T)-E(M,M^\prime;U:T).
\label{spin_gap}
\end{eqnarray}
This corresponds to the energy necessary to make an excitation by
overturning a single spin. Possible peaks in the zero magnetic
field spin susceptibility $\chi_s(\mu)$, when monitored as a
function of $\mu$, can also be used to define an associated
temperature, $T_s(\mu)$, as the temperature at which such a peak
exists and the spin pseudogap as the separation
(distance)
between two such peaks.
\subsection{AF (pseudo) gap and onset of magnetization}
Similar to the (charge) plateaus seen in $\left\langle
N\right\rangle$ {\it vs} $\mu$, we can trace the variation of
magnetization $\left\langle s^z\right\rangle$ {\it vs} an
applied magnetic field $h$ and identify the spin plateau features,
which can be associated with staggered magnetization or short
range antiferromagnetism. We calculate the critical magnetic field
$h_{c\pm}$ for the onset of magnetization ($s^z\to \pm 0$), which
depends on $N$ and $\mu$, by flipping a down spin,
$h_{c+}=E(M+1,M^\prime-1;U:T)-E(M,M^\prime;U:T)$ or an up spin,
$h_{c-}=E(M,M^\prime;U:T)-E(M-1,M^\prime+1;U:T)$~\cite{Sebold}.
The spin singlet binding energy ${\Delta^{AF}}(T)>0$ can be
defined as,
\begin{eqnarray}
{\Delta^{AF}}(T) = {{h_{c+}} - {h_{c-}}\over 2},
\label{af_gap}
\end{eqnarray}
and serves as a natural antiferromagnetic order parameter in a
multidimensional parameter space ${U,T,\mu}$. This will be
called an {\sl AF pseudogap} at nonzero temperature. We define
$T_{AF}$ as the temperature at which the pseudogap vanishes,
$\Delta^{AF}(T_{AF})=0$, and above which a paramagnetic state is found. An exact
analytical expression for the AF spin gap in the ground state
($\Delta^{AF}(0)$) at
half-filling was obtained in Ref.~\cite{JMMM}.
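Again the 2-site cluster at half-filling provides a closed-form illustration (our side calculation, using the elementary dimer energies): flipping one spin of the singlet costs $h_{c+}=\frac{1}{2}\big(\sqrt{U^2+16t^2}-U\big)=-h_{c-}$, so $\Delta^{AF}(0)=\frac{1}{2}\big(\sqrt{U^2+16t^2}-U\big)$, which reduces to the Heisenberg exchange $4t^2/U$ at large $U$.

```python
from math import sqrt

def e2(M, Mp, t, U):
    """Ground-state energy of the 2-site cluster with M up and Mp down electrons."""
    if (M, Mp) == (1, 1):              # singlet sector, N = 2, s^z = 0
        return 0.5 * (U - sqrt(U * U + 16 * t * t))
    if (M, Mp) in ((2, 0), (0, 2)):    # fully polarized: hopping is Pauli-blocked
        return 0.0
    raise ValueError((M, Mp))

def af_gap(t, U):
    h_plus  = e2(2, 0, t, U) - e2(1, 1, t, U)   # h_{c+}: flip a down spin
    h_minus = e2(1, 1, t, U) - e2(0, 2, t, U)   # h_{c-}: flip an up spin
    return 0.5 * (h_plus - h_minus)             # the AF gap at T = 0

assert abs(af_gap(1.0, 4.0) - 0.5 * (sqrt(32.0) - 4.0)) < 1e-12
# large-U limit approaches the Heisenberg exchange J = 4 t^2 / U
assert abs(af_gap(1.0, 400.0) - 4.0 / 400.0) < 1e-4
```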
In what follows, all of the temperatures defined above, $T_c(\mu)$,
$T_s(\mu)$ and $T_{AF}(\mu)$,
will be used when constructing phase diagrams.
\subsection{Pairing instability}
To determine whether the cluster can support electron pairing at
finite temperature despite the purely repulsive electronic
interactions, the electron-electron (or hole-hole) pair binding
energy,
\begin{eqnarray}
&\Delta^P(T)=
\nonumber \\
&[E(M-1,M^\prime;U:T)-E(M+1,M^\prime;U:T)]-
\nonumber \\
& 2[E(M,M^\prime;U:T)-E(M+1,M^\prime;U:T)],~
\label{pairing_gap}
\end{eqnarray}
is calculated by adding or subtracting one electron near
$N=M+M^\prime$. The average energy $E(M,M^\prime;U:T)$ is given
for configurations with a fixed number of electrons $N$ and spin
magnetization $s^z=0$ using our grand canonical ensemble approach.
At zero temperature, the binding energy (\ref{pairing_gap}) is
identical to the one introduced in Ref.~\cite{scalettar}.
Using the definitions for $\mu_{\pm}$ from Eqs.~(\ref{mupm1}) and
(\ref{mupm2}), the electron-electron (or hole-hole) pair binding energy
can also be written as,
\begin{eqnarray}
\Delta^{P}(T) = \left\{ \begin{array}{ll}
\mu_{-}-\mu_{+} & \mbox{if $\mu_{-} > \mu_{+}$} \\
0 & \mbox{otherwise.} \\
\end{array}
\right.
\label{pair_gap}
\end{eqnarray}
In the ground state, the electron-pair binding energy gap
at $\left\langle N\right\rangle\approx 3$ is fully developed
at $U\leq U_c(0)$, when
$\Delta^P(0)>0$, i.e. $\mu_{-}>\mu_{+}$; this leads to
phase separation into $\left\langle N\right\rangle=2$ and
$\left\langle N\right\rangle=4$ clusters (see Section \ref{C})
and to an effective attraction between the electrons
in the $\left\langle N\right\rangle=2$ cluster
configuration~\cite{white}. {\sl On the other hand, when $\mu_{+} > \mu_{-}$,
the condition ${\Delta^{e-h}}(T)>0$ with $U>U_c(T)$ provides an
electron-hole pairing mechanism as a precursor to
antiferromagnetism~\cite{JMMM}.} We can define $T_P$ as the
temperature at which the pseudogap vanishes,
$\Delta^{P}(T_P)=0$. The existence of a particle-particle
($\Delta^P(T)>0$)
or particle-hole ($\Delta^{e-h}(T)>0$)
instability and the corresponding solution for a positive
pseudogap ($\Delta(T)>0$) can be formulated at an arbitrary $U>0$
by combining both equations (\ref{charge_gap})
and (\ref{pair_gap})
into one as,
\begin{eqnarray}
\Delta(T) = \left\{ \begin{array}{ll}
\Delta^{e-h}(T) & \mbox{if $U>U_c(T)$} \\
\Delta^{P}(T) & \mbox{if $U<U_c(T).$} \\
\end{array}
\right. \label{gap}
\end{eqnarray}
At zero temperature, $\Delta(0)= 0$ at $U=0$ or $U=U_c(0)$.
\begin{figure}
\begin{center}
\includegraphics*[width=20pc,height=20pc]{fnum_spin.eps}
\end{center}
\caption {{Variation of average electron concentration
$\left\langle N\right\rangle$
(top) and average spin
$\left\langle s^z\right\rangle$ (bottom) vs $\mu$ for various $U$
values at temperature $T=0.02$. The vanishing of the charge gap
near $\left\langle N\right\rangle = 3$ for $U=4$
has implications related to pairing as discussed in the text.
Note also that the spin plot has been obtained with an applied
magnetic field of $h=0.1$ and shows the stabilization
of a magnetic state near $\left\langle N\right\rangle = 3$
for $U=4$. At zero field, $\left\langle
s^z\right\rangle$ = 0 everywhere due to degeneracy between spin up
and down states.}}
\label{fig:num}
\end{figure}
\section{Results}\label{Results}
\subsection{$\left\langle N\right\rangle$ and
$\left\langle s^z\right\rangle$ vs $\mu$ and pseudogaps}\label{A}
In Fig.~\ref{fig:num}, we explicitly illustrate the variation of
$\left\langle N\right\rangle\leq 4$ {\it vs} $\mu$ for various
$U$ values in order to track the variation of charge gaps with
$U$. The opening of such gaps is a local correlation effect and
clearly does not follow from long-range order, as exemplified
here. The true gaps at $\left\langle N\right\rangle=1$ and
$\left\langle N\right\rangle=4$ develop for infinitesimal $U>0$
and increase monotonically. In contrast, the charge gap at
$\left\langle N\right\rangle=3$ opens at finite $U>U_c(T)$ (see
Fig.~\ref{fig:num}). Thus at low temperature, $\left\langle
N\right\rangle$ (expressed as a function of $\mu$ in
Fig.~\ref{fig:num}) evolves smoothly for $U\leq U_c(T)$, showing
finite leaps across the MH plateaus only at $\left\langle
N\right\rangle=1$ and $\left\langle N\right\rangle=2$. Such a density
profile of $\left\langle N\right\rangle$ {\it vs} $\mu$ near
$\left\langle N\right\rangle=3$ closely resembles the one
calculated in Fig.~\ref{fig:num} for the {\sl attractive} 4-site
Hubbard cluster with $U=-4$ at $T=0.02$ and is indicative of a
possible particle pairing instability. At larger $U>U_c(T)$, an
electron-hole gap is opened at $\left\langle N\right\rangle=3$.
Therefore the cluster at $U>U_c(0)$ behaves as a MH-like insulator at
all allowed integer fillings ($1\le N\le 8$), with the electron charge
localized (non-Fermi liquid).
In contrast, at $U\leq U_c(0)$ the chemical potential gets pinned
upon doping in the midgap states at $\left\langle
N\right\rangle\simeq$ 3.
While Fig.~\ref{fig:num} shows the magnetization at a relatively
high magnetic field, its behavior at very low temperature, $T\to
0$, and infinitesimal magnetic field, $h\to 0$, is also noteworthy.
In this case, as $U$ increases, $\left\langle N\right\rangle$ and
$\left\langle s^z\right\rangle$ {\it vs} $\mu$ near $\left\langle
N\right\rangle=3$ reveal islands of stability, due to phase
separation (see Section \ref{C}),
for various charge ($N=2$ and $N=4$)
and spin ($s$ and $s^z$) configurations as follows. Phase A
($U\leq U_c(0)$): particle-particle $\Delta^{P}(0)>0$ and
spin-spin $\Delta^{s^z=0}(0)>0$ pairing gaps with the minimal
projection of spin $\left\langle s^z\right\rangle=0$.
Phase B ($U_c(0)<U<4(2+\sqrt{7})\simeq 18.583$): particle-hole
$\Delta^{e-h}(0)>0$ and spin-spin $\Delta^{s^z=1/2}(0)
>0$ pairings
with the spin $s=1/2$ and unsaturated ferromagnetism,
$\left\langle s^z\right\rangle=1/2$ ({\it triplet
pairing})~\cite{Sebold}. Phase C (large $U>4(2+\sqrt{7})$):
particle-hole $\Delta^{e-h}(0)
>0$ pairing without spin gap
($\Delta^{s^z=3/2}(0)\equiv 0$)
and maximum projection
$\left\langle s^z\right\rangle=3/2$ ({\it saturated
ferromagnetism})~\cite{Mattis}.
In Phase A for $U=4$, charge and spin are coupled ({\it i.e. no
charge-spin separation}), while in Phase B at $U=6$, the charge
and spin are partially decoupled ({\it partial charge-spin
separation}). In Phase C, for $U \to \infty$, the charge and spin
are fully decoupled: the charge gap saturates to its maximum
value $2(2-\sqrt{2})$, while the spin gap from $\left\langle
s^z\right\rangle=1/2$ to $\left\langle s^z\right\rangle=3/2$,
defined earlier in (\ref{spin_gap}), vanishes ({\it full
charge-spin separation}).
Phase A, due to strong particle-particle coupling with double
electron charge ($Q=2e$) and zero spin ($s^z=0$) with a majority of
$\left\langle N\right\rangle=2$
clusters,
becomes a good candidate for the full `bosonization' of charge and
spin degrees of freedom and possible `superconductivity'.
In contrast, at even numbers of electrons, such as $\left\langle
N\right\rangle\simeq 2$ in Fig.~\ref{fig:num}, there are
electron-hole $\Delta^{e-h}(0)>0$ and
spin-spin
$\Delta^{s^z=0}(0)>0$ pairings at all $U$ values, and therefore
the charge
$Q=2e$ and spin $s^z=0$ are both coupled
and there is full charge-spin
reconciliation, when the {\it singlet-triplet} spin excitation gap
at quarter filling approaches the charge gap,
$\Delta^{s^z=0}\equiv\Delta^{e-h}=2(2\sqrt{2}-1)$, as $U\to
\infty$. Exactly at half filling, $\left\langle N\right\rangle=4$,
there is partial charge-spin separation at all finite $U>U_c(0)$.
However, as $U\to \infty$ the charge
MH gap $\Delta^{e-h}(0)\to \infty$ and
the AF gap
$\Delta^{s^z=0}(0)\to 0$ (vanishes); there is full charge-spin
separation
with saturated spin $\left\langle s^z\right\rangle=2$
in this limiting case. Also,
we find that for all $U$, the
cluster with a single electron at $\left\langle
N\right\rangle=1$ is a MH-like insulator:
charge gap $\Delta^{e-h}(0) \to \infty$, while the spin gap
$\Delta^{s^z=1/2}(0)\to 0$ with saturated spin $
\left\langle s^z\right\rangle=1/2$.
For any given $N$ in the charge sector,
one can easily identify an insulator or a metallic
liquid if $\Delta^{e-h}(0)>0$ or $\Delta^{e-h}(0)\equiv 0$, respectively.
Accordingly, it is also useful to distinguish a
spin insulator, $\Delta^{s}(0)>0$, or a spin liquid
($\Delta^{s}(0)\equiv 0$) state in the spin sector.
In Fig.~\ref{fig:ch_sus}, we show the evolution of charge
susceptibility $\chi_c$ as a function of $\mu$, which exhibits
clearly identifiable sharp peaks. At low temperature, peak
structures in $\chi_c(\mu)$ and zero magnetic field spin
susceptibility, $\chi_s(\mu)$, are observed to develop in these
clusters; between two consecutive peaks, there exists a pseudogap
in the charge or spin degrees of freedom. The opening of such distinct and
separated pseudogap regions for spin and charge degrees of freedom
(at low temperature) is a signature of corresponding charge and
spin separation away from half-filling.
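These plateau and peak structures can be reproduced with a small grand-canonical calculation over all $(M,M^\prime)$ sectors of the cluster. The sketch below is our own illustration, not taken from the original work; it assumes the four-site ring geometry with $t=1$ and a plain Boltzmann sum, and returns $\left\langle N\right\rangle(\mu)$ together with $\chi_c=(\langle N^2\rangle-\langle N\rangle^2)/T$:

```python
import itertools
import numpy as np

BONDS = [(0, 1), (1, 2), (2, 3), (3, 0)]        # 2x2 plaquette = 4-site ring

def _sign(state, i, j):
    # Fermionic string: parity of occupied sites strictly between i and j.
    lo, hi = min(i, j), max(i, j)
    mask = ((1 << hi) - 1) & ~((1 << (lo + 1)) - 1)
    return -1.0 if bin(state & mask).count("1") % 2 else 1.0

def sector_hamiltonian(n_up, n_dn, u, t=1.0):
    # Hubbard Hamiltonian in the sector with n_up up and n_dn down electrons.
    def occs(n):
        return [sum(1 << s for s in c)
                for c in itertools.combinations(range(4), n)]
    basis = [(a, b) for a in occs(n_up) for b in occs(n_dn)]
    idx = {ab: k for k, ab in enumerate(basis)}
    h = np.zeros((len(basis), len(basis)))
    for k, (a, b) in enumerate(basis):
        h[k, k] = u * bin(a & b).count("1")       # U n_up n_dn on each site
        for i, j in BONDS:
            for s, d in ((i, j), (j, i)):         # hop s -> d, both spins
                if a >> s & 1 and not a >> d & 1:
                    h[idx[(a ^ (1 << s) ^ (1 << d), b)], k] -= t * _sign(a, s, d)
                if b >> s & 1 and not b >> d & 1:
                    h[idx[(a, b ^ (1 << s) ^ (1 << d))], k] -= t * _sign(b, s, d)
    return h

def sector_levels(n_up, n_dn, u):
    return np.linalg.eigvalsh(sector_hamiltonian(n_up, n_dn, u))

def grand_stats(mu, temp, u):
    # <N> and chi_c = (<N^2> - <N>^2)/T from the full grand canonical sum.
    sectors = [(nu + nd, sector_levels(nu, nd, u))
               for nu in range(5) for nd in range(5)]
    shift = min((e - mu * n).min() for n, e in sectors)   # avoid exp overflow
    z = n1 = n2 = 0.0
    for n, e in sectors:
        w = np.exp(-((e - mu * n) - shift) / temp).sum()
        z, n1, n2 = z + w, n1 + n * w, n2 + n * n * w
    mean_n = n1 / z
    return mean_n, (n2 / z - mean_n ** 2) / temp
```

Scanning $\mu$ at $T=0.02$ reproduces the monotonic staircase of $\left\langle N\right\rangle$ and places the $\chi_c$ peaks at the plateau edges, as in Figs.~\ref{fig:num} and \ref{fig:ch_sus}.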
\subsection{Pairing gap at $\left\langle N\right\rangle=3$}\label{B}
At relatively large $U\ge U_c(T)$, the energy gap
$\Delta^{c}_{3}(U:T)=E(4;U:T)+E(2;U:T)-2E(3;U:T)$ becomes positive
for $\left\langle N\right\rangle=$ 3 (see Fig.~\ref{fig:pair}).
Its zero temperature value was first derived
analytically~\cite{JMMM} as,
\begin{eqnarray}
\Delta^c _{3} (U:T=0) = -{2\over \sqrt3}\sqrt{({16t^2}+
{U^2})}\cos{\gamma\over 3}+\nonumber\\
{U\over 3}-{2\over 3}\sqrt{({48t^2} +
{U^2})}\cos{\alpha\over 3}+\nonumber\\
\sqrt{32t^2 + U^2 + 4\sqrt{64t^4 + 3t^2U^2}}\label{gap_3},
\end{eqnarray}
where $\alpha = \arccos{\{({4Ut^2\over 3}-{U^3\over 27}) /
({16t^2\over 3} +{U^2\over 9})^{{3\over 2}}\}}$ and
$\gamma=\arccos{\{(4Ut^2)/ {({16t^2\over 3}+{U^2\over 3})^{3\over
2}}\}}$. Due to (ground state) level crossings~\cite{Mattis}, the
exact expression (\ref{gap_3}) is valid only in a limited range of
$U\leq 4(2+\sqrt{7})$.
The critical value, $U_c(0)=4.58399938$, at which
$\Delta^c_{3}(U_c: T=0)=0$, was reported in Ref.~\cite{JMMM} and serves as
an estimate of the accuracy of the gap value; it is slightly
different from the value obtained in Ref.~\cite{tk}.
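Equation (\ref{gap_3}) can also be checked numerically. The sketch below is our own cross-check (it assumes the four-site ring geometry with $t=1$): the closed form vanishes at the quoted $U_c(0)=4.58399938$ and agrees with a direct diagonalization of the cluster inside the validity range $U\leq 4(2+\sqrt{7})$:

```python
import itertools
import numpy as np

BONDS = [(0, 1), (1, 2), (2, 3), (3, 0)]        # 2x2 plaquette = 4-site ring

def _sign(state, i, j):
    # Fermionic string: parity of occupied sites strictly between i and j.
    lo, hi = min(i, j), max(i, j)
    mask = ((1 << hi) - 1) & ~((1 << (lo + 1)) - 1)
    return -1.0 if bin(state & mask).count("1") % 2 else 1.0

def sector_hamiltonian(n_up, n_dn, u, t=1.0):
    # Hubbard Hamiltonian in the sector with n_up up and n_dn down electrons.
    def occs(n):
        return [sum(1 << s for s in c)
                for c in itertools.combinations(range(4), n)]
    basis = [(a, b) for a in occs(n_up) for b in occs(n_dn)]
    idx = {ab: k for k, ab in enumerate(basis)}
    h = np.zeros((len(basis), len(basis)))
    for k, (a, b) in enumerate(basis):
        h[k, k] = u * bin(a & b).count("1")       # U n_up n_dn on each site
        for i, j in BONDS:
            for s, d in ((i, j), (j, i)):         # hop s -> d, both spins
                if a >> s & 1 and not a >> d & 1:
                    h[idx[(a ^ (1 << s) ^ (1 << d), b)], k] -= t * _sign(a, s, d)
                if b >> s & 1 and not b >> d & 1:
                    h[idx[(a, b ^ (1 << s) ^ (1 << d))], k] -= t * _sign(b, s, d)
    return h

def ground_energy(n_up, n_dn, u):
    return np.linalg.eigvalsh(sector_hamiltonian(n_up, n_dn, u))[0]

def gap3_ed(u):
    # Delta^c_3 = E(4) + E(2) - 2 E(3) from direct diagonalization at T = 0.
    return (ground_energy(2, 2, u) + ground_energy(1, 1, u)
            - 2.0 * ground_energy(2, 1, u))

def gap3_exact(u, t=1.0):
    # Closed-form Eq. (gap_3); valid for U <= 4(2 + sqrt(7)).
    gamma = np.arccos(4.0 * u * t**2 / (16.0 * t**2 / 3.0 + u**2 / 3.0) ** 1.5)
    alpha = np.arccos((4.0 * u * t**2 / 3.0 - u**3 / 27.0)
                      / (16.0 * t**2 / 3.0 + u**2 / 9.0) ** 1.5)
    return (-(2.0 / np.sqrt(3.0)) * np.sqrt(16.0 * t**2 + u**2)
            * np.cos(gamma / 3.0)
            + u / 3.0
            - (2.0 / 3.0) * np.sqrt(48.0 * t**2 + u**2) * np.cos(alpha / 3.0)
            + np.sqrt(32.0 * t**2 + u**2
                      + 4.0 * np.sqrt(64.0 * t**4 + 3.0 * t**2 * u**2)))
```

The gap is negative at $U=4$ and positive at $U=6$, bracketing the sign change at $U_c(0)$ that separates the pairing and electron-hole regimes discussed below.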
When $\Delta^{c}_{3}(U:T)=E(4;U:T)+E(2;U:T)-2E(3;U:T)$ becomes
negative for $U\le U_c(T)$ as shown in Fig.~\ref{fig:pair}, the
$\left\langle N\right\rangle=3$ states become energetically less
favorable than the $\left\langle N\right\rangle=2$ and
$\left\langle N\right\rangle=4$ states. This is a manifestation of
electron binding where, despite bare electron repulsion, electron
pairs experience an attraction~\cite{tk,scalettar,white}. We have
also observed a similar vanishing of charge gaps for negative $U$
values (see Fig.~\ref{fig:num}), where there is an inherent
electron-electron attraction, supporting the above statement. For
$U\geq U_c(T)$, the gap in the electron-hole channel is positive
(i.e. $\Delta^{c}_{3}(T)>0$), which favors excitonic electron-hole
pairing, similar to the MH gap at half-filling.
\begin{figure}
\begin{center}
\includegraphics*[width=20pc]{ch_sus.eps}
\hfill
\end{center}
\caption {The charge susceptibility {\it vs} chemical potential
$\mu$ at $T=0.02$ for two different $U$ values ($U=4$ and $U=6$)
below half-filling. Note that there are clearly identifiable
peaks, and such peak positions, when monitored as a function of
temperature, have been used to construct phase diagrams (see
text).}
\label{fig:ch_sus}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics*[width=20pc]{egap.eps}
\end{center}
\caption {The energy gap given by Eq.~(\ref{gap_3}) at zero
temperature and its evolution at several nonzero temperatures
as a function of $U$ obtained from the canonical ensemble.
This is directly related to the electron-electron
pairing gap ($U<U_c(T)$) and electron-hole charge gap ($U>U_c(T)$) as
defined in the text. Note that charge pairing is unlikely to occur above
the temperature $T=0.08$. As before, all energies are measured in
units of $t$, the hopping parameter.
} \label{fig:pair}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics*[width=20pc]{ph_u4.eps}
\end{center}
\caption {Temperature $T$ vs chemical potential $\mu$ phase
diagram for the four-site cluster at $U=4$ and $h=0$, below half
filling ($\mu\leq\mu_0$). Regions I and II are paramagnetic phases
and quite similar to the ones found in the two-site cluster
(Ref.~\cite{JMMM}), showing strong charge-spin separation. Phase
III is a MH antiferromagnetic phase (with zero spin). However,
note the (new) line phase (labeled P) which consists of charge and
spin fluctuations. This is a new feature seen in the 4-site
cluster which suggests the existence of electron-electron pairing
and phase separation into hole-poor and hole-rich regions
at low temperature. For $U=4$, pairing occurs below the
temperature $T_P=0.076$.}
\label{fig:phase_u4}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics*[width=20pc]{ph_u6.eps}
\end{center}
\caption {Temperature $T$ vs chemical potential $\mu$ phase
diagram for the four site cluster at $U=6$ and $h=0$, below half-filling
($\mu\leq \mu_0$). Regions I, II and III are quite similar to the
ones found for $U=4$ in Fig.~\ref{fig:phase_u4}, again showing
strong charge-spin separation. However, a charge gap opens as a
new bifurcation (I and II phases) which consists of charge and
spin pseudogaps, (replacing the equilibrium line phase P in the
previous figure, i.e. no electron-electron pairing here: see text
for details). Temperature labels similar to those shown in
Fig.~\ref{fig:phase_u4} may be used here but have been suppressed
for clarity. } \label{fig:phase_u6}
\end{figure}
\subsection{Phase separation}\label{C}
It appears that the canonical approach yields an adequate
estimation of a possible pair binding instability in the ensemble of
small clusters at relatively low temperature. The competition
between attraction and repulsion under hole-doping can lead to a
`microscopic phase separation', which consists of an
inhomogeneous structure of competing and coexisting hole-rich
$\left\langle N\right\rangle=2$ (hole-hole or electron-electron
pairing), hole-poor $\left\langle N\right\rangle=4$ (AF at half
filling) and magnetic
$\left\langle N\right\rangle=3$
clusters.
From Eqs. (\ref{mupm1}) and (\ref{mupm2}), it is apparent that the
condition $\mu_{-}=\mu_{+}$
at $U=U_c(0)$ and $\left\langle N\right\rangle=3$
defines a lower bound for the existence of a phase separation
boundary that distinguishes a charge-spin separated region from a
charge-spin coupled
one.
If we neglect every second hopping term in the two dimensional
square lattice, the system can be thought of as an ensemble of
decoupled, non-interacting 4-site clusters~\cite{CRG,tk}, as shown
in Fig.~\ref{fig:ensemble}. New and important features appear if
the number of electrons $\left\langle N\right\rangle=3$ and total
magnetization $\left\langle s^z\right\rangle=0$ are kept fixed for
the whole system of decoupled clusters, placed in the (particle)
bath, by allowing the particle number on each separate cluster to
fluctuate. One is tempted to think that, due to symmetry, there is
only a single hole on each cluster within the $\left\langle
N\right\rangle=3$ ensemble. However, this statement reflects a
simple average only. Due to thermal and quantum fluctuations in
the density of holes between the clusters (for $U<U_c(T)$), it is
energetically more favorable to form pairs of holes. In this case,
snapshots of the system at relatively low temperatures and at a
critical value ($\mu_P$ in Fig.~\ref{fig:phase_u4}) of the
chemical potential would reveal equal probabilities of finding
(only) clusters that are either hole-rich $\left\langle
N\right\rangle=2$ or hole-poor $\left\langle N\right\rangle=4$.
However, at higher temperatures when pairing coexists with
magnetic spin fluctuations, there exists a small window of
parameters which brings some stability to $\left\langle
N\right\rangle=3$ clusters
above $T_P$.
Thus the
crossover from full separation (segregation) to the
coexisting magnetic ($\left\langle N\right\rangle=3$) and
hole-rich ($\left\langle N\right\rangle=2$) phases develops
smoothly and depends on the degree of disorder, i.e. temperature.
In section~\ref{A}, it was shown how the changes in the parameter
$U$ affect the spin (magnetization) by changing the cluster
configuration.
Phase separation of magnetic
($\left\langle N\right\rangle=3$)
and paired
($\left\langle N\right\rangle=2$) states
can also be triggered by increasing the magnetic field. The phase
separation
({\it i.e. segregation})
into separate magnetic $\left\langle
N\right\rangle=3$ and hole-rich $\left\langle N\right\rangle=2$
regions seen here
at $\mu_{-}>\mu_{+}$,
closely resembles the phase separation
effect detected recently in super-oxygenated
La$_{2-x}$Sr$_x$CuO$_{4+y}$, with various Sr
contents~\cite{Hashini}. Thus our results are consistent with the
experimental observation that a small (applied) magnetic field mimics
doping and promotes stability of the magnetic phase
$\left\langle N\right\rangle=3$ over the superconducting state
$\left\langle N\right\rangle=2$ near optimal doping.
In addition, our calculated probabilities (from the grand
canonical ensemble) at low temperature show that, in the interval
$\mu_{+}<\mu<\mu_P$, $\left\langle N\right\rangle=2$ clusters
become the majority; i.e. we observe both pairing and phase
separation at low temperature.
Note that phase separation here has a different origin and occurs
at relatively weak coupling $U<U_c$ far from the level crossing
regime, at which the spin gap vanishes. Therefore, the mechanism
of phase separation here is different from the one found in
Refs.~\cite{Visscher,Emery} in the large-$U$ limit due to the {\it
spin} density fluctuations ($h_{+}<h_{-}$) and {\it spin}
saturation. Indeed, the four-site cluster at large $U >
4(2+\sqrt{7})$ reveals ferromagnetism in accordance with the
Nagaoka theorem~\cite{Mattis}.
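This Nagaoka-type crossing can be verified directly: the fully polarized one-hole state of the ring has energy $-2t$ for any $U$, and it becomes the $N=3$ ground state only for $U>4(2+\sqrt{7})$. A minimal sketch (our own check, assuming the four-site ring with $t=1$):

```python
import itertools
import numpy as np

BONDS = [(0, 1), (1, 2), (2, 3), (3, 0)]        # 2x2 plaquette = 4-site ring

def _sign(state, i, j):
    # Fermionic string: parity of occupied sites strictly between i and j.
    lo, hi = min(i, j), max(i, j)
    mask = ((1 << hi) - 1) & ~((1 << (lo + 1)) - 1)
    return -1.0 if bin(state & mask).count("1") % 2 else 1.0

def sector_hamiltonian(n_up, n_dn, u, t=1.0):
    # Hubbard Hamiltonian in the sector with n_up up and n_dn down electrons.
    def occs(n):
        return [sum(1 << s for s in c)
                for c in itertools.combinations(range(4), n)]
    basis = [(a, b) for a in occs(n_up) for b in occs(n_dn)]
    idx = {ab: k for k, ab in enumerate(basis)}
    h = np.zeros((len(basis), len(basis)))
    for k, (a, b) in enumerate(basis):
        h[k, k] = u * bin(a & b).count("1")       # U n_up n_dn on each site
        for i, j in BONDS:
            for s, d in ((i, j), (j, i)):         # hop s -> d, both spins
                if a >> s & 1 and not a >> d & 1:
                    h[idx[(a ^ (1 << s) ^ (1 << d), b)], k] -= t * _sign(a, s, d)
                if b >> s & 1 and not b >> d & 1:
                    h[idx[(a, b ^ (1 << s) ^ (1 << d))], k] -= t * _sign(b, s, d)
    return h

def ground_energy(n_up, n_dn, u):
    return np.linalg.eigvalsh(sector_hamiltonian(n_up, n_dn, u))[0]

def n3_energies(u):
    # Lowest N = 3 energies with s^z = 1/2 and s^z = 3/2, respectively.
    return ground_energy(2, 1, u), ground_energy(3, 0, u)
```

Below the crossing the $S=1/2$ state lies strictly below $-2t$; above it, the two sector minima coincide, signalling the saturated ferromagnet.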
\subsection{Phase diagrams}
In Figs. \ref{fig:phase_u4} and \ref{fig:phase_u6}, phase diagrams
for the 4-site cluster ($U=4$ and $U=6$) are shown (see also
Ref~\cite{cond-mat}). These have been constructed almost
exclusively using the temperatures, $T_c(\mu)$, $T_s(\mu)$ and
$T_{AF}(\mu)$, defined previously. We have identified the
following phases in these diagrams: (I) and (II) are MH-like
paramagnetic phases with a charge pseudogap separated by a phase
boundary where the spin susceptibility reaches a maximum, with
$\Delta^{e-h}(T)>0$, $\Delta^{AF}\equiv 0$; at finite temperature,
phase I has a higher $\left\langle N\right\rangle$ compared to
phase II; Phase (III) is a MH-like antiferromagnetic insulator
with bound charge and spin, when $\Delta^{e-h}(T)>0$,
$\Delta^s(T)>0$, $\Delta^{AF}(T)>0$; (P) is a line phase for $U=4$
with a vanishing charge gap at $\left\langle N\right\rangle=3$, now
corresponding to the opening of a pairing gap
($\Delta^P(T)>0$)
in the electron-electron channel with $\Delta^c_{3} (U: T)<0$. We
have also verified the well known fact that the low temperature
behavior in the vicinity of half-filling, with charge and spin
pseudogap phases coexisting, represents an AF insulator
~\cite{JMMM}. However, {\sl away from half-filling}, we find very
intriguing behavior in thermodynamical charge and spin degrees of
freedom.
In both phase diagrams, we find similar paramagnetic MH (I), (II)
charge-spin separated phases in addition to the AF (III) phase
where spin and charge are bound. In Fig.~\ref{fig:phase_u6},
spin-charge separation in phases (I) and (II) originates for
relatively large $U$ ($=6$) in the underdoped regime. In contrast,
Fig.~\ref{fig:phase_u4} shows the existence (at $U=$ 4) of a line
phase (with pairing) similar to $U<0$ case with electron
pairing
($\Delta^P(T)>0$),
when the chemical potential is pinned upon
doping within the highly degenerate midgap states near
(underdoped) $1/8$ filling.
\subsection{Quantum critical points}
Among other interesting results, sharp
transitions at QCPs
near $\left\langle N\right\rangle=3$ are found in the ground
state at $U>U_c(0)$ between phases with true charge and spin gaps;
for infinitesimal $T\to 0$, these gaps are transformed into
`pseudogaps' with some nonzero weight between peaks (or maxima) in
susceptibilities monitored as a function of doping (i.e. $\mu$) as
well as $h$. In the limiting case $T_c(\mu_c)\to 0$, the QCP
doping, $\mu_c$, defines a sharp MH-like (AF)
transition with diverging $\chi_c$~\cite{JMMM}. At the QCP doping,
$\mu_s$, with $T_s(\mu_s) \to 0$, the zero-field spin susceptibility,
$\chi_s$, also exhibits a maximum.
In Fig.~\ref{fig:phase_u6}, the critical temperature $T_s (\mu)$
falls abruptly to zero at the QCP doping, $\mu_s$ (true only for
$U>U_c(0)$), implying~\cite{Tallon} that the (spin) pseudogap can
exist independently of possible particle pairing in
Fig.~\ref{fig:phase_u6}. In contrast, for $U<U_c(0)$ and low
temperature, there is no charge-spin separation near $\left\langle
N\right\rangle=3$. Therefore in Fig.~\ref{fig:phase_u4} at $U=4$
($U<U_c(0)$) we do not observe any QCP associated with $\mu_s$ or
$\mu_c$ close to $\left\langle N\right\rangle=3$. Instead,
Fig.~\ref{fig:phase_u4} shows the existence of a line phase (with
pairing) similar to the attractive $U<0$ case with a spin
pseudogap, which exists only at finite temperature $T_s(\mu)>0$,
and electron pairing
($T_P>T_s$),
when the chemical potential is
pinned upon doping within the highly degenerate midgap states
near (underdoped) 1/8 filling.
We have
also seen that a reasonably strong magnetic field can bring about
phase separation and has a dramatic effect (mainly) on the QCP at
$\mu_s$, at which the spin pseudogap disappears. It is evident
from our exact results that the presence of QCPs at zero temperature
and of critical crossovers at the temperatures $T_c(\mu)$, $T_s(\mu)$ and
$T_{AF}(\mu)$ gives strong support to the cooperative
character of the observed phase transitions and crossovers, similar to
those seen in large
thermodynamic systems at finite temperatures~\cite{Langer,Cyrot}.
\subsection{Charge-spin separation}
The charge-spin separation effect is fundamental for understanding
the generic features common to small and large thermodynamic
systems. We have formulated exact criteria for when the charge and spin
excitations decouple at $U>U_c(T)$. There is controversy
regarding the nature of the MH and AF transitions and the relation between
their respective critical temperatures~\cite{JMMM}.
\ref{fig:phase_u4} and \ref{fig:phase_u6}, the charge, decoupled
from the spin degree of freedom, condenses at temperatures below $T_c$
while AF spin correlations
near half-filling
are seen to develop at lower temperatures
$T_{AF}(\mu)<T_c(\mu)$~\cite{JMMM}.
However, in the limited range close to $\mu\ge \mu_c$, there is a
reverse behavior, $T_{AF}(\mu)\ge T_c(\mu)$.
Electrons were until recently thought to carry their charge and
spin degrees of freedom equally; however, accurate studies of
thermodynamic response functions in nanoscale clusters show that
in real materials, these two degrees of freedom are relatively
independent of one another and can condense at different
doping levels $\mu_c$, $\mu_s$ and transition temperatures
$T_c(\mu)$, $T_{AF}(\mu)$ and $T_{s}(\mu)$ shown in
Fig.~\ref{fig:phase_u4} and Fig.~\ref{fig:phase_u6}.
The charge-spin separation is an unusual behavior of electrons in
some materials under certain conditions permitting the formation
of two independent (bound) electron-electron or electron-hole
pairs (quasiparticles) in the charge sector, and spin singlet and
triplet states in the spin sector. The spin quasiparticle only
carries the spin degree of the electron but not the charge, while
the charge quasiparticle has spin equal to zero but its electric
charge equals either zero (electron-hole pair) or a charge of two
electrons (electron-electron pair). We find that at large
$U>U_c(T)$, clusters with localized charge are favored over
itinerant ones.
\begin{figure}
\begin{center}
\includegraphics*[width=20pc]{ph_u0.eps}\hfill
\end{center}
\caption {The single particle or `noninteracting' ($U=0$) case,
illustrating the positions of charge and spin susceptibility
peaks in a $T-\mu$ space for the 4-site cluster at $\mu<1$ (half-filling is
at $\mu_0=0$). Note
how the charge and spin peaks follow one another. Even in the
presence of a nonzero magnetic field there is no charge-spin
separation.}
\label{fig:phase_u0}
\end{figure}
As an important footnote, in the noninteracting $U=0$ case shown
in Fig.~\ref{fig:phase_u0}, the charge and spin peaks follow one
another (in sharp contrast to the $U=4$ and $6$ cases). In
regions I and II, positions of charge (as well as spin) maxima and
minima coincide indicating that there is no charge-spin
separation, even in the presence of a magnetic field. In the
entire range of $\mu$, the charge and spin fluctuations directly
follow one another without charge-spin separation. Our detailed
analysis of the (responses such as) variation of electron
concentration $\left\langle N\right\rangle$, zero magnetic field
magnetization $\left\langle s^z\right\rangle$ {\it vs} $\mu$ and
various fluctuations shows that there is no charge-spin separation
and both the spin and charge degrees closely follow each other.
Thus, at $U=0$ the spin and charge degrees are strongly coupled to
one another. On the other hand, the charge-spin separation effect
at $U\neq 0$ led to rigorous definitions of Mott-Hubbard-like,
antiferromagnetic and spin pseudogaps, particle-particle pairing and
related crossovers.
\section{Conclusion}
In summary, we have illustrated how to obtain phase diagrams and
identify the presence of temperature driven crossovers, quantum
phase transitions and charge-spin separation for any $U\ge 0$ in
the four-site Hubbard {\it nanocluster} as doping (or chemical
potential) is varied. Specifically, our exact solution pointed out
an important difference between the $U=4$ ($U<U_c(0)$) and $U=6$
($U>U_c(0)$) phase diagrams at finite temperature in the vicinity
of hole doping $\approx 1/8$, which can be tied to possible
electron-electron pairing due to overscreening of the repulsive
interaction between electrons in the former.
The resulting phase diagram with competing hole-rich
($\left\langle N\right\rangle=2$), hole-poor ($\left\langle
N\right\rangle=4$) and magnetic ($\left\langle N\right\rangle=3$)
phases
also captures the essential features of phase separation in doped
La$_{2-x}$Sr$_x$CuO$_{4+y}$~\cite{Hashini}.
Our analytical results near
$\left\langle N\right\rangle\approx 3$ strongly suggest that
particle pairing can exist at $U<U_c(T)$, while particle-hole
binding is presumed to occur for $U>U_c(T)$. It is also clear that
short-range correlations alone are sufficient for pseudogaps to
form in small and large clusters, which can be linked to the
generic features of the temperature and doping phase diagrams
seen in the HTSCs. The exact cluster solution shows how
charge and spin
(pseudo)gaps are formed at the microscopic level, and how they behave as a
function of doping (i.e. chemical potential), magnetic field and
temperature. The pseudogap formation can also be associated with
the condensation of spin and charge degrees of freedom below spin
and charge crossover temperatures. In addition,
our exact analytical and
numerical calculations provide important benchmarks for comparison
with Monte Carlo, RSRG, DCA and other approximations.
Finally, our results
on the existence of QCP and crossover temperatures
show the {\sl cooperative nature} of phase transition phenomena in
finite-size clusters similar to large thermodynamic systems. The
small {\it nanoclusters} exhibit a pairing mechanism in a limited
range of $U$, $\mu$ and $T$ and share very important intrinsic
characteristics with the HTSCs apparently because in all these
`bad' metallic high-$T_c$
materials, local interactions play a key role. Just as charge and spin
fluctuations are related to the charge and spin susceptibilities
(\ref{fluctuation_number}), energy fluctuations are related to the
specific heat; these new results for the 4-site and larger
clusters, which provide further support to the picture developed
here, will be reported elsewhere.
One of us (ANK) thanks Steven Kivelson for his interest and helpful
discussions, and for communicating the results of the preprint of Ref.~\cite{tk}.
This research was
supported in part by the U.S. Department of Energy under Contract
No. DE-AC02-98CH10886.
\label{sec:introduction}
Most people have experienced at least once the annoying high-pitched sound that chalk may produce when pressed against a blackboard.
As it is now well known \cite{CHAMPNEYS2016}, the sound is the result of fast vibrations of the piece of chalk that quickly and repeatedly detaches from the blackboard and comes back into contact with it.
This phenomenon is paradoxical as the more one presses the chalk against the surface, the more likely bouncing motion becomes.
This type of oscillatory behaviour is not only annoying but can be costly and troublesome when it manifests in practical applications.
For example, the repeated lift of an automated tool performing a cut leads to imprecise processing, resulting in unusable goods or ones with reduced value \cite{Ibrahim1994}.
Moreover, in an assembly line, a robotic arm used to grasp objects from a moving belt may abruptly be pushed away from the belt, resulting in decreased speed and accuracy \cite{Assembly2009}.
The phenomenon described above was named after Paul Painlev\'e, who, in 1905, published the first studies related to the paradox, providing a mathematical model.
In particular, in \cite{Painleve1905}, he analysed the dynamics of a rigid stick sliding on a surface, showing that, assuming a Coulomb friction law, when the friction coefficient was higher than a certain threshold value, a non-trivial phenomenon occurs.
Namely, the solution to the differential equations describing the motion of the stick may become indeterminate or inconsistent, in the sense that the model would predict the stick to penetrate the rigid surface, which clearly is not realistic.
In the following years, many mathematicians and scientists have been interested in the study of this paradoxical phenomenon, but, as pointed out by Champneys in \cite{CHAMPNEYS2016}, to this date, all the ways in which the stick can enter the inconsistent or indeterminate solution modes have not been determined analytically.
For this reason, most of the research follows a numerical or experimental approach in the investigation of the problem.
For example, in \cite{loststed1982}, L\"{o}tstedt developed a numerical simulation of the dynamics of rigid mechanical systems under unilateral constraints, in order to study the Painlev\'{e} phenomenon.
In \cite{Or2012} and \cite{Burns2017}, numerical simulations are used to investigate how the paradox affects the motion of an inverted pendulum sliding on an inclined plane and that of a double pendulum, respectively.
Furthermore, in \cite{leine2016}, Leine et al. used numerical simulations to study the paradox in a specific two-mass system, called the \textit{frictional impact oscillator}.
They showed that the critical friction value was closely linked to the mass ratio, and that the Painlev\'{e} paradox was the cause of a Hopf bifurcation, in which a sliding equilibrium loses its stability and a periodic bouncing motion appears.
Similar results can be found in \cite{LANCIONI2009}.
Another system whose motion is influenced by the paradox is the prismatic revolute robotic set-up analysed in \cite{elkaranshawy2017solving} via numerical simulations.
It is also important to highlight that the phenomenon studied by Painlev\'{e} can influence the motion of walking robots.
As a matter of fact, most passive walking models, such as the \emph{compass biped} or the \emph{rimless wheel} \cite{Collins1082}, assume a frictional sticking contact between the foot and the surface, whereas in reality there is always some slipping of the foot.
For instance, the numerical results obtained in \cite{Or2014} showed that regular periodic gait can be subject to an instability, related to the Painlev\'{e} phenomenon.
Moreover, in recent work by Zhao et al. \cite{zhao2008experimental}, the occurrence of the Painlev\'{e} paradox was demonstrated experimentally in a two-link robotic arm whose end effector is in contact with a sliding belt. It was shown that, for certain parameter values, the arm can lift off from the moving belt, providing the first physical demonstration of the paradox in a realistic robotic set-up reported in the literature.
To the best of our knowledge, the only instance of a control strategy employed to avoid the onset of the paradox is described in \cite{liang2012relative}, where a PID regulator is used to control the sliding of a two-links robot on a vertical wall.
The contribution of our work is twofold.
Firstly, we extend the analysis of the system originally presented in \cite{zhao2008experimental} unfolding the bifurcation mechanisms behind the occurrence of the Painlev\'e paradox.
Secondly, we exploit this new information to synthesise appropriate control strategies to prevent the paradox from taking place; this is, along with \cite{liang2012relative}, one of the very few attempts at using active controllers to address the problem.
In particular, in order to better understand the conditions that trigger the onset of the phenomenon, we present a characterisation of the steady state dynamics for different values of the velocity of the belt.
Specifically, we find that the paradox manifests itself only when the speed of the belt is in a certain critical interval.
However, we show that, even when the velocity of the belt is within that critical interval, some control strategies may be employed to avoid the undesired lift-off and bouncing motion stemming from it.
In particular, we show how a PID regulator and a hybrid force/motion control scheme can be exploited to reach some positioning control goals while keeping the robot in a region of the phase space such that the paradox is not triggered.
According to our simulations, the hybrid control shows the most promising results, representing an innovation with respect to \cite{liang2012relative}, where only a PID control strategy was used.
Our results nicely combine bifurcation analysis with control system design, offering a novel approach for the active suppression of the Painlev\'e paradox in realistic mechanical systems.
\section{Bifurcation analysis}
\label{sec:bifurcation analysis}
\subsection{Model description}
\label{subsec:model_description}
\begin{figure}[t]
\centering {
{\includegraphics[width=0.5\columnwidth]{robot_scheme.pdf}}
\caption{A double-revolute robotic arm on a moving belt.}
\label{fig:robot_representation}
}
\end{figure}
We consider a two-link mechanical set-up like the one represented in Figure \ref{fig:robot_representation}.
Rotational dashpots with damping coefficient $\sigma$ are present in both the joints, and a rotational spring with elastic constant $k$ is mounted in the lower joint; moreover, the belt is moving at a speed $v_\R{belt}$.
We define the generalised coordinates
$\B{q} \triangleq \begin{bmatrix} \theta_1 & \theta_2 \end{bmatrix}\T$,
the coordinates of the end effector
$\B{z} \triangleq \begin{bmatrix} z_\R{t} & z_\R{n} \end{bmatrix}\T$,
and, for later use, the state vector
$\B{x} \triangleq \begin{bmatrix} \theta_1 & \dot{\theta}_1 & \theta_2 & \dot{\theta}_2 \end{bmatrix}\T$.
Then, a mathematical model of the system can be given as
\begin{equation}\label{eq:motion_equation}
\ddot{\B{q}} = \B{M}^{-1} (-\B{w} - \B{c} + \B{J}\T \B{f} + \B{u}),
\end{equation}
where:
\begin{itemize}
\item $\B{M}$ is the mass matrix;
\item $\B{w}$ contains the torques determined by the elastic force, viscous friction, and gravity;
\item $\B{c}$ are the torques determined by the centrifugal and Coriolis forces;
\item $\B{J}$ is the Jacobian, defined by the relation $\dot{\B{z}} = \B{J} \dot{\B{q}}$;
\item $\B{u} = \begin{bmatrix} u_1-u_2 & u_2\end{bmatrix}\T$ contains the control torques, with $u_1$ and $u_2$ being the torques applied to the first and the second joint, respectively;
\item $\B{f} = \begin{bmatrix}f_\R{t} & f_\R{n}\end{bmatrix}\T$ are the contact forces acting on the end effector, with $f_\R{n}$ being the normal reaction and $f_\R{t}$ being the Coulomb friction.
In particular, $f_\R{t} = - \mu \R{sign} (\dot{z}_\R{r}) f_\R{n}$, where $\dot{z}_\R{r} \triangleq \dot{z}_\R{t} - v_{\R{belt}}$ is the velocity of the contact point with respect to the belt and $\mu$ is the friction coefficient.
\end{itemize}
The expressions of the above quantities are
\begin{equation*}
\begin{aligned}
\B{M} &= \begin{bmatrix}
\frac{4}{3} m l^{2} & \frac{ml^{2}}{2} \R{cos} (\theta_2 - \theta_1) \\[1 ex]
\frac{ml^{2}}{2} \R{cos} (\theta_2 - \theta_1) & \frac{ml^{2}}{3}
\end{bmatrix}, \\
\B{w} &= \begin{bmatrix}
\frac{3mgl}{2}\R{sin}\theta_1-k(\theta_2-\theta_1+ \alpha_0)-\sigma(\dot{\theta}_2- 2\dot{\theta}_1)\\[1 ex]
\frac{mgl}{2}\R{sin}\theta_2+k(\theta_2-\theta_1+ \alpha_0)+ \sigma(\dot{\theta}_2-\dot{\theta}_1)
\end{bmatrix}, \\
\B{c} &= \begin{bmatrix}
\frac{ml^{2}}{2}\dot{\theta}_2^2\R{sin}(\theta_{1}-\theta_{2})\\[6 pt]
\frac{ml^{2}}{2}\dot{\theta}_1^2\R{sin}(\theta_{2}-\theta_{1})
\end{bmatrix}, \\
\B{J} &= \begin{bmatrix}
\B{j}_1\T \\[1ex] \B{j}_2\T
\end{bmatrix} =
\begin{bmatrix}
l\R{cos}\theta_1 & l\R{cos}\theta_2\\[1 ex]
l\R{sin}\theta_1 & l\R{sin}\theta_2
\end{bmatrix}.
\end{aligned}
\end{equation*}
The values of the parameters are set using the experimentally derived ones reported in \cite{zhao2008experimental} and are given in Table \ref{tab:parameter_values}.
\begin{table}[t]
\caption{Robot parameters}
\label{tab:parameter_values}
\begin{center}
\begin{tabular}{@{}llll@{}}
\toprule
Parameter & Symbol & Value & Unit \\ \midrule
belt speed & $v_{\R{belt}}$ & $[-1,-0.1]$ & m/s \\
friction coefficient & $\mu$ & $[0.1,1]$ & - \\
links mass & $m$ & $0.12$ & kg \\
links length & $l$ & $0.21$ & m \\
damping coefficient & $\sigma$ & $0.005$ & N$\cdot$s/m \\
elastic constant & $k$ & $1.3$ & N/m \\
robot height & $H$ & $0.3775$ & m \\
spring rest position & $\alpha_0$ & $13.72$ & degrees \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
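For reference, the model can be assembled numerically from the expressions above and the parameter values in Table \ref{tab:parameter_values}; the sketch below (pure Python, with $g = 9.81$ m/s$^2$ assumed, since the table does not list it) evaluates $\B{M}$, $\B{w}$, $\B{c}$, $\B{J}$ and solves \eqref{eq:motion_equation} for $\ddot{\B{q}}$.

```python
import math

# Parameter values from Table 1; g is an assumption (not listed there).
m, l = 0.12, 0.21            # link mass [kg] and length [m]
k, sigma = 1.3, 0.005        # spring and damper constants
alpha0 = math.radians(13.72)
g = 9.81                     # assumed gravitational acceleration [m/s^2]

def mass_matrix(th1, th2):
    c12 = math.cos(th2 - th1)
    return [[4/3 * m * l**2, 0.5 * m * l**2 * c12],
            [0.5 * m * l**2 * c12, m * l**2 / 3]]

def w_vec(th1, th2, th1d, th2d):
    spring = k * (th2 - th1 + alpha0)
    return [1.5 * m * g * l * math.sin(th1) - spring - sigma * (th2d - 2*th1d),
            0.5 * m * g * l * math.sin(th2) + spring + sigma * (th2d - th1d)]

def c_vec(th1, th2, th1d, th2d):
    return [0.5 * m * l**2 * th2d**2 * math.sin(th1 - th2),
            0.5 * m * l**2 * th1d**2 * math.sin(th2 - th1)]

def jacobian(th1, th2):
    return [[l * math.cos(th1), l * math.cos(th2)],
            [l * math.sin(th1), l * math.sin(th2)]]

def qdd(th1, th2, th1d, th2d, u=(0.0, 0.0), f=(0.0, 0.0)):
    """Right-hand side of the motion equation: qdd = M^{-1}(-w - c + J^T f + u)."""
    M = mass_matrix(th1, th2)
    w = w_vec(th1, th2, th1d, th2d)
    c = c_vec(th1, th2, th1d, th2d)
    J = jacobian(th1, th2)
    # (J^T f)_i = sum_j J[j][i] * f[j]
    rhs = [-w[i] - c[i] + J[0][i]*f[0] + J[1][i]*f[1] + u[i] for i in range(2)]
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]   # 2x2 solve by Cramer's rule
    return [( M[1][1]*rhs[0] - M[0][1]*rhs[1]) / det,
            (-M[1][0]*rhs[0] + M[0][0]*rhs[1]) / det]
```

The mass matrix is symmetric and positive definite for all configurations, so the explicit 2$\times$2 inversion is always well posed.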
Model \eqref{eq:motion_equation} can be recast in terms of the position $\B{z}$ of the end effector as
\begin{equation}\label{eq:motion_equation_end_effector}
\ddot{\B{z}} = -\B{J}\B{M}^{-1} (\B{w} + \B{c} - \B{u}) + \B{Q} \B{f} + \B{s},
\end{equation}
where $\B{Q} \triangleq \B{J} \B{M}^{-1} \B{J}\T = (Q_{i,j})$, $i,j = 1,2$, and $\B{s}$ is the centripetal acceleration, given by
\begin{equation*}
\B{s} = \begin{bmatrix}
s_1 \\[1 ex]
s_2
\end{bmatrix} = \begin{bmatrix}
-l(\dot\theta_1^2\R{sin}\theta_1+\dot\theta_2^2\R{sin}\theta_2) \\[1 ex]
l(\dot\theta_1^2\R{cos}\theta_1+\dot\theta_2^2\R{cos}\theta_2)
\end{bmatrix}.
\end{equation*}
Model \eqref{eq:motion_equation_end_effector} can be expressed componentwise as
\begin{equation}\label{eq:motion_equation_end_effector_t}
\ddot{z}_\R{t} = -\B{j}_1^\R{T}\B{M}^{-1}(\B{w}+\B{c}-\B{u})
+ f_\R{n} (- \mu \R{sign} (\dot{z}_\R{r}) Q_{1,1} + Q_{1,2} )
+ s_1,
\end{equation}
\begin{equation}\label{eq:motion_equation_end_effector_n}
\ddot{z}_\R{n} = -\B{j}_2^\R{T}\B{M}^{-1}(\B{w}+\B{c}-\B{u})
+ f_\R{n} (- \mu \R{sign} (\dot{z}_\R{r}) Q_{2,1} + Q_{2,2} )
+ s_2.
\end{equation}
For a fixed value of $\mu$, when $\dot{z}_\R{r} > 0$, we will show that there exists a region in the state space, say $\mathcal{R}^+ \subseteq \mathbb{R}^4$, such that when the state vector $\B{x} \in \mathcal{R}^+$ the paradox is triggered.
Differently, when $\dot{z}_\R{r} < 0$, the paradox manifests itself if $\B{x} \in \mathcal{R}^- \subseteq \mathbb{R}^4$.
However, the sets of positions and velocities contained in $\mathcal{R}^+$ are the mirror images, with respect to the $y$-axis, of those contained in $\mathcal{R}^-$.
Therefore, for the sake of simplicity, we can limit our analysis to the case that $\dot{z}_\R{r}$ is positive; the results can then easily be extended to the case that $\dot{z}_\R{r}$ is negative by exploiting the symmetry between $\mathcal{R}^+$ and $\mathcal{R}^-$.
Defining the functions $p : \mathbb{R}^3 \rightarrow \mathbb{R}$ and $b : \mathbb{R}^6 \rightarrow \mathbb{R}$, given by
\begin{align}
b \triangleq &- \B{j}_2^\R{T} \B{M}^{-1} (\B{w} + \B{c} - \B{u}) + s_2, \\
p \triangleq &- \mu Q_{2,1}+Q_{2,2},
\end{align}
we can rewrite \eqref{eq:motion_equation_end_effector_n} as
\begin{equation}\label{eq:normal_acceleration}
\ddot{z}_\R{n} = b (\B{q},\dot{\B{q}},\B{u}) + p(\B{q},\mu) f_\R{n}.
\end{equation}
In \eqref{eq:normal_acceleration}, the physical meaning of the newly introduced functions $b$ and $p$ is more evident.
$b$ is the \emph{free normal acceleration}, i.e. the normal acceleration in the absence of contact forces, whereas $p$ determines how the normal reaction $f_\R{n}$ influences the normal acceleration $\ddot{z}_\R{n}$ of the end effector.
When $z_\R{n} = -H$, the end effector is in contact with the moving belt, reproducing a situation analogous to that originally investigated by Painlev\'e.
As explained in \cite{Painleve1905} for a more general case, if $\mu$ is greater than a critical value $\mu_\R{c}$, system \eqref{eq:motion_equation_end_effector} can display four different types of solution, depending on the signs of $b$ and $p$, which in turn depend on $\B{q}$, $\dot{\B{q}}$, $\B{u}$, and $\mu$.
The first two modes, \emph{sliding} and \emph{flight}, correspond to well-defined solutions of the motion equation \eqref{eq:normal_acceleration} and are both characterised by $p > 0$; when $p < 0$, instead, the solution to \eqref{eq:normal_acceleration} is \emph{indeterminate} or \emph{inconsistent}.
Next, we describe each solution mode in greater detail.
\begin{enumerate}[(i)]
\item \emph{Sliding, $p > 0$, $b < 0$.} --- The end effector is in contact with the belt, i.e.~$z_\R{n} = -H$, $f_\R{n}=-\frac{b}{p}$, and possibly $\dot{z}_\R{t} \ne 0$.
\item \emph{Flight, $p > 0$, $b > 0$.} --- Either $z_\R{n} > -H$, or $z_\R{n} = -H$ and $\ddot{z}_\R{n} > 0$.
\item \emph{Indeterminate, $p < 0$, $b > 0$.} --- The solution is not unique; nonetheless, according to \cite{elkaranshawy2017solving}, when simulating the system, it is possible to resolve the indeterminate mode into a flight mode.
\item \emph{Inconsistent, $p < 0$, $b < 0$.} --- Given the signs of $p$ and $b$, we would have $\ddot{z}_n < 0$, which, recalling that $z_\R{n} = -H$, is not physically feasible, because both the robot and the belt are assumed to be rigid.
This troublesome scenario can be resolved, as explained in \cite{zhao2008experimental}, as an \emph{impact without collision} \cite{GENOT1998}, in which $\dot{z}_\R{n}$ turns from zero to positive, determining the lift-off of the end effector, followed by a succession of bounces on the belt.
\end{enumerate}
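The sign-based classification above reduces to a simple decision rule; a minimal sketch (assuming the end effector is in contact, $z_\R{n} = -H$, and $\mu > \mu_\R{c}$):

```python
def solution_mode(p, b):
    """Classify the contact solution mode from the signs of p and b,
    following the four cases (i)-(iv): sliding, flight, indeterminate,
    inconsistent. Assumes contact at z_n = -H and mu above critical."""
    if p > 0:
        return "sliding" if b < 0 else "flight"
    return "inconsistent" if b < 0 else "indeterminate"
```

In a simulation, the inconsistent outcome would then be resolved as an impact without collision (lift-off), and the indeterminate one as a flight mode, as described above.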
In Figure \ref{fig:phase_plane_open_loop}, we provide an example of the regions associated to each of the four solution modes.
\begin{figure}[t]
\centering {
{\includegraphics{phase_plane_open_loop.pdf}}
\caption{Different modes of solution for different values of $\theta_1$ and $\dot{\theta}_1$. Here $\mu = 0.6$, $\B{u} = \B{0}$, and $\theta_2$ and $\dot{\theta}_2$ are chosen in order to have the tip of the robot in contact with the belt. ``indet.'' stands for indeterminate and ``$*$'' stands for inconsistent.
The black solid line is the place where $b = 0$, while the dashed red lines are the places where $p = 0$.}
\label{fig:phase_plane_open_loop}
}
\end{figure}
\subsection{Bifurcation diagrams}
\label{subsec:bifurcation_diagrams}
\begin{figure}[t]
\centering {
\includegraphics{bifurcation_open_loop.pdf}
\caption{Bifurcation diagram with $\mu = 0.6$. The black solid line corresponds to initial conditions $\B{x}_{0,\R{a}} = [32 \ 0 \ 18.27 \ 0 ]\T$, whereas the dashed red line corresponds to $\B{x}_{0,\R{d}} = [-11.4 \ 0 \ -35.1 \ 0 ]\T$.}
\label{fig:bifurcation_open_loop}
}
\end{figure}
\begin{figure}[t]
\centering {
\subfigure[]{\label{fig:poincare_map_large}
\includegraphics{poincare_map.pdf}}
\subfigure[]{\label{fig:poincare_map_detail}
\includegraphics{poincare_map_detail_a.pdf}}
\subfigure[]{\label{fig:poincare_map_chaos_periodicity}
\includegraphics[scale=0.4]{periodicocaotico.pdf}}
}
\caption{Bifurcation diagram with $\mu = 0.6$ and initial conditions $\B{x}_{0,\R{a}} = [32 \ 0 \ 18.27 \ 0 ]\T$. (a) is the full picture, while (b) is an enlargement of the portion in the red box traced in (a); (c) depicts the type of the asymptotic behaviour: red represents a chaotic dynamics, whereas black stands for periodic motion.}
\label{fig:poincare_map}
\end{figure}
To better understand the occurrence of the paradox causing the lift-off of the end effector, we traced a two-dimensional numerical bifurcation diagram in the parameter space consisting of the friction coefficient $\mu$ and the speed of the belt $v_\R{belt}$.
The system was simulated using event-detection routines available in Matlab to detect transitions between each of the solution modes described in Section \ref{subsec:model_description}.
The bifurcation diagram was constructed via a brute-force method \cite{jacobson2002stability} by simulating the system from a set of random initial conditions for parameters selected in a grid defined by the ranges $0.1\leq \mu \leq 1$ and $-1 \leq v_{\R{belt}} \le -0.1$, with steps of 0.1 and 0.005, respectively.
In each run, the state values are recorded, after a transient time of 250 s, over a time interval of 50 s.
Then, if $\max \dot{\theta}_1 > 0$ in a given run, the parameter values used in that simulation are such that a persistent, undesired bouncing motion manifests.
We observed that the bounces appear only for $\mu \geq \mu_\R{c} = 0.4$, that is $\R{max} \dot{\theta}_1 = 0$ if $\mu < \mu_\R{c}$, for all values of $v_\R{belt}$.
Moreover, we verified that, when $\mu \ge \mu_\R{c}$, features of the bouncing motion such as duration and (a)periodicity depend only on $v_{\R{belt}}$.
Given that the bifurcation diagram is flat for $\mu < \mu_\R{c}$, and independent of $\mu$ provided that $\mu \ge \mu_\R{c}$, we only present a two-dimensional section of the diagram, in Figure \ref{fig:bifurcation_open_loop}, where $\mu = 0.6$ was considered.
We note that (i) not all initial conditions trigger the bounces (see the red dashed line), and that (ii) bounces are present only when $-0.575 \le v_\R{belt} \le -0.3$.
In order to gain greater knowledge on the specific behaviour of the system when $-0.575 \le v_\R{belt} \le -0.3$ (still with $\mu = 0.6$), we traced a second more detailed bifurcation diagram in Figure \ref{fig:poincare_map}, in which we plot the value of $\theta_1$ when $\dot{\theta}_1$ turns from negative to positive, as a function of the parameter value.
The diagram shows the presence of both periodic and chaotic solutions, providing evidence for the onset of complex seemingly aperiodic behaviour in the parameter regions depicted in red in Figure \ref{fig:poincare_map_chaos_periodicity}.
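The brute-force construction can be sketched as follows. Here \texttt{simulate} is a placeholder for the event-driven integration (not reproduced), and the toy stand-in merely reproduces the reported outcome (bounces iff $\mu \ge 0.4$ and $-0.575 \le v_\R{belt} \le -0.3$) so that the scan logic can be exercised.

```python
def brute_force_diagram(simulate, mu_values, v_values):
    """Flag, for each (mu, v_belt) cell of the grid, whether persistent
    bouncing occurs; simulate() must return max(theta1_dot) recorded over
    the 50 s window that follows the 250 s transient."""
    return [[simulate(mu, v) > 0.0 for v in v_values] for mu in mu_values]

# Grids used in the paper: mu in [0.1, 1] (step 0.1), v_belt in [-1, -0.1] (step 0.005).
mu_grid = [round(0.1 + 0.1 * i, 10) for i in range(10)]
v_grid = [round(-1.0 + 0.005 * i, 10) for i in range(181)]

# Toy stand-in for the actual event-driven simulation (illustration only).
def toy_simulate(mu, v):
    return 1.0 if (mu >= 0.4 and -0.575 <= v <= -0.3) else 0.0

diagram = brute_force_diagram(toy_simulate, mu_grid, v_grid)
```

Swapping \texttt{toy\_simulate} for the real integrator reproduces the scan; everything else (grid construction, flagging) is independent of the solver.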
\section{Control synthesis}
\label{sec:control_synthesis}
Next, we wish to design a controller able to avoid the onset of the bouncing motion due to the paradox and keep the robot moving in contact with the belt.
This in turn requires using a feedback control to guarantee that $p>0$ and $b>0$ at all times in \eqref{eq:normal_acceleration}.
Without loss of generality, we set $v_\R{belt}= - 0.4$, a value for which the paradox can occur.
Firstly, setting $\mu = 0.6$, in Table \ref{tab:admissible_configurations} we determine analytically the values of $z_\R{t}$ such that $p > 0$; we call these \emph{admissible configurations}, given that indeterminate and inconsistent solutions will not appear for such values of $z_\R{t}$.
Secondly, one should determine, among the admissible configurations, those corresponding to $b < 0$; nevertheless, this task is not easy to achieve analytically, because, differently from $p$, $b$ is also a function of $\dot{\B{q}}$ and $\B{u}$.
However, $b < 0$ can be attained using a control scheme that aims at keeping $f_\R{n} > 0$, as it is easy to verify from \eqref{eq:motion_equation_end_effector_n}, when $\ddot{z}_\R{n} = 0$.
We start by using a simpler PID controller, showing that such strategy can keep the end effector in contact with the belt only in a narrow range of the admissible configurations.
Next, we move to a hybrid force/motion control \cite{Siciliano2008} (that allows the regulation of $f_\R{n}$) and demonstrate that this latter approach guarantees avoidance of the lift-off of the end effector in a wider range of the admissible configurations.
\begin{table}[t]
\caption{Admissible configurations for $\mu=0.6$}
\label{tab:admissible_configurations}
\begin{center}
\begin{tabular}{@{}ll@{}}
\toprule
Elbow position & Admissible configurations [m]\\ \midrule
elbow up ($\theta_2-\theta_1>0$) & $-0.184 \leq z_\R{t}< 0.183$\\
elbow down ($\theta_2-\theta_1<0$) & $-0.184 \leq z_\R{t} < 0.168$\\ \bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{PID strategy}
\label{subsec:pid_strategy}
For the sake of simplicity, we started by considering a simpler PID control approach to test its feasibility for solving the control goal.
Let $\B{z}^*$ be the reference value for the end effector coordinates, $e \triangleq z_\R{t}^*-z_\R{t}$ the reference error, and $\B{q}' \triangleq \begin{bmatrix} \theta_1 & \theta_2 - \theta_1 \end{bmatrix}\T$.
Let $\B{q}'^*$ be the reference value for $\B{q}'$, computed from $\B{z}^*$
using inverse kinematics as explained in \cite{Siciliano2008}.
Hence, the control terms $u_i$, $i = 1,2$, obtained using a PID control scheme are given by
\begin{equation}
u_i = K_{\R{P},i} (q'^*_i - q'_i) + K_{\R{I},i} \int_{0}^{t} (q'^*_i - q'_i) \, \R{d}\tau + K_{\R{D},i} (\dot{q}'^*_i - \dot{q}'_i),
\end{equation}
where $K_{\R{P},i}$, $K_{\R{I},i}$, $K_{\R{D},i}$, $i = 1,2$, are constants.
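A discrete-time sketch of this control law for one joint is given below; the sampling time and the finite-difference derivative are our own assumptions (the continuous law uses the velocity error directly), and the gains are those reported below for the elbow-down case.

```python
def make_pid(kp, ki, kd, dt):
    """Discrete-time PID on the joint error q'*_i - q'_i for one joint.
    Minimal sketch: rectangle-rule integral, finite-difference derivative
    of the error; dt is an assumed sampling time."""
    state = {"i": 0.0, "prev": 0.0}
    def step(err):
        state["i"] += err * dt                 # integral of the error
        d = (err - state["prev"]) / dt         # derivative of the error
        state["prev"] = err
        return kp * err + ki * state["i"] + kd * d
    return step

# Gains found for the elbow-down initial condition (joint 1 only,
# since K_P2 = K_I2 = K_D2 = 0 there).
pid1 = make_pid(kp=200.0, ki=25.0, kd=2.0, dt=0.01)
```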
The PID gains were selected heuristically by running a series of numerical simulations from two sets of initial conditions.
These are $\B{x}_{0,\R{d}} = \begin{bmatrix} -11.4 & 0 & -35.1 & 0 \end{bmatrix}\T$ and $\B{x}_{0,\R{u}} = \begin{bmatrix} -35.1 & 0 & -11.4 & 0 \end{bmatrix}\T$, both corresponding to $\B{z} = \begin{bmatrix} -0.1624 & 0 \end{bmatrix}\T$, with the only difference that $\B{x}_{0,\R{d}}$ is an ``elbow down'' posture ($\theta_2-\theta_1<0$) and $\B{x}_{0,\R{u}}$ is an ``elbow up'' posture ($\theta_2-\theta_1>0$).
The gains were adjusted in a trial-and-error process with the aim of obtaining a large value of $Z_{\R{t},\R{sliding}}$, that is the largest value of $z_\R{t}$ such that no lift-off occurs.
We observed different results, depending on the initial condition. For $\B{x}(t = 0) = \B{x}_{0,\R{d}}$, $Z_{\R{t},\R{sliding}} = 0.0375$, and acceptable values of the gains were found to be $K_{\R{P},1} = 200, K_{\R{I},1} = 25, K_{\R{D},1} = 2$, and $K_{\R{P},2} = K_{\R{I},2} = K_{\R{D},2} = 0$.
The corresponding simulation graphs are shown in Figure \ref{fig:pid}.
Differently, for $\B{x}(t = 0) = \B{x}_{0,\R{u}}$, the simulation results showed that the PID control is not able to effectively avoid the onset of the paradox.
As a matter of fact, we could not find values of the control gains such that lift-off was avoided.
An example is visible in Figure \ref{fig:failure}, where the time evolution of the normal reaction $f_\R{n}$ is depicted; notice that it eventually becomes zero, meaning that the end effector detaches from the belt.
\begin{figure}[t]
\centering {
{\includegraphics{pidgomitobasso}}
\caption{Simulation with PID control and $\B{x}(t = 0) = \B{x}_{0,\R{d}}$.}
\label{fig:pid}
}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics{pidgomitoalto}
\caption{Simulation with PID control, $\B{x}(t = 0) = \B{x}_{0,\R{u}}$ and $z_\R{t}^*$ as in Figure \ref{fig:pid}.}
\label{fig:failure}
\end{figure}
\subsection{Hybrid force/motion control}
\label{subsec:hybrid_motion_force_control}
Next, we show that better performance can be achieved with a force/motion control scheme \cite{Siciliano2008}, since it allows the regulation of the normal reaction $f_\R{n}$ in addition to the end effector's tangential position $z_\R{t}$.
In particular, defining the unit vectors $\hat{\B{i}}_x \triangleq \begin{bmatrix} 1 & 0 \end{bmatrix}\T$ and $\hat{\B{i}}_y \triangleq \begin{bmatrix} 0 & 1 \end{bmatrix}\T$, associated with the $x$ and $y$ Cartesian axes, the control action is given by
\begin{equation} \label{eq:force_motion_control}
\B{u} = \B{w} + \B{c} + \B{M} \B{J}^{-1} \left( - \dot{\B{J}} \dot{\B{q}} + \hat{\B{i}}_x \alpha_v \right) + \B{J}\T \left( - \hat{\B{i}}_x f_{\R{t}} - \hat{\B{i}}_y \alpha_{f} \right).
\end{equation}
Note that, in \eqref{eq:force_motion_control}, on the right-hand side, the first, second, and fifth terms compensate corresponding terms in \eqref{eq:motion_equation_end_effector}.
Differently, the fourth and sixth terms are used to assign dynamics for $z_\R{t}$ and $f_\R{n}$, respectively. Specifically, letting $f_\R{n}^*$ be a reference value for the normal reaction, we choose
\begin{align}
\alpha_v =& \ddot{z}_\R{t}^* + K'_{\R{P}} (z^*_{\R{t}} - z_\R{t}) + K'_\R{D} (\dot{z}^*_\R{t} - \dot{z}_\R{t}), \\
\alpha_{f} =& f^*_\R{n} + K'_\R{I} \int_{0}^{t} (f^*_\R{n} - f_\R{n}) \, \R{d}\tau,
\end{align}
where $K'_\R{P}$, $K'_\R{D}$, and $K'_\R{I}$ are control gains.
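The two outer loops $\alpha_v$ and $\alpha_f$ can be sketched as follows (the rectangle-rule integral and the sampling step are assumptions of ours; the model-compensation terms of \eqref{eq:force_motion_control} are not reproduced here):

```python
def alpha_v(zt_ref, zt, dzt_ref, dzt, ddzt_ref, kp, kd):
    """Motion loop: desired tangential acceleration assigned to z_t."""
    return ddzt_ref + kp * (zt_ref - zt) + kd * (dzt_ref - dzt)

def make_alpha_f(ki, dt, fn_ref):
    """Force loop: normal-force reference plus integral action on the
    force error (rectangle-rule integral; dt is an assumed step)."""
    acc = {"i": 0.0}
    def step(fn):
        acc["i"] += (fn_ref - fn) * dt
        return fn_ref + ki * acc["i"]
    return step

# Reference and gains used in the elbow-down simulations.
alpha_f = make_alpha_f(ki=650.0, dt=0.001, fn_ref=10.0)
```

The full torque $\B{u}$ is then obtained by substituting $\alpha_v$ and $\alpha_f$ into \eqref{eq:force_motion_control} together with the model terms $\B{w}$, $\B{c}$, $\B{M}$, and $\B{J}$.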
A block diagram of the hybrid force/motion control scheme is illustrated in Figure \ref{fig:force_motion_control_scheme}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{controlloibrido.pdf}
\caption{Hybrid force/motion control scheme.}
\label{fig:force_motion_control_scheme}
\end{figure}
To test the performance of the control system, we ran a series of simulations from the same initial conditions used to validate the PID control strategy; the control gains being selected heuristically as $K'_\R{P} = K'_\R{D} = 900, K'_\R{I} = 650$ for $\B{x}(t=0)=\B{x}_{0,\R{d}}$ and $K'_\R{P} = K'_\R{D} = 900, K'_\R{I} = 0$ for $\B{x}(t=0)=\B{x}_{0,\R{u}}$.
The numerical results showed that, for $\B{x}_{0,\R{d}}$, $Z_{\R{t},\R{sliding}} = 0.148$, whereas, for $\B{x}_{0,\R{u}}$, $Z_{\R{t},\R{sliding}} = 0.163$, which are both higher than the values obtained with the PID, meaning that the force/motion control scheme allows the robot to operate in a wider range of configurations.
Examples of simulations are shown in Figures \ref{fig:hybrid} and \ref{fig:hybridone}, representing the results of the simulations starting from $\B{x}_{0,\R{d}}$ and $\B{x}_{0,\R{u}}$, respectively.
Moreover, we verified that when using the present control strategy, the persistent bouncing motion is suppressed for all $v_\R{belt} \in [-1, -0.1]$.
This is shown in the closed-loop bifurcation diagram in Figure \ref{fig:bifurcation_closed_loop}, which can be compared with that in Figure \ref{fig:bifurcation_open_loop}, representing the bifurcation diagram for the open loop system.
\begin{figure}[t]
\centering
\includegraphics{ibridogomitobasso.pdf}
\caption{Simulation with force/motion control, $\B{x}(t = 0) = \B{x}_{0,\R{d}}$ and $f_\mathrm{n}^*=10$ N.
In the third panel from the top, the black solid line is $u_1$, whereas the red dashed line is $u_2$.}
\label{fig:hybrid}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics{ibridogomitoalto.pdf}
\caption{Simulation with force/motion control, $\B{x}(t = 0) = \B{x}_{0,\R{u}}$ and $f_\mathrm{n}^*=10$ N.
In the third panel from the top, the black solid line is $u_1$, whereas the red dashed line is $u_2$.}
\label{fig:hybridone}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics{bifurcation_closed_loop.pdf}
\caption{Closed loop bifurcation diagram obtained with $\B{x}(t = 0) = \B{x}_{0,\R{d}}$, the same references as that in Figure \ref{fig:hybrid}, and in the presence of the force/motion control.}
\label{fig:bifurcation_closed_loop}
\end{figure}
As expected, the closed-loop system remains in contact with the belt over the entire parameter region of interest without any bifurcation to persistent bouncing motion.
\section{Conclusions}
\label{sec:conclusions}
We dealt with the analysis and control of the Painlev\'{e} paradox in a two-link robot in contact with a moving belt.
The paradox determines occasional lift-off of the tip of the robot, which is undesired in a number of applications, such as cutting or moving objects.
We started by conducting a bifurcation study varying the belt speed, finding that some values induce a chaotic motion of the end effector, while for others the motion is a periodic bouncing.
Then, we used the results of the bifurcation analysis to inform the control design and proposed two control schemes, a PID controller and a hybrid force/motion control strategy, which we compared through numerical simulations.
We showed that the latter strategy is effective in preventing the paradox from occurring, hence guaranteeing that the end effector of the robot stays in contact with the belt over a wider parameter range with respect to the PID.
\bibliographystyle{IEEEtran}
\section{Measurable and topological dynamics}
We will use definitions, notation, and some basic results from abstract measure theory and from the topology of compact metric spaces; their statements and proofs can be found, for example, in \cite{Folland}, \cite{Rudin}, or \cite{Stein2005}.
\subsection{Dynamics of measure automorphisms.} \index{automorphism! of measure spaces}
Let $(X, {\mathcal A})$ be a measurable space and let $T: X \mapsto X$ be a
measurable transformation, that is, $T^{-1}(A) \in {\mathcal A} \; \;
\forall A \in {\mathcal A}$.
\begin{definition} \em \index{dynamical system} {\bf Discrete dynamical system}
The \em discrete dynamical system \em generated by forward iterates of $T$ is the
map that assigns to each $ n \in \mathbb{N}$ the
transformation $T^n: X \mapsto X$, where $T^n := T \circ T \circ \ldots
\circ T$ \ ($n$ times) if $n \geq 1$, and $T^0 := id$. It is immediate to check that $T^n$ is measurable for every $n \in \mathbb{N}$.
If, in addition, $T$ is bi-measurable (i.e. $T: X \mapsto X$ is measurable,
invertible, and its inverse $T^{-1}: X \mapsto X$ is measurable), then \em the discrete dynamical system, \em generated by forward and backward iterates of $T$, is defined as
the map that assigns to each $n \in \mathbb{Z}$ the transformation
$T^n$, where $T^{-1}$ is the inverse of $T$ and $T^{n} := (T^{-1})^{|n|} := T^{-1} \circ T^{-1} \circ \ldots \circ T^{-1}$ \ ($|n|$ times) if $n \leq -1$.
\end{definition}
Note that $T^{n+m} = T^n \circ T^m \ \forall \ n, m \in \mathbb{N}$, and, when $T$ is bi-measurable, this identity holds for all $n, m \in \mathbb{Z}$. This
algebraic property (when $T$ is bi-measurable) is called the \em group property. \em It means that the
dynamical system is an action of the group of integers on the space $X$.
\begin{definition} \em \index{orbit}
The \em positive or future orbit $o^+(x)$ \em through the point $x
\in X$, whose initial state is $x$, is the sequence
$$o^+(x):= \{T^n(x)\}_{n \in \mathbb{N}}.$$ If $T$ is bi-measurable, the negative or past orbit is the sequence $$o^-(x):= \{T^{-n}(x)\}_{n \in \mathbb{N}}$$ and the
\em bilateral orbit $o(x)$ \em (or simply the orbit, when $T$ is bi-measurable) is the bi-infinite sequence
$$o(x) := \{T^n(x)\}_{n \in \mathbb{Z}}.$$
When $T$ is measurable but not bi-measurable, we simply call the positive or future orbit the orbit.
\end{definition}
Note that $T^{n+m}(x) = T^n(T^m(x))$, so the
iterate $T^m(x)$ is the initial state of the orbit through $x$
obtained by shifting the time origin to what was previously instant $m$.
\begin{definition} \em \index{point! fixed} \index{point! periodic}
A \em fixed point \em (or periodic point of period 1) is a point $x_0$ such that $T(x_0) = x_0$.
A \em periodic point \em is a point $x_0$ for which there exists $p \geq
1$ such that $T^p(x_0) = x_0$. The period is the minimum $p \geq 1$
satisfying this condition. Note, using the group property,
that if a point is periodic
of period $p$, then its orbit consists of exactly
$p$ points.
\end{definition}
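These notions are easy to exercise computationally on a finite state space. The sketch below (an illustration of ours, not part of the text) builds the forward orbit $o^+(x)$ and finds the minimal period of a point.

```python
def orbit(T, x0, n):
    """First n+1 states of the forward orbit o^+(x0) = (x0, T(x0), T^2(x0), ...)."""
    xs = [x0]
    for _ in range(n):
        xs.append(T(xs[-1]))
    return xs

def period(T, x0, max_iter=1000):
    """Minimal p >= 1 with T^p(x0) == x0, or None if not found within max_iter."""
    x = x0
    for p in range(1, max_iter + 1):
        x = T(x)
        if x == x0:
            return p
    return None
```

For example, on $\mathbb{Z}/5\mathbb{Z}$ with $T(x) = x + 2 \pmod 5$, every point is periodic of period 5, and its orbit consists of exactly 5 points, as the definition states.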
\begin{definition} \em \index{measure! invariant}
Let $T$ be measurable on a space $(X, {\mathcal A})$. A measure $\mu$ is
said to be \em invariant under $T$ \em (one also says that $T$ \em preserves $\mu$, \em or
that $T$ is an \em automorphism of the measure space $(X,
{\mathcal A}, \mu)$\em) if
$$\mu (T^{-1}(A)) = \mu (A) \; \; \forall A \in {\mathcal A}.$$
\end{definition}
Note that invariant probability measures may fail to exist for a given
measurable transformation $T: X \mapsto X$, as shown in Example \ref{ejemplonoexistenmedidasinvariantes}. However:
\begin{theorem} \label{medidasinvariantes} \index{measure! invariant! existence theorem} \index{theorem! existence of! invariant measures} \label{teoremaExistenciaMedInvariantes} {\bf Existence of invariant measures.}
Let $X$ be a compact metric space, and let $T: X \mapsto X$ be continuous. Then
there exist
\em(usually infinitely many) \em probability measures on the Borel sigma-algebra that are
invariant under $T$. \em
\end{theorem}
We will prove this theorem in Section \ref{seccionpruebateoexistmedinvariantes} below.
\begin{exercise}\em
Let $(X, {\mathcal A}, \mu) $ and $ (Y, {\mathcal B}, \nu)$ be two measure spaces and let $T: X \mapsto Y$ be measurable. Define $T^* \mu$ as the measure on $(Y, {\mathcal B})$ such that $(T^* \mu)(B):= \mu(T^{-1}(B))$ for all $B \in {\mathcal B}$; thus the condition $T^*\mu = \nu$ means that $\mu(T^{-1}(B)) = \nu (B)$ for all $B \in {\mathcal B}$.
(a) Find an example in which $T^*\mu = \nu$ but $\mu(A) \neq \nu (T(A))$ for some $A \in {\mathcal A}$ such that $T(A) \in {\mathcal B}$.
(b) Find an example in which $T$ is measurable and satisfies $T^*\mu= \nu$, but $T^{-1}$ is not measurable.
(c) Prove that if $T$ is measurable and invertible with measurable inverse $T^{-1}$, then $T^* \mu = \nu$ if and only if $(T^{-1})^* \nu = \mu$. When all these conditions hold, one says that $T$ (and therefore also $T^{-1}$) is an isomorphism of measure spaces.
\end{exercise}
\begin{example} \em \label{ejemplonoexistenmedidasinvariantes} \index{measure! invariant! example of non-existence}
Let $T: [0,1] \mapsto [0,1]$ be defined by $T(x) = x/2 $ if $x \neq
0$ and $T(0) = 1 \neq 0$. We claim:
\em there exists no
probability measure invariant under $T$.\em
\end{example}
{\em Proof: } Suppose such a measure existed, and call it $\mu$. Consider
the partition of the interval $(0,1]$ into the pairwise disjoint
subintervals $A_n= (1/2^{n+1}, 1/2^n]$ for $n \in \mathbb{N}$. We have
$T^{-1}(A_{n}) = A_{n-1}$ for all $n \geq 1$. Since $\mu$
is $T$-invariant, it follows that $\mu (A_n) = \mu (A_0) \; \; \forall
n \geq 0$. Hence $\mu ((0,1]) = \sum _{n \geq 0 } \mu (A_n) =
\sum _{n \geq 0 } \mu (A_0) \leq 1$. Therefore $\mu (A_0) = 0$, whence
$\mu ((0,1])= 0$ and $\mu (\{0\}) = 1$. Then $\mu
(T^{-1}(\{0\})) = 1$, which is absurd because $T^{-1}(\{0\}) =
\emptyset$.
\hfill $\Box$
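The mechanism of the proof can also be observed numerically: under this $T$, every point of $(0,1]$ satisfies $T^n(x) = x/2^n$, so any fixed interval $(\varepsilon, 1]$ is eventually emptied of sample points, which is why no mass can persist there. A small illustrative check (of ours, not part of the text):

```python
def T(x):
    """The map of the example: T(x) = x/2 for x != 0, T(0) = 1."""
    return 1.0 if x == 0.0 else x / 2.0

def iterate(x, n):
    for _ in range(n):
        x = T(x)
    return x

# Sample points in (0, 1]; after n iterates T^n(x) = x / 2^n <= 2^{-n},
# so every sample leaves (eps, 1] for any fixed eps > 2^{-n}.
samples = [i / 100.0 for i in range(1, 101)]
after_20 = [iterate(x, 20) for x in samples]
```

No sample ever reaches 0, yet all of them end up arbitrarily close to it, matching the proof's conclusion that any invariant measure would be forced onto $\{0\}$, where invariance fails.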
\begin{exercise}\em
Prove that if $T:X \mapsto X$ is measurable, then $T$ preserves
the measure $\mu$ if and only if for every $f \in L^1 (\mu )$ one has
$$\int f \circ T \, d \mu = \int f \, d \mu. $$
\end{exercise}
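As an illustration of this criterion (not a solution of the exercise), one can check numerically that the doubling map $T(x) = 2x \bmod 1$ preserves Lebesgue measure on $[0,1)$: for a smooth observable, Riemann sums of $\int f\circ T\,d\mu$ and $\int f\,d\mu$ agree. The map, the observable, and the grid size below are our own choices.

```python
import math

def T(x):
    """Doubling map on [0, 1), which preserves Lebesgue measure."""
    return (2.0 * x) % 1.0

def f(x):
    """Smooth test observable."""
    return math.cos(2.0 * math.pi * x)

N = 20000
pts = [(i + 0.5) / N for i in range(N)]   # midpoint grid on [0, 1)
int_f  = sum(f(x) for x in pts) / N       # approximates the integral of f
int_fT = sum(f(T(x)) for x in pts) / N    # approximates the integral of f o T
```

Both sums vanish up to discretisation error, consistent with the invariance of Lebesgue measure under the doubling map; the exercise asks for the general proof of this equivalence.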
\begin{proposition}\label{proposicionSubalgebra}
Let $T: X \mapsto X$ be a measurable transformation on a measurable
space $(X, {\mathcal A})$. If $\mu$ is a finite or $\sigma$-finite measure and
${\mathcal A}_0$ is an algebra generating ${\mathcal A}$ such that $\mu
(T^{-1} (A)) = \mu (A) \; \forall A \in {\mathcal A}_0$, then $\mu
$ is invariant under $T$.
\end{proposition}
{\em Proof: } Let $\nu (A) = \mu (T^{-1}(A))$ for every
$A \in {\mathcal A }$. On the subalgebra ${\mathcal A}_0$ the premeasure
$\nu $ coincides with the premeasure $\mu$ (both, restricted to the
subalgebra, are premeasures). Since a premeasure defined on an algebra admits a unique extension to the generated sigma-algebra,
we conclude that $\mu = \nu$ on $\mathcal A$. \hfill $\Box$
\begin{corollary} \label{corolarioMedidaInvarianteEnAlgebraGeneradora}
If $T: \mathbb{R}^k \mapsto \mathbb{R}^k$ is a Borel measurable transformation and
$\mu$ is a $\sigma$-finite measure such that $\mu
(T^{-1}(A)) = \mu ( A )$ for every set $A$ that is a finite union of
rectangles of $\mathbb{R}^k$, then $\mu$ is invariant under $T$ on
the whole Borel $\sigma$-algebra.
\end{corollary}
\subsection{Proof of existence of invariant measures} \label{seccionpruebateoexistmedinvariantes}
In this section $X$ denotes a compact metric space and $T: X \mapsto X$ a continuous transformation, unless stated otherwise.
We introduce some definitions and results from Functional Analysis:
{\bf Notation: the space of continuous functions and its dual.} \index{$C^0(X, \mathbb{R})$ space of! real continuous functions}
We denote by $C^0(X, \mathbb{R})$ the space of continuous functions $\psi: X \mapsto \mathbb{R}$ with the topology of uniform convergence on $X$ (induced by the supremum norm). That is, $$\mbox{dist} {(\psi_1, \psi_2)} := \|\psi_1 - \psi_2\|_0,$$
where we denote
$$\|\psi\|_0 : = \max_{x \in X} | \psi (x)| \ \ \forall \ \psi \in C^0(X, \mathbb{R}).$$
We denote by $C^0(X, [0,1])$ the subspace of continuous functions $\psi: X \mapsto [0,1]$, taking values that are neither negative nor greater than 1, with the supremum norm defined above (the maximum norm in our case, since $X$ is a compact metric space).
The space $C^0(X, [0,1])$ is metric, bounded and closed. Indeed, the limit of a uniformly convergent sequence of continuous functions in $C^0(X, [0,1])$ is continuous, and belongs to $C^0(X, [0,1])$. Moreover $C^0(X, [0,1])$ has a countable base of open sets, and therefore there exists a countable subset $\{\psi_i\}_{i \in \mathbb{N}}$ dense in $C^0(X, [0,1])$ (see for instance \cite[Proposition 4.40]{Folland}).
We denote by $C^0(X, \mathbb{R})^*$ the dual of $C^0(X, \mathbb{R})$; that is, the set (which we will later endow with a suitable topology) of all linear operators $$\Lambda: C^0(X, \mathbb{R}) \mapsto \mathbb{R}.$$
\begin{definition} {\bf The space ${\mathcal M}$ of probability measures and the weak$^*$ topology} \em \index{${\mathcal M}$ (space of probability measures)} \index{space! of probability measures} \index{weak star topology} \label{definicionTopologiaDebil*}
Let $X$ be a compact metric space. Let ${\mathcal M}$ be the \em space of all probability measures. \em
On ${\mathcal M}$ we introduce the following topology, called the \em weak$^*$ topology: \em
If $\mu_n, \mu \in {\mathcal M}$, we say that the sequence $\{\mu_n\}$ converges to $\mu$ in the weak$^*$ topology, that is,
$$\lim_{n \rightarrow + \infty} \mu_n = \mu \ \ \mbox{ in } {\mathcal M},$$
when:
$$\lim_{n \rightarrow + \infty} \int \psi \, d \mu_n = \int \psi \, d \mu \ \ \mbox{ in } {\mathbb{R}} \ \ \forall \ \psi \in C^0(X, \mathbb{R}).$$
\end{definition}
\begin{remark} \em By definition of convergence of a sequence of probability measures in the weak$^*$ topology, the sequence converges to the measure
$\mu$ if and only if,
for each continuous function, the sequence of integrals converges
to the integral with respect to $\mu$. It is false that for every Borel set $A$
the sequence of measures of $A$ converges to $\mu (A)$. Indeed, see the following exercise:
\end{remark}
\begin{exercise}\em
Let $X = [0,1] $. For each $n \geq 0 $ let $\mu _n $ be the
Dirac delta measure concentrated at the point $ 1/2^n $.
a)
Prove that $\mu = \lim _{n \rightarrow +\infty} \mu _n $ exists in the weak$^*$ topology and find the limit measure $\mu$.
b)
Find a Borel set $A \subset [0,1] $ such that
$\lim _{n \rightarrow +\infty} \mu _n (A) $ does not exist.
Hint: $ A = \{ 1/2^{2j}: j \geq 0 \} $.
c)
Find a Borel set $B \subset [0,1] $ such that
$\lim _{n \rightarrow +\infty} (\mu _n (B))$ exists but differs from $\mu (B)$.
\end{exercise}
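The phenomenon in parts (a) and (b) can be illustrated numerically. In the sketch below (an illustration of ours, not part of the exercise), the integral of a continuous $\psi$ against $\mu_n$ is just $\psi(1/2^n)$, which converges to $\psi(0)$, while $\mu_n(A)$ for $A = \{1/2^{2j} : j \geq 0\}$ oscillates between 1 and 0:

```python
# A numerical illustration of the exercise: mu_n is the Dirac delta at
# 1/2^n.  For continuous psi the integrals converge to psi(0) (so
# mu_n -> delta_0 weakly*), yet mu_n(A) has no limit for the Borel set
# A = {1/2^(2j) : j >= 0}.

import math

def integral_mu_n(psi, n):
    # integral of psi with respect to the Dirac delta at 1/2^n
    return psi(2.0 ** -n)

def mu_n_of_A(n):
    # mu_n(A) for A = {1/2^(2j) : j >= 0}: equals 1 iff n is even
    return 1.0 if n % 2 == 0 else 0.0

psi = math.cos
values = [integral_mu_n(psi, n) for n in range(60)]
assert abs(values[-1] - psi(0.0)) < 1e-12                        # -> psi(0)
assert [mu_n_of_A(n) for n in range(4)] == [1.0, 0.0, 1.0, 0.0]  # no limit
```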
\begin{remark} \em Observe that, for each $\mu \in {\mathcal M}$, the operator $\Lambda_{\mu}: C^0(X, \mathbb{R}) \mapsto \mathbb{R}$ defined by
$$\Lambda_{\mu} (\psi) := \int \psi \, d \mu $$
is linear, positive (that is, $\Lambda_{\mu}(\psi) \geq 0 $ if $\psi \geq 0$) and bounded (that is, there exists $k >0$ such that $|\Lambda_{\mu} (\psi)| \leq k $ for every $\psi\in C^0(X, \mathbb{R})$ with $\|\psi\|_0\leq 1$). Moreover $\Lambda_{\mu}(\psi)=1$ if $\psi: X \mapsto \mathbb{R}$ is the constant function equal to 1.
The following theorem establishes the converse of the property observed above, and is a classical result of Abstract Measure Theory and Functional Analysis:
\end{remark}
{\bf Riesz Representation Theorem} \index{theorem! Riesz Representation}
Let $X$ be a compact metric space.
\em For every linear operator $\Lambda: C^0(X, \mathbb{R}) \mapsto \mathbb{R}$ that is positive and bounded, there exists a unique finite measure $\mu $ \em (Borel and positive) \em such that \em
$$\Lambda (\psi) = \int \psi \, d \mu \ \ \forall \ \psi \in C^0(X, \mathbb{R})$$
\em Moreover, if $\Lambda (1) = 1$ then $\mu $ is a probability measure. That is, $\mu(X)= 1$, or, in our notation, $\mu \in {\mathcal M}$. \em
\vspace{.3cm}
A proof of the Riesz Representation Theorem can be found, for instance, in \cite[Theorem 2.3.1]{Rudin}.
By the Riesz Representation Theorem, the space ${\mathcal M}$ can be identified with the space of those linear operators on $C^0(X, \mathbb{R})$ that are positive, bounded, and take the value 1 on the constant function equal to 1. (Recall that the space of linear operators on $C^0(X, \mathbb{R})$ is called the dual of $C^0(X, \mathbb{R})$ and is denoted by $C^0(X, \mathbb{R})^*$.) In Functional Analysis several topologies are defined on the dual of a function space. One of them is the so-called \em weak$^*$ topology, \em which is the topology of \em pointwise \em convergence, defined as follows:
$$\lim_{n \rightarrow + \infty} \Lambda_n = \Lambda \mbox{ in } C^0(X, \mathbb{R})^*$$
if and only if
$$ \lim_{n \rightarrow + \infty} \Lambda_n \psi = \Lambda \psi \mbox{ in } \mathbb{R} \ \ \forall \ \psi \in C^0(X, \mathbb{R}). $$
From the definitions above, we deduce that the weak star topology on ${\mathcal M}$ is the topology induced by the weak star topology (that of pointwise convergence) on the dual $C^0(X, \mathbb{R})^*$ of the space of continuous functions. The weak$^*$ topology can be defined as the product topology of the topologies defined by the convergence of the numerical values $\Lambda_n(\psi)$ taken by the operators $\Lambda_n$ for each fixed $\psi \in C^0(X, \mathbb{R})$.
The following theorem is classical in Functional Analysis, and is a consequence of Tychonoff's theorem (see for instance \cite[Theorem 4.43]{Folland}), which establishes, under certain hypotheses, the compactness of the product topology:
\begin{theorem}\label{theoremTichonov} \index{theorem!Tychonoff}
{\bf (Corollary of Tychonoff's Theorem)} \label{teoremaCompacidadEspacioProbabilidades0}
If $X$ is a compact metric space, then for every constant $k >0$ the subset of linear operators bounded by $k$ is compact in the dual space $C^0(X, \mathbb{R})^*$ with the weak star topology.
\end{theorem}
As a particular case, observe that the space ${\mathcal M}$ of Borel probability measures on ${X}$ (via the Riesz Representation Theorem) is a \em closed \em subset of the space of bounded linear operators with the weak star topology (that is, if $\Lambda_n (1)= 1$ and $\Lambda_n \rightarrow \Lambda$, then $\Lambda (1) = 1$).
In more detail:
\begin{theorem} {\bf Compactness and metrizability of the space of probability measures}
\label{teoremaCompacidadEspacioProbabilidades} \index{space! of probability measures} \index{theorem! compactness of ${\mathcal M}$} \index{theorem! metrizability of ${\mathcal M}$}
\index{metric on ${\mathcal M}$}
Let $X$ be a compact metric space. Let ${\mathcal M}^{1} $ be the space of all measures $\mu$ \em (Borel, positive and finite)\em \ such that $\mu(X) \leq 1$. Endow ${\mathcal M}^{1} $ with the weak$^*$ topology, defined in \em \ref{definicionTopologiaDebil*}. \em
Then:
{\bf (a)} ${\mathcal M}^{1} $ is compact.
{\bf (b)} ${\mathcal M}^{1}$ is metrizable. \em (That is, there exists a metric $\mbox{dist}$ inducing the weak$^*$ topology; this is a distance on ${\mathcal M}^{1}\times {\mathcal M}^{1}$ such that $$\lim_n \mbox{dist}(\mu_n, \mu) = 0 \mbox{ if and only if } \lim_n\mu_n = \mu$$ in the weak$^*$ topology.) \em
{\bf (c)} If $\{\psi_i\}_{i \in \mathbb{N}} \subset C^0(X, [0,1])$ is a countable dense set of functions, then the following metric induces the weak$^*$ topology on ${\mathcal M}^1$:
\begin{equation}\label{equationDistanciaMedidas} \mbox{dist}(\mu, \nu) := \sum_{i=1}^{+ \infty} \frac{1}{2^i} \ \left| \int \psi_i \, d \mu \ - \ \int \psi_i \, d \nu \right| \ \ \forall \ \mu, \nu \in {\mathcal M}^1. \end{equation}
{\bf (d)} ${\mathcal M}^{1}$ is sequentially compact. \em (That is, every sequence of measures in ${\mathcal M}^{1}$ has a convergent subsequence.)
\vspace{.3cm}
Consequence: since ${\mathcal M} = \{ \mu \in {\mathcal M}^1: \mu(X)= 1\}$ is closed in ${\mathcal M}^1$, it follows that
{\bf (e)} \em the space of probability measures ${\mathcal M}$ with the weak$^*$ topology is compact, metrizable and sequentially compact. \em
\end{theorem}
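The metric (\ref{equationDistanciaMedidas}) is easy to evaluate for discrete measures if the series is truncated. The Python sketch below is only an illustration: the finite family $\psi_i(x) = x^i$ on $X = [0,1]$ used here is \em not \em dense in $C^0(X,[0,1])$, so the truncated sum merely mimics the shape of the formula:

```python
# Sketch (with assumptions): truncate the series defining dist(mu, nu) to
# finitely many test functions psi_i.  The family psi_i(x) = x**i is NOT
# dense in C^0([0,1],[0,1]); it only illustrates the formula for discrete
# measures mu = sum w_j delta_{x_j}, represented as (weight, point) lists.

def integral(psi, measure):
    return sum(w * psi(x) for w, x in measure)

def dist(mu, nu, n_terms=30):
    total = 0.0
    for i in range(1, n_terms + 1):
        psi = lambda x, i=i: x ** i          # i-th (hypothetical) test function
        total += abs(integral(psi, mu) - integral(psi, nu)) / 2 ** i
    return total

delta = lambda p: [(1.0, p)]                 # Dirac delta at p
assert dist(delta(0.5), delta(0.5)) == 0.0
assert dist(delta(0.25), delta(0.5)) == dist(delta(0.5), delta(0.25))   # symmetry
assert dist(delta(2.0 ** -8), delta(0.0)) < dist(delta(2.0 ** -1), delta(0.0))
```

The last assertion reflects part (c): Dirac deltas at points approaching 0 get closer to $\delta_0$ in this metric, consistent with weak$^*$ convergence.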
\begin{exercise}\em
Prove Theorem \ref{teoremaCompacidadEspacioProbabilidades} as a consequence of Corollary \ref{teoremaCompacidadEspacioProbabilidades0}, identifying the space ${\mathcal M}^1$ with a subset of the dual of $C^0(X, \mathbb{R})$ via the Riesz Representation Theorem.
\end{exercise}
\begin{proposition}\label{coincidencia}
Let $\{ \psi_i: i \geq 1 \}$ be a countable dense subset of
$C^0(X, [0,1])$.
Two measures $\mu _1 $ and $\mu _2 $ in ${\mathcal M}^1(X)$ coincide
if for every $i \geq 1 $
$$\int \psi_i \, d \mu _1 = \int \psi_i \, d \mu _2 $$
\end{proposition}
{\em Proof: }
By the uniqueness of the measure in the Riesz theorem, it suffices to prove that
for every $\psi \in C^0(X, \mathbb{R})$:
\begin{eqnarray} \label{denso}
\int \psi \, d \mu _1 = \int \psi \, d \mu _2
\end{eqnarray}
Equality (\ref{denso}) obviously holds for $\psi$ identically zero.
If $\psi$ is not identically zero, it suffices to prove (\ref{denso}) for
$\psi / \|\psi\|_0 $, where $\|\psi\|_0 = \max_{x \in X} |\psi(x)|$. So assume $\|\psi \|_0 = 1$. Any real function $\psi$ can be written as $\psi = \psi^+ - \psi^-$, where $\psi^+ = \max\{\psi, 0\}$ and $ \psi^- = - \min\{\psi, 0\}$. Observe that $\psi^+, \psi^- \in C^0(X, [0,1])$. If we prove equality (\ref{denso}) for $\psi^+ $ and $\psi^-$, then it also holds for $\psi$. It therefore suffices to prove equality (\ref{denso}) for every $\psi \in C^0(X, [0,1])$. By the density
of the functions $\{\psi_i\}$ in $C^0(X, [0,1]) $, there exists a sequence $\psi_{i_n}$
converging uniformly (that is, in the supremum norm of $C^0(X, \mathbb{R})$)
to the function $\psi$. Therefore it also converges pointwise
and is uniformly bounded by 1. Each $\psi_{i_n}$ satisfies
equality (\ref{denso}). Hence, by the dominated convergence theorem, $\psi$ also
satisfies (\ref{denso}).
\hfill $\Box$
\begin{remark} \em
Let $X$ be a nonempty compact metric space, let ${\mathcal B}$ be the Borel $\sigma$-algebra, and let ${\mathcal M}$ be the space of all Borel probability measures on $X$ with the weak$^*$ topology.
The space ${\mathcal M}$ is nonempty. For example, if we choose a point $x \in X$ then $ \delta_x \in {\mathcal M}$, where $\delta_x$ is the Dirac delta probability measure supported at the point $x \in X$. That is, $\delta_x$ is the probability measure that assigns to each Borel set $B \subset X$ the value $\delta_x(B) = 1$ if $x \in B$, and $\delta_x (B)= 0$ if $ x \not \in B$.
\end{remark}
Now let us add a dynamics on $X$:
\begin{definition} {\bf The pull back $T^*: {\mathcal M} \mapsto {\mathcal M}$} \index{$T^*$ pull back} \index{pull back}
Let $(X, {\mathcal A})$ be a measurable space, let $T: X \mapsto X$ be measurable, and let ${\mathcal M}$ be the set of probability measures on $(X, {\mathcal A})$. We define the following operator on ${\mathcal M}$, called the pull back of the map $T$:
$$T^*:{\mathcal M} \mapsto {\mathcal M} \ \ \ \ \ \ \ \ (T^* \mu)(B) = \mu (T^{-1}(B)) \ \ \forall \ \mu \in {\mathcal M} \ \ \forall\ B \in {\mathcal A}.$$
\end{definition}
It is immediate to verify that $\mu$ is invariant under $T$ if and only if $T^* \mu = \mu$; that is, the $T$-invariant measures are the fixed points of $T^*$ in the space ${\mathcal M}$.
\begin{exercise}\em \label{ejercicioInvarianciaDeMedidasIntegral}
Prove that for every $\psi \in L^1(\mu)$:
$$\int \psi \, d T^* \mu = \int \psi \circ T \, d \mu.$$
Hint: check it first for the characteristic functions $\chi_B$ of Borel sets, then for linear combinations of characteristic functions (simple functions), and then for nonnegative measurable functions, using the Monotone Convergence Theorem. Finally prove the equality for every $\psi \in L^1(\mu)$ by splitting $\psi$ into its real and imaginary parts, and each real $\psi$ into its positive and negative parts.
\end{exercise}
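For discrete measures the identity of the exercise is transparent, since the pull back of $\mu = \sum_j w_j \delta_{x_j}$ is $\sum_j w_j \delta_{T(x_j)}$. The following Python sketch (the doubling map and the sample weights are our choices, not from the text) checks the equality numerically:

```python
# A quick check (a sketch for discrete measures, not a proof) of the
# identity  int psi d(T^* mu) = int (psi o T) d mu  from the exercise.
# For mu = sum w_j delta_{x_j}, the pull back T^* mu is sum w_j delta_{T(x_j)}.

import math

def T(x):
    return (2 * x) % 1.0        # doubling map on [0,1), just an example

mu = [(0.2, 0.1), (0.3, 0.4), (0.5, 0.7)]   # (weight, point) pairs, mass 1
T_star_mu = [(w, T(x)) for w, x in mu]      # pull back of mu

def integral(psi, measure):
    return sum(w * psi(x) for w, x in measure)

psi = lambda x: math.sin(2 * math.pi * x)
lhs = integral(psi, T_star_mu)              # int psi d(T^* mu)
rhs = integral(lambda x: psi(T(x)), mu)     # int (psi o T) d mu
assert abs(lhs - rhs) < 1e-12
```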
\begin{proposition} \label{proposicionContinuidadT*} Let $X$ be a compact metric space, and ${\mathcal M}$ the space of Borel probability measures on $X$ with the weak$^*$ topology. \index{continuity! of the pull back operator}
If $T: X \mapsto X$ is continuous, then $T^*: {\mathcal M} \mapsto {\mathcal M}$ is continuous on ${\mathcal M}$.
\end{proposition}
{\em Proof: }
It suffices to check that if $\lim_n \mu_n = \mu $ in ${\mathcal M}$ then $\lim_n T^* \mu_n = T^* \mu$. Indeed, since $\lim_n \mu_n = \mu$, for every $\psi \in C^0(X, \mathbb{R}) $:
$$\lim_{n \rightarrow + \infty} \int \psi \, d \mu_n = \int \psi \, d \mu$$
In particular, since $T: X \mapsto X$ is continuous, the equality above holds for $\psi \circ T$. Hence we deduce that
$$\lim_{n \rightarrow + \infty} \int \psi \circ T \, d \mu_n = \int \psi \circ T \, d \mu.$$
Using the result of Exercise \ref{ejercicioInvarianciaDeMedidasIntegral}, we deduce that
$$\lim_{n \rightarrow + \infty} \int \psi \, d T^* \mu_n = \int \psi \, d T^* \mu \ \ \forall \ \psi \in C^0(X, \mathbb{R}).$$
Therefore the sequence of measures $T^* \mu_n$ converges, in the weak$^*$ topology of ${\mathcal M}$, to the measure $T^* \mu$, as we wanted to prove.
\hfill $\Box$
We now prove Theorem \ref{teoremaExistenciaMedInvariantes}, using the so-called \em
Bogliubov-Krylov procedure \em \cite{Bogliubov-Krylov}. \index{Bogliubov-Krylov! procedure} This procedure starts from any probability measure on the space $X$, takes arithmetic averages of the iterates of this measure under the pull back operator $T^*$ up to time $n$, and finally takes a subsequence of those averages converging in the weak star topology. One obtains measures invariant under $T$ (under the hypothesis that $T: X \mapsto X$ is continuous). The Bogliubov-Krylov procedure thus allows one to \lq\lq manufacture\rq\rq \ invariant probability measures, using any probability measure as a \lq\lq seed\rq\rq.
\vspace{.5cm}
\newpage
{\bf Proof of Theorem \ref{teoremaExistenciaMedInvariantes} on the existence of invariant measures:}
{\em Proof: }
Choose any Borel probability measure $\rho \in {\mathcal M}$. For each $1 \leq n \in \mathbb{N}$, construct the following probability measure:
$$\mu_n = \frac{1}{n} \sum_{j= 0}^{n-1} (T^*)^j \rho$$
It is immediate to prove, from the definition of the operator $T^*$ (which satisfies $T^* \mu (B) = \mu(T^{-1}(B))$ for every Borel set $B$), that $$T^* \mu_n = \frac{1}{n} \sum_{j= 0}^{n-1} (T^*)^{j+1} \rho$$
Since the space ${\mathcal M}$ of Borel probability measures is sequentially compact in the weak$^*$ topology, there exists a subsequence $\{\mu_{n_i}\}_{i \in \mathbb{N}}$ (with $ \lim_i n_i = + \infty$) converging in ${\mathcal M}$. Call its limit $\mu \in {\mathcal M}$; that is:
$$\mu = \lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i-1} (T^*)^j \rho$$
It now suffices to prove that $T^* \mu = \mu$, that is, $\mu$ is a $T$-invariant probability measure.
Using the continuity of the operator $T^*$ (Proposition \ref{proposicionContinuidadT*}), we deduce that
$$T^* \mu = \lim_{i \rightarrow + \infty} T^* \left (\frac{1}{n_i} \sum_{j= 0}^{n_i-1} (T^*)^j \rho \right ) = \lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i-1} (T^*)^{j+1} \rho$$
Integrating each $\psi \in C^0(X, \mathbb{R})$ with respect to the measure $T^* \mu$, and then with respect to the measure $\mu$, applying the definition of the limit of the sequence of measures $\mu_{n_i}$ in the weak$^*$ topology, and the continuity of the operator $T^*$ on ${\cal M}$, we obtain:
$$\int \psi \, dT^* \mu - \int \psi \, d\mu = \lim_{i \rightarrow + \infty} \left ( \int \psi \, d \, T^* \mu_{ n_i} - \int \psi \, d \mu_{ n_i}\right )= $$
$$ \lim_{i \rightarrow + \infty} \frac{1}{n_i} \left ( \int \psi \, d \, (T^*)^{n_i}\rho - \int \psi \, d \rho \right ) $$
Then $$\left |\int \psi \, d\, T^* \mu - \int \psi \, d \mu \right | \leq \lim _{i \rightarrow + \infty} \frac{1}{n_i} \|\psi\|_0 \left (\rho(X) + \rho(T^{-n_i}(X)) \right ) =$$ $$ \lim_{i \rightarrow + \infty} \frac{2}{n_i} \|\psi\|_0 = 0.$$
In the last equality we used that $\rho$ is a probability measure. We obtained
$$\left |\int \psi \, d \, T^* \mu - \int \psi \, d \mu \right | = 0 \ \ \forall \ \psi \in C^0(X, \mathbb{R}).$$ Therefore the linear operators $\psi \mapsto \int \psi \, d T^* \mu$ and $\psi \mapsto \int \psi \, d \mu$ are the same. By the uniqueness of the measure in the Riesz theorem we deduce that $T^* \mu = \mu$, as we wanted to prove.
\hfill $\Box$
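When the seed is $\rho = \delta_x$, the averages $\mu_n$ are the empirical measures of the orbit of $x$, so $\int \psi \, d\mu_n = \frac{1}{n}\sum_{j=0}^{n-1}\psi(T^j x)$. The following Python sketch runs the procedure for an irrational rotation (our choice of example; for this map the averages are known to converge to Lebesgue measure by equidistribution):

```python
# A numerical sketch of the Bogliubov-Krylov procedure with seed delta_x:
# int psi d mu_n = (1/n) sum_{j<n} psi(T^j x).  We use the irrational
# rotation T(x) = x + alpha mod 1 (an assumption for illustration), whose
# averages converge to Lebesgue measure, so they tend to int_0^1 psi = 0
# for psi(x) = sin(2 pi x).

import math

alpha = math.sqrt(2) - 1            # irrational rotation number
def T(x):
    return (x + alpha) % 1.0

def average(psi, x, n):
    """(1/n) * sum of psi along the first n orbit points of x."""
    s, total = x, 0.0
    for _ in range(n):
        total += psi(s)
        s = T(s)
    return total / n

psi = lambda x: math.sin(2 * math.pi * x)   # integral over [0,1] is 0
a1 = average(psi, 0.3, 1000)
a2 = average(psi, 0.3, 100000)
assert abs(a2) < 0.01                        # averages approach 0
```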
\begin{exercise}\em
Let $(X, {\mathcal A})$ be a measurable space and let ${\mathcal M}$ be the set of all probability measures on $(X, {\mathcal A})$.
Suppose there exists $\mu \in {\mathcal M}(X)$ such that $T^*\mu (A)
\leq 2 \mu (A) $ for every measurable set $A\subset X $.
a)
Prove that $ 2 \mu - T^* \mu \in {\mathcal M}(X) $.
b)
If $X$ is a compact metric space, ${\mathcal A}$ is the Borel $\sigma$-algebra and $T$ is continuous, prove that given $\mu _0 \in {\mathcal M}(X)$ there exists
$\mu \in {\mathcal M}(X)$ such that $2 \mu - T^* \mu = \mu _0 $. Hint:
for every $\mu \in {\mathcal M}(X) $ define $G(\mu ) = 1/2 \cdot (T^* \mu +
\mu _0 ), \; \; \; \mu _n = 1/n \cdot \sum _{j=0} ^{n-1} G^j(\mu _0 ) $
and take a convergent subsequence in ${\mathcal M}(X) $. Prove that $G$ is continuous on ${\mathcal M}(X)$. Observe that $G$ commutes with finite \em convex \em linear combinations of probability measures. That is, if $\nu = \sum_{i= 1}^k \lambda_i \nu_i$, where $\nu_i \in {\mathcal M}(X)$, $ \ \ 0 \leq \lambda_i \leq 1$ and $\sum_{i= 1}^k \lambda_i = 1$, then $G(\nu) = \sum_{i= 1}^k \lambda_ i G(\nu_i) $.
\end{exercise}
\subsection{Examples and hyperbolic periodic points}
\index{point! periodic! hyperbolic}
\begin{example} \em \index{rotation! irrational} \index{rotation! rational}
Consider on $S^1$ (the circle) the rotation $T(e^{2 \pi i x}) = e^{(2 \pi i) (x + a)}
$, where $a$ is a real constant. If $a$ is rational, $T$ is
called a \em rational rotation \em of the circle, and if $a$ is
irrational, $T$ is called an \em irrational rotation. \em Through the
projection $\Pi: \mathbb{R} \mapsto S^1$ given by $\Pi (x) = e ^{2 \pi i
x}$, the Lebesgue measure $m$ on $\mathbb{R}$ induces a measure
$m_{\sim}$ on $S^1$ given by $m_{\sim }(A) = m (\Pi ^{-1} (A) \cap
[0,1])$. This measure $m_{\sim }$ is called the Lebesgue measure on the
circle. Since $m$ on $\mathbb{R}$ is invariant under translations, it is easy
to prove that $m _{\sim} $ on the circle $S^1$ is invariant under rotations.
\end{example}
\begin{exercise}\em
Prove that for a rational rotation of the circle all
points are periodic with the same period. Prove that an irrational
rotation of the circle has no periodic points. Prove that the
Lebesgue measure on the circle is invariant under rotations.
\end{exercise}
\begin{remark} \em
Although it is not immediate, one can prove that \em
all the orbits
of an irrational rotation of the circle are dense. \em
(We will prove this in \S\ref{remarkRotacionIrracionalOrbitasDensas}.)
\end{remark}
\begin{exercise}\em {\bf Tent map} \index{tent map}
\em Consider the interval $[0,1]$ endowed with the Borel $\sigma$-algebra.
Let $ T:[0,1] \mapsto [0,1]$ be given by $T(x) = 2 x$ if $x \in
[0,1/2]$ and $T(x) = 2-2x$ if $x \in [1/2,1]$. Prove that $T$ preserves
the Lebesgue measure on the interval. (Hint: sketch the graph of $T$ and
prove that the preimage of an interval $I$ has the same measure
as $I$. Use Corollary \ref{corolarioMedidaInvarianteEnAlgebraGeneradora}.)
\end{exercise}
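Following the hint, the preimage of $[a,b] \subset [0,1]$ under the tent map consists of the two intervals $[a/2, b/2]$ and $[1-b/2, 1-a/2]$, of total length $b-a$. A Python sketch of this computation (the helper names are ours):

```python
# Numerical check (a sketch following the exercise's hint) that the tent
# map preserves Lebesgue measure: the preimage of [a,b] is
# [a/2, b/2] U [1 - b/2, 1 - a/2], two intervals of total length b - a.

def tent_preimage(a, b):
    """Preimage of [a,b] under T(x) = 2x on [0,1/2], 2 - 2x on [1/2,1]."""
    return [(a / 2, b / 2), (1 - b / 2, 1 - a / 2)]

def total_length(intervals):
    return sum(v - u for u, v in intervals)

for a, b in [(0.0, 1.0), (0.2, 0.5), (0.1, 0.9)]:
    assert abs(total_length(tent_preimage(a, b)) - (b - a)) < 1e-12
```

Since finite unions of intervals generate the Borel $\sigma$-algebra, Corollary \ref{corolarioMedidaInvarianteEnAlgebraGeneradora} then yields invariance on all Borel sets.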
\begin{definition} \em \label{definicionatractoryrepulsor} \index{attractor! periodic} \index{repeller! periodic}
Let $T:X \mapsto X$ be Borel measurable on a metric space $X$.
We say that a periodic point $x_0$ of period $p$ is \em an
attractor \em if there exists a neighborhood $V$ of $x_0$, forward
invariant under $T^p $ (that is, $T^p(V) \subset V$), such that
$$\lim_{n \rightarrow + \infty} \mbox{$\,$dist$\,$} (T^n(x_0), T^n(y)) = 0
\; \; \forall y \in V $$ When $T$ is invertible with measurable
inverse, we say that a periodic point $x_0$ of period $p$ is \em
a repeller \em if there exists a neighborhood $V$ of $x_0$, backward
invariant under $T^p $ (that is, $T^{-p}(V) \subset V$), such that
$$\lim_{n \rightarrow - \infty} \mbox{$\,$dist$\,$} (T^n(x_0), T^n(y)) = 0
\; \; \forall y \in V $$
\end{definition}
\begin{proposition} \label{claimatractoreshiperbolicos} \index{point! periodic! attractor}
Let $f: S^1 \mapsto S^1$ be a diffeomorphism; \em that is, $f$ is of class $C^1$ (i.e. differentiable with continuous derivative) and invertible (i.e. bijective; the inverse transformation $f^{-1}: S^1 \mapsto S^1$ exists), and its inverse $f^{-1}$ is also of class $C^1$. \em
Suppose the diffeomorphism $f: S^1 \mapsto S^1$ preserves orientation (i.e. $f'>0$). Let
$x_0$ be a periodic point of period $p$ such that the derivative $
(f^p) '(x_0)$ is smaller than 1. Then $x_0$ is an attractor. Analogously, if
$(f^p) ' (x_0)$ is greater than 1, then $x_0$ is a repeller.
\end{proposition}
{\em Proof: }
The second claim follows from the first by using $f^{-p}$ instead
of $f^p$. Let us prove the first claim, renaming
the transformation $f^p$ as $f$. Then $x_0$ is a fixed point. Sketch the graph of $f(x)$ for $x \in S^1 \approx [0,1]$,
viewed as a map of the interval $[0,1]$ into itself in which
0 has been identified with 1 at the point $x_0$. The graph of $f$ meets
the diagonal at least at the point $0 \sim 1= x_0$.
Graphically, the future iterates of a point $y$ in a sufficiently small
neighborhood of $x_0$ are obtained by drawing the vertical line with
abscissa $y$, intersecting it with the graph of $f$, then drawing the
horizontal line through that point, intersecting it with the diagonal, drawing the
vertical line through that point, intersecting it with the graph of $f$, and so
on (see for instance \cite[Figure on page 19]{Jost}). If the function is continuous with
continuous derivative, and
the derivative $f'(x_0) = a >0 $ at the fixed point $x_0$ is smaller than one, then the
successive points on the graph of $f$ obtained by the
construction above tend monotonically to the fixed point $x_0$. Indeed, by the definition of differentiability and of the derivative: $\|f(y) - f(x_0)\| \leq (a + (1-a)/2) \, \|y - x_0\|$ for every $y$ sufficiently close to $x_0$, say $\|y - x_0\| < \delta$. That is, $\|f(y) - x_0\| \leq b \|y - x_0\|$, where $0 < b = a + (1 - a)/2 = (1 + a)/2 < 1$. Hence
$f(y)$ also satisfies $\|f(y) -x_0\| < \delta$. By induction, the inequality above applies to all future iterates $f^n(y)$ (that is, for every $n \in \mathbb{N}$). We obtain $\|f^n(y) - x_0\| \leq b^n \|y - x_0\|$. Since $0 < b < 1$, we deduce that $f^n(y)$ converges monotonically to $x_0$.
\hfill $\Box$
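The geometric bound $\|f^n(y) - x_0\| \leq b^n \|y - x_0\|$ with $b = (1+a)/2$ is easy to observe numerically. In the Python sketch below, the concrete map is an assumption of ours chosen so that $x_0 = 0$ is a fixed point with $f'(0) = 0.5 < 1$:

```python
# Numerical illustration (a sketch; the map below is an assumption, not
# from the text) of the proof: if f'(x0) = a < 1 at the fixed point x0,
# nearby orbits satisfy |f^n(y) - x0| <= b^n |y - x0| with b = (1+a)/2 < 1.

def f(x):
    # fixed point x0 = 0 with derivative f'(0) = 0.5 < 1
    return 0.5 * x + 0.1 * x ** 2

x0, a = 0.0, 0.5
b = (1 + a) / 2                 # b = 0.75
y = 0.4                         # a point near the fixed point
err = abs(y - x0)
for n in range(1, 30):
    y = f(y)
    assert abs(y - x0) <= b ** n * err   # geometric contraction holds
```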
\begin{definition} \label{definicionhiperbolico} \em \index{point! periodic! hyperbolic}
A periodic point $x_0$ of period $p$ of a diffeomorphism $f: X \mapsto X$ on a differentiable manifold $X$ is called
\em hyperbolic \em if the (complex) eigenvalues of the derivative $df^p_{x_0}$ of $f^p$
at $x_0$ all have modulus different from 1. Recall that the derivative $df^p_{x_0}$ is a linear transformation from $\mathbb{R}^m$ to $ \mathbb{R}^m$, where $m $ is the dimension of the manifold $X$.
\end{definition}
{\bf Consequence: } If $x_0$ is a hyperbolic periodic point of
a $C^1$ diffeomorphism $f$ of the circle $S^1$, then it is an attractor if $|(f^p)'(x_0)| < 1$, and
a repeller if $|(f^p)'(x_0)|
> 1$. (Since $x_0$ is hyperbolic, we know that $|(f^p)'(x_0)| \neq 1$ by definition, so the two cases above are the only possible ones.) Indeed, if $f$ preserves the orientation of the circle, we apply Proposition \ref{claimatractoreshiperbolicos}, and if $f$ reverses the orientation, we apply the same Proposition to $f^2$ to deduce that $x_0$ is an attracting (resp. repelling) fixed point of $f^2$. It is easy to prove that if $x_0$ is a fixed point of $f$ that is an attractor (resp. repeller) for $f^2$, then it is also an attractor (resp. repeller) for $f$.
For a diffeomorphism $f$ of the circle $S^1$, and more generally for a $C^1$ map on a one-dimensional manifold, a hyperbolic periodic point $x_0$ is called a \em sink \em if $|(f^p)'(x_0)| < 1$, and a \em source \em if $|(f^p)'(x_0)| >1$. Generalizing this result when the manifold has dimension greater than one, we adopt the following definition:
\begin{definition} \em {\bf Sinks, sources and saddles} \index{sink} \index{source} \index{saddle}
Let $f: X \mapsto X$ be a diffeomorphism on a differentiable manifold $X$. Let $x_0$ be a hyperbolic periodic point of $f$ of period $p$ (i.e. the eigenvalues of $df^p_{x_0}$ all have modulus different from 1). The point $x_0$, and also the (finite) orbit of $x_0$, is called a \em sink \em if the eigenvalues of $df^p_{x_0}$ all have modulus smaller than 1. It is called a \em source \em if they all have modulus greater than 1. And it is called a \em saddle \em if some eigenvalue has modulus greater than 1 and some other has modulus smaller than 1. (Observe that saddles can exist only if the dimension of the manifold is 2 or greater.)
\end{definition}
\begin{exercise}\em
(a) Find an example of a diffeomorphism $f: \mathbb{R}^2 \mapsto \mathbb{R}^2$ with a hyperbolic fixed point of saddle type, another example with a sink, and another with a source. (b) Find an example of $f: \mathbb{R}^2 \mapsto {\mathbb{R}^2}$ having exactly three fixed points, all three hyperbolic: one a source, one a sink and one a saddle. (c) Prove that for any diffeomorphism $f: M \mapsto M$, sinks are attractors, sources are repellers, and saddles are neither attractors nor repellers.
\end{exercise}
\begin{exercise}\em
Find an example of a diffeomorphism $f: \mathbb{R}^2 \mapsto \mathbb{R}^2 $ with an attracting fixed point that is not a sink (i.e. not hyperbolic), another with a repelling fixed point that is not a source (i.e. not hyperbolic), and another with a non-hyperbolic fixed point $x_0$ that is neither a source nor a sink but such that every future orbit in any sufficiently small neighborhood of $x_0$ either tends to $x_0$ or leaves the neighborhood.
\end{exercise}
\vspace{.2cm}
\index{Lyapunov exponents! of a periodic point}
{\bf A negative Lyapunov exponent means exponential contraction:} \em If $f: S^1 \mapsto S^1$
is a diffeomorphism and $x_0$ is a hyperbolic attracting fixed point \em (i.e. $|f'(x_0)|<1$, that is, $x_0$ is a sink), \em then
$$\lim _{n \rightarrow + \infty} \frac{\log \mbox{\em dist} (f^n(x), x_0)}{n} = -\lambda <0
\; \; \forall x \mbox{ in some neighborhood of } x_0,$$ where $-\lambda
= \log |f'(x_0)| < 0$ is called the Lyapunov exponent at $x_0$. \em
\begin{exercise}\em
Prove the claim above. Hint:
prove that
$$\frac{\mbox{$\,$dist$\,$}(f^{n+1}(x), x_0)}{\mbox{$\,$dist$\,$}(f^{n}(x), x_0)}\rightarrow
e^{-\lambda}< 1$$
\end{exercise}
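The limit in the claim can be estimated numerically: along an orbit attracted to $x_0$, the slope of $\log \mbox{dist}(f^n(x), x_0)$ as a function of $n$ approaches $\log|f'(x_0)|$. In the Python sketch below the map is again an assumption of ours, with fixed point $0$ and $f'(0) = 0.5$, so the expected Lyapunov exponent is $\log 0.5 \approx -0.693$:

```python
# A numerical sketch (the map is an assumption for illustration) of the
# exercise: for a hyperbolic attracting fixed point x0 with |f'(x0)| < 1,
# log dist(f^n(x), x0) / n converges to the Lyapunov exponent log|f'(x0)|.

import math

def f(x):
    return 0.5 * x + 0.1 * x ** 2      # fixed point 0, f'(0) = 0.5

lam = math.log(0.5)                    # expected Lyapunov exponent
x, n = 0.3, 200
for _ in range(n):
    x = f(x)
estimate = math.log(abs(x)) / n        # log dist(f^n(x), 0) / n
assert abs(estimate - lam) < 0.01
```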
{\bf Interpretation:} The distance from
$f^n(x)$ to the hyperbolic attractor contracts exponentially, with
coefficient asymptotically equal to $e$ raised to the Lyapunov exponent $-\lambda < 0$.
\vspace{.2cm} {\bf A positive Lyapunov exponent means
exponential expansion:} \em If $f: S^1 \mapsto S^1$ is a diffeomorphism and $x_0$ is a hyperbolic repelling fixed point \em (i.e.
$|f'(x_0)|>1$, that is, $x_0$ is a source), \em then
$$\lim _{n \rightarrow - \infty} \frac{\log \mbox{\em dist} (f^n(x), x_0)}{n} = \sigma > 0
\; \; \forall x \mbox{ in some neighborhood of } x_0,$$ where $\sigma =
\log |f'(x_0)| > 0$ is called the Lyapunov exponent at $x_0$. \em
\begin{exercise}\em
Prove the claim above. Hint: apply what has already been proved to $f^{-1}$, together with the formula for the derivative of the inverse function, to deduce that
$$\lim_{n \rightarrow - \infty} \frac{\mbox{$\,$dist$\,$}(f^{n+1}(x), x_0)}{\mbox{$\,$dist$\,$}(f^{n}(x), x_0)} =
e^{\sigma}>1$$
\end{exercise}
{\bf Interpretation:} The distance from
$f^n(x)$ to the hyperbolic repeller (as $n$ grows and while $f^n(x)$ remains in a small neighborhood of the repeller) expands exponentially, with
coefficient asymptotically equal to $e$ raised to the Lyapunov exponent $\sigma > 0$.
\vspace{.2cm}
\begin{exercise}\em {\bf North pole--south pole flow} \em \label{ejerciciopolonortepolosur} \index{flujo polo norte-polo sur}
\em
We call a \em section of a north pole--south pole
flow \em on the circle $S^1$ a
diffeomorphism $f: S^1 \mapsto S^1$ of class $C^1$, preserving
orientation, with exactly two fixed points
$N$ and $S$,
both hyperbolic, $N$ a repeller and $S$ an attractor.
Sketch the graph of $f$ on
$[0,1] \ (\mbox{mod } 1)$ taking $0 \sim 1 = N$. Prove that every
orbit other than $N$ and $S$ is monotone and converges to $S$.
(Hint: see the proof of Claim
\ref{claimatractoreshiperbolicos}.) Prove that the only
invariant probability measures are the convex linear combinations
of $\delta _N$ and $\delta _S$. Hint: consider the countable
partition of the interval $(0, S)$ formed by the sets $A_n =
[x_{n}, x_{n+1})$ for $n \in \mathbb{Z}$, where $x_0$ is chosen arbitrarily
in the open interval $(0,S)$ and $x_n := f^n(x_0)$ for all $n \in \mathbb{Z}$. Using an argument similar to the proof of Example
\ref{ejemplonoexistenmedidasinvariantes}, prove
that $\mu ((0,S)) =
0$. Analogously, prove that $\mu ((S, 1)) = 0$, whence $\mu
(\{N,S\}) = 1$.
\end{exercise}
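As a numerical companion to the exercise, the map $f(x)=x+0.1\sin(2\pi x) \bmod 1$ (our own illustrative choice) is such a north pole--south pole section: $N=0$ is a hyperbolic repeller, $S=1/2$ is a hyperbolic attractor, and one can watch every other orbit converge monotonically to $S$:

```python
import math

def f(x):
    # North-south circle map: fixed points N = 0 (repeller, f'(0) = 1 + 0.2*pi)
    # and S = 0.5 (attractor, f'(0.5) = 1 - 0.2*pi).
    return (x + 0.1 * math.sin(2 * math.pi * x)) % 1.0

final_points = []
for x0 in (0.1, 0.3, 0.7, 0.9):
    x = x0
    prev_dist = abs(x - 0.5)
    for _ in range(1000):
        x = f(x)
        d = abs(x - 0.5)
        # distance to the attractor S = 0.5 never increases along the orbit
        assert d <= prev_dist + 1e-12
        prev_dist = d
    final_points.append(x)

print(final_points)  # every sampled orbit ends up at S = 0.5
```

Consistently with the exercise, long time averages along these orbits concentrate on the fixed points, which is why the only invariant probabilities are combinations of $\delta_N$ and $\delta_S$.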
\begin{definition} \em \index{difeomorfismos! Morse-Smale} {\bf Morse-Smale diffeomorphism of $S^1$.}
A diffeomorphism $f: S^1 \mapsto S^1$ is called \em
Morse-Smale \em if it preserves orientation and has exactly a finite number of
periodic points (all of the same period), all of them
hyperbolic.
\end{definition}
\begin{exercise}\em
Prove that for a Morse-Smale diffeomorphism $f$ of the circle the only invariant measures are the
convex linear combinations of the measures $$\frac{\delta
_{x_0} + \delta_{f(x_0)} + \ldots + \delta _{f^{p-1}(x_0)}}{p},$$
where $x_0$ is a periodic point of period $p$. Hint:
graph $f^p$ on $S^1= [0,1]/\sim$ where $0 \sim 1$ is a
periodic point of period $p$. Prove that attractors and
repellers alternate. Prove that, for every invariant measure, the
arc between a repeller and a consecutive attractor has measure
zero, using the procedure
of Exercise \ref{ejerciciopolonortepolosur}.
\end{exercise}
\subsection{Topological dynamics} \index{din\'{a}mica topol\'{o}gica}
\begin{definition} \label{definicionrecurrencia} \index{recurrencia! topol\'{o}gica} \em {\bf
Topological recurrence}. Let $T: X \mapsto X$ be a Borel measurable
transformation of a topological space $X$. Let $x \in X$. The
\em omega-limit set of $x$ \em is the set $$\omega (x)= \{ y
\in X: \exists n_j \rightarrow + \infty \mbox{ such that }
T^{n_j}(x) \rightarrow y \}.$$ \index{omega-l\'{\i}mite} \index{$\omega(x)$ omega-l\'{\i}mite} \index{alfa-l\'{\i}mite} When $T$ is bi-measurable (i.e. $T$ is measurable, invertible and has measurable inverse), the \em
alpha-limit set of $x$ \em is the set
$$\alpha (x)= \{ y \in X: \exists n_j \rightarrow - \infty \mbox{
such that } T^{n_j}(x) \rightarrow y \}.$$ A point $x$ is called \em
recurrent \em if
$$x \in \omega (x).$$ \index{punto! recurrente}
In other words, $x$ is recurrent if there exists a subsequence
$n_j \rightarrow + \infty$ such that $T^{n_j}(x) \rightarrow x$.
Hence, for every neighborhood $V$ of $x$ there exists a subsequence $n_j \in
\mathbb{N}$ such that $T^{n_j}(V) \cap V \neq \emptyset$.
\end{definition}
\begin{exercise}\em Let $X$ be a compact metric space and let $T: X \mapsto X$ be continuous. Prove that:
(a) $\omega(x)$ is compact and nonempty for every $x \in X$.
(b) $\omega(T^n(x)) = \omega(x)$ for every $n \in \mathbb{N}$; that is, the set $\omega(x)$ depends only on the orbit of $x$ and not on which point of that orbit is chosen.
(c) $T (\omega(x)) = \omega(x) \subset T^{-1}(\omega(x))$ for every $x \in X$. That is, $\omega(x)$ is a forward invariant set under $T$.
(d) If moreover $T$ is a homeomorphism (i.e. $T$ is continuous, invertible and has continuous inverse $T^{-1}$) prove that:
(i) $\alpha(x)$ is compact and nonempty for every $x \in X$, $\alpha(T^n(x))= \alpha (x)$ for every $n \in \mathbb{Z}$ and $T(\alpha(x)) = \alpha(x) = T^{-1}(\alpha(x))$.
(ii) If $x \in \omega(x)$ then $\alpha(x) \subset \omega(x)$. Hint: if $x \in \omega(x)$ then $T^{-n}(x) \in \omega(x)$ for every $n \in \mathbb{Z}$, hence $\alpha(x) \subset \overline{\omega(x)} = \omega(x)$.
(iii) $x \in \omega(x)$ if and only if $\omega(x) = \overline{o(x)}$, where $o(x) = \{T^n(x): \ n \in \mathbb{Z}\}$.
\end{exercise}
\begin{definition} \em {\bf Non-wandering set.} \index{conjunto! no errante}
Let $T: X \mapsto X$ be a Borel measurable transformation of a
topological space $X$. A point $x \in X$ is \em non-wandering \em
if for every neighborhood $V$ of $x$ there exists a sequence $n_j
\rightarrow + \infty$ such that $T^{n_j}(V) \cap V \neq \emptyset$.
The set of non-wandering points of $T$ (which may be empty)
is denoted $\Omega (T)$ and called the \em non-wandering set.
\em
\end{definition}
\begin{exercise}\em
Let $X$ be a Hausdorff topological space and let $T: X \mapsto X$ be Borel measurable.
(a) Prove that the set of recurrent points is contained in
the non-wandering set $\Omega (T)$ (the opposite inclusion is not
necessarily true, as will be seen later).
(b) Let $\mu $ be a
$T$-invariant probability measure. If $X$ has a countable basis
of open sets, prove (without using the statement of Poincar\'{e}'s
recurrence theorem, which comes later) that $\mu (\Omega
(T))= 1$; that is, almost every point is non-wandering for any
$T$-invariant probability measure.
Hint: prove that
for every $V$ of the basis with positive measure, the
sequence of measurable sets $T^{-n}(V), \; n \in \mathbb{N}$, cannot
consist of pairwise disjoint sets from some $n_0$
onwards. Deduce that there exists a subsequence $n_j \rightarrow +
\infty$ such that $T^{-n_j}(V) \cap V \neq \emptyset$, which implies
$V \cap T^{n_j}(V) \neq \emptyset$. A point is wandering (not
non-wandering) if it is contained in some open set $V$ that fails
the above property. Deduce that the wandering points form a set of
measure zero.
\end{exercise}
\begin{definition} \label{definitionTransitividad} \index{transitividad}
\em {\bf Topological transitivity.} Let $X$ be a topological space and
$T: X \mapsto X$ Borel measurable. $T$ is said to be \em transitive
\em if for any two nonempty open sets $U$ and $V$ there exists $n \geq 1$
such that $T^n(U) \cap V \neq \emptyset$.
Suppose that $X$ is Hausdorff with no isolated points. It is easy
to prove that \em if there exists a dense positive orbit
then $T$ is transitive. \em Moreover, if $T$ is continuous and the topological space
has a countable basis of open sets and is
a Baire space (that is, every countable intersection of dense open sets is dense),
then $T$ \em is transitive if and only if there exists a dense
positive
orbit. \em
Transitivity means that for any open set $U$, however
small, \em the positive iterates of $U$ travel through the whole space
from the topological point of view \em (that is, through every open set of the space).
\end{definition}
\begin{remark} \em \label{remark00}
Observe that for any two sets $U$ and $V$:
$$T^n(U) \cap V \neq \emptyset \mbox{ if and only if
} T^{-n}(V) \cap U \neq \emptyset.$$
It is easy to see that if $T$ is continuous and transitive, then given
two nonempty open sets $U$ and $V$ there exists $n_j \rightarrow +
\infty$ such that $T^{-n_j}(V) \cap U \neq \emptyset$. In
particular, taking $U=V \ni x$, one deduces:
\em if $T: X \mapsto X$ is continuous and transitive then $\Omega (T) =
X$. \em
\end{remark}
\begin{exercise}\em
Prove the assertions of Remark \ref{remark00} and those stated immediately after the definition of transitivity in \ref{definitionTransitividad}.
\end{exercise}
\subsection{Recurrence and the Poincar\'{e} Lemma} \index{teorema! lema de Poincar\'{e}} \index{recurrencia}
Let $T: X \mapsto X $ be Borel measurable on a
compact metric space $X$. Recalling Definition \ref{definicionrecurrencia} and taking into account the sequential compactness of $X$, a point $x$ is recurrent if and only if \em every
neighborhood $V$ of $x$ contains infinitely many forward iterates of
$x$. \em That is, there exists $n_j \rightarrow + \infty$ such that
$T^{n_j}(x)\in V$.
\begin{definition}
\em Let $T: X \mapsto X$ be measurable on a measurable space $(X, {\mathcal A})$.
Let $E \in {\mathcal A}$. A point $x \in E$ \em returns
infinitely often to $E$ \em if there are infinitely many forward
iterates of $x$ in $E$. More precisely: there exists $n_j \rightarrow +
\infty$ such that $T^{n_j}(x)\in E$ for all $j \in \mathbb{N}$.
\end{definition}
The following two theorems, known as the Poincar\'{e}
Recurrence Lemmas, can be found for instance in \cite[pp. 32--35]{Mane} (see also \cite{ManeIngles}). The following measurable version also appears in
\cite[\S 1.4]{Walters}:
\begin{theorem} \label{teoPoincare} \index{teorema! lema de Poincar\'{e}}
{\bf Poincar\'{e} Recurrence Lemma. Measurable version.}
Let $T:X \mapsto X$ be measurable, preserving a probability
measure $\mu$. Let $E$ be a measurable set with $\mu
(E)>0$. Then $\mu$-almost every point of $E$ returns infinitely often to $E$.
\end{theorem}
{\em Proof: }
Let $F_N := \bigcap_{n \geq N} T^{-n}(X \setminus E)$ and let $F:= \bigcup_{N \geq 0} F_N$. By construction $x \in F$ if and only if $T^n(x) \not \in E$ for all sufficiently large $n$. This happens if and only if the forward orbit $\{T^n(x)\}_{n \geq 0}$ of $x$ does not visit $E$ infinitely often. It therefore suffices to prove that $\mu(F \cap E)= 0$.
By construction $T^{-1}(F_N) = F_{N+1}$. Since $\mu$ is $T$-invariant, we have $\mu (F_{N+1}) = \mu (F_N)$ for all $N \geq 0$, hence $\mu (F_N) = \mu (F_0) $ for all $N \geq 0$. Since $F_ {N+1} \supset F_N$ for all $N \geq 0 $, it follows that $\mu (F) =\lim _{N \rightarrow + \infty} \mu(F_N) = \mu(F_0) $ and $F \supset F_0 $. Hence $\mu (F \setminus F_0) = 0$. Since $E \cap F_0 = \emptyset$, we deduce that $E \cap F \subset F \setminus F_0$, whence $\mu (E \cap F) = 0$, as we wanted to prove.
\hfill $\Box$
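Poincar\'{e}'s lemma can be watched in action with an irrational rotation of the circle, which preserves the Lebesgue probability measure. The choice of the golden rotation number and of the set $E = [0, 0.05)$ in the sketch below is ours, for illustration:

```python
import math

alpha = (math.sqrt(5) - 1) / 2   # irrational rotation number

def T(x):
    # Rotation of the circle: preserves the Lebesgue probability measure.
    return (x + alpha) % 1.0

def visits_to_E(x0, n_iter=1000):
    # Count forward iterates of x0 that fall back into E = [0, 0.05).
    x, count = x0, 0
    for _ in range(n_iter):
        x = T(x)
        if x < 0.05:
            count += 1
    return count

# Sampled points of E return to E many times
# (roughly mu(E) * n_iter = 50 visits each).
counts = [visits_to_E(x0) for x0 in (0.0, 0.01, 0.03, 0.049)]
print(counts)
```

This matches the theorem: $\mu(E) = 0.05 > 0$, so almost every point of $E$ (here, in fact, every point) returns to $E$ infinitely often.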
\begin{theorem}
{\bf Poincar\'{e} Recurrence Lemma. Topological version.} \index{teorema! lema de Poincar\'{e}}
Let $T: X \mapsto X$ be Borel measurable on a topological space $X$
with a countable basis. If $T$ preserves a probability measure
$\mu$, then $\mu$-almost every point is recurrent (that is, $x \in \omega
(x)$ for $\mu$-a.e. $x \in X$).
\end{theorem}
{\em Proof: } Let $\{V_j\}_{j \in \mathbb{N}}$ be a basis of
open sets. By \ref{definicionrecurrencia}, $x \not \in \omega(x)$
if and only if $x \in A_j$ for some $j \in \mathbb{N}$, where $A_j = \{x \in
V_j: x \mbox{ does not return infinitely often to } V_j \}$. By
\ref{teoPoincare}, $\mu (A_j) = 0$. Hence the countable union of the
sets $A_j$, which coincides with the set of non-recurrent
points, has $\mu$-measure zero. $\Box$
\begin{exercise}\em
Let $(X, {\mathcal A})$ be a measurable space. Let $T: X \mapsto X$
be measurable, preserving a probability measure $\mu$. Let $E \in {\mathcal A}$
with $\mu (E) >0$. Prove that
$$\sum_{n \in \mathbb{N}} \chi _E (T^n(x))$$ diverges for $\mu$-a.e. $ x \in
E$.
\end{exercise}
\begin{exercise}\em
Let $T: X \mapsto X$ be Borel measurable on a compact topological space $X$,
preserving a probability measure $\mu$. Let
$\mbox{ supp }(\mu )$ be the compact support of $\mu$ (i.e. the
smallest compact set of $\mu$-measure 1). Prove that
$\emptyset \neq \mbox{supp}(\mu )\subset \overline {\mbox
{Rec}(T)}$, where $\mbox{Rec}(T)$ is the set of recurrent
points of $T$.
\end{exercise}
\begin{theorem} \index{teorema! Hopf}
\em {\bf Hopf's Theorem.} \em Let $T: \mathbb{R}^k \mapsto \mathbb{R}^k$ be Borel
bi-measurable (measurable, invertible with measurable inverse), preserving
the Lebesgue measure $m$. Then almost every point of $\mathbb{R}^k$ either
is recurrent or has empty omega-limit set.
\end{theorem}
\begin{exercise}\em
Prove Hopf's theorem stated above. Hint: $\mathbb{R}^k =
\bigcup_{i \in \mathbb{N}} X_i$ where $X_i$ is an open ball of radius $r_i
\rightarrow + \infty$ increasing with $i$. Let $$\widetilde {X}_i = \{x
\in X_i: T^j(x) \in X_i \mbox{ for infinitely many positive values of
} j \} .$$ Let $\widetilde T _i : \widetilde X_ i \mapsto \widetilde X_i$ be the
transformation assigning to each $x \in \widetilde X_i$ its
first return to $ X_i$: $\widetilde T_i(x) = T^j (x) \in \widetilde
X_i$ for the smallest positive integer $j = j(x)$ such that $T^j(x) \in X_i$. Prove that $m(\widetilde T_i (\widetilde X_i)) = m (\widetilde X_i)$;
hence a.e. point of $\widetilde X_i$ lies in the image of $\widetilde T_i$.
Prove that $\widetilde T _i $ preserves $m$. Apply the Poincar\'{e}
recurrence theorem to deduce that $m$-a.e. point of $\widetilde X_i$
is recurrent. Prove that a.e. point of $X_i$ either is recurrent or
its omega-limit set does not intersect $X_i$.
\end{exercise}
\begin{remark} \em
In \cite{FrantzEnciclopedia}, and in the bibliography cited there, several other results on recurrence are surveyed, besides Poincar\'{e}'s basic recurrence lemmas. Some of these results measure, in terms of powers of $n$, the frequency with which the forward orbit of a recurrent point $x$ approaches $x$, and relate it to the ergodic measures and to the metric entropy of the system (metric entropy being, roughly speaking, a measurement, weighted by an invariant probability $\mu$, of the spatial disorder produced by $f$ under iteration).
\end{remark}
\subsection{Ergodicity} \index{ergodicidad} \index{medida! erg\'{o}dica}
\index{transformaci\'{o}n! erg\'{o}dica}
\begin{definition} \label{ergodicidad} \label{definitionErgodicidadI}
\em {\bf Ergodicity I.}
Let $(X, {\mathcal A}, \mu ) $ be a probability space and
$T: X \mapsto X$
measurable, preserving $\mu $. We say that $T$ is \em ergodic \em
with respect to the measure $\mu$,
or that $\mu$ is an \em ergodic measure
\em for $T$,
if for any two measurable sets $U$ and $V$ of positive measure there exists $n \geq 1
$ such that $T^n(U) \cap V \neq \emptyset$.
\end{definition}
{\bf Note: } Observe that ergodicity is the measurable-context counterpart of topological transitivity. We stress that, by definition, if a measure is ergodic for $T$ then it is $T$-invariant. Ergodicity is not defined for non-invariant measures.
\begin{definition} \label{definitionErgodicidadII} \index{ergodicidad} \index{medida! erg\'{o}dica} \index{transformaci\'{o}n! erg\'{o}dica}
\label{ergodicidad2} \em {\bf Ergodicity II.}
Let $(X, {\mathcal A}, \mu )$ be a probability space and
$T: X \mapsto X$
measurable, preserving $\mu $. We say that \em $T$ is
ergodic \em with respect to the measure $\mu$,
or that $\mu$ is an \em ergodic measure
\em for $T$, if every measurable set $A$ that is invariant under
$T$ (that is, $T^{-1}(A) = A$) has measure either zero or
1.
\end{definition}
\begin{theorem} \label{teoremaDefinicionesErgodicidad} \index{equivalencia de definiciones! de ergodicidad}
$T$ is ergodic according to Definition I for the probability measure
$\mu$ if and only if it is ergodic according to Definition
II.
\end{theorem}
{\em Proof: } Suppose Definition II fails. Then there exists a $T$-invariant set $A$ whose measure is
positive and different from 1. Hence the complement $A^c$ of $A$ is
also $T$-invariant and has positive measure. Since $A = T^{-1}(A)$, every
positive orbit with initial state in $A$ is
contained in $A$.
We deduce that there is no $n \in \mathbb{N}$ such that $T^n(A)$ intersects $A^c$. We conclude that
$T $ is not ergodic according to Definition I.
Conversely, assume Definition II holds: every $T$-invariant set $A$ has
measure either zero or 1. Suppose, by contradiction,
that Definition I fails. Then there exist measurable sets $U$ and $V$
of positive measure such that $\bigcup _{n \in \mathbb{N}} T^{-n}(V) \cap U =
\emptyset$. Then the set $$A= \bigcap _{N \in \mathbb{N}} \big (\bigcup
_{n \geq N} T^{-n}(V)\big)$$ is measurable, invariant under $T$ (check that $T^{-1}(A) = A$) and has
positive measure (check that $\mu(A) \geq \mu (V) $, using that $\mu$ is a $T$-invariant probability measure), yet $A$ does not intersect $U$, which also has
positive measure. From this we deduce that $A$ cannot have
measure 1, so we have found an invariant set
of positive measure less than 1, contradicting the
hypothesis. \hfill $\Box$
\begin{exercise}\em Let $T: X \mapsto X$ preserve a probability measure $\mu$.
(a) Prove that $\mu$ is ergodic for $T$ if and only if every measurable set $A$ that is forward invariant (i.e. $A \subset T^{-1}(A) $) satisfies $\mu(A)= 0$ or $\mu(A)= 1$. (b) Prove that $\mu$ is ergodic for $T$ if and only if every measurable set $A$ that is backward invariant (i.e. $T^{-1}(A) \subset A$) satisfies $\mu(A)= 0$ or $\mu(A)= 1$. Hints: (a) Consider $B= \bigcup_{n \geq 0} T^{-n}(A) $, prove that $\mu(B) = \mu(A)$ and that $B$ is $T$-invariant. (b) Consider the complement of $A$.
\end{exercise}
\vspace{.3cm}
\index{medida! positiva sobre abiertos}
{\bf Definition: } A Borel measure $\mu$ on a topological
space is said to be \em positive on open sets \em if $\mu (V) >0$
for every nonempty open set $V$.
\begin{exercise}\em
Let $X$ be a topological space and $T: X \mapsto X$ Borel measurable,
preserving a probability measure $\mu$ that is positive on
open sets. Prove that if $T$ is ergodic with respect to $\mu$ then
$T$ is (topologically) transitive.
\end{exercise}
\begin{exercise}\em \index{difeomorfismos! Morse-Smale} \index{transitividad}
Prove that Morse-Smale diffeomorphisms of the circle have
ergodic measures but are not topologically transitive.
\end{exercise}
\begin{remark} \em Later we will prove that \em the Lebesgue measure is
ergodic for the irrational rotation of the circle \em (Theorems \ref{teoremaErgodicidadRotacionIrracional} and \ref{teorema1}).
Then, since the Lebesgue measure is positive on open sets, the irrational rotation is topologically transitive, so some orbit is
dense. Using the fact that a rotation of the circle preserves
distances, it is easy to see that, once one orbit is dense, \em all
orbits are dense. \em
We will also prove that the \em tent map $T$ on the interval is ergodic with respect to the
Lebesgue measure. \em Hence $T$ is topologically transitive and has a dense orbit. However, not all forward orbits of $T$ are dense: indeed, there are periodic orbits (which, as forward orbits, are finite sets, and hence not dense).
\end{remark}
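The ergodicity of the irrational rotation mentioned in the remark can be previewed numerically: time averages along orbits approach space averages with respect to the Lebesgue measure. The rotation number and the observable below are our illustrative choices:

```python
import math

alpha = (math.sqrt(5) - 1) / 2             # irrational rotation number
psi = lambda x: math.cos(2 * math.pi * x)  # observable; its integral over [0,1) is 0

def time_average(x0, n=100_000):
    # Birkhoff time average (1/n) * sum_{j<n} psi(T^j(x0))
    # for the rotation T(x) = x + alpha mod 1.
    x, total = x0, 0.0
    for _ in range(n):
        total += psi(x)
        x = (x + alpha) % 1.0
    return total / n

space_average = 0.0  # integral of cos(2*pi*x) dx over [0,1)
averages = [time_average(x0) for x0 in (0.0, 0.25, 0.7)]
print(averages)
```

For every sampled initial point the time average is very close to the space average, as equality (\ref{equationBirkhoffErgodica}) of the next theorem predicts for ergodic measures.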
\begin{nada} \em \index{promedio! temporal}
{\bf Birkhoff asymptotic time averages}
\end{nada} The Birkhoff-Khinchin Ergodic Theorem, to be stated in full later ({Theorem \ref{theoremBirkhoff}}), establishes that if $T: X \mapsto X$ is a measurable transformation admitting $T$-invariant probability measures, then for every $T$-invariant probability $\mu $ and every function $\psi \in L^1(\mu) $, the following limit exists for $\mu$-a.e. $x \in X$:
$$ \widetilde \psi (x):= \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ T^j(x).$$
\begin{theorem} {\bf Ergodicity III.} \label{TeoremaErgodicidadIII} \index{ergodicidad} \index{medida! erg\'{o}dica} \index{equivalencia de definiciones! de ergodicidad} \index{transformaci\'{o}n! erg\'{o}dica} \label{proposition_mu_ergodicapromediobirkhoff1}
Let $T: X \mapsto X$ be measurable on a measurable space $(X, {\mathcal A})$ and let $\mu$ be a $T$-invariant probability measure. The following statements are equivalent:
{\bf (i)} $\mu$ is ergodic for $T$.
{\bf (ii)} Every measurable function $\psi : X \mapsto \mathbb{R}$ that is $T$-invariant \em (that is, $\psi(x) = \psi (T(x)) \ \forall \, x \in X$) \em is constant $\mu$-a.e.
{\bf (iii)} For every bounded measurable function $\psi: X \mapsto \mathbb{R}$ the following limit exists \em $\mu$-a.e. \em and equals $\int \psi \, d \mu$: \em
\begin{equation} \label{equationBirkhoffErgodica} \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ T^j (x) = \int \psi \, d \mu \ \ \ \mu-\mbox{a.e.} \ x \in X \end{equation}
\end{theorem}
We will prove Theorem \ref{proposition_mu_ergodicapromediobirkhoff1} in paragraph \ref{paragraphPrueba00}.
\begin{remark} \em
{\bf The Boltzmann Hypothesis of Statistical Mechanics:} \index{hip\'{o}tesis de Bolzmann} \index{ergodicidad} \index{medida! erg\'{o}dica} \index{transformaci\'{o}n! erg\'{o}dica}
Before proving Theorem \ref{proposition_mu_ergodicapromediobirkhoff1}, let us interpret the meaning of statement (iii) to justify its relevance.
It is a particular case of the Birkhoff Ergodic Theorem, to be seen later, which establishes that for every ergodic measure $\mu$ equality (\ref{equationBirkhoffErgodica}) holds, not only when $\psi$ is measurable and bounded, but for every $\psi \in L^1(\mu)$.
Equality (\ref{equationBirkhoffErgodica}) has a relevant meaning in ergodic theory: it asserts that \em the space average \em of each function $\psi$ with respect to the probability $\mu$ on the space $X$ (i.e. the expected value $\int \psi \, d \mu$ of each random variable $\psi$) coincides with \em the asymptotic time average \em of the observed values of $\psi$ along $\mu$-almost every orbit. This asymptotic time average is the limit, as $n \rightarrow + \infty$, of the time averages $(1/n) \sum _{j= 0}^{n-1} \psi (T^j(x))$ of the values of $\psi$ observed along the finite piece of orbit from time 0 to time $n-1$. Except in exceptional cases, it is false that the limit of the time averages exists for all points $x \in X$. Moreover, although for every invariant measure $\mu$ this limit exists $\mu$-a.e. (Birkhoff's ergodic theorem), it is false in general (except when $\mu$ is ergodic) that it coincides with the space average of $\psi$ with respect to the probability $\mu$. Therefore the ergodic measures $\mu$ for $T$ have a relevant statistical meaning: they allow one to estimate the long-term time average (that is, the long-term statistical average of the series of observations $\psi(T^j(x))$, call it for instance the \lq\lq climate\rq \rq) in deterministic systems, for $\mu$-almost every initial state $x \in X$, by computing the expected value of $\psi$ with respect to the probability $\mu$. However, using equality (\ref{equationBirkhoffErgodica}) for this estimate can be badly mistaken when $\mu$ is not ergodic, or when $\mu$ is not $T$-invariant, or when the initial state $x$ does not belong to the set of $\mu$-probability 1.
The equality (\ref{equationBirkhoffErgodica}) between the asymptotic time averages and the space average (i.e. the expected value) is what Statistical Mechanics calls the Boltzmann Hypothesis. It is an important hypothesis for proving properties of the dynamics of systems formed by a finite but very large number of particles that evolve deterministically in time (by iterating a measurable map $T$) while preserving a given probability measure $\mu$ on the space $X$ of all possible states (called the \lq\lq phase space\rq\rq). For the so-called conservative systems this probability measure $\mu$ is the Lebesgue measure, normalized so that $\mu(X)= 1$. By Theorem \ref{proposition_mu_ergodicapromediobirkhoff1}, the Boltzmann hypothesis translates into the hypothesis of ergodicity of that probability $\mu$.
\end{remark}
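The failure of equality (\ref{equationBirkhoffErgodica}) without ergodicity can be seen in a toy example (our own choice): the rational rotation by $1/4$ preserves the Lebesgue measure but is not ergodic, and the time average of $\psi(x)=\cos(8\pi x)$ depends on the initial point instead of equaling the space average $\int \psi \, dm = 0$:

```python
import math

def T(x):
    # Rational rotation by 1/4: preserves Lebesgue measure, but NOT ergodic.
    return (x + 0.25) % 1.0

# psi is T-invariant: psi(T(x)) = cos(8*pi*x + 2*pi) = psi(x),
# yet psi is not a.e. constant, so condition (ii) fails as well.
psi = lambda x: math.cos(8 * math.pi * x)

def time_average(x0, n=10_000):
    x, total = x0, 0.0
    for _ in range(n):
        total += psi(x)
        x = T(x)
    return total / n

# Space average of psi is 0, but time averages depend on the initial state:
avg_at_0 = time_average(0.0)        # equals psi(0) = 1
avg_at_eighth = time_average(0.125) # equals psi(1/8) = cos(pi) = -1
print(avg_at_0, avg_at_eighth)
```

Here the time average exists everywhere (the observable is invariant along each orbit), but it is the function $\psi$ itself rather than the constant $\int \psi \, dm$, exactly the situation ruled out by ergodicity.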
\begin{nada} \em
\label{paragraphPrueba00}
{\bf Proof of Theorem \ref{proposition_mu_ergodicapromediobirkhoff1}:}
\end{nada}
{\em Proof: }
{\bf (i) $\Rightarrow$ (ii): }
Let $a \in \mathbb{R}$ and let $A_a = \{x \in X: \psi(x) \leq a\} \subset X$. Since $\psi$ is measurable, $A_a$ is measurable. Since $\psi (T(x))= \psi (x)$ for all $x \in X$, we have $T^{-1}(A_a) = A_a$. As $\mu$ is ergodic, $\mu(A_a)$ equals 0 or 1. Consider the function $g: \mathbb{R} \mapsto \{0,1\}$ defined by $g(a):= \mu (A_a)$. By construction $A_a \subset A_b$ if $a < b$, hence $g(a) \leq g(b)$ if $a < b$. So $g$ is nondecreasing and takes only the values 0 and 1. Let \begin{equation}
\label{eqn11} k= \inf \{a \in \mathbb{R}: g(a)= 1\} = \sup \{ a \in \mathbb{R}: g(a)= 0\} \in [-\infty, + \infty].\end{equation} Let us prove that $k \in \mathbb{R}$. Indeed, $\psi(x) \in \mathbb{R}$ for all $x \in X$; hence $\emptyset= \bigcap_{n \in \mathbb{N}} A_{-n}$ and $ X = \bigcup_{n \in \mathbb{N} }A_n$. Since $A_n \subset A_{n+1}$ for all $n \in \mathbb{N}$, we get $\lim_{n \rightarrow + \infty} \mu (A_n) = 1$ and $\lim _{n \rightarrow + \infty} \mu(A_{-n}) = 0$. This implies that there exists $n_0 \in \mathbb{N}$ such that $g(n_0) = 1$ and $g(-n_0) = 0$. Therefore $-n_0 \leq k \leq n_0 $. Now let us prove that $\mu(A_k)= 1$. From (\ref{eqn11}) we deduce that
$$0 =\mu(A_{k-(1/n)}) \leq \mu (A_k) \leq \mu (A_{k+(1/n)}) \ \ \forall \ 1 \leq n \in \mathbb{N}.$$
Moreover $A_{k} = \bigcap_{n= 1}^{\infty} A_{k + (1/n)}$, with $A_{k + (1/(n+1))} \subset A_{k + (1/n)}$. Hence $\mu (A_k)= \lim_{n \rightarrow + \infty}\mu(A_{k + (1/n)}) = 1$. We have proved that $\mu(A_k) = 1$.
On the other hand $ B_k:= \{x \in X: \psi(x) \geq k\} = \bigcap _{n= 1}^{\infty} (X \setminus A_{k-(1/n)})$, where $\mu (X \setminus A_{k-(1/n)}) = 1 $. Since $A_{k-(1/(n+1))} \supset A_{k -(1/n)}$, we obtain $\mu(B_k) = 1$. Therefore $\mu (A_k \cap B_k) = 1$; that is, $\psi(x) = k$ for $\mu$-a.e. $x \in X$.
{\bf (ii) $\Rightarrow$ (iii):}
Since $\mu$ is $T$-invariant we have
$\int \psi \circ T^j \, d \mu = \int \psi \, d \mu \ \ \forall \ j \in \mathbb{N}.$
Hence $ \int \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ T^j \, d \mu = \int \psi \, d \mu \ \ \forall \ \ n \geq 1.$
Taking the limit as $n \rightarrow + \infty$ and applying the Dominated Convergence Theorem, we deduce that
$$\int \lim _{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ T^j \, d\mu = \lim_{n \rightarrow + \infty} \int \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ T^j \, d \mu = \int \psi \, d \mu. $$
Consider the following function, defined $\mu$-a.e.: $$\Phi(x) := \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j = 0}^{n-1} \psi \circ T^j(x) \ \ \ \mu\mbox{-a.e. } x \in X. $$
Here, in assuming the $\mu$-a.e. existence of this function $\Phi$, that is, the $\mu$-a.e. existence of the limit of the time averages, we are applying the Birkhoff Ergodic Theorem (as if it were already proved). We then have
$$\int \Phi \, d \mu = \int \psi \, d \mu.$$
To have the function $\Phi$ defined at every point $x \in X$, set $$\Phi (x):= 0 \ \forall \ x \in X \mbox { such that } \not \exists \lim _{n \rightarrow + \infty} \frac{1}{n} \sum_{j = 0}^{n-1} \psi \circ T^j(x). $$
We claim that $\Phi$ is $T$-invariant. It suffices to check that given $\epsilon >0 $ there exists $N$ such that for all $n \geq N$ $$\Big | \frac{1}{n} \sum _{j= 0}^{n-1} \psi \circ T^j(T(x)) - \frac{1}{n} \sum _{j= 0}^{n-1} \psi \circ T^j( x)\Big| =$$ $$ =\Big |\frac{1}{n} \big(\psi(T^{n}(x)) - \psi(x)\big) \Big| < \epsilon \ \ \ \forall \ x \in X.$$
Indeed, the above inequality holds for all sufficiently large $n$ because, by hypothesis, $0 \leq |\psi(x)| \leq k$ for all $x \in X$, for some constant $k >0$.
Hence, for every sequence $n_j \rightarrow + \infty$ along which the limit of the time averages of $\psi$ with initial state $x$ exists, the limit with initial state $T(x)$ also exists, and both limits coincide. We deduce that $$\Phi(x) = \Phi(T(x)) \ \ \forall \ x \in X.$$ That is, $\Phi$ is a real-valued $T$-invariant function. Applying hypothesis (ii), $\Phi$ is $\mu$-a.e. constant, equal to some real number $K$. Then $$\int \psi \, d \mu = \int \Phi \, d \mu = \int K \, d \mu = K.$$
We deduce that $K= \int \psi \, d \mu$, that is, $\Phi(x) = \int \psi \, d \mu$ for $\mu$-a.e. $x \in X$. This implies equality (\ref{equationBirkhoffErgodica}), proving (iii).
{\bf (iii) $\Rightarrow $ (i):}
Let $A \subset X$ be measurable and $T$-invariant, and consider the characteristic function $\chi_A$. It is measurable and bounded and, since $A$ is $T$-invariant, for all $x \in X$ we have $\chi_A \circ T^j(x) = \chi_{T^{-j}(A)}(x) = \chi_A(x) \in \{0,1\}$. Hence $$\widetilde \chi_A(x) = \lim_{n \rightarrow +\infty} \frac{1}{n} \sum_{j= 0}^{n-1} \chi_A (T^{j}(x)) = \chi _{A} (x) \in \{0,1\} \ \forall \ x \in X.$$ In particular the above equality holds for $\mu$-a.e. $x \in X$. Since $$\widetilde \chi_A(x) = \int \chi_A \, d \mu = \mu (A) \ \ \mu\mbox{-a.e. } x \in X, $$
we deduce that $\mu(A) \in \{0,1\}$. Therefore $\mu$ is ergodic, finishing the proof of (i).
\hfill $\Box$
\subsection{Existence of ergodic measures} \label{sectionExistenciaMedidasErgodicas}
\begin{theorem} \index{teorema! de existencia de! medidas erg\'{o}dicas}
{\bf Existence of ergodic measures.} \label{teoremaExistenciaMedErgodicas} \index{teorema! de medidas erg\'{o}dicas! existencia}
Let $X$ be a compact metric space and let $T: X\mapsto X$ be continuous. Then there exist ergodic probability measures for $T$. Moreover, every $T$-invariant measure is the limit, in the weak$^*$ topology, of a sequence of measures that are finite linear combinations of ergodic measures.
\end{theorem}
We will prove Theorem \ref{teoremaExistenciaMedErgodicas} on the existence of ergodic measures at the end of this section, in paragraph \ref{paragraphPruebaTeoremaExistMedidasErgodicas}.
\newpage
\begin{nada} \em
{\bf Mutual singularity and absolute continuity} \index{medida! absolutamente continua} \index{medida! mutuamente singulares}
\end{nada}
Recall that two probability measures $\mu$ and $\nu$ are said to be mutually singular, $\mu \perp \nu$, when there exists some measurable set $A \subset X$ such that $\mu(A)= 1$ and $\nu(A)= 0$. Then $\mu (X \setminus A)= 0$ and $\nu(X \setminus A)= 1$, so the relation of mutual singularity is symmetric.
Given two probability measures $\mu$ and $\nu$, one says that $\mu$ is absolutely continuous with respect to $\nu$, denoted $\mu \ll \nu$, when every measurable set $A$ with $\nu(A)= 0$ satisfies $\mu (A) = 0$.
Two measures $\mu$ and $\nu$ are called equivalent, denoted $\mu \sim \nu $, when $\mu \ll \nu$ and $\nu \ll \mu$.
Observe that if $\mu \ll \nu$ then $\mu \not \perp \nu$ (the converse is false).
The following theorem is classical in abstract Measure Theory (in particular in Probability Theory):
{\bf Lebesgue-Radon-Nikodym Decomposition Theorem. } \index{teorema! Lebesgue-Radon-Nikodym}
\em Given two probability measures $\mu$ and $\nu$ there exist two probabilities $\mu_1$ and $\mu_2$, and a unique real number $t \in [0,1]$, such that
$$\mu = t \mu_1 + (1-t) \mu_2, \ \ \mu_1 \ll \nu, \ \ \mu_2 \perp \nu.$$
If moreover $ t \neq 0, 1$ then $\mu_1$ and $\mu_2$ are unique. \em
The classical statement of this theorem establishes that \em the finite measures $t \mu_1$ and $(1-t) \mu_2$ exist and are unique \em (possibly one of them is zero) such that their sum is $\mu$, with $t \mu_1 \ll \nu$ and $(1-t) \mu_2 \perp \nu$.
The proof of the Lebesgue-Radon-Nikodym Decomposition Theorem can be found for instance in \cite[Theorem 3.8]{Folland} or in \cite[Teorema 6.2.3]{Rudin}.
From the Lebesgue-Radon-Nikodym theorem one deduces that, in the particular case $\mu \ll \nu$, one has $t= 1$, $\mu_1 = \mu$, and $\mu_2$ is arbitrary. Analogously, if $\mu \perp \nu$, then $t= 0$, $\mu_2= \mu$ and $\mu_1$ is arbitrary. In the case $\mu \not \ll \nu, \ \mu \not \perp \nu$, the pair $(\mu_1, \mu_2)$ of probabilities in the Radon-Nikodym decomposition is unique.
Let us now return to the properties of ergodic probability measures for a transformation $T$:
\begin{theorem} \label{teoremaErgodicaMutuamSingular} {\bf Mutual singularity of ergodic measures} \index{medida! erg\'{o}dica} \index{teorema! de medidas erg\'{o}dicas! singularidad mutua} \index{transformaci\'{o}n! erg\'{o}dica} \index{ergodicidad}
Let $T: X \mapsto X$ be measurable on the measurable space $(X, {\mathcal A})$.
{\em (a) } If there exist two different probability measures $\mu$ and $\nu$, both ergodic for the transformation $T$, then $\mu \perp \nu$.
{\em (b) } If $\mu$ and $\nu$ are probability measures ergodic for $T$ and $\mu \ll \nu$, then $\mu= \nu$.
\end{theorem}
{\em Proof: }
We claim that if $\mu$ and $\nu$ are both ergodic for $T$, and if $\mu(A) = \nu(A)$ for every $T$-invariant set $A$, then $\mu = \nu$. Indeed, let $B$ be any measurable set and denote by $\chi_B$ the characteristic function of $B$. Applying property (iii) of Theorem \ref{proposition_mu_ergodicapromediobirkhoff1} to the function $\psi= \chi_B$, there exists a $T$-invariant set $A_1$ with $\mu(A_1) = 1$ such that equality (\ref{equationBirkhoffErgodica}) holds for the measure $\mu$ and for every $x \in A_1$. Analogously, there exists a $T$-invariant set $A_2$ with $\nu(A_2) = 1$ such that equality (\ref{equationBirkhoffErgodica}) holds for the measure $\nu$ and for every $x \in A_2$. By hypothesis $\nu(A_1) = \mu(A_1)= 1 = \nu(A_2) = \mu(A_2)$. Hence $\nu(A_1 \cap A_2) = \mu(A_1 \cap A_2) = 1$, and for every point $x \in A_1 \cap A_2$ the following equality holds:
$$\mu(B) = \int \chi_B \, d \mu = \lim _{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \chi_B \circ T^j(x) = \int \chi_B \, d \nu = \nu (B)$$
We deduce that $\mu(B) = \nu(B)$ for every measurable set $B$, whence $\mu= \nu$. This proves the claim.
Let us now prove part (a) of Theorem \ref{teoremaErgodicaMutuamSingular}. Since $\nu \neq \mu$ and both measures are ergodic, the claim just proved yields a $T$-invariant set $A$ such that $\mu(A) \neq \nu (A)$. By the definition of ergodicity $\mu(A), \nu (A) \in \{0,1\}$. Hence (replacing $A$ by its complement if necessary) we have $\mu(A)= 1$ and $\nu(A)= 0$, so $\mu \perp \nu$, which proves (a).
Part (b) is an immediate consequence of (a), since $\mu \ll \nu$ implies $\mu \not \perp \nu$. Therefore, as $\mu$ and $\nu$ are ergodic and $\mu \not \perp \nu$, part (a) yields $\mu = \nu$, as we wanted to prove.
\hfill $\Box$
\begin{nada} \em
\label{paragraphPruebaTeoremaExistMedidasErgodicas} \index{teorema! de existencia de! medidas erg\'{o}dicas}
{\bf Proof of Theorem \ref{teoremaExistenciaMedErgodicas} (existence of ergodic measures)}
\end{nada}
Theorem \ref{teoremaExistenciaMedErgodicas} is an immediate corollary of Theorem \ref{teoremaconvexhullmedidasergodicas}, which we will prove below. The definitions of \em extremal points of a compact convex set \em in a topological vector space, and of the \em compact convex hull, \em are used in the following statement and are given right after it.
\begin{theorem} \label{teorema2_5} \label{teoremaErgodicasExtremales}
\label{teoremaconvexhullmedidasergodicas} \index{teorema! de medidas erg\'{o}dicas! extremalidad}
\index{medida! extremal}
\index{medida! erg\'{o}dica} \index{equivalencia de definiciones! de ergodicidad} \index{transformaci\'{o}n! erg\'{o}dica} \index{ergodicidad}
Let $T: X \mapsto X$ be continuous on the compact metric space $X$. Let ${\mathcal M}$ be the set of Borel probability measures on $X$, endowed with the weak$^*$ topology. Let ${\mathcal M}_ T \subset {\mathcal M}$ be the set of $T$-invariant probability measures, and let ${\mathcal E}_T \subset {\mathcal M}_T$ be the set of (invariant) probability measures that are ergodic for $T$.
Then:
{\bf (a)} ${\mathcal M}_T$ is compact and convex.
{\bf (b)} ${\mathcal E}_T$ coincides with the set of extremal points of ${\mathcal M}_T$.
{\bf (c)} ${\mathcal M}_T$ coincides with the compact convex hull of ${\mathcal E}_T$.
\end{theorem}
\vspace{.5cm}
We will prove Theorem \ref{teoremaExistenciaMedErgodicas} later in this section. Before doing so, we need to define convexity, convex hulls and extremal points, and review the properties of these notions:
{\bf Convex combinations and extremal points} \index{combinaci\'{o}n convexa}
Let us recall the following definitions and the Krein-Milman theorem from Functional Analysis:
$\bullet$ Let ${\mathcal A}$ be a nonempty subset of a topological vector space. A \em convex combination \em of points of ${\mathcal A}$ is any point of the space that can be written as a finite linear combination $$t_1 a_1 + t_2 a_2 + \ldots + t_k a_k, $$ with $a_i \in {\mathcal A}$, \ $0 \leq t_i \leq 1$ for all $1 \leq i \leq k$, and $\sum_{i= 1}^k t_i= 1$.
We denote by $ec({\mathcal A})$ the set of all convex combinations of points of ${\mathcal A}$, called the convex hull of ${\mathcal A}$. Observe that ${\mathcal A} \subset ec({\mathcal A})$.
$\bullet$ The set ${\mathcal A}$ is said to be \em convex \em if it contains all convex combinations of its points, that is, ${\mathcal A} = ec({\mathcal A})$. It is immediate to deduce, by induction on $k \geq 2$, that a set ${\mathcal A}$ is convex if and only if $$ta + (1-t) b \in {\mathcal A} \ \ \ \ \forall \ t \in [0,1], \ \ \forall \ a,b \in {\mathcal A}.$$
$\bullet$ The \em closed convex hull \em of ${\mathcal A}$ is $\overline {ec({\mathcal A})}$, where $\overline{\ \ \ }$ denotes closure.
\index{envolvente convexa}
$\bullet$ If ${\mathcal A}$ is such that $\overline {ec({\mathcal A})}$ is compact, we call this last set \em the compact convex hull \em of ${\mathcal A}$.
$\bullet$ If ${\mathcal K} $ is a nonempty subset of a topological vector space, an \em extremal point \em of ${\mathcal K}$ (when it exists) is a point $a \in {\mathcal K}$ such that the only convex combinations $t b + (1-t) c = a $, with $0 \leq t \leq 1$ and $b, c \in {\mathcal K}$, are those for which $b = a$ or $c= a$.
\vspace{.2cm}
$\bullet$ {\bf Krein-Milman Theorem}
\em Every nonempty compact convex subset of a locally convex topological vector space contains extremal points and coincides with the compact convex hull of the set of its extremal points. \em
A proof of the Krein-Milman theorem can be found, for instance, in \cite[Theorem 3.21]{RudinFuncional}.
\begin{exercise}\em Let $x_1, x_2 \in \mathbb{R}^2$. Check that every convex combination of $\{x_1, x_2\}$ lies in the segment with endpoints $x_1, x_2$, and conversely. To illustrate the Krein-Milman theorem, consider a regular polygon $K$ in $\mathbb{R}^2$ ($K$ is the union of the interior of the polygon with its boundary). Then $K$ is compact. Check with geometric arguments that: (a) the polygon $K$ is convex; (b) the extremal points of $K$ are the vertices of the polygon; (c) every point of $K$ is a convex combination of its vertices. In the polygon example the number of extremal points is finite. To illustrate the Krein-Milman theorem when the number of extremal points is infinite, consider a circle $S^1 \subset \mathbb{R}^2$ and the compact set $K$ that is the union of the circle $S^1$ (the boundary of $K$) with the bounded region enclosed by $S^1$ (the interior of $K$). Then $K$ is compact. Check with geometric arguments that: (d) $K$ is convex; (e) the set of extremal points of $K$ is the circle $S^1$; (f) every point of $K$ is a convex combination of two extremal points. \end{exercise}
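Part (c) of the exercise can be checked numerically for the unit square, whose extremal points are its four vertices. The bilinear weights below are one explicit choice of convex coefficients (a sketch; the function name and sample point are ours):

```python
# Every point of the unit square K = [0,1]^2 (a compact convex set) is a
# convex combination of its four vertices, which are its extremal points.
vertices = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def convex_weights(x, y):
    """Bilinear weights t_i >= 0 with sum 1 such that (x, y) = sum t_i * v_i."""
    return [(1 - x) * (1 - y), x * (1 - y), (1 - x) * y, x * y]

w = convex_weights(0.3, 0.7)
rx = sum(t * v[0] for t, v in zip(w, vertices))
ry = sum(t * v[1] for t, v in zip(w, vertices))
print(w, (rx, ry))  # nonnegative weights summing to 1, reconstructing (0.3, 0.7)
```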
{\bf Proof of Theorem \ref{teoremaconvexhullmedidasergodicas}}
{\em Proof: }
{\bf (a)} It is immediate to check that ${\mathcal M}_T$ is convex: indeed, if $\mu_1, \mu_2 \in {\mathcal M}_T$ then for every Borel set $B$ we have $$\mu_i(T^{-1}(B))= \mu_i(B), \ \ \ i= 1,2.$$ Hence, for any $0 \leq t \leq 1$, $$[t\mu_1 + (1-t) \mu_2] (T^{-1}(B)) =[t\mu_1 + (1-t) \mu_2] ( B),$$ so $t \mu_1 + (1-t) \mu_2 \in {\mathcal M}_T$, proving that ${\mathcal M}_T$ is convex.
Let us now prove that ${\mathcal M}_T$ is compact.
Recall that ${\mathcal M}$ is compact in the weak$^*$ topology (this was proved in Section \ref{seccionpruebateoexistmedinvariantes}). Hence, to prove that ${\mathcal M}_T$ is compact, it suffices to show that ${\mathcal M}_T$ is closed in ${\mathcal M}$. Let $\mu_n \rightarrow \mu$ in ${\mathcal M}$ with $\mu_n \in {\mathcal M}_T$. To deduce that $\mu \in{\mathcal M}_T$ it suffices to recall that the operator $T^*:{\mathcal M} \mapsto {\mathcal M}$, defined by $[T^*\mu] (B) = \mu(T^{-1}(B))$ for every Borel set $B$, is continuous. (The continuity of $T^*$ was proved in Section \ref{seccionpruebateoexistmedinvariantes}.) Then $$T^* \mu = \lim_ n T^*\mu_n = \lim_n \mu_n = \mu,$$ whence $\mu \in {\mathcal M}_T$. This finishes the proof that ${\mathcal M}_T$ is closed in the compact metric space ${\mathcal M}$, and therefore ${\mathcal M}_T$ is compact.
{\bf (b)} Let us prove that $\mu$ is ergodic for $T$ if and only if it is an extremal point of ${\mathcal M}_T$. First assume that $\mu$ is not ergodic. Then there exists a $T$-invariant Borel set $A \subset X$ such that $0 < \mu(A) < 1$. Let $A^c= X \setminus A$ and consider the probability measures defined, for every Borel set $B$, by: $$\mu_1 (B) := \frac{\mu(B \cap A)}{\mu(A)}, \ \ \ \ \ \mu_2(B):= \frac{\mu(B \cap A^c)}{\mu(A^c)}.$$ Both $\mu_1$ and $\mu_2$ are $T$-invariant, because $A$ and $A^c$ are $T$-invariant sets. By construction, $$\mu = t \mu_1 + (1-t) \mu_2, \ \ \ \ \mbox{ where } t:= \mu(A).$$ Note that $\mu_1 \neq \mu_2$, because $\mu_1(A) = 1$ and $ \mu_2(A) = 0$. Moreover $0 <t < 1$, whence $$\mu \neq \mu_1, \ \ \mu \neq \mu_2.$$ Therefore $\mu$ is not an extremal point of ${\mathcal M}_T$. We have proved that if $\mu$ is not ergodic, then $\mu$ is not an extremal point of ${\mathcal M}_T$.
Now let us prove the converse. Assume that $\mu$ is ergodic; we prove that $\mu$ is an extremal point of ${\mathcal M}_T$. Suppose $\mu = t \mu_1 + (1-t) \mu_2$, where $ 0 \leq t \leq 1$ and $\mu_1, \mu_2 \in {\mathcal M}_T$. On the one hand, if $t= 0$ or $t= 1$ then $\mu = \mu_2$ or $\mu = \mu_1$. On the other hand, if $0 < t < 1$ then $\mu_1 \ll \mu$ and $\mu_2 \ll \mu$ (for if $\mu(B)= 0$ is a sum of two nonnegative terms, each term must be zero, whence $0=\mu_1(B)=\mu_2(B)$). Let $A$ be a $T$-invariant set. By hypothesis $\mu$ is ergodic, so $\mu(A)= 0$ or $\mu(A^c)= 0$. Since $\mu_i \ll \mu$ for $i= 1,2$, we deduce that $\mu_i(A) = 0$ or $\mu_i(A)= 1$. Hence each $\mu_i$ is ergodic. By Theorem \ref{teoremaErgodicaMutuamSingular}, $\mu_i \ll \mu$ with $\mu_i$ and $\mu$ both ergodic implies $\mu_i = \mu$. In this case we therefore have $\mu= \mu_1 = \mu_2$. We have proved, in all cases, that the only convex combinations yielding $\mu$ are those for which $\mu = \mu_1$ or $\mu = \mu_2$. By definition, $\mu$ is extremal, as we wanted to show.
{\bf (c)} This is a direct consequence of (a), (b) and the Krein-Milman Theorem. \hfill $\Box$
\vspace{.3cm}
{\bf End of the proof of Theorem \ref{teoremaExistenciaMedErgodicas}:
Existence of ergodic measures}
{\em Proof: }
By part (c) of Theorem \ref{teoremaconvexhullmedidasergodicas}, $${\mathcal M}_T = \overline{ec({\mathcal E}_T)},$$ where ${\mathcal E}_T$ denotes the set of ergodic measures for $T$ and ${\mathcal M}_T$ that of all $T$-invariant probability measures. By Theorem \ref{teoremaExistenciaMedInvariantes}, ${\mathcal M}_T \neq \emptyset$. Hence ${\mathcal E}_T \neq \emptyset$. To prove the last claim of Theorem \ref{teoremaExistenciaMedErgodicas} we apply the definition of the closure $\overline{ec({\mathcal E}_T)}$ of the convex hull of the set ${\mathcal E}_T$ of ergodic probability measures. Recall that the convex hull $ec({\mathcal E}_T)$ is the set of all measures obtained as finite convex combinations of ergodic measures. Since every invariant measure $\mu$ lies in $\overline{ec({\mathcal E}_T)}$, $\mu$ can be approximated, as closely as desired in the weak$^*$ topology of the space of probability measures, by measures that are finite convex combinations of ergodic measures.
\hfill $\Box$
\begin{exercise}\em
Let $T: X \mapsto X$ be a continuous transformation on a compact metric space.
(I) Assume there exists a $T$-invariant measure $\mu$ that is positive on open sets (not necessarily ergodic).
(a) Prove that for every nonempty open set $V$ there exists an ergodic measure $\nu_V $ such that $\nu_V(V) >0$.
Hint: By contradiction, if $\nu(V)= 0$ for every ergodic measure $\nu$, then $\rho(V)= 0$ for every probability measure $\rho$ that is a finite convex combination of ergodic measures. Prove that $\rho(V)= 0$ for every measure $\rho$ in the compact convex hull of the ergodic measures (with the weak$^*$ topology of the space ${\mathcal M}$ of probability measures). Use Theorem \ref{teoremaconvexhullmedidasergodicas} to conclude that $\mu(V)= 0$, contradicting the hypothesis.
(b) Using the Poincar\'{e} Recurrence Lemma, show that the set of recurrent points is dense in $X$.
(c) Conclude that $\Omega(T) = X$.
(II) Assume that for every open set $V$ there exists an invariant probability measure $\mu_V$ (not necessarily ergodic) such that $\mu_V(V) >0$.
(d) Show that there exists an ergodic invariant probability measure $\nu_V$ such that $\nu_V(V) >0$.
(e) Show that there exists an invariant probability measure $\rho$ positive on open sets. Hint: Take a countable basis of open sets $\{V_i\}_{i \geq 1}$ and show that the measures $\rho_n := \sum_{i= 1}^n (1/2^i) \mu_{V_i}$ (which are not probability measures, but are finite) satisfy $0 <\rho_n (X) \leq 1$, are $T$-invariant, and that $\rho= \lim^*_{n \rightarrow + \infty} \rho_n$ exists in the weak$^*$ topology of the space ${\mathcal M}^1$ of finite measures uniformly bounded by 1.
\end{exercise}
\section{Ergodic Theorems}
\label{chapterTeoBirkhoff}
The standing hypotheses for this chapter are the following:
$(X, {\mathcal A})$ is a measurable space, $T: X \mapsto X$ is a measurable transformation
preserving a probability measure $\mu $, and $f: X \mapsto {\mathbb{R}}$ is a
measurable function.
Consider the following result: \index{medida! invariante}
\em Let $T: X \mapsto X$ be measurable on the compact metric space $X$. Let $\mu$ be a probability measure on the Borel sigma-algebra. Then $\mu$ is $T$-invariant if and only if for every continuous $f: X \mapsto \mathbb{R}$
\begin{equation} \label{eqn27}\int f \,d \mu = \int f \circ T \, d \mu\end{equation}
Moreover, equality \em (\ref{eqn27}) \em holds for every continuous $f$ if and only if it holds for every $f \in L^p(\mu)$, for any natural number $p \geq 1$. \em
\begin{exercise}\em
Prove the above claim.
\end{exercise}
\subsection{The Birkhoff-Khinchin Ergodic Theorem: statement and corollaries}
\begin{definition} {\bf Time averages } \index{promedio! temporal} \label{definitionPromediosBirkhoff} \index{promedio! de Birkhoff}
\em
Let $(X, {\mathcal A})$ be a measurable space.
Let $T: X \mapsto X$ be measurable, preserving a probability measure $\mu$ (not necessarily ergodic). Let $f \in L^p (\mu)$ for some natural number $p \geq 1$.
We denote by $\widetilde {f}^+ $, or simply $\widetilde f $, the
limit of the so-called \em orbital averages (also time averages, or Birkhoff averages) towards
the future \em of $f$, at the points $x \in X$ where it exists, that
is:
$$\widetilde f^+ (x)= \lim _{n \rightarrow + \infty} \frac{1}{n} \big(f (x)+ f \circ T (x) +
\ldots + f \circ T^{n-1}(x)\big)$$
If moreover $T$ is invertible, we denote by $\widetilde f^- $ the
limit of the orbital (or time, or Birkhoff) averages towards the past, at
the points $x$ where it exists. That is:
$$\widetilde f^- (x) = \lim _{n \rightarrow + \infty} \frac{1}{n} \big(f (x)+
f \circ T^{-1}(x) + \ldots + f \circ T ^{-(n-1)}(x)\big)$$
\end{definition}
\begin{theorem}
{\bf Birkhoff-Khinchin Ergodic Theorem} \index{teorema! Birkhoff-Khinchin} \label{theoremBirkhoff} \index{teorema! erg\'{o}dico}
Let $(X, {\mathcal A})$ be a measurable space.
If $T: X \mapsto X$ is measurable and preserves a probability
measure $\mu$, then:
\begin{itemize}
\item [a) ] For every $f \in L^1(\mu )$ the limit $\widetilde f (x)= \lim _{n \rightarrow \infty}
\frac{1}{n} \sum _{j= 0}^{n-1} f \circ T ^j (x)$ exists for $\mu$-a.e. $x
\in X$.
\item[b) ] $\widetilde f$ is $T$-invariant, that is: $\widetilde f \circ
T = \widetilde f \; \mu$-a.e. More precisely, for every $x \in X$,
$\widetilde f (x)$ exists if and only if $\widetilde f
(T(x))$ exists, and in that case $\widetilde f(x) = \widetilde f(T(x))$.
\item[c) ] For every natural number $p \geq 1$, if $f \in L^p (\mu ) \subset L^1(\mu)$,
then $\widetilde f \in L^p(\mu )$ and the convergence also holds
in $L^p(\mu )$.
\item [d) ] $\int f \, d \mu = \int \widetilde f \, d \mu$
\end{itemize}
\end{theorem}
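As a numerical illustration of the theorem: for the irrational rotation $T(x) = x + \alpha \pmod 1$, which preserves Lebesgue measure and is uniquely ergodic, the Birkhoff averages of a continuous $f$ converge to the space average $\int f \, d\mbox{Leb}$. A minimal sketch, with parameters of our own choosing:

```python
import math

def birkhoff_average(f, T, x, n):
    """(1/n) * sum_{j=0}^{n-1} f(T^j x): the time average of f along the orbit."""
    total = 0.0
    for _ in range(n):
        total += f(x)
        x = T(x)
    return total / n

alpha = (math.sqrt(5) - 1) / 2                 # irrational rotation number
T = lambda x: (x + alpha) % 1.0                # circle rotation, Lebesgue-invariant
f = lambda x: math.sin(2 * math.pi * x)        # continuous, integral 0 over [0,1)

avg = birkhoff_average(f, T, x=0.1, n=200_000)
print(avg)  # close to the space average 0
```

For this well-distributed rotation number the discrepancy of the orbit decays like $(\log n)/n$, so the average is already very close to $0$ at this orbit length.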
Birkhoff's proof of Theorem \ref{theoremBirkhoff} can be found in \cite{Birkhoff}. A different proof can be found, for instance, in
\cite[proof of Theorem 1.14, pp. 38-39]{Walters}, in
\cite[Theorem 2.1.5]{Keller}, or in
\cite[pp. 114-122]{Mane} (see also \cite{ManeIngles}).
The Birkhoff-Khinchin Ergodic Theorem is a particular case of the so-called {\bf Kingman Subadditive Ergodic Theorem} \cite{Kingmann}, which establishes the convergence of the sequence $\{f_n / {n}\}_n$ where $f_n$, instead of necessarily being a Birkhoff sum, is \em a subadditive sequence of functions \em in $L^1(\mu)$. More precisely, one assumes the hypothesis $$f_{n+m} \leq f_n + f_m \circ T^n \ \ \ \forall \ \ n, m \in \mathbb{N}^+,$$ with $f_1 \in L^1(\mu)$. In particular, $f_n = \sum_{j= 0}^{n-1} f_1 \circ T^j$ is an example of a subadditive sequence.
Kingman's Subadditive Ergodic Theorem states that, for every measure $\mu$ invariant under $T$ and every subadditive sequence $\{f_n\}_{n \geq 1}$ of real functions in $L^1(\mu)$, \em the sequence $\{f_n/n\}_n$ converges $\mu$-a.e. \em Hence this theorem generalizes the Birkhoff-Khinchin Ergodic Theorem. The proof of Kingman's Subadditive Theorem can be found in \cite{Kingmann}. Another proof, which does not use the Birkhoff-Khinchin Ergodic Theorem and can therefore replace a proof of that theorem, can be found in \cite{Bochi_KingmannNotes}.
\begin{corollary} \em \label{corollary1}
{\bf Equality of the time averages towards the future and towards the past. } \index{promedio! temporal}
\em
If $T$ is measurable, invertible, with measurable inverse, and preserves a
probability measure $\mu$, then for every $f \in L^1(\mu )$
both $\widetilde f^+$ \em (the time average towards the future) \em and $\widetilde f^-$ \em (the time average towards the past) \em exist $\mu$-a.e.,
and they are equal: $\widetilde f^+ = \widetilde f^- \;\mu$-a.e. \em
(See Definition \ref{definitionPromediosBirkhoff} for $\widetilde f^+$ and $\widetilde f^-$.)
Exercise \ref{exerciseBirkhoffPatras} below gives a guide for the proof of Corollary \ref{corollary1}.
\end{corollary}
\begin{exercise}\em \label{exerciseBirkhoffPatras}
Prove Corollary \ref{corollary1} as a consequence of the Birkhoff-Khinchin Theorem. Hints: To show that $\widetilde f^+(x)$ and $\widetilde f^-(x)$ exist for $\mu$-a.e. $x \in X$, apply Birkhoff's theorem to $T^{-1}$, prove that a measure $\mu$ is $T$-invariant if and only if it is $T^{-1}$-invariant, and use that the intersection of two sets of $\mu$-measure $1$ has $\mu$-measure $1$. To prove that $\widetilde f^+(x) = \widetilde f^-(x)$ for $\mu$-a.e. $x \in X$, denote, for each natural number $n \geq 1$, $f^+_n := (1/n) \sum_{j= 0}^{n-1} f \circ T^j$ and $f^-_{n} := (1/n) \sum_{j= 0}^{n-1} f \circ T^{-j}$. By Birkhoff's theorem, $f^-_n - f^+_n$ converges in $L^1(\mu)$ to $\widetilde f^- - \widetilde f^+$. Hence for every $\epsilon >0$ there exists $n$ large enough such that $\|\widetilde f^- - \widetilde f^+\|_{L^1} \leq \|f^-_n - f^+_n\|_{L^1} + \epsilon$. It then suffices to prove that $\int |f^-_n - f^+_n| \, d \mu$ tends to zero as $n \rightarrow + \infty$. Check that for every $x \in X$ the following identity holds: $$f^-_{n+1}(x) - f^+_{n+1}(x) = f_{n+1}^-(x) + f^+_{n+1}(x) - 2 f^+_{n+1}(x) = $$ $$=\frac{2n+1}{n+1} \, f^+_{2n+1}(T^{-n}(x)) \ \ + \frac{f(x)}{n+1} \ \ - \ \ 2 f^+_{n+1}(x)$$
Since $f^+_{2n+1}$ converges to $\widetilde f^+$ in $L^1(\mu)$, and since the measure $\mu$ and the function $\widetilde f^+$ are $T$-invariant, we have, for all $n$ sufficiently large:
$$\int |f^+_{2n+1} - \widetilde f^+| \, d \mu < \epsilon$$
$$\int |f^+_{2n+1} - \widetilde f^+| \, d \mu = \int |f^+_{2n+1}\circ T^{-n} - \widetilde f^+ \circ T^{-n}| \, d \mu = $$ $$=\int |f^+_{2n+1} \circ T^{-n}- \widetilde f^+| \, d \mu < \epsilon$$
Putting all of the above together, deduce, for all $n$ sufficiently large, that: $$\|f^-_{n+1} - f^+_{n+1} \|_{L^1(\mu)} \leq $$ $$\frac{2n+1}{n+1} \ \|f^+_{2n+1} \circ T^{-n} - \widetilde f^+ \|_{L^1(\mu)} \ \ + \ \ \frac{\|f\|_{L^1(\mu)}}{n+1} \ \ + \ \ 2 \, \| f^+_{n+1} - \widetilde f^+\|_{L^1(\mu)} $$ $$\leq 3\epsilon + \epsilon + 2\epsilon = 6 \epsilon.$$
\end{exercise}
\begin{corollary} \label{corolarioPromediosTransitividad}
{\bf Averages of transition measures.} \index{promedio! de medida de transitividad}
For every $T$-invariant measure $\mu$ and all measurable sets $A$ and $B$, the following limit exists:
$$\tau(A, B) = \lim _{n \rightarrow + \infty} \frac{1}{n} \sum _{j=0}^{n-1}
\mu (T^{-j}(A) \cap B)$$
\end{corollary}
{\em Proof: } Denote by $\chi_C$ the characteristic function of any set $C$. Let $$I_n = \frac{1}{n} \sum_{j= 0}^{n-1}
\mu(T^{-j}(A) \cap B) = $$ $$\frac{1}{n} \sum_{j= 0}^{n-1}\int
\chi_{T^{-j}(A)} \chi _B \, d \mu = \int \Big( \frac{1}{n} \sum_{j=
0}^{n-1} \chi_A \circ T^j\Big) \chi_B \, d \mu. $$ The integrand on the
right-hand side is dominated by $1 \in L^1(\mu)$, and by the Birkhoff-Khinchin theorem it converges $\mu$-a.e. to $\widetilde \chi_A \, \chi_B$. By the dominated convergence theorem, $\lim I_n = \int
\widetilde \chi_A \chi _B \, d \mu. \; \; \; \; \Box$
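The limit $\tau(A, B)$ can be watched numerically. A sketch for the circle rotation with Lebesgue measure, computing each $\mu(T^{-j}(A) \cap B)$ exactly as an arc intersection, where $T^{-j}(A) = A - j\alpha \pmod 1$ (helper names and parameters are ours). Since the rotation is ergodic, the Ces\`{a}ro averages approach $\mu(A)\mu(B)$, consistently with part c) of Theorem \ref{teoremaergodicidad} below:

```python
import math

def overlap(a0, a1, b0, b1):
    """Length of [a0, a1) ∩ [b0, b1), for subintervals of [0, 1)."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def shifted_arc_meet(start, length, b0, b1):
    """Length of (([start, start + length) mod 1) ∩ [b0, b1))."""
    end = start + length
    if end <= 1.0:
        return overlap(start, end, b0, b1)
    # the shifted arc wraps around 0: split it into two pieces
    return overlap(start, 1.0, b0, b1) + overlap(0.0, end - 1.0, b0, b1)

alpha = math.sqrt(2) - 1          # irrational rotation angle
A = (0.0, 0.5)                    # Leb(A) = 0.5
B = (0.2, 0.6)                    # Leb(B) = 0.4
n = 100_000
# average of mu(T^{-j}(A) ∩ B) over j < n, with T^{-j}(A) = A - j*alpha (mod 1)
cesaro = sum(shifted_arc_meet((A[0] - j * alpha) % 1.0, A[1] - A[0], *B)
             for j in range(n)) / n
print(cesaro)  # approaches Leb(A) * Leb(B) = 0.2
```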
\begin{exercise}\em
Prove that the limit $\tau(A, B)$ of Corollary \ref{corolarioPromediosTransitividad} satisfies the inequalities
$$\mu(A)- \sqrt{\mu(A) [1 - \mu(B)]} \leq \tau(A, B) \leq \sqrt{\mu(A) \mu(B) }.$$
Conclude that:
$\tau(A, B) = 0$ if $\mu(A) = 0$ or $\mu(B) = 0$;
if $\tau(A, B) = 0$ then $\mu(A) + \mu(B) \leq 1$;
$\tau(A, B) = 1$ if and only if $\mu(A) = \mu(B)= 1$.
Hint for the first part: In the proof of Corollary \ref{corolarioPromediosTransitividad} it was shown that $\tau(A, B) = \int \widetilde \chi_A \chi _B \, d \mu$. In $L^2(\mu)$ one has
$$\int f g \, d \mu \leq \|f\|_{L^2} \, \|g\|_{L^2}.$$ To prove the right-hand inequality, apply this to $\widetilde \chi_A$ and $\chi _B$, and recall that $0 \leq \widetilde \chi_A \leq 1$, so $\widetilde \chi_A^2 \leq \widetilde \chi_A$. Deduce that $\|\widetilde \chi_A\|_{L^2} \leq \sqrt {\mu(A)}$ by applying Birkhoff's theorem. To prove the left-hand inequality, apply the right-hand one to $A$ and $B^c$, and prove that $\tau(A, B) + \tau (A, B^c) = \mu(A)$.
\end{exercise}
\begin{corollary} \em {\bf Averages of a sequence of functions.}
\label{corollary3} \index{promedio! de sucesi\'{o}n de funciones}
\em Let $T: X \mapsto X$ be measurable, preserving a probability
measure $\mu$. Let $f_n \in L^1(\mu )$ be a sequence of
functions, dominated by some $f_0 \in L^1(\mu)$, converging
$\mu$-a.e. and in $L^1(\mu )$ to $f \in L^1(\mu )$. Then
$$\lim _{n \rightarrow + \infty} \frac{1}{n}\sum _{j=0}^{n-1} f_{j} \circ T^j = \widetilde f \ \ \
\mu \mbox{-a.e. and in }L^1(\mu ). $$
\end{corollary}
\begin{exercise}\em
Prove Corollary \ref{corollary3}. Hint: It suffices to
prove it for $f_n \geq 0$ with $f_n \rightarrow 0$ a.e. Let $G_k (x) = \sup _{n \geq k}
\{f_n(x)\}$. Then $G_k \rightarrow 0$ a.e., and by
dominated convergence $\|G_k\|_{L^1} \rightarrow 0$. Let $\widetilde
G_k = \lim _{n \rightarrow \infty}(1/n) \sum _{j= 0}^{n-1} G_k \circ
T^j$, which exists $\mu$-a.e. by Birkhoff's theorem. The sequence $\widetilde G_k$ is decreasing in $k$, so it has a limit, and by
Fatou's lemma $$0 \leq \int \lim \widetilde G_k \, d \mu \leq \lim
\int \widetilde G_k \, d \mu = \lim \int G_k \, d \mu = 0.$$ Hence
$\widetilde G_k \rightarrow 0$ a.e. Finally, use that
$$\limsup _{n \rightarrow \infty} \frac{1}{n}
\sum _{j=0}^{n-1} f_j \circ T^j (x) = \limsup _{n \rightarrow \infty} \frac{1}{n}
\sum _{j= k}^{n-1} f_j \circ T^j (x) \leq
\widetilde G_k(x).$$
\end{exercise}
\begin{definition} \em \index{tiempo medio de estad\'{\i}a} \index{promedio! de tiempo de estad\'{\i}a}
Given a measurable set $A$, the \em mean sojourn time
$\tau _A(x)$ of a point $x \in X$ in $A$ \em is the
following limit, when it exists:
$$\tau _A(x) =
\lim _{n \rightarrow + \infty} \frac{1}{n} \# \{j \in \mathbb{N}: 0 \leq j
\leq n-1,\; T^j(x) \in A\}$$
\end{definition}
\begin{exercise}\em
Prove the following theorem, for every measurable set $A \subset X$ and every measure $\mu$ invariant under $T: X \mapsto X$:
\em The mean sojourn time $\tau_A $ exists $\mu $-a.e., and moreover
the convergence is in $L^p(\mu )$ for every natural number $p \geq 1$.
Furthermore $\int \tau _A \, d \mu = \mu (A)$. \em Hint:
Observe that the mean sojourn time in $A$ is the function
$\tau_A = \widetilde \chi _A$, where $\chi_A$ is the characteristic function of $A$. Apply the
Birkhoff-Khinchin ergodic theorem.
\end{exercise}
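A numerical check of the exercise, under assumptions of our own choosing (the irrational rotation with Lebesgue measure and an arc $A$): by equidistribution, the mean sojourn time of the orbit in $A$ equals $\mbox{Leb}(A)$:

```python
import math

def mean_sojourn(T, x, in_A, n):
    """Fraction of the times j < n at which T^j(x) lies in A."""
    hits = 0
    for _ in range(n):
        hits += 1 if in_A(x) else 0
        x = T(x)
    return hits / n

alpha = math.sqrt(2) - 1
T = lambda x: (x + alpha) % 1.0       # irrational rotation, Lebesgue-invariant
in_A = lambda x: 0.0 <= x < 0.3       # the arc A = [0, 0.3)

tau = mean_sojourn(T, 0.05, in_A, 100_000)
print(tau)  # close to Leb(A) = 0.3
```

This is the truncated Birkhoff average of $\chi_A$, so the convergence $\tau_A = \widetilde\chi_A$ is exactly what the exercise asserts.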
\begin{definition} \em {\bf Sets of total probability for $T$} \label{definitionProbabilidadTotal} \index{probabilidad! total} \index{conjunto! de probabilidad total}
Let $T:X \mapsto X$ be a measurable transformation such that the set ${\mathcal M}_T$ of $T$-invariant probability measures is nonempty. A measurable set $\Lambda \subset X$ is said to have \em total probability for $T$ \em if $\mu(\Lambda)= 1$ for every $\mu \in {\mathcal M}_T$.
With this terminology, the first part of the previous exercise can be restated as follows:
\em For every measurable set $A \subset X$, the mean sojourn time $\tau_A(x)$ exists on a set of points of total probability. If moreover $\mu(A) >0$, then $\tau_A$ is not $\mu$-a.e. equal to zero \em (since $\int \tau_A \, d \mu = \mu(A)$).
\end{definition}
In Corollary \ref{CorolarioDescoErgodicaEspaciosMetricos} we will see a criterion for a set to have total probability which requires knowing only the ergodic measures.
\begin{exercise}\em \label{exercise4} \label{exerciseLimiteBirkhoffConjuntoEstable} {\bf Stable and unstable sets.} \index{conjunto! estable} \index{conjunto! inestable}
Let $T: X \mapsto X$ be Borel measurable and invertible with measurable
inverse, on a compact metric space $X$, such that the set of $T$-invariant measures is nonempty. Let $f: X \mapsto \mathbb{C} $ be a
continuous complex function. Let $\Lambda \subset X$ be the set of total probability on which the limits $\widetilde f ^+ (x)$ and $\widetilde f^-(x)$ of the Birkhoff averages towards the future and towards the past, respectively, exist and coincide. Let $x_0 \in \Lambda$. The
stable and unstable sets through the
point $x_0$ (which may reduce to $\{x_0\}$) are defined respectively by:
$$W^s(x_0) := \{y \in X: \lim _{n \rightarrow + \infty} \mbox{$\,$dist$\,$} (T^ny, T^n x_0) = 0\}$$
$$W^u(x_0) := \{y \in X: \lim _{n \rightarrow - \infty} \mbox{$\,$dist$\,$} (T^ny, T^n x_0) = 0\}$$
Prove that for every $y \in W^s(x_0)$ the limit $\widetilde f^+(y)$ exists and $\widetilde f^+(y) =
\widetilde f^+(x_0) $. Prove that for every $y \in W^u(x_0)$ the limit $\widetilde f^-(y)$ exists and $\widetilde f^-(y) = \widetilde f^-(x_0). $
\end{exercise}
\begin{exercise}\em {\bf Birkhoff averages for real functions outside $L^1(\mu)$.}
Prove the following generalization of the Birkhoff-Khinchin
theorem:
Let $(X, {\mathcal A})$ be a measurable space and let
$T: X \mapsto X$ be measurable, preserving the probability measure
$\mu$. Let $f : X \mapsto \mathbb{R}$ be measurable.
\em Then, $\mu$-a.e., either $\widetilde{| f |} \; (x) = + \infty $ or $\widetilde f(x)$ exists and is finite. \em
Hint:
It suffices to prove it for $f \geq 0$. Given $c >0$,
let $$X_c = \{ x \in X: \liminf_{n \rightarrow + \infty} \frac{1}{n} \sum _{j= 0}^{n-1} f \circ T^j (x) \leq c
\}$$
The set $X_c$ is $T$-invariant. Let $f_m (x) = \min (f(x), m)$. Use Birkhoff's theorem to prove that $\widetilde f_m$ exists for $\mu$-a.e. $x \in X_c$. Let $f|_{X_c} = \chi_{X_c} f$, where $\chi_{X_c}$ denotes the characteristic function of $X_c$. Prove that $f|_{X_c} \in L^1(\mu)$ using the monotone convergence theorem and the equality $\int f_m|_{X_c} \, d \mu = \int \widetilde {f_m|_{X_c}}\, d \mu \leq c $ from Birkhoff's theorem. Deduce that the limit $\widetilde f (x)$ exists for $\mu$-a.e. $x$ in $X_c$. Take the union of the sets $X_c$ over all natural numbers $c \geq 1$ and conclude that $ \widetilde {f } = + \infty$ on the complement of that union.
\end{exercise}
\begin{exercise}\em Let $T: X \mapsto X$ be a measurable transformation on a compact metric space $X$ such that the set ${\mathcal M}_T$ of $T$-invariant probability measures is nonempty.
Prove that the following set has total probability for $T$ (i.e. it has probability 1 for every $\mu \in {\mathcal M}_T$):
$$ \big \{ x \in X: \mbox{ there exists } \lim _{n \rightarrow +\infty}
\frac{1}{n} \cdot \sum _{j=0} ^{n-1} f \circ T ^{p j} (x) \: \: \:
\forall \: f \in C^0(X, \mathbb{R}), \: \: \forall \: p \geq 1 \big \} $$
Hint: ${\mathcal M} _{T^p} (X) \supset {\mathcal M} _T (X) $.
\end{exercise}
\subsection{Other characterizations of ergodicity}
In this section, unless otherwise stated, $(X, {\mathcal A})$ denotes a measurable space and $T: X \mapsto X$ a measurable transformation preserving some probability measure $\mu$.
Recall Definitions \ref{definitionErgodicidadI} and \ref{definitionErgodicidadII} of ergodicity, and Theorems \ref{teoremaDefinicionesErgodicidad}, \ref{TeoremaErgodicidadIII} and \ref{teoremaErgodicasExtremales}, in which we gave different characterizations of ergodicity. We now add the following:
\begin{theorem} \label{teoremaergodicidad} {\bf Ergodicity IV}
Let $T: X \mapsto X$ be measurable, preserving a probability
measure $\mu$. The following properties are equivalent:
{\bf a) } $T$ is ergodic with respect to $\mu$.
{\bf b) } For every $f \in L^1(\mu ) $,
$$\widetilde f (x) = \int f \, d \mu \; \; \; \mu -\mbox{a.e.}$$
{\bf c) } For every pair of measurable sets $A$ and $B$:
$$\lim _{n \rightarrow + \infty} \frac{1}{n} \sum _{j= 0}^{n-1} \mu (T^{-j}(A) \cap B)
= \mu (A) \mu (B).$$
\end{theorem}
The statement and proof of Theorem \ref{teoremaergodicidad} were taken, with slight modifications, from \cite[pp.~130--131]{Mane} (see also \cite{ManeIngles}).
\vspace{.2cm}
{\bf Proof that a) $\Rightarrow $ b) in
Theorem \ref{teoremaergodicidad}:} By the Birkhoff-Khinchin theorem, $\int
\widetilde f \, d \mu = \int f \, d \mu$. It therefore suffices to show
that $\widetilde f$ is constant $\mu$-a.e.
By the Birkhoff-Khinchin theorem $\widetilde f(x)$ exists
$\mu$-a.e.; moreover, $\widetilde f(x)$ exists if and only if
$\widetilde f(Tx)$ exists, and in that case $\widetilde f(Tx) = \widetilde
f(x)$. In other words, the set $A= \{x \in X: \widetilde
f (x) \mbox { exists } \}$ satisfies $\mu(A)= 1$, $T^{-1}(A) = A$ and
$\widetilde f(x) = \widetilde f(Tx)$ for every $x \in A$.
Let $g: X \mapsto \mathbb{C} $ be defined by $g(x) = \widetilde f(x)$ if $x
\in A$, and $g(x) = 0$ if $x \not \in A$. Then
$g \circ T (x)= g(x)$ for every $x \in X$. By the claim proved
above, $g = \mbox{const.}\; \; \; \mu$-a.e. But by construction $g=
\widetilde f \; \; \; \mu$-a.e., whence
$\widetilde f = \mbox{const.} \; \; \; \mu$-a.e., as desired. \hfill $\Box$
\vspace{.2cm}
{\bf Proof that b) $\Rightarrow $ c) in Theorem
\ref{teoremaergodicidad}:}
Observe that $\mu (T^{-j}A \cap
B) = \int \chi_{T^{-j}(A)}\chi_B \, d \mu = \int (\chi_A \circ
T^j) \chi _ B \, d \mu$, whence:
$$I_n= \frac{1}{n} \sum _{j=0}^{n-1} \mu (T^{-j}A \cap
B) = \int \Big(\frac{1}{n} \sum _{j=0}^{n-1} \chi_A \circ T^j\Big) \chi _
B \, d \mu$$ By dominated convergence (the integrand is bounded
by the constant function $1 \in L^1(\mu )$), we obtain:
\begin{equation} \label{eqn1} \lim I_n = \int \lim _{n \rightarrow + \infty}
\Big(\frac{1}{n}\sum _{j=0}^{n-1} \chi_A \circ T^j\Big) \chi_ B \, d \mu =
\int (\widetilde \chi _A) \chi _B \, d \mu \end{equation} By
the Birkhoff-Khinchin theorem applied to $\chi _A \in L^1(\mu )$,
together with hypothesis b), the function $\widetilde \chi _A $ is
constant $\mu$-a.e.\ and satisfies
$$\widetilde \chi_A (x) = \int \widetilde \chi_A \, d \mu = \int \chi_A \, d \mu = \mu (A)\; \; \; \mu\mbox{-a.e. } x \in X$$
Substituting in (\ref{eqn1}) yields \
$\lim_n I_n = \int (\mu (A)) \chi _B \, d \mu = \mu (A) \mu (B). $ \hfill $\Box$
\vspace{.2cm}
{\bf Proof that c) $\Rightarrow $ a) in Theorem
\ref{teoremaergodicidad}:} It suffices to show that if $A, B
\subset X$ are measurable sets with positive $\mu$-measure, then there exists
$j \geq 1$ such that $\mu (T^{-j}(A) \cap B) >0$ (cf.\ Definition \ref{definitionErgodicidadI}). By hypothesis:
$$\lim \frac{1}{n} \sum _{j=1}^{n-1} \mu (T^{-j}(A) \cap B) =
\lim \frac{1}{n} \sum _{j=0}^{n-1} \mu (T^{-j}(A) \cap B) = \mu
(A) \mu (B) >0
$$ Hence $\sum _{j=1}^{n-1} \mu (T^{-j}(A) \cap B) >0$ for all sufficiently
large $n$, so $ \mu (T^{-j}(A) \cap B)>0 $ for some $j
\geq 1$, as we wanted to prove. \hfill $\Box$
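As a numerical illustration of characterization c) (a Python sketch, not part of the text's argument; the angle $\alpha = \sqrt 2 - 1$ and the arcs $A$, $B$ are arbitrary choices, and we anticipate the ergodicity of the irrational rotation proved in Section \ref{sectionRotacionIrracionalErgodica}), consider the rotation $T(x) = x + \alpha$ (mod 1) with the Lebesgue measure. For arcs, $\mu(T^{-j}(A) \cap B)$ is the length of $((A - j\alpha) \bmod 1) \cap B$, and the Ces\`{a}ro averages approach $\mu(A)\mu(B)$:

```python
import math

def circle_overlap(s, L, u, v):
    """Length of the intersection ([s, s+L) mod 1) ∩ [u, v), with [u, v) ⊂ [0, 1)."""
    s %= 1.0
    pieces = [(s, min(s + L, 1.0))]
    if s + L > 1.0:                       # the shifted arc wraps past 1
        pieces.append((0.0, s + L - 1.0))
    return sum(max(0.0, min(b, v) - max(a, u)) for a, b in pieces)

alpha = math.sqrt(2) - 1                  # irrational rotation angle (arbitrary choice)
A = (0.0, 0.3)                            # arc A = [0.0, 0.3), Leb(A) = 0.3
B = (0.5, 0.9)                            # arc B = [0.5, 0.9), Leb(B) = 0.4

# mu(T^{-j}(A) ∩ B) = length of ((A - j*alpha) mod 1) ∩ B
N = 20000
cesaro = sum(circle_overlap(A[0] - j * alpha, A[1] - A[0], B[0], B[1])
             for j in range(N)) / N
print(cesaro)                             # approaches Leb(A) * Leb(B) = 0.12
```

The finite average differs from $\mu(A)\mu(B)$ only by the discrepancy of the sequence $j\alpha$ (mod 1), which is small for this badly approximable $\alpha$.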
\begin{exercise}\em {\bf Ergodicity V.} \label{ejercicio0} \index{measure! ergodic} \index{ergodicity} \index{transformation! ergodic}
Let $T: X \mapsto X$ preserve a probability measure $\mu$.
Prove that:
{\bf (a) } $T$ is ergodic with respect to $\mu $ if and only if every
measurable set $A$ such that $T^{-1}(A) \subset A$ or $T^{-1}(A) \supset A$ satisfies $\mu
(A) = 0$ or $\mu(A) = 1$.
{\bf (b) } $T$ is ergodic with respect to $\mu$ if and only if every $f \in
L^1(\mu)$ such that $f \circ T \leq f\; \mu$-a.e.\ satisfies $f =
\mbox{const.} \ \ \ \ \mu$-a.e.
{\bf (c) } $T$ is ergodic with respect to $\mu$ if and only if every measurable set $A$ satisfying $T^{-1}(A) \subset A$ or $A \subset T^{-1}(A)$ has $\mu(A) = 0$ or $\mu(A)= 1$.
{\bf (d) } $T$ is ergodic with respect to $\mu$ if and only if every function $f \in
L^1(\mu)$ such that $f \circ T \leq f\; \mu$-a.e.\ or $f \circ T \geq f\; \mu$-a.e.\ satisfies $f =
\mbox{const.} \ \ \ \ \mu$-a.e.
Hint for (a): It is enough to prove it when $T^{-1}(A) \supset A$, since otherwise we may replace $A$ by its complement. Denote by $\chi_{A}$ the characteristic function of $A$. Since $T^{-1}(A) \supset A$, prove that $\chi_{T^{-n}(A)}(x) = 1$ for every $x \in A$. Hence $\widetilde \chi_{A}(x) = 1$ for every $x \in A$, and using that $\widetilde \chi_A$ is constant $\mu$-a.e.\ when $\mu$ is ergodic, one deduces $\mu(A)= 1$ or $\mu(A)= 0$.
\end{exercise}
\begin{exercise}\em \label{ejercicio513}
Let $X$ be a compact metric space and let $T: X \mapsto X$ be Borel measurable and preserve a probability measure $\mu$.
Let $\{g_i: i\geq 1 \} $ be a countable dense subset of
$C^0(X, [0,1])$.
Prove that the following statements are equivalent:
i)
$\mu $ is ergodic.
ii)
$\widetilde f (y) = \int f \mbox{d} \mu \: \: \: \mu\mbox{-a.e.} \; y;
\: \: \forall \: f \in C^0 (X, \mathbb{R}) $
iii)
$\widetilde g_i (y) = \int g_i \mbox{d} \mu \: \: \: \mu \mbox{-a.e.} \; y;
\: \: \forall \: i \geq 1 $
Hint: Recall that $\mu $ is ergodic if and only if
$\widetilde h (y ) = \int h \mbox{d} \mu $ for $\mu$-a.e.\ $y$, for every $h \in L^1(\mu )$; and that the continuous functions are dense in $L^1 (\mu ) $.
\end{exercise} \index{measure! ergodic} \index{ergodicity} \index{transformation! ergodic} \index{equivalence of definitions! of ergodicity}
\begin{exercise}\em \label{exercise1}
Let $X$ be a compact metric space and let $T: X \mapsto X$ be Borel measurable and preserve a probability measure $\mu$. Prove that $T$ is ergodic with respect to
$\mu$ if and only if for every continuous complex function $f: X \mapsto \mathbb{C} $,
the limit $\widetilde f$ of the Birkhoff averages of
$f$ is constant $\mu$-a.e. Hint: Use what was proved in Exercise \ref{ejercicio513} and check, using the Birkhoff-Khinchin theorem, that if $\widetilde f$ is a constant $\mu$-a.e., then this constant is $\int f \, d \mu$.
\end{exercise}
Let us return to the general case of a measurable space $(X, {\mathcal A})$ with a measurable transformation $T: X \mapsto X$ preserving a probability measure $\mu$.
By the Poincar\'{e} recurrence lemma (Theorem \ref{teoPoincare}), if a measurable set $A$ satisfies $\mu(A) >0$, then the forward orbit of $\mu$-a.e.\ $x \in A$ returns to $A$ infinitely many times. However, that lemma says nothing about the frequency of the visits or the length of the stays of the orbit of $x$ in the set $A$. Ergodic measures give exactly the asymptotic frequency with which the orbit of $x$ passes through $A$.
\begin{definition} \em {\bf Mean sojourn time.} \index{mean sojourn time} \index{average! sojourn time}
We call \em mean sojourn time \em $\tau _A (x)$ of the
orbit through $x \in X$ in a measurable set $A$ the limit:
$$\tau _A(x) = \lim _{n \rightarrow + \infty} \frac{\#\{ j \in \mathbb{N}: 0 \leq j \leq n-1, T^j(x)
\in A
\}}{n} = $$ $$ \lim _{n \rightarrow + \infty} \frac{1}{n} \sum _{j= 0}^{n-1} \chi _A \circ T^j (x)=
\widetilde \chi _A (x)$$
\end{definition}
\vspace{.3cm}
\begin{theorem} {\bf Ergodicity VI.} \index{measure! ergodic} \index{ergodicity} \index{equivalence of definitions! of ergodicity} \index{transformation! ergodic}
Let $T: X \mapsto X$ be measurable and preserve a probability measure $\mu$. Then $\mu$ is ergodic
for $T$ if and only if for every measurable set $A$ the
mean sojourn time $\tau_A (x)$ is constant $\mu$-a.e. Moreover, in that case $\tau _A(x) = \mu (A)\; \;
\; \mu$-a.e.
\end{theorem}
The statement and proof of this theorem were taken, with slight modifications, from \cite[p.~133]{Mane} (see also \cite{ManeIngles}).
{\em Proof:} By the Birkhoff-Khinchin theorem
$\widetilde \chi _A \in L^1(\mu)$ and $\widetilde \chi_A =
\widetilde \chi_A \circ T \; \; \mu$-a.e.
If $\widetilde \chi _A = \mbox{const.} \; \; \mu$-a.e., then, applying
the Birkhoff-Khinchin theorem again:
\begin{equation} \label{eqn31}\widetilde \chi_A(x) =
\int \widetilde \chi _A \, d \mu = \int \chi _A \, d \mu = \mu
(A)\; \; \; \; \mu\mbox{-a.e.} \end{equation}
If $\tau_A = \widetilde \chi _A = \mbox{const.} \; \; \mu$-a.e.\ for every
measurable $A$, take in particular $A$ such that $T^{-1}(A) = A$.
To prove the ergodicity of $\mu$ we must show that $\mu (A)
$ is zero or one. We have
$$\widetilde \chi_A = \lim \frac{1}{n} \sum _{j= 0}^{n-1} \chi_A \circ T^j$$
But $\chi _A \circ T^j = \chi _{T^{-j}(A)} = \chi _A$. Hence:
\begin{equation} \label{eqn32}\widetilde \chi _A(x) = \chi _A(x) \in \{0,1\} \; \; \; \mu\mbox{-a.e. }x \in X \end{equation}
By (\ref{eqn31}) and (\ref{eqn32}) we get $\mu (A) \in \{0,1\}$, and $\mu$ is ergodic,
as we wanted to prove.
Conversely, if $\mu$ is ergodic, then by part b) of
Theorem \ref{teoremaergodicidad}, $\tau _A = \widetilde \chi _A = \mbox{const.} \; \;
\mu$-a.e. \hfill $ \Box$
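A quick numerical check of Theorem Ergodicity VI for the irrational rotation of the circle (a Python sketch, not part of the text; the golden-mean angle and the arc $A$ are arbitrary choices, and the ergodicity of the rotation is proved in Section \ref{sectionRotacionIrracionalErgodica}): the mean sojourn time in $A$ along a finite piece of orbit is close to $\mbox{Leb}(A)$, and the value does not depend on the starting point.

```python
import math

alpha = (math.sqrt(5) - 1) / 2    # golden-mean rotation angle (irrational)
A = (0.2, 0.5)                    # the set A = [0.2, 0.5), Leb(A) = 0.3

def sojourn_fraction(x, n):
    """Fraction of the first n orbit points T^j(x) = x + j*alpha (mod 1) lying in A."""
    hits = 0
    y = x % 1.0
    for _ in range(n):
        if A[0] <= y < A[1]:
            hits += 1
        y = (y + alpha) % 1.0
    return hits / n

tau = sojourn_fraction(0.123, 100000)
print(tau)   # close to Leb(A) = 0.3, for any starting point
```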
\begin{exercise}\em \label{exercise2} \index{measure! ergodic} \index{transformation! ergodic} \index{equivalence of definitions! of ergodicity} \index{ergodicity} \label{exerciseTildefLocalmenteConstante}
Let $T:X \mapsto X$ be Borel measurable on a connected topological
space $X$, preserving a
probability measure $\mu$ which is positive on open sets. Prove that:
(a) A function $g \in L^1 (\mu) $ invariant under $T$ (i.e.\ $g
\circ T = g \; \mu $-a.e.) is constant $\mu$-a.e.\ if and only
if it is locally constant a.e.\ (that is: there exists an open
cover of $X$ such that on each open set $V$ of the
cover one has $g|_V = K_V $ constant for $\mu$-a.e.\ point of
$V$).
(b) $T$ is ergodic if and only if every function $g \in L^1(\mu )$
that is invariant under $T$ is locally constant.
\end{exercise}
\subsection{Unique Ergodicity} \index{measure! ergodic} \index{transformation! ergodic} \index{unique ergodicity}
\index{transformation! uniquely ergodic} \index{ergodicity}
\begin{definition}
\em
The transformation $T$ is \em uniquely ergodic \em if there exists
exactly one $T$-invariant probability measure.
By what we saw in Section \ref{sectionExistenciaMedidasErgodicas}, this unique measure
is extremal in the set of invariant probability measures; hence it is ergodic.
\em
\end{definition}
\begin{theorem} \label{ergouni} \label{teoremaErgodicidadUnica}
Let $T$ be continuous on a compact metric space.
The following statements are equivalent:
\begin{description}
\item[i)]
$T$ is uniquely ergodic.
\item[ii)]
For every $f \in C^0(X, \mathbb{R})$ and every $x \in X $ the following limit exists and is a number independent of $x$: $$\widetilde f := \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} f \circ T^j(x).$$
\item[iii)]
For every $f \in C^0(X, \mathbb{R})$, the sequence of continuous functions $\{ f_n \} $
defined by
$$ f_n = \frac{1}{n} \sum _{j=0}^{n-1} f \circ T^j $$
converges uniformly to a constant as $n \rightarrow \infty $.
\end{description}
\end{theorem}
{\em Proof:}
i) implies iii):
Let $\mu $ be the unique measure in ${\mathcal M}_T(X)$. We will prove that
$f_n $ converges uniformly to $\int f \, d \mu $ on $X$.
By contradiction, suppose there exist $\epsilon > 0 $ and a sequence
$n_j \rightarrow \infty $ such that $\sup _{x \in X} |f_{n_j} (x) - \int
f \, d \mu | \geq \epsilon $ for every $j \geq 1$.
Since the supremum is a maximum, there exists $x_j \in X$ where it is attained.
By the Riesz representation theorem, there exists a measure $\mu _j$ (not necessarily $T$-invariant) such that:
$$ \forall \: g \in C^0(X, \mathbb{R}): \:\:\: \int g \, d \mu _j =
\frac{1}{n_j} \sum _ {i =0 }^{n_j -1} g \circ T^i (x_j) $$
By the compactness of the space ${\mathcal M}(X)$, there exists a convergent
subsequence of these measures $\mu _j$. For simplicity we keep
the same notation for the subsequence as for the original sequence:
$$\mu _j \rightarrow \nu $$ We claim that $\nu $ is $T$-invariant.
Indeed:
$$ \int g \, d T^* \nu = \int g \circ T \, d \nu = \lim \int g\circ T \, d \mu _j = \lim \frac{1}{n_j} \sum _{ i=0}^{n_j -1} g \circ T^{i+1} (x_j) $$
$$ = \lim \frac{1}{n_j} \sum _{ i=0}^{n_j -1} g \circ T^{i} (x_j)
= \lim \int g \, d \mu _{j} = \int g \, d \nu $$
(The two sums in the last line differ only in the boundary terms $g(x_j)$ and $g \circ T^{n_j}(x_j)$, whose contribution $\frac{1}{n_j}\big(g \circ T^{n_j}(x_j) - g(x_j)\big)$ tends to zero because $g$ is bounded.)
Since $T$ is uniquely ergodic by hypothesis, we get $\mu = \nu $.
Then
$$ \int f \, d \mu = \int f \, d \nu = \lim f _{n_j}(x_j)$$
But by construction
$$ |f_{n_j} (x_j) - \int f \, d \mu | \geq \epsilon \:\:\: \mbox{ for every }
j \geq 1, $$ contradicting
the previous equality.
iii) implies ii) because uniform convergence of continuous functions on $X$ implies convergence
at every point of $X$.
ii) implies i):
Let $\Lambda $ be the positive linear functional defined by
$$\Lambda (f) = \widetilde f $$ for every $f \in C^0(X, \mathbb{R})$.
For every $T$-invariant measure $\mu $, by Birkhoff's
theorem,
$$ \int f \, d \mu = \int \widetilde f \, d \mu = \widetilde f = \Lambda (f) $$
By the Riesz representation theorem, there exists a unique measure $\mu $ satisfying
$$\int f \, d \mu = \Lambda (f) $$
Hence there exists a unique $T$-invariant measure $\mu$.
\hfill $\Box$
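Property iii) can be observed numerically for the irrational rotation of the circle, which is shown to be uniquely ergodic in Section \ref{sectionRotacionIrracionalErgodica}. The following Python sketch (not part of the text; the test function $f$ and the angle are arbitrary choices) estimates $\sup_x |f_n(x) - \int f \, d\mu|$ on a grid of the circle:

```python
import math

alpha = math.sqrt(2) - 1                      # irrational rotation angle

def f(x):
    return math.cos(2 * math.pi * x)          # continuous, with ∫ f dLeb = 0

def birkhoff(x, n):
    """Birkhoff average f_n(x) for the rotation T(x) = x + alpha (mod 1)."""
    return sum(f((x + j * alpha) % 1.0) for j in range(n)) / n

n = 2000
grid = [k / 200 for k in range(200)]          # sample points on the circle
sup_dev = max(abs(birkhoff(x, n)) for x in grid)
print(sup_dev)   # small: f_n is uniformly close to the constant ∫ f dLeb = 0
```

For this particular $f$ the geometric sum $\sum_{j<n} e^{2\pi i j \alpha}$ is bounded, so $|f_n| \leq 1/(n \sin \pi\alpha)$, matching the small value observed.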
As an example, we will see in the next section that the irrational rotation
of the circle is uniquely ergodic.
\begin{definition} \em {\bf Minimal sets} \index{set! minimal} \label{definicionMinimalTopologico}
Let $T$ be a continuous transformation on a compact metric
space $X$.
A subset $\Lambda \subset X$ is \em minimal \em (from the topological point of view) if it is
compact, nonempty, forward invariant (that is: $ \Lambda \subset T^{-1}(\Lambda)$), and no proper subset of
$\Lambda $ is compact, nonempty and forward invariant.
\vspace{.2cm}
We have the following characterization (see part (a) of Exercise \ref{ejercicioMinimalTopologico}):
$\Lambda$ \em is minimal if and only if \em
$\Lambda $ is compact, nonempty and $T$-invariant (that is: $T^{-1}(\Lambda)= \Lambda$) and contains no proper subsets that are compact, nonempty and forward invariant under $T$.
\vspace{.2cm}
If moreover the continuous map $T$ is invertible with continuous inverse, then $\Lambda$ \em is minimal if and only if \em
$\Lambda $ is compact, nonempty and $T$-invariant, and contains no proper subsets that are also compact, nonempty and $T$-invariant (see part (b) of Exercise \ref{ejercicioMinimalTopologico}).
\vspace{.2cm}
Finally, it is easy to see that for any continuous $T$ (not necessarily invertible), a
compact, nonempty, invariant set $\Lambda $ \em is minimal if and only if \em all
its forward orbits are dense in $\Lambda $. This is because the closure
of each of them is a nonempty, compact, forward invariant set
contained in $\Lambda $ (see also part (a) of Exercise \ref{ejercicioMinimalTopologico}).
\end{definition}
\begin{exercise}\em \label{ejercicioMinimalTopologico}
(a) Prove that the following statements are equivalent (and therefore any of them can be used as the definition of $\Lambda $ minimal):
(i) $\Lambda \subset X$ is compact, nonempty and forward invariant under $T$, and contains no proper compact, nonempty, forward invariant subsets.
(ii) $\Lambda \subset X$ is compact, nonempty and $T$-invariant, and contains no proper compact, nonempty subsets that are forward invariant under $T$. (Hint for (i) $\Rightarrow$ (ii): prove that if $\Lambda $ satisfies (i), then so does $T^{-1}(\Lambda) \setminus \Lambda$.)
(iii) $\Lambda \subset X$ is compact, nonempty and, for every $x \in \Lambda$, the closure of $\{T^j(x)\}_{j \geq 0} $ equals $\Lambda$.
(b) Assume now that $T$ is a homeomorphism (that is, $T$ is continuous, invertible and its inverse $T^{-1}$ is continuous). Prove that the following statements are equivalent (and therefore any of them can be used as the definition of $\Lambda$ minimal):
(i) $\Lambda \subset X$ is compact, nonempty and forward invariant under $T$, and contains no proper compact, nonempty, forward invariant subsets.
(ii) $\Lambda \subset X$ is compact, nonempty and $T$-invariant, and contains no proper compact, nonempty, $T$-invariant subsets.
(iii) $\Lambda \subset X$ is compact, nonempty and, for every $x \in \Lambda$, the closure of $\{T^j(x)\}_{j \geq 0} $ equals $\Lambda$.
(iv) $\Lambda \subset X$ is compact, nonempty and backward invariant under $T$ (that is, $T^{-1}(\Lambda) \subset \Lambda$), and contains no proper compact, nonempty, backward invariant subsets.
(v) $\Lambda \subset X$ is compact, nonempty and, for every $x \in \Lambda$, the closure of $\{T^{-j}(x)\}_{j \geq 0} $ equals $\Lambda$.
\end{exercise}
Let us see how unique ergodicity relates to minimal sets:
\begin{theorem} \index{unique ergodicity} \index{transformation! uniquely ergodic} \label{theoremErgodicidadUnica->Minimal}
If $T$ is continuous on a compact metric space $X$ and uniquely
ergodic, then there exists a unique minimal set $\Lambda $, and moreover
$\Lambda $ is the support of
the $T$-invariant measure.
\end{theorem}
{\bf Furstenberg's example: } The converse of Theorem \ref{theoremErgodicidadUnica->Minimal} is false:
Furstenberg in \cite{Furstenberg} (see also
\cite[Chapter 2, \S 7, p.~172]{Mane} or \cite{ManeIngles}) gave an example of a continuous transformation of the torus that preserves the Lebesgue measure,
for which the whole torus is the unique minimal set but the Lebesgue measure
is not ergodic. Hence, in Furstenberg's example, the transformation is
not uniquely ergodic, yet there exists a unique minimal set (and moreover this minimal set is
the support of a $T$-invariant measure).
Other examples proving the existence of more than one ergodic measure for maps with a unique minimal set can be found in \cite{BachurinStatAttr}.
\vspace{.3cm}
{\em Proof of Theorem} \ref{theoremErgodicidadUnica->Minimal}:
Let $\mu $ be the unique invariant measure of $T$. Let $\Lambda $ be the compact support
of $ \mu $, defined as $$ \Lambda := \{ x \in X: \forall \: V \mbox{ neighborhood of } x, \: \:
\mu (V) > 0 \}. $$ $\Lambda $ is closed in the compact space $X$, hence compact.
Let us see that $\Lambda \subset T^{-1}(\Lambda)$. Let $x \in \Lambda$ and let $y= T(x)$. We must prove that $y \in \Lambda$. For every neighborhood
$U$ of $y$, $T^{-1}(U) $ is a neighborhood of $x$, so $0 < \mu (T^{-1} (U))
= \mu (U) $. This proves that $ y \in \Lambda$; hence
$\Lambda$ is forward invariant.
Let $\Lambda _0 $ be compact, nonempty and forward
invariant. Let $\widehat T = T | _{\Lambda _{0}} : \Lambda _0 \mapsto
\Lambda _0 $. By the theorem on the existence of invariant measures,
there exists a probability measure $ \widehat \nu $ that is $\widehat T$-invariant.
Let $\nu $ be the probability measure on $X$ defined by:
$$ \nu (A) = \widehat \nu (A \cap \Lambda _0 ) $$ The support of
$\nu $ is contained in $\Lambda _0 $. Indeed, if $x \not \in \Lambda_0$, then there exists a neighborhood $V$ of $x$ disjoint from $\Lambda_0$. Hence $\nu(V) = 0$ and $x$ does not belong to the support of $\nu$.
Let us prove that $\nu $ is $T$-invariant:
$$\nu (T^{-1}(A)) = \widehat \nu (T^{-1} (A) \cap \Lambda _0 ) =
\widehat \nu \{ x \in \Lambda _0: T(x) \in A \} =$$ $$ \widehat \nu (\widehat T ^{-1}
(A \cap \Lambda _0)) = \widehat \nu (A \cap \Lambda _0 ) = \nu (A). $$
Since $T$ is uniquely ergodic, $\nu = \mu $. Hence
$\Lambda = \mbox{supp } \nu \subset \Lambda _0 $.
We have proved that every compact, nonempty, forward invariant set $\Lambda _0 $
contains $\Lambda $.
Two results follow:
First, if $\Lambda _0 $ is moreover contained in
$\Lambda $, then it coincides with $\Lambda $.
Hence $\Lambda $ is minimal.
Second: if $\Lambda _0 $ is minimal, then it coincides with $\Lambda $.
Hence $\Lambda $ is the unique minimal set.
\hfill $\Box$
\begin{exercise}\em
Let $X$ be a compact metric space and let $T:X \mapsto X$ be measurable, such that there exists some invariant measure $\mu$. Define
$$ P =
\big \{ x \in X: \mbox{ there exists } \lim_{n \rightarrow \infty}
\frac{1}{n} \cdot \sum _{j=0} ^{n-1} h \circ T ^{ j} (x) \
\forall \ h : X \mapsto \mathbb{R} \mbox{ measurable and bounded} \big \} $$
a)
Prove that $P$ is the set of periodic points of $T$.
Hint: 1) Construct a sequence $\{a _i \}_{i \geq 0 } $ of
zeros and ones such that the limit as $n \rightarrow \infty $
of $ 1/n \cdot \sum _{j=0}^{n-1} a _j $ does not exist. (For example: 1 one, 1 zero,
10 ones, 10 zeros, 100 ones, 100 zeros, 1000 ones, 1000 zeros, etc.)
2) If $x $ is not periodic, show that $\widetilde \chi _A (x) $ does not exist
for $A = \{ T^i(x): a_i = 1 \}$.
b)
Let $A \subset X$ be any given Borel set. Prove that the set
$ \{x\in X: \mbox { there exists } \lim_{n\rightarrow
\infty} (1/n) \, \sum _{j=0} ^ {n-1} \chi _A (T^j (x)) \} $ has total probability.
c)
Prove that if $T$ is continuous and uniquely ergodic, and there exists
a minimal set with infinitely many points, then $P = \emptyset $.
(An example of such a $T$ is the irrational rotation of the circle, as we will see next.)
\end{exercise}
\subsection{Ergodicity of the irrational rotation} \index{rotation! irrational} \index{ergodicity! of the irrational rotation} \label{sectionRotacionIrracionalErgodica}
\begin{theorem} \label{teoremaErgodicidadRotacionIrracional}
\label {irrota}
The irrational rotation of the circle is uniquely ergodic.
Hence, it is ergodic with respect to the Lebesgue measure.
\end{theorem}
A very short and classical proof of Theorem \ref{teoremaErgodicidadRotacionIrracional} requires applying Spectral Theory to the study of the ergodicity and mixing properties of dynamical systems. We will not cover that topic in this text, so we give another proof, pedestrian and longer. The short proof using Spectral Theory can be found, for example, in \cite[Theorem 1.8]{Walters}.
{\em Proof: }
Let $T(x) = x+ x_0 $ (mod 1) for every $x \in S^1 = \mathbb{R}/\!\sim \ (\mbox{mod } 1)$, where $x_0 \in (0,1) $ is a given irrational number.
Applying part ii) of Theorem \ref{teoremaErgodicidadUnica}, to prove that $T$ is uniquely ergodic we will show that for each $f \in C^0(S^1, \mathbb{R})$ the sequence $$f_n = \frac{1}{n} \sum_{j= 0}^{n-1} f \circ T^j$$ converges at every point to a constant as $n \rightarrow + \infty$.
By the Birkhoff-Khinchin ergodic theorem, for Lebesgue-almost every point $x_1 \in S^1$ the limit $\lim_{n \rightarrow + \infty} f_n(x_1) = \widetilde f(x_1)$ exists. Given $\epsilon >0$, since $f$ is uniformly continuous on $S^1$ (because it is continuous on the compact set $S^1$), there exists $\delta >0$ such that $$|x-y|< \delta \ \Rightarrow \ |f(x)- f(y)| < \epsilon.$$ By the definition of the rotation $T$ we have the following equality mod $1$, for every point $y \in S^1$ and every $j \geq 0$:
$$T^j(y) = y + j x_0 = (y-x_1) + x_1 + j x_0 = (y-x_1) + T^j(x_1). $$
Hence,
$|y-x_1| < \delta \ \Rightarrow \ |f(T^j(y)) - f(T^j(x_1))| < \epsilon \ \ \forall \ j \in \mathbb{N} \ \Rightarrow $
$$|f_n(y) - f_n(x_1)| < \epsilon \ \ \forall \ n \in \mathbb{N} .$$
Let $N \geq 1$ be such that $|f_n(x_1) - \widetilde f (x_1)| < \epsilon$ for every $n \geq N$. We obtain:
\begin{equation} \label{equation20}|f_n(y) - \widetilde f(x_1) | < 2 \epsilon \ \ \forall \ n \geq N \mbox{ if } |y-x_1| < \delta\end{equation}
We have proved that for every $\epsilon >0$ there exists $\delta >0$ (a modulus of uniform continuity of $f$, hence independent of $x_1$) such that inequality (\ref{equation20}) holds for every point $y$ at distance less than $\delta$ from $x_1$. Hence $\limsup f_n (y) - \liminf f_n(y) \leq 4\epsilon$ if $|y- x_1| < \delta$. Since this inequality holds for Lebesgue-a.e.\ $x_1 \in S^1$, given $\epsilon >0$ and $y \in S^1$ we can always choose some $x_1 \in S^1$ such that $|y- x_1| < \delta$ and such that $\widetilde f (x_1)$ exists. We can therefore apply inequality (\ref{equation20}) to every $y \in S^1$, choosing for each $y$ a suitable $x_1$. We deduce that $\limsup f_n(y) - \liminf f_n(y) \leq 4 \epsilon$ for every $y \in S^1$. Since $\epsilon >0$ is arbitrary, the limit $$\widetilde f(y) = \lim f_n(y) \ \ \mbox{exists} \ \ \forall \ y \in S^1.$$ Applying inequality (\ref{equation20}) again, we deduce that \em the function $ \widetilde f $ is continuous. \em
The argument so far is valid for any rotation of the circle. We have not yet used that the rotation is irrational; we use it now to prove that $\widetilde f$ is constant.
The function $\widetilde f$ is invariant under $T$, that is $\widetilde f = \widetilde f \circ T$, since it is the limit of the Birkhoff time averages. Hence $\widetilde f$ takes a constant value along each orbit. Therefore, to prove that $\widetilde f$ is constant (knowing already that it is continuous) it is enough to prove that there exists a dense orbit.
To prove that some orbit is dense, it is enough to prove that there exists $x_1 \in S^1$ such that for every $\delta >0$ the orbit $o^+(x_1) = \{ T^n (x_1): \ n \geq 0\}$ is $\delta$-dense (i.e.\ every interval of length $\delta$ contains some point of $o^+(x_1)$). The transformation $T$ preserves the Lebesgue measure. Applying the Poincar\'{e} recurrence lemma, Lebesgue-a.e.\ point is recurrent. Choose a recurrent point $x_1$. Then there exists $n_j \rightarrow + \infty$ such that $$\lim_j |T^{n_j}(x_1) - x_1| = 0.$$ Fix $m_1 \geq 1$ such that $|T^{m_1}(x_1)- x_1| < \delta$. Observe that $T^{m_1}(x_1) - x_1 = m_1 x_0$ (all equalities are mod 1). Since $x_0 \in (0,1) \setminus \mathbb{Q}$, we have $m_1 x_0 \neq 0$ (mod $1$). Moreover $T^{2m_1}(x_1) = T^{m_1}(x_1) + m_1x_0$, whence
$$|T^{2m_1}(x_1) - T^{m_1}(x_1)| = |T^{m_1} (x_1) - x_1|= m_1 x_0 = a \in (0, \delta)$$
By induction on $k \geq 0$ we obtain
\begin{equation}
\label{equation21}
|T^{(k+1)m_1}(x_1) - T^{k m_1}(x_1)| = a\neq 0 \ \ \forall \ k \in \mathbb{N}. \end{equation}
We claim that the set $A:= \{T^{k m_1}(x_1): \ k \in \mathbb{N}\} \subset o^+(x_1)$ is $\delta$-dense in $S^1$. On the one hand, for different values of $k \in \mathbb{N}$, the corresponding points $T^{k m_1}(x_1) \in A$ are different. Indeed, if $T^{k m_1}(x_1) = T^{h m_1}(x_1)$, then $km_1 x_0 = h m_1 x_0$, whence $|k-h| m_1 x_0 = 0$ (mod $1$), where $k,h, m_1 \in \mathbb{N}$. Since $x_0 \not \in \mathbb{Q}$ we deduce that $h= k$. On the other hand, two consecutive points of $A$ satisfy equality (\ref{equation21}). Then $\bigcup_{k \in \mathbb{N}} (T^{km_1}(x_1) - a, T^{km_1}(x_1)] = S^1$, which shows that $A$ is $\delta$-dense in $S^1$ (since $\delta > a$). As $A \subset o^+(x_1)$, the orbit through $x_1$ is $\delta$-dense for every $\delta >0$, hence dense, finishing the proof of Theorem \ref{teoremaErgodicidadRotacionIrracional}. \hfill $\Box$
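The $\delta$-density argument above can be visualized numerically (a Python sketch, not part of the proof; the angle $\sqrt 2 - 1$ is an arbitrary irrational): sorting the first $N$ orbit points of $0$ and measuring the largest circular gap between consecutive points shows the gap shrinking as $N$ grows, so the orbit is $\delta$-dense for smaller and smaller $\delta$.

```python
import math

alpha = math.sqrt(2) - 1    # irrational rotation angle

def max_gap(N):
    """Largest circular gap left by the first N orbit points j*alpha (mod 1) of 0."""
    pts = sorted((j * alpha) % 1.0 for j in range(N))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + 1.0 - pts[-1])    # gap that wraps around past 1
    return max(gaps)

print(max_gap(10), max_gap(100), max_gap(1000))   # gaps shrink toward 0
```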
\begin{remark} \em \label{remarkRotacionIrracionalOrbitasDensas}
In the proof of Theorem \ref{teoremaErgodicidadRotacionIrracional} we showed that there exists a dense orbit $\{f^n(x_0)\}_{n \geq 0}$ for the irrational rotation of the circle $S^1= [0,1] \ (\mbox{mod } 1)$ given by $f(x) = x+ a \ (\mbox{mod } 1)$, where $a $ is irrational. This implies that
\begin{center}
\em Every orbit of the irrational rotation of the circle is dense. \em
\end{center}
{\em Proof: }
We have: $f^n(y_0) = y_0 + na = x_0 + na + (y_0 - x_0) = f^n(x_0) + (y_0 - x_0) \ (\mbox{mod } 1)$. Hence, the orbit $\{f^n(y_0)\}_{n \geq 0}$ is obtained from the orbit $\{f^n(x_0)\}_{n \geq 0}$ by rotating it by $y_0 - x_0 \ (\mbox{mod } 1)$. Since any rotation of the circle is a homeomorphism, it takes dense sets to dense sets. Therefore, there exists a dense orbit if and only if all orbits are dense. We already proved (at the end of the proof of Theorem \ref{teoremaErgodicidadRotacionIrracional}) that a dense orbit exists when $a$ is irrational. We conclude that all orbits are dense. \hfill $\Box$
\end{remark}
From the proof of Theorem \ref{teoremaErgodicidadRotacionIrracional} we deduce that rotations of the circle are ergodic (with respect to the Lebesgue measure) if and only if they are uniquely ergodic, and this happens if and only if the rotation is topologically transitive. More generally, topological
transitivity is equivalent to unique
ergodicity for the rigid rotation on any compact abelian
topological group (see for example \cite[page 266]{NicolPetersenEnciclopedia}).
\begin{exercise}\em
Consider the $k$-dimensional torus $$\mathbb{T}^k = (S^1)^k =[0,1]^k/(\mbox{mod} \{1\}^k),$$ with the group operation $+$ $(\mbox{mod} \{1\}^k)$. Let $x_0 \in \mathbb{T}^k$. Let $T$ be the translation $T(x) = x + x_0 \ (\mbox{mod} \{1\}^k) $ for every $x \in \mathbb{T}^k$. If $\widetilde x_0 \in {\mathbb{R}^k}$ is a representative of $x_0 \in \mathbb{T}^k$ and $\langle \cdot , \cdot \rangle$ denotes the usual inner product in $\mathbb{R}^k$, assume that
$\langle \widetilde x_0 , m \rangle \not \in \mathbb{Z} \; \; \; \forall \ m \in \mathbb{Z}^k \setminus \{0\} $.
a)
Prove that $T$ is uniquely ergodic.
b)
Prove that for every $x \in \mathbb{T}^k $, the Lebesgue measure is the
limit as $ n \rightarrow \infty $ of the measures
$ 1/n \cdot \sum _{j=0}^{n-1} \delta _{T^j (x)} $.
c) Prove that the Lebesgue measure is the unique probability
measure on the torus invariant under all translations, but that not every translation is uniquely ergodic.
{ Hint: translations that have periodic points are not uniquely ergodic.}
d) Deduce that for every ergodic translation of the torus, the torus itself is minimal and all orbits are dense.
\end{exercise}
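For part b) of the exercise, a numerical sanity check (a Python sketch, not part of the text; the translation vector $(\sqrt 2 - 1, \sqrt 3 - 1)$ and the box are arbitrary choices, the vector satisfying the stated independence hypothesis since $1, \sqrt 2, \sqrt 3$ are linearly independent over $\mathbb{Q}$): the fraction of time an orbit spends in a box approaches the Lebesgue measure of the box, regardless of the starting point.

```python
import math

# translation of the 2-torus by x0 = (a1, a2); the hypothesis
# <x0, m> ∉ Z for all m ∈ Z^2 \ {0} holds for this particular choice
a1, a2 = math.sqrt(2) - 1, math.sqrt(3) - 1
box = (0.1, 0.4, 0.3, 0.8)    # [0.1, 0.4) x [0.3, 0.8), Lebesgue measure 0.15

def visit_fraction(x, y, n):
    """Fraction of the first n orbit points of (x, y) that fall in the box."""
    hits = 0
    x %= 1.0
    y %= 1.0
    for _ in range(n):
        if box[0] <= x < box[1] and box[2] <= y < box[3]:
            hits += 1
        x, y = (x + a1) % 1.0, (y + a2) % 1.0
    return hits / n

frac = visit_fraction(0.0, 0.0, 200000)
print(frac)   # close to 0.3 * 0.5 = 0.15, the Lebesgue measure of the box
```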
\subsection{Mixing Transformations}
\index{transformation! mixing}
\index{measure! mixing}
\begin{definition}
\em \label{definicionmixing} Let $T: X \mapsto X$ be measurable and
preserve a probability measure $\mu$. We say that $T$ \em is
mixing with respect to $\mu$, or that $\mu$ is mixing with respect to $T$,
\em if for every pair $A,B$ of measurable sets:
\begin{equation} \label{eqn23a}\lim _{n \rightarrow + \infty} \mu (T^{-n} A \cap B ) = \mu (A) \mu (B).\end{equation}
\end{definition}
\begin{theorem} \label{teoremaMixingImplicaErgodica}
Every mixing transformation is ergodic.
\em (The converse is false, as we will see in Example \ref{rotirracnoesmixing}.)
\end{theorem}
{\em Proof: } Theorem \ref{teoremaergodicidad} shows
that $T$ is ergodic if and only if
\begin{equation} \label{eqn23b}\lim _{n \rightarrow + \infty} \frac{1}{n} \sum _{j=0}^{n-1}
\mu (T^{-j} A \cap B ) = \mu (A) \mu (B)\end{equation} If
equality (\ref{eqn23a}) from the definition of mixing holds, then
equality (\ref{eqn23b}) of ergodicity also holds. Indeed, if a sequence $\{a_n\}_{n \in \mathbb{N}}$ of real numbers converges to $a \in \mathbb{R}$, then (as is easily checked from the definition of limit) $\lim_{n \rightarrow + \infty} (1/n) \, \sum_{j=0}^{n -1} a_j = a$. \hfill $\Box $
Para probar que el
rec\'{\i}proco del Teorema \ref{teoremaMixingImplicaErgodica} es falso, basta ver que la medida de Lebesgue para la rotaci\'{o}n irracional en el c\'{\i}rculo (que ya probamos que es
erg\'{o}dica en el Teorema \ref{teoremaErgodicidadRotacionIrracional}) no es mixing. Probaremos que no es mixing en el Ejemplo
\ref{rotirracnoesmixing} de esta secci\'{o}n.
\begin{definition}
\em Sea $T: X \mapsto X$ Borel-medible en un espacio topol\'{o}gico
$X$. Se dice que $T$ \em es topol\'{o}gicamente mixing \em si y solo
si para toda pareja de abiertos $U$ y $V$ no vac\'{\i}os existe $n_0
\in \mathbb{N}$ tal que
$$T^{n} (U) \cap V \neq \emptyset \; \; \; \forall n \geq n_0 $$
\end{definition}
\begin{exercise}\em
Probar que \em si $T$ es Borel medible en un espacio topol\'{o}gico
$X$ y es mixing respecto a una medida de probabilidad $\mu$
positiva sobre abiertos, entonces es topol\'{o}gicamente mixing.
\end{exercise}
No toda medida erg\'{o}dica es mixing. En efecto:
\begin{example} \em
\label{rotirracnoesmixing}
\em La medida de Lebesgue en el c\'{\i}rculo no es mixing para la
rotaci\'{o}n irracional. \em
{\em Demostraci\'{o}n: } Sea la rotaci\'{o}n irracional $T:S^1
\mapsto S^1$ en el c\'{\i}rculo $S^1$, definida por $T(z) = z + \alpha \ (\mbox{m\'{o}d.} 1)$, donde
$\alpha$ es un n\'{u}mero irracional (que puede tomarse en $(0,1)$). Para demostrar que su \'{u}nica medida de probabilidad invariante (la medida $m$ de Lebesgue) no es mixing, basta demostrar que $T$ no es
topol\'{o}gicamente mixing. Sea
$U \subset S^1$ un intervalo abierto de longitud
$\epsilon:
0< \epsilon < (1/4) \min \{\alpha, 1 - \alpha \}$. Probaremos que, si
para cierto $n_0 \in \mathbb{N}$ se cumple $T^{n_0} (U) \cap U \neq
\emptyset$,
entonces $T^{n_0 +1}(U) \cap U = \emptyset$. De lo contrario
la longitud del intervalo uni\'{o}n $T^{n_0 +1}(U) \cup U \cup
T^{n_0}(U)$ ser\'{\i}a menor que $3 \epsilon < \min \{\alpha, 1 - \alpha
\}$, pero contendr\'{\i}a dos puntos $x_0$ y $T(x_0)= x_0 + \alpha$
que distan $ \min \{\alpha, 1 - \alpha \}$. \hfill $\Box$
\end{example}
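El comportamiento no mixing de la rotaci\'{o}n irracional puede visualizarse num\'{e}ricamente. El siguiente esbozo en Python (puramente ilustrativo; los nombres \texttt{medida\_interseccion}, \texttt{en\_U} y los par\'{a}metros de la grilla son elecciones nuestras, no del texto) estima $m(T^{-n}U \cap U)$ para un intervalo $U$ de longitud $0.1$: los valores oscilan entre casi $0$ y casi $m(U)$, y no convergen a $m(U)^2 = 0.01$.

```python
import math

# Esbozo ilustrativo: estimamos m(T^{-n} U \cap U) para la rotacion
# irracional T(x) = x + alfa (mod 1), con U = (0, 0.1), muestreando
# una grilla fina de [0,1).
alfa = (math.sqrt(5) - 1) / 2    # numero irracional en (0,1)
a, b = 0.0, 0.1                  # U = (a, b), con m(U) = 0.1

def en_U(x):
    return a < x % 1.0 < b

def medida_interseccion(n, puntos=200000):
    # fraccion de puntos x de U tales que T^n(x) pertenece a U;
    # multiplicada por m(U), aproxima m(T^{-n} U \cap U)
    cuenta, total = 0, 0
    for k in range(puntos):
        x = k / puntos
        if en_U(x):
            total += 1
            if en_U(x + n * alfa):
                cuenta += 1
    return (b - a) * cuenta / total

# Si m fuera mixing, estos valores tenderian a m(U)^2 = 0.01; en cambio
# oscilan: para n = 1 la interseccion es vacia, y para denominadores de
# buenas aproximaciones de alfa (como n = 13) es casi todo U.
valores = [medida_interseccion(n) for n in (1, 2, 3, 5, 8, 13)]
```

La oscilaci\'{o}n refleja que $T^{-n}U$ es simplemente $U$ trasladado en $-n\alpha$ (m\'{o}d. 1), sin ning\'{u}n efecto de \lq\lq esparcimiento\rq\rq.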
\begin{exercise}\em
Sea $T$ medible, invertible con inversa medible, que preserva una
medida de probabilidad $\mu$. Probar que $T$ \em es mixing si y
solo si $T^{-1}$ lo es. \em (Sugerencia: para dos conjuntos $A$ y
$B$ cualesquiera $\mu (T^{-n}(A) \cap B ) = \mu (T^{n}(B) \cap A)$.)
\end{exercise}
\begin{example} \em
\label{exampleTentMapMixing}
Sea en $S^1 = [0,1]/\sim \mbox{m\'{o}d. } 1$ el tent map $T$ definido por $T(x)= 2 x$ si $0 \leq x \leq 1/2$, y $T(x) = 2- 2 x$ si $1/2 \leq x \leq 1$. Probaremos el siguiente resultado:
\em La medida de Lebesgue $m$ en $S^1$ es mixing para el tent map $T$; luego es erg\'{o}dica. \em
\end{example}
{\em Demostraci\'{o}n: }
Por definici\'{o}n de mixing, debemos probar que \begin{equation}\label{eqn24}\lim_{n \rightarrow + \infty} m (T^{-n}A \cap B) = m (A) m (B)\end{equation} para toda pareja de conjuntos medibles $A$ y $B$. Si $A = \emptyset$ \'{o} $B = \emptyset$, entonces ambos miembros de la igualdad anterior son nulos y la igualdad es trivial. Consideremos entonces el caso en que $A$ y $B$ son no vac\'{\i}os. Fijemos el boreliano $A$ no vac\'{\i}o. Primero probaremos (\ref{eqn24}) cuando $B$ es un intervalo abierto; luego cuando $B$ es un abierto, despu\'{e}s para $B$ compacto, y finalmente para $B$ boreliano cualquiera.
Sea $B$ un intervalo abierto, con longitud $a$. Para cada $n \geq 1$, la gr\'{a}fica de $T^n$ est\'{a} compuesta por $2^n$ segmentos, con pendiente $2^n$ (en valor absoluto) cada uno. La imagen por $T^n$ de cualquier intervalo de longitud $1/2^{n-1}$ cubre todo el intervalo $[0,1]$ (croquizar la gr\'{a}fica). La preimagen por $T^n$ de cualquier boreliano $A$ no vac\'{\i}o est\'{a} formada por $2^n$ copias de $A$, todas homot\'{e}ticas a $A$ con raz\'{o}n $1/2^n$ y equidistantes en el segmento $[0,1]$. Luego, la intersecci\'{o}n $T^{-n}(A) \cap B$, cuando $B$ es un intervalo de longitud $a$, contiene al menos $k = [\mbox{parte entera}(a \cdot 2^n)]-1$ de esas copias homot\'{e}ticas a $A$ completas, y est\'{a} contenida en la uni\'{o}n de a lo sumo $k+2$ de ellas. Luego:
$$ ({ \mbox{parte entera}(a \cdot 2^n)}-1) \, \frac{m(A)} {2^n} \leq m(T^{-n}(A) \cap B) $$ $$\leq {(\mbox{parte entera}(a \cdot 2^n) + 1)} \, \frac{m(A)}{2^n}.$$
De las desigualdades anteriores se deduce que $\lim _n m(T^{-n}(A) \cap B) = a \cdot m (A) = m (B) m (A)$. Hemos probado (\ref{eqn24}) cuando $B$ es un intervalo.
Ahora consideremos el caso en que $B$ es un abierto no vac\'{\i}o en el c\'{\i}rculo. $B= \bigcup I_j$ donde $\{I_j\}$ es una colecci\'{o}n finita o infinita numerable de intervalos abiertos disjuntos dos a dos. Dividimos en dos subcasos: Si $B$ es uni\'{o}n finita de intervalos disjuntos dos a dos, o si es uni\'{o}n infinita numerable. Si $B = \bigcup_{j= 1}^N I_j$, tenemos
$$\lim_{n \rightarrow + \infty} m (T^{-n} (A) \cap B) = \lim_{n \rightarrow + \infty} \sum_{j= 1}^N m (T^{-n}(A) \cap I_j) = $$
$$= \sum_{j= 1}^N \lim_{n \rightarrow + \infty} m (T^{-n}(A) \cap I_j) = \sum_{j= 1}^N m (A) m (I_j) = m (A) \cdot m(B).$$
Si $B = \bigcup_{j= 1}^{+ \infty} I_j$, dado $\epsilon >0$ existe $N$ tal que \begin{equation} \label{eqn25a} 0 \leq m(B) - m(B_N) = m(B \setminus B_N) < \epsilon,\end{equation} donde $B_N = \bigcup_{j= 1}^N I_j$. Luego:
$$0 \leq m(T^{-n}(A) \cap B) - m(T^{-n}(A) \cap B_N) =$$ $$ =m (T^{-n}(A) \cap (B \setminus B_N)) \leq m (B \setminus B_N) < \epsilon \ \ \forall \ n \geq 0. $$ Es decir: \begin{equation} \label{eqn25b} 0 \leq m(T^{-n}(A) \cap B) - m(T^{-n}(A) \cap B_N) < \epsilon \ \ \forall \ n \geq 0\end{equation}
Por lo probado antes $\lim_{n \rightarrow + \infty} m(T^{-n}(A) \cap B_N) = m (A) m (B_N)$. Entonces, para todo $n$ suficientemente grande \begin{equation} \label{eqn25c}|m (T^{-n}(A) \cap B_N) - m(A) m(B_N)| < \epsilon.\end{equation} Reuniendo las desigualdades (\ref{eqn25a}), (\ref{eqn25b}), (\ref{eqn25c}), obtenemos, para todo $n$ suficientemente grande, la siguiente cadena de desigualdades:
$$|m (T^{-n}(A) \cap B) - m(A) m(B)| \leq |m (T^{-n}(A) \cap B) - m (T^{-n}(A) \cap B_N)| + $$ $$+ |m (T^{-n}(A) \cap B_N) - m(A) m(B_N)| + m(A) | m(B_N) - m(B)| < 3 \epsilon.$$
Lo anterior demuestra que $\lim_{n \rightarrow + \infty} m (T^{-n}(A) \cap B) = m(A) m(B)$. Hemos probado la igualdad (\ref{eqn24}) cuando $B$ es abierto. Ahora demostremos que si la igualdad (\ref{eqn24}) se cumple para un conjunto $B$, entonces tambi\'{e}n se cumple para su complemento $B^c$. En efecto: $$m (T^{-n}(A) \cap B^c) = m (T^{-n} (A)) - m (T^{-n}(A) \cap B) =$$ $$= m (A) - m (T^{-n}(A) \cap B) \rightarrow_n m (A) - m(A) m (B) = m(A) m (B^c).$$
Entonces, como (\ref{eqn24}) vale para todos los abiertos $B$, y es una propiedad cerrada en complementos, se cumple tambi\'{e}n para todos los compactos $B$. Ahora prob\'{e}mosla para cualquier boreliano $B$. Dado $\epsilon > 0$, existe un compacto $K$ y un abierto $V$ tales que
$K \subset B \subset V$ y $m(V \setminus K) < \epsilon$. Luego $$m(V) = m(B) + m (V \setminus B) \leq m(B) + m (V \setminus K) < m(B) + \epsilon.$$ An\'{a}logamente
$$m(K) = m(B) - m (B \setminus K) \geq m(B) - m(V \setminus K) > m(B) - \epsilon.$$
Entonces:
$$m(T^{-n}(A) \cap B) \leq m (T^{-n}(A) \cap V) \rightarrow _n m(A) m (V) \leq m(A) m (B) + \epsilon, $$
$$m(T^{-n}(A) \cap B) \geq m (T^{-n}(A) \cap K) \rightarrow _n m(A) m (K) \geq m(A) m (B) - \epsilon. $$
Lo anterior prueba que para todo $n$ suficientemente grande
$$|m(T^{-n}(A) \cap B) - m(A) \, m (B)| < 2\epsilon, $$
de donde se deduce la igualdad (\ref{eqn24}), como quer\'{\i}amos demostrar.
\hfill $\Box$
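El mecanismo de la demostraci\'{o}n anterior puede comprobarse num\'{e}ricamente. El siguiente esbozo en Python (ilustrativo; la funci\'{o}n \texttt{m\_interseccion}, los conjuntos $A$, $B$ y los par\'{a}metros de la grilla son elecciones nuestras) aproxima $m(T^{-n}(A) \cap B)$ para el tent map contando puntos de una grilla, y el valor obtenido se acerca a $m(A)\, m(B)$.

```python
# Esbozo ilustrativo: verificacion numerica de la propiedad mixing
# del tent map, aproximando m(T^{-n}(A) \cap B) con una grilla.
def tienda(x):
    # tent map T en [0,1]
    return 2 * x if x <= 0.5 else 2 - 2 * x

def m_interseccion(n, A, B, puntos=400000):
    # aproxima m(T^{-n}(A) \cap B) = m({x en B : T^n(x) en A})
    # contando puntos de una grilla uniforme de [0,1]
    a1, a2 = A
    b1, b2 = B
    cuenta = 0
    for k in range(puntos):
        x = (k + 0.5) / puntos
        if b1 < x < b2:
            y = x
            for _ in range(n):
                y = tienda(y)
            if a1 < y < a2:
                cuenta += 1
    return cuenta / puntos

A, B = (0.0, 1 / 3), (0.5, 0.75)
# deberia acercarse a m(A) * m(B) = (1/3) * (1/4) = 1/12
aprox = m_interseccion(12, A, B)
```

Con $n = 12$ ya se observa $m(T^{-n}(A)\cap B) \approx 1/12$, en concordancia con las cotas $(\lfloor a \cdot 2^n\rfloor \pm 1)\, m(A)/2^n$ de la demostraci\'{o}n.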
\subsection{Descomposici\'{o}n Erg\'{o}dica}
El prop\'{o}sito de esta secci\'{o}n es enunciar el Teorema \ref{theoremDescoErgodicaEspaciosMetricos}, de descomposici\'{o}n o desintegraci\'{o}n erg\'{o}dica. El mismo extiende el resultado de existencia de medidas erg\'{o}dicas para transformaciones continuas en espacios m\'{e}tricos compactos, probando c\'{o}mo se puede descomponer o desintegrar una medida invariante $\mu$ en funci\'{o}n de las que son erg\'{o}dicas.
\begin{definition} {\bf Descomposici\'{o}n o Desintegraci\'{o}n Erg\'{o}dica } \label{definitionDescoErgodica}
\em Sea $(X, \mathcal A)$ un espacio medible y $T: X \mapsto X$ una transformaci\'{o}n medible tal que existe alguna medida de probabilidad $\mu$ (definida en ${\mathcal A} $) invariante por $T$.
{\bf (i)} Sea $A \in {\mathcal A}$. Decimos que $\mu$ tiene \em descomposici\'{o}n o desintegraci\'{o}n erg\'{o}dica \em para el conjunto $A$ si para $\mu$-c.t.p. $x \in X$ existe una medida erg\'{o}dica $\mu_x$ tal que:
\ \ \ \ \ {\bf (a)} La funci\'{o}n real definida $\mu$-c.t.p por $x \mapsto \mu_x(A)$ es medible para $\mu$-c.t.p. $x \in X$. (Entonces est\'{a} en $L^1(\mu)$ pues est\'{a} acotada por 1)
\ \ \ \ \ {\bf (b)} $$\mu(A) = \int_{x \in X} \big(\mu_x(A)\big) \, d \mu.$$
{\bf (ii)} Sea $h \in L^1(\mu)$. Decimos que $\mu$ tiene \em descomposici\'{o}n o desintegraci\'{o}n erg\'{o}dica \em para la funci\'{o}n $h$ si para $\mu$-c.t.p $x \in X$ existe una medida erg\'{o}dica $\mu_x$ tal que $h \in L^1(\mu_x)$ y tal que:
\ \ \ \ \ {\bf (c)} La funci\'{o}n real definida $\mu$-c.t.p por $x \mapsto \int h \, d\mu_x $ es medible para $\mu$-c.t.p. $x \in X$ y est\'{a} en $L^1(\mu)$.
\ \ \ \ \ {\bf (d)}
$$\int h \, d \mu = \int_{x \in X} \Big( \int h \, d \mu_x \Big) \, d \mu.$$
\end{definition}
\begin{theorem}
\label{theoremDescoErgodicaEspaciosMetricos}
{\bf Descomposici\'{o}n Erg\'{o}dica en espacios m\'{e}tricos compactos}
Sea $X$ un espacio m\'{e}trico compacto provisto de la sigma-\'{a}lgebra de Borel. Sea $T: X \mapsto X$ una transformaci\'{o}n medible que preserva una medida de probabilidad $\mu$.
Entonces, para todo $A \in {\mathcal A}$ y para toda $h \in L^1(\mu)$ existe descomposici\'{o}n erg\'{o}dica de $\mu$.
M\'{a}s a\'{u}n, para $\mu$-\mbox{c.t.p. } $x \in X$ existe y es \'{u}nica una medida erg\'{o}dica $\mu_x$ \em (llamada {\bf componente erg\'{o}dica } de $\mu$ a la que pertenece $x$) \em tal que
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{T^j(x)} = \mu_x,$$
donde el l\'{\i}mite es en la topolog\'{\i}a d\'{e}bil estrella del espacio ${\mathcal M}$ de probabilidades. \index{componentes erg\'{o}dicas}
\end{theorem}
La demostraci\'{o}n del Teorema \ref{theoremDescoErgodicaEspaciosMetricos} para transformaciones continuas en espacios m\'{e}tricos se encuentra por ejemplo en \cite[Cap.II \S]{Mane} (ver tambi\'{e}n \cite{ManeIngles}). Una generalizaci\'{o}n para transformaciones medibles que preservan una medida de probabilidad se encuentra en \cite[Theorem 2.3.3]{Keller}.
Ahora veamos alguna de sus consecuencias:
Recordamos la Definici\'{o}n \ref{definitionProbabilidadTotal}: Un conjunto medible $A \subset X$ se dice que tiene \em probabilidad total \em si $\mu(A)= 1$ para toda medida de probabilidad $\mu$ en $X$ que sea invariante por $T$ (bajo la hip\'{o}tesis que existen medidas de probabilidad invariantes por $T$).
\begin{corollary} \label{CorolarioDescoErgodicaEspaciosMetricos}
Si $X$ es un espacio m\'{e}trico compacto, y si $T: X \mapsto X$ es continua, entonces existen medidas erg\'{o}dicas para $T$. Adem\'{a}s un conjunto medible $A \subset X$ tiene probabilidad total si y solo si $\nu(A)= 1$ para toda medida de probabilidad $\nu$ erg\'{o}dica para $T$.
\end{corollary}
{\em Demostraci\'{o}n: }
Usando el Teorema \ref{theoremDescoErgodicaEspaciosMetricos},
y la definici\'{o}n \ref{definitionDescoErgodica}, existen medidas erg\'{o}dicas para $T$. Adem\'{a}s, para cualquier conjunto medible $A$, cualquiera sea la medida invariante $\mu$ tenemos, para el complemento $A^c$ de $A$, la siguiente igualdad:
$$ \mu(A^c) = \int \Big (\int \chi_{A^c} \, d \mu_x \Big)\, d \mu,$$
donde $\mu_x$ es una medida erg\'{o}dica, que depende del punto $x$ y est\'{a} definida para $\mu$-c.t.p. $x \in X$. Como $\chi_{A^c} \geq 0$
entonces la funci\'{o}n $ x \mapsto \int \chi_{A^c} \, d \mu_x = \mu_x(A^c)$ es no negativa. Las medidas $\mu_x$ son erg\'{o}dicas seg\'{u}n enuncia el Teorema \ref{theoremDescoErgodicaEspaciosMetricos} y la Definici\'{o}n \ref{definitionDescoErgodica}. Concluimos que $\mu(A^c) = 0$ para toda medida invariante $\mu$, si y solo si $\nu(A^c) = 0$ para toda medida erg\'{o}dica $\nu$.
\hfill $\Box$
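Para ilustrar la desintegraci\'{o}n erg\'{o}dica con un ejemplo concreto (que no figura en el texto): la transformaci\'{o}n $T(x,y)=(x+\alpha \ (\mbox{m\'{o}d. }1),\, y)$ en el cuadrado deja invariante cada circunferencia horizontal; la medida de Lebesgue $m$ del cuadrado no es erg\'{o}dica, y su componente erg\'{o}dica $\mu_{(x,y)}$ es la medida de Lebesgue de la circunferencia de altura $y$. Los promedios de Birkhoff $(1/n)\sum_{j=0}^{n-1} h(T^j p)$ aproximan $\int h \, d\mu_p$. Esbozo en Python (nombres y par\'{a}metros son nuestros):

```python
import math

# Ejemplo hipotetico (no del texto): T(x,y) = (x + alfa mod 1, y) en
# [0,1)^2 deja invariante cada circunferencia horizontal. La medida de
# Lebesgue m del cuadrado NO es ergodica: su desintegracion ergodica
# asigna a cada punto (x,y) la medida de Lebesgue mu_{(x,y)} de la
# circunferencia de altura y.
alfa = math.sqrt(2) - 1

def T(p):
    x, y = p
    return ((x + alfa) % 1.0, y)

def promedio_birkhoff(h, p, n=20000):
    # (1/n) sum_{j=0}^{n-1} h(T^j p): converge a la integral de h
    # respecto de la componente ergodica mu_p
    s = 0.0
    for _ in range(n):
        s += h(p)
        p = T(p)
    return s / n

h = lambda p: p[1]                         # int h dmu_{(x,y)} = y
g = lambda p: math.cos(2 * math.pi * p[0]) # int g dmu_{(x,y)} = 0

prom_h = promedio_birkhoff(h, (0.2, 0.7))  # depende de la componente
prom_g = promedio_birkhoff(g, (0.2, 0.7))  # igual en toda componente
```

El promedio de $h(x,y)=y$ detecta la componente erg\'{o}dica a la que pertenece el punto inicial, mientras que el de $g$ vale lo mismo (cero) en todas ellas.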
\section{Din\'{a}mica diferenciable: \\ Hiperbolicidad uniforme y no uniforme} \label{chapterTeoriaPesinElementos}
El siguiente ejemplo es conocido como \lq\lq Arnold's cat map\rq\rq \ (el mapa del gato de Arnold). Esto es porque su din\'{a}mica fue representada por el matem\'{a}tico ruso Vladimir Arnold en \cite{arnold}, con un dibujo similar al de la Figura \ref{figuraManzana} de este cap\'{\i}tulo. En su dibujo (ver por ejemplo uno parecido al original en \cite[Figure 2]{Pikovsky_Arnoldcat}), Arnold utiliza el contorno de la figura estilizada de un gato, en lugar de una manzana como hacemos nosotros en la Figura \ref{figuraManzana} de este cap\'{\i}tulo. En su dibujo, Arnold \lq\lq muestra\rq\rq \ el efecto de la propiedad de mixing del mapa sobre los trazos del contorno del gato, el cual, en pocos iterados, se vuelve irreconocible.
\subsection{Ejemplo de automorfismo lineal hiperb\'{o}lico en el toro.} \index{automorfismo! lineal del toro} \label{section2111} \index{hiperbolicidad! uniforme} \index{transformaci\'{o}n! hiperb\'{o}lica! uniforme} \index{difeomorfismos! de Anosov}
Sea el toro ${\mathbb{T}}^2 = \mathbb{R}^2 /\sim$ donde la relaci\'{o}n de equivalencia
$\sim$ est\'{a} dada por: $(a,b) \sim (c,d) $ en ${\mathbb{T}}^2$ si $c-a$ y $d-b$
son enteros. (Otra notaci\'{o}n que se usa para $\mathbb{R}^2 /\sim $ es $\mathbb{R}^2
|_{mod \mathbb{Z}^2} = \mathbb{R}^2/\mathbb{Z}^2$.)
Sea $\Pi: \mathbb{R}^2 \mapsto {\mathbb{T}}^2$ la proyecci\'{o}n del espacio de
recubrimiento $\mathbb{R}^2$ del toro definida por $\Pi (a,b) = (a,b)_{mod \
\mathbb{Z}^2}$ donde esto \'{u}ltimo indica la clase de equivalencia de $(a,b)
\in \mathbb{R}^2$.
Sea $f: {\mathbb{T}}^2 \mapsto {\mathbb{T}}^2$ dada por $$f (x) = \Pi ( \left (%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right) (\Pi ^{-1}(x)) ) \ \ \ \forall \ x \in \mathbb{T}^2.$$ Llamaremos a $f$ automorfismo lineal hiperb\'{o}lico de matriz $\left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ en el toro, o simplemente \lq \lq $\left (
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array} \right )$ en el toro".
Abusando de la notaci\'{o}n, a un punto $x \in {\mathbb{T}}^2$ lo denotaremos con
cualquier representante suyo $(a,b) \in \mathbb{R}^2$: $(a,b) \in
\Pi^{-1}(\{x\})$.
\begin{exercise}\em
Probar que $(0,0)$ es el \'{u}nico punto fijo por la transformaci\'{o}n $f = \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ en el toro. Probar que
$$\{(1/5, 2/5), (4/5, 3/5)\} \ \ \ \mbox{ y } \ \ \
\{(3/5, 1/5), (2/5, 4/5)\}$$ son dos \'{o}rbitas peri\'{o}dicas por $f$ y
que son las \'{u}nicas de per\'{\i}odo $2$. Probar que
$$\{(1/2, 1/2), (1/2,
1), (1, 1/2)\} $$ es una \'{o}rbita peri\'{o}dica de per\'{\i}odo $3$.
\end{exercise} La topolog\'{\i}a en el toro es el cociente de la topolog\'{\i}a usual en
$\mathbb{R}^2$. Es metrizable y la m\'{e}trica est\'{a} dada por $$\mbox{$\,$dist$\,$} (x,y) = $$ $$
\min \{ \sqrt{(c-a)^2 + (d-b)^2}: {(a,b) \in \Pi^{-1}(x), (c,d)
\in \Pi ^{-1}(y)} \}.$$
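El punto fijo y las \'{o}rbitas peri\'{o}dicas del ejercicio anterior pueden verificarse con aritm\'{e}tica exacta de fracciones. Esbozo ilustrativo en Python (los nombres \texttt{f} y \texttt{orbita} son nuestros):

```python
from fractions import Fraction as F

# Esbozo: comprobacion exacta (con fracciones modulo 1) del punto fijo
# y de las orbitas periodicas de f = (2 1; 1 1) en el toro.
def f(p):
    x, y = p
    return ((2 * x + y) % 1, (x + y) % 1)

def orbita(p, n):
    puntos = [p]
    for _ in range(n - 1):
        puntos.append(f(puntos[-1]))
    return puntos

assert f((F(0), F(0))) == (F(0), F(0))        # (0,0) es punto fijo

p = (F(1, 5), F(2, 5))
assert f(p) == (F(4, 5), F(3, 5))             # la orbita de periodo 2
assert f(f(p)) == p and f(p) != p

# notese que (1/2,1) = (1/2,0) y (1,1/2) = (0,1/2) modulo 1
q = (F(1, 2), F(1, 2))
assert orbita(q, 4)[-1] == q                  # periodo 3
assert f(q) != q and f(f(q)) != q
```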
La medida de Lebesgue en el toro es la medida de Borel $\widetilde m$
definida por $\widetilde m (B) = m (\Pi ^{-1} B \cap [0,1]^2)$ donde
$m$ es la medida de Lebesgue en $\mathbb{R}^2$. Se observa que la medida
de Lebesgue en el toro es una medida de probabilidad. Donde no d\'{e}
lugar a confusi\'{o}n renombraremos como $m$ a la medida de Lebesgue
en el toro.
\begin{proposition} \index{medida! de Lebesgue}
La medida de Lebesgue $m$ en el toro es invariante por la
transformaci\'{o}n $f = \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$.
\end{proposition}
{\em Demostraci\'{o}n: }
Denotamos $A =\left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ a la matriz de coeficientes enteros que define la transformaci\'{o}n $f$ en el toro. Observamos que $\mbox{det}(A)= 1$.
Sea $B$ un boreliano en el toro. $$m (f^{-1}
(B)) = \int \chi _{f^{-1}(B)} (x) \; dm(x) = \int \chi _B \circ
f(x)\, dm(x).$$ Haciendo el cambio de variables lineal e invertible
$z= f(x)$ en la integral anterior resulta $$m (f^{-1} (B)) = \int
\chi _B (z) J(z) \, dm(z)$$ donde $J(z) = |\mbox{det} df^{-1}(z)| = |\mbox{det}
df(x)|^{-1}$ es el jacobiano del cambio de variables $z =f (x)$. \index{jacobiano}
En nuestro caso una parametrizaci\'{o}n local de la superficie del toro $\mathbb{T}^2$ est\'{a} dada por $\Pi|_{B_{\delta}(\Pi^{-1}(x))}$, donde $ { B_{\delta}(\Pi^{-1}(x))}$ es la bola abierta en $\mathbb{R}^2$ de radio $\delta >0$ (suficientemente peque\~{n}o) y centro en un punto denotado como $\Pi^{-1}(x) \in \mathbb{R}^2$ que se proyecta por $\Pi: \mathbb{R}^2 \mapsto \mathbb{T}^2$ en el punto $x \in \mathbb{T}^2$. Calculando la derivada $df(x)$ y el Jacobiano con las coordenadas en esa carta local, resulta $T_x\mathbb{T}^2 \sim \mathbb{R}^2$, y $$J(z) = (\mbox{det} A)^{-1} = 1 \; \; \forall z \in
{\mathbb{T}}^2.$$ Entonces
$m (f^{-1} (B)) = \int \chi _B (z)\, dm(z) = m(B) $. \hfill $ \Box$
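La invariancia de la medida de Lebesgue reci\'{e}n demostrada puede comprobarse con una estimaci\'{o}n de Monte Carlo (esbozo ilustrativo; el rect\'{a}ngulo $B$ y los par\'{a}metros son elecciones arbitrarias nuestras): las frecuencias muestrales de $B$ y de $f^{-1}(B)$ resultan aproximadamente iguales.

```python
import random

# Esbozo Monte Carlo: estimamos m(B) y m(f^{-1}(B)) para un rectangulo
# B del toro, muestreando puntos uniformes; ambas estimaciones deben
# coincidir (salvo error muestral) por la invariancia de m.
random.seed(0)

def f(p):
    x, y = p
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

def en_B(p):
    x, y = p
    return 0.1 < x < 0.4 and 0.2 < y < 0.8   # m(B) = 0.3 * 0.6 = 0.18

N = 200000
muestra = [(random.random(), random.random()) for _ in range(N)]
m_B = sum(en_B(p) for p in muestra) / N
# x pertenece a f^{-1}(B) si y solo si f(x) pertenece a B
m_preimagen = sum(en_B(f(p)) for p in muestra) / N
```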
\vspace{.2cm}
{\bf Din\'{a}mica del ejemplo de automorfismo lineal hiperb\'{o}lico en el toro.}
Estudiemos la din\'{a}mica por iterados de la transformaci\'{o}n lineal
$F: \mathbb{R}^2 \mapsto \mathbb{R}^2$ que tiene como matriz
$A = \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$.
$F$ se llama levantado de $f$ a $\mathbb{R}^2$ y $f$ se llama proyecci\'{o}n
de $F$ en el toro. La din\'{a}mica de $F$ est\'{a} relacionada fuertemente
con la din\'{a}mica de su proyecci\'{o}n $f$ en el toro.
En efecto la proyecci\'{o}n $\Pi: \mathbb{R}^2 \mapsto {\mathbb{T}}^2$ \em cuando
restringida a un entorno suficientemente peque\~{n}o del origen
$(0,0)$ \em es un homeomorfismo sobre su imagen.
$\Pi$ transforma la din\'{a}mica de $F$ en la de $f$. M\'{a}s precisamente
$$\Pi \circ F = f \circ \Pi$$
Luego, aplicando $\Pi: \mathbb{R}^2 \mapsto {\mathbb{T}}^2$ a una \'{o}rbita de $F$ se
obtiene una \'{o}rbita de $f$. Y la preimagen por $\Pi$ de una \'{o}rbita
por $f$ es una infinidad numerable de \'{o}rbitas por $F$ tal que una
se obtiene de otra traslad\'{a}ndola en $\mathbb{R}^2$ seg\'{u}n un vector de
coordenadas enteras.
El comportamiento topol\'{o}gico local de las
\'{o}rbitas de $F$ en un entorno suficientemente peque\~{n}o $V$ del
origen es el mismo (a menos del homeomorfismo $\Pi|_V$) que el de
las \'{o}rbitas de $f$ en el abierto $\Pi (V)$.
\vspace{.2cm}
{\bf Variedades invariantes.} \index{variedad invariante! estable} \index{variedad invariante! inestable}
Los valores propios de la matriz $A$
son $$ \sigma := (3+\sqrt5)/2
>1, \ \ \ 0 < \lambda:= (3-\sqrt5)/2 <1.$$ Las direcciones propias respectivas
tienen pendientes irracionales, la primera positiva y la segunda
negativa. Se deduce que las dos rectas que pasan por el origen y
tienen direcciones seg\'{u}n los vectores propios de la matriz $A$ son
invariantes por $F$ en el plano. Entonces sus proyecciones en el
toro son curvas invariantes por $f$ y se cortan transversalmente
en el origen (y tambi\'{e}n se cortan transversalmente en todos sus otros puntos de intersecci\'{o}n en el
toro).
$F$ restringida a la recta $r_1$ que tiene direcci\'{o}n propia de
valor propio mayor que 1, expande las distancias exponencialmente
con tasa $\log \sigma = \log \big((3+\sqrt5)/2\big) >0$. Es decir, contrae distancias hacia el pasado como $\sigma ^{-n} = e^{-n \log \sigma}$: $$\frac{\mbox{dist} (F^{-n}(a,b), (0,0))}{\mbox{dist}((a,b), (0,0))} =e^{-n \log \sigma} \ \forall \ n \in \mathbb{N}, \ \ \forall \ (a,b) \in r_1 \setminus (0,0).$$ Proyectando $r_1$ en el toro se
obtiene una curva $W^u(0,0) = \Pi (r_1)$ inmersa en el toro, que se llama \em
variedad inestable por $(0,0)$. \em
$W^u(0,0)$ pasa por el origen, \em es densa en el toro \em
(esto se puede demostrar usando que la pendiente de $r_1$ en el
plano es irracional y usando que la rotaci\'{o}n irracional en el
c\'{\i}rculo es densa en el c\'{\i}rculo), \em es invariante por $f$ y
cumple: \em
$$W^u(0,0) = \{y \in {\mathbb{T}}^2: \lim_{n \rightarrow - \infty} f^n(y) = (0,0)\}.$$
Esto \'{u}ltimo se debe a que $r_1 = \{(a,b) \in \mathbb{R}^2: \lim_{n
\rightarrow - \infty} F^n(a,b) = (0,0) \}$. Observamos que la subvariedad $W^u(0,0) = \Pi (r_1)$ es inmersa y densa en $\mathbb{T}^2$, pero \em no es subvariedad encajada en ${\mathbb{T}^2}$. \em Es decir, la topolog\'{\i}a que se define a lo largo de la subvariedad $W^u(0,0)$ no es la inducida por su inclusi\'{o}n en $\mathbb{T}^2$. Los abiertos en $W^u(0,0)$ est\'{a}n generados por los arcos abiertos (homeomorfos a intervalos abiertos en la recta real). Estos no se obtienen como intersecci\'{o}n de un abierto en $\mathbb{T}^2$ con $W^u(0,0)$ pues cualquier abierto en $\mathbb{T}^2$ cortado con $W^u(0,0)$ contiene una infinidad de arcos conexos, debido a la densidad de $W^u(0,0)$.
{\bf Variedad inestable local:} \index{variedad invariante! local} En este ejemplo, existe $\epsilon >0$ suficientemente peque\~{n}o, tal que, denotando $W^u_{loc}(0,0)$ (variedad inestable local) a la componente conexa de $W^u(0,0)$ intersecada con la bola de centro $(0,0)$ y radio $\epsilon$ en el toro $\mathbb{T}^2$, se tiene:
$$\frac{\mbox{dist} (f^{-n}(y), (0,0))}{\mbox{dist}(y, (0,0))} =e^{-n \log \sigma} \ \forall \ n \in \mathbb{N}, \ \ \forall \ y \in W^u_{loc}(0,0). $$
Esta igualdad se obtiene porque la bola de centro $(0,0)$ y radio $\epsilon >0$ en el toro $\mathbb{T}^2$ es difeomorfa, mediante una rama inversa de $\Pi$, a la bola de centro $(0,0)$ y radio $\epsilon >0$ en $\mathbb{R}^2$, si $\epsilon >0$ es suficientemente peque\~{n}o.
An\'{a}logamente $F$ restringida a la recta $r_2$ que tiene direcci\'{o}n
propia de valor propio menor que 1, contrae las distancias
exponencialmente con tasa $\log \lambda = \log \big((3-\sqrt5)/2\big) <0$. Es decir, contrae distancias hacia el futuro como $\lambda ^{n} = e^{n \log \lambda}$: $$\frac{\mbox{dist} (F^{n}(a,b), (0,0))}{\mbox{dist}((a,b), (0,0))} =e^{n \log \lambda} \ \forall \ n \in \mathbb{N}, \ \ \forall \ (a,b) \in r_2 \setminus (0,0).$$ Proyectando
$r_2$ en el toro se obtiene una curva $W^s(0,0) = \Pi (r_2)$ inmersa en el
toro, que se llama \em variedad estable por $(0,0)$. \em
$W^s(0,0)$ pasa por el origen, \em es densa en el toro \em (porque
la pendiente de $r_2$ en el plano es irracional), \em es
invariante por $f$, no encajada en $\mathbb{T}^2$ \em y cumple:
$$W^s(0,0) = \{y \in {\mathbb{T}}^2: \lim_{n \rightarrow + \infty}f^n(y) = (0,0)\}$$
Esto \'{u}ltimo se debe a que $r_2 = \{(a,b) \in \mathbb{R}^2: \lim_{n
\rightarrow + \infty} F^n(a,b)= (0,0) \}$.
Denotando $W^s_{loc}(0,0)$ (variedad estable local) a la componente conexa de $W^s(0,0)$ intersecada con la bola de centro $(0,0)$ y radio $\epsilon$ en el toro $\mathbb{T}^2$, se tiene:
$$\frac{\mbox{dist} (f^{n}(y), (0,0))}{\mbox{dist}(y, (0,0))} =e^{n \log \lambda} \ \forall \ n \in \mathbb{N}, \ \ \forall \ y \in W^s_{loc}(0,0). $$
{\bf Variedades invariantes por cualquier punto:}
El argumento anterior puede aplicarse a cualquier punto peri\'{o}dico $x$. Deducimos que todos los puntos peri\'{o}dicos son hiperb\'{o}licos tipo silla (tienen un valor propio mayor que uno y otro positivo menor que uno).
En general, para cualquier punto $x \in \mathbb{T}^2$, aunque no sea peri\'{o}dico, la variedad estable $W^s(x)$ y la variedad inestable $W^u(x)$ se definen como la proyecciones sobre el toro de las rectas en $\mathbb{R}^2$ que pasan por $\Pi^{-1}(x)$, seg\'{u}n las direcciones de los vectores propios de la matriz $A$ (que son las mismas que las direcciones de las rectas $r_1$ y $r_2$ en $\mathbb{R}^2$ que pasan por el origen). Argumentando como m\'{a}s arriba se tiene $$\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(y), f^n(x))= 0 \ \ \forall \ y \in W^s(x)$$
$$\lim_{n \rightarrow + \infty} \mbox{dist}(f^{-n}(y), f^{-n}(x))= 0 \ \ \forall \ y \in W^u(x),$$
y el acercamiento a cero de esas distancias se realiza como $\lambda^n$ o $\sigma^{-n}$ respectivamente. La variedad inestable por un punto $x$ no es invariante si el punto $x$ no es fijo por $f$, pero su imagen por $f$ es la variedad inestable por el punto $f(x)$ (esto se chequea inmediatamente de la construcci\'{o}n de $W^u(x) $ y $W^u(f(x))$ como las im\'{a}genes por $\Pi$ de las rectas paralelas a $r_1$ por $\Pi^{-1}(x)$ y $\Pi^{-1}(f(x)) = F(\Pi^{-1}(x))$ respectivamente).
{\bf Foliaciones invariantes:} \index{foliaci\'{o}n! invariante! estable} \index{foliaci\'{o}n! invariante! inestable}
La familia de todas las variedades inestables forma en el toro $\mathbb{T}^2$ lo que se llama una \em foliaci\'{o}n \em invariante: cada subvariedad de la foliaci\'{o}n (llamada hoja), al aplicarle $f$, se transforma en otra hoja de la foliaci\'{o}n. An\'{a}logamente, la foliaci\'{o}n formada por las variedades estables es invariante.
{\begin{figure} [h]
\vspace{-.3cm}
\begin{center}\includegraphics[scale=.6]{Figura2.eps}
\vspace{0cm}
\caption{\label{figuraManzana} Deformaci\'{o}n producida por el automorfismo lineal hiperb\'{o}lico en el toro.}
\end{center}
\vspace{0cm}
\end{figure}}
{\bf Interpretaci\'{o}n gr\'{a}fica del automorfismo lineal hiperb\'{o}lico:} La deformaci\'{o}n que produce $f$ en este ejemplo de automorfismo lineal hiperb\'{o}lico en el toro $\mathbb{T}^2$ est\'{a} representada en la Figura \ref{figuraManzana}. Esa figura es una modificaci\'{o}n de la conocida \lq\lq gato de Arnold\rq\rq \ (que en vez de una manzana deforma la imagen de un gato; figura creada por Arnold en la d\'{e}cada de 1960 para ilustrar la deformaci\'{o}n hiperb\'{o}lica de un automorfismo lineal hiperb\'{o}lico en el toro $\mathbb{T}^2$). Al iterar sucesivas veces $f$, la figura representada se estira a lo largo de la foliaci\'{o}n inestable y se contrae a lo largo de la foliaci\'{o}n estable. Como $f$ es invertible, los pedazos que se obtienen de identificar $0$ con $1$ en vertical y horizontal no se intersecan (provienen de subconjuntos disjuntos antes de aplicar $f$).
\begin{definition}
{\bf Exponentes de Lyapunov en puntos fijos hi\-per\-b\'{o}\-licos.} \index{exponentes de Lyapunov}
\em
Los logaritmos de los m\'{o}dulos de los valores propios de $df^p (x_0)$, divididos entre $p$, cuando $x_0$ es un punto peri\'{o}dico de per\'{\i}odo $p$, se llaman \em
exponentes de Lyapunov \em en $x_0$. En el cap\'{\i}tulo \ref{chapterTeoriaPesinElementos} definiremos con generalidad los exponentes de Lyapunov de cualquier sistema din\'{a}mico diferenciable, para casi todas sus \'{o}rbitas (no necesariamente puntos fijos ni \'{o}rbitas peri\'{o}dicas).
En el ejemplo del $\left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ en el toro, $f$ es lineal (mejor dicho $F$, dada por la matriz $A$, es lineal); por lo tanto la matriz $A$ es, en una base adecuada de $T_{(0,0)} {\mathbb{T}^2}$, la derivada $df (0,0): T_{(0,0)} \mathbb{T}^2 \sim \mathbb{R}^{2} \mapsto T_{(0,0)} \mathbb{T}^2 \sim \mathbb{R}^2$. Los logaritmos de los valores propios de $A = df(0,0)$
son dos, uno positivo $\chi^+= \log \sigma$ y el otro negativo $\chi^- = \log \lambda$. (El origen es un punto
fijo hiperb\'{o}lico tipo silla, pues $0 <\lambda < 1 < \sigma$).
El exponente de Lyapunov $\chi^+ := \log \sigma = \log \big((3+\sqrt5)/2\big) >0$ es \em la tasa exponencial \em
\em de dilataci\'{o}n, \em al aplicar $df_{(0,0)}$, de las normas de los vectores en el subespacio propio $U$ que corresponde al valor propio $\sigma$ de $df_{(0,0)}$. Precisamente:
$$\lim _{n \rightarrow + \infty} \frac { \log (\|df^n_{(0,0)} u\|/ \|u\|) }{n} = \log \sigma = \chi^+ >0 \; \
\forall \ u \in U \setminus \{{\bf 0} \}\subset T_{(0,0)}\mathbb{T}^2.$$
Observar que para $n < 0$ tambi\'{e}n se cumple la misma igualdad anterior; es decir:
$$\lim _{n \rightarrow - \infty} \frac { \log (\|df^n_{(0,0)} u\|/ \|u\|) }{n} = \log \sigma = \chi^+ \; \
\forall \ u \in U \setminus \{{\bf 0} \}\subset T_{(0,0)}\mathbb{T}^2.$$
Adem\'{a}s, el exponente de Lyapunov positivo $\chi^+ $ es la tasa exponencial de dilataci\'{o}n de las distancias a lo largo de la variedad inestable local por el punto fijo. M\'{a}s precisamente
$$\lim _{n \rightarrow - \infty} \frac { \log \mbox{$\,$dist$\,$} (f^n(y), (0,0))}{n} = \chi^+ = \log \sigma >0\; \;
\forall y \in W^u(0,0).$$ \index{exponentes de Lyapunov! no nulos}
\index{exponentes de Lyapunov! positivos}
An\'{a}logamente, el exponente de Lyapunov negativo $\chi^-:= \log \lambda = \log \big((3-\sqrt5)/2\big) <0$
se interpreta como \em la tasa exponencial de contracci\'{o}n por $f$ \em en un entorno
suficientemente peque\~{n}o del punto fijo a lo largo de la variedad
estable por ese punto. M\'{a}s precisamente
$$\lim _{n \rightarrow + \infty} \frac { \log \mbox{$\,$dist$\,$} (f^n(y), (0,0))}{n} = \chi^- = \log \lambda < 0 \; \;
\forall y \in W^s(0,0).$$
\end{definition}
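Los exponentes de Lyapunov $\chi^+ = \log \sigma$ y $\chi^- = \log \lambda$ del ejemplo pueden estimarse num\'{e}ricamente iterando la derivada $A$ y renormalizando en cada paso. Esbozo ilustrativo en Python (nombres y par\'{a}metros son elecciones nuestras):

```python
import math

# Esbozo: estimacion numerica de los exponentes de Lyapunov del
# automorfismo lineal del toro, calculando log||A^n v|| / n.
A = ((2.0, 1.0), (1.0, 1.0))

def aplicar(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

def exponente(v, n=60):
    # log||A^n v||/n para v unitario, renormalizando en cada paso
    # para evitar overflow numerico
    norma = math.hypot(*v)
    v = (v[0] / norma, v[1] / norma)
    acumulado = 0.0
    for _ in range(n):
        v = aplicar(A, v)
        norma = math.hypot(*v)
        acumulado += math.log(norma)
        v = (v[0] / norma, v[1] / norma)
    return acumulado / n

sigma = (3 + math.sqrt(5)) / 2
lam = (3 - math.sqrt(5)) / 2

# para un vector generico, el limite es chi^+ = log(sigma)
chi_mas = exponente((1.0, 0.3))

# a lo largo de la direccion propia estable se obtiene chi^- = log(lambda);
# usamos pocos iterados porque el error de redondeo inyecta una componente
# inestable que crece como (sigma/lambda)^n
v_estable = (1.0, -(1 + math.sqrt(5)) / 2)
chi_menos = exponente(v_estable, n=12)
```

La renormalizaci\'{o}n paso a paso es la misma idea que se usa en la pr\'{a}ctica para sistemas no lineales, donde $A$ se reemplaza por $df$ evaluada a lo largo de la \'{o}rbita.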
\begin{exercise}\em \label{exercise2111transitivo}
Sea $f$ la transformaci\'{o}n $\left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ en el toro. Probar que $m$-c.t.p. es recurrente pero que
hay puntos no recurrentes (Sugerencia: $y \in W^s(0,0) \setminus
\{(0,0)\}$ no es recurrente). Probar que dados dos abiertos $U$ y
$V$ cualesquiera en el toro, existe una subsucesi\'{o}n $n_j$ de
naturales tales que $f^{n_j}U \cap V \neq \emptyset$. (Sugerencia:
Usar que $W^u((0,0))$ es invariante por $f$ y pru\'{e}bese que para
cualquier arco compacto $K \subset W^u(0,0)$ la uni\'{o}n de sus
iterados hacia el futuro $\bigcup _{n \in \mathbb{N}} f^n(K) $ es densa en el
toro.)
Deducir que $f$ es transitiva. \index{automorfismo!
lineal del toro} \index{transformaci\'{o}n! hiperb\'{o}lica! uniforme}
\index{difeomorfismos! de Anosov}
Probar que todo punto es no errante. Concluir que si bien todo
punto recurrente es no errante, no necesariamente todo punto no
errante es recurrente.
\end{exercise}
{\bf Observaci\'{o}n: } \index{automorfismo!
lineal del toro} \index{transformaci\'{o}n! erg\'{o}dica} \index{ergodicidad} \index{medida! erg\'{o}dica} \index{medida! de Lebesgue} En el cap\'{\i}tulo \ref{chapterAtractores}, Corolario \ref{corolarioAnosovTransitivomedidaLebesgue} probaremos que la transformaci\'{o}n $$f =
\left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$$ en el toro es \em erg\'{o}dica respecto a la medida de
Lebesgue.\em
\begin{exercise}\em
Sea el automorfismo lineal hiperb\'{o}lico $f= \left(
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}
\right)
$ del toro $\mathbb{T}^2$. Sea $P$ el conjunto de puntos peri\'{o}dicos por $f$. Probar que $P \neq \emptyset $, pero
$\mu (P) = 0 $ para toda $\mu$ invariante y erg\'{o}dica para $f$ que sea positiva sobre abiertos.
\end{exercise}
\subsection{Difeomorfismos de Anosov e hiperbolicidad uniforme}
En esta secci\'{o}n asumiremos que $M$ es una variedad diferenciable, compacta y conexa, provista de una estructura riemanniana (i.e. existe un producto interno $< \cdot, \cdot >:TM \times TM \mapsto \mathbb{R}$ diferenciable, definido en el fibrado tangente $TM$. Por lo tanto, para todo $x \in M$ y para todo $v \in T_xM$, est\'{a} definida la norma $\| v \| := \sqrt {< v, v>_x}$, determinada por la m\'{e}trica riemanniana en $M$).
Sea $f: M \mapsto M$ un difeomorfismo, lo que se denota $f \in \mbox{Diff }^1(M)$ y significa que $f$ es de clase $C^1$, invertible, y su inversa $f^{-1}: M \mapsto M$ tambi\'{e}n es de clase $C^1$. Si adem\'{a}s $f$ y $f^{-1}$ fueran de clase $C^r$ para alg\'{u}n natural $r > 1$, se denota $f \in \mbox{Diff }^r(M)$ (para lo cual se requiere que $M$ sea una variedad de clase $C^r$ por lo menos).
En esta secci\'{o}n, asumiremos que $f \in \mbox{Diff }^1(M)$ e indicaremos expresamente cuando adem\'{a}s $f \in \mbox{Diff }^r(M)$ para alg\'{u}n $r > 1$.
\begin{definition} \em \label{definicionAnosov} {\bf Difeomorfismos de Anosov \cite{Anosov}}
\index{hiperbolicidad! uniforme}
\index{transformaci\'{o}n! hiperb\'{o}lica! uniforme}
\index{difeomorfismos! de Anosov}
\index{splitting! hiperb\'{o}lico}
$f \in \mbox{Diff }^1(M)$ se llama \em difeomorfismo de Anosov \em si existe una descomposici\'{o}n (llamada \em splitting\em) $S \oplus U = TM$ del fibrado tangente $TM$, no trivial (i.e. $0 <\mbox{dim}(S) < \mbox{dim}(M)$), que es invariante por $df$ (i.e. $df(x) S_x = S_{f(x)}, \ \ df(x) U_x = U_{f(x)}$), y existen constantes $C >0$ y $0 < \lambda < 1 < \sigma$, tales que, para todo $x \in M$:
\begin{equation}
\label{equationAnosovStable}
\|df^n(x) s\| \leq C \lambda^n \|s\| \ \ \forall \ n \geq 0, \ \ \forall \ s \in S_x,\end{equation}
\begin{equation} \index{subspace! unstable} \index{bundle! unstable}
\label{equationAnosovUnstable} \|df^{n}(x) u\| \geq C^{-1} \sigma^n \|u\| \ \ \forall \ n \geq 0, \ \ \forall \ u \in U_x.\end{equation}
For each point $x \in M$, the subspaces $S_x$ and $U_x$ are called the \em stable and unstable \em subspaces at $x$, respectively. The subbundles $S$ and $U$ (formed by the subspaces $S_x$ and $U_x$ as $x$ ranges over $M$) are called the stable and unstable bundles, respectively. The constant $ 0 <\lambda < 1$ is called the (uniform) forward contraction rate along the stable bundle, and the constant $\sigma > 1$ the (uniform) forward expansion rate along the unstable bundle.
{\bf Note: } In the definition of an Anosov diffeomorphism, the constant ${C} $ does not depend on $x$, and neither do the contraction and expansion rates $\lambda$ and $\sigma$. Since the manifold $M$ is compact, changing the Riemannian metric replaces the norm of the vectors in each tangent space by an equivalent one. Hence the constant $C$ is replaced by another constant $C'$ (which again does not depend on $x$), while the contraction and expansion rates $\lambda$ and $\sigma$ remain unchanged. This makes the definition of an Anosov diffeomorphism intrinsic to the diffeomorphism, independent of the chosen Riemannian metric. One can prove that for every Anosov diffeomorphism $f$ there exists a Riemannian metric (called a \em metric adapted to $f$\em) for which the constant $C$ can be taken equal to 1.
\end{definition}
\begin{exercise}\em \index{automorphism!
linear, of the torus}
Prove that the linear automorphism $f = \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ on the torus $\mathbb{T}^2$ is an Anosov diffeomorphism. Hint: Take $S_x$ and $U_x$ to be the eigendirections of the matrix $A$ defining $f$, take $\lambda$ and $\sigma$ to be the corresponding eigenvalues, and $C= 1$.
\end{exercise}
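As a quick numerical sanity check of the hint (our own illustration, not part of the exercise; all variable names below are ours), one can compute the eigenvalues of the matrix defining $f$ from its characteristic polynomial and verify that they satisfy $0 < \lambda < 1 < \sigma$ with $\lambda\sigma = 1$:

```python
import math

# Eigenvalues of A = [[2, 1], [1, 1]] from t^2 - (tr A) t + det A = 0.
a, b, c, d = 2, 1, 1, 1
tr, det = a + d, a * d - b * c       # trace = 3, determinant = 1
disc = math.sqrt(tr * tr - 4 * det)
sigma = (tr + disc) / 2              # expanding eigenvalue (3 + sqrt 5)/2
lam = (tr - disc) / 2                # contracting eigenvalue (3 - sqrt 5)/2

assert 0 < lam < 1 < sigma           # hyperbolicity: no eigenvalue of modulus 1
assert abs(lam * sigma - 1) < 1e-12  # det A = 1: the automorphism preserves area
```

Since the eigenvalues are irrational, the eigendirections are lines of irrational slope, which project to dense immersed lines on the torus.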
\begin{exercise}\em
Let $f$ be an Anosov diffeomorphism. Prove that the inverse $0 <\sigma^{-1} < 1$ of the forward expansion rate $\sigma$ along the unstable bundle is a backward contraction rate along the same bundle. Precisely:
\begin{equation}
\label{equationAnosovUnstableBB} \|df^{-n}(x) u\| \leq C \sigma^{-n} \|u\| \ \ \forall \ n \geq 0, \ \ \forall \ u \in U_x, \ \ \ \forall \ x \in M.\end{equation}
Analogously, prove that the inverse $\lambda^{-1} >1 $ of the forward contraction rate $\lambda$ along the stable bundle is a backward expansion rate along the same stable bundle.
Deduce that $f: M \mapsto M$ is an Anosov diffeomorphism if and only if $f^{-1}$ is one as well.
\end{exercise}
\begin{exercise}\em \label{ejercicioUnicidadSplittingHiperbolico}
\index{splitting! hyperbolic}
Show that the splitting $S_x \oplus U_x$ is unique.
Hint: Prove, for all directions $[s] \subset S_x$ and $[v] \subset T_xM \setminus S_x$, the following inequalities:
$\limsup_{n \rightarrow + \infty} \frac{1}{n}{\log \|df^n s\|} < 0 < \limsup_{n \rightarrow + \infty} \frac{1}{n}{\log \|df^n v\|}. $
\end{exercise}
\begin{exercise}\em \label{ejercicioContinuidadSplittingHiperbolico}
Prove that the maps $x \mapsto S_x$ and $x \mapsto U_x$ are continuous, and deduce that if $M$ is connected then $\mbox{dim}S_x$ and $\mbox{dim}U_x$ are constant.
Hint: Consider a sequence $\{x_n\}$ converging to $x$, and take a subsequence along which $S_{x_n}$ and $U_{x_n}$ have dimensions constant in $n$ and are convergent. Show that the subspaces $\lim_n S_{x_n} $ and $\lim_n U_{x_n}$ satisfy the uniform hyperbolicity inequalities and form a splitting of $T_xM$. Finally, use the uniqueness of the hyperbolic splitting at the point $x \in M$.
\end{exercise}
\begin{exercise}\em
Prove that in Definition \ref{definicionAnosov} the condition $0 <\mbox{dim}(S_x) < \mbox{dim}(M)$ is redundant.
Hint: Arguing by contradiction, if $\mbox{dim}(S_x)= \mbox{dim}(M)$, choose $N \geq 1$ such that $C \lambda^N < 1/4$. Using the definition of differentiability of $f^N$ at the point $x$, prove that there exists $\delta_x >0$ such that: $$\mbox{dist}(x,y) < \delta_x \ \Rightarrow \ \mbox{dist}(f^N(x), f^N(y)) \leq \frac{ \mbox{dist}(x,y)}{2}.$$
Cover $M$ with a finite number $k$ of open balls $\{B_{\delta_i}(x_i)\}_{1 \leq i \leq k} $ with centers $x_1, \ldots, x_i, \ldots x_k$ and radii $\delta_i := \delta_{x_i}$.
Prove that, given $\epsilon > 0$, for every $y \in M$ and every $n \in \mathbb{N}$ large enough there exists $x_i$ (depending on $y$ and $n$) such that $\mbox{dist}(f^{nN}(x_i), f^{nN}(y)) < \epsilon.$ Deduce that $f^{n N}(M)$ can be covered by $k$ balls of radius $\epsilon$. Since $f^{n N}$ is a diffeomorphism, the whole manifold $M$ can be covered by $k$ balls of radius $\epsilon >0$, with $k$ constant and $\epsilon >0$ arbitrary. Deduce that $\mbox{diam}(M) \leq 2k\epsilon$ for every $\epsilon >0$, which is a contradiction, since the diameter of $M$ is positive.
\end{exercise}
\begin{exercise}\em \label{ejercicioAnosovExponentesLyapunov} \index{Lyapunov exponents}
Let $f: M \mapsto M$ be an Anosov diffeomorphism with contraction rate $ \lambda < 1$ along the stable bundle $S$ and expansion rate $\sigma > 1$ along the unstable bundle $U$. Prove that for every $x \in M$ and all nonzero vectors $s \in S_x$, $u \in U_x$:
$$\limsup_{n \rightarrow + \infty} \frac{\log \|df^n(x) s\|}{n} \leq \log \lambda < 0, $$
$$\liminf_{n \rightarrow - \infty} \frac{\log \|df^n(x) s\|}{-n} \geq -\log \lambda > 0, $$
$$\liminf_{n \rightarrow + \infty} \frac{\log \|df^n(x) u\|}{n} \geq \log \sigma > 0, $$
$$\limsup_{n \rightarrow - \infty} \frac{\log \|df^n(x) u\|}{-n} \leq -\log \sigma < 0. $$
\end{exercise}
{\bf Lyapunov exponents for Anosov diffeomorphisms: } \index{Lyapunov exponents}
\index{theorem! Oseledets}
In the coming sections we will state Oseledets' theorem, which establishes that the upper and lower limits in the inequalities above are in fact limits, existing for $\mu$-almost every point $x \in M$ (where $\mu$ is any $f$-invariant probability measure). These limits are called the \em Lyapunov exponents \em (in the future) of the orbit of $x \in M$.
Therefore, assuming Oseledets' theorem, the inequalities above yield the following statements, which hold for every Anosov diffeomorphism $f$ and every $f$-invariant probability measure $\mu$:
{\bf (a)} \em The Lyapunov exponents of the orbit of $\mu$-almost every point $x \in M$, in the directions $u$ of the unstable subspace, are positive, and greater than or equal to the logarithm of the expansion rate $\sigma > 1$ of the Anosov diffeomorphism. \em
\index{Lyapunov exponents! nonzero} \index{Lyapunov exponents! positive}
{\bf (b)} \em The Lyapunov exponents of the orbit of $\mu$-almost every point $x \in M$, in the directions $s$ of the stable subspace, are negative, and less than or equal to the logarithm of the contraction rate $\lambda < 1$ of the Anosov diffeomorphism. \em
From statements (a) and (b), and taking into account that by the definition of an Anosov diffeomorphism the tangent space $T_xM$ is the direct sum of the stable subspace $S_x$ and the unstable subspace $U_x$, one deduces the following statement (for a proof, use Oseledets' theorem, which we will prove in the next chapter, and see Exercise \ref{ejercicioExponentesLyapunov}):
{\bf (c)} \em The Lyapunov exponents of an Anosov diffeomorphism are nonzero and bounded away from zero \em (i.e. they lie outside a neighborhood of zero).
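Statements (a) and (b) can be illustrated numerically for the linear automorphism with matrix $A = \left(\begin{smallmatrix} 2 & 1 \\ 1 & 1 \end{smallmatrix}\right)$: for a generic vector $v$ (not on the stable line), the averages $\frac{1}{n}\log\|A^n v\|$ approach $\log \sigma$. The sketch below (our own illustration, with our own variable names) renormalizes the vector at each step to avoid floating-point overflow:

```python
import math, random

# Estimate the positive Lyapunov exponent of A = [[2,1],[1,1]]
# as the growth rate (1/n) log ||A^n v|| of a generic vector v.
def apply_A(v):
    x, y = v
    return (2 * x + y, x + y)

random.seed(0)
v = (random.random(), random.random())  # generic: not on the stable line
log_growth = 0.0
n = 60
for _ in range(n):
    v = apply_A(v)
    norm = math.hypot(*v)
    log_growth += math.log(norm)        # accumulate log of one-step growth
    v = (v[0] / norm, v[1] / norm)      # renormalize to avoid overflow

lyap = log_growth / n
sigma = (3 + math.sqrt(5)) / 2
print(lyap, math.log(sigma))            # the estimate approaches log(sigma)
```

The same loop applied to a vector on the stable eigendirection would instead produce an average close to $\log\lambda < 0$.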
\begin{remark} \em
\index{transitivity}
{\bf On the transitivity of Anosov diffeomorphisms.}
The following conjecture remains an open problem:
{\bf Conjecture: } Anosov diffeomorphisms on compact connected manifolds are transitive.
Partial results are known: if the manifold $M$ on which the Anosov diffeomorphism acts is a torus $\mathbb{T}^n$, then $f$ is transitive. Another known case: if the unstable bundle, or the stable bundle, of an Anosov diffeomorphism $f$ is one-dimensional, then $f$ is transitive. In general, no examples of non-transitive Anosov diffeomorphisms are known.
\end{remark}
\begin{remark} \em
{\bf On the ergodicity of Anosov diffeomorphisms.} \index{ergodicity} \index{diffeomorphisms! Anosov}
\index{transformation! ergodic}
Let $M$ be a compact connected manifold of dimension $n \geq 2$. If $f \in \mbox{Diff }^2(M)$ is a ($C^2$) Anosov diffeomorphism, if $f$ is transitive, and if $f$ preserves the Lebesgue measure $m$, then $m$ is ergodic. We will prove this result in Corollary \ref{corolarioAnosovTransitivomedidaLebesgue}, when we define and study the invariant probability measures called SRB (Sinai-Ruelle-Bowen) measures for Anosov diffeomorphisms.
\end{remark}
\subsection{Uniformly hyperbolic sets} \index{hyperbolicity! uniform}
\index{transformation! hyperbolic! uniform}
\index{set! hyperbolic! uniform}
\begin{definition}
\em \label{definicionHiperbolicidadUniforme}
Let $f: M \mapsto M$ and let $\Lambda \subset M$ be a compact invariant subset (i.e. $f^{-1} (\Lambda) = \Lambda$).
The set $\Lambda $ is called \em uniformly hyperbolic \em (hyperbolic, for short) if for every $x \in \Lambda$ there exists a splitting $S_x \oplus U_x = T_xM$ of the tangent space $T_xM$ to $M$ at $x$, depending continuously on $x \in \Lambda$, and there exist constants $C >0$ and $ 0 < \lambda < 1 < \sigma$, satisfying inequalities (\ref{equationAnosovStable}) and (\ref{equationAnosovUnstable}) for all $x \in \Lambda$. \index{splitting! hyperbolic}
\index{subspace! unstable}
\index{bundle! unstable}
{\bf Note: } The requirement that $S_x$ and $U_x$ depend continuously on $x \in \Lambda$ is redundant. This continuity can be deduced from the invariance of those subspaces, the uniform hyperbolicity inequalities, and the compactness of the set $\Lambda$ (see Exercises \ref{ejercicioUnicidadSplittingHiperbolico} and \ref{ejercicioContinuidadSplittingHiperbolico}, and generalize them replacing $M$ by $\Lambda$).
{\bf On the dimensions of $S_x$ and $U_x$:} They are complementary (their sum is $\mbox{dim} M$), but not necessarily both positive. That is, one of the two subspaces may have dimension zero, with the other coinciding with $T_xM$. The dimensions may depend on $x \in \Lambda$. The continuous dependence of the invariant subspaces implies that the dimensions of $S_x$ and $U_x$ are locally constant. Since $\Lambda$ is compact, if there are several subsets of $\Lambda$ on which $S_x$ and $U_x$ have different dimensions, then these subsets are pairwise isolated from one another.
\end{definition}
From the definitions above it follows immediately that $f: M \mapsto M$ is an Anosov diffeomorphism if and only if the whole manifold $M$ is a uniformly hyperbolic set (with nonzero stable and unstable dimensions).
\index{diffeomorphisms! Anosov}
\begin{exercise}\em \label{exercisepozosillafuente} \index{sink} \index{source} \index{saddle} \index{point! periodic! hyperbolic}
Let $f: M \mapsto M$. Prove that a periodic point $x$ of period $p$ (i.e. $f^p(x)= x, \ f^j(x) \neq x \ \forall \ 1 \leq j < p$) is hyperbolic (i.e. the eigenvalues of $df^p$ have modulus different from 1) if and only if its (finite) orbit $\{x, f(x), \ldots, f^{p-1}(x)\}$ is a uniformly hyperbolic set. Prove that if $x$ is a saddle point (that is, $df^p(x)$ has eigenvalues of modulus less than one and also eigenvalues of modulus greater than one), then the eigenspaces of $df^p(x)$ are the stable and unstable subspaces $S_x$ and $U_x$, respectively. Prove that if $x$ is a hyperbolic sink (i.e. all the eigenvalues of $df^p(x)$ have modulus less than one), then $S_x = T_xM$ and $U_x = \{{\bf 0}\}$. State and prove the dual result when $x$ is a hyperbolic source (i.e. all the eigenvalues of $df^p(x)$ have modulus greater than one).
\end{exercise}
\subsection{Example: the linear Smale horseshoe} \index{Smale horseshoe} \label{sectionHerraduraSmale}
A paradigmatic example of a (transitive) hyperbolic set that is not the whole manifold is the following one, due to Smale \cite{Smale} (see also, for example, \cite[pages 97-98]{Jost}):
\begin{definition} \label{definicionHerradura}
\em {\bf Linear Smale horseshoe.}
A \em linear Smale horseshoe \em (in
dimension 2 and with 2 legs) is a diffeomorphism $T: Q= [0,1]^2 \subset \mathbb{R}^2 \mapsto T(Q) \subset \mathbb{R}^2$ satisfying the following conditions (see Figure \ref{figuraHerraduraSmale}):
\begin{itemize}
\item [a) ] $T(Q)\cap Q = Q_0 \cup Q_1$, where $Q= [0,1]^2, $
$ Q_0=[1/5,2/5]\times [0,1], \; Q_1 =
[3/5,4/5]\times [0,1] $.
\item[b) ] $T^{-1}(Q_0) = [0,1] \times
[1/5,2/5], \; T^{-1}(Q_1) = [0,1] \times [3/5,4/5] $.
\item [c) ] For $j=0,1$, the restriction $T|_{T^{-1}(Q_j)} (x,y) $
is an affine transformation in $(x,y)$ with
eigendirections $(1,0)$ and $(0,1)$ and real eigenvalues $\lambda$ and
$\mu$ such that $|\lambda | = 1/5$ and $|\mu | = 5$,
respectively. For example:
$$T|_{T^{-1}(Q_0)} (x,y) = ((1/5)(x+1), 5 y -1 ),$$ $$
T|_{T^{-1}(Q_1)} (x,y) = ((-1/5)(x-4), -5 y +4 ).$$
\end{itemize}
To understand the map $T$, see Figure \ref{figuraHerraduraSmale} and think of $T$ as the composition of two maps: first, an affine transformation taking the square $Q= [0,1]^2$ to a rectangle 5 times taller and 5 times narrower than the square $Q$ (it contracts horizontally and expands vertically); then, a map that \lq\lq folds\rq\rq \ the tall thin rectangle bijectively, giving it the shape of a horseshoe and overlapping it with the square $Q$ in $Q_0$ and $Q_1$ (without deforming $Q_0$ or $Q_1$).
Note that we have restricted the definition by choosing fixed numerical values for $|\lambda|$ and $|\mu|$, equal to $1/5$ and $5$ respectively. However, if one takes other numerical values, $0 < |\lambda| < 1/2$ and $2 < |\sigma|$, defines (consistently with these new numerical values) disjoint compact rectangles $Q_0 $ and $Q_1$ as in Figure \ref{figuraHerraduraSmale} with $T(Q) \cap Q = Q_0 \cup Q_1$, and takes the maps $T|_{T^{-1}(Q_0)}$ and $T|_{T^{-1}(Q_1)}$ affine with eigenvalues $ \pm\lambda $ and $\pm \sigma$, then $T$ is also called a \em linear Smale horseshoe.\em
\end{definition}
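The two affine branches in item c) can be checked directly. The sketch below (function names $T0$, $T1$ are ours) implements them and verifies that $T$ maps the horizontal strip $T^{-1}(Q_0) = [0,1]\times[1/5,2/5]$ into the vertical strip $Q_0 = [1/5,2/5]\times[0,1]$, and likewise for $Q_1$, contracting by $1/5$ horizontally and expanding by $5$ vertically:

```python
# The two affine branches of the linear horseshoe T, as given in the
# definition, restricted to T^{-1}(Q_0) and T^{-1}(Q_1).
def T0(p):   # defined on [0,1] x [1/5, 2/5]
    x, y = p
    return ((x + 1) / 5, 5 * y - 1)

def T1(p):   # defined on [0,1] x [3/5, 4/5]
    x, y = p
    return (-(x - 4) / 5, -5 * y + 4)

# Check at the four corners of each horizontal strip that the image
# lands in the corresponding vertical strip Q_0 or Q_1.
for cx in (0.0, 1.0):
    for cy in (0.2, 0.4):
        ix, iy = T0((cx, cy))
        assert 0.2 <= ix <= 0.4 and 0.0 <= iy <= 1.0   # image in Q_0
        ix, iy = T1((cx, cy + 0.4))
        assert 0.6 <= ix <= 0.8 and 0.0 <= iy <= 1.0   # image in Q_1
```

Since both branches have constant derivative with eigenvalues of modulus $1/5$ and $5$, the hyperbolicity inequalities hold with $C = 1$, $\lambda = 1/5$ and $\sigma = 5$ on the maximal invariant set.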
\begin{figure}[h]
\setlength{\unitlength}{0.5truecm}
\begin{picture}(18,12)(0,0)
\put(2.5,0.5){\footnotesize $0$}
\put(3,1){\vector(3,0){18}}
\put(3,1){\vector(0,3){10}}
\put(3,6){\line(3,0){10}}
\put(13,1){\line(0,3){5}}
\put(3,6.1){\line(3,0){10}}
\put(13.1,1){\line(0,3){5}}
\put(3,1.1){\line(3,0){10}}
\put(3.1,1){\line(0,3){5}}
\put(5,1){\circle*{.2}}
\put(7,1){\circle*{.2}}
\put(9,1){\circle*{.2}}
\put(11,1){\circle*{.2}}
\put(4.0,0.5){\footnotesize $0.2$}
\put(6,0.5){\footnotesize $0.4$}
\put(8,0.5){\footnotesize $0.6$}
\put(11.2,0.5){\footnotesize $0.8$}
\put(13,0.5){\footnotesize $1$}
\put (5.5,3) {$Q_0$}
\put (9.5,3) {$Q_1$}
\put(5,-1){\line(0,1){9}}
\put(7,-1){\line(0,1){9}}
\put(9,-1){\line(0,1){9}}
\put(11,-1){\line(0,1){9}}
\put (5,-1){\line(1,0){2}}
\put (9,-1){\line(1,0){2}}
\put(5.1,-1){\line(0,1){9}}
\put(7.1,-1){\line(0,1){9}}
\put(9.1,-1){\line(0,1){9}}
\put(11.1,-1){\line(0,1){9}}
\put (5,-1.1){\line(1,0){2}}
\put (9,-1.1){\line(1,0){2}}
{\qbezier(5,8)(5.5,11.6)(8,12)}
{\qbezier(11.1,8)(10.6,11.6)(8,12)}
{\qbezier(5.1,8)(5.6,11.6)(8,11.9)}
{\qbezier(11,8)(10.5,11.6)(8,11.9)}
{\qbezier(7,8)(7.2,9.8)(8,10)}
{\qbezier(9.1,8)(8.9,9.8)(8,10)}
{\qbezier(7.1,8)(7.3,9.8)(8,9.9)}
{\qbezier(9,8)(8.8,9.8)(8,9.9)}
\put(3,2){\circle*{.2}}
\put(3,3){\circle*{.2}}
\put(3,4){\circle*{.2}}
\put(3,5){\circle*{.2}}
\put(2,1.5){\footnotesize $0.2$}
\put(2,3){\footnotesize $0.4$}
\put(2,4.1){\footnotesize $0.6$}
\put(2,5.1){\footnotesize $0.8$}
\put(2.5,6){\footnotesize $1$}
\put(1,2){\line(1,0){4}}
\put(1,3){\line(1,0){4}}
\put(1,4){\line(1,0){4}}
\put(1,5){\line(1,0){4}}
\put (1,2){\line(0,1){1}}
\put (1,4){\line(0,1){1}}
\put(7,2){\line(1,0){2}}
\put(7,3){\line(1,0){2}}
\put(7,4){\line(1,0){2}}
\put(7,5){\line(1,0){2}}
\put(11,2){\line(1,0){4}}
\put(11,3){\line(1,0){4}}
\put(11,4){\line(1,0){4}}
\put(11,5){\line(1,0){4}}
{\qbezier(15,2)(18.5,2.2)(18.7,3.5)}
{\qbezier(15,5)(18.5,4.8)(18.7,3.5)}
{\qbezier(15,3)(17.1,3.1)(17.2,3.5)}
{\qbezier(15,4)(17.1,3.9)(17.2,3.5)}
{\qbezier(12,5.5)(12.2,6.4)(16,6.2)}
\put(16.2, 6.2){$Q = [0,1] \times [0,1]$}
{\qbezier(10,9.5)(10.2,10.4)(14,10.2)}
\put(14.2, 10.2){$T(Q)$}
\put (12, 9) {$ T(Q) \cap Q = Q_0 \cup Q_1$}
{\qbezier(14,2.6)(14.2,1.5)(18,1.7)}
\put(18.2, 1.7){$T^{-1}(Q)$}
\end{picture}
\vspace{.5cm}
\caption{\small The Smale horseshoe}
\index{Smale horseshoe}
\label{figuraHerraduraSmale}
\vspace{-.4cm}
\end{figure}
\begin{exercise}\em
Let $T: \mathbb{R}^2 \mapsto \mathbb{R}^2$ be a linear Smale horseshoe. Let $Q = [0,1]^2$.
(a) Sketch $T(T(Q) \cap Q) \subset T(Q)$, $T^2(Q)$ and $\cap_{n= 0}^N T^n(Q)$ for $N= 2, 3$.
(b) Sketch
$T^{-1}(Q) \cap Q $ and $\cap_{n=0}^{N} T^{-n} Q $ for $N= 2, 3$.
(c) Sketch
the \lq\lq stable\rq\rq \ set $ W^s \cap Q $ of all the points of $Q$ whose
forward orbits remain in $Q$ for all iterates $n
\geq 0$. (Hint: show that $W^s \cap Q = \cap_{n= 0}^{+ \infty}T^{-n}(Q)$.)
(d) Compute the {\bf negative Lyapunov exponent (exponential forward contraction rate)}
of the Smale horseshoe: $\lim _{n \rightarrow + \infty} (1/n)
\log \|DT^n _{x} (1,0) \|$ for every point $x \in \bigcap_{n \in
\mathbb{N}}T^{-n}(Q)$.
(e) Define the \lq\lq unstable\rq\rq \ set $W^u \cap Q$ and the positive
Lyapunov exponent (exponential forward expansion rate), and compute it.
\end{exercise}
\begin{definition} \em \label{definicionMaximalHerraduraSmale} Let $T: Q =[0,1]^2 \subset \mathbb{R}^2 \mapsto \mathbb{R}^2$
be the linear Smale horseshoe. The \em maximal invariant set of $T$ in $Q$ \em is
$$\Lambda : = \bigcap _{n \in \mathbb{Z}} T^{-n}Q.$$ Note that, by the finite intersection property
of compact sets, the set $\Lambda$ is compact and nonempty. \index{set! maximal invariant} \index{maximal invariant set}
\begin{exercise}\em \label{ejercicioHerraduraSmaleHiperbolica}
Prove that the maximal invariant set $\Lambda$ of a linear Smale horseshoe is a hyperbolic set. Prove that $\Lambda$ is the Cartesian product of two Cantor sets in the interval $[0,1]$.
\end{exercise}
\end{definition}
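The Cantor structure in Exercise \ref{ejercicioHerraduraSmaleHiperbolica} can be made concrete: the horizontal slices of $\bigcap_{n \geq 0} T^n(Q)$ are the attractor of the two horizontal contractions $g_0(x) = (x+1)/5$ and $g_1(x) = (4-x)/5$ read off from the branches of $T$. The sketch below (our own illustration; the helper name \texttt{refine} is ours) iterates this cover and checks that at level $n$ it consists of $2^n$ intervals of total length $(2/5)^n$:

```python
# Refine a cover of the Cantor set of x-coordinates by applying the two
# horizontal contractions g0(x) = (x+1)/5 and g1(x) = (4-x)/5 to each interval.
def refine(intervals):
    out = []
    for a, b in intervals:
        out.append(((a + 1) / 5, (b + 1) / 5))   # image under g0 (increasing)
        out.append(((4 - b) / 5, (4 - a) / 5))   # image under g1 (decreasing)
    return out

covers = [(0.0, 1.0)]
for n in range(1, 5):
    covers = refine(covers)
    total = sum(b - a for a, b in covers)
    assert len(covers) == 2 ** n                 # 2^n disjoint intervals
    assert abs(total - (2 / 5) ** n) < 1e-12     # total length shrinks to 0
```

A symmetric construction in the vertical coordinate, using the inverse branches, produces the second Cantor factor of $\Lambda$.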
\subsection{Invariant manifolds of hyperbolic sets} \index{invariant manifold! stable}
\index{invariant manifold! unstable}
\index{hyperbolicity! uniform}
\index{transformation! hyperbolic! uniform}
\index{set! hyperbolic! uniform}
We now state some theorems of differentiable dynamics that generalize properties of Anosov diffeomorphisms on a compact manifold $M$, and in particular some of the results seen in the example of the hyperbolic linear automorphism $f = \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ on the torus $\mathbb{T}^2$.
\begin{theorem} {\bf Invariant manifolds for a uniformly hyperbolic set $\Lambda$} \index{theorem! existence of! invariant manifolds}
\label{teoremavariedadesinvariantesAnosov}
Let $f: M \mapsto M$ be a diffeomorphism and let $\Lambda \subset M$ be a compact, invariant, uniformly hyperbolic set. For $x \in \Lambda$ denote by $U_x, S_x$ the unstable and stable subspaces, respectively. Then, for each $x \in \Lambda$ there exist, and are unique:
{\bf A)} A connected submanifold $W^s(x)$, $C^1$-immersed in $M$ \em (but not necessarily embedded in $M$, nor necessarily compact), \em called the stable manifold through $x$, such that: \em
\begin{equation} \label{equationvarestable1} x \in W^s(x), \ \ \ f (W^s(x)) = W^s(f (x)),
\ \ \ T_x (W^s(x)) = S_x. \end{equation} \em
{\bf B)} A connected submanifold $W^u(x)$, $C^1$-immersed in $M$ \em (but not necessarily embedded in $M$, nor compact), \em called the unstable manifold through $x$, such that: \em
\begin{equation} \label{equationvarinestable1}x \in W^u(x), \ \ \ f (W^u(x)) = W^u(f (x)), \ \ \ T_x (W^u(x)) = U_x.\end{equation} \em
such that: \em \begin{equation}
\label{equationvarestable2} \lim _{n \rightarrow + \infty} \mbox{dist}(f^n(y), f^n(x)) = 0 \ \ \Leftrightarrow \ \ \ y \in W^s(x), \end{equation}
\begin{equation} \label{equationvarinestable2}\lim _{n \rightarrow - \infty} \mbox{dist}(f^n(y), f^n(x)) = 0 \ \ \Leftrightarrow \ \ \ y \in W^u(x).\end{equation} \em
\em If, in addition, $f$ is a $C^r$ diffeomorphism for some $r > 1$, then $W^s_x$ and $W^u_x$ are submanifolds of class $C^r$.
\end{theorem}
A proof of the theorem of existence of stable and unstable manifolds can be found in \cite{HirschPughShub}.
\begin{remark} \em
\index{subspace! unstable}
\index{bundle! unstable}
\index{invariant manifold! stable}
\index{invariant manifold! unstable}
Let $f: M \mapsto M$ be an Anosov diffeomorphism with contraction rate $ \lambda < 1$ on the stable bundle $S$ and expansion rate $\sigma > 1$ on the unstable bundle $U$.
Assuming the existence of connected $C^1$ submanifolds $W^s(x)$ and $W^u(x)$ satisfying conditions (\ref{equationvarestable1}) and (\ref{equationvarinestable1}) respectively, and knowing that the subspaces $S_x $ and $ U_x$ depend continuously on $x$, one can prove that
\begin{equation} \label{equationdistanciaestable} \limsup _{n \rightarrow + \infty} \frac{\log \mbox{dist}(f^n(y), f^n(x))}{n} \leq \log \lambda < 0 \ \ \forall \ y \in W^s(x); \end{equation}
\begin{equation} \label{equationdistanciainestable} \liminf _{n \rightarrow - \infty} \frac{\log \mbox{dist}(f^n(y), f^n(x))}{n} \geq \log \sigma >0 \ \ \forall \ y \in W^u(x).\end{equation}
\end{remark}
\begin{remark} \em \index{diffeomorphisms! Anosov}
\index{theorem! Franks}
Let us return to Anosov diffeomorphisms, as a particular case of uniformly hyperbolic systems.
Franks' Theorem, or Lemma, \cite{Franks} (see also \cite{Manning} for Anosov diffeomorphisms on the $n$-dimensional torus, for every $n \geq 2$) states the following:
{\bf Franks' Theorem or Lemma.} \em The only compact, connected, orientable surfaces without boundary
that support an Anosov diffeomorphism $f$ are those homeomorphic to the
torus ${\mathbb{T}}^2$, and $f$ is conjugate to a hyperbolic linear automorphism. \em
\index{conjugate homeomorphisms} \index{conjugacy} \index{diffeomorphisms! conjugate}
The proof of Franks' Lemma can be found in \cite{Franks}. In \cite{Manning}, Manning generalizes the last part of Franks' Lemma, proving that Anosov diffeomorphisms on the $n$-dimensional torus are conjugate to linear automorphisms. More recently, \cite{Hammerlindl-LemadeFranks} obtains the same result as Manning, but for Anosov diffeomorphisms on 3-dimensional nilmanifolds, not only on the torus.
Note: Two homeomorphisms $f: X \mapsto X$ and $g: Y \mapsto Y$, on
respective topological spaces $X$ and $Y$, are said to be \em
conjugate, or topologically equivalent, \em if there exists a
homeomorphism $h: X \mapsto Y$, called a \em conjugacy, \em such
that $g \circ h= h \circ f$. Conjugacy implies that each of the
properties of the topological dynamics of $f$ (transitivity,
nonwandering set, omega and alpha limit sets, recurrence) holds
for $f$ if and only if it holds for $g$.
By Franks' theorem, the linear automorphisms of the torus
${\mathbb{T}}^2$ are the paradigm of Anosov diffeomorphisms on
surfaces, and among them in particular $\left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ on the torus ${\mathbb{T}}^2$.
By contrast, hyperbolic sets such as the Smale horseshoe, or sets topologically conjugate to it, can be constructed inside an open set homeomorphic to a square of $\mathbb{R}^2$, on any surface
(two-dimensional manifold).
The Smale horseshoe can be generalized to dimensions greater than two, taking an $n$-dimensional cube in the role of the square $Q$ and choosing complementary dimensions of uniform contraction and of uniform expansion.
\end{remark}
\subsection{Expansiveness, or topological chaos.} \index{expansiveness} \index{chaos} \index{transformation! chaotic} \index{transformation! expansive}
Several precise definitions of \em chaos \em have been given in the mathematical literature, but these definitions are not equivalent to one another (see for example \cite{buzziEnciclopedia}). Hence, whoever refers to a \em chaotic system \em should always state precisely the definition being used, and keep in mind that the results obtained with that definition are not necessarily true if another one had been adopted. Depending on the research goal of whoever studies the dynamical system (for instance, the goal may be to study the topological dynamics of an open and dense set of orbits, or only that of $\mu$-almost every orbit when $\mu$ is invariant, or that of Lebesgue-almost every orbit when the Lebesgue measure is not invariant, etc.), one definition or another is adopted.
The study of the topological properties of the linear diffeomorphisms of the torus ${\mathbb{T}}^2$ is paradigmatic for the study, in greater generality, of chaotic systems from the topological point of view. Indeed:
\begin{definition} \em
Let $T: X \mapsto X$ be a homeomorphism of a compact metric
space $X$. We say that $T$ is (topologically) \em chaotic, \em
or \em expansive\em, if there exists a constant $\alpha >0$ (called
an \em expansiveness constant) \em such that
$$\sup_{n \in \mathbb{Z}} \mbox{$\,$dist$\,$} (T^n(x), T^n(y)) > \alpha \; \; \forall \ x \neq y \in X.$$
\end{definition}
{\bf Interpretation: } Topological chaos, or expansiveness, is a version of \lq\lq topological hyperbolicity.\rq\rq \ It means that the \em
dynamics is $\alpha$-sensitive to initial conditions: \em two
orbits with different initial states $x \neq y$ separate
by more than the distance $\alpha $, either in the future or in the past. There may be pairs of orbits, with nearby initial states $x \neq y$, that separate only in the future but not in the past, or vice versa. Typically (for example for an Anosov diffeomorphism), for most pairs of initial states $x \neq y$, the orbits separate both in the future and in the past.
The separation, in the future or in the past, beyond a uniform constant $\alpha$, of two orbits with initial states $x \neq y$ (no matter how close $x$ and $y$ are to each other) implies the following:
If one approximates the initial state $x$ with an error $\epsilon > 0$, however small, replacing it by $y$ with $0 <\mbox{$\,$dist$\,$} (x,y) \leq \epsilon$, then the state $T^n(y)$ of the system at some
instant $n \in \mathbb{Z}$ will differ by more than $\alpha >0 $ from
the state $T^n(x)$ it would have had if no error had been made in the
initial state. Sensitivity to initial conditions,
expansiveness, or topological chaos, is also called the
\lq \lq butterfly effect\rq\rq: the slight modification of the initial state
produced by the flutter of a butterfly, however slight it may be (that is, however small the difference $\epsilon >0$ produced in that initial state), causes a
\lq\lq drastic\rq\rq \ change (that is, larger than a uniform constant $\alpha >0$) in the state of the system at some other instant.
\begin{exercise}\em \index{rotation! rational} \index{rotation! irrational}
Prove that the rotation of the circle (rational or
irrational) is not expansive (i.e. it is not topologically chaotic). \em
\end{exercise}
\begin{exercise}\em \index{tent map}
Prove that the tent map $f: [0,1] \mapsto [0,1]$ is forward expansive; that is, there exists a constant $\alpha >0$ such that $$\mbox{if dist}(f^j(x), f^j(y)) < \alpha \ \forall \ j \in \mathbb{N}, \ \mbox{ then } \ x= y.$$
Hint: Prove that the derivative $|(f^n)'|$ tends uniformly to $+ \infty$ with $n$, and use $\alpha= 1/4$.
\end{exercise}
\begin{exercise}\em \index{automorphism!
linear, of the torus} Prove that the map $f = \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ on the torus ${\mathbb{T}}^2$ is expansive.
\end{exercise}
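The expansiveness asserted in this exercise can be observed numerically (an illustration, not a proof; the function names are ours): two orbits of the toral automorphism starting $10^{-9}$ apart separate beyond a fixed threshold $\alpha$, since their difference grows roughly like $\sigma^n$ along the unstable direction:

```python
import math

# The cat map f = [[2,1],[1,1]] acting on the torus R^2 / Z^2.
def f(p):
    x, y = p
    return ((2 * x + y) % 1.0, (x + y) % 1.0)

# Distance on the torus: shortest representative in each coordinate.
def torus_dist(p, q):
    dx = min(abs(p[0] - q[0]), 1 - abs(p[0] - q[0]))
    dy = min(abs(p[1] - q[1]), 1 - abs(p[1] - q[1]))
    return math.hypot(dx, dy)

alpha = 0.1
x, y = (0.3, 0.7), (0.3 + 1e-9, 0.7)   # initial separation 10^{-9}
separated = False
for _ in range(100):
    x, y = f(x), f(y)
    if torus_dist(x, y) > alpha:
        separated = True
        break
assert separated                        # the two orbits eventually split apart
```

With $\sigma = (3+\sqrt{5})/2 \approx 2.618$, the separation $10^{-9}$ exceeds $\alpha = 0.1$ after roughly $\log(10^{8})/\log \sigma \approx 19$ iterates.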
\begin{remark} \em
In \cite{Lewowicz-Expansivos}, Lewowicz proved the following results:
\vspace{.2cm} \index{expansiveness} \index{transformation! expansive} \index{theorem! Lewowicz}
{\bf Lewowicz's Theorem.} \em The only compact connected surfaces \em (two-dimensional manifolds) \em on which expansive homeomorphisms exist are those homeomorphic to the torus $\mathbb{T}^2$. \em
\vspace{.2cm}
\em Every expansive homeomorphism
of the torus ${\mathbb{T}}^2$ is
conjugate to an Anosov diffeomorphism \em (and, by Franks' theorem, conjugate
to a linear Anosov diffeomorphism). This is why it is paradigmatic
to study the linear diffeomorphisms, and $ \left(%
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}%
\right)$ in particular.
\end{remark}
\subsection{Invariant foliations for Anosov diffeomorphisms.} \index{diffeomorphisms! Anosov} \index{foliation! invariant! stable}
\index{foliation! invariant! unstable}
\label{sectionFoliacionesInvariantesDifeosAnosov}
The uniqueness of the invariant manifolds of Anosov diffeomorphisms implies that the following uncountable collection of subsets (submanifolds) $\{W^s_x\}_{x \in M}$ is a partition of $M$. Indeed:
\begin{proposition}
Let $f: M \mapsto M$ be an Anosov diffeomorphism, and let $x, y \in M$.
Then either $ W^s_x \cap W^s_y = \emptyset$ or $W^s_x = W^s_y$.
Analogously for $W^u_x$ and $W^u_y$.
\end{proposition}
{\em Proof: } By (\ref{equationvarestable2}) and (\ref{equationvarinestable2}), $y \in W^s_x$ if and only if $x \in W^s_y$.
Suppose there exists $z \in W^s_x \cap W^s_y$. Let $x' \in W^s_x$. Using (\ref{equationvarestable2}) and (\ref{equationvarinestable2}) again, together with the triangle inequality, we deduce that $$\lim_{ n \rightarrow + \infty} \mbox{dist}(f^n(y), f^n(x')) = 0.$$ Indeed, $$\mbox{dist}(f^n(y), f^n(x')) \leq $$ $$ \mbox{dist} (f^n(y), f^n(z))+\mbox{dist} (f^n(z), f^n(x))+\mbox{dist} (f^n(x), f^n(x')),$$ and these three terms tend to zero as $n \rightarrow + \infty$.
Hence $x' \in W^s_y$ for every $x' \in W^s_x$, proving that $W^s_x \subset W^s_y$. Symmetrically, $W^s_ y \subset W^s_x$. We have shown that if there exists $z \in W^s_x \cap W^s_y$, then $W^s_y = W^s_x$.
\hfill $\Box$
\begin{definition} \em {\bf Stable and unstable foliations}.\index{foliation! trivialization of}
The partition $\{W^s_x\}_{x \in M}$ of $M$ into stable manifolds is called the \em stable foliation. \em Analogously, the partition $\{W^u_x\}_{x \in M}$ of $M$ into unstable manifolds is called the \em unstable foliation. \em
{\bf Note on the geometric definition of \lq\lq foliation\rq\rq: } By definition, a foliation carries a precise geometric structure that goes beyond the mere partition of the space $M$ into pairwise disjoint immersed submanifolds, all of the same dimension $1 \leq k_1 < \mbox{dim} M $. These immersed manifolds are called the leaves of the foliation. The geometric structure of a foliation consists in the existence of an atlas of local charts $\xi$ of $M$ that are \lq\lq trivializing\rq\rq \ for the leaves of the foliation: i.e.\ the image of each $\xi$ is the cartesian product $B_{k_1} \times B_{k_2}$, where $B_{k_i}$ is the unit ball of $R^{k_i}$ ($k_1 + k_2 = \mbox{dim}M$), and $\xi$ maps the local connected components of each leaf onto the sections $B_{k_1} \times \{v_2\}$ (with $v_2 \in B_{k_2}$ fixed).
\index{foliation! dynamically defined}
In general, for dynamically defined foliations (for instance, the stable and unstable foliations of an Anosov $f$), the trivializing local charts $\xi$ exist, but they are only homeomorphisms onto their images (they are not even $C^1$). Since the leaves of the foliation are immersed submanifolds of class $C^{1}$, the \em restrictions \em of the trivializing charts to the local leaves are $C^{1}$.
\end{definition}
\begin{definition} \em {\bf Regularity of invariant foliations.} \index{foliation! regular}
\index{foliation! of class $C^0$}
A partition ${\mathcal W} := \{W _x\}_{x \in M}$ of the space $M$ into differentiable submanifolds immersed in $M$ and pairwise disjoint, all of the same dimension, is said to be \em an invariant foliation of class $C^0$ \em (from the dynamical viewpoint) if:
(1) $f (W_x) = W_{f(x)} \ \ \forall \ x \in M$ (invariance),
(2) each leaf $W_x$ is of class $C^1$, and
(3) the map $x \in M \mapsto T_x W_x$ taking each point $x$ to the tangent subspace $T_xW_x $ at $x$ to the leaf $W_x$ is continuous (that is, the tangent subspace to the leaf varies in a $C^0$ way with $x \in M$).
{\bf Note: } Condition (2), that each leaf be of class $C^1$, implies that its tangent subspace varies in a $C^0$ way with $x$ \em varying along the corresponding leaf. \em But it does not imply that this tangent subspace \em also varies in a $C^0$ way with $x$ in all directions transverse to the leaf. \em Hence condition (3) is not redundant: it is stronger than condition (2).
Analogously, for every $r \in \mathbb{N}$, the partition ${\mathcal W}$ is \em an invariant foliation of class $C^{r}$ \em (from the dynamical viewpoint) if each leaf $W_x \in {\mathcal W}$ is of class $C^{r+1}$ and the map $x \in M \mapsto T_xW_x$ is of class $C^{r}$. For this definition to apply, the ambient manifold $M$ must be assumed to be at least of class $C^{r+1}$.
\index{foliation! H\"{o}lder continuous}
Let $0 <\alpha < 1$. An invariant foliation ${\mathcal W}$ is $\alpha$-H\"{o}lder continuous, denoted of class $C^{\alpha}$, if each leaf $W_x \in {\mathcal W}$ is of class $C^1$ and the map $x \in M \mapsto T_xW_x$ satisfies:
\begin{equation}
\label{eqn101}
\mbox{dist}(W_x, W_y) \leq K \, [\mbox{dist}(x,y)]^{\alpha}\end{equation}
for some constant $K$.
The foliation is said to be Lipschitz, denoted $C^{Lip} $, if it is $1$-H\"{o}lder continuous (that is, inequality (\ref{eqn101}) holds with $\alpha= 1$).
\index{foliation! Lipschitz}
\end{definition}
By Theorem \ref{teoremavariedadesinvariantesAnosov}, we have the following result:
\begin{corollary} \index{foliation! invariant! stable} \index{foliation! invariant! unstable}
If $f$ is an Anosov diffeomorphism of class $C^1$, then the stable and unstable foliations are of class $C^0$.
\end{corollary}
{\em Proof: }
$T_x W^s_x = S_x, \ T_x W^u_x = U_x$, and the subspaces $S_x$ and $U_x$ depend continuously on $x$. \hfill $\Box$
One might expect that if $f$ is an Anosov diffeomorphism of class $C^{r+1}$ with $r \geq 0$, then the invariant stable and unstable foliations are foliations of class $C^{r}$. This statement is FALSE. In general, each leaf is $C^{r+1}$. But the $C^0$ regularity of the invariant foliations can hardly be improved at all merely by increasing the regularity of $f$.
\begin{theorem} {\bf H\"{o}lder continuidad de foliaciones invariantes para difeomorfismos de Anosov.} \index{teorema! de existencia de! foliaciones invariantes} \index{foliaci\'{o}n! H\"{o}lder continua} \index{foliaci\'{o}n! regular}
Si $f: M \mapsto M$ es un difeomorfismo de Anosov de clase $C^k$ con $k > 2$, entonces las variedades estables e inestables de $f$ son subvariedades $C^k$, y las foliaciones estables e inestables que forman son $\alpha-$H\"{o}lder continuas para cierto $0 < \alpha < 1$.
\end{theorem}
Una prueba de este teorema se puede encontrar en \cite[Theorem 2.3.1, pag. 48]{BarreiraPesin}.
\subsection{Lyapunov exponents}
Throughout the next sections we assume the following hypotheses:
$\bullet$ $M$ is a differentiable manifold, compact, connected, and endowed with a Riemannian structure.
$\bullet$ $f: M \mapsto M$ is of class $C^1$ (that is, $f$ is differentiable and its derivative $df$ is continuous). The map $f$ is not necessarily invertible.
\begin{notation} \em
\label{notationDiferenciable}
$\bullet $ $f: M \mapsto M$ is a diffeomorphism if it is of class $C^1$, invertible, and its inverse is also of class $C^1$. We denote $f \in \mbox{Diff }^1(M)$.
$\bullet$ If moreover $f$ and its inverse are of class $C^r$ for some $r \geq 2$ (for which the manifold $M$ must also be at least of class $C^r$), we write $f \in \mbox{Diff }^r (M)$.
$\bullet$ We write $f \in \mbox{Diff }^{ 1 + \alpha }$ when $f \in \mbox{Diff }^1(M)$ and moreover $df: TM \mapsto TM$ is $\alpha$-H\"{o}lder continuous for some constant $0 < \alpha < 1$; i.e.\ there exist constants $\delta >0$ and $K >0$ such that
$$ \| df_x - df_y\| \leq K (\mbox{dist}(x,y) )^{\alpha}\ \ \forall \ x,y \in M \mbox{ such that } \mbox{dist}(x,y) < \delta.$$
On the left-hand side of the inequality above, $\| A \|$ denotes the norm of the linear transformation $A \in L(R^k)$, where $k = \mbox{dim}(M)$; that is, $\|A\| = \max\{\|Av\|: v \in \mathbb{R}^k, \|v\| = 1\}$.
$\bullet$ We write $f \in \mbox{Diff }^{1 + Lip}(M)$ if $f \in \mbox{Diff } ^1(M)$ and moreover $df$ is Lipschitz, i.e.\ there exist constants $\delta >0$ and $K >0$ satisfying the $\alpha$-H\"{o}lder continuity inequality with $\alpha = 1$.
\end{notation}
\begin{definition} {\bf Lyapunov exponents} \index{point! weakly regular} \index{point! regular} \index{Lyapunov exponents! of regular points} \index{Lyapunov exponents} \index{regularity! of points}
\index{regularity! weak} \label{definicionExponentesLyapunov}
\em Let $f \in \mbox{Diff }^1(M)$.
A point $x \in M$ is called \em weakly regular \em if the following limits exist for every $v \in T_xM \setminus \{{\bf 0}\}$:
$$\lim_{n \rightarrow + \infty} \frac{\log (\|df_x^n(v)\|)}{n} ; \ \ \ \ \ \lim_{n \rightarrow + \infty} \frac{\log (\|df_{x}^{-n} (v)\|)}{-n}.$$
These two limits (real numbers) are called the \em forward and backward Lyapunov exponents, respectively, of the orbit of $x$ in the direction $[v]$. \em
Lyapunov exponents are not defined at non-regular points.
Weakly regular points and (forward) Lyapunov exponents can also be defined for $f \in C^1(M)$ even when $f$ is not a diffeomorphism.
\vspace{.3cm}
Later we will see Oseledets' Theorem, which proves, among other results, the following:
\index{theorem! Oseledets}
\begin{center}
\em For every $f$-invariant probability measure $\mu$, $\mu$-almost every point $x \in M$ is regular. \em
\end{center}
In other words, the set of non-regular points has measure zero for every $f$-invariant probability measure $\mu$.
{\bf Note on the concept of regularity:} \index{point! regular}
\index{point! Lyapunov regular} \index{regularity! of points} \index{regularity! Lyapunov} In the literature, a point $x$ is usually called regular when it satisfies a condition stronger than the one we adopted in Definition \ref{definicionExponentesLyapunov}. Indeed, one imposes additional conditions on the existence of invariant subspaces such that, for every vector in them, the forward Lyapunov exponent coincides with the backward Lyapunov exponent. Regular points satisfying this stronger condition will be called Lyapunov-regular (see Definition \ref{definitionLyapunovRegulares}). Oseledets' theorem states that $\mu$-almost every point $x \in M$ is not only weakly regular in the sense of our Definition \ref{definicionExponentesLyapunov}, but also Lyapunov-regular in the sense of Definition \ref{definitionLyapunovRegulares}.
\end{definition}
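For a linear map the differential is constant, $df^n_x = A^n$, so the first limit above can be approximated directly by iterating and renormalizing. A minimal Python sketch (the matrix and the initial direction are illustrative choices, not taken from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])    # constant differential of a linear map of the plane
v = np.array([1.0, 0.0])      # a generic initial direction

n = 2000
w = v.copy()
log_norm = 0.0
for _ in range(n):
    w = A @ w
    nrm = np.linalg.norm(w)
    log_norm += np.log(nrm)   # accumulate log ||A^n v|| step by step
    w /= nrm                  # renormalize at each step to avoid overflow

chi_plus = log_norm / n       # forward Lyapunov exponent of the direction [v]
print(chi_plus)               # close to log((3 + sqrt 5)/2) ~ 0.9624
```

For a generic direction the estimate converges to the logarithm of the dominant eigenvalue, at rate $O(1/n)$.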
The Lyapunov exponents depend on the orbit of $x$, but not on which point of that orbit is chosen. Indeed:
\begin{exercise}\em
Prove that if $x$ is weakly regular, then for every fixed $k \in \mathbb{Z}$, the point $y = f^k(x)$ (that is, any point on the orbit of $x$) is also weakly regular, and the set of Lyapunov exponents at $y$ coincides with that at $x$.
\end{exercise}
The aim of the following exercise is to anticipate the statement of Oseledets' theorem (to be stated at the end of this section).
\begin{exercise}\em \label{ejercicioExponentesLyapunov0} \index{Lyapunov exponents! of regular points} \index{Lyapunov exponents} \index{point! weakly regular}
\index{point! regular}
\index{regularity! of points}
\index{regularity! weak}
Let $x$ be a weakly regular point.
(a) Let $\chi$ be the forward (or backward) Lyapunov exponent of the orbit of $x$ in a direction $u \neq 0$, and let $[u] \subset T_xM$ be the one-dimensional subspace generated by $u$. Prove that for every $0 \neq u' \in [u]$ the forward (or backward, respectively) Lyapunov exponent in the direction of $u'$ is the same as that of $u$.
(b) Let $\chi^+_u$ and $\chi^+_v$ be the forward Lyapunov exponents of two linearly independent directions $0 \neq u, v \in T_xM$. Let $\chi^-_u$ and $\chi^-_v$ be the backward Lyapunov exponents of these same directions $u$ and $v$. Assume $\chi^+_u \neq \chi^+_v, \ \chi^-_u \neq \chi^-_v$. Let $w = u + v$. Prove that $\chi^+_w = \max \{\chi^+_u, \chi^+_v\}$ and $\chi^-_w = \min \{\chi^-_u, \chi^-_v\}$.
Hint: Assume $\chi^+_u > \chi^+_v$. Use the first limit in Definition \ref{definicionExponentesLyapunov} with the vectors $u$ and $v$ to prove that for every $\epsilon >0$ there exists $N \in \mathbb{N}$ such that:
$$\|df^n(w)\| \geq \|df^n(u) \| -\| df^n(v)\| \geq $$
$$e^{n(\chi_u ^+ - \epsilon)}( \|u\| - e^{n(\chi^+_v - \chi^+_u + 2 \epsilon)}\|v\| ) \ \ \forall \ n \geq N. $$
Fix $\epsilon < (\chi^+_u - \chi^+_v)/2$. Take logarithms, divide by $n$, and let $n \rightarrow + \infty$.
(c) As in part (b), assume now $\chi_u^+ = \chi_v^+$ and $\chi_u ^- = \chi_v ^-$. Prove that $\chi_w ^+ \leq \chi_u^+= \chi_v^+, \ \chi_w ^- \geq \chi_u^-= \chi_v^- $.
(d) Let $u_1, \ldots, u_k$ be a basis of $T_xM$. Assume that the forward Lyapunov exponents $\chi^+_{i}$ and the backward ones $\chi^-_i$ in the directions $u_i$ satisfy $\chi^+_i \neq \chi^+ _j $ for all $i \neq j$. Prove that for every $0 \neq u = \sum_{i= 1}^k b_i u_i \in T_xM$:
$$\chi^+_u = \max_{1 \leq i \leq k; \ b_i \neq 0} \chi^+_i.$$
$$\chi^-_u = \min_{1 \leq i \leq k; \ b_i \neq 0} \chi^-_i.$$
Hint: use induction on the number of coefficients $b_i \neq 0$, together with part (b).
\end{exercise}
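Part (b) can be checked numerically on a diagonal linear map, where the exponent of each coordinate direction is known exactly. A hedged sketch (the values 3 and 0.5 are illustrative, not from the text):

```python
import numpy as np

# Diagonal linear map: the coordinate directions have exact exponents
# log 3 > 0 and log 0.5 < 0.
A = np.diag([3.0, 0.5])

def chi_plus(v, n=400):
    """Estimate the forward Lyapunov exponent of direction v under A."""
    w = np.array(v, dtype=float)
    acc = 0.0
    for _ in range(n):
        w = A @ w
        nrm = np.linalg.norm(w)
        acc += np.log(nrm)
        w /= nrm              # renormalize to avoid overflow/underflow
    return acc / n

u = [1.0, 0.0]    # exponent log 3
v = [0.0, 1.0]    # exponent log 0.5
w = [1.0, 1.0]    # u + v: the exponent of the sum is the max, log 3
print(chi_plus(u), chi_plus(v), chi_plus(w))
```

The expanding component of $w$ dominates after a few iterates, which is exactly the mechanism behind the hint: $\chi^+_w = \max\{\chi^+_u, \chi^+_v\}$.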
\begin{exercise}\em \label{ejercicioExponentesLyapunov}
Let $x$ be a weakly regular point.
For each $k \geq 0$ let $E^1_{f^k(x)}$ and $E^2_{f^k(x)}$ be two linearly independent subspaces (of nonzero dimensions) of $T_{f^k(x)}M$, invariant under $f$ (that is, $ E^i_{f^{k+1}(x)} = df_{f^k(x)} E^i_{f^k(x)} $ for all $k \geq 0$ and for $i = 1, 2$). Assume the following hypothesis:
{\bf Hypothesis I} For each $i$, the forward and backward Lyapunov exponents coincide with one another and in all directions of the subspace $E^i_x$. Call this common value $\chi^i$. Moreover, for every $0 \neq v \in E^1_x \oplus E^2_x$, the forward Lyapunov exponent of $v$ is greater than or equal to the backward Lyapunov exponent of $v$.
(a) Prove that:
(i) For every $k \geq 0$ and each $i= 1, 2$, the forward and backward Lyapunov exponents at $f^k(x)$ in the direction $E^i_{f^k(x)}$ coincide with one another and with $\chi^i$.
(ii) Prove that for every $0 \neq v \in E^1_{f^k(x)} \oplus E^2_{f^k(x)} $ such that $ v \notin E^1_{f^k(x)} \cup E^2_{f^k(x)}$, the forward Lyapunov exponent $\chi^+_v(x)$ of $[v]$ equals $\max\{\chi^1, \chi^2\}$, and the backward Lyapunov exponent $\chi_v ^-(x)$ of $[v]$ equals $\min\{\chi^1, \chi^2\}$.
(iii) Deduce that if $\chi_1 = \chi_2 = \chi $, then the subspace $E_{f^k(x)}= E^1_{f^k(x)} \oplus E^2_{f^k(x)}$ is invariant, of dimension larger than that of the subspaces $E^1_{f^k(x)} $ and $ E^2_{f^k(x)}$ that generate it, and the forward and backward Lyapunov exponents of every vector ${\bf 0} \neq v \in E_{f^k(x)}$ coincide with $\chi$.
(b) Extend (state and prove) the results of part (a), assuming there exists a splitting $T_x M= E^1_x \oplus E^2_x \oplus \ldots \oplus E^{h}_x$ satisfying Hypothesis I.
(c) Deduce, assuming (b), that the set of distinct Lyapunov exponents at any regular point satisfying Hypothesis I is finite, with cardinality at most dim($M$).
\end{exercise}
The previous exercise motivates asking when the assumed hypothesis holds on the existence of the invariant subspaces $E^i_x$ for which the forward and backward Lyapunov exponents exist, are equal to one another, and are different for different values of $i$. This question motivates the following definition of regularity:
\begin{definition} \em \label{definitionLyapunovRegulares} {\bf Lyapunov-regular points} \index{point! regular}
\index{regularity! of points}
\index{regularity! Lyapunov}
\index{point! Lyapunov regular}
A point $x \in M$ is called \em Lyapunov regular \em if
there exists a splitting of the tangent space $$T_xM = E^1_x \oplus E^2_x \oplus \ldots \oplus E^{k(x)}_x $$ \index{splitting! Oseledets} (which may reduce, as a particular case, to $k(x)= 1, \ \ T_x M= E^1_x$), such that \em the forward and backward Lyapunov exponents $\chi_i(x)$ exist and coincide with one another \em \index{Lyapunov exponents! of regular points}
\index{Lyapunov exponents} in every direction $ [v] \subset E^i_x$, and moreover $$\chi_1(x) < \chi_2(x) <\ldots <\chi_{k(x)}(x).$$
In other words:
$$\lim_{n \rightarrow + \infty} \frac{\log (\|df_x^n(v)\|)}{n} = \lim_{n \rightarrow - \infty} \frac{\log (\|df_{x}^{n} (v)\|)}{n} = \chi_i(x) \ \ \forall \ \{{\bf 0}\}\neq [v] \subset E^i_x.$$
The subspaces $E^i_x$ at a Lyapunov regular point are called \em Oseledets subspaces. \em \index{subspace! Oseledets}
\end{definition}
{\bf Note: } Exercise \ref{ejercicioExponentesLyapunov} shows that every Lyapunov-regular point is regular in the sense of Definition \ref{definicionExponentesLyapunov}. The converse is false (see the example in Exercise \ref{ejerciciopolonortepolosur}).
\vspace{.3cm}
The following exercise establishes that Lyapunov regularity, the splitting, and the Lyapunov exponents depend on the orbit, and not on which point $x$ of the orbit is chosen.
\begin{exercise}\em Let $x$ be a Lyapunov-regular point.
(a) Prove that the splitting $T_xM = \bigoplus_{i= 1}^{h(x)} E^i(x)$ and the set of Lyapunov exponents are unique.
(b) Prove that for every $k \in \mathbb{Z}$ the point $f^k(x)$ is also Lyapunov regular, with $E^i_{f^k(x)} = df^k E^i(x) \ \forall \ 1 \leq i \leq h(f^k(x)) = h(x), $
and $\chi_i(x) = \chi_i(f^k(x))$.
\end{exercise}
{\bf Question: } Are Lyapunov regular points abundant?
The following result is a fundamental theorem of Differentiable Ergodic Theory, and it answers the question above affirmatively, at least from the measurable point of view, with respect to invariant measures:
\begin{theorem}
{\bf (Oseledets) } \label{theoremOseledecs} \index{theorem! Oseledets}
Let $M$ be a compact Riemannian manifold of finite dimension. Let \em $f \in \mbox{Diff }^1(M)$. \em Then
{\bf (a) } The set $R$ of Lyapunov regular points of $f$ is measurable.
{\bf (b) } The functions assigning to each point $x \in R$ its Lyapunov exponents are measurable.
{\bf (c) } The set $R$ has total probability \em (for every $f$-invariant probability measure $\mu$, one has $\mu(R)= 1$).
\end{theorem}
V.I. Oseledets' proof of Theorem \ref{theoremOseledecs} can be found in \cite{Oseledecs} for diffeomorphisms preserving an ergodic Lebesgue measure, and in \cite{Oseledecs2} in general.
Other proofs of Oseledets' theorem can be found, for instance, in \cite{BarreiraPesin}, in \cite[Chap. IV \S 10]{Mane} (see also \cite{ManeIngles} and \cite{VianaOseledets}), and in \cite{Raghunathan}.
Generalizations of Oseledets' Theorem, called Multiplicative Ergodic Theorems, state results in which $df_x$ is replaced by a linear map depending on $x$ on a finite-dimensional vector space, or even by a cocycle on certain infinite-dimensional Banach spaces. For instance, \cite{MargulisMultiplicativeErgodicTheorem} proves a Multiplicative Ergodic Theorem generalizing Oseledets' Theorem to certain cocycles on uniformly convex Banach spaces.
\subsection{Nonuniform hyperbolicity} \label{sectionHiperbNoUniforme}
{\bf Interpretation of nonzero Lyapunov exponents: } \index{Lyapunov exponents} \index{Lyapunov exponents! nonzero}
We shall see why the Lyapunov exponents, when nonzero, are interpreted as \em the asymptotic exponential rate of growth (dilation) or decay (contraction) \em in the future of the norm of tangent vectors under iteration of the differential, that is, under $df^{n}$.
Indeed, suppose that a forward and backward Lyapunov exponent of the orbit of $x$, in the direction $[s] \subset T_xM$, is $\chi(x) <0$. Then, from the definition of the limit in Definition \ref{definicionExponentesLyapunov}, for every $\epsilon >0$ there exists $N \geq 1$ such that:
$$\|df^n_x(s)\| \leq e^{n(\chi + \epsilon)} \|s\| \ \forall \ n \geq N.$$
Hence, for certain real numbers $C(x) >0$ and $\lambda(x) = e^{\chi(x) + \epsilon} <1$ (after fixing $0 < \epsilon < -\chi(x)$), we have
\begin{equation}\label{equationChi+}\|df^n(s)\| \leq C(x) [\lambda(x)] ^n \|s\| \ \ \forall \ n \geq 0, \ \ \mbox{ where }0 < \lambda(x) <1.\end{equation}
(We prove in detail the existence of $C(x) >0$ and the inequality above in Lemma \ref{lemaHiperbolicidadNoUniforme}.)
The inequality above means that, in the direction $[s]$, the $n$-th differential contracts the norms of vectors exponentially in the future with coefficient $0<\lambda = e^{\chi + \epsilon} < 1$. This coefficient tends to $e^{\chi}$ as $\epsilon \rightarrow 0 ^+$ (and hence as $n \rightarrow + \infty$). Thus a negative Lyapunov exponent $\chi$ is asymptotically equal to $\log \lambda$. We therefore say that \em a negative Lyapunov exponent $\chi(x)$ is the asymptotic exponential rate of contraction in the future (or of dilation towards the past) by the derivative of $f^n$ in the direction $[s]$.\em
Analogously, if for the same orbit of $x$ there exists some direction $[u] \subset T_xM $ for which the forward and backward Lyapunov exponent is $\chi(x) > 0$, then there exist a real number $C(x) >0$ and a real value $\sigma(x) = e^{\chi(x) - \epsilon} > 1$ (after fixing $0 < \epsilon < \chi(x)$) such that:
\begin{equation}\label{equationChi-}\|df^{-n}(u)\| \leq C(x) [\sigma(x)] ^{-n} \|u\| \ \ \forall \ n \geq 0, \ \ \mbox{ where } \sigma(x) >1.\end{equation}
(We will prove the existence of the constant $C(x)>0$ and the inequality above in Lemma \ref{lemaHiperbolicidadNoUniforme}.) Since $\sigma $ is asymptotically equal to $ e^{\chi}$, we say that \em a positive Lyapunov exponent $\chi(x)$ is the asymptotic exponential rate of contraction towards the past (or of dilation towards the future) by the derivative of $f^{-n}$ in the direction $[u]$.\em
\vspace{.3cm}
Observe the similarities and differences between inequalities (\ref{equationChi+}) and (\ref{equationChi-}) and those of Definition \ref{definicionHiperbolicidadUniforme} of uniform hyperbolicity (inequalities (\ref{equationAnosovStable}) and (\ref{equationAnosovUnstable})). The similarities justify the following definition:
\newpage
\begin{definition} \label{definitionHiperbolicidadNoUniforme} \index{set! hyperbolic! nonuniform} \index{transformation! hyperbolic! nonuniform} \index{hyperbolicity! nonuniform} \index{nonuniformly hyperbolic}
{\bf Nonuniform hyperbolicity} \em
Let $\Lambda \subset M$ be a measurable $f$-invariant set: $f^{-1}(\Lambda)= \Lambda$ ($\Lambda$ is not necessarily compact).
We say that \em $f$ is nonuniformly hyperbolic on $\Lambda$ \em (or that $\Lambda$ is a nonuniformly hyperbolic set for $f$) if for every point $x \in \Lambda$ there exists a splitting $T_xM = S_x \oplus U_x$ that depends measurably on $x \in \Lambda$ and is $df$-invariant, i.e.\ \index{splitting! hyperbolic} \index{bundle! unstable} \index{subspace! unstable}
\noindent $\bullet$
$dfS_x = S_{f(x)}, \ \ df U_x = U_{f(x)} \ \ \forall \ x \in \Lambda,$
and there exist real numbers $C(x)>0$ and $0 \leq \lambda (x) < 1 < \sigma(x)$, depending measurably on $x \in \Lambda$, such that
\noindent $\bullet$ $\lambda(f(x)) = \lambda(x), \ \ \sigma(f(x)) = \sigma (x)$,
\noindent $\bullet$ the two inequalities (\ref{equationChi+}) and (\ref{equationChi-}) hold; that is, for every $n \geq 0$ and all $u \in U_x$ and $s \in S_x$:
$$\|df^n s \| \leq C(x) \lambda(x)^n \|s\| , \ \ \ \ \|df^{-n} u\| \leq C(x) \sigma(x)^{-n} \|u\| $$
\begin{remark} \em
\label{remarkSplittingMedibleHiperbolico} {\bf Nonuniform Hyperbolicity.}
Unlike Definition \ref{definicionHiperbolicidadUniforme} of uniform hyperbolicity on compact sets, nonuniform hyperbolicity does not imply continuity of the splitting $S_x \oplus U_x$ as $x$ varies in $\Lambda$. But it does require, by definition, that the splitting be measurable.
\end{remark}
{\bf Note: }
If $C(x)$, $\lambda(x)$ and $\sigma(x)$ are constants independent of $x \in \Lambda$, and moreover $\Lambda$ is compact, then $f$ is said to be uniformly hyperbolic on $\Lambda$, or $\Lambda$ is said to be a uniformly hyperbolic set for $f$, in agreement with Definition \ref{definicionHiperbolicidadUniforme}.
\end{definition}
\begin{lemma}
\label{lemaHiperbolicidadNoUniforme} Let $\Lambda$ be a measurable $f$-invariant set such that every $x \in \Lambda$ is a Lyapunov regular point whose splitting $E^1_x \oplus E^2_x \oplus \ldots \oplus E^r_x = T_xM $ depends measurably on $x $ \em (and so does $r= r(x)$), \em and whose respective Lyapunov exponents
$\chi_1(x) < \chi_2(x) < \ldots < \chi_i(x) < \ldots \chi_r(x)$ are {\bf \em all nonzero} and depend measurably on $x$. Then $\Lambda$ is a nonuniformly hyperbolic set. \em
More explicitly, there exist a measurable splitting $T_xM= S_x \oplus U_x$, invariant under $df$, and measurable functions $C(x) >0$ and $0 < \lambda(x) < 1 < \sigma(x)$, with $\lambda(f(x))= \lambda(x)$ and $\sigma(f(x)) = \sigma(x)$,
such that inequalities (\ref{equationChi+}) and (\ref{equationChi-}) hold for every $n \geq 0$, every $u \in U_x$, every $s \in S_x$, and every $x \in \Lambda$.
\end{lemma}
{\em Proof: }
Let:
\hfill $\alpha := \max\{\chi_i(x): 1 \leq i \leq r, \ \chi_i(x) <0\} < 0,$
\hfill $\beta := \min\{\chi_i(x): 1 \leq i \leq r, \ \chi_i(x) >0\} >0.$
Since $\alpha$ and $\beta$ are the maximum and minimum of measurable functions, they are measurable. Fix a constant $\epsilon >0$ small enough that
$\alpha + \epsilon <0, $ $ \beta - \epsilon > 0.$
Write $$0 < \lambda = \lambda(x) := e^{\displaystyle \alpha + \epsilon} < 1, \ \ \ \sigma = \sigma(x) := e^{\displaystyle \beta - \epsilon} > 1.$$
Since $\lambda(x)$ and $\sigma(x)$ are compositions of continuous functions with measurable functions, they are measurable.
By construction, since the Lyapunov exponents are the same at $x$ and at $f(x)$, we have
$\lambda(x)= \lambda (f(x)), $ $ \sigma(x) = \sigma(f(x)).$
Let
$$S_x := \oplus \{E_x^i: 1 \leq i \leq r, \ \chi_i(x) <0 \},$$
$$U_x := \oplus \{E_x^i: 1 \leq i \leq r, \ \chi_i(x) >0 \},$$
where $\oplus_{i= 1}^r E_x^i = T_xM$ is the splitting of the tangent space at the point $x$ into its Oseledets subspaces. These subspaces exist by Definition \ref{definitionLyapunovRegulares} of Lyapunov regular point, and by hypothesis they depend measurably on $x$. The functions $\chi_i(x) $ are measurable. Hence the preimages of ${\mathbb{R}^+}$ and of $\mathbb{R}^-$ under $\chi_i: T\Lambda \mapsto \mathbb{R}$ are measurable sets. Finally, the direct sum of a finite collection of measurable subbundles is a measurable subbundle. We conclude that $S_x $ and $U_x$ are measurable subbundles of $T \Lambda$.
By construction, since the Oseledets subspaces are invariant under $df$, we have
$$df S_x = S_{f(x)}, \ \ \ df U_x = U_{f(x)}.$$
Applying the result of Exercise \ref{ejercicioExponentesLyapunov}, we obtain:
for every $s \in S_x$ the forward (and also the backward) Lyapunov exponent is at most $\alpha < 0$; for every $u \in U_x$ the backward (and also the forward) Lyapunov exponent is at least $\beta >0$. More explicitly:
\begin{equation}\label{eqn35a}\chi^+(x,s) = \lim_{n \rightarrow + \infty} \frac {\log \displaystyle { \|df^n(s)\|}}{n} \leq \alpha < \alpha + \epsilon = \log \lambda < 0 \ \ \forall \ 0 \neq s \in S_x,\end{equation}
\begin{equation}\label{eqn35b}\chi^-(x,u) = \lim_{m \rightarrow - \infty} \frac{\log \displaystyle {\|df^m(u)\|} }{m} \geq \beta > \beta - \!\epsilon = \log \sigma > \!0 \; \forall \; 0 \neq u \in U_x.\end{equation}
Let us prove that, for each $0 \neq s \in S_x$ and each $0 \neq u \in U_x$, the following real numbers $H(x,s) \ , K(x,u) \ >0$ exist:
\begin{equation}\label{eqn35ff}H(x, s) := \sup_{n \geq 0} \frac{\|df_x^n(s)\|}{ \ \lambda^n \, \|s\| \ }, \ \ \ \ \ \ K(x, u) := \ \sup_{n \geq 0} \frac{\|df_x^{-n}(u)\|}{\ \sigma^{-n} \, \|u\| \ }.\end{equation}
Indeed, fix $x, u, s$. In (\ref{eqn35a}) and (\ref{eqn35b}), apply the definition of the limit, multiply by $n$ (with $|n|$ large enough), and take exponentials. (One must take care that when $n$ is negative, multiplying by $n$ reverses the inequalities.) We conclude that there exists $N= N(x,u,s) \geq 0$ such that
$${\|df^n(s)\|}\ / \ { \lambda^{n} \, \|s\| \ } < 1, \ {\|df^{-n}(u)\|}\ / \ {\ \sigma^{-n }\, \|u\| \ } < 1,$$
for every $n \geq N \geq 0$. Hence the supremum defining $H(x,s)$, as well as the supremum defining $K(x,u)$, exists and is a nonnegative real number (because for $|n| \geq N$ all the quotients whose supremum we seek are less than 1, while for $0 \leq |n| \leq N$ the quotients to be maximized are positive and finitely many).
We claim that the real numbers $K(x), H(x) >0$ defined by
\begin{equation}\label{eqn35fh} H(x) := \sup \big\{ H(x,s) : \ \ s \in S_x, \ \|s\| = 1 \big \},\end{equation}
\begin{equation}\label{eqn35fj}K(x) := \sup \big\{ K(x,u) : \ \ u \in U_x, \ \|u\| = 1 \big \} \end{equation}
exist.
We prove that $K(x)$ is a real number (the proof that $H(x)$ is real is similar). Take a basis $u_1, \ldots, u_{k}$ of $U_x$, where $k = \mbox{dim}(U_x)$, formed by vectors $u_i$ of norm 1, all lying in the Oseledets subspaces of Definition \ref{definitionLyapunovRegulares} of Lyapunov regular point. Then each $u \in U_x$ can be written as
$ u = \sum_{i= 1}^{k} b_i u_i.$
If $ \|u\| = 1$, then there exists $M(x)$ such that $ 0 \leq |b_i| \leq M(x) \ \forall \ 1 \leq i \leq k.$ Indeed, once the basis is fixed, each $|b_i| $ is a continuous real function of the vector $u \in U_x$ (it is the absolute value of the $i$-th coordinate functional of $u$ in the fixed basis, hence continuous). By Weierstrass' theorem, the continuous function $|b_i|$ attains a maximum $M_i$ on the compact subset $\{u \in U_x: \|u\| = 1\} \subset T_xM$. It then suffices to take $M(x) := \max_{i= 1}^{k} M_i$.
We have:
\begin{equation}\label{eqn35y}\frac{\|df^{-n}(u)\|}{\sigma^{-n}} \leq \sum_{i= 1}^{k} |b_i| \frac{\|df^{-n}(u_i)\|}{{\sigma^{-n}}} \leq M(x) \sum_{i= 1}^{k} K(x, u_i) =: K_1(x) \ \ \forall \ n \geq 0. \end{equation}
The first inequality uses the triangle inequality for the norm. The last inequality uses $|b_i| \leq M(x)$ and the definition of $K(x, u_i)$.
This proves that $K_1(x) < + \infty$, defined by the equality in (\ref{eqn35y}), exists. Since the left-hand inequality in (\ref{eqn35y}) holds for every $u \in U_x$ with $\|u\|= 1$, the supremum $K(x)$ defined in (\ref{eqn35fj}) satisfies $K(x) \leq K_1(x) < + \infty$. The existence of $H(x) < + \infty$ defined by (\ref{eqn35fh}) is proved analogously.
From the definitions of the numbers $H(x,s), K(x,u), H(x), K(x)$ in the equalities (\ref{eqn35ff}), (\ref{eqn35fh}) and (\ref{eqn35fj}), setting $C(x) = \max\{K(x), H(x)\}$, we deduce:
$$ \|df^n(s) \| \leq C(x) \lambda(x)^n \ \|s\| \ \forall n \geq 0, \ \forall \ s \in S_x,$$
$$\|df^{-n}(u) \| \leq C(x) \sigma(x)^{-n} \ \|u\| \ \forall n \geq 0, \ \forall \ u \in U_x.$$
Indeed, these inequalities hold when $\|s\| = 1$ and $\|u\|= 1$ by the definitions of $H(x)$ and $K(x)$; they then hold for every $s \in S_x$ and every $u \in U_x$ by the linearity of $df^n$.
This completes the proof of Lemma \ref{lemaHiperbolicidadNoUniforme}.
\hfill $\Box$
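For the linear example considered earlier, the suprema (\ref{eqn35ff})--(\ref{eqn35fj}) can be approximated by a finite scan over $n$, since the quotients drop below 1 for large $n$. A minimal sketch; the choice $\epsilon = 0.1$ and the cutoff $n < 16$ are illustrative (the cutoff also keeps floating-point error along the complementary eigendirection negligible):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)          # symmetric matrix: real spectrum
u = eigvecs[:, np.argmax(eigvals)]           # unstable (expanding) unit eigenvector
s = eigvecs[:, np.argmin(eigvals)]           # stable (contracting) unit eigenvector
chi = np.log(np.max(eigvals))                # positive exponent; the other one is -chi

eps = 0.1                                    # any 0 < eps < chi works
lam = np.exp(-chi + eps)                     # e^{alpha + eps} with alpha = -chi
sigma = np.exp(chi - eps)                    # e^{beta - eps}  with beta  =  chi

Ainv = np.linalg.inv(A)
H = max(np.linalg.norm(np.linalg.matrix_power(A, n) @ s) / lam ** n
        for n in range(16))
K = max(np.linalg.norm(np.linalg.matrix_power(Ainv, n) @ u) / sigma ** (-n)
        for n in range(16))
C = max(H, K)
print(H, K, C)   # here both suprema are attained at n = 0, so C = 1
```

For exact eigendirections both quotients are $e^{-\epsilon n} \leq 1$, so the suprema are attained at $n=0$; for nonlinear or nonuniformly hyperbolic systems $C(x)$ is genuinely larger than 1 and varies with $x$.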
\subsection{Regi\'{o}n de Pesin y medidas hiperb\'{o}licas}
En esta secci\'{o}n $f\in \mbox{Diff }^1(M)$ y $M$ es una variedad compacta y riemanniana.
\begin{definition}
\label{definicionRegionDePesin} \em \index{Lyapunov exponents} \index{Lyapunov exponents! nonzero} \index{Pesin! region of} \index{Pesin region}
The \em Pesin region $P_f \subset M$ \em is the set of Lyapunov-regular points $x \in M$ whose Lyapunov exponents $\chi^1_x < \chi^2_x < \ldots < \chi^{h(x)}_x$ are all nonzero.
\end{definition}
Note that, by Oseledets' Theorem \ref{theoremOseledecs}, the Pesin region is measurable.
For some diffeomorphisms the Pesin region may be empty. For example, trivially, if $f$ is the identity then $P_f = \emptyset$. Another example: the rotations of the sphere $S^2$ ($f$ is the rotation of the sphere by a constant angle around a diameter of $S^2$, the axis through the north and south poles): $P_f = \emptyset$.
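These examples can be probed numerically. For a circle map, the forward Lyapunov exponent of a point $x$ is the Birkhoff average $\lim_{n} \frac{1}{n}\sum_{k=0}^{n-1} \log |f'(f^k(x))|$. The following Python sketch (the maps and parameters are chosen only for illustration; they are not part of the text) estimates this average for a rigid rotation, where it vanishes at every point (so $P_f = \emptyset$), and for a circle diffeomorphism with an attracting fixed point, where generic orbits acquire a negative exponent close to $\log f'(1/2)$:

```python
import math

def lyapunov_exponent(f, df, x0, n=2000):
    """Estimate the forward Lyapunov exponent of x0 as the Birkhoff
    average of log|f'| along the orbit of the circle map f (mod 1)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += math.log(abs(df(x)))
        x = f(x) % 1.0
    return total / n

# Rigid rotation by an irrational angle: f' == 1 everywhere, so the
# exponent is 0 at every point and the Pesin region is empty.
a = math.sqrt(2) - 1
chi_rot = lyapunov_exponent(lambda x: x + a, lambda x: 1.0, 0.3)

# A circle diffeomorphism f(x) = x + 0.1 sin(2 pi x) (mod 1) with a
# repelling fixed point at 0 and an attracting one at 1/2: generic
# orbits converge to 1/2 and the averaged exponent approaches
# log f'(1/2) = log(1 - 0.2 pi) < 0.
eps = 0.1
f = lambda x: x + eps * math.sin(2 * math.pi * x)
df = lambda x: 1 + eps * 2 * math.pi * math.cos(2 * math.pi * x)
chi_sink = lyapunov_exponent(f, df, 0.25)

print(chi_rot, chi_sink)
```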
\begin{exercise}\em
(a) Construct an orientation-preserving diffeomorphism $f: S^1 \mapsto S^1$ of the circle $S^1$ such that $P_f = \emptyset$ and such that on no open set $V \subset S^1$ is the derivative $f'$ identically equal to $1$.
(b) Do the same on the sphere $S^2$, with the condition that on no open set $V \subset S^2$ is the derivative $df$ identically equal to the identity $Id$, nor to $-Id$.
(c) Construct an example on the circle $S^1$ satisfying the conditions of part (a) and, in addition, such that the set of Lyapunov-regular points is finite.
(d) Do there exist examples on the circle satisfying the conditions of part (a) and, in addition, such that the set of Lyapunov-regular points is infinite?
(e) Construct a diffeomorphism $f: S^1 \mapsto S^1$ such that the Pesin region $P_f \neq \emptyset$ but does not coincide with the set of all Lyapunov-regular points.
(f) Prove that for every diffeomorphism $f$ of the circle the Pesin region $P_f$ is either empty, finite, or countably infinite, and find examples of all three cases (hint: prove that every $x \in P_f$ is isolated in $P_f$).
(g) Show that there exist diffeomorphisms $f: S^2 \mapsto S^2$ of the sphere whose Pesin region $P_f$ is uncountable (hint: the Smale Horseshoe defined in \ref{definicionHerradura}).
\end{exercise}
\begin{definition}
\em {\bf (Hyperbolic measure)} \label{definitionMedidaHiperbolica} \index{measure! hyperbolic}
\index{hyperbolicity! on the Pesin region}
An $f$-invariant probability measure $\mu$ is called \em hyperbolic \em if $\mu(P_f) = 1$, where $P_f$ is the Pesin region. In other words, $\mu$ is hyperbolic if and only if the Lyapunov exponents are nonzero $\mu$-a.e.
\end{definition}
To prove the following result we will use Oseledets' Theorem \ref{theoremOseledecs}.
\begin{theorem}
{\bf Hyperbolic measures and non-uniformly hyperbolic sets} \label{theoremMuHiperbolicaNoUnifHiperbolico}
\index{set! hyperbolic! non-uniform} \index{hyperbolicity! non-uniform} \index{measure! hyperbolic} \index{theorem! non-uniform hyperbolicity}
{\bf (a)} If $\mu$ is a hyperbolic probability measure, then there exists an invariant set $\Lambda$ (not necessarily compact) such that $\mu(\Lambda)= 1$ and $f$ is hyperbolic \em (uniformly or non-uniformly) \em on $\Lambda$.
{\bf (b)} If $\Lambda$ is an invariant set such that $f$ is
\em (uniformly or non-uniformly) \em hyperbolic on $\Lambda$, and if $\mu$ is an $f$-invariant probability measure such that $\mu(\Lambda)= 1$, then $\mu$ is a hyperbolic measure.
{\bf (c)} If $\Lambda$ is a compact invariant set such that $f$ is \em (uniformly or non-uniformly) \em hyperbolic on $\Lambda$, then there exist $f$-invariant probability measures $\mu$ such that $\mu(\Lambda)= 1$. Hence, by part (b), all such measures are hyperbolic. In particular there exist ergodic hyperbolic probability measures supported on $\Lambda$.
\end{theorem}
{\em Proof: }
{\bf (a)} Let $\Lambda$ be the Pesin region. By the definition of regular points, $\Lambda$ is $f$-invariant, and by Oseledets' Theorem \ref{theoremOseledecs}, $\Lambda$ is measurable. By the definition of hyperbolic measure, $\mu(\Lambda)= 1$. By Oseledets' Theorem \ref{theoremOseledecs}, the Lyapunov exponents and the splitting into the corresponding subspaces are measurable functions. By Lemma \ref{lemaHiperbolicidadNoUniforme}, since the Lyapunov exponents of every point $x \in \Lambda$ are nonzero, $\Lambda$ is (uniformly or non-uniformly) hyperbolic.
{\bf (b)} From inequality (\ref{equationChi+}), taking logarithms, dividing by $n$ and letting $n \rightarrow + \infty$, one deduces that for $\mu$-a.e. $x \in \Lambda$ and for every direction $[s] \in S_x$, the \em forward \em Lyapunov exponents (in the sense of Definition \ref{definicionExponentesLyapunov}) are smaller than $\log \lambda(x) < 0$. Analogously, from inequality (\ref{equationChi-}), taking logarithms, dividing by $-n < 0$ (which reverses the inequality) and letting $n \rightarrow + \infty$, we deduce that for every direction $[u] \in U_x$ the \em backward \em Lyapunov exponents are larger than $\log \sigma(x) > 0$.
By Oseledets' theorem, $\mu$-almost every point is Lyapunov regular. Hence for $\mu$-almost every point there exist the Oseledets subspaces, on which the backward Lyapunov exponents equal the forward Lyapunov exponents. A priori there are three cases: the Oseledets subspace $E_x^i$ is contained in $S_x$, or it is contained in $U_x$, or neither. In the first case, the forward and backward Lyapunov exponent on $E_x^i$ is negative (it is negative forward because $E_x^i \subset S_x$, and it coincides with the backward exponent because $E_x^i$ is an Oseledets subspace). In the second case, the forward and backward Lyapunov exponent on $E_x^i$ is positive (it is positive backward because $E_x^i \subset U_x$, and it coincides with the forward exponent because $E_x^i$ is an Oseledets subspace).

Let us prove that the third case does not occur; that is, every Oseledets subspace is contained in $S_x$ or in $U_x$. By contradiction, take a direction $[v] \in E_x^i$ with $[v] \not \in S_x, U_x$. Since $T_xM = U_x \oplus S_x$, we have $v= u + s$ with $0 \neq u \in U_x$, \ $0 \neq s \in S_x$. Since $\chi^+_s <0$, applying what was proved in Exercise \ref{ejercicioExponentesLyapunov}, the forward Lyapunov exponent $\chi_i$ on $E_x^i$ is at most $\chi^+_s <0$. Hence it is negative. Analogously, since $\chi^-_u >0$ (because $u \in U_x$ and on $U_x$ the backward Lyapunov exponents are positive), applying Exercise \ref{ejercicioExponentesLyapunov} again we deduce that the backward Lyapunov exponent $\chi_i$ on $E_x^i$ is at least $\chi^-_u >0$. Hence it is positive. But on an Oseledets subspace the forward and backward Lyapunov exponents coincide; they cannot be both negative and positive. Therefore every Oseledets subspace is contained in $S_x$ or in $U_x$.
We conclude that on all the Oseledets subspaces the Lyapunov exponents are nonzero. By what was proved in Exercise \ref{ejercicioExponentesLyapunov}, every Lyapunov exponent in any direction, forward or backward, equals some Lyapunov exponent on the Oseledets subspaces; hence it is nonzero. This holds for $\mu$-a.e. $x \in M$. Therefore $\mu$-almost every point has no zero Lyapunov exponent; that is, $\mu$ is a hyperbolic probability measure.
{\bf (c)} Let $\widetilde f = f|_{\Lambda} : \Lambda \mapsto \Lambda$. Since $\Lambda$ is a compact metric space and $\widetilde f$ is continuous on $\Lambda$, Theorem \ref{teoremaExistenciaMedErgodicas} ensures that there exist probability measures $\mu$ supported on $\Lambda$ that are invariant and ergodic for $\widetilde f$. It is immediate to check that $\mu$, as a probability measure on $M$, is invariant and ergodic for $f$. By part (b) every such measure is hyperbolic.
\hfill $\Box$
\begin{remark} \em
\label{remarkMedidaHiperbolicaErgodica} {\bf Ergodic hyperbolic measures:} \index{measure! ergodic hyperbolic}
If a measure is ergodic, then the Lyapunov exponents are constant $\mu$-a.e. (since they are invariant measurable functions a.e.). For the same reason, the dimensions of the subspaces of the splitting in the definition of Lyapunov regularity are constant $\mu$-a.e. Therefore, for ergodic hyperbolic measures, the definition of the non-uniformly hyperbolic set $\Lambda$, with $\mu(\Lambda)= 1$, has the following special features:
{\bf (i) } In inequalities (\ref{equationChi+}) and (\ref{equationChi-}), the contraction and expansion rates $0 < \lambda < 1 < \sigma$ are constants, independent of $x \in \Lambda$ (while in general the coefficient $C(x)>0$ varies with $x$).
{\bf (ii) } The dimensions of the stable subspace $S_x$ and of the unstable subspace $U_x$ are constant, independent of $x \in \Lambda$.
\end{remark}
\subsection{\large Stable and unstable manifolds on the Pesin region}
In this section $M$ is a compact Riemannian manifold and $f \in \mbox{Diff}^{1 + \alpha}(M)$; that is, $f$ is a $C^1$ plus H\"{o}lder diffeomorphism.
Recall that this means that $f \in \mbox{Diff }^1(M)$ and that both $df_x$ and $df^{-1}_x$ are H\"{o}lder-continuous functions of the point $x \in M$, i.e.\ there exist constants $\alpha, K >0$ such that
$$\|df_x - df_y\| \leq K \, [\mbox{dist}(x,y)]^{\alpha},$$
and analogously for $df^{-1}_x$.
The following theorem generalizes, to the non-uniformly hyperbolic case, Theorem \ref{teoremavariedadesinvariantesAnosov} on the existence of invariant manifolds for uniformly hyperbolic sets (in particular for Anosov diffeomorphisms). However, the validity of this generalization, as well as that of the results underlying Pesin Theory, is restricted
to diffeomorphisms of class $C^{1 + \alpha}$.
\newpage
\begin{theorem} {\bf Local Stable and Unstable Manifolds (Pesin)}
\label{theoremVarInvariantesRegionPesin} \index{invariant manifold! local} \index{invariant manifold! unstable}
Let $f: M \mapsto M$ be a $C^1$ plus H\"{o}lder diffeomorphism of a compact Riemannian manifold $M$ such that the Pesin region $P(f)$ is nonempty. For $x \in P(f)$ we denote by $S_x$ and $U_x$ the stable and unstable subspaces, corresponding respectively to the negative and positive Lyapunov exponents of $x \in P(f)$. \em (Note that \ $S_x \oplus U_x = T_xM$.) \em
Then, for every $x \in P(f)$ there exist local submanifolds $W _{\mbox{\footnotesize{loc}}}^s(x)$ and $W_{\mbox{\footnotesize{loc}}} ^u(x)$, $C^1$-embedded in $M$, called the local stable and unstable manifold respectively, such that:
\em
{\bf (a) } $$T_xW_{\mbox{\footnotesize{loc}}}^s(x) = S_x, \ \ \ T_xW _{\mbox{\footnotesize{loc}}}^u(x) = U_x.$$
{\bf (b) } $$f(W_{\mbox{\footnotesize{loc}}} ^s(x)) \subset W _{\mbox{\footnotesize{loc}}}^s(f(x)), \ \ \ f (W_{\mbox{\footnotesize{loc}}}^u (x)) \supset W_{\mbox{\footnotesize{loc}}}^u (f (x)).$$
{\bf (c) } For every $x \in P(f)$ and every $y \in M$:
$$\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), f^n(y)) = 0 \ \mbox{ if } \ y \in W_{\mbox{\footnotesize{loc}}} ^s(x),$$
$$\lim_{n \rightarrow + \infty} \mbox{dist}(f^{-n}(x), f^{-n}(y)) = 0 \ \mbox{ if } \ y \in W_{\mbox{\footnotesize{loc}}} ^u(x).$$
\end{theorem}
The above theorem and its proof can be found in Pesin \cite{Pesin76}. The proof can also be found in \cite[Theorem 4.1.1, p.~81]{BarreiraPesin} or in \cite{HirschPughShub}.
Note that in Theorem \ref{theoremVarInvariantesRegionPesin} the hypothesis that $f$ is of class $C^1$ plus H\"{o}lder is necessary. Indeed, Pugh constructed in \cite{PughC1masHolder} an example $f \in \mbox{Diff }^1(M)$ whose derivative $df$ is continuous but not H\"{o}lder continuous, with nonempty Pesin region, for which Theorem \ref{theoremVarInvariantesRegionPesin} on the existence of invariant manifolds fails.
\section{Topological and ergodic attractors} \label{chapterAtractorestopoyergo}
\subsection{Topological attractors}
Throughout this section $f: X \mapsto X$ is a continuous map on a compact metric space $X$.
\begin{definition}
{\bf Lyapunov and orbital stability.} \index{stability! orbital}
\index{stability! Lyapunov} \index{set! orbitally stable} \index{set! Lyapunov stable}
\em
Let $K \subset X$ be a nonempty compact set, invariant under $f$ (that is, $f^{-1}(K) = K$).
$K$ is called \em orbitally stable \em (in the future) if for every $\epsilon >0$ there exists $\delta >0$ such that $$\mbox{dist}(p, K) < \delta \ \Rightarrow \ \mbox{dist} (f^n(p), K) < \epsilon \ \ \forall \ n \geq 0.$$
$K$ is called \em Lyapunov stable \em (in the future) if for every $\epsilon >0$ there exists $\delta >0$ such that $$\mbox{dist}(p, K) < \delta \ \Rightarrow \ \exists \ q \in K \mbox{ such that } \mbox{dist} (f^n(p), f^n(q)) < \epsilon \ \ \forall \ n \geq 0.$$
From these definitions it follows that if $K$ is Lyapunov stable, then it is orbitally stable. The converse, however, is not true in general.
\end{definition}
\begin{definition} {\bf Topological attractor I} \em \label{definitionAtractorTopologico} \index{attractor! topological}
A \em topological attractor \em is a nonempty compact set $K$ such that
1) $ K = f^{-1}(K) = f(K)$.
2) There exists an open set $V \supset K$, called a \em local basin of topological attraction \em of $K$, such that \begin{equation} \label{eqn28}\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K) = 0 \ \ \forall \ x \in V. \end{equation}
\index{basin of attraction! topological}
We regard as particularly important the topological attractors \em that are orbitally stable \em (for which we will give an equivalent definition in \ref{definitionAtractorTopologicoII}).
A topological attractor $K$ is called \em minimal \em (as a topological attractor) if the only nonempty compact set
$K' \subset K$ satisfying 1) and 2) is $K' = K$.
\index{set! minimal}
\end{definition}
\begin{remark} \em Many authors require, besides conditions 1) and 2), that the nonempty compact set $K$ be orbitally stable or Lyapunov stable in order to call it a topological attractor.
The orbital stability condition is not redundant with the two conditions 1) and 2) of Definition \ref{definitionAtractorTopologico}. In Example \ref{exampleAsintoticoNoEstable}, part (B), we exhibit a compact set $ \{p_0\}$, consisting of a single fixed point $p_0$, that satisfies conditions 1) and 2) of Definition \ref{definitionAtractorTopologico} but is not orbitally stable.
\end{remark}
{\bf Example:} In Chapter 1 we saw that \index{sink}
every hyperbolic periodic orbit of sink type (of a diffeomorphism $f: M \mapsto M$ on a manifold $M$) is a topological attractor.
\begin{exercise}\em
Prove that a hyperbolic periodic orbit of sink type is a Lyapunov stable topological attractor.
\end{exercise}
\begin{exercise}\em \label{exerciseTransitivoAtrTop} (a) Let $f: M \mapsto M$ be a transitive homeomorphism of a compact manifold $M$. Prove that $M$ is a topological attractor and that it is minimal (as a topological attractor). Deduce that $M$ is the only topological attractor. Hint: From transitivity deduce that some forward orbit is dense in $M$. To prove that $M$ is minimal as a topological attractor and the only attractor, assume that $K \subset M$ is a topological attractor. Its local basin of topological attraction $V$ contains some point $x$ of a dense forward orbit. Prove that the forward orbit of $x$ is dense. Using the definition of attractor, prove that the omega-limit set of any $x \in V$ must be contained in $K$. Deduce that $K= M$.
(b) Prove that the only topological attractor of the linear automorphism $f = \left(
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}
\right)
$ on the torus $\mathbb{T}^2$ is the whole torus. \index{automorphism! linear, of the torus}
\end{exercise}
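As a numerical sanity check for part (b) (an illustrative sketch, not part of the exercise), the matrix has characteristic polynomial $\lambda^2 - 3\lambda + 1$, so its eigenvalues are $(3 \pm \sqrt{5})/2$. Every point of $\mathbb{T}^2$ thus has the two nonzero Lyapunov exponents $\pm \log \frac{3 + \sqrt{5}}{2}$, and the whole torus is a uniformly hyperbolic set:

```python
import math

# Eigenvalues of [[2, 1], [1, 1]]: roots of lambda^2 - tr*lambda + det.
tr, det = 2 + 1, 2 * 1 - 1 * 1              # tr = 3, det = 1
disc = math.sqrt(tr * tr - 4 * det)         # sqrt(5)
lam_u = (tr + disc) / 2                     # expanding eigenvalue (3 + sqrt 5)/2
lam_s = (tr - disc) / 2                     # contracting eigenvalue (3 - sqrt 5)/2

# Lyapunov exponents of every point of the torus: +/- log(lam_u).
# Both are nonzero, so by the theorem above every f-invariant
# probability measure of this automorphism is hyperbolic.
chi_plus, chi_minus = math.log(lam_u), math.log(lam_s)
print(lam_u, lam_s, chi_plus, chi_minus)
```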
\begin{exercise}\em \label{ejercicioBobo}
Let $K \subset X$ be a topological attractor. Endow $K$ with the topology induced by its inclusion in the metric space $X$. Suppose there exists $x \in K$ whose forward orbit is dense in $K$. Prove that $K$ is minimal as a topological attractor. \index{attractor! topological}
\end{exercise}
\begin{exercise}\em \label{ejercicioAtractorCircunferencia}
Let $X \subset \mathbb{R}^2 \sim \mathbb{C}$ be a compact disk centered at the origin, of radius $1 < r < 2$. Consider polar coordinates on $X$: $p = \rho e ^{i \varphi} \in X$, with $(\rho, \varphi)$, \ $0 \leq \rho \leq r$, \ $0 \leq \varphi < 2 \pi$. Let $f: X \mapsto X$ be the homeomorphism given by the equations
$$f(p) = f(\rho e^{i \varphi})= \rho^* e ^{i \varphi^*}, \mbox{ where } $$
$$\rho^* = \frac{ \rho (4 - \rho)}{3} , \ \ \ \ \ \ \ \varphi^* = \varphi + a,$$
where $a$ is a constant, $0 \leq a < 2 \pi$.
(a) Sketch some of the orbits and prove that the circle $K$ centered at the origin, of radius $1$, is a Lyapunov stable topological attractor. (Hint: see the proof that $K$ is a topological attractor in Example \ref{exampleAsintoticoNoEstable}.)
(b) Prove that if $a/(2\pi)$ is irrational, then $K$ is minimal as a topological attractor. (Hint: use Exercise \ref{ejercicioBobo}.)
(c) If $a/(2 \pi)$ is rational, is $K$ minimal as a topological attractor?
\end{exercise}
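Since the radial dynamics of this exercise decouples from the angular one, part (a) can be explored numerically. The following sketch (illustrative only) iterates the radial map $\rho \mapsto \rho(4-\rho)/3$ and checks that every initial radius $\rho_0 \in (0, r]$ converges to the fixed radius $1$; that is, every orbit off the origin approaches the circle $K$:

```python
def radial_map(rho):
    # Radial part of f in polar coordinates: rho* = rho(4 - rho)/3.
    # Fixed radii: rho = 0 (repelling, derivative 4/3 > 1) and
    # rho = 1 (attracting, derivative 2/3 < 1).
    return rho * (4 - rho) / 3

def final_radius(rho0, n=200):
    rho = rho0
    for _ in range(n):
        rho = radial_map(rho)
    return rho

# Every radius in (0, r] is attracted to the invariant circle rho = 1.
final = [final_radius(rho0) for rho0 in (0.05, 0.5, 1.0, 1.5, 1.9)]
print(final)
```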
\begin{example} \em \label{exampleAsintoticoNoEstable}
\index{set! not orbitally stable}
Let $X \subset \mathbb{R}^2 \sim \mathbb{C}$ be the compact disk centered at the origin, of radius $1 < r < 2$, as in Exercise \ref{ejercicioAtractorCircunferencia}.
{\bf (A) } Consider, in polar coordinates, the following homeomorphism $f$:
$$f(p) = f(\rho e^{i \varphi})= \rho^* e ^{i \varphi^*}, \mbox{ where } $$
$$\rho^* = \frac{ \rho (4 - \rho)}{3} , \ \ \ \ \ \ \ \varphi^* = \varphi + (\rho-1). $$
We have $\rho^* -1 = (3- \rho) (\rho - 1) / 3$. Hence $|\rho^*-1| \leq \lambda |\rho-1| $ if $\rho > 3 \cdot (1 - \lambda)$, where $0 < \lambda < 1$. Then, by induction on $n$, one deduces that $\lim_{n \rightarrow + \infty} |\rho^{(n)} - 1| = 0 $, where $\rho^{(n)} $ is the distance to the origin of $f^n(p)$, for any $p \neq (0,0)$. We deduce that the distance from $f^n(p)$ to the circle $K$ centered at $0$, of radius $1$, tends to zero as $n \rightarrow + \infty$, for any initial point $0 \neq p \in X$. Moreover $\rho^*= 1$ if $\rho= 1$, whence $\rho^{(n)} = 1$ if $\rho ^{(0)} = 1$. Hence the circle $K$ centered at the origin, of radius $1$, is invariant under $f$, and by what was proved above, $\mbox{dist}(f^n(p), K) \rightarrow 0$. Therefore $K$ is a topological attractor according to Definition \ref{definitionAtractorTopologico}. Moreover $K $ is orbitally stable because $|\rho^* - 1| \leq |\rho - 1|$. Hence $|\rho^{(n)} - 1| $ is decreasing in $n$ if $p \neq 0$. Thus the distance from $f^n(p)$ to the circle $K$ decreases with $n$; therefore it is smaller than $\epsilon >0$ if it is initially smaller than $\delta= \epsilon$. This proves the orbital stability of $K$.
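The key estimate of part (A), $|\rho^* - 1| \leq |\rho - 1|$ for every $\rho \in [0, 2]$, follows from $\rho^* - 1 = (3-\rho)(\rho-1)/3$ together with $|3-\rho| \leq 3$. The following sketch (illustrative only) checks this monotone decrease of the distance to the circle on a grid of radii:

```python
def radial_map(rho):
    # rho* = rho(4 - rho)/3, so rho* - 1 = (3 - rho)(rho - 1)/3.
    return rho * (4 - rho) / 3

# On 0 <= rho <= 2 we have |3 - rho| <= 3, hence |rho* - 1| <= |rho - 1|:
# the distance from the orbit to the invariant circle K never increases,
# which is exactly the orbital stability of K proved in part (A).
grid = [k / 1000 for k in range(2001)]          # radii in [0, 2]
contracts = all(
    abs(radial_map(rho) - 1) <= abs(rho - 1) + 1e-15 for rho in grid
)
identity_ok = all(
    abs((radial_map(rho) - 1) - (3 - rho) * (rho - 1) / 3) < 1e-12
    for rho in grid
)
print(contracts, identity_ok)
```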
\begin{figure}[h]
\begin{center}
\includegraphics[scale=.4]{Figura3.eps}
\caption{Some orbits of the map $g$ in Example \ref{exampleAsintoticoNoEstable} (B). \label{figuraAtractorNoEstable}
\index{set! not orbitally stable} \index{attractor! topological! not orbitally stable}}
The point $1$ is a fixed point; it is a topological attractor
that is not orbitally stable.
\end{center}
\end{figure}
{\bf (B) } Now let $g: X \mapsto X$ be given by the equations
$$g(p) = g(\rho e^{i \varphi})= \rho^* e ^{i \varphi^*}, \mbox{ where } $$
$$\rho^* = \frac{ \rho (4 - \rho)}{3} , \ \ \ \ \ \ \ \varphi^* = \varphi \, \Big(2 - \frac{\varphi}{2 \pi} \Big) \mbox{ for } \varphi \in [0, 2 \pi). $$
Figure \ref{figuraAtractorNoEstable} shows some of the orbits. As in part (A), one proves that the circle $K$ centered at the origin, of radius 1, is an orbitally stable topological attractor. Moreover, plotting $\varphi^*$ as a function of $\varphi$ for all $\varphi \in [0, 2 \pi]$, we observe that it is increasing, with fixed points at $0$ and $2 \pi$, where $0$ is repelling (to the right of $0$) and $2 \pi$ is attracting (to the left of $2 \pi$). Hence, if $\varphi^{(n)}$ is the angle in polar coordinates of the point $g^n(p)$, for $p= \rho \, e ^{ i \varphi}$, we have $\lim_{n \rightarrow + \infty} \varphi^{(n)} = 2 \pi \sim 0$.
Thus, in this example, $1 = 1 \, e ^{2 \pi \, i}$ is a fixed point, and $\lim_{n \rightarrow + \infty} g^n(p) = 1$ for every $p \neq 0$. Hence $\{1\} \subset K$ is also a topological attractor according to Definition \ref{definitionAtractorTopologico}. But $\{1\}$ is not orbitally stable. Indeed, $1$ is a fixed point; therefore, if $\{1\}$ were orbitally stable, every forward orbit with initial point $p$ sufficiently close to $1$ would have to stay arbitrarily close to $1$. Consider the forward orbit $o:= \{g^n(p)\}_{n \geq 0}$ of a point $0 \neq p = \rho e^{i \varphi} \not \in K$ with $0 <\varphi < \delta$ (for $\delta >0$ sufficiently small). Although $o$ approaches $K$ as $n$ grows, and moreover $\lim_{n} g^n(p) = 1$, the angle $\varphi^{(n)}$ (that is, the angular polar coordinate of $g^n(p)$) does not stay at distance less than $\epsilon >0$ from $0$. In other words, the orbit $\{g^n(p)\}_n$, for some $n \geq 1$, moves away from the fixed point $1$ by more than a positive constant (say $1/6$, if $\delta >0$ is small enough), although it finally tends to $1$ as $n \rightarrow + \infty$ (see Figure \ref{figuraAtractorNoEstable}).
\end{example}
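The failure of orbital stability of $\{1\}$ in part (B) is driven by the angular map $\varphi^* = \varphi(2 - \varphi/2\pi)$. The following sketch (illustrative only; the starting angle is an arbitrary choice) iterates it from a small $\varphi_0 > 0$: the orbit first moves more than one radian away from the fixed angle $0 \sim 2\pi$ before converging to $2\pi$, which is the behavior shown in Figure \ref{figuraAtractorNoEstable}:

```python
import math

TWO_PI = 2 * math.pi

def angular_map(phi):
    # phi* = phi(2 - phi/(2 pi)) on [0, 2 pi]: fixed points 0 (repelling,
    # derivative 2) and 2 pi (attracting, derivative 0).
    return phi * (2 - phi / TWO_PI)

phi = 0.01                      # small starting angle near the fixed point
orbit = [phi]
for _ in range(60):
    phi = angular_map(phi)
    orbit.append(phi)

# Distance of each iterate to the fixed angle 0 ~ 2 pi on the circle:
dist_to_fixed = [min(p, TWO_PI - p) for p in orbit]
print(max(dist_to_fixed), dist_to_fixed[-1])
```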
The following result characterizes the orbitally stable topological attractors. Because of this result, some authors define a topological attractor by adding, to conditions 1) and 2) of Definition \ref{definitionAtractorTopologico}, the condition that it be orbitally stable.
\newpage
\begin{proposition}
\label{propositionAtractorTopologico} {\bf Characterization of orbitally stable topological attractors}
Let $f: X \mapsto X$ be a homeomorphism on a compact metric space $X$.
{\bf (a) } If $V$ is a nonempty open set such that $\overline{f(V) } \subset V$ and if \begin{equation}\label{eqnMaximalInvariante} K:= \bigcap_{n= 0}^{+ \infty} f^n(V),\end{equation} then $K$ is compact and nonempty, it is an orbitally stable topological attractor, and $V$ is a local basin of attraction of $K$. \em
{\bf (b) } Conversely, if $K$ is an orbitally stable topological attractor, then there exists an open set $V$ which is a local basin of attraction of $K$, such that $\overline{f(V)} \subset V$ and $K = \bigcap_{n= 0}^{+ \infty} f^n(V).$
\end{proposition}
{\bf Definition of maximal invariant set: } \index{set! maximal invariant} \index{maximal invariant} If $K$ is a nonempty compact invariant set satisfying equality (\ref{eqnMaximalInvariante}) for some open set $V \supset K$, then $K $ is called the \em maximal invariant set (in the future) \em of $V$.
\vspace{.2cm}
Proposition \ref{propositionAtractorTopologico} justifies the following definition:
\begin{definition} {\bf Topological attractor II} \label{definitionAtractorTopologicoII} \index{attractor! topological}
\index{stability! orbital} \index{set! orbitally stable}
\em Let $f: X \mapsto X$ be a homeomorphism on a compact metric space $X$. A nonempty, compact, invariant subset
$K \subset X$ is called an \em orbitally stable topological attractor, \em or also a \em maximal invariant topological attractor, \em if there exists an open neighborhood $V \supset K$, called a local basin of topological attraction of $K$, such that \em $$\overline {f(V)} \subset V, \ \ \ \mbox{ and } \ \ \ K = \bigcap_{n= 0}^{+ \infty} f^n(V).$$
\end{definition}
{\em Proof }
{\em of Proposition } \ref{propositionAtractorTopologico}:
{\bf (a) } Since $\overline {f(V)} \subset V$ and $f$ is a homeomorphism, by induction we have: $$f^{n+1}(V) \subset \overline {f^{n+1}(V)} \subset f^n(V) \ \forall \ n \in \mathbb{N}.$$ $$K:= \cap_{n = 0}^{+ \infty} f^n(V) = V \cap (\cap_{n= 1}^{+ \infty}f^n(V)) \subset V \cap (\cap_{n= 1}^{+ \infty}\overline{f^n(V)})= $$ $$= \cap_{n= 1}^{+ \infty}\overline{f^n(V)} \subset \cap_{n= 0}^ {+\infty}f^n(V) = K .$$ Hence all the above inclusions are equalities, and we obtain $$K = \bigcap_{n= 1}^{+ \infty}\overline{f^n(V)}, \ \ \ \ \ \bigcap_{n= 1}^{N} {f^n(V)} = {f^N(V)}, \ \ \ \ \ \bigcap_{n= 1}^{N} \overline{f^n(V)} = \overline{f^N(V)} \ \ \forall \ N \geq 1.$$ By the finite intersection property of nonempty compact sets, we deduce that $K$ is compact and nonempty. Moreover $$f^{-1}(K) = \cap_{n= 1}^{+ \infty} f^{-1}(\overline{f^n(V)}) = \cap_{n= 0}^{+ \infty} {\overline{f^n(V)}} = \overline V \cap K = K, $$
since, by construction, $K \subset V$. We have proved that $K$ is nonempty, compact and $f$-invariant.
For $\epsilon >0$, denote $B_{\epsilon}(K) := \{x \in X: \mbox{dist}(x, K) < \epsilon\}$.
\vspace{.2cm}
{\bf Claim A:}
\em For every $\epsilon >0$ there exists $N \geq 1$ such that
$ \overline{f^N(V)} \subset B_{\epsilon}(K).$ \em
\vspace{.2cm}
By contradiction, if there exist $\epsilon >0$ and, for every $n \geq 1$, a point $x_n \in \overline{f^n(V)}$ such that $\mbox{dist}(x_n, K) \geq \epsilon$, then, taking a subsequence of $\{x_n\}$ converging to a point $x$, we have $\mbox{dist}(x, K) \geq \epsilon$. Moreover $x \in \overline {f^N(V)}\ \forall \ N \geq 1$ (since $x_N \in \overline {f^N(V)} = \cap_{n= 1}^N \overline{f^n(V)}$, whence $x_{m} \in \overline {f^N(V)}$ for every $m \geq N$). Hence $x \in \cap_{N \geq 1} \overline{f^N(V)} = K$, which contradicts $\mbox{dist}(x, K) \geq \epsilon$. This proves Claim A.
To prove that $K$ is a topological attractor and that $V$ is a local basin of attraction of $K$, it suffices to prove that if $y = \lim_{j \rightarrow + \infty} f^{n_j}(x)$ for some $x \in V$ and some subsequence $n_j \rightarrow + \infty$, then $y \in K$. Indeed, for every $\epsilon >0$ we have $\mbox{dist}(y, \overline {f^{n_j}(V)}) < \epsilon$ for all $j$ sufficiently large. Then, using Claim A, since ${\overline{f^{n_j}(V)}} \subset B_{\epsilon}(K)$ for every $n_j$ sufficiently large, we deduce: $\mbox{dist}(y, K) < 2 \epsilon$. This inequality holds for every $\epsilon >0$. Hence $\mbox{dist}(y,K) = 0$; that is, $y \in K$, as we wanted to prove.
It remains to prove that $K$ is orbitally stable. Given $\epsilon >0$, let $N \geq 1$ be as in Claim A. Since $f$ is a homeomorphism, $f^N(V)$ is open. Moreover $K \subset f^N(V)$ because $K := \bigcap_{n \geq 0}f^n(V)$. Since $K$ is compact, there exists $\delta >0$ such that $B_{\delta}(K) \subset f^N(V)$. If $x \in B_{\delta}(K)$ then, for every $m \geq 0$, $f^m(x) \in f^m(B_{\delta}(K)) \subset f^m (f^N(V)) \subset \overline{f^{m+ N}(V)} \subset \overline {f^N(V)} \subset B_{\epsilon}(K). $ We have proved that if $\mbox{dist}(x, K) < \delta$ then $\mbox{dist}(f^m(x), K) < \epsilon$ for every $m \geq 0$. Hence $K$ is orbitally stable, finishing the proof of part (a).
\vspace{.3cm}
{\bf (b) } Let $K$ be an orbitally stable topological attractor and let $V'$ be an open set which is a local basin of attraction of $K$, according to condition 2) of Definition \ref{definitionAtractorTopologico}. Let $\epsilon >0$ be such that $$ \overline {B_{2\epsilon}(K)} \subset V',$$ where $ B_{\epsilon}(K) := \{x \in X: \mbox{dist}(x, K) < \epsilon\}$. By the definition of orbital stability of $K$, there exists $0 <\delta < \epsilon$ such that $f^m(B_{2\delta}(K)) \subset B_{\epsilon}(K)$ for every $m \geq 0$.
Since $\overline{B_{\delta}(K)} \subset B_{2 \delta}(K)$, we have
$$f^m (\overline {B_{\delta}(K)}) \subset B_{\epsilon}(K) \ \ \forall \ m \geq 0.$$
\vspace{.2cm}
{\bf Claim B: } There exists $N \geq 1$ such that $$f^m(x) \in B_{ \epsilon}(K) \ \forall \ m \geq N, \ \forall \ x \in \overline {B_{2\epsilon}(K)}.$$
Indeed, since $\overline {B_{2\epsilon}(K)} \subset V'$ and $V'$ satisfies condition 2) of Definition \ref{definitionAtractorTopologico}, for each $x \in \overline{B_{2\epsilon}(K)}$ there exists $N_x \geq 1$ such that $\mbox{dist} (f^{N_x}(x), K) < \delta$. Since $f$ is continuous, there exists an open neighborhood $U_x$ of $x$ such that $\mbox{dist} (f^{N_x}(y), K) < \delta$ for every $y \in U_x$. Since $\overline {B_{2\epsilon}(K)}$ is compact, there exists a finite subcover $\{U_{x_1}, U_{x_2}, \ldots, U_{x_h}\}$. We deduce that for every $y \in \overline {B_{2\epsilon}(K)}$ there exists $N_{x_i} \geq 1$ such that $\mbox{dist} (f^{N_{x_i}}(y), K) < \delta$. By the construction of $\delta$ we deduce that $\mbox{dist} (f^{m}(y), K) < \epsilon$ for every $m \geq N_{x_i}$. Hence, if $N= \max\{N_{x_i}: 1 \leq i \leq h\}$, then $f^m(y) \in B_{\epsilon}(K)$ for every $m \geq N$ and every $y \in \overline{B_{2\epsilon}(K)}$, finishing the proof of Claim B.
\vspace{.2cm}
We define the open set $V$ as follows:
$$V := B_{\epsilon_0}(K) \ \cup \ f(B_{ {\epsilon_1 }}(K)) \ \cup \ f^2(B_{ {\epsilon_2 }}(K)) \ \cup $$ $$ \cup \ f^3(B_{ {\epsilon_3 }}(K)) \ \cup \ \ldots \ \cup \ f^N(B_{ {\epsilon_N }}(K)),$$
$$\mbox{where } \ \ \epsilon _i := \epsilon \left( 1 + \frac{i}{2^N} \right) < 2 \epsilon \ \ \ \ \ \forall \ \ 0 \leq i \leq N.$$
Let us prove that $\overline{f(V)} \subset V$. On the one hand, if $0 \leq i \leq N-1$ then $$\overline {f(f^i(B_{\epsilon_i}(K)))} \subset \overline {f^{i+1}(B_{\epsilon_i}(K))} \subset f^{i+1}(B_{\epsilon_{i+1}}(K)) \subset V.$$ On the other hand, if $i= N$ then $$\overline {f(f^N(B_{\epsilon_N}(K)))} \subset \overline {f (f^N(B_{2\epsilon}(K)))} = f^{N+1}(\overline{B_{2 \epsilon}(K)}) \subset B_{\epsilon}(K) \subset V ,$$ by the construction of the natural number $N$ according to Claim B.
Now let us prove that $V$ is a local basin of attraction of $K$. Let $x \in V$. Then $x \in f^i(B_{2 \epsilon}(K))$ for some $0 \leq i \leq N$; that is, $x = f^i(y)$ for some $y \in B_{2 \epsilon}(K) \subset V'$, where $V'$ is by hypothesis a local basin of attraction of $K$. Hence, by property 2) of Definition \ref{definitionAtractorTopologico}, we have $\lim_{n \rightarrow + \infty}\mbox{dist}(f^n(y), K) = 0$. Therefore $\lim_{m \rightarrow + \infty} \mbox{dist}(f^m(x), K) = \lim _{m \rightarrow + \infty}\mbox{dist}(f^{m-i}(y), K) = 0$ for every $x \in V$. We have proved that $V$ satisfies condition 2) of Definition \ref{definitionAtractorTopologico}. Hence, by definition, $V$ is a local basin of topological attraction of $K$.
To finish the proof it only remains to show that $K = \cap_{n= 0}^{+ \infty} f^n(V)$. On the one hand, since $f(K) = K$ (because $K$ is invariant under $f$ by hypothesis), and since $K \subset V$ by construction, we have $K = f^n(K) \subset f^n(V)$ for every $n \geq 0$. Hence $K \subset \cap_{n= 0}^{+ \infty} f^n(V)$.
Ahora probemos la inclusi\'{o}n opuesta. Sea $y \in \cap_{n= 0}^{+ \infty} f^n(V)$. Entonces $$\forall \ n \geq 1 \ \ \ \exists \ \ x_n \in f(V) \mbox{ tal que } y = f^{n-1}(x_n).$$ Sea $x$ el l\'{\i}mite de una subsucesi\'{o}n convergente de $\{x_n\} \subset f(V)$, es decir
$$x = \lim_{j \rightarrow + \infty} x_{n_j}, \ \ \ n_j \rightarrow + \infty.$$ Tenemos $x \in \overline {f(V)} \subset V$.
Dado $\epsilon' >0$, por la estabilidad orbital de $K$, existe $\delta' >0$ tal que $f^n(B_{\delta'}(K)) \subset B_{\epsilon'}(K)$ para todo $n \geq 0$.
Since $V$ is a local basin of attraction of $K$ and $x \in V$, we deduce that there exists $N$ such that $\mbox{dist} (f^m(x), K) < \delta' \ \ \forall \ m \geq N$; in particular $$\mbox{dist}(f^N(x), K) < \delta'.$$
By the continuity of $f^N$, there exists an open neighborhood $U_x$ of $x$ such that $$\mbox{dist}(f^N(z), K) < \delta' \ \forall \ z \in U_x.$$ By the construction of the point $x$, we have $x_{n_j} \in U_x$ for all sufficiently large $j$. Hence:
$$\mbox{dist}(f^N(x_{n_j}), K) < \delta' \ \ \forall \ j \mbox{ sufficiently large.}$$
By the construction of the number $\delta'$ from the orbital stability of $K$, we deduce that $$\mbox{dist}(f^m(x_{n_j}), K) < \epsilon' \ \ \forall \ \ m \geq N.$$ In particular, for $m= n_j -1$, if we choose $j$ large enough that
$$n_j \geq N +1,$$
we obtain $$\mbox{dist}(f^{n_j -1}(x_{n_j}), K) < \epsilon'.$$
By the construction of the sequence $\{x_n\}$ we have $y = f^{n_j-1}(x_{n_j})$. We deduce that $\mbox{dist}(y, K) < \epsilon '$. Since $\epsilon'>0$ is arbitrary, we conclude that $y \in K$, as we wanted to prove. \hfill $\Box$
\subsection{Global basin of topological attraction}
\begin{definition}
{\bf Global basin of topological attraction} \index{basin of attraction! topological} \index{basin of attraction! global} \index{$C(K)$ or $ E_K$ basin of attraction! topological, of the compact set $K$}
\em Let $f: X \mapsto X$ be continuous on a compact metric space $X$. Let $K \subset X$ (nonempty, compact and invariant) be a topological attractor according to Definition \ref{definitionAtractorTopologico}. The \em (global) basin of topological attraction of $K$ \em is the following set:
\begin{equation} \label{eqn29} C(K) = \{ x \in X: \lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K) = 0\}. \end{equation}
Note: Given any nonempty, invariant compact set $K$, even if the set $C(K)$ built via equality (\ref{eqn29}) turns out to be nonempty, it is in general not called the \lq\lq basin of topological attraction\rq\rq\ of $K$ unless $K$ is a topological attractor. That is, when the name \lq\lq basin of topological attraction\rq\rq\ is used, it is already known that $K$ satisfies all the conditions of Definition \ref{definitionAtractorTopologico}, in particular condition 2), the existence of an \em open neighborhood \em of $K$ contained in the global basin of topological attraction.
\end{definition}
\begin{proposition} \label{proposicionCuencaTopologica}
{\bf Properties of the global basin of topological attraction.} \index{basin of attraction! topological} \index{basin of attraction! global} \index{basin of attraction! open} \index{basin of attraction! invariant}
Let $f: X \mapsto X$ be continuous on a compact metric space $X$. If $K \subset X$ is a topological attractor, then its (global) basin of topological attraction $C(K)$ has the following properties:
{\bf (a) } $C(K)$ is open and nonempty.
{\bf (b) } $C(K)$ is invariant under $f$, that is, $f^{-1}(C(K)) = C(K)$.
{\bf (c) } $K \subset C(K) $.
\em As a consequence of (a) and (b), if $X$ is connected then either $K = X$ (in which case $C(K) = K = X$), or the basin of attraction $C(K)$ contains $K$ properly ($K$ does not coincide with its basin).
\end{proposition}
{\em Proof: }
{\bf (b) } If $x \in C(K)$ then, by equality (\ref{eqn29}):
$\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K) = 0$.
Hence, $$\lim_{n \rightarrow + \infty} \mbox{dist} (f^{n}(f(x)), K) = \lim_{n \rightarrow + \infty} \mbox{dist} (f^{n-1}(f(x)), K) = $$ $$ = \lim_{n \rightarrow + \infty} \mbox{dist} (f^{n}( x), K)= 0.$$ This shows that $f(x) \in C(K)$ for every $x \in C(K)$. We have proved that $f(C(K)) \subset C(K)$, or equivalently, $C(K) \subset f^{-1}(C(K))$.
Now let $y \in f^{-1}( C(K))$. Then $f(y) = x \in C(K)$. Hence, for all $n \geq 0$ we have $f^n(x) = f^{n+ 1}(y)$. Therefore: $$\lim_{n \rightarrow + \infty} \mbox{dist} (f^{n }(y), K) = \lim _{n \rightarrow + \infty} \mbox{dist} (f^{n +1}(y), K) = $$ $$= \lim_{n \rightarrow + \infty} \mbox{dist} (f^{n }(x), K) = 0. $$ It follows that $y \in C(K)$ for every $y \in f^{-1}(C(K))$, proving that $f^{-1}(C(K)) \subset C(K)$.
{\bf (a) and (c) } By hypothesis $K$ is a topological attractor. Hence there exists an open set $V \supset K$ which is a local basin of topological attraction of $K$, according to condition 2) of Definition \ref{definitionAtractorTopologico}. From the definition of $C(K)$ in (\ref{eqn29}), together with equality (\ref{eqn28}), we get $V \subset C(K)$, proving that $K \subset C(K)$ and, therefore, that $C(K)$ is nonempty. If $x \in C(K)$, then $\lim_n \mbox{dist}(f^n(x), K) = 0$. Since $V$ is an open neighborhood of $K$, we deduce that there exists $N \geq 1$ (depending on $x \in C(K)$) such that
$$f^n(x) \in V \ \ \ \forall \ \ n \geq N.$$
In particular, the above assertion holds for $n= N$. Hence, by the continuity of $f$ and the openness of $V$, there exists an open neighborhood $U_x$ of $x$ such that
$$f^N(z) \in V \ \ \forall \ \ z \in U_x.$$
Since $V \subset C(K)$, we deduce that $f^N(U_x) \subset C(K)$, or in other words, $U_x \subset f^{-N}(C(K)) = C(K)$. Hence $C(K)$ is open.
\hfill $\Box$
\begin{proposition} \label{propositionMedidaInvarianteAtractorTopologico} \index{attractor! topological} \index{measure! supported on an attractor}
Let $f: X \mapsto X$ be continuous on a compact metric space $X$. Let $K \subset X$ be a topological attractor with global basin of topological attraction $C(K)$. Then:
{\bf (a) } There exist invariant probability measures for $f|_{C(K)}$.
{\bf (b) } There exist ergodic probability measures for $f|_{C(K)}$.
{\bf (c) } Every invariant probability measure $\mu$ for $f|_{C(K)}$ is supported on the attractor $K$ (that is, $\mu(K)= 1$).
\end{proposition}
{\em Proof: }
{\bf (a) and (b)} Consider $f|_K: K \mapsto K$, the restriction to the attractor $K \subset C(K)$ of $f|_{C(K)}: C(K) \mapsto C(K)$. Since $f$ is continuous and $K$ is compact, by what was proved in Chapter 1 there exist invariant probability measures for $f|_K$, and there also exist ergodic probability measures for $f|_K$. Take one of these measures $\nu$ invariant for $f|_K$. Define the probability measure $\mu$ on the Borel sets $B$ of $C(K)$ as follows: $$\mu(B) := \nu (B \cap K) \ \ \forall \ B \subset C(K) \ \mbox{ Borel}.$$ (Observe that $\mu(C(K)) = \nu(C(K) \cap K)= \nu (K) = 1$, because $K \subset C(K)$.) Let us check that the probability $\mu$ is invariant for $f|_{C(K)}$. Let $B \subset C(K)$ be any Borel set:
$$\mu(f^{-1}(B)) = \nu (f^{-1}(B) \cap K) = \nu(f^{-1}(B) \cap f^{-1}(K)) = $$ $$ = \nu (f^{-1}(B \cap K)) = \nu (B \cap K) = \mu (B),$$
since $\nu$ is invariant under $f|_K$.
Now let us see that if $\nu$ is ergodic for $f|_K$ then $\mu$ is ergodic for $f|_{C(K)}$. Let $B \subset C(K)$ be a Borel set such that $f^{-1}(B) = B$. We have $f^{-1}(B \cap K) = f^{-1}(B) \cap f^{-1}(K) = B \cap K$. Hence, since $\nu$ is ergodic for $f|_K$, we get $\nu(B \cap K) \in \{0,1\}$. Therefore
$\mu(B) = \nu(B \cap K) \in \{0,1\}, $ which proves the ergodicity of $\mu$.
{\bf (c) } Let $\mu$ be an invariant probability measure for $f|_{C(K)}$. Then $\mu(C(K)) = 1$. We must prove that $\mu(K) = 1$ (which implies $\mu(C(K) \setminus K) = 0$). By the Poincar\'{e} Recurrence Lemma, $\mu$-a.e. point is recurrent; that is, $x \in \omega(x)$ for $\mu$-a.e. $x \in C(K)$. To prove that $\mu(K)= 1$ it suffices to show that $x \in K$ for every $x \in C(K)$ such that $ x \in \omega(x)$. For this, it is enough to prove that $\omega(x) \subset K$ for every $x \in C(K)$. Indeed, by the definition of the omega-limit set, if $y \in \omega(x)$ then $$y = \lim_{j \rightarrow + \infty} f^{n_j}(x) , \ \ \ n_j \rightarrow + \infty. $$
Hence: $$\mbox{dist}(y, K) = \lim_{j \rightarrow + \infty} \mbox{dist}(f^{n_j}(x), K) \leq \limsup_{n \rightarrow + \infty} \ \ \mbox{dist}(f^n(x), K).$$
Since $x \in C(K)$, by the definition of the basin of topological attraction given in (\ref{eqn29}) we have: $$\limsup_{n \rightarrow + \infty} \ \ \mbox{dist}(f^n(x), K) = \lim_{n \rightarrow + \infty} \ \ \mbox{dist}(f^n(x), K) = 0.$$
Hence $\mbox{dist}(y,K)= 0$, that is, $y \in K$ for every $y \in \omega(x)$, as we wanted to prove. \hfill $\Box$
\subsection{Chaotic hyperbolic attractors}
\index{attractor! hyperbolic} \index{attractor! chaotic} In this section we consider the particular case in which $X= M$ is a compact Riemannian differentiable manifold and $f \in \mbox{Diff }^1(M)$.
\em Hyperbolic attractors \em are those topological attractors $K$ that are supported on the unstable manifolds of a nonempty, compact, (uniformly or non-uniformly) hyperbolic set $\Lambda \subset K$. More precisely: $K$ is a hyperbolic attractor if it is a topological attractor and there exists an invariant, compact, (uniformly or non-uniformly) hyperbolic set $\Lambda \subset K$ such that $K \subset \bigcup_{p \in \Lambda} W^u(p)$.
The nontrivial case occurs when the dimension of these unstable manifolds is at least 1 for $\mu$-almost every point $x \in K$, for some invariant measure $\mu$ supported on the attractor $K$.
Since they are unstable manifolds of points of a hyperbolic set, the Lyapunov exponents in the directions tangent to these manifolds (tangent to the attractor) are positive. We call such topological attractors \em chaotic hyperbolic attractors. \em
The search for unstable manifolds supporting the attractor is one of the most relevant motivations of the theory of differentiable dynamical systems. It means that the dynamics inside a topological attractor $K$ (that is, the dynamics of $f|_K$) can be expansive towards the future, in other words chaotic, because of positive Lyapunov exponents. Put differently, consider the case in which the attractor $K$ is orbitally stable (or Lyapunov stable, respectively). Orbital (Lyapunov, resp.) stability governs the basin of attraction $C(K)$: indeed, by definition, orbital (Lyapunov, resp.) stability has nontrivial meaning only for the orbits of $C(K)$ \em outside \em $K$. Nevertheless, it is not contradictory with an unstable, expansive dynamics, given by positive Lyapunov exponents, \em inside \em the attractor $K$ (see the example in the following exercise).
The following exercise shows an example in which the dynamics inside the topological attractor is expansive or chaotic: the attractor consists of the unstable manifolds of points with positive Lyapunov exponents:
\begin{exercise}\em
{\bf Solenoid attractor (Smale-Williams)} \index{attractor! solenoid} \index{attractor! Smale-Williams}
\index{solenoid}
Let $S \subset \mathbb{R}^3$ be the compact solid torus, defined as the image in $\mathbb{R}^3$ of the following parametrization with three real parameters (in cylindrical coordinates) $$(r, \varphi, \theta) \in [0, a] \times [0, 2 \pi] \times [0, 2 \pi],$$ where $0 < a < 1/2$ is a real constant:
\begin{eqnarray}
x &=& (1/2 + r \cos \theta) \cos \varphi \nonumber \\
y & = & (1/2 + r \cos \theta) \sin \varphi \nonumber \\
z &= & r \sin \theta \nonumber
\end{eqnarray}
(a) Interpret the geometric meaning of the parameters $(r, \varphi, \theta)$ in $\mathbb{R}^3$ and draw the solid torus $S$. (Hint: first draw the circular disc of radius $a$ centered at the point $(1/2, 0, 0)$ in the $xz$ plane (that is, the plane $\{\varphi= 0\}$). Then observe that $S$ is the solid of revolution obtained by rotating that disc around the $z$ axis.)
(b) Let $f: S \mapsto \mbox{int}(S)$ be continuous, mapping each circular disc $D_{\varphi}$, obtained by cutting $S$ with the plane of constant $\varphi$, into a circular disc $f(D_{\varphi})$ of radius $a/4$ contained in the section $D_{2 \varphi}$, in such a way that $f|_{D_{\varphi}} = f_3 \circ f_2 \circ f_1$ where:
$\bullet$ $f_1$ is a rotation around the $z$ axis by angle $\varphi$ (this implies $f_1 (D_{\varphi}) = D_{2 \varphi}$; when we write $2 \varphi$ we mean this angle modulo $2 \pi$).
$\bullet$ $f_2$ is a homothety of ratio $1/4$ mapping the center of the disc $D_{2 \varphi}$ to the point, in the interior of $D_{2 \varphi}$, with parameters $(a/2, 2 \varphi, 0)$.
$\bullet$ $f_3$ is a rotation by angle $\varphi$ around the axis perpendicular to the plane of the disc $D_{2 \varphi}$ passing through the center of the circle $D_{2 \varphi}$.
Verify that $f: S \mapsto f(S)$ is injective and that $f(S)$ is a compact set contained in the interior of $S$. Draw $f(S)$. Hint: $f(S)$ winds twice around the $z$ axis. See for instance \cite[Figure 1]{Pikovsky_Arnoldcat}.
(c) Let $K = \bigcap_{n \geq 0} f^n(S)$ be the topological attractor with local basin of attraction $S$. For each $N \geq 0$, consider the compact set
$K_N = \bigcap_{n= 0}^{N} f^n(S).$ Let $A_N = K_N \cap \{(r, \varphi, \theta) \in S: \ \varphi= 0\}.$
Draw $A_1, A_2, K_1, K_2$, and prove that $K_{N+1} \subset \mbox{int}(K_N)$ for all $N \geq 0$. (Hint: induction on $N$.)
(d) For each point $p \in K$, the unstable manifold of $p$ is defined as follows:
$$W^u(p) = $$ $$=\{q \in S: \exists \ f^{-n}(q) \in S \ \forall \ n \geq 0 \ \mbox{ and } \lim_{n \rightarrow + \infty} \mbox{dist}(f^{-n}(q), f^{-n}(p))= 0\}.$$
Prove that $W^u(p)$ is a one-dimensional manifold immersed in $\mathbb{R}^3$. Hint: Fix any point $q \in W^u(p)$. It is not restrictive to assume that the $\varphi$ coordinate of $q$ is zero. Using the sections $A_N$ defined in part (c), prove that the connected component of the intersection of $W^u(p)$ with a small neighborhood of the point $q$ in $\mathbb{R}^3$ is a differentiable arc.
(e) Prove that the attractor $K$ is:
$$K = \bigcup_{p \in K} W^u(p).$$
(f) Prove that $K$ is a chaotic topological attractor. This means that for some invariant measure $\mu$ supported on $K$, for $\mu$-a.e. $p \in K$ and for every unstable tangent direction $0 \neq v \in T_p (W^u(p)) = T_p(K)$, the forward Lyapunov exponent of $v$ is positive. It suffices to prove that:
$$\liminf_{n \rightarrow + \infty}\frac{\log \|df^n(v)\|}{n} \geq \log 2 >0 \ \ \forall \ p \in K, \ \ \forall \ 0 \neq v \in T_p(W^u(p)).$$
(Hint: prove that $\|df_p(v)\|/\|v\| \geq 2 $ for every $p \in K$ and every $0 \neq v \in T_p(W^u(p))$.)
\end{exercise}
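The geometry behind parts (b) and (f) of the exercise can be checked numerically with the standard analytic model of the solenoid on $S^1 \times D^2$, namely $F(\varphi, z) = (2\varphi \ \mbox{mod}\ 2\pi, \ z/4 + e^{i\varphi}/2)$, with $D^2$ the unit disc of $\mathbb{C}$. This is a common simplification, not literally the parametrization of the exercise, so the sketch below is only illustrative:

```python
import cmath
import math
import random

def solenoid(phi, z):
    """Standard solenoid model F(phi, z) = (2*phi mod 2*pi, z/4 + e^{i*phi}/2)
    on the solid torus S^1 x D^2, with D^2 the closed unit disc in C."""
    return (2 * phi) % (2 * math.pi), z / 4 + cmath.exp(1j * phi) / 2

random.seed(0)
# F(S) is strictly inside S: |z/4 + e^{i phi}/2| <= 1/4 + 1/2 = 3/4 < 1.
for _ in range(1000):
    phi = random.uniform(0, 2 * math.pi)
    z = cmath.rect(random.uniform(0, 1), random.uniform(0, 2 * math.pi))
    _, w = solenoid(phi, z)
    assert abs(w) <= 0.75 + 1e-12

# Injectivity across the two branches: the preimage angles phi0 and phi0 + pi
# land on discs of radius 1/4 centered at +e^{i phi0}/2 and -e^{i phi0}/2;
# the centers are at distance 1 > 2 * (1/4), so the image discs are disjoint.
phi0 = 0.7
c1 = cmath.exp(1j * phi0) / 2
c2 = cmath.exp(1j * (phi0 + math.pi)) / 2
assert abs(c1 - c2) > 2 * (1 / 4)

# Expansion along the attractor: d(2 phi)/d phi = 2, so the Lyapunov
# exponent in the angular (unstable) direction is log 2 > 0.
assert math.log(2) > 0
```

The angle doubling is what produces the lower bound $\log 2$ in part (f), while the factor $1/4$ in the disc coordinate gives the transverse contraction.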
\subsection{Ergodic attractors}
In this section we consider $f\colon M \mapsto M$ continuous on a compact Riemannian manifold $M$. By the Birkhoff-Khinchin ergodic theorem, for any invariant measure $\mu$, for $\mu$-a.e. $x \in M$ and for every function $\psi \in L^1(\mu)$ (in particular, for every continuous function $\psi$) the asymptotic (forward) time average $\widetilde \psi$ exists, defined by: \index{average! time}
$$\widetilde \psi (x) = \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi (f^j(x)).$$
That is, for $\mu$-a.e. fixed $x$ in the manifold $M$, the linear functional
$$\psi \in C^0(M, \mathbb{R}) \ \mapsto \ \widetilde \psi (x).$$
is defined. By the Riesz Representation Theorem, there exists a probability measure $\mu_x$ such that
$$\widetilde \psi (x) = \int \psi \, d \mu_x \ \ \ \mbox{ for } \mu{\mbox{-a.e. }} x \in M, \ \ \forall \ \mu \in {\mathcal M}_f,$$
where ${\mathcal M}_f$ denotes the space of all $f$-invariant probability measures on the Borel sigma-algebra of $M$. In the previous chapters we proved that $\mu_x \in {\mathcal M}_f$. Moreover, endowing the space ${\mathcal M}$ of probability measures (not necessarily $f$-invariant) with the weak$^*$ topology, we have:
$$\mu_x = \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)} \ \ \ \mbox{ for } \mu{\mbox{-a.e. }} x \in M, \ \ \forall \ \mu \in {\mathcal M}_f, $$
where the limit is taken in ${\mathcal M}$ with the weak$^*$ topology and $\delta_y$ denotes the Dirac delta probability measure supported at the point $y \in M.$
In the previous sections we also saw that $\mu_x $ is ergodic for $\mu$-a.e. $x \in M$, for every $\mu \in {\mathcal M}_f$. Hence the asymptotic time average $\widetilde \psi (x)$ coincides with the expected value of $\psi$ with respect to the probability $\mu_x$.
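As a minimal numerical sketch of these statements (an illustration, not part of the formal development), one can approximate $\widetilde \psi (x)$ for an irrational rotation of the circle $\mathbb{R}/\mathbb{Z}$. The rotation is uniquely ergodic, so the time average converges to the space average for every initial point; the rotation number $(\sqrt 5 - 1)/2$ below is an arbitrary choice:

```python
import math

def birkhoff_average(psi, f, x, n):
    """Birkhoff time average (1/n) * sum_{j<n} psi(f^j(x))."""
    total = 0.0
    for _ in range(n):
        total += psi(x)
        x = f(x)
    return total / n

alpha = (math.sqrt(5) - 1) / 2              # irrational rotation number
rot = lambda x: (x + alpha) % 1.0           # rotation of the circle R/Z
psi = lambda x: math.cos(2 * math.pi * x)   # continuous observable, integral 0

# The irrational rotation preserves Lebesgue measure and is uniquely ergodic:
# the time average converges to int psi dm = 0 for EVERY initial point.
for x0 in (0.0, 0.123, 0.777):
    assert abs(birkhoff_average(psi, rot, x0, 100_000)) < 1e-3
```

For this observable the partial sums are geometric, so the error of the time average decays like $C/n$; the tolerance above is very generous.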
On the one hand, we note that the ergodic theory developed so far holds for $\mu$-a.e. $x \in M$ and, in general, not for every point $x \in M$. In other words, if the criterion for selecting the initial points $x \in M$ is not $\mu$-a.e. for some measure $\mu \in {\mathcal M}_f$, then the asymptotic Birkhoff averages $\widetilde \psi (x)$ do not necessarily exist. And when they exist, in general they do not coincide with the space average obtained by integrating $\psi$ with respect to an ergodic measure.
On the other hand, when one has a topological attractor $K$ with (open) local basin $V$, the criterion for selecting the initial states, by the very definition of topological attractor, consists in taking the points $x \in V$ for some open set $V$. The topological criterion of relevance or \lq\lq observability\rq\rq\ of sets of orbits is that these sets be open or, more generally, have nonempty interior.
However, in most of the examples of topological attractors seen in the previous section, the attractor $K$ (which is contained in its local basin $V$) has empty interior. Moreover, we saw (Proposition \ref{propositionMedidaInvarianteAtractorTopologico}) that the measures $\mu$ invariant under $f|_{V}$ (which always exist) are supported on $K$. Hence $\mu$-a.e. point $x$ of the basin $V$ lies in $K$. Therefore, the Birkhoff ergodic theorem and the theorem of existence of invariant and ergodic measures \em do not guarantee the existence of the asymptotic time averages \em for the orbits with initial point in \em a set of initial states that is relevant or \lq\lq observable\rq\rq\ from the topological point of view. \em
We note that, in general, there is no hope that the asymptotic time averages exist for the orbits with initial point in a set of initial states with nonempty interior (that is, relevant or \lq\lq observable\rq\rq\ from the topological point of view). Indeed, if $K$ is a topological attractor with (open) basin of attraction $C(K)$, it can be proved that if $f|_{C(K)}$ is not uniquely ergodic (and in general it is not), then the set of initial states $x \in C(K)$ for which the asymptotic Birkhoff average does not exist is dense. Hence the set of initial states for which that asymptotic time average exists has empty interior.
For these reasons, among others, another criterion of \lq\lq observability\rq\rq\ of the orbits, or of selection of the initial states, is adopted. It is a measure-theoretic criterion instead of a topological one, but it considers the majority of the points of the basin $C(K)$.
\begin{definition} \label{remarkObservabilidadEstadistica} {\bf Measure-theoretic observability criterion } \em \index{observability!}
When the space is a Riemannian manifold $M$, \em the measure-theoretic criterion of \lq\lq observability\rq\rq\ of the orbits is that they form a set with positive Lebesgue measure. \em
\end{definition}
The above justifies the following definitions:
\begin{definition} \label{definitionAtractorErgodico}
{\bf Ergodic attractor} \index{attractor! ergodic}
\em Let $f: M \mapsto M$ be continuous on a compact, finite-dimensional Riemannian manifold $M$. Denote by $m$ the Lebesgue measure on $M$. Note that $m$ is not necessarily $f$-invariant.
A nonempty compact set $K$ is called an \em ergodic attractor \em if:
$\bullet$ $f^{-1}(K) = K = f(K)$
$\bullet$ There exists an open set $V \supset K$ such that
\begin{equation}\label{eqn28z} \lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K) = 0 \ \mbox{ for Lebesgue-a.e. } \ x \in V.\end{equation}
$\bullet$
\begin{equation} \label{eqn30} \exists \ \mu \mbox{ ergodic such that: } \mu(K)= 1 \ \mbox{ and } \ \exists \ \widetilde \psi (x) = \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi (f^j(x)) = \int \psi \, d \mu \end{equation} $$\mbox{ for Lebesgue-a.e. } x \in V \ \ \mbox{ and } \ \ \ \forall \ \psi \in C^0(M, {\mathbb{R}}).$$
An open set $V \supset K$ satisfying the above conditions is called a \em local basin of attraction \em of the ergodic attractor $K$.
\vspace{.3cm}
{\bf Note:} Not everyone adopts Definition \ref{definitionAtractorErgodico}. Some authors require, by definition, that $K$ be a topological attractor that moreover satisfies condition (\ref{eqn30}), in order to call it an ergodic attractor (see for instance \cite{PughShubErgodicAttractors}).
{\bf SRB or physical measure supported on an ergodic attractor: } \index{measure! SRB} \index{measure! physical}
Observe that when there exists a measure $\mu$ (ergodic, supported on the ergodic attractor $K$) satisfying equality (\ref{eqn30}) for Lebesgue-a.e. $x \in V$, this measure is \em unique. \em
Such a measure $\mu$ is called the \em ergodic SRB measure, \em or also the \em ergodic physical measure, \em of the ergodic attractor $K$.
In \ref{definitionMedidaSRB} we will define SRB or physical measures $\mu$ in a more general context, even when $\mu$ is not supported on an ergodic attractor.
\vspace{.3cm}
{\bf Remark: } Given any topological attractor $K$ with local basin $V$, by Proposition \ref{propositionMedidaInvarianteAtractorTopologico} there always exist invariant and ergodic measures for $f|_V$, and they are supported on $K$. If one of these ergodic measures $\mu$ satisfies condition (\ref{eqn30}) for Lebesgue-almost every point $x \in V$, then $K$ is an ergodic attractor.
\end{definition}
\begin{remark} \em
\index{observability}
Unlike topological attractors, for ergodic attractors $K$ the attraction to $K$ of the orbits in its basin $V$, given by equality (\ref{eqn28z}), holds only for Lebesgue-a.e. $x \in V$, and not necessarily for every point $x \in V$. It is a \em Lebesgue\em-a.e. measurable observability criterion for the basin. Hence an ergodic attractor is not necessarily a topological attractor. Exercise \ref{ejercicioEjemploMagnetico} shows an example of an ergodic attractor that is not topological.
On the other hand, a topological attractor satisfies equality (\ref{eqn28}) for every $x \in V$; hence it satisfies (\ref{eqn28z}). But it does not necessarily satisfy the condition of existence of an ergodic SRB measure for which equality (\ref{eqn30}) holds. Hence a topological attractor is not necessarily an ergodic attractor. Exercise \ref{ejercicioEjemploFacil} shows an example of a topological attractor that is not ergodic.
\begin{exercise}\em \label{ejercicioEjemploFacil}
Consider on $Q= [0,1]^2$ the map $T(x,y) = \big((1/2) x, y\big)$. Prove that the segment $ K= \{0\} \times [0,1]$ is a topological attractor but not an ergodic attractor. Hint: to prove that $K$ is not an ergodic attractor, show that every ergodic measure $\mu$ is a Dirac delta at a fixed point, and that the set of points $x \in Q$ for which equality (\ref{eqn30}) holds has zero Lebesgue measure.
\end{exercise}
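The following script is a numerical illustration of both halves of the exercise, with arbitrarily chosen initial heights: every orbit converges to $K$, but the Birkhoff limit depends on the initial point, so no single ergodic measure can satisfy (\ref{eqn30}) Lebesgue-a.e. on any basin:

```python
def T(p):
    """The map T(x, y) = (x/2, y) on the square [0,1]^2."""
    x, y = p
    return (x / 2, y)

def birkhoff_average(psi, p, n):
    """Birkhoff time average (1/n) * sum_{j<n} psi(T^j(p))."""
    total = 0.0
    for _ in range(n):
        total += psi(p)
        p = T(p)
    return total / n

# Topological attraction: dist(T^n(x, y), K) = x / 2^n -> 0 for EVERY point.
p = (1.0, 0.3)
for _ in range(60):
    p = T(p)
assert p[0] < 1e-15 and p[1] == 0.3

# But the asymptotic Birkhoff average of psi(x, y) = y equals the initial
# height y0: the empirical measures converge to the Dirac delta at (0, y0).
psi = lambda q: q[1]
a1 = birkhoff_average(psi, (1.0, 0.2), 1000)
a2 = birkhoff_average(psi, (1.0, 0.7), 1000)
assert abs(a1 - 0.2) < 1e-9 and abs(a2 - 0.7) < 1e-9
# Distinct limits occur on sets of positive Lebesgue measure, so no single
# ergodic measure attracts Lebesgue-a.e. orbit of a neighborhood of K.
assert abs(a1 - a2) > 0.4
```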
\begin{exercise}\em
\label{ejercicioEjemploMagnetico} Consider on the disc $D= \{z \in \mathbb{C}: |z| \leq 1\}$ the transformation $f: D \mapsto D$ fixing the origin and such that for every $z \neq 0$ written in polar form, $f(z)$ is given by the following formula:
$$f(\rho e^{i \varphi}) = \widehat \rho e^{i \widehat \varphi},$$
where $$\widehat \varphi = \frac{3}{2} \varphi \ \mbox{ if } \ 0 \leq \varphi < \pi \ (\mbox{mod } 2 \pi),$$
$$\widehat \varphi = \pi + \frac{\varphi}{2} \ \mbox{ if } \ \pi \leq \varphi < 2\pi \ (\mbox{mod } 2 \pi),$$
$$\widehat \rho = \Big (1- \frac{\widehat \varphi}{2 \pi} \Big ) \rho.$$
(a) Sketch the orbits. (Hint: the points with $\varphi = 0$ are fixed, and every other orbit is such that the argument tends to $2 \pi$ from below and the modulus tends to zero.)
(b) Prove that for every orbit with initial point $z$ not lying on the segment $\varphi=0$, the distance to the origin tends to zero and the sequence of Birkhoff averages of the continuous functions $\psi$ tends to $\int \psi \, d \delta_0$, where $\delta_0$ is the Dirac delta supported at the origin.
(c) Conclude that $K= \{0\}$ is an ergodic attractor but not a topological attractor.
Note: In this example $f$ is discontinuous on the positive real semi-axis. However, a continuous example with a similar sketch of orbits can be constructed.
\end{exercise}
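A minimal numerical sketch of parts (a) and (b), with an arbitrary initial condition off the fixed segment, can be written directly in polar coordinates; the iteration counts and tolerances below are ad hoc choices:

```python
import math

def f(rho, phi):
    """The disc map of the exercise, in polar coordinates (rho, phi)."""
    if 0 <= phi < math.pi:
        phi_new = 1.5 * phi
    else:                       # pi <= phi < 2*pi
        phi_new = math.pi + phi / 2
    phi_new %= 2 * math.pi
    rho_new = (1 - phi_new / (2 * math.pi)) * rho
    return rho_new, phi_new

# Orbit of a point off the fixed segment {phi = 0}:
rho, phi = 0.9, 0.1
rhos = []
for _ in range(50):
    rhos.append(rho)
    rho, phi = f(rho, phi)

# (a) The argument tends to 2*pi from below and the modulus tends to zero:
assert 1.9 * math.pi < phi < 2 * math.pi
assert rho < 1e-12

# (b) The Birkhoff averages of psi(z) = |z| decay towards
# psi(0) = 0 = int psi d(delta_0); here only a crude bound is asserted:
assert sum(rhos) / len(rhos) < 0.2
```

The average of $|z|$ along the orbit decays like $C/n$, since the orbit reaches any neighborhood of the origin after finitely many steps.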
\end{remark}
\begin{remark} \em \label{remarkObservabilidadTopyEstad} \index{observability} \index{basin of attraction! observability of}
On the one hand we have the observability criterion for the local basin $V$ of the attractor. Either the attraction takes place for every initial state $x$ in the open set $V$ (topological observability criterion), or the attraction takes place only for Lebesgue-a.e. $x \in V$ (Lebesgue-measurable observability criterion). The observability criterion of the basin is thus the \em criterion by which the initial states are chosen \em in order to observe where the orbits are attracted.
\begin{definition}\label{definitionAtraccionTopol} {\bf Topological attraction } \em Independently of the \index{attraction! topological} observability criterion for the initial states in the local basin, the attraction itself, defined by equality (\ref{eqn28}) for topological attractors and by equality (\ref{eqn28z}) for ergodic attractors, is called \em topological attraction. \em
By definition, it means that the limit of the distance to the attractor $K$ exists and equals zero, starting from the initial points chosen according to the corresponding observability criterion.
\end{definition}
\begin{definition}\label{definitionAtraccionEstad} \index{attraction! statistical}
{\bf Statistical attraction. } \em By definition, it means that the Birkhoff averages of the continuous functions converge (or at least, in a more general context, have convergent subsequences) to the expected value with respect to some invariant measure $\mu$ supported on the attractor $K$, starting from the initial points chosen according to the corresponding observability criterion. In a more general context, $\mu$ is not necessarily ergodic.
Usually, when statistical attraction is studied, the criterion for selecting the initial points is that of Lebesgue-a.e. measurable observability.
\end{definition}
\end{remark}
In the next examples we shall see particular cases of existence and of non-existence of ergodic attractors.
\begin{example} \em \label{ejemploTentMapEnDisco} \index{tent map}
For the tent map of the interval, the whole interval is simultaneously a topological and an ergodic attractor, whose ergodic SRB (or physical) measure is the Lebesgue measure (see for instance \cite{buzziEnciclopedia}).
\end{example}
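A word of caution if one tries to verify this numerically: naive double-precision iteration of the tent map is misleading, because each step deletes one binary digit of the mantissa and the computed orbit collapses onto the fixed point $0$. A standard workaround, sketched below, is to iterate the smoothly conjugate logistic map $g(x)=4x(1-x)$, whose physical measure has density $1/(\pi\sqrt{x(1-x)})$ and for which $\int x \, d\mu = 1/2$; initial point and tolerances are arbitrary:

```python
# Tent map T(x) = 2x if x < 1/2, else 2 - 2x. In double precision every step
# discards one bit of the mantissa, so the computed orbit hits 0 exactly:
tent = lambda x: 2 * x if x < 0.5 else 2 - 2 * x
x_tent = 0.123456789
for _ in range(100):
    x_tent = tent(x_tent)
assert x_tent == 0.0          # numerical collapse, NOT the true dynamics

# The conjugate logistic map g(x) = 4x(1-x) does not suffer this collapse;
# Birkhoff averages approximate the space average for the physical measure
# with density 1/(pi*sqrt(x(1-x))), whose mean of x is 1/2.
g = lambda x: 4 * x * (1 - x)
x, total, N = 0.123456789, 0.0, 100_000
for _ in range(N):
    total += x
    x = g(x)
assert abs(total / N - 0.5) < 0.02   # loose statistical tolerance
```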
\begin{example} \em \index{North Pole-South Pole flow}
In the North Pole-South Pole flow, the South Pole is simultaneously an ergodic and a topological attractor, whose ergodic SRB (or physical) measure is the Dirac delta supported at the South Pole.
\end{example}
\begin{example} \em \label{ejemploRotacionEsfera}
{\bf Counterexample: irrational rotation of the sphere. } \index{rotation! irrational}
In this example there are no ergodic attractors. Let $f: S^2 \mapsto S^2$ be continuous, defined on the spherical surface $S^2$ given by the following parametrization in \lq\lq spherical\rq\rq\ coordinates:
$S^2 = \{(x,y,z) \in \mathbb{R}^3 : \ \ x = \cos \varphi \sin \theta, \ \ y= \sin \varphi \sin \theta, \ \ z= \cos \theta, \ \ 0 \leq \varphi \leq 2 \pi, \ \ 0 \leq \theta \leq \pi\}.$
The map $f$ is defined by the following equations in spherical coordinates:
$f(\varphi, \theta) = (\widehat \varphi , \widehat \theta )$, where:
\begin{eqnarray}
\widehat \theta & = \theta & \nonumber \\
\widehat \varphi & = & \varphi + a\nonumber
\end{eqnarray}
where $a \in (0, 2 \pi)$ is a constant such that $a/2 \pi$ is irrational.
It is immediate to check that the poles $C_0 = \{\theta= 0\}$ and $C_{\pi} = \{\theta = \pi\}$ are fixed points of $f$, and that each circle $C_{\theta}$, with $\theta$ constant and different from $0$ and $\pi$, is invariant under $f$. Moreover, $f|_{C_{\theta}}$ is an irrational rotation if $\theta \neq 0, \pi$. Hence every orbit of $f$ is dense in the section $C_{\theta}$ containing it. We deduce that there are no orbits dense in $S^2$. Therefore $f: S^2 \mapsto S^2$ is not transitive.
Let $m$ be the two-dimensional Lebesgue measure on the sphere $S^2$ (normalized to be a probability measure; that is, we divide the Lebesgue measure on the sphere by the area of the whole sphere, so that $m(S^2) = 1$). This measure is $f$-invariant, since the Jacobian $|\mbox{det}\, df|$ is identically equal to 1. However, $m$ is not ergodic, since $m$ is positive on open sets but $f$ is not transitive.
\vspace{.3cm}
{\bf Claim: } \em There are no ergodic attractors for the irrational rotation $f$ of the sphere $S^2$. \em
{\em Proof: }
By contradiction, suppose there exists an ergodic attractor $K$ and let $V$ be its local basin of attraction. Then $K$ is compact, nonempty and $f$-invariant, and $V$ is an open set containing $K$. Let $p \in V$. We saw that $\omega(p) = C_{\theta_p}$, where $ C_{\theta_p}$ is the horizontal section of the sphere containing the point $p$. From equality (\ref{eqn28z}) we deduce that $\omega(p) \subset K$ for Lebesgue-a.e. $p \in V$. Then $C_{\theta_p} \subset K$ and, since $p \in C_{\theta_p}$, we deduce that $p \in K$. We have proved that, under the assumption, Lebesgue-a.e. $p \in V$ belongs to $K$. Since $K$ is compact, we get $V \subset K$. But by the definition of ergodic attractor $K \subset V$. Hence $K= V$ is simultaneously compact and open, and it is nonempty. Since $S^2$ is connected, we conclude that if an ergodic attractor $K$ existed, it would be the whole sphere, $K= S^2$.
By (\ref{eqn30}), the asymptotic time average $\widetilde \psi (p)$ would have to be constant for $m$-a.e. $p \in V= S^2$, for every continuous function $\psi: S^2 \mapsto \mathbb{R}$. Since in this example $m$ is invariant, $m$ would then be an ergodic measure, contradicting the fact that $m$ is positive on open sets while $f$ is not transitive. \hfill $\Box$
{\bf Note: } In this Example \ref{ejemploRotacionEsfera} the whole sphere is a topological attractor (in fact, the only topological attractor). Hence this example proves that not every topological attractor is an ergodic attractor.
\end{example}
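The non-ergodicity of $m$ in this example can also be observed numerically. The following minimal sketch (the observable $\psi(x,y,z)=z$ and the specific starting points are illustrative choices, not taken from the text) iterates the rotation about the $z$-axis and computes Birkhoff time averages; since $z$ is invariant along every orbit, the averages depend on the horizontal circle $C_\theta$ of the starting point and therefore cannot be $m$-a.e. constant.

```python
import math

def rotate_z(p, alpha):
    """Rotate the point p = (x, y, z) of the sphere by angle alpha about the z-axis."""
    x, y, z = p
    c, s = math.cos(alpha), math.sin(alpha)
    return (c * x - s * y, s * x + c * y, z)

def birkhoff_average(p, alpha, psi, n):
    """Time average (1/n) * sum_{j<n} psi(f^j(p)) for f = rotation by alpha."""
    total = 0.0
    for _ in range(n):
        total += psi(p)
        p = rotate_z(p, alpha)
    return total / n

alpha = 2 * math.pi * (math.sqrt(5) - 1) / 2   # irrational rotation angle
psi = lambda p: p[2]                           # continuous observable: the height z

# Two starting points on different horizontal circles C_theta.
p1 = (math.sqrt(1 - 0.25), 0.0, 0.5)
p2 = (math.sqrt(1 - 0.81), 0.0, -0.9)
a1 = birkhoff_average(p1, alpha, psi, 10_000)
a2 = birkhoff_average(p2, alpha, psi, 10_000)
print(a1, a2)   # equal to the starting heights: 0.5 and -0.9
```

The two time averages disagree on sets of positive Lebesgue measure, which is exactly the obstruction to ergodicity of $m$ used in the proof above.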
\begin{example} \em \index{automorphism! linear, on the torus}
Sea $f = {\left [
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}
\right ] }_{\mbox{mod}\displaystyle (1,1)}
$ on the torus $\mathbb{T}^2$. It is ergodic with respect to the Lebesgue measure $m$. The whole torus $\mathbb{T}^2$ is an ergodic attractor, and $m$ is the ergodic SRB or physical measure.
\end{example}
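The statement that $m$ is the physical measure of this automorphism can be illustrated with a heuristic floating-point sketch (the observable and the starting point below are arbitrary illustrative choices; rounding errors make this an illustration, not a proof): along a Lebesgue-typical orbit, Birkhoff averages approach the space average $\int \psi \, dm$.

```python
import math

def cat_map(x, y):
    """One step of the hyperbolic automorphism (x, y) -> (2x + y, x + y) mod (1,1)."""
    return (2 * x + y) % 1.0, (x + y) % 1.0

def birkhoff_average(x, y, psi, n):
    """Time average of psi along the first n iterates of (x, y)."""
    total = 0.0
    for _ in range(n):
        total += psi(x, y)
        x, y = cat_map(x, y)
    return total / n

psi = lambda x, y: math.cos(2 * math.pi * x)   # its integral over T^2 against m is 0
avg = birkhoff_average(0.2137, 0.5813, psi, 200_000)
print(avg)   # close to 0 for a Lebesgue-typical starting point
```

Repeating the experiment with other starting points and observables gives time averages consistent with $\int \psi \, dm$, in contrast with the sphere example, where the averages depend on the initial horizontal circle.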
\subsection{Statistical attraction and SRB or physical measures}
Regardless of whether an ergodic attractor exists, given a probability measure $\mu$ (not necessarily ergodic or invariant) we define the following set $B(\mu)$ in the space $X$ on which $f$ acts:
\begin{definition} {\bf Basin of statistical attraction} \label{DefinicionCuencaDeAtraccionEstadistica} \em \index{basin of attraction! statistical} \index{measure! statistical attraction of} \index{$B(\mu)$ basin of! statistical attraction of the measure $\mu$}
Let $f: X \mapsto X$ be a continuous transformation on a compact metric space.
Let $\mu \in {\mathcal M}$ be a probability measure.
The \em basin of statistical attraction \em of $\mu$ is the following set:
$$B(\mu): = \{ x \in X: \ \ \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi(f^j(x)) = \int \psi \, d \mu \ \ \forall \ \psi \in C^0(X, \mathbb{R}) \}.$$
Using the characterization of the weak$^*$ topology on the space ${\mathcal M}$ we obtain:
\begin{eqnarray}
\label{equationB(mu)}
&B(\mu) = \{ x \in X: \ \ \lim_{n \rightarrow + \infty} \sigma_{n,x} = \mu\}, & \mbox{ where } \\ \label{(1)}
&\sigma_{n,x} := \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)}.&
\end{eqnarray} \index{$\sigma_{n,x}$ empirical probability}
{\bf Remark:} The basin of statistical attraction of any probability measure $\mu$ is a measurable set. Indeed, take a countable family $\{\psi_i\}_{i \in \mathbb{N}}$ of continuous real functions $\psi_i: X \mapsto [0,1]$ that is dense in the space $C^0(X, [0,1])$; the equality in the definition of the basin $B(\mu)$ holds for every $\psi \in C^0(X, \mathbb{R})$ if and only if it holds for $\psi_i$ for every $i \in {\mathbb{N}}$. For each fixed $i$, the equality holds on a measurable set, since the pointwise limit of a sequence of continuous functions is measurable. Hence $B(\mu)$ is a countable intersection of measurable sets; that is, it is measurable.
\end{definition}
\begin{definition}
\label{definitionProbaEmpiricas}
{\bf Empirical probabilities} \em \index{probability! empirical}
\index{measure! empirical probability}
\index{$\sigma_{n,x}$ empirical probability}
Note that $\sigma_{n,x}$, defined by equality (\ref{(1)}), is a probability measure for every $n \geq 1$ and every $x \in X$ (in general $\sigma_{n,x}$ is not $f$-invariant). It is the averaged probability concentrated on the points of finite segments of the forward orbit of $x$.
The probabilities $ \sigma_{n,x}$ (for any $n \geq 1$, with $x \in X$ fixed) are called the \em empirical probabilities \em of the forward orbit of $x$. The name comes from the fact that an experimenter cannot observe the asymptotic (infinite-time) time averages, but only the averages up to a finite time $n$. These averages can be computed as expected values by integrating with respect to the empirical probabilities. More precisely, by equality (\ref{(1)}) we have:
$$\frac{1}{n} \sum_{j= 0}^{n-1} \psi (f^j(x)) = \int \psi \, d \sigma_{n,x} \ \ \ \forall \ \ \psi \in C^0(X, \mathbb{R}).$$
\index{average! time} \index{average! Birkhoff}
\end{definition}
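As a concrete sketch of these notions, consider the irrational rotation on the circle (a standard uniquely ergodic example; the rotation number and observable chosen below are illustrative assumptions). For every starting point $x$, the integrals $\int \psi \, d\sigma_{n,x}$ converge to $\int \psi \, dm$ as $n \to +\infty$, so every point belongs to $B(m)$:

```python
import math

def empirical_integral(x, alpha, psi, n):
    """Integral of psi against the empirical probability sigma_{n,x}
    for the circle rotation f(t) = t + alpha mod 1."""
    total = 0.0
    for j in range(n):
        total += psi((x + j * alpha) % 1.0)
    return total / n

alpha = (math.sqrt(5) - 1) / 2               # irrational rotation number
psi = lambda t: math.cos(2 * math.pi * t)    # its integral against Lebesgue m is 0

# Empirical integrals for several starting points: all converge to int psi dm = 0.
vals = [empirical_integral(x0, alpha, psi, 100_000) for x0 in (0.0, 0.3, 0.77)]
print(vals)   # all close to 0, so every starting point lies in B(m)
```

Here $B(m)$ is the whole circle; by part b) of the exercise below this is consistent with $m$ being invariant and ergodic for the irrational rotation.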
\begin{remark} \em
If $\mu$ is a probability measure whose basin of statistical attraction $B(\mu)$ is nonempty, then $\mu$ is $f$-invariant (Exercise \ref{ejercicioZ0} a)). Ergodic measures can be characterized by the following statement (Exercise \ref{ejercicioZ0} b)):
$$\mu \mbox{ is invariant and ergodic if and only if } \mu (B(\mu)) = 1.$$
For any non-invariant probability measure $\mu$, and for any invariant but non-ergodic probability measure, one has $\mu (B(\mu)) = 0$ (Exercise \ref{ejercicioZ0} c)), although $B(\mu)$ may be nonempty.
\end{remark}
\begin{remark} \em
\label{remarkBowen} \index{basin of attraction! statistical} \index{measure! statistical attraction of}
\em The basin of statistical attraction $B(\mu)$ of a non-ergodic invariant measure $\mu$ may be nonempty, and may even cover Lebesgue-a.e. point. \em
Indeed, an example of a non-differentiable homeomorphism on a compact two-dimensional disk $D$ (an adaptation of a diffeomorphism on the compact disk attributed to Bowen) exhibits an invariant and \em non-ergodic \em measure $\mu$ such that $B(\mu) \neq \emptyset$ (see \cite[Example 7.2, case B] {CatIlyshenkoAttractors}). Moreover, in this example $B(\mu)$ contains Lebesgue-almost every point $x \in D$ even though $\mu(B(\mu)) = 0$; that is, the support of $\mu$ has zero Lebesgue measure. Hence, in this example, there exists a non-ergodic SRB measure $\mu$, the Lebesgue measure $m$ is not invariant, there exists a non-ergodic topological attractor whose topological basin of attraction is open and covers Lebesgue-almost every point, and there are no ergodic attractors.
In the $C^{1}$ version of the example attributed to Bowen (see \cite{Golenishcheva}), one has $m(B(\mu)) = 0$ for every invariant probability measure $\mu$ supported on the topological attractor. Hence there are no SRB measures. Moreover, in this example every invariant measure is hyperbolic (its Lyapunov exponents are nonzero) and has positive Lyapunov exponents. Since there are no SRB measures, there are no ergodic attractors. Nevertheless, there exists a topological attractor whose basin is open and covers Lebesgue-almost every point.
\end{remark}
\begin{exercise}\em
\label{ejercicioZ0}
Let $f: X \mapsto X$ be continuous on a metric space $X$. Denote by ${\mathcal M}$ the space of all Borel probability measures (not necessarily $f$-invariant), endowed with the weak$^*$ topology.
Let $\mu \in {\mathcal M}$.
{\bf a) } Prove that if the basin of statistical attraction $B(\mu)$ is nonempty, then $\mu$ is $f$-invariant.
{\bf b) } Prove that
$\mu$ is invariant and ergodic if and only if $ \mu (B(\mu)) = 1.$
{\bf c) } Prove that if $\mu$ is not invariant, or if $\mu$ is invariant but not ergodic, then
$$\mu (B(\mu)) = 0.$$ (Hints: $B(\mu)$ is always an $f$-invariant set. In the case where $\mu$ is invariant, apply the ergodic decomposition theorem.)
\end{exercise}
From part c) of Exercise \ref{ejercicioZ0} we deduce:
\vspace{.2cm} \index{basin of attraction! statistical} \index{measure! statistical attraction of} \index{measure! ergodic}
\index{equivalence of definitions! of ergodicity}
\em A probability measure $\mu$ is $f$-invariant and ergodic if and only if $$\mu(B(\mu)) = 1,$$
where $B(\mu)$ denotes the basin of statistical attraction of $\mu$. \em
\begin{definition}
{\bf SRB or physical measures } {\em (Sinai \cite{Sinai_SRB}-Ruelle \cite{Ruelle_SRB}-Bowen \cite{Bowen1971,Bowen-Ruelle_SRB})} \index{measure! SRB} \index{measure! physical} \index{basin of attraction! statistical} \index{measure! statistical attraction of} \label{definitionMedidaSRB}
\em
Let $f: M \mapsto M$ be continuous on a compact Riemannian manifold $M$. Let $m$ be the Lebesgue measure on $M$, normalized to be a probability: $m(M) = 1$. (In general, $m$ is not necessarily $f$-invariant.) Let $\mu \in {\mathcal M}$.
We say that the probability measure $\mu$ is \em SRB (Sinai-Ruelle-Bowen) \em or, equivalently, that it is \em physical, \em
if $$m(B(\mu)) >0,$$
where $B(\mu)$ is the basin of statistical attraction of $\mu$, defined in \ref{DefinicionCuencaDeAtraccionEstadistica}.
\end{definition}
{\bf Note: } If $\mu$ is SRB, then $B(\mu) \neq \emptyset$, and by part (a) of Exercise \ref{ejercicioZ0}, $\mu \in {\mathcal M}_f$ (that is, SRB measures are $f$-invariant). \index{measure! SRB}
According to Definition \ref{definitionMedidaSRB}, an SRB measure need not be ergodic. Indeed, in the example mentioned in \ref{remarkBowen}, which is a $C^0$ adaptation of an example attributed to Bowen, there exists a non-ergodic SRB measure. \index{measure! SRB, non-ergodic}
\vspace{.3cm}
{\bf On the nomenclature \lq\lq SRB\rq\rq \ and \lq\lq physical\rq\rq. } \index{measure! SRB} \index{measure! physical}
Definition \ref{definitionMedidaSRB} of an SRB measure is not adopted by all authors. In general, this definition, which only requires $m(B(\mu)) >0$, is used only to call the measure $\mu$ \em physical. \em
However, for a significant group of mathematicians (for example \cite{buzziEnciclopedia}, \cite{YoungSurvey}), SRB measure is not synonymous with physical measure. They call \em physical \em any probability measure $\mu$ whose basin of statistical attraction $B(\mu)$ has positive Lebesgue measure. But to call $\mu$ \em SRB, \em they require that the probability measure $\mu$ be ergodic, have positive Lyapunov exponents, and have absolutely continuous unstable conditional measures (we will see later what this additional property means, when introducing Gibbs measures via Definitions \ref{definitionMedidasCondicionadasAC}, \ref{definitionMedidasCondicionasInestables} and \ref{definitionMedidaGibbs}). This is because, among other reasons, in the restricted context of uniformly hyperbolic diffeomorphisms of class $C^{1 + \alpha}$, the two definitions are equivalent (see Theorem \ref{TheoremSRBanosov}).
We adopt the definition of, for example,
\cite[Definition 1.9]{BonattiDiazVianaLibro} or \cite[Definition 22]{Jost}.
Here, in Definition \ref{definitionMedidaSRB}, \em we do not assume any condition on the measure beyond physicality in order to call it SRB or physical (neither ergodicity, nor positivity of Lyapunov exponents, nor unstable conditional absolute continuity). In short, we use the two words \lq\lq SRB\rq\rq \ and \lq\lq physical\rq\rq \ as synonyms. \em
\section{Pesin theory} \index{Pesin! theory of} \index{Pesin theory} \label{chapterAtractores}
\subsection{Disintegration into conditional measures}
In this section we assume that $X$ is a compact metric space, equipped with the Borel sigma-algebra ${\mathcal B}$, and that $\mu$ is a probability measure on $(X, {\mathcal B})$.
We will state a result from measure theory (Rohlin's Theorem), valid even when no dynamics is defined on the space $X$. In the next section we will see the use of Rohlin's Theorem, together with Pesin theory, in dynamical systems of class $C^1$-plus-H\"{o}lder (when $X$ also carries a manifold structure).
\begin{definition}
\label{definicionParticionMedible}
{\bf Measurable partition } \index{partition! measurable}
\em
A \em partition \em of $X$
is a collection (finite, countably infinite, or uncountably infinite) $${\mathcal P} := \{W({x})\}_{x \in X}$$ of subsets $ W({x}) \subset X$ (measurable or not) such that:
{\bf (a) } $ x \in W({x})$ for all $x \in X$ (this implies $\bigcup_{x \in X} W(x) = X$);
{\bf (b) } the sets $W(x)$ are pairwise disjoint; that is, for every pair of points $x \neq y$ in $X$, either $W(x) = W(y)$ or $W(x) \cap W(y) = \emptyset$.
\vspace{.3cm}
The partition ${\mathcal P}$ is called \em measurable \em if all its pieces $W(x)$ are measurable and ${\mathcal P}$ is generated by a countable collection of finite partitions with measurable pieces. That is:
{\bf (c) } There exists a countable collection $\{\mathcal P_n\}_{n \geq 1}$ where:
$${\bf (c1) } \ \ {\mathcal P}_n := \{E_{n,j}\}_{1 \leq j \leq k_n} $$ is a finite partition of $X$ for every $n \geq 1$, with exactly $k_n$ measurable pieces $E_{n,j} \subset X$ (pairwise disjoint as $j$ varies with $n$ fixed, and whose union over $j$ is $X$ for each fixed $n$).
$${\bf (c2) } \ \ {\mathcal P}_{n+1} \prec {\mathcal P}_n \ \ \forall \ n \geq 1.$$ \index{partition! finer than}
This means that each piece of the partition ${\mathcal P}_{n+1}$ is contained in some piece of the partition ${\mathcal P}_{n}$ (one says that ${\mathcal P}_{n+1}$ is \em finer \em than ${\mathcal P}_{n}$, denoted ${\mathcal P}_{n+1} \prec {\mathcal P}_n$).
$${\bf (c3) } \ \ \forall \ x \in X: \ \ W(x) = \bigcap _{n \geq 1} \ E_{n, j_n(x)} \in {\mathcal P}$$ where $\ 1 \leq j_n(x) \leq k_n $ is the unique index, for each fixed $n$, such that $\ x \in E_{n, j_n(x)} \ \ \forall \ n \geq 1.$
Note that this last condition implies that for every $x \in X$ and all $n \geq 1$ and $1 \leq j \leq k_n$, either $x \not \in E_{n,j}$ or $W(x) \subset E_{n,j}$. In other words, each measurable set $E_{n,j}$ is a union of whole pieces of the partition ${\mathcal P}$. Put differently: ${\mathcal P} \prec {\mathcal P}_n$ for all $n \geq 1$.
Condition (c3) states something stronger than ${\mathcal P} \prec {\mathcal P}_n$ for all $n \geq 1$: the decreasing \lq\lq intersection\rq\rq \ of the partitions ${\mathcal P}_n$ is ${\mathcal P}$. This is denoted $${\mathcal P} = \bigvee_{n= 1}^{+ \infty} {\mathcal P}_n,$$
\index{partition! operation $\vee$} \index{$\vee$ operation! between partitions} \index{$\prec$ relation! between partitions}
where for any pair ${\mathcal R}, {\mathcal S}$ of finite partitions one defines
$${\mathcal R} \vee {\mathcal S} := \{R \cap S \colon \ R \in {\mathcal R}, \ S \in {\mathcal S}\}.$$
Condition (c3) (together with the measurability of the pieces of each partition ${\mathcal P}_n$) implies, in particular, the measurability of the pieces of ${\mathcal P}$. The converse, however, is false, as we will see in the example of Exercise \ref{exercise2111particionNoMedible}: there exist partitions ${\mathcal P}$ all of whose pieces are measurable but which do not satisfy condition (c). Such partitions are not measurable partitions according to this definition, even though all their pieces are measurable, because they are not generated by any countable collection of partitions with measurable pieces.
\end{definition}
\begin{exercise}\em
Prove that if ${\mathcal P}$ is a partition with measurable pieces, and if ${\mathcal P}$ is finite or countably infinite, then ${\mathcal P}$ is a measurable partition.
\end{exercise}
In the following exercises we will see examples of measurable and non-measurable partitions with an uncountable number of pieces:
\begin{exercise}\em \label{ejericioParticionesMedibleshorizontales}
Let $X= [0,1]^2$. For each $(x_0, y_0) \in X$ denote by $S(x_0, y_0) := \{(x,y) \in X: x = x_0\} $ the vertical segment of $X$ obtained by intersecting $X$ with the line $x= x_0$.
{\bf (a) } Prove that the foliation ${\mathcal F} := \{S(x,y)\}_{(x,y) \in X}$ is a measurable partition of $X$. (Hint: consider the countable collection $\{E_{n,i}\}_{n \geq 1, 1 \leq i \leq n}$ of Borel sets $E_{n,i} := [(i-1)/n, i/n) \times [0,1] $ if $i < n$, \ $E_{n,n} := [(n-1)/n, 1] \times [0,1]$.)
{\bf (b) } In the interval $[0,1]$ let $K = \cap_{n= 0}^{+ \infty} K_n$ be the \lq\lq middle-thirds\rq\rq-type Cantor set, i.e. $K_0 = [0,1]$ and $K_{n+1}$ is obtained from $K_n$ by removing, from each closed interval $I$ forming $K_n$, the open subinterval centered at the midpoint of $I$ with length $(1/3)\mbox{length}(I)$. Let ${\mathcal P} = \{P(x,y)\}_{(x,y) \in X}$ be the partition of $X$ defined by $P(x_0,y_0) = S(x_0, y_0)$ if $x_0 \in K$, and $P(x_0, y_0) := \{(x,y) \in X: \ x \not \in K\}$ if $x_0 \not \in K$. Prove that ${\mathcal P}$ is a measurable partition of $X$.
{\bf (c) } Let $\xi: X \mapsto Y$ be a homeomorphism. Consider on $Y$ the foliation
${ \mathcal G } := \{\xi(S(\xi^{-1}(x,y)))\}_{(x,y) \in Y}$. (One says that $\xi^{-1}$ is a $C^0$ trivialization of the foliation ${\mathcal G}$.) Prove that ${\mathcal G}$ is a measurable partition.
{\bf (d) } Let $X$ and $Y$ be compact metric spaces, and consider on $X \times Y$ the partition ${\mathcal F} = \{S(x,y)\}_{(x,y) \in X \times Y}$ into vertical sections $S(x_0,y_0):= \{(x,y) \in X \times Y: \ x= x_0\}.$ Prove that ${\mathcal F}$ is a measurable partition.
\end{exercise}
\begin{exercise}\em
Let $(X, {\mathcal A})$ and $(Y, {\mathcal B})$ be two measurable spaces. Let $\xi: X \mapsto Y $ be a bimeasurable transformation. Let ${\mathcal P} = \{P(x)\}_{x \in X}$ be any partition of $X$ (not necessarily measurable). Consider on $Y$ the partition ${\mathcal Q} := \{\xi(P(\xi^{-1}(y)))\}_{y \in Y} $. Prove that ${\mathcal P}$ is a measurable partition if and only if ${\mathcal Q}$ is.
\end{exercise}
\begin{exercise}\em \label{exercise2111particionNoMedible} \index{partition! measurable} \index{automorphism! linear, on the torus}
Let $f \in \mbox{Diff }^{\infty}(\mathbb{T}^2)$ be the linear automorphism of the torus $\mathbb{T}^2$ given by $$f = \left(
\begin{array}{cc}
2 & 1 \\
1 & 1 \\
\end{array}
\right)
(\mbox{ m\'{o}d. }\mathbb{Z}^2) .$$ Sea $W^u(p)$ la variedad inestable (global) por cada punto $p \in \mathbb{T}^2$, i.e.: $$W^u(p) = \{q \in \mathbb{T}^2: \lim_{n \rightarrow + \infty} \mbox{dist}(f^{-n}(p), f^{-n}(q))= 0\}.$$
(a) Probar que $W^u(p)$ es un conjunto medible para todo $p \in \mathbb{T}^2$.
(b) Probar que la partici\'{o}n ${\mathcal P}:= \{W^u(p)\}_{p \in {\mathbb{T}^2}}$ no es medible.
Sugerencia: Considerar la medida de Lebesgue $m$ en el toro. Chequear que $m(W^u(p)) = 0$ para todo $p$. Sea $E \subset \mathbb{T}^2$ medible con $m(E) >0$ y tal que si $p \in E$ entonces $W(p) \subset E$. Probar que $m(E) = 1 $ (Recordar que $\mathbb{T}^2 = \mathbb{R}^2/\mathbb{Z}^2$. Tomar un levantado $\widehat E$ a $\mathbb{R}^2$ del conjunto $E$: si $\widehat p \in \widehat E$ entonces toda la recta con direcci\'{o}n inestable en $\mathbb{R}^2$ que pasa por $\widehat p$ est\'{a} contenida en $\widehat E$. Mirar la intersecci\'{o}n de $\widehat E$ con cada recta horizontal de altura entera en $\mathbb{R}^2$. Proyectar todas esas intersecciones, siguiendo las verticales en $\mathbb{R}^2$, sobre una sola recta horizontal. Observar que esa proyecci\'{o}n (m\'{o}d. $1$) en $[0,1] = \mathbb{R}/\mathbb{Z}$, es invariante con la rotaci\'{o}n irracional y tiene medida de Lebesgue positiva en el intervalo.) De la propiedad $m(E) = 0$ \'{o} $m(E) = 1$, deducir que para $m$-c.t.p. $p \in \mathbb{T}^2$ no se puede verificar la condici\'{o}n (c3) de la Definici\'{o}n \ref{definicionParticionMedible}) de partici\'{o}n medible.
\end{exercise}
\begin{exercise}\em \label{exerciseMedibleSiiMuMedible}
(a) Prove that if ${\mathcal P}$ and ${\mathcal Q}$ are measurable partitions, then ${\mathcal P} \vee {\mathcal Q}$ is a measurable partition. (Hint: build ${\mathcal P}_n \vee {\mathcal Q}_n$, where $ {\mathcal P}_n $ and ${\mathcal Q}_n$ are the finite partitions satisfying condition (c) of Definition \ref{definicionParticionMedible} for ${\mathcal P}$ and ${\mathcal Q}$ respectively.)
(b) Prove that if ${\mathcal Q}$ is measurable, then ${\mathcal P}$
is measurable if and only if ${\mathcal P} \vee {\mathcal Q}$ is measurable.
\end{exercise}
\vspace{.3cm}
{\bf Quotient space by a measurable partition.} \index{space! quotient by a partition}
Given a measurable partition ${\mathcal P}= \{W(x)\}_{x \in X}$ of $(X, {\mathcal B})$, we consider the quotient set $X /\sim$ by the equivalence relation $x \sim y$ if and only if $W(x) = W(y)$. Denoting by $[x]$ the equivalence class containing $x$, we have
$$X/\sim = \{[x]\}_{x \in X} \ \mbox{ where } \ [x] = W(x). $$
We equip the set $X/\sim$ with the quotient measurable structure, defining the sigma-algebra:
$$(X/\sim, {\mathcal A}/\sim): \ \ \ \widehat A \in {\mathcal A}/\sim \ \ \ \mbox{ if and only if } $$ $$A := \{x \in X: W(x) \in \widehat A\} \in {\mathcal A}. $$
For each $W(x) \in X / \sim$, that is, on each piece $W(x)$ of the measurable partition ${\mathcal P}$, we define the sigma-algebra ${\mathcal A}_{W(x)}$ obtained by restricting the sigma-algebra ${\mathcal A}$ to $W(x)$. That is:
$$ (W(x), {\mathcal A}_{W(x)}): \ \forall \ A \subset W(x), \ \ A \in {\mathcal A}_{W(x)} \mbox{ if and only if } A \in {\mathcal A}.$$
\begin{theorem}
{\bf Measurable Disintegration (Rohlin's Theorem)} \index{theorem! Rohlin} \index{theorem! measurable decomposition} \index{theorem! existence of! conditional measures} \label{theoremRohlin}
Let $X$ be a compact metric space and let ${\mathcal A}$ be its Borel sigma-algebra.
Let $\{W(x)\}_{x \in X}$ be a measurable partition.
Let $\mu$ be a probability measure on $(X, {\mathcal A})$.
Then:
{\bf (i) } There exists a probability measure $\widehat \mu $ on the quotient measurable space $X /\sim$ such that, for $\widehat \mu$-a.e. $W(x) \in X/\sim$, there exists a probability measure $\mu^{W(x)}$ on the measurable space $(W(x), {\mathcal A}_{W(x)})$ such that:
$\bullet $ For every $A \in {\mathcal A}$:
\begin{equation}\label{eqnRohlin1}\int_X \chi_A \, d \mu = \int _{[x] \in X/\sim} \Big(\int_ {y \in [x]= W(x)} \chi_A (y) \, d \mu^{W(x)} \Big) \, d \widehat \mu. \end{equation}
$\bullet $ More generally, for every $\psi \in L^1(\mu)$:
\begin{equation}\label{eqnRohlin2}\int_X \psi \, d \mu = \int _{[x] \in X/\sim} \, d \widehat \mu \ \Big(\int_ {y \in [x]= W(x)} \psi (y) \, d \mu^{W(x)} \Big). \end{equation}
{\bf (ii) } The probability measure $\widehat \mu$ and, for $\widehat \mu$-almost all pieces $W(x) \in X /\sim$, the probability measures $\mu^{W(x)}$ satisfying equalities \em (\ref{eqnRohlin1}) \em and \em (\ref{eqnRohlin2}), \em are unique.
\end{theorem}
The original proof of Rohlin's Theorem can be found in \cite{Rohlin}, and also in \cite{Rohlin1}. Reformulated proofs appear, for example, in \cite{Ledrappier} and in \cite{VianaRohlin}.
Rohlin's theorem generalizes, to a measurable context and for an arbitrary probability measure, the Fubini theorem that holds for the Lebesgue measure on a rectangle of $\mathbb{R}^a \times \mathbb{R}^b$. Indeed, by Fubini's Theorem on a rectangle, the integral with respect to the Lebesgue measure is obtained as a double integral: over a \lq\lq horizontal\rq\rq \ section of the rectangle, of the integrals over the \lq\lq vertical\rq\rq \ sections, taken with respect to the Lebesgue measures along those sections, respectively.
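The disintegration can be sketched in a finite toy case, where it reduces to elementary conditional probability (the specific spaces and weights below are illustrative assumptions): take $\mu$ on a finite product $X \times Y$, partition it into vertical fibers $\{x\} \times Y$, let $\widehat\mu$ be the marginal on the quotient, and let the conditional measures be the normalized columns.

```python
# Finite sketch of Rohlin disintegration: X = {0,1,2}, Y = {0,1},
# mu given by a table of weights, partition pieces = vertical fibers {x} x Y.
mu = {(0, 0): 0.10, (0, 1): 0.20,
      (1, 0): 0.25, (1, 1): 0.05,
      (2, 0): 0.15, (2, 1): 0.25}

# Quotient measure mu_hat on X/~ (one class per fiber) = marginal of mu on X.
mu_hat = {x: sum(mu[(x, y)] for y in (0, 1)) for x in (0, 1, 2)}

# Conditional measure on each fiber: the normalized column, a probability.
cond = {x: {y: mu[(x, y)] / mu_hat[x] for y in (0, 1)} for x in (0, 1, 2)}

def integral(psi):
    """Integral of psi against mu on X x Y."""
    return sum(psi(x, y) * mu[(x, y)] for (x, y) in mu)

def disintegrated_integral(psi):
    """Outer integral over the quotient of the inner integrals over the fibers."""
    return sum(mu_hat[x] * sum(psi(x, y) * cond[x][y] for y in (0, 1))
               for x in (0, 1, 2))

psi = lambda x, y: (x + 1) * (2 * y - 1)
lhs, rhs = integral(psi), disintegrated_integral(psi)
print(lhs, rhs)   # the two integrals agree
```

The content of Rohlin's Theorem is that this elementary picture survives for an arbitrary probability measure and an arbitrary measurable partition (with possibly uncountably many pieces), where neither the quotient measure nor the conditionals can be written down by simple division.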
\begin{definition}
{\bf Conditional measures } \em \index{measure! conditional} \label{definitionMedidasCondicionas}
Under the hypotheses and conclusions of Rohlin's Theorem \ref{theoremRohlin}, the measures $\mu^{W }$ are called \em conditional measures of $\mu$ along the pieces $W $ of the partition ${\mathcal P}$. \em
We also call them, for short, \em ${\mathcal P}$-conditional measures of $\mu$, \em or, when the partition is clear from the context, simply \em conditional measures \em of $\mu$.
Note that each conditional measure $\mu^{W }$ is supported on a piece $W $ and is a probability measure (that is, $\mu^{W}(W ) = 1$). Moreover, $\mu^{W}$ is defined for $\widehat \mu$-almost all pieces $W$ of the partition ${\mathcal P}$, and not necessarily for all of them.
\vspace{.3cm}
{\bf Absolute continuity of the conditional measures.} \index{absolute continuity! of conditional measures} \index{measure! conditional} \index{measure! absolutely continuous}
When the metric space $X$ has the structure of a compact Riemannian manifold $M$ (that is, $X= M$), and the pieces $W $ of the measurable partition ${\mathcal P}$ are submanifolds of $M$, the privileged measure along each of these submanifolds is the Lebesgue measure on $W $, which we denote $m^{W}$.
In a general context, the conditional measures $\mu^{W}$ of $\mu$ along the pieces $W $ of the partition ${\mathcal P}$ may bear no relation to the Lebesgue measures $m^{W}$ along these pieces. However, within the Pesin theory that we will see in the next section, a special role is played by the measures $\mu$ satisfying the following definition:
\end{definition}
\begin{definition}
\label{definitionMedidasCondicionadasAC}
\em
We say that $\mu$ \em has absolutely continuous ${\mathcal P}$-conditional measures \em when, for $\widehat \mu$-almost every piece $W $ of the partition ${\mathcal P}$, one has:
$$\mu^{W } \ll m^ {W }, \ \ \mbox{ i.e. } \mu^{W }(A) = 0 \mbox{ if } m^{W } (A) = 0, $$
where $A \subset W$ is measurable, $W$ is a $u$-dimensional submanifold of $M$, and $m^W$ is the $u$-dimensional Lebesgue measure along the submanifold $W$. \index{absolute continuity! of conditional measures}
\index{measure! unstable conditional}
\index{measure! absolutely continuous}
\end{definition}
\subsection{Gibbs measures}
\begin{definition}
\label{definitionMedidasCondicionasInestables} \
\index{measure! unstable conditional}
{\bf Unstable conditional measures } \em
Let $M$ be a compact Riemannian manifold and let $f \in {\mbox{Diff }}^1(M)$ be such that the set
\begin{equation} \label{eqnvariedadinestableglobal} W^u(x) := \{y \in M: \lim_{n \rightarrow + \infty} \mbox{dist} (f^{-n}(x), f^{-n}(y)) = 0\}\end{equation} is, by hypothesis, a $C^1$ submanifold immersed in $M$ for $\mu$-almost every point $x \in M$, for a certain probability measure $\mu$ invariant under $f$. Under this hypothesis $W^u(x)$ is called the \em global unstable manifold \em through the point $x$. By the continuity of $f$, it is immediate to check that $f^{-1}(W^u(x)) = W^u(f^{-1}(x))$ for all $x \in M$.
Let $B_{\delta}$ be a (compact) ball of sufficiently small radius with $\mu(B_{\delta}) >0$, and consider on it the partition ${\mathcal F}_{\delta}^u = \{W^u_{\delta}(x)\}_{x \in B_{\delta}}$ defined as follows:
$\bullet$ $W^u_{\delta}(x)= c.c._x(W^u(x) \cap B_{\delta}(x)) $ is the connected component containing the point $x$ of the intersection $W^u(x) \cap B_{\delta}(x)$, for every point $x$ such that $W^u(x)$ is a $C^1$-immersed submanifold of $M$ (that is, for $\mu$-a.e. $x \in M$). By hypothesis, this submanifold $W^u_{\delta}(x)$ is $C^1$-embedded in $M$ for $\mu$-a.e. $x \in B_{\delta}$. It is called the \em local unstable manifold \em through the point $x$.
$\bullet$ Abusing notation, for the remaining points $x \in B_{\delta}$ we set $W^u_{\delta}(x) := \{y \in B_{\delta}\colon $ there is no unstable manifold through the point $y \}$. By construction, this single piece of the partition ${\mathcal F}^u$ (which is not necessarily a manifold) has $\mu$-measure zero.
Rescaling $\mu$ so that $\mu(B_{\delta}) = 1$, we have the following: \index{measure! unstable conditional}
\index{measure! conditional}
\vspace{.5cm}
If the partition ${\mathcal F}^u = \{W_{\delta}^u(p)\}_{p \in B_{\delta}}$ of local unstable manifolds so constructed is a $\mu$-measurable partition, we call \em unstable conditional measures \em of $\mu$ the conditional measures $\mu^{W^u_{\delta}(p)}$ of $\mu$ along the pieces $W^u_{\delta}(p)$ of that partition, that is, along the local unstable manifolds.
To simplify notation, we will write $\mu^u = \mu^{W^u_{\delta}}$ for the unstable conditional measures, and $m^u = m^{W^u_{\delta}} $ for the Lebesgue measures along the local unstable manifolds $W^u_{\delta}$.
\end{definition}
\begin{definition}
\index{absolute continuity! of conditional measures}
\index{measure! unstable conditional}
\index{measure! absolutely continuous}
\label{definitionmedidascondicionalesinestablesAC} {\bf Absolutely continuous unstable conditional measures.} \em
\index{absolute continuity! of conditional measures} \index{measure! conditional} \index{measure! absolutely continuous}
In the context of Definition \ref{definitionMedidasCondicionasInestables}, a measure $\mu$ is said to have \em absolutely continuous unstable conditional measures \em when
$$\mu^u \ll m^u $$ for $\widehat \mu$-almost every local unstable manifold $W^u_{\delta}$ of the partition ${\mathcal F}^u$, where $\widehat \mu$ and $ \mu^u $ are the measures given by Rohlin's disintegration Theorem \ref{theoremRohlin} for the partition into local unstable manifolds.
Recall that, by definition, given two measures $\nu_1$ and $\nu_2$, one says that \em $\nu_1$ is absolutely continuous with respect to $\nu_2$, \em denoted $\nu_1 \ll \nu_2$, when for every Borel set $A$ one has $$\nu_2(A)= 0 \ \Rightarrow \ \nu_1(A) = 0.$$
Two finite measures $\nu_1$ and $\nu_2$ satisfy $\nu_1 \ll \nu_2$ if and only if there exists a function $h \in L^1(\nu_2)$, called the \em Radon-Nikodym derivative \em of $\nu_1$ with respect to $\nu_2$, such that for every measurable set $A$
$$\nu_1(A) = \int _A \ h \, d \nu_2.$$
(See for example \cite[page 85]{Folland} or
\cite[page 113]{Rudin}.)
Hence, if $\mu$ has absolutely continuous unstable conditional measures, then for $\mu$-a.e. $x \in M$ there exists a function $h_x \in L^1(m^u_x)$, where $m^u_x$ is the Lebesgue measure along the local unstable manifold $W^u_{\delta}(x)$, such that the unstable conditional measure $\mu^u_x$ along $W^u_{\delta}(x) $ satisfies:
$$\mu^u_x\big(A \cap W^u_{\delta} (x)\big) = \int_{y \in W^u_{\delta}(x)} \chi_A(y) \, h_x(y) \, d m_x^u(y) $$
for every Borel set $A \subset M$.
\end{definition}
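The relation $\nu_1 \ll \nu_2$ and the role of the Radon-Nikodym derivative $h$ can be sketched on a finite space, where the integral $\nu_1(A) = \int_A h \, d\nu_2$ is a finite sum (the weights and the density below are illustrative assumptions):

```python
# Finite sketch of absolute continuity and the Radon-Nikodym derivative.
points = [0, 1, 2, 3, 4]
nu2 = {k: 0.2 for k in points}                 # reference probability nu2
h = {0: 0.5, 1: 1.5, 2: 2.0, 3: 1.0, 4: 0.0}  # density h = d(nu1)/d(nu2)

def nu1(A):
    """nu1(A) = integral over A of h d(nu2); hence nu1 << nu2 by construction."""
    return sum(h[k] * nu2[k] for k in A)

def nu2_measure(A):
    return sum(nu2[k] for k in A)

total = nu1(points)      # h integrates to 1 against nu2, so nu1 is a probability
null_for_nu1 = nu1({4})  # = 0, although nu2({4}) = 0.2 > 0:
# nu1 << nu2 holds, but nu2 is NOT absolutely continuous with respect to nu1.
print(total, null_for_nu1, nu2_measure({4}))
```

The last line illustrates that absolute continuity is not symmetric: since $h$ vanishes on the point $4$, the set $\{4\}$ is $\nu_1$-null but not $\nu_2$-null.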
%
\begin{definition}
{\bf Gibbs measures} \label{definitionMedidaGibbs} \em \index{absolute continuity! of conditional measures}
\index{measure! unstable conditional}
\index{measure! absolutely continuous}
\index{measure! Gibbs}
Let $M$ be a compact Riemannian manifold and let $f \in \mbox{Diff }^1(M)$. An $f$-invariant probability measure $\mu$ is called a \em Gibbs measure \em when it satisfies:
{\bf (i) } For $\mu$-a.e. $x \in M$, the set defined by equality (\ref{eqnvariedadinestableglobal}) is a $C^1$-immersed submanifold of $M$ (the global unstable manifold through the point $x$).
{\bf (ii) } For $\delta >0$ small enough, if $B_{\delta} \subset M$ is a compact ball of radius $\delta$ such that $\mu(B_{\delta})>0$, then the following family of local unstable manifolds is a $\mu$-measurable partition: $${\mathcal F}^u := \{W^u_{\delta}(x)\}_{x \in B_{\delta}}, \ \ \mbox{ where } \ W^u_{\delta}(x) := c.c._x (W^u (x) \cap B_{\delta}).$$
($c.c._x$ denotes the connected component containing the point $x$.)
{\bf (iii) } $\mu$ has absolutely continuous unstable conditional measures, according to the definition given in the last paragraph of \ref{definitionMedidasCondicionasInestables}.
\end{definition}
Let us now show that the property of having absolutely continuous conditional measures along the pieces of a partition passes from an invariant measure $\mu$ to its ergodic components, and conversely. Therefore, an invariant measure $\mu$ is a Gibbs measure if and only if its ergodic components are Gibbs measures.
\begin{corollary}
{\bf of Rohlin's Theorem} \label{corolarioRohlin1} \index{theorem! Rohlin} \index{absolute continuity! of conditional measures}
\index{measure! unstable conditional}
\index{measure! absolutely continuous} \index{ergodic components}
\index{measure! Gibbs}
Let $f\colon X \mapsto X$ be continuous on a compact metric space $X$, let $\mu$ be an $f$-invariant probability measure and let ${\mathcal P} = \{W(x)\}_{x \in X}$ be a measurable and $f$-invariant partition \em (i.e. $f^{-1}(W(x)) = W(f^{-1}(x))$ for all $x \in X$). \em
Then:
{\bf (a) } The conditional measures $\mu^{W}$ of $\mu$ along the pieces $W$ of ${\mathcal P}$, and the measure $\widehat \mu$ induced by $\mu$ on the quotient space $X / \sim$ of the partition, are $f$-invariant measures.
{\bf (b) } $\mu_x^{W(x)} = \mu^{W(x)} \ \mbox{ for } \ \mu\mbox{-a.e. } x \in M,$
where $W(x)$ denotes the piece of the partition ${\mathcal P}$ containing the point $x$, and $\mu_x$ denotes the ergodic component of $\mu$ to which the point $x$ belongs according to Theorem \em \ref{theoremDescoErgodicaEspaciosMetricos} (that is, $\lim_{n \rightarrow + \infty} (1/n) \sum_{j= 0}^{n-1} \delta_{f^j(x)} = \mu_x$ in the weak$^*$ topology).
{\bf (c)} If moreover $X = M$ is a compact Riemannian manifold, \em $f \in \mbox{Diff }^1(M)$, \em and ${\mathcal P} = \{W_{\delta} (x)\}_{x \in M}$ is a measurable partition of the ball $B_{\delta}(x)$ into submanifolds $W_{\delta}(x)$ that are $C^1$-immersed in $M$ for $\mu$-a.e. $x \in X$, then:
\ \ $\bullet$ The conditional measures $\mu^{W}$ of $\mu$ are absolutely continuous along the submanifolds $W \in {\mathcal P}$ if and only if, for $\mu$-a.e. $x \in X$, the conditional measures $\mu_x^W$ of the ergodic components $\mu_x$ of $\mu$ are absolutely continuous. \index{ergodic components}
\ \ $\bullet$ $\mu$ is a Gibbs measure if and only if the ergodic components $\mu_x$ of $\mu$ are Gibbs measures for $\mu$-a.e. $x \in M$.
\end{corollary}
{\em Proof: }
{\bf (a) } By Rohlin's Theorem \ref{theoremRohlin} and the $f$-invariance of $\mu$, for every measurable set $A$ we have:
\begin{eqnarray}
\mu(A) &=& \int _{[x] \in X/ \sim} \Big( \int _{x \in [x] } \chi_A(x) \, d\mu^{W(x)} \Big) \, d \widehat \mu = \int_{[x] \in X/ \sim} \mu^{W(x)}\big(W(x) \cap A \big) \, d \widehat \mu \nonumber \\
&=& \mu(f^{-1}(A)) = \int _{[x] \in X/ \sim} \Big( \int _{x \in [x] } \chi_{f^{-1}(A)} (x) \, d\mu^{W(x)} \Big) \, d \widehat \mu \nonumber \\
&=& \int_{[x] \in X/ \sim} \mu^{W(x)}\big(W(x) \cap f^{-1}(A)\big) \, d \widehat \mu. \label{eqn-10}
\end{eqnarray}
The probability measures $\mu^{W(x)}$ are defined for $\widehat \mu$-almost every piece of the partition ${\mathcal P}$. For the remaining pieces we take, by convention, any probability measure supported on them. In this way $\mu^{W(x)}$ is defined for every $x \in M$.
Due to the $f$-invariance of the pieces $W \in {\mathcal P}$, we have:
$$W(x) \cap f^{-1}(A) = f^{-1}(W(f(x)) \cap A).$$
For every $y \in M$ we define the following measure $(\mu^{W(y)})^*$ along the piece $W(y)$:
$$(\mu^{W(y)})^*(B):= \mu^{W(f^{-1}(y))}(f^{-1}(B)) \ \ \forall \ B \in {\mathcal A}, $$ where
${\mathcal A}$ denotes the Borel sigma-algebra of $X$. Then:
$$\mu^{W(x)}(W(x) \cap f^{-1}(A)) = (\mu^{W(f(x))})^*(W(f(x)) \cap A),$$
and substituting into (\ref{eqn-10}) we obtain
\begin{eqnarray}
\mu(A) &=& \int_{[x] \in X/ \sim} (\mu^{W(f(x))})^*\big(W(f(x)) \cap A\big) \, d \widehat \mu \nonumber \\
&=& \int _{[y] \in X /\sim} (\mu^{W(y)})^*\big(W(y) \cap A\big) \, d (\widehat \mu)^*, \label{eqn-10b}
\end{eqnarray}
where $(\widehat \mu)^*$ is the probability measure on $X / \sim$ defined by
$$\int \varphi \, d \widehat \mu ^* := \int \varphi \circ f \, d \widehat \mu \ \ \forall \ \ \varphi \in L^1(\widehat \mu).$$ (In the last equality $f: X/\sim \ \mapsto X/\sim$ denotes the map taking the piece $W \in {\mathcal P}$ to the piece $f^{-1}(W)$.)
From (\ref{eqn-10b}) we deduce
\begin{eqnarray*}
\mu(A) &=& \int _{[y] \in X /\sim} (\mu^{W(y)})^*\big(W(y) \cap A\big) \, d (\widehat \mu)^* \\
&=& \int _{[y] \in X /\sim} \Big( \int_{y \in [y]} \chi_A(y) \, d (\mu^{W(y)})^* \Big) \, d \widehat \mu ^* \ \ \ \forall \ A \in {\mathcal A}.
\end{eqnarray*}
Hence we have found another Rohlin disintegration of the measure $\mu$ with respect to the partition ${\mathcal P}$. By the uniqueness of the probability measures $\widehat \mu$ and $\mu^{W}$ in the Rohlin disintegration,
$$(\widehat\mu)^* = \widehat \mu, \ \ \ \mu^{W(x)} = (\mu^{W(f(x))})^*,$$
which proves the $f$-invariance of $\widehat \mu$ and of $\mu^W$.
{\bf (b) } For every set $A \in {\mathcal A}$, by the Ergodic Decomposition Theorem (Theorem \ref{theoremDescoErgodicaEspaciosMetricos}) we have: $$\mu(A) = \int \mu_x(A) \, d \mu. $$
By the Rohlin Decomposition Theorem, applied to each ergodic measure $\mu_x$:
\begin{eqnarray}
\mu(A) = \int \mu_x (A) \, d \mu &=& \int_{x \in X} \Big( \int_{[y] \in X / \sim} \Big( \int_{y \in [y]} \chi_A \, d \mu_x^{W(y)} \Big) \, d \widehat \mu_x \Big) \, d \mu \nonumber \\
&=& \int_{ [y] \in X/ \sim } \Big( \int_{y \in [y]} \chi_A \, d \mu_x^{W(y)} \Big) \, d \widehat \nu, \label{eqn-11}
\end{eqnarray}
where the probability measure $ \widehat \nu $ on the quotient space $X / \sim$ is defined by:
$$\int_{X/ \sim} \varphi \, d\widehat \nu := \int _{x \in X} \Big (\int _{[y] \in X / \sim} \varphi([y]) \, d \widehat \mu_x \Big ) \, \ d \mu \ \ \ \forall \ \ \varphi \in L^1 (\widehat \mu).$$
Hence equality (\ref{eqn-11}) is a Rohlin decomposition of the measure $\mu$ with respect to the partition ${\mathcal P}$. By the uniqueness of the probabilities in the Rohlin decomposition, we conclude that $\widehat \mu = \widehat \nu$ and $\mu^W_x = \mu^W$.
{\bf (c) } As a consequence of part (b), $\mu^W \ll m^W$ if and only if $\mu_x ^W \ll m ^W$.
In the particular case in which the measurable partition ${\mathcal P}$ is the partition of a ball $B \subset M$ into local unstable manifolds, we obtain that $\mu$ is a Gibbs measure if and only if $\mu_x$ is, for $\mu$-a.e. $x \in M$.
\hfill $\Box$
\vspace{.3cm}
\subsection{Relation between Gibbs and SRB measures} \index{Pesin! theory of} \index{Pesin theory}
In this section we assume that $M$ is a compact Riemannian manifold and that $f \in \mbox{Diff }^{1 + \alpha}(M)$. Some of the theorems of this section are obtained from results of Pesin theory, under the hypothesis that $f$ is of class $C^{1 + \alpha}$. They are false if $f$ is only of class $C^1$. In particular, the absolute continuity of the unstable conditional measures (the existence of an ergodic Gibbs measure that is also SRB) does not hold for every $f \in \mbox{Diff }^1(M)$.
Later, in the subsequent sections of this chapter, we will see some ways of restating the results of this section, generalizing some definitions (through the introduction of SRB-like measures) and suitably modifying the statements, so that they apply to every $C^1$ diffeomorphism of the manifold $M$ (moreover, some of them also apply to continuous maps $f: M \mapsto M$).
\begin{theorem}
\label{teoremaAtractoresErgodicos} \label{theoremGibbs->SRB} {\bf Gibbs implies SRB or physical} \index{measure! SRB} \index{measure! Gibbs} \index{measure! physical} \index{theorem! Gibbs measure} \index{ergodic components}
Let \em $f \in \mbox{Diff }^{1 + \alpha}(M)$ \em and let $\mu $ be a hyperbolic invariant measure.
If $\mu$ is a Gibbs measure, then the ergodic components of $\mu$ are SRB or physical measures.
\end{theorem}
In Paragraph \ref{proofTheoremGibbs->SRB} we will prove this theorem, reducing it to results of Pesin theory. A proof of Theorem \ref{teoremaAtractoresErgodicos} can also be found in \cite{PughShubErgodicAttractors} or in \cite[Proposition 11.24]{BonattiDiazVianaLibro}.
The converse of Theorem \ref{theoremGibbs->SRB} also holds, under hypotheses of $C^{1 + \alpha}$ uniform hyperbolicity: if the ergodic components of an invariant measure $\mu$ are SRB or physical measures, then $\mu$ is a Gibbs measure. More precisely:
\begin{theorem} \label{theoremSRB->Gibbs} {\bf Ergodic SRB implies Gibbs} \index{measure! ergodic SRB} \index{measure! SRB} \index{measure! Gibbs} \index{theorem! ergodic SRB measure}
Let \em $f \in \mbox{Diff }^{1 + \alpha}(M)$, \em let $\Lambda \subset M$ be a uniformly hyperbolic ergodic attractor, and let $\mu $ be the ergodic SRB or physical measure supported on $\Lambda$ \em (according to Definition \ref{definitionAtractorErgodico} of ergodic attractor).
\em Then $\mu$ is a Gibbs measure.
\end{theorem}
Theorem \ref{theoremSRB->Gibbs} is a consequence of the following theorem of Pesin-Sinai, which establishes the existence of an ergodic SRB measure under hypotheses of uniform hyperbolicity in the $C^{1 + \alpha}$ setting.
\begin{theorem} \label{theoremPesinSinai}
{\bf [Pesin-Sinai, \cite{Pesin-Sinai}]} \index{theorem! Pesin-Sinai}
Let $f \in \mbox{Diff }^{1 + \alpha}(M)$ and let $\Lambda $ be a uniformly hyperbolic topological attractor. Then:
There exist ergodic SRB measures supported on $\Lambda$. Every ergodic SRB measure supported on $\Lambda$ is a Gibbs measure. Conversely, every ergodic Gibbs measure supported on $\Lambda $ is SRB. \em (cf. Theorem \ref{theoremGibbs->SRB}).
\end{theorem}
In the particular case where $f$ is Anosov, we will prove this result later in this section (Theorem \ref{TheoremSRBanosov}).
\vspace{.3cm}
{\bf Generalization of Theorem \ref{theoremPesinSinai} to non-uniformly hyperbolic $C^{1 + \alpha}$ diffeomorphisms}
The implication ergodic-SRB $\Rightarrow$ Gibbs, as in the Pesin-Sinai Theorem \ref{theoremPesinSinai}, holds under hypotheses even more general than the uniform hyperbolicity that we assumed in that theorem, always provided that $f \in \mbox{Diff }^{1 + \alpha}(M)$. Indeed, the topological attractor $\Lambda$ need not be hyperbolic; it suffices that it be \em partially hyperbolic \em (see the definition, for example, in \cite[Definition B.3, page 289]{BonattiDiazVianaLibro}). It may even be non-uniformly hyperbolic with singularities, such as the Lorenz attractor \cite{Pesin1992}, for example.
For partially hyperbolic diffeomorphisms, the invariant \index{hyperbolicity! partial} \index{diffeomorphisms! partially hyperbolic} measures are not necessarily hyperbolic: there may exist an Oseledets subspace with Lyapunov exponent equal to zero. Nevertheless, there may be a separation, uniformly bounded away from zero, between the positive and the non-positive Lyapunov exponents.
The Pesin-Sinai proof of Theorem \ref{theoremPesinSinai} can be found in \cite{Pesin-Sinai} for $C^{1 + \alpha}$ partially hyperbolic diffeomorphisms; in particular for those that are uniformly hyperbolic or Anosov. A proof for general $C^{1 + \alpha}$ partially hyperbolic diffeomorphisms can also be found in \cite[Theorem 11.16]{BonattiDiazVianaLibro}.
Observe that partial hyperbolicity does not imply that the invariant measures are hyperbolic; therefore, even if ergodic Gibbs measures exist, we cannot apply Theorem \ref{theoremGibbs->SRB} to deduce that they are SRB. In general, the problem of the existence of SRB measures for partially hyperbolic diffeomorphisms is essentially open. Recently, \cite{VianaYang} proved that for partially hyperbolic diffeomorphisms with one-dimensional center (1 being the dimension of the Oseledets subspace with Lyapunov exponent equal to zero), $C^{1 + \alpha}$-generically there exists at least one, and at most a finite number, of ergodic SRB measures whose basins of statistical attraction cover Lebesgue-a.e. point.
Hyperbolic diffeomorphisms are a particular case of the so-called diffeomorphisms with \em dominated splitting. \em \index{splitting! dominated} The definition, the statements and the proofs of topological dynamical properties of diffeomorphisms with dominated splitting can be found in \cite{PujalsSamba}. The problem of the existence of SRB measures for diffeomorphisms with dominated splitting, except in particular cases, is essentially open.
Another case in which the existence of SRB measures has been studied is that of the \index{diffeomorphisms! derived from Anosov}
so-called \em derived from Anosov \em diffeomorphisms. The proof of the same thesis as Theorem \ref{theoremPesinSinai} for transitive derived-from-Anosov diffeomorphisms of class $C^{1 + \alpha}$ with positive Lyapunov exponent, together with the uniqueness of their SRB measure, was given by M.F. Carvalho in \cite{Carvalho} (or also, explained in more detail, in \cite{CarvalhoTesis}).
There also exist other generalizations of Theorem \ref{theoremPesinSinai} for certain classes of diffeomorphisms that are neither uniformly nor partially hyperbolic, nor derived from Anosov, and that have no dominated splitting (for example in \cite{Hu}, \cite{HeberTesis} and \cite{CatEnr2001}). These classes of diffeomorphisms have $C^{r}$ regularity for sufficiently large $r \geq 2$ and consist of certain diffeomorphisms $f_1$ called \em almost-Anosov, \em \index{diffeomorphisms! almost-Anosov} which lie on the boundary of the Anosov diffeomorphisms in the space ${\mbox{Diff }^{r}(M)}$ for some sufficiently large $r > 1$. By definition, a diffeomorphism $f_1$ is almost-Anosov if $f_1$ is obtained through an isotopy (that is, through a continuous deformation $f_t \in \mbox{Diff }^r(M)$ for $t \in [0,1] \subset \mathbb{R}$) such that $f_t$ is transitive Anosov for all $0 \leq t < 1$. The isotopy of an almost-Anosov diffeomorphism must satisfy the following condition: the splitting $T_{x}M = S^t_{x } \oplus U^t_{x}$ is a uniformly hyperbolic splitting for $f_t$, for each fixed $t \in [0, 1)$ and for all $x \in M$. Moreover, by hypothesis, there exists an orbit $o(x_0)$ such that for every $x \not \in \overline{o (x_0)} $ there is a splitting $S_x^1 \oplus U_x^1$, invariant under $f_1$, obtained as the limit as $t \rightarrow 1$ of the hyperbolic splitting $S_x^{t} \oplus U_x^t$. Finally, the diffeomorphism $f_1$ either also possesses a (non-hyperbolic) splitting defined as $S^{1} \oplus U^{1}$, where $S^{1}= \lim_{t \rightarrow 1^-} S^t, \ U^{1} = \lim_{t \rightarrow 1^-} U^t $ (when these limits exist along the orbit of $x_0$ and are mutually transverse); or these limits exist but are tangent; or they do not exist.
The existence of a unique ergodic Gibbs measure that is SRB, in examples of the first case of $C^{2}$ almost-Anosov diffeomorphisms, is proved in \cite{Carvalho}, where $x_0$ is a fixed point of $f_t$ for all $0 \leq t \leq 1$, and the Lyapunov exponents along $U_{x_0}^1$ (for $t= 1$) are strictly positive. In contrast, in the almost-Anosov example, also of the first case, studied in \cite{HuYoung}, it is proved that no Gibbs probability measure exists if one takes as unstable foliation the one formed by the $C^1$ submanifolds tangent to $E^1_x$ at every point $x \in M$. In that example, the $C^2$ almost-Anosov diffeomorphism $f_1$ is constructed taking $x_0$ to be a fixed point, but such that the Lyapunov exponents along $U_{x_0}^1$ are zero. Nevertheless, even in this example, \cite{HuYoung} proves that there exists a unique ergodic SRB measure and that its basin of statistical attraction covers Lebesgue-almost every point.
In the second case of almost-Anosov diffeomorphisms, on the torus $\mathbb{T}^2$, \cite{HeberTesis} proves the existence of an ergodic Gibbs measure that is SRB, when $o(x_0)$ is a non-periodic orbit (more precisely, an orbit of heteroclinic tangency). Also in the second case (when $S_{x_0}^1$ and $U_{x_0}^1$ are tangent), and on $\mathbb{T}^2$, the existence and uniqueness of an ergodic SRB measure whose basin of attraction covers Lebesgue-a.e. point is proved in \cite{CatEnr2001}, when $x_0$ is a (non-hyperbolic) fixed point of $f_1$ of class $C^3$, and in addition one of the second partial derivatives of $f_1$ at $x_0$ vanishes. This case includes the examples of almost-Anosov diffeomorphisms on the two-dimensional torus introduced by Lewowicz in \cite{Lewowicz-Ejemplo}, in which the stable and unstable directions at a fixed point $x_0$ for $t < 1$ become tangent when the parameter $t$ of the isotopy reaches the value 1 (without knowing a priori whether or not the Lyapunov exponents at almost every point other than $x_0$ are different from zero).
The third case of almost-Anosov diffeomorphisms, in which the limits $S_{x_0}^1$ and $U_{x_0}^1$ do not exist, is studied in \cite{Hu} when $x_0$ is a fixed point, $f$ is of class $C^r$ for sufficiently large $r \geq 2$, and certain hypotheses on the derivatives of order greater than 1 at $x_0$ hold.
These partial results were obtained for very particular subclasses of almost-Anosov diffeomorphisms. The great difficulty of proving the existence of SRB measures, when one does not have a priori the hypothesis of a uniform constant separating the positive Lyapunov exponents from the non-positive ones, means that the general problem of the existence of SRB measures for almost-Anosov diffeomorphisms remains mostly open.
\vspace{.3cm}
We now restate Theorem \ref{theoremPesinSinai} in the particular case of an Anosov diffeomorphism
$f \in \mbox{Diff }^{1+ \alpha}(M)$. More precisely:
\begin{theorem}
{\bf (Sinai)} \label{TheoremSRBanosov} \index{measure! ergodic SRB} \index{measure! SRB} \index{measure! Gibbs} \index{theorem! ergodic SRB measure}
\index{theorem! Pesin-Sinai} \index{theorem! existence of! SRB measures} \index{theorem! existence of! Gibbs measures}
Let $f \in \mbox{Diff }^{1 + \alpha}(M)$ be an Anosov diffeomorphism. Then:
{\bf (a) } There exist ergodic SRB (or physical) measures.
{\bf (b) } Every SRB (or physical) measure is an ergodic Gibbs measure, and conversely.
{\bf (c) } The union of the basins of statistical attraction of the SRB measures covers Lebesgue-a.e. $x \in M$.
{\bf (d) } If moreover $f$ is topologically transitive, then there exists a unique SRB measure; it is ergodic and Gibbs, and its basin of statistical attraction covers Lebesgue-a.e. $x \in M$.
\end{theorem}
We will prove Theorem \ref{TheoremSRBanosov} later in this section, in Paragraph \ref{proofTheoremSRBanosov}.
\begin{corollary}
\label{corolarioAnosovTransitivomedidaLebesgue}
\label{corollarySRBanosov}
Let $f \in {\mbox{Diff }^{1 + \alpha}(M)}$ be a transitive Anosov diffeomorphism preserving the Lebesgue measure $m$. Then $m$ is ergodic, it is the unique SRB measure of $f$, and it is a Gibbs measure.
\end{corollary}
{\em Proof: }
By part (d) of Theorem \ref{TheoremSRBanosov}, Lebesgue-a.e. $x \in M$ satisfies
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)} = \mu,$$
where $\mu$ is the unique SRB measure of $f$, which is ergodic and Gibbs.
By Theorem \ref{theoremDescoErgodicaEspaciosMetricos}, since the Lebesgue measure $m$ is invariant, for $m$-a.e. $x \in M$
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)} = m_x,$$
where $m_x$ is the ergodic component of $m$ to which the point $x$ belongs.
When it exists, the limit of a sequence of probability measures in the weak$^*$ topology is unique (this is a property of every metric space). We conclude that $m_x = \mu$ for $m$-a.e. $x \in M$. In other words, the ergodic decomposition of $m$ has a single ergodic component, namely $\mu$. Hence $m = \mu$, as we wanted to prove.
\hfill $\Box$
\begin{remark} \em
\label{remarkSRBanosovDensidad} \index{measure! density of} \index{measure! SRB} \index{measure! Gibbs}
In the proof of Theorem \ref{TheoremSRBanosov} we will also prove the following results:
\em The unstable conditional measures $\mu_x^u$ of any SRB measure $\mu$ \em (for a $C^{1+ \alpha}$ Anosov diffeomorphism) \em are not only absolutely continuous with respect to the unstable Lebesgue measures $m_x^u$ \em (along the respective local unstable manifolds $W^u_{\delta}(x)$ for $\mu$-a.e. $x \in M$), \em but are moreover equivalent to them; that is: \em
$$\mu_x^u \ll m^u_x , \ \ m^u_x \ll \mu^u_x \ \ \ \mu\mbox{-a.e.} \ x \in M.$$ \em \index{density} \index{Radon-Nikodym derivative}
Moreover, the Radon-Nikodym derivative $d \mu^u_x/d m^u_x$ (that is, the density of the unstable conditional measures) is a continuous and positive function $h_x(y)$, given by:
\em $$\frac{d\mu_x^u}{dm_x^u}(y) = h_x(y) := \prod_{j= 0}^{+ \infty} \frac{J^u(f^j(x))}{J^u(f^j(y))} \in C_0(M, \mathbb{R}^+) \ \ \ \mu\mbox{-a.e.} \ x \in M,$$
where $J^u(x) := \big|\mbox{det} \big(df_x|_{E^u_x}\big)\big |$ is called the \em unstable Jacobian of $f$ at the point $x$. \index{unstable Jacobian}
\end{remark}
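A simple illustration of the density formula above (a classical linear example, included here only for the reader's convenience): let $f: \mathbb{T}^2 \mapsto \mathbb{T}^2$ be the Anosov diffeomorphism induced by the matrix $A = \left(\begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array}\right)$. The unstable bundle $E^u_x$ is the constant eigendirection of the eigenvalue $\lambda = (3 + \sqrt{5})/2 > 1$, so $J^u(x) = \lambda$ for every $x \in \mathbb{T}^2$. Hence
$$h_x(y) = \prod_{j= 0}^{+ \infty} \frac{J^u(f^j(x))}{J^u(f^j(y))} = \prod_{j= 0}^{+ \infty} \frac{\lambda}{\lambda} = 1,$$
and the unstable conditional measures of the SRB measure (which here is the Lebesgue measure $m$; cf. Corollary \ref{corolarioAnosovTransitivomedidaLebesgue}) are exactly the normalized Lebesgue measures along the local unstable segments.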
When we try to generalize Corollary \ref{corollarySRBanosov}
to non-uniformly hyperbolic diffeomorphisms preserving the Lebesgue measure, the method of proof we used for transitive Anosov diffeomorphisms does not work unless one knows a priori, or first proves, the existence of an SRB measure:
\begin{conjecture} \em
{\bf (Viana \cite{VianaSurvey})} \index{conjecture! Viana}
\em
Let $f$ be a diffeomorphism preserving the Lebesgue measure $m$. If $m$ is a hyperbolic measure \em (cf. Definition \ref{definitionMedidaHiperbolica}), \em then there exists some SRB measure.
\end{conjecture}
Proving Viana's conjecture, or finding a counterexample, is an open problem.
\vspace{.3cm}
\subsection{On Pesin's Entropy Formula}
As a consequence of Theorems \ref{theoremGibbs->SRB} and \ref{theoremSRB->Gibbs}, the search for SRB or physical measures, in the case of diffeomorphisms of class $C^1$ plus H\"{o}lder, reduces to the search for Gibbs measures, that is, invariant measures whose unstable conditional measures are absolutely continuous (see Definitions \ref{definitionMedidasCondicionasInestables} and \ref{definitionMedidaGibbs}). For this reason, the characterization of the invariant measures that are Gibbs measures acquires special relevance in the $C^{1+ \alpha}$ setting. One such characterization is given by the equality in the following Theorem \ref{theoremFormulaPesin}, called \em Pesin's Entropy Formula. \em
\vspace{.2cm}
{\bf Metric entropy} In order to state Pesin's Formula, \index{metric entropy} we briefly introduce the concept of \em metric entropy $h_{\mu}(f)$ \em of a continuous map $f: M \mapsto M$ with respect to an $f$-invariant probability measure $\mu$. The definition of metric entropy, and the study of the first properties that make it computable for Anosov diffeomorphisms, are due to Kolmogorov \cite{Kolmogorov} and Sinai \cite{Sinai-MetricEntropy}. The precise definition and its properties can also be found, for example, in \cite[\S 4.4]{Walters}, \cite[Chapter 4]{Mane},
\cite[pp. 168-170]{Katok-Hasselblatt}, \cite{Sinai-MetricEntropy2}, \cite[pp. 55-76]{SinaiBook1994}, \cite[\S 4.1-4.2]{Jost}, \cite[Chapter 3]{Keller}, or \cite{KingEnciclopedia}, among many other texts that treat the concept of metric entropy for dynamical systems mathematically.
To understand the statements that follow, assume that we have defined a non-negative real number $h_{\mu}(f) \geq 0$, called the \em metric entropy of $f$ \em with respect to the $f$-invariant probability measure $\mu$, which depends only on $f$ and $\mu$, and such that $h_{\mu}(f)$ is invariant under isomorphisms of measure spaces. The number $h_{\mu}(f)$, by the way it is defined, measures, with respect to the probability $\mu$, the maximal exponential rate at which \em the future iterates of $f$ scramble the pieces of any finite partition ${\mathcal P}$ of the space $M$, \em weighted by the probability $\mu$.
More precisely, consider the partition ${\mathcal P}_n := \bigvee_{j= 0}^{n-1} f^{-j}({\mathcal P})$, defined by the following condition: $x, y \in A \in {\mathcal P}_n$ if and only if for every $0 \leq j \leq n-1$ there exists $P_j \in {\mathcal P}$ such that $f^j(x), f^j(y) \in P_j$. Each distinct piece of this partition ${\mathcal P}_n$ is the set of points $x$ that accompany one another within the same piece of ${\mathcal P}$ when iterated towards the future. The larger the number of distinct pieces of the partition ${\mathcal P}_n$ as $n$ grows, the more the iterate $f^n$ scrambles the orbits in space, relative to the initial partition ${\mathcal P}$. \index{chaos}
One defines
$$h_{\mu}(f, {\mathcal P}) := \limsup_{n \rightarrow + \infty} \ - \frac{1}{n} \sum_{A \in {\mathcal P}_n} \mu(A) \log \mu(A).$$
The quantity $-\sum_{A \in {\mathcal P}_n} \mu(A) \log \mu(A)$ is a weighted average of logarithms, so it can be interpreted as a (weighted) exponent. Dividing it by $n$ then yields an exponential \em growth rate or coefficient \em in $n$. For this reason, this quotient is interpreted as the rate or speed of exponential growth in $n$ (up to time $n$) of the \lq\lq spatial disorder\rq\rq \ produced by the iterates of $f$, weighted by the probability measure $\mu$. Therefore, intuitively speaking, $h_{\mu}(f, {\mathcal P})$ is the asymptotic, $\mu$-weighted exponential rate at which the iterates of $f$ (more precisely, $\mu$-almost every orbit of $f$) scramble the initial partition ${\mathcal P}$ with which we were observing, as a reference, the distribution of initial points.
Finally, the metric entropy is defined as
$$h_{\mu}(f) : = \sup_{{\mathcal P} \in \mathbb{P}} h_{\mu}(f, {\mathcal P}),$$
where ${\mathbb{P}}$ denotes the set of all finite partitions of $M$ into measurable pieces. \index{metric entropy}
Thus $h_{\mu}(f) \geq 0$ can be interpreted as the maximal asymptotic exponential growth rate of the spatial disorder of $f^n$ as $n \rightarrow + \infty$, weighted by the probability $\mu$. If $h_{\mu}(f) > 0$, the system restricted to the support of $\mu$ is called \em chaotic \em in the measurable sense.
The larger $h_{\mu}(f)$ is, the faster the space becomes disordered under iteration of $f$.
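These definitions can be checked in a classical example (included here only as an illustration): let $\sigma$ be the shift on $\{0,1\}^{\mathbb{N}}$ with the Bernoulli product measure $\mu_p$ of weights $(p, 1-p)$, $0 < p < 1$, and let ${\mathcal P} = \{[0], [1]\}$ be the partition into the two cylinders of length 1. The partition ${\mathcal P}_n$ consists of the $2^n$ cylinders $[a_0 a_1 \ldots a_{n-1}]$, each of measure $\prod_{j=0}^{n-1} p_{a_j}$, and by the independence of the coordinates
$$- \sum_{A \in {\mathcal P}_n} \mu_p(A) \log \mu_p(A) = n \big( -p \log p - (1-p) \log (1-p) \big),$$
so $h_{\mu_p}(\sigma, {\mathcal P}) = -p \log p - (1-p) \log (1-p)$. Since ${\mathcal P}$ is a generating partition, this value is in fact the metric entropy $h_{\mu_p}(\sigma)$.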
Hence, an observer wishing to quantify chaos is interested, when they exist, in those invariant probability measures $\mu$ that maximize the metric entropy $h_{\mu}(f)$ relative to the expected value of a certain real function, called a \em potential \em (which, roughly speaking, quantifies the meaning of \lq\lq optimizing the observation of chaos\rq\rq). These measures maximizing the difference between the metric entropy and the expected value of a potential function are called \em equilibrium states. \em \index{equilibrium states} Their study constitutes the sub-theory, within the ergodic theory of dynamical systems, called the \em thermodynamic formalism \em (see for example \cite[Chapter 4]{Keller}).
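A classical instance of this formalism (stated here only as an illustration, anticipating the theorems below): for a uniformly hyperbolic attractor of $f \in \mbox{Diff }^{1 + \alpha}(M)$, the SRB measure is the unique equilibrium state of the geometric potential $\varphi(x) := - \log J^u(x)$, that is, it maximizes the functional $\mu \mapsto h_{\mu}(f) + \int \varphi \, d\mu$ over the invariant probability measures. Since in this case $\int \varphi \, d\mu = - \int \sum_i \chi_i^+ \, d\mu$, the Margulis-Ruelle Inequality (\ref{eqnDesigualdadDeRuelle}) says that this supremum is at most $0$, and the measures attaining it are precisely those satisfying Pesin's Formula (\ref{eqnformuladePesin}).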
\begin{theorem} \label{theoremFormulaPesin}
{\bf [Pesin] Pesin's Entropy Formula} \index{entropy! Pesin's formula} \index{Pesin's formula} \index{Pesin! formula of}
Let $f \in \mbox{Diff }^{1 + \alpha}(M)$ preserve a hyperbolic Gibbs measure $\mu$. Then \em \index{measure! Gibbs} \index{measure! hyperbolic}
\begin{equation} \label{eqnformuladePesin} h_{\mu}(f) = \int \ \sum_{i=1}^{\mbox{\ \footnotesize dim}(M)} \chi_i^+ (x) \, d \mu,\end{equation}
\em where $h_{\mu}(f)$ is the metric entropy of $f$ with respect to $\mu$, and $$ \chi_i^+(x) := \max\{0, \chi_i(x)\},$$ where \em $\{\chi_i\}_{1 \leq i \leq \mbox{\footnotesize dim}(M)}$ \em denote the Lyapunov exponents of $f$ at the point $x$, each repeated as many times as the dimension of the corresponding Oseledets space $E^i_x$. \index{Lyapunov exponents! positive} \em
Recall that, by Oseledets' Theorem, $\mu$-a.e. point is regular. Therefore the non-negative real function $\sum_{i=1}^{\mbox{\footnotesize dim}(M)} \chi_i^+ (x)$ is well defined and measurable. If none of the Lyapunov exponents $\chi_i(x)$ is positive, then $\sum_{i= 1}^{\mbox{\footnotesize dim}(M)} \chi^+_i(x) = 0$.
\end{theorem}
The proof of Pesin's Formula can be found in \cite{PesinLyapunovExponents}. A different proof, which dispenses with the absolute continuity property of the stable foliation, can be found in \cite{ManePesinFormula}. The proof can also be found, for example, in \cite[Theorem 5.4.5]{BarreiraPesin}. Finally, \cite{Tahzibi} generalizes Theorem \ref{theoremFormulaPesin} to $C^1$-generic diffeomorphisms.
If a measure $\mu$ satisfies Pesin's formula, then it optimally measures the measurable chaos of the system relative to the expected value of the sum of the positive Lyapunov exponents. Indeed, equality (\ref{eqnformuladePesin}) attains the maximum possible value of the metric entropy $h_{\mu}(f)$ relative to that expected value, since for every map of class $C^1$ the following upper bound for $h_{\mu}(f)$ holds: \index{chaos}
\begin{theorem}
{\bf Margulis-Ruelle Inequality \cite{Margulis-Inequality}, \cite{Ruelle_Inequality}} \index{Margulis-Ruelle inequality} \index{entropy! Margulis-Ruelle inequality}
\index{theorem! Margulis-Ruelle}
\index{Lyapunov exponents! positive}
Let $f : M \mapsto M$ be of class $C^1$ and let $\mu$ be an $f$-invariant probability measure. Then
\em
\begin{equation} \label{eqnDesigualdadDeRuelle} h_{\mu}(f) \leq \int \ \sum_{i=1}^{\mbox{\ \footnotesize dim}(M)} \chi_i^+ (x) \, d \mu,\end{equation}
\em where $h_{\mu}(f)$ is the metric entropy of $f$ with respect to $\mu$, and $$ \chi_i^+(x) := \max\{0, \chi_i(x)\},$$ where \em $\{\chi_i\}_{1 \leq i \leq \mbox{\footnotesize dim}(M)}$ \em are the Lyapunov exponents of $f$ at the point $x$, each repeated as many times as the dimension of the corresponding Oseledets space $E^i_x$. \em
\end{theorem}
The proof of the Margulis-Ruelle Inequality (\ref{eqnDesigualdadDeRuelle}) can be found in \cite{Ruelle_Inequality}, or also in \cite[Theorem 5.4.1]{BarreiraPesin}.
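As a concrete check of (\ref{eqnformuladePesin}) and (\ref{eqnDesigualdadDeRuelle}) in a classical example: for the linear Anosov diffeomorphism of $\mathbb{T}^2$ induced by $A = \left(\begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array}\right)$, which preserves the Lebesgue measure $m$, the Lyapunov exponents at every point are $\chi_1 = \log \lambda > 0$ and $\chi_2 = - \log \lambda$, with $\lambda = (3 + \sqrt{5})/2$. It is classical that $h_m(f) = \log \lambda$, so
$$h_m(f) = \log \lambda = \int \chi_1^+ \, dm:$$
the Margulis-Ruelle Inequality holds with equality and $m$ satisfies Pesin's Formula, consistently with $m$ being the Gibbs and SRB measure of Corollary \ref{corolarioAnosovTransitivomedidaLebesgue}.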
\vspace{.3cm}
Pesin's formula characterizes Gibbs measures, thanks to the following converse of Theorem \ref{theoremFormulaPesin}:
\begin{theorem}
\label{theoremLedrappier-Young} \index{teorema! Ledrappier-Young}
\index{entrop\'{\i}a} \index{medida! de Gibbs} \index{exponentes de Lyapunov! positivos} \index{continuidad absoluta! de medidas condicionales}
\index{medida! condicional inestable}
{\bf [Ledrappier-Young, \cite{Ledrappier-Young}]}
Let $f \in \mbox{Diff }^2(M)$ and let $\mu$ be an invariant measure such that \em $$\sum_{i=1}^{\mbox{\footnotesize dim}(M)} \chi_i^+(x) >0 \ \ \mu\mbox{-a.e. } x \in M.$$ \em If Pesin's Formula \em (\ref{eqnformuladePesin}) \em for the metric entropy $h_{\mu}(f)$ holds, then the measure $\mu$ is a Gibbs measure, that is, $\mu$ has absolutely continuous unstable conditional measures.
\end{theorem}
The proof of Theorem \ref{theoremLedrappier-Young} can be found in \cite{Ledrappier-Young}, and also in \cite{Ledrappier} under the additional hypothesis that $\mu$ is hyperbolic.
\begin{corollary} \index{atractor! topol\'{o}gico} \index{atractor! hiperb\'{o}lico} \index{hiperbolicidad! uniforme} \index{medida! SRB}
\index{medida! de Gibbs} \index{Pesin! f\'{o}rmula de} \index{f\'{o}rmula de Pesin} \index{entrop\'{\i}a! f\'{o}rmula de Pesin} \index{teorema! Ledrappier-Young}
Let $\Lambda$ be a uniformly hyperbolic topological attractor of $f \in {\mbox{Diff }^2(M)}$. Then the following assertions are equivalent:
{\bf (i)} $\mu$ is an ergodic SRB (or physical) measure.
{\bf (ii)} $\mu$ is an ergodic Gibbs measure.
{\bf (iii)} $\mu$ is an ergodic measure satisfying Pesin's entropy Formula \em (\ref{eqnformuladePesin})\em.
\end{corollary}
{\em Proof: }
It follows immediately by combining Theorems \ref{theoremPesinSinai}, \ref{theoremFormulaPesin} and \ref{theoremLedrappier-Young}.
\hfill $\Box$
\subsection{Proof of Theorem \ref{theoremGibbs->SRB}}
To prove Theorems \ref{theoremGibbs->SRB} and \ref{TheoremSRBanosov} we need to introduce some relevant results of the so-called \em Pesin Theory: \em this theory studies the behavior of $\mu$-almost every orbit, for $f \in {\mbox{Diff }^{1 + \alpha}(M)}$, where $\mu$ is an $f$-invariant and \em hyperbolic \em probability measure; that is, the Pesin region $\Sigma$ (i.e.\ the set of regular points whose Lyapunov exponents are all nonzero) has $\mu$-probability equal to 1.
\begin{definition}
{\bf Holonomy} \em \label{definitionHolonomia} \index{holonom\'{\i}a} \index{foliaci\'{o}n! invariante} \index{foliaci\'{o}n! holonom\'{\i}a de}
Let $\mu$ be a hyperbolic invariant probability measure of
$f \in \mbox{Diff }^{1 + \alpha}(M)$; that is, the Pesin region $\Sigma$ satisfies $\mu(\Sigma) = 1$. Take $x_0 \in \Sigma$ such that for every $\delta >0$ the ball $B_{\delta}(x_0) \subset M$ with center $x_0$ and radius $\delta >0$ satisfies $\mu(B_{\delta}(x_0))>0$. By Theorem \ref{theoremVarInvariantesRegionPesin}, for $\mu$-a.e. $x \in B_{\delta}(x_0)$ there exist the local stable and unstable manifolds through the point $x$, which we denote by $W^{s}_{\delta}(x), \ W^u_{\delta}(x) \subset B_{\delta}(x_0)$ (if necessary we take the connected component, containing the point $x$, of the intersection of the local stable or unstable manifold through $x$ with the ball $B_{\delta}(x_0)$).
We define the holonomy $$h_s: B^*_{\delta}(x_0) \mapsto W^u_{\delta}(x_0),$$ where $$B^*_{\delta}(x_0) = \{ y \in B_{\delta}(x_0) \cap \Sigma\colon \ \#(W^s_{\delta}(y) \cap W^u_{\delta}(x_0)) = 1\},$$ as the map that assigns to each point $y \in B^*_{\delta}(x_0)$ the unique point $h_s(y) \in W^s_{\delta}(y) \cap W^u_{\delta}(x_0)$.
By Theorem \ref{theoremVarInvariantesRegionPesin}, if $\delta>0$ is small enough, then $h_s$ exists, since the set $B^*_{\delta}(x_0)$ contains at least the local stable manifold $W^s_{\delta}(x_0)$, which intersects $W^u_{\delta}(x_0)$ transversally at the point $x_0$.
The map $h_s$ is called the \em holonomy along the local stable manifolds \em in the ball $B_{\delta}(x_0)$ onto the local unstable manifold of the point $x_0$, or briefly, the \em local stable holonomy. \em
We say that \em the stable holonomy \index{holonom\'{\i}a} \index{continuidad absoluta! de holonom\'{\i}a} \index{foliaci\'{o}n! absolutamente continua} \index{foliaci\'{o}n! invariante} \index{foliaci\'{o}n! holonom\'{\i}a de} \index{continuidad absoluta! de foliaci\'{o}n} $h_s$ is absolutely continuous \em if for $\mu$-a.e. $x_0 \in \Sigma$ and for every Borel set $A \subset B_{\delta}(x_0)$:
\begin{equation} \label{eqnholonomiaAC} m(h_s^{-1}(A)) = 0 \ \Leftrightarrow \ m^u(A \cap W_{\delta}^u(x_0)) = 0, \end{equation}
where $m$ is the Lebesgue measure on the manifold $M$ and $m^u$ is the Lebesgue measure along the local unstable submanifold $W_{\delta}^u(x_0)$.
The \em unstable holonomy \em and the \em absolute continuity of the unstable holonomy \em are defined analogously, interchanging the roles of the local stable and unstable manifolds in the preceding definition.
\end{definition}
\begin{theorem} \label{theoremTeoriaPesin}
{\bf [Pesin] \cite{Pesin76}}
{\bf (Fundamental theorem of Pesin Theory)} \index{teor\'{\i}a de Pesin} \index{Pesin! teor\'{\i}a de} \index{holonom\'{\i}a}
\index{foliaci\'{o}n! invariante}
\index{foliaci\'{o}n! holonom\'{\i}a de}
\index{continuidad absoluta! de holonom\'{\i}a}
\index{foliaci\'{o}n! absolutamente continua}
\index{continuidad absoluta! de foliaci\'{o}n}
Let \em $f \in \mbox{Diff }^{1 + \alpha}(M)$ \em and let $\mu$ be an ergodic hyperbolic measure. Then there exists a sufficiently small $\delta >0$ such that the stable holonomy and the unstable holonomy on the balls $B_{\delta}(x_0)$, for $\mu$-a.e. $x_0 \in M$, are absolutely continuous.
\end{theorem}
The proof of Theorem \ref{theoremTeoriaPesin} can be found in \cite{Pesin76}, and also, for example, in \cite[Theorem 4.3.1]{BarreiraPesin}.
Note that Theorem \ref{theoremTeoriaPesin} is false in the $C^1$ topology. Indeed, in \cite{Bowen_C1horseshoe} and \cite{RobinsonYoungContrajemploCAdeFoliacion} (and also in \cite{SchmittGora} for endomorphisms) examples are constructed of hyperbolic attractors for which the holonomy along the local stable manifolds is not absolutely continuous.
For this reason the proof of Theorem \ref{theoremGibbs->SRB} does not work for diffeomorphisms of class $C^1$ that are not $C^{1 + \alpha}$. In particular, the proof we will give of Theorem \ref{TheoremSRBanosov} does not work for diffeomorphisms of class $C^1$ that are not of class $C^{1 + \alpha}$. Later we will present some results using the so-called \lq\lq SRB-like\rq\rq\ measures (see Definition \ref{definitionMedidaSRBlike}), which are not necessarily Gibbs measures, but which exist for diffeomorphisms and endomorphisms with no more regularity than $C^1$, while retaining statistical attraction properties similar to those of SRB measures.
\vspace{.3cm}
We will now see that the proof of Theorem \ref{theoremGibbs->SRB}, which for diffeomorphisms of class $C^{1 + \alpha}$ relates ergodic hyperbolic Gibbs measures to SRB or physical measures, reduces to the fundamental theorem of Pesin Theory establishing the absolute continuity of the local stable holonomy.
\begin{nada} \em \label{proofTheoremGibbs->SRB}
{\bf Proof of Theorem \ref{theoremGibbs->SRB}}
\end{nada}
{\em Proof: }
By part (c) of Corollary \ref{corolarioRohlin1}, if $\mu$ is a Gibbs measure, then $\mu$-almost every ergodic component $\mu_x$ of $\mu$ is a Gibbs measure. Moreover, since $\mu$ is hyperbolic by hypothesis, $\mu(\Sigma)= 1$, where $\Sigma$ is the Pesin region. Hence, by the Ergodic Decomposition Theorem \ref{theoremDescoErgodicaEspaciosMetricos}, $\mu$-almost every ergodic component $\mu_x$ of $\mu$ satisfies $\mu_x(\Sigma) = 1$; that is, $\mu_x$ is hyperbolic and Gibbs, besides being ergodic. Take one such $\mu_x$ and rename it $\mu$.
To prove Theorem \ref{theoremGibbs->SRB}, applying Definition \ref{definitionMedidaSRB} of SRB or physical measure, we must prove that the basin $B$ of statistical attraction of $\mu$, defined by
$$B= \{x \in M: \ \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)} = \mu\}$$
has positive Lebesgue measure $m(B)$.
Since $\mu$ is ergodic, $\mu(B) = 1$. By the Rohlin Decomposition Theorem \ref{theoremRohlin} on a ball $B_{\delta}(x_0)$ with positive $\mu$-measure, we have
$$\mu^u(B \cap W^u_{\delta}(x_0)) = 1$$
for $\mu$-a.e. $x_0$, where $\mu^u$ is the unstable conditional measure of $\mu$.
Since $\mu$ is a Gibbs measure, by Definition \ref{definitionMedidaGibbs} we have $\mu^u \ll m^u$ for $\mu$-a.e. $x_0$, where $m^u$ is the Lebesgue measure along the manifold $W^u_{\delta}(x_0)$. We deduce
$$m^u(B \cap W^u_{\delta}(x_0))>0$$
Applying Theorem \ref{theoremTeoriaPesin} of Pesin Theory, we obtain:
$$m(h_s^{-1}(B \cap W^u_{\delta}(x_0))) >0.$$
Then, to finish proving that $\mu$ is an SRB measure, that is, to finish proving that $m(B) >0$, it now suffices to show that $h_s^{-1}(B \cap W^u_{\delta}(x_0)) \subset B$. Indeed, let $y \in h_s^{-1}(B \cap W^u_{\delta}(x_0))$; let us prove that $y \in B$. By the definition of the stable holonomy $h_s$, we have $ y \in B^*_{\delta}(x_0)$ and $h_s (y) = z := W^s_{\delta}(y) \cap W^u_{\delta}(x_0) \in B$. Then, since $z \in B \cap W^s_{\delta}(y)$, we obtain $$\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(z), f^n(y)) = 0, \ \ \ \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(z)} = \mu.$$
Applying the result proved in Exercise \ref{exercise4}, we get $$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(y)} = \mu,$$
that is, $y \in B$, as we wanted to prove.
\hfill $\Box$
\subsection{Bounded Distortion Lemma}
At the end of this chapter we will prove Theorem \ref{TheoremSRBanosov}, which establishes, in the $C^{1 + \alpha}$-Anosov setting, the equivalence between ergodic SRB measures and ergodic Gibbs measures, as well as their existence.
The proof of Theorem \ref{TheoremSRBanosov} is strongly based on the following Bounded Distortion Lemma \ref{lemmaDistoAcotada}. The proof of this lemma relies on the hypothesis $f \in \mbox{Diff }^{1 + \alpha}(M)$.
\begin{lemma} \index{lema de distorsi\'{o}n acotada} \index{distorsi\'{o}n acotada} \index{jacobiano inestable}
\label{lemmaDistoAcotada} {\bf Bounded Distortion Lemma}
Let $f \in {\mbox{Diff }^{1 + \alpha}}(M)$ be Anosov. Let $TM = E^u \oplus E^s$ be the uniformly hyperbolic splitting of the tangent bundle, and let $W^u_{\delta}(x)$ be the local unstable manifold through the point $x$, for a sufficiently small constant $\delta >0$. Denote by $$J^u(x) = \big|\det \, df_x|_{E^u_x} \big| >0$$ the unstable Jacobian at the point $x$. Let $x, y \in M $ be two points such that $y \in W_{\delta}^u(x)$ and consider, for every $n \geq 1$, the functions
$$h_n(x, y) := \frac{\prod_{j= 1}^{n} J^u(f^{-j}(x))}{\prod_{j= 1}^{n} J^u(f^{-j}(y))} \in (0, + \infty)$$
$$h(x,y) := \frac{\prod_{j= 1}^{+ \infty} J^u(f^{-j}(x))}{\prod_{j= 1}^{+ \infty} J^u(f^{-j}(y))} \in [0, + \infty],$$
defined only for the pairs $(x,y)$ of points in the set
$$ H_{\delta}:= \{(x,y) \in M \times M \mbox{ such that } y \in W^u_{\delta}(x)\}.$$
Then:
{\bf (i) } There exists a real constant $K >0$ such that
$$ \ \frac{1}{K} < h(x,y) < {K} \ \ \forall \ (x,y) \in H_{\delta} $$
{\bf (ii) } The function $h: H_{\delta} \rightarrow \mathbb{R}^+ $ is continuous.
{\bf (iii) } For every $\epsilon >0$ there exists $N \geq 1$ \em (independent of the pair of points $(x,y) \in H_{\delta}$) \em such that:
$$
e^{- \epsilon} < \frac{h_n(x,y)}{h(x,y)} < e^{\epsilon} \ \ \forall \ (x,y) \in H_{\delta}, \ \ \forall \ n \geq N.$$
\end{lemma}
{\em Proof: }
{\bf (i) } Define $\log :[0, + \infty] \mapsto [- \infty, + \infty]$ with the conventions $\log 0 = -\infty $ and $\log (+ \infty) = + \infty$. To prove assertion (i) it suffices to show that there exists a real constant $c >0$ such that $|\log h(x,y)| \leq c$. By the construction of the function $h: H_{\delta} \mapsto [0, + \infty] $ we have:
\begin{equation} \label{eqn-5}|\log h(x,y)| = \big|\sum_{j= 1}^{+ \infty} \big(\log J^u(f^{-j}(x)) - \log J^u(f^{-j}(y)) \big) \big| \leq \sum_{j= 1}^{+ \infty} \big|\log J^u(f^{-j}(x)) - \log J^u(f^{-j}(y)) \big|. \end{equation}
$J^u: M \mapsto \mathbb{R}^+$ is continuous (because $df_x$ is continuous, since $f$ is of class $C^1$, and $E^u_x$ depends continuously on $x$). Hence the function $J^u$ is uniformly bounded above and below by positive real constants. Then, by the mean value theorem of differential calculus applied to the real function $\log t$ of the positive real variable $t$, there exists a constant $c_1 >0$ such that $$|\log J^u (z_1) - \log J^u(z_2)| \leq c_1 |J^u(z_1) - J^u(z_2)|$$ for every pair $(z_1, z_2)$ of points of the manifold $M$.
Substituting into (\ref{eqn-5}) we obtain:
\begin{equation} \label{eqn-6}|\log h(x,y)| \leq c_1 \sum_{j= 1}^{+ \infty} \big| J^u(f^{-j}(x)) - J^u(f^{-j}(y)) \big|. \end{equation}
Since $f$ is of class $C^{1 + \alpha}$, the unstable manifolds are of class $C^{1 + \alpha}$ (cf. the last part of the statement of Theorem \ref{teoremavariedadesinvariantesAnosov}). Hence the tangent subspace $T_y W^u_{\delta}(x) = E^u_y$ depends $\alpha$-H\"{o}lder continuously on the point $y$ as $y$ varies along the submanifold $W^u_{\delta}(x)$. That is, for each $x$ there exists a constant $c_2(x) >0$ such that
\begin{equation}
\label{eqn33}
\mbox{dist}(E^u_y , E^u_z) \leq c_2(x) \mbox{dist}(y,z)^{\alpha} \ \ \forall \ \ y,z \in W_{\delta}^u(x).\end{equation} We claim that for fixed, sufficiently small $\delta >0$ there exists a uniform constant $c_2$ such that $$c_2(x) \leq c_2 \ \ \forall \ x \in M.$$ Indeed, fixing $x_1 \in M$ and a constant $C(x_1)> c_2(x_1)$, inequality (\ref{eqn33}) is satisfied for any point $\widehat x$ in a sufficiently small neighborhood of $x_1$, with $C(x_1)$ in place of $c_2(x)$. Then, covering the compact manifold $M$ by finitely many such neighborhoods, centered at points $x_1, \ldots, x_m$ respectively, and taking $c_2= \max_{i= 1}^{m} C(x_i)$, we deduce that inequality (\ref{eqn33}) is satisfied for every $x \in M$, with the constant $c_2$ in place of $c_2(x)$.
On the other hand, since $f$ is of class $C^{1 + \alpha}$, its derivative $df_x$ depends $\alpha$-H\"{o}lder continuously on the point $x$. That is, there exists a constant $c_3 >0$ such that
$$\|df_y - df_z\| \leq c_3 \mbox{dist}(y,z) ^{\alpha} \ \ \forall \ y,z \in M.$$ Besides, there exists a constant $c_4>0$ such that $$|\det A |\leq c_4\|A\|$$ for every linear map $A$ from a vector space of dimension equal to $\mbox{dim}(E^u)$ into another of the same dimension.
Combining the last four inequalities, which involve the constants $ c_2(x), c_2, c_3$ and $c_4$, we obtain
$$ |J^u(y) - J^u(z) | = \big| \, |\det(df_y|_{E^u_y}) | - |\det(df_z|_{E^u_z}) | \, \big| \leq $$ $$\big| \, |\det(df_y|_{E^u_y}) | - |\det(df_z|_{E^u_y}) | \, \big| + \big| \, |\det(df_z|_{E^u_y}) | - |\det(df_z|_{E^u_z}) | \, \big| \leq $$ $$ c_4 \|df_y - df_z\| + c_4 (\max_{w \in M} \|df_w\|) \, \mbox{dist} (E^u_y , E^u_z) \leq $$ $$\big(c_4 c_3 + c_4 (\max_{w \in M} \|df_w\|) c_2 \big) \, \cdot \, \mbox{dist}(y,z)^{\alpha} \ \ \forall \ y, z \in W^u_{\delta}(x), \ \ \forall \ x \in M.$$
Substituting into inequality (\ref{eqn-6}), we obtain a constant $c_5 >0$ such that
\begin{equation}
\label{eqn-7}
|\log h(x,y)| \leq c_5 \, \sum_{j= 1}^{+ \infty} \mbox{dist}(f^{-j}(x), f^{-j}(y))^{\alpha} \ \ \ \forall \ (x,y) \in H_{\delta}
\end{equation}
Note: To obtain inequality (\ref{eqn-7}) we observe that the inclusion $f^{-j}W^u_{\delta}(x) \subset W^u_{\delta}(f^{-j}(x))$ does not necessarily hold, and therefore, even though $(x,y) \in H_{\delta}$, the previous inequalities are not necessarily directly applicable to the pair of points $(f^{-j}(x), f^{-j}(y))$ for an arbitrary $j \geq 1$. However, for fixed, sufficiently small $\delta >0$, we apply part B) of Theorem \ref{teoremavariedadesinvariantesAnosov} and observe that the distances along the local unstable manifolds converge uniformly to zero under backward iteration, since the tangent vectors $u \in E^u_x$ are uniformly contracted in the past according to Definition \ref{definicionAnosov} of Anosov diffeomorphism. Hence there exists a uniform $N \geq 1$ such that $f^{-n}W_{\delta}^u(x) \subset W_{\delta}^u (f^{-n}(x))$ for every $n \geq N$ and every $x \in M$. Then it suffices to choose $0 <\delta'< \delta $ such that $f^{-j}W_{\delta'} ^u (x) \subset W_{\delta}^u (f^{-j}(x))$ for $ j \in \{0, \ldots, N-1\}$, so that the same inclusion also holds for every $j \geq 0$. Finally we rename $\delta'$ as $\delta$ to deduce inequality (\ref{eqn-7}).
By Definition \ref{definicionAnosov} we have:
\begin{equation}
\label{eqnDistanciasInestablesHiperbolicidad}
\mbox{dist}^u(f^{-j}(x), f^{-j}(y)) \leq C \sigma^{-j} \mbox{dist}^u(x,y),\end{equation}
where $C >0$ and $\sigma > 1$ are the constants given in Definition \ref{definicionAnosov}, and $\mbox{dist}^u(x,y)$ is the distance between the points $x$ and $y$ along the local unstable manifold $W^u_{\delta}(x)$. Moreover, the distance $\mbox{dist}$ in the ambient manifold $M$ between two points lying on the same local unstable manifold is always the infimum of the lengths of the curves joining those points (with the given Riemannian metric on $M$); it is therefore less than or equal to the distance $\mbox{dist}^u$ along the local unstable manifold. Substituting into inequality (\ref{eqn-7}) yields:
\begin{equation}
\label{eqn-8bb}
|\log h(x,y)| \leq c_5 \, C^{\alpha} \, \sum_{j= 1}^{+ \infty} \sigma^{-\alpha j} \, (\mbox{dist}^u(x, y))^{\alpha} \leq c_6 \, (\mbox{dist}^u(x,y))^{\alpha} \ \ \ \forall \ (x,y) \in H_{\delta},
\end{equation}
where $c_6 = c_5 \, C^{\alpha}/(1- \sigma^{-\alpha})$ (note that $0 <\sigma^{-\alpha} < 1$ because $\sigma > 1$ and $\alpha >0$).
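The constant $c_6$ comes from summing the geometric series $\sum_{j \geq 1} \sigma^{-\alpha j}$. The following Python sketch (with illustrative values of $\sigma$ and $\alpha$, not taken from the text) checks the closed form of that series and the bound used above:

```python
import math

# Geometric series behind c_6: for sigma > 1 and alpha > 0, the ratio
# r = sigma^(-alpha) lies in (0, 1), so sum_{j>=1} r^j = r / (1 - r),
# which is in turn bounded by 1 / (1 - r), the factor appearing in c_6.
sigma, alpha = 1.7, 0.5          # illustrative values only
r = sigma ** (-alpha)
assert 0.0 < r < 1.0
partial = sum(r ** j for j in range(1, 200))  # truncated series
closed = r / (1.0 - r)                        # closed form of the full sum
assert abs(partial - closed) < 1e-9
print(closed <= 1.0 / (1.0 - r))              # bound used for c_6
```
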
Since $W^u_{\delta}(x)$ is a $C^1$-embedded manifold in $M$ depending continuously on $x$, the quantity $$\mbox{Diam}(W^u_{\delta}(x)) := \sup_{y \in W^u_{\delta}(x)} \mbox{dist}^u(x,y) \in \mathbb{R}^+$$ exists and is a continuous real function of $x \in M$. Hence it is bounded above by a uniform constant $c_7 >0$. Substituting into (\ref{eqn-8bb}), we conclude:
\begin{equation}
\label{eqn-8}
|\log h(x,y)| \leq c_6 \, (\mbox{dist}^u(x,y))^{\alpha} \leq c_6 \, c_7^{\alpha} = c\ \ \ \forall \ (x,y) \in H_{\delta},
\end{equation}
finishing the proof of part (i) of Lemma \ref{lemmaDistoAcotada}.
\vspace{.2cm}
{\bf (iii) } Fix $n \geq 1$. By the definition of the functions $h$ and $h_n$ we have
$$\frac{h_n(x,y)}{h(x,y)} = \frac{\prod_{j= n +1}^{+ \infty} J^u(f^{-j}(y))}{\prod_{j= n+1}^{+ \infty} J^u(f^{-j}(x))}.$$
Then, applying formula (\ref{eqn-8bb}) to the points $f^{-n-1}(x)$ and $f^{-n-1}(y)$, instead of $x$ and $y$ respectively, we obtain
$$\Big|\log \Big(\frac{h_n(x,y)}{h(x,y)} \Big) \Big| \leq c_6 \big(\mbox{dist}^u(f^{-n-1}(x), f^{-n-1}(y))\big)^{\alpha}.$$
Using now inequality (\ref{eqnDistanciasInestablesHiperbolicidad}), we deduce:
$$\Big|\log \Big(\frac{h_n(x,y)}{h(x,y)} \Big) \Big| \leq c_6 C^{\alpha} \sigma^{-(n+1) \alpha} \big(\mbox{dist}^u(x , y)\big)^{\alpha} \leq c_6 \, C^{\alpha} \, c_7^{\alpha} \, \sigma^{-(n+1) \alpha}.$$
Since $\sigma >1$ and $\alpha >0$, the right-hand side tends to zero as $n \rightarrow + \infty$. Hence there exists $N \geq 1$, independent of $(x,y)$, such that
$$\Big|\log \, \frac{h_n(x,y)}{h(x,y)} \Big| < \epsilon \ \ \forall \ n \geq N,$$
finishing the proof of assertion (iii) of Lemma \ref{lemmaDistoAcotada}.
\vspace{.2cm}
{\bf (ii) } By assertion (iii), the sequence of functions $h_n(x,y)$ converges uniformly to $h(x,y)$ for all $(x,y) \in H_{\delta}$. Moreover, $h_n(x,y)$ is continuous on $H_{\delta}$, because the unstable Jacobian $J^u(x) = \big|\det df_x|_{E^u_x}\big|$ is a continuous function of $x$, since $df_x$ is continuous and so is $E_x^u$. The uniform limit of a sequence of continuous functions is continuous. We conclude that $h$ is a continuous function,
finishing the proof of Lemma \ref{lemmaDistoAcotada}.
\hfill $\Box$
\subsection{Proof of Theorem \ref{TheoremSRBanosov}}
In the proof of Theorem \ref{TheoremSRBanosov} we will use, besides the Bounded Distortion Lemma, the following result, valid for any Anosov diffeomorphism (not necessarily of class $C^{1 + \alpha}$), and also generalizable (suitably reformulating the statement) to local neighborhoods of uniformly hyperbolic topological attractors.
\begin{theorem}
\label{theoremProductoLocal} {\bf Local product structure} \index{producto local} \index{teorema! de producto local} \index{estructura de producto local}
Let $f \in \mbox{Diff }^1(M)$ be Anosov. There exist sufficiently small constants $ 0 < \delta' < \delta $ such that $f$ has local product structure on neighborhoods of radius $\delta'$. \em
Local product structure means, by definition, that for every pair of points $(x,y) $ with $\mbox{dist}(x,y) < \delta'$, the points $[x,y]$ and $[y,x]$ defined below exist, are unique, and depend continuously on $(x,y)$:
$$[x,y] = W^u_{ \delta}(x) \cap W^s_{ \delta}(y), \ \ [y,x] = W^u_{ \delta}(y) \cap W^s_{ \delta}(x),$$
where $W_{ \delta}^{u,s}(x)$ denotes the connected component of $W^{u,s}(x) \cap B_{ \delta}(x)$ containing the point $x$.
\end{theorem}
A proof of Theorem \ref{theoremProductoLocal} can be found in \cite[Theorem 3.12]{Bowen} or in
\cite[Proposition 6.4.21]{Katok-Hasselblatt}. Theorem \ref{theoremProductoLocal} is a consequence of the so-called \lq\lq shadowing lemma\rq\rq.
\vspace{.3cm}
We are now in a position to prove Theorem \ref{TheoremSRBanosov}, which establishes the existence, and the mutual equivalence, of ergodic Gibbs measures and SRB measures for Anosov diffeomorphisms of class $C^{1 + \alpha}$.
\begin{nada} \em
\label{proofTheoremSRBanosov} {\bf Proof of Theorem \ref{TheoremSRBanosov}}
\end{nada}
This proof consists of three parts:
\vspace{.3cm}
\index{medida! de Gibbs} \index{medida! condicional inestable} \index{derivada de Radon-Nykodim} \index{densidad} \index{medida! densidad de} \index{medida! equivalencia de}
{\bf Claim I:} \em There exist Gibbs probability measures $\mu$ such that:
$\bullet$ The unstable conditional probabilities $\mu^u $ are equivalent to the Lebesgue measures $m^u$ along the respective local unstable manifolds $W^u_{\delta}(x)$ for $\mu$-a.e. $x \in M$. \em
(Note: the equivalence between $\mu^u$ and $m^u$ is defined as $\mu^u \ll m^u$ and $m^u \ll \mu^u$.) \em
$\bullet$ The Radon-Nikodym derivative $d\mu^u/dm^u$ \em (the density of the unstable conditional measures $\mu^u$) \em is a continuous and strictly positive real function. \em
\vspace{.3cm}
Once Claim I is proved, as a consequence, by the Ergodic Decomposition Theorem \ref{theoremDescoErgodicaEspaciosMetricos} and by parts (b) and (c) of Corollary \ref{corolarioRohlin1} of Rohlin's Theorem, we conclude that there exist ergodic Gibbs probability measures $\mu$ such that $\mu^u $ is equivalent to $m^u$ for $\mu$-a.e. point. Applying then Theorem \ref{theoremGibbs->SRB}, already proved, every ergodic Gibbs probability measure is SRB. We deduce that ergodic SRB measures exist; that is, we conclude part (a) of Theorem \ref{TheoremSRBanosov}. Moreover, Claim I implies that there exist ergodic SRB measures, which are Gibbs, whose unstable conditional probabilities are equivalent to the Lebesgue measure along the respective unstable manifolds for $\mu$-a.e. point, and whose densities are continuous and strictly positive real functions.
\vspace{.3cm}
{\bf Claim II:} \em Let $\mu$ be an ergodic Gibbs measure such that the Radon-Nikodym derivative \em (the density $d\mu^u/dm^u$ of the unstable conditional measures $\mu^u$) \em is a continuous function \em (not necessarily strictly positive). \em Then the basin of statistical attraction of $\mu$ covers Lebesgue-almost every point of an open neighborhood of its support. \em
\index{medida! de Gibbs erg\'{o}dica}
\index{derivada de Radon-Nykodim} \index{densidad}
\index{medida! densidad de}
\index{continuidad! de la funci\'{o}n densidad}
\index{cuenca de atracci\'{o}n! estad\'{\i}stica}
\vspace{.3cm}
From this property, once proved, we deduce that two different ergodic Gibbs measures whose unstable densities are continuous (in particular, two ergodic Gibbs measures satisfying Claim I) must have disjoint supports at positive distance. Indeed, by Definition \ref{DefinicionCuencaDeAtraccionEstadistica}, the basins of statistical attraction of different measures must be disjoint. By Claim II, the supports of those two measures are mutually isolated. From this we deduce (using that $M$ is a compact metric space and therefore has a countable basis of open sets) that the ergodic Gibbs measures satisfying Claim I (and hence also Claim II) are at most countably many.
\vspace{.3cm}
{\bf Claim III:} \em Lebesgue-almost every point of the manifold $M$ belongs to the union of the basins of statistical attraction of the ergodic Gibbs measures whose unstable conditional measures satisfy the two properties of Claim \em I.
\index{cuenca de atracci\'{o}n! estad\'{\i}stica}
\index{medida! de Gibbs erg\'{o}dica}
\index{medida! condicional inestable}
\vspace{.3cm}
Suppose Claim III is proved. Taking in particular any SRB or physical measure $\nu$, with basin of statistical attraction $A$, we deduce that $A$ is contained in the union of the (pairwise disjoint) basins of statistical attraction of a finite or countably infinite collection of ergodic Gibbs measures satisfying Claim I. Since the basins of statistical attraction of different measures are disjoint, we deduce that every SRB or physical measure is ergodic Gibbs and satisfies Claim I, hence also Claim II. This, together with Theorem \ref{theoremGibbs->SRB}, proves part (b) of Theorem \ref{TheoremSRBanosov}.
Moreover, Claim III immediately implies part (c) of Theorem \ref{TheoremSRBanosov}; furthermore, every SRB measure is not only ergodic Gibbs but also satisfies Claims I and II.
Suppose now, in addition, that $f$ is topologically transitive. If there were two different (ergodic) SRB measures $\mu$ and $\nu$, then their basins of statistical attraction $B(\mu)$ and $B(\nu)$ would be disjoint. Since $\mu$ and $\nu$ satisfy Claim II, there exist disjoint open sets $U$ and $V$ such that Lebesgue-a.e. point of $U$ belongs to $B(\mu)$ and Lebesgue-a.e. point of $V$ belongs to $B(\nu)$. Since $B(\mu)$ is $f$-invariant, $m( f^n(U) \setminus B(\mu))= 0$ for every $n \geq 1$. Since $f$ is topologically transitive, there exists $n \geq 1$ such that the open set $f^n(U) \cap V \neq \emptyset$. Every open set (in particular $f^n(U) \cap V$) has positive Lebesgue measure, and Lebesgue-a.e. point of $f^n(U) \cap V$ belongs to $ B(\mu)$ (because it belongs to $f^n(U)$) and also to $B(\nu)$ (because it belongs to $V$). Hence the basins of statistical attraction $B(\mu)$ and $B(\nu)$ are not disjoint, whence $\mu = \nu$. This proves the uniqueness of the SRB measure, that is, part (d) of Theorem \ref{TheoremSRBanosov}.
To summarize: to prove Theorem \ref{TheoremSRBanosov} completely, it suffices to prove Claims I, II and III.
\vspace{.3cm}
{\bf First part of the proof of Theorem \ref{TheoremSRBanosov}}
(Proof of Claim I of \S\ref{proofTheoremSRBanosov})
{\em Proof: }
\vspace{.2cm}
{\bf Step 1: Construction of a candidate Gibbs measure}
Choose any local unstable manifold $W_0$ and consider the Lebesgue measure $m^u_0$ along $W_0$. We have $m_0^u(W_0) >0$. We define the following probability measure $\mu_0$ on the ambient manifold $M$:
$$\mu_0(A) := \frac{m^u_0(A \cap W_0)}{m^u_0(W_0)} \ \ \ \forall \ A \in {\mathcal A},$$
where ${\mathcal A}$ is the Borel sigma-algebra.
For every $n \geq 1$ we define the following probability measures $\nu_n$ and $\mu_n$:
\begin{equation} \label{eqn nu_n-mu_n}\nu_n (A):= \mu_0 (f^{-n}(A)), \ \ \ \mu_n(A) := \frac{1}{n} \sum_{j= 0}^{n-1} \nu_j(A) \ \ \ \forall \ A \in {\mathcal A}.\end{equation}
Finally, we take a subsequence $\{\mu_{n_i}\}_{i \in \mathbb{N}}$ converging to $\mu$ in the space ${\mathcal M}$ of probability measures on $(M, \mathcal A)$ with the weak$^*$ topology. We will prove that the measure $\mu$ satisfies Claim I.
\vspace{.2cm}
{\bf Step 2: Rohlin decomposition of the measures $\nu_n$. } \index{descomposici\'{o}n de Rohlin}
Consider a point $x_0$ in the support of $\mu$ such that there exists a ball $B= B_{\delta}(x_0) \subset M$ of sufficiently small radius $\delta >0$ with $\mu(B) >0$. Then, by the definition of the weak$^*$ topology (taking for example a nonnegative continuous function that equals 1 at $x_0$ and is supported in $B$), there exists $N \geq 1$ such that $\mu_N(B) >0$, whence we deduce that for every $n \geq N$ there exists $j \in \{0, \ldots, n-1\}$ such that $\nu_j(B) >0$, and therefore \begin{equation} \label{eqn-4} \mu_n(B) >0 \ \ \forall \ n \geq N.\end{equation} This implies, by the definition of the measure $\nu_j$, that $f^j(W_0) \cap B \neq \emptyset$.
Fix $n \geq 1$ such that
$$f^n(W_0) \cap B \neq \emptyset.$$
By construction, for every Borel set $A \subset M$:
$$\nu_n(A \cap B) = \frac{1}{m_0^u(W_0)} \int \chi_{f^{-n}(A)}(y) \chi_{W_0 \cap f^{-n}(B)}(y) \, d m_0^u(y).$$
Making the change of variable $x= f^n(y)$ and denoting by $J^u(y)$ the unstable Jacobian at the point $y$, defined by
$$J^u (y) := \big|\det df_y|_{E^u(y)} \big|,$$ where $E_y^u = T_yW^u(y)$, we obtain:
$$\nu_n(A \cap B) = \frac{1}{m_0^u(W_0)} \int \chi_{A}(x) \chi_{f^n(W_0) \cap B}\,(x) \, \big |\det d(f^{-n})|_{E^u_x}(x)\big|\, \, d m^u(x),$$
where $m^u(x)$ denotes the Lebesgue measure along the local unstable manifold $W^u_{\delta}(x)$ through the point $x$.
We have $$\big |\det d(f^{-n})|_{E^u_x}(x)\big| = \frac{1}{\big |\det d(f^{n})|_{E^u_{y}}(y)\big|} = \frac{1}{\prod_{i= 0}^{n-1} J^u(f^i(y)) } = \frac{1}{\prod_{h= 1}^{n} J^u(f^{-h}(x)) },$$
whence:
$$\nu_n(A \cap B) = \frac{1}{m_0^u(W_0)} \int \chi_{A}(x) \chi_{f^n(W_0) \cap B}\,(x) \,\frac{1}{\prod_{h= 1}^{n} J^u(f^{-h}(x)) } \, d m^u(x).$$
We denote by $k_n$ the number of connected components of $f^n(W_0) \cap B$ (by convention, $k_n = 0$ if the set is empty, and $\sum_{i= 1} ^0 \cdot = 0$). For each fixed $n \geq 1$, if $k_n \geq 1$ we denote by $\{W_{i, n}\}_{1 \leq i \leq k_n}$ the set of connected components of $f^n(W_0) \cap B$. Note that for every $x \in W_{i,n}$ the measure $m^u(x)$ is the Lebesgue measure along the submanifold $W_{i,n}$. Then:
\begin{equation}\label{eqn-3}\nu_n(A \cap B) = \frac{1}{m_0^u(W_0)} \sum_{i= 1}^{k_n}\int_{W_{i,n}} \chi_{A}(x) \,\frac{1}{\prod_{h= 1}^{n} J^u(f^{-h}(x)) } \, d m^u(x).\end{equation}
Fix $n \geq 1$ such that $k_n \geq 1$, that is, $f^n(W_0) \cap B \neq \emptyset$. Then, by the construction of the measure $\nu_n$ through equality (\ref{eqn nu_n-mu_n}), we have $\nu_n(B) >0$. Take any submanifold $V$, embedded in $B_{\delta}$, of dimension complementary to the unstable one, intersecting each of the local unstable manifolds $W^u_{\delta}(z)$ (for every $z \in B$) in exactly one point. By Theorem \ref{theoremProductoLocal} on local product structure, such a topologically transversal manifold $V$ exists, since it can be taken equal to a local stable manifold.
For each $i \in \{1, \ldots, k_n\}$ consider the point $x_{i,n} = W_{i,n} \cap V$. For each $x \in \bigcup_{i= 1}^{k_n} W_{i, n}$ we construct the following real \em positive \em value $h_n(x)$ of what we will call the function $h_n$:
\begin{equation} \label{eqnhsubn} h_{n}(x) := \frac{\prod_{h= 1}^{n} J^u(f^{-h}(x_{i,n}))}{\prod_{h= 1}^{n} J^u(f^{-h}(x))},\end{equation}
where $i \in \{1, \ldots, k_n\}$ is the unique index such that $x \in W_{i,n}$.
We consider the partition ${\mathcal P}_n$ of the ball $B$ whose pieces are $\{W_{i,n}\}_{1 \leq i \leq k_n}$ together with the complement in $B$ of $\cup_{i= 1}^{k_n} W_{i,n}$. We denote by
$\rho_n$ the following (finite) measure on the quotient measurable space formed by the pieces of the partition ${\mathcal P}_n$:
\begin{equation} \label{eqnrhosubn}\rho_n = \frac{1}{m_0^u(W_0)} \sum_{i= 1}^{k_n} \frac{\int_{W_{i,n}} h_n \, d m^u} {\prod_{h= 1}^{n} J^u \circ f^{-h} (x_{i,n})} \, \delta_{x_{i,n}}, \end{equation} where $\delta_{x_{i,n}}$ denotes the Dirac delta supported on the piece $W_{i,n}$ (represented by the point $x_{i,n}$). Substituting (\ref{eqnhsubn}) and (\ref{eqnrhosubn}) into equality (\ref{eqn-3}), we obtain, for every Borel set $A$, the following decomposition of the probability measure $\nu_n$ restricted to the ball $B$:
\begin{equation}
\label{eqnDescoRohlin nu_n}
\frac{\nu_n(A \cap B)}{\nu_n(B)} = \frac{1}{\nu_n(B)} \int d \rho_n \int_{W_{i,n}} \chi_{A}(x) \,\frac{h_n(x)}{\int_{W_{i,n}} h_n \, d m^u} \, d m^u(x). \end{equation}
The above equality gives the Rohlin decomposition, with respect to the partition ${\mathcal P}_n$, of the probability measure $\nu_n/\nu_n(B)$ on the ball $B$. Indeed, the conditional probability measure $\nu^u_{n}$ along the pieces of the partition (i.e. along the local unstable manifolds $W_{i,n}$) is given by the right-hand integral in equality (\ref{eqnDescoRohlin nu_n}). This conditional measure $\nu^u_n$ satisfies $$d\nu^u_{n} = \frac{h_n}{\int_{W_{i,n}} h_n \, dm^u } \, d m^u,$$ and therefore
$$\nu^u_{n}\ll m^u.$$ We then say that the unstable conditional measures of $\nu_n$ are absolutely continuous (even though $\nu_n$ is not necessarily $f$-invariant). The left-hand integral in (\ref{eqnDescoRohlin nu_n}) gives the probability measure $\widehat \nu_n$ on the quotient space of the ball $B$ with respect to the partition ${\mathcal P}_n$. This quotient measure $\widehat \nu_n$ is
$$\widehat \nu_n = \frac{1}{\nu_n(B)}\, {\rho_n}. $$
The goal of the following steps of the proof is to use the Rohlin decomposition of the measures $\nu_j$ given by equality (\ref{eqnDescoRohlin nu_n}) for every $j \in \{0, \ldots, n-1\}$, in order to find the Rohlin decomposition of the measure $\mu_n = \frac{1}{n}\sum_{j= 0}^{n-1}\nu_j$. Then, passing to the limit along a subsequence $\{n_i\}_{i \in \mathbb{N}}$, we will find the Rohlin decomposition of the measure $\mu$ constructed in Step 1, to prove that it is a Gibbs measure and satisfies Assertion I. The difficult point in this argument is to rigorously justify the passage to the limit of the averages of the measures $\widehat \nu_n$ and of the conditional measures $\nu_n^u$. To do this, we need to $\epsilon$-approximate the Rohlin decomposition given by equality (\ref{eqnDescoRohlin nu_n}), for every arbitrarily small $\epsilon >0$
and for all sufficiently large $n$.
\vspace{.2cm}
{\bf Step 3: $\epsilon$-approximation of the Rohlin decomposition of $\nu_n$} \index{descomposici\'{o}n de Rohlin}
Fix, as in Step 2, $n \geq 1$ such that $f^n(W_0) \cap B \neq \emptyset$. Recall that $\{W_{i,n}\}_{1 \leq i \leq k_n}$ denotes the connected components of $f^n(W_0) \cap B$, which are therefore local unstable manifolds contained in the ball $B$. Recall also that, to obtain the Rohlin decomposition of $\nu_n$ given by equality (\ref{eqnDescoRohlin nu_n}), we chose and fixed one and only one point $x_{i,n} \in W_{i,n}$. We now define, for every $x \in \bigcup_{i= 1}^{k_n} W_{i,n}$, the following value $h(x) \in [0, + \infty]$ of what we will call the function $h$ restricted to $\bigcup_{i= 1}^{k_n} W_{i,n}$:
\begin{equation} \label{eqnh } h (x) := \frac{\prod_{h= 1}^{+ \infty} J^u(f^{-h}(x_{i,n}))}{\prod_{h= 1}^{+ \infty} J^u(f^{-h}(x))},\end{equation}
where $i$ is the unique index in $\{1, \ldots, k_n\}$ such that $x \in W_{i,n}$.
We now apply the Bounded Distortion Lemma \ref{lemmaDistoAcotada}, which states that:
There exists a real constant $K >0$ such that \begin{equation} \label{eqn(i)} \ \frac{1}{K} < h(x) < K \ \ \forall \ x \in H:= \bigcup_{n = 1}^{+ \infty} \bigcup_{i= 1}^{k_n} W_{i,n};\end{equation}
\begin{equation}
\label{eqn(ii)} \ h: H \rightarrow \mathbb{R}^+ \mbox{ is continuous; } \end{equation}
and for every $\epsilon >0$ there exists $N \geq 1$ (independent of $x \in H$) such that:
\begin{equation}
\label{eqn(iii)}
e^{- \epsilon} < \frac{h_n(x)}{h(x)} < e^{\epsilon} \ \ \forall \ x \in \bigcup_{i=1}^{k_n} W_{i,n} \ \ \forall \ n \geq N.\end{equation}
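As a numerical illustration of the Bounded Distortion Lemma (this is a toy sketch, not part of the proof: we use a one-dimensional smooth expanding circle map in place of the unstable dynamics of the Anosov diffeomorphism, and all names below are illustrative), one can check that the log-ratio of the backward Jacobian products $\log h_n$ stays uniformly bounded and converges as $n \to +\infty$:

```python
import math

# Toy analogue: smooth expanding circle map f(x) = 2x + a*sin(2*pi*x) (mod 1),
# playing the role of the unstable dynamics; df plays the role of J^u.
A = 0.1

def f(x):
    return (2 * x + A * math.sin(2 * math.pi * x)) % 1.0

def df(x):
    return 2 + 2 * math.pi * A * math.cos(2 * math.pi * x)  # always > 1.3

def preimage(x, branch):
    # Inverse branch by bisection: branch 0 maps [0, 1/2] onto [0, 1],
    # branch 1 maps [1/2, 1] onto [0, 1]; the lift satisfies f(y) = x + branch.
    lo, hi = (0.0, 0.5) if branch == 0 else (0.5, 1.0)
    target = x + branch
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 2 * mid + A * math.sin(2 * math.pi * mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def log_jacobian_product(x, branches):
    # log of prod_{h=1}^{n} df(f^{-h}(x)) along the chosen inverse branches
    total, y = 0.0, x
    for b in branches:
        y = preimage(y, b)
        total += math.log(df(y))
    return total

# Same branch choices for both points: their preimages approach each other
# exponentially fast, so the log-ratio log(h_n) converges (bounded distortion).
branches = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0] * 2
x1, x2 = 0.30, 0.31
ratios = [log_jacobian_product(x1, branches[:n]) - log_jacobian_product(x2, branches[:n])
          for n in range(1, len(branches) + 1)]
print(ratios[-1], abs(ratios[-1] - ratios[-2]))
```

The successive increments of the log-ratio are controlled by the Hölder continuity of $\log J^u$ and the exponential contraction of backward orbits, exactly as in the lemma.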
Applying assertion (\ref{eqn(i)}) of the Bounded Distortion Lemma, we construct, for each fixed $n \geq 1$ such that $f^n(W_0) \cap B \neq \emptyset$, the following finite measure $I_n$:
\begin{equation} \label{eqnDescoRohlinI_n} I_n(A) := \int d \rho_n \int_{W_{i,n}} \chi_{A}(x) \,\frac{h (x)}{\int_{W_{i,n}} h \, d m^u} \, d m^u(x), \ \ \forall \ A \in {\mathcal{A}} \end{equation}
where $\rho_n$ is the measure defined in equality (\ref{eqnrhosubn}).
Comparing equalities (\ref{eqnDescoRohlin nu_n}) and (\ref{eqnDescoRohlinI_n}), we obtain, for every Borel set $A$, the following equality:
$$ \nu_n (A \cap B) = \int d \rho_n \int_{W_{i,n}} \chi_A \,\frac{h_n }{\int_{W_{i,n}} h_n \, d m^u} \, d m^u = $$ $$=\int d \rho_n \int_{W_{i,n}} \chi_A \,\frac{h_n}{h}\frac{h }{\int_{W_{i,n}} h (h_n/h) \, d m^u} \, d m^u $$
Using property (\ref{eqn(iii)}) of the Bounded Distortion Lemma, we deduce, for every $n \geq N$ such that $f^n(W_0) \cap B \neq \emptyset$ and for every Borel set $A$:
$$e^{-2\epsilon} I_n(A) \leq {\nu_n(A \cap B)} \leq e^{2\epsilon} I_n(A).$$
(The factor $e^{2\epsilon}$ appears because the quotient $h_n/h$ is bounded between $e^{-\epsilon}$ and $e^{\epsilon}$ twice: once in the numerator and once in the normalizing integral $\int_{W_{i,n}} h \, (h_n/h) \, dm^u$.) In conclusion:
\begin{equation}
\label{eqnDescoRohlinAproxnu_n}
e^{-2 \epsilon} \int d \rho_n \int_{W_{i,n}} \chi_{A}(x) \,\frac{h (x)}{\int_{W_{i,n}} h \, d m^u} \, d m^u(x) \leq \nu_n(A \cap B) \leq $$ $$e^{ 2 \epsilon} \int d \rho_n \int_{W_{i,n}} \chi_{A}(x) \,\frac{h (x)}{\int_{W_{i,n}} h \, d m^u} \, d m^u(x) \ \ \forall \ A \in {\mathcal{A}}
\end{equation}
\vspace{.2cm}
{\bf Step 4: Rohlin decomposition of the measure $\mu$} \index{descomposici\'{o}n de Rohlin}
Let $\mu$ be the probability measure constructed in Step 1:
\begin{equation} \label{eqnmu}\mu= \lim_{i \rightarrow + \infty} \mu_{n_i},\end{equation}
where the limit is in the weak* topology, and $\mu_n$ is defined for every $n \geq 1$ by equality (\ref{eqn nu_n-mu_n}), as the arithmetic average of the measures $\nu_j$ for $j \in \{0, \ldots, n-1\}$. Fix $n \geq N$ and consider the measurable partition ${\mathcal Q}$ of the ball $B$ formed by the local unstable manifolds. Denote by $B / \sim$ the quotient measurable space of $B$ with respect to that partition. Using inequalities (\ref{eqnDescoRohlinAproxnu_n}) we obtain, for every Borel set $A \subset M$, the following $\epsilon$-approximation of $\mu_n(A)$:
\begin{equation}
\label{eqnDescoRohlinAproxmu_n}
e^{-2 \epsilon} \int_{B/\sim} d \Big(\frac{1}{n}\sum_{j= 0}^{n-1}\rho_j \Big) \, \int_{W} \chi_{A} \,\frac{h }{\int_{W} h \, d m^u} \, d m^u(x) \leq \mu_n(A \cap B) \leq $$ $$e^{ 2 \epsilon} \int d \frac{1}{n}\Big(\sum_{j= 0}^{n-1}\rho_j \Big) \, \int_{W} \chi_{A} \,\frac{h }{\int_{W} h \, d m^u} \, d m^u(x) \ \ \forall \ A \in {\mathcal{A}}, \ \ \ \forall \ n \geq N.
\end{equation}
where, by convention, $\rho_j$ is defined by equality (\ref{eqnrhosubn}) if
$f^j(W_0) \cap B \neq \emptyset$, and $\rho_j := 0$ otherwise.
We denote \begin{equation} \label{eqnrho*_n} \rho^*_n := \frac{1}{n} \sum_{j= 0}^{n-1} \rho_j.\end{equation}
By construction, $\rho^*_n$ is a finite measure on the quotient space $B/ \sim$ of the ball $B$ with respect to the partition ${\mathcal Q}$ into local unstable submanifolds.
By equality (\ref{eqn-4}), $\mu_n(B)>0$ for all sufficiently large $n$ (say $n \geq N$). Then, by inequalities (\ref{eqnDescoRohlinAproxmu_n}), $\rho^*_n$ is a nonzero measure for every $n \geq N$. Let us prove that the sequence of measures $\{\rho^*_n\}_{n \geq N}$ is uniformly bounded above by $e^{2\epsilon}$. Indeed, on the one hand the measure $\mu_n$ constructed by equality (\ref{eqn nu_n-mu_n}) is a probability measure on the whole manifold $M$, and on the other hand, from the left-hand inequality in (\ref{eqnDescoRohlinAproxmu_n}), we obtain
$$1 \geq \mu_n(B) \geq e^{-2 \epsilon} \int _{B/ \sim} d \rho_n^* = e^{-2 \epsilon}\, \rho^*_n(B/\sim).$$
The space of all finite measures on the measurable space $B/\sim$ that are uniformly bounded by $e^{2 \epsilon}$, endowed with the weak* topology, is sequentially compact. Hence, there exists a subsequence $\{n_{i_k}\}_{k \in \mathbb{N}}$ of $\{n_i\}_{i \in \mathbb{N}}$ (which for simplicity we keep denoting $n_i$), such that $\{\rho^*_{n_i}\}_{i}$ converges to a finite measure $\rho^*$.
In summary, we have proved the existence of a finite measure $\rho^*$ on the quotient measurable space $B/\sim$, such that:
\begin{equation} \label{eqnrho*}\rho^* = \lim_{i \rightarrow + \infty} \rho^*_{n_i} = \lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i -1} \rho_j,\end{equation}
where the limit of measures is taken in the weak* topology.
Consider a nonnegative continuous real function $\psi: M \mapsto [0,1]$. Inequalities (\ref{eqnDescoRohlinAproxmu_n}) and equality (\ref{eqnrho*_n}) imply
\begin{equation}
\label{eqnDescoRohlinAproxmu_n_2}
e^{-2 \epsilon} \int_{B/\sim} \, d \rho^*_{n_i} \, \int_{W} \psi \,\frac{h }{\int_{W} h \, d m^u} \, d m^u \leq \int_B \psi \, d\mu_{n_i} \leq $$ $$e^{2 \epsilon} \int_{B/\sim} \, d \rho^*_{n_i} \, \int_{W} \psi \,\frac{h }{\int_{W} h \, d m^u} \, d m^u \ \ \forall \ n_i \geq N.
\end{equation}
Applying property (\ref{eqn(ii)}) of the Bounded Distortion Lemma, we deduce that the product $\psi \cdot h$ is a continuous function on $B$. Then, since $m^u(W^u_{\delta}(x))$ also depends continuously on $x$ (because the tangent subspace $E^u_x$ depends continuously on $x$), we deduce that the right-hand integral in (\ref{eqnDescoRohlinAproxmu_n_2}) is a positive real number depending continuously on the unstable manifold $W \in B/\sim$. Then, by equality (\ref{eqnrho*}) and by the definition of the weak* topology, we deduce that:
$$\lim_{i \rightarrow + \infty} \int_{B/\sim} \, d \rho^*_{n_i} \, \int_{W} \psi \,\frac{h }{\int_{W} h \, d m^u} \, d m^u = $$ $$= \int_{B/\sim} \, d \rho^* \, \int_{W} \psi \,\frac{h }{\int_{W} h \, d m^u} \, d m^u. $$
Taking limits in inequalities (\ref{eqnDescoRohlinAproxmu_n_2}) and recalling (\ref{eqnmu}), we obtain:
\begin{equation}
\label{eqnDescoRohlinAproxmu_n_3}
e^{-2 \epsilon} \int_{B/\sim} \, d \rho^* \, \int_{W} \psi \,\frac{h }{\int_{W} h \, d m^u} \, d m^u \leq \int_B \psi \, d\mu \leq $$ $$e^{2 \epsilon} \int_{B/\sim} \, d \rho^* \, \int_{W} \psi \,\frac{h }{\int_{W} h \, d m^u} \, d m^u
\end{equation}
By linearity, the right-hand integral in (\ref{eqnDescoRohlinAproxmu_n_3}) defines a bounded, positive linear functional on the space $C^0(M, \mathbb{R})$ of all continuous functions $\psi: M \mapsto \mathbb{R}$. Hence, by the Riesz Representation Theorem, it defines a finite measure $\widetilde \mu$.
Since inequalities (\ref{eqnDescoRohlinAproxmu_n_3}) hold for every $\psi \in C^0(M, [0,1])$, and the characteristic function $\chi_A$ of any Borel set $A$ can be approximated in $L^1(\mu)$ and in $L^1(\widetilde \mu)$ by a continuous function $\psi \in C^0(M, [0,1])$, we obtain, for every Borel set $A$, the following inequalities:
\begin{equation}
\label{eqnDescoRohlinAproxmu}
e^{-2 \epsilon} \int_{B/\sim} d \rho^* \, \int_{W} \chi_A \,\frac{h }{\int_{W} h \, d m^u} \, d m^u \leq \mu(A \cap B) \leq $$ $$e^{2 \epsilon}\int_{B/\sim} \, d \rho^* \, \int_{W} \chi_A \,\frac{h }{\int_{W} h \, d m^u} \, d m^u.
\end{equation}
By construction, the measures involved in inequalities (\ref{eqnDescoRohlinAproxmu}) do not depend on the number $\epsilon>0$, while the inequalities hold for every sufficiently small $\epsilon >0$. Hence, letting $\epsilon \rightarrow 0^+$, we deduce:
\begin{equation}
\label{eqnDescoRohlinmu}
\mu(A \cap B) = \int_{B/\sim} \, d \rho^* \, \int_{W} \chi_A \,\frac{h }{\int_{W} h \, d m^u} \, d m^u \ \ \forall \ A \in {\mathcal A}.
\end{equation}
Dividing equality (\ref{eqnDescoRohlinmu}) by $\mu(B) >0$, we have found the Rohlin decomposition of the probability measure $\mu$ restricted to the ball $B$, with respect to the partition ${\mathcal Q}$ of $B$ into local unstable submanifolds. Indeed, the measure $\widehat \mu$ on the quotient space $B/\sim$ with respect to that partition is $\rho^*/\mu(B)$ (it is immediate to check that this is a probability measure on $B/\sim$); and the unstable conditional measures $\mu^u$ of $\mu$ are defined by the right-hand integrals in equality (\ref{eqnDescoRohlinmu}). Hence, $$\mu^u \ll m^u$$ for $\widehat \mu$-almost every unstable manifold $W \in B/\sim$. Moreover, the Radon-Nikodym derivative of $\mu^u$ with respect to $m^u$ (the density) is:
$$\frac{d\mu^u}{dm^u}(x) = h(x). $$
Using property (\ref{eqn(i)}) of the Bounded Distortion Lemma, $1/h$ is a continuous function, bounded above. Then
$$\frac{dm^u}{d\mu^u}(x) = \frac{1}{h(x)}, $$
which implies $$m^u \ll \mu^u,$$
finishing the proof that $\mu$ is a Gibbs measure, that its unstable conditional measures $\mu^u$ are equivalent to the Lebesgue measures $m^u$ along the unstable manifolds, for $\mu$-a.e. point, and that its unstable density function $h$ is continuous and strictly positive. This completes the proof of Assertion I of paragraph \S\ref{proofTheoremSRBanosov}. \hfill $\Box$
{\bf Note: } Moreover, using equality (\ref{eqnh }) we have proved that the density of the unstable conditional measures of $\mu$ along the local unstable manifold $W^u_{\delta}(x_i)$ is
$$h (x) = \frac{\prod_{h= 1}^{+ \infty} J^u(f^{-h}(x_i))}{\prod_{h= 1}^{+ \infty} J^u(f^{-h}(x))},$$
which also proves the claim of Remark \ref{remarkSRBanosovDensidad}.
\vspace{.3cm}
\newpage
{\bf Second part of the proof of Theorem \ref{TheoremSRBanosov}}
(Proof of Assertion II of \S\ref{proofTheoremSRBanosov})
\begin{exercise}\em
Prove Assertion II of \S\ref{proofTheoremSRBanosov}.
Hint: Let $\mu$ be an ergodic Gibbs probability measure with continuous unstable density. One must prove that there exist a measurable set of full $\mu$-measure and an open set containing it, such that Lebesgue-a.e. point of that open set belongs to the statistical basin of attraction $B$ of $\mu$. For $\mu$-a.e. $x_0 \in M$, every ball $B_{\delta}(x_0)$ has positive $\mu$-measure. Let us prove that Lebesgue-almost every point of $B_{\delta}(x_0)$, for $\delta >0$ small enough depending on $x_0$, belongs to the basin $B$. Consider the density $h(x_0)$ of $\mu^u_{x_0}$ with respect to the unstable Lebesgue measure $m^u_{x_0}$. Any density function is nonnegative. By the definition of the unstable density function $h= d \mu^u/dm^u$, and by the Rohlin decomposition theorem, $h(x_0) >0$ for $\mu$-a.e. $x_0 \in M$. Since $h$ is continuous by hypothesis, $h(x) >0$ for every $x \in W^u_{\delta}(x_0)$, if we take $\delta >0$ small enough (depending on the point $x_0$). Now we can apply the same arguments as in the proof of Theorem \ref{theoremGibbs->SRB}, using condition (\ref{eqnholonomiaAC}) of absolute continuity of the holonomy $h_s$ along the stable foliation, established in Theorem \ref{theoremTeoriaPesin} of Pesin Theory. In detail: by the construction suggested above, the unstable conditional measure $\mu^u$ along the manifold $W^u_{\delta}(x_0)$ has strictly positive density, so it is equivalent to the Lebesgue measure $m^u$ on that local unstable manifold. Use this property and the ergodicity of $\mu$ to prove that $m^u$-a.e. point of $W^u_{\delta}(x_0)$ belongs to $B$ (i.e. $\mu$-a.e. point belongs to the basin $B$, which implies that $\mu^u$-a.e. point of $W^u_{\delta}(x_0)$ belongs to $B$ by Theorem \ref{theoremRohlin}).
Use (\ref{eqnholonomiaAC}) in both directions to deduce that the Lebesgue measure $m$ of $h_s^{-1}(W^u_{\delta}(x_0) \setminus B) \subset M$ is zero. Finally, apply Theorem \ref{theoremProductoLocal} on the local product structure of the Anosov diffeomorphism $f$ to deduce that $h_s^{-1}(W^u_{\delta}(x_0)) = B_{\delta}(x_0)$, and use the same argument as in the proof of Theorem \ref{theoremGibbs->SRB} to show that $h_s^{-1}(B \cap W^u_{\delta}(x_0)) \subset B$ and conclude that $m$-a.e. point of $B_{\delta}(x_0)$ belongs to $B$. We conclude that for $\mu$-a.e. $x_0 \in M$ there exists $\delta >0$ (possibly depending on $x_0$) such that $m$-a.e. point of $B_{\delta}(x_0)$ belongs to the statistical basin of attraction $B$ of the ergodic Gibbs measure $\mu$. Taking the union of those open balls, one obtains an open set containing the support of $\mu$ such that $m$-a.e. point of that open set is contained in the basin $B$.
\end{exercise}
{\bf Third and last part of the proof of Theorem \ref{TheoremSRBanosov}}
(Proof of Assertion III of \S\ref{proofTheoremSRBanosov})
{\em Proof: } By Assertion I, already proved, there exist Gibbs measures $\mu$ whose unstable conditional measures $\mu^u$ satisfy the two properties of that assertion. Applying Theorem \ref{theoremGibbs->SRB}, part (b) of Corollary \ref{corolarioRohlin1}, and Definition \ref{definitionMedidaSRB}, we deduce that the statistical basins of attraction of the ergodic components of $\mu$ have positive Lebesgue measure, and that these ergodic components are ergodic Gibbs measures whose unstable conditional measures satisfy the two properties of Assertion I. Therefore, the set $B$ formed by all the points belonging to the statistical basins of attraction of ergodic Gibbs measures satisfying those two properties fulfills $$m(B) >0.$$ Note that $B$ is measurable: indeed, the ergodic Gibbs measures are at most countably many, because by Theorem \ref{theoremGibbs->SRB} and Definition \ref{definitionMedidaSRB} their statistical basins of attraction are pairwise disjoint and have positive Lebesgue measure. The statistical basin of attraction of any probability measure is, by its construction given in Definition \ref{DefinicionCuencaDeAtraccionEstadistica}, a measurable set. Hence $B$, a countable union of measurable sets, is measurable.
We want to prove that $m(B)= 1$, that is, $m(C)= 0$ where $$C = M \setminus B.$$
Since the statistical basin of attraction of any probability measure is $f$-invariant, we have:
$$f(B)= B, \ \ \ \ \ f(C)= C.$$
Take $\delta >0$ small enough. Since $m(B) >0$, we deduce that:
$$ \mbox{\em There exists a ball $B_{\delta}(x_0)$ such that $m(B \cap B_{\delta}(x_0))>0$. \em } $$
We claim that
\begin{equation}
\label{eqnAprobarC}
\mbox{To be proved: } \ \ \forall \ x_0 \in M, \ \ m(B \cap B_{\delta}(x_0))>0 \ \Rightarrow \ m(C \cap B_{\delta}(x_0)) = 0. \end{equation}
First let us see that, once claim (\ref{eqnAprobarC}) is proved, Assertion III follows. Indeed, if the manifold $M$ is connected, then claim (\ref{eqnAprobarC}) implies that the open balls of radius $\delta$ fall into two disjoint classes: those balls such that Lebesgue-a.e. point of the ball belongs to $B$, and those such that Lebesgue-a.e. point belongs to the complement of $B$, that is, to $C$. Since the manifold is connected, one of the two classes is empty. Since there exists a ball $B_{\delta}(x_0)$ in the first class, the second class is empty. This proves that $m(C \cap B_{\delta})= 0$ for every ball of radius $\delta$. Hence $0 = m(C) = m(M\setminus B)$, from which we deduce that $m(B)= 1$, concluding Assertion III when the manifold $M$ is connected. If the manifold $M$ is not connected, then, being compact by hypothesis, it has a finite number $k \geq 2$ of connected components $M_1, \ldots, M_k$. Since $f$ is a diffeomorphism of class $C^{1 + \alpha}$, in particular it is continuous. Then there exists $p \geq 1$ such that $f^p: M_i \mapsto M_i$ for every $1 \leq i \leq k$, and $f^p|_{M_i} \in \mbox{Diff}^{1 + \alpha} (M_i)$. Since $f$ is Anosov, $f^p|_{M_i}$ is Anosov, where $M_i$ is a compact connected manifold. By what was proved above, $m(C \cap M_i) = 0$ for every $1 \leq i \leq k$. Hence $m(C) = 0$, finishing the proof of Assertion III of \S\ref{proofTheoremSRBanosov} from (\ref{eqnAprobarC}).
It only remains to prove (\ref{eqnAprobarC}). Suppose, by contradiction, that $$m(B \cap B_{\delta}(x_0)) >0, \ \ m(C \cap B_{\delta}(x_0)) >0.$$ Denote by
$$W_0= W_{\delta}^u(x_0)$$ the local unstable manifold through $x_0$ in the ball $B_{\delta}(x_0)$ (that is, $W^u_{\delta}(x_0)$ is the connected component containing the point $x_0$ of the intersection $W^u(x_0) \cap B_{\delta}(x_0)$).
By property (\ref{eqnholonomiaAC}), established in Theorem \ref{theoremTeoriaPesin} on the absolute continuity of the stable holonomy, we have
$$m^u(W_0 \cap B) >0, \ \ \ m^u(W_0 \cap C) >0,$$
where $m^u$ is the unstable Lebesgue measure along the local unstable manifold $W_0$.
As in Step 1 of the proof of Assertion I in \S\ref{proofTheoremSRBanosov}, we construct the following probability measures on $M$. For any Borel set $A \subset M$, we define:
$$\mu_0(A):= \frac{m^u(W_0 \cap A)}{m^u(W_0)} = \lambda \mu_{0,B} (A) + (1-\lambda) \mu_{0,C} (A),$$
where $$\lambda:= \frac{m^u(B \cap W_0)}{m^u(W_0)}, \ \ 1 - \lambda:= \frac{m^u(C \cap W_0)}{m^u(W_0)},$$
$$\mu_{0,B} (A):= \frac{m^u(W_0 \cap A \cap B)}{m^u(W_0 \cap B)}, \ \ \ \mu_{0,C} (A):= \frac{m^u(W_0 \cap A \cap C)}{m^u(W_0 \cap C)};$$
$$\nu_n(A) := \mu_0(f^{-n}(A)) = \lambda \nu_{n,B}(A) + (1- \lambda) \nu_{n,C}(A),$$
where $$\nu_{n,B}(A) := \mu_{0,B} (f^{-n}(A)), \ \ \nu_{n,C}(A) := \mu_{0,C} (f^{-n}(A)), $$
$$\mu_n := \frac{1}{n} \sum_{j= 0}^{n-1}\nu_{j} = \lambda \mu_{n, B} + (1- \lambda) \mu_{n,C},$$
where
$$\mu_{n,B} := \frac{1}{n} \sum_{j= 0}^{n-1}\nu_{j, B}, \ \ \ \mu_{n,C} := \frac{1}{n} \sum_{j= 0}^{n-1}\nu_{j, C}.$$
We take a subsequence $\{n_i\}_{i \geq 1}$ such that the sequences of probability measures $\{\mu_{n_i}\}_{i \geq 1},$ \ $\{\mu_{n_i, B}\}_{i \geq 1}$ and $\{\mu_{n_i, C}\}_{i \geq 1}$ converge in the weak* topology. That is, there exist probability measures $\mu$, $\mu_B$ and $\mu_C$ such that:
$$ \mu = \lim_{i \rightarrow + \infty} \mu_{n_i}, \ \ \mu_B = \lim_{i \rightarrow + \infty} \mu_{n_i, B}, \ \ \mu_C = \lim_{i \rightarrow + \infty} \mu_{n_i,C}. $$
Then:
$$\mu = \lim_{i \rightarrow + \infty} \mu_{n_i} = \lim_{i \rightarrow + \infty} \lambda \mu_{n_i, B} + (1 - \lambda) \mu_{n_i, C} =$$ $$= \lambda \lim_{i \rightarrow + \infty} \mu_{n_i, B} + (1 - \lambda) \lim_{i \rightarrow + \infty}\mu_{n_i, C} = \lambda \mu_{ B} + (1 - \lambda) \mu_{ C}.$$
Hence $$\mu_C \ll \mu.$$ By the Ergodic Decomposition Theorem, since $\mu_C \ll \mu$, we deduce that:
\em The ergodic components ${\mu_C}_x$ of $\mu_C$ coincide with the ergodic components $\mu_x$ of $\mu$, for $\mu_C$-a.e. $x \in M$. \em
By what was proved in Assertion I of \S\ref{proofTheoremSRBanosov}, the probability measure $\mu$ is a Gibbs measure and its unstable conditional measures satisfy the two properties of Assertion I. Then, using parts (b) and (d) of Corollary \ref{corolarioRohlin1}, we obtain:
\em The ergodic components $\mu_x$ of $\mu$ are ergodic Gibbs measures and their unstable conditional measures $\mu_x^u$ satisfy the two properties of Assertion \em I, \em for $\mu$-a.e. $x \in M$. \em
From the two statements proved above, we conclude that:
\em The ergodic components of $\mu_C$ are ergodic Gibbs measures whose unstable conditional measures satisfy the two properties of Assertion \em I, \em for $\mu_C$-a.e. $x \in M$. \em
We now apply Assertion II, already proved, to deduce the following result:
{\bf Statement (a), already proved: } \em Lebesgue-a.e. point of an open set $V$ containing the support of $\mu_C$ belongs to the set $B= M \setminus C$. \em
However, by the construction of the measure $\mu_C = \lim_i \mu_{n_i,C}$, if we take a nonnegative continuous real function $\psi: M \mapsto [0,1]$ supported in $V$ and such that $\psi|_V >0$, then:
$$0 < \int \chi_V \psi \, d \mu_C = \int \psi \, d \mu_C = \lim_{i \rightarrow + \infty} \int \psi \, d \mu_{n_i, C} = $$ $$= \lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i - 1} \int \psi \, d \nu_{j, C}. $$
Hence, there exists $j \geq 1$ such that $$0 < \int \psi \, d \nu_{j, C} = \int \psi \circ f^{-j} \, d \mu_{0,C} = \int (\chi_V \cdot \psi) \circ f^{-j} \, d \mu_{0,C}.$$
(The last equality holds because $\psi$ is supported in $V$, that is, $\chi_V \cdot \psi = \psi$.)
By the construction of $\mu_{0, C}$, and by the positivity established in the last inequality, we deduce that
$$m^u(f^{-j}(V) \cap C \cap W_0) >0.$$
Since $f$ is a diffeomorphism and $f^j(C)= C$, we deduce
\begin{equation} \label{eqn-21}m^u(V \cap C \cap f^j(W_0)) >0. \end{equation}
We now take an open ball $B_{\delta}(x_1) \subset V$, of sufficiently small radius $\delta >0$, centered at a point $x_1 \in V \cap C \cap f^j(W_0)$ such that \begin{equation}
\label{eqn-22}
m^u( C \cap W^u_{\delta}(x_1)) >0,\end{equation} where $W^u_{\delta}(x_1)$ is the connected component containing $x_1$ of the intersection $B_{\delta}(x_1) \cap f^j(W^u(x_0))$, which in turn contains the connected component through the point $x_1$ of $B_{\delta}(x_1) \cap f^j(W_0)$, whose intersection with $C$, by (\ref{eqn-21}), has positive Lebesgue measure $m^u$.
We apply property (\ref{eqnholonomiaAC}) of absolute continuity of the stable holonomy $h_s: B_{\delta}(x_1) \mapsto W_{\delta}^u(x_1)$, established in Theorem \ref{theoremTeoriaPesin}. From (\ref{eqn-22}) we deduce
$$m(h_s^{-1}( C \cap W^u_{\delta}(x_1))) >0. $$
Since $h_s^{-1}(C) \subset C$ and $h_s^{-1}(W^u_{\delta}(x_1)) \subset B_{\delta}(x_1) \subset V$, we deduce that $m (C \cap V) >0$, contradicting statement (a) proved above.
This contradiction finishes the proof of (\ref{eqnAprobarC}), and with it the proof of Assertion III of \S\ref{proofTheoremSRBanosov} and of Theorem
\ref{TheoremSRBanosov}.
\hfill $\Box$
\section{Statistical attractors and SRB-like measures}
Throughout this chapter $f: M \mapsto M$ is a continuous map on a compact Riemannian manifold $M$ of finite dimension. We denote by $m$ the Lebesgue measure on $M$, rescaled so that it is a probability measure: $m(M)= 1$ (i.e. if $0 <m(M) \neq 1$, we replace $m$ by the probability $m/m(M)$).
We first define the notion of Milnor attractor. This is not, in general, a statistical attractor. However, the proof of its existence is practically the same as the proof of existence of statistical attractors, which we will see in the next section.
\subsection{Milnor attractors}
\begin{definition}
\label{definitionAtractorMilnor} \em \index{atractor! de Milnor}
{\bf Milnor attractor \cite{Milnor}} A \em Milnor attractor \em is a nonempty compact set $K \subset M$, invariant under $f$ (i.e. $f^{-1}(K) = K$), such that $$m(E_K) = 1,$$ where the set $E_K \subset M$, called the \em (topological) basin of attraction \em of $K$, is defined by
\begin{equation} \label{eqnCuencaAtracMilnor}E_K := \{x \in M: \ \ \lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K) = 0\}. \end{equation} \index{cuenca de atracci\'{o}n! topol\'{o}gica} \index{cuenca de atracci\'{o}n! de Milnor} \index{$C(K)$ o $ E_K$ cuenca de atracci\'{o}n! topol\'{o}gica del compacto $K$}
\end{definition}
\begin{exercise}\em \label{ejercicio7}
Prove that for any nonempty compact set $K$, the set $E_K$ defined by equality (\ref{eqnCuencaAtracMilnor}) is measurable. Hint: The function $d: M \mapsto \mathbb{R}$ defined by $d(x) = \mbox{dist}(x,K)$ is measurable, and $f: M \mapsto M$ is measurable because it is continuous. The upper limit of a sequence of measurable functions is measurable.
\end{exercise}
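One way to make the hint of the previous exercise explicit is to write $E_K$ as a countable combination of measurable sets:
\[
E_K \;=\; \bigcap_{k= 1}^{+\infty}\ \bigcup_{N= 1}^{+\infty}\ \bigcap_{n= N}^{+\infty}
\Big\{x \in M: \ d(f^n(x)) < \tfrac{1}{k}\Big\}.
\]
Each set $\{x \in M: d(f^n(x)) < 1/k\}$ is the preimage of an open interval under the continuous (hence measurable) function $d \circ f^n$, and countable unions and intersections of measurable sets are measurable.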
\begin{exercise}\em
Let $K \subset K'$ be two Milnor attractors. Prove that $E_K \subset E_{K'}$.
\end{exercise}
In Definition \ref{definitionAtractorMilnor} we observe that, for a Milnor attractor, the observability criterion of its basin is the Lebesgue-measurable criterion (cf. Remark \ref{remarkObservabilidadTopyEstad}), while the type of attraction is topological (cf. Definitions \ref{definitionAtraccionTopol}
and \ref{definitionAtraccionEstad}).
It is immediate to check that every topological attractor is a Milnor attractor. However, not every Milnor attractor is topological, as we will see in Examples \ref{ejemploMilnorNoTopologico} and \ref{ejemploMilnorNoTopologico1}.
It is also immediate to check that every ergodic attractor is a Milnor attractor. But not every Milnor attractor is ergodic, as shown by Example \ref{ejemploRotacionEsfera}, in which the whole sphere $S^2$ is a topological attractor, and hence a Milnor attractor, but there are no ergodic attractors.
\begin{definition} \index{atractor! de Milnor! $\alpha$-obs minimal} \index{conjunto! minimal $\alpha$-obs} \index{atractor! $\alpha$-observable}
{\bf $\alpha$-obs. minimality of a Milnor attractor \cite{CatIlyshenkoAttractors}} \em Let a real number $0 <\alpha \leq 1$ be given. A Milnor attractor $K$ is called \em $\alpha$-observable \em (we write \lq\lq \em $K$ is $\alpha$-obs\rq\rq.\em) if its (topological) basin of attraction $E_K$ satisfies
$$m(E_K) \geq \alpha.$$
An $\alpha$-obs. Milnor attractor $K$ is called \em $\alpha$-obs. minimal \em if it contains no proper nonempty compact subsets that are $\alpha$-obs. Milnor attractors for the same value of $\alpha$.
In particular, when $\alpha= 1$, we have defined the $1$-observable and the $1$-observable minimal Milnor attractors.
Observe that every $1$-observable Milnor attractor is $\alpha$-observable for any $0 < \alpha \leq 1$. But a $1$-obs. minimal Milnor attractor need not be $\alpha$-obs. minimal for every $0 < \alpha < 1$ (see Examples \ref{ejemploMilnorNoTopologico} and \ref{ejemploMilnorNoTopologico1}).
\end{definition}
\begin{theorem} {\bf Existence of Milnor attractors} \index{theorem! existence of! Milnor attractor} \index{attractor! Milnor! $\alpha$-obs minimal}
\index{set! minimal $\alpha$-obs} \label{TeoExistenciaAtrMilnor}
Let $f: M \mapsto M$ be continuous on a compact Riemannian manifold $M$ of finite dimension. Let $0 < \alpha \leq 1$ be given. Then there exist $\alpha$-obs. minimal Milnor attractors for $f$. Moreover, if $\alpha= 1$, the $1$-obs. minimal Milnor attractor is unique.
\end{theorem}
The proof of Theorem \ref{TeoExistenciaAtrMilnor}, in a more restricted version stating only the existence and uniqueness of the $1$-observable minimal Milnor attractor, was first given in \cite{Milnor}. In the appendix of \cite{CatIlyshenkoAttractors}, $\alpha$-obs. minimality of Milnor attractors is defined for $0 < \alpha \leq 1$, and it is observed that the existence proof given in \cite{Milnor} applies, with an immediate adaptation, to prove the existence of $\alpha$-obs. minimal Milnor attractors for every $0 < \alpha \leq 1$.
{\em Proof } {\em of Theorem }\ref{TeoExistenciaAtrMilnor}: Let ${\aleph}_{\alpha}$ be the family of the $\alpha$-obs. Milnor attractors (not necessarily minimal). This family is nonempty since, trivially, $M$ is a $1$-obs. Milnor attractor. On $\aleph_{\alpha}$ we consider the partial order relation $K_1 \subset K_2$. Then the basins of topological attraction $E_{K_1}$ and $E_{K_2}$ satisfy $$E_{K_1} \subset E_{K_2}, \ \ \alpha \leq m(E_{K_1}) \leq m(E_{K_2}).$$
Let $\{K_i\}_{i \in I}$ be a chain in $\aleph_{\alpha}$ (not necessarily countable). That is, $\{K_i\}_{i \in I}$ is a totally ordered subset of $\aleph_{\alpha}$ with respect to the order relation $\subset$.
Let us prove:
{\bf Claim (i)} (To be proved) \em There exists in $\aleph_{\alpha}$ a lower bound $K$ of the chain $\{K_i\}_{i \in I}$. \em That is, let us prove that there exists $K \in \aleph_{\alpha}, \ K \subset K_i$ for all $i \in I$.
Indeed, the set $K= \bigcap_{i \in I} K_i$ is a nonempty compact set, because any finite subcollection $K_{1} \supset K_2 \supset \ldots \supset K_l$ of the given chain $\{K_i\}_{i \in I}$ has intersection $K_l$, which is a nonempty compact set; hence, by the finite intersection property of compact sets, the whole intersection is nonempty. To prove that $K \in {\aleph}_{\alpha}$, we must now prove that $m(E_K) \geq \alpha$. Let $j \in \mathbb{N}^+$ and let $V_j \supset K$ be the open set of all points of $M$ at distance less than $1/j$ from $K$. We claim that there exists \begin{equation}
\label{eqn-25} K_{i_j} \subset V_j \mbox{ for some } i_j \in I. \end{equation} By contradiction, suppose that for some fixed $j \in \mathbb{N}^+$ and for all $i \in I$, the compact set $K_i \setminus V_j$ is nonempty. Then, since $\{K_i\}_{i \in I}$ is totally ordered by the order relation $\subset$, the family of compact sets $\{K_i \setminus V_j \}_{i \in I}$ is also totally ordered. Arguing as above, but with this new totally ordered family of compact sets instead of the family $\{K_i\}_{i \in I}$, we deduce that the following compact set is nonempty: $$\bigcap_{i \in I} (K_i \setminus V_j) = \Big(\bigcap _{i \in I} K_i \Big) \setminus V_j = K \setminus V_j,$$
contradicting $K \subset V_j$. We have proved claim (\ref{eqn-25}).
Since every point $x \in E_{K_{i_j}}$ satisfies $\lim_{n} \mbox{dist}(f^n(x), K_{i_j}) = 0$, we have $f^{n}(x) \in V_j$ for all $n$ large enough (depending on the point $x$). This argument holds for every $j \in \mathbb{N}^+$. We conclude that every point of $\bigcap _{j \in {\mathbb{N}^+}} E_{K_{i_j}}$ belongs to $E_K$. Conversely, every point of $E_{K}$, by the definition of the basin of topological attraction, is contained in $E_{K_i}$ for all $i \in I$ (because $K \subset {K_i}$). In particular, this holds for $i_j$, for every $j \in \mathbb{N}^+$. Hence:
$$E_K = \bigcap _{j \in {\mathbb{N}^+}} E_{K_{i_j}}.$$
Since the countable collection $E_{K_{i_j}}$ is totally ordered, we obtain $$m(E_K) = m \big(\bigcap_{j \in {\mathbb{N}^+}} E_{K_{i_j}}\big) = \lim_{j \rightarrow + \infty} m(E_{K_{i_j}}) \geq \alpha,$$
which finishes the proof of claim (i).
From claim (i) it follows that every chain in $\aleph_{\alpha}$ has a lower bound $K \in \aleph_{\alpha}$. Applying Zorn's Lemma, there exist minimal elements of $\aleph_{\alpha}$. That is, there exists $K \in \aleph_{\alpha}$ containing no proper subsets belonging to $\aleph_{\alpha}$; in other words, there exists an $\alpha$-obs. minimal Milnor attractor $K$.
Now let us prove the uniqueness of the $1$-obs. minimal Milnor attractor. If there existed two Milnor attractors $K_1$ and $K_2$ that were $1$-obs. minimal, then the intersection $E$ of their basins of topological attraction, $E:= E_{K_1} \cap E_{K_2}$, would satisfy $$m(E)= 1,$$
because $m(E_{K_1})= m(E_{K_2}) = 1$. Every point $x \in E$ satisfies, by the definition of the basins $E_{K_1}$ and $E_{K_2}$, the following property:
\em For every $\epsilon >0$ there exists $N \geq 1$ such that \em
$$\mbox{dist}(f^n(x), K_1), \ \ \mbox{dist}(f^n(x), K_2) < \epsilon \ \ \forall \ n \geq N.$$
Hence $\mbox{dist}(K_1, K_2) < 2 \epsilon \ \ \forall \ \epsilon >0$, whence $K:= K_1 \bigcap K_2 \neq \emptyset$. It is standard to check that, $K_1$ and $K_2$ being nonempty compact sets with nonempty intersection, if a point $x$ satisfies
$$\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K_1)=\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K_2)= 0, $$
then $$\lim_{n \rightarrow + \infty} \mbox{dist}(f^n(x), K_1 \cap K_2)=0.$$
(Check this last assertion in part (a) of Exercise \ref{exercise5}.)
Hence $$E= E_{K_1} \bigcap E_{K_2} \subset E_{K_1 \cap K_2}, $$ and since $m(E)= 1$, we deduce that $m(E_{K_1 \cap K_2})= 1$. Then $K_1 \cap K_2$ is a $1$-obs. Milnor attractor. Since $K_1$ and $K_2$ were $1$-obs. minimal Milnor attractors, we conclude that $K_1 \cap K_2 = K_1 = K_2$, which finishes the proof of the uniqueness of the $1$-obs. minimal Milnor attractor, and of Theorem \ref{TeoExistenciaAtrMilnor}.
\hfill $\Box$
\begin{exercise}\em \label{exercise5}
{\bf (a)} Prove that if $K_1$ and $K_2$ are nonempty compact sets, and if there exists a sequence of points $x_n \in M$ such that
$$\lim_{n \rightarrow + \infty} \mbox{dist}(x_n, K_1)= \lim_{n \rightarrow + \infty} \mbox{dist}(x_n, K_2)= 0, $$
then $K_1 \cap K_2 \neq \emptyset$ and $$\lim_{n \rightarrow + \infty} \mbox{dist}(x_n, K_1 \cap K_2) = 0.$$
{\bf (b)} Prove that if $K_1$ and $K_2$ are two Milnor attractors such that $m(E_{K_1} \cap E_{K_2}) >0$, then $K_1 \cap K_2$ is nonempty, and it is a Milnor attractor whose basin of topological attraction is $$E_{K_1 \cap K_2} = E_{K_1} \cap E_{K_2}.$$
\end{exercise}
\begin{example} \em
\label{ejemploMilnorNoTopologico} Let $X= [0, 4 \pi] \times [0,1] $ be the compact rectangle of width $4 \pi$ and height 1, with one vertex at the origin, whose interior is contained in the first quadrant and whose sides are parallel to the axes. On the interval $[0,4 \pi]$ consider the map
$$f_1(x) = 1+ x - \cos x \ \ \forall \ x \in [0, 4 \pi], $$ and on $X$ the map
$$f(x,y) = (f_1(x), y/2) \ \ \forall \ (x,y) \in [0, 4 \pi] \times [0,1]. $$
Let $m$ be the Lebesgue measure on $X$, rescaled so that $$m(X)= 1.$$
It is standard to check (see Exercise \ref{exercise6}) that
$$K_1 := \{(2 \pi, 0)\} $$ is a Milnor attractor with basin of topological attraction $$E_{K_1} = (0, 2 \pi] \times [0,1],$$
and therefore $K_1$ is $1/2$-obs. minimal as a Milnor attractor.
$$K_2 := \{(4 \pi, 0)\}$$ is another Milnor attractor with basin of topological attraction $$E_{K_2} = (2 \pi, 4 \pi] \times [0,1],$$ and therefore $K_2$ is also $1/2$-obs. minimal as a Milnor attractor.
$K_1$ is not a topological attractor, because every neighborhood of $K_1$ contains points of the basin of topological attraction of $K_2$.
$K_2$ is a topological attractor.
$K_1 \cup K_2$ is the unique $1$-obs. minimal Milnor attractor, and it is also the unique $\alpha$-obs. attractor if $1/2 < \alpha \leq 1$.
$K_1$ and $K_2$ are the only $\alpha$-obs. Milnor attractors if $0 < \alpha \leq 1/2$ (and since each of them consists of a single point, they are $\alpha$-obs. minimal).
$K_1 \cup K_2$ is a topological attractor, but it is not minimal as a topological attractor, since it contains $K_2$, which is also a topological attractor.
\end{example}
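The dynamics of this example can be checked numerically. The following is a minimal Python sketch (an illustration, not part of the original argument) that iterates the map $f(x,y) = (1 + x - \cos x,\ y/2)$ and verifies that orbits starting with $x \in (0, 2\pi]$ approach $K_1 = \{(2\pi, 0)\}$, while orbits with $x \in (2\pi, 4\pi]$ approach $K_2 = \{(4\pi, 0)\}$; the iteration count 500 is an arbitrary choice.

```python
import math

def f(p):
    """One step of the map f(x, y) = (1 + x - cos x, y / 2) on [0, 4*pi] x [0, 1]."""
    x, y = p
    return (1.0 + x - math.cos(x), y / 2.0)

def iterate(p, n):
    """Return the n-th iterate f^n(p)."""
    for _ in range(n):
        p = f(p)
    return p

# Orbit starting in E_{K1} = (0, 2*pi] x [0, 1]: approaches (2*pi, 0).
x1, y1 = iterate((1.0, 1.0), 500)

# Orbit starting in E_{K2} = (2*pi, 4*pi] x [0, 1]: approaches (4*pi, 0).
# It first drifts slowly away from the repelling side of (2*pi, 0),
# which is why K1 attracts only from the left and is not topological.
x2, y2 = iterate((2.0 * math.pi + 0.1, 1.0), 500)
```

Since $f_1'(x) = 1 + \sin x \geq 0$, the first coordinate evolves monotonically; the derivative equals $1$ at the fixed points, so convergence there is slow, which is why several hundred iterates are used.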
\begin{exercise}\em
\label{exercise6} Prove all the assertions of Example \ref{ejemploMilnorNoTopologico}.
\end{exercise}
\begin{example} \em
\label{ejemploMilnorNoTopologico1}
On the square $X = [0,1] ^2$ consider the map
$$f(x,y) = (x, (1/2) y) \ \ \forall \ (x,y) \in [0,1]^2.$$
It is immediate to check that all the points of the segment $[0,1] \times\{0\}$ are fixed points and that $\lim_{n \rightarrow + \infty}f^n(x,y) = (x,0) \ \ \forall \ (x,y) \in [0,1]^2$. Then, for every $0 < \alpha \leq 1$, any compact segment $I \times \{0\}$ such that the length of the compact interval $I \subset [0,1]$ is exactly $\alpha$, is an $\alpha$-observable minimal Milnor attractor.

Let $K = (I_1 \cup I_2 \cup \ldots \cup I_k) \times \{0\}$, where each $I_j \subset [0,1]$ is a compact interval with nonempty interior, the collection of intervals $I_j$ is pairwise disjoint, and the sum of the lengths of the $I_j$ is exactly $\alpha$. Then $K$ is also an $\alpha$-obs. minimal Milnor attractor. Moreover, if $0 <\alpha < 1$ and $I \subset [0,1]$ is a Cantor set with Lebesgue measure equal to $\alpha $ (such Cantor sets always exist), then $I \times \{0\}$ is also an $\alpha$-obs. minimal Milnor attractor.

None of the $\alpha$-obs. minimal attractors constructed above, for $0 <\alpha < 1$, is a topological attractor, since the basins of topological attraction contain no neighborhood of the attractor. The unique $1$-observable minimal attractor is the segment $[0,1] \times \{0\}$, which is indeed a topological attractor.
\end{example}
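As a quick numerical sanity check of this example (an illustrative sketch, not part of the original text), one can iterate $f(x,y) = (x, y/2)$ and observe that each orbit converges to $(x, 0)$, so the limit point depends on the initial first coordinate; the sample points and the iteration count are arbitrary choices.

```python
def f(p):
    """One step of f(x, y) = (x, y / 2) on the square [0, 1]^2."""
    x, y = p
    return (x, y / 2.0)

def limit_point(p, n=100):
    """Approximate lim_{n} f^n(p); here the exact limit is (x, 0)."""
    for _ in range(n):
        p = f(p)
    return p

# Two orbits: each converges to a *different* fixed point (x, 0).
p1 = limit_point((0.3, 1.0))
p2 = limit_point((0.77, 0.5))
```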
\subsection{Statistical or Ilyashenko attractors}
In this section, as in the previous one, $f: M \mapsto M$ is a continuous map on a compact Riemannian manifold $M$ of finite dimension. We denote by $m$ the Lebesgue measure on $M$, rescaled so as to be a probability measure: $m(M)= 1$; i.e., if $0 <m(M) \neq 1$, we replace $m$ by the probability $m/m(M)$.
\begin{definition}
\label{definitionAtractorIlyashenko} \em
{\bf Statistical or Ilyashenko attractor \cite{Ilyashenko}, \cite{Ilyashenkogorodetski}} \index{attractor! Ilyashenko}
\index{basin of attraction! statistical}
\index{basin of attraction! Ilyashenko} \index{$A_K$ basin of attraction! statistical, of the compact set $K$}
\index{attractor! statistical}
A \em statistical or Ilyashenko attractor \em is a nonempty compact set $K \subset M$, invariant under $f$ (i.e. $f^{-1}(K) = K$), such that $$m(A_K) = 1,$$ where the set $A_K \subset M$, called the \em basin of statistical attraction \em of $K$, is defined by
\begin{equation} \label{eqnCuencaAtracIlyashenko}A_K := \{x \in M: \ \ \lim_{n \rightarrow + \infty} \frac{1}{n}\sum_{j= 0}^{n-1}\mbox{dist}(f^j(x), K) = 0\}. \end{equation}
\end{definition}
\begin{exercise}\em
Prove that for any nonempty compact set $K$, the set $A_K$ defined by equality (\ref{eqnCuencaAtracIlyashenko}) is measurable. Hint: the same as for Exercise \ref{ejercicio7}.
\end{exercise}
\begin{exercise}\em
Let $K \subset K'$ be two statistical attractors. Prove that $A_K \subset A_{K'}$.
\end{exercise}
It is standard to check that every Milnor attractor (and in particular every topological attractor and every ergodic attractor) is a statistical or Ilyashenko attractor (Exercise \ref{ejercicio8}). However, not every statistical attractor is a Milnor attractor, as we will see in Example \ref{ejemploIlyashenkoNoTopologico}. Moreover, not every statistical attractor is ergodic, as shown by Examples \ref{ejemploRotacionEsfera} and \ref{ejemploMilnorNoTopologico1}.
\begin{exercise}\em \label{ejercicio8} {\bf (a)} Let $\{a_n\}_{n \in \mathbb{N}}$ be a sequence of real numbers converging to $a \in \mathbb{R}$. Prove that the sequence of averages $\frac{1}{n} \sum_{j= 0}^{n-1} a_j$ converges to $a$.
{\bf (b)} Prove that every Milnor attractor $K$ is a statistical or Ilyashenko attractor. Hint: prove, using part (a), that the basin of topological attraction of $K$ is contained in the basin of statistical attraction of $K$.
{\bf (c) } Prove that in Examples \ref{ejemploRotacionEsfera} and \ref{ejemploMilnorNoTopologico1}, the Milnor attractors $K$ (which by part (b) are also statistical attractors) are not ergodic attractors. Hint: the only Milnor attractor $K$ satisfying condition (\ref{eqn28z}) of Definition \ref{definitionAtractorErgodico} of ergodic attractor does not satisfy condition (\ref{eqn30}) of that definition.
{\bf (d)} Prove that in Example \ref{ejemploMilnorNoTopologico}, the Milnor attractors $K$ (which by part (b) are also statistical attractors) are not ergodic attractors, although two of them satisfy condition (\ref{eqn30}) of Definition \ref{definitionAtractorErgodico} of ergodic attractor.
\end{exercise}
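Part (a) of the exercise, the Cesàro convergence of averages, can be illustrated numerically. The sketch below is only an illustration (not a proof), with an arbitrarily chosen test sequence $a_n = 3 + (-1)^n/(n+1)$, which converges to $3$:

```python
def cesaro_averages(seq):
    """Return the list of Cesaro averages (1/n) * sum_{j=0}^{n-1} a_j."""
    averages, total = [], 0.0
    for n, term in enumerate(seq, start=1):
        total += term
        averages.append(total / n)
    return averages

# A sequence converging to 3: its Cesaro averages also converge to 3.
a = [3.0 + (-1) ** n / (n + 1) for n in range(10000)]
avg = cesaro_averages(a)
```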
\subsection{Statistical attraction to a compact set}
In Definition \ref{definitionAtractorIlyashenko} we observe that, for a statistical attractor, the observability criterion for its basin is the Lebesgue-measurable one (cf. Remark \ref{remarkObservabilidadTopyEstad}), while the attraction is statistical (cf. Definitions \ref{definitionAtraccionTopol}
and \ref{definitionAtraccionEstad}). Indeed:
\begin{proposition}
\label{propositionAtraccionEstadistica1} {\bf Characterization of statistical attraction} \index{basin of attraction! statistical}
\index{attractor! statistical}
Let $K$ be a statistical or Ilyashenko attractor, and let $A_K$ be its basin of statistical attraction. Then, for every $x \in M$,
$x \in A_K$ if and only if
every probability measure $\mu$ that is the limit of some subsequence, convergent in the weak$^*$ topology, of the empirical probabilities $\sigma_n:= \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)}$ \em (cf. Definition \ref{definitionProbaEmpiricas}) \em satisfies
$$\mu(K) = 1.$$
\end{proposition}
{\em Proof: }
{\em Proof of the \lq\lq only if\rq\rq\ part: } Let $x \in A_K$ and let $$\mu = \lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i-1} \delta_{f^j(x)},$$
for some subsequence $\{n_i\}_{i \in \mathbb{N}}$ of the natural numbers along which this limit $\mu$ exists in the weak$^*$ topology of the space of probability measures. We must prove that $\mu(K) = 1$.
By the definition of the weak$^*$ topology, for every continuous function $\psi: M \mapsto \mathbb{R}$ we have
$$\lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i-1} \psi \circ f^j(x) = \int \psi \, d \mu.$$ In particular, the above equality holds for the continuous function $$\psi(x) := \mbox{dist}(x, K).$$ By Definition \ref{definitionAtractorIlyashenko}, since $x \in A_K$, for every $\epsilon > 0$ there exists $N \geq 1$ such that
$$\frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist}(f^j(x), K) < \epsilon \ \ \forall \ n \geq N.$$
In particular, for $n= n_i$ with $i$ large enough, we have: $$\frac{1}{n_i} \sum_{j= 0}^{n_i-1} \psi(f^j(x)) < \epsilon. $$
Since $\epsilon > 0$ is arbitrary, from the above equalities we deduce that $\int \psi \, d \mu = 0$, that is,
$$\int \mbox{dist}(x, K) \, d \mu = 0,$$
and since $\mbox{dist}( \cdot, K)$ is a nonnegative function, we deduce that $\mbox{dist}(x,K) = 0$ for $\mu$-a.e. $x \in M$.
Since $K$ is compact, we deduce that $x \in K$ for $\mu$-a.e. $x \in M$, hence
$$\mu(K) = \int \chi_K(x) \, d\mu = \int 1 \, d \mu = 1,$$
as we wanted to prove.
{\em Proof of the \lq\lq if\rq\rq\ part: } Let $x \in M$ be such that every probability measure $\mu$ that is the weak$^*$ limit of a convergent subsequence of the empirical probabilities $\sigma_n(x)$ is supported on $K$. We must prove that $\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist}(f^j(x), K) = 0$. Let $n_i$ be a subsequence along which the limit $$\lim _{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i-1} \mbox{dist}(f^j(x), K) = a$$ exists. It is not restrictive to assume that the subsequence $\sigma_{n_i}(x)$ is convergent (otherwise, we replace $\{n_i\}$ by a suitable subsequence of it). Then, calling $\mu$ the limit of $\sigma_{n_i}$, and integrating the continuous function $\psi = \mbox{dist}(\cdot, K)$, we obtain:
\begin{equation}
\label{eqn-23}
a = \lim_{i \rightarrow + \infty} \frac{1}{n_i} \sum_{j= 0}^{n_i-1} \mbox{dist}(f^j(x), K) = \lim_{i \rightarrow + \infty} \int \psi(y) \, d (\sigma_{n_i}(x))(y) = \int \psi \, d \mu = \int \mbox{dist}(y,K)\, d \mu(y) .\end{equation}
But the last integral in (\ref{eqn-23}) equals zero: from the hypothesis $\mu(K)= 1$ we deduce that $y \in K$ for $\mu$-a.e. $y$, hence $\mbox{dist}(y,K) = 0$ for $\mu$-a.e. $y \in M$. Then equality (\ref{eqn-23}) implies that
$\lim_n\frac{1}{n} \sum_{j= 0}^{n-1}\mbox{dist}(f^j(x), K)= 0 $ (since the limit of any convergent subsequence is $a= 0$). Hence $x \in A_K$, which finishes the proof of Proposition \ref{propositionAtraccionEstadistica1}.
\hfill $\Box$
\begin{definition}
\label{definitionSoporteDemu} \em Let $\mu$ be a probability measure on the Borel sets of $M$. The \em compact support of $\mu$ \em is the minimal compact set $K \subset M$ such that $\mu(K) = 1$. The compact support exists, by Zorn's Lemma and the property that any family of compact sets has nonempty intersection whenever the intersections of all its finite subfamilies are nonempty. The compact support of $\mu$ is unique, since if there existed two different ones, $K_1$ and $ K_2$, then $\mu(K_1 \cap K_2)= 1$ and neither $K_1 $ nor $ K_2$ would have the minimality property.
\end{definition}
\begin{proposition}
\label{propositionSRB->estadistico} {\bf SRB measures and statistical attractors} \index{measure! SRB} \index{attractor! statistical}
Let $f: M \mapsto M$ be continuous on the compact manifold $M$. If there exists a measure $\mu$ that is SRB for $f$ \em (cf. Definition \ref{definitionMedidaSRB}), \em then the compact support $K$ of $\mu$ is a statistical or Ilyashenko attractor.
\end{proposition}
\begin{remark} \em
\label{remarkAtracIlyashenkoNo->SRB} The converse of Proposition \ref{propositionSRB->estadistico} is false. Indeed, SRB measures do not always exist (cf. Examples \ref{ejemploRotacionEsfera} and \ref{ejemploMilnorNoTopologico1}), but Ilyashenko attractors always exist, as we will prove in Theorem \ref{TeoExistenciaAtrIlyashenko}.
Although the converse of Proposition \ref{propositionSRB->estadistico} is false, the statement of this Proposition can be generalized in such a way that its converse becomes true. More precisely, a nonempty invariant compact set $K$ is a statistical or Ilyashenko attractor if and only if it is the minimal compact support of a suitable collection of invariant measures, which we call SRB-like, and which describe in an optimal way the statistics of the orbits of a set of positive Lebesgue measure. Indeed, in the next section we will define the \lq\lq SRB-like\rq\rq\ or \lq\lq pseudo-physical\rq\rq\ probability measures, which include the SRB or physical measures (when these exist), but may also include other invariant probability measures that are not necessarily SRB. The generalized statement of Proposition \ref{propositionSRB->estadistico}, using SRB-like measures instead of SRB measures, together with its converse, will be stated and proved in Theorem \ref{theoremSRB-like<->Ilyashenko}.
\end{remark}
{\em Proof }{\em of Proposition } \ref{propositionSRB->estadistico}:
To prove that $K$ is a statistical attractor, by Definition \ref{definitionAtractorIlyashenko}, we have to prove that $m(A_K) >0$, where $m$ denotes the Lebesgue measure and $A_K$ denotes the basin of statistical attraction of $K$ defined in (\ref{eqnCuencaAtracIlyashenko}). Since $\mu$ is an SRB measure, by Definition \ref{definitionMedidaSRB}, the basin of statistical attraction $B(\mu)$ of $\mu$, given in Definition \ref{DefinicionCuencaDeAtraccionEstadistica}, satisfies $m(B(\mu)) >0$. Hence, it suffices to prove that $B(\mu) \subset A_K$. Let $x \in B(\mu)$. Then, by Definition \ref{DefinicionCuencaDeAtraccionEstadistica}, for every continuous function $\psi: M \mapsto \mathbb{R}$ we have:
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ f^j(x) = \int \psi \, d\mu. $$
In particular, taking $\psi(x) = \mbox{dist}(x, K)$, we have:
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist}(f^j(x), K) = \int \mbox{dist}(y, K)\, d \mu.$$
To conclude that $x \in A_K$ it suffices to prove that the integral in the above equality is zero, and for this it suffices to check that $\mbox{dist}(y, K)= 0$ for $\mu$-a.e. $y \in M$. Indeed, $\mu$-a.e. $y \in M$ belongs to $K$ because $\mu(K)= 1$. Thus, we have proved that $x \in A_K$ for every $x \in B(\mu)$; therefore $m(A_K) \geq m(B(\mu)) >0$, and $K$ is an Ilyashenko attractor.
\hfill $\Box$
\begin{definition}
{\bf Mean sojourn time} \label{definitionFrecuencia} \em
\index{mean sojourn time}
Let $V \subset M$ be a nonempty measurable set and let $x \in M$ be arbitrary. We call \em frequency of visits to $V $ \em of the orbit through $x$ up to time $n$ the following number $ \sigma_{n, x}(V)$ (cf. (\ref{(1)}) in Definition \ref{definitionProbaEmpiricas} of the empirical probabilities $\sigma_{n,x}$):
\begin{equation}\sigma_{n,x}(V) := \int \chi_V \, d \sigma_{n,x} = \frac{1}{n} \sum_{j= 0}^{n-1} \chi_{V}(f^j(x)) = \frac{\#\{0 \leq j \leq n-1\colon f^j(x) \in V \}}{n},\label{eqn-24}\end{equation}
where $\chi_V$ denotes the characteristic function of $V$ and $\#A$ denotes the number of elements of a finite set $A$.
The last term of equality (\ref{eqn-24}) shows that the mean sojourn time in $V$ of the orbit through $x$ up to time $n$ is the relative number of iterates of the point $x$ that fall inside the set $V$.
By Birkhoff's ergodic theorem, if $\mu$ is an invariant measure, then for $\mu$-a.e. $x \in M$ the mean sojourn time in $V$ of the orbit through $x$ up to time $n$ converges, as $n \rightarrow + \infty$, to $\widetilde \chi_V$, whose expected value $\int \widetilde \chi_V \, d \mu$ equals $\mu(V)$. Moreover, if the measure $\mu$ is ergodic, the mean sojourn time in $V$ tends to $\mu(V)$ for $\mu$-a.e. $x \in M$.
However, the statement of Birkhoff's Theorem and the properties of ergodic invariant measures will not be useful for the purposes of this section. We are interested in the mean sojourn time in $V$ for Lebesgue-a.e. point, even though the Lebesgue measure may not be invariant under $f$, and even though the points $x$ considered may not belong to the support of any invariant measure $\mu$.
\end{definition}
\begin{proposition}
\label{propositionAtraccionEstadisticaFrec}
{\bf Characterization of statistical attraction via the mean sojourn time in a neighborhood} \index{mean sojourn time} \index{attractor! statistical}
\index{basin of attraction! statistical}
Let $K$ be a nonempty compact set invariant under $f$. For every $\epsilon >0$, denote by $V (\epsilon)$ the set of points of the manifold $M$ at distance less than $\epsilon$ from $K$.
Then the basin of statistical attraction $A_K$ of the set $K$ \em (defined in equality (\ref{eqnCuencaAtracIlyashenko}) of Definition \ref{definitionAtractorIlyashenko}) \em is characterized by the following equality:
\begin{equation} \label{eqnCuencaAtracIlyashenkoFrec}A_K = \bigcap_{\epsilon>0} \{x \in M: \ \lim_{n \rightarrow + \infty} \sigma_{n,x} (V(\epsilon)) \ = \ 1\}.\end{equation}
\end{proposition}
{\bf Notes: } Equality (\ref{eqnCuencaAtracIlyashenkoFrec}) implies that $x \in A_K$ if and only if the probability (the relative frequency) of the event of finding the iterate $f^j(x)$, for some $0 \leq j \leq n-1$, in an arbitrarily small fixed neighborhood $V(\epsilon)$ of the Ilyashenko attractor $K$, tends to 1 as $n \rightarrow + \infty$. However, the attraction of the orbit through $x$ to the set $K$ is not necessarily topological attraction. For topological attraction, the event of finding the iterate $f^j(x)$ in the neighborhood $ V(\epsilon)$ must occur with total certainty (not just with probability close to 1, nor even with probability 1) for every $j$ large enough.
Equality (\ref{eqnCuencaAtracIlyashenkoFrec}) therefore gives the following intuitive interpretation of statistical attraction to an Ilyashenko attractor $K$. The orbits in its basin of statistical attraction $A_K$ get as close to the set $K$ as desired as the iterate $n$ tends to infinity, but exceptional iterates are allowed, with a negligible relative frequency for arbitrarily large $n$, at which the orbit \lq\lq takes a relatively very short excursion (a vacation)\rq\rq, moving away from the attractor $K$ during the excursion (see Example \ref{ejemploHuYoung}).
{\em Proof }
{\em of Proposition } \ref{propositionAtraccionEstadisticaFrec}:
{\bf (i)} Let us prove that if $x \in A_K$ then
$\lim_{n \rightarrow + \infty} \sigma_{n,x}(V(\epsilon)) = 1$ for every $\epsilon >0$. Fix $\epsilon >0$. For simplicity, we write $V$ instead of $ V(\epsilon)$, since $\epsilon $ is fixed. We have:
$$ {\epsilon \cdot \chi_{M \setminus V}(y) } \leq \mbox{dist}(y, K) \ \ \forall \ y \in M,$$
since, for $y \in V$, the left-hand side is zero, and for $y \not \in V$, the left-hand side is $\epsilon \leq \mbox{dist}(y, K)$. Then:
$$0 \leq \epsilon \cdot \frac{1}{n}\sum_{j= 0}^{n-1}\big(1- \chi_V(f^j(x))\big) \leq \frac{1}{n} \sum_{j= 0}^{n-1}\mbox{dist}(f^j(x), K).$$
Since $x \in A_K$ by hypothesis, the limit as $n \rightarrow + \infty$ of the right-hand side of the above inequality is zero. We conclude that
$$0= \epsilon \cdot \lim_{n \rightarrow + \infty}\frac{1}{n}\sum_{j= 0}^{n-1}\big(1- \chi_V(f^j(x))\big) = \epsilon \cdot \lim_{n \rightarrow + \infty} \Big(1 - \frac{1}{n} \sum_{j= 0}^{n-1} \chi_V(f^j(x)) \Big) = \epsilon \cdot \lim_{n \rightarrow + \infty} \big(1 - \sigma_{n,x}(V)\big).$$
Since $\epsilon >0$, we conclude that $\lim_{n \rightarrow + \infty} \sigma_{n,x}(V) = 1$, as we wanted to prove.
{\bf (ii)} Let us prove that if $\lim_{n \rightarrow + \infty} \sigma_{n,x}(V(\epsilon)) = 1$ for every $\epsilon >0$, then $x \in A_K$. Fix $\epsilon >0$ and, for simplicity, denote $V= V(\epsilon)$.
Let $$D:= \mbox{diam}(M) = \max\{\mbox{dist}(x,y)\colon x,y \in M\} >0.$$ Then:
$$\mbox{dist}(y, K) \leq D \cdot \chi_{M \setminus V}(y) \ \ \forall \ y \not \in V, $$ $$\mbox{dist}(y,K) < \epsilon \cdot \chi_V(y) \ \ \forall \ y \in V. $$
Hence $$\mbox{dist} (y,K) \leq \epsilon \cdot \chi_V(y) + D \cdot \chi_{M \setminus V}(y) \ \ \forall \ y \in M.$$
Consider $x$ satisfying the hypothesis. From the last inequality we obtain:
$$0 \leq \frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist}(f^j(x), K) \leq \epsilon \cdot \frac{1}{n} \sum_{j= 0}^{n-1}\chi_{V}(f^j(x)) + D \cdot \frac{1}{n} \sum_{j= 0}^{n-1} \chi_{M \setminus V}(f^j(x)) = \epsilon \cdot \sigma_{n,x}(V) + D \cdot (1- \sigma_{n,x}(V)). $$
By hypothesis, $\lim_{n \rightarrow + \infty} \sigma_{n,x}(V) = 1$. Since the positive real numbers $\epsilon$ and $D$ are fixed (independent of $n$), we deduce that $$\limsup_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist}(f^j(x), K) \leq \epsilon \cdot 1 + D \cdot 0 = \epsilon,$$ and since $\epsilon > 0$ is arbitrary,
$$ \lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist}(f^j(x), K) = 0. $$
The last equality means, by condition (\ref{eqnCuencaAtracIlyashenko}) of Definition \ref{definitionAtractorIlyashenko}, that $x \in A_K$, as we wanted to prove.
\hfill $\Box$
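The characterization by mean sojourn times can be observed numerically. The following sketch is only an illustration (the map, the starting point, $\epsilon$ and the iteration counts are arbitrary choices): it computes $\sigma_{n,x}(V(\epsilon))$ for the rectangle map of Example \ref{ejemploMilnorNoTopologico} and a point in the basin of $K_2 = \{(4\pi, 0)\}$.

```python
import math

def f(p):
    """Map of the rectangle example: f(x, y) = (1 + x - cos x, y / 2)."""
    x, y = p
    return (1.0 + x - math.cos(x), y / 2.0)

def sojourn_frequency(p, attractor, eps, n):
    """sigma_{n,p}(V(eps)): fraction of the first n iterates of p
    lying within distance eps of the point `attractor`."""
    hits = 0
    for _ in range(n):
        if math.hypot(p[0] - attractor[0], p[1] - attractor[1]) < eps:
            hits += 1
        p = f(p)
    return hits / n

K2 = (4.0 * math.pi, 0.0)
x0 = (2.0 * math.pi + 0.1, 1.0)  # a point in the basin of K2

freq_small = sojourn_frequency(x0, K2, eps=0.1, n=100)
freq_large = sojourn_frequency(x0, K2, eps=0.1, n=5000)
```

As $n$ grows, the frequency of visits to the $\epsilon$-neighborhood approaches 1, in agreement with equality (\ref{eqnCuencaAtracIlyashenkoFrec}).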
\subsection{Existence of Ilyashenko attractors}
\begin{definition}
{\bf $\alpha$-obs. minimality of a statistical attractor \cite{CatIlyshenkoAttractors}} \label{definitionAtractorIlyashenkoMinimal} \index{attractor! Ilyashenko! $\alpha$-obs minimal} \index{attractor! $\alpha$-observable}
\index{attractor! statistical! $\alpha$-obs minimal}
\index{set! minimal $\alpha$-obs}
\em Let a real number $0 <\alpha \leq 1$ be given. A statistical or Ilyashenko attractor $K$ is called \em $\alpha$-observable \em (we write \lq\lq \em $K$ is $\alpha$-obs\rq\rq.\em) if its basin $A_K$ of statistical attraction satisfies
$$m(A_K) \geq \alpha.$$
An $\alpha$-obs. statistical or Ilyashenko attractor $K$ is called \em $\alpha$-obs. minimal \em if it contains no proper nonempty compact subsets that are $\alpha$-obs. statistical or Ilyashenko attractors for the same value of $\alpha$.
In particular, for $\alpha= 1$, we have defined the $1$-observable and the $1$-observable minimal statistical or Ilyashenko attractors.
Note that every $1$-observable statistical or Ilyashenko attractor is $\alpha$-observable for every $0 < \alpha \leq 1$. But a $1$-obs. minimal statistical or Ilyashenko attractor need not be $\alpha$-obs. minimal for every $0 < \alpha < 1$.
\end{definition}
\begin{remark} \em {\bf On minimal sets} \index{set! minimal}
Recall the characterization of minimal sets from the topological point of view, given in Definition \ref{definicionMinimalTopologico}: a compact, nonempty, $f$-invariant set $K$ is minimal for $f$, from the topological point of view, if it contains no proper nonempty compact subsets that are forward invariant under $f$.
We observe that not every compact set $K$ that is minimal for $f$ from the topological point of view is an $\alpha$-obs. minimal Ilyashenko attractor for some $0 < \alpha \leq 1$ (see Example \ref{ejemploHuYoung} or also \cite{klepstynMinStatAttr}).
Conversely, not every $\alpha$-obs. minimal Ilyashenko attractor is necessarily a compact set that is minimal for $f$ from the topological point of view (see Example \ref{ejemploAnosovlineal}).
In \cite{Ilyashenkogorodetski} and \cite{klepstynMinStatAttr}, relations between minimality for $f$ from the topological point of view and the statistical or Ilyashenko attractors of $f$ are studied.
\end{remark}
\begin{theorem} {\bf Existence of Ilyashenko attractors} \label{TeoExistenciaAtrIlyashenko} \index{attractor! Ilyashenko}
\index{attractor! statistical}
\index{attractor! statistical! $\alpha$-obs minimal}
\index{theorem! existence of! statistical attractor}
\index{theorem! existence of! Ilyashenko attractor}
Let $f: M \mapsto M$ be continuous on a compact Riemannian manifold $M$ of finite dimension. Let $0 < \alpha \leq 1$ be given. Then there exist $\alpha$-obs. minimal statistical or Ilyashenko attractors for $f$. Moreover, if $\alpha= 1$, the $1$-obs. minimal Ilyashenko attractor is unique.
\end{theorem}
The proof of Theorem \ref{TeoExistenciaAtrIlyashenko} follows the same arguments as the proof of Theorem \ref{TeoExistenciaAtrMilnor} on the existence of Milnor attractors, with slight adaptations.
{\em Proof: } Let ${\aleph}_{\alpha}$ be the family of $\alpha$-obs. Ilyashenko attractors (not necessarily minimal). This family is nonempty since, trivially, $M$ is a $1$-obs. Ilyashenko attractor, and hence it is $\alpha$-obs. for every $0 < \alpha \leq 1$. On $\aleph_{\alpha}$ we consider the partial order relation $K_1 \subset K_2$. From condition (\ref{eqnCuencaAtracIlyashenko}), taking into account that $\mbox{dist}(y, K_2) \leq \mbox{dist}(y, K_1)$ for all $y \in M$ whenever $K_1 \subset K_2$, the basins of statistical attraction $A_{K_1}$ and $A_{K_2}$ satisfy $$A_{K_1} \subset A_{K_2}, \ \ \alpha \leq m(A_{K_1}) \leq m(A_{K_2}).$$
Let $\{K_i\}_{i \in I}$ be a chain in $\aleph_{\alpha}$ (not necessarily countable). That is, $\{K_i\}_{i \in I}$ is a totally ordered subset of $\aleph_{\alpha}$ with respect to the order relation $\subset$.
Let us prove that:
{\bf Claim (i)} (To be proved) \em There exists in $\aleph_{\alpha}$ a lower bound $K$ of the chain $\{K_i\}_{i \in I}$. \em That is, let us prove that there exists $K \in \aleph_{\alpha}$ with $ K \subset K_i$ for all $i \in I$.
Indeed, the set $K= \bigcap_{i \in I} K_i$ is compact and nonempty, because every finite subcollection $K_{1} \supset K_2 \supset \ldots \supset K_l$ of the given chain $\{K_i\}_{i \in I}$ has intersection $K_1$, which is a nonempty compact set. To prove that $K \in {\aleph}_{\alpha}$, we must now prove that $m(A_K) \geq \alpha$. Let $j \in \mathbb{N}^+$ and let $V_j \supset K$ be the open set of all points of $M$ whose distance to $K$ is smaller than $1/j$. In the proof of Theorem \ref{TeoExistenciaAtrMilnor} we established assertion (\ref{eqn-25}): $$K_{i_j} \subset V_j \mbox{ for some } i_j \in I.$$ Since every point $x \in A_{K_{i_j}}$ satisfies $\lim_{n} \sigma_{n,x}(M \setminus V_j) = 0$ (cf. Proposition \ref{propositionAtraccionEstadisticaFrec}), and this argument holds for every $j \in \mathbb{N}^+$, every point of $\bigcap _{j \in {\mathbb{N}^+}} A_{K_{i_j}}$ belongs to $A_K$. Conversely, every point of $A_{K}$ is contained in $A_{K_i}$ for all $i \in I$ (because $K \subset {K_i}$). In particular this holds for $i_j$, for every $j \in \mathbb{N}^+$. Hence:
$$A_K = \bigcap _{j \in {\mathbb{N}^+}} A_{K_{i_j}}.$$
Since the countable collection $A_{K_{i_j}}$ is totally ordered, we obtain $$m(A_K) = m \big(\bigcap_{j \in {\mathbb{N}^+}} A_{K_{i_j}}\big) = \lim_{j \rightarrow + \infty} m(A_{K_{i_j}}) \geq \alpha,$$
which finishes the proof of Claim (i).
From Claim (i) we deduce that every chain in $\aleph_{\alpha}$ has a lower bound $K \in \aleph_{\alpha}$. By Zorn's Lemma, there exist minimal elements of $\aleph_{\alpha}$. That is, there exists $K \in \aleph_{\alpha}$ that contains no proper subsets belonging to $\aleph_{\alpha}$. In other words, there exists an $\alpha$-obs. minimal Ilyashenko attractor $K$.
Now let us prove the uniqueness of the $1$-obs. minimal Ilyashenko attractor. If there existed two Ilyashenko attractors $K_1$ and $K_2$ that were $1$-obs. minimal, then the intersection $A$ of their basins of statistical attraction, $A:= A_{K_1} \cap A_{K_2} $, would satisfy $$m(A)= 1,$$
because $m(A_{K_1})= m(A_{K_2}) = 1$. Every point $x \in A$ satisfies, by Proposition \ref{propositionAtraccionEstadisticaFrec} characterizing the basins $A_{K_1}$ and $A_{K_2}$, the following property:
\em For every $0 <\epsilon <1/2$ there exists $N \geq 1$ such that \em
$$\sigma_{n,x}(V_{\epsilon}(K_1)), \ \ \sigma_{n,x}(V_{\epsilon}(K_2)) > 1- (\epsilon/2) \ \ \forall \ n \geq N,$$
where $V_{\epsilon}(K_i)$ denotes the set of points of the manifold $M$ whose distance to the compact set $K_i$ is smaller than $\epsilon$, and $\sigma_{n,x}$ denotes the empirical probability defined in equality (\ref{(1)}), Definition \ref{definitionProbaEmpiricas}.
Hence, for each $n \geq N$, there exist more than $ n \cdot (1- (\epsilon/2))\geq (3/4)\, n$ iterates $f^j(x)$ with times $j \in \{0, 1, \ldots, n-1\}$ such that $f^j(x) \in V_{\epsilon}(K_1)$. Indeed, if the number of such iterates $f^j(x)$ with times $j \in \{0, 1, \ldots, n-1\}$ were at most $n \cdot (1- (\epsilon/2))$, then we would have $\sigma_{n,x}(V_{\epsilon}(K_1)) \leq 1- (\epsilon/2) $. Analogously, there exist more than $n \cdot (1- (\epsilon/2))$ iterates $f^h(x)$ with times $h \in \{0, 1, \ldots, n-1\}$ such that $f^h(x) \in V_{\epsilon}(K_2)$. Since the number of possible iterates with times in $\{0,1, \ldots, n-1\}$ is at most $n$, and since $2\, n \cdot (1- (\epsilon/2)) \geq (3/2)\, n > n$, there must exist common iterates $f^j(x)= f^h(x) \in V_{\epsilon}(K_1) \cap V_{\epsilon}(K_2)$. Take one of these common iterates $f^j(x)$. We have
$$\mbox{dist}(K_1, K_2) \leq \mbox{dist}(f^j(x), K_1) + \mbox{dist}(f^j(x), K_2) < 2 \epsilon. $$ Then $ \mbox{dist}(K_1, K_2) < 2 \epsilon \ \ \forall \ 0 <\epsilon < 1/2$, whence $$K:= K_1 \bigcap K_2 \neq \emptyset.$$ Although it is not immediate, it is standard to check that, $K_1$ and $K_2$ being nonempty compact sets with nonempty intersection, if a point $x$ satisfies
$$\lim_{n \rightarrow + \infty} \sigma_{n,x} (M \setminus V_{\epsilon}(K_1))=\lim_{n \rightarrow + \infty} \sigma_{n,x} (M \setminus V_{\epsilon}(K_2))= 0 \ \ \ \forall \ \epsilon >0, $$
then $$\lim_{n \rightarrow + \infty} \sigma_{n,x} (M \setminus V_{\epsilon}(K_1 \cap K_2))=0 \ \ \forall \ \epsilon >0.$$
(Check this last assertion in part (a) of Exercise \ref{exercise5Ilyashenko}.)
Hence $$A:= A_{K_1} \bigcap A_{K_2} \subset A_{K_1 \cap K_2}, $$ and since $m(A)= 1$, we deduce that $m(A_{K_1 \cap K_2})= 1$. Then $K_1 \cap K_2$ is a $1$-obs. Ilyashenko attractor. Since $K_1$ and $K_2$ were $1$-obs. minimal Ilyashenko attractors, we conclude that $K_1 \cap K_2 = K_1 = K_2$, which finishes the proof of the uniqueness of the $1$-obs. minimal Ilyashenko attractor, and of Theorem \ref{TeoExistenciaAtrIlyashenko}.
\hfill $\Box$
\begin{exercise}\em \label{exercise5Ilyashenko} For a nonempty compact set $K \subset M$ and for $\epsilon >0$, we denote by $V_{\epsilon}(K) $ the set of points of $M$ whose distance to $K$ is smaller than $\epsilon$. We denote by $\sigma_{n,x}$ the empirical probabilities of the orbit with initial state $x$ up to time $n$, according to equality (\ref{(1)}) given by Definition \ref{definitionProbaEmpiricas}.
{\bf (a)} Prove that if $K_1$ and $K_2$ are nonempty compact sets, and if there exists a point $x \in M$ such that
$$\lim_{n \rightarrow + \infty} \sigma_{n,x} (M \setminus V_{\epsilon} (K_1))= \lim_{n \rightarrow + \infty} \sigma_{n,x} (M \setminus V_{\epsilon} (K_2))= 0 \ \ \forall \ \epsilon >0, $$
then $K_1 \cap K_2 \neq \emptyset$ and $$\lim_{n \rightarrow + \infty} \sigma_{n,x} (M \setminus V_{\epsilon} (K_1 \cap K_2)) = 0 \ \ \forall \ \epsilon >0.$$
{\bf (b)} Prove that if $K_1$ and $K_2$ are two Ilyashenko attractors such that $m(A_{K_1} \cap A_{K_2}) >0$, then $K_1 \cap K_2$ is nonempty and is an Ilyashenko attractor whose basin of statistical attraction is $$A_{K_1 \cap K_2} = A_{K_1} \cap A_{K_2}.$$
\end{exercise}
\subsection{Examples of statistical attractors}
\begin{example} \em \index{automorphism! linear, on the torus}
\label{ejemploAnosovlineal} Let $f_0 \in \mbox{Diff }^{\infty}(\mathbb{T}^2)$ be the \lq\lq Arnold cat\rq\rq \ map on the two-dimensional torus given in the example of Section \ref{section2111}, namely the hyperbolic linear automorphism with associated matrix $$ \Big(
\begin{array}{cc}
2 & 1 \\
1 &1 \\
\end{array}
\Big) \Big|_{\mbox{\footnotesize mod} (1,1)}.
$$ Recalling what was discussed in Section \ref{section2111}: $f_0$ is a linear Anosov diffeomorphism, it has $(0,0)$ as a fixed point, and it preserves the Lebesgue measure $m$ (rescaled so that $m(\mathbb{T}^2)= 1$).
Let us prove that the unique statistical attractor (in particular the unique $\alpha$-obs. minimal statistical attractor for every $0 <\alpha \leq 1$) is $K= \mathbb{T}^2$. Indeed, in Corollary \ref{corollarySRBanosov} we proved that $m$ is ergodic for $f_0$. Hence $m$-a.e. point $x \in \mathbb{T}^2$ belongs to the basin of statistical attraction $B(m)$ of $m$. (Recall that an invariant probability measure $\mu$ is ergodic if and only if $\mu(B(\mu)) = 1$.)
Let $K$ be a statistical attractor. By definition, its basin of statistical attraction $A_K$ has positive Lebesgue measure. Hence $$x \in B(m) \ \mbox{ for } m-\mbox{a.e. } x \in A_K.$$
Take and fix $x \in B(m) \cap A_K$. Since $x \in B(m)$ we have:
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f_0^j(x)} = m,$$
in the weak$^*$ topology of the space ${\mathcal M}$ of probabilities. Then for every continuous function $\psi$
we have
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \psi \circ f_0^j(x) = \int \psi \, d m. $$
In particular, if we choose the continuous function $\psi$ defined by
$$\psi(x) = \mbox{dist}(x, K) \ \ \ \forall \ x \in \mathbb{T}^2,$$
we obtain
$$\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1}\mbox{dist}(f_0^j(x), K) = \int \psi \, d m.$$
Since $x \in A_K$, by Definition \ref{definitionAtractorIlyashenko} the limit on the left-hand side is zero. We deduce that
$$\int \psi \, d m = 0.$$
But $\psi \geq 0$. Hence its integral vanishes if and only if $\psi = 0$ at $m$-a.e. point. A continuous function that vanishes Lebesgue-almost everywhere is identically zero. We deduce that $$0 =\psi(x)=\mbox{dist}(x, K) \ \ \ \forall \ x \in \mathbb{T}^2.$$
Since $K$ is compact, $\mbox{dist}(x, K)= 0$ if and only if $x \in K$. We conclude that $x \in K$ for all $x \in \mathbb{T}^2$, that is $K= \mathbb{T}^2$, which proves that the unique Ilyashenko attractor in this example is the whole torus.
Therefore the set $K_0:=\{(0,0)\}$ is not an Ilyashenko attractor. But $K_0$ is minimal for $f_0$ from the topological point of view, because $(0,0)$ is a fixed point of $f_0$ (i.e. $K_0$ is compact, nonempty and $f_0$-invariant, and contains no proper subsets with these three properties). Thus, in this example, there is a minimal set (from the topological point of view) that is not an Ilyashenko attractor. Moreover, no minimal set from the topological point of view can be an Ilyashenko attractor: indeed, we proved above that the unique statistical attractor is the whole torus, but the whole torus is not minimal from the topological point of view, since it properly contains $\{(0,0)\}$, which is $f_0$-invariant.
\end{example}
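The computation in the example above can be checked numerically. The following Python sketch (our illustration, not part of the original argument) iterates the cat map in floating point and estimates the Birkhoff average of the distance to the fixed point $(0,0)$. For a typical initial point the average stays close to $\int \mbox{dist}(\cdot,(0,0))\, dm \approx 0.38 > 0$, so condition (\ref{eqnCuencaAtracIlyashenko}) fails for $K_0 = \{(0,0)\}$. We assume here, as is empirically reliable for this map, that floating-point round-off acts as harmless noise, so the simulated orbit is only a statistical proxy for a true Lebesgue-typical orbit.

```python
import math

def cat(x, y):
    # Arnold cat map on the 2-torus: (x, y) -> (2x + y, x + y) mod 1
    return (2.0 * x + y) % 1.0, (x + y) % 1.0

def torus_dist_to_origin(x, y):
    # distance on the torus from (x, y) to the fixed point (0, 0)
    dx = min(x, 1.0 - x)
    dy = min(y, 1.0 - y)
    return math.hypot(dx, dy)

def birkhoff_avg_dist(x0, y0, n):
    # time average (1/n) * sum_{j<n} dist(f^j(x0, y0), {(0,0)})
    x, y, total = x0, y0, 0.0
    for _ in range(n):
        total += torus_dist_to_origin(x, y)
        x, y = cat(x, y)
    return total / n

avg = birkhoff_avg_dist(0.2137, 0.5813, 200_000)
# For Lebesgue-a.e. initial point the average approaches
# ∫ dist(., (0,0)) dm ≈ 0.38 > 0, so {(0,0)} is not a statistical attractor.
print(avg)
```

The same experiment with $K = \mathbb{T}^2$ trivially gives average distance $0$, in agreement with the conclusion that the whole torus is the unique Ilyashenko attractor here.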
\begin{remark} \em \index{probability! empirical} \index{measure! empirical probability}
The proof we gave in Example \ref{ejemploAnosovlineal} of the existence of $\alpha$-obs. minimal Ilyashenko attractors that are not minimal from the topological point of view was based on Corollary \ref{corollarySRBanosov}, which in turn is obtained from Pesin Theory and from the theory of SRB measures for diffeomorphisms of class $C^{1+ \alpha}$. More precisely, the argument used in Example \ref{ejemploAnosovlineal} went through deducing the convergence of the sequence of empirical probabilities $$\sigma_{n,x} = \frac{1}{n} \sum_{j= 0}^{n-1} \delta_{f^j(x)}$$ for $m$-a.e. $x $ in the basin of statistical attraction $A_K$ of the Ilyashenko attractor $K$. However, in general (when the arguments of Pesin Theory are not applicable), \em the sequence $\{\sigma_{n,x}\}_{n \geq 1}$ need not be convergent \em on a set of positive Lebesgue measure contained in the basin $A_K$ of statistical attraction of $K$ (see for instance \cite{Golenishcheva}).
In the next section we will take into account the possible non-convergence of the sequence of empirical probabilities in order to define the SRB-like measures (Definition \ref{definitionMedidaSRBlike}). Finally, we will characterize the $1$-obs. minimal Ilyashenko attractors by studying the SRB-like measures. Other relations between the eventual convergence or non-convergence of the sequence of empirical probabilities and the statistical or Ilyashenko attractors can be found, for instance, in \cite{AshwinStatAttr}.
\end{remark}
\begin{example} \em
\label{ejemploIlyashenkoNoTopologico} \label{ejemploHuYoung} {\bf Hu-Young \cite{HuYoung}}
Let $f_0 \in \mbox{Diff }^{\infty}(\mathbb{T}^2)$ be the linear Anosov diffeomorphism on the two-dimensional torus given in the example of Section \ref{section2111}, with associated matrix $$ \Big(
\begin{array}{cc}
2 & 1 \\
1 &1 \\
\end{array}
\Big) \Big|_{\mbox{\footnotesize mod} (1,1)}.
$$
In \cite{HuYoung} an isotopy $\{f_t\}_{t \in [0,1]}$ of diffeomorphisms $f_t \in \mbox{Diff }^{2}(\mathbb{T}^2)$ is constructed, which continuously deforms, within the space $\mbox{Diff }^{2}(\mathbb{T}^2)$, the linear Anosov diffeomorphism $f_0$ into $f_1$, in such a way that:
$\bullet$ $f_t$ is an Anosov diffeomorphism for all $0 \leq t < 1$.
$\bullet$ $f_t(0,0) = (0,0) $ for all $0 \leq t \leq 1$.
$\bullet$ $f_t$ is conjugate to $f_0$ for all $0 \leq t \leq 1$; that is, there exists a homeomorphism $h_t: \mathbb{T}^2 \mapsto \mathbb{T}^2$ such that $$f_t \circ h_t = h_t \circ f_0.$$
$\bullet$ For $t= 1$ the derivative $df_1(0,0)$ at the fixed point $(0,0)$ has a diagonalizable associated matrix, with one eigenvalue equal to 1 and the other positive but strictly smaller than 1. This implies that the fixed point $(0,0)$ is not hyperbolic for $f_1$ (for the parameter $t= 1$). It loses hyperbolicity in the direction that was expanding for parameter values $t < 1$, but not in the contracting direction.
Moreover, the construction in \cite{HuYoung} imposes further conditions on $f_1$ that allow one to deduce the so-called quasi-hyperbolicity on $\mathbb{T}^2\setminus \{(0,0)\}$.
In this example constructed in \cite{HuYoung}, the authors show that the measure $\delta_{(0,0)}$ concentrated at the origin (which is $f_1$-invariant because $(0,0)$ is a fixed point) is SRB or physical for $f_1$ (according to our Definition \ref{definitionMedidaSRB}, but not according to what the authors of \cite{HuYoung} call an SRB measure). They also show that the basin of statistical attraction $B(\delta_{(0,0)})$ covers Lebesgue-a.e. point of the torus $\mathbb{T}^2$. Hence $\delta_{(0,0)}$ is the unique SRB or physical measure, since no other measure can have a basin of statistical attraction with positive Lebesgue measure (recall that, by Definition \ref{DefinicionCuencaDeAtraccionEstadistica}, the basins of statistical attraction of different measures are disjoint).
Applying Proposition \ref{propositionSRB->estadistico}, we deduce that \em $ \{(0,0)\}$ is the unique Ilyashenko attractor of $f_1$. \em Therefore $ \{(0,0)\}$ is the unique $\alpha$-obs. minimal Ilyashenko attractor for every $0 < \alpha \leq 1$.
Since $f_1$ is conjugate to $f_0$ and $f_0$ is transitive, $f_1$ is transitive. Applying what was proved in Exercise \ref{exerciseTransitivoAtrTop}, the unique topological attractor for $f_1$ is ${\mathbb{T}^2}$. Hence $ \{(0,0)\}$ is a statistical or Ilyashenko attractor that \em is not a topological attractor.\em
By Proposition \ref{propositionAtraccionEstadisticaFrec}, the frequency of visits to an arbitrarily small neighborhood of the origin, for Lebesgue-almost every orbit, tends to 1 as the number $n$ of iterates tends to infinity. Nevertheless, the origin is not a topological attractor; that is, it is not a sink. On the contrary, the dynamics in a neighborhood of the origin is locally conjugate to that of a hyperbolic fixed point of saddle type, since the origin is a hyperbolic saddle for $t= 0$ and $f_1$ is conjugate to $f_0$. So, even though the frequency of sojourn in a neighborhood of the origin tends to 1 for Lebesgue-almost every orbit, every point that does not lie on the local stable manifold of the origin eventually leaves that neighborhood, making brief excursions away from it.
We conclude: if the experimenter observing the dynamics aims to observe the statistics of Lebesgue-almost every orbit (that is, the time averages of the distances to the attractor), then he will observe that the dynamical system generated by the iterates of $f_1$ behaves as if it had an attracting fixed point, attracting Lebesgue-almost every orbit as if it were a sink.
If instead the experimenter aims to observe the dynamical topology of Lebesgue-almost every orbit (that is, the $\omega$-limit sets towards which these orbits tend under forward iteration), then he will observe that the dynamical system generated by the iterates of $f_1$ behaves like the hyperbolic linear automorphism on the torus, for which the whole torus is the unique transitive attractor of Lebesgue-almost every orbit.
In the first case, the statistical experimenter will not qualify this system $f_1$ as chaotic, since it is highly predictable from the statistical point of view (i.e. that of time averages) for Lebesgue-almost every orbit. In the second case, the topological experimenter will qualify it as chaotic, since it will be impossible for him to predict in which open subset of the space the $n$-th iterate will lie, for Lebesgue-almost every orbit.
\end{example}
\subsection{SRB-like or pseudo-physical measures}
In this section, as in the two previous ones, $M$ is a compact Riemannian manifold of finite dimension and $f: M \mapsto M$ is continuous (not necessarily invertible).
\begin{definition}
\label{definitionpomegalimit} {\bf The $p\omega$-limit set of probabilities} \index{$p\omega(x)$ p-omega limit in ${\mathcal M}$} \index{probability! empirical}
\em Let $x \in M$, let ${\mathcal M}$ be the space of Borel probability measures on $M$ with the weak$^*$ topology, and let $\{\sigma_{n,x}\}_{n \geq 1} \subset {\mathcal M}$ be the sequence of empirical probabilities of the forward orbit of $x$ up to time $n$, defined in (\ref{(1)}) and Definition \ref{definitionProbaEmpiricas}.
Since ${\mathcal M}$ is sequentially compact, there exist convergent subsequences of $\{\sigma_{n,x}\}_{n \geq 1}$.
We call \em $p$-omega-limit \em of the orbit of $x$, or \em omega-limit in the space of probabilities \em of the orbit of $x$, the set $p\omega(x)$ of the limits of all convergent subsequences of $\{\sigma_{n,x}\}_{n \geq 1}$. That is
\begin{equation}
\label{eqnpomegalimit}
p\omega(x):= \{\mu \in {\mathcal M}\colon \ \ \exists n_i \rightarrow + \infty \mbox{ such that } \lim_{i \rightarrow + \infty} \sigma_{n_i,x} = \mu \},
\end{equation}
where the limit on the right is taken in the weak$^*$ topology of the space ${\mathcal M}$ of probabilities.
It is standard to check that, for every $x \in M$, $p\omega(x) \subset {\mathcal M}$ is a nonempty and weak$^*$-closed set (hence weak$^*$-compact, since ${\mathcal M}$ is weak$^*$-compact).
\end{definition}
\begin{exercise}\em
Prove the last two assertions of Definition \ref{definitionpomegalimit}.
\end{exercise}
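As a concrete illustration of Definition \ref{definitionpomegalimit} (a sketch of ours, with an irrational rotation chosen as the example map), one can follow the integrals of a fixed test function against the empirical probabilities $\sigma_{n,x}$. For an irrational rotation of the circle the whole sequence $\{\sigma_{n,x}\}_{n\geq 1}$ converges in the weak$^*$ topology, so $p\omega(x)$ reduces to a single measure, Lebesgue on the circle:

```python
import math

def rotation(x, alpha=math.sqrt(2) - 1.0):
    # irrational rotation of the circle: x -> x + alpha (mod 1)
    return (x + alpha) % 1.0

def empirical_integral(psi, x0, n, f=rotation):
    # ∫ psi dσ_{n,x0} = (1/n) Σ_{j<n} psi(f^j(x0)):
    # the integral of a test function against the empirical probability
    x, total = x0, 0.0
    for _ in range(n):
        total += psi(x)
        x = f(x)
    return total / n

psi = lambda x: math.cos(2.0 * math.pi * x)   # test function with ∫ psi dLeb = 0
vals = [empirical_integral(psi, 0.1, n) for n in (10, 1_000, 100_000)]
# Along the whole sequence the integrals approach ∫ psi dLeb = 0,
# so here pω(x) reduces to the single measure Leb (unique ergodicity).
print(vals)
```

For maps whose empirical probabilities do not converge, the same computation along different subsequences $n_i$ would detect distinct elements of $p\omega(x)$.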
Recall Definition \ref{DefinicionCuencaDeAtraccionEstadistica} of the basin of statistical attraction $B(\mu)$ of a probability measure $\mu$, and Definition \ref{definitionMedidaSRB} of SRB or physical measure. We now generalize these two definitions by adding an $\epsilon$-approximation. To this end we choose and fix a metric $\mbox{dist}^*$ on the space ${\mathcal M}$ of probabilities inducing the weak$^*$ topology (cf. Theorem \ref{teoremaCompacidadEspacioProbabilidades}).
\begin{definition}
\label{DefinicionCuencaDeAtraccionEpsilonEstadistica}{\bf $\epsilon$-weak basin of statistical attraction} \em \index{basin of attraction! $\epsilon$-weak statistical} \index{$A_{\epsilon}(\mu)$ basin of! $\epsilon$-weak statistical attraction! of the measure $\mu$}
Given a probability measure $\mu$ and given $\epsilon >0$, we call \em $\epsilon$-weak basin of statistical attraction \em the set $A_{\epsilon}(\mu)$ defined by:
\begin{equation}
\label{equationAepsilon(mu)}
A_{\epsilon}(\mu):= \{x \in M: \ \mbox{dist}^* (p \omega(x), \mu) < \epsilon\}. \end{equation}
Let us compare this Definition \index{basin of attraction! statistical} \ref{DefinicionCuencaDeAtraccionEpsilonEstadistica} with Definition \ref{DefinicionCuencaDeAtraccionEstadistica} of the (strong) basin of statistical attraction $B(\mu)$ of a measure $\mu$. Indeed, combining equalities (\ref{eqnpomegalimit}) and (\ref{equationB(mu)}), we obtain:
$$B(\mu)= \big\{x \in M: \ \ p\omega (x) = \{\mu\}\big\}.$$
Hence \begin{equation}
\label{equationB(mu)subsetAepsilon(mu)}
B(\mu) \subset A_{\epsilon}(\mu) \ \ \ \forall \ \ \epsilon > 0.\end{equation}
In the right-hand side of equality (\ref{equationAepsilon(mu)}), we observe that the distance between the compact set $p\omega(x)$ and the point $\mu \in {\mathcal M}$ is smaller than $\epsilon$. But this does not imply that the whole set $p \omega(x)$ (when it contains more than one point) must be contained in the ball of center $\mu$ and radius $\epsilon$. Therefore, although $\bigcap_{\epsilon >0} A_{\epsilon}(\mu)$ may be nonempty, this set contains, \em but does not necessarily coincide with, \em $B(\mu)$, which may even be empty.
\end{definition}
\begin{definition}
\label{definitionMedidaSRBlike}{\bf SRB-like or pseudo-physical measure} \em \index{measure! SRB-like} \index{measure! pseudo-physical}
We call a probability measure $\mu$ \em SRB-like or pseudo-physical \em if {\bf for every} $\epsilon >0$ its $\epsilon$-weak basin of statistical attraction $A_{\epsilon}(\mu)$ has positive Lebesgue measure. In short:
$$m(A_{\epsilon}(\mu)) >0 \ \ \forall \ \epsilon >0,$$
where $m$ denotes the Lebesgue measure on the manifold $M$.
\end{definition}
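The definition above relies on a fixed metric $\mbox{dist}^*$ metrizing the weak$^*$ topology. One standard concrete choice (an assumption of this sketch, not necessarily the metric fixed in the text) is a weighted sum of differences of integrals over a countable family of test functions, truncated to finitely many terms. The following Python sketch evaluates such a proxy distance between the empirical probability of a cat-map orbit, projected to its first coordinate, and a sample of Lebesgue measure on $[0,1)$; equidistribution of the simulated orbit is assumed, as in the numerical illustration of the linear Anosov example.

```python
import math

def cat(x, y):
    # Arnold cat map on the 2-torus: (x, y) -> (2x + y, x + y) mod 1
    return (2.0 * x + y) % 1.0, (x + y) % 1.0

def test_functions(kmax=8):
    # truncated trigonometric family psi_k, separating measures on [0, 1)
    fams = [lambda x: 1.0]
    for k in range(1, kmax + 1):
        fams.append(lambda x, k=k: math.cos(2.0 * math.pi * k * x))
        fams.append(lambda x, k=k: math.sin(2.0 * math.pi * k * x))
    return fams

def weak_star_dist(sample_mu, sample_nu):
    # proxy for dist*(mu, nu) = sum_k 2^{-k} |∫psi_k dmu - ∫psi_k dnu|,
    # with both measures given by finite samples (empirical probabilities)
    d = 0.0
    for k, psi in enumerate(test_functions()):
        mu_int = sum(psi(x) for x in sample_mu) / len(sample_mu)
        nu_int = sum(psi(x) for x in sample_nu) / len(sample_nu)
        d += 2.0 ** (-k) * abs(mu_int - nu_int)
    return d

# empirical probability of a cat-map orbit, projected to the first coordinate
x, y, orbit = 0.2137, 0.5813, []
for _ in range(50_000):
    orbit.append(x)
    x, y = cat(x, y)

grid = [i / 50_000 for i in range(50_000)]   # sample of Lebesgue on [0, 1)
d = weak_star_dist(orbit, grid)
print(d)   # small: the empirical probability is weak*-close to Lebesgue
```

Part (b) of Exercise \ref{ejercicioSRB-likeInvariante} below shows that the notion of SRB-like measure does not depend on which such metric is chosen.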
Comparing Definition \ref{definitionMedidaSRBlike} with Definition \ref{definitionMedidaSRB} of SRB measure, we observe that, due to inclusion (\ref{equationB(mu)subsetAepsilon(mu)}):
\begin{center}
\em Every SRB measure is SRB-like. \em
\end{center}
However, the converse is false. Indeed, every continuous $f$ has SRB-like measures (cf. Theorem \ref{theoremExistenciaSRB-like}, to be proved later in this section), whereas there exist examples with no SRB measures (cf. Examples \ref{ejemploRotacionEsfera} and \ref{ejemploMilnorNoTopologico1} and, in the hyperbolic case, the $C^1$ map on a disk attributed to Bowen and studied in \cite{Golenishcheva}).
\begin{exercise}\em
\label{ejercicioSRB-likeInvariante}
{\bf (a)} Prove that every SRB-like measure is $f$-invariant.
{\bf (b)} Prove that the definition of SRB-like measure does not depend on the metric chosen on the space ${\mathcal M}$ of probabilities with the weak$^*$ topology.
\end{exercise} \index{${\mathcal O}_f $ set of SRB-like! (or pseudo-physical or observable) measures for $f$}
{\bf Notation:} We denote by ${\mathcal O}_f$ the set of all SRB-like measures for $f$. This notation comes from \cite{CatEnr2011}, which introduces the definition of SRB-like measures, also calling them \em observable measures. \em
\vspace{.2cm}
{\bf Remarks: } ${\mathcal O}_f $ is contained in the set ${\mathcal M}_f$ of $f$-invariant probability measures, but it usually differs greatly from ${\mathcal M}_f$, as we will see in the $C^1$-generic Example \ref{ejemploExpansorasEnElCirculo}. Nevertheless, in the $C^0$ case, and for what we call \em Misiurewicz expanding endomorphisms of the circle $S^1$ \em (for which no SRB measures exist), the set of SRB-like measures coincides with the set ${\mathcal M}_f$ of all invariant measures (see \cite{Misiurewicz}, or also Corollary \ref{corolarioSRB-likeMisiurevicz} and Example \ref{ejemploExpansorasC0EnElCirculo} later in this section).
\vspace{.2cm}
Some properties that distinguish the SRB-like measures are: \index{measure! SRB-like}
$\bullet$ SRB-like measures exist, with no hypotheses beyond the continuity of $f$ (Theorem \ref{theoremExistenciaSRB-like}).
$\bullet$ The set of SRB-like measures describes in an optimal way (with a minimum possible amount of invariant measures) the statistics in the future of Lebesgue-almost every orbit (Theorem \ref{theoremOptimalidadSRB-like}).
$\bullet$ The minimal common compact support of the SRB-like measures (cf. Definitions \ref{definitionSoporteDemu} and \ref{definitionSoporteDemu+}) characterizes the $\alpha$-obs. minimal statistical or Ilyashenko attractors (Theorem \ref{theoremSRB-like<->Ilyashenko}). \index{attractor! statistical} \index{attractor! Ilyashenko}
$\bullet$ Under additional hypotheses of $C^1$ hyperbolicity, the SRB-like measures satisfy Pesin's Entropy Formula (\ref{eqnformuladePesin}) (cf. Examples \ref{ejemploExpansorasEnElCirculo} and \ref{ejemploSRB-likeAnosovC1}), even though in the general context of $C^1$ regularity they do not necessarily have absolute continuity properties with respect to Lebesgue.
\begin{theorem}
\label{theoremExistenciaSRB-like} {\bf Existence of SRB-like measures \cite{CatEnr2011}} \index{theorem! existence of! SRB-like measures} \index{measure! SRB-like} \index{measure! pseudo-physical} \index{theorem! compactness of ${\mathcal O}_f$}
Let $f: M \mapsto M$ be continuous. Then:
{\bf (a) } There exist SRB-like probability measures for $f$.
{\bf (b) } The set ${\mathcal O}_f$ of SRB-like measures is weak$^*$-compact in the space of $f$-invariant probabilities.
\end{theorem}
We take the proof from \cite{CatEnr2011}:
{\em Proof: }
{\bf (a) } Suppose by contradiction that ${\mathcal O}_f $ is empty. Then no probability measure $\mu $ is SRB-like; that is, no $\mu$ satisfies Definition \ref{definitionMedidaSRBlike}. Recall Definition \ref{definitionpomegalimit} of the set $p\omega(x)$ (the $p$-omega-limit of the orbit of each point $x \in M$). Hence every $\mu \in {\mathcal M}$ has an open neighborhood ${B} (\mu) \subset {\mathcal M}$ (in the weak$^*$ topology of the space of probabilities ${\mathcal M}$) such that
$$m\Big(\{x \in M \colon \ p\omega(x) \cap {B}(\mu) \neq \emptyset \}\Big) = 0.$$
Since ${\mathcal M}$ is weak$^*$-compact, there exists a finite subcover $$\{{B}_1, {B}_2, \ldots, {B}_k\}$$ of ${\mathcal M}$,
with ${B}_i = {B}(\mu_i)$ for some probability measure $\mu_i$. Hence:
$$m\Big(\{x \in M \colon \ p\omega(x) \bigcap \big(\bigcup_{i= 1}^k{B}_i\big) \neq \emptyset \} \Big) =$$ $$= m\Big(\bigcup_{i= 1}^k \{x \in M \colon \ p\omega(x) \cap {B}_i \neq \emptyset \} \Big) \leq $$ $$\leq \sum_{i= 1}^k m\Big(\{x \in M \colon \ p\omega(x) \cap {B}_i \neq \emptyset \} \Big) = 0.$$
Since $\bigcup_{i= 1}^k {B}_i = {\mathcal M}$, we deduce that
$$m\Big(\{x \in M \colon \ p\omega(x) \cap {\mathcal M} \neq \emptyset \} \Big) = 0.$$
Since $p \omega(x) \subset {\mathcal M}$ for all $x \in M$, we deduce that for Lebesgue-a.e. $x \in M$, $p \omega(x) = \emptyset$, which is a contradiction, because every sequence of probability measures has some convergent subsequence.
{\bf (b) } To prove that ${\mathcal O}_f$ is weak$^*$-compact, it suffices to prove that it is weak$^*$-closed (since ${\mathcal O}_f \subset{\mathcal M}$ and ${\mathcal M}$ is a weak$^*$-compact metrizable space). Let $\mu_n \in {\mathcal O}_f$ converge to $\mu$. We must prove that $\mu \in {\mathcal O}_f$.
Let $\epsilon >0$ be arbitrary. Let $B_{\epsilon}(\mu)$ be the ball of center $\mu$ and radius $\epsilon$. Let $n$ be such that $\mu_n \in B_{\epsilon}(\mu)$, and let $\epsilon _n >0$ be such that the ball $B_{\epsilon_n}(\mu_n)$ of center $\mu_n$ and radius $\epsilon _n$ satisfies $$B_{\epsilon_n}(\mu_n) \subset B_{\epsilon}(\mu).$$
Since $\mu_n \in {\mathcal O}_f$, by Definition \ref{definitionMedidaSRBlike} of SRB-like measure we have
$$m\Big(\{x \in M: \ p \omega(x) \cap B_{\epsilon_n}(\mu_n) \neq \emptyset\} \Big) >0.$$
Since $$ \{x \in M: \ p \omega(x) \cap B_{\epsilon_n}(\mu_n) \neq \emptyset\} \ \subset \ \{x \in M: \ p \omega(x) \cap B_{\epsilon}(\mu) \neq \emptyset\},$$
we deduce that
$$m\Big(\{x \in M: \ p \omega(x) \cap B_{\epsilon}(\mu) \neq \emptyset\} \Big) >0.$$
Since $\epsilon >0$ was arbitrary, this proves that $\mu $ is SRB-like, as we wanted to show.
\hfill $\Box$
To state the next theorem, recall Definition \ref{definitionpomegalimit} of the set $p\omega(x)$ (the $p$-omega-limit of the orbit of the point $x$). From equality (\ref{eqnpomegalimit}), we observe that $p \omega(x)$ is the minimal set of probability measures that completely describe the statistics (that is, the asymptotic time averages) of the orbit of the point $x$.
\begin{theorem}
\label{theoremOptimalidadSRB-like} \index{theorem! statistical optimality} \index{measure! SRB-like}
{\bf Statistical optimality of the set of SRB-like measures \cite{CatEnr2011}}
For every continuous $f: M \mapsto M$,
the set ${\mathcal O}_f$ of SRB-like measures for $f$ is the minimal weak$^*$-compact set ${\mathcal K}$ of the space of probabilities such that \em
$$p \omega (x) \subset {\mathcal K} \ \ \mbox{for Lebesgue-a.e. } x \in M.$$
\end{theorem}
{\em Proof: }
By part (b) of Theorem \ref{theoremExistenciaSRB-like}, the set ${\mathcal O}_f$ of all SRB-like measures for $f$ is nonempty and weak$^*$-compact.
First we prove that $p\omega(x) \subset {\mathcal O}_f$ for Lebesgue-a.e. $x \in M$. For arbitrary $\epsilon >0$, denote by $B_{\epsilon}({\mathcal O}_f)$ the open set of all probability measures $\nu$ whose distance to the nonempty compact set ${\mathcal O}_f$ is smaller than $\epsilon$. Consider the complement $${\mathcal C}:= {\mathcal M} \setminus B_{\epsilon}({\mathcal O}_f).$$ The set ${\mathcal C}$ is compact, being the complement of the open set $B_{\epsilon}({\mathcal O}_f)$ in the compact space ${\mathcal M}$. By construction ${\mathcal C} \cap {\mathcal O}_f = \emptyset$. Hence no $\nu \in {\mathcal C}$ is SRB-like, so for each $\nu \in {\mathcal C}$ there exists an open neighborhood $B(\nu)$ of $\nu$ such that:
$$m \Big(\{x \in M: \ p \omega(x) \cap B(\nu) \neq \emptyset \}\Big ) = 0.$$
As at the end of the proof of part (a) of Theorem \ref{theoremExistenciaSRB-like}, but with ${\mathcal C}$ in the role of ${\mathcal M}$, we deduce that there exists a finite covering $\{B_1, B_2, \ldots, B_k\}$ of ${\mathcal C}$ such that
$$m \Big(\{x \in M: \ p \omega(x) \bigcap \big( \bigcup_{i= 1}^k B_i \big) \neq \emptyset\} \Big)= 0.$$
Since $\bigcup_{i= 1} ^k B_i \supset {\mathcal C}$, we deduce
$$m \Big(\{x \in M: \ p \omega(x) \cap {\mathcal C} \neq \emptyset\} \Big)= 0.$$
In other words, for Lebesgue-almost every point $x \in M$, the set $p \omega (x)$ is contained in ${\mathcal M} \setminus {\mathcal C} = B_{\epsilon}({\mathcal O}_f)$. Hence, taking $\epsilon = 1/h$ for $h \in \mathbb{N}^+$, we have proved that
$$\forall \ h \in \mathbb{N}^+: \ p \omega(x) \subset B_{1/h}({\mathcal O}_f) \ \ m-\mbox{a.e. } x \in M. $$
Since a countable union of Lebesgue-null sets is Lebesgue-null, a countable intersection of sets of full Lebesgue measure has full Lebesgue measure. We conclude that:
$$p \omega(x) \subset \bigcap_{h= 1}^{+\infty} B_{1/h}({\mathcal O}_f) = {\mathcal O}_f \ \ m-\mbox{a.e. } x \in M,$$
as we wanted to prove.
Second, we prove that ${\mathcal O}_f$ is minimal among the nonempty compact sets ${\mathcal K} \subset {\mathcal M}$ with the property that $p\omega(x) \subset{\mathcal K}$ for Lebesgue-almost every point $x \in M$. Take any nonempty compact set ${\mathcal K} \subset{\mathcal O}_f$ with ${\mathcal K} \neq {\mathcal O}_f$. We prove that such a ${\mathcal K}$ does not have the above property; that is, we prove that $m \Big(\{x \in M: \ p \omega(x) \cap ({\mathcal M}\setminus {\mathcal K}) \neq \emptyset\}\Big) >0$.
Indeed, since ${\mathcal K} \subsetneq {\mathcal O}_f$, there exists a measure $\mu \in {\mathcal O}_f \setminus {\mathcal K}$. Let $\epsilon >0$ be such that the ball $B_{\epsilon}(\mu)$ is disjoint from the compact set ${\mathcal K}$. Since $\mu$ is SRB-like, by Definitions \ref{definitionMedidaSRBlike} and \ref{DefinicionCuencaDeAtraccionEpsilonEstadistica} we have:
\begin{equation}
\label{eqn-26}
m\Big(\{x \in M\colon \ p \omega(x) \cap B_{\epsilon}(\mu) \neq \emptyset \} \Big) >0.\end{equation}
Since $B_{\epsilon}(\mu) \cap {\mathcal K} = \emptyset$, we deduce that $B_{\epsilon}(\mu) \subset ({\mathcal M} \setminus {\mathcal K})$. Then, substituting in (\ref{eqn-26}), we conclude:
$$m\Big(\{x \in M\colon \ p \omega(x) \cap ({\mathcal M} \setminus {\mathcal K}) \neq \emptyset \} \Big) >0,$$ as we wanted to prove.
\hfill $\Box$
\begin{corollary} \label{corolarioSRB-likefinito}
Let $f: M \mapsto M$ be continuous. \index{measure! SRB-like} \index{basin of attraction! statistical} \index{theorem! uniqueness of SRB-like measure} \index{measure! SRB}
{\bf (a) } If the SRB-like measure $\mu$ is unique, then $\mu$ is SRB and its (strong) statistical basin of attraction $B(\mu)$ covers Lebesgue-a.e. point of $M$.
{\bf (b) } Conversely, if there exists an SRB measure $\mu$ whose (strong) statistical basin of attraction $B(\mu)$ covers Lebesgue-a.e. point of $M$, then $\mu$ is the unique SRB-like measure.
{\bf (c) } If the set of SRB-like measures is finite, then all the SRB-like measures are SRB and the union of the statistical basins of attraction of the SRB measures covers Lebesgue-a.e. point of $M$.
{\bf (d) } Conversely, if there exists a finite number of SRB measures such that the union of their statistical basins of attraction covers Lebesgue-a.e. point of $M$, then these are the only SRB-like measures, and therefore the set of SRB-like measures is finite.
\end{corollary}
\begin{exercise}\em
Prove Corollary \ref{corolarioSRB-likefinito}. Hint: It suffices to prove (c) and (d), since these imply (a) and (b). First prove that an SRB-like measure is isolated in the set of SRB-like measures if and only if it is SRB. Combine Definitions \ref{definitionMedidaSRB} and \ref{definitionMedidaSRBlike} of SRB and SRB-like measures, respectively, with Definitions \ref{DefinicionCuencaDeAtraccionEstadistica} and \ref{DefinicionCuencaDeAtraccionEpsilonEstadistica} of the strong and $\epsilon$-weak statistical basins of attraction, respectively.
\end{exercise}
\begin{conjecture} \em \index{conjecture! Palis}
{\bf Palis \cite{PalisConjecture}} \em For sufficiently large $r \geq 1$, $C^r$-generically in ${\mbox{Diff }^r(M)}$ there exists a finite number of SRB measures such that the union of their statistical basins of attraction covers Lebesgue-a.e. point of $M$. \em
\vspace{.2cm}
From Corollary \ref{corolarioSRB-likefinito} we deduce the following equivalent statement of the Palis Conjecture:
\vspace{.2cm}
\index{conjecture! Palis}
\em $C^r$-generically for $f \in {\mbox{Diff }^r(M)}$, the nonempty weak$^*$-compact set ${\mathcal O}_f$ of SRB-like probability measures for $f$ has no accumulation points.
\end{conjecture}
\begin{corollary} {\bf (of Theorem \ref{theoremOptimalidadSRB-like})}
\label{corolarioSRB-likeMisiurevicz}
Let $f: M \mapsto M$ be continuous and not uniquely ergodic. If for Lebesgue-a.e. $x \in M$ the set of probability measures $p\omega(x)$ \em (cf. Definition \ref{definitionpomegalimit}) \em coincides with the set of all invariant measures, then there are no SRB measures and
the set of SRB-like measures coincides with the set of all invariant measures.
\end{corollary}
{\bf Remark:} There exist maps $f$ satisfying the hypotheses of Corollary \ref{corolarioSRB-likeMisiurevicz}. Indeed, the $C^0$ expanding map of the circle constructed by Misiurewicz in \index{expanding map} \cite{Misiurewicz}, and the generic $C^0$-expanding maps recently found by Abdenur and Andersson in \cite{AbdenurAndersson}, satisfy the hypotheses of this corollary. See also Example \ref{ejemploExpansorasC0EnElCirculo} later in this section.
{\em Proof of Corollary \ref{corolarioSRB-likeMisiurevicz}: }
By hypothesis, the minimal compact set ${\mathcal K}$ of probability measures such that $p \omega(x) \subset {\mathcal K}$ for Lebesgue-a.e. $x \in M$ is the set ${\mathcal M}_f$ of all $f$-invariant measures. By Theorem \ref{theoremOptimalidadSRB-like}, ${\mathcal K}$ is the set ${\mathcal O}_f$ of SRB-like measures. Hence ${\mathcal O}_f = {\mathcal M}_f$, as we wanted to prove.
\hfill $\Box$
\subsection{Relation between statistical attractors and SRB-like measures}
\begin{definition} \index{measure! compact support of}
\label{definitionSoporteDemu+} \em Let ${\mathcal K}$ be a nonempty set of probability measures. The \em compact support of ${\mathcal K}$ \em is the minimal compact set $K \subset M$ such that $$\mu(K) = 1 \ \ \forall \ \mu \in {\mathcal K}.$$
The minimal compact support exists as a result of applying Zorn's lemma to the family of nonempty compact sets of $\mu$-measure equal to 1 for every $\mu \in {\mathcal K}$, together with the property that a family of compact sets, every finite subfamily of which has nonempty intersection, has nonempty intersection. The compact support of ${\mathcal K}$ is unique, by its minimality property and because the intersection of two compact sets of $\mu$-measure equal to 1 is a compact set of $\mu$-measure equal to 1. \index{compact support}
\end{definition}
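As a remark of ours (not in the original argument): since $M$ is compact, which is the standing assumption for the manifolds considered here, the compact support admits the following explicit description:

```latex
K \;=\; \overline{\bigcup_{\mu \in {\mathcal K}} \mathrm{supp}(\mu)}
\;=\; \bigcap \Big\{\, K' \subset M \ \mbox{ compact} : \ \mu(K') = 1 \ \ \forall \ \mu \in {\mathcal K} \,\Big\}.
```

Indeed, every compact set $K'$ with $\mu(K') = 1$ for all $\mu \in {\mathcal K}$ contains $\mathrm{supp}(\mu)$ for every $\mu \in {\mathcal K}$, hence contains the closed set on the left; and that closed set, which is compact because $M$ is, is itself a member of the family over which the intersection is taken.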
\begin{theorem} \index{theorem! characterization of! statistical attractor}
\label{theoremSRB-like<->Ilyashenko}
{\bf SRB-like measures and Ilyashenko attractors} \index{attractor! statistical} \index{attractor! Ilyashenko} \index{measure! SRB-like} \index{compact support}
For every continuous map $f\colon M \mapsto M$, the minimal $1$-obs. Ilyashenko attractor $K$ \em (cf. Theorem \ref{TeoExistenciaAtrIlyashenko})\ \em is the compact support of the set ${\mathcal O}_f$ of SRB-like measures for $f$.
\end{theorem}
{\bf Remark:} Theorem \ref{theoremSRB-like<->Ilyashenko} can be generalized, with a suitably adapted statement, to characterize every minimal $\alpha$-obs. Ilyashenko attractor (for any $0 < \alpha \leq 1$) as the compact support of a suitable subset of SRB-like measures for $f$ (see \cite{CatIlyshenkoAttractors}).
{\em Proof of Theorem \ref{theoremSRB-like<->Ilyashenko}: }
Let $K_1$ be the minimal 1-obs. Ilyashenko attractor. By Theorem \ref{TeoExistenciaAtrIlyashenko}, the compact set $K_1 \neq \emptyset$ exists and is unique. Let $K_2$ be the compact support of the set ${\mathcal O}_f$ of SRB-like measures for $f$, according to Definition \ref{definitionSoporteDemu+}. By the observation at the end of that definition, the compact set $K_2 \neq \emptyset$ exists and is unique.
{\em Proof that $K_1 \subset K_2$:} Since $K_1$ is minimal 1-obs., by Definition \ref{definitionAtractorIlyashenkoMinimal} it suffices to prove that \begin{equation}
\label{eqn-29}
\mbox{{To prove: }} \ \lim_{n\rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1}\mbox{dist}(f^j(x), K_2) = 0 \mbox{ for } m-\mbox{a.e. } x \in M.\end{equation}
For every $x \in M$, consider the sequence $\{\sigma_{n,x}\}_{n \geq 1}$ of empirical probabilities of the orbit of $x$, according to Definition \ref{definitionProbaEmpiricas}. Applying Theorem \ref{theoremOptimalidadSRB-like}, for $m$-a.e. $x \in M$ we have the following property:
\begin{equation}
\label{eqn-27}
\lim_{i \rightarrow + \infty}\sigma_{n_i, x} \in {\mathcal O}_f \end{equation} for every subsequence $ n_i \rightarrow + \infty $ for which that limit exists (the limit being taken in the weak$^*$ topology of the space of probability measures).
Consider the continuous function $\psi$ defined by \begin{equation}\label{eqndefpsi}\psi(x) := \mbox{dist}(x,K_2) \ \ \forall \ x \in M.\end{equation} Integrating $\psi$ in the inclusion (\ref{eqn-27}), and taking into account the definition of the weak$^*$ topology, we deduce that for $m$-a.e. $x \in M$, and for every convergent subsequence $\{\sigma_{n_i,x}\}_{i \in \mathbb{N}}$ of empirical probabilities, there exists $\mu \in {\mathcal O}_f$ such that
\begin{equation}
\label{eqn-28}
\int \psi\, d \mu = \lim_{i \rightarrow + \infty} \int \psi \, d (\sigma_{n_i, x} ) = \lim_{i \rightarrow + \infty} \frac{1}{n_i}\sum_{j= 0}^{n_i-1} \mbox{dist}(f^j(x), K_2).\end{equation}
But $\mu(K_2) = 1$, because by hypothesis $K_2$ is a common compact support of all the probability measures in ${\mathcal O}_f$. Hence $\psi(y) = \mbox{dist}(y, K_2) = 0$ for $\mu$-a.e. $y \in M$, whence $\int \psi \, d \mu = 0$. Substituting in equality (\ref{eqn-28}), we obtain
\begin{equation}
\label{eqn-28z}
\lim_{i \rightarrow + \infty} \frac{1}{n_i}\sum_{j= 0}^{n_i-1} \mbox{dist}(f^j(x), K_2) = 0 \end{equation}
for $m$-a.e. $x \in M$, for every subsequence $n_i \rightarrow + \infty$ such that $\{\sigma_{n_i, x}\}_{i \in \mathbb{N}}$ is convergent.
Fix such a point $x$, and let $n_i \rightarrow + \infty$ be any sequence such that the subsequence $\{d_{n_i}\}_{i \in \mathbb{N}}$ of average distances to $K_2$, given by $$d_{n_i} := \frac{1}{n_i}\sum_{j= 0}^{n_i-1} \mbox{dist}(f^j(x), K_2), $$ is convergent. By the compactness of the space of probability measures, there always exists a further subsequence of $\{d_{n_i}\}_i$ (for indices $i= i_h$) such that $\{\sigma_{n_{i_h}, x}\}_{h}$ is also convergent. Then equality (\ref{eqn-28z}), applied to $n_{i_h}$ instead of $n_i$, implies (for the subsequence of indices $\{n_{i}\}_i$) that the limit of the average distances $d_{n_{i}}$ to $K_2$ is zero. We conclude that every convergent subsequence of $\{d_n\}_{n \geq 1}$ converges to zero. Hence assertion (\ref{eqn-29}) is proved.
\vspace{.2cm}
{\em Proof that $K_2 \subset K_1$:} By hypothesis, $K_2$ is the minimal compact set such that $\mu(K_2) = 1$ for every $\mu \in {\mathcal O}_f$. Hence it suffices to prove that \begin{equation}
\label{eqnAprobar-29}
\mbox{To prove: } \mu(K_1) = 1 \ \ \forall \ \mu \in {\mathcal O}_f.\end{equation}
Let $\mu \in {\mathcal O}_f$ and arbitrary $\epsilon >0$ be given. Let $\varphi$ be the nonnegative continuous function defined by \begin{equation}
\label{equation}
\varphi(y) := \mbox{dist}(y, K_1).\end{equation} By the definition of the weak$^*$ topology, for whichever metric $\mbox{dist}^*$ has been chosen on the space ${\mathcal M}$ of probability measures, there exists $\delta >0$ such that
\begin{equation}
\label{eqn-30}
\mbox{dist}^*(\nu, \mu)< \delta \ \Rightarrow \ \Big| \int \varphi \, d\mu - \int \varphi \, d \nu \Big| < \epsilon.\end{equation}
By Definition \ref{definitionMedidaSRBlike} of SRB-like measure,
the Lebesgue measure $m$ of the $\delta$-weak statistical basin of attraction $A_{\delta}(\mu)$ of $\mu$ is positive.
Since $K_1$ is a $1$-obs. Ilyashenko attractor, its statistical basin of attraction $A(K_1)$ has full Lebesgue measure. We deduce that $$m\big(A(K_1) \cap A_{\delta}(\mu)\big) >0.$$ Take a point $x \in A(K_1) \cap A_{\delta}(\mu)$. By equality (\ref{eqnCuencaAtracIlyashenko}), which defines $A(K_1)$, we have: \begin{equation}
\label{eqn-31}
0 =\lim_{n \rightarrow + \infty} \frac{1}{n} \sum_{j= 0}^{n-1} \mbox{dist} (f^j(x), K_1) = \lim _{n \rightarrow + \infty} \int \varphi \, d (\sigma_{n,x}),\end{equation}
where $\sigma_{n,x}$ is the empirical probability defined by (\ref{(1)}). By Definition \ref{DefinicionCuencaDeAtraccionEpsilonEstadistica} of $A_{\delta}(\mu)$, there exists a subsequence $n_i \rightarrow + \infty$ such that $\{\sigma_{n_i}\}_{i \in \mathbb{N}}$ converges to a measure $\nu \in p \omega(x)$ with $\mbox{dist}^*(\mu, \nu) < \delta$. Then, combining assertion (\ref{eqn-30}) with equality (\ref{eqn-31}), we deduce:
$$0 = \lim_{i \rightarrow + \infty} \int \varphi \, d \sigma_{n_i,x} = \int \varphi \, d \nu , \ \ \ \ \mbox{dist}^* (\nu, \mu) < \delta, \ \ \Rightarrow $$ $$ \int \varphi \, d \nu= 0, \ \ \ \Big|\int \varphi \, d \mu - \int \varphi \, d \nu \Big|< \epsilon \ \ \ \Rightarrow $$ $$ \Big| \int \varphi \, d \mu \Big|< \epsilon. $$
Since this inequality holds for every $\epsilon >0$, we deduce that
$0= \int \varphi \, d \mu. $
Since $\varphi \geq 0$, we conclude that $\varphi(y) =\mbox{dist}(y, K_1) = 0$ for $\mu$-a.e. $y \in M$. Since $K_1$ is compact, the distance from a point $y$ to $K_1$ is zero if and only if $y \in K_1$. We have proved that $y \in K_1$ for $\mu$-a.e. $y \in M$; in other words, $\mu(K_1)= 1$, as we wanted to prove.
\hfill $\Box$
\subsection{Examples of SRB-like measures}
\begin{example} \em
\label{ejemploExpansorasEnElCirculo} {\bf $C^1$-expanding endomorphisms of $S^1$} \index{expanding map} \index{expanding endomorphism}
Let $f: S^1 \mapsto S^1$ be a $C^1$ expanding endomorphism of the circle $S^1$; that is, there exists a constant $\sigma > 1$ such that $|f'(z)| \geq \sigma > 1 \ \ \forall \ z \in S^1$. For example, on the circle $$S^1:= \{z \in \mathbb{C}: \ |z|= 1\}$$ the map $f(z)= z^2$ is expanding. A classical result of topological dynamics (see for example
\cite[Theorem 2.4.6]{Katok-Hasselblatt}) establishes that every expanding endomorphism of the circle is surjective, that the number of preimages of any point $x \in S^1$ is constant, equal to some $k \geq 2$ (called the \em degree \em of $f$), and that $f$ is topologically conjugate to the map $g_k$ defined by $g_k(z)= z^k$ for all $z \in S^1= \{|z| = 1\}$.
An expanding endomorphism can be regarded as a (non-invertible) uniformly hyperbolic endomorphism (cf. Definition \ref{definicionAnosov}) in which the unstable subspace $U_x$ is the whole tangent space $T_xS^1$, and the stable subspace is $E_x = \{0\}$. The unstable manifold $W^u (x_0)$ through any point $x_0 \in S^1$ is defined by $$W^u (x_0) := \Big \{\ y_0 \in S^1: \ \ \exists \ x_{-n}, \ y_{-n} \in S^1 \mbox{ such that } $$ $$ f(x_{-n}) = x_{-n + 1}, \ f(y_{-n}) = y_{-n + 1} \ \forall \ n \geq 0, \ $$ $$\ \lim_{n \rightarrow + \infty} \mbox{dist}(y_{-n}, x_{-n}) = 0 \ \Big \}.$$
The unstable manifold of any point coincides with the whole circle $S^1$:
$$W^u(x_0) = S^1 \ \ \forall \ x_0 \in S^1.$$
For this reason, for expanding endomorphisms of the circle, the unstable conditional measures of $\mu$ (cf. Definition \ref{definitionmedidascondicionalesinestablesAC}) coincide with $\mu$ itself. We therefore define:
\vspace{.2cm}
{\bf Definition (Gibbs measure):} If $f$ is a $C^1$ expanding endomorphism of the circle $S^1$, we say that an $f$-invariant measure $\mu$ \em is a Gibbs measure \em if $\mu \ll m$, where $m$ is the Lebesgue measure on $S^1$. \index{measure! Gibbs} \index{absolute continuity}
\vspace{.2cm}
If $f$ is of class $C^{1 + \alpha}$, the following classical result holds; it gives a version of Theorem \ref{TheoremSRBanosov} applicable to expanding endomorphisms of the circle (instead of Anosov diffeomorphisms on manifolds of dimension greater than 1).
\vspace{.3cm}
{\bf Theorem (Ruelle):} \index{theorem! Ruelle} \index{expanding map} \index{expanding endomorphism} \index{measure! Gibbs} \index{measure! SRB} \index{measure! equivalence of}
\em Let $f: S^1 \mapsto S^1$ be a $C^{1 + \alpha}$ expanding map of the circle $S^1$. Then:
{\em (a)} There exists a unique Gibbs measure $\mu$ \em (i.e. $\mu$ absolutely continuous with respect to the Lebesgue measure), \em and this measure $\mu$ is ergodic.
{\em (b)} Every SRB measure is a Gibbs measure, and conversely. \em (Therefore there exists a unique SRB measure, and it is ergodic.) \em
{\em (c)} The Gibbs measure $\mu$ is equivalent to the Lebesgue measure $m$ \em (i.e. $\mu \ll m$ and $m \ll \mu$). \em
{\em (d)} The statistical basin of attraction of $\mu$ covers Lebesgue-a.e. $z \in S^1$. \em \index{basin of attraction! statistical}
\vspace{.3cm}
A proof of this theorem of Ruelle for $C^{1+ \alpha}$ expanding maps of $S^1$ can be found, for example, in
\cite[Theorem 5.1.16]{Katok-Hasselblatt}.
Remark: A more general result, establishing the existence of Gibbs measures for piecewise $C^{1}$ maps of the circle that are expanding on each piece and satisfy bounded-variation hypotheses, was recently proved by Liverani in \cite{Liverani}.
\vspace{.3cm}
As a consequence of part (d) of Ruelle's theorem, in the $C^{1+ \alpha}$ expanding case there are no SRB-like measures other than the SRB measure. Indeed, any other measure $\nu \neq \mu$ would have, for every small enough $\epsilon > 0$, an $\epsilon$-weak statistical basin of attraction $A_{\epsilon}(\nu)$ disjoint from the (strong) statistical basin of attraction $B(\mu)$ of $\mu$ (because $\nu \neq \mu$). But since $m(M \setminus B(\mu)) = 0$, it follows that $m(A_{\epsilon}(\nu))= 0$, so $\nu$ cannot be SRB-like.
\index{theorem! uniqueness of SRB measure}
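Ruelle's theorem can be illustrated numerically. The following sketch (ours, not from the text) iterates the smooth expanding circle endomorphism $x \mapsto 3x \bmod 1$ (degree 3, written in the coordinate of $[0,1) \simeq S^1$), whose unique SRB measure is the Lebesgue measure $m$ itself, and checks that the time averages along one orbit match the space averages against $m$, as predicted by part (d). The initial point and sample sizes are arbitrary choices, and floating-point rounding is being trusted to act as a typical small perturbation of the orbit.

```python
# A numerical sketch: for the expanding circle map f(x) = 3x mod 1,
# the unique SRB measure is Lebesgue measure, and its statistical basin
# covers Lebesgue-a.e. point.  We compare time averages along one orbit
# with the corresponding Lebesgue averages.
import math

def f(x):
    y = (3.0 * x) % 1.0
    # Guard against the (measure-zero) rounding artifact of landing
    # exactly on the fixed point 0, which would freeze the orbit.
    return y if y > 0.0 else 0.4142135623730951

x = math.sqrt(2.0) - 1.0   # an arbitrary "typical" initial point
for _ in range(1_000):     # discard a transient (not strictly needed)
    x = f(x)

n = 200_000
mean = 0.0
below_half = 0
for _ in range(n):
    x = f(x)
    mean += x
    if x < 0.5:
        below_half += 1
mean /= n
frac = below_half / n

# Against Lebesgue measure: int x dm = 1/2 and m([0, 1/2)) = 1/2.
print(f"time average of x: {mean:.3f}   orbit fraction in [0,1/2): {frac:.3f}")
```

Both printed quantities should be close to $1/2$; the same experiment run for the doubling map $x \mapsto 2x \bmod 1$ fails in floating point, since multiplication by 2 only shifts mantissa bits and the orbit collapses to 0 after about 53 steps.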
We now show that the relation between Gibbs measures and Pesin's Entropy Formula (\ref{eqnformuladePesin}) (which holds for $C^{1 + \alpha}$ diffeomorphisms, as we saw in Theorem \ref{theoremFormulaPesin} of the previous section) also generalizes to $C^{1 + \alpha}$ expanding endomorphisms of the circle.
\vspace{.3cm}
{\bf Theorem }
{\bf Pesin's Entropy Formula for expanding endomorphisms}
\index{Pesin! formula of} \index{Pesin formula} \index{entropy! Pesin formula} \index{expanding map} \index{expanding endomorphism}
\em
Let $f: S^1 \mapsto S^1$ be a $C^{2}$ expanding map. Then the unique SRB measure $\mu$ for $f$ \em (which by Ruelle's theorem is a Gibbs measure equivalent to the Lebesgue measure) \em satisfies Pesin's Entropy Formula \em (\ref{eqnformuladePesin}), \em and it is the only measure satisfying that formula. \em
\vspace{.3cm}
A proof of this theorem was given by Pesin in \cite{PesinLyapunovExponents}. A version for $C^2$ endomorphisms can be found in \cite{QianZhu-TeoLedrYoungParaEndomorfismosC2} or in \cite{QianZhu-Libro}. Moreover, \cite{QianZhu-TeoLedrYoungParaEndomorfismosC2} proves the version for endomorphisms of the Ledrappier-Young theorem \cite{Ledrappier-Young}, which establishes, for every $C^2$ endomorphism, the equivalence between Gibbs measures and measures satisfying Pesin's Entropy Formula (cf. Theorem \ref{theoremLedrappier-Young}).
\vspace{.3cm}
When the endomorphism $f$ is of class $C^1$ but not $C^{1+ \alpha}$, the assertions of Ruelle's theorem fail. Indeed, in \cite{AvilaBochi} it is proved that $C^1$-generic expanding endomorphisms possess no invariant measures absolutely continuous with respect to $m$ (they possess no Gibbs measures). More generally, in \cite{SchmittGora} it is proved (with an explicit example) that, even allowing $f$ to be discontinuous, if $f$ is piecewise $C^1$ expanding then invariant measures absolutely continuous with respect to $m$ need not exist.
Nevertheless, in \cite{CampbellQuas} Campbell and Quas proved that, $C^1$-generically, an expanding endomorphism of the circle (of class $C^1$ but not $C^{1 + \alpha}$) possesses a unique SRB measure $\mu$, and that this measure is ergodic, satisfies Pesin's Entropy Formula (\ref{eqnformuladePesin}), and has a statistical basin of attraction covering Lebesgue-a.e. point. But instead of $\mu$ being absolutely continuous with respect to the Lebesgue measure $m$, exactly the opposite holds:
$$\mu \perp m,$$
that is, $\mu$ is mutually singular with respect to the Lebesgue measure $m$.
From the result of Campbell and Quas we deduce that, $C^1$-generically for expanding endomorphisms of the circle, there exists a unique SRB-like measure, namely the SRB measure $\mu$ found by Campbell and Quas. Indeed, we argue as before, as we did from Ruelle's theorem for $C^{1+ \alpha}$ expanding maps. Since the statistical basin of attraction $B(\mu)$ of the SRB measure $\mu$ covers Lebesgue-a.e. point, no other measure $\nu \neq \mu$ can be SRB-like.
\vspace{.2cm}
Without assuming any $C^1$-genericity hypothesis, every $C^1$ expanding endomorphism of the circle has SRB-like measures, all its SRB-like measures satisfy Pesin's Entropy Formula (\ref{eqnformuladePesin}), and the union of their $\epsilon$-weak statistical basins of attraction covers Lebesgue-a.e. point, for every $\epsilon >0$. These results were proved in \cite{CatEnrPortugaliae}.
However, the uniqueness of the SRB-like measure (which holds for $C^{1+ \alpha}$ expanding maps by Ruelle's theorem) is false in general for $C^1$ expanding maps that are neither $C^{1 + \alpha}$ nor $C^1$-generic. Indeed, in \cite{Quas} Quas constructed a $C^1$ expanding map exhibiting more than one SRB-like measure.
\end{example}
\begin{example} \em
\label{ejemploExpansorasC0EnElCirculo} {\bf $C^0$-expanding endomorphisms of $S^1$} \index{expanding map} \index{expanding endomorphism}
\end{example}
A continuous map $f: S^1 \mapsto S^1$ of the circle $S^1$ is called a \em $C^0$-expanding endomorphism \em (cf. \cite[Definition 2.4.1]{Katok-Hasselblatt}) if there exist constants $\delta >0$ and $\sigma > 1$ such that
$$\mbox{dist}(f(x), f(y)) \geq \sigma \, \mbox{dist}(x,y) \ \ \forall \ x,y \in S^1 \mbox{ such that } \mbox{dist}(x,y) < \delta.$$
In \cite{Misiurewicz}, Misiurewicz constructed a $C^0$-expanding endomorphism $f$ of $S^1$ satisfying the hypotheses of Corollary \ref{corolarioSRB-likeMisiurevicz}. Therefore, in that example there are no SRB measures and every $f$-invariant measure is SRB-like; there is thus an uncountably infinite number of SRB-like measures. We call endomorphisms with this property \em Misiurewicz endomorphisms. \em In \cite{AbdenurAndersson} it is proved that Misiurewicz endomorphisms are $C^0$-generic in the space of $C^0$-expanding endomorphisms of the circle $S^1$.
\begin{example} \em
{\bf $C^1$ Anosov diffeomorphisms} \label{ejemploSRB-likeAnosovC1} \index{diffeomorphisms! Anosov} \index{automorphism! linear, of the torus} \index{transitivity} \index{transitive map}
Let $f$ be a transitive Anosov diffeomorphism on a compact Riemannian manifold $M$ (cf. Definition \ref{definicionAnosov}).
First let us review the case in which $f$ is moreover of class $C^{1 + \alpha}$, or in particular of class $C^2$:
In Sinai's Theorem \ref{TheoremSRBanosov} \cite{Sinai_SRB} we saw that if $f$ is a $C^{1 + \alpha}$ Anosov diffeomorphism, then there exists a unique SRB probability measure $\mu$; it is an ergodic Gibbs measure, and its statistical basin of attraction covers Lebesgue-a.e. point. \index{measure! ergodic Gibbs} \index{basin of attraction! statistical} \index{measure! SRB-like}
We deduce that such a measure $\mu$ is the unique SRB-like measure, arguing as in Example \ref{ejemploExpansorasEnElCirculo}.
\vspace{.2cm}
Moreover, by Theorem \ref{theoremGibbs->SRB}, the unique SRB-like measure of a transitive $C^{1 + \alpha}$ Anosov diffeomorphism satisfies Pesin's Entropy Formula (\ref{eqnformuladePesin}).
\vspace{.3cm}
If the transitive Anosov diffeomorphism $f$ is of class $C^2$, then its unique SRB measure $\mu$ is also the unique probability measure satisfying Pesin's Entropy Formula. Indeed, by the Ledrappier-Young Theorem \ref{theoremLedrappier-Young} \cite{Ledrappier-Young}, every measure $\nu$ satisfying Pesin's Entropy Formula is a Gibbs measure. And by Theorem \ref{theoremGibbs->SRB}, every ergodic component $\nu_x$ of a Gibbs measure is SRB, and therefore SRB-like. Hence, since there exists a unique SRB-like measure $\mu$, the measure $\nu$ has a unique ergodic component, namely $\mu$; therefore $\nu$ is ergodic and coincides with $\mu$.
\vspace{.2cm}
Let us now turn to the case of a transitive Anosov diffeomorphism $f$ of class $C^1$ but not $C^{1 + \alpha}$. In this case the known proofs of the theorems mentioned in the review above do not apply, because they use Pesin theory. For example, the proofs use Theorem \ref{theoremTeoriaPesin}, which establishes the absolute continuity of the holonomy of the stable foliation. This result would be false if $f$ were not of class $C^{1 + \alpha}$ (see \cite{RobinsonYoungContrajemploCAdeFoliacion}, or also \cite{Bowen_C1horseshoe}).
Nevertheless, in recent years some partial results applicable to $C^{1}$ Anosov diffeomorphisms have been obtained, generalizing Theorem \ref{TheoremSRBanosov}:
In \cite{QiuHyperbolicExisteUnicaSRBPesinFormula}, Qiu and Zhu proved that, $C^1$-generically, transitive Anosov diffeomorphisms have a unique SRB measure $\mu$; this measure is ergodic, satisfies Pesin's Entropy Formula, is the only measure satisfying that formula, and its statistical basin of attraction covers Lebesgue-a.e. point of the manifold. Therefore $\mu$ is the unique SRB-like measure. The proof of Qiu and Zhu does not go (unlike the proof of Theorem \ref{TheoremSRBanosov}) through the construction of Gibbs measures. Moreover (as far as the author of this book knows), it is not known whether the unique SRB measure $\mu$ of a $C^1$-generic transitive Anosov diffeomorphism is a Gibbs measure. But it is known that such a $\mu$ is not absolutely continuous with respect to the Lebesgue measure $m$ on the whole manifold, since \'{A}vila and Bochi \cite{AvilaBochi2006} proved that $C^1$-generically there are no invariant measures absolutely continuous with respect to $m$.
\vspace{.2cm}
When the SRB-like measure $\mu$ is unique we deduce, as a consequence of Theorem \ref{theoremOptimalidadSRB-like}, that $\mu$ is SRB and that its statistical basin of attraction $B(\mu)$ covers Lebesgue-a.e. point. In this case, for example for the $C^1$-generic transitive Anosov diffeomorphisms of the theorem of Qiu and Zhu \cite{QiuHyperbolicExisteUnicaSRBPesinFormula}, an ergodic theorem recently proved by Kleptsyn and Ryzhov in \cite{kelpstynRyzov2012} holds, which estimates the speed of convergence of the empirical probabilities $\sigma_{n,x}$ (in the weak$^*$ topology) to the SRB measure $\mu$, for Lebesgue-a.e. $x \in M$.
\vspace{.2cm}
In the case of a non-generic $C^1$ Anosov diffeomorphism that is not $C^{1 + \alpha}$, when $f$ preserves an invariant measure $\mu$ absolutely continuous with respect to the Lebesgue measure, Sun and Tian proved in \cite{SunTianDominatedSplitting} that this measure $\mu$ satisfies Pesin's Entropy Formula. This implies that, under the additional hypothesis of the existence of an invariant measure $\mu$ equivalent to the Lebesgue measure, every SRB-like measure satisfies that formula. Indeed, by the Ergodic Decomposition Theorem \ref{theoremDescoErgodicaEspaciosMetricos}, $\mu$-a.e. $x$ belongs to the statistical basin of attraction $B(\mu_x)$ of an ergodic component $\mu_x$ of $\mu$. Since $\mu$ is equivalent to the Lebesgue measure, we deduce that Lebesgue-a.e. $x$ belongs to $B(\mu_x)$. Hence, by Theorem \ref{theoremOptimalidadSRB-like}, these ergodic components $\mu_x$ are the SRB-like measures. Since $\mu$ satisfies Pesin's Entropy Formula, every ergodic component $\mu_x$ of $\mu$ also satisfies that formula (cf. \cite{Keller}). Thus, from the result of Sun and Tian we conclude that every SRB-like measure for $f$ satisfies Pesin's Entropy Formula, whenever $f$ is a $C^1$ Anosov diffeomorphism preserving a measure equivalent to the Lebesgue measure.
\vspace{.3cm}
More generally, without assuming $C^1$-genericity or the existence of an invariant measure absolutely continuous with respect to the Lebesgue measure, in \cite{CatCerEnrSubmitted} it is proved that for every $C^1$ Anosov diffeomorphism $f$, every SRB-like measure $\mu$ satisfies Pesin's Entropy Formula. But we do not know whether such a $\mu$ must moreover be a Gibbs measure, nor whether it must be ergodic.
\end{example}
\vspace{.3cm}
\newpage
\section{Introduction}
The direct production of $W^{\pm}/Z$ bosons in association with jets is a process of
crucial importance at hadron collider experiments. The presence of a vector boson in the
hard scatter means that these interactions occur at a scale that should make perturbative
QCD applicable, and thus it is an excellent channel to test such predictions. Furthermore, many of the
potential discovery channels for the Higgs boson and beyond standard model processes share a final state signature
with the $W^{\pm}/Z+{\rm jets}$~process. It is thus vital for the success of existing and
future hadron collider experiments that this process is understood, and recently there has been a huge
amount of work put into the modelling of this process, with the appearance of
many new Monte Carlo generators that are already widely used at both the Tevatron and LHC. In Sections~\ref{Sec:zjets}
and~\ref{Sec:wjets} the latest $W+{\rm jets}$~and $Z/\gamma^{*}+{\rm jets}$~measurements from the Tevatron are presented, and in Section~\ref{Sec:th}
we discuss the results and implications of some of the theory comparisons that have thus far been made.
\section{$Z/\gamma^{*}+{\rm jets}$~Measurements}
\label{Sec:zjets}
Both the CDF and D0 collaborations have produced $Z/\gamma^{*}+{\rm jets}$~measurements in the $Z/\gamma^{*}\rightarrow e^{+}e^{-}$~channel~\cite{CDFZ, D0Z}, using ${\rm 1.7~fb^{-1}}$
and ${\rm 400~pb^{-1}}$ of Tevatron Run II data respectively. D0 has measured the ratio of $Z/\gamma^{*}\rightarrow e^{+}e^{-} + \geq n {\rm ~jets}$~production cross sections to the
total inclusive $Z/\gamma^{*}\rightarrow e^{+}e^{-}$~cross section for $n = 1 - 4$ and jet $P_{T} > {\rm 20~GeV}$. CDF has measured the inclusive $Z/\gamma^{*}\rightarrow e^{+}e^{-} + \geq n {\rm ~jets}$~differential
cross section as a function of jet $P_{T}$~for $n = 1,2$ and $P_{T} > {\rm 30~GeV}$. In both measurements, $Z/\gamma^{*}\rightarrow e^{+}e^{-}$~events are selected by requiring
two electrons with $P_{T} > {\rm 25~GeV}$ that together form an invariant mass compatible with a
Z resonance. In the D0 analysis only electrons within the central region
of the calorimeter were used, whereas CDF used one central electron and allowed the second one to be either in the central or
forward region of the calorimeter.
Both analyses use ``tag and probe'' methods to extract from the data efficiencies for electron identification, and both correct
for the acceptance of the kinematic and geometrical selection criteria by using simulated signal Monte Carlo samples. In the CDF
analysis, the measured cross section is defined for a limited kinematic range of the $Z/\gamma^{*}\rightarrow e^{+}e^{-}$~decay products (corresponding to the event
selection criteria), and the acceptance factor is defined accordingly. In this way, the sensitivity of the measurement to the
theoretical modelling of the signal is reduced.
\begin{wrapfigure}[18]{L}{0.50\textwidth}
\centerline{\includegraphics[width=0.50\textwidth]{./cooper_ben.fig1.eps}}
\caption{The D0 measured $Z/\gamma^{*}\rightarrow e^{+}e^{-} + \geq n {\rm ~jets}$/$Z/\gamma^{*}\rightarrow e^{+}e^{-}$~cross section ratios for $n = 1-4$ compared to three different predictions: {\sc mcfm} NLO,
a ME-PS matched prediction and {\sc pythia}.}
\label{Fig:D0Z}
\end{wrapfigure}
The dominant sources of background to the $Z/\gamma^{*}+{\rm jets}$~process are those arising from QCD multijet events and $W+{\rm jets}$~events. In the CDF analysis a
data-driven method is used to estimate both these sources by extracting from data the probability for an additional
jet to fake the second leg of a Z boson decay in events with one electron in the final state. In the D0 analysis the QCD background
is extracted from an analysis of the sidebands of the Z peak, and the $W+{\rm jets}$~background is taken from simulated Monte Carlo. In the
D0 case the total background is found to be 3-5\%, whereas in the CDF analysis it is at the level of 12-17\%.
In the D0 measurement,
jets were clustered using a cone algorithm~\cite{D0ZJET} with cone radius $R = 0.5$, requiring jet $P_{T} > {\rm 20~GeV}$ and $|\eta| < 2.5$.
In the CDF measurement, jets
were clustered using a seeded midpoint cone algorithm with cone radius $R = 0.7$, requiring jet
$P_{T} > \rm{30~GeV}$ and $|y|<2.1$. In both analyses data-driven methods were used to correct the transverse momentum of the jets to account for multiple
$p\overline{p}$~interactions and the response of the calorimeter.
In addition, jet reconstruction and identification efficiencies as well as the impact of the finite jet energy resolution
of the detector are accounted for in the cross sections using simulated Monte Carlo samples that have been tuned on data.
In both CDF and D0 analyses the dominant source of systematic uncertainty is that arising from the determination of the
calorimeter jet energy scale. In the CDF measurement the total systematic is $\sim10\%$ (15\%) at low (high) jet~$P_{T}$. In the D0
measurement the total systematic is similar for $n \geq 1,2$ but reaches $\sim50\%$ for $n \geq 4$.
\section{$W+{\rm jets}$~Measurements}
\label{Sec:wjets}
The CDF collaboration has recently published a $W+{\rm jets}$~measurement~\cite{CDFW} in the $W^{\pm}\rightarrow e^{\pm}\nu$~channel using ${\rm 320~pb^{-1}}$ of
Tevatron Run II data, measuring the differential $W\rightarrow e\nu + \geq n {\rm ~jets}$~cross section as a function
of the $n$th highest $E_{T}$~jet above 20 GeV, for $n = 1 - 4$.
\begin{wrapfigure}[26]{R}{0.45\textwidth}
\centerline{\includegraphics[width=0.45\textwidth]{./cooper_ben.fig2.eps}}
\caption{The CDF measured inclusive $Z/\gamma^{*}\rightarrow e^{+}e^{-} + \geq n {\rm ~jets}$~differential cross sections as a function of jet $P_{T}$~
for events with $n \geq 1,2$, compared to NLO {\sc mcfm} predictions that have been corrected to the
hadron level.}\label{Fig:CDFZ}
\end{wrapfigure}
In this analysis $W^{\pm}\rightarrow e^{\pm}\nu$~events with jets were selected by
requiring exactly one central electron with $P_{T} > {\rm 20~GeV}$ along with missing transverse energy $E\!\!\!/_{T} > {\rm 30~GeV}$.
It was also required that the reconstructed $W$ transverse mass satisfies $M_{T}^{W} > {\rm 20~GeV}/c^{2}$, a cut which reduces background
at little expense to the signal.
As in the CDF $Z/\gamma^{*}+{\rm jets}$~analysis, the cross section is defined for a limited kinematic range of the $W$ decay
products equal to this event selection criteria, and the acceptance and efficiencies are computed from simulated signal Monte
Carlo samples accordingly.
Jets in the $W^{\pm}\rightarrow e^{\pm}\nu$~event sample were clustered using the {\sc jetclu}~\cite{JETCLU} cone algorithm with cone size 0.4, requiring jet $E_{T} > {\rm 20~GeV}$ and
$|\eta| < 2.0$. The energy of each jet is corrected for multiple interactions and the calorimeter response.
In addition, once the jet spectra had been corrected for backgrounds, simulated Monte Carlo signal samples were used to correct the jet spectra
to account for the jet reconstruction efficiency and finite calorimeter energy resolution.
The dominant sources of background to the $W+{\rm jets}$~process arise from QCD multijet and $t\overline{t}$~production. In this analysis, the multijet background was
estimated by using an alternative ``antielectron'' selection criteria to select from the data an event sample that could be used to reliably
model the QCD background in the required jet kinematic distributions. The background from $t\overline{t}$, as well as the less important $W\rightarrow \tau\nu$, $Z/\gamma^{*}\rightarrow e^{+}e^{-}$, $WW$ and $W\gamma$
processes, was modelled using simulated Monte Carlo samples.
The total background fraction increases with increasing jet multiplicity and transverse energy, around 10\% at low $E_{T}$, but reaching 90\% at the highest
measured $E_{T}$~in the four jet sample.
At low jet $E_{T}$~the dominant systematic on the measured
cross sections is that arising from the jet energy scale determination, at the level of 5-10\%. However, at higher jet $E_{T}$~the uncertainty on the background
determination is dominant, up to 80\% in the highest jet $E_{T}$~bins.
\section{Theoretical Comparisons}
\label{Sec:th}
\begin{wrapfigure}[25]{R}{0.45\textwidth}
\centerline{\includegraphics[width=0.45\textwidth]{./cooper_ben.fig3.eps}}
\caption{The CDF measured $W\rightarrow e\nu + \geq n {\rm ~jets}$~differential cross section for $n = 1-3$ compared to {\sc mcfm} NLO and ME-PS matched predictions.
}\label{Fig:CDFW}
\end{wrapfigure}
Figures~\ref{Fig:D0Z},\ref{Fig:CDFZ} and~\ref{Fig:CDFW} show the results of the measurements described above compared to various next-to-leading order
(NLO) and leading order (LO) perturbative QCD predictions. All three measurements make comparisons to {\sc mcfm}~\cite{MCFM} NLO predictions with up to 2 partons in the
final state. These calculations are made at the parton level, and as such do not include the effects of hadronization or the underlying event which will
be present in the data.
However, the CDF Z measurement uses a {\sc pythia tune-a}~\cite{PYTHIA, TUNEA} Monte Carlo
sample to derive for each jet $P_{T}$~bin a parton-to-hadron correction factor that is applied to the {\sc mcfm} predictions to approximately account for these
non-perturbative contributions. In Figure~\ref{Fig:CDFZ} the CDF inclusive $Z/\gamma^{*}\rightarrow e^{+}e^{-} + \geq n {\rm ~jets}$~differential cross sections as a function of jet $P_{T}$~are compared
with the corrected {\sc mcfm} predictions, and good agreement is observed between the data and the prediction both in terms of overall rates and in the
reproduction of the spectra shape. However, in Figures~\ref{Fig:D0Z} and~\ref{Fig:CDFW} one can see that the {\sc mcfm} prediction still well reproduces
the data even in the absence of such corrections. In the CDF $W+{\rm jets}$~analysis it was observed that, for this particular jet definition,
the effects of hadronization and the underlying event cancel each other out at the 5-10\% level~\cite{CDFW}.
In Figure~\ref{Fig:D0Z} comparisons of the D0 $Z/\gamma^{*}+{\rm jets}$~data are made to a LO matrix element parton shower matched prediction~\cite{MEPS} (ME-PS)
based on a modified CKKW scheme~\cite{CKKW,MRENNA}, and to {\sc pythia}. These predictions have been normalised to the measured $Z/\gamma^{*}+\geq 1$
jet cross section ratio. One can see that the ME-PS matched predictions better reproduce the rate of additional jets due to the inclusion of tree-level
processes of up to three partons.
In Figure~\ref{Fig:CDFW} comparisons of the CDF $W+{\rm jets}$~data are made to two different ME-PS matched predictions;
{\sc madgraph}~\cite{MADGRAPH} + {\sc pythia} predictions using a modified CKKW scheme~\cite{MRENNA} (SMPR), and {\sc alpgen}\cite{ALPGEN} +
{\sc herwig}~\cite{HERWIG} predictions using the MLM scheme~\cite{MLM} (MLM). These predictions are not normalised to the data, and the limitations of LO calculations
in reproducing absolute rates are clear. However, the SMPR prediction in particular reproduces the shape of the measured spectra well.
The discrepancies
observed at low jet $E_{T}$~in the comparison to MLM are possibly due to the absence of a Tevatron Run II tuned underlying event model in this prediction.
\section{Summary}
\label{Sec:sum}
The recent CDF and D0 measurements of the $W^{\pm}/Z+{\rm jets}$~process open the door for a thorough exploration into the
ability of the latest theoretical predictions to model this important process.
The comparisons to theory that have been made thus far indicate that important and impressive progress has been made with the latest
Monte Carlo generators.
\section{Introduction}
\subsection{Statement of results}
We consider the nonlinear wave equation (NLW) on $\mathbb{T}^d$ for $d=2$ and $d=3$, namely the following equation for an unknown function $u:\mathbb{T}^d\times \mathbb{R}\rightarrow \mathbb{R}$:
\begin{equation}\label{eqn: pre-NLW}
\begin{cases}
\partial^2_tu-\Delta u+u^k=0\\
(u,\partial_tu)|_{t=0}=(u_0,v_0)
\end{cases}
\end{equation}
\noindent
where $k$ is a positive, odd integer. We rewrite \eqref{eqn: pre-NLW} as a first order system:
\begin{equation}\label{NLW}
\begin{cases}
\partial_t u=v\\
\partial_tv=\Delta u-u^k\\
(u,v)|_{t=0}=(u_0,v_0)
\end{cases}.
\end{equation}
\noindent
The system (\ref{NLW}) is a Hamiltonian system with Hamiltonian
\begin{equation}\label{eqn: H}
E(u,v):=\frac{1}{2}\int_{\mathbb{T}^d}\left( |\nabla u|^2+v^2\right)\,\mathrm{d}x+\frac{1}{k+1}\int_{\mathbb{T}^d}u^{k+1}\,\mathrm{d}x.
\end{equation}
The solutions of the system \eqref{NLW} we work with have initial data in the Sobolev space
\[\vec{H}^\sigma(\mathbb{T}^d):= H^\sigma(\mathbb{T}^d)\times H^{\sigma-1}(\mathbb{T}^d)\]
for $d=2,3$.
We consider the transport properties of the Gaussian measure on initial data $\vec{u}=(u,v)$ formally given by
\begin{equation}\label{eqn: formal-meas}
\vec{\mu}_s(\mathrm{d}\vec{u})=Z_s^{-1}e^{-\frac{1}{2}\|(u,v)\|^2_{\vec{H}^{s+1}}}\mathrm{d}\vec{u}.
\end{equation}
The expression \eqref{eqn: formal-meas} is given meaning as a product measure on the Fourier coefficients of the pair $(u,v)$:
\begin{equation}
\vec{\mu}_s(\mathrm{d}u\mathrm{d}v)=Z_s^{-1}\prod_{n\in\mathbb{Z}^d}e^{-\frac{1}{2}\langle n\rangle^{2(s+1)}|\widehat{u}_n|^2}e^{-\frac{1}{2}\langle n\rangle^{2s}|\widehat{v}_n|^2}\,\mathrm{d}\widehat{u}_n\mathrm{d}\widehat{v}_n.
\end{equation}
Equivalently, $\vec{\mu}_s$ is the distribution of the pair of function-valued random variables given by $\omega \mapsto (u^{\omega}, v^{\omega})$, where $u^{\omega}, v^{\omega}$ are the functions on $\mathbb{T}^d$ defined by
\begin{equation}\label{series}
\begin{split}
u^{\omega}(x) &:= \sum_{n\in \mathbb{Z}^d}\dfrac{g_n(\omega)}{\langle n\rangle^{s+1}}e^{i n\cdot x},\\
\quad v^{\omega}(x) &:=\sum_{n\in \mathbb{Z}^d}\dfrac{h_n(\omega)}{\langle n\rangle^s}e^{i n\cdot x}.
\end{split}
\end{equation}
Here, $g_n$, $h_n$, $n\in\mathbb{Z}^d$ are standard complex Gaussian random variables, independent except for the conditions:
\begin{equation}\label{eqn: g_n-conditions}
g_n=\bar{g}_{-n},\quad h_n=\bar{h}_{-n}, n\neq 0,
\end{equation}
and $g_0$, $h_0$ real-valued. By inspection of the series \eqref{series}, it is clear that $\vec{\mu}_s$ is supported on $\vec{H}^{\sigma}$ for $\sigma < s-\frac{d-2}{2}$.
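This can be checked directly (a routine computation, using that the $g_n$, $h_n$ have moments uniform in $n$): by independence,
\begin{equation*}
\mathbb{E}\|u^{\omega}\|_{H^{\sigma}}^2=\sum_{n\in \mathbb{Z}^d}\frac{\langle n\rangle^{2\sigma}}{\langle n\rangle^{2(s+1)}}\,\mathbb{E}|g_n|^2<\infty \quad\Longleftrightarrow\quad 2(s+1)-2\sigma>d \quad\Longleftrightarrow\quad \sigma<s+1-\frac{d}{2},
\end{equation*}
and the same condition gives $\mathbb{E}\|v^{\omega}\|^2_{H^{\sigma-1}}<\infty$; for $d=2,3$ this is precisely the stated range $\sigma<s-\frac{d-2}{2}$.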
Let $s\ge 1$ and $d=2$ or $d=3$. The system \eqref{NLW} is said to be globally well-posed in $\vec{H}^s$ if, for every $(u,v)\in \vec{H}^s$, there is a solution $(u(t),v(t)=\partial_t u(t)):\mathbb{R}_+\rightarrow \vec{H}^s$ to the integral equation
\[
u(t)=\cos(t\sqrt{-\Delta})(u_0)+\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}(v_0)-\int_0^t \frac{\sin((t-s)\sqrt{-\Delta})}{\sqrt{-\Delta}}u^{k}(s)\,\mathrm{d}s\]
which is unique in the class $C(\mathbb{R}_+;H^1(\mathbb{T}^d)\times L^2(\mathbb{T}^d))$. The Fourier multipliers $\cos(\sqrt{-\Delta})$ and $\frac{\sin(\sqrt{-\Delta})}{\sqrt{-\Delta}}$ are defined by
\begin{align*}
\mathcal{F}(\cos(\sqrt{-\Delta})u)(n)&=\cos(|n|)\widehat{u}(n),\\
\mathcal{F}\big(\frac{\sin(\sqrt{-\Delta})}{\sqrt{-\Delta}}u\big)(n)&=\frac{\sin(|n|)}{|n|}\widehat{u}(n),
\end{align*}
where for $u\in \mathcal{D}(\mathbb{T}^d)$, we use both $\mathcal{F}(u)(n)$ and $\widehat{u}(n)$ interchangeably to denote the $n$th Fourier coefficient:
\[\mathcal{F}(u)(n)=\widehat{u}(n)=\frac{1}{(2\pi)^d}\int_{\mathbb{T}^d} e^{-in\cdot x} u(x)\,\mathrm{d}x.\]
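One can check formally that the integral equation above reproduces \eqref{eqn: pre-NLW}: differentiating once in time,
\[\partial_t u(t)=-\sqrt{-\Delta}\sin(t\sqrt{-\Delta})(u_0)+\cos(t\sqrt{-\Delta})(v_0)-\int_0^t \cos((t-s)\sqrt{-\Delta})\,u^{k}(s)\,\mathrm{d}s,\]
so that $\partial_t u(0)=v_0$, and differentiating once more produces the boundary term $-u^k(t)$ from the Duhamel integral together with $\Delta$ applied to each of the three terms of $u(t)$, i.e. $\partial_t^2 u=\Delta u-u^k$.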
For $d=2$, the system is globally well-posed for any odd integer $k$, for a certain range of regularities. For $d=3$, global well-posedness is known for $k=3$ and $k=5$ for certain regularities. A pedagogical proof of well-posedness in the three-dimensional case appears in \cite{tzvetkov-notes}.
\begin{theorem}\label{thm: quasi-invariance}
Let $s>\frac{5}{2}$. Let $k$ be an odd integer such that $3\le k <\infty$ if $d=2$ and $k=3$ or $k=5$ when $d=3$.
For any time $t>0$, the distribution of the solution $\Phi(t)(u,v)=(u(t),v(t))$ of the system \eqref{NLW} is absolutely continuous with respect to the distribution $\vec{\mu}_s$ of the initial data, given by \eqref{series}.
\end{theorem}
The special case of Theorem \ref{thm: quasi-invariance} for $d=2$, $k=3$ and $s$ an even integer was proved by Oh and Tzvetkov in \cite{OT}. This work first introduced the renormalized weighted measure to improve the energy estimate on the support of the Gaussian measure. Gunaratnam, Oh, Tzvetkov and Weber \cite{GOTW} extended the result to dimension 3 ($k=3$, $s\ge 4$). Their proof uses the recent variational formulation of the partition function introduced by Barashkov and Gubinelli for the renormalization of $\phi^4$ field theories, combined with a recent argument of Planchon, Tzvetkov and Visciglia \cite{PTV} for proving quasi-invariance in ``local'' situations, exploiting deterministic growth bounds on the solution.
The somewhat surprising aspect of \cite{GOTW} is that although the weighted measure involves the quartic quantity
\[\int (D^s u_N)^2 u_N^2,\] no renormalization other than the Wick type subtraction \eqref{eqn: Q-def} is necessary, in contrast to the $\phi^4$ model in dimension 3.
Our estimates also yield a result in the spirit of \cite[Theorem 1.5]{PTV} concerning the transport of bounded subsets of $\vec{H}^{\sigma}$ in settings where global well-posedness is not known.
\begin{theorem}\label{thm: bound}
Let $s>5/2$, and $\sigma<s-\frac{1}{2}$ be sufficiently close to $s-\frac{1}{2}$ for $d=2$, and $s>3$, $\frac{3}{2}<\sigma<s-\frac{1}{2}$ sufficiently close to $s-\frac{1}{2}$ for $d=3$.
For each $R>0$, let
\[B_R(0;\vec{H}^\sigma):=\{(u,v):\|(u,v)\|_{\vec{H}^\sigma}<R\}\]
denote the ball of radius $R$ centered at the origin in $\vec{H}^\sigma$. There exists $T=T(d,R)$ and $C(R)>0$ such that for $(u_0,v_0)\in B_R(0;\vec{H}^\sigma)$,
there is a solution $(u(t),v(t))$ of \eqref{NLW} such that
\begin{equation}\label{eqn: det-bound}
\sup_{|t|\le T}\|(u,v)(t)\|_{\vec{H}^\sigma}<C(R).
\end{equation}
Let $A$ be a Borel subset of $\vec{H}^{\sigma}$ such that
\[A\subset \{u\in \vec{H}^\sigma: \|u\|_{\vec{H}^\sigma}<R\}\]
and $\vec{\mu}_s(A)=0$. Then
\[\vec{\mu}_s(\{u(t): u(t) \text{ a solution of \eqref{NLW} with initial data } u_0\in A\})=0\]
for all $t\in [-T,T]$.
\end{theorem}
We remark that the assumptions on $s$ and $\sigma$ in the statement of this result are not optimal. For example, for $d=3$, the restrictions on $\sigma$ imply that $\vec{H}^\sigma$ is an algebra, so that the nonlinearity can be treated by the basic tame estimate
\[\|u^k\|_{H^{\sigma-1}(\mathbb{T}^3)}\lesssim \|u\|_{H^{\sigma-1}(\mathbb{T}^3)}\|u\|^{k-1}_{L^\infty(\mathbb{T}^3)},\]
and so the existence is classical in this case. It is well known that a basic short-time well-posedness result can be proved in the range $s> \frac{3}{2}-\frac{1}{k-1}$ using Strichartz estimates. Since our methods do not reach the optimal range of $s$ in the energy estimate, we do not attempt to optimize $s$ in Theorem \ref{thm: bound}. As remarked in \cite{OT}, \cite{GOTW} it is of interest to consider quasi-invariance in low regularity settings.
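To indicate where the time $T(d,R)$ in Theorem \ref{thm: bound} comes from (a standard sketch, with unoptimized constants): denoting by $\Gamma$ the Duhamel map given by the right-hand side of the integral equation, the boundedness of the linear wave propagator on $\vec{H}^{\sigma}$ together with the tame estimate gives, for $\|(u_0,v_0)\|_{\vec{H}^{\sigma}}\le R$,
\[\sup_{|t|\le T}\|\Gamma(u)(t)\|_{H^{\sigma}}\lesssim \|(u_0,v_0)\|_{\vec{H}^{\sigma}}+T\sup_{|t|\le T}\|u(t)\|_{H^{\sigma}}^{k},\]
so that for $T\lesssim (1+R)^{-(k-1)}$ the map $\Gamma$ is a contraction on a ball of radius comparable to $R$ in $C([-T,T];H^{\sigma})$, which yields the bound \eqref{eqn: det-bound}.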
Our final result concerns the necessity of the dispersion for the results in Theorem \ref{thm: quasi-invariance} and Theorem \ref{thm: bound} to hold. Omitting the Laplacian term in \eqref{NLW} leads, for any initial data $(u_0,v_0)$, to an ordinary differential system whose solution exists globally thanks to the Hamiltonian structure. Moreover, as far as the dependence on $x$ is concerned, the algebra property of the Sobolev space $\vec{H}^\sigma$ implies that the solution remains regular for all positive times if $\sigma>3$. The next result shows that, unlike preservation of Sobolev regularity, the absolute continuity statements in the two previous results depend crucially on the dispersion.
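Concretely, dropping the Laplacian leaves, for each fixed $x\in\mathbb{T}^d$, the planar Hamiltonian system
\[\partial_t \tilde{u}=\tilde{v},\qquad \partial_t\tilde{v}=-\tilde{u}^{k},\]
whose orbits lie on the level sets of the pointwise energy $\frac{1}{2}\tilde{v}^2+\frac{1}{k+1}\tilde{u}^{k+1}$; since $k$ is odd, $\tilde{u}^{k+1}\ge 0$ and these level sets are compact, which gives the global existence claimed above.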
\begin{theorem}\label{thm: dispersionless}
For $t>0$, let $(\tilde{u}(t),\tilde{v}(t))$ be the solution at time $t$ of the system \eqref{eqn: ODE} with initial data $(u(0),v(0))=(u^\omega,v^\omega)$ distributed according to the random series \eqref{series}. Then $(\tilde{u}(t),\tilde{v}(t))$ is not absolutely continuous with respect to $(u(0),v(0))$.
\end{theorem}
The analogue of Theorem \ref{thm: dispersionless} for a Schr\"odinger-type equation in dimension $d=1$ was proven by Oh, Tzvetkov, and the first author in \cite{OST}. The difference in the nonlinear wave case is that we do not have an explicit solution of the ODE that appears when the Laplacian term $\Delta u$ is left out of \eqref{NLW}. The proof of Theorem \ref{thm: dispersionless} appears in Section \ref{sec: dispersionless}.
\subsection{Motivation and previous literature}
The current work is motivated by the results of Oh-Tzvetkov \cite{OT} and Gunaratnam-Oh-Tzvetkov-Weber \cite{GOTW} on the transport of Gaussian measures by the flow of the 2d, respectively 3d, cubic nonlinear wave equations. In particular, we address a number of questions mentioned in the introduction to these papers.
The study of the transport properties of Gaussian measures by Hamiltonian dispersive dynamics was recently initiated by N. Tzvetkov in \cite{T1}. The initial motivation in this work was the study of long term estimates in high Sobolev norms. Another motivating question is the existence of invariant measures supported on Sobolev spaces of regularity higher than the formal Gibbs measures.
The paper \cite{T1} follows a long line of work on the dynamics of Hamiltonian dispersive equations with random initial data. This goes back at least to the foundational paper \cite{LRS} by Lebowitz, Rose and Speer (LRS), which constructs measures absolutely continuous with respect to circular Brownian motion. These were expected to be invariant under the flow of the nonlinear Schr\"odinger equation on the torus. J. Bourgain used his $X^{s,b}$ spaces to construct dynamics on the support of these measures and gave a proof of invariance \cite{bourgain1}. Bourgain then applied his method to a number of other equations \cite{bourgain2, bourgain3, bourgain4}.
Quasi-invariance does not have the dynamical implications of invariance, but it is nonetheless a delicate property of the flow, implying for example the propagation of fine regularity properties of the initial data. It is a much stronger property than persistence of the Sobolev regularity implied by the usual well-posedness results. Indeed, it was noted in \cite{OST} that quasi-invariance implies propagation of the (a.s. constant) modulus of continuity of the Gaussian initial data at any point, and this fact was used to show that the dispersion in the nonlinear Schr\"odinger equation is essential for quasi-invariance to hold.
While several results after \cite{T1} used modified measures to obtain a favorable energy estimate, the paper \cite{OT} was the first to consider the addition to the energy of a formally ``infinite'' term. Namely, Oh-Tzvetkov modify the base measure by adding a term which does not converge as the Fourier cutoff is removed on the support of the base Gaussian measure, requiring a renormalization. The renormalization of the modified energy there is based on an argument akin to Nelson's argument for the construction of $P(\phi)_2$ quantum fields \cite{N}.
\paragraph{This paper's contributions.}
The main differences with previous works, in particular \cite{GOTW}, are as follows:
\begin{enumerate}
\item The key energy estimates in \cite{OT, GOTW} depend on an initial integration by parts (see \cite[Equations (3.5)-(3.6)]{GOTW}). This integration by parts removes the most singular term in the derivative of the energy. It is also in this step of the argument that the correction needed to define the weighted measure is identified.
As pointed out in \cite{GOTW}, when $s$ is not an integer, we cannot integrate by parts and obtain exact cancellation. The main tools here are paraproducts and an expansion of the relevant multiplier. Since we do not require $s$ to be an integer, we automatically lower the accessible regularities for the measure $\vec{\mu}_s$.
\item To construct the weighted measure $\vec{\rho}_s$, one needs uniform control of the partition function of the truncated measures $\vec{\rho}_{s,N}$. As in \cite{GOTW}, we achieve this by pathwise bounds on the terms in a stochastic optimization problem involving a measure perturbed by a ``control drift''. In our case, the relevant expression involves higher powers of the drift. To ensure positivity, we must modify the weighted measure to incorporate a high power of the energy.
\end{enumerate}
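As a point of orientation for the first item (a model deterministic bound only; the body of the paper requires a probabilistic refinement via paraproducts), commutators of this type are classically controlled by Kato--Ponce estimates such as
\[\big\|D^s(fg)-fD^sg\big\|_{L^2(\mathbb{T}^3)}\lesssim \|\nabla f\|_{L^{\infty}}\|D^{s-1}g\|_{L^2}+\|D^{s}f\|_{L^2}\|g\|_{L^{\infty}},\]
which, applied iteratively with $f$ a power of $u_N$, exhibits the gain of one derivative over the crude bound for $D^s(u_N^k)-ku_N^{k-1}D^su_N$.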
\subsection{Outline of proof of Theorems \ref{thm: quasi-invariance} and \ref{thm: bound}}
Our proof proceeds along the lines of that in \cite{OT,GOTW}, following a general methodology introduced in \cite{T1}. Tzvetkov's method is based on the construction of a measure $\vec{\rho}_s$ which is mutually absolutely continuous with respect to the Gaussian measure $\vec{\mu}_s$ of interest, but for which the time evolution of sets can be controlled effectively. One then needs a suitable energy estimate on the support of the renormalized measure.
We start by replacing the measure $\mu_s$ by an equivalent measure more suitable to the analysis of the nonlinear wave flow.
\begin{definition}
Let $g_n$, $h_n$, $n\in\mathbb{Z}^3$ be standard complex Gaussian random variables satisfying \eqref{eqn: g_n-conditions} such that $g_0$ and $h_0$ are real valued. Define $\mathrm{d}\vec{\nu}_s$ to be the distribution of the random series
\begin{align}
u^\omega(x) &= g_0+ \sum_{n\in \mathbb{Z}^3\setminus \{0\}}\frac{g_n(\omega)}{(|n|^2+|n|^{2s+2})^{\frac{1}{2}}}e^{i n\cdot x}, \label{eqn: useries}\\
v^\omega(x)&= \sum_{n\in\mathbb{Z}^3}\frac{h_n(\omega)}{(1+|n|^{2s})^{\frac{1}{2}}}e^{in\cdot x}. \label{eqn: vseries}
\end{align}
\end{definition}
For each $N\ge 1$, let $\Phi_N(t)$ denote the time $t$ flow of the following approximation of the flow of the equation \eqref{NLW}:
\[\begin{cases}
\partial_t u =v\\
\partial_t v = \Delta u -\pi_N((\pi_N u)^k)\\
(u,v)|_{t=0}= (u_0,v_0),
\end{cases}\]
where by $\pi_N$ we denote the projection \eqref{eqn: dirichlet-proj} onto frequencies less than $N$.
A change of variables formula (see \cite[Proposition 4.1]{T1} or \cite[Lemma 5.1]{OT}) then implies
\begin{equation}\label{eqn: mu-COV}
\int_{\Phi_N(t)(A)}\vec{\nu}_s(\mathrm{d}\vec{u})=Z_{s,N}^{-1}\int_{A}e^{-\frac{1}{2}\|\Phi_N(t)(\pi_N u)\|^2}\mathrm{d} L_N\otimes \vec{\nu}_{s,N}^\perp(\mathrm{d}\vec{u}).
\end{equation}
In \eqref{eqn: mu-COV}, we have used the notation
\[\|(u,v)\|^2=\int_{\mathbb{T}^3}(D^s v)^2\,\mathrm{d}x+\int_{\mathbb{T}^3}(D^{s+1} u)^2\,\mathrm{d}x +\int_{\mathbb{T}^3} |\nabla u|^2\,\mathrm{d}x+\int_{\mathbb{T}^3} v^2\,\mathrm{d}x+\left(\int_{\mathbb{T}^3} u\,\mathrm{d}x\right)^2.\]
Here, $D^s u$ denotes\footnote{For $s<0$, we define $\widehat{D^su}(0)=0$.} the action on $u$ of the Fourier multiplier with symbol $|n|^s$:
\[D^s u:= |\nabla|^s u =\mathcal{F}^{-1}(|n|^s\widehat{u}).\]
The measure
\[L_N\otimes \vec{\nu}_{s,N}^\perp(\mathrm{d}\vec{u})\]
appearing in \eqref{eqn: mu-COV} is the product of Lebesgue measure $L_N$ on
\[\mathcal{E}_N\times \mathcal{E}_N:=(\pi_N L^2(\mathbb{T}^3))^2\]
and the projection $\vec{\nu}_{s,N}^\perp=(\mathrm{id}-\pi_N)_*\vec{\nu}_{s,N}$ of $\vec{\nu}_{s,N}$ on
\[\mathcal{E}_N^\perp\times \mathcal{E}_N^\perp:=((\mathrm{id}-\pi_N)L^2(\mathbb{T}^3))^2.\]
$Z_{s,N}$ is a normalization factor.
Differentiating \eqref{eqn: mu-COV} and using the invariance of the Hamiltonian, we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\vec{\nu}_s(\Phi_N(t)(A))=-Z_{s,N}^{-1}\int_{A}\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\Phi_N(t)(\pi_N u)\|^2\, e^{-\frac{1}{2}\|\Phi_N(t)(\pi_N u)\|^2}\,\mathrm{d} L_N\otimes \vec{\nu}_{s,N}^\perp(\mathrm{d}u).\]
Denote by
\[u_N:= \pi_N u, \quad v_N:=\pi_N v\]
the projections of the solution on Fourier frequencies less than or equal to $N$. The derivative of the energy is
\begin{align}
\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\|\Phi_N(t)(\pi_N u)\|^2&=\partial_t \left(\frac{1}{2}\int_{\mathbb{T}^3} (D^s v_N)^2+ \frac{1}{2}\int_{\mathbb{T}^3}(D^{s+1} u_N)^2+\frac{1}{2}\big(\int_{\mathbb{T}^3}u_N\big)^2\right)\nonumber\\
&\quad+\partial_t\left( E(u_N,v_N)-\frac{1}{k+1}\int_{\mathbb{T}^3} u_N^{k+1} \right)\nonumber\\
&= \int_{\mathbb{T}^3} D^{2s}v_N (-u_N^k)+\int_{\mathbb{T}^3} u_N\int_{\mathbb{T}^3}v_N-\frac{\mathrm{d}}{\mathrm{d}t}\big(\frac{1}{k+1}\int_{\mathbb{T}^3} u_N^{k+1}\big). \label{derivative-energy}
\end{align}
We now rewrite the first quantity in \eqref{derivative-energy} as
\begin{align}
\int u_N^k D^{2s}v_N =&~ k\int D^s v_N D^s u_N u_N^{k-1} \nonumber \\
&+\int D^s v_N (D^su_N^k-k u_N^{k-1} D^s u_N) \nonumber \\
=&~ \frac{k}{2} \partial_t \int (D^s u_N)^2 u_N^{k-1} - \frac{k(k-1)}{2}\int (D^s u_N)^2v_N u_N^{k-2} \label{eqn: pre}\\
&+\int D^s v_N (D^su_N^k-k u_N^{k-1} D^s u_N). \label{eqn: commutator}
\end{align}
The quantity $(D^s u_N)^2$ is divergent on the support of $\vec{\nu}_s$, and so requires a renormalization. This was one of the innovations in \cite{OT}. Following the notation introduced there, we define
\begin{equation}\label{eqn: Q-def}
Q_{s,N}(f):=(D^s f)^2-\sigma_N,
\end{equation}
with
\[\sigma_N := \mathbb{E}_{\vec{\nu}_s}[(D^s \pi_N u)^2]\sim N.\]
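Indeed, using the series \eqref{eqn: useries} with $d=3$ (up to the normalization convention for the Gaussians),
\[\sigma_N=\sum_{0<|n|\le N}\frac{|n|^{2s}}{|n|^{2}+|n|^{2s+2}}\,\mathbb{E}|g_n|^2\sim \sum_{0<|n|\le N}\frac{1}{|n|^{2}}\sim N,\]
so the subtracted constant diverges linearly in the frequency cutoff, and $(D^s\pi_N u)^2$ itself has no limit on the support of $\vec{\nu}_s$.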
Defining the \emph{renormalized energy} by
\begin{equation}\label{eqn: R-energy}
\mathscr{E}_{s,N}(u,v):= \frac{1}{2}\big(\int_{\mathbb{T}^3} (D^s v_N)^2+\int_{\mathbb{T}^3} (D^{s+1} u_N)^2+\big(\int_{\mathbb{T}^3} u_N\big)^2\big) +\frac{k}{2}\int_{\mathbb{T}^3} Q_{s,N}(u_N)u_N^{k-1},
\end{equation}
the result of the above computations is the following expression for the time derivative of $\mathscr{E}_{s,N}(u,v)$:
\begin{equation}
\begin{split}\label{eqn: dt-energy}
\partial_t \mathscr{E}_{s,N}(\pi_N\Phi_N(t)(u,v))=& \frac{k(k-1)}{2}\int_{\mathbb{T}^3} Q_{s,N}(u_N)v_N u_N^{k-2}\\
&-\int_{\mathbb{T}^3} D^s v_N (D^su_N^k-k u_N^{k-1} D^s u_N)+\int_{\mathbb{T}^3} u_N\int_{\mathbb{T}^3}v_N.
\end{split}
\end{equation}
The idea in \cite{T1} is to pass from $\vec{\nu}_s$ to the measure
\[e^{-R_s(u,v)}\mathrm{d}\vec{\nu}_s,\]
where $R_s(u,v)$ is a limit of the terms
\begin{equation}\label{eqn: RskN}
R_{k,s,N}:=\frac{k}{2}\int_{\mathbb{T}^3} Q_{s,N}(u_N)u_N^{k-1}+\frac{1}{k+1}\int_{\mathbb{T}^3} u_N^{k+1}
\end{equation}
appearing in the renormalized energy \eqref{eqn: R-energy}. We must then estimate the time derivative \eqref{eqn: dt-energy}.
The following two propositions contain the main technical results needed to close the argument.
\begin{proposition}\label{lem: Lp-bound}
Let $s> \frac{5}{2}$. Define
\[E_N(u,v):=\frac{1}{2}\int |\nabla u_N|^2+\frac{1}{2}\int v_N^2+\frac{1}{k+1}\int u_N^{k+1} .\]
Then for $p<\infty$ there exists $q>0$ and $C_p>0$ such that
\begin{equation*}
\sup\limits_{N \in\mathbb{N}} \norm{e^{-R_{k,s,N}(u)-E^q_N(u,v)}}_{L^p(\vec{\nu}_s)} \leq C_p<\infty\label{eqn: Lp-bound}
\end{equation*}
and moreover,
\begin{equation}\label{eqn: Lp-conv}
\lim\limits_{N\rightarrow\infty} e^{-R_{k,s,N}(u)-E^q_N(u,v)}= e^{F_{s,k}(u,v)}\quad \text{in } L^p(\vec{\nu}_s).
\end{equation}
In particular, for any $\sigma<s-\frac{1}{2}$, the restrictions to $\vec{H}^\sigma$ of the measures
\[\mathrm{d}\vec{\rho}_{s,N}=\mathcal{Z}_{s,N}^{-1}e^{-R_{k,s,N}(\vec{u})-E^q_N(\vec{u})}\,\vec{\nu}_s(\mathrm{d}\vec{u})\]
converge in total variation to a measure $\vec{\rho}_s$.
\end{proposition}
\begin{proposition}\label{theorem:energy_estimate}
Let $s>\frac{5}{2}$. Given $R_0>0$ there exists $C=C(R_0)$ such that, for all $p\ge 1$ finite, we have
\begin{equation}\label{eqn: energy-estimate}
\mathbb{E}_{\vec{\rho}_s}[\mathbf{1}_{B_{R_0}}(\vec{u})\left|\partial_t \mathscr{E}_{s,N}(\pi_N\Phi_N(t)(\vec{u}))|_{t=0}\right|^p]^{\frac{1}{p}}\leq Cp
\end{equation}
where $C$ can be taken independent of $N$.
\end{proposition}
As in \cite{GOTW}, we obtain the estimates necessary to the construction of our measure from a variational bound introduced in \cite{BG}.
\paragraph{Finishing the proof of Theorem \ref{thm: quasi-invariance}.}We now indicate how to finish the proof given Propositions \ref{lem: Lp-bound} and \ref{theorem:energy_estimate}. Since this part of the argument requires no modification from \cite{GOTW}, we refer the reader to that paper for details.
Let $\sigma<s-\frac{3}{2}$, $d=3$ and $k=3$ or $k=5$. Fix $R>0$ and let $A\subset B_R\subset \vec{H}^\sigma$ be a Borel measurable set (for the topology induced by the norm) such that
\[\vec{\rho}_{s,N}(A)=0.\]
When $k=3$, a simple application of Gronwall's lemma and conservation of the energy\footnote{See \cite[Lemma 2.5]{GOTW}.} shows that for any $T>0$, there is a radius $C(T,R)$ such that for $|t|\le T$,
\begin{equation}\label{eqn: bound-2}
\Phi_N(t)(B_R)\subset B_{C(T,R)}.
\end{equation}
The estimate \eqref{eqn: bound-2} in case $k=5$ is proved in Appendix \ref{sec: critical}.
By the change of variables formula, we have
\[\vec{\rho}_{s,N}(\Phi_N(t)(A))=\mathcal{Z}_{s,N}^{-1}\int_{A}e^{-\mathscr{E}_{s,N}(\pi_N\Phi_N(t)(u,v))-E_N(u,v)-E^q_N(u,v)}\mathrm{d} L_N\otimes \vec{\nu}_{s,N}^\perp(\mathrm{d}u).\]
Differentiating and using \eqref{eqn: energy-estimate}, we find
\[\frac{\mathrm{d}}{\mathrm{d}t}\vec{\rho}_{s,N}(\Phi_N(t)(A))\le C_R p\cdot \vec{\rho}_{s,N}(\Phi_N(t)(A))^{1-\frac{1}{p}}.\]
This inequality is equivalent to
\begin{equation}\label{eqn: deriv-ineq}
\frac{\mathrm{d}}{\mathrm{d}t}\vec{\rho}_{s,N}(\Phi_N(t)(A))^{\frac{1}{p}}\le C_R.\end{equation}
The linear dependence on $p$ of the right-hand side of \eqref{eqn: energy-estimate} plays an essential role here. Integrating \eqref{eqn: deriv-ineq}, we find
\begin{align*}
\vec{\rho}_{s,N}(\Phi_N(t)(A))&\le (\vec{\rho}_{s,N}(A)^{1/p}+C_Rt)^p\\
&\le 2^p \vec{\rho}_{s,N}(A) + C_R^p 2^pt^p.
\end{align*}
Taking $t\le \frac{1}{4C_R}$ and $p$ large enough, we find that
\begin{equation}\label{eqn: eps-bound}
\vec{\rho}_{s,N}(\Phi_N(t)(A))<\varepsilon,
\end{equation}
uniformly in $N$, for any $A\subset B_R\subset \vec{H}^\sigma(\mathbb{T}^3)$.
Theorem \ref{thm: quasi-invariance} follows from \eqref{eqn: eps-bound} by a soft approximation argument identical to that in \cite[Section 5.2]{OT}.
For Theorem \ref{thm: bound}, the estimate \eqref{eqn: bound-2} is replaced by \eqref{eqn: det-bound}, where $T$ now depends on $R$, but otherwise the proof proceeds as before.
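The passage from the integrated bound to $2^p \vec{\rho}_{s,N}(A) + C_R^p 2^pt^p$ rests on the elementary convexity inequality $(a+b)^p\le 2^{p-1}(a^p+b^p)$ for $a,b\ge 0$ and $p\ge 1$. The following quick numerical spot-check is illustrative only (the sample ranges are arbitrary choices of ours):

```python
import random

# Spot-check of the convexity bound used above: for a, b >= 0 and p >= 1,
#   (a + b)^p <= 2^{p-1} (a^p + b^p),
# which yields (rho^{1/p} + C_R t)^p <= 2^p rho + (2 C_R t)^p.
random.seed(0)
worst = 0.0
for _ in range(1000):
    a = random.uniform(1e-3, 5.0)
    b = random.uniform(1e-3, 5.0)
    p = random.choice([2, 3, 5, 10, 20])
    worst = max(worst, (a + b) ** p / (2 ** (p - 1) * (a ** p + b ** p)))
print(worst)  # stays <= 1, with equality approached when a = b
```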
\section{Basic estimates}
We use the notation $A\lesssim B$ to mean that $A\le CB$ where $C$ is an unspecified constant independent of $N$ whose exact value is unimportant for the argument.
\subsection{Littlewood-Paley theory} We denote the Fourier transform of a function $u\in L^1(\mathbb{T}^3)$ by
\[\widehat{u}(n)=\frac{1}{(2\pi)^3}\int u(x)e^{-in\cdot x}\,\mathrm{d}x.\]
The Fourier transform of $u\in \mathcal{D}'(\mathbb{T}^3)$, the space of distributions on $\mathbb{T}^3$, is defined in the usual fashion by duality. As we have already stated above, we denote by $\pi_N$ the Dirichlet truncation of a distribution in Fourier space:
\begin{equation}\label{eqn: dirichlet-proj}
\pi_N u =\sum_{|n|\le N} \widehat{u}(n)e^{in\cdot x}.
\end{equation}
We make extensive use of Littlewood-Paley theory in its dyadic decomposition incarnation. See \cite{BDC} for a thorough treatment. Following these authors, we let $B(\xi,r)$ denote the ball in $\mathbb{R}^3$ of radius $r$ around a point $\xi$ in phase space.
Consider functions $\chi$, $\tilde{\chi}$ such that
\begin{align*}
\mathrm{supp}\,\chi&\subset B\big(0,\frac{4}{3}\big),\\
\mathrm{supp}\,\tilde{\chi}&\subset B\big(0,\frac{8}{3}\big)\setminus B\big(0,\frac{3}{4}\big),
\end{align*}
and such that $\chi(\xi)+\sum_{j\ge 0}\tilde{\chi}(2^{-j}\xi)=1$ for every $\xi\in\mathbb{R}^3$.
We define $\psi_{-1}=\chi$ and
\[\psi_j(\cdot)=\tilde{\chi}(2^{-j}\cdot), \quad j\ge 0.\]
We also define $\mathbf{P}_j$, the \textit{Littlewood-Paley projector}, associated to symbol $\psi_j$ by
\[\mathbf{P}_j u(x)=\big(\psi_j(\nabla)u\big)(x)=\sum_{n\in \mathbb{Z}^3} \psi_j(n)\widehat{u}(n)e^{in\cdot x}.\]
We define the (low-high) paraproduct $T_ab$ by
\[T_ab:= \sum_{j=-1}^{\infty}S_{j-1}a\mathbf{P}_jb,\] where
\[S_j := \sum_{k\leq j-1}\mathbf{P}_k.\]
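For readers who prefer a concrete model, the following self-contained sketch implements a discrete analogue of the dyadic blocks $\mathbf{P}_j$, the partial sums $S_j$, and the low-high paraproduct $T_ab$ via the FFT. It is a sketch under simplifying assumptions: it uses sharp frequency cutoffs in place of the smooth symbols $\chi$, $\tilde{\chi}$, and a one-dimensional grid rather than $\mathbb{T}^3$; all names are ours. It checks that the blocks resolve the identity and that, for a genuinely low-high pair, $T_ab$ recovers the full product.

```python
import numpy as np

N = 256
x = 2 * np.pi * np.arange(N) / N
n = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies 0..127, -128..-1

def P(j, u):
    # sharp dyadic block: P_{-1} keeps |n| <= 1, P_j keeps 2^j < |n| <= 2^{j+1}
    if j == -1:
        mask = np.abs(n) <= 1
    else:
        mask = (np.abs(n) > 2 ** j) & (np.abs(n) <= 2 ** (j + 1))
    return np.real(np.fft.ifft(np.fft.fft(u) * mask))

def S(j, u):
    # S_j = sum_{k <= j-1} P_k, i.e. frequencies |n| <= 2^j (zero for j < 0)
    if j < 0:
        return np.zeros_like(u)
    return np.real(np.fft.ifft(np.fft.fft(u) * (np.abs(n) <= 2 ** j)))

def T(a, b):
    # low-high paraproduct T_a b = sum_j S_{j-1} a * P_j b
    return sum(S(j - 1, a) * P(j, b) for j in range(-1, 8))

rng = np.random.default_rng(1)
u = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(N)) * (np.abs(n) <= 100)))

recon = sum(P(j, u) for j in range(-1, 8))   # the blocks resolve the identity

a = np.cos(2 * x)    # low frequency |n| = 2
b = np.cos(32 * x)   # high frequency |n| = 32: here T_a b recovers a*b exactly
print(np.allclose(recon, u), np.allclose(T(a, b), a * b))
```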
\subsection{Basic estimates for Besov-type norms}
We recall the definition of Besov spaces through the Littlewood-Paley decomposition. For $s\in \mathbb{R}$ and $1\le p,q\le \infty$, the Besov space $B^s_{p,q}(\mathbb{T}^3)$ is the set of distributions $u$ on $\mathbb{T}^3$ such that
\[\|u\|_{B^s_{p,q}(\mathbb{T}^3)}=\left\|\big(2^{sj}\|\mathbf{P}_j u\|_{L_x^p}\big)_{j\ge -1}\right\|_{\ell^q_j}<\infty.\]
For $p=q=2$, this corresponds to the Sobolev spaces $H^s(\mathbb{T}^3)$, while for $s>0$, $s\notin \mathbb{Z}$, this is the H\"older space $C^s(\mathbb{T}^3)$. We define H\"older spaces for negative $s$ or for $s\in \mathbb{Z}$ by setting $C^s: = B_{\infty,\infty}^{s}$. We again refer to \cite{BDC,MWX} for details.
Following \cite{GOTW}, we denote
\[\vec{B}^s_{p,q}(\mathbb{T}^3)=B^s_{p,q}(\mathbb{T}^3)\times B^{s-1}_{p,q}(\mathbb{T}^3).\]
\begin{lemma}
The following estimates hold:
\begin{enumerate}[(i)]
\item Let $s_0,s_1,s\in\mathbb{R}$ and $\nu\in[0,1]$ be such that $s=(1-\nu)s_0+\nu s_1$. Then,
\begin{equation}\label{EQU: Interpolation}
\norm{u}_{H^s}\lesssim \norm{u}_{H^{s_0}}^{1-\nu} \norm{u}_{H^{s_1}}^{\nu}.
\end{equation}
\item Let $s_0,s_1\in\mathbb{R}$ and $p_0,p_1,q_0,q_1\in[1,\infty]$. Then,
\begin{align}\label{EQU: Intermediate embeddings}
\begin{split}
\norm{u}_{B_{p_0,q_0}^{s_0}}&\lesssim \norm{u}_{B_{p_1,q_1}^{s_1}}\quad \textnormal{for } s_0\leq s_1, p_0\leq p_1 \textnormal{ and } q_0\geq q_1, \\
\norm{u}_{B_{p_0,q_0}^{s_0}} &\lesssim \norm{u}_{B_{p_0,\infty}^{s_1}} \quad \textnormal{for } s_0 < s_1,\\
\norm{u}_{B_{p_0,\infty}^0}&\lesssim \norm{u}_{L^{p_0}}\lesssim \norm{u}_{B_{p_0,1}^0}.
\end{split}
\end{align}
\item Let $s>0$. Then,
\begin{equation}\label{EQU: Algebra property}
\norm{uv}_{C^{s}}\lesssim \norm{u}_{C^{s}}\norm{v}_{C^{s}}.
\end{equation}
\item Let $1\leq p_1\leq p_0\leq \infty$, $q\in[1,\infty]$ and $s_1=s_0+d\left(\frac{1}{p_1}-\frac{1}{p_0}\right)$. Then,
\begin{equation}\label{EQU: Besov embedding}
\norm{u}_{B^{s_0}_{p_0,q}}\lesssim \norm{u}_{B^{s_1}_{p_1,q}}.
\end{equation}
\item Let $s\in\mathbb{R}$ and $p,p',q,q'\in[1,\infty]$ be such that $\frac{1}{p}+\frac{1}{p'}=1$ and $\frac{1}{q}+\frac{1}{q'}=1$. Then,
\begin{equation}\label{EQU: Duality}
\left|\int_{\mathbb{T}^d} uv\,dx\right| \leq\norm{u}_{B^{s}_{p,q}}\norm{v}_{B^{-s}_{p',q'}}.
\end{equation}
\item Let $p,p_0,p_1,p_2,p_3\in[1,\infty]$ and $q\in[1,\infty]$ be such that $\frac{1}{p_0}+\frac{1}{p_1}=\frac{1}{p}$ and $\frac{1}{p_2}+\frac{1}{p_3}=\frac{1}{p}$. Then for $s>0$,
\begin{equation}\label{EQU: Fractional Leibniz rule}
\norm{uv}_{B^{s}_{p,q}}\lesssim \norm{u}_{B^{s}_{p_0,q}}\norm{v}_{L^{p_1}}+\norm{v}_{B^{s}_{p_2,q}}\norm{u}_{L^{p_3}}.
\end{equation}
\item Let $m>0$ be an integer, $s>0$ and $q,p,p_0,p_1\in[1,\infty]$ satisfy $\frac{1}{p_0}+\frac{1}{p_1}=\frac{1}{p}$. Then
\begin{equation}\label{EQU: Fracational Leibniz rule cor}
\norm{u^{m+1}}_{B^{s}_{p,q}}\lesssim \norm{u^m}_{L^{p_0}}\norm{u}_{B^s_{p_1,q}}.
\end{equation}
\item Let $s_0<0<s_1$ be such that $s_0+s_1>0$. Then,
\begin{equation}\label{EQU: Multiplicative estimate}
\norm{uv}_{C^{s_0}}\lesssim \norm{u}_{C^{s_0}}\norm{v}_{C^{s_1}}.
\end{equation}
\end{enumerate}
\end{lemma}
We refer to \cite{BDC, MWX} for proofs. We also note the Bernstein inequality:
\begin{equation}\label{eqn: bernstein}
\|\mathbf{P}_j u\|_{L^q}\lesssim 2^{3j(\frac{1}{p}-\frac{1}{q})}\|\mathbf{P}_j u\|_{L^p}
\end{equation}
for $p\le q\le \infty$.
\subsection{Wiener chaos and hypercontractivity}
The hypercontractive estimate is a key tool to estimate nonlinear functions of Gaussian random variables. We recall this estimate here. See S. Janson's book \cite{janson} for more information on hypercontractivity and Wiener chaos spaces.
Let $X_n$, $n\ge 1$ be a sequence of i.i.d. standard Gaussian random variables. We define $\mathcal{H}_k$, the homogeneous Wiener chaos of order $k$ to be the closed span of the polynomials
\begin{equation}\label{eqn: hermite-product}
\prod_{n=1}^\infty H_{k_n}(X_n),
\end{equation}
where $H_j$ is the Hermite polynomial of degree $j$ and $k=\sum_{n=1}^\infty k_n$. Note that since $H_0(x)=1$, the formally infinite product in \eqref{eqn: hermite-product} has only finitely many non-trivial factors. We have
\[L^2(\Omega,\mathcal{F},\mathbb{P})=\oplus_{k=0}^\infty \mathcal{H}_k,\]
where $\mathcal{F}$ is the $\sigma$-algebra generated by the Gaussian random variables $X_n$, $n\ge 1$.
The next lemma gives the crucial hypercontractivity estimate. See \cite{janson} for a proof.
\begin{proposition} Let $p\ge 2$ be finite. If $X\in \mathcal{H}_k$, then
\begin{equation}\label{eqn: wiener-chaos}
\big( \mathbb{E}[|X|^p]\big)^{\frac{1}{p}}\le (p-1)^{\frac{k}{2}}\big(\mathbb{E}[|X|^2]\big)^{\frac{1}{2}}.
\end{equation}
\end{proposition}
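The estimate \eqref{eqn: wiener-chaos} can be checked exactly on a simple chaos element. For $X=g_1g_2$ with $g_1,g_2$ independent standard Gaussians, $X$ lies in the second chaos $\mathcal{H}_2$, and for even $p$ the moments are explicit: $\mathbb{E}|X|^p=((p-1)!!)^2$ and $\mathbb{E}X^2=1$. The following sketch (names and exponent choices are ours) verifies the bound with these closed-form moments:

```python
from math import prod

# Exact-moment check of hypercontractivity for X = g1*g2 (chaos order k = 2):
# for even p, E|X|^p = (E g^p)^2 = ((p-1)!!)^2, and the claimed bound is
# (E|X|^p)^{1/p} <= (p-1)^{k/2} (E X^2)^{1/2} = p - 1.
def double_factorial(m):
    return prod(range(m, 0, -2)) if m > 0 else 1

lhs_vals, rhs_vals = [], []
for p in (2, 4, 6, 8, 10):
    lhs = (double_factorial(p - 1) ** 2) ** (1.0 / p)  # (E|X|^p)^{1/p}
    rhs = float(p - 1)                                  # (p-1)^{k/2}(E X^2)^{1/2}
    lhs_vals.append(lhs)
    rhs_vals.append(rhs)
print(list(zip(lhs_vals, rhs_vals)))
```

For $p=2$ the two sides agree, and for larger $p$ the bound is comfortably off-saturated, consistent with the square-root growth used in Lemma \ref{lemma:chaos_estimate}.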
\section{Energy estimate for fractional $s$}
In this section, we derive Proposition \ref{theorem:energy_estimate}, the energy estimate for fractional $s$. This is the analogue of \cite[estimate 3]{GOTW} for general nonlinearities and fractional regularity $s>5/2$.
The possibility of fractional $s$ makes our derivation more involved, since we cannot integrate by parts as in \cite[Equation 1.25]{OT} and \cite[Equation 3.5]{GOTW} to remove the most singular spatial derivatives in the time derivative of the energy. Instead, we perform a higher order expansion to exploit the cancellation in the commutator term $ku_N^{k-1}D^su_N-D^su_N^k$ appearing in \eqref{eqn:energy_derivative}.
Recall that
\begin{equation}\label{eqn:energy_derivative}
\begin{split}
\partial_t \mathscr{E}_{s,N}(\pi_N\Phi_N(t)(\vec{u}))|_{t=0} &= \frac{k(k-1)}{2}\int Q_{s,N}(u_N)v_Nu_N^{k-2} \\
&+ \int D^sv_N(ku_N^{k-1}D^su_N-D^su_N^k) + \int u_N \int v_N.
\end{split}
\end{equation}
\begin{proposition}\label{theorem:energy_derivative}
For $s>\frac{5}{2}$, there exists $\sigma < s-\frac{1}{2}$ such that, for $\varepsilon$ sufficiently small,
\begin{equation}\label{ineq:energy_derivative}
\begin{split}
&\left|\partial_t \mathscr{E}_{s,N}(\pi_N\Phi_N(t)(\vec{u}))|_{t=0} \right|\\
\lesssim ~&(1+\norm{\vec{u}_N}_{\vec{H}^{\sigma}}^{k-1})(1+\norm{Q_{s,N}(u_N)}_{C^{-1-\varepsilon}}
+\norm{u_N}_{C^{s-\frac{1}{2}-\varepsilon}}\norm{v_N}_{C^{s-\frac{3}{2}-\varepsilon}}\\
&+\norm{D^{s-\frac{3}{2}}vD^{s+\frac{1}{2}}u}_{C^{-1-\varepsilon}}+\norm{D^{s-\frac{3}{2}}vD^{s-\frac{1}{2}}u}_{C^{-\varepsilon}}).
\end{split}
\end{equation}
The implicit constants are uniform in $N$.
\end{proposition}
Then Proposition \ref{theorem:energy_estimate} follows from (\ref{ineq:energy_derivative}) and Lemma \ref{lemma:chaos_estimate}.
Our goal is to prove uniform in $N$ estimates for the derivative of the energy \eqref{eqn:energy_derivative}. For this reason and for simplicity of notation, in this section we omit the subscripts $N$ on $u$ and $v$ in deriving the estimates.
\begin{lemma}\label{lemma:chaos_estimate}
We have
\begin{align}
\norm{v}_{L^p(\mathrm{d}\vec{\nu}_s)C^{s-\frac{3}{2}-\varepsilon}(\mathbb{T}^3)}&\lesssim \sqrt{p}, \label{eqn: v-C-estimate}\\
\norm{u}_{L^p(\mathrm{d}\vec{\nu}_s)C^{s-\frac{1}{2}-\varepsilon}(\mathbb{T}^3)}&\lesssim \sqrt{p}. \label{eqn: u-C-estimate}
\end{align}
If $\alpha+\beta>\frac{3}{2}$, $\alpha, \beta \geq 0$, then for $\gamma < \min\{\alpha-\frac{3}{2},\beta-\frac{3}{2},\alpha+\beta-3\}$,
\begin{equation}\label{eqn: third}
\norm{D^{s-\alpha}vD^{s+1-\beta}u}_{L^p(\mathrm{d}\vec{\nu}_s)C^{\gamma}(\mathbb{T}^3)}\lesssim p.
\end{equation}
\end{lemma}
\begin{proof}
We only prove the third bound, \eqref{eqn: third}. The first two can be proved in a similar manner. To simplify the notation, let $L^p_{\omega}, L^q_x$ denote $L^p(\mathrm{d}\vec{\nu}_s), L^q(\mathbb{T}^3)$ respectively. By Bernstein's inequality \eqref{eqn: bernstein}, Fubini's theorem and the Wiener chaos estimate \eqref{eqn: wiener-chaos},
\begin{equation}\label{chaos estimate_bound}
\begin{split}
\norm{D^{s-\alpha}v D^{s+1-\beta}u}_{L^p_{\omega}C_x^{\gamma}}
&\leq \norm{D^{s-\alpha}v D^{s+1-\beta}u}_{L^p_{\omega} B^{\gamma+\frac{3}{p}}_{p,1}} \\
&= \norm{\sum_j 2^{j(\gamma+\frac{3}{p})}\norm{\mathbf{P}_j(D^{s-\alpha}v D^{s+1-\beta}u)}_{L^p_x}}_{L^p_{\omega}}\\
&\leq \sum_j 2^{j(\gamma+\frac{3}{p})}\norm{\mathbf{P}_j(D^{s-\alpha}v D^{s+1-\beta}u)}_{L^p_xL^p_{\omega}}\\
&\lesssim p\sum_j 2^{j(\gamma+\frac{3}{p})}\norm{\mathbf{P}_j(D^{s-\alpha}v D^{s+1-\beta}u)}_{L^p_xL^2_{\omega}}.
\end{split}
\end{equation}
On the other hand, by the series representation (\ref{series}),
\begin{equation}\label{ineq: expectation of dyadic}
\begin{split}
\mathbb{E}_{\omega}[|\mathbf{P}_j(D^{s-\alpha}v D^{s+1-\beta}u)|^2]
&= \sum_{n\sim 2^j}\sum_{n_1}\dfrac{1}{\jbrac{n_1}^{2\alpha}\jbrac{n-n_1}^{2\beta}}.
\end{split}
\end{equation}
We claim the following convolution estimate: for any $\varepsilon>0$,
\begin{equation}\label{ineq: convolution}
\sum_{n_1\in\mathbb{Z}^3}\dfrac{1}{\jbrac{n_1}^{2\alpha}\jbrac{n-n_1}^{2\beta}} \lesssim \jbrac{n}^{-\min\{2\alpha,2\beta,2\alpha+2\beta-3\}+\varepsilon}
\end{equation}
provided $\alpha+\beta>\frac{3}{2}$ and $\alpha, \beta\geq 0$. Hence the right-hand side of (\ref{chaos estimate_bound}) is bounded by a constant multiple of $p$, provided $p$ is large enough and $\gamma<\min\{\alpha-\frac{3}{2},\beta-\frac{3}{2},\alpha+\beta-3\}$, so that the dyadic sum converges.
To prove \eqref{ineq: convolution}, we follow the idea of \cite[Lemma 4.1]{MWX}. Split the index set of the summation into
\begin{align*}
\mathcal{A}_1 &:= \{n_1\in\mathbb{Z}^3 :\; |n_1|>2|n| \},\\
\mathcal{A}_2 &:= \{n_1\in\mathbb{Z}^3 :\; |n_1|<\frac{1}{2}|n|\},\\
\mathcal{A}_3 &:= \{n_1\in\mathbb{Z}^3 :\; |n-n_1|<\frac{1}{2}|n|\},\\
\mathcal{A}_4 &:= \mathbb{Z}^3\backslash\bigcup_{j=1}^3\mathcal{A}_j,
\end{align*}
and bound each part separately. If $n_1\in\mathcal{A}_1$, then $|n-n_1|\geq |n_1|-|n| > \frac{1}{2}|n_1|$. Using $2\alpha+2\beta>3$, we have
\begin{align*}
\sum_{n_1\in\mathcal{A}_1}\dfrac{1}{\jbrac{n_1}^{2\alpha}\jbrac{n-n_1}^{2\beta}} \lesssim \sum_{n_1\in\mathbb{Z}^3} \dfrac{1}{\jbrac{n_1}^{2\alpha+2\beta}}\lesssim \jbrac{n}^{-(2\alpha+2\beta-3)}.
\end{align*}
For $n_1\in\mathcal{A}_2$, we have $|n-n_1|\geq |n|-|n_1|>\frac{1}{2}|n|$, then
\begin{align*}
\sum_{n_1\in\mathcal{A}_2}\dfrac{1}{\jbrac{n_1}^{2\alpha}\jbrac{n-n_1}^{2\beta}} \lesssim \dfrac{1}{\jbrac{n}^{2\beta}}\sum_{|n_1|<\frac{1}{2}|n|}\dfrac{1}{\jbrac{n_1}^{2\alpha}}\lesssim \jbrac{n}^{-2\beta -\min\{-\varepsilon, 2\alpha-3\}}.
\end{align*}
Here, the $-\varepsilon$ is used to absorb the logarithmic factor when $2\alpha=3$. A similar bound holds for $\mathcal{A}_3$ by symmetry. Finally, if $n_1\in \mathcal{A}_4$, then $|n-n_1|\geq \frac{1}{2}|n|$ and $\frac{1}{2}|n|\leq |n_1| \leq 2|n|$. Therefore,
\begin{align*}
\sum_{n_1\in\mathcal{A}_4}\dfrac{1}{\jbrac{n_1}^{2\alpha}\jbrac{n-n_1}^{2\beta}} \lesssim \dfrac{1}{\jbrac{n}^{2\beta}}\sum_{\frac{1}{2}|n|\leq |n_1| \leq 2|n|}\dfrac{1}{\jbrac{n_1}^{2\alpha}} \lesssim \jbrac{n}^{-(2\alpha+2\beta-3)}.
\end{align*}
\end{proof}
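As a sanity check on \eqref{ineq: convolution}, one can evaluate the convolution sum numerically after truncating $\mathbb{Z}^3$ to a finite box. The sketch below takes $\alpha=\beta=1$, so that $\min\{2\alpha,2\beta,2\alpha+2\beta-3\}=1$, and confirms that $S(n)\,\jbrac{n}^{1-\varepsilon}$ stays bounded along a sequence of frequencies. The box size, frequencies and $\varepsilon$ are arbitrary choices of ours, and the truncation introduces a small error; the script is illustrative only.

```python
import numpy as np

# Truncated convolution sum S(n) = sum_{n1 in box} <n1>^{-2a} <n-n1>^{-2b};
# for a = b = 1 the lemma predicts S(n) ~< <n>^{-1+eps}.
M = 48
r = np.arange(-M, M + 1)
X, Y, Z = np.meshgrid(r, r, r, indexing="ij")
n1_sq = (X ** 2 + Y ** 2 + Z ** 2).astype(float)

def conv_sum(n, alpha, beta):
    d_sq = ((n[0] - X) ** 2 + (n[1] - Y) ** 2 + (n[2] - Z) ** 2).astype(float)
    # (1 + |m|^2)^{-alpha} = <m>^{-2 alpha}
    return float(np.sum((1.0 + n1_sq) ** (-alpha) * (1.0 + d_sq) ** (-beta)))

eps = 0.1
S_vals, ratios = [], []
for L in (4, 8, 16):
    S = conv_sum((L, 0, 0), 1.0, 1.0)
    S_vals.append(S)
    ratios.append(S * (1.0 + L * L) ** ((1 - eps) / 2))  # S(n) <n>^{1-eps}
print(S_vals, ratios)  # ratios should remain of comparable size
```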
To prove Proposition \ref{theorem:energy_derivative}, we only need to bound the second term on the RHS of (\ref{eqn:energy_derivative}). The other terms are the same as those in \cite[Proposition 5.1]{GOTW}.
\begin{lemma}\label{high term lemma}
For any $\varepsilon>0$, we have $\norm{kT_{u^{k-1}}u-u^k}_{B^{\alpha+\beta-\varepsilon}_{1,1}}\lesssim \norm{u}_{B^{\alpha}_{p,\infty}}\norm{u}_{B^{\beta}_{q,\infty}}\norm{u}_{L^{\infty}_x}^{k-2}$, provided $\alpha+\beta > 0$, and $\frac{1}{p}+\frac{1}{q}=1$.
\end{lemma}
\begin{proof}
A more general version on $\mathbb{R}^d$ can be found in \cite[Theorem 2.92]{BDC}. In our case, note that
\begin{equation}
\begin{split}
u^k &= \sum_{j=-1}^{\infty} \big[(S_{j+1}u)^k-(S_ju)^k\big]\\
&= \sum_{j=-1}^{\infty} \mathbf{P}_ju\sum_{\ell=0}^{k-1}(S_{j+1}u)^\ell(S_ju)^{k-\ell-1}.
\end{split}
\end{equation}
It suffices to bound $T_{u^{k-1}}u-\sum_j \mathbf{P}_ju(S_{j+1}u)^{k-1}$; the other terms are similar. Since $\alpha+\beta>0$, we may assume $\beta>0$. Taking the $L^1_x$ norm, we have
\begin{align*}
&\norm{\mathbf{P}_n\left(T_{u^{k-1}}u-\sum_j \mathbf{P}_ju(S_{j+1}u)^{k-1}\right)}_{L_x^1}\\
=& \norm{\mathbf{P}_n\sum_{j\geq n-3}\mathbf{P}_ju\left(S_{j-1}u^{k-1}-(S_{j+1}u)^{k-1}\right)}_{L^1_x}\\
\lesssim& \sum_{j\geq n-3}\norm{\mathbf{P}_ju}_{L^p_x}\norm{S_{j-1}(u^{k-1}-(S_{j-k}u)^{k-1})+\left((S_{j-k}u)^{k-1}-(S_{j+1}u)^{k-1}\right)}_{L^{q}_x}\\
\lesssim& \sum_{j\geq n-3}\norm{\mathbf{P}_ju}_{L_x^p}\left(\norm{u^{k-1}-(S_{j-k}u)^{k-1}}_{L^q_x} + \norm{(S_{j-k}u)^{k-1}-(S_{j+1}u)^{k-1}}_{L^{q}_x}\right).
\end{align*}
Expressing the differences $u^{k-1}-(S_{j-k}u)^{k-1}$ and $(S_{j-k}u)^{k-1}-(S_{j+1}u)^{k-1}$ in terms of products of lower-order quantities, we find that the previous expression is bounded up to a constant factor by
\begin{align*}
& \sum_{j\geq n-3}\norm{\mathbf{P}_ju}_{L_x^p}\sum_{m\geq j-k}\norm{\mathbf{P}_mu}_{L^q_x}\sum_{\ell =0}^{k-2}\norm{S_{j-k}u}^{\ell}_{L^{\infty}_x}\left(\norm{u}_{L_x^{\infty}}^{k-\ell-2}+\norm{S_{j+1}u}_{L^{\infty}_x}^{k-\ell-2}\right)\\
\lesssim& \sum_{j\geq n-3} 2^{-j\alpha}\norm{u}_{B^{\alpha}_{p,\infty}}\sum_{m\geq j-k}2^{-m\beta}\norm{u}_{B_{q,\infty}^{\beta}}\norm{u}_{L^{\infty}_x}^{k-2}\\
\lesssim& 2^{-n(\alpha+\beta)}\norm{u}_{B^{\alpha}_{p,\infty}}\norm{u}_{B_{q,\infty}^{\beta}}\norm{u}_{L^{\infty}_x}^{k-2}.
\end{align*}
\end{proof}
The next lemma allows us to replace $u^{k-1}D^su$ by $T_{u^{k-1}}D^su$.
\begin{lemma}\label{low and resonant term lemma}
$\norm{T_ab-ab}_{B_{1,1}^{\alpha+\beta}} \lesssim \norm{a}_{B_{1,\infty}^{\alpha}}\norm{b}_{C^{\beta}}$, provided $\beta < 0$ and $\alpha+\beta>0$.
\end{lemma}
\begin{proof}
The proof is a straightforward application of the definition of the paraproduct and Besov spaces.
\end{proof}
The remaining difference $D^sT_{u^{k-1}}u-T_{u^{k-1}}D^su$ cannot be bounded directly. The following decomposition is the main result describing the regularity of this commutator:
\begin{lemma}\label{lemma:decomposition_lemma} Given $s>0$,
$D^s(T_w u) - T_w(D^s u) = F_1 + F_2 + R$, where
\begin{align*}
F_1 =&s\sum_{j = 1}^3T_{\partial_j w}(\partial_j D^{s-2} u),\\
F_2 =&\frac{s(s-2)}{2}\sum_{j=1}^3T_{\partial_j^2 w}(\partial_j^2 D^{s-4}u)+\frac{s}{2}T_{D^2w}D^{s-2}u.
\end{align*}
Moreover, for any $\beta \in \mathbb{R}$ and $\rho < 3$,
\begin{align}
\norm{R}_{B_{1,1}^{\beta + \rho}} \lesssim \norm{w}_{B_{1,\infty}^{{\rho}}}\norm{u}_{C^{s+\beta}}.
\end{align}
\end{lemma}
\begin{remark}
If we only expand once, then the remainder satisfies the same bound, but for a more restricted range of $\rho$. More precisely, with the notations above, for $\rho <2$,
\begin{align}\label{ineq: second expansion}
\norm{F_2+R}_{B_{1,1}^{\beta + \rho}} \lesssim \norm{w}_{B_{1,\infty}^{{\rho}}}\norm{u}_{C^{s+\beta}},
\end{align}
and, if we do not expand at all, for $\rho<1$,
\begin{align*}
\norm{F_1+F_2+R}_{B_{1,1}^{\beta + \rho}} \lesssim \norm{w}_{B_{1,\infty}^{{\rho}}}\norm{u}_{C^{s+\beta}}.
\end{align*}
\end{remark}
\begin{proof}
Let $m\in C^{\infty}(\mathbb{R})$ be a bump function such that $m=1$ on $[-\frac{1}{2},\frac{1}{2}]$ and $m$ is supported on $[-\frac{1}{2}-\delta,\frac{1}{2}+\delta]$ for some small $\delta>0$. Then,
\begin{align*}
&(D^s(T_w u) - T_w(D^s u))(x)\\
=&\sum_{n_1,n_2\in\mathbb{Z}^3} (|n_1+n_2|^s-|n_2|^s)\widehat{w}(n_1)\widehat{u}(n_2)m(\frac{|n_1|}{|n_2|})e^{i (n_1+n_2)\cdot x}.
\end{align*}
By Taylor's theorem, for $n_2\neq 0$\footnote{If $n_2=0$, then on the support of $m(\frac{|n_1|}{|n_2|})$, we have $n_1=0$. That is why we do not need to estimate $\mathbf{P}_{-1}R$.},
\begin{align*}
&|n_1+n_2|^s-|n_2|^s\\
=&~s|n_2|^{s-2}\,n_2\cdot n_1 +\frac{1}{2}\left(s(s-2)|n_2|^{s-4}(n_2\cdot n_1)^2 + s|n_2|^{s-2}|n_1|^2\right)+ R_1,
\end{align*}
where the first two terms correspond to $F_1$ and $F_2$, and
\begin{align*}
R_1(n_1,n_2):=\dfrac{s(s-2)}{2} \int_0^1 & (1-t)^2\Big( 3|n_2+tn_1|^{s-4}(n_2+tn_1)\cdot n_1 |n_1|^2\\
&+ (s-4)|n_2+tn_1|^{s-6}((n_2+tn_1)\cdot n_1)^3\Big)\mathrm{d}t.
\end{align*}
Define $R$ by
\[R= \sum_{n_1,n_2\in \mathbb{Z}^3} R_1(n_1,n_2)\,m\Big(\frac{|n_1|}{|n_2|}\Big)\,\widehat{w}(n_1)\widehat{u}(n_2)e^{i(n_1+n_2)\cdot x}.\]
We now estimate $R$. First, we can write $R_1 =\sum_{|\alpha|=3}C_{\alpha}(n_1,n_2)n_1^{\alpha}$, where $\alpha$ is a 3d multi-index and $C_{\alpha}$ can be extended by homogeneity to a function on $\mathbb{R}^6$ such that
\[C_{\alpha}(\lambda\xi_1,\lambda\xi_2) = \lambda^{s-3}C_{\alpha}(\xi_1,\xi_2)\] for any $\lambda>0$, and is smooth on the support of $m(\frac{|\xi_1|}{|\xi_2|})\psi_j(\xi_1+\xi_2)$ for any $j\ge 0$.
To bound $\norm{\mathbf{P}_jR}_{L^1_x}$, set
\[K_{\alpha,j}(\xi_1,\xi_2) := \psi_j(\xi_1+\xi_2)m(\frac{|\xi_1|}{|\xi_2|})C_{\alpha}(\xi_1,\xi_2),\]
and define $h_{\alpha,j}\in L^2(\mathbb{T}^3\times\mathbb{T}^3)$ and $H_{\alpha,j}\in L^2(\mathbb{R}^3\times\mathbb{R}^3)$ by
\begin{align*}
h_{\alpha,j}(y,z):=&(\mathcal{F}_{\mathbb{T}^6}^{-1}K_{\alpha,j})(y,z),\\
H_{\alpha,j}(y,z):=&(\mathcal{F}_{\mathbb{R}^6}^{-1}K_{\alpha,j})(y,z).
\end{align*}
By the Poisson summation formula,
\[h_{\alpha,j}(y,z) = \sum_{(m_1, m_2)\in \mathbb{Z}^6}H_{\alpha,j}(y+m_1,z+m_2).\]
Hence,
\begin{align*}
\norm{h_{\alpha,j}}_{L^1(\mathbb{T}^6)}&\leq \norm{H_{\alpha,j}}_{L^1(\mathbb{R}^6)}\\
&= \norm{2^{j(s-3)}\cdot2^{6j}H_{\alpha,0}(2^j\cdot, 2^j\cdot)}_{L^1(\mathbb{R}^6)}\\
&= 2^{j(s-3)}\norm{H_{\alpha,0}}_{L^1(\mathbb{R}^6)}.
\end{align*}
Note $\norm{H_{\alpha,0}}_{L^1}$ is bounded since $K_{\alpha,0}\in C^{\infty}_0(\mathbb{R}^6)$. Then,
\begin{align*}
\norm{\mathbf{P}_j R}_{L_x^1}
&= \norm{\sum_{n_1, n_2}\psi_j(n_1+n_2)m(\frac{|n_1|}{|n_2|})\sum_{|\alpha|=3}C_{\alpha}(n_1,n_2)n_1^{\alpha}\widehat{w}(n_1)\widehat{u}(n_2)e^{ i(n_1+n_2)\cdot x}}_{L^1_x}\\
&=\norm{\sum_{|\alpha|=3}\sum_{n_1, n_2}K_{\alpha,j}(n_1,n_2)\widehat{S_{j-1}\partial^{\alpha}w}(n_1)\widehat{\widetilde{\mathbf{P}}_ju}(n_2)e^{ i(n_1+n_2)\cdot x}}_{L^1_x}\\
&=\norm{\sum_{|\alpha|=3}\iint h_{\alpha,j}(y,z)S_{j-1}\partial^{\alpha}w(x-y)\widetilde{\mathbf{P}}_ju(x-z)dydz}_{L^1_x}\\
&\leq \sum_{|\alpha|=3}\norm{h_{\alpha,j}}_{L^1_{y,z}}\norm{S_{j-1}\partial^{\alpha}w}_{L^1_x}\norm{\widetilde{\mathbf{P}}_ju}_{L^{\infty}_x}\\
&\lesssim \sum_{|\alpha|=3}2^{j(s-3)}2^{j(3-\rho)}\norm{\partial^{\alpha}w}_{B^{\rho-3}_{1,\infty}}\cdot 2^{-j(s+\beta)}\norm{ u}_{C^{s+\beta}}\\
&\lesssim 2^{-j(\rho+\beta)}\norm{w}_{B^{\rho}_{1,\infty}}\norm{u}_{C^{s+\beta}}.
\end{align*}
Here $\widetilde{\mathbf{P}}_j$ is another Littlewood-Paley projector such that $\widetilde{\mathbf{P}}_j\mathbf{P}_j = \mathbf{P}_j$.
\end{proof}
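The Taylor expansion at the heart of the proof can also be probed numerically: after subtracting the first- and second-order terms, the remainder should be of size $|n_1|^3|n_2|^{s-3}$, so doubling $|n_2|$ should shrink it by a factor of roughly $2^{3-s}$. The following sketch confirms this scaling; the exponent $s$ and the frequencies are arbitrary choices of ours.

```python
import numpy as np

# Check the cubic remainder of |n1+n2|^s - |n2|^s after removing the
# first- and second-order Taylor terms in n1 around n2.
s = 2.7                                   # representative exponent s > 5/2
n1 = np.array([1.0, -2.0, 1.5])           # fixed "low" frequency
e = np.array([3.0, 1.0, -2.0])
e = e / np.linalg.norm(e)

rems = {}
for lam in (40.0, 80.0):
    n2 = lam * e
    a = np.linalg.norm(n2)
    full = np.linalg.norm(n1 + n2) ** s - a ** s
    first = s * a ** (s - 2) * np.dot(n2, n1)
    second = 0.5 * (s * (s - 2) * a ** (s - 4) * np.dot(n2, n1) ** 2
                    + s * a ** (s - 2) * np.dot(n1, n1))
    rems[lam] = full - first - second

ratio = rems[40.0] / rems[80.0]
print(ratio, 2 ** (3 - s))  # the remainder scales like |n2|^{s-3}
```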
We can now give the proof of the energy estimate \eqref{ineq:energy_derivative}.
\begin{proof}[Proof of Proposition \ref{theorem:energy_derivative}]
We first write
\begin{align*}
&\int_{\mathbb{T}^3} D^sv(ku^{k-1}D^su-D^su^k) \\
= &\int_{\mathbb{T}^3} D^sv \Big[ (ku^{k-1}D^su-kT_{u^{k-1}}D^su)-k\big(D^s(T_{u^{k-1}}u)-T_{u^{k-1}}(D^su)\big)+(kD^sT_{u^{k-1}}u-D^su^k)\Big].
\end{align*}
Using Lemmas \ref{low and resonant term lemma}, \ref{lemma:decomposition_lemma} and \ref{high term lemma}, we estimate the last quantity by
\begin{align*}&\norm{D^sv}_{C^{-\frac{3}{2}-\varepsilon}}\Big( k\norm{u^{k-1}D^su-T_{u^{k-1}}D^su}_{B_{1,1}^{\frac{3}{2}+\varepsilon}} +k\norm{R_1}_{B_{1,1}^{\frac{3}{2}+\varepsilon}}+\norm{kD^sT_{u^{k-1}}u-D^su^k}_{B_{1,1}^{\frac{3}{2}+\varepsilon}}\Big)\\
&+\left|\int D^sv(F_1+F_2)\right|\\
\lesssim& \norm{v}_{C^{s-\frac{3}{2}-\varepsilon}}\norm{u}_{C^{s-\frac{1}{2}-\varepsilon}}\norm{u}_{B_{1,\infty}^{2+2\varepsilon}}\norm{u}^{k-2}_{L_x^{\infty}}+\left|\int D^sv(F_1+F_2)\right|\\
\lesssim& \norm{v}_{C^{s-\frac{3}{2}-\varepsilon}}\norm{u}_{C^{s-\frac{1}{2}-\varepsilon}}\norm{u}_{H^{\sigma}}^{k-1}+\left|\int D^sv(F_1+F_2)\right|.
\end{align*}
Here, $R_1, F_1, F_2$ are the terms in Lemma \ref{lemma:decomposition_lemma}. We used (\ref{EQU: Intermediate embeddings}) and (\ref{EQU: Besov embedding}) in the last step.
It remains to deal with $\int D^sv F_1$ and $\int D^sv F_2$. To simplify the notation, in the remaining part of the proof, we use $D$ to represent both $D$ and $\partial$, and fix $\alpha = \frac{3}{2}$. Since $D^{\alpha}$ is self-adjoint, we can write the first integral as
\begin{align*}
\int_{\mathbb{T}^3}D^svF_1 = \int_{\mathbb{T}^3}D^{s-\alpha}v(T_{Du^{k-1}}D^{s-1+\alpha}u) + \int_{\mathbb{T}^3}D^{s-\alpha}v[D^{\alpha},T_{Du^{k-1}}]D^{s-1}u =: I_1 + I_2.
\end{align*}
For $I_1$, since $Du^{k-1}$ has positive regularity (for $s>\frac{5}{2}$), Lemma \ref{low and resonant term lemma} allows us to treat the paraproduct as a genuine product, which can be bounded by
\begin{align*}
I_1 \lesssim \norm{D^{s-\alpha}vD^{s-1+\alpha}u}_{C^{-1-\varepsilon}}\norm{Du^{k-1}}_{B^{1+\varepsilon}_{1,\infty}} \lesssim \norm{D^{s-\alpha}vD^{s-1+\alpha}u}_{C^{-1-\varepsilon}}\norm{u}^{k-1}_{H^{\sigma}}.
\end{align*}
Moving $D^{\alpha}$ from $v$ to $u$ and using \eqref{ineq: second expansion}, we can decompose $I_2$ into
\begin{align}\label{ineq: I2}
I_2 =& \int_{\mathbb{T}^3} D^{s-\alpha}vT_{D^2u^{k-1}}D^{s-2+\alpha}u + \int_{\mathbb{T}^3}D^{s-\alpha}v\widetilde{R},
\end{align}
where\footnote{Here we choose $\rho = 1+2\varepsilon$ and $\beta = -1$.}
\begin{align*}
\norm{\widetilde{R}}_{B_{1,1}^{\frac{3}{2}-\alpha+\varepsilon}}\lesssim\norm{Du^{k-1}}_{B^{1+2\varepsilon}_{1,\infty}}\norm{D^{s-1}u}_{C^{\frac{3}{2}-\varepsilon}} \lesssim \norm{u}_{H^{\sigma}}^{k-1}\norm{u}_{C^{s-\frac{1}{2}-\varepsilon}}.
\end{align*}
The first integral in \eqref{ineq: I2} can, by Lemma \ref{low and resonant term lemma}, be treated as a genuine product (for $s>\frac{5}{2}$); then
\begin{align*}
\int_{\mathbb{T}^3}D^{s-\alpha}vD^2u^{k-1}D^{s-2+\alpha}u
&\lesssim\norm{D^{s-\alpha}vD^{s-2+\alpha}u}_{C^{-\varepsilon}}\norm{D^2u^{k-1}}_{B_{1,\infty}^{\varepsilon}}\\
&\lesssim\norm{D^{s-\alpha}vD^{s-2+\alpha}u}_{C^{-\varepsilon}}\norm{u}_{H^{\sigma}}^{k-1}.
\end{align*}
For $\int D^svF_2$, note that its terms have the form
\begin{align*}
\int_{\mathbb{T}^3}D^svT_{D^2u^{k-1}}D^{s-2}u = \int_{\mathbb{T}^3} D^{s-\alpha}vT_{D^2u^{k-1}}D^{s-2+\alpha}u + \int_{\mathbb{T}^3}D^{s-\alpha}v\widetilde{R},
\end{align*}
where $\widetilde{R}$ satisfies the same bound as the one in \eqref{ineq: I2}, and it can be treated in exactly the same way as $I_2$.
\end{proof}
\section{Construction of the measure}
In this section, we construct a measure $\vec{\rho}_s$ which is absolutely continuous with respect to $\mu_s$ and corresponds to the formal expression:
\begin{equation}
\mathrm{d}\vec{\rho}_{s}=\mathcal{Z}_{s}^{-1} e^{-\mathscr{E}_{s}(u,v)-E^q(u,v)}\,\mathrm{d}u\,\mathrm{d}v.
\end{equation}
Here $\mathscr{E}_s(u,v)$ is the renormalized energy defined in \eqref{eqn: R-energy}, $E(u,v)$ is the Hamiltonian energy \eqref{eqn: H}, and $q=q(s,k)$ is a large integer to be chosen later.
Define the truncated measures
\begin{align}\label{eqn: truncated-measure}
\begin{split}
\mathrm{d}\vec{\rho}_{s,N} &= \mathcal{Z}_{s,N}^{-1} e^{-\mathscr{E}_{s,N}(u,v)-E^q_N(u,v)}\,\mathrm{d}u\,\mathrm{d}v \\
&= \mathcal{Z}_{s,N}^{-1}e^{-R_{k,s,N}(u)-E^q_N(u,v)}\,\mathrm{d}\vec{\nu}_s(u,v),
\end{split}
\end{align}
where the truncated energy $E_N(u,v)$ is defined by
\begin{equation}\label{eqn: truncated-energy}
E_N(u,v):=E(u_N,v_N)=\frac{1}{2}\int_{\mathbb{T}^3}(|\nabla \pi_N u|^2+(\pi_N v)^2)\,\mathrm{d}x+\frac{1}{k+1}\int_{\mathbb{T}^3}(\pi_N u)^{k+1}\,\mathrm{d}x.
\end{equation}
In this section, we prove that the measures $\vec{\rho}_{s,N}$ converge to a limiting measure $\vec{\rho}_s$ as $N\rightarrow\infty$.
The general method for establishing convergence of the measures is standard (see for example \cite[Remark 3.8]{T2}), and consists of two steps, corresponding to Lemma \ref{lem: Lp-convergence} and Proposition \ref{lem: Lp-bound}, respectively.
\begin{enumerate}
\item Convergence of $R_{k,s,N}(u)$ and $E^q_N(u,v)$ in $L^p$. This is a consequence of the regularity properties of the field $\vec{u}$ on the support of $\vec{\mu}_s$, since $R_{k,s,N}(u)$ and $E^q_N(u,v)$ are continuous functions of the Fourier truncated field $\pi_N\vec{u}$.
\item Uniform integrability of $e^{-R_{k,s,N}(u)-E^q_N(u,v)}$ with respect to $\vec{\nu}_s$. This will follow from a uniform bound in $L^p$, $p>1$. It is here that we make use of the variational representation of
\begin{equation}\label{eqn: partition}
\mathcal{Z}_{s,N}:=\mathbb{E}_{\vec{\nu}_s}[e^{-R_{s,k,N}(u)-E^q_N(u,v)}].
\end{equation}
\end{enumerate}
Indeed, the uniform integrability resulting from the second point allows us to take the limit in the expectation
\[\mathbb{E}_{\vec{\nu}_s}[e^{-R_{k,s,N}(u)-E^q_N(u,v)}],\]
which is sufficient to define $\vec{\rho}_s$ as a measure.
Compared to the cubic case $k=3$ treated in \cite{GOTW}, the addition of $-E^q(u,v)$ makes the construction of the measure easier, as it introduces more decay. Moreover, since the energy is conserved, we have
\[\frac{\mathrm{d}}{\mathrm{d}t}E^q_N(u_N,v_N)=0.\]
Consequently, no extra terms appear in the energy estimate of Section 3.
\begin{definition}
For $u$ given by \eqref{eqn: useries}, we define
\[:(D^s u_N)^2: = (D^su_N)^2-\mathbb{E}_{\vec{\nu}_s}[(D^s u_N)^2].\]
This notation is inspired by an analogy with Wick ordering in Gaussian analysis and quantum field theory (see \cite[Chapter 3]{janson}).
\end{definition}
\begin{lemma}\label{lem: Lp-convergence}
Let $s>\frac{3}{2} $. Set
\begin{equation}\label{eqn: Fskn}
F_{s,k,N}(u,v) := -R_{k,s,N}(u)-E^q_N(u,v).
\end{equation}
Then for $p<\infty$, $F_{s,k,N}$ converges to some $F_{s,k}$ in $L^p(\vec{\nu}_s)$ as $N\rightarrow \infty$.
\end{lemma}
\begin{proof}
Since $s>\frac{3}{2}$, we have by \eqref{eqn: u-C-estimate} and \eqref{eqn: v-C-estimate} that $u\in L^p(\Omega,C^{1+\varepsilon}(\mathbb{T}^3))$ and $v\in L^p(\Omega,C^{\varepsilon}(\mathbb{T}^3))$ for some $\varepsilon>0$. These bounds imply that $E_N(u,v)$ is finite and, moreover, bounded in $L^p$ for any $0<p<\infty$, uniformly in $N$. The same holds for $E_N^q$.
As in \cite[Proposition 4.3]{GOTW}, we have, for any $p\ge 2$:
\[\|:(D^s u_N)^2:\|_{L^p(\Omega,C^{-1-\varepsilon}(\mathbb{T}^3))}\le Cp.\]
By the multiplicative estimate \eqref{EQU: Multiplicative estimate}, the term
\[\int Q_{s,N}(u_N) u_N^{k-1}\]
converges in $L^p$.
\end{proof}
\subsection{Variational formulation}
In this section, we apply the Barashkov-Gubinelli variational approach to obtain uniform in $N$ control over the quantity $e^{-R_{k,s,N}(u)-E^q_N(u,v)}$. This is equivalent to showing that the partition function is uniformly bounded, since higher $L^p$ norms of $e^{-R_{k,s,N}(u)-E^q_N(u,v)}$ introduce only constant factors in the representation \eqref{eqn: variational}.
This approach, introduced in \cite{BG}, was applied in this context in \cite{GOTW}. The idea is to write the partition function as an optimization over time-dependent processes, so we begin by representing the measure $\vec{\nu}_s$ as the time 1 distribution of a pair of cylindrical processes. We refer to \cite{BG,GOTW} for more details.
Let $\Omega=C(\mathbb{R}_+, C^{-\frac{3}{2}-\varepsilon}(\mathbb{T}^3))$. Let $B^n_1(t)$, $B^n_2(t)$, $n\in\mathbb{Z}^3$, $t\ge 0$, be two sequences of independent standard Brownian motions. We define
\begin{equation}
\vec{X}(t)=(X_1(t),X_2(t))=\left(\sum\limits_{n\in\mathbb{Z}^3} B^{n}_1(t)e^{i n\cdot x}, \sum\limits_{n\in\mathbb{Z}^3} B^{n}_2(t) e^{i n\cdot x}\right)
\end{equation}
so that $\vec{X}(t)$ is a Brownian motion on $L^2(\mathbb{T}^3)\times L^2(\mathbb{T}^3)$. Set $\vec{Y}(t)=(Y_1(t),Y_2(t))$ where
\begin{equation}
Y_1(t) = \mathcal{J}^{-s-1}X_1(t) := B_1^{0}(t)+\sum\limits_{n\in\mathbb{Z}^3\backslash\{0\}} \frac{B^{n}_1(t)}{(|n|^2+|n|^{2s}+2)^\frac{1}{2}} e^{i n\cdot x}
\end{equation}
and
\begin{equation}
Y_2(t) = J^{-s}X_2(t) := \sum\limits_{n\in\mathbb{Z}^3} \frac{B^{n}_2(t)}{(1+|n|^{2s})^\frac{1}{2}} e^{i n\cdot x}.
\end{equation}
Note that
\[\textnormal{Law}(\vec{Y}(1))=\vec{\nu}_s.\]
We let $\mathbb{H}_a$ be the set of progressively measurable processes belonging to
\[L^2([0,1], L^2(\mathbb{T}^3)\times L^2(\mathbb{T}^3))\]
almost surely. For $\theta \in \mathbb{H}_a$, the classical Girsanov theorem \cite[Section 5.5]{legall} describes the semimartingale decomposition of $\vec{X}(t)$ and $\vec{Y}(t)$ with respect to the measure $\mathbb{Q}^\theta$ defined by its relative density
\begin{equation}\label{eqn: com}
\frac{\mathrm{d}\mathbb{Q}^\theta}{\mathrm{d}\mathbb{P}}=e^{\int_0^1 \langle \theta_t,\mathrm{d}X_t\rangle -\frac{1}{2}\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s}.
\end{equation}
We have the decompositions
\begin{equation}
\vec{X}(t)=\vec{X}^\theta(t)+\int_0^t\vec{\theta}(t')\,\mathrm{d}t'
\end{equation}
and
\begin{equation}
\vec{Y}(t)=(Y^\theta_1(t), Y^\theta_2(t))+\int_0^t (\mathcal{J}^{-s-1}\theta_1, J^{-s}\theta_2)(t')\,\mathrm{d}t',
\end{equation}
where $\vec{X}^\theta$ is an $L^2$-cylindrical Brownian motion under $\mathbb{Q}^\theta$ and
\[Y_1^\theta(t):= \mathcal{J}^{-s-1}X_1^\theta(t), \quad Y_2^\theta(t):=J^{-s} X_2^\theta(t).\]
For convenience, we set
\begin{equation}
\vec{I}^\theta(t):=(I_1^\theta(t), I_2^\theta(t)) = \int_0^t (\mathcal{J}^{-s-1}\theta_1, J^{-s}\theta_2)(t')\,\mathrm{d}t'
\end{equation}
and $\vec{Y}^\theta(t):=(Y^\theta_1(t), Y^\theta_2(t))$. With this notation in place we have the following variational formula for $\mathcal{Z}_{s,N}$.
\begin{lemma}\label{lem: relative-entropy}
Let $\theta\in \mathbb{H}_a$, $N\ge 1$ and let $\mathbb{Q}^\theta$ be the measure defined by \eqref{eqn: com}. Then the \emph{relative entropy} of $\mathbb{Q}^\theta$ with respect to $\mathbb{P}$ is finite:
\[H(\mathbb{Q}^\theta\mid \mathbb{P})=\mathbb{E}\left[\frac{\mathrm{d}\mathbb{Q}^\theta}{\mathrm{d}\mathbb{P}}\log \frac{\mathrm{d}\mathbb{Q}^\theta}{\mathrm{d}\mathbb{P}}\right]<\infty.\]
In particular,
\begin{equation}\label{eqn: quad-finite}
\mathbb{E}_{\mathbb{Q}^\theta}\bigg[\int_0^1 \|\theta_t\|^2_{L^2_x}\,\mathrm{d}t\bigg]<\infty.
\end{equation}
\begin{proof}
Once we prove the finiteness of the relative entropy, the bound \eqref{eqn: quad-finite} follows from the inequality \cite[Lemma 2.6]{follmer},
\[\mathbb{E}_{\mathbb{Q}^\theta}\bigg[\int_0^1 \|\theta_t\|^2_{L^2_x}\,\mathrm{d}t\bigg]\le 2H(\mathbb{Q}^\theta\mid \mathbb{P}).\]
We turn to the relative entropy. In our case, it takes the following explicit form:
\[H(\mathbb{Q}^\theta\mid \mathbb{P})=\mathbb{E}\left[\frac{e^{-R_{s,k,N}-E^q_N}}{\mathcal{Z}_{s,N}}\log \frac{e^{-R_{s,k,N}-E^q_N}}{\mathcal{Z}_{s,N}}\right].\]
For the partition function $\mathcal{Z}_{s,N}$, we have by Jensen's inequality:
\begin{align}
\mathcal{Z}_{s,N}&=\mathbb{E}[e^{-R_{s,k,N}}e^{-E^q_N}] \nonumber\\
&\ge e^{-\mathbb{E}[R_{s,k,N}]-\mathbb{E}[E^q_N]} \nonumber\\
&\ge c(N). \label{eqn: Z-lower}
\end{align}
In the final step, we have used the integrability of $R_{s,k,N}$ and $E^q_N$, which follows directly from \eqref{eqn: u-C-estimate}, \eqref{eqn: v-C-estimate} since $B^\alpha_{\infty,\infty}\subset L^\infty$ when $\alpha>0$.
Using H\"older's inequality and Young's inequality, it is easy to see that for $q\ge 1$,
\begin{align*}
R_{s,k,N}(Y_1)+E_N^q(\vec{Y})&= \frac{k}{2}\int_{\mathbb{T}^3} ((D^s Y_1)^2-\sigma_N^2)(\pi_N Y_1)^{k-1}\,\mathrm{d}x +E_N^q(\vec{Y})\\
&\ge -\frac{k}{2}\sigma_N^2 \int (\pi_N Y_1)^{k-1}\,\mathrm{d}x+ \frac{1}{(k+1)^q}\left(\int (\pi_N Y_1)^{k+1}\,\mathrm{d}x\right)^q\\
&\ge -\frac{k}{2}\sigma_N^2|\mathbb{T}^3|^{\frac{2}{k+1}}\left( \int (\pi_N Y_1)^{k+1} \right)^\frac{k-1}{k+1} + \frac{1}{(k+1)^q}\left(\int (\pi_N Y_1)^{k+1}\,\mathrm{d}x\right)^q\\
&\ge -\left(\frac{k}{2\varepsilon}\right)^r \frac{ |\mathbb{T}^3|^{\frac{2r}{k+1}}}{r} \sigma_N^{2r}+(\frac{1}{4}-\frac{\varepsilon^q}{r})\left(\int (\pi_N Y_1)^{k+1}\,\mathrm{d}x\right)^q\\
&> -\infty,
\end{align*}
where $\frac{k-1}{q(k+1)}+\frac{1}{r}=1$. Using \eqref{eqn: Z-lower} and Cauchy-Schwarz, we now have
\[\mathbb{E}\left[\frac{\mathrm{d}\mathbb{Q}^\theta}{\mathrm{d}\mathbb{P}}\log \frac{\mathrm{d}\mathbb{Q}^\theta}{\mathrm{d}\mathbb{P}}\right]\le C(N)\mathbb{E}[e^{-2R_{s,k,N}(Y_1)}+|R_{s,k,N}(Y_1)|^2+1]<\infty.\]
\end{proof}
\end{lemma}
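The $\varepsilon$-weighted Young inequality used in the proof above (and repeatedly in the next subsection), namely $ab\le (\varepsilon a)^r/r+(b/\varepsilon)^{r'}/r'$ for $a,b\ge 0$, $\varepsilon>0$ and conjugate exponents $\frac1r+\frac1{r'}=1$, is elementary, but a numerical sanity check is handy when juggling the exponents. The sketch below is our own and purely illustrative:

```python
import random

def young_bound(a, b, r, eps):
    """Weighted Young inequality: for a, b >= 0, eps > 0 and
    conjugate exponents 1/r + 1/rp = 1,
        a * b <= (eps * a)**r / r + (b / eps)**rp / rp.
    Returns (lhs, rhs)."""
    rp = r / (r - 1.0)  # conjugate exponent
    return a * b, (eps * a) ** r / r + (b / eps) ** rp / rp

# Spot-check the inequality on random inputs.
rng = random.Random(1)
for _ in range(1000):
    a, b = rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0)
    r, eps = rng.uniform(1.1, 5.0), rng.uniform(0.05, 2.0)
    lhs, rhs = young_bound(a, b, r, eps)
    assert lhs <= rhs + 1e-9
```

Taking $\varepsilon$ small makes the $a$-term cheap at the price of a large constant in front of the $b$-term, which is exactly how the negative term above is absorbed.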
\begin{proposition}
Recall the definition of the partition function $\mathcal{Z}_{s,N}$. For $N\in\mathbb{N}$ we have
\begin{equation}\label{eqn: variational}
-\log \mathcal{Z}_{s,N}= \inf_{\theta\in \mathbb{H}_a}\mathbb{E}_{\mathbb{Q}^\theta}\left[ R_{k,s,N}(Y_1^{\theta}(1)+I_1^\theta(1))+E^q_N(\vec{Y}^\theta(1)+\vec{I}^\theta(1)) +\frac{1}{2}\int_0^1\norm{\vec{\theta}(t)}_{L^2_x\times L^2_x}^2dt \right].
\end{equation}
\end{proposition}
\begin{proof}
Given $\theta \in \mathbb{H}_a$, Girsanov's theorem gives:
\begin{align*}
-\log \mathcal{Z}_{s,N} &= -\log \mathbb{E}[e^{-R_{k,s,N}-E^q(u_N,v_N)}]\\
&=-\log \mathbb{E}_{\mathbb{Q}^\theta}[e^{-R_{k,s,N}(Y_1^{\theta}(1)+I_1^\theta(1))-E^q_N(\vec{Y}^\theta(1)+\vec{I}^\theta(1))}e^{-\int_0^1 \langle \theta_t,\mathrm{d}X_t\rangle +\frac{1}{2}\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s}].
\end{align*}
By Jensen's inequality, we have
\begin{align*}
-\log \mathcal{Z}_{s,N} \le &~\mathbb{E}_{\mathbb{Q}^\theta}[R_{k,s,N}(Y_1^{\theta}(1)+I_1^\theta(1))+E^q_N(\vec{Y}^\theta(1)+\vec{I}^\theta(1))]\\
&+\mathbb{E}_{\mathbb{Q}^\theta}\bigg[\int_0^1 \langle \theta_t,\mathrm{d}X_t\rangle -\frac{1}{2}\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s\bigg]\\
= &~\mathbb{E}_{\mathbb{Q}^\theta}[R_{k,s,N}(Y_1^{\theta}(1)+I_1^\theta(1))+E^q_N(\vec{Y}^\theta(1)+\vec{I}^\theta(1))]\\
&+\mathbb{E}_{\mathbb{Q}^\theta}\bigg[\int_0^1 \langle \theta_t,\mathrm{d}X_t^\theta \rangle +\frac{1}{2}\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s\bigg],
\end{align*}
If
\[\mathbb{E}_{\mathbb{Q}^\theta}\bigg[\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s\bigg]<\infty,\]
the stochastic integral term is a martingale, so its expectation vanishes and we find
\begin{equation}\label{eqn: var-bound-1}
-\log \mathcal{Z}_{s,N} \le \mathbb{E}_{\mathbb{Q}^\theta}\bigg[R_{k,s,N}(Y_1^{\theta}(1)+I_1^\theta(1))+E^q_N(\vec{Y}^\theta(1)+\vec{I}^\theta(1))+\frac{1}{2}\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s\bigg].
\end{equation}
If instead
\[\mathbb{E}_{\mathbb{Q}^\theta}\bigg[\int_0^1 \|\theta_s\|^2_{L^2_x}\,\mathrm{d}s\bigg]=\infty,\]
the inequality \eqref{eqn: var-bound-1} holds trivially provided we verify that
\[R_{k,s,N}(Y_1^{\theta}(1)+I_1^\theta(1))+E^q_N(\vec{Y}^\theta(1)+\vec{I}^\theta(1))\]
is $\mathbb{Q}^\theta$-integrable, which we do below.
Conversely, the measure
\[\frac{\mathrm{d}\mathbb{Q}^N}{\mathrm{d}\mathbb{P}}=\frac{e^{-R_{k,s,N}(Y_1)-E^q_N(\vec{Y}(1))}}{\mathcal{Z}_{s,N}}\]
is absolutely continuous with respect to $\mathbb{P}$, so there is a $\tilde{\theta}^N\in \mathbb{H}_a$ such that
\[\frac{\mathrm{d}\mathbb{Q}^N}{\mathrm{d}\mathbb{P}}=e^{\int_0^1 \langle \tilde{\theta}^N_t,\mathrm{d}X_t\rangle-\frac{1}{2}\int_0^1\|\tilde{\theta}^N_t\|^2_{L^2_x}\,\mathrm{d}t}.\]
Combining the last two expressions gives
\begin{equation}\label{eqn: take-exp}
-\log\mathcal{Z}_{s,N}=R_{k,s,N}(Y_1)+E^q_N(\vec{Y}(1))+\int_0^1 \langle \tilde{\theta}^N_t,\mathrm{d}X_t\rangle-\frac{1}{2}\int_0^1\|\tilde{\theta}^N_t\|^2_{L^2_x}\,\mathrm{d}t.
\end{equation}
By Lemma \ref{lem: relative-entropy} we have
\[\mathbb{E}_{\mathbb{Q}^{N}}\bigg[\int_0^1 \|\tilde{\theta}^N_s\|^2_{L^2_x}\,\mathrm{d}s\bigg]<\infty,\]
so we can take expectations in \eqref{eqn: take-exp}; the martingale term vanishes and we obtain
\[-\log \mathcal{Z}_{s,N}=\mathbb{E}_{\mathbb{Q}^N}\left[R_{k,s,N}(Y_1)+E^q_N(\vec{Y}(1))+\frac{1}{2}\int_0^1\|\tilde{\theta}^N_t\|^2_{L^2_x}\,\mathrm{d}t\right].\]
Together with \eqref{eqn: var-bound-1}, this proves \eqref{eqn: variational}.
\end{proof}
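The identity \eqref{eqn: variational} is an infinite-dimensional variational representation of Bou\'e--Dupuis type. It can be illustrated in one dimension: for a scalar Brownian motion and $f(x)=x^2/2$ one has $-\log\mathbb{E}[e^{-f(W_1)}]=\frac12\log 2$, and solving the associated HJB equation gives the optimal feedback drift $\theta_t=-X_t/(2-t)$. The sketch below is our own code and naming (Euler--Maruyama with modest Monte Carlo accuracy); it checks that this drift attains the infimum while the zero drift gives only an upper bound:

```python
import math, random

def bd_value(drift, n_paths=10000, n_steps=50, seed=0):
    """Monte Carlo estimate of E[f(X_1) + 1/2 int_0^1 theta_t^2 dt]
    for dX = theta dt + dW, X_0 = 0, with f(x) = x^2 / 2 and
    theta_t = drift(t, X_t) (Euler-Maruyama discretization)."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = cost = 0.0
        for i in range(n_steps):
            th = drift(i * dt, x)
            cost += 0.5 * th * th * dt
            x += th * dt + rng.gauss(0.0, math.sqrt(dt))
        total += 0.5 * x * x + cost
    return total / n_paths

target = 0.5 * math.log(2.0)                    # -log E[exp(-W_1^2 / 2)]
v_zero = bd_value(lambda t, x: 0.0)             # any drift gives an upper bound
v_opt = bd_value(lambda t, x: -x / (2.0 - t))   # optimal feedback drift (HJB)
```

Here $v_{\mathrm{zero}}\approx\mathbb{E}[W_1^2/2]=\tfrac12$ strictly exceeds the target $\tfrac12\log 2\approx 0.347$, which the optimal drift recovers up to discretization and sampling error.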
\subsection{Exponential integrability}
We now prove Proposition \ref{lem: Lp-bound} by estimating the quantity on the right side of \eqref{eqn: variational}. Since the time $t=1$ is fixed, for simplicity we set
\[\vec{Y}^\theta:=(Y_1^\theta,Y_2^\theta)=(Y_1^\theta(1), Y_2^\theta(1)).\]
A simple application of Young's inequality gives
\begin{equation}
\frac{1}{2}E^q_N(\vec{I}^\theta)\leq C E^q_N(\vec{Y}^\theta)+E^q_N(\vec{Y}^\theta+\vec{I}^\theta)\nonumber
\end{equation}
for some large constant $C$. Hence it suffices to bound from below
\begin{equation}\label{EQU: Need to bound below 2}
\mathbb{E}_{\mathbb{Q}^\theta}\left[ R_{k,s,N}(Y_1^{\theta}+I_1^\theta)-CE^q_N(\vec{Y}^\theta)+\frac{1}{2}E_N^q(\vec{I}^\theta) +\frac{1}{2}\int_0^1\norm{\vec{\theta}(t)}_{L^2_x\times L^2_x}^2dt \right].
\end{equation}
The following lemma gives the regularity of $D^sY_1^\theta$, $:(D^sY_1^\theta)^2:$ and $Y^\theta_1$.
\begin{lemma}\label{Lemma: prob control on Y}
Let $2<p<\infty$. Then for $\varepsilon>0$,
\begin{equation}
\sup\limits_{\theta\in \mathbb{H}_a}\mathbb{E}_{\mathbb{Q}^\theta}\left[\norm{D^sY_1^\theta}_{C^{-\frac{1}{2}-\varepsilon}}^p + \norm{:(D^sY_1^\theta)^2:}_{C^{-1-\varepsilon}}^p + \norm{Y^\theta_1}_{C^{s-\frac{1}{2}-\varepsilon}}^p\right]<\infty.\nonumber
\end{equation}
\begin{proof}
This follows directly from Proposition 4.3 in \cite{GOTW}.
\end{proof}
\end{lemma}
A direct application of Cauchy-Schwarz (see \cite[Lemma 4.7]{GOTW}) gives
\begin{equation}
\norm{I_1^\theta}_{H^{s+1}}^2\leq \int_0^1\norm{\theta_t}_{L^2}^2dt.
\end{equation}
\begin{lemma}
For $s>\frac{3}{2}$,
\begin{equation}
\sup\limits_{\theta\in \mathbb{H}_a}\mathbb{E}_{\mathbb{Q}^\theta}\left[E_N^q(\vec{Y}^\theta) \right] <\infty\nonumber
\end{equation}
independently of $N$.
\end{lemma}
\begin{proof}
Under $\mathbb{Q}^\theta$, $\vec{Y}^\theta(1)=(Y_1^\theta(1),Y_2^\theta(1))$ has the same distribution as the pair $(u,v)$ under $\vec{\nu}_s$. The result then follows from \eqref{eqn: u-C-estimate}, \eqref{eqn: v-C-estimate}.
\end{proof}
We introduce abbreviated notation for the most common terms appearing in the estimates below. We set:
\begin{align*}
Y&:=Y_1^\theta,\\
\Theta&:=I_1^\theta,\\
E&:= E_N(\vec{I}^\theta).
\end{align*}
From the definition of $R_{k,s,N}$ we have
\begin{align}
R_{k,s,N}(Y+\Theta)=&\frac{k(k-1)}{2}\int_{\mathbb{T}^3}:(D^sY)^2: \sum\limits_{m=0}^{k-1}\binom{k-1}{m} Y^{k-1-m}\Theta^m\nonumber \\
&+k(k-1)\int_{\mathbb{T}^3}D^sYD^s\Theta \sum\limits_{m=0}^{k-1}\binom{k-1}{m} Y^{k-1-m}\Theta^m\nonumber \\
&+\frac{k(k-1)}{2}\int_{\mathbb{T}^3}(D^s\Theta)^2\sum\limits_{m=0}^{k-1}\binom{k-1}{m} Y^{k-1-m}\Theta^m\nonumber \\
&+\frac{1}{k+1}\int_{\mathbb{T}^3}(Y+\Theta)^{k+1}.
\end{align}
We aim to bound \eqref{EQU: Need to bound below 2} from below using Young's inequality and the positive terms
\begin{equation}
\int_{\mathbb{T}^3} (D^s\Theta)^2\Theta^{k-1},\quad \int_{\mathbb{T}^3} \Theta^{k+1},\quad E^q,\quad \norm{\Theta}_{H^{s+1}}^2\nonumber
\end{equation}
appearing in \eqref{EQU: Need to bound below 2}.
\begin{lemma}
(Terms quadratic in $D^sY$). Let $s>\frac{3}{2}$ and $0\leq m\leq k-1$. Then for all sufficiently small $\delta>0$ there exist a small $\varepsilon>0$, a large $p$ and a constant $c(\delta)$ such that
\begin{align*}
\left|\int_{\mathbb{T}^3} :(D^sY)^2: Y^{k-1-m}\Theta^m \right| \leq &~c(\delta)\left(\norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}^p+ \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}^p +\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^p \right)\\
&+\delta\left(\norm{\Theta}_{H^{s+1}}^2 +E^q \right).
\end{align*}
\end{lemma}
\begin{proof}
For $m=0$, using \eqref{EQU: Duality} and \eqref{EQU: Intermediate embeddings} and \eqref{EQU: Algebra property} we have
\begin{align*}
\left|\int_{\mathbb{T}^3} :(D^sY)^2: Y^{k-1}\right|&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{Y^{k-1}}_{B^{1+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{Y}_{C^{1+2\varepsilon}}^{k-1}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^{k-1}
\end{align*}
if $s>\frac{3}{2}$ and $\varepsilon>0$ is small. The estimate then follows from Young's inequality. If $m=k-1$, using \eqref{EQU: Duality}, \eqref{EQU: Intermediate embeddings}, \eqref{EQU: Fracational Leibniz rule cor} and \eqref{EQU: Besov embedding} we have
\begin{align*}
\left|\int_{\mathbb{T}^3} :(D^sY)^2: \Theta^{k-1}\right|&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{\Theta^{k-1}}_{B^{1+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{\Theta^{k-2}}_{L^1}\norm{\Theta}_{B^{1+2\varepsilon}_{\infty,\infty}}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{\Theta^{k+1}}_{L^1}^{\frac{k-2}{k+1}}\norm{\Theta}_{B^{\frac{5}{2}+3\varepsilon}_{2,2}}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}E^{\frac{k-2}{k+1}}\norm{\Theta}_{H^{s+1}}
\end{align*}
if $s>\frac{3}{2}$ and $\varepsilon>0$ is small. If we choose $q$ large enough so that
\begin{equation}
\frac{k-2}{q(k+1)}+\frac{1}{2}<1
\end{equation}
the stated inequality then follows from Young's inequality.
If $0<m<k-1$, then similarly to the above,
\begin{align*}
\left|\int_{\mathbb{T}^3} :(D^sY)^2: Y^{k-1-m}\Theta^m\right| & \lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{Y^{k-1-m}\Theta^m}_{B_{1,\infty}^{1+2\varepsilon}}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{Y^{k-1-m}}_{C^{1+2\varepsilon}}\norm{\Theta^m}_{B^{1+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{:(D^sY)^2:}_{C^{-1-\varepsilon}}\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^{k-1-m}E^{\frac{m-1}{k+1}}\norm{\Theta}_{H^{s+1}}.
\end{align*}
Hence if we choose $q$ large enough so that
\begin{equation}
\frac{m-1}{q(k+1)}+\frac{1}{2}<1,\nonumber
\end{equation}
Young's inequality then gives the desired result.
\end{proof}
\begin{lemma}
(Terms linear in $D^sY$). Let $s>\frac{5}{2}$ and $0\leq m\leq k-1$. Then for all sufficiently small $\delta>0$ there exist a small $\varepsilon>0$, a large $p$ and a constant $c(\delta)$ such that
\begin{align*}
\left|\int_{\mathbb{T}^3} D^sYD^s\Theta Y^{k-1-m}\Theta^m \right|\leq &~c(\delta) \left( \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}^p+\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^p \right)\\
&+\delta\left(\norm{\Theta}_{H^{s+1}}^2 +E^q\right).
\end{align*}
\end{lemma}
\begin{proof}
First we estimate the term corresponding to $m=k-1$. Using \eqref{EQU: Duality}, \eqref{EQU: Intermediate embeddings} followed by \eqref{EQU: Fractional Leibniz rule},
\begin{align}
\left|\int_{\mathbb{T}^3} D^sY D^s\Theta \Theta^{k-1}\right| \lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{D^s\Theta \Theta^{k-1}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}.\nonumber
\end{align}
Hence it remains to estimate $\norm{D^s\Theta \Theta^{k-1}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}$. Using \eqref{EQU: Fractional Leibniz rule}, \eqref{EQU: Intermediate embeddings}, \eqref{EQU: Besov embedding}, \eqref{EQU: Fracational Leibniz rule cor}, \eqref{EQU: Besov embedding} again, Jensen's inequality and \eqref{EQU: Interpolation}, we have
\begin{align}\label{EQU: Ds theta theta k-1}
\norm{D^s\Theta \Theta^{k-1}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}} &\lesssim \norm{D^s\Theta}_{B^{\frac{1}{2}+2\varepsilon}_{2,\infty}}\norm{\Theta^{k-1}}_{L^2}+\norm{D^s\Theta}_{L^2}\norm{\Theta^{k-1}}_{B^{\frac{1}{2}+2\varepsilon}_{2,\infty}}\nonumber \\
&\lesssim \norm{D^s\Theta}_{B^{\frac{1}{2}+2\varepsilon}_{2,\infty}}\norm{\Theta^{k-1}}_{B^{\frac{1}{2}+2\varepsilon}_{2,\infty}}\nonumber \\
&\lesssim \norm{\Theta}_{H^{s+\frac{1}{2}+2\varepsilon}}\norm{\Theta^{k-1}}_{B^{2+2\varepsilon}_{1,\infty}}\nonumber \\
&\lesssim \norm{\Theta}_{H^{s+\frac{1}{2}+2\varepsilon}}\norm{\Theta^{k-2}}_{L^1}\norm{\Theta}_{B^{2+2\varepsilon}_{\infty,\infty}}\nonumber \\
&\lesssim \norm{\Theta}_{H^{s+\frac{1}{2}+2\varepsilon}}\norm{\Theta^{k+1}}_{L^1}^\frac{k-2}{k+1}\norm{\Theta}_{B^{\frac{7}{2}+2\varepsilon}_{2,\infty}}\nonumber \\
&\lesssim \norm{\Theta}_{H^{s+1}}^\gamma \norm{\Theta}_{L^2}^{1-\gamma}\norm{\Theta^{k+1}}_{L^1}^\frac{k-2}{k+1}\norm{\Theta}_{H^{s+1}}\nonumber \\
&\lesssim \norm{\Theta}_{H^{s+1}}^{1+\gamma} E^{1-\gamma+\frac{k-2}{k+1}}
\end{align}
for $s>\frac{5}{2}$ and $\varepsilon>0$ small where $\gamma=\gamma(s,\varepsilon)=\frac{s+\frac{1}{2}+2\varepsilon}{s+1}<1$. If we choose $\varepsilon=\varepsilon(s,k)$ small enough and $q=q(s,k)$ large enough so that
\begin{equation}
\frac{1+\gamma}{2}+\frac{1}{q}\left( 1-\gamma+\frac{k-2}{k+1} \right)<1 \nonumber
\end{equation}
the desired inequality follows from Young's inequality.
Now for the case $0<m<k-1$, using \eqref{EQU: Duality}, \eqref{EQU: Intermediate embeddings} and \eqref{EQU: Fractional Leibniz rule}, we have
\begin{align*}
\left| \int_{\mathbb{T}^3} D^sY D^s\Theta Y^{k-1-m}\Theta^{m}\right| &\lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{Y^{k-1-m}D^s\Theta \Theta^{m}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{Y^{k-1-m}}_{C^{\frac{1}{2}+2\varepsilon}}\norm{D^s\Theta \Theta^{m}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^{k-1-m}\norm{D^s\Theta \Theta^{m}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}
\end{align*}
for $s>1$ and $\varepsilon>0$ small.
It remains to estimate the term $\norm{D^s\Theta \Theta^{m}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}$. If $s>\frac{5}{2}$ and $\varepsilon>0$ is small enough, this term can be estimated in a manner similar to \eqref{EQU: Ds theta theta k-1}.
Finally we estimate the term corresponding to $m=0$. We have,
\begin{align*}
\left|\int_{\mathbb{T}^3} D^sY D^s\Theta Y^{k-1}\right| &\lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{D^s\Theta Y^{k-1}}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{Y}_{C^{\frac{1}{2}+2\varepsilon}}^{k-1}\norm{D^s\Theta}_{B^{\frac{1}{2}+2\varepsilon}_{1,\infty}}\\
&\lesssim \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^{k-1}\norm{\Theta}_{H^{s+1}}
\end{align*}
for $s>1$ and $\varepsilon>0$ small. The desired inequality then follows from Young's inequality.
\end{proof}
\begin{lemma}
(Terms quadratic in $D^s\Theta$). Let $s>\frac{1}{2}$ and $0< m< k-1$. Then for all sufficiently small $\delta>0$ there exist a small $\varepsilon>0$, a large $p$ and a constant $c(\delta)$ such that
\begin{align}
\left|\int_{\mathbb{T}^3} (D^s\Theta)^2 Y^{k-1-m}\Theta^m\right| \leq &c(\delta) \left( \norm{D^sY}_{C^{-\frac{1}{2}-\varepsilon}}^p +\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^p \right)\nonumber\\
&+\delta\left(\norm{\Theta}_{H^{s+1}}^2 +\norm{D^s\Theta \Theta^{\frac{k-1}{2}}}_{L^2}^2 +E^q\right).\nonumber
\end{align}
\end{lemma}
\begin{proof}
Using Young's inequality,
\begin{align*}
\left|\int_{\mathbb{T}^3} (D^s\Theta)^2 Y^{k-1-m}\Theta^m \right| \leq C(\delta)\int_{\mathbb{T}^3} (D^s\Theta)^2 Y^{k-1}+ \delta\int_{\mathbb{T}^3} (D^s\Theta)^2 \Theta^{k-1}.
\end{align*}
It remains to estimate the first term on the right hand side of the above inequality. Using H\"older's inequality, \eqref{EQU: Interpolation} and the fact that $s>\frac{1}{2}$ we have,
\begin{align*}
\left|\int_{\mathbb{T}^3} (D^s\Theta)^2 Y^{k-1}\right| & \lesssim \norm{\Theta}_{H^s}^2\norm{Y^{k-1}}_{L^\infty}\\
& \lesssim \norm{\Theta}_{H^{s+1}}^{2\frac{s}{s+1}}\norm{\Theta}_{L^2}^{2\frac{1}{s+1}}\norm{Y}^{k-1}_{L^\infty}\\
&\lesssim \norm{\Theta}_{H^{s+1}}^{\frac{2s}{s+1}}\norm{\Theta}_{L^{k+1}}^{\frac{2}{s+1}}\norm{Y}^{k-1}_{C^{s-\frac{1}{2}-\varepsilon}}\\
&\lesssim \norm{\Theta}_{H^{s+1}}^{\frac{2s}{s+1}}E^{\frac{2}{(s+1)(k+1)}}\norm{Y}^{k-1}_{C^{s-\frac{1}{2}-\varepsilon}}.
\end{align*}
For $q$ large enough,
\begin{equation}
\frac{s}{s+1}+\frac{2}{q(s+1)(k+1)}<1 \nonumber
\end{equation}
and so the desired inequality follows from Young's inequality.
\end{proof}
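The interpolation inequality \eqref{EQU: Interpolation}, $\norm{f}_{H^{\gamma\tau}}\le \norm{f}_{H^{\tau}}^{\gamma}\norm{f}_{L^2}^{1-\gamma}$, used in the two proofs above, is just H\"older's inequality applied to the Fourier coefficients. A quick numerical check on random trigonometric polynomials (one-dimensional for simplicity; the code and names are our own):

```python
import math, random

def sobolev_norm(coeffs, s):
    # ||f||_{H^s}^2 = sum_n <n>^{2s} |c_n|^2 with <n> = (1 + n^2)^{1/2},
    # for f = sum_n c_n e^{i n x} given as a dict {n: c_n}.
    return math.sqrt(sum((1.0 + n * n) ** s * abs(c) ** 2
                         for n, c in coeffs.items()))

rng = random.Random(7)
tau = 2.5
for _ in range(200):
    coeffs = {n: complex(rng.gauss(0, 1), rng.gauss(0, 1))
              for n in range(-20, 21)}
    gamma = rng.uniform(0.0, 1.0)
    lhs = sobolev_norm(coeffs, gamma * tau)
    rhs = (sobolev_norm(coeffs, tau) ** gamma
           * sobolev_norm(coeffs, 0.0) ** (1.0 - gamma))
    assert lhs <= rhs * (1.0 + 1e-9)
```

On the Fourier side the inequality is exact H\"older with exponents $1/\gamma$ and $1/(1-\gamma)$, so no constant is lost.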
\begin{lemma}
(Remaining Terms). Let $s>\frac{1}{2}$. Then for all sufficiently small $\delta>0$ there exist a small $\varepsilon>0$ and a constant $C(\delta)$ such that
\begin{align}
\int_{\mathbb{T}^3} (Y+\Theta)^{k+1}\leq C(\delta)\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^{k+1}+\delta\int_{\mathbb{T}^3} \Theta^{k+1}.\nonumber
\end{align}
\end{lemma}
\begin{proof}
Using Young's inequality and \eqref{EQU: Intermediate embeddings} we have,
\begin{align}
\int_{\mathbb{T}^3} (Y+\Theta)^{k+1}&\leq C(\delta)\int_{\mathbb{T}^3} Y^{k+1}+\delta\int_{\mathbb{T}^3} \Theta^{k+1}\nonumber\\
&\leq C(\delta)\norm{Y}_{C^{s-\frac{1}{2}-\varepsilon}}^{k+1}+\delta\int_{\mathbb{T}^3} \Theta^{k+1},
\end{align}
which completes the proof.
\end{proof}
\section{Dispersionless case}\label{sec: dispersionless}
In this section, we show that dispersion is essential to the quasi-invariance result: it fails if the Laplacian term is absent from the system \eqref{NLW}. More precisely, we show that there exists a dense sequence of times $t$ such that the distribution of the flow of the dispersionless model \eqref{eqn: ODE} at time $t$ is not absolutely continuous with respect to the distribution of the initial data. This answers a question posed in the introductions of \cite{OT} and \cite{GOTW}.
The proof uses an idea developed by Oh, Tzvetkov and the third author in \cite{OST} to prove the same result for a Schr\"odinger-type equation, using almost-sure properties of the series \eqref{series}. Unlike in the situation of \cite{OST}, no explicit solution of the ODE \eqref{eqn: ODE} is available, so we instead use the invariance of the Hamiltonian to derive a contradiction.
The dispersionless system is
\begin{equation}\label{eqn: ODE}
\begin{cases}
\partial_t u=v,\\
\partial_t v= -u^k
\end{cases}.
\end{equation}
We take the initial data
\[(u(0,x),v(0,x)):=(u^\omega,v^\omega),\]
where $u^\omega$, $v^\omega$ are the random series already given in \eqref{series}:
\begin{equation}\label{eqn: uv-rep}
\begin{split}
u^\omega (x)&=\sum_{n\in \mathbb{Z}^3}\frac{g_n}{\langle n\rangle^{s+1}}e^{in\cdot x},\\
v^\omega (x)&=\sum_{n\in \mathbb{Z}^3}\frac{h_n}{\langle n\rangle^s} e^{in\cdot x}.
\end{split}
\end{equation}
Our main tool to derive Theorem \ref{thm: dispersionless}, the law of the iterated logarithm, gives a fine description of the regularity of the process at a \emph{fixed point} $x\in \mathbb{T}^3$. The key point is that the result holds almost surely, so it must also hold on the support of any measure that is mutually absolutely continuous with respect to $\mu_s$.
The analog of the following result for Gaussian fields indexed by $\mathbb{R}^3$ was proved in \cite[Theorem 1.3]{BJR}. Their result is more general and covers non-stationary Gaussian fields whose covariance is defined by a pseudodifferential operator; the proof uses a wavelet decomposition of the process. Given the explicit representations \eqref{series}, the statements below can also be derived more directly using classical tools, as was done for one-dimensional Gaussian fields in \cite{OST}. We do not reproduce the details of the proof here.
\begin{proposition}
For $x\in \mathbb{T}^3$, let $v(x)$ be given by the random series defined in \eqref{eqn: uv-rep}. We have the following:
\begin{enumerate}
\item For $s\in (\frac{1}{2},\frac{3}{2})$,
\[\limsup_{|h|\rightarrow 0} \frac{v(x+h)-v(x)}{\sqrt{c_s |h|^{2s-1}\log\log \frac{1}{|h|}}}=1\]
almost surely.
\item For $s=\frac{3}{2}$,
\[\limsup_{|h|\rightarrow 0} \frac{v(x+h)-v(x)}{\sqrt{c_s |h|^2\log \frac{1}{|h|}\log\log\log \frac{1}{|h|}}}=1\]
almost surely.
\item When $s>\frac{3}{2}$, let $r=\lfloor s-\frac{1}{2}\rfloor$, then $\partial_x^r v$ exists and is continuous, and satisfies the above LILs with $v$ replaced by $\partial_x^r v$ and $s$ replaced by $s-r$.
\end{enumerate}
\end{proposition}
The Hamiltonian associated to the ODE is
\begin{equation}
H(u,v)=\frac{1}{2}v^2(x)+\frac{1}{k+1}u^{k+1}(x).
\end{equation}
This quantity is conserved along the flow of the ODE \eqref{eqn: ODE}:
\begin{equation}\label{eqn: conserved}
H(u(0),v(0))=H(u(t),v(t))
\end{equation}
for each $t\ge 0$.
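For fixed $x$, \eqref{eqn: ODE} is the scalar anharmonic oscillator $\ddot u=-u^k$, and for odd $k$ its quarter period on the energy surface $H=H_0$ is $\frac{u_{\max}}{\sqrt{2H_0}}\cdot\frac{1}{k+1}B\!\left(\frac{1}{k+1},\frac12\right)$ with $u_{\max}=((k+1)H_0)^{1/(k+1)}$, which follows from $\mathrm{d}t=\mathrm{d}u/v$. The following sketch (our own code, for $k=3$) compares this closed form with a direct RK4 integration:

```python
import math

def rk4_step(u, v, dt, k=3):
    # One RK4 step for u' = v, v' = -u**k.
    def f(u_, v_):
        return v_, -u_ ** k
    k1u, k1v = f(u, v)
    k2u, k2v = f(u + 0.5 * dt * k1u, v + 0.5 * dt * k1v)
    k3u, k3v = f(u + 0.5 * dt * k2u, v + 0.5 * dt * k2v)
    k4u, k4v = f(u + dt * k3u, v + dt * k3v)
    return (u + dt * (k1u + 2 * k2u + 2 * k3u + k4u) / 6,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6)

def numerical_period(k=3, v0=1.0, dt=1e-3):
    # Start at (u, v) = (0, v0); the period is the time until the next
    # upward zero crossing of u with v > 0.
    u, v, t = 0.0, v0, 0.0
    while True:
        u_prev, t_prev = u, t
        u, v = rk4_step(u, v, dt, k)
        t += dt
        if u_prev < 0.0 <= u and v > 0.0:
            # Linear interpolation of the crossing time.
            return t_prev + dt * (-u_prev) / (u - u_prev)

def exact_period(k=3, v0=1.0):
    # T/4 = u_max / sqrt(2 H0) * (1/m) * B(1/m, 1/2), m = k + 1,
    # with u_max = ((k+1) H0)^{1/(k+1)} and H0 = v0^2 / 2.
    H0 = 0.5 * v0 ** 2
    m = k + 1
    u_max = (m * H0) ** (1.0 / m)
    beta = math.gamma(1.0 / m) * math.gamma(0.5) / math.gamma(1.0 / m + 0.5)
    return 4.0 * u_max / math.sqrt(2.0 * H0) * beta / m
```

For $k=1$ the closed form reduces to the familiar harmonic period $2\pi$, independent of $H_0$; for $k>1$ the period decreases as the energy grows.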
\begin{proof}[Proof of Theorem \ref{thm: dispersionless}]
Let $h_n$, $|h_n|\downarrow 0$, be a sequence along which the $\limsup$ in the LIL for $v(t,x)$ is attained. Differentiating both sides of \eqref{eqn: conserved} $r$ times with respect to $x$, we find:
\begin{equation}\label{eqn: equality}
v(0,x)\partial_x^rv(0,x)+u(0,x)^k\partial_x^r u(0,x)+\text{l.o.t.}=v(t,x)\partial_x^rv(t,x)+u(t,x)^k\partial_x^r u(t,x)+\text{l.o.t.},
\end{equation}
where ``l.o.t.'' denotes lower order terms which are more regular than $\partial_x^r v(0,x)$, $\partial_x^r v(t,x)$. The equality \eqref{eqn: equality} also holds with $x$ shifted by $h_n$ on both sides, so that after subtraction and division by
\[\sqrt{c_{s-r} |h_n|^{2(s-r)-1}\log\log \frac{1}{|h_n|}}\]
when $s-r\neq \frac{3}{2}$, and by
\[\sqrt{c_{s-r} |h_n|^2\log \frac{1}{|h_n|}\log\log\log \frac{1}{|h_n|}}\]
when $s-r=\frac{3}{2}$, we find, for each $x\in \mathbb{T}^3$ and $t\ge 0$:
\begin{equation}
v(0,x)\ge v(t,x)
\end{equation}
almost surely.
Exchanging the roles of $v(0,x)$ and $v(t,x)$, we obtain
\begin{equation}\label{eqn: static}
v^\omega(x)=v(0,x)=v(t,x)
\end{equation}
with probability 1.
Using \eqref{eqn: conserved} and \eqref{eqn: static}, we now find
\[\frac{1}{k+1}u^{k+1}(t,x)=H(u(0),v(0))-\frac{1}{2}v^2(t,x)=\frac{1}{k+1}u^{k+1}(0,x),\]
so
\[u(t,x)=\pm u(0,x),\]
almost surely. Let $H_0=H(u(0),v(0))$. The equality
\[(u(t,x),v(t,x))=(\pm u(0,x),v(0,x))\]
implies that $t$ is an integer multiple of the period $T$ of the system \eqref{eqn: ODE} on the energy surface $H(u,v)=H_0$:
\begin{equation}\label{eqn: T}
T:=4\left(\frac{2}{k+1}\right)^{\frac{k}{k+1}}(2H_0)^{\frac{1}{2}-\frac{k}{k+1}} \int_0^1\frac{1}{(1-y^2)^{\frac{k}{k+1}}}\,\mathrm{d}y,
\end{equation}
or equal to an integer multiple of $T$ plus or minus
\begin{equation}\label{eqn: T-diff}
\begin{split}
\Delta&:=\frac{2}{(k+1)^{k/(k+1)}}\int_{|v(0,x)|}^{\sqrt{2H_0}}\frac{1}{(H_0-\frac{y^2}{2})^{\frac{k}{k+1}}}\,\mathrm{d}y\\
&=2\left(\frac{2}{k+1}\right)^{\frac{k}{k+1}}(2H_0)^{\frac{1}{2}-\frac{k}{k+1}}\int_{\frac{|v(0,x)|}{\sqrt{2H_0}}}^1 \frac{1}{(1-y^2)^{\frac{k}{k+1}}}\,\mathrm{d}y;
\end{split}
\end{equation}
both formulas follow from $\mathrm{d}t=\pm\,\mathrm{d}v/u^{k}$ with $u^{k}=\big((k+1)(H_0-v^2/2)\big)^{\frac{k}{k+1}}$ along the orbit.
Both quantities \eqref{eqn: T} and \eqref{eqn: T-diff} have continuous distributions if $k\neq 1$, so
\[\mathbb{P}(t= n\cdot T + m\cdot \Delta \text{ for some } n\in \mathbb{Z}_+, m\in \{-1,0,1\})=0,\]
and the assumption of absolute continuity between the two distributions is untenable.
\end{proof}
\textbf{Acknowledgements} The authors would like to thank Tadahiro Oh for facilitating our collaboration, and explaining the argument in Appendix \ref{sec: critical}. P.S. is supported by NSF Grant DMS-1811093. W. J. T. was supported by The Maxwell Institute Graduate School in Analysis and its Applications, a Centre for Doctoral Training funded by the UK Engineering and Physical Sciences Research Council (grant EP/L016508/01), the Scottish Funding Council, Heriot-Watt University and the University of Edinburgh.
\section{Introduction}
For a positive integer $N$, let $\Gamma_1(N)$ be the subgroup of $\rit{SL}_2(\mathbf Z)$ defined by
\[
\Gamma_1(N)=\left\{\left. \begin{pmatrix} a & b \\ c & d \end{pmatrix}\in \rit{SL}_2(\mathbf Z)~\right |~ a-1\equiv c \equiv 0 \mod N \right\}.
\]
We denote by $A_1(N)$ the modular function field with respect to $\Gamma_1(N)$. For a positive integer $N\geq 6$, let $\mathfrak a=[a_1,a_2,a_3]$ be a triple of integers with the properties $0<a_i\leq N/2$ and $a_i\ne a_j$ for any $i,j$. For an element $\tau$ of the complex upper half plane $\mathfrak H$, we denote by $L_\tau$ the lattice of $\mathbf C$ generated by $1$ and $\tau$ and by $\wp(z;L_\tau)$ the Weierstrass $\wp$-function relative to the lattice $L_\tau$. In \cite{II}, we defined a modular function $W_{\a}(\tau)$ with respect to $\Gamma_1(N)$ by
\[
W_{\a}(\tau)=\frac{\wp (a_1/N;L_\tau)-\wp (a_3/N;L_\tau)}{\wp (a_2/N;L_\tau)-\wp (a_3/N;L_\tau)}.
\]
This function is one of generalized $\lambda$ functions introduced by S.Lang in Chapter 18, \S6 of \cite{LA}. He describes that it is interesting to investigate special values of generalized $\lambda$ functions at imaginary quadratic points, to see if they generate the ray class field. Here a point of $\H$ is called an imaginary quadratic point if it generates an imaginary quadratic field over $\mathbf Q$. In Theorem 3.7 of \cite{IK}, we showed, under a rather strong condition that $a_1a_2a_3(a_1-a_3)(a_2-a_3)$ is prime to $N$, that the values of $W_\a$ at imaginary quadratic points are units of ray class fields. Let $j$ be the modular invariant function. We showed in Theorem 5 of \cite{II} that each of the functions $W_{[3,2,1]},W_{[5,2,1]}$ generates $A_1(N)$ over $\mathbf C(j)$. In this article, we shall study the functions $W_\a$ in the particular case: $a_2=2,a_3=1$. To simplify the notation, henceforth we denote by $\Lambda_k$ the function $W_{[k,2,1]}$. We shall prove that if $2<k<N/2$, then $\Lambda_k$ generates $A_1(N)$ over $\mathbf C(j)$. This result implies that for an imaginary quadratic point $\alpha$ such that $\mathbf Z[\alpha]$ is the maximal order of the field $K=\mathbf Q(\alpha)$, the values $\Lambda_k(\alpha)$ and $\displaystyle e^{2\pi i/N}$ generate the ray class field of $K$ modulo $N$ over the Hilbert class field of $K$. Let $\delta=(k,N)$ be the greatest common divisor of $k$ and $N$. On the assumption that $k$ satisfies either (i)~$\delta=1$ or (ii) $\delta>1,(\delta,3)=1$ and $N/\delta$ is not a power of a prime number, we shall prove that values of $\Lambda_k$ at imaginary quadratic points are algebraic integers. Throughout this article, we use the following notation:\newline
For a function $f(\tau)$ and $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\rit{SL}_2(\mathbf Z)$, $f[A]_2,f\circ A$ represent
\[
f[A]_2=f\left(\frac{a\tau+b}{c\tau+d}\right)(c\tau+d)^{-2},~f\circ A=f\left(\frac{a\tau+b}{c\tau+d}\right).
\]
The greatest common divisor of $a,b\in\mathbf Z$ is denoted by $(a,b)$.
For an integral domain $R$, $R((q))$ represents the ring of formal Laurent series in a variable $q$ with coefficients in $R$, and $R[[q]]$ is the subring of $R((q))$ of series with non-negative order. For elements $\alpha,\beta$ of $R$, the notation $\alpha\mid\beta$ means that $\beta$ is divisible by $\alpha$, that is, $\beta=\alpha\gamma$ for some $\gamma\in R$.
\section{Auxiliary results}
Let $N$ be a positive integer greater than $6$. Put $q=\rit{exp}(2\pi i\tau/N),\zeta=\exp(2\pi i/N)$. For an integer $x$, let
$\{x\}$ and $\mu (x)$ be the integers defined by the following conditions:
\[
\begin{split}
&0\le \{x\}\le \frac N2,\quad \mu (x)=\pm 1,\\
&\begin{cases}\mu(x)=1\qquad &\text{if } x\modx {0,N/2}N,\\
x\equiv \mu (x)\{x\} \mod N\qquad&\text{otherwise.}
\end{cases}
\end{split}
\]
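In concrete computations with these reductions it is useful to note that $\{x\}$ is the distance from $x$ to $0$ modulo $N$, with $\mu(x)$ recording the sign of the representative. A small script (our own naming, purely illustrative) implementing the definition and checking the defining congruence $x\equiv\mu(x)\{x\}\bmod N$:

```python
def brace_mu(x, N):
    """Return ({x}, mu(x)) as defined in the text:
    0 <= {x} <= N/2, mu(x) = +/-1, with mu(x) = 1 when
    x = 0 or N/2 mod N, and x = mu(x)*{x} mod N otherwise."""
    h = x % N
    if 2 * h <= N:
        return h, 1
    return N - h, -1

# Check the defining congruence on a range of inputs.
for N in (7, 12):
    for x in range(-2 * N, 2 * N):
        br, mu = brace_mu(x, N)
        assert 0 <= 2 * br <= N
        assert (x - mu * br) % N == 0
```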
For an integer $s$ not congruent to $0 \mod N$, let
\[\phi_s(\tau)=\frac 1{(2\pi i)^2}\wp \left(\frac s N;L_\tau\right)-1/12.
\]
Let $\displaystyle A=\begin{pmatrix}a&b\\c&d\end{pmatrix}\in\rit{SL}_2(\mathbf Z)$. Put $s^*=\mu (sc)sd,u_s=\zeta^{s^*}q^{\{sc\}}$. Then by Lemma 1 of \cite{II}, we have
{\small
\begin{equation}\label{eq1}
\phi_s[A]_2=
\begin{cases}\displaystyle
\frac{\zeta^{s^*}}{(1-\zeta^{s^*})^2}-\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}n(1-\zeta^{s^*n})(1-\zeta^{-s^*n})q^{mnN}&\text{if }\{sc\}=0,\\
\displaystyle\sum_{n=1}^{\infty}n u_s^n-\displaystyle\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}n(1-u_s^n)(1-u_s^{-n})q^{mnN}&\text{otherwise}.
\end{cases}
\end{equation}
}
We shall need the following lemmas and propositions in the sections below.
\begin{lemma}\label{lem1}
Let $r,s,c,d$ be integers such that $0<r\ne s\leq N/2,~(c,d)=1$. Assume that $\{rc\}=\{sc\}$. Put $r^*=\mu(rc)rd, s^*=\mu(sc)sd$. Then we have $\zeta^{r^*-s^*}\ne 1$. Further if $\{rc\}=\{sc\}=0,N/2$, then $\zeta^{r^*+s^*}\ne 1$.
\end{lemma}
\begin{proof}
The assumption $\{rc\}=\{sc\}$ implies that $(\mu(rc)r-\mu(sc)s)c\modx 0N$. If $\zeta^{r^*-s^*}=1$, then $(\mu(rc)r-\mu(sc)s)d\modx 0N$. From $(c,d)=1$, we obtain $\mu(rc)r-\mu(sc)s\modx 0N$, which forces $r=s$ and contradicts our assumption; hence $\zeta^{r^*-s^*}\ne 1$. Suppose $\{rc\}=\{sc\}=0,N/2$ and $\zeta^{r^*+s^*}=1$. Then we have $(r+s)c\modx 0N,~(r+s)d\modx 0N$. Therefore $r+s\modx 0N$. This is impossible, because $0<r\ne s\leq N/2$.
\end{proof}
\begin{lemma}\label{lem2} Let $k\in\mathbf Z,\delta=(k,N)$.
\begin{enumerate}
\itemx i For an integer $\ell$, if $\delta\mid \ell$, then $(1-\zeta^\ell)/(1-\zeta^k)\in\mathbf Z[\zeta]$.
\itemx {ii} If $N/\delta$ is not a power of a prime number, then $1-\zeta^k$ is a unit of $\mathbf Z[\zeta]$.
\end{enumerate}
\end{lemma}
\begin{proof} If $\delta \mid \ell$, then there exists an integer $m$ such that $\ell\modx{mk}{N}$.
Therefore $\zeta^\ell=\zeta^{mk}$ and $(1-\zeta^k)\mid (1-\zeta^\ell)$. This shows (i). Let $p_i~(i=1,2)$ be distinct prime factors of $N/\delta$.
Since $N/p_i=\delta (N/(\delta p_i))$, part (i) applied with $k$ replaced by $\delta$ gives $1-\zeta^\delta\mid 1-\zeta^{N/p_i}$.
Therefore $1-\zeta^\delta \mid p_i~(i=1,2)$, since $\prod_{j=1}^{p_i-1}(1-\zeta^{jN/p_i})=p_i$. As $p_1$ and $p_2$ are coprime, $1-\zeta^\delta$ is a unit. Because $(k/\delta,N/\delta)=1$, $1-\zeta^k$ is also a unit.
\end{proof}
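Part (ii) can also be checked numerically: $1-\zeta^k$ is a unit of $\mathbf Z[\zeta]$ exactly when its field norm $\prod_{(j,N)=1}(1-\zeta^{jk})$ equals $\pm 1$. A short floating-point check (our own code, illustrative only):

```python
import cmath, math

def norm_one_minus_power(N, k):
    # Field norm from Q(zeta_N) to Q of 1 - zeta_N^k:
    # the product over the Galois conjugates zeta -> zeta^j, (j, N) = 1.
    zeta = cmath.exp(2j * math.pi / N)
    prod = complex(1.0)
    for j in range(1, N):
        if math.gcd(j, N) == 1:
            prod *= 1 - zeta ** (j * k)
    return prod

# N = 12, k = 2: delta = 2, N/delta = 6 is not a prime power -> unit.
# N = 12, k = 3: delta = 3, N/delta = 4 = 2^2 is a prime power -> not a unit.
```

For $N=12$, $k=2$ the norm is $1$, while for $N=12$, $k=3$ the norm is $4$, matching the dichotomy in the lemma.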
From \eqref{eq1} and Lemma~\ref{lem1}, we immediately obtain the following two propositions.
\begin{prop}\label{prop1} Let $r,s\in\mathbf Z$ such that $0<r\ne s \leq N/2$.
\begin{enumerate}
\itemx i If $\{rc\},\{sc\}\ne 0$, then
\[
(\phi_r-\phi_s)[A]_2\equiv \sum_{n=1}^\infty n(u_r^n-u_s^n)+u_r^{-1}q^N-u_s^{-1}q^N \mod q^N\mathbf Z[\zeta][[q]].
\]
\itemx{ii} If $\{rc\}=0$ and $\{sc\}\ne 0$, then
\[(\phi_r-\phi_s)[A]_2\equiv \frac{\zeta^{rd}}{(1-\zeta^{rd})^2}-\sum_{n=1}^\infty nu_s^n-u_s^{-1}q^N \mod q^N\mathbf Z[\zeta][[q]].\]
\itemx {iii} If $\{rc\}=\{sc\}=0$, then
\[
(\phi_r-\phi_s)[A]_2\equiv \frac{-\zeta^{sd}(1-\zeta^{(r-s)d})(1-\zeta^{(r+s)d})}{(1-\zeta^{rd})^2(1-\zeta^{sd})^2}~\mod q^N\mathbf Z[\zeta][[q]].
\]
\end{enumerate}
\end{prop}
\begin{prop}\label{prop2} Let $r,s\in\mathbf Z$ such that $0<r\ne s \leq N/2$. Put $\ell=\min(\{rc\},\{sc\})$. Then
\[(\phi_r-\phi_s)[A]_2=\theta_{r,s}(A)q^\ell(1+qh(q)),\]
where $h(q)\in\mathbf Z[\zeta][[q]]$ and $\theta_{r,s}(A)$ is a non-zero element of $\mathbf Q(\zeta)$ given as follows.
In the case $\{rc\}=\{sc\}$,
\[
\theta_{r,s}(A)=\begin{cases}-\zeta^{s^*}(1-\zeta^{r^*-s^*})\quad&\text{if }\ell\ne 0,N/2,\\
-\zeta^{s^*}(1-\zeta^{r^*-s^*})(1-\zeta^{r^*+s^*})\quad&\text{if }\ell=N/2,\\
\displaystyle\frac{-\zeta^{s^*}(1-\zeta^{r^*-s^*})(1-\zeta^{r^*+s^*})}{(1-\zeta^{r^*})^2(1-\zeta^{s^*})^2}\quad&\text{if }\ell=0.
\end{cases}
\]
In the case $\{rc\}\ne\{sc\}$, assuming that $\{rc\}<\{sc\}$,
\[
\theta_{r,s}(A)=\begin{cases}\displaystyle \zeta^{r^*}\quad&\text{if }\ell\ne 0,\\
\displaystyle\frac{\zeta^{r^*}}{(1-\zeta^{r^*})^2}\quad&\text{if }\ell=0.
\end{cases}
\]
\end{prop}
\section{Values of $\Lambda_k$ at imaginary quadratic points}
In this section, we shall prove that the values of $\Lambda_k=W_{[k,2,1]}$ at imaginary quadratic points are algebraic integers.
\begin{prop}\label{prop3} Let $k$ be an integer such that $3\leq k<N/2$. Put $\delta=(k,N)$. Assume either \rit{(i)} $\delta=1$ or \rit{(ii)} $\delta>1,(\delta,3)=1$ and $N/\delta$ is not a power of a prime number. Then for $A\in\rit{SL}_2(\mathbf Z)$,we have
\[\Lambda_k\circ A\in\mathbf Z[\zeta]((q)).\]
\end{prop}
\begin{proof} Put $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$. Proposition \ref{prop2} shows
\[\Lambda_k\circ A=\omega f(q),\]
where $\omega=\theta_{k,1}(A)/\theta_{2,1}(A)$ and $f$ is a power series in $\mathbf Z[\zeta]((q))$. Therefore it is sufficient to prove that $\omega\in \mathbf Z[\zeta]$. First we consider the case $\{c\}\ne 0$. Let $\{2c\}\ne\{c\}$. By (ii) of Proposition \ref{prop2}, we see that $1/(\phi_2-\phi_1)[A]_2 \in \mathbf Z[\zeta]((q))$.
Further, if $\{kc\}\ne0$, then $(\phi_k-\phi_1)[A]_2\in\mathbf Z[\zeta][[q]]$. If $\{kc\}=0$, then $\delta>1$ and $c\modx0{N/\delta}$. Therefore $\zeta^{kd}$ is a primitive $N/\delta$-th root of unity. The assumption (ii) shows $1-\zeta^{kd}$ is a unit. Thus $(\phi_k-\phi_1)[A]_2\in\mathbf Z[\zeta][[q]]$. Hence we have $\omega\in\mathbf Z[\zeta]$. Let $\{2c\}=\{c\}$.
Then, since $\{c\}\ne 0$, we have $N\modx 03,~(k,3)=1$ and $\{c\}=\{2c\}=\{kc\}=N/3$, $\mu(2c)=-\mu(c)$, $\mu(kc)=(\frac k3)\mu(c)$, where $(\frac *3)$ is the Legendre symbol.
By the same proposition, we know that $\displaystyle\omega=(1-\zeta^{(\mu(kc)k-\mu(c))d})/(1-\zeta^{-3\mu(c)d})$. Since $\mu(kc)k-\mu(c)\modx 03$, we have $\omega\in\mathbf Z[\zeta]$.
Next consider the case $\{c\}=0$. Then we have $\{c\}=\{2c\}=\{kc\}=0,\mu(c)=\mu(2c)=\mu(kc)=1$, $(d,N)=1$ and
\[
\omega=\left(\frac{1-\zeta^{2d}}{1-\zeta^{kd}}\right)^2\cdot\frac{(1-\zeta^{(k-1)d})(1-\zeta^{(k+1)d})}{(1-\zeta^d)(1-\zeta^{3d})}.
\]
If $\delta =1$, then $(kd,N)=1$. If $\delta\ne 1$, then the assumption (ii) implies $(1-\zeta^{kd})$ is a unit. Therefore $(1-\zeta^{2d})/(1-\zeta^{kd})\in\mathbf Z[\zeta]$. If $N\not\equiv 0\mod 3$, then since $(3d,N)=1$, we know \[\frac{(1-\zeta^{(k-1)d})(1-\zeta^{(k+1)d})}{(1-\zeta^d)(1-\zeta^{3d})}\in\mathbf Z[\zeta].\]
If $N\modx 03$, then $(k,3)=1$ and one of $k+1,k-1$ is divisible by $3$. Lemma \ref{lem1} (i) gives
\[\displaystyle\frac{(1-\zeta^{(k-1)d})(1-\zeta^{(k+1)d})}{(1-\zeta^d)(1-\zeta^{3d})}\in\mathbf Z[\zeta].\]
Hence we obtain $\omega\in\mathbf Z[\zeta]$.
\end{proof}
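As an independent numerical sanity check of the computation above (illustrative only and not part of the proof; the parameters $N=7$, $k=3$, $d=1$, corresponding to the case $\{c\}=0$ with $\delta=1$, are our own choice), one can verify that the field norm of $\omega$ equals $1$, consistent with $\omega$ being a unit of $\mathbf Z[\zeta]$:

```python
import cmath
import math

# Illustrative parameters (not prescribed by the text): N = 7, k = 3, d = 1,
# falling under the case {c} = 0 of Proposition 3 with delta = (k, N) = 1.
N, k, d = 7, 3, 1
zeta = cmath.exp(2j * cmath.pi / N)

def omega(z):
    # The quantity omega from the case {c} = 0, with zeta replaced by a
    # Galois conjugate z = zeta**t.
    return ((1 - z**(2 * d)) / (1 - z**(k * d)))**2 \
        * (1 - z**((k - 1) * d)) * (1 - z**((k + 1) * d)) \
        / ((1 - z**d) * (1 - z**(3 * d)))

# Field norm: product over the Galois conjugates zeta -> zeta**t, gcd(t,N)=1.
# Each factor 1 - zeta**a with gcd(a, 7) = 1 has norm Phi_7(1) = 7, and the
# four such factors in numerator and denominator cancel, so the norm is 1.
norm = 1
for t in range(1, N):
    if math.gcd(t, N) == 1:
        norm *= omega(zeta**t)

print(norm)  # approximately 1 + 0j
```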
\begin{thm} Let $\alpha$ be an imaginary quadratic point. Then $\Lambda_k(\alpha)$ is an algebraic integer.
\end{thm}
\begin{proof}
Let $\R$ be a transversal of the coset decomposition of $\rit{SL}_2(\mathbf Z)$ by $\Gamma_1(N)\{\pm E_2\}$, where $E_2$ is the unit matrix. Consider a modular equation $\Phi(X,j)=\prod_{A\in \R}(X-\Lambda_k\circ A)$. Since $\Lambda_k\circ A$ has no poles in $\H$ and $\Lambda_k\circ A\in\mathbf Z[\zeta]((q))$ by Proposition \ref{prop3}, the coefficients of $\Phi(X,j)$ are polynomials of $j$ with coefficients in $\mathbf Z[\zeta]$. Since $j(\alpha)$ is an algebraic integer (see Theorem 10.23 in \cite{C1}), $\Phi(X,j(\alpha))$ is a monic polynomial with algebraic integer coefficients. Because $\Lambda_k(\alpha)$ is a root of $\Phi(X,j(\alpha))$, it is an algebraic integer.
\end{proof}
Further we can show that $\Phi(X,j)\in \mathbf Z[j][X]$ and that $\Lambda_k(\alpha)$ belongs to the ray class field of $\mathbf Q(\alpha)$ modulo $N$. For details, see \S 3 of \cite{IK}.
\begin{cor} Let $A\in\rit{SL}_2(\mathbf Z)$. Then the values of the function $\Lambda_k\circ A$ at imaginary quadratic points are algebraic integers. In particular, the function
\[
\frac{\wp (k\tau/N;\tau)-\wp (\tau/N;\tau)}{\wp (2\tau/N;\tau)-\wp (\tau/N;\tau)}
\]
takes algebraic integer values at imaginary quadratic points, for $2<k<N/2$.
\end{cor}
\begin{proof}
Let $\alpha$ be an imaginary quadratic point. Then, $A(\alpha)$ is an imaginary quadratic point. Therefore, we have the former part of the assertion. If we put $\displaystyle A=\begin{pmatrix}0&-1\\1&0 \end{pmatrix}$, then from the transformation formula of $\wp((r\tau+s)/N;L_\tau)$ in \S2 of \cite{II}, we obtain the latter part.
\end{proof}
\section{Generators of $A_1(N)$}
Let $A(N)$ be the modular function field of the principal congruence subgroup $\Gamma (N)$ of level $N$. For a subfield $\F$ of $A(N)$, let us denote by $\F_{\mathbf Q(\zeta)}$ the subfield of $\F$ consisting of all modular functions having Fourier coefficients in $\mathbf Q(\zeta)$.
\begin{thm}\label{th1} Let $k$ be an integer such that $2<k<N/2$. Then we have $A_1(N)_{\mathbf Q(\zeta)}=\mathbf Q(\zeta)(\Lambda_k,j)$.
\end{thm}
\begin{proof}
By Theorem 3 of Chapter 6 of \cite{LA}, the field $A(N)_{\mathbf Q(\zeta)}$ is a Galois extension over $\mathbf Q(\zeta)(j)$ with the Galois group $\rit{SL}_2(\mathbf Z)/\Gamma(N)\{\pm E_2\}$ and the field $A_1(N)_{\mathbf Q(\zeta)}$ is the fixed field of the subgroup $\Gamma_1(N)\{\pm E_2\}$. Since $\Lambda_k\in A_1(N)_{\mathbf Q(\zeta)}$, to prove the assertion, we have only to show $A\in\Gamma_1(N)\{\pm E_2\}$, for $A\in\rit{SL}_2(\mathbf Z)$ such that $\Lambda_k\circ A=\Lambda_k$. Let $A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\rit{SL}_2(\mathbf Z)$ such that $\Lambda_k\circ A=\Lambda_k$. Since the order of $q$-expansion of $\Lambda_k$ is $0$ and that of $\Lambda_k\circ A$ is $\min(\br{kc},\br{c})-\min(\br{2c},\br{c})$ by Proposition \ref{prop2}, we have
\begin{equation}\label{eq2}
\min(\br{kc},\br{c})=\min(\br{2c},\br{c}).
\end{equation}
By considering power series modulo $q^N$, thus modulo $q^N\mathbf Q(\zeta)[[q]]$, from Proposition \ref{prop2} we obtain
\begin{equation}\label{eq3}
\theta_{2,1}(E_2)(\phi_k-\phi_1)[A]_2\equiv \theta_{k,1}(E_2)(\phi_2-\phi_1)[A]_2\quad \mod q^N
\end{equation}
For an integer $i$, put $u_i=\zeta^{\mu(ic)id}q^{\br{ic}},\omega_i=\zeta^{(\mu(ic)i-\mu(c))d}$.
First of all, we shall prove that $c\modx 0N$. Let us assume $c\nmodx 0N$.
Suppose that $\br{2c}=\br c$. Since $\br c\ne 0$, we see $\br c=N/3$. Further since by \eqref{eq2} $\br{kc}\geq \br c$, we have $(k,3)=1, \br c=\br{2c}=\br{kc}=N/3$ and $u_k=\omega_ku_1$, $u_2=\omega_2u_1$. Lemma \ref{lem1} gives that $\omega_k,\omega_2\ne 1,\omega_k\ne\omega_2$.
By \eqref{eq3} and Proposition \ref{prop1},
\[
\begin{split}
\theta_{2,1}(E_2)&\left(\sum_nn(u_k^n-u_1^n)+u_k^{-1}q^N-u_1^{-1}q^N\right)\equiv\\
&\theta_{k,1}(E_2)\left(\sum_nn(u_2^n-u_1^n)+u_2^{-1}q^N-u_1^{-1}q^N\right)\quad \mod q^N.
\end{split}
\]
Therefore
\[
\begin{split}
\theta_{2,1}(E_2)&\left(\sum_nn(\omega_k^n-1)u_1^n+(\omega_k^{-1}-1)u_1^{-1}q^N\right)\equiv\\
&\theta_{k,1}(E_2)\left(\sum_nn(\omega_2^n-1)u_1^n+(\omega_2^{-1}-1)u_1^{-1}q^N\right)\quad \mod q^N.
\end{split}
\]
Since $q^N=\zeta^{-3\mu(c)d}u_1^3$,
\[
\begin{split}
\theta_{2,1}(E_2)&\left((\omega_k-1)u_1+\left(2(\omega_k^2-1)+\zeta^{-3\mu(c)d}(\omega_k^{-1}-1)\right)u_1^2\right)\equiv\\
&\theta_{k,1}(E_2)\left((\omega_2-1)u_1+\left(2(\omega_2^2-1)+\zeta^{-3\mu(c)d}(\omega_2^{-1}-1)\right)u_1^2\right)\quad \mod u_1^3.
\end{split}
\]
By comparing the coefficients of $u_1,u_1^2$ on both sides, we have
\[
2(\omega_k+1)-\omega_k^{-1}\zeta^{-3\mu(c)d}=2(\omega_2+1)-\omega_2^{-1}\zeta^{-3\mu(c)d}.
\]
This equation implies that $\zeta^{3\mu(c)d}\omega_2\omega_k=-1/2$. We have a contradiction. Suppose $\br{2c}>\br c$. Then by \eqref{eq2}, we know $\br{kc}\geq\br c$. If $\br{kc}>\br c$, then the $q$-expansion of $\Lambda_k\circ A$ begins with $1$. Thus $\theta_{k,1}(E_2)=\theta_{2,1}(E_2)$. This gives that $(1-\zeta^{k+2})(1-\zeta^{k-2})=0$. We have a contradiction. If $\br{kc}=\br c$, then $\br{kc},\br c\ne 0,N/2$ and $u_k=\omega_ku_1$. By considering mod $q^N$ as above, we obtain
\[
\begin{split}
\theta_{2,1}(E_2)&\left(\sum_nn(\omega_k^n-1)u_1^n+(\omega_k^{-1}-1)u_1^{-1}q^N\right)\equiv\\
&\theta_{k,1}(E_2)\left(\sum_nn(u_2^n-u_1^n)+u_2^{-1}q^N-u_1^{-1}q^N\right)\quad \mod q^N.
\end{split}
\]
Thus
\[
\begin{split}
u_1+2(\omega_k+1)u_1^2-&\omega_k^{-1}u_1^{-1}q^N\equiv\\
& u_1-u_2+2u_1^2-u_2^{-1}q^N+u_1^{-1}q^N-2u_2^2+\cdots\quad\mod q^N.
\end{split}
\]
Therefore
\[2\omega_k u_1^2-(\omega_k^{-1}+1)u_1^{-1}q^N+h_1(u_1)\equiv -u_2-u_2^{-1}q^N-2u_2^2+h_2(u_2)\quad\mod q^N,\]
where $h_i(u_i)$ is a polynomial in $u_i$ with terms $u_i^n$, $n>2$.
Since $\br{2c}>\br c$, we see $\br{2c}\leq N-\br{2c}<N-\br c$. Therefore we have $2\br c<N-\br c$ and $2\br c=\br{2c}=N-\br{2c}$ or $2\br c=\br{2c}<N-\br{2c}$. By comparing the coefficients of the first terms, we obtain $2\omega_k\zeta^{2\mu(c)d}=-(\zeta^{\mu(2c)2d}+\zeta^{-\mu(2c)2d})$ in the case $\br{2c}=N-\br{2c}$ and $2\omega_k\zeta^{2\mu(c)d}=-\zeta^{\mu(2c)2d}$ in the case $\br{2c}<N-\br{2c}$. In the former case, $N$ is even and $\br{2c}=N/2$. So we have $\mu(2c)2c\modx 0{N/2}$ and $\mu(2c)2d\modx 0{N/2}$. Therefore from $(c,d)=1$ we obtain $2\modx 0{N/2}$. This is impossible. In the latter case, clearly we have a contradiction. Suppose $\br{2c}<\br c$. Then $\br{kc}=\br{2c}$. If $\br{2c}=0$, then $k,N$ are even and $\br c=N/2$. From Proposition \ref{prop1}, we get
\[
\begin{split}
(\phi_k-\phi_1)[A]_2&=\frac{\zeta^{kd}}{(1-\zeta^{kd})^2}-(\zeta^d+\zeta^{-d})q^{N/2}\quad \mod q^N,\\
(\phi_2-\phi_1)[A]_2&=\frac{\zeta^{2d}}{(1-\zeta^{2d})^2}-(\zeta^d+\zeta^{-d})q^{N/2}\quad \mod q^N.
\end{split}
\]
By using \eqref{eq3},
\[
\begin{split}
\theta_{2,1}(E_2)\frac{\zeta^{kd}}{(1-\zeta^{kd})^2}=\theta_{k,1}(E_2)\frac{\zeta^{2d}}{(1-\zeta^{2d})^2},\\
\theta_{2,1}(E_2)(\zeta^d+\zeta^{-d})=\theta_{k,1}(E_2)(\zeta^d+\zeta^{-d}).
\end{split}
\]
If $\zeta^d+\zeta^{-d}=0$, then $2d \modx 0{N/2}$. Since $2c\modx 0{N/2}$ and $(c,d)=1$, we see $2\modx 0{N/2}$. This is impossible. Therefore $\theta_{2,1}(E_2)=\theta_{k,1}(E_2)$ and $\frac{\zeta^{kd}}{(1-\zeta^{kd})^2}=\frac{\zeta^{2d}}{(1-\zeta^{2d})^2}$. This implies that $(1-\zeta^{(k+2)d})(1-\zeta^{(k-2)d})=0$. Lemma \ref{lem1} gives a contradiction. Hence $\br{2c},\br c\ne 0,N/2$. Let $u_k=\omega u_2$, where $\omega=\omega_k/\omega_2$. By \eqref{eq3},
\[
\begin{split}
\theta_{2,1}(E_2)(\sum_n &n(\omega^nu_2^n-u_1^n)+\omega^{-1}u_2^{-1}q^N-u_1^{-1}q^N)\equiv \\
&\theta_{k,1}(E_2)(\sum_n n(u_2^n-u_1^n)+u_2^{-1}q^N-u_1^{-1}q^N)\quad \mod q^N.\end{split}
\]
Therefore $\theta_{2,1}(E_2)\omega=\theta_{k,1}(E_2)$ and
\[
\begin{split}
\sum_nn(\omega^n-\omega)u_2^n&+(\omega^{-1}-\omega)u_2^{-1}q^N\equiv \\
&\sum_nn(1-\omega)u_1^n+(1-\omega)u_1^{-1}q^N\quad \mod q^N.
\end{split}
\]
Since, by Lemma \ref{lem1}, $\omega\ne 1$, we have
\[
2\omega u_2^2-(1+\omega^{-1})u_2^{-1}q^N+h_2(u_2)\equiv -u_1-u_1^{-1}q^N-2u_1^2+h_1(u_1)~~\mod q^N,
\]
where $h_i(u_i)$ is a polynomial in $u_i$ with terms $u_i^n$, $n>2$. Since
$\br c<N-\br c<N-\br{2c}$, we have $2\br{2c}=\br c$ and $2\omega\zeta^{2\mu(2c)2d}=-\zeta^{\mu(c)d}$. This gives a contradiction. Hence we have $c\modx 0N$.
Let $c\modx 0N$. Then by the definition of $\phi_s$, we have $\Lambda_k\circ A=\frac{\phi_{\br{kd}}-\phi_{\br{d}}}{\phi_{\br{2d}}-\phi_{\br{d}}}$. From now on, to save labor, we put $r=\br{2d},s=\br{kd},t=\br d$. Then since $r,s,t$ are distinct from each other and $\min(s,t)=\min(r,t),~(d,N)=1$, we have $r,s,t\ne 0,N/2$ and $t<r,s$. We have only to prove $t=1$. Let us assume $t>1$.
Let $T=\begin{pmatrix}1&0\\1&1\end{pmatrix}$. Then
\begin{equation}\label{eq5}
\Lambda_k\circ T=\left(\frac{\phi_s-\phi_t}{\phi_r-\phi_t}\right)\circ T.
\end{equation}
If $\ell$ is an integer such that $0<\ell<N/2$, then $\mu(\ell)=1,\br \ell=\ell$. Let $u=\zeta q$. Then
\begin{equation}\label{eq6}
\phi_\ell[T]_2\equiv \sum_n nu^{\ell n}+u^{N-\ell} \mod q^N.
\end{equation}
From \eqref{eq5},
\[(\phi_r\phi_1+\phi_s\phi_2+\phi_t\phi_k)[T]_2=(\phi_t\phi_2+\phi_s\phi_1+\phi_r\phi_k)[T]_2.
\]
By comparing the orders of the $q$-series on both sides, we see
$r=t+1<s$. Since $t\geq 2$ and $t+2\leq s<N/2$, we know that $2t\geq t+2,N>2t+4$.
By \eqref{eq6} and by the inequality relations that $r=t+1,s\geq t+2,2t\geq t+2,N>2t+4$, we have modulo $u^{t+4}$,
\[
\begin{split}
&\phi_r\phi_1[T]_2\modx{u^{t+2}+2u^{t+3}}{u^{t+4}},~\phi_s\phi_2[T]_2\modx{0}{u^{t+4}},\\
&~\phi_t\phi_k[T]_2\modx{u^{t+k}}{u^{t+4}},\phi_t\phi_2[T]_2\modx{u^{t+2}}{u^{t+4}},\\
&\phi_s\phi_1[T]_2\modx{u^{s+1}}{u^{t+4}},~\phi_r\phi_k[T]_2\modx{0}{u^{t+4}}.
\end{split}
\]
Therefore we obtain a congruence:
\[2u^{t+3}+u^{t+k}\modx{u^{s+1}}{u^{t+4}}.\]
Since the coefficients of $u^{t+3}$ on both sides are distinct from each other, we have a contradiction. Hence $t=1$.
\end{proof}
We obtain the following theorem from the Gee-Stevenhagen theory in \cite{GA} and \cite{GAS}. See also Chapter 6 of \cite{SG}.
\begin{thm}
Let $N$ and $k$ be as above. Let $\alpha\in\H$ such that $\mathbf Z[\alpha]$ is the maximal order of an imaginary quadratic field $K$. Then the ray class field of $K$ modulo $N$ is generated by $\Lambda_k(\alpha)$ over $\mathbf Q(\zeta,j(\alpha))$.
\end{thm}
\begin{proof}
The assertion is deduced from Theorems 1 and 2 of \cite{GA} and Theorem \ref{th1}.
\end{proof}
\section{Introduction}
\setcounter{equation}{0}
A map $f:(X,d_X)\to(Y,d_Y)$ between two metric spaces $(X,d_X)$ and $(Y,d_Y)$ is said to have distortion at most $D$ if there exists a constant $C>0$ such that
\[
Cd_X(x_1,x_2)\le d_Y(f(x_1),f(x_2))\le CDd_X(x_1,x_2),\quad \forall x_1,x_2\in X.
\]
For $0<\varepsilon<1$, the $(1-\varepsilon)$-snowflake of a metric space $(X,d_X)$ is defined to be the metric space $(X,d_X^{1-\varepsilon})$ (it is clear that $d_X^{1-\varepsilon}$ also defines a metric on $X$).
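For the reader's convenience, the fact that $d_X^{1-\varepsilon}$ is a metric reduces to the elementary subadditivity inequality $(a+b)^\theta\le a^\theta+b^\theta$ for $a,b\ge 0$ and $0<\theta\le 1$:

```latex
\[
d_X(x_1,x_3)^{1-\varepsilon}
\le \big(d_X(x_1,x_2)+d_X(x_2,x_3)\big)^{1-\varepsilon}
\le d_X(x_1,x_2)^{1-\varepsilon}+d_X(x_2,x_3)^{1-\varepsilon}.
\]
```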
The metric spaces that we will focus on this paper are Carnot groups. Following Le Donne \cite{le2017primer}, a Carnot group is a 5-tuple $(G,\delta_\lambda,\Delta,\|\cdot\|,d_G)$, where:
\begin{itemize}
\item The Lie group $G$ is a stratified group, i.e., $G$ is a simply connected Lie group whose Lie algebra $\mathfrak{g}$ admits the direct sum decomposition
\[
\mathfrak{g}=V_1\oplus V_2\oplus\cdots\oplus V_s,
\]
where $V_{s+1}=0$ and $V_{r+1}=[V_1,V_r]$ for $r=1,\cdots,s$.
\item For each $\lambda\in \bbr^+$, the linear map $\delta_\lambda:\mathfrak{g}\to \mathfrak{g}$ is defined by
\[
\left.\delta_\lambda\right|_{V_r}=\lambda^r \mathrm{id}_{V_r},\quad r=1,\cdots,s.
\]
\item The bundle $\Delta$ over $G$ is the extension of $V_1$ to a left-invariant subbundle:
\[
\Delta_p\coloneqq (dL_p)_e V_1,\quad p\in G.
\]
\item The norm $\|\cdot\|$ is initially defined on $V_1$, and is then extended to $\Delta$ as a left-invariant norm:
\[
\|(dL_p)_e(v)\|\coloneqq \|v\|,\quad p\in G,~v\in V_1.
\]
\item The metric $d_G$ on $G$ is the \textit{Carnot-Carath\'eodory distance} associated to $\Delta$ and $\|\cdot\|$, i.e.,
\[
d_G(p,q)\coloneqq \inf\left\{\int_0^1 \|\dot{\gamma}(t)\|dt : \gamma\in C_{\mathrm{pw}}^\infty ([0,1];G),\gamma(0)=p,\gamma(1)=q,\dot{\gamma}\in \Delta \right\},\quad p,q\in G,
\]
where $C_{\mathrm{pw}}^\infty ([0,1];G)$ consists of the piecewise smooth functions from $[0,1]$ to $G$.
\end{itemize}
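To make this data concrete, consider the simplest noncommutative instance, the first Heisenberg group; this is the standard presentation, with the basis notation $X,Y,T$ of our own choosing:

```latex
\[
\mathfrak h^3=V_1\oplus V_2,\qquad
V_1=\operatorname{span}\{X,Y\},\quad
V_2=\operatorname{span}\{T\},\qquad
[X,Y]=T,\quad [X,T]=[Y,T]=0,
\]
```

so the step is $s=2$, the dilations act by $\delta_\lambda X=\lambda X$, $\delta_\lambda Y=\lambda Y$, $\delta_\lambda T=\lambda^2 T$, and the horizontal bundle $\Delta$ is the left-invariant extension of $\operatorname{span}\{X,Y\}$.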
One of the simplest examples of a noncommutative Carnot group is the Heisenberg group $\bbh^3$. It has been shown by Tao \cite{tao2018embedding} that, for $0<\varepsilon<1/2$, one can embed the snowflake $(\bbh^{3},d_{\bbh^{3}}^{1-\varepsilon})$ into a bounded dimensional Euclidean space with optimal distortion $O(\varepsilon^{-1/2})$ (more precisely, Theorem \ref{mainthm} below for the case $G=\bbh^3$ was proven in \cite{tao2018embedding}).\footnote{Following widespread convention, $A\lesssim B$, $A=O(B)$, and $B=\Omega(A)$ mean $A\le C B$ for a universal constant $C$, and $A\asymp B$ means $(A\lesssim B) \wedge (B\lesssim A)$. If the constant $C$ depends on other parameters, this is denoted using subscripts, e.g., $A\lesssim_{C_0,N_0}B$, $A=O_{C_0,N_0}(B)$, and $B=\Omega_{C_0,N_0}(A)$ mean $A\le C(C_0,N_0)B$ where $C(C_0,N_0)$ depends only on $C_0$ and $N_0$, and $A\asymp_{C_0}B$ means $A\lesssim_{C_0}B$ and $B\lesssim_{C_0}A$.} The goal of this paper is to show that the methods of \cite{tao2018embedding} extend\footnote{When comparing \cite{tao2018embedding} and this paper, one should keep in mind that the vectorfields and metrics of \cite{tao2018embedding} are right-invariant, whereas this paper uses left-invariant vectorfields and metrics.} to the more general setting of Carnot groups.
\begin{theorem}\label{mainthm}
For each Carnot group $G$, there exists a natural number $D_G$ such that for every $0<\varepsilon<1/2$ there exists an embedding of $(G,d_G^{1-\varepsilon})$ into $\bbr^{D_G}$ with distortion $O_G(\varepsilon^{-1/2})$.
\end{theorem}
We have not attempted to optimize $D_G$, but from the analysis in this paper it will be clear that our choice of $D_G$ is $\Omega(23^{n_h})$, where $n_h$ is the Hausdorff dimension of $G$, i.e., $n_h=\sum_{r=1}^s r \dim V_r$.
We will not work in the large epsilon regime $\frac{1}{2}\le \varepsilon <1$ because Theorem \ref{mainthm} is simply not true in this case; see Assouad \cite{assouad1983plongements} for the notion of metric dimension which tells us why such a result is impossible. Furthermore, we will assume $0<\varepsilon<\frac 1A$, where $A$ is a very large number. For $\frac 1A\le \varepsilon<\frac{1}{2}$, a construction of Assouad \cite{assouad1983plongements} gives such an embedding; we are thus only interested in the small epsilon regime.
One standard consequence of the above theorem, which was also observed in \cite{tao2018embedding}, is the following corollary.
\begin{corollary}
Let $G$ be a Carnot group, and suppose $\Gamma\subset G$ is a discrete subgroup of $G$, where for any two distinct points $\gamma_1,\gamma_2\in \Gamma$ one has $d_G(\gamma_1,\gamma_2)\ge 1$. Let $D_G$ be as in Theorem \ref{mainthm}, and for $R\ge 2$ define the discrete ball $B_{\Gamma}(0,R)\coloneqq\{\gamma\in \Gamma:d_G(0,\gamma)<R\}$. Then there exists an embedding of the discrete ball $B_{\Gamma}(0,R)$ with the induced metric $d_G$ into $\bbr^{D_G}$ of distortion $O_G(\log ^{1/2}R)$.
\end{corollary}
This follows from Theorem \ref{mainthm} because on $B_\Gamma(0,R)$ with $R\ge 2$, the metric $d_G$ is comparable to $d_G^{1-1/\log R}$.
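Indeed, since any two distinct points of $B_\Gamma(0,R)$ satisfy $1\le d_G(\gamma_1,\gamma_2)<2R$, we have for $R\ge 2$

```latex
\[
1\le d_G(\gamma_1,\gamma_2)^{1/\log R}
= e^{\log d_G(\gamma_1,\gamma_2)/\log R}
\le e^{\log(2R)/\log R}\le e^{2},
\]
```

so $d_G\asymp d_G^{1-1/\log R}$ on $B_\Gamma(0,R)$, and applying Theorem \ref{mainthm} with $\varepsilon=1/\log R$ (for $R$ large enough that $1/\log R<1/2$; small $R$ is handled trivially) yields distortion $O_G(\varepsilon^{-1/2})=O_G(\log^{1/2}R)$.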
Some history behind Theorem \ref{mainthm}: Carnot groups are special cases of doubling metric spaces, where a metric space $(X,d_X)$ is said to be $K$-doubling for some natural number $K$ if for any $x\in X$ and $R>0$ there exist points $y_1,\cdots,y_K\in X$ such that
\[
B_R(x)\subset \bigcup_{j=1}^K B_{R/2}(y_j),
\]
where $B_R(x)\coloneqq\{z\in X:d_X(x,z)<R\}$ is the ball of radius $R$ centered at $x$; we say that $(X,d_X)$ is doubling if $(X,d_X)$ is $K$-doubling for some natural number $K$. In the seminal paper \cite{assouad1983plongements}, Assouad showed that for $0<\varepsilon<\frac 12$, the $(1-\varepsilon)$-snowflake of a $K$-doubling metric space admits an embedding into $\ell_2^{O_K(\varepsilon^{-O(1)})}$ with distortion $O_K(\varepsilon^{-1/2})$. Here, the distortion $O(\varepsilon^{-1/2})$ is sharp for the Heisenberg group $\bbh^3$; see Naor and Neiman \cite[Section 4]{naor2010assouad} for a proof of this fact. There is a large body of literature on how to sharpen Assouad's theorem in different ways, with or without various assumptions; see, for example, Abraham, Bartal, and Neiman \cite{abraham2008embedding}, Bartal, Recht, and Schulman \cite{bartal2011dimensionality}, Gottlieb and Krauthgamer \cite{gottlieb2015nonlinear}, and Gupta, Krauthgamer, and Lee \cite{gupta2003bounded}, but we do not mean this list to be exhaustive. The direction that is quite relevant to this paper is that of Naor and Neiman \cite{naor2010assouad}, who showed that one can construct snowflake embeddings of any doubling metric space into bounded dimensional Euclidean spaces at the cost of slightly worsening the distortion: more precisely, one can embed the $(1-\varepsilon)$-snowflake of a $K$-doubling space into $\ell_2^{O(\log K)}$ with distortion $O(\frac{\log^2 K}{\varepsilon^2})$, or more generally, for any $0<\delta\le 1$, into $\ell_2^{O(\frac{\log K}{\delta})}$ with distortion $O((\frac{\log K}{\varepsilon})^{1+\delta})$. One may then further inquire whether we can take the target dimension to be uniformly bounded in $0<\varepsilon<\frac 12$ while simultaneously attaining the best possible distortion $O(\varepsilon^{-1/2})$. We state this separately as a question.
\begin{question}[Assouad's theorem with optimal distortion and bounded dimension]\label{weakconj}
For any natural number $K\ge 2$, does there exist a natural number $D(K)$ such that if $(X,d_X)$ is a $K$-doubling metric space and $0<\varepsilon<\frac 12$, then there exists an embedding of the snowflake $(X,d_X^{1-\varepsilon})$ into $\bbr^{D(K)}$ with distortion $O_K(\varepsilon^{-1/2})$?
\end{question}
For completeness, we state a stronger version of the above question, which is motivated by the fact that we must have the lower bound $D(K)\gtrsim \log K$.
\begin{question}[Assouad's theorem with optimal distortion and optimal dimension]\label{strongconj}
For any natural number $K\ge 2$, does there exist a natural number $D(K)=O(\log K)$ such that if $(X,d_X)$ is a $K$-doubling metric space and $0<\varepsilon<\frac 12$, then there exists an embedding of the snowflake $(X,d_X^{1-\varepsilon})$ into $\bbr^{D(K)}$ with distortion $O_K(\varepsilon^{-1/2})$?
\end{question}
These questions are relevant to finding counterexamples to a question raised by Lang and Plaut \cite{lang2001bilipschitz}, which, under our notation, can be stated as follows.
\begin{question}[Lang--Plaut problem, \cite{lang2001bilipschitz}]\label{Lang-Plaut}
For any natural number $K\ge 2$, does there exist a natural number $D(K)$ such that if $X$ is a subspace of $\ell_2$ that is $K$-doubling under the metric it inherits from $\ell_2$, then there exists an embedding of $(X,\|\cdot\|_2)$ into $\bbr^{D(K)}$ with $O_K(1)$ distortion?
\end{question}
As observed in \cite{naor2010assouad}, if an embedding as in Question \ref{weakconj} fails to exist for the Heisenberg group $\bbh^3$, then the Lang--Plaut problem would be answered in the negative, since it is known that $\bbh^3$ admits $(1-\varepsilon)$-snowflake embeddings into $\ell_2$ with distortion $O(\varepsilon^{-1/2})$ with the additional property that the doubling constant of the image is uniformly bounded. Nevertheless, it has been shown in \cite{tao2018embedding} that an embedding as in Question \ref{weakconj} for $\bbh^3$ exists, i.e., Theorem \ref{mainthm} for $G=\bbh^3$ is true. Thus, the Heisenberg group (or more precisely, the doubling $\ell_2$ images of the snowflakes thereof) fails to serve as a counterexample to the Lang--Plaut problem.
The following question was posed in \cite{naor2010assouad} to highlight this connection.
\begin{question}\label{snowflake-with-doubling-image}
For any natural number $K\ge 2$, does there exist a natural number $K'(K)$ such that if $(X,d_X)$ is a $K$-doubling metric space and $0<\varepsilon<\frac 12$, then there exists an embedding of the snowflake $(X,d_X^{1-\varepsilon})$ into $\ell_2$ with distortion $O_K(\varepsilon^{-1/2})$ with the additional property that the image of $X$ is $K'(K)$-doubling?
\end{question}
Of course, the connection is that if Question \ref{snowflake-with-doubling-image} has a positive answer while Question \ref{weakconj} has a negative answer, then the Lang--Plaut problem (Question \ref{Lang-Plaut}) must have a negative answer.
The main purpose of this paper is to expand upon our partial knowledge of Question \ref{weakconj}, by answering it in the positive in the setting of Carnot groups. We begin by carrying the methods of \cite{tao2018embedding} into the setting of Carnot groups, and whenever a tool of \cite{tao2018embedding} becomes inadequate in the setting of Carnot groups, we replace it with a tool more suitable in the general language of doubling metric spaces. More specifically, the key tools of \cite{tao2018embedding} are a new variant of the Nash--Moser iteration scheme and a certain extension theorem for orthonormal vector fields using a quantitative homotopy argument. When generalizing to the case of Carnot groups, one potential source of trouble is that arbitrary Carnot groups might have an arbitrarily large step size $s$, which could complicate their geometric properties and make the tools of \cite{tao2018embedding} fail. It will be shown in Section \ref{sec:NM-perturb} that the Nash--Moser iteration scheme directly generalizes to the setting of Carnot groups, and we prove an orthogonality statement slightly stronger than that of \cite{tao2018embedding}. However, in Section \ref{sec:ON-ext}, it will be clear that there are some obstructions to the quantitative homotopy argument. Nevertheless, we will prove the orthonormal vector field extension theorem even for general doubling metric spaces, using the concentration of measure phenomenon for the sphere and the Lov\'{a}sz Local Lemma.
Ultimately, we would like to answer Question \ref{weakconj} for general doubling metric spaces; this paper is an intermediate step in such an investigation, showing that it is at least true for Carnot groups. We plan to address the case of general doubling metric spaces in future work, by possibly adapting some of the proof methods of this paper. In particular, the orthonormal vector field extension theorem for doubling metric spaces (Theorem \ref{Lip-lifting}) seems to be a good starting point for future work, and we would have to find either a metric analog or a replacement for the Nash--Moser iteration scheme.
We now briefly overview some of the results and strategies of Assouad \cite{assouad1983plongements}, Naor and Neiman \cite{naor2010assouad}, and Tao \cite{tao2018embedding} for constructing snowflake embeddings, and describe how these ideas connect to the proof strategy of this paper.
The starting point of constructing snowflake embeddings is the classical fact that if $X$ is a metric space, $A>1$, $0<\varepsilon<\frac 12$, and $\{\phi_m:X\to \ell_2\}_{m\in \bbz}$ is a collection of maps such that $|\phi_m|\le A^m$ and $\phi_m$ is 1-Lipschitz, then the Weierstrass sum
\begin{equation}\label{Weierstrass}
\Phi=\sum_{m\in\bbz}A^{-m\varepsilon}\phi_m
\end{equation}
(which is easily seen to be absolutely convergent, say by requiring $\phi_m(x_0)=0$ for all $m\in \mathbb{Z}$ for some fixed $x_0\in X$) is $(1-\varepsilon)$-H\"older, with the H\"older norm bounded by $O(\varepsilon^{-1})$. If one assumes in addition that the maps $\phi_m$ take values in mutually orthogonal subspaces of $\ell_2$, then we can bound the $(1-\varepsilon)$-H\"older norm by $O(\varepsilon^{-1/2})$. In the case where $X$ is a doubling metric space, Assouad \cite{assouad1983plongements} constructed functions $\phi_m$ with the above properties and with the additional property that if $d(p,q)\asymp A^m$ then $|\phi_m(p)-\phi_m(q)|\gtrsim_{K} A^m$. Then $\Phi$ satisfies the H\"older lower bound $|\Phi(p)-\Phi(q)|\gtrsim_{K} d(p,q)^{1-\varepsilon}$ for any $p,q\in X$, and thus is an embedding of the $(1-\varepsilon)$-snowflake of $X$ into $\ell_2$ with distortion $O_K(\varepsilon^{-1/2})$ (we do not stress dependence on $A$ for now, as Assouad chooses $A$ to be a constant).
When improving upon Assouad's theorem to reduce the target dimension (especially when trying to keep the target dimension independent of $\varepsilon$), one usually keeps the idea of using the Weierstrass sum \eqref{Weierstrass} to guarantee the upper bound but needs to be more clever to enforce the lower bound. Note that for $A^{m-1}\le d(p,q)\le A^m$, the sum $\sum_{n<m}A^{-n\varepsilon}(\phi_n(p)-\phi_n(q))$ is negligible compared to $d(p,q)^{1-\varepsilon}$,\footnote{More precisely, we would have to take $A^{m-1+\delta}\le d(p,q)\le A^{m+\delta}$ for a fixed small $\delta>0$, while taking the length scale $A$ sufficiently large, but for clarity of the introduction we will not address this issue for now. See the proof of Proposition \ref{AlmostLipschitz} for a precise argument. We also remark that the length scale $A$ will be chosen last in our `hierarchy of constants' (see subsection \ref{sec:hierarchy}), so $A$ will dominate every other parameter used in this paper.} so in order to enforce the lower bound it is enough to have $|\sum_{n\ge m}A^{-n\varepsilon}(\phi_n(p)-\phi_n(q))|\gtrsim d(p,q)^{1-\varepsilon}$. Thus, it is natural to devise an iterative construction: having constructed maps $\{\phi_n:X\to\bbr^D\}_{n>m}$ such that the partial sum $\sum_{n>m}A^{-n\varepsilon}\phi_n$ is a map that is able to distinguish points which are at least distance $A^{m+1}$ apart, by being a map that `oscillates' at scale $A^{m+1}$ and above, we need to devise a component $A^{-m\varepsilon}\phi_m:X\to\bbr^D$ which `oscillates' at scale $A^{m}$ and which, when added to $\psi:=\sum_{n>m}A^{-n\varepsilon}\phi_n$, makes the sum $A^{-m\varepsilon}\phi_m+\psi$ able to distinguish points which are at least distance $A^{m}$ apart. 
Thus, the challenge is that given the map $\psi:X\to\bbr^D$, we need to make use of a limited amount of coordinates in constructing a map $\phi_m$, so that although $\psi$ and $\phi_m$ `share coordinates,' the sum $A^{-m\varepsilon}\phi_m+\psi$ satisfies the required H\"older lower bound.
In the work of Naor and Neiman \cite{naor2010assouad}, the maps $\phi_m$ are defined via a probabilistic construction: after they construct random partitions arising from nets (defined in Section \ref{sec:MG}) with good probabilistic padding properties, they define the maps $\phi_m$ using the distance to the boundaries of these partitions, and by a nested use of the Lov\'{a}sz Local Lemma conditioned on the partial sum $\sum_{n>m}A^{-n\varepsilon}\phi_n$, they show that their maps $\phi_m$ have the desired property. This construction gives distortion $O(\varepsilon^{-1-\delta})$, where the $\varepsilon^{-1}$ factor comes from the H\"older constant of \eqref{Weierstrass}, and the additional factor $\varepsilon^{-\delta}$ comes from some technicalities of the construction, namely that one needs to introduce a slightly finer length scale than $A$ when constructing the nets to ensure that the Lov\'{a}sz Local Lemma is applicable and thus the H\"older lower bound is achieved.
The result of Naor and Neiman \cite{naor2010assouad} was surprising at the time since it was the first to achieve a target dimension that is uniformly bounded in the amount of snowflaking. It is also surprising that they have achieved the theoretically best possible target dimension $\log K$, up to constant factors. However, \cite{naor2010assouad} fell short of giving a final answer to the problem of snowflake embeddings, since the distortion they have achieved was $O(\varepsilon^{-1-\delta})$, not $O(\varepsilon^{-1/2})$. They have asked whether it is even possible, citing that if Question \ref{weakconj} is answered in the negative for $\bbh^3$, then Question \ref{Lang-Plaut} would be answered in the negative. Unfortunately, Tao \cite{tao2018embedding} subsequently answered Question \ref{weakconj} in the positive for $\bbh^3$, and thus overturned the above method of answering Question \ref{Lang-Plaut}.
In order to see how one can achieve the optimal distortion $O(\varepsilon^{-1/2})$, let us revisit \eqref{Weierstrass}. When working with a bounded number of dimensions, one needs to work with a notion of orthogonality weaker than requiring the $\phi_m$'s to have pairwise orthogonal ranges. In fact, one can see that we do not need to require every pair of the $\phi_m$'s to take values in mutually orthogonal spaces; instead, we need only that each $\phi_m$ be orthogonal in some sense to the partial sum $\sum_{n>m}A^{-n\varepsilon}\phi_n$ (recall that we don't need to consider the partial sum $\sum_{n<m}A^{-n\varepsilon}\phi_n$ when considering points $p,q\in X$ with $d(p,q)\asymp A^m$, as the sum $\sum_{n<m}A^{-n\varepsilon}(\phi_n(p)-\phi_n(q))$ is negligible compared to $A^{m(1-\varepsilon)}\asymp d(p,q)^{1-\varepsilon}$ in this case). Following Tao \cite{tao2018embedding}, we will choose the notion of orthogonality to be that the horizontal derivatives are perpendicular: if $\nabla$ denotes the horizontal gradient in the Carnot group $G$ (see subsection \ref{sec:carnotgeom} for the definition of horizontal gradient), then we require the orthogonality $\nabla \phi_m\cdot \nabla \sum_{n>m}A^{-n\varepsilon}\phi_n=0$ (actually we will derive a slightly stronger orthogonality condition; see Section \ref{sec:NM-perturb}). Overall, the $\phi_m$'s are constructed in an iterative manner, and this orthogonality is added to one of the conditions that such an iterative construction must achieve.
Note that this orthogonality easily guarantees the H\"older upper bound of $\Phi$, because an iterated use of the Pythagorean theorem gives
\[
\left|\nabla \sum_{n\ge m}A^{-n\varepsilon}\phi_n\right|^2 = \sum_{n\ge m}A^{-2n\varepsilon}|\nabla \phi_n|^2 \lesssim_A \varepsilon^{-1}A^{-2m\varepsilon},
\]
(we will have $|\nabla \phi_n|=O(1)$); to get the H\"older lower bound, one needs to guarantee that the partial sum $\sum_{n\ge m}A^{-n\varepsilon}\phi_n$ `oscillates' at scale $A^m$. In general, proving the H\"older lower bound is more complicated than proving the H\"older upper bound, as the geometry of the particular space $X$ under consideration plays a more significant role.
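As a quick numerical sanity check of the geometric series bound above (purely illustrative; the parameter values below are arbitrary and not tied to the construction), one can verify that $\sum_{n\ge m}A^{-2n\varepsilon}=A^{-2m\varepsilon}/(1-A^{-2\varepsilon})\lesssim_A \varepsilon^{-1}A^{-2m\varepsilon}$:

```python
import math

# Numerical sanity check (not part of the proof): the geometric series
# sum_{n>=m} A^(-2 n eps) equals A^(-2 m eps) / (1 - A^(-2 eps)), and
# 1/(1 - A^(-2 eps)) <= 1/(2 eps log A) + 1 for all eps > 0, since
# 1/(1 - e^{-t}) <= 1/t + 1.  All parameter values here are illustrative.
def tail_sum(A, eps, m, terms=10000):
    return sum(A ** (-2 * n * eps) for n in range(m, m + terms))

A, m = 2.0, 3
for eps in [0.5, 0.1, 0.01]:
    closed = A ** (-2 * m * eps) / (1 - A ** (-2 * eps))
    assert abs(tail_sum(A, eps, m) - closed) < 1e-6 * closed
    # the tail is O_A(eps^{-1}) * A^(-2 m eps):
    assert closed <= (1.0 / (2 * eps * math.log(A)) + 1) * A ** (-2 * m * eps)
```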
For Carnot groups $G$, we can use Taylor expansions to exploit the local geometry of $G$ in the following way. Fixing a basis $X_{r,1},\cdots, X_{r,k_r}$ of each stratum $V_r$, each point $p\in G$ can be expressed in the coordinates $p=\exp\left(\sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i}X_{r,i}\right)$, while its distance to the identity is roughly $d_G(p,0)\asymp_G\sum_{r=1}^s\sum_{j=1}^{k_r}|x_{r,j}|^{1/r}$ (the notation will be explained in full detail in subsection \ref{sec:carnotgeom}). Under this coordinate system, any $C^{s(s+1)}$-function $\Psi:G\to\mathbb{R}^D$ can be approximated by its Taylor expansion as
\[
\Psi(p)=\Psi(0)+\sum_{m=1}^s \frac{1}{m!}\left(\sum_{r=1}^s\sum_{j=1}^{k_r}x_{r,j}X_{r,j}\right)^m\Psi(0)+(\mbox{Taylor approx. error})
\]
(the smallness of the Taylor approximation error can come from a bound on the $C^{s(s+1)}$-norm of $\Psi$ and on a smallness assumption on $d_G(p,0)$).
It will be seen in the proof of Proposition \ref{AlmostLipschitz} that we can rearrange the above into the nicer form
\begin{align*}
\Psi(p)=&\Psi(0)+\sum_{r=1}^s\sum_{j=1}^{k_r}(x_{r,j}+\mbox{coordinate errors})X_{r,j}\Psi(0)\\
&+ (\mbox{cross derivative terms})+(\mbox{Taylor approx. error}),
\end{align*}
where the ``coordinate errors" and the ``cross derivative terms" can be made small by a smallness assumption on $d_G(p,0)$ and on control of the second and higher order derivatives of $\Psi$. Thus, if we also have quantitative linear independence of the derivatives $X_{r,j}\Psi(0)$ (see subsection \ref{sec:linalg} for the notion of quantitative linear independence; we will later refer to this as a `freeness property'), then we can deduce
\begin{align*}
|\Psi(p)-\Psi(0)|\gtrsim &\max_{1\le r\le s, 1\le j\le k_r}|(x_{r,j}+\mbox{coordinate errors})X_{r,j}\Psi(0)|-(\mbox{errors})\\
\gtrsim &d_G(p,0)^s \min_{1\le r\le s, 1\le j\le k_r}|X_{r,j}\Psi(0)| -(\mbox{errors}).
\end{align*}
We remark that this is one point where our proof becomes more complicated due to the more general Carnot group structure, compared to the setting of $G=\mathbb{H}^3$ in \cite{tao2018embedding}. But this, so far, does not necessitate a fundamental change of techniques.
To deduce the H\"older lower bound for our embedding $\Phi$ at scale $A^0=1$, we will apply the above approximation to $\Psi = \sum_{n\ge 0}A^{-n\varepsilon}\phi_n$, and consider points at a slightly smaller scale, more precisely $d_G(p,0)\asymp A^{-1/(s+1)}$. (This uses the additional fact that we effectively need a H\"older lower bound only for part of the scales, as long as we can cover the entire set of scales by a finite set of rescalings. See \eqref{fullmapping}.) Thus, to enforce the H\"older lower bound, we will have to justify the above approximation and give quantitative bounds on the various errors, namely we will need estimates on the $C^{s(s+1)}$-norm of $\Psi$ and quantitative linear independence of the derivatives $X_{r,j}\Psi(0)$.
Thus, all in all, the iterative step can be stated as follows. Upon rescaling so that $m=0$, and denoting $\psi=\sum_{n> 0}A^{-n\varepsilon}\phi_n$, the main problem is this: given a $C^{s(s+1)}$-function $\psi:G\to \bbr^D$ that `oscillates' at scale $A$ (as measured by the rescaled $C^m$ norms introduced in subsection \ref{sec:funcspace}) and whose derivatives are quantitatively linearly independent, construct a $C^{s(s+1)}$-function $\phi:G\to \bbr^D$ that `oscillates' at scale $1$, satisfies the orthogonality property $\nabla \phi\cdot \nabla \psi=0$, and is such that the derivatives of $\psi+\phi$ remain quantitatively linearly independent. The precise statement and proof of the iterative step will be presented in Section \ref{sec:iterationlemma}.
Once we have the iterative step as above, the rest of the inductive construction is fairly easy and is given in Section \ref{sec:applyiteration}. This inductive construction mostly follows the line of \cite{tao2018embedding}, but we have spelled out the details for completeness. To start the iteration, we need an initial function $\phi_m$, which oscillates at a fixed scale $A^m$, is of class $C^{s(s+1)}$, and whose derivatives are quantitatively linearly independent (we will call this a `freeness' property). This is done in Proposition \ref{FirstStep}. It is easy to construct a smooth function with the first two properties, and then we can guarantee the latter freeness property by a Veronese-type embedding often employed in the Nash embedding literature (and also used in \cite{tao2018embedding}). Once we have this single function, we employ the above iterative step a finite number of times to obtain a finite family of mappings $\{\phi_m:G\to\mathbb{R}^D\}_{M_1\le m\le M_2}$ (Proposition \ref{FiniteIteration}), and then use the Arzel\`a--Ascoli theorem to pass to a full family of mappings $\{\phi_m:G\to\mathbb{R}^D\}_{m\in \mathbb{Z}}$ (Theorem \ref{lacunary}). There is a small issue of losing one degree of regularity when using the Arzel\`a--Ascoli theorem, but this can easily be fixed by requiring $C^{s^2+s+1}$-regularity in the first place.
We devote the rest of the introduction to explaining the proof ideas behind the above iterative step.
A na\"ive approach to solving the equation\footnote{In Section \ref{sec:NM-perturb} $\nabla \phi\cdot \nabla \psi=0$ we will solve the stronger equation $X_i\phi\cdot X_j\psi+X_j\phi\cdot X_i\psi=0$ for $i,j=1,\cdots,k$. We are stating the simpler version to keep the Introduction simple.} would be to use the Leibniz rule directly: denoting by $X_1,\cdots, X_k$ a left-invariant vectorfield basis for $V_1$ so that $\nabla = (X_1,\cdots,X_k)$, the Leibniz rule tells us that $X_i\phi \cdot X_i\psi = X_i(\phi\cdot X_i\psi)-\phi\cdot X_iX_i\psi$, so in order to solve
\[
X_i\phi\cdot X_i\psi=0, \quad i=1,\cdots,k,
\]
we could simply solve
\[
\phi\cdot X_i\psi=\phi\cdot X_iX_i\psi=0,\quad i=1,\cdots,k
\]
(Note that the trivial solution $\phi=0$ is not acceptable, as we need a freeness property of $\psi+\phi$ at scale 1.) This latter equation is at least algebraically well-posed in the sense that the vectors $X_i\psi$ and $X_iX_i\psi$, $i=1,\cdots,k$, are (quantitatively) linearly independent; however, it seems inherent that $\phi$ must be in the same regularity class as $X_i\psi$ and $X_iX_i\psi$, i.e., $\phi$ must have two degrees less regularity than $\psi$. This `loss of derivatives' problem would make the iteration fail.
The solution proposed by \cite{tao2018embedding} to get around this loss of derivatives problem was to introduce Littlewood--Paley projections $P_{(\le N_0)}$ on $\bbh^3$ (see subsection \ref{sec:LP} for Littlewood--Paley projections), and first solve the approximate equation
\[
X_i\tilde{\phi}\cdot X_iP_{(\le N_0)}\psi=0,\quad i=1,\cdots,k,
\]
by solving
\[
\tilde{\phi}\cdot X_iP_{(\le N_0)}\psi=\tilde{\phi}\cdot X_iX_iP_{(\le N_0)}\psi=0,\quad i=1,\cdots,k.
\]
Because lower order derivatives of $\psi$ can control higher order derivatives of $P_{(\le N_0)}\psi$ (up to losses growing on the order of the Littlewood--Paley frequency $N_0$; see Theorem \ref{LP} for the precise quantitative statement), and because we are looking at a smaller scale $1$ for $\phi$ compared to the larger scale $A$ for $\psi$, it is plausible that one can achieve as much control on $\tilde{\phi}$ as on $\psi$, as long as we take $A$ very large compared to $N_0$. More precisely, because $\psi$ oscillates at scale $A$, its homogeneous $\dot{C}^m$-norm will behave like $A^{-m}$, so $\nabla^2 P_{(\le N_0)}\psi$ will have $\dot{C}^m$-norm roughly $A^{-m-2}$ for $m\le s^2+s-2$ and $A^{-s^2-s}N_0^{m- s^2-s+2}$ for $m> s^2+s-2$, which are all $O(1)$ if $A$ is chosen sufficiently large compared to $N_0$. This will ensure that any reasonable construction based on $X_iP_{(\le N_0)}\psi$ and $X_iX_iP_{(\le N_0)}\psi$ will produce a function that `oscillates' at scale $1$ and is of class $C^{s^2+s+1}$, and thus has the same level of regularity as the function $\tilde{\phi}$ that we want to produce.
We will show in subsection \ref{sec:LP} that Carnot groups admit Littlewood--Paley projections that share the same properties as those used in \cite{tao2018embedding}.
Thus, the main idea of \cite{tao2018embedding} to solve the equation $\nabla \phi\cdot \nabla \psi =0$ is to decompose it into two steps. In the first step, we need to solve the low-frequency version of the equation $\nabla \tilde{\phi}\cdot \nabla P_{(\le N_0)}\psi =0$ using the Leibniz rule, i.e., we solve the equation
\[
\tilde{\phi}\cdot X_iP_{(\le N_0)}\psi=0, ~\tilde{\phi}\cdot X_iX_iP_{(\le N_0)}\psi=0,\quad i=1,\cdots,k,
\]
while guaranteeing that $\tilde{\phi}$ has the same regularity as $XP_{(\le N_0)}\psi$ and $XXP_{(\le N_0)}\psi$.
In the second step, we would assume the low-frequency solution $\tilde{\phi}$ as given, and then we would need to correct $\tilde{\phi}$ into a `true' solution $\phi$ while guaranteeing that $\phi$ has the same regularity as $\tilde{\phi}$.
The first step, namely solving the approximate equation $\tilde{\phi}\cdot X_iP_{(\le N_0)}\psi=\tilde{\phi}\cdot X_iX_iP_{(\le N_0)}\psi=0$, with $\tilde{\phi}$ having the same regularity as $X_iP_{(\le N_0)}\psi$ and $X_iX_iP_{(\le N_0)}\psi$, is the part where our proof in the setting of general Carnot groups departs the most from the setting of $\mathbb{H}^3$ in \cite{tao2018embedding}, and is the part where the main novelty of this paper lies. For $\bbh^3$, \cite{tao2018embedding} resolves this issue by first observing that $\bbh^3$ has a cocompact lattice and a CW structure that is compatible with it, and then provides the extension using quantitative nullhomotopy on each of the cells of that CW structure (note that, although the nullhomotopy is used on an infinite number of cells, the final construction is globally controlled, because up to left-translation there are only a finite number of distinct cells). This argument may not work for a general Carnot group $G$, as $G$ might not admit a cocompact lattice (say, if the structure constants for every basis of $G$ were irrational). Instead, we give a proof, using the concentration of measure phenomenon on the sphere and the Lov\'{a}sz Local Lemma, that is independent of the topology of the space under consideration, using only the fact that $G$ is a doubling metric space (see Section \ref{sec:ON-ext} for details). We essentially do not need the differential structure of $G$, because the uniform continuity of $\tilde{\phi}$ is the main hurdle here; we can automatically gain higher regularity by convolving with a mollifier and using the Gram-Schmidt process.
To put our solution to the approximate equation into context, we will pose, in Section \ref{sec:ON-ext}, a more general question (Question \ref{general-lift}), which asks whether one can extend a given orthonormal system of vectorfields within the same regularity class. It will become clear that our solution provides a partial positive answer to Question \ref{general-lift}, at least when the base space is a doubling metric space, the regularity class is contained in the Lipschitz class and is closed under simple algebraic operations, and smoothing a Lipschitz vectorfield by convolving it with some scalar mollifier produces a vectorfield with the desired regularity (see Theorem \ref{partialpositiveanswer} for details).
For the second step, once we have the approximate solution $\tilde{\phi}$, we need to correct it into a true solution $\phi$ while preserving the regularity. Tao \cite{tao2018embedding} solves this by developing a perturbative theory for the bilinear form $\nabla \phi\cdot\nabla \psi$. More precisely, \cite{tao2018embedding} develops a version of the Nash--Moser iteration scheme to show how one can correct $\tilde{\phi}$ by small amounts into a true solution $\phi$ to $\nabla \phi\cdot\nabla \psi=0$, without losing any regularity (technically, the Nash--Moser iteration scheme necessitates that we work in the H\"older class $C^{m,\alpha}$, $m\ge 3$, $\alpha\in (\frac 12,1)$ instead of the usual $C^m$ class, and we will have to account for this in the rest of this paper, but this does not affect any of the arguments made so far). This method of solving the orthogonality equation $\nabla \phi\cdot \nabla\psi=0$ only requires us to look at first and second derivatives. We will show in Section \ref{sec:NM-perturb} that generalizing this Nash--Moser iteration argument of \cite{tao2018embedding} from the case of $\bbh^3$ to the broader setting of Carnot groups does not incur serious difficulties even if the step size of $G$ is greater than 2. We will also show that by modifying the techniques of \cite{tao2018embedding}, one can solve the stronger orthogonality equation
\[
X_i\phi\cdot X_j\psi+X_j\phi\cdot X_i \psi =0,\quad i,j=1,\cdots,k,
\]
but we will not be able to obtain the stronger orthogonality equation
\[
X_i\phi\cdot X_j\psi=0,\quad i,j=1,\cdots,k.
\]
See Proposition \ref{Perturbation} and Corollary \ref{perturbation-cor} for the precise statement. One can find a detailed description and motivation of this Nash--Moser iteration scheme in the introduction to \cite{tao2018embedding}.
In light of the proof method for Theorem \ref{mainthm} described above, it is natural to ask whether these methods can be generalized further. Many of these methods depend on the fact that Carnot groups are the tangent spaces of themselves (recall that Carnot groups arise as tangent spaces of sub-Riemannian manifolds). It would be necessary to revamp many of the ideas here, especially the Nash--Moser iteration scheme, to be applicable to the setting of doubling metric spaces. Even the orthonormal vectorfield extension theorem (Theorem \ref{Lip-lifting}) would require some reformulation since it is unlikely that we will have vectorfields in the setting of doubling metric spaces; at the least, the vectorfields should be replaced by elements of a Grassmannian. One realistic hope is that one could transform Theorem \ref{Lip-lifting} into a higher-dimensional block basis variant of the construction of \cite{naor2010assouad} and thus construct an embedding with distortion $O(\varepsilon^{-1/2-\delta})$ using the random net construction of \cite{naor2010assouad}. One could also improve upon the results of \cite{naor2010assouad} by using hierarchical nets since these may be easier to control and describe compared to ordinary nets (see Har-Peled and Mendel \cite{har2006fast} for the construction and applications of hierarchical nets).
The rest of this paper is organized as follows. We first introduce some elementary background in Section \ref{sec:prelim}, and develop the Nash--Moser perturbation theorem in Section \ref{sec:NM-perturb} and the orthonormal extension theorem in Section \ref{sec:ON-ext}. It is in Section \ref{sec:ON-ext} that the proof idea differs the most from \cite{tao2018embedding}: whereas \cite{tao2018embedding} proves the orthonormal extension theorem in the spirit of quantitative topology, we will prove it using the concentration of measure phenomenon on the sphere and the Lov\'{a}sz local lemma. We then develop the main iteration lemma in Section \ref{sec:iterationlemma} and show how it gives us our desired embedding in Section \ref{sec:applyiteration}.
\section{Preliminaries}\label{sec:prelim}
\setcounter{equation}{0}
\subsection{Hierarchy of constants}\label{sec:hierarchy}
We will select absolute constants in the following order:
\begin{itemize}
\item A H\"older exponent $\alpha\in (\frac 12,1)$ and a level of regularity $m^*$ depending on $G$. For simplicity, one can fix $\alpha=\frac 23$ and $m^*=s^2+s+1$, where $s$ is the step size of $G$.
\item A sufficiently large natural number $C_0>1$ depending on $G$, $\alpha$ and $m^*$. Specifically, this choice will occur in \eqref{C_0_hierarchy-1}, the third inequality of \eqref{C_0_hierarchy-2}, the second inequality of \eqref{C_0_hierarchy-3}, \eqref{C_0_hierarchy-4}, right after \eqref{C_0_A_hierarchy-1}, right before \eqref{C_0_A_hierarchy-2}, \eqref{C_0-hierarchy-5}, and right after \eqref{C_0_A_hierarchy-3}.
\item A sufficiently large dyadic number $N_0$ depending on $G$ and $C_0$. This choice will occur in \eqref{N_0_hierarchy-6}, in the derivation of \eqref{G-9-again} of Corollary \ref{perturbation-cor}, \eqref{N_0_hierarchy-1}, \eqref{N_0_hierarchy-2}, \eqref{N_0_hierarchy-3}, \eqref{N_0_hierarchy-4}, \eqref{N_0_hierarchy-5}, and \eqref{N_0_hierarchy-7}.
\item A sufficiently large dyadic number $A$ depending on $G$, $C_0$ and $N_0$. This choice will occur in \eqref{A_hierarchy-2}, \eqref{A_hierarchy_1}, \eqref{A_hierarchy-3}, \eqref{A_hierarchy-5}, right after \eqref{C_0_A_hierarchy-1}, right before \eqref{C_0_A_hierarchy-2}, right after \eqref{C_0_A_hierarchy-3}, right after \eqref{A_hierarchy-6}, \eqref{A_hierarchy-7}, \eqref{A_hierarchy-8}, and \eqref{A_hierarchy-9}.
\end{itemize}
\subsection{Basic linear algebra}\label{sec:linalg}
Denote by $|\cdot |$ the Euclidean metric and by $\langle \cdot ,\cdot \rangle $ the Euclidean inner product on Euclidean spaces $\mathbb{R}^D$.
If $T:\bbr^{D_1}\to \bbr^{D_2}$ is a linear map, we also denote by $|T|$ the Frobenius norm of $T$. Also, for $1\le n\le D$, the exterior power $\bigwedge^n \bbr^D$ can be identified with $\bbr^{ \binom{D}{n}}$, and so we can also define a Euclidean norm $|\cdot |$ and a Euclidean inner product $\langle \cdot ,\cdot \rangle $ on $\bigwedge^n \bbr^D$. With this norm on $\bigwedge^n \bbr^D$, the \textit{Cauchy--Binet formula} tells us that for $v_1,\cdots,v_n\in \bbr^D$,
\[
|v_1\wedge \cdots\wedge v_n|^2=\det\left(v_i\cdot v_j\right)_{1\le i,j\le n}=\det (TT^*),
\]
where $T:\bbr^D\to\bbr^n$ is the linear map
\[
T(u)\coloneqq (u\cdot v_1,\cdots,u\cdot v_n).
\]
More generally, the \textit{polarized Cauchy--Binet formula} tells us that for $u_1,\cdots, u_n,v_1,\cdots,v_n\in \bbr^D$,
\[
\bigg<u_1\wedge \cdots\wedge u_n,v_1\wedge \cdots\wedge v_n\bigg>=\det\left(u_i\cdot v_j\right)_{1\le i,j\le n}.
\]
It is not difficult to see that we have a Cauchy--Schwarz-like inequality: for every $v_1,\cdots,v_n\in\bbr^D$ and $1\le i<n\le D$, we have
\[
|v_1\wedge \cdots\wedge v_n|\le |v_1\wedge \cdots\wedge v_i||v_{i+1}\wedge \cdots\wedge v_n|.
\]
We will simply refer to this as the Cauchy--Schwarz inequality in the rest of this paper.
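The Cauchy--Binet identity and the wedge-product Cauchy--Schwarz inequality above are easy to confirm numerically. The following Python sketch (purely illustrative, with random vectors) does so for $D=5$, $n=3$:

```python
import itertools, random, math

# Numerical illustration (not part of the text's proofs) of the Cauchy--Binet
# identity |v_1 ^ ... ^ v_n|^2 = det(v_i . v_j) = sum of squared n x n minors,
# and of the wedge-product Cauchy--Schwarz inequality used in the paper.
random.seed(0)
D, n = 5, 3
V = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(n)]

def dot(u, v): return sum(a * b for a, b in zip(u, v))

def det(M):
    # Laplace expansion along the first row (fine for tiny matrices).
    if len(M) == 1: return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

gram = det([[dot(V[i], V[j]) for j in range(n)] for i in range(n)])
minors = sum(det([[V[i][c] for c in cols] for i in range(n)]) ** 2
             for cols in itertools.combinations(range(D), n))
assert abs(gram - minors) < 1e-9

# Cauchy--Schwarz for wedge products: |v1 ^ v2 ^ v3| <= |v1| * |v2 ^ v3|.
wedge_norm = math.sqrt(gram)
norm_v1 = math.sqrt(dot(V[0], V[0]))
gram23 = det([[dot(V[i], V[j]) for j in (1, 2)] for i in (1, 2)])
assert wedge_norm <= norm_v1 * math.sqrt(gram23) + 1e-12
```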
\subsection{Some metric space geometry}\label{sec:MG}
Let $(X,d)$ be a metric space. For any $f:X\to\bbr^D$, we denote the norms
\[
\|f\|_{C^0}\coloneqq \sup_{x\in X}|f(x)|,\quad \|f\|_{\mathrm{Lip}}\coloneqq \sup_{x,y\in X,~x\neq y}\frac{|f(x)-f(y)|}{d(x,y)}.
\]
These norms satisfy certain algebraic properties. If $f,g:X\to\bbr^D$ then
\begin{equation}\label{first-alg}
\|f\cdot g\|_{\mathrm{Lip}}\le \|f\|_{C^0}\|g\|_{\mathrm{Lip}}+\|f\|_{\mathrm{Lip}}\|g\|_{C^0}.
\end{equation}
Also, if $f:X\to\bbr$ and $f(x)\ge c>0$ for all $x\in X$ then
\begin{align}
\|1/f\|_{\mathrm{Lip}}\le c^{-2}\|f\|_{\mathrm{Lip}},\label{reciprocal}\\
\|\sqrt{f}\|_{\mathrm{Lip}}\le \frac{1}{2\sqrt{c}}\|f\|_{\mathrm{Lip}}.\label{squareroot}
\end{align}
One can use these properties to see that for $f:X\to\bbr^D$ with $|f(x)|\ge c>0$ for all $x\in X$ we have
\begin{equation}\label{unit-alg}
\left\|\frac{f}{|f|}\right\|_{\mathrm{Lip}}\le \left\|\frac{1}{|f|}\right\|_{C^0}\left\|f\right\|_{\mathrm{Lip}}+\left\|\frac{1}{|f|}\right\|_{\mathrm{Lip}}\left\|f \right\|_{C^0}\le (c^{-1}+c^{-2}\left\|f \right\|_{C^0})\left\|f\right\|_{\mathrm{Lip}}.
\end{equation}
For $\delta>0$, a subset $\mathcal{N}_\delta \subset X$ is a \textit{$\delta$-net} if for any distinct $x,y\in \mathcal{N}_\delta$ one has $d(x,y)\ge \delta $. By Zorn's lemma, $\delta$-nets which are maximal with respect to inclusion exist, and if $\mathcal{N}_\delta$ is a maximal $\delta $-net then we have the covering
\[
X=\bigcup_{x\in \mathcal{N}_\delta}B_\delta(x).
\]
An immediate consequence of the doubling property is that if $X$ is a $K$-doubling metric space and $m\ge 0$, then for any $\delta$-net $\mathcal{N}_\delta$ we have
\begin{equation}\label{doubling-net}
|\mathcal{N}_\delta\cap B_{2^m \delta}(x)|\le K^{m+1}\quad \forall x\in X.
\end{equation}
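The greedy construction of a maximal $\delta$-net, together with the separation and covering properties above, can be illustrated by the following Python sketch (a toy example on a finite random point set in the plane; not part of the formal development):

```python
import random, math

# A toy illustration (not from the paper) of a maximal delta-net: greedily
# keep points that are >= delta away from all previously kept points.
# Maximality gives the covering X = union of delta-balls around net points.
random.seed(1)
delta = 0.2
X = [(random.random(), random.random()) for _ in range(500)]

def d(p, q): return math.dist(p, q)

net = []
for p in X:
    if all(d(p, q) >= delta for q in net):
        net.append(p)

# separation of the net
assert all(d(p, q) >= delta for i, p in enumerate(net) for q in net[:i])
# covering: every point of X is within delta of some net point
assert all(min(d(p, q) for q in net) < delta for p in X)
```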
\subsection{Function spaces on Carnot groups}\label{sec:funcspace}
We will assume that the norm $\|\cdot\|$ on $V_1$ is an inner product on $V_1$ (in other words, we may assume $G$ is a sub-Riemannian Carnot group, as opposed to being a sub-Finsler Carnot group). This is possible by John's ellipsoid theorem, which allows us to replace $\|\cdot\|$ by an inner product norm while introducing distortion at most $\sqrt{k}$, which is acceptable since this is independent of the amount $\varepsilon$ of snowflaking.
We fix a left-invariant orthonormal basis $X_1,\cdots,X_k$ of $V_1$ with respect to $\|\cdot\|$. If $\phi:G\to \bbr^D$ is a differentiable function, we let $\nabla \phi:G\to \bbr^{kD}$ denote the horizontal gradient
\[
\nabla \phi\coloneqq (X_1\phi,\cdots, X_k\phi).
\]
By iteration, we have $\nabla^m\phi:G\to \bbr^{k^mD}$ for any $m\ge 1$, if $\phi$ is $m$ times differentiable. We recall the $C^0$ norm
\[
\|\phi\|_{C^0}= \sup_{p\in G}|\phi(p)|,
\]
and define the higher $C^m$ norms
\[
\|\phi\|_{C^m}\coloneqq \sum_{0\le j\le m} \|\nabla^j \phi\|_{C^0}.
\]
For a fixed spatial scale $R>0$, we define the $C^m_R$ norm to be the rescaled norm
\[
\|\phi\|_{C^m_R}\coloneqq \sum_{0\le j\le m} R^j\|\nabla^j \phi\|_{C^0}.
\]
Given a H\"{o}lder exponent $0<\alpha <1$ we may also define the homogeneous H\"{o}lder norm
\[
\|\phi\|_{\dot{C}^{0,\alpha}}\coloneqq \sup_{p,q\in G,p\neq q}\frac{|\phi(p)-\phi(q)|}{d(p,q)^\alpha}
\]
and the higher H\"{o}lder norms
\[
\|\phi\|_{C^{m,\alpha}}\coloneqq \|\phi\|_{C^m}+\|\nabla^m \phi\|_{\dot{C}^{0,\alpha}}
\]
and more generally, the rescaled H\"{o}lder norm
\[
\|\phi\|_{C^{m,\alpha}_R}\coloneqq \|\phi\|_{C^m_R}+R^{m+\alpha}\|\nabla^m \phi\|_{\dot{C}^{0,\alpha}}.
\]
One may easily verify
\[
\|\phi\|_{C_R^{m,\alpha}}\lesssim \|\phi\|_{C_R^{m+1}}.
\]
By an iterated application of the product rule, one can verify the algebra properties
\[
\|\phi\psi\|_{C^m_R}\lesssim_m \|\phi\|_{C^m_R}\|\psi\|_{C^m_R},
\]
\[
\|\phi\psi\|_{C^{m,\alpha}_R}\lesssim_m \|\phi\|_{C^{m,\alpha}_R}\|\psi\|_{C^{m,\alpha}_R}.
\]
These inequalities continue to hold when $\phi$ and $\psi$ are vector-valued and we take the wedge product or the dot product, where the constants do not depend on the dimension of the codomain of $\phi$ and $\psi$:
\[
\|\phi\cdot \psi\|_{C^m_R}\lesssim_m \|\phi\|_{C^m_R}\|\psi\|_{C^m_R},\quad \|\phi\cdot \psi\|_{C^{m,\alpha}_R}\lesssim_m \|\phi\|_{C^{m,\alpha}_R}\|\psi\|_{C^{m,\alpha}_R},
\]
and
\[
\|\phi\wedge \psi\|_{C^m_R}\lesssim_m \|\phi\|_{C^m_R}\|\psi\|_{C^m_R},\quad \|\phi\wedge \psi\|_{C^{m,\alpha}_R}\lesssim_m \|\phi\|_{C^{m,\alpha}_R}\|\psi\|_{C^{m,\alpha}_R}.
\]
More generally, one can observe that these algebra properties continue to hold when we replace the above norms with norms of the form
\[
\|\phi\|_{C^m_{\{R_j\}_{j=0}^m}}\coloneqq \sum_{0\le j\le m} R_j\|\nabla^j \phi\|_{C^0},
\]
where $\{R_j\}_{j=0}^m$ is a `sequence of spatial scales', i.e., a sequence of positive real numbers, that is log-concave in the sense that $R_{i}R_j\ge R_{i+j}$. Examples of such norms include
\begin{equation}\label{logconcave}
\|\phi\|_{C^0}+R\|\nabla \phi\|_{C^m},~\mbox{or }\|\phi\|_{C^0}+\|\nabla \phi\|_{C^m_{1/R}},\quad R\ge 1.
\end{equation}
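As a small illustrative check (the ranges of $R$ and of the indices below are arbitrary), the two sequences of spatial scales corresponding to the norms in \eqref{logconcave} indeed satisfy $R_iR_j\ge R_{i+j}$ whenever $R\ge 1$:

```python
# Quick illustrative check that the two scale sequences behind (logconcave)
# satisfy R_i * R_j >= R_{i+j} when R >= 1.
def seq_a(j, R): return 1.0 if j == 0 else R             # ||.||_{C^0} + R ||grad .||_{C^m}
def seq_b(j, R): return 1.0 if j == 0 else R ** (1 - j)  # ||.||_{C^0} + ||grad .||_{C^m_{1/R}}

for seq in (seq_a, seq_b):
    for R in (1.0, 2.0, 10.0):
        m = 6
        assert all(seq(i, R) * seq(j, R) >= seq(i + j, R) - 1e-12
                   for i in range(m) for j in range(m))
```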
\subsection{Some Carnot group geometry}\label{sec:carnotgeom}
Recall the decomposition $\mathfrak{g}=V_1\oplus V_2\oplus\cdots \oplus V_s$. We will denote $\dim G=n$, $\dim V_r=k_r$, $k=k_1$, and the Hausdorff dimension $n_h\coloneqq \sum_{r=1}^s r k_r$. We will assume $s\ge 2$, since if $s=1$, then $G$ is just a finite-dimensional Euclidean space, and near-optimal snowflake embeddings of Euclidean spaces were constructed in \cite{assouad1983plongements}. This will entail $n_h\ge 4$, as we must have $k_1\ge 2$ and $k_2\ge 1$.
For $2\le r\le s$, we fix a basis $X_{r,1},\cdots, X_{r,k_r}$ of $V_r$ and extend them to left-invariant vectorfields over $G$. For $r=1$, we simply denote $X_{1,i}=X_i$.
As $G$ is nilpotent and simply connected, the exponential map $\exp:\mathfrak{g}\to G$ is a diffeomorphism. Recall that we have defined the scaling maps $\delta_\lambda:\mathfrak{g}\to \mathfrak{g}$ for $\lambda>0$ by
\[
\left.\delta_\lambda\right|_{V_r}=\lambda^r \mathrm{id}_{V_r},\quad r=1,\cdots,s.
\]
One may then define the dilation $\delta_\lambda:G\to G$ so that it commutes with $\exp$:
\[
\delta_\lambda \circ \exp=\exp\circ \delta_\lambda.
\]
One can compute that $\delta_\lambda$ is the unique Lie group automorphism $\delta_\lambda:G\to G$ such that $(\delta_\lambda)_*=\delta_\lambda$.
Moreover, $\delta_\lambda$ interacts with the left-invariant vector fields as follows:
\[
X_{r,i}(\phi \circ \delta_\lambda)=\lambda^r(X_{r,i} \phi)\circ \delta_\lambda,\quad r=1,\cdots, s,~i=1,\cdots, k_r.
\]
The special case $r=1$ tells us that $\delta_\lambda$ is a scaling in the Carnot-Carath\'eodory metric:
\[
d_{G}(\delta_\lambda(p),\delta_\lambda(p'))=\lambda d_{G}(p,p'),\quad p,p'\in G.
\]
By iteration, we can also deduce
\[
\nabla^m (\phi\circ \delta_\lambda)=\lambda^m (\nabla^m \phi)\circ \delta_\lambda.
\]
One can parametrize $G$ by $\bbr^n$, by first identifying $G$ with $\mathfrak{g}$ via the exponential map $\exp$, and then identifying $\mathfrak{g}$ with $\bbr^n$ via the basis $\{X_{r,i}\}_{1\le r\le s,1\le i\le k_r}$. We will denote the corresponding canonical basis as $\{f_{r,i}\}_{1\le r\le s,1\le i\le k_r}$.
We may define a weighted degree for polynomials in the coordinates $x_{r,i}$ by assigning degree $r$ to $x_{r,i}$. Pulling back a homogeneous polynomial of weighted degree $m$ by $\delta_\lambda$ simply multiplies it by $\lambda^m$, and consequently the differential operator $X_{r,i}$ acts on polynomials by reducing the weighted degree of each term by $r$. One can also see, using the scaling $\delta_\lambda$, that
\[
d_G\left(\exp(\sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i} X_{r,i}),e_G\right)\asymp_G \sum_{r=1}^s\sum_{i=1}^{k_r}|x_{r,i}|^{1/r}.
\]
One can express the group law in this coordinate system using the Baker--Campbell--Hausdorff formula: writing $g=\exp X$ and $h=\exp Y$ with $X,Y\in\mathfrak{g}$, we have
\[
\log(gh) =
\sum_{m = 1}^s\frac {(-1)^{m-1}}{m}
\sum_{\begin{smallmatrix} r_1 + s_1 > 0 \\ \vdots \\ r_m + s_m > 0 \end{smallmatrix}}
\frac{[ X^{r_1} Y^{s_1} X^{r_2} Y^{s_2} \dotsm X^{r_m} Y^{s_m} ]}{(\sum_{j = 1}^m (r_j + s_j)) \cdot \prod_{i = 1}^m r_i! s_i!},
\]
where the sum is finite since $G$ is of step $s$, and we have used the following notation:
\[
[ X^{r_1} Y^{s_1} \dotsm X^{r_m} Y^{s_m} ] = [ \underbrace{X,[X,\dotsm[X}_{r_1} ,[ \underbrace{Y,[Y,\dotsm[Y}_{s_1} ,\,\dotsm\, [ \underbrace{X,[X,\dotsm[X}_{r_m} ,[ \underbrace{Y,[Y,\dotsm Y}_{s_m} ]]\dotsm]].
\]
Thus, we can see that
\[
\exp\left(\sum_{r=1}^s\sum_{i=1}^{k_r}x^0_{r,i} X_{r,i}\right)\exp\left(\sum_{r=1}^s\sum_{i=1}^{k_r}x^1_{r,i} X_{r,i}\right)=\exp\left(\sum_{r=1}^s\sum_{i=1}^{k_r}x^2_{r,i} X_{r,i}\right),
\]
where
\[
x^2_{r,i}=x^0_{r,i}+x^1_{r,i}+(\mbox{homogeneous polynomial of }\{x^0_{r',i'}\}_{r'<r}\mbox{ and }\{x^1_{r',i'}\}_{r'<r}\mbox{ of weighted degree }r).
\]
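For a concrete instance of these coordinate formulas, consider the first Heisenberg group (step $s=2$, with $[X,Y]=Z$), where the Baker--Campbell--Hausdorff series truncates after the first bracket. The following Python sketch (using the standard upper triangular matrix model, purely for illustration) confirms that the second-stratum coordinate of a product picks up the expected homogeneous degree-$2$ correction $(x^0y^1-y^0x^1)/2$:

```python
# Illustrative check in the first Heisenberg group (step s = 2, exponential
# coordinates): with [X, Y] = Z, the BCH formula truncates to
# log(exp a * exp b) = a + b + (1/2)[a, b], so the second-stratum coordinate
# picks up the weighted-degree-2 polynomial (x0*y1 - y0*x1)/2.
def heis_exp(x, y, z):
    # exp of the strictly upper triangular matrix [[0,x,z],[0,0,y],[0,0,0]]
    return [[1, x, z + x * y / 2], [0, 1, y], [0, 0, 1]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def heis_log(M):
    # inverse of heis_exp
    x, y = M[0][1], M[1][2]
    return (x, y, M[0][2] - x * y / 2)

a, b = (0.3, -1.2, 0.7), (1.1, 0.4, -0.5)
x2, y2, z2 = heis_log(mat_mul(heis_exp(*a), heis_exp(*b)))
assert abs(x2 - (a[0] + b[0])) < 1e-12
assert abs(y2 - (a[1] + b[1])) < 1e-12
assert abs(z2 - (a[2] + b[2] + (a[0] * b[1] - a[1] * b[0]) / 2)) < 1e-12
```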
It easily follows that the Lebesgue measure on $\bbr^n$ is a Haar measure on $G$ because the Jacobian of left-multiplication is a unit upper triangular matrix. Also, the left-invariant vector fields $\{X_{r,i}\}_{1\le r\le s,1\le i\le k_r}$ take the following form in this coordinate system:
\begin{equation}\label{vf-form}
X_{r,i}=\frac{\partial}{\partial x_{r,i}}+\sum_{r'=r+1}^{s}\sum_{j=1}^{k_{r'}}(\mbox{homogeneous polynomial of }\{x_{r'',i'}\}_{r''<r'}\mbox{ of weighted degree }r'-r)\frac{\partial}{\partial x_{r',j}}.
\end{equation}
For $r,r'=1,\cdots,s$, $i=1,\cdots, k_{r}$, $i'=1,\cdots,k_{r'}$, we define the lexicographic ordering
\[
(r,i)\preceq (r',i')\quad\Leftrightarrow \quad r<r'~\mbox{or}~r=r'\mbox{ and }i\le i'.
\]
For $r>0$, we denote the open ball
\[
B_r\coloneqq \{h\in G:d_G(h,e_G)<r\},
\]
and for $g\in G$ and $r>0$ we denote the open ball
\[
B_r(g)\coloneqq \{h\in G:d_G(h,g)<r\}=gB_r,
\]
where the last equality follows from left-invariance of the metric.
A simple volumetric argument gives the following bound for any $\delta$-net $\mathcal{N}_\delta$:
\begin{equation}\label{locbd}
|\mathcal{N}_\delta\cap B_R(p)|\le (\frac{2R}{\delta}+1)^{n_h},\quad p\in G, R>0.
\end{equation}
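The bound \eqref{locbd} can be sanity-checked numerically in the simplest (abelian) case $G=\mathbb{R}^2$, where $n_h=2$. The following Python sketch (with illustrative parameters) greedily packs a $\delta$-separated set into $B_R(0)$ and compares with the bound:

```python
import random, math

# Illustrative check (in the abelian Carnot group R^2, where n_h = n = 2) of
# the volumetric bound |N_delta \cap B_R(p)| <= (2R/delta + 1)^{n_h}: greedily
# pack a delta-separated set inside B_R(0) and compare with the bound.
random.seed(2)
R, delta = 1.0, 0.1
pts = []
for _ in range(20000):
    x, y = random.uniform(-R, R), random.uniform(-R, R)
    if x * x + y * y <= R * R and all(math.hypot(x - a, y - b) >= delta
                                      for a, b in pts):
        pts.append((x, y))

assert len(pts) <= (2 * R / delta + 1) ** 2   # the bound: 21^2 = 441 points
```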
\subsection{Littlewood--Paley theory on Carnot groups}\label{sec:LP}
A basic tool used in \cite{tao2018embedding} was a Littlewood--Paley theory for functions defined on the Heisenberg group. One can easily modify the argument in \cite{tao2018embedding} to show the following. For a positive number $N$ and a $C^0$ function $\phi:G\to \bbr^D$, one can construct the Littlewood--Paley projection $P_{(\le N)}\phi:G\to \bbr^D$, which is a $C^\infty$ function, and the variants
\[
{P_{(<N)}}\phi\coloneqq P_{(\le N/2)}\phi,\quad {P_{(N)}}\phi\coloneqq P_{(\le N)}\phi-{P_{(<N)}}\phi,\quad P_{(>N)}\phi\coloneqq \phi-P_{(\le N)}\phi,\quad P_{(\ge N)}\phi\coloneqq \phi-P_{(< N)}\phi
\]
which satisfy the following regularity properties.
\begin{theorem}[Littlewood--Paley Theory, Theorem 6.1 of \cite{tao2018embedding}]\label{LP}
Let $\phi:G\to\bbr^D$ be bounded and continuous.
\begin{enumerate}
\item (Scaling) For any $\lambda>0$ and $N>0$, we have
\[
P_{(\le N)}(\phi\circ \delta_\lambda)=(P_{(\le N/\lambda)}\phi)\circ \delta_\lambda,
\]
and similarly for ${P_{(<N)}}$, ${P_{(N)}}$, $P_{(\ge N)}$, and $P_{(>N)}$.
\item (Littlewood--Paley decomposition) For any dyadic number $N_0$, we have
\[
\phi=P_{(\le N_0)} \phi+\sum_{N>N_0\text{ dyadic}} {P_{(N)}}\phi.
\]
\item (Regularity) If $N,M>0$, $j,l\ge 0$, and $\phi$ is of class $C^l$, one has the estimates
\begin{align}
\|\nabla^l P_{(\le N)} \phi \|_{C^j_{1/N}}&\lesssim_{G,j,l} \|\nabla^l \phi\|_{C^0},\label{lp-1}\\
\| P_{(N)} \phi \|_{C^j_{1/N}},~\| P_{(>N)} \phi \|_{C^j_{1/N}}&\lesssim_{G,j,l,\alpha} \min\left(N^{-l}\|\nabla^l \phi\|_{C^0},~N^{-l-\alpha}\|\nabla^l \phi\|_{\dot{C}^{0,\alpha}}\right),\label{lp-5}\\
\| P_{(\le N)} \phi \|_{C^l_{1/M}},\| P_{(N)} \phi \|_{C^l_{1/M}},\| P_{(>N)} \phi \|_{C^l_{1/M}}&\lesssim_{G,l} \|\phi\|_{C^l_{1/M}}.\nonumber
\end{align}
\end{enumerate}
\end{theorem}
The construction of the Littlewood--Paley projection in \cite{tao2018embedding} is as follows. As the Laplace-Kohn operator
\[
L\coloneqq -\sum_{i=1}^k X_i^2
\]
is self-adjoint on $L^2(G)$, where $G$ is given its Haar measure (which is the Lebesgue measure on $\bbr^{n}$), for any $m\in L^\infty(\bbr)$ one can define bounded operators $m(L)$ on $L^2(G)$ which commute with each other and with $L$. But by a result of H{\"o}rmander \cite{hormander1967hypoelliptic}, $L$ is a hypoelliptic operator, and thus one may apply a result of Hulanicki \cite{hulanicki1984functional} to develop a more refined bounded functional calculus for $L$: if $m\in C_c^\infty(\bbr)$, then this operator is given by convolution with a Schwartz function $K:G\to \bbr$:
\[
m(L)f=f*K,\quad \forall f\in L^2(G),
\]
where a function on $G$ is said to be a Schwartz function if it is a Schwartz function on $\bbr^n$, and $*$ denotes the convolution operator:
\[
f*K(p)=\int_G f(g)K(g^{-1}p)dg.
\]
For such $m$, the operator $m(L)$ can be extended to functions in $C^0$ using the above convolution formula.
Now, if we choose any smooth function $\varphi:\bbr\to\bbr$ supported on $[-1,1]$ that equals $1$ on $[-1/2,1/2]$, we can now define, for $N>0$, the Littlewood--Paley projection $P_{(\le N)}$ using the above functional calculus by the formula
\begin{align*}
P_{(\le N)} &\coloneqq \varphi(L/N^2).
\end{align*}
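Although not strictly needed below, it may help to record the scaling structure of these projections: since each $X_i$ is homogeneous of degree $1$, the operator $L$ satisfies $L(f\circ\delta_N)=N^2(Lf)\circ\delta_N$ for the dilation automorphisms $\delta_N$ of $G$, and consequently, if $m(L)$ has convolution kernel $K$, then
\[
m(L/N^2)f=f*K_N,\qquad K_N(p)\coloneqq N^{Q}K(\delta_N p),
\]
where $Q$ denotes the homogeneous dimension of $G$. Thus the kernels of the projections $P_{(\le N)}$ are rescalings of a single Schwartz function, which is why the estimates in Theorem \ref{LP} are uniform in $N$.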
The proof of the properties listed in Theorem \ref{LP} is essentially the same as that of Theorem 6.1 of \cite{tao2018embedding}. The only part of that proof that requires modification in the setting of general Carnot groups is the following. Given $i=1,\cdots,k$ and any Schwartz function $K:G\to\mathbb{R}$, we need to show that there are Schwartz functions $K_j:G\to\mathbb{R}$, $j=1,\cdots,k$, so that for any $C^1$-function $\phi:G\to\mathbb{R}^D$,
\begin{equation}\label{integration_by_parts}
\phi*X_iK = \sum_{j=1}^k X_j\phi*K_j.
\end{equation}
This is a consequence of integration by parts:
\[
X\phi*K=-\phi*\tilde{X}K,
\]
where $X$ is any left-invariant vector field and $\tilde{X}$ its right-invariant counterpart. First, we have
\[
\phi*X_iK = -X_i\phi*K+\phi*(X_i-\tilde{X}_i)K.
\]
Now recalling \eqref{vf-form} and its variant for the right-invariant counterparts, we see that
\[
X_i-\tilde{X}_i=\sum_{r=2}^s\sum_{j=1}^{k_r}q_{r,j}\tilde{X}_{r,j},
\]
where $q_{r,j}$ is a polynomial whose multiplication operator commutes with $\tilde{X}_{r,j}$ (that is, $\tilde{X}_{r,j}q_{r,j}=0$). Thus
\[
(X_i-\tilde{X}_i)K=\sum_{r=2}^s\sum_{j=1}^{k_r} \tilde{X}_{r,j}(q_{r,j}K),
\]
and since each $\tilde{X}_{r,j}$ may be written as a linear combination of nested brackets of $\tilde{X}_1,\cdots,\tilde{X}_k$, we have
\[
(X_i-\tilde{X}_i)K=\sum_{j=1}^k \tilde{X}_j K_j
\]
for some Schwartz functions $K_j$, since Schwartz functions are closed under multiplication by polynomials and under the action of right-invariant vector fields. Hence
\[
\phi*(X_i-\tilde{X}_i)K= -\sum_{j=1}^k X_j\phi * K_j,
\]
and we have the form \eqref{integration_by_parts}.
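To illustrate the computation above in the simplest nonabelian case, consider the first Heisenberg group $\bbr^3$ with the group law $(x,y,z)\cdot(x',y',z')=(x+x',y+y',z+z'+\frac 12(xy'-yx'))$ (other normalizations differ only by constants). A direct computation gives
\[
X_1=\partial_x-\frac y2\partial_z,\qquad \tilde{X}_1=\partial_x+\frac y2\partial_z,\qquad X_1-\tilde{X}_1=-y\,\partial_z=-y\,\tilde{X}_{2,1},
\]
so in this case $q_{2,1}=-y$, which is indeed annihilated by $\tilde{X}_{2,1}=\partial_z$, so that multiplication by $q_{2,1}$ commutes with $\tilde{X}_{2,1}$.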
\section{Nash--Moser Perturbation for a bilinear form}\label{sec:NM-perturb}
\setcounter{equation}{0}
For two given $C^1$ functions $\phi,\psi:G\to \bbr^D$, we define the bilinear form $B(\phi,\psi):G\to\operatorname{Sym}^2(\bbr^{k})\subset \mathbb{R}^k\otimes \mathbb{R}^k$ as
\begin{equation}\label{Bdef}
B(\phi,\psi)\coloneqq \operatorname{Sym}\left((X_i\phi\cdot X_j\psi)_{i,j=1,\cdots,k}\right).
\end{equation}
Later, when constructing good embeddings of the Carnot group $G$, we will encounter the following situation.
Given $\psi:G\to \bbr^D$ with certain regularity properties, so that $\psi$ `represents' the geometry of $G$ at scale $A$ and above, we will need to find a `nontrivial' solution $\phi:G\to \bbr^D$ to
\begin{equation}\label{perp}
B(\phi,\psi)=0,
\end{equation}
so that $X\phi\cdot X\psi=0$ for any horizontal left-invariant vector field $X$ of $G$. This way, the Pythagorean theorem will tell us that $|\nabla(\phi+\psi)|^2=|\nabla\phi|^2+|\nabla\psi|^2$, which, coupled with an Assouad-type summation technique, will give us optimal control on the growth of $|\nabla \psi|^2$ and hence the optimal distortion rate $O(\varepsilon^{-1/2})$. (Note that Assouad \cite{assouad1983plongements} achieved this orthogonality, and hence the optimal distortion, by allowing $\phi$ and $\psi$ to take values in different direct sum components of the target space, but thereby losing control on the dimension of the target space.) Here, when we say that $\phi$ is `nontrivial', we mean that $\psi+\phi$ also has the regularity properties of $\psi$, but at scale $1$ instead of $A$. Attempts to solve the system \eqref{perp} directly using the Leibniz rule and linear algebra give less control on the smoothness of $\phi$ than we have on $\psi$, which is unsuitable for iteration. The solution proposed in \cite{tao2018embedding} was to first find a nontrivial approximate solution $\Tilde{\phi}$ to \eqref{perp}, or more precisely a solution to the low-frequency equation
\begin{equation}\label{G-5}
B(\Tilde{\phi},P_{(\le N_0)}\psi)=0.
\end{equation}
This way, we have control on all levels of smoothness of $P_{(\le N_0)}\psi$ (by Theorem \ref{LP} (3)), and hence also on $\tilde{\phi}$. This $\tilde{\phi}$ will be constructed in later sections. Once we have this approximate solution $\tilde{\phi}$, \cite{tao2018embedding} then proposed to use a variant of the Nash--Moser iteration scheme to find a perturbation of $\tilde{\phi}$ which is small enough to preserve the non-triviality of $\tilde{\phi}$ and which allows us to solve the original equation \eqref{perp}.
Our goal in this section is mainly to show that the Nash--Moser iteration scheme of \cite{tao2018embedding} carries over to the general setting of Carnot groups without obstruction, while proving a slightly stronger orthogonality statement \eqref{perp} than that of \cite{tao2018embedding}. The rest of this section follows the argument of Section 7 of \cite{tao2018embedding}; we have reproduced the entire argument here to keep track of certain calculations that arise from higher-dimensional matrix operations, as well as to state and verify various estimates in the setting of Carnot groups.
Because the Nash--Moser iteration process produces error terms, we will need to consider a slightly more general setting.
Given $\psi:G\to \bbr^D$ and $F=(F_{ij})_{i,j=1,\cdots,k}:G\to \operatorname{Sym}^2(\bbr^{k})$, we consider the problem of finding a solution $\phi:G\to \bbr^D$ to
\begin{equation}\label{G-1}
B(\phi,\psi)=F.
\end{equation}
One easy way to solve \eqref{G-1} is to solve the system
\begin{align*}
\begin{cases}
&\phi\cdot X_i\psi =0,\\
&\phi\cdot X_iX_j\psi =-F_{ij},\\
&\phi\cdot X_{2,i'}\psi =0,
\end{cases}
\quad 1\le i\le j\le k,~1\le i'\le k_2,
\end{align*}
for then, since $X_iX_j-X_jX_i\in \mathrm{span}\{X_{2,1},\cdots,X_{2,k_2}\}$, $1\le i,j\le k$, we have
\[
\begin{cases}
&\phi\cdot X_i\psi =0,\\
&\phi\cdot X_iX_j\psi =-F_{ij},
\end{cases}
\quad i,j=1,\cdots, k,
\]
and so
\[
X_i\phi\cdot X_j \psi=X_i(\phi\cdot X_j\psi)-\phi\cdot X_iX_j\psi=F_{ij},\quad i,j=1,\cdots,k.
\]
This system is solvable if $\{X_i\psi\}_{i=1,\cdots,k}\cup\{X_iX_j\psi\}_{1\le i\le j\le k}\cup \{X_{2,i'}\psi\}_{i'=1,\cdots,k_2}$ are pointwise linearly independent.
More precisely, for each $p\in G$ define the linear map $T_\psi(p):\bbr^{D}\to\bbr^{k+\frac{k(k+1)}{2}+k_2}$ by
\begin{align*}
T_\psi(p)(v)\coloneqq \Big((v\cdot X_i\psi(p))_{1\le i\le k},(v\cdot X_iX_j\psi(p))_{1\le i\le j\le k},(v\cdot X_{2,i'}\psi(p))_{1\le i'\le k_2}\Big),\quad v\in \bbr^D.
\end{align*}
If $\{X_i\psi\}_{i=1,\cdots,k}\cup\{X_iX_j\psi\}_{1\le i\le j\le k}\cup \{X_{2,i'}\psi\}_{i'=1,\cdots,k_2}$ are pointwise linearly independent, i.e., if $T_\psi(p)$ has full rank, or equivalently (by the Cauchy--Binet formula) if
\begin{equation*}
\left|\bigwedge_{i=1}^k X_i\psi(p)\wedge \bigwedge_{1\le i\le j\le k} X_iX_j\psi(p)\wedge \bigwedge_{i'=1}^{k_2} X_{2,i'}\psi(p) \right|>0,
\end{equation*}
then we can define the pseudoinverse $T_\psi(p)^{-1}:\bbr^{k+\frac{k(k+1)}{2}+k_2}\to\bbr^{D}$ of $T_\psi(p)$ by the formula
\[
T_\psi(p)^{-1}\coloneqq T_\psi(p)^*(T_\psi(p)T_\psi(p)^*)^{-1}.
\]
(Note that the linear independence condition necessitates that $D\ge k+\frac{k(k+1)}{2}+k_2$, so the pseudoinverse is well-defined.)
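For instance, for the first Heisenberg group one has $k=2$ and $k_2=1$, so this construction requires a target dimension of at least
\[
k+\frac{k(k+1)}{2}+k_2=2+3+1=6.
\]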
Then for any continuous functions $a_i:G\to\bbr$, $i=1,\cdots,k$, $b_{ij}:G\to\bbr$, $1\le i\le j\le k$, $c_{i'}:G\to\bbr$, $i'=1,\cdots,k_2$, we have the pointwise identities
\begin{align}
\begin{aligned}\label{G-4}
T_\psi(p)^{-1}(a_1(p),\cdots,a_k(p),b_{11}(p),\cdots,b_{kk}(p), c_1(p),\cdots,c_{k_2}(p))\cdot X_i\psi(p) &=a_i(p),\quad i=1,\cdots, k,\\
T_\psi(p)^{-1}(a_1(p),\cdots,a_k(p),b_{11}(p),\cdots,b_{kk}(p), c_1(p),\cdots,c_{k_2}(p))\cdot X_iX_j\psi(p) &=b_{ij}(p),\quad 1\le i\le j\le k,\\
T_\psi(p)^{-1}(a_1(p),\cdots,a_k(p),b_{11}(p),\cdots,b_{kk}(p), c_1(p),\cdots,c_{k_2}(p))\cdot X_{2,i'}\psi(p) &=c_{i'}(p),\quad i'=1,\cdots,k_2.\\
\end{aligned}
\end{align}
If we denote
\begin{equation}\label{flip-b}
b_{ji}=b_{ij}+\sum_{i'=1}^{k_2}\alpha_{i,j,i'}c_{i'},\quad 1\le i< j\le k,
\end{equation}
where $\alpha_{i,j,i'}$ are the structure constants so that
\begin{equation}\label{struc_const}
X_jX_i-X_iX_j=\sum_{i'=1}^{k_2}\alpha_{i,j,i'}X_{2,i'},\quad 1\le i\le j\le k,
\end{equation}
then we can extend the second equation of \eqref{G-4} to
\begin{equation}\label{G-4-ext}
T_\psi(p)^{-1}(a(p),b(p),c(p))\cdot X_iX_j\psi(p) =b_{ij}(p),\quad i, j=1,\cdots, k.
\end{equation}
So, by using the Leibniz rule as above, we have
\begin{equation}\label{G-4-summary}
X_i\Big( T_\psi^{-1}(a,b,c)\Big)\cdot X_j\psi = X_i(a_j)-b_{ij},\quad i, j=1,\cdots, k.
\end{equation}
As a consequence, one has the explicit solution
\begin{equation}\label{explicit}
\phi_{\mathrm{explicit}}(p)\coloneqq T_\psi(p)^{-1}(0,-F(p),0)
\end{equation}
to \eqref{G-1} (when plugging $F(p)$ into the above expression, we use the standard identification $\operatorname{Sym}^2(\mathbb{R}^k)\simeq \mathbb{R}^{\frac{k(k+1)}{2}}$).
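Indeed, applying \eqref{G-4-summary} with $(a,b,c)=(0,-F,0)$ gives
\[
X_i\phi_{\mathrm{explicit}}\cdot X_j\psi=X_i(0)-(-F_{ij})=F_{ij},\quad i,j=1,\cdots,k,
\]
and taking symmetric parts (recall that $F$ takes values in $\operatorname{Sym}^2(\bbr^k)$) yields $B(\phi_{\mathrm{explicit}},\psi)=F$.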
The problem is that the solution $\phi_{\mathrm{explicit}}$ constructed in this manner has two degrees less regularity than $\psi$, which is unsuitable for iteration purposes. We will overcome this issue by applying the above procedure to the Littlewood--Paley components of $\psi$ and $F$.
\begin{proposition}[Perturbation theorem, analog of Proposition 7.1 of \cite{tao2018embedding}]\label{Perturbation}
Let $M$ be a real number with
\[
M\ge C_0^{-1}.
\]
Let ${m^*}\ge 2$ and $\frac 12<\alpha<1$.
Suppose we are given a $C^{{m^*},\alpha}$-map $\psi:G\to \bbr^{D}$ with the following regularity properties:
\begin{enumerate}
\item (H\"older regularity at scale $A$) We have
\begin{equation}\label{psi-4}
\|\nabla^2 \psi\|_{C_A^{{m^*}-2,\alpha}}\le C_0A^{-1}.
\end{equation}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation}\label{psi-1}
C_0^{-1}M\le |X_i\psi(p)|\le C_0 M,\quad i=1,\cdots,k,
\end{equation}
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{psi-3}
\left|\bigwedge_{i=1}^k X_i\psi(p)\wedge \bigwedge_{1\le i\le j\le k} X_iX_j\psi(p)\wedge \bigwedge_{i'=1}^{k_2} X_{2,i'}\psi(p) \right|\gtrsim_{C_0} A^{-\frac{k(k+1)}{2}-k_2}M^{k}.
\end{equation}
\end{enumerate}
Let $F:G\to\operatorname{Sym}^2(\bbr^{k})$ be a function with bounded $C^{2{m^*}-1}$-norm: $\|F\|_{C^{2{m^*}-1}}<\infty$. Let $\Tilde{\phi}:G\to\bbr^{D}$ be a solution to the low-frequency equation \eqref{G-5} with bounded $C^{{m^*},\alpha}$-norm: $\|\Tilde{\phi}\|_{C^{{m^*},\alpha}}<\infty$. Then there exists a $C^{{m^*},\alpha}$-solution $\phi$ to \eqref{G-1} which is a small perturbation of $\tilde{\phi}$:
\begin{equation}\label{G-8}
\|\phi-\Tilde{\phi}\|_{C^{{m^*},\alpha}}\lesssim_{C_0} A\|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},
\end{equation}
and which makes the following cross terms small:
\begin{equation}\label{G-9}
\|X_i\phi\cdot X_j\psi-X_i\Tilde{\phi}\cdot X_jP_{(\le N_0)}\psi\|_{C^0}\lesssim_{C_0} \|F\|_{C^{2{m^*}-1}}+N_0^{1-{m^*}-\alpha}A^{1-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},\quad i,j=1,\cdots,k.
\end{equation}
Here, we treat $\alpha$ as a universal constant (we may take $\alpha=\frac 23$), and we allow the constant $C_0$ to depend on ${m^*}$. This will not contradict our hierarchy of constants, as ${m^*}$ will be later chosen to depend on $G$ (more precisely, it will equal $s^2+s+1$).
\end{proposition}
\begin{remark}\label{perturb-rem}
\begin{enumerate}
\item The bilinear form used in \cite{tao2018embedding} was the simpler version
\[
\tilde B(\phi,\psi)=(X_1\phi\cdot X_1\psi,\cdots,X_k\phi\cdot X_k\psi),
\]
and the corresponding Nash--Moser iteration scheme only established the weaker orthogonality $\tilde B(\phi,\psi)=0$, while still yielding the estimate \eqref{G-9}. We may create a version of Proposition \ref{Perturbation} for $\tilde B$ by assuming, in place of \eqref{psi-3}, the weaker freeness property:
\[
\left|\bigwedge_{i=1}^k X_i\psi(p)\wedge \bigwedge_{i=1}^k X_iX_i\psi(p) \right|\gtrsim_{C_0} A^{-k}M^{k}.
\]
The proof of this weaker proposition would not be much different from the proof of Proposition \ref{Perturbation} given below; one would modify the pseudoinverse $T_\psi^{-1}$ and its usage in the obvious way. The statement \eqref{G-9} becomes slightly harder to prove, but one may directly implement the methods of \cite{tao2018embedding}.
\item One may imagine strengthening the theorem to obtain the stronger full orthogonality statement
\[
X_i\phi\cdot X_j\psi=0,\quad i,j=1,\cdots,k,
\]
but attempts to modify the Nash--Moser iteration scheme to accommodate this difference cause the resulting infinite series to diverge (more specifically, we would then be forced to place derivatives on ${P_{(N)}}\psi$ in \eqref{new-ab}, which we must avoid in order to make the defining series converge). The best one can achieve with the tools outlined in this section is \eqref{G-9}. Nevertheless, we will be able to choose $\tilde{\phi}$ with
\[
X_i\tilde{\phi}\cdot X_jP_{(\le N_0)}\psi=0,\quad i,j=1,\cdots,k,
\]
which, along with \eqref{G-9}, establishes that $X_i\phi\cdot X_j\psi$ is sufficiently small.
\end{enumerate}
\end{remark}
\begin{proof} It will suffice to find a function $\phi$ with the stated bounds solving the approximate equation
\begin{equation}\label{phi-bounds}
\| B(\phi,\psi) - F \|_{C^{2{m^*}-1}} \le A^{2-{m^*}}\|F\|_{C^{2{m^*}-1}}+A^{3-2{m^*}}\|\tilde{\phi}\|_{C^{{m^*},\alpha}}
\end{equation}
rather than the precise equation \eqref{G-1}, while satisfying \eqref{G-8} and \eqref{G-9}:
\begin{equation}\label{G-8-prime}
\|\phi-\Tilde{\phi}\|_{C^{{m^*},\alpha}}\lesssim_{C_0} A\|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},
\end{equation}
and
\begin{equation}\label{G-9-prime}
\|X_i\phi\cdot X_j\psi-X_i\Tilde{\phi}\cdot X_jP_{(\le N_0)}\psi\|_{C^0}\lesssim_{C_0} \|F\|_{C^{2{m^*}-1}}+N_0^{1-{m^*}-\alpha}A^{1-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},\quad i,j=1,\cdots,k.
\end{equation}
One can then iteratively replace $(\tilde \phi, F)$ by the error term $(0, F - B(\phi,\psi))$ and sum the resulting solutions to obtain an exact solution to \eqref{G-1}. This is possible due to the linearity of this equation in $\phi$.
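In more detail, the outer iteration may be organized as follows (a sketch, under the assumption, permitted by our hierarchy of constants, that $\theta\coloneqq A^{2-{m^*}}\le\frac 12$). Set $F^{(0)}\coloneqq F$ and $\tilde\phi^{(0)}\coloneqq\tilde\phi$, let $\phi^{(n)}$ be the approximate solution produced from the data $(\tilde\phi^{(n)},F^{(n)})$, and set $F^{(n+1)}\coloneqq F^{(n)}-B(\phi^{(n)},\psi)$ and $\tilde\phi^{(n+1)}\coloneqq 0$. Then \eqref{phi-bounds} gives
\[
\|F^{(n+1)}\|_{C^{2{m^*}-1}}\le\theta\,\|F^{(n)}\|_{C^{2{m^*}-1}},\quad n\ge 1,
\]
so the errors $F^{(n)}$ decay geometrically; by \eqref{G-8-prime} (with $\tilde\phi^{(n)}=0$ for $n\ge 1$) the series $\phi\coloneqq\sum_{n\ge 0}\phi^{(n)}$ converges in $C^{{m^*},\alpha}$, and by bilinearity and the $C^1$ convergence of the partial sums,
\[
B(\phi,\psi)=\sum_{n\ge 0}\big(F^{(n)}-F^{(n+1)}\big)=F.
\]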
We will first construct a low-frequency solution $\phi_{(\le N_0)}$ to the low-frequency equation
\[
B(\phi_{(\le N_0)}, P_{(\le N_0)} \psi) = P_{(\le N_0)} F
\]
and then, given $\phi_{(\le N/2)}$ for dyadic $N > N_0$ by induction, we will add higher-frequency components $\phi_{(N)}$ to obtain $\phi_{(\le N)}\coloneqq\phi_{(\le N/2)}+\phi_{(N)}$, which approximately solves the higher-frequency equation
\[
B(\phi_{(\le N)}, P_{(\le N)} \psi) \approx P_{(\le N)} F.
\]
More specifically, the construction goes as follows. We first construct the low-frequency component as
\begin{equation}\label{phi-start}
\phi_{(\le N_0)} \coloneqq \tilde \phi + T_{P_{(\le N_0)} \psi}^{-1}(0, -P_{(\le N_0)} F, 0).
\end{equation}
This form was chosen so that, from \eqref{G-4-summary}, one has
\begin{equation}\label{low-freq-cross-term}
X_i(\phi_{(\le N_0)}-\tilde \phi)\cdot X_jP_{(\le N_0)} \psi = P_{(\le N_0)} F_{ij},\quad i,j=1,\cdots,k,
\end{equation}
and from \eqref{G-5} and the bilinearity of $B$, one has
\begin{equation}\label{b-ident-1}
B(\phi_{(\le N_0)}, P_{(\le N_0)} \psi) = P_{(\le N_0)} F.
\end{equation}
Next, for every dyadic $N > N_0$ we recursively define the higher-frequency component $\phi_{(N)}$ by the formula
\begin{equation}\label{phi-add}
\phi_{(N)} \coloneqq T_{P_{(\le N)} \psi}^{-1}((a^i_N)_{1\le i\le k},(b^{ij}_N)_{1\le i\le j\le k}, (c^{i'}_N)_{1\le i'\le k_2})
\end{equation}
where
\begin{align}\label{new-ab}
\begin{cases}
a^i_N &\coloneqq - (X_i P_{(\le N)} \phi_{(<N)}) \cdot {P_{(N)}} \psi, \\
b^{ij}_N &\coloneqq - (X_iX_j P_{(\le N)} \phi_{(<N)}) \cdot {P_{(N)}} \psi - {P_{(N)}} F_{ij},\\
c^{i'}_N &\coloneqq - (X_{2,i'}P_{(\le N)}\phi_{(<N)}) \cdot {P_{(N)}} \psi,
\end{cases}
\quad 1\le i\le j\le k,~1\le i'\le k_2,
\end{align}
and
\[
\phi_{(<N)} \coloneqq \phi_{(\le N_0)} + \sum_{\substack{N_0 < M < N\\ M ~\mathrm{dyadic}}} \phi_{(M)}.
\]
We will also denote
\[
\phi_{(\le N)} \coloneqq \phi_{(\le N_0)} + \sum_{\substack{N_0 < M \le N\\ M~\mathrm{dyadic}}} \phi_{(M)}.
\]
Note that in the definition of $\phi_{(N)}$, no derivatives are placed on ${P_{(N)}} \psi$ and there is some mollification of the $\phi_{(<N)}$ term, in order to avoid the loss of derivatives problem.
This form of $\phi_{(N)}$ was chosen so that
\begin{equation}\label{b-ident-2}
B(\phi_{(N)}, P_{(\le N)} \psi) + B(P_{(\le N)} \phi_{(<N)}, {P_{(N)}} \psi) = {P_{(N)}} F.
\end{equation}
Indeed, by \eqref{flip-b}, \eqref{struc_const}, and \eqref{G-4-ext}, we have
\[
b^{ij}_N = - (X_iX_j P_{(\le N)} \phi_{(<N)}) \cdot {P_{(N)}} \psi - {P_{(N)}} F_{ij},\quad i,j=1,\cdots, k,
\]
so we can compute, for $i,j=1,\cdots,k$,
\begin{align}\label{high-freq-cross-term}
\begin{aligned}
&X_i \phi_{(N)}\cdot X_j P_{(\le N)} \psi+X_iP_{(\le N)} \phi_{(<N)}\cdot X_j {P_{(N)}} \psi\\
&\stackrel{\eqref{G-4-summary}}{=}\big(X_ia_N^j-b_N^{ij}\big)+X_iP_{(\le N)} \phi_{(<N)}\cdot X_j {P_{(N)}} \psi\\
&={P_{(N)}} F_{ij}-X_jP_{(\le N)} \phi_{(<N)}\cdot X_i {P_{(N)}} \psi+X_iP_{(\le N)} \phi_{(<N)}\cdot X_j {P_{(N)}} \psi.
\end{aligned}
\end{align}
Symmetrizing in $i$ and $j$ now gives \eqref{b-ident-2}.
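Explicitly, averaging the $(i,j)$ and $(j,i)$ instances of \eqref{high-freq-cross-term}, the last two terms on the right-hand side cancel by antisymmetry, and since $F$ is symmetric we are left with
\[
\operatorname{Sym}\big(X_i \phi_{(N)}\cdot X_j P_{(\le N)} \psi\big)_{ij}+\operatorname{Sym}\big(X_iP_{(\le N)} \phi_{(<N)}\cdot X_j {P_{(N)}} \psi\big)_{ij}={P_{(N)}} F_{ij},\quad i,j=1,\cdots,k,
\]
which is exactly \eqref{b-ident-2}.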
We have the following control on the regularity of $T_{P_{(\le N)} \psi}^{-1}$:
\begin{lemma}[Regularity of the pseudoinverse; analogue of Lemma 7.2 of \cite{tao2018embedding}]\label{tpsi} For any $N \geq N_0$, one has
\[
\| T_{P_{(\le N)} \psi}^{-1} \|_{C^{{m^*}-2}_A} \lesssim_{C_0} A
\]
and
\[
\| \nabla^{{m^*}-2} T_{P_{(\le N)} \psi}^{-1} \|_{C^{{m^*}+1}_{1/N}} \lesssim_{C_0} A^{3-{m^*}}.
\]
\end{lemma}
\begin{proof} Abbreviating $T=T_{P_{(\le N)} \psi}$, recall the definition of the pseudoinverse
\[
T^{-1} = \frac{1}{\det(TT^*)} T^* \mathrm{adj}(T T^*)
\]
where $\mathrm{adj}(TT^*)$ denotes the adjugate matrix of $TT^*$.
We then need to show the bounds
\[
\left\| \nabla^l\left(\frac{1}{\det(TT^*)} T^* \mathrm{adj}(T T^*) \right) \right\|_{C^0} \lesssim_{C_0} A B_l
\]
for $0 \le l \le 2{m^*}-1$, where $B_l$ is the log-convex sequence
\[
B_l \coloneqq
\begin{cases}
A^{-l}, & 0\le l\le {m^*}-2,\\
N^{l-{m^*}+2} A^{2-{m^*}},& l>{m^*}-2.
\end{cases}
\]
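The log-convexity $B_l B_{l'}\le B_{l+l'}$, used repeatedly below, may be checked case by case; for instance (using $A\ge 1$ and $N\ge N_0\ge 1$, as our hierarchy of constants permits), if $l\le {m^*}-2<l'$ then
\[
B_lB_{l'}=A^{-l}\cdot N^{l'-{m^*}+2}A^{2-{m^*}}\le N^{l}\cdot N^{l'-{m^*}+2}A^{2-{m^*}}=B_{l+l'},
\]
and the remaining cases are similar.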
From \eqref{psi-4}, \eqref{psi-1}, and Theorem \ref{LP}(iii), we have
\[
|\nabla^l X_i P_{(\le N)} \psi| \lesssim_{C_0} M B_l,\quad i=1,\cdots, k, ~0 \le l \le 2{m^*}-1,
\]
\[
|\nabla^l X_iX_j P_{(\le N)} \psi| \lesssim_{C_0} A^{-1} B_l,\quad i,j=1,\cdots, k, ~0 \le l \le 2{m^*}-1,
\]
and
\[
|\nabla^l X_{2,i'} P_{(\le N)} \psi| \lesssim_{C_0} A^{-1} B_l,\quad i'=1,\cdots, k_2, ~0 \le l \le 2{m^*}-1.
\]
Thus, viewing $T$ as a $\left(k+\frac{k(k+1)}{2}+k_2\right) \times D$ matrix, for any $l$-th order horizontal differential operator $W_l$ the first $k$ rows of $W_l T$ have norm $O_{C_0}(M B_l)$, and the bottom $\frac{k(k+1)}{2}+k_2$ rows have norm $O_{C_0}(A^{-1} B_l)$. By the product rule, and noting that $B_l B_{l'} \le B_{l+l'}$ for all $l,l' \ge 0$, we conclude that the $\left(k+\frac{k(k+1)}{2}+k_2\right) \times \left(k+\frac{k(k+1)}{2}+k_2\right)$ matrix $W_l(TT^*)$ has top left $k \times k$ block consisting of elements of magnitude $O_{C_0}(M^2 B_l)$, top right $k\times \left(\frac{k(k+1)}{2}+k_2\right)$ and bottom left $\left(\frac{k(k+1)}{2}+k_2\right)\times k$ blocks consisting of elements of magnitude $O_{C_0}(A^{-1} M B_l)$, and bottom right $\left(\frac{k(k+1)}{2}+k_2\right)\times \left(\frac{k(k+1)}{2}+k_2\right)$ block consisting of elements of magnitude $O_{C_0}(A^{-2} B_l)$. By the product rule and cofactor expansion, $W_l \mathrm{adj}(TT^*)$ then has top left block of norm $O_{C_0}(M^{2k-2}A^{-k(k+1)-2k_2} B_l)$, top right and bottom left blocks of norm $O_{C_0}(M^{2k-1}A^{-k(k+1)-2k_2+1} B_l)$, and bottom right block of norm $O_{C_0}(M^{2k}A^{-k(k+1)-2k_2+2} B_l)$. Again by the product rule, every row of the $ D\times \left(k+\frac{k(k+1)}{2}+k_2\right)$ matrix $W_l(T^* \mathrm{adj}(TT^*))$ has norm $O_{C_0}(M^{2k} A^{-k(k+1)-2k_2+1} B_l)$ (many entries are of lower order than this).
Similarly, $\nabla^l(\det(TT^*))$ has magnitude $O_{C_0}(M^{2k} A^{-k(k+1)-2k_2} B_l)$.
Meanwhile, from \eqref{psi-1}, \eqref{psi-3}, \eqref{psi-4}, and using \eqref{lp-5} to approximate $P_{(\le N)} \psi$ by $\psi$ up to negligible error (note that $N\ge N_0$, and recall our hierarchy of constants: $N_0$ is chosen after $C_0$), we easily obtain the wedge product lower bound
\begin{equation}\label{N_0_hierarchy-6}
\left|\bigwedge_{i=1}^k X_iP_{(\le N)}\psi(p)\wedge \bigwedge_{1\le i\le j\le k} X_iX_jP_{(\le N)}\psi(p)\wedge \bigwedge_{i'=1}^{k_2} X_{2,i'}P_{(\le N)}\psi(p) \right|\gtrsim_{C_0} A^{-\frac{k(k+1)}{2}-k_2}M^{k}.
\end{equation}
From this and the Cauchy--Binet formula we have the matching lower bound
\[
\det(TT^*) \gtrsim_{C_0} M^{2k} A^{-k(k+1)-2k_2}
\]
for the determinant. Hence, by the quotient rule, $\nabla^l(\det(TT^*)^{-1})$ has magnitude $O_{C_0}(M^{-2k} A^{k(k+1)+2k_2} B_l)$. The claim now follows from the product rule.
\end{proof}
\begin{remark}
Of course, Lemma \ref{tpsi} can be strengthened to guarantee not only $C^{2{m^*}-1}$-regularity of $T^{-1}$ but also all higher levels of regularity. We have stopped at $C^{2{m^*}-1}$ because this is only what we will need later.
\end{remark}
The rest of the proof of Proposition \ref{Perturbation} follows mostly as in \cite{tao2018embedding}, except for the proof of \eqref{G-9}. We have reproduced the argument for completeness.
First, we need the regularity of our proposed solution. We begin with the smoothness of the low-frequency component $\phi_{(\le N_0)}$. From Lemma \ref{tpsi} we have the estimate
\begin{equation}\label{A_hierarchy-2}
\| T_{P_{(\le N_0)} \psi}^{-1} \|_{C^{2{m^*}-1}} \lesssim_{C_0} A,
\end{equation}
(this uses our hierarchy of constants, namely that $A$ is chosen after $N_0$), while from Theorem \ref{LP}(iii)-\eqref{lp-1} we have
\[
\| P_{(\le N_0)} F \|_{C^{2{m^*}-1}} \lesssim_G \|F\|_{C^{2{m^*}-1}}
\]
and thus from \eqref{phi-start},
\begin{equation}\label{sim}
\| \phi_{(\le N_0)} - \tilde \phi \|_{C^{2{m^*}-1}} \lesssim \|T^{-1}_{P_{(\le N_0)}\psi}\|_{C^{2{m^*}-1}}\|P_{(\le N_0)}F\|_{C^{2{m^*}-1}}\lesssim_{C_0} A\|F\|_{C^{2{m^*}-1}}.
\end{equation}
Next, we establish the smoothness of the higher-frequency components $\phi_{(N)}$. From Theorem \ref{LP}(iii)-\eqref{lp-5} we have, for $N\ge N_0$ dyadic,
\begin{equation*}
\| \nabla^m P_{(\le N)} \phi_{(<N)} \|_{C^{{m^*}+1}_{1/N}} \lesssim_G \| \nabla^m \phi_{(<N)} \|_{C^0} \le \| \phi_{(<N)} \|_{C^2},\quad m=1,2.
\end{equation*}
This implies in particular that
\[
\| X_i P_{(\le N)} \phi_{(<N)} \|_{C^{{m^*}+1}_{1/N}},\| X_iX_j P_{(\le N)} \phi_{(<N)} \|_{C^{{m^*}+1}_{1/N}},\| X_{2,i'} P_{(\le N)} \phi_{(<N)} \|_{C^{{m^*}+1}_{1/N}} \lesssim_G \| \phi_{(<N)} \|_{C^2}.
\]
Again, from Theorem \ref{LP}(iii)-\eqref{lp-5} and \eqref{psi-4}, we also have the estimates
\begin{equation}\label{P_Npsi}
\| {P_{(N)}} \psi \|_{C^{{m^*}+1}_{1/N}} \lesssim_G N^{-{m^*}-\alpha} \| \nabla^{{m^*}} \psi \|_{\dot C^{0,\alpha}} \lesssim N^{-{m^*}-\alpha} A^{2-{m^*}-\alpha} \| \nabla^2 \psi \|_{C^{{m^*}-2,\alpha}_A} \lesssim_{C_0} N^{-{m^*}-\alpha} A^{1-{m^*}-\alpha}
\end{equation}
and
\begin{align*}
\| {P_{(N)}} F \|_{C^{{m^*}+1}_{1/N}} &\lesssim_G N^{-2{m^*}+1} \| \nabla^{2{m^*}-1} F \|_{C^0} \lesssim N^{-2{m^*}+1} \| F \|_{C^{2{m^*}-1}}.
\end{align*}
Finally, from Lemma \ref{tpsi} one has
\[
\| T_{P_{(\le N)} \psi}^{-1} \|_{C^{2{m^*}-1}_{1/N}} \lesssim_{C_0} A
\]
since $A^{1-j} \lesssim A N^j$ for $0 \le j \le {m^*}-2$ and $A^{3-{m^*}} N^{j-{m^*}+2} \lesssim A N^j$ for ${m^*}-2 \le j \le 2{m^*}-1$. Inserting the above estimates into \eqref{phi-add}, we conclude that
\begin{align}\label{phi-add-estimate}
\begin{aligned}
\| \phi_{(N)} \|_{C^{{m^*}+1}_{1/N}} & \lesssim \|T^{-1}_{P_{(\le N)}\psi}\|_{C^{{m^*}+1}_{1/N}}\Big(\sum_{i=1}^k\|X_iP_{(\le N)}\phi_{(<N)}\|_{C^{{m^*}+1}_{1/N}}\|{P_{(N)}}\psi\|_{C^{{m^*}+1}_{1/N}}\\
&\qquad\qquad+\sum_{i,j=1}^k\|X_iX_jP_{(\le N)}\phi_{(<N)}\|_{C^{{m^*}+1}_{1/N}}\|{P_{(N)}}\psi\|_{C^{{m^*}+1}_{1/N}}\\
&\qquad\qquad+\sum_{i'=1}^{k_2}\|X_{2,i'}P_{(\le N)}\phi_{(<N)}\|_{C^{{m^*}+1}_{1/N}}\|{P_{(N)}}\psi\|_{C^{{m^*}+1}_{1/N}}\\
&\qquad\qquad +\|{P_{(N)}} F\|_{C^{{m^*}+1}_{1/N}}\Big)\\
&\lesssim_{C_0} A \Big(A^{1-{m^*}-\alpha}N^{-{m^*}-\alpha}\|\phi_{(<N)}\|_{C^2}+N^{-2{m^*}+1}\|F\|_{C^{2{m^*}-1}}\Big)\\
&\lesssim_{C_0} A^{2-{m^*}-\alpha} N^{-{m^*}-\alpha}\|\phi_{(<N)}\|_{C^2} + A N^{-2{m^*}+1} \| F \|_{C^{2{m^*}-1}}
\end{aligned}
\end{align}
and so
\begin{equation}\label{phi-add-estimate-2}
\| \phi_{(N)} \|_{C^{{m^*}}} \lesssim_{C_0} A^{2-{m^*}-\alpha} N^{-\alpha}\|\phi_{(<N)}\|_{C^2} + A N^{-{m^*}+1} \| F \|_{C^{2{m^*}-1}}.
\end{equation}
By the triangle inequality we thus have
\begin{align*}
\| \phi_{(\le N)} - \tilde \phi \|_{C^{{m^*}}} &\le \| \phi_{(< N)} - \tilde \phi \|_{C^{{m^*}}} + \| \phi_{(N)} \|_{C^{{m^*}}} \\
&\le (1 + O_{C_0}(A^{2-{m^*}-\alpha} N^{-\alpha})) \| \phi_{(<N)} - \tilde \phi \|_{C^{{m^*}}}\\
&\quad + O_{C_0}(A^{2-{m^*}-\alpha} N^{-\alpha}) \| \tilde \phi \|_{C^{{m^*},\alpha}} + O_{C_0}(A N^{-{m^*}+1}) \| F \|_{C^{2{m^*}-1}}.
\end{align*}
One can easily see by induction, with base case \eqref{sim}, that
\[
\| \phi_{(\le N)} - \tilde \phi \|_{C^{{m^*}}} \lesssim_{C_0} A\|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha} N_0^{-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} ,\quad N\ge N_0,
\]
and so by the triangle inequality, and noting that ${m^*}\ge 2$, we have
\begin{equation} \label{sim-3b}
\| \phi_{(\le N)} \|_{C^{{m^*}}} \lesssim_{C_0} A\|F\|_{C^{2{m^*}-1}}+\|\tilde{\phi}\|_{C^{{m^*},\alpha}} ,\quad N\ge N_0.
\end{equation}
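To close the induction used for the bound preceding \eqref{sim-3b}: abbreviating $x_N\coloneqq\|\phi_{(\le N)}-\tilde\phi\|_{C^{{m^*}}}$ and $\varepsilon_N\coloneqq A^{2-{m^*}-\alpha}N^{-\alpha}$, iterating the recursive bound gives, schematically,
\[
x_N\lesssim_{C_0} \prod_{N_0<M\le N}\big(1+O_{C_0}(\varepsilon_M)\big)\Big(x_{N_0}+\sum_{N_0<M\le N}\big(\varepsilon_M\|\tilde\phi\|_{C^{{m^*},\alpha}}+AM^{1-{m^*}}\|F\|_{C^{2{m^*}-1}}\big)\Big),
\]
where the sums and products are over dyadic $M$; since $\sum_{M>N_0}\varepsilon_M\lesssim A^{2-{m^*}-\alpha}N_0^{-\alpha}\lesssim 1$ and $\sum_{M>N_0}M^{1-{m^*}}\lesssim N_0^{1-{m^*}}\le 1$ (as ${m^*}\ge 2$), the product is $O_{C_0}(1)$, and the claim follows from the base case \eqref{sim}.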
Inserting this back into \eqref{phi-add-estimate} and \eqref{phi-add-estimate-2}, and noting again that ${m^*}\ge 2$, we obtain
\begin{align}\label{sim-3a}
\begin{aligned}
\| \phi_{(N)} \|_{C^{{m^*}+1}_{1/N}} &\lesssim_{C_0} A^{2-{m^*}-\alpha} N^{-{m^*}-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + (A^{3-{m^*}-\alpha} N^{-{m^*}-\alpha}+AN^{-2{m^*}+1}) \| F \|_{C^{2{m^*}-1}} \\
&\le A^{2-{m^*}-\alpha} N^{-{m^*}-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + AN^{-{m^*}-\alpha} \| F \|_{C^{2{m^*}-1}}
\end{aligned}
\end{align}
and
\begin{align}\label{sim-3}
\begin{aligned}
\| \phi_{(N)} \|_{C^{{m^*}}} \le N^{{m^*}} \| \phi_{(N)} \|_{C^{{m^*}+1}_{1/N}}\lesssim_{C_0} A^{2-{m^*}-\alpha} N^{-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A N^{-\alpha} \| F \|_{C^{2{m^*}-1}}.
\end{aligned}
\end{align}
We thus conclude that the sum
\[
\phi \coloneqq \phi_{(\le N_0)} + \sum_{N > N_0} \phi_{(N)}
\]
converges in the $C^{{m^*}}$ norm (and consequently also in the $C^2$ norm, as ${m^*}\ge 2$).
We now prove \eqref{G-8-prime}. From \eqref{sim}, it is enough to show
\[
\left\| \sum_{N > N_0} \phi_{(N)} \right\|_{C^{{m^*},\alpha}} \lesssim_{C_0} A \|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}}.
\]
As noted above, from \eqref{sim-3} we have
\[
\left\| \sum_{N > N_0} \phi_{(N)} \right\|_{C^{{m^*}}}\le \sum_{N > N_0} \left\|\phi_{(N)} \right\|_{C^{{m^*}}} \lesssim_{C_0} AN_0^{-\alpha} \|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha}N_0^{-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},
\]
so it remains to show H\"older regularity:
\begin{equation}\label{hold}
\left|\nabla^{{m^*}} \sum_{N > N_0} \phi_{(N)}(p) - \nabla^{{m^*}} \sum_{N > N_0} \phi_{(N)}(q)\right| \lesssim_{C_0} (A \|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}}) d(p,q)^\alpha
\end{equation}
for any $p,q \in G$. By the triangle inequality, it is enough to show
\begin{equation*}
\sum_{N > N_0}\left|\nabla^{{m^*}} \phi_{(N)}(p) - \nabla^{{m^*}} \phi_{(N)}(q)\right| \lesssim_{C_0} (A \|F\|_{C^{2{m^*}-1}}+A^{2-{m^*}-\alpha}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}}) d(p,q)^\alpha.
\end{equation*}
On the one hand, we may bound
\begin{align*}
|\nabla^{{m^*}} \phi_{(N)}(p) - \nabla^{{m^*}} \phi_{(N)}(q)| &\lesssim \| \nabla^{{m^*}} \phi_{(N)} \|_{C^0} \lesssim \| \phi_{(N)} \|_{C^{{m^*}}} \\
&\stackrel{\eqref{sim-3}}{\lesssim_{C_0}} A^{2-{m^*}-\alpha}N^{-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A N^{-\alpha}\|F\|_{C^{2{m^*}-1}}\\
&= (A^{2-{m^*}-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A\|F\|_{C^{2{m^*}-1}})N^{-\alpha}.
\end{align*}
On the other hand, one has
\begin{align*}
|\nabla^{{m^*}} \phi_{(N)}(p) - \nabla^{{m^*}} \phi_{(N)}(q)| &\lesssim \| \nabla^{{m^*}+1} \phi_{(N)} \|_{C^0} d(p,q)\lesssim N^{{m^*}+1} \| \phi_{(N)} \|_{C^{{m^*}+1}_{1/N}} d(p,q)\\
&\stackrel{\eqref{sim-3a}}{\lesssim_{C_0} } (A^{2-{m^*}-\alpha}N^{-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A N^{-\alpha}\|F\|_{C^{2{m^*}-1}})(N d(p,q))\\
&\lesssim_{C_0} (A^{2-{m^*}-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A\|F\|_{C^{2{m^*}-1}}) N^{-\alpha}(N d(p,q)).
\end{align*}
Thus, the left-hand side of \eqref{hold} is bounded by
\[
\lesssim_{C_0} (A^{2-{m^*}-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A\|F\|_{C^{2{m^*}-1}}) \sum_N N^{-\alpha} \min(1, N d(p,q))
\]
and the claim \eqref{G-8-prime} follows by summing the double-ended geometric series $\sum_N N^{-\alpha} \min(1, N d(p,q))$ using the hypothesis $0 < \alpha < 1$.
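Concretely, writing $d\coloneqq d(p,q)$ (the case $d=0$ being trivial) and splitting the dyadic sum at $N\sim 1/d$,
\[
\sum_{N} N^{-\alpha}\min(1,Nd)\le d\sum_{N\le 1/d}N^{1-\alpha}+\sum_{N>1/d}N^{-\alpha}\lesssim_\alpha d\cdot d^{\alpha-1}+d^{\alpha}\lesssim d^{\alpha},
\]
where both geometric series converge because $0<\alpha<1$.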
Now we prove \eqref{phi-bounds}. As $\phi_{(\le N)}$ converges in $C^2$ to $\phi$ as $N \to \infty$, and $P_{(\le N)} \psi$ converges in $C^2$ to $\psi$, we may write $B(\phi,\psi)$ as the uniform limit of $B(\phi_{(\le N)}, P_{(\le N)} \psi)$. Using \eqref{b-ident-1} and \eqref{b-ident-2}, we have the telescoping sum
\begin{align*}
B(\phi,\psi) &= B(\phi_{(\le N_0)}, P_{(\le N_0)} \psi) + \sum_{N > N_0} (B(\phi_{(\le N)}, P_{(\le N)} \psi) - B(\phi_{(< N)}, P_{(< N)} \psi)) \\
&= P_{(\le N_0)} F + \sum_{N > N_0} (B(\phi_{(N)}, P_{(\le N)} \psi) + B(\phi_{(<N)}, {P_{(N)}} \psi))\\
&= P_{(\le N_0)} F + \sum_{N > N_0} (P_{(N)} F + B(P_{(>N)}\phi_{(<N)}, {P_{(N)}} \psi))\\
&= F+\sum_{N > N_0} B(P_{(>N)}\phi_{(<N)}, {P_{(N)}} \psi),
\end{align*}
or
\[
B(\phi,\psi) - F = \sum_{N>N_0} B(P_{(>N)}\phi_{(<N)}, {P_{(N)}} \psi).
\]
Each of the terms in the right-hand side, being a `high-high paraproduct' of $\nabla \psi$ and $\nabla \phi$, has much higher regularity ($C^{2{m^*}-1}$) than either $\nabla \psi$ or $\nabla \phi$ ($C^{{m^*}-1}$). Indeed, by the triangle inequality and product rule, we have
\begin{align*}
\| B(\phi,\psi) - F\|_{C^{2{m^*}-1}} &\le \sum_{N>N_0} \| B(P_{(\ge N)}\phi_{(<N)}, {P_{(N)}} \psi) \|_{C^{2{m^*}-1}}\\
&\lesssim \sum_{N>N_0} \sum_{j_1+j_2 = 2{m^*}-1} \| \nabla P_{(>N)} \phi_{(<N)} \|_{C^{j_1}}\| \nabla {P_{(N)}} \psi \|_{C^{j_2}}.
\end{align*}
For any $0 \le j_1 \le 2{m^*}-1$, one has from Theorem \ref{LP}(iii)-\eqref{lp-5} and \eqref{sim-3} that
\begin{align*}
\| \nabla P_{(>N)} \phi_{(<N)} \|_{C^{j_1}} &\lesssim N^{j_1+1} \| P_{(>N)} \phi_{(<N)} \|_{C^{j_1+1}_{1/N}} \\
&\lesssim_G N^{j_1 +1-{m^*}} \| \nabla^{{m^*}} \phi_{(<N)} \|_{C^0} \\
&\lesssim_{C_0} N^{j_1 +1-{m^*}} (A^{2-{m^*}-\alpha} N^{-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}} + AN^{-\alpha} \| F \|_{C^{2{m^*}-1}})\\
&= N^{j_1 +1-{m^*}-\alpha} (A^{2-{m^*}-\alpha} \|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A\| F \|_{C^{2{m^*}-1}}).
\end{align*}
Also, for any $0 \le j_2 \le 2{m^*}-1$, we have from Theorem \ref{LP}(iii)-\eqref{lp-5} and \eqref{psi-4} that
\begin{align*}
\| \nabla {P_{(N)}} \psi \|_{C^{j_2}} &\lesssim N^{j_2+1} \| {P_{(N)}} \psi \|_{C^{j_2+1}_{1/N}} \lesssim_{G} N^{j_2+1} N^{-{m^*}-\alpha} \| \nabla^{{m^*}} \psi \|_{\dot C^{0,\alpha}}\lesssim_{C_0} N^{j_2+1-{m^*}-\alpha} A^{1-{m^*}-\alpha},
\end{align*}
and thus
\begin{align*}
&\| B(\phi,\psi) - F\|_{C^{2{m^*}-1}} \\
&\lesssim_{C_0}\sum_{N>N_0}\sum_{j_1+j_2=2{m^*}-1} N^{j_1 +1-{m^*}-\alpha}N^{j_2+1-{m^*}-\alpha} A^{1-{m^*}-\alpha} (A^{2-{m^*}-\alpha} \|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A \| F \|_{C^{2{m^*}-1}})\\
&\lesssim_{C_0}\sum_{N>N_0} N^{1-2\alpha} A^{1-{m^*}-\alpha} (A^{2-{m^*}-\alpha} \|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A \| F \|_{C^{2{m^*}-1}})\\
&\lesssim_{C_0} N_0^{1-2\alpha} A^{1-{m^*}-\alpha} (A^{2-{m^*}-\alpha} \|\tilde{\phi}\|_{C^{{m^*},\alpha}} + A \| F \|_{C^{2{m^*}-1}})
\end{align*}
(we used $\frac 12<\alpha<1$), which gives \eqref{phi-bounds}.
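Indeed, since $N_0\ge 1$, $A\ge 1$, and $\alpha>\frac 12$, we have
\[
N_0^{1-2\alpha}A^{1-{m^*}-\alpha}\cdot A\le A^{2-{m^*}},\qquad N_0^{1-2\alpha}A^{1-{m^*}-\alpha}\cdot A^{2-{m^*}-\alpha}\le A^{3-2{m^*}},
\]
and the implied constant $O_{C_0}(1)$ in the final display is absorbed by the factor $N_0^{1-2\alpha}$, as $N_0$ is chosen after $C_0$ and $1-2\alpha<0$.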
Finally, we prove \eqref{G-9-prime}. Fix $i,j=1,\cdots,k$. Using telescoping sums,
\begin{align*}
&X_i\phi\cdot X_j\psi-X_i\Tilde{\phi}\cdot X_jP_{(\le N_0)}\psi\\
&= X_i(\phi_{(\le N_0)}-\tilde \phi)\cdot X_jP_{(\le N_0)} \psi+\sum_{N>N_0}\left( X_i\phi_{(\le N)}\cdot X_jP_{(\le N)}\psi-X_i\phi_{(<N)}\cdot X_jP_{(< N)}\psi\right)\\
&\stackrel{\eqref{low-freq-cross-term}}{=} P_{(\le N_0)} F_{ij}+\sum_{N>N_0}\left(X_i \phi_{(N)}\cdot X_j P_{(\le N)} \psi+X_i \phi_{(<N)}\cdot X_j {P_{(N)}} \psi\right)\\
&\stackrel{\eqref{high-freq-cross-term}}{=} P_{(\le N_0)} F_{ij}+\sum_{N>N_0}{P_{(N)}} F_{ij}+\sum_{N>N_0}\left(-X_jP_{(\le N)} \phi_{(<N)}\cdot X_i {P_{(N)}} \psi+X_i \phi_{(<N)}\cdot X_j {P_{(N)}} \psi\right)\\
&=F+\sum_{N>N_0}\left(-X_jP_{(\le N)} \phi_{(<N)}\cdot X_i {P_{(N)}} \psi+X_i \phi_{(<N)}\cdot X_j {P_{(N)}} \psi\right).
\end{align*}
Thus
\begin{align*}
&\|X_i\phi\cdot X_j\psi-X_i\Tilde{\phi}\cdot X_jP_{(\le N_0)}\psi\|_{C^0}\\
&\le \|F\|_{C^0}+\sum_{N>N_0}\left(\|\nabla P_{(\le N)} \phi_{(<N)}\|_{C^0}+\|\nabla \phi_{(<N)}\|_{C^0} \right)\|\nabla {P_{(N)}} \psi\|_{C^0}\\
&\stackrel{\eqref{lp-1}}{\lesssim_{G}}\|F\|_{C^0}+\sum_{N>N_0}\|\nabla \phi_{(<N)}\|_{C^0}\|\nabla {P_{(N)}} \psi\|_{C^0}\\
&\stackrel{\eqref{sim-3b}, \eqref{P_Npsi}}{\lesssim_{C_0}}\|F\|_{C^0}+\sum_{N>N_0}\left(A\|F\|_{C^{2{m^*}-1}}+\|\tilde{\phi}\|_{C^{{m^*},\alpha}}\right)N^{1-{m^*}-\alpha} A^{1-{m^*}-\alpha}\\
&\lesssim \|F\|_{C^{2{m^*}-1}}+N_0^{1-{m^*}-\alpha} A^{1-{m^*}-\alpha}\|\tilde{\phi}\|_{C^{{m^*},\alpha}},
\end{align*}
as desired.
\end{proof}
We state Proposition \ref{Perturbation} for the case $F=0$, which is the form that will be used later. (We remark that when converting \eqref{G-9} into \eqref{G-9-again} below, we have used our hierarchy of choosing $N_0$ after $C_0$.)
\begin{corollary}\label{perturbation-cor}
Let $M$ be a real number with
\[
M\ge C_0^{-1}.
\]
Let ${m^*}\ge 2$ and $\frac 12<\alpha<1$. Suppose we are given a $C^{{m^*},\alpha}$-map $\psi:G\to \bbr^{D}$ with the following regularity properties:
\begin{enumerate}
\item (H\"older regularity at scale $A$) We have
\begin{equation*}
\|\nabla^2 \psi\|_{C_A^{{m^*}-2,\alpha}}\le C_0A^{-1}.
\end{equation*}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation*}
C_0^{-1}M\le |X_i\psi(p)|\le C_0 M,\quad i=1,\cdots,k.
\end{equation*}
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation*}
\left|\bigwedge_{i=1}^k X_i\psi(p)\wedge \bigwedge_{1\le i\le j\le k} X_iX_j\psi(p)\wedge \bigwedge_{i'=1}^{k_2} X_{2,i'}\psi(p) \right|\gtrsim_{C_0} A^{-\frac{k(k+1)}{2}-k_2}M^{k}.
\end{equation*}
\end{enumerate}
Let $\Tilde{\phi}:G\to\bbr^{D}$ be a $C^{{m^*},\alpha}$-solution to the low-frequency equation \eqref{G-5} with $\|\Tilde{\phi}\|_{C^{{m^*},\alpha}}<\infty$. Then there exists a $C^{{m^*},\alpha}$-solution $\phi$ to \eqref{G-1} which is a small perturbation of $\tilde{\phi}$:
\begin{equation}\label{G-8-again}
\|\phi-\Tilde{\phi}\|_{C^{{m^*},\alpha}}\lesssim_{C_0} A^{2-{m^*}}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},
\end{equation}
and which makes the following cross terms small:
\begin{equation}\label{G-9-again}
\|X_i\phi\cdot X_j\psi-X_i\Tilde{\phi}\cdot X_jP_{(\le N_0)}\psi\|_{C^0}\le A^{1-{m^*}}\|\Tilde{\phi}\|_{C^{{m^*},\alpha}},\quad i,j=1,\cdots,k.
\end{equation}
\end{corollary}
\begin{remark}
\begin{enumerate}
\item To repeat, we will later on use $\alpha=\frac 23$ and ${m^*}=s^2+s+1$, where $G$ is of step $s$, so these are to be considered `geometric constants'. Following our hierarchy of constants, we will be choosing $C_0$, $N_0$, and $A$, in that order.
\item At first glance, Corollary \ref{perturbation-cor} may seem to be missing a necessary dependence on $N_0$: the solution $\Tilde{\phi}$ to \eqref{G-5} approximates the solution $\phi$ to \eqref{G-1}, with the approximation getting better as $N_0$ gets larger, yet the quantitative estimates \eqref{G-8-again} and \eqref{G-9-again} lack any explicit dependence on $N_0$. However, the dependence is implicit: \eqref{G-8-again} and \eqref{G-9-again} depend on $A$ which in turn depends on $N_0$, and as $N_0$ gets larger, the right-hand sides of \eqref{G-8-again} and \eqref{G-9-again} get smaller. The part where we needed the dependence of $A$ on $N_0$ in the proof of Proposition \ref{Perturbation} was in \eqref{A_hierarchy-2}.
\end{enumerate}
\end{remark}
\section{Regular extensions of orthonormal systems}\label{sec:ON-ext}
\setcounter{equation}{0}
Another tool that we will need is a certain result on extending orthonormal systems. In order to apply Corollary \ref{perturbation-cor}, we first need to construct a nontrivial solution $\tilde{\phi}$ to the low-frequency equation \eqref{G-5}, or more generally the stronger equation
\begin{equation}\label{lowfreq-strongperp}
X_i\tilde{\phi}\cdot X_j P_{(\le N_0)}\psi =0,\quad i,j=1,\cdots,k
\end{equation}
as promised in Remark \ref{perturb-rem}. One can see using the Leibniz rule that it is enough to solve the system
\begin{equation}\label{pre-strongperp}
\begin{cases}
\tilde{\phi}\cdot X_i P_{(\le N_0)}\psi =0,& i=1,\cdots,k,\\
\tilde{\phi}\cdot X_iX_j P_{(\le N_0)}\psi =0,& i,j=1,\cdots,k.
\end{cases}
\end{equation}
However, the vectors $\{X_iX_j P_{(\le N_0)}\psi\}_{i,j=1,\cdots,k}$ may not be linearly independent, as $\{X_iX_j-X_jX_i\in V_2:1\le i<j\le k\}$ may be linearly dependent, so the above system \eqref{pre-strongperp} may be overdetermined. Instead, we will solve the equivalent system
\begin{equation}\label{strongperp}
\begin{cases}
\tilde{\phi}\cdot X_i P_{(\le N_0)}\psi =0,& i=1,\cdots,k,\\
\tilde{\phi}\cdot X_iX_j P_{(\le N_0)}\psi =0,& i,j=1,\cdots,k,~i\le j,\\
\tilde{\phi}\cdot X_{2,i} P_{(\le N_0)}\psi =0,& i=1,\cdots,k_2,
\end{cases}
\end{equation}
where we recall that $\{X_{2,i}\}_{i=1}^{k_2}$ is a basis of $V_2$ (the equivalence of \eqref{pre-strongperp} and \eqref{strongperp} follows from $[V_1,V_1]=V_2$). The vectors $\{X_i P_{(\le N_0)}\psi\}_{i=1,\cdots,k}\cup\{X_iX_j P_{(\le N_0)}\psi\}_{1\le i\le j\le k}\cup \{X_{2,i} P_{(\le N_0)}\psi\}_{i=1,\cdots,k_2}$ can be made linearly independent if we require the stronger freeness property
\begin{equation}\label{freeness-depth2}
\left|\bigwedge_{i=1}^k X_i\psi\wedge \bigwedge_{1\le i\le j\le k}X_iX_j\psi\wedge \bigwedge_{i=1}^{k_2}X_{2,i}\psi\right|\gtrsim_{C_0}\prod_{i=1}^k |X_i\psi|\cdot \prod_{1\le i\le j\le k}|X_iX_j\psi|\cdot \prod_{i=1}^{k_2}|X_{2,i}\psi|
\end{equation}
along with some regularity, say $\|\nabla^2\psi\|_{C_A^1}\lesssim_{C_0} A^{-1}$, for then we can apply Theorem \ref{LP}(iii)-\eqref{lp-5} to approximate the derivatives of $P_{(\le N_0)}\psi$ by the corresponding derivatives of $\psi$.
Now suppose $v_1,\cdots,v_{k+\frac{k(k+1)}{2}+k_2}$ is the result of applying the Gram-Schmidt process to the vectors $\{X_i P_{(\le N_0)}\psi\}_{i=1,\cdots,k}\cup\{X_iX_j P_{(\le N_0)}\psi\}_{1\le i\le j\le k}\cup \{X_{2,i} P_{(\le N_0)}\psi\}_{i=1,\cdots,k_2}$. The freeness property \eqref{freeness-depth2} guarantees some regularity of the $v_i$'s, and solving \eqref{strongperp} is equivalent to solving
\begin{equation}\label{perp-motiv}
\tilde{\phi}\cdot v_i=0,\quad i=1,\cdots, k+\frac{k(k+1)}{2}+k_2.
\end{equation}
Thus, the following general question arises:
\begin{question}\label{general-lift}
Given a space $X$ on which a function space $(\mathcal{F}(X),\|\cdot\|_\mathcal{F})$ is defined, and given maps $v_1,\cdots,v_m:X\to\bbs^{D-1}$, $D\ge m+1$, which form a pointwise orthonormal system and which have the uniform regularity bound $\|v_i\|_\mathcal{F}\lesssim 1$, when can we extend the system to include a new map $v_{m+1}:X\to\bbs^{D-1}$, such that $v_1,\cdots,v_m,v_{m+1}$ forms a pointwise orthonormal system and $\|v_{m+1}\|_\mathcal{F}\lesssim_{X,\mathcal{F}(X),m,D}1$?
\end{question}
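Pointwise, for $m<D$ the extension asked for in Question \ref{general-lift} always exists: one projects a standard basis vector onto the orthogonal complement of $v_1(p),\cdots,v_m(p)$ and normalizes. The entire content of the question is whether such a choice can be made with uniform regularity in $p$. A minimal sketch of the pointwise linear algebra only (the helper name is ours, not from the literature):

```python
import math

def unit_vector_orthogonal_to(vs, D):
    """Given an orthonormal list vs of vectors in R^D (with len(vs) < D),
    return a unit vector orthogonal to all of them: project each standard
    basis vector away from span(vs), keep the first nonzero remainder."""
    for k in range(D):
        e = [1.0 if i == k else 0.0 for i in range(D)]
        w = e[:]
        for v in vs:
            c = sum(a * b for a, b in zip(e, v))  # coefficient of e along v
            w = [wi - c * vi for wi, vi in zip(w, v)]
        n = math.sqrt(sum(x * x for x in w))
        if n > 1e-8:  # some basis vector must survive since len(vs) < D
            return [x / n for x in w]
    raise ValueError("vs already spans R^D")
```

The surviving basis vector can switch abruptly as the system varies, so this pointwise recipe gives no regularity in $p$; this is exactly why Theorems \ref{Lip-lifting} and \ref{Cj-lifting} proceed by a different, probabilistic construction.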
We will provide partial positive answers to this question that are applicable to our construction of the embedding. Theorem \ref{Cj-lifting-lattice} is a previous positive answer from \cite{tao2018embedding}, while Theorems \ref{Lip-lifting}, \ref{Cj-lifting}, and \ref{partialpositiveanswer} and Corollary \ref{multiple-Cj-lifting} are the new positive answers of this paper.
Under conditions for which the above question has a positive answer, we would be able to simply take $\tilde{\phi}$ to be $v_{m+1}$ to solve \eqref{perp-motiv}, for then $\tilde{\phi}$ would have similar regularity as the $v_i$'s, i.e., it would have bounded $C^{{m^*},\alpha}$ norm (there is some loss of constant factors when applying Theorem \ref{LP}(iii)-\eqref{lp-1}, but this is offset by the fact that there is a change of scale: we have control on the $C_A^{{m^*},\alpha}$ norm of $\psi$, and we need only control the $C^{{m^*},\alpha}$ norm of $\tilde{\phi}$).
Actually, in order to also obtain a freeness property for $\psi+\phi$, we will take $\tilde{\phi}$ to be a linear combination of a larger extension $v_{m+1},\cdots,v_{m+m'}$ of $v_1,\cdots,v_m$ with variable coefficients; the coefficients will guarantee the freeness property in this case (see \eqref{IsomCompose} and \eqref{decomp}). Obtaining a larger orthonormal extension will be possible simply by adding new vectors one at a time.
In \cite[Section 8]{tao2018embedding} Question \ref{general-lift} has been answered in the positive for the case $X=\bbh^3$ and
\[
\|\phi\|_\mathcal{F}=\|\phi\|_{C^0}+R\|\nabla \phi\|_{C^j}
\]
for any $j\ge 0$ and $R\ge 1$. The proof in \cite{tao2018embedding} used the fact that the Heisenberg group $\bbh^3$ admits a CW complex structure that is periodic with respect to its standard discrete cocompact lattice. The methods of \cite{tao2018embedding} can be generalized in a straightforward manner to prove the following theorem.
\begin{theorem}[Corollary 8.4 of \cite{tao2018embedding}]\label{Cj-lifting-lattice}
Let $G$ be a Carnot group that admits a cocompact lattice $\Gamma$ and a CW structure whose cells can be obtained by left $\Gamma$-translation from a finite list of cells. Let $1\le m\le D-n-1$ (where $n$ is the topological dimension of $G$), $j\ge 1$, and let $\{R_i\}_{i=1}^j$ be a log-concave sequence of positive reals, i.e., $R_iR_{i'}\ge R_{i+i'}$ whenever $i+i'\le j$. Let $v_1,\cdots,v_m:G\to \bbs^{D-1}$ be functions that form an orthonormal system at each point, with the uniform regularity bound
\[
\sum_{k=1}^j R_k\|\nabla^k v_i\|_{C^0}\le 1,\quad i=1,\cdots, m.
\]
Then there exists another function $v_{m+1}:G\to \bbs^{D-1}$ such that $v_1,\cdots,v_m$ along with $v_{m+1}$ form an orthonormal system at each point, and
\begin{equation}\label{controlled-norms-lattice}
\sum_{k=1}^j R_k\|\nabla^k v_{m+1}\|_{C^0}\lesssim_{G,D,j} 1.
\end{equation}
\end{theorem}
In other words, we are given a bundle $B$ over $G$, where for each $p\in G$ the fibre of $B$ over $p$ is the collection of unit vectors $v\in \bbs^{D-1}$ which are perpendicular to $v_1(p),\cdots,v_m(p)$ (so each fibre is homeomorphic to $\bbs^{D-m-1}$), and we need to show that there is a section of this bundle whose regularity is controlled at the same level as that of the bundle itself. Tao \cite[Section 8]{tao2018embedding} achieved this for the Heisenberg group $\bbh^3$ in the spirit of quantitative topology, by imposing a `uniform' CW-structure on $\bbh^3$ as above and then inductively constructing the section starting from low-dimensional skeleta. In the inductive step in \cite{tao2018embedding}, one has to use the fact that the homotopy groups $\pi_i(\bbs^n)$ vanish for $i<n$, which necessitates the `dimension gap' $m\le D-3-1$.
The main difficulty in proving Theorem \ref{Cj-lifting-lattice} is to construct a section $\tilde{v}_{m+1}$ which is uniformly continuous, with the modulus of uniform continuity depending only on $G$, $D$ and $R_1$. We can then obtain a section $v_{m+1}$ with the stronger regularity property \eqref{controlled-norms-lattice} simply by mollifying $\tilde{v}_{m+1}$ and then applying the Gram-Schmidt orthogonalization process; the regularity \eqref{controlled-norms-lattice} will simply be a consequence of the algebra property for norms of the form $\|\cdot\|_{C^0}+\sum_{k=1}^j R_k\|\nabla^k \cdot\|_{C^0}$.
General Carnot groups may not admit a cocompact lattice (as the structure constants for any basis may be irrational), and it is not clear whether $G$ admits a ``uniform'' CW structure that is amenable to the above proof method. We will avoid the need for a CW structure by only using the fact that $G$ is a doubling metric space. More precisely, we will first prove that the most challenging part of Theorem \ref{Cj-lifting-lattice}, i.e., the case $j=1$, can be done in the setting of doubling metric spaces. We state this result separately in anticipation of future work.
\begin{theorem}\label{Lip-lifting}
Let $(X,d)$ be a $K$-doubling metric space ($K\ge 2$), and let $m\le D-224 K^4\log K$. If $v_1,\cdots,v_m:X\to\bbs^{D-1}$ are 1-Lipschitz functions that form an orthonormal system at each point, then there exists a $150K^5m(m+1)$-Lipschitz function $v_{m+1}:X\to \bbs^{D-1}$ such that $v_1,\cdots,v_m$ along with $v_{m+1}$ form an orthonormal system at each point.
\end{theorem}
It will later become clear that Theorem \ref{Lip-lifting} provides a partial positive answer to Question \ref{general-lift} for many other function spaces as well; see Theorem \ref{partialpositiveanswer}.
\begin{proof}
Take a maximal $\delta$-net $\mathcal{N}_\delta$ of $X$, where $\delta=\frac{1}{8Km}$. By \eqref{doubling-net},
\[
|\mathcal{N}_\delta \cap B_{2\delta}(p)|\le K^2\quad \mathrm{and}\quad|\mathcal{N}_\delta \cap B_{4\delta}(p)|\le K^3\quad\mbox{for all }p\in X.
\]
Let $\Omega$ be a probability space on which independent random variables $v_{m+1}'(p)\in \bbs^{D-1}$, $p\in \mathcal{N}_\delta$, are defined, with each $v_{m+1}'(p)$ distributed uniformly on the unit sphere of the orthogonal complement of $\mathrm{span}\{v_1(p),\cdots,v_m(p)\}$; in particular, for each $\omega\in \Omega$ and $p\in \mathcal{N}_\delta$, $v_{m+1}'(p)(\omega)$ forms an orthonormal set along with $v_1(p),\cdots,v_m(p)$. Let $\epsilon=\frac{1}{4K^2}$, and for each $p\in \mathcal{N}_\delta$, let us define the event
\[
A_p=\{\omega\in \Omega:\exists q\in \mathcal{N}_\delta \cap (B_{2\delta}(p)\setminus \{p\})~|v_{m+1}'(p)(\omega)\cdot v_{m+1}'(q)(\omega)|>\epsilon\}.
\]
Note that for any $p,q\in \mathcal{N}_\delta$ distinct, we may compute
\[
\mathrm{Pr}(|v_{m+1}'(p)\cdot v_{m+1}'(q)|>\epsilon)= \mathbb{E}_p \mathrm{Pr}_q(|v_{m+1}'(p)\cdot v_{m+1}'(q)|>\epsilon)\le \exp\left(-\frac{\epsilon^2}{2}(D-m)\right),
\]
where the last inequality follows from a standard computation on the area of caps on the sphere (see Milman and Schechtman \cite[Chapter 2]{10.5555/21465} for instance). Therefore, we may estimate the probability of each $A_p$ using a union bound:
\begin{align*}
\mathrm{Pr}(A_p)&\le \sum_{q\in \mathcal{N}_\delta \cap (B_{2\delta}(p)\setminus \{p\})}\mathrm{Pr}(|v_{m+1}'(p)\cdot v_{m+1}'(q)|>\epsilon)\\
&\le |\mathcal{N}_\delta \cap (B_{2\delta}(p)\setminus \{p\})|\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right)\\
&\le K^2\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right).
\end{align*}
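The spherical cap bound invoked above, $\mathrm{Pr}(|u\cdot w|>\epsilon)\le \exp(-\frac{\epsilon^2}{2}d)$ for $w$ uniform on $\bbs^{d-1}$ with $d=D-m$, can be sanity-checked numerically using the standard fact that a single coordinate of a uniform point on $\bbs^{d-1}$ has density proportional to $(1-t^2)^{(d-3)/2}$ on $[-1,1]$ (the values of $d$ and $\epsilon$ below are illustrative):

```python
import math

def cap_prob(d, eps, steps=100000):
    """Pr(|x_1| > eps) for x uniform on the sphere S^{d-1}: the coordinate
    x_1 has density proportional to (1 - t^2)^((d-3)/2) on [-1, 1]."""
    f = lambda t: (1.0 - t * t) ** ((d - 3) / 2.0)

    def integrate(a, b):  # trapezoid rule
        h = (b - a) / steps
        s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, steps))
        return s * h

    # by the symmetry t -> -t it suffices to compare half-line integrals
    return integrate(eps, 1.0) / integrate(0.0, 1.0)
```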
Also note that for each $p\in \mathcal{N}_\delta$, $A_p$ is mutually independent of the collection of events $\{A_q:q\in \mathcal{N}_\delta \setminus B_{4\delta}(p)\}$, which are all the $A_q$'s except possibly $|\mathcal{N}_\delta\cap B_{4\delta}(p)|\le K^3$ of them. By the Lov\'{a}sz local lemma, we see that if
\begin{equation}\label{lovasz}
e\cdot K^3 \cdot K^2 \cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right)< 1,
\end{equation}
then for any finite subcollection $S\subset \mathcal{N}_\delta$ we have $\mathrm{Pr}(\bigcap_{p\in S}A_p^\mathsf{c})>0$. But by our choice of parameters $D-m\ge 224K^4\log K$ and $\epsilon=\frac{1}{4K^2}$, the condition \eqref{lovasz} is indeed satisfied, since the LHS is bounded by
\begin{align*}
e\cdot K^5\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right)\le e\cdot K^5\cdot \exp\left(-7\log K\right)<1\quad(\because ~7\log K>5\log K+1\mbox{ as }K\ge 2).
\end{align*}
Hence, for any finite subcollection $S\subset \mathcal{N}_\delta$ we have $\mathrm{Pr}(\bigcap_{p\in S}A_p^\mathsf{c})>0$, and in particular $\bigcap_{p\in S}A_p^\mathsf{c}\neq \emptyset$. We can thus find an assignment $\{v_{m+1,S}'(p)\}_{p\in S}$ such that for any distinct $p,q\in S$ with $d(p,q)<2\delta$ we have $|v_{m+1,S}'(p)\cdot v_{m+1,S}'(q)|\le \epsilon$. By taking an arbitrary enumeration of $\mathcal{N}_\delta$, taking a monotone increasing sequence of $S$'s that cover $\mathcal{N}_\delta$ and passing to a limit along a nonprincipal ultrafilter, we conclude the existence of an assignment $\{v_{m+1}'(p)\}_{p\in \mathcal{N}_\delta}$ such that for any distinct $p,q\in \mathcal{N}_\delta$ with $d(p,q)<2\delta$ we have $|v_{m+1}'(p)\cdot v_{m+1}'(q)|\le \epsilon$.
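The arithmetic behind \eqref{lovasz} can also be checked numerically over a range of doubling constants (a sanity check only; the dimension gap $D-m$ is taken to be exactly the threshold $224K^4\log K$, at which the exponent equals $7\log K$):

```python
import math

def lll_lhs(K):
    """Left-hand side of the local-lemma condition e*K^3*K^2*exp(-(eps^2/2)(D-m))
    with eps = 1/(4K^2) and the threshold dimension gap D - m = 224 K^4 log K."""
    eps = 1.0 / (4 * K ** 2)
    gap = 224 * K ** 4 * math.log(K)
    return math.e * K ** 3 * K ** 2 * math.exp(-(eps ** 2 / 2) * gap)
```

At the threshold the value is $eK^5\cdot K^{-7}=e/K^2$, which is below $1$ for every $K\ge 2$.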
We now `interpolate' the discrete vector field $\{v_{m+1}'(p)\}_{p\in \mathcal{N}_\delta}$ to produce a vector field $\{\tilde{v}_{m+1}(p)\}_{p\in X} $ defined on the entirety of $X$, which nearly has the desired properties. We first construct a `quadratic' partition of unity $\{\phi_q\}_{q\in \mathcal{N}_\delta}$, i.e., functions $\phi_q:X\to [0,1]$ defined for each $q\in \mathcal{N}_\delta$ such that
\begin{itemize}
\item $\mathrm{supp~} \phi_q\subset B_{2\delta}(q)$, $q\in \mathcal{N}_\delta$,
\item $\sum_{q\in \mathcal{N}_\delta}\phi_q^2=1$ on $X$,
\item $\|\phi_q\|_{\mathrm{Lip}}\le 2K^2\delta^{-1}$, $q\in \mathcal{N}_\delta$.
\end{itemize}
Indeed, we start by defining for each $q\in \mathcal{N}_\delta$ the function
\[
\tilde{\phi}_q(p)=
\begin{cases}
1,& d(p,q)\le \delta,\\
2-\frac{d(p,q)}{\delta},&\delta<d(p,q)\le 2\delta,\\
0,& d(p,q)>2\delta,
\end{cases}
\]
and then define
\[
\phi_q\coloneqq \frac{\tilde{\phi}_q}{\sqrt{\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2}}.
\]
This obviously satisfies the first two properties, and it remains to compute $\|\phi_q\|_{\mathrm{Lip}}$. We clearly have $\|\tilde{\phi}_q\|_{C^0}\le 1$, $\|\tilde{\phi}_q\|_{\mathrm{Lip}}\le \delta^{-1}$.
We first observe that
\[
\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2\ge 1\quad\mathrm{and}\quad
\left\|\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2\right\|_{\mathrm{Lip}}\le (4K^2-2)\delta^{-1}.
\]
Indeed, the first follows from the fact that $\mathcal{N}_\delta$ is a maximal $\delta$-net. For the second property, fix any $p,p'\in X$ with $p\neq p'$. If $B_{2\delta}(p)\cap B_{2\delta}(p')\cap \mathcal{N}_\delta=\emptyset$ then we must have $d(p,p')\ge \delta$, because otherwise $B_{2\delta}(p)\cap B_{2\delta}(p')\cap \mathcal{N}_\delta\supseteq B_\delta(p)\cap \mathcal{N}_\delta\neq\emptyset$, by maximality of $\mathcal{N}_\delta$. We have
\[
1\le \sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p)^2=\sum_{r\in B_{2\delta}(p)\cap \mathcal{N}_\delta}\tilde{\phi}_r(p)^2\le |B_{2\delta}(p)\cap \mathcal{N}_\delta|\le K^2,
\]
and similarly
\[
1\le \sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p')^2\le K^2.
\]
Therefore, in this case,
\[
\frac{\left|\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p)^2-\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p')^2\right|}{d(p,p')}\le (K^2-1)\delta^{-1}\le (4K^2-2)\delta^{-1}.
\]
On the other hand, if $B_{2\delta}(p)\cap B_{2\delta}(p')\cap \mathcal{N}_\delta\neq \emptyset$ then clearly $|(B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta|\le |B_{2\delta}(p)\cap \mathcal{N}_\delta|+| B_{2\delta}(p')\cap \mathcal{N}_\delta|-1\le 2K^2-1$. We have
\[
\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p)^2=\sum_{r\in B_{2\delta}(p)\cap \mathcal{N}_\delta}\tilde{\phi}_r(p)^2=\sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta}\tilde{\phi}_r(p)^2
\]
and similarly
\[
\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p')^2=\sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta}\tilde{\phi}_r(p')^2,
\]
so, noting that $|\tilde{\phi}_r(p)^2-\tilde{\phi}_r(p')^2|\le 2|\tilde{\phi}_r(p)-\tilde{\phi}_r(p')|\le 2\|\tilde{\phi}_r\|_{\mathrm{Lip}} d(p,p')\le 2\delta^{-1}d(p,p')$,
\begin{align*}
\left|\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p)^2-\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r(p')^2\right|&\le \sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta} |\tilde{\phi}_r(p)^2-\tilde{\phi}_r(p')^2|\\
&\le |(B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta|\cdot 2 \delta^{-1}d(p,p')\\
&\le (2K^2-1)\cdot 2 \delta^{-1}d(p,p').
\end{align*}
This finishes the verification that $\left\|\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2\right\|_{\mathrm{Lip}}\le (4K^2-2)\delta^{-1}$.
By \eqref{reciprocal} and \eqref{squareroot}, we have
\[
\left\|\frac{1}{\sqrt{\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2}}\right\|_{\mathrm{Lip}}\le\left\|\sqrt{\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2}\right\|_{\mathrm{Lip}}\le \frac 12 \left\|\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2\right\|_{\mathrm{Lip}}\le (2K^2-1)\delta^{-1}.
\]
Thus, by the definition of $\phi_q$ and \eqref{first-alg}, we have
\begin{align*}
\|\phi_q\|_{\mathrm{Lip}}&\le \left\|\frac{1}{\sqrt{\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2}}\right\|_{C^0}\left\|\tilde{\phi}_q\right\|_{\mathrm{Lip}}+\left\|\frac{1}{\sqrt{\sum_{r\in \mathcal{N}_\delta}\tilde{\phi}_r^2}}\right\|_{\mathrm{Lip}}\left\|\tilde{\phi}_q\right\|_{C^0}\\
&\le 1\cdot \delta^{-1}+(2K^2-1)\delta^{-1}\cdot 1\le 2K^2\delta^{-1}.
\end{align*}
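The construction of the quadratic partition $\{\phi_q\}$ can be illustrated concretely on a doubling space as simple as an interval of $\bbr$ with net $\delta\mathbb{Z}$ (a sketch; the interval, the value of $\delta$, and the crude finite-difference Lipschitz estimate in the usage below are illustrative choices):

```python
import math

delta = 0.1
net = [i * delta for i in range(-40, 41)]  # a maximal delta-net of [-4, 4]

def phi_tilde(q, p):
    """The piecewise-linear bump at q, supported in B_{2 delta}(q)."""
    d = abs(p - q)
    if d <= delta:
        return 1.0
    if d <= 2 * delta:
        return 2.0 - d / delta
    return 0.0

def phi(q, p):
    """Normalized bump, so that sum_q phi(q, p)^2 = 1 at every p."""
    s = math.sqrt(sum(phi_tilde(r, p) ** 2 for r in net))
    return phi_tilde(q, p) / s
```

The real line is $2$-doubling (an interval of radius $2r$ is covered by two intervals of radius $r$), so the Lipschitz bound $2K^2\delta^{-1}$ can be tested against a finite-difference estimate of $\|\phi_q\|_{\mathrm{Lip}}$.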
We now interpolate the vectors $\{v_{m+1}'(q)\}_{q\in \mathcal{N}_\delta}$ using the quadratic partition of unity $\{\phi_q\}_{q\in \mathcal{N}_\delta}$:
\begin{equation}\label{interpolate}
\tilde{v}_{m+1}(p)\coloneqq\sum_{q\in \mathcal{N}_\delta}\phi_q(p)v_{m+1}'(q),\quad p\in X.
\end{equation}
The idea is that $\tilde{v}_{m+1}$ interpolates nearby $v'_{m+1}$'s, which are mutually almost orthogonal, so $\tilde{v}_{m+1}$ should be nearly a unit vector. Moreover, since the 1-Lipschitz functions $v_1,\cdots,v_m$ vary slowly over distances of order $\delta$, $\tilde{v}_{m+1}$ should also be nearly orthogonal to $v_1,\cdots,v_m$. This $\tilde{v}_{m+1}$ will oscillate much faster than $v_1,\cdots,v_m$, but its Lipschitz constant will be controlled by $K$ and $\delta$.
More precisely, we claim that $\tilde{v}_{m+1}$ satisfies the following quantitative estimates:
\begin{enumerate}
\item $\frac 34\le |\tilde{v}_{m+1}(p)|^2\le \frac 54$ for each $p\in X$,
\item $|\tilde{v}_{m+1}(p)\cdot v_i(p)|\le \frac 1{4m}$ for each $p\in X$ and $i=1,\cdots,m$,
\item $\|\tilde{v}_{m+1}\|_{\mathrm{Lip}}\le 2K^2(2K^2-1)\delta^{-1}$.
\end{enumerate}
Indeed, by the support property of $\phi_q$, we see that the summation in \eqref{interpolate} is locally finite:
\[
\tilde{v}_{m+1}(p)=\sum_{q\in \mathcal{N}_\delta\cap B_{2\delta}(p)}\phi_q(p)v_{m+1}'(q).
\]
We can verify property (1) using the near-orthogonality and the fact that the sum of squares of the $\phi_q$'s is 1:
\begin{align*}
\left||\tilde{v}_{m+1}(p)|^2-1\right|&=\left|\sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{2\delta}(p) \\ q\neq q'}}\phi_q(p)\phi_{q'}(p)v_{m+1}'(q)\cdot v_{m+1}'(q')\right|\\
&\le \sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{2\delta}(p) \\ q\neq q'}}\phi_q(p)\phi_{q'}(p)\left|v_{m+1}'(q)\cdot v_{m+1}'(q')\right|\\
&\le \sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{2\delta}(p) \\ q\neq q'}}\frac{\phi_q(p)^2+\phi_{q'}(p)^2}{2}\epsilon\\
&\le |\mathcal{N}_\delta\cap B_{2\delta}(p)|\epsilon\le K^2\epsilon=\frac 14.
\end{align*}
To verify property (2), we see that for each $p\in X$ and $i=1,\cdots, m$,
\[
\tilde{v}_{m+1}(p)\cdot v_i(p) =\sum_{q\in \mathcal{N}_\delta\cap B_{2\delta}(p)}\phi_q(p)v_{m+1}'(q)\cdot v_i(p).
\]
We observe that for each $q\in \mathcal{N}_\delta\cap B_{2\delta}(p)$, we can estimate
\begin{align*}
|v_{m+1}'(q)\cdot v_i(p)|&\le |v_{m+1}'(q)\cdot v_i(q)|+|v_{m+1}'(q)\cdot (v_i(p)-v_i(q))|\\
&\le 0+ | v_i(p)-v_i(q)|\le 2\delta,
\end{align*}
where we have used the orthogonality of $v_{m+1}'(q)$ against $v_i(q)$ and the fact that $\| v_i\|_{\mathrm{Lip}}\le 1$. By these facts and Cauchy--Schwarz, we have the bound
\begin{align*}
|\tilde{v}_{m+1}(p)\cdot v_i(p)| \le \sum_{q\in \mathcal{N}_\delta\cap B_{2\delta}(p)}\phi_q(p)\cdot 2\delta\le |\mathcal{N}_\delta\cap B_{2\delta}(p)|^{1/2}\cdot 2\delta\le K\cdot 2\delta=\frac 1{4m}.
\end{align*}
To verify (3), we first fix $p,p'\in X$ with $p\neq p'$. If $B_{2\delta}(p)\cap B_{2\delta}(p')\cap \mathcal{N}_\delta=\emptyset$ then we must have $d(p,p')\ge \delta$, because otherwise $B_{2\delta}(p)\cap B_{2\delta}(p')\cap \mathcal{N}_\delta\supseteq B_\delta(p)\cap \mathcal{N}_\delta\neq\emptyset$. By (1), we have $|\tilde{v}_{m+1}(p)|,|\tilde{v}_{m+1}(p')|\le \frac{\sqrt{5}}{2}$, so
\[
\frac{|\tilde{v}_{m+1}(p)-\tilde{v}_{m+1}(p')|}{d(p,p')}\le \frac{|\tilde{v}_{m+1}(p)|+|\tilde{v}_{m+1}(p')|}{\delta}\le \sqrt{5}\delta^{-1}\le 2K^2(2K^2-1)\delta^{-1},
\]
where the last inequality holds since $K\ge 2$.
On the other hand, if $B_{2\delta}(p)\cap B_{2\delta}(p')\cap \mathcal{N}_\delta\neq \emptyset$ then clearly $|(B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta|\le |B_{2\delta}(p)\cap \mathcal{N}_\delta|+| B_{2\delta}(p')\cap \mathcal{N}_\delta|-1\le 2K^2-1$. We have
\[
\tilde{v}_{m+1}(p)=\sum_{r\in B_{2\delta}(p)\cap \mathcal{N}_\delta}\phi_r(p)v_{m+1}'(r)=\sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta}\phi_r(p)v_{m+1}'(r)
\]
and similarly
\[
\tilde{v}_{m+1}(p')=\sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta}\phi_r(p')v_{m+1}'(r),
\]
so
\begin{align*}
\left|\tilde{v}_{m+1}(p)-\tilde{v}_{m+1}(p')\right|&\le \sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta}|\phi_r(p)-\phi_r(p')|\cdot |v_{m+1}(r)|\\
&\le \sum_{r\in (B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta}\|\phi_r\|_{\mathrm{Lip}}d(p,p')\\
&\le |(B_{2\delta}(p)\cup B_{2\delta}(p'))\cap \mathcal{N}_\delta|\cdot 2K^2\delta^{-1}d(p,p')\\
&\le 2K^2(2K^2-1)\delta^{-1}d(p,p').
\end{align*}
This finishes the verification of properties (1) to (3).
Properties (1)-(3) above finally allow us to use the Gram-Schmidt orthogonalization process to obtain a true $v_{m+1}$ with the desired properties:
\[
v_{m+1}(p):=\frac{\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)}{\left|\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)\right|},\quad p\in X.
\]
This is well-defined because
\[
\left|\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)\right|\ge |\tilde{v}_{m+1}(p)|-\sum_{i=1}^m |\tilde{v}_{m+1}(p)\cdot v_i(p)|\ge \frac{\sqrt{3}}{2}-\frac 14>0,
\]
and clearly forms an orthonormal system along with $v_1(p),\cdots,v_m(p)$. We also note that, since $\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)$ is the orthogonal projection of $\tilde{v}_{m+1}(p)$ onto the orthogonal complement of $\mathrm{span}\{v_1(p),\cdots,v_m(p)\}$,
\[
\left|\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)\right|\le |\tilde{v}_{m+1}(p)|\le \frac{\sqrt{5}}{2}.
\]
To check the Lipschitz regularity of $v_{m+1}$ we first compute (recalling $\delta^{-1}=8Km$)
\begin{align*}
\left\|\tilde{v}_{m+1}-\sum_{i=1}^m (\tilde{v}_{m+1}\cdot v_i)v_i\right\|_{\mathrm{Lip}}&\le \left\|\tilde{v}_{m+1}\right\|_{\mathrm{Lip}}+\sum_{i=1}^m \left(\left\|\tilde{v}_{m+1}\cdot v_i\right\|_{\mathrm{Lip}}\left\| v_i\right\|_{C^0}+\left\|\tilde{v}_{m+1}\cdot v_i\right\|_{C^0}\left\| v_i\right\|_{\mathrm{Lip}}\right)\\
&\le 2K^2(2K^2-1)\delta^{-1}+m\Big(\frac{\sqrt{5}}{2}+2K^2(2K^2-1)\delta^{-1}+\frac 1{4m}\Big)\\
&= 2K^2\delta^{-1}\Big(2K^2-1+\frac{\sqrt{5}}{32K^3}+\frac{1}{64K^3m}+2K^2m-m\Big)\\
&\le 4K^4\delta^{-1}(m+1)=32K^5m(m+1),
\end{align*}
where we used $\left\|\tilde{v}_{m+1}\cdot v_i\right\|_{\mathrm{Lip}}\le\left\|\tilde{v}_{m+1}\right\|_{C^0}\left\| v_i\right\|_{\mathrm{Lip}}+\left\|\tilde{v}_{m+1}\right\|_{\mathrm{Lip}}\left\| v_i\right\|_{C^0}\le \frac{\sqrt{5}}{2}+2K^2(2K^2-1)\delta^{-1}$ and, by property (2), $\left\|\tilde{v}_{m+1}\cdot v_i\right\|_{C^0}\le \frac 1{4m}$.
By \eqref{unit-alg} we finally have
\begin{align*}
\|v_{m+1}\|_{\mathrm{Lip}}&\le \Big(\big(\frac{\sqrt{3}}{2}-\frac 14\big)^{-1}+\big(\frac{\sqrt{3}}{2}-\frac 14\big)^{-2}\cdot\frac{\sqrt{5}}{2}\Big) \left\|\tilde{v}_{m+1}-\sum_{i=1}^m (\tilde{v}_{m+1}\cdot v_i)v_i\right\|_{\mathrm{Lip}}\\
&\le \Big(\big(\frac{\sqrt{3}}{2}-\frac 14\big)^{-1}+\big(\frac{\sqrt{3}}{2}-\frac 14\big)^{-2}\cdot\frac{\sqrt{5}}{2}\Big)\cdot 32K^5m(m+1)\\
&\le 150K^5m(m+1).
\end{align*}
\end{proof}
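The final projection-and-normalization step of the proof, together with the quantitative bounds (1)-(2) that make it well defined, is easy to check numerically at a single point (the concrete vectors below are illustrative, chosen so that $\frac 34\le|\tilde{v}_{m+1}|^2\le\frac 54$ and $|\tilde{v}_{m+1}\cdot v_i|\le\frac 1{4m}$ with $m=2$):

```python
import math

def gram_schmidt_step(vs, v_tilde):
    """Project v_tilde onto the orthogonal complement of the orthonormal
    list vs, then normalize; return the unit vector and the norm before
    normalization (which should be at least sqrt(3)/2 - 1/4)."""
    w = list(v_tilde)
    for v in vs:
        c = sum(a * b for a, b in zip(v_tilde, v))
        w = [wi - c * vi for wi, vi in zip(w, v)]
    n = math.sqrt(sum(x * x for x in w))
    return [x / n for x in w], n
```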
\begin{remark}
In Theorem \ref{Cj-lifting-lattice}, the required dimension gap $D-m-1$ is precisely the topological dimension of the given domain $G$. In contrast, in Theorem \ref{Lip-lifting} the dimension gap is $224 K^4\log K$, which is exponential in the `metric dimension' $\log K$ of the given domain $X$. Note that a dimension gap of $\Omega(\log K)$ is necessary: take for example $X=\bbs^{2n}$, $D=2n+1$, $m=1$, and $v_1:\bbs^{2n}\hookrightarrow\bbr^{2n+1}$ the standard inclusion. Then the log of the doubling constant for $X$ is a universal constant multiple of $n$, and the dimension gap is $D-m-1=2n-1$. However, the high-dimensional hairy ball theorem tells us that no continuous orthonormal extension is possible, let alone a Lipschitz orthonormal extension.
Perhaps one could reduce the dimension gap in Theorem \ref{Lip-lifting}, say by randomizing the proof using the machinery of random nets and partitions and their padding properties as in \cite{naor2010assouad}, though reducing it to somewhere near $\Omega(\log K)$ seems to require more effort (or perhaps it is impossible, but we do not have a counterexample yet).
\end{remark}
We now state and prove a version of Theorem \ref{Cj-lifting-lattice} but for general Carnot groups, using the idea of proof of Theorem \ref{Lip-lifting}.
\begin{theorem}\label{Cj-lifting}
Let $1\le m\le D-2^{4n_h+7}n_h$, $j\ge 1$, and let $\{R_i\}_{i=1}^j$ be a log-concave sequence of positive reals. Let $v_1,\cdots,v_m:G\to \bbs^{D-1}$ form an orthonormal system at each point, with the uniform regularity bound
\[
\sum_{k=1}^j R_k\|\nabla^k v_i\|_{C^0}\le 1,\quad i=1,\cdots, m.
\]
Then there exists $v_{m+1}:G\to \bbs^{D-1}$ such that $v_1,\cdots,v_m$ along with $v_{m+1}$ form an orthonormal system at each point, and
\[
\sum_{k=1}^j R_k\|\nabla^k v_{m+1}\|_{C^0}\lesssim_{G,m,j} 1.
\]
\end{theorem}
\begin{proof}
By a simple rescaling argument using the scaling map $\delta_{R_1}$, we may assume $R_1=1$. We repeat the proof of Theorem \ref{Lip-lifting} with a slight variation.
Take a maximal $\delta$-net $\mathcal{N}_\delta$ of $G$, where $\delta=\frac{1}{6\cdot 2^{n_h}m}$. Let $\epsilon=\frac{1}{4^{n_h+1}}$. We repeat the first part of the proof of Theorem \ref{Lip-lifting}, but instead using the estimates
\[
|\mathcal{N}_\delta \cap B_{1.5\delta}(p)|\le 4^{n_h},\quad |\mathcal{N}_\delta \cap B_{3\delta}(p)|\le 7^{n_h}
\]
which follow from \eqref{locbd}.
We again define the probability space $\Omega$ and random variables $\{v_{m+1}'(p)\in \bbs^{D-1}:p\in \mathcal{N}_\delta\}$, and for each $p\in \mathcal{N}_\delta$, we define the slightly different event
\[
A_p=\{\omega\in \Omega:\exists q\in \mathcal{N}_\delta \cap (B_{1.5\delta}(p)\setminus \{p\})~|v_{m+1}'(p)(\omega)\cdot v_{m+1}'(q)(\omega)|>\epsilon\}.
\]
By the same computations as before we have that for any $p,q\in \mathcal{N}_\delta$ distinct,
\[
\mathrm{Pr}(|v_{m+1}'(p)\cdot v_{m+1}'(q)|>\epsilon)\le \exp\left(-\frac{\epsilon^2}{2}(D-m)\right),
\]
so we again estimate the probability of each $A_p$ using a union bound:
\begin{align*}
\mathrm{Pr}(A_p)&\le |\mathcal{N}_\delta \cap (B_{1.5\delta}(p)\setminus \{p\})|\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right)\\
&\le 4^{n_h}\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right).
\end{align*}
Observing that for each $p\in \mathcal{N}_\delta$, $A_p$ is mutually independent of the collection of events $\{A_q:q\in \mathcal{N}_\delta \setminus B_{3\delta}(p)\}$, which are all the $A_q$'s except possibly $|\mathcal{N}_\delta\cap B_{3\delta}(p)|\le 7^{n_h}$ of them, we see that our choice of parameters $D-m\ge 2^{4n_h+7}n_h$ and $\epsilon=\frac{1}{4^{n_h+1}}$ allows us to apply the Lov\'{a}sz local lemma because
\begin{align*}
e\cdot 7^{n_h}\cdot 4^{n_h}\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right)&=e\cdot 28^{n_h}\cdot \exp\left(-\frac{\epsilon^2}{2}(D-m)\right)\\
&\le e\cdot 28^{n_h}\cdot \exp\left(-\frac{2^{4n_h+7}n_h}{2^{4n_h+5}}\right)\\
&<e^{4n_h}\cdot \exp\left(-4n_h\right)=1\quad(\because ~1+n_h\log 28<4n_h\mbox{ as }n_h\ge 2).
\end{align*}
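As with \eqref{lovasz}, this parameter choice can be checked numerically; at the threshold $D-m=2^{4n_h+7}n_h$ the exponent is exactly $\frac{\epsilon^2}{2}(D-m)=4n_h$ (a sanity check of the arithmetic only):

```python
import math

def lll_lhs_carnot(nh):
    """Local-lemma left-hand side e * 28^{nh} * exp(-(eps^2/2)(D-m)) with
    eps = 4^{-(nh+1)} and the threshold dimension gap D - m = 2^{4nh+7} nh."""
    eps = 4.0 ** (-(nh + 1))
    gap = 2 ** (4 * nh + 7) * nh
    return math.e * 28 ** nh * math.exp(-(eps ** 2 / 2) * gap)
```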
By the same limiting argument, we conclude the existence of an assignment $\{v_{m+1}'(p)\}_{p\in \mathcal{N}_\delta}$ such that for any distinct $p,q\in \mathcal{N}_\delta$ with $d(p,q)<1.5\delta$ we have $|v_{m+1}'(p)\cdot v_{m+1}'(q)|\le \epsilon$.
As in the previous proof, we now `interpolate' the discrete vector field $\{v_{m+1}'(p)\}_{p\in \mathcal{N}_\delta}$ to produce a vector field $\{\tilde{v}_{m+1}(p)\}_{p\in G} $ defined on the entirety of $G$, which nearly has the desired properties. It is not difficult to define a `quadratic' partition of unity $\{\phi_q\}_{q\in \mathcal{N}_\delta}$, i.e., functions $\phi_q:G\to [0,1]$ defined for each $q\in \mathcal{N}_\delta$ such that
\begin{itemize}
\item $\mathrm{supp~} \phi_q\subset B_{1.5\delta}(q)$, $q\in \mathcal{N}_\delta$,
\item $\sum_{q\in \mathcal{N}_\delta}\phi_q^2=1$ on $G$,
\item $\|\phi_q\|_{C^0}+\sum_{k=1}^j R_k\|\nabla^k\phi_q\|_{C^0}\le \|\phi_q\|_{C^{j+1}}\lesssim_{G,m} 1$, uniformly over $q\in \mathcal{N}_\delta$.
\end{itemize}
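One standard way to obtain such a partition is as follows: take smooth bumps $\eta_q:G\to [0,1]$ with $\eta_q= 1$ on $B_{\delta}(q)$ and $\mathrm{supp~}\eta_q\subset B_{1.5\delta}(q)$, and set
\[
\phi_q\coloneqq \frac{\eta_q}{\left(\sum_{q'\in \mathcal{N}_\delta}\eta_{q'}^2\right)^{1/2}},\quad q\in \mathcal{N}_\delta.
\]
The denominator is at least $1$ because the balls $\{B_\delta(q)\}_{q\in \mathcal{N}_\delta}$ cover $G$ by maximality of the net, and the sum is locally finite by \eqref{locbd}, so the stated support, normalization, and regularity properties follow.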
We now interpolate the $\{v_{m+1}'(p)\}_{p\in \mathcal{N}_\delta}$ using $\{\phi_q\}_{q\in \mathcal{N}_\delta}$:
\begin{equation}\label{interpolate2}
\tilde{v}_{m+1}(p)=\sum_{q\in \mathcal{N}_\delta}\phi_q(p)v_{m+1}'(q),\quad p\in G.
\end{equation}
We claim that $\tilde{v}_{m+1}$ nearly satisfies the required properties, namely it satisfies
\begin{enumerate}
\item $\frac{3}{4}\le |\tilde{v}_{m+1}(p)|^2\le \frac 54$ for each $p\in G$,
\item $|\tilde{v}_{m+1}(p)\cdot v_i(p)|\le \frac 1{4m}$ for each $p\in G$ and $i=1,\cdots,m$,
\item $\|\tilde{v}_{m+1}\|_{C^0}+\sum_{k=1}^j R_k\|\nabla^k\tilde{v}_{m+1}\|_{C^0}\lesssim_{G,j,m} 1$.
\end{enumerate}
Indeed, by the support property of $\phi_q$, we see that the summation in \eqref{interpolate2} is locally finite:
\[
\tilde{v}_{m+1}(p)=\sum_{q\in \mathcal{N}_\delta\cap B_{1.5\delta}(p)}\phi_q(p)v_{m+1}'(q).
\]
From this, property (3) above immediately follows. We can verify property (1) using the near-orthogonality and the fact that the sum of squares of the $\phi_q$'s is 1:
\begin{align*}
\left||\tilde{v}_{m+1}(p)|^2-1\right|&=\left|\sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{1.5\delta}(p) \\ q\neq q'}}\phi_q(p)\phi_{q'}(p)v_{m+1}'(q)\cdot v_{m+1}'(q')\right|\\
&\le \sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{1.5\delta}(p) \\ q\neq q'}}\phi_q(p)\phi_{q'}(p)\left|v_{m+1}'(q)\cdot v_{m+1}'(q')\right|\\
&\le \sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{1.5\delta}(p) \\ q\neq q'}}\frac{\phi_q(p)^2+\phi_{q'}(p)^2}{2}\epsilon\\
&\le |\mathcal{N}_\delta\cap B_{1.5\delta}(p)|\epsilon\le 4^{n_h}\epsilon=\frac 14.
\end{align*}
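Here the first equality used that $\sum_{q}\phi_q(p)^2=1$ and that each $v_{m+1}'(q)$ is a unit vector, so that
\[
|\tilde{v}_{m+1}(p)|^2=\sum_{q,q'\in \mathcal{N}_\delta\cap B_{1.5\delta}(p)}\phi_q(p)\phi_{q'}(p)v_{m+1}'(q)\cdot v_{m+1}'(q')=1+\sum_{\substack{q,q'\in \mathcal{N}_\delta\cap B_{1.5\delta}(p) \\ q\neq q'}}\phi_q(p)\phi_{q'}(p)v_{m+1}'(q)\cdot v_{m+1}'(q'),
\]
i.e., the diagonal terms sum to exactly $1$ and only the cross terms survive in the difference.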
Finally, we verify property (2). We have, for each $p\in G$ and $i=1,\cdots, m$,
\[
\tilde{v}_{m+1}(p)\cdot v_i(p) =\sum_{q\in \mathcal{N}_\delta\cap B_{1.5\delta}(p)}\phi_q(p)v_{m+1}'(q)\cdot v_i(p).
\]
We observe that for each $q\in \mathcal{N}_\delta\cap B_{1.5\delta}(p)$, we can estimate
\begin{align*}
|v_{m+1}'(q)\cdot v_i(p)|&\le |v_{m+1}'(q)\cdot v_i(q)|+|v_{m+1}'(q)\cdot (v_i(p)-v_i(q))|\\
&\le 0+ | v_i(p)-v_i(q)|\le 1.5\delta,
\end{align*}
where we have used the orthogonality of $v_{m+1}'(q)$ against $v_i(q)$ and the fact that $\|\nabla v_i\|_{C^0}\le 1$. From these facts and Cauchy--Schwarz, we have the bound
\begin{align*}
|\tilde{v}_{m+1}(p)\cdot v_i(p)| \le \sum_{q\in \mathcal{N}_\delta\cap B_{1.5\delta}(p)}\phi_q(p)\cdot 1.5\delta\le |\mathcal{N}_\delta\cap B_{1.5\delta}(p)|^{1/2}\cdot 1.5\delta\le 2^{n_h}\cdot 1.5\delta=\frac 1{4m}.
\end{align*}
This finishes the verification of properties (1) to (3). Now we can use Gram-Schmidt orthogonalization to obtain a true $v_{m+1}$:
\[
v_{m+1}(p):=\frac{\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)}{|\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)|},\quad p\in G.
\]
This is well-defined because
\[
|\tilde{v}_{m+1}(p)-\sum_{i=1}^m (\tilde{v}_{m+1}(p)\cdot v_i(p))v_i(p)|\ge |\tilde{v}_{m+1}(p)|-\sum_{i=1}^m |\tilde{v}_{m+1}(p)\cdot v_i(p)|\ge \frac{\sqrt{3}}{2}-\frac 14>0,
\]
and clearly forms an orthonormal system along with $v_1(p),\cdots,v_m(p)$. The required regularity of $v_{m+1}$ follows from the algebra property for norms defined by log-concave sequences.
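In the lower bound above we used properties (1) and (2) of $\tilde{v}_{m+1}$ together with $|v_i(p)|=1$:
\[
|\tilde{v}_{m+1}(p)|\ge \sqrt{\frac 34}=\frac{\sqrt{3}}{2},\qquad \sum_{i=1}^m |\tilde{v}_{m+1}(p)\cdot v_i(p)|\le m\cdot \frac 1{4m}=\frac 14.
\]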
\end{proof}
From the proof of Theorem \ref{Cj-lifting}, we can easily see that the method of Theorem \ref{Lip-lifting} provides a partial positive answer to Question \ref{general-lift} for many other function spaces as well, as described in the following theorem.
\begin{theorem}\label{partialpositiveanswer}
Let $(X,d)$ be a $K$-doubling metric space ($K\ge 2$), and let $\mathcal{F}_1\subset \mathrm{Lip}(X, \mathbb{R})\coloneqq \{f:X\to\mathbb{R}|f \mbox{ is Lipschitz}\}$, $\mathcal{F}\subset \mathrm{Lip}(X, \mathbb{R}^D)\coloneqq \{f:X\to\mathbb{R}^D|f \mbox{ is Lipschitz}\}$ be spaces of functions on $X$ such that
\begin{enumerate}
\item The function spaces $(\mathcal{F}_1,\|\cdot\|_{\mathcal{F}_1})$, $(\mathcal{F},\|\cdot\|_\mathcal{F})$ are normed linear spaces.
\item $\|f\|_{\mathrm{Lip}(X,\mathbb{R})} \lesssim \|f\|_{\mathcal{F}_1}$ for all $f \in \mathcal{F}_1$ and $\|v\|_{\mathrm{Lip}(X,\mathbb{R}^D)} \lesssim \|v\|_{\mathcal{F}}$ for all $v \in \mathcal{F}$.
\item (Closure under algebraic operations)
For $f\in \mathcal{F}_1,v,w\in \mathcal{F}$ we have $fv\in \mathcal{F}$ and $v\cdot w\in \mathcal{F}_1$ with
\[
\|fv\|_\mathcal{F}\lesssim \|f\|_{\mathcal{F}_1}\|v\|_\mathcal{F}, \quad \|v\cdot w\|_{\mathcal{F}_1}\lesssim \|v\|_\mathcal{F}\|w\|_\mathcal{F}.
\]
Also, if $f\in \mathcal{F}_1$, $f(x)\ge c>0$ for all $x\in X$, then $1/f \in \mathcal{F}_1$ with
\[
\|1/f\|_{\mathcal{F}_1}\lesssim_c \|f\|_{\mathcal{F}_1}.
\]
If $v\in \mathcal{F}$, $|v(x)|\ge c>0$ for all $x\in X$, then $|v|\in \mathcal{F}_1$ and
\[
\||v|\|_{\mathcal{F}_1}\lesssim_c \|v\|_{\mathcal{F}}.
\]
\item (Density in the space of Lipschitz functions) For any $\delta>0$ and $w \in \mathrm{Lip}(X, \mathbb{R}^D)$ with $\|w\|_{\mathrm{Lip}}\le 1$, there exists $v\in \mathcal{F}$ such that
\[
\|v-w\|_{L^\infty}<\delta,\quad \|v\|_{\mathcal{F}}\lesssim_\delta \|w\|_{L^\infty}+\|w\|_{\mathrm{Lip}}.
\]
\end{enumerate}
Let $1\le m\le D-224 K^4\log K$. If $v_i:X\to\bbs^{D-1}$, $\|v_i\|_{\mathcal{F}}\le 1$, $i=1,\cdots,m,$ form an orthonormal system at each point $p\in X$, then there exists $v_{m+1}:X\to \bbs^{D-1}$ with $\|v_{m+1}\|_\mathcal{F}\lesssim_{K,m,\mathcal{F}} 1$ such that $v_1,\cdots,v_m$ along with $v_{m+1}$ form an orthonormal system at each point $p \in X$.
\end{theorem}
In words, Theorem \ref{partialpositiveanswer} says that as long as the function space $\mathcal{F}$ behaves `better than' the space of Lipschitz functions in the sense of (2), is closed under the operations used in the Gram-Schmidt process in the sense of (3), and can approximate Lipschitz functions arbitrarily well in the sense of (4) (for Carnot groups we accomplished this by convolving with a smooth mollifier), then we have a positive answer to Question \ref{general-lift}.
By applying Theorem \ref{Cj-lifting} several times, we obtain the following.
\begin{corollary}\label{multiple-Cj-lifting}
Let $1\le m\le D-2^{4n_h+7}n_h-m'+1$, $j\ge 1$, and let $\{R_i\}_{i=1}^j$ be a log-concave sequence of positive reals. Let $v_1,\cdots,v_m:G\to \bbs^{D-1}$ form an orthonormal system at each point, with the uniform regularity bound
\[
\sum_{k=1}^j R_k\|\nabla^k v_i\|_{C^0}\le 1,\quad i=1,\cdots, m.
\]
Then there exist $v_{m+1},\cdots,v_{m+m'}:G\to \bbs^{D-1}$ such that $v_1,\cdots,v_m$ along with $v_{m+1},\cdots,v_{m+m'}$ form an orthonormal system at each point, and
\[
\sum_{k=1}^j R_k\|\nabla^k v_i\|_{C^0}\lesssim_{G,m,m',j} 1,\quad i=m+1,\cdots, m+m'.
\]
\end{corollary}
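For the reader's convenience, we indicate how the iteration closes. After $l$ applications of Theorem \ref{Cj-lifting} we hold an orthonormal system of $m+l$ vector fields, and the hypothesis of the theorem requires
\[
m+l\le D-2^{4n_h+7}n_h,\quad l=0,\cdots,m'-1,
\]
which is guaranteed by the assumption $m\le D-2^{4n_h+7}n_h-m'+1$. At each step the newly produced field satisfies the regularity bound only up to a constant $\lesssim_{G,m+l,j} 1$, so before the next application one rescales the log-concave sequence $\{R_i\}_{i=1}^j$ by this constant; unwinding the at most $m'$ rescalings yields the stated bound $\lesssim_{G,m,m',j} 1$.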
\begin{remark}
The need for a normal field in embedding problems dates back to Nash \cite{nash1954c1}, where the existence of a section was demonstrated using a homotopy argument from Steenrod \cite{steenrod1999topology} based on the fact that the base space is contractible. However, such an argument in this situation will fail to control the regularity of the section at points of the base far from the contraction point. In \cite[Section 8]{tao2018embedding} this was achieved for the Heisenberg group $\bbh^3$ by imposing a `uniform' CW-structure on $\bbh^3$ and then inductively defining the section starting from low-dimensional skeleta. In the inductive step in \cite{tao2018embedding}, one has to use the fact that the homotopy groups $\pi_i(\bbs^n)$ vanish for $i<n$, which necessitates the `dimension gap' $D-m-1\ge 3$. Here we have instead chosen the section over a 0-skeleton so that it is locally roughly an orthonormal set, and then obtained a global section by direct interpolation. This allows us to avoid the need for a CW structure, using only the doubling property of the base space, but we have thus increased the dimension gap exponentially.
\end{remark}
\section{Main Iteration Lemma}\label{sec:iterationlemma}
\setcounter{equation}{0}
The starting point of the iterative construction is a function that oscillates at a fixed scale while satisfying a suitable freeness property. (This will also be an ingredient in the inductive step of the iterative construction, when we pass from a larger scale $A^{m+1}$ down to a smaller scale $A^m$.) We begin by constructing this single oscillating function on the Carnot group $G$. (See also Proposition 5.2 of \cite{tao2018embedding} for an analogous statement.)
\begin{proposition}\label{FirstStep}
There exists a smooth map $\phi^0:G\to \bbr^{14^{n_h}}$ with the following properties.
\begin{enumerate}
\item (Smoothness) For all $j\ge 1$,
\[
\|\phi^0\|_{C^j}\lesssim_{G,j} 1.
\]
\item (Locally free embedding) For all $p\in G$, we have
\begin{equation}\label{firstfree}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\phi^0(p)\right|\ge 1.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Inspired by the Veronese-type embedding used in \cite{tao2018embedding}, we first begin with the function $\varphi^0:G\to \bigoplus_{r=1}^s \otimes^r \bbr^{n}=\bbr^{\sum_{r=1}^s\binom{n}{r}}$,
\[
\varphi^0(\exp(x))=\bigoplus_{r=1}^s \frac{1}{r!}\otimes^r x,\quad x\in \mathfrak{g},
\]
where in the image we identify $\mathfrak{g}$ with $\bbr^n$ via $\sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i}X_{r,i}\leftrightarrow \sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i}f_{r,i}$. In these coordinates, we would have
\[
\varphi^0\left(\exp\left(\sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i}X_{r,i}\right)\right)=\sum_{r=1}^s \frac{1}{r!}\otimes^r \left(\sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i}f_{r,i}\right),\quad x_{r,i}\in \bbr.
\]
Recall that
\[
X_{r,i}=\frac{\partial}{\partial x_{r,i}}+\sum_{r'>r}^s\sum_{j=1}^{k_{r'}}(\mbox{polynomial in }\{x_{r'',i'}\}_{r''<r'}\mbox{ of weighted degree }r'-r)\frac{\partial}{\partial x_{r',j}},
\]
and recall that $X_{r,i}$ acts on polynomials by reducing the weighted degree by $r$. Consequently, for each $(r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})$, the differential operator $X_{r_1,j_1} \cdots X_{r_{m},j_{m}}$ reduces weighted degrees by $\sum_{i=1}^m r_i$, and so
\begin{align*}
X_{r_1,j_1} \cdots X_{r_{m},j_{m}}\varphi^0=&f_{r_1,j_1}\otimes \cdots \otimes f_{r_{m},j_{m}}\\
&+\sum_{\substack{(r'_1,j'_1)\preceq \cdots\preceq (r'_{m'},j'_{m'}) \\ r'_1+\cdots+r'_{m'}> r_1+\cdots+r_m \\ m'\le s}}(\mbox{polynomial of degree }\Sigma r'_i-\Sigma r_i)f_{r'_1,j'_1}\otimes \cdots \otimes f_{r'_{m'},j'_{m'}}.
\end{align*}
Thus, when taking the wedge product of all these differentials, we can rearrange the differentials in order of their degree and cancel the higher degree terms. This leads to
\[
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\varphi^0(p)\right|=\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}} f_{r_1,j_1}\otimes \cdots \otimes f_{r_{m},j_{m}}\right|= 1.
\]
Hence, we can create a mapping with the required freeness property. It remains to modify this construction so that we have bounded $C^j$ norms as well. This can be done by viewing $\varphi^0$ as a mapping that works locally around the origin and then pasting together mollifications of $\varphi^0$, using the doubling property of $G$ to ensure that the pasting process only increases the $C^j$ norm by a bounded factor. (This construction is inspired by \cite{assouad1983plongements}.)
Take a smooth function $\eta:G\to [0,1]$ which is identically 1 on the unit ball $B_1$ and which vanishes on $B_{1.5}^c$. Then the function $\varphi^1:G\to \bbr^{\sum_{r=1}^s\binom{n}{r}}$ defined by $\varphi^1=\eta \varphi^0$ has bounded $C^j$ norm for all $j\ge 1$, and satisfies $\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\varphi^1\right|= 1$ on $B_1$. Now take a maximal 1-net $\mathcal{N}_1$ of $G$. We claim that we can decompose
\[
\mathcal{N}_1=\bigsqcup_{a=1}^{7^{n_h}}\mathcal{N}^a_3,
\]
where $n_h$ is the Hausdorff dimension of $G$, and each $\mathcal{N}^a_3$ is a 3-net of $G$. (This is a standard coloring argument.) Indeed, each point $g\in \mathcal{N}_1$ has at most $7^{n_h}-1$ points in $\mathcal{N}_1$ in its 3-neighborhood by \eqref{locbd}; having inductively assigned a finite number of points of $\mathcal{N}_1$ to one of the $7^{n_h}$ sets $\{\mathcal{N}^a_3\}_{a=1}^{7^{n_h}}$, any other point of $\mathcal{N}_1$ can be assigned to one of them consistently.
We now define $\phi^0:G\to\bbr^{7^{n_h}\sum_{r=1}^s\binom{n}{r}}$ as
\[
\phi^0(p)\coloneqq \bigoplus_{a=1}^{7^{n_h}}\sum_{g\in \mathcal{N}^a_3}\varphi^1(g^{-1}p),\quad p\in G.
\]
Then this satisfies the given properties because for each $a=1,\cdots,7^{n_h}$, the function $\sum_{g\in \mathcal{N}^a_3}\varphi^1(g^{-1}p)$ is a sum of smooth compactly supported functions whose supports $gB_{1.5}$ are disjoint, so the sum $\sum_{g\in \mathcal{N}^a_3}\varphi^1(g^{-1}p)$ has bounded $C^j$ norm and we have the wedge product bound
\[
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\sum_{g\in \mathcal{N}^a_3}\varphi^1(g^{-1}p)\right|\ge 1,\quad p\in \mathcal{N}^a_3B_1.
\]
As $\{\mathcal{N}^a_3B_1\}_{a=1}^{7^{n_h}}$ covers $G$, we see that $\phi^0$ satisfies \eqref{firstfree}. As $\sum_{r=1}^s\binom{n}{r}\le 2^n\le 2^{n_h}$, we are done by composing $\phi^0$ with an embedding $\bbr^{7^{n_h}\sum_{r=1}^s\binom{n}{r}}\to \bbr^{14^{n_h}}$.
\end{proof}
\begin{remark}
The method of proof of Proposition \ref{FirstStep} in \cite{tao2018embedding} for the case $G=\bbh^3$, or more generally for the case when $G$ admits a cocompact lattice $\Gamma$, is the following. Since the nilmanifold $G/\Gamma$ is a smooth compact $n$-dimensional manifold, by the strong Whitney immersion theorem \cite{whitney1944singularities} there exists a smooth immersion $G/\Gamma\to \bbr^{2n-1}$. By precomposing with the projection $G\to G/\Gamma$, one obtains a map $\varphi^1:G\to \bbr^{2n-1}$ that satisfies the weaker freeness property
\[
\left|\bigwedge_{r=1}^s\bigwedge_{i=1}^{k_r}X_{r,i}\varphi^1\right|\gtrsim 1
\]
while having bounded $C^j$ norms due to the compactness of $G/\Gamma$. We can then obtain the stronger freeness property \eqref{firstfree} by composing $\varphi^1$ with a Veronese-type embedding, say $\phi^1:G\to\bigoplus_{r=1}^s \otimes^r \bbr^{2n-1}=\bbr^{\sum_{r=1}^s\binom{2n-1}{r}}$, where
\[
\phi^1=\bigoplus_{r=1}^s \frac{1}{r!}(\varphi^1)^{\otimes r}.
\]
(One can indeed prove the stronger freeness property by using a simple change of coordinates argument.) There is an exponential saving in the target dimension: the target dimension, in this case, is polynomial in the topological dimension of $G$, whereas the target dimension in Proposition \ref{FirstStep} is exponential in the Hausdorff dimension of $G$. Perhaps one could improve the target dimension in Proposition \ref{FirstStep} to be polynomial in the Hausdorff dimension of $G$, say by using random nets and partitions as in \cite{naor2010assouad} and being more careful about how we paste the different embeddings.
\end{remark}
Now we state the inductive step. Having constructed a map $\psi:G\to\bbr^D$ which `represents' the geometry of $G$ at scale $A^{m+1}$ and above, we need to construct a correction $\phi:G\to\bbr^D$ which oscillates at scale $A^m$ such that $\psi+\phi$ represents the geometry of $G$ at scale $A^m$. When we say that a mapping represents the geometry of $G$ at scale $A^m$, we mean that its $C_{A^m}^{m^*,2/3}$ norm is controlled and that it has good freeness properties. To make a viable induction argument, we need to show that the quantitative controls on the $C_{A^m}^{m^*,2/3}$ norm and the freeness properties are preserved when we pass from $\psi$ to $\psi+\phi$. By rescaling, we may simply assume $m=0$. The precise statement for the inductive step is as follows.
\begin{proposition}[Main iteration step]\label{KeyIteration2}
Let $M$ be a real number with
\[
M\ge C_0^{-1},
\]
and let ${m^*}\ge \max\{3,s^2\}$. Suppose a map $\psi:G\to \bbr^{128\cdot 23^{n_h}}$ obeys the following estimates:
\begin{enumerate}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation}\label{It-1-again}
C_0^{-1}M\le |X_i\psi(p)|\le C_0 M,\quad i=1,\cdots,k,
\end{equation}
and
\begin{equation}\label{It-2-again}
\left|\bigwedge_{m=1}^i X_{j_m}\psi(p)\right|\ge C_0^{-i^2-i+2} \prod_{m=1}^i \left|X_{j_m}\psi(p)\right|
\end{equation}
for $2\le i\le k$ and $1\le j_1<\cdots<j_i\le k$.
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{It-3-again}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\psi(p)\right|\ge C_0^{-2k^2-7k+2}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{j=1}^k \left|X_j\psi(p)\right|.
\end{equation}
\item (H\"older regularity at scale $A$) We have
\begin{equation}\label{It-4-again}
\|\nabla^2 \psi\|_{C_A^{{m^*}-2,{2/3}}}\le C_0A^{-1}.
\end{equation}
\end{enumerate}
Then there exists a map $\phi:G\to\bbr^{128\cdot 23^{n_h}}$ such that $\phi$ obeys the following estimates:
\begin{enumerate}
\item (Regularity at scale 1) We have
\begin{equation}\label{newholder}
\|\phi\|_{C^{{m^*},{2/3}}}\lesssim_G 1.
\end{equation}
\item (Orthogonality) We have
\begin{equation}\label{neworthog}
B(\phi,\psi)=0.
\end{equation}
\item (Non-degenerate first derivatives) We have
\begin{equation}\label{newnondeg}
|X_i\phi|\gtrsim_G 1,\quad i=1,\cdots,k.
\end{equation}
\end{enumerate}
and the sum $\psi+\phi$ obeys the following regularity estimates:
\begin{enumerate}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation}\label{It-1-again-lower-scale}
C_0^{-1}\sqrt{M^2+1}\le |X_i(\psi+\phi)(p)|\le C_0 \sqrt{M^2+1},\quad i=1,\cdots,k,
\end{equation}
and
\begin{equation}\label{It-2-again-lower-scale}
\left|\bigwedge_{m=1}^i X_{j_m}(\psi+\phi)(p)\right|\ge C_0^{-i^2-i+2} \prod_{m=1}^i \left|X_{j_m}(\psi+\phi)(p)\right|
\end{equation}
for $2\le i\le k$ and $1\le j_1<\cdots<j_i\le k$.
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{It-3-again-lower-scale}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W(\psi+\phi)(p)\right|\ge C_0^{-2k^2-7k+3}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{j=1}^k \left|X_j(\psi+\phi)(p)\right|.
\end{equation}
\item (H\"older regularity at scale $1$) We have
\begin{equation}\label{It-4-again-lower-scale}
\|\nabla^2 (\psi+\phi)\|_{C^{{m^*}-2,{2/3}}}\le C_0.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{remark}
The dimension $128\cdot 23^{n_h}$ will result from Proposition \ref{FirstStep} along with Corollary \ref{multiple-Cj-lifting}; see Lemma \ref{GoodIsom} below.
\end{remark}
\begin{remark}
Proposition \ref{KeyIteration2} shows that if $\psi$ is a map with the regularity and freeness properties \eqref{It-1-again}-\eqref{It-4-again} at scale $A$, then we can find a correction $\phi$ so that $\psi+\phi$ is a map with the same regularity and freeness properties \eqref{It-1-again-lower-scale}-\eqref{It-4-again-lower-scale} but at scale $1$. In particular, the constants in the freeness properties are the same. We will make this possible by creating a `hierarchy' of freeness properties. Namely, the freeness property \eqref{It-2-again-lower-scale} for $i$-fold wedge products of horizontal derivatives of $\psi+\phi$ will be based on the freeness property \eqref{It-2-again} for $(i-1)$-fold wedge products of horizontal derivatives of $\psi$. Also, the freeness property \eqref{It-3-again-lower-scale} for the wedge product of up to $s$-order derivatives of $\psi+\phi$ will be based on the freeness property \eqref{It-2-again} for $k$-fold wedge products of horizontal derivatives of $\psi$. Thus, we do not lose constants when passing from $\psi$ to $\psi+\phi$, which will allow us to close the iteration.
\end{remark}
Proposition \ref{KeyIteration2} will be a consequence of the following lemma, which is a generalization of Proposition 5.1 of \cite{tao2018embedding} for the case $G=\bbh^3$.
\begin{lemma}[Main iteration lemma]\label{KeyIteration}
Let $M$ be a real number with
\[
M\ge C_0^{-1},
\]
and let ${m^*}\ge \max\{3,s^2\}$. Suppose a map $\psi:G\to \bbr^{128\cdot 23^{n_h}}$ obeys the following estimates:
\begin{enumerate}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation}\label{It-1}
C_0^{-1}M\le |X_i\psi(p)|\le C_0 M,\quad i=1,\cdots,k,
\end{equation}
and
\begin{equation}\label{It-2}
\left|\bigwedge_{m=1}^i X_{j_m}\psi(p)\right|\ge C_0^{-i^2-2i+2} M^{i},
\end{equation}
for $2\le i\le k$ and $1\le j_1<\cdots<j_i\le k$.
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{It-3}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\psi(p)\right|\ge C_0^{-2k^2-8k+2}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}M^{k}.
\end{equation}
\item (H\"older regularity at scale $A$) We have
\begin{equation}\label{It-4}
\|\nabla^2 \psi\|_{C_A^{{m^*}-2,\alpha}}\le C_0A^{-1}.
\end{equation}
\end{enumerate}
Then there exists a map $\phi:G\to\bbr^{128\cdot 23^{n_h}}$ obeying the following estimates.
\begin{enumerate}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation}\label{It-5}
|X_i\phi(p)|\gtrsim_G 1
\end{equation}
and
\begin{equation}\label{It-6}
\left|\bigwedge_{m=1}^i X_{j_m}(\psi+\phi)(p)\right|^2-\left|\bigwedge_{m=1}^i X_{j_m}\psi(p)\right|^2\ge C_0^{-2i^2+5}M^{2(i-1)},
\end{equation}
for $2\le i\le k$ and $1\le j_1<\cdots<j_i\le k$.
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{It-7}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W(\psi+\phi)(p)\right|\ge C_0^{-2k^2-5k+3}M^{k}.
\end{equation}
\item (H\"older regularity at scale 1) We have
\begin{equation}\label{It-8}
\|\phi\|_{C^{{m^*},\alpha}}\lesssim_G 1.
\end{equation}
\item (Orthogonality) We have
\begin{equation}\label{It-9}
B(\phi,\psi)=0.
\end{equation}
\end{enumerate}
\end{lemma}
We now show why Proposition \ref{KeyIteration2} follows from Lemma \ref{KeyIteration}.
\begin{proof}[Proof of Proposition \ref{KeyIteration2} assuming Lemma \ref{KeyIteration}]
Let $\psi$ be as in Proposition \ref{KeyIteration2}.
One can easily verify the hypotheses of Lemma \ref{KeyIteration} for $\psi$, as \eqref{It-2} and \eqref{It-3} each follow from \eqref{It-2-again} and \eqref{It-3-again} combined with \eqref{It-1-again}. Thus, by Lemma \ref{KeyIteration}, there exists a function $\phi:G\to\bbr^{128\cdot 23^{n_h}}$ that satisfies \eqref{newholder}, \eqref{neworthog}, and \eqref{newnondeg} (as these are exactly \eqref{It-8}, \eqref{It-9}, and \eqref{It-5}, respectively) and the following (which are just restatements of \eqref{It-6} and \eqref{It-7}):
\begin{enumerate}
\item (Non-degenerate first derivatives)
For $p\in G$, $2\le i\le k$ and $1\le j_1<\cdots<j_i\le k$,
\begin{equation}\label{It-6-again}
\left|\bigwedge_{q=1}^i X_{j_q}(\psi+\phi)(p)\right|^2-\left|\bigwedge_{q=1}^i X_{j_q}\psi(p)\right|^2\ge C_0^{-2i^2+5}M^{2(i-1)}.
\end{equation}
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{It-7-again}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W(\psi+\phi)(p)\right|\ge C_0^{-2k^2-5k+3}M^{k}.
\end{equation}
\end{enumerate}
We now verify that the map $\psi+\phi$ satisfies the properties \eqref{It-1-again-lower-scale} to \eqref{It-4-again-lower-scale}.
\begin{enumerate}
\item Verification of \eqref{It-1-again-lower-scale}.
By \eqref{neworthog}, we have $|X_i(\psi+\phi)|^2=|X_i\psi|^2+|X_i\phi|^2$. But by \eqref{newnondeg} and \eqref{newholder}, we have $|X_i\phi|\sim_G 1$, so we have
\begin{equation}\label{C_0_hierarchy-1}
C_0^{-1}\le |X_i\phi|\le C_0.
\end{equation}
(This uses our hierarchy of constants in subsection \ref{sec:hierarchy}, by choosing $C_0$ depending on $G$.)
Combining these facts with \eqref{It-1-again} we obtain \eqref{It-1-again-lower-scale}.
\item Verification of \eqref{It-2-again-lower-scale}.
This follows from \eqref{It-6-again}:
\begin{align}\label{C_0_hierarchy-2}
\begin{aligned}
\left|\bigwedge_{q=1}^i X_{j_q}(\psi+\phi)\right|^2&\ge \left|\bigwedge_{q=1}^i X_{j_q}\psi\right|^2+ C_0^{-2i^2+5}M^{2(i-1)}\\
&\ge C_0^{-2i^2-2i+4}\prod_{q=1}^i\left| X_{j_q}\psi\right|^2+ C_0^{-2i^2+5}M^{2(i-1)}\\
&\ge C_0^{-2i^2-2i+4}\prod_{q=1}^i\left(\left| X_{j_q}\psi\right|^2+|X_{j_q}\phi|^2\right)\\
&= C_0^{-2i^2-2i+4}\prod_{q=1}^i| X_{j_q}(\psi+\phi)|^2
\end{aligned}
\end{align}
where in the third inequality we used $|X_{j_q}\psi|\le C_0M$ and $|X_{j_q} \phi|\lesssim_G 1$.
\item Verification of \eqref{It-3-again-lower-scale}.
This follows from \eqref{It-7-again}:
\begin{align}\label{C_0_hierarchy-3}
\begin{aligned}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}W(\psi+\phi)(p)\right| &\ge C_0^{-2k^2-5k+3}M^{k}\\
&\ge C_0^{-2k^2-7k+3}\prod_{i=1}^k|X_i(\psi+\phi)(p)|,
\end{aligned}
\end{align}
where in the last inequality we used that $|X_i(\psi+\phi)|^2=|X_i\psi|^2+|X_i\phi|^2\le C_0^2M^2+O_G(1)\le C_0^4M^2$ (recall that $M\ge C_0^{-1}$), and our hierarchy of constants where we choose $C_0$ depending on $G$.
\item Verification of \eqref{It-4-again-lower-scale}.
This follows from \eqref{It-4-again} and \eqref{newholder}:
\begin{equation}\label{C_0_hierarchy-4}
\|\nabla^2 (\psi+\phi)\|_{C^{{m^*}-2,{2/3}}}\le \|\nabla^2 \psi\|_{C_A^{{m^*}-2,{2/3}}}+\|\phi\|_{C^{{m^*},{2/3}}}\le C_0,
\end{equation}
and our hierarchy of constants where we choose $C_0$ depending on $G$.
\end{enumerate}
\end{proof}
For the rest of the section, we will prove Lemma \ref{KeyIteration}.
Suppose that $\psi$ is as in Lemma \ref{KeyIteration}. We will first construct a solution $\Tilde{\phi}$ to the low-frequency equation \eqref{G-5} as
\begin{equation}\label{IsomCompose}
\Tilde{\phi}(p)=U(p)\left(\phi^0(p)\right),
\end{equation}
where $\phi^0:G\to \bbr^{14^{n_h}}$ is as in Proposition \ref{FirstStep}, and $U(p):\bbr^{14^{n_h}}\to \bbr^{128\cdot 23^{n_h}}$ is a linear isometry with the following properties, which is constructed using Corollary \ref{multiple-Cj-lifting}. (See also Lemma 9.1 of \cite{tao2018embedding} for an analogous statement for $\bbh^3$.)
\begin{lemma}\label{GoodIsom}
There exists $U:G\to\hom(\bbr^{14^{n_h}},\bbr^{128\cdot 23^{n_h}})$ such that
\begin{enumerate}
\item For each $p\in G$, $U(p)\in \hom(\bbr^{14^{n_h}},\bbr^{128\cdot 23^{n_h}})$ is an isometry.
\item For each $p\in G$, $s\in \bbr^{14^{n_h}}$, we have
\begin{equation}\label{IsomPerp}
(U(p)(s))\cdot X_iP_{(\le N_0)}\psi(p)=0,~ (U(p)(s))\cdot X_iX_jP_{(\le N_0)}\psi(p)=0,\quad 1\le i, j\le k.
\end{equation}
\item We have the smoothness
\[
\|\nabla U\|_{C^{{m^*}}}\lesssim_{N_0}\frac{1}{A}.
\]
\end{enumerate}
\end{lemma}
\begin{proof}
Let $W_1,\cdots, W_{\frac{k(k+3)}{2}+k_2}$ denote the rescaled differential operators
\[
((M^{-1}X_i)_{i=1}^k,(AX_iX_j)_{1\le i\le j\le k},(AX_{2,i})_{i=1}^{k_2}).
\]
Let $w_i(p)\coloneqq W_iP_{(\le N_0)}\psi(p)$ for $1\le i\le \frac{k(k+3)}{2}+k_2$ for all $p\in G$. Then, by \eqref{It-1}, \eqref{It-4}, and Theorem \ref{LP} (3) (more specifically, \eqref{lp-5} with $j=1$, $l=2$, and recalling $m^*\ge 3$),
\begin{equation}\label{N_0_hierarchy-1}
w_i=W_i\psi+O_G\left(\frac{C_0}{AN_0 M}\right)=O(C_0),\quad i=1,\cdots, k,
\end{equation}
and by \eqref{It-4} and Theorem \ref{LP} (3) (more specifically, \eqref{lp-5} with $j=2$, $l=3$),
\begin{equation}\label{N_0_hierarchy-2}
w_i=W_i\psi+O_G\left(\frac{C_0}{AN_0}\right)=O(C_0),\quad i=k+1,\cdots,\frac{k(k+3)}{2}+k_2.
\end{equation}
(Note that we have used our hierarchy of constants in \eqref{N_0_hierarchy-1} and \eqref{N_0_hierarchy-2}, choosing $N_0$ depending on $G$ and $C_0$, while we have only used $A\ge 1$ so far.)
On the other hand, \eqref{It-3} tells us that
\[
\left|\bigwedge_{i=1,\cdots,\frac{k(k+3)}{2}+k_2}W_i\psi(p)\right|\gtrsim_{C_0} 1,
\]
so by applying the triangle inequality and Cauchy--Schwarz inequality, we conclude that
\[
\left|\bigwedge_{i=1,\cdots,\frac{k(k+3)}{2}+k_2}w_i(p)\right|\gtrsim_{C_0} 1.
\]
By applying Cauchy--Schwarz again, we conclude that
\begin{equation}\label{wp-lb}
\left|\bigwedge_{i=1,\cdots,j}w_i(p)\right|\sim_{C_0} 1,\quad j=1,\cdots, \frac{k(k+3)}{2}+k_2.
\end{equation}
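To spell out this last step: by the Cauchy--Schwarz inequality for wedge products, $|u\wedge v|\le |u||v|$, we have for each $1\le j\le \frac{k(k+3)}{2}+k_2$
\[
1\lesssim_{C_0}\left|\bigwedge_{i=1,\cdots,\frac{k(k+3)}{2}+k_2}w_i(p)\right|\le \left|\bigwedge_{i=1,\cdots,j}w_i(p)\right|\cdot\left|\bigwedge_{i=j+1,\cdots,\frac{k(k+3)}{2}+k_2}w_i(p)\right|\lesssim_{C_0}\left|\bigwedge_{i=1,\cdots,j}w_i(p)\right|,
\]
where the second factor is bounded using $|w_i|=O(C_0)$ (and interpreted as $1$ when $j=\frac{k(k+3)}{2}+k_2$); the matching upper bound in \eqref{wp-lb} follows from $\left|\bigwedge_{i=1,\cdots,j}w_i(p)\right|\le \prod_{i=1,\cdots,j}|w_i(p)|\lesssim_{C_0}1$.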
Again, using \eqref{It-4} along with Theorem \ref{LP} (3), we can see that
\begin{equation}\label{N_0_hierarchy-3}
\|w_i\|_{C^0}+A\|\nabla w_i\|_{C^{{m^*}}} \lesssim_{N_0}1, \quad i=1,\cdots, \frac{k(k+3)}{2}+k_2,
\end{equation}
because for $i=1,\cdots, k,$ we use \eqref{It-4}, and \eqref{lp-1} with $l=2$, $j=m^*$, to see
\begin{align}\label{N_0_hierarchy-4}
\begin{aligned}
\|w_i\|_{C^0}+A\|\nabla w_i\|_{C^{{m^*}}}&\stackrel{\eqref{N_0_hierarchy-1}}{\lesssim} C_0+\frac AM \|\nabla^2 P_{(\le N_0)}\psi\|_{C^{m^*}}\le C_0+\frac {AN_0^{m^*}}M \|\nabla^2 P_{(\le N_0)}\psi\|_{C^{m^*}_{1/N_0}}\\
& \stackrel{\eqref{lp-1}}{\lesssim_{G,m^*}} C_0+ \frac {AN_0^{m^*}}M \|\nabla^2\psi\|_{C^0}\stackrel{\eqref{It-4}}{\le} C_0+ \frac {AN_0^{m^*}}M \cdot C_0A^{-1}\lesssim_{N_0}1,
\end{aligned}
\end{align}
and for $i=k+1,\cdots,\frac{k(k+3)}{2}+k_2,$ we use \eqref{It-4}, and \eqref{lp-1} with $l=3$, $j=m^*$, to see
\begin{align}\label{N_0_hierarchy-5}
\begin{aligned}
\|w_i\|_{C^0}+A\|\nabla w_i\|_{C^{{m^*}}}&\stackrel{\eqref{N_0_hierarchy-2}}{\lesssim} C_0+A^2 \|\nabla^3 P_{(\le N_0)}\psi\|_{C^{m^*}}\le C_0+A^2 N_0^{m^*} \|\nabla^3 P_{(\le N_0)}\psi\|_{C^{m^*}_{1/N_0}}\\
& \stackrel{\eqref{lp-1}}{\lesssim_{G,m^*}} C_0+ A^2N_0^{m^*} \|\nabla^3\psi\|_{C^0}\stackrel{\eqref{It-4}}{\le} C_0+ A^2N_0^{m^*} \cdot C_0A^{-2}\lesssim_{N_0}1.
\end{aligned}
\end{align}
(We have used our hierarchy of constants in \eqref{N_0_hierarchy-3}, \eqref{N_0_hierarchy-4}, and \eqref{N_0_hierarchy-5}, by choosing $N_0$ after $G$ and $C_0$.)
Now, because the norm of \eqref{N_0_hierarchy-3} is of the form \eqref{logconcave}, we can apply the product rule to this norm: for instance,
\[
\left\|\bigwedge_{i=1,\cdots,j}w_i\right\|_{C^0}+A\left\|\nabla\bigwedge_{i=1,\cdots,j} w_i\right\|_{C^{{m^*}}}\lesssim_{N_0}1, \quad j=1,\cdots, \frac{k(k+3)}{2}+k_2.
\]
Now let us consider the orthonormal system $v_1,\cdots, v_{\frac{k(k+3)}{2}+k_2}$ formed by applying the Gram-Schmidt process to the vectors $w_i$, i.e., we inductively define
\[
v_i\coloneqq \frac{|\bigwedge_{j<i}w_j|}{|\bigwedge_{j\le i}w_j|}\left(w_i-\sum_{j<i}(w_i\cdot v_j)v_j\right).
\]
By \eqref{wp-lb} this is well-defined, and by a repeated application of the aforementioned product rule, one can deduce the smoothness
\[
\|v_i\|_{C^0}+A\|\nabla v_i\|_{C^{{m^*}}}\lesssim_{N_0}1,\quad i=1,\cdots, \frac{k(k+3)}{2}+k_2.
\]
We now apply Corollary \ref{multiple-Cj-lifting} with $m=\frac{k(k+3)}{2}+k_2$, $m'=14^{n_h}$, $j={m^*}+1$, $R_i=A$ to the above $v_i$'s. This is possible because
\[
\frac{k(k+3)}{2}+k_2+14^{n_h}+2^{4n_h+7}n_h\le \frac 12 n_h^2+\frac 32 n_h+14^{n_h}+2^{4n_h+7}n_h\le 128\cdot 23^{n_h}\quad(\because n_h\ge 4).
\]
Thus, we have maps $v_{\frac{k(k+3)}{2}+k_2+1},\cdots,v_{\frac{k(k+3)}{2}+k_2+14^{n_h}}:G\to \bbr^{128\cdot 23^{n_h}}$ such that
\[
\|v_i\|_{C^0}+A\|\nabla v_i\|_{C^{{m^*}}}\lesssim_{N_0}1,\quad i=1,\cdots, \frac{k(k+3)}{2}+k_2+14^{n_h}
\]
and such that $v_1(p),\cdots, v_{\frac{k(k+3)}{2}+k_2+14^{n_h}}(p)$ are orthonormal for all $p\in G$.
Now define $U(p):\mathbb{R}^{14^{n_h}}\to\mathbb{R}^{128\cdot 23^{n_h}}$, $p\in G$, to be the map
\[
U(p)(s)=\sum_{i=1}^{14^{n_h}}s_iv_{\frac{k(k+3)}{2}+k_2+i}(p).
\]
This clearly has the properties (1) and (3) asserted above, and we can also deduce property (2) once we note that $X_iX_j-X_jX_i\in \mathrm{span}\{X_{2,1},\cdots,X_{2,k_2}\}$.
\end{proof}
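The counting inequality $\frac{k(k+3)}{2}+k_2+14^{n_h}+2^{4n_h+7}n_h\le 128\cdot 23^{n_h}$ invoked in the proof above can also be confirmed by direct computation. A quick exact-arithmetic check over a finite sample of values of $n_h$ (written here as $n$, using the bound $\frac{k(k+3)}{2}+k_2\le \frac12 n^2+\frac 32 n = \frac{n(n+3)}{2}$ as in the display):

```python
# Check n(n+3)/2 + 14^n + 2^(4n+7)*n <= 128*23^n for n >= 4,
# using exact integer arithmetic on a finite sample of n.
for n in range(4, 101):
    lhs = n * (n + 3) // 2 + 14 ** n + 2 ** (4 * n + 7) * n
    assert lhs <= 128 * 23 ** n, n
```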
Continuing with the proof of Lemma \ref{KeyIteration}, let us define $\Tilde{\phi}$ as in \eqref{IsomCompose}. By Lemma \ref{GoodIsom} and Proposition \ref{FirstStep} (1), and using our hierarchy of choosing $A$ after $C_0$ and $N_0$, we have
\begin{equation}\label{A_hierarchy_1}
\|\Tilde{\phi}\|_{C^{{m^*},{2/3}}}\lesssim \|\Tilde{\phi}\|_{C^{{m^*}+1}}\lesssim_G 1,
\end{equation}
where we take $\alpha=\frac 23$. Also, by \eqref{IsomPerp} and the Leibniz rule, it is clear that $\tilde{\phi}$ satisfies \eqref{lowfreq-strongperp} and \textit{a fortiori} solves the low-frequency equation \eqref{G-5}. It is also clear that $\psi$ satisfies the hypothesis of Corollary \ref{perturbation-cor}. By applying Corollary \ref{perturbation-cor}, there exists a $C^{{m^*},{2/3}}$-function $\phi:G\to\bbr^{128\cdot 23^{n_h}}$ such that
\[
B(\phi,\psi)=0,
\]
\begin{equation}\label{G-8-again-again}
\|\phi-\Tilde{\phi}\|_{C^{{m^*},{2/3}}}\lesssim_{C_0} A^{2-{m^*}},
\end{equation}
and, as $\tilde{\phi}$ satisfies \eqref{lowfreq-strongperp},
\begin{equation}\label{NearOrtho}
\|X_i\phi\cdot X_j\psi\|_{C^0}=\|X_i\phi\cdot X_j\psi-X_i\Tilde{\phi}\cdot X_jP_{(\le N_0)}\psi\|_{C^0}\le A^{1-{m^*}},\quad 1\le i,j\le k.
\end{equation}
It remains to verify conditions \eqref{It-5}-\eqref{It-9}. Conditions \eqref{It-8} and \eqref{It-9} are immediate from the construction. For later use, we note that
\begin{equation}\label{decomp}
X_i\phi=X_i\tilde{\phi}+X_i(\phi-\tilde{\phi})=U(X_i\phi^0)+(X_iU)\left(\phi^0\right)+O_{C_0}\left(A^{2-{m^*}}\right)=U(X_i\phi^0)+O_{N_0}\left(A^{-1}\right)
\end{equation}
for $i=1,\cdots,k$. From this and Proposition \ref{FirstStep}, we immediately have (again using our hierarchy of constants that $A$ is chosen after $N_0$)
\begin{equation}\label{A_hierarchy-3}
|X_i\phi|\sim_G 1,\quad \left|\bigwedge_{i=1}^k X_i\phi\right|\sim_G 1,
\end{equation}
so in particular \eqref{It-5} immediately follows. It remains to verify \eqref{It-6} and \eqref{It-7}.
\begin{enumerate}
\item Verification of \eqref{It-6}.
Recall from \eqref{NearOrtho} that $|X_i\phi\cdot X_j\psi|\le \frac{1}{A}$ for $i,j=1,\cdots,k$.
Now we observe that for $1\le j_1<\cdots<j_i\le k$ we have the expansion
\[
\bigwedge_{m=1}^i X_{j_m}(\psi+\phi)=\bigwedge_{m=1}^i X_{j_m}\psi+\sum_{n=1}^{2^{i}-1}\bigwedge_{m=1}^i X_{j_m} f_m^n,
\]
for some sequence $\{f_m^n\}_{n=1,\cdots,2^i-1,~m=1,\cdots, i}$ of functions, each being either $\phi$ or $\psi$. Note that for each $n$ there must exist some $m$ such that $f_m^n=\phi$. This expansion implies
\begin{align*}
&\left|\bigwedge_{m=1}^i X_{j_m}(\psi+\phi)\right|^2-\left|\bigwedge_{m=1}^i X_{j_m}\psi\right|^2=\left|\sum_{n=1}^{2^{i}-1}\bigwedge_{m=1}^i X_{j_m} f_m^n\right|^2+2\sum_{n=1}^{2^{i}-1}\bigg< \bigwedge_{m=1}^i X_{j_m}\psi,\bigwedge_{m=1}^i X_{j_m} f_m^n\bigg>.
\end{align*}
For each fixed $n$, the polarized Cauchy--Binet formula shows that $\bigg< \bigwedge_{m=1}^i X_{j_m}\psi,\bigwedge_{m=1}^i X_{j_m} f_m^n\bigg>$ can be represented as the determinant of an $i\times i$ matrix, whose entries are each of magnitude $O_{C_0}(M^2)$ and one of whose columns (the $m$-th column, where $m$ is such that $f_m^n=\phi$) consists of entries of magnitude $O(A^{-1})$ (because of \eqref{NearOrtho}). Thus, the determinant of this $i\times i$ matrix is of magnitude $O_{C_0}(A^{-1}M^{2(i-1)})$, or equivalently $\bigg< \bigwedge_{m=1}^i X_{j_m}\psi,\bigwedge_{m=1}^i X_{j_m} f_m^n\bigg>=O_{C_0}(A^{-1}M^{2(i-1)})$. Summing over all $n$, we obtain
\[
2\sum_{n=1}^{2^{i}-1}\bigg< \bigwedge_{m=1}^i X_{j_m}\psi,\bigwedge_{m=1}^i X_{j_m} f_m^n\bigg>=O_{C_0}(A^{-1}M^{2(i-1)}),
\]
and so (again using our hierarchy of choosing $A$ after $C_0$) it is enough to show that
\begin{equation}\label{A_hierarchy-5}
\left|\sum_{n=1}^{2^{i}-1}\bigwedge_{m=1}^i X_{j_m} f_m^n\right|^2\ge C_0^{-2i^2+5.5}M^{2(i-1)}.
\end{equation}
But by Cauchy--Schwarz,
\begin{align*}
&\left|\sum_{n=1}^{2^{i}-1}\bigwedge_{m=1}^i X_{j_m} f_m^n\right|^2 \asymp \left|\sum_{n=1}^{2^{i}-1}\bigwedge_{m=1}^i X_{j_m} f_m^n\right|^2\left|\bigwedge_{m=2}^i X_{j_m}\phi \right|^2\ge \left|\left(\sum_{n=1}^{2^{i}-1}\bigwedge_{m=1}^i X_{j_m} f_m^n\right)\wedge \bigwedge_{m=2}^i X_{j_m}\phi \right|^2\\
&= \left|\left(X_{j_1}\phi\wedge \bigwedge_{m=2}^i X_{j_m} \psi \right)\wedge \bigwedge_{m=2}^i X_{j_m}\phi \right|^2= \left|\bigwedge_{m=2}^i X_{j_m} \psi \wedge \bigwedge_{m=1}^i X_{j_m}\phi \right|^2.
\end{align*}
By the Cauchy--Binet formula, this is the determinant of a certain $(2i-1)\times (2i-1)$ matrix, which, by \eqref{NearOrtho}, is close to being block-diagonal: the upper-left $(i-1)\times (i-1)$ block consists of entries of size $O_{C_0}(M^2)$, the lower-right $i\times i$ block consists of entries of size $O_{C_0}(1)$, while the off-block-diagonal entries are of size $O(A^{-1})$ (by \eqref{NearOrtho}). Therefore, we may estimate the total determinant with the determinant of the block diagonal approximation, with error $O_{C_0}(A^{-2}M^{2(i-2)})$:
\[
\left|\bigwedge_{m=2}^i X_{j_m} \psi \wedge \bigwedge_{m=1}^i X_{j_m}\phi \right|^2=\left|\bigwedge_{m=2}^i X_{j_m} \psi \right|^2\left| \bigwedge_{m=1}^i X_{j_m}\phi \right|^2+O_{C_0}(A^{-2}M^{2(i-2)}).
\]
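The key point in this block-diagonal approximation is that the error is quadratic, not linear, in the $O(A^{-1})$ off-block entries: any permutation contributing to the determinant must use off-block entries in pairs. A small numeric illustration with hypothetical symmetric blocks $P$, $Q$ and off-block perturbation of size $\delta$ (shrinking $\delta$ by $10$ shrinks the error by roughly $100$):

```python
def det(m):
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

# hypothetical symmetric positive-definite blocks
P = [[2.0, 0.3], [0.3, 1.5]]
Q = [[1.0, 0.2, 0.1],
     [0.2, 1.1, 0.0],
     [0.1, 0.0, 0.9]]

def full(delta):
    # symmetric matrix [[P, E], [E^T, Q]] with off-block entries of size delta
    E = [[delta, delta, -delta], [-delta, delta, delta]]
    top = [P[i] + E[i] for i in range(2)]
    bot = [[E[0][i], E[1][i]] + Q[i] for i in range(3)]
    return top + bot

base = det(P) * det(Q)
err1 = abs(det(full(1e-3)) - base)
err2 = abs(det(full(1e-4)) - base)
# the error is quadratic in delta: ratio ~ 100 when delta shrinks by 10
assert err1 < 5e-4
assert 50 < err1 / err2 < 200
```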
But by \eqref{It-2}, we have
\begin{equation}\label{C_0_A_hierarchy-1}
\left|\bigwedge_{m=2}^i X_{j_m} \psi \right|^2 \ge C_0^{-2i^2+6}M^{2(i-1)}.
\end{equation}
This completes the verification of \eqref{It-6} (again using the hierarchy that $C_0$ is chosen after $G$ and $A$ is chosen after $C_0$).
\item Verification of \eqref{It-7}.
We first observe that for differential operators $W$ of degree at least 2, $W\phi$ dominates $W\psi$, and so we may approximate $\bigwedge_W W(\psi+\phi)$ by $\bigwedge_W W\phi$, where $W$ ranges over such operators.
More precisely, for $W=X_{r_1,j_1}\cdots X_{r_m,j_m}$, where either $m=1$ and $r_1\ge 2$ or $2\le m\le s$, we have
\begin{align*}
W(\psi+\phi)&=W\tilde{\phi}+W(\phi-\tilde{\phi})+W\psi\\
&=W\Big(U(\phi^0)\Big)+O_{C_0}(A^{2-{m^*}})+O_{C_0}(A^{-1})\\
&=W\Big(U(\phi^0)\Big)+O_{C_0}(A^{-1}).
\end{align*}
(The second equation holds because our choice of ${m^*}\ge s^2$ allows us to use our bounds \eqref{G-8-again-again} and \eqref{It-4} on $\|\phi-\tilde{\phi}\|_{C^{{m^*}}}$ and $\|\nabla^2\psi\|_{C_A^{{m^*}-2}}$). Many applications of the Leibniz rule tell us that $W\Big(U(\phi^0)\Big)-U(W\phi^0)$ is a linear combination of derivatives of $U$ times $\phi^0$ or derivatives of $\phi^0$, so from $\|\nabla U\|_{C^{{m^*}}}\lesssim_{N_0}A^{-1}$, we have
\[
W\Big(U(\phi^0)\Big)-U(W\phi^0)=O_{N_0}(A^{-1})
\]
and consequently
\[
W(\psi+\phi)=U(W\phi^0)+O_{N_0}(A^{-1}).
\]
Therefore, we have
\begin{equation}\label{higher-deriv-approx}
\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ m=1 \mbox{ and }r_1\ge 2, \mbox{ or } 2\le m\le s}}W(\psi+\phi)=\omega+O_{N_0}(A^{-1}),
\end{equation}
where
\[
\omega\coloneqq \bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ m=1 \mbox{ and }r_1\ge 2, \mbox{ or } 2\le m\le s}}U(W\phi^0).
\]
Since $U$ is an isometry, and $\phi^0$ has the freeness property \eqref{firstfree}, we have
\begin{equation}\label{omega_is_free}
|\omega|= \left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ m=1 \mbox{ and }r_1\ge 2, \mbox{ or } 2\le m\le s}}W\phi^0\right|\asymp_G 1.
\end{equation}
Therefore, by \eqref{It-1}, \eqref{C_0_hierarchy-1}, \eqref{higher-deriv-approx}, and our hierarchy that $C_0$ is chosen after $G$ and that $A$ is chosen after $C_0$ and $N_0$, to verify \eqref{It-7} it is enough to verify
\begin{equation}\label{C_0_A_hierarchy-2}
\left|\bigwedge_{i=1}^kX_i(\psi+\phi)\wedge \omega \right|\gtrsim C_0^{-2k^2-5k+3.3}M^{k}.
\end{equation}
By Cauchy--Schwarz and the bound
\[
\left|\bigwedge_{i=1}^k X_iP_{(\le N_0)}\psi\right|\le \prod_{i=1}^k\left|X_iP_{(\le N_0)}\psi\right|\stackrel{\eqref{lp-1}}{\lesssim_{G}} \left\|\nabla\psi\right\|_{C^0}^k\le C_0^{k}M^{k},
\]
(we used \eqref{lp-1} with $l=1$), we see (using our hierarchy that $C_0$ is chosen after $G$) that it is enough to verify
\begin{equation}\label{C_0-hierarchy-5}
\bigg< \bigwedge_{i=1}^k X_iP_{(\le N_0)}\psi\wedge \omega,\bigwedge_{i=1}^k X_i(\psi+\phi)\wedge \omega \bigg>\gtrsim C_0^{-2k^2-4k+3.7}M^{2k}.
\end{equation}
Since all the components of $\omega$ are orthogonal to the vectors $X_iP_{(\le N_0)}\psi$, we can use Cauchy--Binet twice to see that the left-hand side is equal to
\[
\bigg< \bigwedge_{i=1}^k X_iP_{(\le N_0)}\psi,\bigwedge_{i=1}^k X_i(\psi+\phi) \bigg>|\omega|^2.
\]
Using that for $1\le i,j\le k$ we have
\begin{align}\label{N_0_hierarchy-7}
\begin{aligned}
X_iP_{(\le N_0)}\psi\cdot X_j(\psi+\phi)&=X_i\psi\cdot X_j\psi+X_i\psi\cdot X_j\phi\\
&\stackrel{\eqref{NearOrtho}}{=}X_i\psi\cdot X_j\psi-X_iP_{(>N_0)}\psi\cdot X_j(\psi+\phi)+O(A^{-1}),
\\ &\stackrel{\eqref{It-1},\eqref{C_0_hierarchy-1}, \eqref{lp-5}}{=}X_i\psi\cdot X_j\psi+ O_G\left(\frac{C_0}{AN_0}\right)\cdot (O(C_0M)+O(C_0))+O(A^{-1}),
\\ &=X_i\psi\cdot X_j\psi+ O(C_0A^{-1}M),
\end{aligned}
\end{align}
(where in the penultimate equation we used \eqref{lp-5} with $j=1$, $l=2$, and in the last equation we used our hierarchy that $N_0$ is chosen after $C_0$), we see using Cauchy--Binet twice that
\[
\bigg< \bigwedge_{i=1}^kX_iP_{(\le N_0)}\psi,\bigwedge_{i=1}^k X_i(\psi+\phi) \bigg> =\left|\bigwedge_{i=1}^k X_i\psi\right|^2+O_{C_0}(A^{-1}M^{2k-1}).
\]
But, from \eqref{It-2}, we have
\begin{equation}\label{C_0_A_hierarchy-3}
\left|\bigwedge_{i=1}^k X_i\psi\right|^2\ge C_0^{-2k^2-4k+4}M^{2k}.
\end{equation}
The claim \eqref{C_0-hierarchy-5} follows from the above, \eqref{omega_is_free}, and our hierarchy of choosing $C_0$ after $G$ and $A$ after $C_0$.
\end{enumerate}
This concludes the proof of Lemma \ref{KeyIteration}.
\section{Construction of the embedding}\label{sec:applyiteration}
\setcounter{equation}{0}
One obtains the following proposition by repeating Proposition \ref{KeyIteration2} a finite number of times (see also Claim 5.4 of \cite{tao2018embedding} for an analogous statement for $\bbh^3$).
\begin{proposition}[Finite iteration]\label{FiniteIteration}
Let $0<\varepsilon\le 1/A$, and let $M_1\le M_2$ be integers. One can find maps $\phi_m:G\to\bbr^{128\cdot 23^{n_h}}$ for $M_1\le m\le M_2$ obeying the following bounds, where $\phi_{(\ge m)}:G\to\bbr^{128\cdot 23^{n_h}}$ is defined by
\[
\phi_{(\ge m)}\coloneqq \sum_{m\le m'\le M_2}A^{-\varepsilon(m'-m)}\phi_{m'}.
\]
\begin{enumerate}
\item (Smoothness at scale $A^m$) For all $M_1\le m\le M_2$, we have
\begin{equation}\label{Finite-1}
\|\phi_m\|_{C_{A^m}^{s^2+s+1}}\le C_0 A^m
\end{equation}
and
\begin{equation}\label{Finite-2}
\|\nabla^2 \phi_{(\ge m)}\|_{C^{s^2+s-1,{2/3}}_{A^{m}}}\le C_0 A^{-m}.
\end{equation}
\item (Orthogonality) One has, for all $M_1\le m\le M_2$,
\begin{equation}\label{Finite-3}
\sum_{m'>m}A^{-\varepsilon(m'-m)}B(\phi_m,\phi_{m'})=0.
\end{equation}
\item (Non-degeneracy) For all $p\in G$ and $M_1\le m\le M_2$, we have the estimates
\begin{equation}\label{Finite-4}
|X_i\phi_m(p)|\ge C_0^{-1},\quad i=1,\cdots,k,
\end{equation}
\begin{equation}\label{Finite-5}
\left|\bigwedge_{q=1}^i X_{j_q}\phi_{(\ge m)}(p)\right|\ge C_0^{-i^2-i+2}\prod_{q=1}^i |X_{j_q}\phi_{(\ge m)}(p)|,\quad 2\le i\le k,~1\le j_1<\cdots<j_i\le k,
\end{equation}
and
\begin{equation}\label{Finite-6}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}W\phi_{(\ge m)}(p)\right|\ge C_0^{-2k^2-7k+3}A^{-m\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{i=1}^k|X_i\phi_{(\ge m)}(p)|.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
We prove this by induction on $M_2-M_1$. Note that the statement is invariant under rescaling, so we may assume $M_1=0$. The case $M_2=0$ follows directly from Proposition \ref{FirstStep}, so we may assume $M_2>0$. By the inductive hypothesis applied to $M_1'=1$ and $M_2'=M_2$, we may find functions $\phi_m$, $1\le m\le M_2$, so that
\begin{enumerate}
\item (Smoothness at scale $A^m$) For all $1\le m\le M_2$, we have
\begin{equation}\label{Finite-7}
\|\phi_m\|_{C_{A^m}^{s^2+s+1}}\le C_0 A^m
\end{equation}
and
\begin{equation}\label{Finite-8}
\|\nabla^2 \phi_{(\ge m)}\|_{C^{s^2+s-1,{2/3}}_{A^m}}\le C_0 A^{-m}.
\end{equation}
\item (Orthogonality) One has, for all $1\le m\le M_2$,
\begin{equation}\label{Finite-9}
\sum_{m'>m}A^{-\varepsilon(m'-m)}B(\phi_m,\phi_{m'})=0.
\end{equation}
\item (Non-degeneracy) For all $p\in G$ and $1\le m\le M_2$, we have the estimates
\begin{equation}\label{Finite-10}
|X_i\phi_m(p)|\ge C_0^{-1},\quad i=1,\cdots,k,
\end{equation}
\begin{equation}\label{Finite-11}
\left|\bigwedge_{q=1}^i X_{j_q}\phi_{(\ge m)}(p)\right|\ge C_0^{-i^2-i+2}\prod_{q=1}^i |X_{j_q}\phi_{(\ge m)}(p)|,\quad 2\le i\le k,~1\le j_1<\cdots<j_i\le k,
\end{equation}
and
\begin{equation}\label{Finite-12}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}W\phi_{(\ge m)}(p)\right|\ge C_0^{-2k^2-7k+3}A^{-m\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{i=1}^k|X_i\phi_{(\ge m)}(p)|.
\end{equation}
\end{enumerate}
We now verify the hypotheses of Proposition \ref{KeyIteration2} for $\psi\coloneqq A^{-\varepsilon}\phi_{(\ge 1)}=\sum_{1\le m\le M_2}A^{-\varepsilon m}\phi_m$ with $M=\left(\sum_{1\le m\le M_2}A^{-2\varepsilon m}\right)^{1/2}$, ${m^*}=s^2+s+1$ and $\alpha=\frac 23$. We have $M\ge A^{-\varepsilon}\ge A^{-1/A}\ge \frac 12$, so we have $M\ge C_0^{-1}$.
\begin{enumerate}
\item Verification of \eqref{It-1-again}.
From \eqref{Finite-9} and \eqref{Finite-10} we have, for $i=1,\cdots,k$,
\[
|X_i\psi|^2=\sum_{1\le m\le M_2}A^{-2\varepsilon m}|X_i\phi_m|^2\ge \sum_{1\le m\le M_2}A^{-2\varepsilon m}C_0^{-2}=C_0^{-2}M^2,
\]
and from \eqref{Finite-7} we have, for $i=1,\cdots,k$,
\[
|X_i\psi|^2=\sum_{1\le m\le M_2}A^{-2\varepsilon m}|X_i\phi_m|^2\le \sum_{1\le m\le M_2}A^{-2\varepsilon m}C_0^{2}=C_0^{2}M^2.
\]
\item Verification of \eqref{It-2-again}.
From \eqref{Finite-11} we have, for $2\le i\le k$ and $1\le j_1<\cdots<j_i\le k$,
\begin{align*}
\left|\bigwedge_{q=1}^i X_{j_q}\psi\right|&=A^{-i\varepsilon}\left|\bigwedge_{q=1}^i X_{j_q}\phi_{(\ge 1)}\right|\ge A^{-i\varepsilon}C_0^{-i^2-i+2}\prod_{q=1}^i\left| X_{j_q}\phi_{(\ge 1)}\right|=C_0^{-i^2-i+2}\prod_{q=1}^i\left| X_{j_q}\psi\right|.
\end{align*}
\item Verification of \eqref{It-3-again}.
From \eqref{Finite-12} we have
\begin{align*}
&\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\psi(p)\right|=A^{-\sum_{m=1}^s \binom{n+m-1}{m}\varepsilon}\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\phi_{(\ge 1)}\right|\\
&\ge A^{-\sum_{m=1}^s \binom{n+m-1}{m}\varepsilon}C_0^{-2k^2-7k+3}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{i=1}^k|X_i\phi_{(\ge 1)}(p)|\\
&= A^{-(\sum_{m=1}^s \binom{n+m-1}{m}-k)\varepsilon}C_0^{-2k^2-7k+3}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{i=1}^k|X_i\psi(p)|\\
&\ge C_0^{-2k^2-7k+2}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{i=1}^k|X_i\psi(p)|.
\end{align*}
\item Verification of \eqref{It-4-again}.
From \eqref{Finite-8} we have
\[
\|\nabla^2 \psi\|_{C_A^{s^2+s-1,{2/3}}}=A^{-\varepsilon}\|\nabla^2 \phi_{(\ge 1)}\|_{C^{s^2+s-1,{2/3}}_{A}}\le A^{-\varepsilon}C_0 A^{-1}\le C_0A^{-1}.
\]
\end{enumerate}
Hence, $\psi$ and $M$ satisfy the assumptions of Proposition \ref{KeyIteration2}, and so there exists a function $\phi_0:G\to\bbr^{128\cdot 23^{n_h}}$ that satisfies the following:
\begin{enumerate}
\item (Regularity at scale 1) We have
\begin{equation}\label{newholder-prime}
\|\phi_0\|_{C^{s^2+s+1,{2/3}}}\lesssim_G 1.
\end{equation}
\item (Orthogonality) We have
\begin{equation}\label{neworthog-prime}
B(\phi_0,\psi)=0.
\end{equation}
\item (Non-degenerate first derivatives) For any $p\in G$, we have
\begin{equation}\label{newnondeg-prime}
|X_i\phi_0(p)|\gtrsim_G 1,\quad i=1,\cdots,k,
\end{equation}
\begin{equation}\label{It-1-again-lower-scale-prime}
C_0^{-1}\sqrt{M^2+1}\le |X_i(\psi+\phi_0)(p)|\le C_0 \sqrt{M^2+1},\quad i=1,\cdots,k,
\end{equation}
and
\begin{equation}\label{It-2-again-lower-scale-prime}
\left|\bigwedge_{m=1}^i X_{j_m}(\psi+\phi_0)(p)\right|\ge C_0^{-i^2-i+2} \prod_{m=1}^i \left|X_{j_m}(\psi+\phi_0)(p)\right|
\end{equation}
for $2\le i\le k$ and distinct $1\le j_1<\cdots<j_i\le k$.
\item (Locally free embedding) For any $p\in G$, we have
\begin{equation}\label{It-3-again-lower-scale-prime}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W(\psi+\phi_0)(p)\right|\ge C_0^{-2k^2-7k+3}A^{-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{m=1}^i \left|X_{j_m}(\psi+\phi_0)(p)\right|.
\end{equation}
\item (H\"older regularity at scale $1$) We have
\begin{equation}\label{It-4-again-lower-scale-prime}
\|\nabla^2 (\psi+\phi_0)\|_{C^{s^2+s-1,{2/3}}}\le C_0.
\end{equation}
\end{enumerate}
Note that $\phi_{(\ge 0)}=\psi+\phi_0$. To verify that the larger family of maps $\{\phi_m\}_{0\le m\le M_2}$ satisfies the properties \eqref{Finite-1} to \eqref{Finite-6}, we need only verify these properties for $m=0$. But, for $m=0$, \eqref{Finite-1} follows directly from \eqref{newholder-prime} and \eqref{Finite-4} follows from \eqref{newnondeg-prime}, while \eqref{Finite-2}, \eqref{Finite-3}, \eqref{Finite-5}, and \eqref{Finite-6} are precisely conditions \eqref{It-4-again-lower-scale-prime}, \eqref{neworthog-prime}, \eqref{It-2-again-lower-scale-prime}, and \eqref{It-3-again-lower-scale-prime}, respectively.
\end{proof}
By taking the limit $M_1\to-\infty$, $M_2\to\infty$, we now obtain a full set of lacunary maps. (See also Theorem 4.1 of \cite{tao2018embedding} for an analogous statement for $\bbh^3$.)
\begin{theorem}[Maps oscillating at lacunary scales]\label{lacunary}
Let $0<\varepsilon\le 1/A$. Then one can find a map $\phi_m:G\to \bbr^{128\cdot 23^{n_h}}$ for each integer $m$ obeying the following bounds:
\begin{itemize}
\item (Smoothness at scale $A^m$) For all integers $m$, one has
\begin{equation*}
\|\phi_m\|_{C^{s^2+s}_{A^m}}\lesssim_{C_0} A^m.
\end{equation*}
In particular, we have
\begin{equation}\label{D-5}
X_{r,i}\phi_m(p)=O_{C_0}(A^{-m(r-1)}),\quad \mbox{for all }r,i.
\end{equation}
\item (Orthogonality) For all integers $m$, one has
\begin{equation*}
\sum_{m'>m} A^{-\varepsilon(m'-m)}B(\phi_m,\phi_{m'})=0
\end{equation*}
identically on $G$. (By \eqref{D-5} this sum is absolutely convergent.)
\item (Non-degeneracy and immersion) For all integers $m$ and all $p\in G$, one has
\begin{equation*}
|X_i\phi_m(p)|\gtrsim_{C_0}1
\end{equation*}
and
\begin{equation}\label{D-8}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}W\phi_{(\ge m)}(p)\right|\gtrsim_{C_0} \prod_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}\left|W\phi_{(\ge m)}(p)\right|,
\end{equation}
where
\[
\phi_{(\ge m)}(p)=\sum_{m'\ge m} A^{-\varepsilon(m'-m)}\phi_{m'}.
\]
\end{itemize}
\end{theorem}
\begin{proof}
For each $M\in\bbn$, we can apply Proposition \ref{FiniteIteration} to find $\phi^M_m:G\to\bbr^{128\cdot 23^{n_h}}$ for $-M\le m\le M$ obeying the following bounds, where $\phi^M_{(\ge m)}:G\to\bbr^{128\cdot 23^{n_h}}$ is defined by
\[
\phi^M_{(\ge m)}\coloneqq \sum_{m\le m'\le M}A^{-\varepsilon(m'-m)}\phi^M_{m'}.
\]
\begin{enumerate}
\item (Smoothness at scale $A^m$) For all $-M\le m\le M$, we have
\begin{equation}\label{D-20}
\|\phi^M_m\|_{C_{A^m}^{s^2+s+1}}\le C_0 A^m.
\end{equation}
\item (Orthogonality) One has, for all $-M\le m\le M$,
\begin{equation*}
\sum_{m'>m}A^{-\varepsilon(m'-m)}B(\phi^M_m,\phi^M_{m'})=0.
\end{equation*}
\item (Non-degeneracy) For all $p\in G$ and $-M\le m\le M$, we have the estimates
\begin{equation*}
|X_i\phi^M_m(p)|\ge C_0^{-1},\quad i=1,\cdots,k,
\end{equation*}
\begin{equation}\label{D-23}
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}W\phi^M_{(\ge m)}(p)\right|\ge C_0^{-2k^2-7k+3}A^{-m\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{i=1}^k|X_i\phi^M_{(\ge m)}(p)|.
\end{equation}
\end{enumerate}
From \eqref{D-20} and \eqref{D-23} we see that
\[
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}W\phi^M_{(\ge m)}(p)\right|\ge C_0^{-2k^2-7k+3-\sum_{j=2}^s (j-1)\binom{n+j-1}{j}}\prod_{\substack{W=X_{r_1,j_1}\cdots X_{r_i,j_i} \\ 1\le i\le s, (r_1,j_1)\preceq \cdots \preceq (r_{i},j_{i})}}|W\phi^M_{(\ge m)}(p)|.
\]
For each $m\in \bbz$, the sequence $\{\phi_m^M\}_{M\ge |m|}$ is bounded in the $C_{A^m}^{s^2+s+1}$ norm, so by the Arzel\'a--Ascoli theorem and a diagonal argument, one can find a subsequence $\{M_k\}$ such that, for every $m\in \bbz$, $\{\phi_m^{M_k}\}_{M_k\ge |m|}$ converges locally in the $C^{s^2+s}$ topology, say to $\phi_m$. It now readily follows that these $\phi_m$ satisfy the above properties.
\end{proof}
Once we have this lacunary family guaranteed by Theorem \ref{lacunary}, we can construct a function $\Phi_1:G\to \bbr^{128\cdot 23^{n_h}}$ as
\[
\Phi_1(p)\coloneqq \sum_{m=-\infty}^\infty A^{-\varepsilon m}(\phi_m(p)-\phi_m(0)).
\]
Then $\Phi_1$ is `almost' a bi-Lipschitz embedding of $(G,d_{G}^{1-\varepsilon})$ into $\bbr^{128\cdot 23^{n_h}}$:
\begin{proposition}\label{AlmostLipschitz}
The map $\Phi_1:G\to \bbr^{128\cdot 23^{n_h}}$ satisfies the following estimates.
\begin{enumerate}
\item (H\"older upper bound)
\begin{equation}\label{partial-ub}
|\Phi_1(p)-\Phi_1(p')|\lesssim_A \varepsilon^{-1/2} d_{G}(p,p')^{1-\varepsilon},\quad \forall p,p'\in G.
\end{equation}
\item (Partial H\"older lower bound) For $p,p'\in G$ so that $A^{n_0}A^{-1/(s+1)}\le d_G(p,p')\le 2A^{n_0}A^{-1/(s+1)}$ for some integer $n_0$, we have
\begin{equation}\label{partial-lb}
|\Phi_1(p)-\Phi_1(p')|\gtrsim_A d_{G}(p,p')^{1-\varepsilon}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}
\item Let $p,p'\in G$. By translating and rescaling, we may assume $p'=0$ and $A^{-1}\le d_G(p,0)\le 1$. We introduce the low-frequency component
\[
\Psi(q)\coloneqq \sum_{m=0}^\infty A^{-\varepsilon m}(\phi_m(q)-\phi_m(0)),\quad q\in G.
\]
Then
\begin{equation}\label{lowfreq-approx}
\Phi_1(q)=\Psi(q)+O_{C_0}(A^{-1}),
\end{equation}
so it will be enough to show
\[
|\Psi(p)|\lesssim_{A} \varepsilon^{-1/2}.
\]
As $d_G(p,0)\le 1$, there exists a horizontal curve $\gamma$ in $G$ from $0$ to $p$ of length $\le 1$. Therefore, we have
\[
|\Psi(p)|=|\Psi(p)-\Psi(0)|\le 1\cdot \||\nabla \Psi|\|_{L^\infty(G)}.
\]
By the orthogonality statement of Theorem \ref{lacunary}, we have
\[
|X_i \Psi|=|\sum_{m=0}^\infty A^{-\varepsilon m}X_i\phi_m|=\left(\sum_{m=0}^\infty A^{-2\varepsilon m}|X_i\phi_m|^2\right)^{1/2}\asymp_{C_0}M,\quad i=1,\cdots, k,
\]
where
\[
M\coloneqq \left(\sum_{n=0}^\infty A^{-2\varepsilon n}\right)^{1/2}=\left(\frac{1}{1-A^{-2\varepsilon }}\right)^{1/2}\asymp \frac{1}{\sqrt{\varepsilon \log A}}\lesssim_A \varepsilon^{-1/2},
\]
so we conclude
\begin{equation}\label{A_hierarchy-6}
|\Psi(p)|\le \|\nabla \Psi\|_{L^\infty \ell^2}\lesssim_{C_0} M\lesssim_{A} \varepsilon^{-1/2},
\end{equation}
as desired (we just used the hierarchy of choosing $A$ after $C_0$).
\item
Let $p,p'\in G$ be so that $A^{n_0}A^{-1/(s+1)}\le d_G(p,p')\le 2A^{n_0}A^{-1/(s+1)}$ for some integer $n_0$. Again, by translating and rescaling, we may assume $p'=0$ and $n_0=0$, i.e. $A^{-1/(s+1)}\le d_G(p,0)\le 2A^{-1/(s+1)}$. Writing $p=\exp\left(\sum_{r=1}^s\sum_{i=1}^{k_r}x_{r,i}X_{r,i}\right)$, we have
\[
\sum_{r=1}^s\sum_{j=1}^{k_r}|x_{r,j}|^{1/r}\asymp_G d_G(p,0)\asymp A^{-1/(s+1)}.
\]
Equivalently, we have $|x_{r,j}|\lesssim_G A^{-r/(s+1)}$ for all $r,j$, and there exists some pair $(r,j)$ such that $|x_{r,j}|\asymp_G A^{-r/(s+1)}$.
By Taylor expansion
\[
\Psi(p)=\Psi(0)+\sum_{m=1}^s \frac{1}{m!}\left(\sum_{r=1}^s\sum_{j=1}^{k_r}x_{r,j}X_{r,j}\right)^m\Psi(0)+O_{C_0}(A^{- 1})
\]
(note that this is where we used the $C^{s(s+1)}$-regularity of $\Psi$), and in light of \eqref{lowfreq-approx} and $\Psi(0)=0$, we have
\[
\Phi_1(p)=\sum_{m=1}^s \frac{1}{m!}\left(\sum_{r=1}^s\sum_{j=1}^{k_r}x_{r,j}X_{r,j}\right)^m\Psi(0)+O_{C_0}(A^{- 1}).
\]
This expression contains many terms of the form $X_{r_1,j_1}\cdots X_{r_m,j_m}\Psi$, where the $(r_1,j_1),\cdots,(r_m,j_m)$ are arbitrary with $m\le s$ and need not be ordered. In order to use the freeness property \eqref{D-8}, we must modify the above Taylor expansion formula so that the only differential operators acting on $\Psi$ are those of the form $X_{r_1,j_1}\cdots X_{r_m,j_m}$ with $(r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})$.
This modification is possible once we note that for any permutation $\pi$ of $\{1,\cdots,m\}$ we can express $X_{r_1,j_1}\cdots X_{r_m,j_m}-X_{r_{\pi(1)},j_{\pi(1)}}\cdots X_{r_{\pi(m)},j_{\pi(m)}}$ as a linear combination of differential operators $X_{r'_1,j'_1}\cdots X_{r'_{m'},j'_{m'}}$ where $\sum_{i=1}^{m'}r'_i=\sum_{i=1}^m r_i$ and $(r'_1,j'_1)\preceq \cdots \preceq (r'_{m'},j'_{m'})$. This can be proven by a simple induction argument on $m$ using the fact that $[X_{r,j},X_{r',j'}]\in V_{r+r'}$ is a linear combination of $X_{r+r',i}$ for $i=1,\cdots,k_{r+r'}$ if $r+r'\le s$, and $[X_{r,j},X_{r',j'}]=0$ if $r+r'>s$. Applying this fact to the above Taylor expansion formula and keeping track of the degrees, we obtain the following modified Taylor expansion formula:
\begin{align}\label{modified-Taylor}
\begin{aligned}
\Psi(p)=&\sum_{r=1}^s\sum_{j=1}^{k_r}(x_{r,j}+p_{r,j})X_{r,j}\Psi(0)\\
&+ \sum_{m=2}^s\sum_{(r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}p_{(r_1,j_1), \cdots , (r_{m},j_{m})}X_{r_1,j_1}\cdots X_{r_m,j_m}\Psi(0)+O_{C_0}(A^{- 1})
\end{aligned}
\end{align}
where $p_{r,j}$ is a homogeneous polynomial of degree $r$ in which each monomial is a product of at least two factors, each of the form $x_{r',j'}$ with $r'<r$ (recall that we define the homogeneous degree by assigning weight $r$ to $x_{r,j}$), and $p_{(r_1,j_1), \cdots , (r_{m},j_{m})}$ is a homogeneous polynomial of degree $\sum_{i=1}^m r_i$.
By the freeness property \eqref{D-8}, each term in \eqref{modified-Taylor} (except for the error term) may serve as a lower bound for the entire sum, up to multiplicative constants. Thus, it suffices to show that there exists $r,j$ such that $x_{r,j}+p_{r,j}$ has non-negligible size.
More precisely, the freeness property \eqref{D-8} tells us that
\[
\left|\bigwedge_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}W\Psi(p)\right|\asymp_{C_0} \prod_{\substack{W=X_{r_1,j_1}\cdots X_{r_m,j_m} \\ 1\le m\le s, (r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}}\left|W\Psi(p)\right|,
\]
which immediately gives us, for each $(r_0,j_0)$, the following control on the main term of the Taylor expansion:
\begin{align*}
&\left|\sum_{r=1}^s\sum_{j=1}^{k_r}(x_{r,j}+p_{r,j})X_{r,j}\Psi(0)+ \sum_{m=2}^s\sum_{(r_1,j_1)\preceq \cdots \preceq (r_{m},j_{m})}p_{(r_1,j_1), \cdots , (r_{m},j_{m})}X_{r_1,j_1}\cdots X_{r_m,j_m}\Psi(0)\right|\\
&\gtrsim_{C_0} \left|(x_{r_0,j_0}+p_{r_0,j_0})X_{r_0,j_0}\Psi(0)\right|.
\end{align*}
So it remains to single out a pair $(r_0,j_0)$ such that the right-hand side is large enough. Indeed, recall that we have $|x_{r,j}|\lesssim_G A^{-r/(s+1)}$ for all $r,j$, and $|x_{r,j}|\asymp_G A^{-r/(s+1)}$ for some pair $(r,j)$. Therefore, there exists some $(r_0,j_0)$ such that
\begin{equation}\label{A_hierarchy-7}
|x_{r,j}|< A^{-r-0.5} \mbox{ for all } r<r_0,1\le j\le k_r,\quad |x_{r_0,j_0}|\ge A^{-r_0-0.5}
\end{equation}
(we have just used the hierarchy that $A$ is chosen after $G$). Then, as $p_{r_0,j_0}$ is homogeneous of degree $r_0$ and consists of monomials that are products of at least two factors, we must have
\begin{equation}\label{A_hierarchy-8}
|p_{r_0,j_0}|\lesssim_G A^{-r_0-1}, \quad \mbox{and therefore}\quad |p_{r_0,j_0}|\le \frac 12 A^{-r_0-0.5}
\end{equation}
(we have again just used the hierarchy that $A$ is chosen after $G$).
Hence, $|x_{r_0,j_0}+p_{r_0,j_0}|\ge |x_{r_0,j_0}|-|p_{r_0,j_0}|\ge \frac 12 A^{-r_0-0.5}$, and we see that $\left|(x_{r_0,j_0}+p_{r_0,j_0})X_{r_0,j_0}\Psi(0)\right|\gtrsim_{C_0}A^{-r_0-0.5}$. In conclusion, we have $|\Psi(p)|\gtrsim_{C_0}A^{-r_0-0.5}$, or
\begin{equation}\label{A_hierarchy-9}
|\Psi(p)|\gtrsim_{A}1
\end{equation}
(we just used the hierarchy of choosing $A$ after $C_0$). This completes the proof.
\end{enumerate}
\end{proof}
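The asymptotics for $M$ used in the proof above, namely $M^2=(1-A^{-2\varepsilon})^{-1}\asymp (\varepsilon\log A)^{-1}$ for small $\varepsilon$, can be checked numerically (the values of $A$ and $\varepsilon$ below are hypothetical):

```python
import math

# M^2 = sum_{n>=0} A^(-2*eps*n) = 1/(1 - A^(-2*eps)) ~ 1/(2*eps*ln A)
# for small eps, so M is comparable to (eps * log A)^(-1/2).
A = 2.0 ** 10            # hypothetical dyadic A
for eps in [1e-2, 1e-3, 1e-4]:
    M2 = 1.0 / (1.0 - A ** (-2.0 * eps))
    approx = 1.0 / (2.0 * eps * math.log(A))
    assert 0.9 < M2 / approx < 1.1   # ratio tends to 1 as eps -> 0
```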
Now, with Proposition \ref{AlmostLipschitz} in hand, we are ready to prove the main theorem.
\begin{proof}
Recall that the map $\Phi_1:G\to \bbr^{128\cdot 23^{n_h}}$ satisfies the H\"older upper bound \eqref{partial-ub} and the partial H\"older lower bound \eqref{partial-lb}, and that $A$ is a dyadic number, say $A=2^a$, $a\in \mathbb{N}$. By precomposing $\Phi_1$ with the scaling maps $\delta_{2^{-m+1}}$ and rescaling by $2^{(m-1)(1-\varepsilon)}$, $m=1,\cdots,a$, we obtain mappings $\Phi_m=2^{(m-1)(1-\varepsilon)}\Phi_1\circ \delta_{2^{-m+1}}:G\to \bbr^{128\cdot 23^{n_h}}$ that satisfy the same H\"older upper bound \eqref{partial-ub}:
\begin{equation}\label{partial-ub-full}
|\Phi_m(p)-\Phi_m(p')|\lesssim_A \varepsilon^{-1/2} d_{G}(p,p')^{1-\varepsilon},\quad \forall p,p'\in G,
\end{equation}
and a different partial H\"older lower bound \eqref{partial-lb}:
\begin{equation}\label{partial-lb-full}
|\Phi_m(p)-\Phi_m(p')|\gtrsim_A d_{G}(p,p')^{1-\varepsilon},
\end{equation}
for $p,p'\in G$ so that $2^{m-1} A^{n_0}A^{-1/(s+1)}\le d_G(p,p')\le 2^m A^{n_0}A^{-1/(s+1)}$ for some integer $n_0$.
We now obtain the full embedding $\Phi:G\to \bbr^{128\cdot 23^{n_h}\cdot a}$ by directly concatenating the mappings $\Phi_1,\cdots,\Phi_a$:
\begin{equation}\label{fullmapping}
\Phi(p)\coloneqq \Big(\Phi_m(p)\Big)_{m=1}^{a},\quad p\in G.
\end{equation}
It remains to observe
\[
d_G(p,p')^{1-\varepsilon}\lesssim_A |\Phi(p)-\Phi(p')| \lesssim_A \varepsilon^{-1/2} d_G(p,p')^{1-\varepsilon},\quad p,p'\in G.
\]
The upper bound follows from \eqref{partial-ub-full}, and the lower bound follows by observing that for any given pair $p,p'\in G$, \eqref{partial-lb-full} applies to at least one coordinate.
\end{proof}
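The final observation rests on the fact that, since $A=2^a$, the windows $[2^{m-1}A^{n_0}A^{-1/(s+1)},\,2^{m}A^{n_0}A^{-1/(s+1)}]$ with $1\le m\le a$ and $n_0\in\mathbb{Z}$ cover all positive distances, so every pair $p,p'$ is handled by some coordinate $\Phi_m$. A short numeric illustration (the values of $a$, $s$, and the sample distances are hypothetical):

```python
import math

a = 5                       # hypothetical exponent, A = 2^a
A = 2 ** a
shift = A ** (-1.0 / 3.0)   # hypothetical A^(-1/(s+1)) with s = 2

for d in [0.003, 0.7, 12.0, 5000.0]:
    # find m in {1,...,a} and integer n0 with
    # 2^(m-1) * A^n0 * shift <= d <= 2^m * A^n0 * shift
    j = math.floor(math.log2(d / shift))
    n0, r = divmod(j, a)    # floor division: j = a*n0 + r with 0 <= r < a
    m = r + 1
    assert 1 <= m <= a
    assert 2 ** (m - 1) * A ** n0 * shift <= d <= 2 ** m * A ** n0 * shift
```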
\bibliographystyle{acm}
\section{Introduction}
Let $G\leq \GL(d,\mathbb{Z})$ be a finite group.
We consider orbit polytopes
$\conv(Gz)$ of integral vectors $z\in \mathbb{Z}^d$,
that is, the convex hull of an orbit of a point
$z$ with integer coordinates.
We call $z$ a \emph{core point} for $G$ when
the vertices are the only integral vectors in the orbit polytope
$\conv(Gz)$.
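For a concrete instance of this definition, consider the symmetric group acting on $\mathbb{Z}^d$ by coordinate permutations; by Rado's theorem, membership in the resulting orbit polytope (a permutohedron) reduces to a majorization test, so the core point property can be checked by brute force over the bounding box. A small illustrative sketch in Python:

```python
from itertools import permutations, product

def in_perm_orbit_polytope(x, z):
    # Rado's theorem: x lies in conv{ g z : g a coordinate permutation }
    # iff sum(x) == sum(z) and x is majorized by z.
    if sum(x) != sum(z):
        return False
    xs = sorted(x, reverse=True)
    zs = sorted(z, reverse=True)
    return all(sum(xs[:k]) <= sum(zs[:k]) for k in range(1, len(z)))

def is_core_point(z):
    # brute force over the bounding box [min z, max z]^d
    orbit = set(permutations(z))
    box = product(range(min(z), max(z) + 1), repeat=len(z))
    return all(p in orbit for p in box if in_perm_orbit_polytope(p, z))

assert is_core_point((1, 0, 0))      # the vertices are the only lattice points
assert not is_core_point((2, 1, 0))  # (1, 1, 1) is a non-vertex lattice point
```

Here $(1,0,0)$ is a core point, while $(2,1,0)$ is not, since the non-vertex lattice point $(1,1,1)$ lies in its orbit polytope.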
Core points were introduced in \cite{BoediHerrJoswig13,herrrehnschue13}
in the context of convex integer optimization,
in order to develop new techniques to exploit symmetries.
Core points are relevant to symmetric convex integer optimization,
since a $G$-symmetric convex set contains an
integer vector if and only if it contains a core point for $G$.
So when a $G$-invariant convex integer optimization problem has a
solution,
then there is a core point attaining the optimal value.
As the set of core points is itself $G$-symmetric,
it even suffices to consider only one core point from each $G$-orbit.
In this way, solving a $G$-invariant convex integer optimization problem
can be reduced to a considerably smaller set of integral vectors.
Furthermore, since core points are known to be close to some
$G$-invariant subspace~\cite[Theorem~3.13]{rehn13diss}
\cite[Theorem~9]{herrrehnschue15},
one can for example use them to
add additional linear or quadratic constraints to a given
symmetric problem (see, for instance, \cite[Section~7.3]{rehn13diss}).
Previous work on core points mainly considered groups
for which there are only finitely many core points up to
a natural equivalence relation called
\emph{translation equivalence}
\cite{BoediHerrJoswig13,herrrehnschue13,herrrehnschue15}.
For some of these groups,
even a naive enumeration approach is sufficient
to beat state-of-the-art commercial solvers.
Moreover, when there are only finitely many core points up to
translation equivalence,
then one can parametrize the core points of $G$ in a natural way,
and thereby obtain a beneficial reformulation of a $G$-invariant
problem~\cite{herrrehnschue13}.
This technique was used to solve a previously open problem from
the MIPLIB 2010~\cite{KochEtAl11}
(see~\cite{herrrehnschue13}).
More elaborate algorithms taking advantage of core points,
possibly combined with classical techniques from integer optimization,
have yet to be developed.
In this paper we extend the list of groups for which
core points can be parametrized.
This is achieved by introducing a new equivalence relation
for core points,
which is coarser than the translation equivalence previously used.
It turns out that this new equivalence relation
not only helps to classify core points,
but also suggests a new way to transform $G$-invariant
convex integer optimization problems in a natural way into possibly
simpler equivalent ones.
Knowing a group $G$ of symmetries,
elements of its normalizer in
$\GL(d,\mathbb{Z})$
can be used to transform a
$G$-invariant convex integer optimization problem
linearly into an equivalent $G$-invariant problem.
As we show in \cref{sec:app}
for the case of integer linear problem instances,
the transformed optimization problems are sometimes
substantially easier to solve.
To apply this technique in general, one needs to know how to find
a good transformation from the normalizer
which yields an easy-to-solve transformed problem.
While this is easy in some cases as in our examples,
we do not yet understand satisfactorily how to find
good transformations from the normalizer in general.
In the following, we write
\[
\Fix(G) = \{ v\in \mathbb{R}^d
\mid
gv=v \text{ for all } g\in G
\}
\]
for the fixed space of $G$ in $\mathbb{R}^d$.
When $z$ is a core point and $t\in \Fix(G) \cap \mathbb{Z}^d$,
then $z+t$ is another core point.
We call the core points $z$ and $z+t$ \emph{translation equivalent}.
Herr, Rehn and Schürmann~\cite{herrrehnschue15} consider the question
of whether there are finitely or infinitely many core points up to
translation equivalence in the case where
$G$ is a permutation group acting by permuting coordinates.
Their methods can be used to show that there are only finitely many
core points up to translation
when $\mathbb{R}^d / \Fix(G)$ has no $G$-invariant subspaces
other than the trivial ones~\cite[Theorem~3.13]{rehn13diss}.
They conjectured that in all other cases,
there are infinitely many core points up to translation.
This has been proved in special cases but is open in general.
In this paper, we consider a coarser equivalence relation,
where we allow
to multiply core points with invertible integer matrices
$S\in \GL(d,\mathbb{Z})$ which normalize the subgroup $G$.
Thus two points $z$ and $w$ are called
\emph{normalizer equivalent} if
$w= Sz +t$, where $S$ is an element of the
\emph{normalizer} of $G$ in $\GL(d,\mathbb{Z})$
(in other words, $S^{-1} G S = G$),
and $t\in \Fix(G)\cap \mathbb{Z}^d$.
In \cref{t:finitecrit}, we will determine
when these coarser equivalence classes contain infinitely many
points up to translation equivalence, in terms of the
decomposition into irreducible invariant subspaces.
For example, if $\mathbb{R}^d$ has an \emph{irrational}
invariant subspace $U\leq \mathbb{R}^d$
(that is, a subspace $\{0\}\neq U\leq \mathbb{R}^d$ such that
$U \cap \mathbb{Z}^d = \{0\}$),
then each integer point $z$ with nonzero projection onto $U$
is normalizer equivalent to infinitely many points,
which are not translation equivalent.
This yields another proof of the result of
Herr, Rehn and Schürmann~\cite[Theorem~32]{herrrehnschue15}
that there are infinitely many core points up to translation,
when there is an irrational invariant subspace.
In \cref{t:QI_finitelycorep}, we prove the following:
Suppose that $G\leq\Sym{d}$ is a transitive permutation group acting on
$\mathbb{R}^d$ by permuting coordinates.
Suppose that $\Fix(G)^{\perp}$ contains no rational
$G$-invariant subspace other than $\{0\}$ and
$\Fix(G)^{\perp}$ itself.
(A subspace of $ \mathbb{R}^d$ is rational if
it has a basis contained in $\mathbb{Q}^d$.)
Such a group $G$ is called a \emph{QI-group}.
Then there are only finitely many core points up to normalizer
equivalence.
For example, this is the case when $d=p$ is a prime number
(and $G\leq \Sym{p}$ is transitive).
In the case that the group is not
$2$-homogeneous, there are infinitely many
core points up to translation,
but these can be obtained from finitely many by multiplying
with invertible integer matrices from the normalizer.
The paper is organized as follows.
In \cref{sec:defs},
we introduce different equivalence relations
for core points and make some elementary observations.
\Cref{sec:orders} collects some elementary
properties of orders in semisimple algebras.
In \cref{sec:finiteequiv},
we determine when the normalizer equivalence classes
contain infinitely many points up to translation equivalence.
In \cref{sec:qi-groups},
we prove the aforementioned result on QI-groups.
\Cref{sec:finiteequiv,sec:qi-groups}
can mostly be read independently from one another.
Finally, in \cref{sec:app} we give an example to show how
normalizer equivalence can be applied
to integer convex optimization problems with suitable symmetries.
\section{Equivalence for core points}
\label{sec:defs}
Let $V$ be a finite-dimensional vector space
over the real numbers $\mathbb{R}$
and $G$ a finite group acting linearly on $V$.
\begin{defi}
An \defemph{orbit polytope} (for $G$)
is the convex hull of the $G$-orbit of a point $v\in V$.
It is denoted by
\[ \orbpol(G,v) = \conv\{gv\mid g\in G\}.\]
\end{defi}
Let $\Lambda\subseteq V$ be a full
$\mathbb{Z}$-lattice in $V$, that is,
the $\mathbb{Z}$-span of an
$\mathbb{R}$-basis of $V$.
Assume that $G$ maps $\Lambda$ onto itself.
\begin{defi}\cite{herrrehnschue13}
An element $z\in \Lambda$ is called a
\defemph{core point} (for $G$ and $\Lambda$) if
the only lattice points in $\orbpol(G,z)$ are its vertices,
that is, the elements in the orbit $Gz$.
In other words, $z$ is a core point if
\[ \orbpol(G,z)\cap \Lambda = Gz.\]
\end{defi}
The barycenter
\[ \frac{1}{\card{G}}
\sum_{g\in G} gv \in \orbpol(G,v)
\]
is always fixed by $G$.
If $\Fix_V(G)$, the set of vectors in $V$
fixed by all $g\in G$, consists only of $0$,
then the barycenter of each orbit polytope is
the zero vector.
In this case, only the zero vector is a core point
\cite[Remark~3.7, Lemma~3.8]{rehn13diss}.
More generally, the map
\[ e_1 =
\frac{1}{\card{G}}
\sum_{g\in G} g
\]
gives the projection from $V$ onto the fixed space
$\Fix_V(G)$~\cite[\S 2.6, Theorem~8]{SerreLRFG},
and thus yields a decomposition
$V = \Fix_V(G) \oplus \ker(e_1)$ into $G$-invariant
subspaces.
If this decomposition restricts to a decomposition
of the lattice,
$\Lambda = e_1\Lambda \oplus (\ker(e_1)\cap \Lambda)$,
then $e_1z\in \Lambda \cap \orbpol(G,z)$ for any $z\in \Lambda$,
and so $z$ can only be a core point for $G$
when $z$ is itself in the fixed space.
But in general, we do not have such a decomposition,
since the projection
$e_1\Lambda$ may not be contained in $\Lambda$.
An important class of examples where
$e_1\Lambda \not\subseteq \Lambda$
is given by transitive permutation groups $G \leq \Sym{d}$,
acting on $V=\mathbb{R}^d$ by permuting coordinates,
and where $\Lambda = \mathbb{Z}^d$.
The fixed space consists of the vectors with all entries
equal and is thus generated by the all ones vector
$\mathds{1}:=(1,1,\dotsc,1)^t$.
For $v=(v_1,\dotsc, v_d)^t$ we have
$e_1v = (\sum_i v_i)/d \cdot \mathds{1}$.
In particular, we see that
$e_1\Lambda$ contains all integer multiples
of $(1/d)\mathds{1}$.
We can think of $\Lambda$ as being partitioned into
\emph{layers}, where a layer consists
of all $z\in \Lambda$ with $z^t\mathds{1} = k$
(equivalently, $e_1z= (k/d)\mathds{1}$),
for a fixed integer~$k$.
Returning to general groups of integer matrices,
we claim that for each
$v\in e_1\Lambda $,
there are core points $z$ with
$e_1z=v$.
Namely, among all $z\in \Lambda$
with $e_1z=v$,
there are elements such that
$\sum_g\norm{gz}^2$ is minimal,
and these are core points.
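The minimal-norm claim above can be explored computationally in small cases. The following sketch (illustrative only, not part of the paper; a brute-force search over a finite window, which we assume is large enough) uses that for a permutation group every $g$ is an isometry, so $\sum_g\norm{gz}^2 = \card{G}\,\norm{z}^2$, and minimizing over a layer amounts to minimizing $\norm{z}^2$ subject to a fixed coordinate sum:

```python
# Illustrative sketch: find the integer points of minimal squared norm in the
# layer sum(z) = k.  For a transitive permutation group, every group element
# is an isometry, so these minimizers are the minimizers of sum_g ||gz||^2.
from itertools import product

def min_norm_layer_points(d, k, box=2):
    """Integer points of minimal squared norm with coordinate sum k,
    searched inside the window {-box, ..., box}^d (a heuristic choice)."""
    best, points = None, []
    for z in product(range(-box, box + 1), repeat=d):
        if sum(z) != k:
            continue
        n = sum(c * c for c in z)
        if best is None or n < best:
            best, points = n, [z]
        elif n == best:
            points.append(z)
    return best, points

best, pts = min_norm_layer_points(4, 2)
# For d = 4 and layer k = 2 the minimum squared norm is 2, attained exactly
# at the six 0/1-vectors with two ones.
```

By the claim in the text, these minimizers are core points for any transitive permutation group on four points.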
If $z$ is a core point and $b\in \Fix_{\Lambda}(G)$,
then $z+b$ is a core point, too,
because $\orbpol(G,z+b) = \orbpol(G,z) + b$.
Such core points should be considered as
\emph{equivalent}.
This viewpoint was adopted by
Herr, Rehn and Schürmann~\cite{herrrehnschue13,herrrehnschue15}.
In the present paper, we consider a coarser equivalence relation.
We write $\GL(\Lambda)$ for the invertible
$\mathbb{Z}$-linear maps $\Lambda \to \Lambda$.
Since $\Lambda$ contains a basis of $V$, we may view
$\GL(\Lambda)$ as a subgroup of $\GL(V)$.
(If $V= \mathbb{R}^d$ and $\Lambda= \mathbb{Z}^d$, then we can identify
$\GL(\Lambda)$ with $\GL(d,\mathbb{Z})$,
the set of matrices over $\mathbb{Z}$ with determinant $\pm 1$.)
By assumption, $G$ is a subgroup of $\GL(\Lambda)$.
We use the following notation from group theory:
The \defemph{normalizer} of a finite group $G$ in
$\GL(\Lambda)$ is the set
\[ \N_{\GL(\Lambda)}(G) :=
\{ S\in \GL(\Lambda) \mid
\forall g\in G\colon S^{-1} g S \in G
\}.
\]
The \defemph{centralizer} of $G$ in $\GL(\Lambda)$
is the set
\[ \C_{\GL(\Lambda)}(G) :=
\{ S\in \GL(\Lambda) \mid
\forall g\in G\colon
S^{-1} g S = g
\}.
\]
\begin{defi}\label{df:equiv}
Two points $z$ and $w$ are called
\defemph{normalizer equivalent}
if there is a point $b\in \Fix_{\Lambda}(G)$ and
an element $S$ in the normalizer $\N_{\GL(\Lambda)}(G)$
of $G$ in $\GL(\Lambda)$ such that
$w = Sz + b$.
The points are called \defemph{centralizer equivalent}
if $w= Sz+b$ with $S\in \C_{\GL(\Lambda)}(G)$
and $b\in \Fix_{\Lambda}(G)$.
Finally, we call two points $z$ and $w$
\defemph{translation equivalent}
when $w-z \in \Fix_{\Lambda}(G)$.
\end{defi}
Since $\C_{\GL(\Lambda)}(G)\subseteq \N_{\GL(\Lambda)}(G)$,
each normalizer equivalence class is a union
of centralizer equivalence classes,
and obviously each centralizer equivalence class
is a union of translation equivalence classes.
The definition is motivated by the following simple observation:
\begin{lemm}\label{l:equiv_core_pts}
If
\[ w = Sz +b
\quad \text{with}\quad
S \in \N_{\GL(\Lambda)}(G)
\quad \text{and} \quad
b \in \Fix_{\Lambda}(G),
\]
then
$x \mapsto Sx +b$ defines a bijection between
\[ \orbpol(G,z) \cap \Lambda
\quad \text{and} \quad
\orbpol(G,w) \cap \Lambda.
\]
In particular, $z$ is a core point for $G$
if and only if $w$ is a core point for $G$.
\end{lemm}
\begin{proof}
The affine bijection
$ x \mapsto Sx + b $ maps the orbit polytope
$\orbpol(G,z)$ to another polytope.
The vertex $gz$ is mapped to the vertex
\[
S gz + b = (SgS^{-1}) Sz + b = hSz +b = h(Sz+b) = hw,
\]
where $h= SgS^{-1}\in G$ (since $S$ normalizes $G$).
The second to last equality follows as $b$ is fixed by $G$.
As $SgS^{-1}$ runs through $G$ as $g$ does, it follows that
$x\mapsto Sx+b$ maps the orbit $Gz$ to the
orbit $Gw$ and thus maps the orbit polytope
$\orbpol(G,z)$ to the orbit polytope $\orbpol(G,w)$.
Since $x\mapsto Sx+b$ also maps $\Lambda$ onto itself,
the result follows.
\end{proof}
Notice that a point $w$ is equivalent to $z=0$
(for any of the equivalence relations
in \cref{df:equiv})
if and only if
$w = S\cdot 0 +b = b\in \Fix_{\Lambda}(G)$.
Any $w\in \Fix_{\Lambda}(G)$ is a core point.
We call these points the \defemph{trivial core points}.
In the important example of transitive permutation groups,
the fixed space is one-dimensional.
More generally, when $V$ is spanned linearly by some
orbit~$Gz$, then $\Fix_V(G)$ is spanned by $e_1z$ and thus
$\dim(\Fix_V(G)) \leq 1$.
\begin{remark}\label{rem:nrmtrans}
Suppose that $\Fix_V(G)$ has dimension~$1$.
Then there is at most one $w\in \N_{\GL(\Lambda)}(G)z$
such that $w\neq z$
and $w$ is translation equivalent to $z$.
\end{remark}
\begin{proof}
The elements of $\N_{\GL(\Lambda)}(G)$ map
$\Fix_{\Lambda}(G)$ onto itself and thus act on
$\Fix_V(G)$ as $\pm 1$.
Let $S\in \N_{\GL(\Lambda)}(G)$.
Suppose $w=Sz$ and $z$ are translation equivalent,
so that $Sz-z=b\in \Fix_{\Lambda}(G)$.
Then $b=e_1b =e_1Sz-e_1z = Se_1 z - e_1 z
= \pm e_1 z -e_1 z$
and thus either $Sz=z$ or $Sz = z - 2e_1 z$.
(The latter case can only occur when
$2e_1z\in \Lambda$.)
\end{proof}
In particular, if the orbit
$\N_{\GL(\Lambda)}(G)z$ is infinite,
then the normalizer equivalence class of
a nontrivial core point~$z$
contains infinitely many translation equivalence
classes.
Also notice that when $z$ is a nontrivial core point,
then $e_1z$ cannot be a lattice point.
Herr, Rehn, and
Schürmann~\cite{herrrehnschue15,Herr13_Diss,rehn13diss}
considered the question of whether the set of core points
up to translation is finite or infinite
(in the case where $G$ acts by permuting coordinates).
We may ask the same question about core points up to
normalizer equivalence as defined here.
It is also of interest whether these larger equivalence classes
contain finitely or infinitely many points
up to translation.
\begin{example}
Let $G= \Sym{d}$, the symmetric group on $d$ elements,
acting on $\mathbb{R}^d$ by permuting coordinates,
and $\Lambda=\mathbb{Z}^d$.
We identify $G$ with the group of all permutation matrices.
For this group, Bödi, Herr and
Joswig~\cite{BoediHerrJoswig13} have shown that every
core point is translation equivalent to a vector
with all entries $0$ or $1$.
(Conversely, these vectors are obviously core points.)
One can show that the normalizer of the group $G$ of
all permutation matrices in $\GL(d,\mathbb{Z})$ is generated
by $-I$ and the group $G$ itself.
As $G$ is transitive on the subsets of $\{1,\dotsc,d\}$
of size $k$,
all $0/1$-vectors with fixed number $k$ of $1$'s are
normalizer equivalent.
A vector $z$ with $k$ ones and $d-k$ zeros is also
normalizer equivalent to the vector
$-z + \mathds{1}$ with $d-k$ ones and $k$ zeros.
Thus up to normalizer equivalence, there are only
$\lfloor d/2 \rfloor +1$ core points.
\end{example}
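For the full symmetric group, membership in an orbit polytope can be tested by Rado's classical majorization criterion (an assumption imported here, not stated in the paper): $y\in\orbpol(\Sym{d},z)$ if and only if the coordinate sums agree and every partial sum of the decreasingly sorted entries of $y$ is at most the corresponding partial sum for $z$. A brute-force core-point test sketched in Python:

```python
# Illustrative sketch using Rado's majorization criterion (an assumption not
# proved in the text): conv(Sym(d) * z) consists of the points y with
# sum(y) = sum(z) that are majorized by z.
from itertools import product

def in_perm_polytope(y, z):
    """Rado's criterion for membership in the orbit polytope of Sym(d)."""
    if sum(y) != sum(z):
        return False
    ty = tz = 0
    for a, b in zip(sorted(y, reverse=True), sorted(z, reverse=True)):
        ty += a
        tz += b
        if ty > tz:
            return False
    return True

def is_core_point_symmetric(z):
    """Brute-force core-point test for the full symmetric group: every
    integer point of the orbit polytope must be a permutation of z.
    (All coordinates of the polytope lie between min(z) and max(z).)"""
    lo, hi = min(z), max(z)
    for y in product(range(lo, hi + 1), repeat=len(z)):
        if in_perm_polytope(y, z) and sorted(y) != sorted(z):
            return False
    return True

assert is_core_point_symmetric((1, 1, 0, 0))   # 0/1-vectors are core points
assert not is_core_point_symmetric((2, 0, 0))  # (1,1,0) lies in the polytope
```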
\begin{example}\label{ex:cycgen}
Let $G=C_d = \erz{(1,2,\dotsc,d)}$ be a cyclic group,
again identified with a matrix group
which acts on $\mathbb{R}^d$
by permuting the coordinates cyclically.
For $d=4$ we have a finite normalizer
(as we will see in \cref{sec:finiteequiv}) but infinitely
many core points up to normalizer or translation equivalence:
for example, all the points
$(1+m,-m,m,-m)^t$, $m\in \mathbb{Z}$,
are core points for $C_4$~\cite[Example~26]{herrrehnschue15}.
If $d=p$ is prime, then we will see that there are only
finitely many core points up to normalizer equivalence,
but for $p\geq 5$ the normalizer is infinite and there
are infinitely many core points up to translation equivalence.
(See \cref{ex:c5} below.)
For $d=8$ (say),
the normalizer is infinite \emph{and} there are infinitely many
core points up to normalizer equivalence.
Namely, let $b_1\in \mathbb{R}^8$ be the first standard basis vector
and let $v\in \mathbb{R}^8$ be the vector with entries
alternating between $1$ and $-1$.
Then the points
$b_1 + m v$ for $m\in \mathbb{Z}$ are
core points~\cite[Theorem~30]{herrrehnschue15}
(the construction principle here is the same as above in
the case $d=4$).
The circulant
$8\times 8$-matrix $S$ with
first row $(2,1,0,-1,-1,-1,0,1)$
is contained in the centralizer of $G$ and has infinite order.
Since $S$ is symmetric and $Sv=v$, we have
$v^t S^k b_1 = v^t b_1 =1$ for all $k\in \mathbb{Z}$ and thus
the vectors $S^kb_1 + mv $ are all different for different
pairs $(k,m)\in \mathbb{Z}^2$.
And since we also have $S\mathds{1} = \mathds{1}$,
where $\mathds{1} = (1,1,\dotsc,1)^t$ spans the fixed space,
we also see that different vectors of the form
$S^kb_1 + mv$ cannot be translation equivalent.
Finally, one can show that
the subgroup generated by $S$ has finite index in the
normalizer $\N_{\GL(8,\mathbb{Z})}(C_8)$.
Thus at most finitely many of the points
$b_1 + mv$ can be normalizer equivalent to each other.
\end{example}
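The stated properties of the circulant matrix $S$ can be verified numerically. The following sketch (illustrative only) builds $S$ and the cyclic shift $P$ generating $C_8$, and checks symmetry, $SP=PS$, $Sv=v$, $S\mathds{1}=\mathds{1}$, and $\det S=1$ by exact rational elimination:

```python
# Illustrative check of the circulant matrix S from the example above.
from fractions import Fraction

d = 8
first_row = [2, 1, 0, -1, -1, -1, 0, 1]
S = [[first_row[(j - i) % d] for j in range(d)] for i in range(d)]  # circulant
P = [[1 if j == (i + 1) % d else 0 for j in range(d)] for i in range(d)]  # shift

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(A):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in A]
    n, sign, result = len(M), 1, Fraction(1)
    for col in range(n):
        p = next((r for r in range(col, n) if M[r][col] != 0), None)
        if p is None:
            return Fraction(0)
        if p != col:
            M[col], M[p] = M[p], M[col]
            sign = -sign
        result *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return sign * result

v = [(-1) ** i for i in range(d)]   # alternating vector, starting with +1
ones = [1] * d
matvec = lambda A, x: [sum(A[i][j] * x[j] for j in range(d)) for i in range(d)]

assert S == [list(row) for row in zip(*S)]  # S is symmetric
assert mul(S, P) == mul(P, S)               # S centralizes the cyclic shift
assert matvec(S, v) == v                    # Sv = v
assert matvec(S, ones) == ones              # S fixes the all-ones vector
assert det(S) == 1                          # so S lies in GL(8, Z)
```

That $S$ has infinite order can be seen from its eigenvalue $3+2\sqrt{2}>1$ on the $e^{2\pi i/8}$-eigenspace: the entries of $S^k$ grow without bound.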
It is sometimes easier to work with the
centralizer $\C_{\GL(\Lambda)}(G)$ instead of the normalizer
$\N_{\GL(\Lambda)}(G)$,
which yields a slightly finer equivalence relation.
By the following simple observation, the
$\C_{\GL(\Lambda)}(G)$-equivalence classes cannot be much
smaller than the
$\N_{\GL(\Lambda)}(G)$-equivalence classes:
\begin{lemm}\label{l:normcentfin}
$\card{\N_{\GL(\Lambda)}(G):\C_{\GL(\Lambda)}(G)}$ is finite.
\end{lemm}
\begin{proof}
The factor group $\N_{\GL(\Lambda)}(G)/\C_{\GL(\Lambda)}(G)$
is isomorphic to a subgroup of
$\Aut(G)$~\cite[Corollary~X.19]{IsaacsFGT},
and $\Aut(G)$ is finite,
since $G$ itself is finite by assumption.
\end{proof}
\section{Preliminaries on orders}
\label{sec:orders}
In this section, we collect some simple properties
of \emph{orders} in semisimple algebras over $\mathbb{Q}$.
Orders are relevant for us since the centralizer
$\C_{\GL(\Lambda)}(G)$ can be identified with the
\emph{unit group} of such an order, as we explain below.
Recall the following definition~\cite{reinerMO}:
Let $A$ be a finite-dimensional algebra over $\mathbb{Q}$
(associative, with one).
An \defemph{order} (or \defemph{$\mathbb{Z}$-order})
in $A$ is a subring $R \subset A$
which is finitely
generated as a $\mathbb{Z}$-module and
contains a $\mathbb{Q}$-basis of $A$.
(Here, ``subring'' means in particular that $R$ and
$A$ have the same multiplicative identity.)
In other words, an order is a full $\mathbb{Z}$-lattice
in $A$ which is at the same time a subring of $A$.
For the moment, assume
that $W$ is a finite-dimensional vector space over the rational
numbers $\mathbb{Q}$,
and let $\Lambda$ be a full $\mathbb{Z}$-lattice in $W,$
that is, the $\mathbb{Z}$-span of a $\mathbb{Q}$-basis of $W,$
and $G$ a finite subgroup of $\GL(\Lambda)$.
(In the situation of \cref{sec:defs},
we can take for $W$ the $\mathbb{Q}$-linear span
of $\Lambda$.)
Let
$A:=\enmo_{\mathbb{Q} G} (W)$ be the ring of $\mathbb{Q} G$-module
endomorphisms of $W$, that is, the set of linear maps
$\alpha\colon W\to W$ such that
$\alpha(gv) = g\alpha(v)$ for all $v\in W$ and $g\in G$.
This is just the centralizer of $G$ in the ring of all
$\mathbb{Q}$-linear endomorphisms of $W$.
We claim that
\[ R:=
\{ \alpha \in A
\mid
\alpha(\Lambda)\subseteq \Lambda
\}
\]
is an order in $A$.
Namely, choose a $\mathbb{Z}$-basis of $\Lambda$.
This basis is also a $\mathbb{Q}$-basis of $W$.
By identifying linear maps with matrices with respect to the
chosen basis,
$A$ gets identified with the centralizer of $G$
in the set of \emph{all} $d\times d$ matrices over $\mathbb{Q}$,
and $R$ gets identified with the centralizer of $G$ in
the set of
$d\times d$ matrices with entries in $\mathbb{Z}$.
It follows that $R$ is finitely generated as a $\mathbb{Z}$-module,
and for every $\alpha\in A$ there
is a non-zero $m\in \mathbb{Z}$ such that $m\alpha \in R$.
Thus $R$ is an order of $A$.
(Also, $R\iso \enmo_{\mathbb{Z} G}(\Lambda)$ naturally.)
Moreover, the centralizer
$\C_{\GL(\Lambda)}(G)$ is exactly the set of invertible
elements of $R$, that is, the unit group
$\Units(R)$ of $R$.
For this reason, it is somewhat easier to work
with $\C_{\GL(\Lambda)}(G)$ instead of
the normalizer $\N_{\GL(\Lambda)}(G)$.
The unit group $\Units(R)$ of an order $R$ is a finitely
generated (even finitely presented) group~\cite[Section~3]{Kleinert94}.
Finding explicit generators of $\Units(R)$
(and relations between them)
is in general a difficult task, but there do exist algorithms
for this purpose~\cite{BraunCNS15}.
The situation is somewhat better when
$R$ is commutative, for example when
$R \iso \mathbb{Z} A$, where $A$ is a finite
abelian group~\cite{FaccindeGraafPlesken13}.
Moreover, it is quite easy to give generators of a
subgroup of $\Units(\mathbb{Z} A)$ which has finite index
in $\Units(\mathbb{Z} A)$~\cite{Hoechsmann92,MarciniakSehgal05}.
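As a concrete instance of such a unit of infinite order, consider the Bass cyclic unit $u = (1+g)^4 - 3(1+g+g^2+g^3+g^4) \in \mathbb{Z} C_5$ (a standard construction from the cited literature; the specific element is our choice, not taken from the text). Identifying $\mathbb{Z} C_5$ with circulant integer matrices, the following sketch checks that $u$ is a unit and that its powers grow:

```python
# Illustrative sketch: a Bass cyclic unit of Z[C_5], realized as a circulant
# matrix.  u = (1+g)^4 - 3*(1+g+g^2+g^3+g^4) has coefficient vector c below.
from fractions import Fraction

d = 5
c = [-2, 1, 3, 1, -2]
U = [[c[(j - i) % d] for j in range(d)] for i in range(d)]  # u as circulant

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def det(A):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in A]
    n, sign, result = len(M), 1, Fraction(1)
    for col in range(n):
        p = next((r for r in range(col, n) if M[r][col] != 0), None)
        if p is None:
            return Fraction(0)
        if p != col:
            M[col], M[p] = M[p], M[col]
            sign = -sign
        result *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return sign * result

assert det(U) == 1        # u is a unit of the order Z[C_5]
U2 = mul(U, U)            # the entries of the powers of u grow,
assert max(abs(x) for row in U2 for x in row) > \
       max(abs(x) for row in U for x in row)  # so u has infinite order
```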
We now collect some general elementary facts about orders
that we need.
(For a comprehensive treatment of orders (not only over
$\mathbb{Z}$), we refer the reader to Reiner's book on
maximal orders~\cite{reinerMO}.
For unit groups of orders,
see the survey article by Kleinert~\cite{Kleinert94}.)
\begin{lemm}\label{l:intersectorders}
Let $R_1$ and $R_2$ be two orders in the
$\mathbb{Q}$-algebra $A$.
Then $R_1 \cap R_2$ is also an order
in $A$.
\end{lemm}
\begin{proof}
Clearly, $R_1\cap R_2$ is a subring.
Since $R_2$ is finitely generated over $\mathbb{Z}$ and
$\mathbb{Q} R_1=A$,
there is a non-zero integer $m \in \mathbb{Z}$ with
$mR_2 \subseteq R_1$.
Thus $mR_2 \subseteq R_1 \cap R_2$.
Since $mR_2$ contains a $\mathbb{Q}$-basis of $A$,
it follows that $R_1\cap R_2$ contains such a basis.
As a submodule of a finitely generated $\mathbb{Z}$-module,
$R_1 \cap R_2$ is again finitely generated.
Thus $R_1\cap R_2$ is an order of $A$.
\end{proof}
\begin{lemm}\label{l:suborder}
Let $R_1$ and $R_2$ be orders in the
$\mathbb{Q}$-algebra $A$ with
$R_1 \subseteq R_2$.
Then
$\card{\Units(R_2) : \Units(R_1)}$ is finite.
\end{lemm}
\begin{proof}
There exists a non-zero integer $m$ such that
$m R_2 \subseteq R_1$.
Suppose that $u$, $v\in \Units(R_2)$ are such that
$u-v \in mR_2$.
Then $u \in v + mR_2$ and thus
$uv^{-1} \in 1 + mR_2 \subseteq R_1$.
Similarly, $vu^{-1} \in 1 + mR_2 \subseteq R_1$.
Thus $uv^{-1} \in \Units(R_1)$.
This shows
$\card{\Units(R_2): \Units(R_1)}
\leq \card{R_2 : mR_2} < \infty$,
as claimed.
\end{proof}
\begin{cor}\label{c:changeorder}
Let $R_1$ and $R_2$ be two orders in the
$\mathbb{Q}$-algebra $A$.
Then $\Units(R_1)$ is finite if and only if
$\Units(R_2)$ is finite.
\end{cor}
\begin{proof}
By \cref{l:intersectorders},
$R_1 \cap R_2$ is an order.
By \cref{l:suborder}, the index
$\card{ \Units(R_i) : \Units(R_1\cap R_2) }$
is finite for $i=1$, $2$.
The result follows.
\end{proof}
\section{Finiteness of equivalence classes}
\label{sec:finiteequiv}
In this section we determine for which groups $G$
the normalizer equivalence classes are finite.
We use the notation introduced in \cref{sec:defs}.
Thus $G$ is a finite group acting on the
finite-dimensional, real vector space $V$,
and $\Lambda\subset V$ is a full $\mathbb{Z}$-lattice in $V$
which is stabilized by $G$.
A subspace $U \leq V$ is called
\defemph{$\Lambda$-rational}
if $U\cap \Lambda$ contains a basis of $U$, and
\defemph{$\Lambda$-irrational}
if $U\cap \Lambda = \{0\}$.
If $U$ is an irreducible $\mathbb{R} G$-submodule,
then $U$ is either $\Lambda$-rational or
$\Lambda$-irrational.
\begin{thm}\label{t:finitecrit}
Let
\[ V =
U_1 \oplus \dotsb \oplus U_r
\]
be a decomposition of $V$ into irreducible
$\mathbb{R} G$-subspaces.
Then
$\N_{\GL(\Lambda)}(G)$ has finite order if and only if
all the $U_i$'s are $\Lambda$-rational and pairwise non-isomorphic.
\end{thm}
The proof of \cref{t:finitecrit}
involves some non-trivial representation and number theory.
By \cref{l:normcentfin},
the normalizer $\N_{\GL(\Lambda)}(G)$ is finite if and only if
the centralizer $\C_{\GL(\Lambda)}(G)$ is finite.
As remarked earlier,
the centralizer can naturally be identified with the set of units
of the ring $\enmo_{\mathbb{Z} G}(\Lambda)$,
and $\enmo_{\mathbb{Z} G}(\Lambda)$ is an order in the
$\mathbb{Q}$-algebra
$\enmo_{\mathbb{Q} G}( \mathbb{Q} \Lambda) $,
where $\mathbb{Q} \Lambda$ denotes the $\mathbb{Q}$-linear span
of $\Lambda$.
For this reason, it is more convenient to work with the
$\mathbb{Q}$-vector space $W:= \mathbb{Q} \Lambda$.
(We get back our $V$ from $W$ by scalar extension,
that is, $V\iso \mathbb{R} \otimes_{\mathbb{Q}} W$.)
Fix a decomposition of $W= \mathbb{Q} \Lambda$ into
simple modules:
\[ W \iso m_1 S_1 \oplus \dotsb \oplus m_r S_r,
\quad m_i \in \mathbb{N},
\]
where we assume that $S_i \not\iso S_j$ for
$i\neq j$.
Set $D_i := \enmo_{\mathbb{Q} G} (S_i)$, which is by
Schur's lemma~\cite[(3.6)]{lamFC}
a division ring, and finite-dimensional over $\mathbb{Q}$.
\begin{lemm}\label{l:endo_decom}
With the above notation, we have
\[ \enmo_{\mathbb{Q} G}(W)
\iso
\mat_{m_1}(D_1) \times \dotsb \times
\mat_{m_r}(D_r),
\]
where $\mat_m(D)$ denotes the ring of $m\times m$ matrices
with entries in $D$.
If $R_i$ is an order in $D_i$ for each $i$, then
\[ R:=
\mat_{m_1}(R_1) \times \dotsb \times
\mat_{m_r}(R_r)
\]
is an order in $\enmo_{\mathbb{Q} G}(W)$.
\end{lemm}
\begin{proof}
The first assertion is a standard observation,
used, for example, in one proof of the
Wedderburn-Artin structure theorem for semisimple
rings~\cite[Thm.~3.5 and proof]{lamFC}.
The assertion on orders is then easy.
\end{proof}
In particular, the group of units of $R$ is then isomorphic
to
the direct product of groups of the form
$\GL(m_i, R_i)$.
To prove \cref{t:finitecrit},
in view of \cref{c:changeorder},
it suffices to determine when all these
groups are finite.
The following is a first step toward the proof of
the theorem:
\begin{cor}\label{c:multfree}
If some $m_i > 1$, then
$\Units(R)$
(and thus $\N_{\GL(\Lambda)}(G)$)
is infinite.
\end{cor}
\begin{proof}
$\Units(R)$ contains a subgroup isomorphic to
$\GL(m_i, R_i)$, which contains the group
$\GL(m_i, \mathbb{Z})$.
This group is infinite if $m_i>1$.
\end{proof}
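A single unipotent matrix already witnesses this; a minimal check (illustrative sketch) in Python:

```python
# The powers of T = [[1, 1], [0, 1]] are pairwise distinct integer matrices
# of determinant 1, so GL(2, Z) -- and hence GL(m, Z) for m >= 2 -- is infinite.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[1, 1], [0, 1]]
M, powers = [[1, 0], [0, 1]], []
for _ in range(5):
    M = mat_mul(M, T)
    powers.append(M)

# T^k = [[1, k], [0, 1]]: the top-right entries enumerate k = 1, 2, 3, ...
assert [P[0][1] for P in powers] == [1, 2, 3, 4, 5]
```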
To continue with the proof of
\cref{t:finitecrit},
we have to look at the units of an order $R_i$ in $D_i$.
We will need extension of scalars
for algebras over a field via tensor products, as explained
in \cite[Chapter~3]{FarbDennis93}.
Thus for a $\mathbb{Q}$-algebra~$A$,
we get an $\mathbb{R}$-algebra denoted by $\mathbb{R} \otimes_{\mathbb{Q}} A$.
We use the following theorem of Käte Hey
which can be seen as a generalization of Dirichlet's unit theorem:
\begin{thm}\cite[Theorem~1]{Kleinert94}
\label{t:kaetehey}
Let $D$ be a finite-dimensional division algebra
over $\mathbb{Q}$,
and let $R$ be an order of $D$
with unit group $\Units(R)$.
Set
\[ S = \{ d\in \mathbb{R} \otimes_{\mathbb{Q}} D
\mid
(\det d)^2 = 1
\}.
\]
Then $S/\Units(R)$ is compact.
(Here $\det d$ refers to the action of $d$
as linear operator on $\mathbb{R} \otimes_{\mathbb{Q}} D$.
One can also use the reduced norm, of course.)
\end{thm}
From this, we can derive the following result
(probably well known):
\begin{lemm}\label{l:genquats}
Let $D$ be a finite-dimensional division algebra over $\mathbb{Q}$
and $R$ an order of $D$.
Then $\card{\Units(R)}< \infty$ if and only if\/
$\mathbb{R} \otimes_{\mathbb{Q}} D$ is a division ring.
\end{lemm}
\begin{proof}
Suppose $D_{\mathbb{R}}:=\mathbb{R} \otimes_{\mathbb{Q}} D$ is a division ring.
By Frobenius's theorem~\cite[Theorem~3.20]{FarbDennis93}, we have
$D_{\mathbb{R}} \iso \mathbb{R}$, $\mathbb{C}$, or $\mathbb{H}$.
In each case, one checks that the set $S$
defined in \cref{t:kaetehey} is compact.
Thus the discrete group $\Units(R)\subseteq S$ must be finite.
(Notice that we did not use
\cref{t:kaetehey} here---only
that $\Units(R)\subseteq S$.)
Conversely, suppose that $D_{\mathbb{R}}$ is not a division ring.
Then there is some non-trivial idempotent
$e\in D_{\mathbb{R}}$,
that is, $e^2=e$, but $e \neq 0$, $1$.
(This follows since $D_{\mathbb{R}}$ is semisimple.)
Set $f=1-e$.
Then for $\lambda$, $\mu\in \mathbb{R}$,
we have $\det(\lambda e + \mu f) = \lambda^{k_1}\mu^{k_2}$
with $k_1 = \dim (e D_{\mathbb{R}})$
and $k_2 = \dim (f D_{\mathbb{R}})$,
since left multiplication by $\lambda e + \mu f$ acts as
multiplication by $\lambda$ on $e D_{\mathbb{R}}$
and by $\mu$ on $f D_{\mathbb{R}}$.
In particular, for every
$\lambda \neq 0$ there is some $\mu$ such that
$\lambda e + \mu f \in S$.
This means that $S$ is unbounded, and thus not compact.
It follows from \cref{t:kaetehey}
that $\Units(R)$ cannot be finite.
\end{proof}
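The smallest interesting case is $D=\mathbb{Q}(\sqrt{2})$: here $\mathbb{R}\otimes_{\mathbb{Q}} D \iso \mathbb{R}\times\mathbb{R}$ is not a division ring, and indeed the order $\mathbb{Z}[\sqrt{2}]$ has infinitely many units, namely the powers of $1+\sqrt{2}$ (Pell's equation). A quick numerical check (illustrative sketch):

```python
# Multiplication by 1 + sqrt(2) on the Z-basis {1, sqrt(2)} of Z[sqrt(2)]
# sends (a, b) to (a + 2b, a + b).  Iterating it produces the units
# u_n = (1 + sqrt(2))^n = a_n + b_n*sqrt(2), whose norms a^2 - 2b^2 are +-1.
a, b = 1, 0                       # u_0 = 1
norms = []
for _ in range(8):
    a, b = a + 2 * b, a + b       # multiply by 1 + sqrt(2)
    norms.append(a * a - 2 * b * b)

assert norms == [-1, 1, -1, 1, -1, 1, -1, 1]  # each u_n is a unit
assert (a, b) == (577, 408)                   # u_8 = 577 + 408*sqrt(2)
```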
\begin{proof}[Proof of \cref{t:finitecrit}]
First, assume that we are given a decomposition
$V = U_1 \oplus \dotsb \oplus U_r$ as in the theorem.
Then $S_i := U_i \cap \mathbb{Q}\Lambda$ contains a basis
of $U_i$ and thus is non-zero
and necessarily simple as a $\mathbb{Q} G$-module.
Thus
\begin{align*}
W = \mathbb{Q}\Lambda
&= S_1
\oplus \dotsb \oplus
S_r
\end{align*}
is a decomposition of $W$ into simple
$\mathbb{Q} G$-modules, which are pairwise
non-isomorphic.
It follows that
\[ \enmo_{\mathbb{Q} G}(W) \iso
D_1 \times \dotsb \times D_r,
\]
where $D_i = \enmo_{\mathbb{Q} G}(S_i)$.
Since $\mathbb{R} \otimes_{\mathbb{Q}} D_i
\iso \enmo_{\mathbb{R} G}(U_i)$ is a division ring, too,
it follows that the orders of each $D_i$ have a finite
unit group, by \cref{l:genquats}.
Thus $\C_{\GL(\Lambda)}(G)$ is finite.
Conversely, assume that $\N_{\GL(\Lambda)}(G)$ is finite.
It follows from \cref{c:multfree}
that $m_i=1$ for all $i$
(in the notation introduced before \cref{l:endo_decom}).
Thus $W$ has a decomposition into simple summands
which are pairwise non-isomorphic:
\[ W = S_1 \oplus \dotsb \oplus S_r.
\]
Let $D_i = \enmo_{\mathbb{Q} G}(S_i)$.
Then \cref{l:genquats} yields that
$\mathbb{R} \otimes_{\mathbb{Q}} D_i $ is a division ring, too.
Since
$\mathbb{R} \otimes_{\mathbb{Q}} D_i \iso \enmo_{\mathbb{R} G}(\mathbb{R} S_i)$,
it follows that $U_i := \mathbb{R} S_i$ is simple.
(Otherwise, the projection to a nontrivial invariant
submodule would be a zero-divisor in
$\enmo_{\mathbb{R} G}(U_i)$.)
For $i\neq j$, we have $U_i \not\iso U_j$
by the Noether-Deuring theorem~\cite[Theorem~19.25]{lamFC},
and each $U_i$ is $\Lambda$-rational, since $S_i\subseteq \mathbb{Q}\Lambda$.
Thus $V$ has a decomposition
$V = U_1 \oplus \dotsb \oplus U_r$ as required.
\end{proof}
\begin{remark}
Let $z\in V$ be an element such that
the orbit $Gz$ linearly spans $V.$
Then the normalizer equivalence class of $z$ contains
infinitely many translation equivalence classes
if (and only if)
$\N_{\GL(\Lambda)}(G)$ has infinite order.
\end{remark}
\begin{proof}
The ``only if'' part is clear,
so assume that $\N_{\GL(\Lambda)}(G)$ has infinite order.
By \cref{rem:nrmtrans}, it suffices to show that
the orbit $\N_{\GL(\Lambda)}(G)z$ has infinite size.
By \cref{l:normcentfin},
the centralizer $\C_{\GL(\Lambda)}(G)$ also has infinite order.
If $cz=z$ for $c\in \C_{\GL(\Lambda)}(G)$,
then $cgz = gcz = gz$ for all $g\in G$ and thus
$c=1$.
Thus
\[
\infty
=\card{\C_{\GL(\Lambda)}(G)}
= \card{\C_{\GL(\Lambda)}(G)z}
\leq \card{\N_{\GL(\Lambda)}(G)z}.
\]
\end{proof}
So when $\N_{\GL(\Lambda)}(G)$ is infinite,
only elements contained in proper invariant subspaces can have
finite orbits under the normalizer.
(Notice that the linear span of an orbit $Gz$ is always
a $G$-invariant subspace of $V$.)
If $G$ is a transitive permutation group acting
on the coordinates, then there are always points $z$
such that the orbit $Gz$ spans the ambient
space---for example, $z=(1,0,\dotsc,0)^t$.
When $V$ has
an irrational invariant subspace, then
$\N_{\GL(\Lambda)}(G)$ is infinite, by \cref{t:finitecrit}.
Thus if $z$ is a core point for $G$ such that its orbit spans
the ambient space, then there are infinitely
many core points, even up to translation.
This was first proved by
Rehn~\cite{rehn13diss,herrrehnschue15}
for permutation groups.
Another consequence of \cref{t:finitecrit} and the remark above
is that there are infinitely many core points
for transitive permutation groups $G$ acting on $V=\mathbb{R}^d$
such that $V$ is not multiplicity-free (as an $\mathbb{R} G$-module).
\begin{example}
Consider the regular representation of a group $G$,
that is, $G$ acts on $\mathbb{Q} G$ by left multiplication,
so it permutes the canonical basis $G$.
As a lattice, we choose the group ring
$\mathbb{Z} G$, the vectors with integer coordinates.
Then $\enmo_{\mathbb{Z} G}(\mathbb{Z} G) \iso \mathbb{Z} G$.
The units of group rings are a much studied subject.
A theorem of Higman says that
$\Units(\mathbb{Z} G)$ is finite if and only if
$G$ is abelian of exponent $1$, $2$, $3$, $4$ or $6$,
or $G\iso Q_8\times E$ with $E^2 = \{1\}$.
This can also be derived from
\cref{t:finitecrit}.
In \cref{ex:cycgen},
we described some core points in the cases
$G=C_4$ and $C_8$.
In the case of $C_8$, the decomposition
of $\mathbb{Q} C_8$ into simple modules is given by
\[ \mathbb{Q} C_8
\iso \mathbb{Q} \oplus \mathbb{Q} \oplus \mathbb{Q}[i]
\oplus \mathbb{Q}[ e^{2\pi i/8}].
\]
Over $\mathbb{R}$, the last summand decomposes into
two invariant, irrational subspaces of dimension~$2$.
The normalizer of $C_8$ is infinite because of this last
summand.
Of course, any $z$ contained in the sum of the first
three summands has only a finite orbit under the normalizer,
for example $z=(1,0,0,0,1,0,0,0)^t$.
When $p$ is prime and $p\geq 5$, then
$\Units(\mathbb{Z} C_p)$ is infinite,
but there are only finitely many core points up to
normalizer equivalence
in $\mathbb{Z} C_p$, by \cref{t:QI_finitelycorep} below.
\end{example}
\section{Rationally irreducible}
\label{sec:qi-groups}
Suppose that $\Lambda= \mathbb{Z}^d $, and assume that
$G$ acts on $\mathbb{R}^d$ by matrices in $\GL(d,\mathbb{Z})$.
A subspace $U \leq \mathbb{R}^d$ is called
\defemph{irrational} if
$U\cap \mathbb{Q}^d = \{0\}$ and \defemph{rational} if
$U$ has a basis contained in $\mathbb{Q}^d$.
If $U$ is an irreducible $\mathbb{R} G$-submodule,
then $U$ is either rational or irrational.
In this section, we consider permutation groups
acting on $\mathbb{R}^d$ by permuting coordinates.
(We conjecture that a version of the main result remains true
more generally for finite matrix groups $G\leq \GL(d,\mathbb{Z})$,
but we are not able to prove it yet.
One problem is that we cannot extend
\cref{l:decomp_ratirr} below to
this more general setting.)
Since permutation matrices are orthogonal,
it follows that the orthogonal complement $U^{\perp}$
of any $G$-invariant subspace is itself $G$-invariant.
Following Dixon~\cite{dixon05},
we call a transitive permutation group $G$ a \defemph{QI-group},
when $\Fix(G)^{\perp}$ does not contain any rational
$G$-invariant subspace other than $\{0\}$ and\/
$\Fix(G)^{\perp}$ itself.
Notice that $\Fix(G)^{\perp}$ contains no non-trivial rational
invariant subspaces
if and only if $\Fix(G)^{\perp}\cap \mathbb{Q}^d$
contains no proper $G$-invariant subspace
other than $\{0\}$.
In algebraic language, this means that
$\Fix(G)^{\perp} \cap \mathbb{Q}^d$
is a simple module over $\mathbb{Q} G$.
Let us emphasize that by definition, QI-groups are transitive.
Thus the fixed space $\Fix(G)$ is generated by the
all-ones vector $(1,1,\dotsc,1)^t$,
and so $\dim \Fix(G)= 1$.
\begin{thm}\label{t:QI_finitelycorep}
Let $G \leq \Sym{d}$ be a QI-group.
Then there is a constant $M$ depending only on the group $G$
such that every core point is normalizer equivalent
to a core point $w$ with $\norm{w}^2\leq M$.
In particular, there are only finitely many core points for $G$
up to normalizer equivalence.
\end{thm}
We divide the proof of \cref{t:QI_finitelycorep}
into a number of lemmas.
The idea is the following:
We show that for any vector $z\in \mathbb{Z}^d$ there is
some $c\in \C_{\GL(d,\mathbb{Z})}(G)$ such that the projections
of $cz$ to the different irreducible real subspaces of
$\Fix(G)^{\perp}$
have approximately the same norm.
(At the same time, this point $cz$ is one with minimal
norm in the orbit $\C_{\GL(\Lambda)}(G)z$.)
When $z$ is a core point,
at least one of these norms must be ``small''
by a fundamental result of Herr, Rehn, and
Schürmann~\cite[Theorem~9]{herrrehnschue15}
(\cref{t:projectionbounded} below).
We begin with a short reminder of some character theory.
The facts we need can be found in any basic text on
representations of finite groups,
for example Serre's text~\cite{SerreLRFG}.
Saying that a group $G$ acts linearly on a
(finite-dimensional) vector space~$V$ over some field~$K$
is equivalent to having a representation
$R\colon G\to \GL(V)$
(or even $R\colon G\to \GL(d,K)$ when $V=K^d $).
The \emph{character} $\chi$ of $R$ (or $V$) is the function
defined by $\chi(g) = \tr (R(g))$.
An \emph{irreducible character} is the trace of an irreducible
representation $R\colon G\to \GL(d,\mathbb{C})$ over
the field of complex numbers~$\mathbb{C}$.
The set of irreducible characters of
the group $G$ (over the complex numbers)
is denoted by $\Irr(G)$.
For finite groups $G$, this is a finite set.
Indeed, by the orthogonality relations,
the set $\Irr(G)$ is orthonormal with respect
to a certain inner product on the space of all
functions $G\to \mathbb{C}$~\cite[Section~2.3, Thm.~3]{SerreLRFG}.
Every character of a finite group can be written uniquely as a
nonnegative integer linear combination of irreducible
characters.
This corresponds to the fact that for
each representation $G\to \GL(V)$ on some vector space $V$ over
$\mathbb{C}$, we can write $V$
as a direct sum of irreducible, $G$-invariant
subspaces \cite[\S 1.4, Thm.~2, \S 2.3, Thm.~4]{SerreLRFG}.
Suppose $\chi$ is the character of some representation
$R$ of the finite group $G$.
Then the eigenvalues of $R(g)$, where $g\in G$, must be
$\card{G}$th roots of unity.
Thus the values of $\chi$ are contained in the field
generated by the $\card{G}$th roots of unity.
We write $\mathbb{Q}(\chi)$ for the field generated by all values
of $\chi$.
It follows that $\mathbb{Q}(\chi)$ is a finite Galois extension
of $\mathbb{Q}$,
with abelian Galois group $\Gal(\mathbb{Q}(\chi)/\mathbb{Q})$.
The following lemma appears in
Dixon's paper~\cite[Lemma~6(b)]{dixon05}.
\begin{lemm}[Dixon~\cite{dixon05}]\label{l:decomp_ratirr}
Let $G$ be a QI-group and let
$\pi$ be the character of the corresponding permutation
representation of $G$.
Let $\chi\in \Irr G$ be an irreducible constituent of
$\pi - 1$ (the character of $G$ on $\Fix(G)^{\perp}$).
Then
\[ \pi = 1 + \sum_{ \alpha \in \Gamma } \chi^{\alpha},
\quad\text{where}\quad
\Gamma = \Gal(\mathbb{Q}(\chi)/\mathbb{Q}).
\]
\end{lemm}
For the moment, we work with the complex space
$\mathbb{C}^d$, on which $G$ acts by permuting coordinates.
Recall that to each $\chi\in \Irr G$ there corresponds a central primitive
idempotent of the group algebra $\mathbb{C} G$, namely
\[ e_{\chi} = \frac{\chi(1)}{\card{G}} \sum_{g\in G} \chi(g^{-1})g
\in \Z(\mathbb{C} G).
\]
If $V$ is any $\mathbb{C} G$-module, then
$e_{\chi}$ acts on $V$ as the projection onto
its $\chi$-homogeneous component.
So the image $e_{\chi}(V)$ coincides
with the set $\{v\in V\mid e_{\chi}v=v \}$,
and the character of $e_{\chi}(V)$ is an integer multiple
of $\chi$~\cite[\S 2.6]{SerreLRFG}.
In the present situation, it follows
from \cref{l:decomp_ratirr} that
\[ U:= e_{\chi} (\mathbb{C}^d )
= \{ v\in \mathbb{C}^d \mid
e_{\chi}v=v
\}
\]
is itself an irreducible module affording the character $\chi$.
The projection $e_{\chi}$ maps the standard basis of $\mathbb{C}^d$
to vectors contained in
$K^d$, where $K:= \mathbb{Q}(\chi)$.
Thus $U$ has a basis contained in $K^d$.
(This means that the representation corresponding to the linear action
of $G$ on $U$ can be described by matrices
with all entries in $K$.
Thus $\chi$
is the character of a representation where all matrices have
entries in $K=\mathbb{Q}(\chi)$.)
Another consequence of \cref{l:decomp_ratirr} is
that we have the decomposition
\[ \mathbb{C}^d = \Fix(G) \oplus
\bigoplus_{\gamma\in \Gamma}
U^{\gamma}.
\]
Here $U^{\gamma}$ means this:
Since $U$ has a basis in $\mathbb{Q}(\chi)^d$,
we can apply $\gamma$ to the coordinates of the
vectors in such a basis.
The linear span of the result is denoted by $U^{\gamma}$.
This is independent of the chosen basis.
\begin{lemm}\label{l:endohom}
Set
$ A:= \C_{\mat_d(\mathbb{Q})}(G)
= \{a\in \mat_d(\mathbb{Q}) \mid \forall g\in G \colon
ag=ga\}$, the full centralizer of $G$
in the ring of $d\times d$-matrices over $\mathbb{Q}$.
There is an algebra homomorphism
$\lambda\colon A \to \mathbb{Q}(\chi)$
such that each $a\in A$ acts on $U^{\gamma}$
by multiplication with $\lambda(a)^{\gamma}$,
and such that $\lambda(a^t) = \overline{\lambda(a)}$.
There is another homomorphism
$m\colon A \to \mathbb{Q}$ such that
\[ A \iso \mathbb{Q} \times \mathbb{Q}(\chi)
\quad \text{via}\quad
a \mapsto (m(a),\lambda(a)).
\]
\end{lemm}
The isomorphism $A\iso \mathbb{Q} \times \mathbb{Q}(\chi)$
appears in Dixon's paper~\cite[Lemma~6(d)]{dixon05}
and follows from \cref{l:decomp_ratirr}
together with general results in representation theory.
But as we need the specific properties of the map
$\lambda$ from the lemma, we give a detailed proof.
\begin{proof}[Proof of \cref{l:endohom}]
Suppose the matrix $a$ centralizes $G$,
and let $\lambda(a)\in \mathbb{C}$ be an eigenvalue
of $a$ on $U$.
The corresponding eigenspace is $G$-invariant
since $a$ centralizes $G$.
Since $U$ is irreducible, $U$ is contained
in the eigenspace of $\lambda(a)$.
When $a\in A \subseteq \mat_d(\mathbb{Q})$,
then $a$ maps
$U \cap \mathbb{Q}(\chi)^d\neq \{0\}$ to itself, and thus
$\lambda(a)\in \mathbb{Q}(\chi)$.
This defines the algebra homomorphism
$\lambda\colon A \to \mathbb{Q}(\chi)$.
When $u\in U \cap \mathbb{Q}(\chi)^d$, $\gamma\in \Gamma$,
and $a\in A$, then
$a u^{\gamma} = (au)^{\gamma} = \lambda(a)^{\gamma}u^{\gamma}$.
Thus $a$ acts as $\lambda(a)^{\gamma}$ on $U^{\gamma}$.
Each $a\in A$ acts also on the one-dimensional
fixed space by multiplication with
some $m(a)\in \mathbb{Q}$.
As
\[ \mathbb{C}^d = \Fix(G) \oplus
\bigoplus_{\gamma\in \Gamma}
U^{\gamma},
\]
we see that the space $\mathbb{C}^d$ has a basis of
common eigenvectors for all $a\in A$.
With respect to this basis, each $a$ is a diagonal matrix,
where $m(a)$ appears once and $\lambda(a)^{\gamma}$
appears $\chi(1)$-times for each $\gamma\in \Gamma$.
In particular,
the map $A\ni a \mapsto (m(a), \lambda(a))$
is injective.
Since $G$ acts orthogonally with respect to the standard
inner product on $\mathbb{C}^d$,
the above decomposition into irreducible subspaces
is orthogonal and we can find an orthonormal basis
of common eigenvectors of all $a\in A$.
From this, it is clear that
$\lambda(a^t) = \lambda(a^{*}) = \overline{\lambda(a)}$.
To see that $a\mapsto (m(a),\lambda(a))$ is onto,
let $(q, \mu)\in \mathbb{Q} \times \mathbb{Q}(\chi)$.
Define
\begin{align*}
\phi(q,\mu)
&:= qe_1 +
\sum_{\gamma\in \Gamma} (\mu e_{\chi})^{\gamma}
\\
&= q\frac{1}{\card{G}} \sum_{g\in G} g
+ \frac{\chi(1)}{\card{G}}
\sum_{g\in G}
\left( \sum_{\gamma\in \Gamma}
\big( \mu \chi(g^{-1})
\big)^{\gamma}
\right) g
\\
&\in \Z(\mathbb{Q} G).
\end{align*}
Then the corresponding map
$v \mapsto \phi(q,\mu)v$ is in $A$,
and from
$\phi(q,\mu)e_1 = qe_1$ and
$\phi(q,\mu)e_{\chi} = \mu e_{\chi}$
we see that $m(\phi(q,\mu))=q$ and
$\lambda(\phi(q,\mu)) = \mu$.
This finishes the proof that $A\iso \mathbb{Q} \times \mathbb{Q}(\chi)$.
\end{proof}
\begin{lemm}\label{l:realdecomp}
Set $W:= (U + \overline{U})\cap \mathbb{R}^d$.
Then the decomposition of\/ $\mathbb{R}^d$ into irreducible
$\mathbb{R} G$-modules is given by
\[ \mathbb{R}^d = \Fix(G)
\oplus
\bigoplus_{\alpha\in \Gamma_0}
W^{\alpha}
, \quad
\Gamma_0 = \Gal((\mathbb{Q}(\chi)\cap \mathbb{R})/\mathbb{Q}).
\]
(In particular, $W$ is irreducible as an $\mathbb{R} G$-module.)
For $w\in W^{\alpha}$ and $a\in A$, we have
$\norm{aw}^2
= \left( \overline{\lambda(a)} \lambda(a)
\right)^{\alpha}
\norm{w}^2$.
\end{lemm}
\begin{proof}
When $\mathbb{Q}(\chi)\subseteq \mathbb{R}$, then
$ \overline{U} = U$ and
$W = U\cap \mathbb{R}^d$.
The result is clear in this case.
Otherwise, we have $ U\cap \mathbb{R}^d =\{0\} $
and $U\cap \overline{U} = \{0\}$,
and so $W = (U \oplus \overline{U})\cap \mathbb{R}^d\neq \{0\}$,
and thus again $W$ is simple over $\mathbb{R} G$.
The extension $\mathbb{Q}(\chi)/\mathbb{Q} $ has an abelian Galois group,
and thus $\mathbb{Q}(\chi)\cap \mathbb{R}$ is also Galois over~$\mathbb{Q}$.
The Galois group $\Gamma_0$ is isomorphic to the
factor group $\Gamma / \{\id, \kappa\}$,
where $\kappa$ denotes complex conjugation.
Suppose $\alpha \in \Gamma_0$ is the restriction
of $\gamma \in \Gamma$ to $\mathbb{Q}(\chi)\cap \mathbb{R}$.
Then
\[ W^{\alpha}
= \left( (U+\overline{U})\cap \mathbb{R}^d \right)^{\alpha}
= (U^{\gamma} + \overline{U}^{\gamma}) \cap \mathbb{R}^d
= (U^{\gamma} + U^{\kappa\gamma})\cap \mathbb{R}^d.
\]
The statement about the decomposition follows.
The last statement is immediate from \cref{l:endohom}.
\end{proof}
\begin{lemm}\label{l:full_latt}
Let $C:= \C_{\GL(d,\mathbb{Z})}(G)$, and define
\[ L\colon C \to \mathbb{R}^{\Gamma_0}
, \quad
L(c) := \big( \log (\overline{\lambda(c)} \lambda(c))^{\alpha}
\big)_{\alpha\in \Gamma_0}.
\]
Then the image $L(C)$ of $C$ under this map is a full lattice
in the hyperplane
\[ H = \bigg\{ (x_{\alpha})_{\alpha\in \Gamma_0}
\mid \sum_{\alpha\in \Gamma_0} x_{\alpha} = 0
\bigg\}.
\]
\end{lemm}
We will derive this lemma from the following
version of Dirichlet's unit theorem~\cite[Satz~I.7.3]{Neukirch07_AZT}:
\begin{lemm}\label{l:diri_lattice}
Let $K$ be a finite field extension over $\mathbb{Q}$,
let $\alpha_1$, $\dots$, $\alpha_r \colon K\to \mathbb{R}$
be the different real field embeddings of $K$,
and let
$\beta_1$, $\overline{\beta_1}$, $\dots$,
$\beta_s$, $\overline{\beta_s} \colon K\to \mathbb{C}$
be the different complex embeddings of $K$,
whose image is not contained in $\mathbb{R}$.
Let $O_K$ be the ring of algebraic integers in $K$
and $l\colon K^{*} \to \mathbb{R}^{r+s}$ the map
\[ z \mapsto
l(z)=
(\log \abs{ z^{\alpha_1} }, \dotsc,
\log \abs{ z^{\alpha_r} },
\log \abs{ z^{\beta_1} }, \dotsc,
\log \abs{ z^{\beta_s} }
) \in \mathbb{R}^{r+s}.
\]
Then the image $l(\Units(O_K))$
of the unit group of $O_K$ under $l$ is a full lattice in the
hyperplane
\[ H =
\bigg\{ x\in \mathbb{R}^{r+s}
\mid
\sum_{i=1}^{r+s} x_i = 0
\bigg\}.
\]
\end{lemm}
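As a concrete illustration of the lemma (our own numerical sketch, not part of the proof), take $K=\mathbb{Q}(\sqrt{5})$, where $r=2$, $s=0$, and the unit group of $O_K=\mathbb{Z}[(1+\sqrt{5})/2]$ is generated, modulo $\pm1$, by the golden ratio:

```python
import math

# K = Q(sqrt(5)): two real embeddings (r = 2, s = 0), so l maps
# Units(O_K) into the line {x_1 + x_2 = 0} inside R^2.
sqrt5 = math.sqrt(5)
phi = (1 + sqrt5) / 2        # fundamental unit of O_K = Z[phi]
phi_bar = (1 - sqrt5) / 2    # image of phi under the other embedding

# N(phi) = phi * phi_bar = -1 is a unit of Z, hence the log vector
# l(phi) lies in the hyperplane H (its coordinates sum to 0):
l_phi = (math.log(abs(phi)), math.log(abs(phi_bar)))
assert abs(phi * phi_bar + 1) < 1e-12
assert abs(l_phi[0] + l_phi[1]) < 1e-12
```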
In the proof of \cref{l:full_latt},
we will apply this result to
$K = \mathbb{Q}(\chi)$.
Set $F= K\cap \mathbb{R}$,
$\Gamma_0 = \Gal(F/\mathbb{Q})$
and $\Gamma = \Gal(K/\mathbb{Q})$.
If $F=K\subseteq \mathbb{R}$, then
$r= \card{K:\mathbb{Q}}$ and $s=0$.
In this case,
$\{\alpha_1, \dotsc, \alpha_r\} = \Gamma = \Gamma_0$.
If $K \not\subseteq\mathbb{R}$,
then $\card{K:F}=2$,
$r=0$, and $s= \card{F:\mathbb{Q}}$.
In this case, we may identify the set
$\{\beta_1,\dotsc, \beta_s\}$ with
the Galois group $\Gamma_0$:
for each $\alpha \in \Gamma_0$, there are two
extensions of $\alpha$ to the field $K$,
and these are complex conjugates of each other.
Thus we get a set $\{\beta_1,\dotsc, \beta_s\}$
as in \cref{l:diri_lattice} by choosing
exactly one extension for each $\alpha\in \Gamma_0$.
The map $l$ is independent of this choice anyway.
It follows that in both cases, we may rewrite the map $l$
(somewhat imprecisely) as
\[ l(z) = \big( \log \abs{z^{\alpha}} \big)_{\alpha\in \Gamma_0}.
\]
\begin{proof}[Proof of \cref{l:full_latt}]
First notice that the entries of $L(c)$
can be written as
\begin{align*}
\log\big(\overline{\lambda(c)} \lambda(c)
\big)^{\alpha}
= \log \big( \overline{\lambda(c)^{\alpha}}
\lambda(c)^{\alpha}
\big)
= \log \abs{ \lambda(c)^{\alpha}}^2
&= 2 \log \abs{\lambda(c)^{\alpha}},
\end{align*}
where we tacitly replaced $\alpha$ by an extension to $\mathbb{Q}(\chi)$
when $\mathbb{Q}(\chi)\not\subseteq \mathbb{R}$.
Thus $L(c) = 2 l (\lambda(c))$
for all $c\in C$,
with $l$ as in \cref{l:diri_lattice}.
In view of \cref{l:diri_lattice},
it remains to show that
the group $\lambda(C)$ has finite index in
$\Units(O_K)$.
We know that $C$ is the group of units in
$\C_{\mat_d(\mathbb{Z})}(G) \iso \enmo_{\mathbb{Z} G}(\mathbb{Z}^d)$,
which is an order in
$A \iso \mathbb{Q} \times K$.
Another order in $\mathbb{Q} \times K$
(in fact, the unique maximal order) is
$\mathbb{Z} \times O_K$ with unit group
$\{\pm 1\} \times \Units(O_K)$.
By \cref{l:suborder},
it follows that $C$ has finite index in
$\{\pm 1\} \times \Units(O_K)$.
Thus $\lambda(C)$ has finite index in
$\Units(O_K)$ and the result follows.
\end{proof}
For each $v\in \mathbb{R}^d$, let
$v_{\alpha}$ be the orthogonal projection
of $v$ onto the simple subspace $W^{\alpha}$.
\begin{lemm}\label{l:almostequal}
There is a constant $D$, depending only on the group $G$,
such that for every $v\in \mathbb{R}^d $ with
$v_{\alpha}\neq 0$ for all $\alpha\in \Gamma_0$,
there is a
$c\in C$ with
\[ \frac{\norm{(cv)_{\alpha}}^2}{\norm{(cv)_{\beta}}^2}
\leq D
\]
for all $\alpha$, $\beta \in \Gamma_0$.
\end{lemm}
As $\Fix(G)^{\perp}\cap \mathbb{Q}^d$ is a simple module,
the assumption $v_{\alpha}\neq 0$ for all $\alpha$
holds in particular for all
$v \in \mathbb{Q}^d \setminus \Fix(G)$.
\begin{proof}[Proof of \cref{l:almostequal}]
By \cref{l:full_latt}, there is a compact set~$T$,
\[ T \subset H = \bigg\{ (x_{\alpha}) \in \mathbb{R}^{\Gamma_0}
\mid
\sum_{\alpha} x_{\alpha} = 0
\bigg\},
\]
such that $H = T + L(C)$.
(For example, we can choose $T$ as a fundamental
parallelepiped of the full lattice $L(C)$ in $H$.)
For $v\in \mathbb{R}^d$ as in the statement of the lemma, define
\[ N(v) = \big( \log \norm{v_{\alpha}}^2 \big)_{\alpha}
\in \mathbb{R}^{\Gamma_0}.
\]
Let $S\in \mathbb{R}^{\Gamma_0}$ be the vector having
all entries equal to
\[ s := \frac{1}{\card{\Gamma_0}} \sum_{\alpha} \log\norm{v_{\alpha}}^2.
\]
This $s$ is chosen such that
$N(v) - S \in H$.
Thus there is $c\in C$ such that
$L(c) + N(v) - S \in T$, say
$L(c) + N(v) - S = t = (t_{\alpha})$.
As
\[ \norm{ (cv)_{\alpha}}^2
= \norm{ c v_{\alpha}}^2
= \left( \overline{\lambda(c)} \lambda(c)
\right)^{\alpha} \norm{ v_{\alpha}}^2,
\]
it follows that
\[ N(cv) = L(c) + N(v)
\]
for all $c\in C$.
Thus
\begin{align*}
\log\norm{(cv)_{\alpha}}^2 - \log\norm{(cv)_{\beta}}^2
&= (\log\norm{(cv)_{\alpha}}^2 -s)
- (\log\norm{(cv)_{\beta}}^2 - s)
\\
&= (N(cv) - S)_{\alpha} - (N(cv)-S)_{\beta}
\\
&= t_{\alpha} - t_{\beta}
\\
& \leq
\max_{\alpha\in \Gamma_0,\, t\in T} t_{\alpha}
- \min_{\beta\in \Gamma_0,\, t\in T} t_{\beta}
=: D_0.
\end{align*}
This maximum and minimum exist
since $T$ is compact.
The number $D_0$ may depend on the choice of the set $T$,
but not on $v$ or $c$.
Thus $\norm{(cv)_{\alpha}}^2/\norm{(cv)_{\beta}}^2$ is bounded
by $ D := e^{D_0} $.
\end{proof}
We see from the proof that we get a bound whenever we have
a subgroup $C_0$ of $\C_{\GL(d,\mathbb{Z})}(G)$ such that
$L(C_0)$ is a full lattice in the hyperplane $H$.
Of course, we do not get the optimal bound then,
but in practice it may be difficult to compute the full
centralizer.
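For computations, the balancing step in the proof can be carried out greedily. The sketch below (pure Python, our own illustration using the group $G=C_5$ and the unit $u=-1+g+g^4$ from \cref{ex:c5} below) multiplies a vector by a power of $u$ so that the two projection norms differ by a factor of at most $D=(b-1)^2$:

```python
import math

# Balancing for G = C5 (notation from the example below): the unit
# u = -1 + g + g^4 scales the projection onto W by (a - 1) and the
# projection onto W' by (b - 1), where a, b are the roots of x^2 + x - 1.
a = (-1 + math.sqrt(5)) / 2
b = (-1 - math.sqrt(5)) / 2

def circ_apply(c, v):
    """Apply the group-algebra element sum_k c_k g^k, with (g v)_i = v_{i-1}."""
    return [sum(c[k] * v[(i - k) % 5] for k in range(5)) for i in range(5)]

def proj_norms_sq(v):
    """Squared norms of the projections of v onto W and W'."""
    nW = sum(x * x for x in circ_apply([2/5, a/5, b/5, b/5, a/5], v))
    nWp = sum(x * x for x in circ_apply([2/5, b/5, a/5, a/5, b/5], v))
    return nW, nWp

U_COEFF = [-1, 1, 0, 0, 1]   # u = -1 + g + g^4
V_COEFF = [-1, 0, 1, 1, 0]   # u^{-1} = -1 + g^2 + g^3

def balance(v):
    """Return c v with c a power of u, so that the two projection norms
    differ by a factor of at most D = (b - 1)^2."""
    nW, nWp = proj_norms_sq(v)
    D = (b - 1) ** 2
    # One application of u multiplies nW/nWp by 1/D^2, so choose the
    # power that brings log(nW/nWp) into the interval [-log D, log D].
    k = round(math.log(nW / nWp) / (2 * math.log(D)))
    coeff = U_COEFF if k > 0 else V_COEFF
    for _ in range(abs(k)):
        v = circ_apply(coeff, v)
    return v

# A lopsided input (a Fibonacci core point, lying close to W'):
w = balance([987.0, 0.0, 610.0, 610.0, 0.0])
nW, nWp = proj_norms_sq(w)
assert max(nW, nWp) <= (b - 1) ** 2 * min(nW, nWp) * (1 + 1e-6)
```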
We will prove \cref{t:QI_finitelycorep}
by combining the last lemma with the following fundamental
result~\cite[Theorem~9]{herrrehnschue15}
(which is actually true for arbitrary matrix
groups~\cite[Theorem~3.13]{rehn13diss}).
\begin{thm}\label{t:projectionbounded}
Let $G\leq \Sym{d}$ be a transitive permutation group.
Then there is a constant $C$ (depending only on $d$)
such that for each core point~$z$,
there is a non-zero invariant subspace $U\leq \Fix(G)^{\perp}$
over $\mathbb{R}$
such that $\norm{z|_{U}}^2 \leq C$.
\end{thm}
In our situation, the $W^{\alpha}$ from \cref{l:realdecomp}
are the only irreducible subspaces, and thus
for every core point $z$ there is some
$\alpha\in \Gamma_0$
with $\norm{z_{\alpha}}^2 \leq C$.
\begin{proof}[Proof of \cref{t:QI_finitelycorep}]
Let $z$ be a core point with $z\notin \Fix (G) $.
We want to show that there is
a $c\in \C_{\GL(d,\mathbb{Z})}(G)$ and a vector
$b\in \Fix(G)\cap \mathbb{Z}^d$,
such that
$ \norm{cz +b} \leq M$,
where $M$ is a constant depending only on $G$ and not on~$z$.
By \cref{l:almostequal}, there is
$c \in \C_{\GL(d,\mathbb{Z})}(G)$ such that
$\norm{cz_{\alpha}}^2 \leq D \norm{cz_{\beta}}^2$
for all $\alpha$, $\beta\in \Gamma_0$,
where $D$ is some constant depending only on $G$
and not on $z$.
Since $y = cz$ is also a core point
(\cref{l:equiv_core_pts}),
\cref{t:projectionbounded} yields that
there is a $\beta\in \Gamma_0$ with
$ \norm{ y_{\beta} }^2 \leq C$
(where, again, the constant~$C$ depends only on the group,
not on $z$).
It follows that
the squared norms of the other projections $y_{\alpha}$ are bounded by
$CD$.
Thus
\[ \norm{ y|_{\Fix(G)^{\perp}} }^2
\leq C + (\card{\Gamma_0}-1)CD
\]
is bounded.
Since the projection to the fixed space can be bounded
by translating with some $b\in \Fix(G)\cap \mathbb{Z}^d $,
the theorem follows.
\end{proof}
\begin{example}\label{ex:c5}
Let $p$ be a prime,
and let $G = C_p \leq \Sym{p}$ be generated by a $p$-cycle
acting on $\mathbb{R}^p$ by (cyclically) permuting coordinates.
Then $G$ is a QI-group.
(Of course, every transitive group of prime degree
is a QI-group.)
For $p$ odd,
$\mathbb{R}^p$ decomposes into $\Fix(G)$ and
$(p-1)/2$ irreducible subspaces of dimension $2$.
Here the lattice can be identified with the group ring
$\mathbb{Z} G$, and thus
$\C_{\GL(p,\mathbb{Z})}(G) \iso \Units(\mathbb{Z} G)$.
The torsion free part of this unit group is a free abelian
group of rank $(p-3)/2$.
Let us see what constant we can derive for
$p=5$.
For concreteness, let $g=(1,2,3,4,5)$
and $G= \erz{g}$.
We have the decomposition
\[ \mathbb{R}^5 = \Fix(G) \oplus W \oplus W'.
\]
The projections from $\mathbb{R}^5$ onto $W$ and $W'$
are given by
\begin{align*}
e_W &= \frac{1}{5}(2+ag+bg^2+bg^3+ag^4)
,&
a &= \frac{-1+\sqrt{5}}{2},
\\
e_{W'} &= \frac{1}{5}(2+bg+ag^2+ag^3+bg^4)
,&
b &= \frac{-1-\sqrt{5}}{2}.
\end{align*}
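These coefficient formulas can be sanity-checked mechanically (a pure-Python sketch of our own; all matrices are circulants, with $g$ acting as $(gv)_i = v_{i-1}$):

```python
import math

a = (-1 + math.sqrt(5)) / 2   # a and b are the roots of x^2 + x - 1
b = (-1 - math.sqrt(5)) / 2

def circ(c):
    """5x5 circulant matrix of the group-algebra element sum_k c_k g^k."""
    return [[c[(i - j) % 5] for j in range(5)] for i in range(5)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

e_W = circ([2/5, a/5, b/5, b/5, a/5])
e_Wp = circ([2/5, b/5, a/5, a/5, b/5])
e_fix = circ([1/5] * 5)            # projection onto Fix(G)

# The three maps are orthogonal idempotents summing to the identity:
for i in range(5):
    for j in range(5):
        target = 1.0 if i == j else 0.0
        assert abs(e_W[i][j] + e_Wp[i][j] + e_fix[i][j] - target) < 1e-12
        assert abs(matmul(e_W, e_W)[i][j] - e_W[i][j]) < 1e-12
        assert abs(matmul(e_W, e_Wp)[i][j]) < 1e-12
```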
The centralizer of $G$ has the form
\[ \C_{\GL(5,\mathbb{Z})}(G)
= \{ \pm I\} \times G \times \erz{ u },
\]
where $u$ is a unit of infinite order.
Here we can choose
$u= -1+g+g^4$ with inverse $-1+g^2+g^3$.
To $u$ corresponds the matrix
\begin{equation} \label{eqn:CentralizerMatrix}
\begin{pmatrix}
-1 & 1 & 0 & 0 & 1 \\
1 & -1 & 1 & 0 & 0 \\
0 & 1 & -1 & 1 & 0 \\
0 & 0 & 1 & -1 & 1 \\
1 & 0 & 0 & 1 & -1
\end{pmatrix} \in \GL(5,\mathbb{Z})
.
\end{equation}
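The stated properties of this matrix are easy to verify directly (a pure-Python check of our own):

```python
def circ(c):
    """5x5 circulant matrix of the group-algebra element sum_k c_k g^k."""
    return [[c[(i - j) % 5] for j in range(5)] for i in range(5)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(5)) for j in range(5)]
            for i in range(5)]

P = circ([0, 1, 0, 0, 0])     # permutation matrix of g = (1,2,3,4,5)
U = circ([-1, 1, 0, 0, 1])    # u = -1 + g + g^4, the displayed matrix
V = circ([-1, 0, 1, 1, 0])    # u^{-1} = -1 + g^2 + g^3

I5 = [[1 if i == j else 0 for j in range(5)] for i in range(5)]
assert matmul(U, P) == matmul(P, U)   # U centralizes C5
assert matmul(U, V) == I5             # integral inverse, so U in GL(5, Z)
```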
This unit acts on $W$ as $-1+a$ and on $W'$
as $-1+b$.
For the constant~$D$ of \cref{l:almostequal},
we get $D=(b-1)^2 = 2-3b = (7+3\sqrt{5})/2$.
For the constant $C$ in \cref{t:projectionbounded},
we get a bound $C = 48/5$ (from the proof).
We can conclude that every core point is equivalent to one
with squared norm smaller
than $M= (2/5) + (48/5)(1 + 2-3b) \approx 50.6$.
We can get somewhat better bounds by applying
\cref{t:projectionbounded} ``layerwise''.
The $k$-layer is, by definition, the set of all
$z\in \mathbb{Z}^d$ with $\sum z_i = k$.
In our example, every nontrivial core point is equivalent to one in
layer $1$ or layer $2$.
For example, it can be shown that
each core point in the $1$-layer
is equivalent to a point $z$
with $\norm{z}^2 \leq 31$.
However, this bound is still far from optimal.
Using the computer algebra system \texttt{GAP}~\cite{GAP486},
we found that the only core points of
$C_5$ in the $1$-layer up to normalizer equivalence are just
\begin{align*}
& (1,0,0,0,0)^t, && (1,1,0,0,-1)^t, &&(1,1,1,0,-2)^t,
\\
& (2,1,0,-1,-1)^t, && (2,1,-2,0,0)^t.
\end{align*}
(The normalizer $\N_{\GL(5,\mathbb{Z})}(G)$ is generated by
the centralizer and the permutation matrix corresponding to
the permutation $(2,3,5,4)$.)
For completeness, we also give a list of core points
up to normalizer equivalence in the $2$-layer:
\begin{align*}
& (1,1,0,0,0)^t, && (1,1,1,0,-1)^t, && (2,1,0,0,-1)^t,
\\
& (2,1,1,-1,-1)^t, && (2,1,1,-2,0)^t.
\end{align*}
Every nontrivial core point for $C_5$ is normalizer equivalent
to exactly one of these ten core points.
For this example,
an infinite series of core points of the form
\[ (f_{j+1},0,f_j,f_j,0)^t,
\]
where $f_j$ is the $j$th Fibonacci number,
was found by Rehn~\cite[5.2.2]{rehn13diss}.
Each point in this series is normalizer equivalent
to one of the two obvious core points
$(1,0,0,0,0)^t$ and $(1,0,1,1,0)^t$.
This follows from
\[ (1-g-g^4)(f_{j+1},0,f_j,f_j,0)^t
=(f_{j+1},-f_{j+2},0,0,-f_{j+2})^t
\]
and thus
\begin{equation*}
(1-g-g^4)(f_{j+1},0,f_j,f_j,0)^t
+ f_{j+2}(1,1,1,1,1)^t
= (f_{j+3},0,f_{j+2},f_{j+2},0)^t.
\end{equation*}
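This recursion is easy to verify mechanically for the first few members of the series (a pure-Python check of our own, with $g$ acting as the cyclic shift $(gv)_i = v_{i-1}$):

```python
def circ_apply(c, v):
    """Apply the group-algebra element sum_k c_k g^k, with (g v)_i = v_{i-1}."""
    return [sum(c[k] * v[(i - k) % 5] for k in range(5)) for i in range(5)]

def fib(n):
    x, y = 0, 1
    for _ in range(n):
        x, y = y, x + y
    return x

for j in range(1, 12):
    z = [fib(j + 1), 0, fib(j), fib(j), 0]
    w = circ_apply([1, -1, 0, 0, -1], z)              # (1 - g - g^4) z
    assert w == [fib(j + 1), -fib(j + 2), 0, 0, -fib(j + 2)]
    translated = [w_i + fib(j + 2) for w_i in w]      # + f_{j+2} (1,...,1)^t
    assert translated == [fib(j + 3), 0, fib(j + 2), fib(j + 2), 0]
```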
\end{example}
\begin{example}
Now set
\[ G = \erz{ (1,2,3,4,5), \; (1,4)(2,3) }
\iso D_5,
\]
the dihedral group of order $10$.
Then
\[ C_{\GL(5,\mathbb{Z})}(G) = \{ \pm I\} \erz{u},
\]
where $u$ is as in the previous example.
The normalizer of $G$ is the same as that of the cyclic group
$C_5=\erz{(1,2,3,4,5)}$.
In particular, normalizer equivalence for $D_5$ and $C_5$
is the same equivalence relation.
Of the core points from the last example,
only $(1,0,0,0,0)^t$ and $(1,1,0,0,0)^t$
are also core points for $D_5$.
(In fact, for most of the other points, we have
some lattice point on an interval between
two vertices---for example
$(1,0,0,0,0)^t = (1/2)
\big((1,1,0,0,-1)^t + (2,5)(3,4)(1,1,0,0,-1)^t \big)$.)
Thus there are only two core points up to normalizer equivalence
in this example.
\end{example}
\begin{remark}
The number of core points up to normalizer equivalence
seems to grow quickly for cyclic groups of prime order.
For $p=7$, we get $515$ core points up to normalizer
equivalence.
\end{remark}
Herr, Rehn, and Schürmann~\cite{herrrehnschue15}
conjectured that a finite transitive
permutation group $G$ has infinitely many core points
up to translation equivalence if the group is not
$2$-homogeneous.
This conjecture is still open
but is known to be true in a number of special cases,
including imprimitive permutation groups and
all groups of degree $d\leq 127$.
It is known that a permutation group
$G \leq \Sym{d}$ is $2$-homogeneous
if and only if $\Fix_{\mathbb{R}^d}(G)^{\perp}$
is irreducible~\cite[Lemma~2(iii)]{cameron72}.
In this case, there are only finitely many core points up to
translation equivalence~\cite[Corollary~10]{herrrehnschue15}.
We propose the following conjecture,
which is the converse of \cref{t:QI_finitelycorep}:
\begin{conjecture}
Let $G\leq \Sym{d}$ be a transitive permutation
group such that
$\Fix(G)^{\perp}$ contains a rational
$G$-invariant subspace other than
$\{0\}$ and $\Fix(G)^{\perp}$ itself.
Then there are infinitely many core points up to
normalizer equivalence.
\end{conjecture}
This can be seen as a generalization
of the Herr-Rehn-Schürmann conjecture,
since translation equivalence refines normalizer equivalence,
and since whenever
$\Fix(G)^{\perp}$ contains a nontrivial irrational $G$-invariant
subspace, there are infinitely many core points up to translation
equivalence by \cref{t:finitecrit}
(or~\cite[Theorem~32]{herrrehnschue15}).
\section{Application to integer linear optimization}
\label{sec:app}
In this last section we describe a possible application of
the concept of normalizer equivalence
to symmetric integer linear optimization problems.
For many years it has been known that symmetry
often leads to difficult problem instances in integer optimization.
Standard approaches like branching usually work
particularly poorly when large symmetry groups are present,
since many equivalent subproblems then have to be processed.
Therefore, in recent years several new methods for exploiting symmetries
in integer linear programming have been developed.
See, for example,
\cite{Margot03,Friedman07,BulutogluMargot08,%
KaibelPfetsch08,LinderothMT09,OstrowskiLRS11,
GhoniemSherali11,FischettiLiberti12,HojnyPfetsch17}
and the surveys by Margot \cite{Margot2009} and
Pfetsch and Rehn \cite{PfetschRehn2017}
for an overview.
These methods (with the exception of \cite{FischettiLiberti12}) fall broadly into two classes:
Either they modify the standard branching approach,
using isomorphism tests or isomorphism free generation to
avoid solving equivalent subproblems, or they use techniques to
cut down the original symmetric problem to a less symmetric one,
which contains at least one element of each orbit of solutions.
By now, many of the
leading commercial solvers, like
\texttt{CPLEX} \cite{cplex}, \texttt{Gurobi} \cite{gurobi},
and \texttt{XPRESS} \cite{xpress},
have included some techniques
to detect and exploit special types of symmetries.
Accompanying their computational survey \cite{PfetschRehn2017},
Pfetsch and Rehn also published implementations of some
symmetry exploiting algorithms for \texttt{SCIP} \cite{scip},
like isomorphism pruning and orbital branching.
Core points were introduced as an additional
tool to deal with symmetries in integer convex optimization problems.
Knowing the core points for a given symmetry group
allows one to restrict the search for optima
to this subset of the integer vectors \cite{herrrehnschue13}.
There are many possible ways in which core points could be used.
For instance, one could use the fact that
core points are close to invariant subspaces,
by adding additional quadratic constraints
(second order cone constraints).
In the case of QI-groups,
hence with finitely many core points
up to normalizer equivalence
(\cref{t:QI_finitelycorep}),
one could try to systematically run through
core points satisfying the problem constraints.
In contrast to the aforementioned approaches,
we here propose natural reformulations of symmetric
integer optimization problems using the normalizer of the symmetry
group.
Recall that a general standard form of an integer linear optimization
problem is
\begin{equation} \label{eqn:standardILP}
\max c^t x
\quad \text{such that} \quad
A x \leq b,\;
x\in \mathbb{Z}^d,
\end{equation}
for some given matrix $A$ and vectors $b$ and $c$,
all of them usually rational.
If $c=0$, then
we have a so-called
\emph{feasibility problem},
asking simply whether or not
there is an integral solution to a
given system of linear inequalities.
Geometrically, we are asking
whether some polyhedral set (a polytope, if bounded)
contains an integral point.
A group $G\leq \GL(d,\mathbb{Z})$ is called a
\emph{group of symmetries}
of problem~\eqref{eqn:standardILP}
if the constraints $Ax\leq b$
and the linear objective function~$c^tx$ are
invariant under the action of $G$ on $\mathbb{R}^d$,
that is, if $c^t (gx) = c^t x$ and $A (gx) \leq b$
for all $g\in G$ whenever $A x \leq b$.
The first condition is, for instance, satisfied
if $c$ is in the fixed space $\Fix(G)$.
Practically, computing a group of symmetries for a given problem is
usually reduced to the problem of finding symmetries
of a suitable colored graph~\cite{BremnerETAL2014,PfetschRehn2017}.
Quite often in optimization, attention is restricted to groups
$G \leq \Sym{d}$ acting on $\mathbb{R}^d$ by permuting coordinates.
Generally, a linear reformulation of a problem as
in~\eqref{eqn:standardILP} can be obtained
by an integral linear substitution $x\mapsto Sx$
for some matrix~$S \in \GL(d,\mathbb{Z})$:
\begin{equation} \label{eqn:reformulatedILP}
\max (c^t S) x
\quad \text{such that} \quad
(AS) x \leq b,\: x\in \mathbb{Z}^d.
\end{equation}
(More generally, one can use integral
affine substitutions $x\mapsto Sx + t$
with $S\in \GL(d,\mathbb{Z})$ and $t\in \mathbb{Z}^d$.
For simplicity, we assume $t=0$ in the discussion to follow.)
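The correctness of such a reformulation is straightforward: if $S$ is unimodular, then $y\in\mathbb{Z}^d$ is feasible for \eqref{eqn:reformulatedILP} exactly when $x=Sy\in\mathbb{Z}^d$ is feasible for \eqref{eqn:standardILP}. A toy check (our own made-up $2$-dimensional data):

```python
# Toy instance A x <= b and a unimodular substitution x -> S x.
A = [[1, 2], [-1, 0], [0, -1]]
b = [7, 0, 0]
S = [[1, 1], [0, 1]]        # det S = 1, so S and S^{-1} are integral
S_inv = [[1, -1], [0, 1]]

def mat_vec(M, v):
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def feasible(M, rhs, v):
    return all(lhs <= r for lhs, r in zip(mat_vec(M, v), rhs))

AS = [[sum(A[i][k] * S[k][j] for k in range(2)) for j in range(2)]
      for i in range(len(A))]

# The substitution y = S^{-1} x maps integral solutions bijectively:
for x0 in range(-3, 10):
    for x1 in range(-3, 10):
        x = [x0, x1]
        assert feasible(A, b, x) == feasible(AS, b, mat_vec(S_inv, x))
```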
We remark that reformulations as in
\eqref{eqn:reformulatedILP} with a
matrix $S \in \GL(d,\mathbb{Z})$ can of course be applied to any linear
integer optimization problem.
In fact, this is a key idea of
Lenstra's famous polynomial time algorithm
in fixed dimension $d$~\cite{Lenstra1983}.
In Lenstra's algorithm, the transformation matrix $S$ is chosen
to correspond to a suitable LLL-reduction of the lattice,
such that the transformed polyhedral set
$\{ x \in \mathbb{R}^d \mid (AS) x \leq b\}$ is sufficiently round.
This idea has successfully been used for different problem classes
of integer linear optimization problems
(for an overview see~\cite{AardalEisenbrand2005}).
The main difficulty is the choice of an appropriate unimodular matrix
$S$ which simplifies the optimization problem.
If the symmetry group of an optimization problem
contains the group $G$,
then it is natural to choose matrices $S$
which keep the problem $G$-invariant.
When $S$ is an element of the normalizer
$\N_{\GL(d,\mathbb{Z})}(G)$,
problem~\eqref{eqn:standardILP} is
$G$-invariant if and only if \eqref{eqn:reformulatedILP} is
$G$-invariant.
Note also that then $(c^t S)^t$ is in $\Fix (G)$.
We illustrate the idea
with a small concrete problem instance
of~\eqref{eqn:standardILP}
which is invariant under the cyclic group $C_5$.
In particular, using core points,
we construct
$C_5$-invariant integral optimization problems
that are quite hard or even impossible to solve for
state-of-the-art commercial solvers
like \texttt{CPLEX} or \texttt{Gurobi}.
For instance, this is often the case when
the constraints $Ax\leq b$ can be satisfied by real vectors $x$,
but not by integral ones.
\begin{example}
The orbit polytope $P(C_5,z)$ of some integral point~$z$
has a description with linear inequalities of the form
$x_1 + \dotsb + x_5 = k$ and $Ax \leq b$,
where $A$ is a circulant $5\times 5$-matrix
\[
A=\begin{pmatrix}
a_1 & a_2 & a_3 & a_4 & a_5 \\
a_2 & a_3 & a_4 & a_5 & a_1 \\
a_3 & a_4 & a_5 & a_1 & a_2 \\
a_4 & a_5 & a_1 & a_2 & a_3 \\
a_5 & a_1 & a_2 & a_3 & a_4
\end{pmatrix}
\]
with integral entries $a_1$, $\dotsc $, $a_5$,
and $b \in \mathbb{Z}^5$ satisfies $b_1=\dotsb = b_5$.
If $z$ is a core point and if we replace
$b_i$ by $b_i':=b_i-1$, then we get a system of inequalities
having no integral solution.
Applying this construction to the core point
\[ z=U^{10}\cdot (1,1,1,0,-2)^t,
\]
where $U$ is the
matrix from~\eqref{eqn:CentralizerMatrix} in \cref{ex:c5},
we get parameters
\begin{alignat*}{4}
a_1 &= 515161, & \quad
a_2 &= 18376, & \quad
a_3 &= -503804, \\
a_4 &= -329744, & \quad
a_5 &= 300011, & \quad
b_1' &= 60.
\end{alignat*}
We can vary the values of $k\equiv 1 \mod 5$
(geometrically, this corresponds to translating the polytope
by some integral multiple of the all-ones vector).
This gives a series of problem instances
on which the commercial solvers very often do not finish within
a time limit of 10000~seconds on a standard desktop computer.
For $k=1$, which seems computationally the easiest case,
a solution still always takes more than 4000~seconds.
However, knowing that a given problem such as the above is
$C_5$-invariant, we can try
to find an easier reformulation \eqref{eqn:reformulatedILP}
by using matrices from the centralizer.
As a rule of thumb, we assume that a transformed problem with smaller
coefficients is ``easier.''
Here, the torsion free part of the centralizer is generated
by the matrix $U$ from~\eqref{eqn:CentralizerMatrix} in
\cref{ex:c5}, and so the only possible choices
for $S$ are $U$ and $U^{-1}$.
(A matrix of finite order will probably not simplify
a problem significantly.)
Here, applying $S=U$ yields an easier problem,
and one quickly finds that after applying $S$ ten times,
the problem is not simplified further by
applying $U$ (or $U^{-1}$).
In other words,
we transform the original problem instance with $U^{10}$.
This yields an
equivalent $C_5$-invariant feasibility problem,
which is basically instantly solved by the commercial solvers
(finding that there is no integral solution).
As far as we know, this approach is by far superior to
any previously known one that exploits the
symmetries of a cyclic group.
One standard approach is, for example, to add symmetry-breaking
inequalities
$x_1\leq x_2, \ldots, x_1\leq x_5$.
This yields an improved performance in some cases
but is far from the order of computational gain
that is possible with our proposed reformulations.
\end{example}
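To make the reformulation concrete: a unimodular $S$ (integral, $\det S=\pm1$) induces a bijection $y\mapsto Sy$ on $\mathbb{Z}^d$, so \eqref{eqn:standardILP} and \eqref{eqn:reformulatedILP} have exactly the same feasibility status. The following sketch verifies this on a small hypothetical instance; the toy matrices $A$ and $S$ below are illustrations only, not the $C_5$ instance from the example.

```python
import numpy as np

def reformulate(A, S):
    """Constraint matrix of the reformulated problem (AS) y <= b.

    For unimodular S (integral, det = +-1) the map y -> S y is a bijection
    on Z^d, so the original and reformulated problems are equivalent."""
    assert abs(round(float(np.linalg.det(S)))) == 1
    return A @ S

# Small hypothetical instance (NOT the C5 instance from the example).
A = np.array([[2, -1], [-1, 2]])
b = np.array([3, 3])
S = np.array([[1, 1], [0, 1]])  # unimodular: det S = 1
AS = reformulate(A, S)

# Every integral solution y of (AS) y <= b corresponds to the integral
# solution x = S y of A x <= b, and vice versa (x -> S^{-1} x).
for y1 in range(-4, 5):
    for y2 in range(-4, 5):
        y = np.array([y1, y2])
        assert np.all(AS @ y <= b) == np.all(A @ (S @ y) <= b)
```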
In general, when an integer linear program~\eqref{eqn:standardILP}
is invariant
under a QI-group $G$, and when it has any solutions at all,
then \cref{t:QI_finitelycorep} tells us that there is a
transformation $x\mapsto Sx+ t$ with $S\in \N_{\GL(d,\mathbb{Z})}(G)$
such that the reformulated problem has a feasible solution
in a given finite set
(a set of representatives of core points under normalizer equivalence).
Heuristically, this means that we should be able to transform
any $G$-invariant problem into one of bounded difficulty:
By \cref{l:almostequal}, for any vector $x\in \mathbb{R}^d$,
there is an element $S\in \C_{\GL(d,\mathbb{Z})}(G)$
such that the projections of $S x$ to the different
$G$-invariant subspaces have approximately the same norm.
This means that the orbit polytope of $S x$ is ``round.''
Our approach is particularly straightforward
when the torsion free part of
the centralizer $\C_{\GL(d,\mathbb{Z})}(G)$ has just rank~$1$,
as in the example with $G=C_5$ above.
When the centralizer contains a free abelian group of
some larger rank, then it is less clear
how to reduce the problem efficiently.
A possible heuristic is as follows:
Recall that in \cref{l:full_latt}, we described a map $L$
which maps the centralizer,
and thus its torsion-free part of rank~$r$ (say),
onto a certain lattice in $\mathbb{R}^{r+1}$.
This maps the problem of finding a
reformulation~\eqref{eqn:reformulatedILP}
with ``small'' $AS$ to a minimization problem
on a certain lattice.
For example, when we minimize
$\norm{AS}$, this translates to minimizing a convex function
on a lattice.
So we can find a good reformulation by finding
a lattice point in $\mathbb{R}^{r+1}$ which is close to the minimum,
using, for instance, LLL-reduction.
This will be further studied in a forthcoming paper.
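When the torsion-free part of the centralizer has rank~$1$, as for $C_5$ above, this minimization reduces to a scan over integer powers of the single generator. A minimal sketch of that scan, minimizing the Frobenius norm of $AU^k$; the $2\times 2$ matrices $A$ and $U$ below are small hypothetical stand-ins, not the matrices from the example.

```python
import numpy as np

def best_power(A, U, k_range=range(-30, 31)):
    """Scan integer powers of the generator U of the torsion-free part of
    the centralizer; return the exponent minimizing ||A U^k||_F (rule of
    thumb: smaller coefficients = easier problem)."""
    def norm_at(k):
        return np.linalg.norm(A @ np.linalg.matrix_power(U, k))
    return min(k_range, key=norm_at)

# Hypothetical stand-ins: U generates an infinite cyclic group in GL(2,Z).
U = np.array([[2, 1], [1, 1]])          # det U = 1, infinite order
A = np.array([[34, 21], [21, 13]])      # constraint matrix with large entries
k = best_power(A, U)
print(k, np.linalg.norm(A @ np.linalg.matrix_power(U, k)))
```

Here the scan finds $k=-4$, which reduces $A$ to the identity matrix; in the $C_5$ example above the analogous scan recovers the exponent $10$.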
\section*{Acknowledgments}
We would like to thank the anonymous referees for several valuable
comments.
We also gratefully acknowledge support by DFG grant SCHU 1503/6-1.
\printbibliography
\end{document}
\section{Introduction}
This year 2014 is the fiftieth anniversary of Kondo's \cite{K8} seminal paper
"Resistance minimum in Dilute Magnetic Alloys" and the fortieth anniversary of
Wilson's \cite{W18} renormalization paper about the Kondo effect. For 50 years
the Kondo effect has been investigated with the most sophisticated theoretical
methods \cite{A51}, \cite{F30}, \cite{K58}, \cite{K59}, \cite{N14}, \cite{N5},
\cite{N7}), \cite{G19}, \cite{B103}, \cite{W12}, \cite{A50}, \cite{S29},
\cite{B195}. Kondo \cite{K8} solved the puzzle of the low-temperature
resistance increase in dilute magnetic alloys \cite{H34} above the Kondo
temperature $T_{K}$. Wilson calculated the Kondo ground-state properties with
a numerical renormalization, known as NRG theory. He observed a crossover from
weak to strong coupling with increasing $n$ (number of renormalization steps).
In this article the FAIR solution of the Kondo ground-state \cite{B187} is
applied to reproduce and interpret Wilson's results (FAIR=Friedel artificially
inserted resonance).
\section{Wilson's Numerical Renormalization Theory}
The interaction between a magnetic impurity and the conduction electrons can
be described by an exchange interaction with the potential $-2J\left(
\mathbf{S\cdot s}\right) \delta\left( r\right) $, $J<0$ where $\mathbf{S}$,
$\mathbf{s}$ are the spins of the impurity and the conduction electrons.
Wilson invented and applied a number of tricks to tackle the Kondo
ground-state. Using a band with a constant density of states and a band width
of $2W$ he divided all energies by $W,$ yielding a band range $\left(
-1:1\right) $ with the Fermi level at $0$. Then Wilson made the (almost)
infinite number of s-electron states $\varphi_{k}^{\dagger}$ manageable by
dividing the band into energy cells. (I use the same symbol for a state, for
example $\varphi_{k}$, when addressing it by a creation operator
$\varphi_{k}^{\dagger}$, an annihilation operator $\widehat{\varphi}_{k}$, or
as a wave function $\widetilde{\varphi}_{k}\left( \mathbf{r}\right) $.)
In Fig.4 in the appendix Wilson's sub-division is shown. The ranges $\left(
-1:0\right) $ and $\left( 0:1\right) $ are split at $\pm1/2,\pm
1/4,\pm1/8,..\pm1/2^{\nu},..\pm1/2^{\infty}$. In the next step Wilson combined
all states $\varphi_{k}^{\dagger}$ within each cell $\mathfrak{C}_{\nu}$ into
a single state $c_{\nu}^{\dagger}$ (as the normalized sum of all states
$\varphi_{k}^{\dagger}$ within the cell). These states $c_{\nu}^{\dagger}$ I
will call Wilson states. They contain the full interaction of all electrons in
the cell with the impurity.
From these states $c_{\nu}^{\dagger}$ Wilson constructed a series of new
states $f_{\mu}^{\dagger}$. The state $f_{0}^{\dagger}$ is the normalized sum
of all the original band states $\varphi_{k}^{\dagger}$. It is concentrated at
the impurity, being a Wannier state of the s-band. The next state
$f_{1}^{\dagger}$ surrounds the inner state $f_{0}^{\dagger}$ and is itself
surrounded by $f_{2}^{\dagger}$, etc. All the $f_{\mu}^{\dagger}$ surround the
magnetic impurity like onion shells. Their width in real space increases each
time by a factor of two. Wilson chose the states $f_{\mu,\,\sigma}^{\dagger}$
in such a way that their Hamiltonian is that of a linear chain with next
nearest neighbor coupling. Only the states $f_{0,\sigma}^{\dagger}$ interact
with the impurity. He solved this Hamiltonian by renormalization, i.e. by
initially cutting off the chain at a small $n$ and solving the resulting
Hamiltonian $H_{n}$ by diagonalization. With the eigenstates of $H_{n}$ and
the states $f_{\left( n+1\right) \uparrow}^{\dagger},$ $f_{\left(
n+1\right) \downarrow}^{\dagger}$ Wilson built the next Hamiltonian
$H_{n+1}$. This NRG cycle is repeated. The number of basis states increases at each
NRG step by a factor of four (yielding $4^{n}$) but is generally limited to
the 1000 states with the lowest energies.
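The NRG cycle itself — diagonalize $H_{n}$, keep the lowest states, couple the next chain site with an exponentially decreasing hopping — can be sketched schematically. The toy chain below is a simple spin model, not the Kondo Hamiltonian; it only illustrates the bookkeeping of the diagonalize--truncate--enlarge step.

```python
import numpy as np

def nrg_sketch(n_steps=8, keep=16, Lambda=2.0):
    """Schematic NRG cycle on a toy spin chain (NOT the Kondo Hamiltonian):
    diagonalize H_n, keep the lowest `keep` eigenstates, then couple the
    next site with an exponentially decreasing hopping."""
    sx = np.array([[0.0, 1.0], [1.0, 0.0]])
    sz = np.array([[1.0, 0.0], [0.0, -1.0]])
    H = sz.copy()        # first site
    X = sx.copy()        # operator of the current last site
    for n in range(1, n_steps):
        t = Lambda ** (-n / 2.0)              # chain couplings fall off
        d = H.shape[0]
        Hn = (np.kron(H, np.eye(2))           # previous chain
              + t * np.kron(X, sx)            # coupling to the new site
              + np.kron(np.eye(d), sz))       # on-site term of the new site
        E, V = np.linalg.eigh(Hn)
        m = min(keep, len(E))
        V = V[:, :m]                          # truncation step
        H = np.diag(E[:m])
        X = V.T @ np.kron(np.eye(d), sx) @ V  # last-site operator, new basis
    return np.diag(H)

spectrum = nrg_sketch()
```

The retained basis is capped (here at 16 states in place of Wilson's 1000), while the Hilbert space would otherwise grow by a factor of two per added site.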
Wilson compared the resulting excitation spectrum for a finite exchange
interaction, for example $J=-0.055,$ with the spectrum for $J=0$ and
$J=-\infty$. For a small number $n$ of NRG steps the spectrum of $H_{n}$
resembled that of $J=0$. But after a critical number $n_{0}$ the spectrum
crossed over, resembling the strong coupling case $J=-\infty$. In addition,
Wilson observed that the effective number of band electrons changed from odd
to even at the transition.
With this work Wilson achieved a breakthrough in understanding the
low-temperature properties of the Kondo effect. From the flow diagram and the fixed-point
properties he obtained an effective Hamiltonian for low temperatures.
Evaluation of his numerical results led Nozieres \cite{N14} to the
Fermi-liquid description of the Kondo ground-state.
Despite this great success, it appears that Wilson was not completely
satisfied with his achievement. In his review article about the Kondo
renormalization Wilson wrote (\cite{W18}, page 810): "Why the crossover from
weak to strong coupling takes place will not be explained. The author has no
simple physical explanation for it. It is the result of a complicated
numerical calculation".
The reason that Wilson had no simple interpretation of his results, i.e., that
the physics of the Kondo ground-state is so veiled, is due to the fact that
the wave function of the ground-state is so intangible. In NRG only a tiny
fraction of the ground-state Slater states can be maintained, which makes it
very difficult to uncover the hidden physics. Unfortunately the exact solution
using the Bethe-ansatz \cite{W12}, \cite{A50}, \cite{S29} does not help
because it is very difficult to extract the wave function from this ansatz.
\section{Magnetic and Kondo Ground-State in FAIR}
The author has developed in the past years a very compact solution for the
Kondo ground-state. It is known as the FAIR solution of the Kondo
ground-state. A short review is given in the festschrift to Jaques Friedel's
90's birthday \cite{B187} with extended references therein. Although it is not
an exact solution as the Bethe-ansatz, it describes the physics of the Kondo
ground-state very well, and it is well equipped to answer Wilson's implicit
questions behind the physics of the NRG cross-over.
Kondo and Wilson used a rigid magnetic impurity in their initial calculations.
However, the most common group of magnetic impurities are 3d-atoms dissolved
in a host. These impurities possess d-resonances. Friedel \cite{F57} and
Anderson \cite{A31} showed that a sufficiently large Coulomb exchange
interaction between opposite d-spins creates a magnetic moment in the
d-states. Anderson reduced the ten-fold degeneracy of the FA-impurity to a
two-fold degeneracy, making it de facto an impurity with $l=0$ and $s=1/2$ (it
is still called a d-impurity). This Anderson model is used in most theoretical
calculations of the Kondo effect of d-impurities. Schrieffer and Wolff
\cite{S31} showed that for sufficiently strong Coulomb interaction the
Anderson model yields the Kondo effect. Krishna-murthy, Wilkins, and Wilson
\cite{K58} performed NRG calculations for the FA-impurity and obtained an
equivalent crossover. Here I will discuss the Kondo ground-state of the
d-impurity because it demonstrates an interesting feedback of the singlet
state on the electronic structure of its magnetic components.
In the FAIR approach we use the same trick as Wilson to reduce the large
number of s-electron states. The positive and negative bands of s-electrons
are repeatedly sub-divided. But we stop the sub-division when a given number
$N=2n$ of energy cells $\mathfrak{C}_{\nu}\ $ is obtained, $n$ cells below and
$n$ cells above the Fermi level at energy zero. For each energy cell a Wilson
state is constructed. Then the smallest level spacing between the resulting
Wilson states is (next to the Fermi level) equal to $\delta=2^{-n+1}$ (in
units of $\varepsilon_{F}$ or $W$). The corresponding size of the host is
$R\thickapprox2^{n}\lambda_{F}/4$ where $\lambda_{F}$ is the Fermi wave
length. As in NRG the sample size doubles when $n$ is increased by one. Out
of the Wilson states two \emph{fair} states $a_{0\uparrow}^{\dagger}$ and
$b_{0\downarrow}^{\dagger}$ of spin-up and down are composed.
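This logarithmic discretization and its level spacing are easy to verify numerically. The sketch below builds the cell-center energies ($\varepsilon_{i}=-3/2\cdot2^{-i}$ for $i<n$ and $\varepsilon_{n}=-2^{-n}$, mirrored above the Fermi level, as used later in the paper) and confirms that the smallest spacing, next to the Fermi level, is $\delta=2^{-n+1}$.

```python
import numpy as np

def wilson_energies(n):
    """Center energies of n logarithmic cells below the Fermi level
    (band (-1:0) split at -1/2, -1/4, ..., -2^{-n+1}), mirrored above.

    eps_i = -3/2 * 2^{-i} for i < n and eps_n = -2^{-n}."""
    neg = [-1.5 * 2.0 ** (-i) for i in range(1, n)] + [-(2.0 ** (-n))]
    return np.array(neg + [-e for e in reversed(neg)])

eps = wilson_energies(10)               # N = 2n = 20 Wilson states
spacings = np.diff(np.sort(eps))
# smallest spacing sits next to the Fermi level and equals 2^{-n+1}
print(spacings.min(), 2.0 ** (-9))
```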
The easiest way to explain the logic behind the FAIR approach is to compare it
with a monarch whose subjects elect an ombudsman. This ombudsman does all the
negotiation with the king relieving all the other subjects from this duty. In
our case the $d_{\uparrow}^{\dagger}$-state is the king and the spin-up
s-states $c_{\nu\uparrow}^{\dagger}$ are the subjects. The latter elect the
\emph{fair} state $a_{0\uparrow}^{\dagger}$ as ombudsman who now exclusively
negotiates with the $d_{\uparrow}^{\dagger}$-state. This negotiation occurs in
form of \ s-d-hopping between $d_{\uparrow}^{\dagger}$ and $a_{0\uparrow
}^{\dagger}$, from which the remaining subjects $a_{i}^{\dagger}$ are
excluded. Their only function is to optimally elect and equip the ombudsman,
i.e. \emph{fair} state. (Of course, the remaining $\left( N-1\right) $
s-states $c_{\nu\uparrow}^{\dagger}$ have to be rebuilt so that they are
orthogonal to $a_{0\uparrow}^{\dagger}$, orthonormal to each other and
diagonal in the band Hamiltonian $H^{0}$.) Because of the spin, there is a
second royal copy, consisting of $d_{\downarrow}^{\dagger}$ and
$b_{0\downarrow}^{\dagger}$. In the appendix the FAIR method is summarized for
a simple Friedel resonance.
This idea may appear too simple to work but actually it yields a much better
magnetic state for the Anderson model than the mean field theory \cite{B152}.
Equation (\ref{Psi_MS}) shows the structure of the magnetic state. For
sufficiently strong Coulomb interaction it assumes a magnetic moment, i.e.
$\left\vert B\right\vert ^{2}\neq\left\vert C\right\vert ^{2}$. For
$\left\vert B\right\vert ^{2}\gg\left\vert A\right\vert ^{2},\left\vert
C\right\vert ^{2},\left\vert D\right\vert ^{2}$ the net d-spin is down. The
Coulomb repulsion affects only the term $Dd_{\uparrow}^{\dagger}d_{\downarrow
}^{\dagger}$ and the s-d-hopping is, for example, observed between the terms
$Ba_{0\uparrow}^{\dagger}d_{\downarrow}^{\dagger}$ and $Dd_{\uparrow
}^{\dagger}d_{\downarrow}^{\dagger},$ $Aa_{0\uparrow}^{\dagger}b_{0\downarrow
}^{\dagger}$. The two half-filled FAIR bands $\left\vert \mathbf{0
}_{a\uparrow}\right\rangle =a_{1\uparrow}^{\dagger}...a_{n\uparrow}^{\dagger
}\Omega$ and $\left\vert \mathbf{0}_{b\downarrow}\right\rangle =b_{1\downarrow
}^{\dagger}...b_{n\downarrow}^{\dagger}\Omega$ don't participate in any of the
interactions ($\Omega$ = vacuum state):
\begin{equation}
\Psi_{MS\downarrow}=\left[ Aa_{0\uparrow}^{\dagger}b_{0\downarrow}^{\dagger
}+Ba_{0\uparrow}^{\dagger}d_{\downarrow}^{\dagger}+Cd_{\uparrow}^{\dagger
}b_{0\downarrow}^{\dagger}+Dd_{\uparrow}^{\dagger}d_{\downarrow}^{\dagger
}\right] \left\vert \mathbf{0}_{a\uparrow}\right\rangle \left\vert
\mathbf{0}_{b\downarrow}\right\rangle \label{Psi_MS}
\end{equation}
Fig.1 shows the electron structure of a magnetic d-impurity in the FAIR
description graphically. If one suppresses the spin-flip processes then one
obtains an enforced magnetic ground-state $\Psi_{MS\downarrow}$ with net
spin-down moment. The spin-up and -down FAIR bands are shown in the $\left\{
a_{i\uparrow}^{\dagger}\right\} $- and $\left\{ b_{i\downarrow}^{\dagger
}\right\} $-bases. The d-states are drawn to the left and right of the FAIR
bands. The circles within the FAIR bands represent the \emph{fair} states
$a_{0\uparrow}^{\dagger}$ and $b_{0\downarrow}^{\dagger},$ white is empty and
black is occupied. The figure shows the Slater state with the largest
amplitude. The double arrows indicate the transitions between the d- and the
\emph{fair} states via s-d-coupling. One obtains for the magnetic state a
total of four Slater states with the four possible occupations of d- and
\emph{fair} states as shown in equ. (\ref{Psi_MS}). The explicit form of the
magnetic solution is obtained by varying the composition of the two
\emph{fair} states $a_{0\uparrow}^{\dagger}$ and $b_{0\downarrow}^{\dagger}$
and minimizing the energy expectation value of the Anderson Hamiltonian. The
\emph{fair} states determine the remaining FAIR band states $a_{i\uparrow
}^{\dagger}$, $b_{i\downarrow}^{\dagger}$ and the coefficients $A,.,D$ uniquely.
Although the total spin of $\Psi_{MS\downarrow}$ in equ. (\ref{Psi_MS}) is
zero the d-impurity possesses a finite magnetic moment. The band electrons
which appear to compensate the moment are pushed to the surface of the host
\begin{align*}
&
{\includegraphics[
height=2.0614in,
width=2.2756in
]{Fig1.eps}}
\\
&
\begin{tabular}
[c]{l}
Fig.1: The dominant Slater state for the magnetic state $\Psi_{MS\downarrow}$.
The spin of the\\
$\left\{ a_{i}^{\dagger}\right\} $ FAIR band (red or dark) is anti-parallel
to the net spin of the magnetic state $\Psi_{MS\downarrow}$.
\end{tabular}
\end{align*}
If one reverses all spins in Fig.1 then one obtains an impurity $\Psi
_{MS\uparrow}$ with net spin up. A modified version of both states together
will form the Kondo ground-state. But let us first consider the enforced
magnetic state $\Psi_{MS\downarrow}$ with a net d-spin down. Its half-filled
band states are $\left\vert \mathbf{0}_{a\uparrow}\right\rangle \left\vert
\mathbf{0}_{b\downarrow}\right\rangle $. Since the orbital wave functions
$a_{0}^{\dagger}$ and $b_{0}^{\dagger}$ of the \emph{fair} states are
different the corresponding bands $\left\{ a_{i}^{\dagger}\right\} $ and
$\left\{ b_{j}^{\dagger}\right\} $ are different too (the net spin of the
impurity breaks the up-down symmetry). Any transition between $\Psi
_{MS\downarrow}$ and $\Psi_{MS\uparrow}$ contains the multi-electron scalar
products (MESP)
\begin{equation}
\left\langle \mathbf{0}_{b\uparrow}\mathbf{0}_{a\downarrow}|\mathbf{0
}_{a\uparrow}\mathbf{0}_{b\downarrow}\right\rangle =\left\langle \mathbf{0
}_{b\downarrow}|\mathbf{0}_{a\downarrow}\right\rangle \left\langle
\mathbf{0}_{a\uparrow}|\mathbf{0}_{b\uparrow}\right\rangle =\left\vert
\left\langle \mathbf{0}_{a}|\mathbf{0}_{b}\right\rangle \right\vert ^{2}
\label{Dsp}
\end{equation}
The MESP $\left\langle \mathbf{0}_{a}|\mathbf{0}_{b}\right\rangle $ is often
called the fidelity $F$. It can be calculated from the $\Psi_{MS\downarrow}$
alone if one takes only the orbital parts of $\left\vert \mathbf{0
}_{a\uparrow}\right\rangle $ and $\left\vert \mathbf{0}_{b\downarrow
}\right\rangle $. The single electron states $a_{i\uparrow}^{\dagger}$ and
$b_{i\downarrow}^{\dagger}$ in Fig.1 experience the opposite polarization
potential. Therefore one expects that $\left\langle \mathbf{0}_{a
}|\mathbf{0}_{b}\right\rangle $ in equ. (\ref{Dsp}) decreases with increasing
electron number or volume. It should show an orthogonality catastrophe.
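For Slater determinants built from orthonormal single-particle orbitals, a many-body overlap such as $\left\langle \mathbf{0}_{a}|\mathbf{0}_{b}\right\rangle $ reduces to the determinant of the matrix of single-particle overlaps of the occupied states. A minimal numerical sketch, with random orthonormal bases standing in for the actual FAIR bands:

```python
import numpy as np

def fidelity(A_occ, B_occ):
    """Overlap <0_a|0_b> of two Slater determinants: the determinant of the
    matrix of single-particle overlaps <a_i|b_j> of the occupied orbitals.
    Columns of A_occ / B_occ are the occupied single-particle states."""
    return np.linalg.det(A_occ.conj().T @ B_occ)

rng = np.random.default_rng(0)
N, n_occ = 20, 10
# two random orthonormal bases of the same N-dimensional space
Qa, _ = np.linalg.qr(rng.normal(size=(N, N)))
Qb, _ = np.linalg.qr(rng.normal(size=(N, N)))
F_same = fidelity(Qa[:, :n_occ], Qa[:, :n_occ])   # identical bases: F = 1
F_diff = fidelity(Qa[:, :n_occ], Qb[:, :n_occ])   # different bases: |F| < 1
print(F_same, abs(F_diff))
```

In the FAIR calculation the decrease of this determinant with growing particle number is exactly the internal orthogonality catastrophe discussed in the next section.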
\section{Internal Orthogonality Catastrophe}
We first check the fidelity $\left\langle \mathbf{0}_{a}|\mathbf{0
}_{b}\right\rangle $ for the (enforced) magnetic state $\Psi_{MS\downarrow}$ as
a function of $n$ (where $n$ is half the number of Wilson states, $n=N/2$).
The smallest level spacing is $2^{-n+1}$ and the effective size of the host is
$2^{n}\lambda_{F}/4.$ In Fig.2 the logarithm of the fidelity $\ln\left(
F\right) =\ln\left\langle \mathbf{0}_{a}|\mathbf{0}_{b}\right\rangle $ is
plotted as a function of $n$ for the magnetic state of a d-impurity (stars).
The parameters of the d-impurity are: d-state energy $E_{d}=-0.5,$ Coulomb
energy $U=1$ and s-d-hopping matrix element $\left\vert V_{sd}\right\vert
^{2}=0.03$. We find a linear dependence of $\ln\left( F\right) $ on $n$,
i.e., the fidelity $\left\langle \mathbf{0}_{a}|\mathbf{0}_{b}\right\rangle $
decreases exponentially with $n$. This causes an internal orthogonality
catastrophe (IOC), in analogy to Anderson's orthogonality catastrophe
\cite{A53}.
This IOC makes the transition matrix element between $\Psi_{MS\downarrow}$ and
$\Psi_{MS\uparrow}$ arbitrarily small for large sample volume and would
prevent any energy reduction in the singlet state. Therefore the IOC has to be
averted in the Kondo ground-state.
When spin-flip processes are permitted between $\Psi_{\downarrow}$ and
$\Psi_{\uparrow}$ then the system forms a singlet state. A new optimization
yields new compositions of the \emph{fair} states $a_{0}^{\dagger}$ and
$b_{0}^{\dagger}$ which yields new FAIR bands. The ground-state is the
(normalized) sum of the state in Fig.1 and its spin-inverted image. The
composition of $\Psi_{MS\downarrow}$ and $\Psi_{MS\uparrow}$ changes to a very
different form which I denote as $\Psi_{SS\downarrow}$ and $\Psi_{SS\uparrow}$
and the singlet ground-state is the normalized sum of $\Psi_{SS\downarrow}$
and $\Psi_{SS\uparrow}$. Now the fidelity shows a completely different
behavior (full circles in Fig.2). At about $n=15,$ the fidelity becomes
constant. The singlet state prevents the IOC. As we will see below this
transition into a constant fidelity at about $n=15$ is closely related to
Wilson's track change in the NRG ladder.
\begin{align*}
&
{\includegraphics[
height=2.8227in,
width=3.4338in
]{Fig2.eps}}
\\
&
\begin{tabular}
[c]{l}
Fig.2: The internal fidelity in the magnetic state (stars)\\
and the singlet state (full circles) as a function of $n.$\\
($2n$ is the number of Wilson states per spin. The radius\\
of the host is $2^{n}\lambda_{F}/4$).
\end{tabular}
\end{align*}
\section{Energy Shifts due to the Magnetic Impurity}
The formation of the singlet ground-state has a dramatic effect on the
electronic band structure. This becomes even more obvious when one
investigates the energy spectrum $E_{i}^{a}$ and $E_{i}^{b}$ of the two
FAIR-bands$.$ In the absence of the d-impurity the energy spectra for spin-up
and down are, of course, equal. We denote these initial energies as
$\varepsilon_{i}$. (For the first $n-1$ states this energy depends
exponentially on $i$ and has the values $\varepsilon_{i}=-3/2\ast2^{-i}$,
while $\varepsilon_{n}=-2^{-n}$. Above the Fermi level one has the mirror
image of the negative energies.) The total number of Wilson states for a given
$n$ is $N=2n.$ (The FAIR bands have one state less).
Now we can plot the relative energy shift $r_{i}=\left( E_{i}-\varepsilon
_{i}\right) /\left( \varepsilon_{i+1}-\varepsilon_{i}\right) $ for the two
FAIR bands, the $\left\{ a_{i}^{\dagger}\right\} $- and the $\left\{
b_{i}^{\dagger}\right\} $-band as a function of $i$. This is done in Fig.3.
The abscissa gives the number $i$ of the (energy ordered) Wilson states. The
second abscissa below is the corresponding energy scale.
We first discuss the energy shift in the magnetic state $\Psi_{MS\downarrow}$
(open triangles; they are the same in $\Psi_{MS\uparrow}$). The increase of
$r_{i}$ from the left to the right of Fig.3 is due to the fact that the FAIR
bands have one state less than the band of Wilson states. The value of $r_{i}$
represents essentially the phase shift of the state $a_{i}^{\dagger}$ or
$b_{i}^{\dagger}$ in units of $\pi$. One recognizes that $r_{i},$ i.e. the
phase shifts are very different for the FAIR bands anti-parallel and parallel
to the net spin of the impurity. Close to the Fermi level the difference in
$r_{i}$ is almost equal to one, i.e. one level spacing.
\begin{align*}
&
{\includegraphics[
height=3.2594in,
width=3.9344in
]{Fig3.eps}}
\\
&
\begin{tabular}
[c]{l}
Fig.3: The relative energy shifts $r_{i}$ of the FAIR bands $\left\{
a_{i}^{\dagger}\right\} $ and $\left\{ b_{i}^{\dagger}\right\} $\\
as a function of $i$ or energy (lower scale) for the magnetic state
$\Psi_{MS\uparrow}$\\
(open symbols) and the component $\Psi_{SS\uparrow}$ of the singlet state
(full symbols).
\end{tabular}
\end{align*}
In the singlet state the relative energy shift $r_{i}$, shown as full
triangles, presents a rather fascinating behavior. For $\left\vert
E\right\vert >2^{-10}$ the values of $r_{i}$ for the singlet and magnetic
states are quite close. However, if one approaches the low energy region,
$\left\vert E\right\vert <2^{-15}$, then the band energies $E_{i}^{a}$ and
$E_{i}^{b}$ move towards each other and become essentially identical. The
corresponding states $a_{i}^{\dagger}$ and $b_{i}^{\dagger}$ become
synchronized. As a consequence the internal orthogonality catastrophe is
averted in the Kondo ground-state.
The physical reason for the synchronization at low energy is the following. In
the Kondo ground-state one has a competition between polarization energy and
spin-flip energy. The spin-flip energy likes the two FAIR bands $\left\{
a_{i}^{\dagger}\right\} $ and $\left\{ b_{i}^{\dagger}\right\} $ to be
synchronized because its (transition) matrix element is proportional to
$\left\vert \left\langle \mathbf{0}_{a}|\mathbf{0}_{b}\right\rangle
\right\vert ^{2}.$ The polarization energy wants to shift the $\left\{
a_{i}^{\dagger}\right\} $ and $\left\{ b_{i}^{\dagger}\right\} $ bands in
opposite directions. At large (absolute) energies $\left\vert E\right\vert $
the polarization energy wins. Only for small energies of the order of
$k_{B}T_{K}$ (i.e. $n>n_{0}$) does the spin-flip gain a minor victory by
synchronizing the two bands in the very small energy range of the Kondo energy.
The synchronization of the two electron bases close to the Fermi level, i.e.
the suppression of the internal orthogonality catastrophe, is therefore a
characteristic property of the Kondo ground-state. In the process the two
\emph{fair }states $a_{0}^{\dagger}$ and $b_{0}^{\dagger}$ dramatically change
their composition. In the singlet state for $n>n_{0}$ they increase their
weight at very small energies.
Wilson's renormalization can be roughly visualized by means of the single
Fig.3. The figure corresponds to roughly $n=30$ NRG steps. If one wants to
visualize the situation after $10$ NRG steps one removes in Fig.3 the inner
section for $11\leq i\leq50$ and joins the remaining outer parts, then one
obtains, at least qualitatively, the relative energy shifts $r_{i}$ for
$n=10$. One easily recognizes that the crossover has not yet taken place. In
our case it occurs in the range $13<n<17$.
\section{The Physics of the Bound Electron in the Singlet State}
Wilson observed in his renormalization sequence that \textbf{the spectrum
changed as if one electron was removed at the Fermi level }when the system had
crossed over from weak to strong coupling. The general interpretation is that
the impurity has bound one conduction electron and formed a singlet state,
removing this electron from the band.
In Fig.3 one recognizes that for the singlet state and $n=30$ the energies
$E_{i}^{a}$ and $E_{i}^{b}$ (for the same $i$ close to the Fermi level, i.e.
close to $n=30$) possess the same energy. An even number of band electrons
fills the two FAIR bands up to the same energy.
If one removes the inner forty states (as discussed above) then one obtains
roughly the energy shifts for $n=10$. Now the energy shifts $r_{i}$ for the
two FAIR bands in the singlet state differ roughly by one, $\Delta r_{i
}=r_{i}^{a}-r_{i}^{b}\thickapprox1$ or $\left( E_{i}^{a}-E_{i}^{b}\right)
\thickapprox\left( \varepsilon_{i+1}-\varepsilon_{i}\right) $. Here the
energies $E_{i}^{a}$ lie about one level higher than $E_{i}^{b}$ (for the same
$i$). Now an odd number of electrons would fill the two FAIR bands up to the
same energy. This is exactly what Wilson observed. It is due to the
synchronization of the electron states within $k_{B}T_{K}$ of the Fermi level.
Of course, this is only possible when there are levels within $k_{B}T_{K}$ of
the Fermi level. For a spherical host this requires that the radius is larger
than the Kondo length.
\section{Summary}
In summary the synchronization of the FAIR band states close to the Fermi
level averts the internal orthogonality catastrophe between the states
$\Psi_{SS\uparrow}$ and $\Psi_{SS\downarrow}$. It arises because it permits
the system to lower its potential energy due to a tiny but finite spin-flip
energy between these two states. It is also this synchronization that appears
to remove an electron from the conduction band. At the same time the
composition of the \emph{fair} states $a_{0}^{\dagger}$ and $b_{0}^{\dagger
}$ changes dramatically, which in turn changes the spectrum of the two FAIR
bands. This changes the charge distribution around each $\Psi_{SS\sigma}$
within a radius of the Kondo length, that is known as the Kondo cloud. It is
this low-energy synchronization process which makes the Kondo effect such an
extraordinary phenomenon.
\textbf{This might be the physical interpretation Wilson was looking for.}
\newpage
\section{Introduction}
A successful description of multiparticle production
based on perturbative QCD has been established for "hard"
processes which are initiated by an interaction of
elementary quanta (quarks, leptons, gauge bosons, ...) at large momentum
transfers $Q^2\gg\Lambda^2$, whereby the
characteristic scale in QCD is $\Lambda\sim$ few 100 MeV.
In this kinematic regime the running coupling constant
$\alpha_s(Q^2) $ is small and the lowest order terms
of the perturbative expansion provide the desired accuracy.
The coloured quarks and gluons which emerge from the primary hard
process cannot escape towards large distances because of the confinement of
the colour fields. Rather, they ``fragment'' into particle jets
which may consist of many stable and unstable hadrons.
Here we are interested in the emergence of the hadronic final
states and jet structure. The partons participating in the
hard process generate parton cascades through gluon Bremsstrahlung and quark
antiquark pair production processes which
can be treated again perturbatively, at least approximately.
The singular behaviour of the gluon Bremsstrahlung
in the angle $\Theta$ and momentum~$k$
\begin{equation}
\frac{dn}{dk d\Theta}\propto
\alpha_s(k_T/\Lambda)\frac{1}{k\Theta},
\quad k_T>Q_0
\label{brems}
\end{equation}
(in lowest order and for small angles)
leads to the collimation of the partons and the jet structure.
The transverse momentum $k_T$
is taken as characteristic scale for the
coupling $\alpha_s\sim 1/\ln(k_T/\Lambda)$,
so it will rise with decreasing scale during jet evolution
and one expects the
perturbation theory to lose its validity below a limiting scale $Q_0$.
The transition to the hadronic final state, finally, proceeds
at small momentum transfers $k_T\sim Q_0$ by
non-perturbative processes.
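The sensitivity of soft quantities to the cut-off $Q_0$ in (\ref{brems}) can be illustrated by integrating the spectrum numerically. The sketch below uses a toy normalization (hard scale $Q=1$, $\Lambda=0.2$, one-loop-like coupling, cut-off values chosen above $\Lambda$); it is only meant to show that the integrated gluon multiplicity grows as $Q_0$ is lowered.

```python
import numpy as np

LAMBDA = 0.2   # QCD scale in units of the hard scale (assumption: Q = 1)

def alpha_s(kT, b=1.0):
    # one-loop-like running coupling, alpha_s ~ 1/ln(kT/Lambda)
    return b / np.log(kT / LAMBDA)

def multiplicity(Q0, n_grid=400):
    """Integrate dn ~ alpha_s(k*theta)/(k*theta) dk dtheta over the region
    k*theta > Q0, with k, theta < 1 (toy normalization)."""
    k = np.linspace(1e-3, 1.0, n_grid)
    th = np.linspace(1e-3, 1.0, n_grid)
    K, TH = np.meshgrid(k, th)
    kT = K * TH
    safe_kT = np.maximum(kT, LAMBDA * 1.001)    # guard against log <= 0
    integrand = np.where(kT > Q0, alpha_s(safe_kT) / kT, 0.0)
    return integrand.sum() * (k[1] - k[0]) * (th[1] - th[0])

n_soft = multiplicity(Q0=0.3)
n_hard = multiplicity(Q0=0.6)
print(n_soft, n_hard)   # multiplicity grows as the cut-off is lowered
```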
There have been different approaches to obtain predictions on
the hadronic final states:
\enlargethispage{1.0cm}
\pagebreak
\noindent {\it 1. ``Microscopic'' Monte Carlo models}\\
In a first step a parton final state
is generated perturbatively corresponding to a cut-off scale like the
above $Q_0$. Then, according to a non-perturbative model
intermediate hadronic systems (clusters, strings, \ldots) are formed
which decay, partly through intermediate resonances, into the final hadrons
of any flavour composition.
Depending on the considered complexity a larger number
of adjustable parameters are allowed for in addition to the QCD
scale and cut-off parameters.
Because of the complexity of these models only Monte Carlo methods
are available for their analysis. They are able to reproduce many very
detailed properties of the final state successfully.
\noindent {\it 2. Parton Hadron Duality approaches}\\
One compares the perturbative QCD result for particular observables directly
with the corresponding result for hadrons. The idea is that the effects of
hadronization are averaged out for sufficiently inclusive observables.
In this case analytical results are aimed for which are closer to a direct
physical interpretation than the MC results (for reviews, see
\cite{dkmt2,ko}).
This general idea comes in
various realizations, we emphasize three kinds of observables:
\noindent {\it Jet cross sections:}
Jets are defined with respect to a
certain resolution criterion (parameter $y_{cut}$), then
the cross sections for hadron and parton jets are compared directly
at the same resolution. This
phenomenological ansatz has turned out to be extremely successful in
the physics of energetic jets.
A priori, it is nontrivial that
an energetic hadron jet with dozens of hadrons
should be compared directly to a parton jet
with only very few (1-3) partons.\\
\noindent {\it Infrared and collinear safe observables:}
The value of such an observable is not changed if
a soft particle with $k\to 0$ or a collinear particle ($\Theta\to 0$) is
added to the final state. It is then expected that the observables are less
sensitive to the kinematic region $k_T\sim Q_0$ in (\ref{brems}).
Especially, event shape observables like ``Thrust''
or energy flow patterns
belong to this
category. Perturbative calculations with all order resummations
have been generally successful. In recent years
perturbative calculations to $O(\alpha_s^2)$ in combination with
power corrections $\sim 1/Q^q$ have found considerable interest.\\
\noindent
{\it Infrared sensitive observables:}
Global particle multiplicities as well as inclusive
particle distributions and
correlations belong to this category; these observables are divergent
for $Q_0\to 0$ and therefore are
particularly sensitive to the transition region from partons to hadrons.
$Q_0$ plays the role of a nonperturbative hadronization parameter.
In this report we will be concerned with the last class of observables
to learn about the soft phenomena and ultimately about the colour
confinement mechanisms.
Specific questions concerning the role of perturbative QCD are
\begin{itemize}
\item What is the limiting value of $Q_0$ for which perturbative QCD can be
applied successfully? Especially, can $Q_0$ be of the order of $\Lambda\sim$
a few 100 MeV?
\item Is there any evidence for
the strong rise of the coupling constant $\alpha_s$ towards small scales
below 1 GeV?
\item Is there evidence for characteristic QCD coherence effects at small scales
which are expected for soft gluons, evidence for the colour factors
$C_A,C_F$?
\end{itemize}
\section{Theoretical approach}
\subsection{Partons}
The evolution of a parton jet is described in terms of a multiparticle
generating functional $Z_A(P,\Theta;\{u(k)\})$ with momentum test functions
$u(k)$ for a primary parton $A$
($A=q,g$) of momentum $P$ and jet opening angle $\Theta$. This functional
fulfils a differential-integral equation
\cite{dkmt2}
\begin{equation}
\begin{split}
\frac{d}{d \: \ln \Theta} \: Z_A (P,& \Theta) =
\frac{1}{2}
\; \sum_{B,C} \; \int_0^1 \; dz \\
\; &\times \ \frac{\alpha_s (k_T)}{2 \pi} \: \Phi_A^{BC} (z)
\left [Z_B (zP, \Theta) \: Z_C ((1 - z)P, \Theta) \: - \: Z_A
(P,\Theta) \right ]
\label{Zevol}
\end{split}
\end{equation}
and has to be solved with constraints $k_T>Q_0$ and with initial condition
\begin{equation}
Z_A (P, \Theta; \{ u \})|_{P \Theta = Q_0} \; = \; u_A
(k = P).
\label{init}
\end{equation}
which means that at threshold $P \Theta = Q_0$ there is only one particle
in the jet.
From the functional $Z_A$ one can obtain the inclusive n-parton momentum
distributions by functional differentiation with respect to the functions $u(k_i)$,
$i=1\ldots n$, at
$u=1$; one then finds the corresponding evolution equations as in (\ref{Zevol}).
This ``Master Equation'' includes the following features:
the splitting functions $\Phi_A^{BC} (z)$
of partons $A\to BC$;
evolution in angle $\Theta$ yielding
a sequential angular ordering
which limits the phase space of soft emission as a consequence
of colour coherence; the running coupling $\alpha_s(k_T)$.
For large
momentum fractions $z$ the equation approaches the
usual DGLAP evolution equations.
The solution of the evolution equations can be found by iteration and then
generates an all order perturbation series; it is
complete in leading order (``Double Logarithmic Approximation'' -- DLA)
and in the
next to leading order (``Modified Leading Log Approximation'' -- MLLA),
i.e. in the terms
$\alpha_s^n \log ^{2n}(y)$ and $\alpha_s^n \log ^{2n-1}(y)$. The
logarithmic terms of lower order
are not complete, but it makes sense to include them as well, since they are
important for taking into account energy conservation and
the correct behaviour near threshold (\ref{init}).
The complete partonic
final state of a reaction may be constructed by matching with
an exact matrix element result for the primary hard process.
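The iterative structure of (\ref{Zevol}) can be caricatured by a toy Monte Carlo. The following is a heavily simplified sketch, not the full MLLA solution used later in this report: the step size, the crude soft-emission probability per angular slice and all parameter values are illustrative assumptions. A parton branches at successively smaller angles (angular ordering) with a probability governed by $\alpha_s(k_T)$, subject to $k_T>Q_0$, and a branch terminates at the threshold $P\Theta=Q_0$ of (\ref{init}):

```python
import math, random

LAMBDA, Q0 = 0.25, 0.5  # GeV (illustrative values)

def alpha_s(kT, nf=3):
    return 2.0 * math.pi / ((11.0 - 2.0 * nf / 3.0) * math.log(kT / LAMBDA))

def multiplicity(P, theta, rng, dln=0.1):
    """Toy angular-ordered cascade: number of final partons in a jet (P, theta)."""
    if P * theta <= Q0:
        return 1  # threshold condition (init): a single parton
    # crude soft-gluon emission probability in the slice d(ln theta),
    # ~ (C_F alpha_s / pi) * ln(P*theta/Q0) * dln, capped at 1
    prob = min(1.0, (4.0 / 3.0) * alpha_s(P * theta) / math.pi
               * math.log(P * theta / Q0) * dln)
    theta2 = theta * math.exp(-dln)  # angular ordering: evolve to smaller angle
    if rng.random() < prob:
        zmin = Q0 / (P * theta)      # enforces k_T ~ z*P*theta > Q0
        z = zmin ** rng.random()     # energy fraction sampled from dz/z
        return (multiplicity(z * P, theta2, rng, dln)
                + multiplicity((1.0 - z) * P, theta2, rng, dln))
    return multiplicity(P, theta2, rng, dln)

rng = random.Random(7)
mean = lambda P: sum(multiplicity(P, 1.0, rng) for _ in range(300)) / 300.0
print(mean(45.0), mean(2.0))  # compare mean multiplicities at two jet energies
```

Even this caricature reproduces the qualitative features of the Master Equation: the multiplicity is 1 at threshold and grows with the jet hardness $P\Theta$.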
\subsection{Hadrons}
We investigate here the possibility that the parton cascade
resembles the hadronic final state for
sufficiently inclusive quantities. One motivation is ``preconfinement''
\cite{preconf},
the preparation of colour neutral clusters of limited mass within the
perturbative cascade.
If the cascade is evolved towards a low scale $Q_0\sim\Lambda$,
a successful description
of inclusive single particle distributions has been obtained
(``Local Parton Hadron Duality''-LPHD \cite{adkt1}). More generally,
one could test relations between parton and hadron observables of the
type
\begin{equation}
O(x_1,x_2,...)|_{hadrons}\; = \;K\: O(x_1,x_2,...;Q_0,\Lambda)|_{partons}
\label{lphdeq}
\end{equation}
where the nonperturbative cut-off $Q_0$ and an arbitrary factor $K$ are
to be determined by experiment (for a review, see \cite{ko}).
In comparing differential parton and hadron distributions there can be a
mismatch near the soft limit because of mass effects, especially,
the (massless) partons
are restricted by $k_T>Q_0$ in (\ref{Zevol})
but hadrons are not. This mismatch can be avoided by an appropriate choice of
energy and momentum variables. In a simple model \cite{lo,klo} one compares
partons and
hadrons at the same energy (or transverse mass) using an
effective mass $Q_0$ for the hadrons, i.e.
\begin{equation}
E_{T,parton}=k_{T,parton}\qquad \Leftrightarrow \qquad
E_{T,hadron}=\sqrt{k_{T,hadron}^2+Q_0^2}, \label{phcorr}
\end{equation}
then, the corresponding lower limits are $k_{T,parton}\to Q_0$ and
$k_{T,hadron}\to 0$.
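In code form this correspondence is a one-liner (with $Q_0=0.5$ GeV taken as an illustrative value of the effective hadron mass):

```python
Q0 = 0.5  # effective hadron mass in GeV (illustrative value)

def ET_parton(kT):
    return kT                        # massless parton, restricted to kT > Q0

def ET_hadron(kT):
    return (kT**2 + Q0**2) ** 0.5    # hadron carrying the effective mass Q0

# the lower limits coincide: a hadron with kT -> 0 has the same transverse
# energy as a parton at its cut-off kT -> Q0
print(ET_hadron(0.0) == ET_parton(Q0))  # True
```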
\section{From Jets to Hadrons, the limit $y_{cut}\to 0$}
We turn now to the discussion of several observables
and their behaviour in the limit of a small scale.
First, we consider the transition from jets to hadrons by decreasing the
resolution scale of jets. Jet physics is a standard testing ground for
perturbative QCD, the transition to hadrons
therefore corresponds to the transition from the known to the unknown
territory.
The jets are defined in the multiparticle final state by a
cluster-algorithm. Popular is the ``Durham algorithm'' \cite{durham}
which allows the all order summation in the perturbative analysis. For a
given resolution parameter $y_{cut}=(Q_{cut}/Q)^2$ in a final state with
total energy $Q$ particles are successively combined into clusters until all
relative transverse momenta are above the resolution parameter
$y_{ij}=k_T^2/Q^2>y_{cut}$.\footnote{More precisely, the distance is defined
by $y_{ij}= 2(1-\cos\Theta_{ij})\ {\rm
min}(E_i^2,E_j^2)/Q^2>y_{cut}$.
}
We study now the mean jet multiplicity $N_{jet}$ in the event as function of
$y_{cut}$. In $e^+e^-$-annihilation for $y_{cut}\to 1$ all particles are
combined into two jets and therefore $N_{jet}=2$, on the other hand, for
$y_{cut}\to 0$ all hadrons are resolved and $N_{jet}\to N_{had}$.
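A minimal sketch of this clustering procedure (using the distance of the footnote and, as a simplifying assumption, plain four-momentum addition for the recombination) illustrates both limits:

```python
import math

def durham_njets(particles, ycut):
    """Count Durham jets; particles are (E, px, py, pz) four-vectors."""
    parts = [list(p) for p in particles]
    Q = sum(p[0] for p in parts)  # total energy

    def y_ij(a, b):
        pa = math.sqrt(a[1]**2 + a[2]**2 + a[3]**2)
        pb = math.sqrt(b[1]**2 + b[2]**2 + b[3]**2)
        cos = 1.0 if pa * pb == 0 else (a[1]*b[1] + a[2]*b[2] + a[3]*b[3]) / (pa * pb)
        return 2.0 * min(a[0], b[0])**2 * (1.0 - cos) / Q**2

    while len(parts) > 1:
        y, i, j = min((y_ij(parts[i], parts[j]), i, j)
                      for i in range(len(parts)) for j in range(i + 1, len(parts)))
        if y >= ycut:                   # all pairs resolved: stop clustering
            break
        merged = [parts[i][k] + parts[j][k] for k in range(4)]
        del parts[j], parts[i]          # j > i, so delete j first
        parts.append(merged)
    return len(parts)

# two back-to-back particles give two jets for y_cut < 1 ...
print(durham_njets([(1, 0, 0, 1), (1, 0, 0, -1)], 0.1))                # 2
# ... and a collinear pair is always merged with its partner
print(durham_njets([(1, 0, 0, 1), (1, 0, 0, 1), (2, 0, 0, -2)], 0.1))  # 2
```

For $y_{cut}$ above the largest pairwise distance everything is merged into one cluster, while $y_{cut}\to 0$ resolves every individual particle.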
\begin{figure*}[t]
\begin{center}
\mbox{\epsfig{file=ochs1.ps,width=7.0cm,bbllx=3.2cm,bblly=9.2cm,bburx=18.cm,bbury=22.8cm}}
\end{center}
\vspace{-0.7cm}
\caption[]{
Data on the average jet multiplicity $\protect {\mathcal N}$ at $Q$ = 91 GeV
for different resolution parameters $y_c$ (lower set) and
the average hadron multiplicity (assuming $\protect {\mathcal N} = \frac{3}{2}
\protect {\mathcal N}_{ch}$)
at different $cms$ energies between $Q=3$ and $Q=91$ GeV using
$Q_c=Q_0$ = 0.508 GeV in the parameter $y_c$ (upper set).
The curves follow from the evolution equation (\protect\ref{Zevol})
with $\Lambda$ = 0.5 GeV; the upper curve for hadrons is based on
the duality picture (\ref{lphdeq}) with $K=1$ and parameter $Q_0$
(Fig. from \cite{lo2}) }
\label{fig1}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\noindent
\begin{minipage}{5.8cm}
\mbox{\epsfig{bbllx=0bp,bblly=45bp,bburx=285bp,bbury=280bp,%
file=ochs2a.ps,
width=5.8cm}}
\end{minipage}
\hfill
\begin{minipage}{5.8cm}
\mbox{\epsfig{bbllx=0bp,bblly=45bp,bburx=285bp,bbury=280bp,%
file=ochs2b.ps,
width=5.8cm}}
\end{minipage}
\end{center}
\vspace{-0.3cm}
\caption{Jet multiplicities
extending towards lower $y_{cut}$ parameters; full lines as in
Fig.~1 for jets, dashed lines the same predictions but shifted $y_{cut}\to
y_{cut}-Q_0^2/Q^2$ according to the different kinematical boundaries
as in (\ref{phcorr}), with parameters as in Fig. 1
(preliminary data from OPAL \cite{Pfeifenschneider})}
\label{fig2a}
\end{figure*}
Results on jet multiplicities are shown in Fig. 1.
The jet multiplicity rises only slowly
with decreasing
$y_{cut}$. For $y_{cut} \gtrsim 0.01$ the data are well described by
the complete matrix element calculations to $O(\alpha_s^2)$
(first results of this kind in \cite{kl})
and allow the precise determination of the
coupling or, equivalently, of the QCD scale parameter
$\Lambda_{\overline{MS}}$
\cite{L3jmul,opaljmul}. In the region $y_{cut}>10^{-3}$
the resummation of the higher orders in $\alpha_s$ becomes important
\cite{cdotw} and the MLLA calculation describes the data well.
The lower curve shown in Fig. 1 is obtained \cite{lo2} from a full
(numerical) solution of the evolution equations corresponding to
(\ref{Zevol}), matched with the $O(\alpha_s)$ matrix element,
and describes the data obtained at LEP-1
\cite{L3jmul,opaljmul} down to $10^{-4}$.
The theoretical curve diverges for small cut-off $Q_{cut}\to \Lambda$
as in this case the coupling $\alpha_s(k_T)$ diverges.
In the duality picture discussed above the parton final state corresponds to
a hadron final state at the resolution $k_T\sim Q_0$ according to
(\ref{lphdeq}) and this limit is reached for $Q_{cut}\to Q_0$.
The calculation meets the hadron multiplicity data
for the cut-off parameter $Q_0\simeq 0.5$ GeV. If this
calculation is done for lower $cms$ energies,
agreement with all hadron multiplicity
data down to $Q=3$ GeV is obtained with the same parameter $Q_0$
as seen in Fig. 1 by the upper set of data and the theoretical curve.
Moreover, the normalization constant in (\ref{lphdeq}) can be chosen as
$K=1$ whereas in previous approximate calculations $K\approx 2$
(see, e.g. \cite{lo}). This result
implies that the hadrons, in the duality picture, correspond
to very narrow jets with resolution $Q_0\simeq 0.5$ GeV.
In this unified description of hadron and jet multiplicities the running of
the coupling plays a crucial role. Namely, for constant $\alpha_s$ both
curves for hadrons and jets in Fig. 1 would coincide, as only the single scale
ratio $Q_{cut}/Q$ would be available. With running $\alpha_s(k_T/\Lambda)$ the absolute
scale of $Q_{cut}$ matters: $\alpha_s$ varies most strongly for
$Q_{cut}\to\Lambda$ for jets at small $y_{cut}$ in the transition to hadrons
and for hadrons near the threshold of the process at large $y_{cut}$
where $\alpha_s>1$.
It appears that the final stage of hadronization in the jet evolution can be
well represented by the parton cascade with the strongly rising
coupling.
Preliminary results on jet multiplicities at very small $y_{cut}$
have been obtained recently by
OPAL \cite{Pfeifenschneider} and examples
are shown in Fig. 2. Whereas in the theoretical calculation all hadrons
(partons in the duality picture) are resolved
for $Q_{cut}\to Q_0$,
for the experimental quantities this limit occurs for $Q_{cut}\to 0$.
This is an example of the kinematical mismatch between
experimental and theoretical quantities discussed above
and can be taken into account \cite{lo2} by a shift in $y_{cut}$
according to (\ref{phcorr}). The shifted (dashed) curves in Fig. 2
describe the data rather well (also at intermediate $cms$ energies)
whereby the $Q_0$ parameter has been taken from
the fit to the hadron multiplicity before; the predictions fall a bit
below the data at lower energies like 35 GeV. The
nonperturbative $Q_0$ correction becomes negligible for $Q_c\gtrsim 1.5$
GeV.
We conclude that in case of this simple global observable the perturbative
QCD calculation provides a good description of hard and soft phenomena
in terms of one non-perturbative
parameter $Q_0\sim \Lambda$ (from fit \cite{lo2}
$Q_0\approx 1.015 \Lambda$).
Multiplicity moments are described very well in this approach also
\cite{lupia}.
\section{Shape of Energy Spectrum, the Limit $\sqrt{s}\to 0$}
A standard procedure in perturbative QCD is the derivation of the $Q^2$
evolution of the inclusive distributions -- either of the structure
functions in DIS ($Q^2<0$) or of the hadron momentum
distributions (``fragmentation
functions'', $Q^2\equiv s>0$). One starts from an input function at an initial scale
$Q_1^2$ and predicts the change of shape with $Q^2$.
In the LPHD picture one derives the parton distribution from the evolution
equation (\ref{Zevol}) with initial condition (\ref{init}) at
threshold, here the spectrum is simply
\begin{equation}
D(x,Q_0)=\delta(x-1). \label{Dthresh}
\end{equation}
If we start from this initial condition
the further QCD evolution
predicts the absolute shape of the particle
energy distribution at any higher $cms$ energy $\sqrt{s}$. Within certain
high energy approximations one can let $Q_0\to \Lambda$ and obtains an
explicit analytical expression for the spectrum
in the variable $\xi=\ln(1/x)$, the so-called ``limiting
spectrum'' \cite{adkt1} which has been found to agree well with the data
in the sense of (\ref{lphdeq}) --
disregarding the very soft region $p\lesssim Q_0$ (see, e.g. the
review \cite{ko}).
In the more general case $Q_0\neq \Lambda$ the cumulant
moments $\kappa_q$ of the $\xi$ distribution
have been
calculated as well \cite{FW,DKTInt};
they are defined by $\kappa_1 = \langle\xi\rangle = \bar
\xi$, $\kappa_2 \equiv \sigma^2 = \langle(\xi - \bar \xi)^2\rangle$, $\kappa_3 = \langle(\xi -
\bar \xi)^3\rangle$, $\kappa_4 = \langle(\xi - \bar \xi)^4\rangle - 3 \sigma^4$, \dots;
also one introduces the reduced cumulants $k_q \equiv \kappa_q/ \sigma^q$,
in
particular the skewness $s = k_3$ and the kurtosis $k = k_4$.
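These definitions translate directly into a short numerical sketch for a sample of $\xi$ values:

```python
def cumulants(xi):
    """Return (mean, sigma^2, skewness k3, kurtosis k4) of a sample of xi values."""
    n = len(xi)
    mean = sum(xi) / n
    m = lambda q: sum((x - mean) ** q for x in xi) / n  # central moments
    var = m(2)                       # kappa_2 = sigma^2
    kappa3 = m(3)
    kappa4 = m(4) - 3.0 * var**2     # fourth cumulant
    sigma = var ** 0.5
    return mean, var, kappa3 / sigma**3, kappa4 / sigma**4

# a symmetric sample has vanishing skewness:
mean, var, skew, kurt = cumulants([1.0, 2.0, 3.0])
print(skew)  # 0.0
```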
In the comparison with data some attention has to be paid again to the soft
region. The experimental data are usually presented in terms of the momentum
fraction $x_p=2p/\sqrt{s}$, then $\xi_p\to\infty$ for $p\to 0$. On the other
hand, the theoretical distribution, because of $p>p_T>Q_0$, is limited
to the interval $0<\xi<Y$, $Y=\ln(\sqrt{s}/2Q_0)$. Therefore, in this region
near and beyond the boundary the two distributions cannot agree.
A consistent description can be obtained if theoretical and experimental
distributions are compared at the same energy
as in (\ref{phcorr}),
then both $\xi$ spectra have the
same upper limit $Y$. With a corresponding ``transformation'' of
$E\frac{d^3n}{d^3p}$ the
spectra are well described by the appropriate
theoretical formula near the boundary \cite{lo}.
The cumulant moments of the energy spectrum of
hadrons determined in this way have been compared \cite{lo}
with the theoretical calculation based on the MLLA evolution
equation \cite{DKTInt}. As seen in Fig. \ref{fig:moments}
the data agree well
with the limiting spectrum result ($Q_0=\Lambda$),
both in their energy
dependence and their absolute normalization at threshold (the moments
vanish because of (\ref{Dthresh})).
This suggests that perturbative calculations are realistic
even down to threshold if a treatment of kinematic mass effects
is supplemented.
\begin{figure}
\begin{center}
\vspace{-3.3cm}
\mbox{ \hspace{-1.5cm}
\mbox{\epsfig{file=ochs3b,bbllx=1.0cm,bblly=7.cm,bburx=5.2cm,bbury=26.cm,%
height=12cm}} \hspace{0.2cm}
\mbox{\epsfig{file=ochs3a.ps,bbllx=5cm,bblly=7cm,bburx=18.cm,bbury=27.5cm,%
height=8.4cm}}
} \end{center}
\vspace{-0.2in}
\mbox{\hspace{5.2cm} $Y = \log (\sqrt{s}/2 Q_0)$}
\vspace{0.3cm}
\caption{
The first four
cumulant moments of charged particles' energy spectra
i.e., the average
value $\bar \xi_E$, the dispersion $\sigma^2$, the skewness $s$ and the
kurtosis $k$,
are shown as a function of $cms$ energy $\sqrt{s}$
for $Q_0$ = 270 MeV and $n_f$ = 3, in comparison with
MLLA predictions of the ``limiting spectrum'' (i.e. $Q_0 = \Lambda$)
for running $\alpha_s$ (full line)
and for fixed $\alpha_s$ (dashed line) (from \cite{lo})
}
\label{fig:moments}
\end{figure}
Recently, results on cumulant moments have been presented by the ZEUS group at
HERA (see talk by N. Brook \cite{brook}). The moments have been
determined directly from the momentum distribution of particles in the Breit
frame. The $\xi_p$ distributions are seen to extend beyond the
theoretical limit $Y$. The cumulant moments of order $q\geq 2$
determined from this distribution
show large deviations from the MLLA predictions
at low energies $Q^2$. The kinematic effects become less important at higher
energies and at $Q^2\gtrsim 1000$ GeV$^2$ the agreement with the
predictions using $Q_0=\Lambda$
is restored. These results demonstrate the
importance of the soft region in the analysis of the $\xi$-moments.
\section{Particle Spectra: the limit of small momenta $p,\ p_T\to 0$}
In this limit simple expectations follow from the coherence of the soft
gluon emission. If a soft gluon is emitted from a $q\overline q$ two jet
system then it cannot resolve with its large wave length
all individual partons but only \lq\lq sees''
the total charge of the primary partons $q\overline q$.
Consequently, in the analytical treatment,
the soft gluon radiation is determined by the Born term of $O(\alpha_s)$
and one expects a nearly energy independent soft particle spectrum \cite{adkt1}.
The consequences and further predictions have been studied
recently in more detail.
\subsection{Energy Independence}
\begin{figure}[t]
\begin{center}
\mbox{
\mbox{\epsfig{file=ochs4.ps,%
bbllx=2.5cm,bblly=12.6cm,bburx=16.5cm,bbury=28.0cm,%
height=6cm,clip=}}
} \end{center}
\vspace{-0.2cm}
\hspace{5.5cm} {\bf $\sqrt{s}\ $ GeV}
\caption{
Particle density at fixed momentum $p$ as function of $cms$ energy,
from \protect\cite{klo}
}
\label{fig:dnd3p}
\end{figure}
The limit of small momenta $p$ and $p_T$ has been
considered in \cite{klo}. The behaviour
of the inclusive spectrum in rapidity and for small $p_T$ is given by
\begin{equation}
\frac{dn}{dydp_T^2}\ \sim \ C_{A,F} \frac{\alpha_s(p_T)}{p_T^2}
\left( 1+O\left(
\ln\frac{\ln (p_T/\Lambda)}{\ln(Q_0/\Lambda)}\
\ln\frac{\ln(p_T/(x\Lambda))}{\ln(p_T/\Lambda)}\right)\right)
\label{Born}
\end{equation}
where the second term is known within MLLA and vanishes for $p_T\to Q_0$.
Again, the limit $p_T\to Q_0$ at the parton level
corresponds to $p_T\to 0$ at the hadron level. Only the first term (the Born
term) is energy independent. The approach to energy independence for the
soft particles at $p\to 0$ is seen from $e^+e^-$ data \cite{lo,klo}
and also from DIS \cite{H1}, see Fig. \ref{fig:dnd3p}. Although the detailed
behaviour depends a bit on the specific implementation of the kinematic
relations between partons and hadrons the approach towards energy independence in the limit $p\to
0$ is universal and this expectation is nicely supported by the data.
\subsection{Colour Factors $C_A$ and $C_F$}
A crucial test of this interpretation is
the dependence of the soft particle density on the
colour of the primary partons in (\ref{Born}): The particle density
in gluon and quark jets should approach the ratio
$R(g/q)=C_A/C_F=9/4$ in the soft limit.
This factor has been originally considered
for the overall event multiplicity in colour triplet and octet systems
but is approached there only at asymptotically high energies
\cite{bgven}. On the other hand, the prediction (\ref{Born})
for the soft particles applies already at finite energies \cite{klo}.
In practice, it is difficult to obtain
$gg$ jet systems for this test. An interesting
possibility is the study of 3-jet events in $e^+e^-$ annihilation with the
gluon jet recoiling against a $q\overline q$ jet pair with relative opening
angle of $\sim 90^\circ$
\cite{gary}. For such ``inclusive gluon jets''
the densities of soft particles in comparison to quark jets
approach a ratio
$R(g/q)\sim 1.8$ for $p\lesssim 1$ GeV
which is above the overall multiplicity ratio $\sim $ 1.5
in the quark and gluon jets but still
below the ratio $C_A/C_F=9/4$ (see Fig. \ref{fig:Rgq}).
This difference may be attributed
to the deviation of the events from exact collinearity.
If the analysis is performed as function of $p_T$ of the
particles
the ratio
becomes consistent with $9/4$ but not for small $p_T<1$ GeV \cite{gary}.
This behaviour indicates the transition from the very soft emission
which is coherent
from all primary partons to the semisoft emission from the
parton closest in angle ($q$ or $g$) which yields directly the ratio
$C_A/C_F$.
\begin{figure}[t]
\begin{center}
\mbox{
\mbox{\epsfig{file=ochs5.ps,bbllx=2cm,bblly=4.2cm,bburx=19cm,bbury=14.2cm,%
height=4.5cm}}%
} \end{center}
\caption{
Ratio of particle densities at small momenta p in inclusive gluon jets and
quark jets \protect\cite{gary}
}
\label{fig:Rgq}
\end{figure}
In order to test the role of the colour of the primary partons further
in realistic processes
it has been proposed \cite{klo} to study the soft particle emission
perpendicular to the primary partons in 3-jet events in $e^+e^-$
annihilation or in 2-jet production either in $pp$ or in $ep$ collisions,
in particular in photoproduction. In these cases, for special limiting
configurations of the primary partons, the particle density is
either proportional to $C_F$ or to $C_A$, but it is also known for all
intermediate configurations.
A first result of this kind of
analysis has been presented by DELPHI \cite{delphipout} which shows the
variation of the density by about 50\% in good agreement with
the prediction. The findings by OPAL \cite{gary} (Fig. \ref{fig:Rgq})
and DELPHI \cite{delphipout}
are hints that also the soft particles
indeed reflect the colour charges of the primary partons.
Important tests are possible at HERA with two-jet production from
direct and resolved photons. The former process corresponds to quark
exchange, the latter to gluon exchange. The associated soft perpendicular
radiation again reflects the different flow of the primary colour charges:
At small scattering angles $\Theta_s\to 0$ in the di-jet $cms$
the ratio $R_\perp$ of the soft particles
approaches the limits
\begin{eqnarray}
{\rm direct }\; \gamma p\; {\rm production\; (q\; exchange):}&\qquad & R_\perp\to 1 \\
{\rm resolved}\; \gamma p\;
{\rm production\; (g\; exchange):}&\qquad & R_\perp\to C_A/C_F.
\label{direct}
\end{eqnarray}
In a feasibility study \cite{bko} using the event generator HERWIG
these ratios have been studied as function of the particle $p_T$
and angle $\Theta_s$.
With an assumed luminosity of 4.5 $pb^{-1}$ significant results
can be obtained. In the MC the predicted ratios are approached for small
$p_T\lesssim 0.5$ GeV but deviate considerably for larger $p_T$.
A study towards small angles $\Theta_s$ appears feasible. It would be
clearly interesting to carry out such an analysis.
\subsection{Rapidity Plateaux}
Another consequence of the lowest order approximation (\ref{Born})
is the flat distribution in rapidity $y$ at fixed (small)
$p_T$. An interesting
possibility appears in DIS where the soft gluon in the current hemisphere
is emitted from the quark, in the target hemisphere from a gluon. This would
lead one to expect a step in rapidity by factor $\sim$2 between both
hemispheres at high energies \cite{klo,o97}.
This problem has been studied recently by the H1 group \cite{H1plateau}.
They observed a considerable change of the rapidity spectrum with the $p_T$
cut: for large $p_T>1$ GeV the spectrum was peaked near $y=0$ in the Breit
frame -- as expected from maximal perturbative gluon radiation --
whereas for small $p_T<0.3$ GeV a plateau develops in the target
hemisphere.
On the other hand, no plateau is observed in the current direction at all.
A MC study of the $e^+e^-$ hadronic final state did not reveal a clear sign of a
flat plateau at small $p_T$ either.
The reason for the failure to see the flat plateau is apparently
the angular recoil of the
primary parton which is neglected in the result (\ref{Born});
this introduces an uncertainty in the definition of
$p_T$, especially for the higher momenta. We have investigated this
hypothesis further by studying the rapidity distribution in
selected MC events
where all particles are limited in transverse momentum
$p_T<p_T^{max}$. Then the events are more collimated and the
jet axis is better defined.
The MC results in Table 1 show that the rapidity density gets flatter if
the transverse size of the jet decreases with the $p_T^{max}$ cut
which is in support of the above
hypothesis. This selection, however, considerably reduces the event sample.
A step in the rapidity height of DIS events should therefore be expected only
in events with strong collimation of particles.
\begin{table}
\caption{Density of particles with $p_T<0.15$ GeV
in rapidity $y$, normalized at $y=-1$,
in events with $p_T<p_T^{max}$ selection.
Results obtained from
the ARIADNE MC \cite{ariadne} (parameters $\Lambda=0.2$ GeV,
$\ln(Q_0/\Lambda)=0.015$ as in \cite{low})
}
\begin{center}
\renewcommand{\arraystretch}{1.4}
\setlength\tabcolsep{5pt}
\begin{tabular}{lllllll}
\hline\noalign{\smallskip}
$p_T^{max}$ & $y=0$ & $y=-1$ & $y=-2$ & $y=-3$ & $y=-4$ & fraction of
events\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
no cut & 1.03 & 1.0 & 0.69 & 0.34 & 0.052 & 100 \% \\
0.5 & 0.9 & 1.0 & 0.78 & 0.50 & 0.13 & 9 \% \\
0.3 & 0.9 & 1.0 & 0.90 & 0.84 & 0.37 & 0.7 \% \\
\hline
\end{tabular}
\end{center}
\label{Table}
\end{table}
\subsection{Multiplicity Distributions of Soft Particles and Poissonian
Limit}
The considerations on the inclusive single particle distributions can be
generalized to multiparticle distributions
\cite{low}. Interesting predictions apply
for the multiplicity distributions of particles which are restricted in
either the transverse momentum $p_T<p_T^{cut}$ or in spherical momentum
$p<p^{cut}$.
In close similarity to QED the soft particles are independently emitted in
rapidity for limited $p_T$: because of the soft gluon coherence the
secondary emissions at small angles are suppressed. This is less so for the
spherical cut. For small values of the cut parameters one finds
the following limiting behaviour of the normalized factorial multiplicity
moments
\begin{eqnarray}
{\rm cylinder:} \qquad & F^{(q)}(X_{\perp},Y) & \simeq \
1+\frac{q(q-1)}{6}\frac{X_{\perp}}{Y}\\
{\rm sphere:} \qquad & F^{(q)}(X,Y) & \simeq \ {\rm const}
\label{Poisson}
\end{eqnarray}
where we used the logarithmic variables $X_{\perp}=\ln(p_T^{cut}/Q_0)$,
$X=\ln(p^{cut}/Q_0)$ and $Y=\ln(P/Q_0)$ at jet energy $P$. Both cuts act
quite differently and for small cylindrical cut $p_T^{cut}$ the
multiplicity distribution approaches a Poisson distribution (all moments
$ F^{(q)}\to 1$).
This prediction is verified by the ARIADNE MC at the parton
level. Interestingly, the predictions from the full hadronic final
state after string hadronization yield factorial moments rising at small
$p_T^{cut}<1$ GeV. These predictions provide a
novel test of soft gluon coherence in multiparticle production.
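The Poisson limit, all normalized factorial moments $F^{(q)}=\langle n(n-1)\cdots(n-q+1)\rangle/\langle n\rangle^q$ equal to one, is easy to check numerically. The sketch below uses a truncated Poisson multiplicity distribution as input:

```python
import math

def factorial_moment(pmf, q):
    """Normalized factorial moment F^(q) of a multiplicity distribution pmf {n: P(n)}."""
    mean = sum(n * p for n, p in pmf.items())
    # falling factorial n(n-1)...(n-q+1); vanishes automatically for n < q
    fq = sum(p * math.prod(range(n - q + 1, n + 1)) for n, p in pmf.items())
    return fq / mean**q

lam = 5.0
poisson = {n: math.exp(-lam) * lam**n / math.factorial(n) for n in range(80)}
print(round(factorial_moment(poisson, 2), 6))  # 1.0 -- Poisson: all F^(q) = 1
print(round(factorial_moment(poisson, 3), 6))  # 1.0
```

A distribution narrower than Poisson (e.g. a fixed multiplicity) gives $F^{(q)}<1$, a broader one $F^{(q)}>1$, so deviations of the moments from unity directly measure the correlations.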
\section{Conclusions and Physical Picture}
The simple idea to derive hadronic multiparticle phenomena directly
from the partonic final state works surprisingly well also for the soft
phenomena discussed here
which do not belong to the standard repertoire of perturbative QCD.
Nevertheless, some clear QCD effects can be noticed
in the soft phenomena
and the three questions at the end of the introduction
can be answered positively. A description with small cut-off
$Q_0\sim\Lambda$ is possible
for various inclusive quantities. The coupling is running by
more than an order of magnitude at small scales as is seen,
in particular, in the transition from jets to hadrons.
Also, coherence effects from soft gluons are reflected in the behaviour of
soft particles. These effects for the soft particles need further
comparison with quantitative predictions. Especially worthwhile are the tests
on soft particle flows as function of the primary emitter configuration.
Predictions exist also for nontrivial limits of
multiparticle soft correlations.
The different threshold behaviour of partons and hadrons
can be taken into account by appropriate relations between the
respective kinematical
variables.
Some apparent discrepancies between MLLA predictions and observations
can be related to such mass effects.
\begin{figure}[b]
\begin{center}
\mbox{
\mbox{\epsfig{file=ochs6.ps,bbllx=1.5cm,bblly=16.5cm,bburx=20.0cm,bbury=21.cm,%
width=10cm}}
} \end{center}
\vspace{-0.3cm}
\caption{
Dual picture of parton and hadron cascades. Ultrasoft partons are
confined to
narrow tubes with $p_T<Q_0\sim\Lambda$ around the partons in the
perturbative cascade.
}
\label{fig:tubes}
\end{figure}
Finally, we remark on the physical picture which is supported by these
results (Fig. \ref{fig:tubes}).
The partons in the perturbative cascade are accompanied by ultrasoft
partons with $p_T\lesssim Q_0\sim\Lambda$
as in very narrow jets; they cannot be
further resolved because of confinement and therefore the perturbative
partons resolved with $p_T\geq Q_0$
correspond to single
final hadrons. This is consistent with the finding of normalization unity
($K=1$ in (\ref{lphdeq}))
in the transition jet $\to$ hadron $(y_{cut}\to 0$).
Colour at each perturbative vertex can be neutralized
by the (non-perturbative) emission of one (or several) soft quark pairs;
in this way
the partons in the perturbative cascade evolve as colour neutral systems
outside a volume with confinement radius $R\sim Q_0^{-1}$.
In the timelike cascade
there is only parton splitting,
no parton recombination into massive colour singlets as in the
preconfinement model. Such a picture can only serve as a rough guide; it can
certainly not be complete, as is exemplified by the existence of resonances.
Nevertheless, its intrinsic simplicity with only one non-perturbative
parameter $Q_0$ besides the QCD scale $\Lambda$ makes it attractive as a
guide into a further more detailed analysis.
\section{Introduction}
\label{sec:intro}
In logic design, one typically extends the Boolean domain $\{0,
1\}$ with a third value denoted by `$u$' to indicate the presence
of unstable voltage levels. In other words, it indicates that the value
of a Boolean variable is \emph{unknown}. Useful computation can be performed
even in the presence of unstable/unknown values. For example, consider a
Boolean circuit over $\ensuremath{\wedge}$, $\ensuremath{\vee}$, and $\ensuremath{\neg}$ gates
that takes $2n$ bits as input and decides whether a
majority of the input bits are set to $1$.
Clearly if at least $n+1$ of its inputs are $1$
(or, $0$) and the rest of the inputs are $u$, the circuit should
ideally output $1$ (resp.,~$0$).\footnote{See Table~\ref{tab:truth} for the definition of Boolean gates in the presence of $u$.}
However, if the circuit outputs $u$ on
such inputs, then the circuit is said to have a \emph{hazard}.
A priori the circuit
may be hazard-free or not, but if it is monotone
(only $\ensuremath{\wedge}$ and $\ensuremath{\vee}$ gates are allowed)
then it must be hazard-free~\cite{IKLLMS18}.
It is well-known that popular physical realizations of the basic
logic gates are hazard-free. For example, if we feed a $0$ and a
$u$ to an AND ($\ensuremath{\wedge}$) gate, then the $\ensuremath{\wedge}$ gate will output a
$0$ (Table~\ref{tab:truth}).
However, it is not
necessary that a circuit constructed from hazard-free logic gates
is hazard-free. For example, the smallest circuit implementing a
one bit multiplexer has a hazard (see, e.g., \cite{IKLLMS18}).
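The gate behaviour of Table~\ref{tab:truth} is that of Kleene's strong three-valued logic, and hazard detection can be phrased directly in it. The sketch below (with hypothetical helper names; the naive resolution check costs up to $2^n$ per ternary input, so this is a brute-force illustration rather than the $O(3^n\poly(s))$ algorithm discussed later) reproduces the multiplexer example for a standard two-level circuit:

```python
from itertools import product

U = 'u'  # the unstable/unknown value

def AND(a, b):
    if a == 0 or b == 0: return 0
    return 1 if a == 1 and b == 1 else U

def OR(a, b):
    if a == 1 or b == 1: return 1
    return 0 if a == 0 and b == 0 else U

def NOT(a):
    return U if a == U else 1 - a

def has_hazard(circuit, n):
    """Brute force: a hazard is a ternary input on which the circuit yields u
    although every 0/1 resolution of the u-entries gives the same output."""
    for x in product((0, 1, U), repeat=n):
        if circuit(*x) != U:
            continue
        spots = [i for i, v in enumerate(x) if v == U]
        outs = set()
        for bits in product((0, 1), repeat=len(spots)):
            y = list(x)
            for i, b in zip(spots, bits):
                y[i] = b
            outs.add(circuit(*y))
        if len(outs) == 1:          # stable function value, unstable circuit
            return True, x
    return False, None

# a standard two-level multiplexer: out = (s AND a) OR (NOT s AND b)
mux = lambda s, a, b: OR(AND(s, a), AND(NOT(s), b))
print(has_hazard(mux, 3))   # hazard at s = u, a = b = 1
# a single AND gate is hazard-free
print(has_hazard(lambda a, b: AND(a, b), 2)[0])  # False
```

On the hazard witness $s=u$, $a=b=1$ both resolutions $s=0$ and $s=1$ give output $1$, yet the circuit outputs $u$, exactly the behaviour described above.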
For a logic circuit designer, it is desirable that every circuit
they construct is hazard-free. In a recent paper, Ikenmeyer
\textit{et al}.~\cite{IKLLMS18} showed that there are
$n$-variable Boolean functions with polynomial (in $n$) size
circuits such that any hazard-free circuit implementation for the
same function must use exponentially many gates. Therefore,
constructing hazard-free circuits is not always feasible. They
also showed that even the computational problem of detecting
whether a circuit has a hazard is $\textsc{NP}$-complete.
Eichelberger's algorithm \cite{EIC65} for detecting hazards in a
circuit enumerates all minterms and maxterms of the Boolean
function computed by the circuit and evaluates the circuit on
each of them. Since an $n$-variable Boolean function can have as
many as $\Omega(3^n/n)$ minterms \cite{CM78}, this algorithm is
not always efficient. Since this problem is $\textsc{NP}$-complete, one
cannot hope to obtain a polynomial time algorithm that works in
general. In such cases, moderately exponential time algorithms
are sought over algorithms that employ brute-force search. For
example, it is known \cite{XN17} that the independent set problem
has a $O(\poly(n)1.19^n)$ time algorithm that performs much
better than the brute-force $O(\poly(n)2^n)$ time algorithm. Is
it possible to obtain such a moderately exponential time
algorithm for hazard detection in circuits?
In this work, we show that the $O(3^n\poly(s))$ time algorithm
for hazard detection on input circuits of size $s$ over $n$
variables is almost optimal under a widely held conjecture known
as the \emph{strong exponential time hypothesis} (\textsc{Seth}). \textsc{Seth}\
implies that there is no $O(2^{(1-\varepsilon)n}\poly(m))$ time
algorithm, for any $\varepsilon > 0$, for checking whether an
$n$-variable, $m$-clause \textsc{Cnf}\ is satisfiable. We show that there
is no $O(3^{(1-\varepsilon)n}\poly(s))$ time algorithm, for any
$\varepsilon >0$, for hazard detection on circuits of size $s$
over $n$ variables assuming \textsc{Seth}. In fact, we show that this is
true even when the input circuits are restricted to be formulas
of depth four.
We also give a polynomial time algorithm to detect whether a
given \textsc{Dnf}\ formula has a 1-hazard. Since 0-hazards in \textsc{Dnf}\
formulas are easy to eliminate, this algorithm can be used to
check whether a given \textsc{Dnf}\ formula has a hazard in practice. We
remark that, using duality of hazards, this also implies a hazard
detection algorithm for \textsc{Cnf}\ formulas.
\section{Preliminaries}
\label{sec:prelim}
We study Boolean functions $f : \{0,1\}^n \to \{0,1\}$ on $n$
variables where $n$ is an arbitrary natural number. We are
interested in Boolean circuits over AND ($\ensuremath{\wedge}$), OR ($\ensuremath{\vee}$), and
NOT ($\ensuremath{\neg}$) gates computing such functions. We recall Boolean
circuits are directed acyclic graphs with a unique sink node
(output gate), where the source nodes (input gates) are labeled
by literals, i.e., $x_i$ or $\neg x_i$ for $i \in [n]$ and
non-source nodes are labeled by $\wedge$ or $\vee$ gates. The
\emph{depth} of a gate in the circuit is defined as the maximum
number of $\ensuremath{\wedge}$ or $\ensuremath{\vee}$ gates occurring on any path from an
input gate to this gate (inclusive). (Note that $\ensuremath{\neg}$ gates do
not contribute to depth.) The \emph{depth} of a circuit is then
defined to be the depth of the output gate. In particular, $\textsc{Cnf}$
and $\textsc{Dnf}$ formulas have depth two. We recall \emph{formulas}
are circuits such that the underlying undirected graph is a tree,
i.e., every gate other than the output gate has out-degree
exactly $1$.
We refer to constant depth formulas where all gates of the same depth
are of the same type
by the sequence of $\ensuremath{\wedge}$ and $\ensuremath{\vee}$ starting from the output gate. For
example, $\textsc{Cnf}$ formulas are $\ensuremath{\wedge}\ensuremath{\vee}$ formulas.
In our setting the input variables to circuits are allowed to take
an unstable value, denoted by $u$, in addition to the usual stable
values $0$ and $1$. The truth tables for gates in the basis
$\{\wedge, \vee, \neg\}$ in the presence of unstable values are given in
Table~\ref{tab:truth}. The truth table for larger fan-in $\ensuremath{\wedge}$ and $\ensuremath{\vee}$
gates in the presence of $u$ can be similarly defined using associativity.
Thus, we can evaluate circuits on inputs from $\{u, 0, 1\}^n$
in the usual inductive fashion.
\begin{table}[ht]
\centering
\begin{tabular}{c|ccc}
$\ensuremath{\wedge}$ & $u$ & 1 & 0 \\ \hline
$u$ & $u$ & $u$ & 0 \\
1 & $u$ & 1 & 0 \\
0 & 0 & 0 & 0 \\
\end{tabular}
\quad
\begin{tabular}{c|ccc}
$\ensuremath{\vee}$ & $u$ & 0 & 1 \\ \hline
$u$ & $u$ & $u$ & 1 \\
0 & $u$ & 0 & 1 \\
1 & 1 & 1 & 1
\end{tabular}
\quad
\begin{tabular}{c|c}
$\ensuremath{\neg}$ & \\ \hline
$u$ & $u$ \\
0 & 1 \\
1 & 0
\end{tabular}
\caption{Truth table for AND, OR, and NOT gates.}
\label{tab:truth}
\end{table}
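To make the semantics concrete, here is a minimal Python sketch (our own illustration, not part of the paper) of the ternary gates of Table~\ref{tab:truth}; the unstable value $u$ is encoded as the string \texttt{'u'}, and the variadic definitions realize larger fan-in via associativity.

```python
# Ternary gate semantics of the AND/OR/NOT truth tables; 'u' encodes
# the unstable value. Multi-argument calls realize larger fan-in gates.

def t_and(*args):
    # 0 dominates; otherwise any 'u' makes the output unstable.
    if 0 in args:
        return 0
    return 'u' if 'u' in args else 1

def t_or(*args):
    # 1 dominates; otherwise any 'u' makes the output unstable.
    if 1 in args:
        return 1
    return 'u' if 'u' in args else 0

def t_not(a):
    # NOT maps 'u' to 'u' and flips stable values.
    return 'u' if a == 'u' else 1 - a
```

For instance, \texttt{t\_and(0, 'u')} returns $0$, matching the observation that an AND gate with one stable $0$ input is insensitive to its other inputs.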
We now formally introduce the notion of \emph{hazard}.
\begin{defn}
A string $b\in \{0, 1\}^n$ is called a \emph{resolution} of a
string $a\in \{u, 0, 1\}^n$ if $b$ can be obtained from $a$ by
only changing the unstable values in $a$ to stable values.
\end{defn}
For example, strings $0100$, $0110$, $1110$, and $1100$ are all possible resolutions of the string $u1u0$, but $0111$ is not.
\begin{defn}
A circuit $C$ implementing a Boolean function has a
\emph{$1$-hazard} (or,~\emph{$0$-hazard}) on an input $a\in \{u, 0, 1\}^n$
if and only if $C(a) = u$ yet for all resolutions $b$ of
$a$, the value $C(b)$ is $1$ (resp.,~$0$). A circuit has a
\emph{hazard} if it has a $1$-hazard or a $0$-hazard.
\end{defn}
\begin{ex}
\label{ex:hazards}
Consider the $\textsc{Dnf}$ formula $F = (x_1 \wedge x_2) \vee (\neg
x_1 \wedge x_2) \vee (\neg x_1 \wedge \neg x_2)$ implementing
the function $f$ that evaluates to $0$ only when $x_1 = 1$ and
$x_2 = 0$. Consider the input $x_1x_2 = 0u$. The function $f$
evaluates to $1$ on both resolutions of $0u$. But, the formula
$F$ evaluates to $u$ on input $x_1 = 0$, $x_2 = u$. Therefore,
$F$ has a $1$-hazard at the input $0u$. We note that $u1$ is another
input where $F$ has a hazard.
\end{ex}
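The hazard in Example~\ref{ex:hazards} can be confirmed mechanically. The Python sketch below is our own illustration with our own encoding: each term is a tuple of signed variable indices ($+i$ for $x_i$, $-i$ for $\neg x_i$); it evaluates a \textsc{Dnf}\ ternarily and then compares the outputs over all resolutions.

```python
from itertools import product

# Brute-force hazard check at a single input. Terms are tuples of
# signed 1-based variable indices (+i for x_i, -i for NOT x_i);
# 'u' marks an unstable input value.

def eval_dnf(terms, a):
    """Ternary evaluation of a DNF at a point in {0, 1, 'u'}^n."""
    vals = []
    for term in terms:
        lits = []
        for l in term:
            v = a[abs(l) - 1]
            lits.append(v if l > 0 else ('u' if v == 'u' else 1 - v))
        vals.append(0 if 0 in lits else ('u' if 'u' in lits else 1))
    return 1 if 1 in vals else ('u' if 'u' in vals else 0)

def has_hazard_at(terms, a):
    """True iff the DNF outputs 'u' at a while every resolution of a
    yields the same stable value (Definition 2)."""
    if eval_dnf(terms, a) != 'u':
        return False
    unstable = [i for i, v in enumerate(a) if v == 'u']
    outs = set()
    for bits in product((0, 1), repeat=len(unstable)):
        b = list(a)
        for i, v in zip(unstable, bits):
            b[i] = v
        outs.add(eval_dnf(terms, tuple(b)))
    return len(outs) == 1

F = [(1, 2), (-1, 2), (-1, -2)]   # the DNF formula F of Example 1
```

Here \texttt{has\_hazard\_at(F, (0, 'u'))} and \texttt{has\_hazard\_at(F, ('u', 1))} both return \texttt{True}, matching the two hazards noted in the example, while \texttt{('u', 'u')} is not a hazard because its resolutions disagree.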
We remark that being hazard-free or not is a property of the formula or circuit and not a property of the function computed by it.
In this paper, we are interested in the time complexity of the
following language.
\begin{defn}
The language \textsc{Hazard}\ consists of all circuits that have
hazards.
\end{defn}
The hazards in a circuit implementing a function $f$ are closely
related to the minterms and maxterms of $f$. A definition of
these concepts and their relationship to hazards in circuits
follows.
\begin{defn}
\label{defn:implicants}
A \emph{$1$-implicant} (\emph{$0$-implicant}) of a Boolean
function $f$ on variables $x_1, \dotsc, x_n$ is an AND
  (resp.,~OR) over a subset $I$ of the literals $x_1, \dotsc,
  x_n, \neg{x}_1, \dotsc, \neg{x}_n$ such that for any
assignment $a\in\{0, 1\}^n$, if $I(a) = 1$ (resp.,~$I(a) =
0$),\footnote{By $I(a)$, we mean the AND (or, OR) function
over the set $I$ of literals evaluated at $a$. We often
overload notation to denote both the set of literals in an
implicant and the function by the same notation $I$.} then
$f(a) = 1$ (resp.,~$f(a) = 0$). In such a case the
assignment $a$ is said to be \emph{covered} by the implicant
$I$. The \emph{size} of an implicant is defined to be the
size of the set $I$.
\end{defn}
\begin{ex}
\label{ex:implicants}
Consider the function $f$ from Example~\ref{ex:hazards}.
The only $0$-implicant of $f$ is $\neg x_1 \vee x_2$ and it is of size $2$.
The function has five $1$-implicants $x_1 \wedge x_2$,
$\neg x_1 \wedge x_2$, $\neg x_1 \wedge \neg x_2$, $x_2$, and
$\neg x_1$. The assignments $x_1 = 0$, $x_2 = 0$ and $x_1 = 0$,
$x_2 = 1$ are covered by the $1$-implicant $\neg x_1$.
\end{ex}
\begin{defn}
A $1$-implicant ($0$-implicant) that is minimal with respect to set
containment is called a \emph{minterm} (resp.,~\emph{maxterm}).
\end{defn}
\begin{ex}
\label{ex:minterm-maxterm}
Continuing from Example~\ref{ex:implicants}, we note that $f$
has one maxterm $\neg x_1 \vee x_2$ and two minterms,
namely $\neg x_1$ and $x_2$.
\end{ex}
We have the following well-known \emph{cross-intersection} property of
the set of all minterms and the set of all maxterms.
\begin{fact}
\label{fact:min-max-intersection}
Let $S$ be any minterm and $T$ be any maxterm for a function $f$.
Then, $S \cap T \neq \emptyset$.
\end{fact}
\begin{proof}
Suppose not, then there exists an assignment $a$ such that $S(a) =1$
and $T(a) = 0$. But then from Definition~\ref{defn:implicants}
we have $f(a) = 1$ as well as $f(a) = 0$, which is a contradiction. \qed
\end{proof}
An implicant can be naturally represented as an assignment of
variables to $\{u,0,1\}$ where the variables not in the
implicant are set to $u$ and the ones present are set to $0$ or
$1$ so as to make the corresponding literal evaluate to $1$ for
$1$-implicants (or, $0$ for $0$-implicants). By evaluating a
circuit at a minterm or maxterm, we mean evaluating the circuit
on the corresponding assignment in $\{u,0,1\}^n$.
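Representing implicants this way makes exhaustive enumeration straightforward for small $n$. The Python sketch below (our own illustration) recovers the minterms of the function $f$ from Example~\ref{ex:hazards} by checking, for each point of $\{u,0,1\}^n$, whether all resolutions evaluate to $1$ and whether the point is minimal.

```python
from itertools import product

# Enumerate minterms by brute force: a point a in {0,1,'u'}^n is a
# 1-implicant if f is 1 on all resolutions of a, and a minterm if no
# stable coordinate of a can be relaxed to 'u'.

def is_one_implicant(f, a):
    us = [i for i, v in enumerate(a) if v == 'u']
    for bits in product((0, 1), repeat=len(us)):
        b = list(a)
        for i, v in zip(us, bits):
            b[i] = v
        if f(tuple(b)) != 1:
            return False
    return True

def minterms(f, n):
    imps = {a for a in product((0, 1, 'u'), repeat=n)
            if is_one_implicant(f, a)}
    return {a for a in imps
            if not any(a[:i] + ('u',) + a[i + 1:] in imps
                       for i in range(n) if a[i] != 'u')}

# Example 2's function: 0 only at x1 = 1, x2 = 0.
f = lambda x: 0 if x == (1, 0) else 1
```

Running \texttt{minterms(f, 2)} yields $\{(0, u), (u, 1)\}$, i.e., exactly the minterms $\neg x_1$ and $x_2$ of Example~\ref{ex:minterm-maxterm}, and the function has five $1$-implicants in total, as in Example~\ref{ex:implicants}.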
\begin{obs}
A circuit $C$ implementing a Boolean function $f$ has a
  $1$-hazard ($0$-hazard) if and only if it has a hazard at a
  minterm (resp.,~maxterm).
\end{obs}
\begin{proof}
  Given an input $a$ at which $C$ has a hazard,
  consider a minterm or maxterm that covers $a$. The output of
  evaluating $C$ on this minterm or maxterm is still $u$, because
  changing stable values in the input to $u$ cannot cause the
  output to go from $u$ to a stable value; moreover, all resolutions of
  the minterm (maxterm) are covered by it and hence evaluate to $1$
  (resp.,~$0$). The converse direction is immediate. \qed
Since any minterm or maxterm of an $n$-variable Boolean function can be
represented by a string from $\{u,0,1\}^n$, there can be at
most $3^n$ minterms for a Boolean function. How tight is this
upper bound? Chandra and Markowsky \cite{CM78} gave an improved upper bound
of $O(3^n/\sqrt{n})$ and also gave an example to show that this upper bound
is almost tight. We recall the function witnessing this lower bound now.
We call it the Chandra-Markowsky (\textsc{CM}) function.
The Chandra-Markowsky (\textsc{CM}) function
\cite{CM78} on $N = 3n$ variables for any natural $n$ is defined
as follows: it evaluates to $1$ if and only if at least $n$ of
the variables are set to $1$ and at least $n$ of the variables
are set to $0$. This function has $\binom{3n}{n}\binom{2n}{n}
=\Theta(3^N/N)$ minterms. Therefore, it has
almost the maximum possible number of minterms.
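One can verify this count for small parameters by brute force. In the Python sketch below (our own illustration), we enumerate the minterms of the \textsc{CM}\ function for $n = 1$ and $n = 2$ ($N = 3$ and $N = 6$ variables) and compare with $\binom{3n}{n}\binom{2n}{n}$.

```python
from itertools import product
from math import comb

# CM on N = 3n variables: 1 iff at least n inputs are 1 and at
# least n inputs are 0. We count its minterms by brute force.

def cm(x):
    n = len(x) // 3
    return 1 if x.count(1) >= n and x.count(0) >= n else 0

def minterm_count(f, n):
    def implicant(a):
        us = [i for i, v in enumerate(a) if v == 'u']
        for bits in product((0, 1), repeat=len(us)):
            b = list(a)
            for i, v in zip(us, bits):
                b[i] = v
            if f(tuple(b)) != 1:
                return False
        return True
    imps = {a for a in product((0, 1, 'u'), repeat=n) if implicant(a)}
    return sum(1 for a in imps
               if not any(a[:i] + ('u',) + a[i + 1:] in imps
                          for i in range(n) if a[i] != 'u'))
```

For $N = 3$ this gives $6 = \binom{3}{1}\binom{2}{1}$ and for $N = 6$ it gives $90 = \binom{6}{2}\binom{4}{2}$: each minterm sets $n$ variables to $1$, $n$ variables to $0$, and leaves the remaining $n$ unstable.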
The Strong Exponential Time Hypothesis (\textsc{Seth}) is a conjecture
introduced by Impagliazzo, Paturi and Zane \cite{IP01, IPZ01} to
address the time complexity of the \textsc{Cnf}\ satisfiability problem
(\textsc{Cnfsat}). It has been used to establish conditional lower
bounds for many $\textsc{NP}$-complete problems (e.g., \cite{LMS11,
Cygan12}) and problems with polynomial time algorithms (e.g.,
\cite{Bringmann15, Abboud15, VWilliams18}).
\begin{hyp}[\textsc{Seth}\ \cite{IP01, IPZ01}]
For every $\varepsilon >0$, there exists an integer $k \geq 3$
such that no algorithm can solve $k$-\textsc{Cnfsat} \footnote{Every clause in the \textsc{Cnf}\ is defined on at most $k$ literals.} on $n$
variables in $O(2^{(1-\varepsilon)n})$ time.
\end{hyp}
To establish our lower bound, we reduce from the \textsc{Dnf}\
falsifiability problem ($\textsc{Dnffalse}$): \emph{Given a \textsc{Dnf}\ formula
as an input, determine whether there exists an assignment that
falsifies it.}
This problem clearly has the same time complexity as the
$\textsc{Cnfsat}$ problem. \textsc{Seth}\ implies that there is no
$O(2^{(1-\epsilon)n} \poly(s))$ time algorithm, for any $\epsilon
> 0$, for \textsc{Dnffalse}\ where $n$ is the number of variables and $s$
is the number of clauses.
\section{A tight lower bound}
The idea behind the proof is as follows: The given \textsc{Dnf}\ formula
$F$ on $n$ variables has $2^n$ assignments. We construct a
formula $F'$ on $m \sim \log_3(2)\, n$ variables that implements a
function with at least $2^n$ minterms. This allows us to map
each assignment of variables in $F$ to a distinct minterm of
the function implemented by $F'$. We then show that $F$ is
falsifiable by an assignment $a$ if and only if the formula $F'$
has a hazard at the minterm $b$ that corresponds to $a$ in the
mapping. First, we define the function that is going to be
implemented by $F'$ and prove some important properties related
to it.
Let $s$ be any natural number that is a multiple of $3$. We now
define the auxiliary function $\textsc{ACM}$ that will be used in our
reduction. It is defined on $sn$ variables which are partitioned
into $n$ groups of $s$ variables each. We simply compose the
\textsc{AND} function on $n$ variables with the $\textsc{CM}$ function on
$s$ variables to define the $\textsc{ACM}$ function. That is,
\begin{align} \label{eq:ACM}
\textsc{ACM}(X_1, \dotsc, X_n) = \textsc{CM}(X_1) \ensuremath{\wedge} \dotsm \ensuremath{\wedge} \textsc{CM}(X_n)
\end{align}
where $X_i$ denotes the $i$-th group of $s$ variables. We denote
the $\textsc{CM}$ function on the $i$-th group of variables $X_i$ by
$\textsc{CM}_i$.
The following proposition characterizes the minterms and maxterms
of the $\textsc{ACM}$ function.
\begin{prop}
\label{prop:minterm-maxterm-ACM}
The following statements are true:
\begin{enumerate}
\item[(i)] The set of minterms of $\textsc{ACM}$ is the direct product
of the set of minterms of the $n$ disjoint $s$-variable $\textsc{CM}$
functions.
\item[(ii)] The set of maxterms of $\textsc{ACM}$ is given by the union
of the set of maxterms of $\textsc{CM}_i$ for $1\leq i \leq n$.
\end{enumerate}
\end{prop}
\begin{proof}
\phantom{We show}
\begin{enumerate}
\item[\textit{(i)}] We show a one-to-one correspondence between
the set of minterms of $\textsc{ACM}$ and the direct product of the set
of minterms of $\textsc{CM}_j$ for $j \in [n]$. For a minterm $I$ of
$\textsc{ACM}$, let $I_j$ be the restriction of $I$ to the variables in
$X_j$ for each $j$. Since $\textsc{ACM}$ evaluates to $1$ on $I$,
$\textsc{CM}_j$ must evaluate to $1$ on $I_j$. Thus, $I_j$ is a
$1$-implicant of $\textsc{CM}_j$ for each $j$. We now argue that in
fact it is a minterm. Suppose not, then there exists a $j$ such
that $I_j$ is not a minterm of $\textsc{CM}_j$. However it must contain
a minterm, since it is a $1$-implicant. Let $I'_j \subset I_j$
be the minterm contained in $I_j$. By replacing the part of
$I_j$ in $I$ by $I'_j$ we obtain $I'$. Clearly, $I' \subset I$
is a $1$-implicant of $\textsc{ACM}$. Thus, we have a contradiction to
the fact that $I$ is a minterm.
On the other hand, given minterms $I_j$ of $\textsc{CM}_j$ for each $j
\in [n]$, their union $I = \cup_{j \in [n]}I_j$ is a minterm of
$\textsc{ACM}$. Suppose not, then there exists $I' \subset I$ that is a
minterm. Since $I' \subset I$, then there exists $j \in [n]$
such that $I'_j \subset I_j$. Thus, we obtain a contradiction
to $I_j$ being a minterm of $\textsc{CM}_j$.
\item[\textit{(ii)}] For some $j \in [n]$, let $I$ be a maxterm
of $\textsc{CM}_j$. Then it is easily seen that $I$ is also a maxterm
of $\textsc{ACM}$. To prove the other direction, for a maxterm $I$ of
$\textsc{ACM}$, we argue that there exists a unique $j \in [n]$ such
that $I$ is a maxterm of $\textsc{CM}_j$. Clearly, there exists a $j
\in [n]$ such that $I$ is a $0$-implicant of $\textsc{CM}_j$. Since $I$
is a maxterm of $\textsc{ACM}$, it must only set variables in $X_j$.
And therefore, the term $I$ is a $0$-implicant of the unique
$\textsc{CM}_j$. Hence, the term $I$ must also be a maxterm of
$\textsc{CM}_j$.
\end{enumerate}\qed
\end{proof}
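Proposition~\ref{prop:minterm-maxterm-ACM}~$(i)$ can be checked exhaustively for small parameters. The Python sketch below (our own illustration) takes $s = 3$ and $n = 2$, so $\textsc{ACM}$ lives on $6$ variables, and confirms that its minterms are exactly the concatenations of minterms of the two $\textsc{CM}$ copies.

```python
from itertools import product

# ACM = CM(X1) AND CM(X2) with s = 3: CM on 3 variables is 1 iff the
# input contains at least one 1 and at least one 0.

def cm3(x):
    return 1 if (1 in x) and (0 in x) else 0

def acm(x):
    return cm3(x[:3]) & cm3(x[3:])

def minterms(f, n):
    def implicant(a):
        us = [i for i, v in enumerate(a) if v == 'u']
        for bits in product((0, 1), repeat=len(us)):
            b = list(a)
            for i, v in zip(us, bits):
                b[i] = v
            if f(tuple(b)) != 1:
                return False
        return True
    imps = {a for a in product((0, 1, 'u'), repeat=n) if implicant(a)}
    return {a for a in imps
            if not any(a[:i] + ('u',) + a[i + 1:] in imps
                       for i in range(n) if a[i] != 'u')}

M_cm = minterms(cm3, 3)   # 6 minterms of one CM copy
M_acm = minterms(acm, 6)  # should be the direct product, 36 in total
```

Indeed \texttt{M\_acm} equals $\{a + b : a, b \in \texttt{M\_cm}\}$ and has $6 \cdot 6 = 36$ elements, as Proposition~\ref{prop:minterm-maxterm-ACM}~$(i)$ predicts.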
Huffman \cite{Huf:57} showed that for any Boolean function $f$
with the set $\mathcal{M}$ of all minterms, the \textsc{Dnf}\ $F =
\bigvee_{I\in \mathcal{M}}I$ is hazard-free. For example,
consider the function $f$ defined in Example~\ref{ex:hazards}.
From Example~\ref{ex:minterm-maxterm} we know that $\neg x_1$ and $x_2$ are
the only two minterms of it. Therefore, $(\neg x_1) \vee (x_2)$
is the hazard-free DNF implementation for $f$ given by Huffman's construction.
In our reduction,
it will be crucial for us to be able to introduce hazards to the
implementation at specific minterms. For this purpose we modify
Huffman's hazard-free \textsc{Dnf}\ construction as follows.
\begin{prop}
\label{prop:hazard-in-f}
Let $f$ be a function on $n$ variables and
$S$ be a set of minterms of $f$ where
each minterm in $S$ is of size at most $n-1$.
Then, we can construct a \textsc{Dnf}\ for $f$ that has
hazards exactly at the minterms in $S$.
\end{prop}
\begin{proof}
Let $F$ be the hazard-free \textsc{Dnf}\ for $f$ given by Huffman's
construction. Let $I$ be a minterm in $S$ and $x$ be a variable not
in $I$. Such a variable exists by assumption.
Consider the formula $F'$ obtained by replacing
the term $I$ in $F$ with two new terms,
namely $I \ensuremath{\wedge} x$ and $I \ensuremath{\wedge} \bar{x}$.
  $F'$ computes the same function and has a $1$-hazard at the minterm
  $I$, since the two new terms evaluate to $u$ on $I$ and every
  other term evaluates to $0$ or $u$ on $I$. For any minterm
  not in $S$, $F'$ evaluates to $1$. Therefore, these
  are the only $1$-hazards. Also, $F'$ has no $0$-hazards: every term
  of $F'$ contains a minterm of $f$, which by
  Fact~\ref{fact:min-max-intersection} shares a literal with any given
  maxterm, and the maxterm assignment sets this literal to $0$. Hence
  $F'$ evaluates to $0$, not $u$, on every maxterm.
We repeat the aforementioned transformation for every minterm in $S$
to obtain the required \textsc{Dnf}\ for $f$ that has hazards
at the minterms in $S$. \qed
\end{proof}
To illustrate we consider our running example, the function $f$ from
Example~\ref{ex:hazards}. We know that $(\neg x_1) \vee (x_2)$
is a hazard-free DNF of $f$. Suppose we want to selectively introduce hazard
only at the minterm $\neg x_1$. Following Proposition~\ref{prop:hazard-in-f},
we modify the hazard-free representation to obtain the following:
\[(\neg x_1 \wedge x_2) \vee (\neg x_1 \wedge \neg x_2) \vee (x_2).\]
Suppose we further wanted to introduce hazard at the minterm $x_2$.
Then, again following Proposition~\ref{prop:hazard-in-f}, we obtain
\[(\neg x_1 \wedge x_2) \vee (\neg x_1 \wedge \neg x_2) \vee (x_2 \wedge x_1),\]
which has hazards at both the minterms. We note that this is
the DNF implementation from Example~\ref{ex:hazards}.
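The two transformation steps above can be replayed mechanically. In the Python sketch below (our own illustration; terms are frozensets of signed variable indices, $+i$ for $x_i$ and $-i$ for $\neg x_i$), \texttt{split\_term} implements the replacement of Proposition~\ref{prop:hazard-in-f}, and duplicate terms are merged automatically by the set representation.

```python
# Proposition 2's transformation: replace a minterm I by the two terms
# I AND x and I AND NOT x for a variable x not occurring in I. This
# introduces a 1-hazard exactly at I.

def split_term(dnf, term, var):
    rest = {t for t in dnf if t != term}
    return rest | {term | {var}, term | {-var}}

# Hazard-free Huffman DNF of Example 2's function: (NOT x1) OR (x2).
huffman = {frozenset({-1}), frozenset({2})}

step1 = split_term(huffman, frozenset({-1}), 2)   # hazard at NOT x1
step2 = split_term(step1, frozenset({2}), 1)      # hazard also at x2
```

\texttt{step2} comes out as the three terms $\{\neg x_1, x_2\}$, $\{\neg x_1, \neg x_2\}$, $\{x_1, x_2\}$, i.e., exactly the \textsc{Dnf}\ displayed above: the fourth candidate term $\neg x_1 \wedge x_2$ produced by the second split is already present and is absorbed by the set union.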
The following lemma applies the above construction to the $\textsc{ACM}$
function to efficiently introduce hazards in a selective manner. Notice that
any minterm of $\textsc{ACM}$ has some variable that is not in the
minterm.
\begin{lem}
\label{lem:hazard-in-ACM}
Consider the $\textsc{ACM}$ function on $sn$ variables where $s$ is
regarded as a constant. For $j \in [n]$, let $\mathcal{M}_j$ be
the set of all minterms of $\textsc{CM}_j$. Further, let $\mathcal{S}
\subseteq \mathcal{M}_i$ for some $i$. Then, there is a
poly-time algorithm that constructs an $\ensuremath{\wedge}\ensuremath{\vee}\ensuremath{\wedge}$ formula
for $\textsc{ACM}$ that has hazards exactly at minterms in the set
$\mathcal{M}_1 \times \dotsm \times \mathcal{M}_{i-1} \times
\mathcal{S} \times \mathcal{M}_{i+1}\times \dotsm \times
\mathcal{M}_n$.
\end{lem}
\begin{proof}
Let $F_j$ be the hazard-free \textsc{Dnf}\ formula for $\textsc{CM}_j$ and
$F'_i$ be the \textsc{Dnf}\ formula for $\textsc{CM}_i$ that has hazards only
at minterms in the set $\mathcal{S}$ obtained by
Proposition~\ref{prop:hazard-in-f}. We output the $\ensuremath{\wedge}\ensuremath{\vee}\ensuremath{\wedge}$
formula \((\ensuremath{\wedge}_{j \neq i} F_j) \ensuremath{\wedge} F'_i \) for $\textsc{ACM}$. The
size of the formula is $O(n)$ because $s$ is a constant.
We now argue that this formula has hazards only at minterms in
the set $\mathcal{M}_1 \times \dotsm \times \mathcal{M}_{i-1}
\times \mathcal{S} \times \mathcal{M}_{i+1}\times \dotsm \times
\mathcal{M}_n$. Since the individual implementation of
$\textsc{CM}_j$'s have no $0$-hazards, by
Proposition~\ref{prop:minterm-maxterm-ACM}~$(ii)$, the formula
for $\textsc{ACM}$ has no $0$-hazards. Now suppose $I$ is a minterm of
$\textsc{ACM}$ such that the $\ensuremath{\wedge}\ensuremath{\vee}\ensuremath{\wedge}$ formula has a hazard at it.
By Proposition~\ref{prop:minterm-maxterm-ACM}~$(i)$, we know
that $I_j$ is a minterm of $\textsc{CM}_j$ for each $j$. Therefore,
there exists a $j$ such that the \textsc{Dnf}\ implementation of
$\textsc{CM}_j$ has a hazard at $I_j$. But then by construction
it must be that $j = i$ and $I_j \in \mathcal{S}$. \qed
\end{proof}
We now prove our main theorem.
\begin{thm}
\label{thm:main}
If \textsc{Seth}\ is true, then for any $\epsilon > 0$, there is no
algorithm for \textsc{Hazard}\ that runs in time
$O({3}^{(1-\epsilon)n}\poly(s))$, even when the inputs are
formulas of depth four. Here $n$ is the number of variables in
the formula and $s$ is the size of the formula.
\end{thm}
\begin{proof}
Let $r$ be a positive integer and $s = s(r)$ be the minimum
integer such that $2^r \leq \binom{s}{s/3}\binom{2s/3}{s/3}$.
We will reduce \textsc{Dnffalse}\
instances on $rn$ variables to instances of \textsc{Hazard}\ on $sn$
variables. In addition, the circuit we output will be an
$\ensuremath{\vee} \ensuremath{\wedge} \ensuremath{\vee} \ensuremath{\wedge}$ formula. Recall, by our choice, $s$ is a
multiple of $3$. For any $\epsilon > 0$, we claim that there
exists a $\delta > 0$ such that $3^{(1-\epsilon)s} <
  2^{(1-\delta)r}$ for sufficiently large $r$. Let $f(s)$ be the
  number of minterms in the $s$-variable \textsc{CM}\ function. Then
  $f(s+3) / f(s) \to 27$ as $s\to \infty$, whereas increasing $r$ by
  $3$ multiplies the number of assignments $2^r$ by $8$. Hence
  $s(r)/r \to \log_{27}(8) = \log_3(2)$ as $r\to \infty$, and the
  claim follows.
Let $F$ be the input \textsc{Dnf}\ on $rn$ variables. We consider the
variables of $F$ to be partitioned into $n$ groups $Y_j$, $j
\in [n]$, of $r$ variables each. We arbitrarily associate with
every assignment $\alpha \in \{0,1\}^r$ to the variables in
$Y_j$ a unique minterm $I_\alpha$ of $\textsc{CM}_j$ and call this
bijection $\beta_j$. Recall that $s(r)$ is defined such that
the number of minterms of $\textsc{CM}_j$ is at least $2^r$. The
mapping $\beta_j$ is constant-sized and can be computed easily
given $j$. The reduction is given in Algorithm~\ref{alg:main}.
It is easy to see that the algorithm runs in polynomial time and produces
formulas of depth four. We now argue the correctness of the
reduction.
\begin{algorithm}
\caption{Reduction from \textsc{Dnffalse}\ to \textsc{Hazard}}
\label{alg:main}
\begin{algorithmic}
\State $F' \leftarrow F$
\ForAll {literals $\ell$ occurring in $F$}
\State Replace $\ell$ in $F'$ with LITERAL($\ell$)
\EndFor
\State Collapse $\ensuremath{\wedge}$ gates at depth three and four in
$F'$ to a single layer.
\State \textbf{return} $F'$
\Procedure {LITERAL}{$\ell$}
\State Let $\ell$ be $x_j$ or $\neg x_j$.
\State $i \leftarrow \lceil j/r \rceil$; then, $x_j$ belongs to
the group $Y_i$.
      \State $T \leftarrow \{ \alpha \in \{0,1\}^{|Y_i|} \mid \text{ the literal }\ell\text{ is falsified by } \alpha\}$
\State Recall $\beta_i$ is the bijection from $\{0,1\}^{Y_i}$ to the set of minterms $\mathcal{M}_i$ of $\textsc{CM}_i$.
\State $S \leftarrow \beta_i(T)$
\State $G \leftarrow \ensuremath{\wedge}\ensuremath{\vee}\ensuremath{\wedge} \text{ formula for }\textsc{ACM} \text{ given by Lemma~\ref{lem:hazard-in-ACM} on input } S\subseteq \mathcal{M}_i$
\State \textbf{return} $G$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Given an assignment $y \in \{0,1\}^{rn}$ to all variables, let
$y_j$ denote the restriction of $y$ to variables in the group
$Y_j$. Further let $I_{y_j}$ be the unique minterm of $\textsc{CM}_j$
associated with $y_j$. From
Proposition~\ref{prop:minterm-maxterm-ACM}~$(i)$, we know that
$I_y = \bigtimes_{j \in [n]}I_{y_j}$ is a minterm of $\textsc{ACM}$.
Thus, we associate this minterm $I_y$ with the assignment $y$.
We will now prove that $F$ is falsified by $y$ if and only if
$F'$ has a hazard at $I_y$.
To prove this, we consider the formula $F'$ in the algorithm
just before collapsing $\ensuremath{\wedge}$ gates of depths three and four
(i.e., it has depth five). The gates in $F'$ correspond to
gates in $F$ in the following fashion: The output gate and
gates of depth four in $F'$ correspond to the output gate and
depth one gates in $F$ respectively and the gates of depth
three in $F'$ correspond to literals in $F$. Since all
occurrences of literals in $F$ are replaced with formulas
computing $\textsc{ACM}$ in $F'$, the function computed at the output
gate and all gates of depth four and three in $F'$ is also
$\textsc{ACM}$.
Consider a gate $g$ at \emph{depth three} in $F'$. By
construction, the sub-formula rooted at $g$ satisfies the
property that it has a hazard at $I_y$ if and only if the
corresponding literal in $F$ evaluates to $0$ on $y$.
We now consider a gate $g$ at \emph{depth four} in $F'$.
It is an $\ensuremath{\wedge}$ gate. Assume that it evaluates to $0$ on input $y$ in $F$.
Then, at least one of its inputs in $F$ must also evaluate to $0$ on $y$.
From the above argument about depth three gates, we know that the
corresponding gate in $F'$ must have a hazard at the minterm $I_y$.
Therefore, this gate must evaluate to $u$ on $I_y$ while
the other inputs to the gate $g$ evaluate to $1$ or $u$.
This is because we are evaluating an implementation of
$\textsc{ACM}$ on one of its minterms. Thus, the sub-formula rooted at
$g$ in $F'$ must have a hazard at the minterm $I_y$ corresponding to $y$.
In the other direction, suppose $g$ has a hazard at the
minterm $I_y$. Then, at least one of its inputs must have a
hazard at this minterm, which in turn implies that the corresponding
literal in $F$ evaluates to $0$ on the assignment $y$.
Since $g$ is an $\ensuremath{\wedge}$ gate we thus obtain that $g$ evaluates to $0$ on $y$
in $F$.
Finally we consider the $\ensuremath{\vee}$ gate $g$ at the root of $F'$.
If $g$ outputs $u$ on $I_y$, all gates feeding into $g$ must
output $u$ on $I_y$. (They cannot evaluate to $0$ because each of them
is evaluating $\textsc{ACM}$ function on a minterm.)
Therefore, all the corresponding gates in $F$ must output $0$ on
$y$ causing $F$ to output $0$. On the other hand, if $F$
outputs $0$ on $y$, every gate feeding into the root in $F$ must
output $0$ on $y$ and therefore, all the corresponding gates in
$F'$ must output $u$ on $I_y$ causing $F'$ to output $u$ as
well. \qed
\end{proof}
\section{Detecting hazards in depth-two formulas}
We now look at the time complexity of detecting hazards in depth
two formulas. We will focus on input formulas in \textsc{Dnf}. The dual
statements are true for formulas in \textsc{Cnf}. It is known that a
\textsc{Dnf}\ formula that does not contain terms with contradictory
literals (i.e., $x$ and $\ensuremath{\neg} x$ for some variable $x$) cannot
have $0$-hazards. Since it is trivial to remove such terms, the
interesting case is to detect 1-hazards in \textsc{Dnf}\ formulas.
M\'at\'e, Das, and Chuang \cite{MATE} gave an exponential time
algorithm that takes as input a \textsc{Dnf}\ formula and outputs an
equivalent \textsc{Dnf}\ formula that is hazard-free. Such an algorithm
is necessarily exponential time because there are functions that
have size $s$ \textsc{Dnf}\ formulas such that any hazard-free \textsc{Dnf}\
formula for them has size at least $3^{s/3}$
\cite[Theorem~1.3]{CM78}. We show that there is a polynomial time
algorithm if the goal is to only \emph{detect} whether an input
\textsc{Dnf}\ formula has a 1-hazard.
We start with a crucial observation that helps us witness $1$-hazards
in \textsc{Dnf}\ formulas easily. The simplest \textsc{Dnf}\ formula with a $1$-hazard is
$x \ensuremath{\vee} \ensuremath{\neg} x$. We show that every \textsc{Dnf}\ formula with a
$1$-hazard has a $1$-hazard $\alpha \in \{0,1,u\}^n$ such that
when the \textsc{Dnf}\ is restricted by the stable values in $\alpha$,
the \emph{simplified} \textsc{Dnf}\ has the form \[x \ensuremath{\vee} \ensuremath{\neg} x \ensuremath{\vee} H,\]
for some \textsc{Dnf}\ formula $H$ such that no term in $H$ evaluates to
$1$. Obviously, it is easy to detect that there is a hazard in
such a simplified \textsc{Dnf}\ formula. We now introduce a definition
that will help us formally state the lemma.
\begin{defn}
A \textsc{Dnf}\ $H$ over variables and constants is said to be \emph{equal} to $1$ if
at least one of the terms in it evaluates to $1$.
\end{defn}
For example, $x \ensuremath{\vee} \ensuremath{\neg} x$ is \emph{not equal} to $1$, though it evaluates to $1$ on all possible inputs. On the other hand $x \ensuremath{\vee} \ensuremath{\neg} x \ensuremath{\vee} 1$ is \emph{equal} to $1$, since it contains the term $1$ that trivially evaluates to $1$.
\begin{lem}
\label{lem:easy-hazards}
Let $F$ be a \textsc{Dnf}\ on $n$ variables. Suppose that $F$ has a
$1$-hazard. Then, there exists $\alpha \in \{0,1,u\}^n$ such
that $F$ has a $1$-hazard at $\alpha$, and furthermore,
\[F|_{\alpha} = x \ensuremath{\vee} \ensuremath{\neg} x \ensuremath{\vee} H,\] for some variable $x$ and
\textsc{Dnf}\ $H$ that is not equal to $1$. Here, $F|_{\alpha}$
represents the \textsc{Dnf}\ obtained by simplifying the terms of $F$
upon substitution of variables by the stable values of $\alpha$.
A \textsc{Dnf}\ is simplified by exhaustively applying the following two rules:
\begin{enumerate}
\item[(i)] Remove terms with a literal that evaluates to $0$,
\item[(ii)] Shorten terms by the removal of literals that evaluate to $1$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $\beta \in \{0,1,u\}^n$ be an arbitrary 1-hazard for $F$.
Substitute the stable variables given by $\beta$ in $F$ to
obtain a simplified \textsc{Dnf}\ $G$. If $G = x \ensuremath{\vee} \ensuremath{\neg} x \ensuremath{\vee} H$,
for some variable $x$ and \textsc{Dnf}\ $H$, then $H$ is not
equal to $1$ because $G$ must evaluate to $u$ on $\beta$ and, hence,
$\beta$ is the required $1$-hazard. Suppose not, then either
there exists a term of size $1$ in $G$ or every term is of size
at least $2$. In both cases we construct the required hazard
from $\beta$ iteratively. We will increase the number of
stable values in $\beta$ at each step while ensuring that it
remains a $1$-hazard. In particular, we argue in both cases
that there exists a variable $x$ in $G$ such that we can set it
to $0$ and the resulting partial assignment is still a
$1$-hazard. Clearly this process terminates in at most $n$
steps. We now show how to find the variable in each case.
Suppose there exists a term of size $1$ in $G$. That is,
$G =\ell \ensuremath{\vee} H$, for some literal $\ell$ and, moreover, $H$ does
not have $\ensuremath{\neg}{\ell}$ as a term. Then we extend the partial
assignment $\beta$ by setting $\ell = 0$. We now claim that
the new partial assignment is still a $1$-hazard. This is
easily seen because $\ell =0$ either kills a term in $G$ or
reduces its size. It never makes a term evaluate to $1$, and
therefore the hazard propagates.
In the remaining case, every term in $G$ is of size at least
$2$. We pick an arbitrary literal from an arbitrary term and
set it to $0$. Again as before we can argue that the hazard
propagates since any term is either killed or reduced in size,
but never evaluated to $1$.
Note that if $G$ has only one variable, then it must be $x \ensuremath{\vee}
\ensuremath{\neg} x$ for some $x$. This completes the proof of the lemma.
\qed
\end{proof}
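The restriction operation $F|_\alpha$ used in the lemma is easy to implement. Below is a Python sketch (our own illustration; terms are tuples of signed variable indices, and a partial assignment is a dict from variable index to a stable value, with unassigned variables playing the role of $u$).

```python
# Simplification rules (i) and (ii) of Lemma 2: drop every term that
# contains a falsified literal, and delete satisfied literals from the
# surviving terms.

def restrict(dnf, alpha):
    out = []
    for term in dnf:
        lits, killed = [], False
        for l in term:
            v = alpha.get(abs(l))
            if v is None:
                lits.append(l)          # variable is still unstable
            elif (v == 1) != (l > 0):
                killed = True           # literal evaluates to 0: rule (i)
                break
            # otherwise the literal evaluates to 1: rule (ii) drops it
        if not killed:
            out.append(tuple(lits))
    return out

F = [(1, 2), (-1, 2), (-1, -2)]         # the DNF of Example 1
```

Restricting $F$ by $x_1 = 0$ yields the terms $(x_2)$ and $(\neg x_2)$, i.e., $x_2 \vee \neg x_2$: the form promised by the lemma, here with an empty $H$.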
We now give a polynomial time algorithm to detect $1$-hazards in
\textsc{Dnf}\ formulas.
\begin{thm}
\label{thm:poly-time-detection}
There is a polynomial time algorithm that detects $1$-hazards
in \textsc{Dnf}\ formulas.
\end{thm}
\begin{proof}
  Let $F$ be the input \textsc{Dnf}\ formula. From
  Lemma~\ref{lem:easy-hazards}, we know that to check whether $F$
  has a $1$-hazard it suffices to check for a hazard of the special
  form guaranteed by the lemma, which we call an
  \emph{easy-to-detect} hazard. Observe that an easy-to-detect
  hazard is nothing but a partial assignment $\alpha$ such that
  $F|_\alpha$ has both $x$ and $\ensuremath{\neg} x$ as terms for some
  variable $x$ and, furthermore, no term evaluates to $1$.
  We now give a polynomial time procedure to
  find such an easy-to-detect hazard.
To find such a partial assignment we do the following: For
every pair of terms $S\ensuremath{\wedge} x$ and $T\ensuremath{\wedge} \ensuremath{\neg}{x}$ in $F$, for
some variable $x$, we check if $S$ and $T$ can be
simultaneously set to $1$ while no other term in $F$ evaluates
to $1$. If so, then clearly this partial assignment is an
easy-to-detect hazard.
It is easily seen that the above procedure runs in polynomial
time. \qed
\end{proof}
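The pairwise search in the proof above can be sketched in Python. The encoding of a term as a frozenset of signed integers (+i for variable i, -i for its negation) is an illustrative assumption of this sketch, not part of the paper.

```python
from itertools import combinations

def detect_1_hazard(terms):
    """Polynomial-time search for an easy-to-detect 1-hazard in a DNF
    formula.  A term is a frozenset of signed integer literals (+i for
    variable i, -i for its negation).  Returns a pair (x, alpha), where
    alpha is the partial assignment (variables absent from alpha stay
    unstable, u) under which both x and not-x survive as terms while no
    term evaluates to 1; returns None if no such witness exists."""
    for t1, t2 in combinations(terms, 2):
        for lit in t1:
            if -lit not in t2:
                continue                      # no complementary pair on lit
            alpha = (t1 - {lit}) | (t2 - {-lit})
            # S and T must be simultaneously satisfiable, and the hazard
            # variable itself must stay unstable under alpha.
            if any(-l in alpha for l in alpha) or lit in alpha or -lit in alpha:
                continue
            # Setting exactly the literals in alpha to 1 must not make
            # any full term of F evaluate to 1.
            if any(term <= alpha for term in terms):
                continue
            return abs(lit), {abs(l): int(l > 0) for l in alpha}
    return None
```

For the two-input multiplexer formula (not-s and a) or (s and b), the procedure returns the classic hazard witness a = b = 1 with s unstable.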
Even though all $0$-hazards can be eliminated from \textsc{Dnf}\ formulas
by removing all terms with contradictory literals, the presence
of such terms does not imply a $0$-hazard. For example, the single
variable \textsc{Dnf}\ formula $(x\ensuremath{\wedge}\ensuremath{\neg}{x}) \ensuremath{\vee} x$ does not have any
hazards.
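For concreteness, the three-valued evaluation of a \textsc{Dnf}\ formula and the straightforward exponential-time detector referred to in the conclusion (enumerate all $3^n$ ternary points) can be sketched as follows; the encoding of literals as signed integers and of the unstable value as the string 'u' is an illustrative choice of this sketch.

```python
from itertools import product

def eval_dnf(terms, assign):
    """Evaluate a DNF formula under Kleene's three-valued logic.
    assign maps each variable to 0, 1 or 'u'; a term is a frozenset of
    signed integer literals (+i for variable i, -i for its negation)."""
    result = 0
    for term in terms:
        vals = []
        for lit in term:
            v = assign[abs(lit)]
            vals.append('u' if v == 'u' else (v if lit > 0 else 1 - v))
        if all(v == 1 for v in vals):
            return 1                      # one satisfied term suffices
        if 0 not in vals:
            result = 'u'                  # an unresolved term remains
    return result

def find_hazard(terms, variables):
    """Brute-force hazard detection: a ternary point is a hazard when
    the formula outputs u although the Boolean function is constant on
    all completions of the unstable inputs.  Exponential in n."""
    for point in product([0, 1, 'u'], repeat=len(variables)):
        assign = dict(zip(variables, point))
        if eval_dnf(terms, assign) != 'u':
            continue
        unstable = [v for v in variables if assign[v] == 'u']
        completions = {
            eval_dnf(terms, {**assign, **dict(zip(unstable, bits))})
            for bits in product([0, 1], repeat=len(unstable))
        }
        if len(completions) == 1:
            return assign, completions.pop()
    return None
```

On the examples above, x or not-x has a $1$-hazard at x = u, while (x and not-x) or x has no hazard at all.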
In contrast to the poly-time algorithm to detect $1$-hazards, the
following simple reduction shows that detecting $0$-hazards in
\textsc{Dnf}\ formulas is hard.
\begin{thm}
If \textsc{Seth}\ is true, then there is no
$O(2^{(1-\epsilon)n}\poly(s))$ time algorithm, for any
$\epsilon > 0$, that detects 0-hazards in \textsc{Dnf}\ formulas on $n$
variables and $s$ terms.
\end{thm}
\begin{proof}
We will reduce the \textsc{Dnf}\ falsifiability problem to this
problem. Let $F$ be the input \textsc{Dnf}\ formula for the
falsifiability problem. We assume without loss of generality
that $F$ does not contain terms with contradictory literals.
We now claim that the \textsc{Dnf}\ formula $G = F \ensuremath{\vee} (x\ensuremath{\wedge}\ensuremath{\neg}{x})$,
where $x$ is a new variable, has a $0$-hazard if and only if
$F$ is falsifiable. If $F$ is falsifiable at an input $a$,
then the input $(a, u)$ is a $0$-hazard for $G$. On the other
hand if $F$ is a tautology, then so is $G$ and, therefore, $G$
cannot have a $0$-hazard.\qed
\end{proof}
The above results can also be stated for \textsc{Cnf}\ formulas using the
following observation.
\begin{obs}
A \textsc{Cnf}\ formula $F$ has a $0$-hazard if and only if the \textsc{Dnf}\
formula $G = \neg F$ has a $1$-hazard.
\end{obs}
\begin{proof}
Assume $F$ has a $0$-hazard at $a\in \{u, 0, 1\}^n$. Each
clause in $F$ evaluates to $1$ or $u$ on input $a$. Then, the
corresponding term in $G$ evaluates to $0$ or $u$ respectively
which implies that the output of $G$ is also $u$. The other
direction is similar.
\end{proof}
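The transform behind the observation is a direct application of De Morgan's laws and can be written in one line; the signed-integer literal encoding is again an illustrative assumption of this sketch.

```python
def negate_cnf_to_dnf(clauses):
    """For a CNF F with clauses (l1 or ... or lk), the DNF G = not-F
    has the terms (not-l1 and ... and not-lk).  Clauses and terms are
    frozensets of signed integer literals (+i for variable i, -i for
    its negation)."""
    return [frozenset(-lit for lit in clause) for clause in clauses]
```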
Observe that any \textsc{Cnf}\ formula that has a $1$-hazard contains a
clause with both a variable and its negation. Therefore, we
can easily eliminate all $1$-hazards from a given $\textsc{Cnf}$ formula.
Combining these observations with theorems for $\textsc{Dnf}$ formulas,
we have the following theorem.
\begin{thm}
There is a polynomial-time algorithm that detects $0$-hazards
in \textsc{Cnf}\ formulas. Also, if \textsc{Seth}\ is true, there is no
$O(2^{(1-\epsilon)n}\poly(s))$ time algorithm for any $\epsilon
> 0$ that detects $1$-hazards in \textsc{Cnf}\ formulas on $n$
variables and $s$ clauses.
\end{thm}
\section{Conclusion}
We show that under \textsc{Seth}\ the straightforward hazard detection
algorithm cannot be significantly improved upon, even when the
inputs are restricted to be depth-$4$ formulas. We also show that
there are polynomial time algorithms for detecting $1$-hazards
in \textsc{Dnf}\ formulas (resp., $0$-hazards in \textsc{Cnf}), while $0$-hazards
(resp., $1$-hazards) can be easily eliminated. The complexity of
hazard detection for depth-$3$ formulas remains open.
\section*{Acknowledgements}
The authors would like to thank the anonymous reviewers. Their
suggestions helped to greatly improve the presentation of results
in the paper.
\bibliographystyle{elsarticle-num}
\section{Probing Cosmic Dawn}\label{sec1}
Understanding the early Universe, when the first stars and galaxies formed, is one of the major science goals of a number of new observatories. The recently launched JWST will directly image these early galaxies in deep near-infrared surveys. Its increased sensitivity, in comparison to the previous generation of telescopes, will allow JWST to target faint high-redshift galaxies existing during the first few hundred million years of cosmic history all the way out to the cosmological redshift of $z\sim 20$ \cite{Windhorst_JWST_2006}. Future confirmed and proposed X-ray missions, such as ATHENA~\cite{Athena}, LYNX~\cite{Lynx} and AXIS \cite{axis}, will supplement this exploration by observing the hot gas in the Universe.
Radio telescopes aim at complementing this picture by mapping the neutral intergalactic gas across the first billion years of cosmic history via observations of the 21-cm spin-flip transition of atomic hydrogen seen against the radio background radiation, which is usually assumed to be the Cosmic Microwave Background~(CMB) \cite{Madau, Mesinger_2019}.
Upper limits on the 21-cm signal from the Epoch of Reionization~(EoR, $z\sim6-15$) are already available, measured both by radiometers \cite{ EDGES_high_band_experimental_paper_2017, SARAS2_radiometer_2018}, which probe the sky-averaged (global) 21-cm signal, and by large interferometric arrays \cite{HERA_2017, LOFAR_current_EoR_2018, LEDA_2018, Trott_mwa_2020, Gehlot_lofar_2019} targeting its fluctuations. These data have recently allowed constraints to be derived on the astrophysical processes at the EoR \cite{Singh_saras2_2017, Singh_saras2_2018, Monsalve_2019, Mondal_LOFAR_2020, Ghara_MWA_2021, LOFAR2021, HERA, SARAS2} and are in broad agreement with other probes of the reionization history, such as high-redshift quasars and galaxies \cite{Mesinger_2010, Schroder_2012, Ouchi_2018, Morales_2021, Greig_2022}.
The observational status of the Cosmic Dawn~(CD) signal originating from higher redshifts ($z\sim15-30$) is more intriguing: the EDGES Low-Band collaboration reported a tentative detection of an absorption profile at $z\sim 17$ \citep{EDGES2018}, which is at least two times deeper than what is predicted by conventional theoretical modelling \citep{Cohen_charting_2017, Reis_sta_2021}. Such a strong signal implies either the existence of an excess radio background above the CMB \citep{Feng2018,EDGES2018} or a non-standard thermal history of the intergalactic gas \cite{BarkanaDM2018,EDGES2018}.
The cosmological origin of this signal was recently disputed by the SARAS3 collaboration, who conducted an independent experiment and reported a non-detection of the EDGES best-fit profile in their data \cite{SARAS3_spectrometer_2020, SARAS3_antenna_2021, SARAS_reciever_2021, SARAS3}.
It has also been shown that the reported EDGES signal can be partially explained by invoking sinusoidal instrument systematics \cite{Hills2018, Singh2019, Bradley2019, Sims2020}; however, additional efforts are being made to verify the EDGES detection and to make independent measurements of the 21-cm signal from CD both with interferometers \cite{Mellema_SKA_2013, Zarka_nenuFar_2018, AARTFAAC_2020, HERA} and radiometers \cite{8879199, MIST, LEDA_2018}.
The non-detection by SARAS3 of the EDGES profile increases the likelihood of the anomalous absorption feature being non-cosmological and brings the focus back to the more conventional astrophysical scenarios.
In this work, we use the SARAS3 data to provide constraints on the astrophysical processes at CD. We consider a potential population of high-redshift radio-luminous galaxies which contribute to the radio background radiation, thus affecting the 21-cm signal. In general, radio galaxies are expected in standard astrophysical scenarios \cite{Mirocha2019, Reis2020}; it is only for extremely high values of their radio luminosity that their contribution is large enough to explain the EDGES signal \cite{EDGES2018, Feng2018, Ewall2018, Jana2018, Mirocha2019, Fialkov2019, Reis2020}. Here, we consider a wide selection of models varying astrophysical properties of high-redshift galaxies over a broad range. We repeat our analysis for two additional scenarios (shown in \textit{Supplementary Material}): one with the CMB as the radio background radiation \cite{Reis_sta_2021} and the other with a phenomenological synchrotron radio background in addition to the CMB \cite{Fialkov2019}.
In \cref{sec:data} we discuss in more detail the SARAS3 data. \Cref{sec:modelling} introduces the different modelled components in our analysis and discusses how we determine constraints on the astrophysical processes at CD, with further details given in \textit{Methods}. Our constraints on high-redshift radio galaxies are discussed in \cref{sec:results}. We conclude in \cref{sec:conclusions}. Additional astrophysical models are discussed in \textit{Supplementary Material}.
\section{Data} \label{sec:data}
SARAS3 is a radiometer based on a monocone antenna that has made observations of the sky from a location in Southern India in the band $43.75-87.5$ MHz, targeting the cosmological 21-cm signal from $z\sim 15-32$
\cite{SARAS3_spectrometer_2020,SARAS3_antenna_2021,SARAS_reciever_2021}. The experiment is the first global 21-cm experiment of its kind to take observations whilst floating on a body of water, which is expected to improve the total efficiency of the antenna \cite{SARAS3_antenna_2021} and to prevent the introduction of non-smooth systematics caused by stratified ground emission that can impede the detection of a global signal \cite{SARAS2}. The total efficiency quantifies how the sky radiation is coupled to the antenna, including losses in the local environment, such as the ground or water beneath the antenna.
A total of 15 hours of observations, remaining after radio frequency interference filtering, was integrated in the frequency range $55-85$~MHz~($z\sim 15-25$), with corrections made for emission from the water beneath the antenna and for the receiver noise temperature. The data were then appropriately scaled, given an estimate of the total efficiency,
to produce an average measurement of the sky temperature, $T_\mathrm{sky}$,
which we expect to be the sum of the Galactic and extra-Galactic foregrounds, $T_\mathrm{fg}$, and the cosmological 21-cm signal, $T_{21}$.
Previously, a log-log polynomial foreground model was fitted to the data in combination with the phenomenological best-fit EDGES absorption profile multiplied by a scale factor, $s$, using a Markov Chain Monte Carlo analysis.
The data were shown to reject the presence of the EDGES signal with 95.3\% confidence and a series of EDGES-like signals, representing the likelihood distributions of uncertainties in the profile parameters, were rejected with a significance of 90.4\% \cite{SARAS3}. The SARAS3 measurement of the sky-averaged radio spectrum thus represents a non-detection of the EDGES absorption feature, with the potential to constrain astrophysical scenarios that result in signals larger in magnitude than the instrument noise.
\section{The Cosmological 21-cm Signal}
\label{sec:modelling}
To provide constraints on the astrophysical processes at CD using the SARAS3 data, we need to model the global 21-cm signal. Theoretical predictions of the signal are made difficult owing to the non-local impact of the non-uniform radiative fields produced by a distribution of luminous sources. Either numerical or semi-numerical methods are required to calculate the three-dimensional 21-cm signal and evolve it with time. The global signal can then be calculated as the spatial average, although one-dimensional radiative transfer codes are also used to calculate the global signal \cite{Mirocah:2020}. As astrophysical processes at high redshifts are poorly understood, a range of theoretical predictions for the 21-cm signal need to be computed for different astrophysical scenarios, which can then be constrained by data. In this work, we use a semi-numerical method \cite{Visbal2012, Fialkov2013, Fialkov2014, Fialkov2014Natur, Reis_sta_2021} to calculate the 21-cm signal from an evolving simulated Universe.
The calculation takes into account important processes that shape the 21-cm signal: the baryonic matter in the early Universe is predominantly composed of atomic hydrogen and, as the first stars and black holes form at $z\sim 30$ \cite{Fialkov2012, Klessen2019}, they affect both the total intensity and the fluctuations of the hydrogen signal. Radiation from the first stars in the Lyman band plays a fundamental role, as it enables observations of the 21-cm signal against the radio background by coupling the characteristic temperature of the spin-flip transition, the spin temperature $T_\mathrm{S}$, to the kinetic temperature of the gas, $T_\mathrm{K}$ \cite{Wouthuysen, Field}. At CD the radio background is typically warmer than the gas, and the signal appears as an absorption feature against that background. Heating of the intergalactic medium subsequently raises the gas temperature to, and perhaps above, that of the radio background \cite{Fialkov2014, Fialkov2014Natur, Reis_sta_2021}, resulting in emission at low redshifts. This evolution culminates at the EoR, when ultraviolet radiation from stars ionizes the neutral hydrogen in the intergalactic medium and the signal disappears. If it exists at high redshifts, any additional radio background above the CMB would contribute to the 21-cm signal by increasing the background radiation temperature, $T_\mathrm{rad}$, and thus deepening the absorption profile \cite{Feng2018, Ewall2018, Mirocha2019,Fialkov2019, Reis2020}.
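The qualitative behaviour described above follows from the common optically thin expression for the sky-averaged brightness temperature. In the sketch below the 27 mK prefactor folds in fiducial cosmological parameters and is indicative only; it is not the exact normalization used by the semi-numerical simulations.

```python
import numpy as np

def t21_mk(z, x_hi, t_spin, t_rad):
    """Sky-averaged 21-cm brightness temperature in mK in the standard
    optically thin approximation, for neutral fraction x_hi, spin
    temperature t_spin and radio background temperature t_rad (both in
    K).  The 27 mK coefficient assumes fiducial cosmological
    parameters and is illustrative."""
    return 27.0 * x_hi * np.sqrt((1.0 + z) / 10.0) * (1.0 - t_rad / t_spin)

# At cosmic dawn the spin temperature typically lies below the radio
# background, producing absorption (T_21 < 0); an excess radio
# background above the CMB deepens the trough further.
t_cmb_z17 = 2.725 * (1 + 17)                 # CMB temperature at z = 17, in K
trough_cmb = t21_mk(17, 1.0, 10.0, t_cmb_z17)
trough_excess = t21_mk(17, 1.0, 10.0, 2.0 * t_cmb_z17)
```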
For a specified set of astrophysical parameters, each simulation can take a few hours to produce the desired global 21-cm signal. However, in order to derive parameter constraints from real data, a multitude of such signals needs to be created probing the vast astrophysical parameter space. The application of machine learning to the problem of signal modelling is common in the analysis of data from 21-cm experiments \cite{Mondal_LOFAR_2020, HERA, Monsalve_2019, SARAS2} and allows different physical signal models to be generated quickly in computationally intensive fitting algorithms. Starting from a set of the simulated signals, we use neural networks \cite{Bevins_globalemu_2021} to interpolate the astrophysical parameter space (see \textit{Methods} and \cref{tab:networks} for a discussion of the network training and accuracy).
To extract constraints on the global signal, we also need to model the foreground in the data. We do this using the same log-log polynomial as in the previous analysis of the SARAS3 data \cite{SARAS3} for consistency (see \textit{Methods}).
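A minimal sketch of such a log-log polynomial foreground model is given below; with seven coefficients, as used in the fits, this is a sixth-order polynomial. The pivot frequency and the coefficient ordering here are illustrative assumptions, since the analysis fixes its own convention.

```python
import numpy as np

def loglog_poly_foreground(freq_mhz, coeffs, nu0=70.0):
    """Spectrally smooth foreground model,
    log10(T_fg / K) = sum_i a_i * [log10(nu / nu0)]^i,
    with coeffs = [a_0, ..., a_d].  The pivot frequency nu0 is an
    assumption of this sketch."""
    x = np.log10(np.asarray(freq_mhz, dtype=float) / nu0)
    # np.polyval expects the highest-order coefficient first.
    return 10.0 ** np.polyval(list(coeffs)[::-1], x)
```

With two coefficients the model reduces to a pure power law, e.g. coeffs = [log10(2000), -2.5] gives T_fg = 2000 K * (nu / nu0)^(-2.5).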
The theoretical CD 21-cm signal is sensitive to the process of star formation, thermal history and the temperature of the radio background radiation (see \textit{Methods}). The root mean squared~(RMS) of the appropriately weighted SARAS3 residuals after foreground modelling and removal is 213~mK at their native spectrum resolution of 61~kHz \cite{SARAS3}. Signals that are within the sensitivity of the instrument and would have been detected are those with deeper absorption troughs that have strong variation within the band. Typically, such signals are created in scenarios with high intensity of the Ly-$\alpha$ photons, corresponding to a combination of low minimum virial circular velocities $V_c$ and high star formation efficiencies $f_*$, and a strong contrast between the gas temperature and the temperature of the radio background, i.e. for low values of the X-ray production efficiency $f_X$ and high radio production efficiencies $f_\mathrm{radio}$. Therefore, we expect the SARAS3 data to constrain these model parameters. In our analysis, we marginalize parameters, including the CMB optical depth, $\tau$, and the mean free path of ionizing photons, $R_\mathrm{mfp}$, that determine the structure of the signal during the EoR because they are not relevant to the SARAS3 band.
We use the Bayesian nested sampling algorithm to perform our model fitting \cite{skilling_nested_2004} (see \textit{Methods}).
\section{Results} \label{sec:results}
In this section, we discuss the SARAS3 constraints in the redshift range $z\sim 15-25$ on a population of high-redshift radio-luminous galaxies and show the main results in \cref{fig:fradio_results}. Constraints on models with the CMB-only background and an excess radio background from a phenomenological synchrotron source \cite{Fialkov2019} are discussed in \textit{Supplementary Material}.
We calculate the posterior distribution (using \cref{eq:posterior} in \textit{Methods}), which is a multivariate probability distribution for the thirteen parameters that describe the foreground~(seven polynomial coefficients), noise~(one parameter, the standard deviation of the Gaussian noise in the data, see \textit{Methods}) and the cosmological 21-cm signal~(five astrophysical parameters: $f_\mathrm{radio}$, $f_*$, $V_c$, $f_X$, and the CMB optical depth $\tau$). We then marginalize over the foreground parameters, noise and $\tau$, which allows us to calculate the likelihood of different astrophysical 21-cm signals. The standard deviation of the noise is shown to be independent of the astrophysical parameters in \cref{fig:noise} for excess background models. We subsequently derive limits on the parameters related to star formation, heating, and the excess radio background above the CMB. In \textit{Methods} and \textit{Supplementary Material}, we discuss the foreground model in more detail \cite{SARAS3}. We note here, though, that in fits with both a foreground and a 21-cm signal model we found no correlation between the two sets of parameters, as can be seen in \cref{fig:correlations}.
We find no evidence for an astrophysical signal in the data. The foreground-only fit consistently has a larger log-evidence by approximately 10 log units when compared to fits with signal profiles (i.e. $\mathcal{Z}_{M_1} > \mathcal{Z}_{M_2}$ as described in \textit{Methods}). Therefore, any cosmological signal present in the data is likely to be undetectable in the residuals (with RMS of $\approx 213$~mK), after foreground modelling and subtraction. Since the predicted amplitude of the 21-cm signal is expected to be lower than $\lesssim 165$ mK \cite{Reis_sta_2021} in the case of the standard scenario with the CMB as the only source of the background radiation, the SARAS3 constraints on this scenario are very weak (see \textit{Supplementary Material}). However, in the case of an excess radio background the predicted signals can be much stronger, allowing us to disfavour regions of the astrophysical parameter space.
Panel (a) of \cref{fig:fradio_results} shows the contraction from the initial set of possible 21-cm signals (prior, blue region; details of the astrophysical priors can be found in \textit{Methods} and \cref{tab:priors}) to the set of scenarios that are allowed by the data (i.e., functional posterior, shown in red). The functional posterior and prior are produced by transforming, respectively, the posterior samples returned from our nested sampling run and a representative set of prior samples into realizations of the global 21-cm signal using the trained neural networks. The figure illustrates that, as anticipated, the deepest signals and the signals with the strongest variation in the SARAS3 band, $z = 15-25$, are disfavoured. Signals with a modest variation within the band and signals with minima at $z \lesssim 15$ are indistinguishable from the foregrounds and, thus, cannot be ruled out.
We can estimate the quantitative contribution of the SARAS3 measurement to our understanding of the high-redshift Universe by calculating the information gain using the Kullback-Leibler~(KL) divergence \cite{kullback_information_1951}, $\mathcal{D}$, between the functional prior and posterior (bottom of panel (a)). The absolute scale of the KL divergence is not directly interpretable, but it is non-negative by definition. We see that $\mathcal{D}$ is highest, meaning the information gain is largest, at $z\sim 20$, which corresponds to the middle of the SARAS3 band. Owing to the dependence of the 21-cm signal on the star formation and heating histories, we see that the constraining power of the SARAS3 measurement extends to lower redshifts outside the SARAS3 frequency band (KL divergence is non-vanishing). On the other hand, $\mathcal{D}$ is approximately zero at $z\gtrsim 30$, where the global 21-cm signals are dominated by cosmology rather than astrophysics.
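If the functional prior and posterior at a single redshift are approximated as Gaussians built from their samples (a simplifying assumption of this sketch; the true functional distributions need not be Gaussian), the KL divergence has the closed form D(P||Q) = ln(sigma_Q/sigma_P) + (sigma_P^2 + (mu_P - mu_Q)^2) / (2 sigma_Q^2) - 1/2:

```python
import numpy as np

def kl_gaussian(post_samples, prior_samples):
    """D(posterior || prior) under 1-D Gaussian approximations fitted
    to samples of T_21 at one redshift.  Returns 0 for identical
    distributions and grows as the posterior contracts or shifts
    relative to the prior."""
    mu_p, sig_p = np.mean(post_samples), np.std(post_samples)
    mu_q, sig_q = np.mean(prior_samples), np.std(prior_samples)
    return (np.log(sig_q / sig_p)
            + (sig_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sig_q ** 2)
            - 0.5)
```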
We next consider the astrophysical parameter constraints and show the corresponding 1D and 2D posteriors in panel (b) of \cref{fig:fradio_results}. The visualization of the constraints on the signal parameters is non-trivial and is achieved in 2D and 1D via marginalization. Marginalization involves integrating out of the posterior distribution the dependence on the $N-1$ or $N-2$ parameters to leave 1D and 2D distributions for the astrophysical parameters. Key numerical results and comparison with SARAS2 \cite{SARAS2} and HERA \cite{HERA} constraints are summarized in \cref{tab:numbers}. To guide the eye, we show the approximate 2D constraints (red dashed lines) roughly corresponding to the 68\% confidence contours (solid black lines) and list these limits in the figure (inverted triangle table). Note that there are regions of low probability outside these guides. From the figure it is clear that SARAS3 data most strongly constrain the process of primordial star formation (clear trends in the 1D posterior probabilities of $V_c$ and $f_*$) and the strength of the radio background (limits on the radio luminosity per star formation rate at 150~MHz, $L_\mathrm{r}/\mathrm{SFR}$) with somewhat weaker sensitivity to the heating process, which, however, is clearly constrained in combination with the strong radio background.
From the 1D posterior distribution, we see that the data constrain the radio production efficiency of the early sources, with the values of $L_\mathrm{r}/\mathrm{SFR} \gtrsim 1.549 \times 10^{25}$~W Hz$^{-1}$M$_\odot^{-1}$ yr at 150~MHz ($f_\mathrm{radio} \geq 1549$) being disfavoured at 68\% confidence. Moreover, we expect a significant correlation between the impact of radio background and that of the thermal history on the global 21-cm signal, as both a strong radio emission and weak X-ray heating contribute in the same direction deepening the absorption trough. Considering the 2D posterior probability in the plane $L_\mathrm{r}/\mathrm{SFR}-L_\mathrm{X}/\mathrm{SFR}$ and the corresponding approximate contours in red, we see that high-redshift galaxies that are both efficient in producing radio photons
with $L_\mathrm{r}/\mathrm{SFR} \gtrsim 1.00\times10^{25}$~W Hz$^{-1}$M$_\odot^{-1}$ yr
and inefficient at producing X-ray photons
with an X-ray luminosity per star formation rate of $L_\mathrm{X}/\mathrm{SFR} \lesssim 1.09\times10^{42}\textnormal{erg~s}^{-1}$M$_\odot^{-1}$ yr
are disfavoured. $L_{r}/$SFR and $L_{X}/$SFR are proportional to $f_\mathrm{radio}$ and $f_X$ respectively and defined in \textit{Methods}.
The data disfavour (at 68\% confidence) models with early onset of efficient star formation, which is characterized by low values of $5.37 \lesssim V_c \lesssim 15.5$~km~s$^{-1}$ (note that the lower limit of the prior range is $V_c = 4.2$~km~s$^{-1}$), corresponding to small typical dark matter halos of $4.4\times10^{5}$~M$_\odot \lesssim M \lesssim 1.1\times10^{7}$~M$_\odot$ at $z = 20$ \citep[e.g.][]{Reis2020}, and high values of star formation efficiency $f_* \gtrsim 0.05$, interpreted as a large fraction of collapsed gas that turns into stars. Each one of these criteria individually, as well as their combination, would guarantee efficient Ly-$\alpha$ coupling, resulting in a deep high-redshift absorption profile. Considering the 2D posterior distribution, we find, using the approximate red contours on \cref{fig:fradio_results}, that $f_* \gtrsim 0.03$
together with galaxies hosted in dark matter halos of masses $M \lesssim 8.53\times10^8$ M$_\odot$ at $z = 20$ ($V_c \lesssim 31$~km/s) are disfavoured. We also find that combinations of high $f_*$ (and low $V_c$) with both low X-ray efficiency and high radio efficiency are disfavoured. We note that when fitting the data with the phenomenological synchrotron radio background model, we disfavour similar combinations of $V_c$ and $f_*$ (see \textit{Supplementary Material}).
The derived constraints can be compared to the recently published constraints from SARAS2 \cite{SARAS2}; see \cref{tab:numbers} and \cref{fig:saras2_saras3_comparison} in \textit{Supplementary Material}. SARAS2 probes a lower redshift range, $z = 7-12$, and, thus, is complementary to SARAS3, being more sensitive to the process of heating and ionization. However, the experiment has a comparatively low signal-to-noise ratio, meaning that any constraints derived from it are likely to be weaker. For example, one particular signal may have a magnitude higher than the SARAS3 noise floor but lower than the SARAS2 noise floor. This means that, were that particular signal real, SARAS3 would have detected it but SARAS2 would not, and hence, given the non-detection in the SARAS3 data, the corresponding combination of astrophysical parameters will produce a lower posterior probability for SARAS3, $\mathcal{P}(\theta|D_\mathrm{SARAS3}, M_\mathrm{SARAS3})$, than for SARAS2, $\mathcal{P}(\theta|D_\mathrm{SARAS2}, M_\mathrm{SARAS2})$~(see \textit{Methods} for a discussion on the posterior probability). Previously, it was found that SARAS2 disfavours (at approximately 68\% confidence) early galaxies with X-ray luminosity of $L_\mathrm{X}/\mathrm{SFR} \lesssim 6.3\times10^{39}\textnormal{erg~s}^{-1}$M$_\odot^{-1}$ yr in combination with $L_\mathrm{r}/\mathrm{SFR} \gtrsim 4.07\times10^{24}$~W Hz$^{-1}$M$_\odot^{-1}$ yr \cite{SARAS2}. This corresponds to disfavouring $\approx$23\% of the available parameter space in the $L_\mathrm{X} - L_\mathrm{r}$ plane at approximately 68\% confidence, compared to $\approx$32\% for SARAS3, although we note that the two experiments disfavour slightly different regions of the parameter space.
The same set of astrophysical models, used here, has recently been constrained with an upper limit on the 21-cm power spectrum measured by HERA \cite{HERA}. HERA disfavours at 68\% confidence level values of $L_\mathrm{r}/\mathrm{SFR} \gtrsim 4\times10^{24}$~W Hz$^{-1}$M$_\odot^{-1}$ yr as well as $L_\mathrm{X}/\mathrm{SFR} \lesssim 7.6\times10^{39}\textnormal{erg~s}^{-1}$M$_\odot^{-1}$ yr. We find that SARAS3 provides a similar constraint in the plane $L_\mathrm{r}/\mathrm{SFR}$ and $L_\mathrm{X}/\mathrm{SFR}$ with a weaker limit on $L_\mathrm{r}/\mathrm{SFR}$ but a stronger limit on $L_\mathrm{X}/\mathrm{SFR}$ than HERA. Similarly, SARAS2 gives a comparable constraint in the $L_\mathrm{r}/\mathrm{SFR}- L_\mathrm{X}/\mathrm{SFR}$ plane. We note that both SARAS2 and HERA probe the 21-cm signal at much lower redshifts than SARAS3
and thus potentially probe different populations of sources. Moreover, the HERA constraints come from a limit on the 21-cm power spectrum, rather than from the global signal. Constraints on the 21-cm power spectrum are also available from the MWA~\cite{Trott_mwa_2020, Ghara_MWA_2021} and LOFAR~\cite{Gehlot_lofar_2019, Mondal_LOFAR_2020, LOFAR2021} interferometers; however, these limits are slightly weaker than those from HERA.
In the context of verifying the EDGES Low-Band detection, we assess the constraining power of the SARAS3 data on physical models that could, in principle, describe the reported absorption feature. We note, however, that none of our models can fit the flattened EDGES absorption signal well. We define EDGES-like signals using a conditional equation that ensures the models have approximately the same central frequency, width and depth as the EDGES absorption feature but does not strictly enforce the flattened Gaussian shape of the EDGES profile \cite{Fialkov2019, Reis_sta_2021}. In our analysis so far, the broad prior range was determined by our poor understanding of the high-redshift Universe. Now we use the restricted EDGES-like space as our prior, which is shown in \cref{fig:EDGES-like}.
We perform the fitting procedure and penalize models that do not meet the EDGES-like criteria by setting the likelihood to zero. The volume contraction from prior to posterior gives a quantitative measure of the level of consistency between the EDGES-like prior and the SARAS3 data and can be estimated using a marginal KL divergence \cite{kullback_information_1951}. This effectively allows us to say that if EDGES is true and indicative of a physical scenario, then a given percentage of the physical EDGES-like parameter space is inconsistent or ruled out by the SARAS3 data. We find that the volume of the EDGES-like posterior, when fitting with the radio galaxy models, is 60\% of the EDGES-like prior volume. In other words, 60\% of the physical EDGES-like parameter space is consistent with the SARAS3 data. See \textit{Methods} and \textit{Supplementary Material} for details.
Finally, we find that the data provide interesting limits on the amplitude of the synchrotron radio background in excess of the CMB disfavouring contributions of $\gtrsim6$\% at a reference frequency of 1.42~GHz with 68\% confidence. The constraints from SARAS3 can be compared to the excess backgrounds inferred from ARCADE2 \cite{fixsen_arcade_2011} and LWA \cite{dowell_radio_2018} experiments, assuming that the excess is cosmological and is not due to incorrect calibration of the Galactic foregrounds \cite{Subrahmanyan2013}. We find that the 68\% confidence limit on $T_\mathrm{rad}$ is significantly lower than the reported deductions from the two experiments (see \textit{Supplementary Material}).
\section{Conclusion}\label{sec:conclusions}
We provide astrophysical constraints on the Universe at $z\sim 20$, corresponding to $\sim 200$ million years after the Big Bang, using upper limits on the sky-averaged 21-cm signal measured by the SARAS3 radiometer in the frequency range $55-85$~MHz ($z\sim 15-25$). These are the first astrophysical limits of their kind. The only other existing constraining data (from EDGES) revealed a controversial flattened absorption profile, which is awaiting verification by an independent experiment. The residuals observed in SARAS3 data, after modelling for foregrounds, do not provide evidence for a detected 21-cm signal, including the EDGES profile, and they allow, for the first time, constraints on astrophysics at cosmic dawn.
We fit the data with a log-log polynomial foreground model, as in the original SARAS3 data analysis paper, together with astrophysically motivated models for the global 21-cm signal, showing that deep global signals are disfavoured by the data. These constraints are then mapped into the astrophysical parameter space using a fully Bayesian analysis. We find that the SARAS3 data provide constraints on the processes that are linked to the formation of first stars and galaxies, production of radio photons at high redshifts as well as heating of the intergalactic medium.
We disfavour, at 68\% confidence, a population of radio galaxies with luminosity per SFR of $L_\mathrm{r}/\mathrm{SFR} \gtrsim 1.549 \times 10^{25}$~W Hz$^{-1}$M$_\odot^{-1}$ yr at 150~MHz, i.e. a factor of a thousand brighter than their low-redshift counterparts, and a synchrotron radio background in excess of the CMB of $\gtrsim 6\%$ at 1.42 GHz. We also find a correlation between the constraints on the radio background and those on the thermal history derived from the global 21-cm signal, showing that galaxies which are both luminous in the radio band and inefficient at producing X-ray photons are disfavoured. Finally, the non-detection of the 21-cm signal in the SARAS3 data can be used to derive constraints on the properties of the first star forming regions. We find that, as an approximation to the 68\% confidence constraint, the data disfavour efficient star formation at high redshifts with a minimum mass of star forming halos of $M \lesssim 8.53\times10^8$~M$_\odot$ at $z=20$, in which $\gtrsim 3\%$ of the gas is converted into stars.
Lessons learned from the SARAS3 analysis can be contrasted with those from other instruments, specifically with the EDGES Low-Band detection at $z\sim17$ as well as the astrophysical limits derived from the SARAS2 data at $z\sim7-12$ and the limits from HERA on the 21-cm power spectrum at $z\sim8-10$, MWA at $z\approx 6.7 - 8.5$ \cite{Trott_mwa_2020, Ghara_MWA_2021} and LOFAR at $z\approx 9$ \cite{Gehlot_lofar_2019, LOFAR2021}. For example, by conditioning the prior parameter space to be compatible with the EDGES detection and neglecting the steep walls of the feature, we find that $\sim60\%$ of the available parameter space is still consistent with the SARAS3 data.
Although the SARAS3 constraints on the high-redshift astrophysical processes are weak, the analysis presented here demonstrates the potential of the 21-cm line as a probe for the early Universe. The cosmic dawn constraints are expected to tighten in the next few years as the new low-frequency 21-cm experiments are coming online \cite{Zarka_nenuFar_2018, REACH_pipeline_2021, de_lera_acedo_reach_2022}.
\setcounter{section}{0}
\renewcommand\thesubsection{M.\arabic{subsection}}
\renewcommand\thesection{M}
\section{Methods} \label{sec:method}
\subsection{Nested Sampling}
To identify constraints on the parameter space of the global signal, we use the nested sampling \cite{skilling_nested_2004} algorithm implemented with \textsc{polychord} \cite{Handley2015a, Handley2015b}. Samples of the parameter space of a model, $M$, are derived using Bayes' theorem
\begin{equation}
P(\theta|D, M) = \frac{\mathcal{L}(\theta)\pi(\theta)}{\mathcal{Z}},
\label{eq:posterior}
\end{equation}
where $\theta$ is the vector of model parameters, $D$ represents the data, $\mathcal{L}$ is the likelihood representing the probability that we observe the data given the model, $\pi$ is the prior probability representing our knowledge of the parameter space before we perform any fitting and $\mathcal{Z}$ is the evidence which normalizes the posterior, $P(\theta|D, M)$. Nested sampling generates samples from the likelihood and prior probabilities to numerically approximate $\mathcal{Z}$ and effectively sample the posterior.
A higher value of $\mathcal{Z}$ when fitting model $M_1$ to the data in comparison to when fitting model $M_2$ indicates a preference for the former. This means that the evidence can be used to determine if a signal is present in the data or not. For example, if $M_1$ comprises just a foreground model and $M_2$ includes both a foreground and signal model then $\mathcal{Z}_{M_1} > \mathcal{Z}_{M_2}$ means that we do not require a signal model to effectively describe the data.
The posterior distribution can then be interpreted as constraints on the model. The use of Bayesian inference is becoming more common in global 21-cm analysis and is an effective method to constrain the astrophysical processes in the early Universe \cite{REACH_pipeline_2021, SARAS2, de_lera_acedo_reach_2022}. Throughout our analysis we assume a Gaussian likelihood function, $\mathcal{L}$, and a Gaussian noise distribution with a constant standard deviation, $\sigma$,
\begin{equation}
\log\mathcal{L} = \sum_i \bigg(-\frac{1}{2}\log(2\pi\sigma^2) -\frac{1}{2}\bigg(\frac{T_\mathrm{D}(\nu_i) - T_\mathrm{fg}(\nu_i) - T_\mathrm{21}(\nu_i)}{\sigma}\bigg)^2\bigg),
\label{eq:likelihood}
\end{equation}
where $T_\mathrm{D}$ is the SARAS3 data, $T_{21}$ is the global 21-cm signal model and $T_\mathrm{fg}$ is the foreground model. In practice, the noise in a global 21-cm experiment is expected to be larger at low frequencies and decrease with increasing frequencies, following the general trend of the sky temperature. A full treatment of any frequency dependence in the noise is left for future work. However, we find, see \cref{fig:noise}, that the posterior probability for the constant standard deviation on the assumed Gaussian noise is uncorrelated with the astrophysical parameters, and we would therefore expect a full treatment of the noise to have little impact on the derived parameter constraints for the two excess background models. We expect a full treatment of the noise to be more important in future experiments that provide tighter constraints on the astrophysical processes during cosmic dawn~(CD).
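As a concrete illustration of \cref{eq:likelihood}, the likelihood can be evaluated in a few lines; the band, mock spectrum and noise level below are illustrative placeholders rather than the SARAS3 values:

```python
import numpy as np

def log_likelihood(T_data, T_fg, T_21, sigma):
    """Gaussian log-likelihood with a constant noise level sigma; the
    residual is the data minus foreground minus 21-cm signal model."""
    resid = T_data - T_fg - T_21
    return np.sum(-0.5 * np.log(2.0 * np.pi * sigma**2)
                  - 0.5 * (resid / sigma) ** 2)

nu = np.linspace(55.0, 85.0, 31)            # illustrative band in MHz
T_mock = 1.0e3 * (nu / 75.0) ** -2.5        # mock sky spectrum in K
logL = log_likelihood(T_mock, T_mock, np.zeros_like(nu), sigma=0.213)
```

A perfect model leaves only the normalisation term, and any mismatch between data and model lowers the log-likelihood.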
\subsection{Foreground Modelling}
The foreground model used here is identical to the one employed in the original SARAS3 analysis \cite{SARAS3}. The log-log polynomial foreground model is given by
\begin{equation}
\log_{10}T_\mathrm{fg} = \sum_{i=0}^{i=6}a_i\left(\mathcal{R}(\log_{10}\nu)\right)^i,
\end{equation}
where $a_i$ are the fitted coefficients, $\mathcal{R}$ is a normalizing function that scales its argument, $\log_{10}\nu$, linearly between -1 and +1 and $\nu$ is frequency in MHz. When fitting the model with \textsc{polychord} we provide a uniform prior, our initial assumption about the model parameters, of $-10$ to 10 on each of the foreground model coefficients, $a_i$. In addition to the foregrounds, the model is designed to account for any residual systematics from the calibration process.
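A minimal sketch of this foreground model, with an illustrative band and arbitrary coefficients; the normalizing function $\mathcal{R}$ is implemented as a linear rescaling of $\log_{10}\nu$ onto $[-1, +1]$:

```python
import numpy as np

def foreground(nu, a):
    """Log-log polynomial foreground: log10(T_fg) is a polynomial in
    R(log10 nu), with R a linear rescaling of log10(nu) onto [-1, +1]."""
    x = np.log10(nu)
    r = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    log10_T = sum(a_i * r**i for i, a_i in enumerate(a))
    return 10.0 ** log10_T

nu = np.linspace(55.0, 85.0, 31)                 # MHz (illustrative band)
a = [3.3, -1.0, 0.05, 0.0, 0.0, 0.0, 0.0]        # 7 coefficients, i = 0..6
T_fg = foreground(nu, a)
```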
\subsection{Signal Modelling and Emulation}
At the high redshifts of CD the most important factors that drive the 21-cm signal are the intensity of the Ly-$\alpha$ background, which determines the efficiency of the coupling between $T_\mathrm{S}$ and $T_\mathrm{K}$, the temperature of the radio background, $T_\mathrm{rad}$, and the thermal history of the gas. The dependence of the 21-cm signal on these processes is as follows: the earlier star formation starts, the lower the frequency of the absorption profile; the stronger the Ly-$\alpha$ background, the steeper and deeper the resulting 21-cm signal; and the colder the gas relative to the background radiation, the deeper the absorption feature. The resulting 21-cm signal can be written as
\begin{equation}
T_{21} = \frac{T_\mathrm{S} - T_\mathrm{rad}}{1+z} \left[1 - \exp(-\tau_{21})\right] \propto 1-\frac{T_\mathrm{rad}}{T_\mathrm{S}},
\label{eq:t21}
\end{equation}
where we assumed that the Universe is largely neutral at the high redshifts of CD.
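\Cref{eq:t21} can be sketched directly; the parameter values below are illustrative only:

```python
import numpy as np

def t21(T_s, T_rad, z, tau21):
    """Differential 21-cm brightness temperature against the radio
    background (all temperatures in the same units, e.g. K)."""
    return (T_s - T_rad) / (1.0 + z) * (1.0 - np.exp(-tau21))

# The signal vanishes when T_s = T_rad; a spin temperature below the
# background (cold gas against a bright radio background) gives
# absorption, i.e. a negative T_21.
```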
One potential source of radio photons at CD is early radio galaxies \citep{Mirocha2019}. The radio background contribution created by such sources is proportional to the star formation rate~(SFR), thus increasing with time, and is non-uniform following the distribution of galaxies. The radio luminosity spectrum as a function of frequency $\nu$ produced by a star forming region and calculated per SFR in units of W~Hz$^{-1}$ is given by
\begin{equation}
L_\mathrm{r} = f_\mathrm{radio} 10^{22} \bigg(\frac{\nu}{150~\mathrm{MHz}}\bigg)^{-\alpha_\mathrm{radio}} \frac{\mathrm{SFR}}{\mathrm{M}_\odot\mathrm{yr}^{-1}},
\label{eq:radio_luminosity}
\end{equation}
where $f_\mathrm{radio}$ is an efficiency factor that measures radio photon production in high-redshift galaxies compared to their present day counterparts and $\alpha_\mathrm{radio}=0.7$ is the spectral index in the radio band \cite{Reis2020}. The temperature of the radio background produced by such galaxies at redshift $z$ is calculated by integrating over the contribution of all galaxies within the past light-cone \cite{Reis2020} and is added to the temperature of the CMB to give the total radio background temperature.
We quote constraints on the radio luminosity per SFR, $L_\mathrm{r}/\mathrm{SFR}$, at a reference frequency of 150~MHz in section \ref{sec:results}.
We take into account several heating and cooling mechanisms, such as cooling due to the expansion of the Universe and heating due to structure formation, Ly-$\alpha$ \cite{Madau, Chuzhoy2007} and CMB \cite{Venumadhav2018} heating, as well as heating by first X-ray binaries \cite{Fialkov2014Natur}. In our model, the first four effects are fully determined by cosmology and star formation, whereas heating by X-ray binaries invokes new astrophysical processes (such as black hole binary formation and X-ray production by the high redshift sources). Therefore, X-ray heating requires independent parameterization, and we model X-ray luminosity per SFR \cite{Fragos_Xrays_2013} as
\begin{equation}
L_\mathrm{X, 0.2 - 95 \textnormal{keV}} = 3\times10^{40} f_X \frac{\mathrm{SFR}}{\mathrm{M}_\odot\mathrm{yr}^{-1}}
\end{equation}
calculated in units of erg~s$^{-1}$ between 0.2 and 95 keV, where $f_X$ is the efficiency of X-ray photon production.
Gas thermal history is then evaluated at every redshift by integrating over the contribution of all galaxies within the past light-cone to find the corresponding heating rate and then solving a differential equation to evolve the gas temperature.
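The two luminosity scalings of \cref{eq:radio_luminosity} and the X-ray relation above can be sketched as follows; the helper names are ours and the inputs are illustrative:

```python
def radio_luminosity(nu_mhz, f_radio, sfr, alpha_radio=0.7):
    """Radio luminosity in W/Hz: a power law normalised to 1e22 W/Hz per
    Msun/yr of star formation at 150 MHz, scaled by the efficiency f_radio."""
    return f_radio * 1.0e22 * (nu_mhz / 150.0) ** (-alpha_radio) * sfr

def xray_luminosity(f_x, sfr):
    """X-ray luminosity (0.2--95 keV) in erg/s: 3e40 per Msun/yr of star
    formation, scaled by the efficiency f_X."""
    return 3.0e40 * f_x * sfr
```

With $f_\mathrm{radio} = f_X = 1$ these reduce to present-day-like galaxies, and the negative spectral index makes the radio emission brighter below 150~MHz.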
Both the radio luminosity and the thermal history of the gas depend on the SFR, which is not well constrained for early galaxies. Therefore, our model also includes free parameters that regulate star formation. One is the star formation efficiency of high-redshift galaxies, $f_*$, which measures the fraction of collapsed gas in star forming regions that turns into stars, and the other is the minimum mass of star forming halos, or, equivalently, the minimum circular velocity of star forming halos, $V_c$ \cite{Barkana_mass_2001}.
This quantity depends on the local environment of each star forming region and is affected by factors such as the local intensity of the radiative background in the Lyman-Werner band \cite{Fialkov2013, Schauer2021} or the relative velocity between dark matter and gas \cite{Tseliakhovich2010, Fialkov2012, Schauer2021}.
In order to physically model the global 21-cm signal, we rely on neural network-based emulation with the \textsc{python} package \textsc{globalemu} \cite{Bevins_globalemu_2021} trained on the results of the full semi-numerical simulations of the global 21-cm signal \citep[][]{Visbal2012, Fialkov2014, Fialkov2019, Cohen_charting_2017, Reis2020, Reis_sta_2021}.
For each global signal model, we have a series of testing and training signals. \Cref{tab:priors} shows the ranges of the parameters in each of the training and testing data sets for the different models of the global 21-cm signals used in this work. The boundaries correspond to the broadest possible ranges allowed for each one of the parameters from the astrophysical principles and existing (weak) observational constraints. Outside these ranges the emulators are unreliable and consequently the ranges act as the prior bounds for the nested sampling code \textsc{polychord}. The parameters are sampled either uniformly or log-uniformly between the ranges in the training and test data, and we use appropriate prior probability distributions for each parameter when running the fits.
For all three signal emulators, we use the same neural network architecture with 4 hidden layers of 16 nodes each. The same radio galaxies and CMB only emulators were recently used in our analysis of the SARAS2 data \cite{SARAS2}. We note that the network for the CMB only radio background model, however, has seven astrophysical inputs compared to the radio galaxy and synchrotron radio background networks which both have five. For the CMB only model the X-ray spectral energy density~(SED) is characterized by the slope of the spectrum, $\alpha$, and a low energy cut-off, $E_\mathrm{min}$; while for the other two models the X-ray SED is fixed to that of high-redshift X-ray binaries \cite{Fragos_Xrays_2013}. Further, parameters related to reionization have a very modest effect on the 21-cm signal in the SARAS3 range. Therefore, we fix the mean free path of ionizing photons. For the radio galaxies and radio synchrotron backgrounds, the mean free path was fixed to 40~Mpc, while in the CMB-only case it was fixed to $R_\mathrm{mfp} = 30$~Mpc.
We assess the accuracy of the neural networks in the SARAS3 band, $z \approx 15 - 25$, using the root mean squared error~(RMSE) when emulating the test data after training. The synchrotron radio background network has a mean RMSE when emulating 1034 test models, after training on 9304 models, of 7.98~mK, a 95$\textsuperscript{th}$ percentile RMSE of 23.06~mK and a worst RMSE of 85.65~mK. The CMB only background network is trained on 5137 models and tested on 570 models. In the SARAS3 band the mean RMSE for the test data is 0.78~mK, the 95$\textsuperscript{th}$ percentile is 2.67~mK and the worst is 13.36~mK. Finally, when trained on a data set of 4311 models and tested on 479 models the radio galaxy radio background neural network emulator is found to have a mean RMSE of 5.11~mK, a 95$\textsuperscript{th}$ percentile RMSE of 20.53~mK and a worst RMSE of 81.70~mK in the SARAS3 band. All the trained networks have 95$\textsuperscript{th}$ percentile RMSEs well below the RMS found after modelling and subtracting the log-log polynomial foreground model from the SARAS3 data. The numbers are summarized in \cref{tab:networks}.
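The summary statistics quoted above can be reproduced from a set of test and emulated signals with a short helper; the synthetic signals below are purely illustrative:

```python
import numpy as np

def rmse_summary(true_signals, emulated_signals):
    """Per-model RMSE over the frequency band, then the mean, 95th
    percentile and worst RMSE across the whole test set."""
    rmse = np.sqrt(np.mean((true_signals - emulated_signals) ** 2, axis=1))
    return rmse.mean(), np.percentile(rmse, 95), rmse.max()

# Synthetic example: three "test" signals offset by 1, 2 and 3 mK.
true = np.zeros((3, 4))
emulated = true + np.array([[1.0], [2.0], [3.0]])
mean_rmse, p95_rmse, worst_rmse = rmse_summary(true, emulated)
```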
When fitting all three signal models with the foreground, we see no correlation between the astrophysical parameters and the foreground parameters. An example of this can be seen in the 2D posteriors, which are shown in \cref{fig:correlations}, between the astrophysical and foreground parameters from the fit with the radio galaxies background model.
\subsection{Marginal KL Divergence and EDGES-like signals}
To determine the volume of the EDGES-like parameter space that the SARAS3 data rules out, we calculate a marginal Kullback-Leibler~(KL) divergence, $\mathcal{D}$. To illustrate the types of signals that we are selecting by constraining our parameter space to be EDGES-like, we show the corresponding functional prior and posterior in \textit{Supplementary Material}.
The KL divergence is a measure of the information gained when contracting a prior onto a posterior. For our purposes we are interested only in the EDGES-like signal parameter space; we therefore treat the foreground parameters as nuisance parameters and integrate them out.
To calculate the marginal KL divergence, $\mathcal{D}$, we therefore need to evaluate the log-probabilities associated with the signal parameters in the EDGES-like prior, $\pi$, and the corresponding posterior, $\mathcal{P}$,
\begin{equation}
\mathcal{D(\mathcal{P}||\mathcal{\pi})} = \int \mathcal{P}(\theta) \log_{e}\bigg(\frac{\mathcal{P}(\theta)}{\mathcal{\pi}(\theta)}\bigg) d\theta = \bigg\langle \log_{e}\bigg(\frac{\mathcal{P}(\theta)}{\mathcal{\pi}(\theta)}\bigg) \bigg\rangle_\mathcal{P}.
\label{eq:kl_divergence}
\end{equation}
We use Gaussian kernel density estimators~(KDEs) to replicate the samples in the signal sub-spaces of our EDGES-like prior and posterior via the recently developed code \textsc{margarine}
\cite{margarine_neurips, margarine_maxent}. A multivariate Gaussian KDE, implemented with \textsc{scipy}, is produced by summing over multivariate Gaussian profiles with known standard deviation centred on each sample point; consequently, the log of the probability density function is tractable, making the KL divergence straightforward to calculate.
When training our KDEs on the prior and posterior samples, we first transform our data into the standard normal parameter space, which improves the accuracy of the density estimator and allows it to better capture the sharp edges of approximately flat distributions.
It can be shown that, for distributions that are approximately flat on their support, the KL divergence is related to the ratio of the prior and posterior volumes via
\begin{equation}
\mathcal{D}(\mathcal{P}||\mathcal{\pi}) = \log_{e}\bigg(\frac{V_\pi}{V_\mathcal{P}}\bigg),
\end{equation}
and therefore
\begin{equation}
\exp(-\mathcal{D}(\mathcal{P}||\mathcal{\pi})) = \frac{V_\mathcal{P}}{V_\pi}
\end{equation}
gives the fraction of the prior volume retained by the posterior, or in our case the fraction of the EDGES-like prior that is still consistent with the SARAS3 data after fitting.
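A toy Monte-Carlo estimate of the marginal KL divergence of \cref{eq:kl_divergence}, using analytic Gaussian densities in place of the \textsc{margarine} density estimators (the distributions below are illustrative, not the SARAS3 posteriors):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gauss(x, sigma):
    """Log-density of a zero-mean Gaussian with standard deviation sigma."""
    return -0.5 * np.log(2.0 * np.pi * sigma**2) - 0.5 * (x / sigma) ** 2

# Monte-Carlo estimate of D(P||pi) = <log(P/pi)>_P over posterior samples,
# here with a narrow Gaussian "posterior" inside a wider Gaussian "prior"
# standing in for the KDE log-probabilities.
sigma_post, sigma_prior = 0.5, 1.0
samples = rng.normal(0.0, sigma_post, 100_000)
D = np.mean(log_gauss(samples, sigma_post) - log_gauss(samples, sigma_prior))

# For distributions that are flat on their support, D reduces to the log
# of the prior-to-posterior volume ratio, so exp(-D) is the fraction of
# the prior volume retained by the posterior.
fraction_retained = np.exp(-D)
```

For these two Gaussians the analytic divergence is $\ln(\sigma_\pi/\sigma_\mathcal{P}) + \sigma_\mathcal{P}^2/(2\sigma_\pi^2) - 1/2 \approx 0.32$ nats, which the Monte-Carlo average recovers.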
\section*{Acknowledgments}
The authors would like to thank the reviewers for their helpful comments regarding our manuscript. HTJB acknowledges the support of the Science and Technology Facilities Council (STFC) through grant number ST/T505997/1. WJH and AF were supported by Royal Society University Research Fellowships. EdLA was supported by the STFC through the Ernest Rutherford Fellowship. RB acknowledges the support of the Israel Science Foundation (grant No. 2359/20), The Ambrose Monell Foundation and the Institute for Advanced Study as well as the Vera Rubin Presidential Chair in Astronomy and the Packard Foundation.
\section*{Author Contribution}
HTJB performed the data analysis and led the writing of the paper. AF initiated the project, supervised it and helped writing and revising the article. EdLA supervised the project and the analysis, and helped with the writing and revision of the article. WJH provided technical support and advice regarding the Bayesian methodology. The analysis in the paper is of non-public data provided by RS and SS. The astrophysical signal models were provided by AF and RB. All co-authors provided comments and contributed to the structure of the article.
\section*{Competing Interests}
The authors declare that they have no competing interests.
\section*{Data Availability}
The SARAS3 data are available upon reasonable request to SS.
\section*{Code Availability}
\textsc{globalemu} is available at \url{https://github.com/htjb/globalemu} and \textsc{margarine} at \url{https://github.com/htjb/margarine}. The nested sampling tool \textsc{polychord} is available at \url{https://github.com/PolyChord/PolyChordLite} and the nested sampling post-processing codes, \textsc{anesthetic} and \textsc{fgivenx}, are available at \url{https://github.com/williamjameshandley/anesthetic} and \url{https://github.com/williamjameshandley/fgivenx} respectively. All other codes used are available upon reasonable request to HTJB.
\\
\section*{Tables}
\bgroup
\def\arraystretch{1.5}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Background & Radio Galaxies & Synchrotron & CMB Only \\
\hline
\hline
Training Models & 4311 & 9304 & 5137 \\
\hline
Testing Models & 479 & 1034 & 570 \\
\hline
\hline
Mean RMSE & 5.11 & 7.98 & 0.78 \\
\hline
95 Percentile RMSE & 20.53 & 23.06 & 2.67 \\
\hline
Worst RMSE & 81.70 & 85.65 & 13.36\\
\hline
\end{tabular}
\caption{\textbf{The neural network emulation.} The table summarizes the number of models used to train and test the three different neural network emulators used in this paper, along with summary statistics for their accuracies. Temperatures RMSEs are given in mK.}
\label{tab:networks}
\end{table}
\egroup
\bgroup
\def\arraystretch{1.5}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|}
\hline
Parameter & Radio Background & Range \\
\hline
\hline
$f_*$ & CMB Only, Synchrotron, Radio Galaxies & 0.001 - 0.5 \\
\hline
$V_c$ & CMB Only, Synchrotron, Radio Galaxies & 4.2 - 100 km/s\\
\hline
$f_X$ & CMB Only, Synchrotron, Radio Galaxies & 0.001 - 1000\\
\hline
$f_\mathrm{radio}$ & Radio Galaxies & 1.0 - 99,500 \\
\hline
$A_{\mathrm{r}}^{1420}$ & Synchrotron & 0 - 47 \\
\hline
\multirow{3}{*}{$\tau$} & CMB Only & 0.026 - 0.103\\\cline{2-3}
& Synchrotron & 0.016 - 0.158\\\cline{2-3}
& Radio Galaxies & 0.035 - 0.077\\
\hline
$\alpha$ & CMB Only & 1.0 - 1.5 \\
\hline
$E_\mathrm{min}$ & CMB Only & 0.1 - 3.0 keV\\
\hline
$R_\mathrm{mfp}$ & CMB Only, Synchrotron, Radio Galaxies & Fixed at 30, 40 and 40~Mpc\\
\hline
\end{tabular}
\caption{\textbf{The astrophysical priors.} The prior ranges on the parameters for the CMB only, synchrotron and high-redshift radio galaxy background global 21-cm signal models fitted in this paper. The definitions of the parameters are given in the text. The prior ranges are designed to encompass the current uncertainty in the properties of the high-redshift Universe. The emulators are unreliable outside these bounds. Note that $\tau$ is not an important parameter in the SARAS3 band, however, we train the models with this parameter as an input, perform fits with it and then marginalize over it. Similarly, $R_\mathrm{mfp}$ is only important at lower redshifts outside the SARAS3 band. The global signal only has a weak dependence on this parameter, and so we fix its value to 40~Mpc for the radio galaxies and radio synchrotron backgrounds, while in the CMB-only case it was fixed to $R_\mathrm{mfp} = 30$~Mpc.}
\label{tab:priors}
\end{table}
\egroup
\bgroup
\def\arraystretch{1.5}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& SARAS3 & HERA & SARAS2 \\
\hline
\hline
Signal type & Global & Power Spectrum & Global \\
\hline
Redshift range & $z\approx 15 - 25$ & $z\approx 8$ and $\approx 10$ & $z\approx 7 - 12$\\
\hline
\hline
$L_{r}/\mathrm{SFR}$ & $\gtrsim 1.549\times10^{25}$ & $\gtrsim4.00\times10^{24}$ & -- \\
\hline
$L_{r}/\mathrm{SFR} \cap L_{X}/\mathrm{SFR}$ & $\gtrsim 1\times10^{25} \cap \lesssim 1.09\times10^{42}$ & $\gtrsim 4.00\times10^{24} \cap \lesssim 7.60\times10^{39}$ & $\gtrsim 4.07\times10^{24} \cap \lesssim 6.3\times10^{39}$\\
\hline
$M$ & $4.4\times10^{5} \lesssim M \lesssim 1.1\times10^{7}$ & -- & --\\
\hline
$f_*$ & $\gtrsim 0.05$ & -- & --\\
\hline
$f_* \cap M$ & $\gtrsim 0.03 \cap \lesssim 8.53\times10^{8}$ & -- & --\\
\hline
\end{tabular}
\caption{\textbf{Summary of key constraints from SARAS3 (this work), HERA \cite{HERA} and SARAS2 \cite{SARAS2} experiments.} We specify the signal type measured by each instrument (either global signal or power spectrum); redshift range targeted by each experiment; constraints on the value of $L_r$/SFR expressed in units of W Hz$^{-1}$M$_\odot^{-1}$ yr at 150~MHz; limits on $L_r$/SFR in combination with $L_X/$SFR (calculated between 0.2 and 95 keV and expressed in units of $\textnormal{erg~s}^{-1}$M$_\odot^{-1}$ yr); limits on the mass of star forming halos, $M$, given in solar masses at $z=20$, star formation efficiency $f_*$ and, finally, constraint on $f_*$ in combination with the halo mass. Limits on the individual parameters correspond to the regions that are disfavoured (with 68\% confidence) in the 1D posteriors, combined constraints approximately correspond to the 68\% confidence limits in the 2D posteriors. Note that: SARAS2 is unable to constrain individual parameters; HERA targets the power spectrum in comparison to the two SARAS experiments which attempt to measure the sky-averaged signal; SARAS3 is at much higher redshifts than the other two experiments; while HERA provides individual constraints on $L_r$/SFR and $L_X/$SFR \cite{HERA}, here we only quote the individual constraint on $L_r/$SFR and the combined constraint with $L_X/$SFR, which is done to ease the comparison with SARAS3 and SARAS2.}
\label{tab:numbers}
\end{table}
\egroup
\pagebreak
\section*{Figure Legends/Captions (main text)}
\textbf{Figure 1: SARAS3 constraints on high-redshift radio galaxies.} The data disfavour deep global signals, as can be seen by comparing the functional prior (blue) with the posterior (red) in panel (a). At the bottom of this panel we show the Kullback-Leibler (KL) divergence, $\mathcal{D}$, as a function of redshift between the functional prior and posterior. The KL divergence gives a measure of the information gain when moving from one to the other and illustrates the constraining power of the SARAS3 data, which peaks at around $z\approx20$. Panel (b) shows the corresponding 1D and 2D posteriors for the astrophysical parameters found when fitting the foreground and a global 21-cm signal. From left to right (see text for details): fraction of gas that turns into stars, $f_*$; circular velocity in units of km s$^{-1}$, $V_c$; radio luminosity per unit SFR, $L_\mathrm{r}/\mathrm{SFR}$, in units of ~W Hz$^{-1}$M$_\odot^{-1}$ yr calculated at 150~MHz; X-ray luminosity per unit SFR, $L_\mathrm{X}/\mathrm{SFR}$, in $\textnormal{erg~s}^{-1}$M$_\odot^{-1}$ yr. The corresponding 1D posterior with marked 68\% disfavoured regions are shown in the top of each column. The colour of the 2D posteriors (see colour bar) reflects the magnitude of the 2D posterior probabilities. The dashed black lines encapsulate the 95\% confidence regions. The solid black lines show the 68\% confidence regions for which we make a conservative approximation with the dashed red lines (to guide the eye) with the corresponding numerical values summarized in the inverted triangle table. Figures produced with \textsc{anesthetic} \protect\cite{anesthetic} and \textsc{fgivenx} \protect\cite{fgivenx}.
\textbf{Figure 2: The relationship between the astrophysical parameters and the noise.} The figure shows that the standard deviation of the assumed Gaussian noise is uncorrelated with the astrophysical parameters, and consequently we would expect that a full treatment of any frequency dependence of the noise in the data, which is left for future work, will have little impact on the derived parameter constraints.
\textbf{Figure 3: The 2D posterior distributions between the astrophysical and foreground parameters found when fitting the data with the radio galaxy radio background models.} We see no clear correlations between the two sets of parameters, indicating that they are independent of each other.
\pagebreak
\bibliographystyle{naturemag}
\section{\normalsize{Introduction}}
\label{sec:introduction}
Real data often contain outliers,
which can create serious problems when analyzing
them. Many methods have been developed to deal with
outliers, often by constructing a fit that is robust to
them and then detecting the outliers by their large
deviation (distance, residual) from that fit.
For a brief overview of this approach see
\cite{Rousseeuw:WIRE-Anomaly}.
Unfortunately, most robust methods cannot handle data
with missing values, some rare exceptions being
\cite{Cheng:Missing} and \cite{Danilov:GSE}.
Moreover, they are typically restricted to casewise
outliers, which are cases that deviate from the majority.
We call these {\it rowwise outliers} because multivariate
data are typically represented by a rectangular matrix
in which the rows are the cases and the columns are the
variables (measurements).
In general, robust methods require that fewer than half
of the rows are outlying, see e.g.
\cite{Lopuhaa:BDP}.
However, recently a different type of outliers, called
{\it cellwise outliers}, have received much attention
\citep{Alqallaf:Propag,VanAelst:HSD,Agostinelli:Cellwise}.
These are suspicious cells (entries) that can occur
anywhere in the data matrix.
Figure \ref{fig:Cellmap_DDC} illustrates the difference
between these types of outliers.
The regular cells are shown in gray, whereas black means
outlying.
Rows 3 and 7 are rowwise outliers, and the other rows
contain a fairly small percentage of cellwise outliers.
As in this example, a small proportion of
outlying cells can contaminate over half the rows, which
causes most methods to break down.
This effect is at its worst when the dimension
(the number of columns) is high.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.42\textwidth]
{Cellmap_DDC_2rows.pdf}
\caption{Data matrix with missing values and
cellwise and rowwise contamination.}
\label{fig:Cellmap_DDC}
\end{figure}
In high-dimensional situations, which are becoming
increasingly common, one often applies principal
component analysis (PCA) to reduce the dimension.
However, the classical PCA (CPCA) method is not robust
to either rowwise or cellwise outliers.
Robust PCA methods that can deal with rowwise outliers
include \citet{Croux:Proj}, \citet{Hubert:RAPCA},
\citet{Locantore:Funcdata}, \citet{Maronna:ORreg} and the
ROBPCA method \citep{Hubert:ROBPCA}. The latter method
combines projection pursuit ideas with robust covariance
estimation.
In order to deal with missing values,
\cite{Nelson:miss} and \cite{Kiers:WLS} developed
the {\it iterative classical PCA algorithm} (ICPCA),
see \citet{Walczak:TutorialI} for a tutorial.
The ICPCA follows the spirit of the EM algorithm.
It starts by replacing the missing values by
initial estimates such as the columnwise means.
Then it iteratively fits a CPCA, yielding scores
that are transformed back to the original space
resulting in new estimates for the missing values,
until convergence.
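A minimal sketch of the ICPCA loop described above (illustrative only; function and variable names are ours, and the published algorithm includes details not shown here):

```python
import numpy as np

def icpca(X, k, n_iter=200, tol=1e-8):
    """Iterative classical PCA for missing data: impute NAs with the
    columnwise means, then alternate a rank-k fit (SVD of the centered
    matrix) and re-imputation of the missing cells until convergence."""
    X = np.asarray(X, dtype=float)
    Xi = X.copy()
    miss = np.isnan(X)
    if not miss.any():
        return Xi
    Xi[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])
    for _ in range(n_iter):
        mu = Xi.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xi - mu, full_matrices=False)
        fit = mu + (U[:, :k] * s[:k]) @ Vt[:k]      # rank-k reconstruction
        if np.max(np.abs(Xi[miss] - fit[miss])) < tol:
            break
        Xi[miss] = fit[miss]                         # updated imputations
    return Xi

# Exactly rank-one data with one missing cell: the loop recovers the cell.
X_true = np.outer(np.arange(1.0, 6.0), [1.0, 2.0, 3.0])   # X_true[4, 2] = 15
X_miss = X_true.copy()
X_miss[4, 2] = np.nan
X_imp = icpca(X_miss, k=1)
```

Observed cells are left untouched; only the missing entries are updated in each pass.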
\citet{Serneels:MPCA} proposed a rowwise
robust PCA method that can also cope with
missing values.
We will call this method MROBPCA (ROBPCA for
missing values) as its key idea is to combine the
ICPCA and ROBPCA methods.
MROBPCA starts by imputing the NA's by robust
initial estimates. The main difference with the
ICPCA algorithm is that in each iteration the PCA
model is fit by ROBPCA, which yields different
imputations and flags rowwise outliers.
As of yet there are no PCA methods that can deal
with cellwise outliers in combination with
rowwise outliers and NA's.
This paper aims to fill that gap by constructing a
new method called MacroPCA, where `Macro' stands
for {\bf M}issingness {\bf A}nd {\bf C}ellwise
and {\bf R}owwise {\bf O}utliers.
It starts by applying a multivariate method called
DetectDeviatingCells \citep{Rousseeuw:DDC} for
detecting cellwise outliers, which provides initial
imputations for the outlying cells and the NA's
as well as an initial measure of rowwise outlyingness.
In the next steps MacroPCA combines ICPCA and
ROBPCA to protect against rowwise outliers and to
create improved imputations of the outlying cells
and missing values.
MacroPCA also provides graphical displays to
visualize the different types of outliers.
R code for MacroPCA is publicly available
(Section \ref{sec:software}).
\section{The MacroPCA algorithm}
\label{sec:MacroPCA}
\subsection{Model}
\label{subsec:Model}
The data matrix is denoted as $\bX_{n,d}$ in which
the subscripts are the number of rows (cases) $n$
and the number of columns (variables) $d$.
In the absence of outliers and missing values the
goal is to represent the data in a lower dimensional
space, i.e.\
\begin{equation} \label{eq:pcamodel}
\bX_{n,d} = \bone_n \bmu_d' +
\bmT_{n,k} (\bmP_{d,k})' + \bmE_{n,d}
\end{equation}
with $\bone_n$ the column vector with all $n$
components equal to 1, $\bmu_d$ the
$d$-variate column vector of location,
$\bmT_{n,k}$ the $n \times k$ score matrix,
$\bmP_{d,k}$ the $d \times k$ loadings matrix
whose columns span the PCA subspace, and
$\bmE_{n,d}$ the $n \times d$ error matrix.
The reduced dimension $k$ can vary from
1 to $d$ but we assume that $k$ is low.
The $\bmu_d$\,, $\bmT_{n,k}$\, and $\bmP_{d,k}$
are unknown, and estimates of them will be
denoted by $\bm_d$\,, $\bT_{n,k}$ and
$\bP_{d,k}$\,.
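A brief sketch of model~(\ref{eq:pcamodel}) and its classical least-squares fit via the singular value decomposition of the centered data (the dimensions and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 50, 10, 2

# Generate data following the model: X = 1 mu' + T P' + E, small noise E.
mu = rng.normal(size=d)
T = rng.normal(size=(n, k))
P = np.linalg.qr(rng.normal(size=(d, k)))[0]   # orthonormal loading matrix
X = mu + T @ P.T + 0.01 * rng.normal(size=(n, d))

# Classical PCA estimates from the SVD of the centered data matrix.
m = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - m, full_matrices=False)
P_hat = Vt[:k].T                                # estimated loadings
T_hat = (X - m) @ P_hat                         # estimated scores
X_fit = m + T_hat @ P_hat.T                     # rank-k reconstruction
```

With clean data the rank-$k$ reconstruction recovers $\bX_{n,d}$ up to the noise level; it is exactly this CPCA fit that breaks down under rowwise or cellwise contamination.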
Several realities complicate this simple model.
First, the data matrix may not be fully observed,
i.e., some cells $x_{ij}$ may be missing.
Here we assume that they are
\textit{missing at random} (MAR),
meaning that the missingness of a cell is
unrelated to the value the cell would have had,
but may be related to the values of other
cells in the same row; see, e.g.,
\cite{Schafer:missing}.
This is the typical assumption underlying
EM-based methods such as ICPCA and MROBPCA
that are incorporated in our proposal.
Secondly, the data may contain rowwise outliers,
e.g. cases from a different population.
The existing rowwise robust methods require
that fewer than half of the rows are outlying,
so we make the same assumption here.
Thirdly, cellwise outliers may occur
as described in the introduction.
The outlying cells may be imprecise, incorrect or
just unusual.
Outlying cells do not necessarily stand out in their
column because the correlations between the columns
matter as well, so these cells may not be detectable by
simple univariate outlier detection methods.
There can be many cellwise outliers, and
in fact each row may contain one or
more outlying cells.
\subsection{Dealing with missing values
and cellwise and rowwise outliers}
\label{subsec:Algorithm}
We propose the MacroPCA algorithm for analyzing data
that may contain one or more of the following issues:
missing values, cellwise outliers, and rowwise outliers.
Throughout the algorithm we will use the following
two notations:
\begin{itemize}
\item the {\it NA-imputed matrix} $\bcX_{n,d}$ only
imputes the missing values of $\bX_{n,d}$\;;
\item the {\it cell-imputed matrix} $\bfX_{n,d}$ has
imputed values for the outlying cells that do not
belong to outlying rows, and for all missing
values.
\end{itemize}
Both of these matrices still have $n$ rows.
Neither is intended to simply replace the true data
matrix $\bX_{n,d}$\;.
Note that $\bfX_{n,d}$ does not try to impute outlying
cells inside outlying rows, which would mask these rows
in subsequent computations.
Since we do not know in advance which cells and
rows are outlying, the set of flagged cellwise
and rowwise outliers (and hence $\bcX_{n,d}$ and
$\bfX_{n,d}$) will be updated in the course of
the algorithm.
The first part of MacroPCA is the
DetectDeviatingCells (DDC) algorithm.
The description of this method can be found in
\cite{Rousseeuw:DDC} and in
Section 1 of the Supplementary Material.
The main purpose of the DDC method is to detect
cellwise outliers.
DDC outputs their positions $I_{c,DDC}$ as
well as imputations for these outlying cells
and any missing values.
It also yields an initial outlyingness
measure on the rows, which is however not
guaranteed to flag all outlying rows.
The set of flagged rows $I_{r,DDC}$ will be
improved in later steps.
The second part of MacroPCA constructs principal
components along the lines of the ICPCA
algorithm but employing a version of ROBPCA
\citep{Hubert:ROBPCA} to fit subspaces.
It consists of the following steps, with all
notations listed in Section 2
of the Supplementary Material.
\begin{enumerate}
\item {\bf Projection pursuit.} The goal of this
step is to provide an initial indication of which
rows are the least outlying.
For this ROBPCA starts by
identifying the $h < n$ least outlying rows by a
projection pursuit procedure.
We write $0.5 \leqslant \alpha=h/n < 1$.
This means that we can withstand up to a fraction
$1-\alpha$ of outlying rows.
To be on the safe side the default is
$\alpha=0.5$\,.
However, due to cellwise outliers there may be far
fewer than $h$ uncontaminated rows, so we cannot
apply this step to the original data $\bX_{n,d}$.
We also cannot use the entire imputed matrix
$\btX_{n,d}$ obtained
from DDC in which all outlying cells are imputed, even
those in potentially outlying rows, as this could mask
outlying rows.
Instead we use the cell-imputed matrix
$\bfX_{n,d}^{(0)}$ defined as follows:
\begin{enumerate}
\item In all rows flagged as outlying we keep the
original data values. Only the missing values in
these rows are replaced by the values imputed by
DDC. More precisely, for all $i$ in $I_{r,DDC}$
we set $\bfx_i^{(0)} = \bcx_i^{(0)}$.
\item In the $h$ unflagged rows with the fewest cells
flagged by DDC we impute those cells,
i.e. $\bfx_i^{(0)} = \btx_i$.
\end{enumerate}
As in ROBPCA the outlyingness of a point
$\bfx_i^{(0)}$ is then computed as
\begin{equation}
\text{outl}(\bfx_i^{(0)}) = \max_{\bv \in B}
\frac{|\bv'\bfx_i^{(0)} - \lmcd(\bv'\bfx_j^{(0)})|}
{\smcd(\bv'\bfx_j^{(0)})} \label{outlo} \ \ ,
\end{equation}
where $\lmcd(\bv'\bfx_j^{(0)})$ and
$\smcd(\bv'\bfx_j^{(0)})$ are the univariate MCD location
and scale estimators \citep{Rousseeuw:RobReg} of
$\{\bv'\bfx_1^{(0)},\ldots,\bv'\bfx_n^{(0)}\}$.
The set $B$ contains 250 directions through two data
points (or all of them if there are fewer than 250).
Finally, the indices of the $h$ rows $\bfx_i^{(0)}$
with the lowest outlyingness and not belonging to
$I_{r,DDC}$ are stored in the set $H_0$\,.
\item {\bf Subspace dimension.}
Here we choose the number of principal components.
For this we build a new cell-imputed matrix
$\bfX^{(1)}_{n,d}$ which imputes the outlying cells in
the rows of $H_0$ and imputes the NA's in all rows.
This means that $\bfx_i^{(1)} = \btx_i$ for
$i \in H_0$\,, and $\bfx_i^{(1)} = \bcx_i^{(0)}$ if
$i \notin H_0$.
Then we apply classical PCA to the $\bfx_i^{(1)}$ with
$i \in H_0$.
Their mean $\bm_{d}^{(1)}$ is an estimate of the center,
whereas the spectral decomposition of their covariance
matrix yields a loading matrix $\bP_{d,d}^{(1)}$ and a
diagonal matrix $\bL_{d,d}^{(1)}$ with the eigenvalues
sorted from largest to smallest.
These eigenvalues can be used to construct a scree
plot from which an appropriate dimension $k$ of the
subspace can be derived.
Alternatively, one can retain a certain cumulative
proportion of explained variance, such as 80\%.
The maximal number of principal components that MacroPCA
will consider is the tuning constant $k_{\max}$ which is
set to 10 by default.
\item {\bf Iterative subspace estimation.}
This step aims to estimate the $k$-dimensional
subspace fitting the data.
As in ICPCA this requires iteration, for $s \gs 2$:
\begin{enumerate}
\item The scores matrix in \eqref{eq:pcamodel} based
on the cell-imputed cases is computed as
$\bfT_{n,k}^{(s-1)} = (\bfX_{n,d}^{(s-1)} -
\boldsymbol 1_n
(\bm_{d}^{(s-1)})') \bP_{d,k}^{(s-1)}$\;.
The predicted data values are set to
$\bhX_{n,d}^{(s)} = \boldsymbol 1_n (\bm_d^{(s-1)})'
+ \bfT_{n,k}^{(s-1)} (\bP_{d,k}^{(s-1)})'$\;.
We then update the imputed matrices to
$\bcX_{n,d}^{(s)}$ and $\bfX_{n,d}^{(s)}$ by replacing
the appropriate cells by the corresponding cells of
$\bhX_{n,d}^{(s)}$.
That is, for $\bcX_{n,d}^{(s)}$ we update all the
imputations of missing cells, whereas for
$\bfX_{n,d}^{(s)}$ we update the imputations of the
outlying cells in rows of $H_0$ as well as the
NA's in all rows.
\item The PCA model is re-estimated by applying
classical PCA to the $\bfx_i^{(s)}$ with $i \in H_0$.
This yields a new estimate $\bm_{d}^{(s)}$
as well as an updated loading matrix
$\bP_{d,k}^{(s)}$\;.
\end{enumerate}
The iterations are repeated until $s=20$ or until
convergence is reached, i.e.\ when
the maximal angle between a vector in the new subspace
and the vector most parallel to it in the previous
subspace is below some tolerance
(by default 0.005).
Following~\citet{Krzanowski:GroupPC} this angle is
computed as $\arccos(\sqrt{\delta_k})$
where $\delta_k$ is the smallest eigenvalue of
$(\bP_{d,k}^{(s)})' \bP^{(s-1)}_{d,k}
(\bP^{(s-1)}_{d,k})' \bP_{d,k}^{(s)}$\;.
After all iterations we have the cell-imputed
matrix $\bfX_{n,d}^{(s)}$ as well as the estimated
center $\bm_{d}^{(s)}$ and the
loading matrix $\bP_{d,k}^{(s)}$\,.
\item {\bf Reweighting.}
In robust statistics one often follows an initial
estimate by a reweighting step in order to improve
the statistical efficiency at a low computational
cost, see e.g.
\citep{Rousseeuw:RobReg,Engelen:PCA}.
Here we use the orthogonal distance of each
$\bfx_i^{(s)}$ to the current PCA subspace:
\begin{equation*}
\fod_i = \|\bfx_i^{(s)} - \bfhx_i^{(s)}\| =
\| \bfx_i^{(s)}- (\bm_d^{(s)} +
(\bfx^{(s)}_i- \bm_d^{(s)})\bP_{d,k}^{(s)}
(\bP_{d,k}^{(s)})') \| \ .
\end{equation*}
The orthogonal distances to the power 2/3 are roughly
Gaussian except for the outliers \citep{Hubert:ROBPCA},
so we compute the cutoff value
\begin{equation}\label{eq:cutoffOD}
c_{\od} :=
\left(\lmcd(\{\fod_j^{2/3}\})+
\smcd(\{\fod_j^{2/3}\})
\, \Phi^{-1}(0.99) \right)^{3/2} \;\;.
\end{equation}
All cases for which $\fod_i \ls c_{\od}$
are considered non-outlying with respect to the
PCA subspace, and their indices are stored in a
set $H^*$. As before, any $i \in I_{r,DDC}$ is
removed from $H^*$. The cases not in $H^*$ are
flagged as rowwise outliers.
The final cell-imputed matrix $\bfX_{n,d}$
is given by
$\fx_{i,j} = \hfx_{i,j}^{(s)}$ if
$i \in H^*$ and $j \in I_{c,DDC}$ and
$\fx_{i,j} = \cx_{i,j}$ otherwise.
Applying classical PCA to the $n^*$ rows
$\bfx_i$ in $H^*$ yields a new center $\bm_d^*$
and a new loading matrix $\bP_{d,k}^*$\;.
\item {\bf DetMCD.} Now we want to obtain
a robust basis of the estimated subspace.
The columns of $\bP_{d,k}^*$ from step 4 need not
be robust, because some of the $n^*$ rows in
$H^*$ might be outlying inside the subspace.
These so-called
good leverage points do not harm the
estimation of the PCA subspace but they can still
affect the estimated eigenvectors and eigenvalues,
as illustrated by a toy example in Section
\ref{A:toy} of the Appendix.
In this step we first project the $n^*$ points
of $H^*$ onto the subspace, yielding
\begin{equation*}
\bfT_{n^*,k} = \left(\bfX_{n^*,d} -
\boldsymbol 1_{n^*} \bm_d^{*'} \right)
\bP_{d,k}^* \ \ .
\end{equation*}
Next, the center and scatter matrix of the scores
$\bfT_{n^*,k}$ are estimated by the DetMCD method
of \citet{Hubert:DetMCD}.
This is a fast, robust and deterministic algorithm
for multivariate location and scatter, yielding
$\bm_k^{\mcd}$ and $\bS_{k,k}^{\mcd}$.
Its computation is feasible because the dimension
$k$ of the subspace is quite low.
The spectral decomposition of $\bS_{k,k}^{\mcd}$
yields a loading matrix $\bP_{k,k}^{\mcd}$
and eigenvalues $\hlam_j$ for $j=1,\ldots,k$\;.
We set the final center to
$\bm_d = \bm_d^{*} +
\bP_{d,k}^{*}\bm_k^{\mcd}$
and the final loadings to
$\bP_{d,k} = \bP_{d,k}^*\bP_{k,k}^{\mcd}$.
\item{\bf Scores, predicted values and residuals.}
We now provide the final output.
We compute the scores of $\bcX_{n,d}$
as $\bcT_{n,k} = (\bcX_{n,d} -
\boldsymbol 1_{n}\bm_d')\bP_{d,k}$
and the predictions of $\bcX_{n,d}$
as $\bchX_{n,d} = \boldsymbol 1_{n} \bm_d' +
\bcT_{n,k} (\bP_{d,k})'$\;.
(The formulas for $\bfT_{n,k}$ and $\bfhX_{n,d}$
are analogous.)
This yields the difference matrix
$\bcX_{n,d}-\bchX_{n,d}$ which we then
robustly scale by column, yielding the
final standardized residual matrix $\bR_{n,d}$\,.
The orthogonal distance of $\bcx_i$ to the PCA
subspace is given by
\begin{equation}\label{eq:od}
\cod_i = \| \bcx_i - \bchx_i \| \;.
\end{equation}
\end{enumerate}
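As a concrete illustration of the convergence check in step 3, the Krzanowski angle between two successive subspaces can be computed as below. This is a standalone NumPy sketch; the function name is ours and it is not part of the MacroPCA software.

```python
import numpy as np

def max_subspace_angle(P_new, P_old):
    """Largest principal angle (in radians) between the column spaces of
    two d x k loading matrices with orthonormal columns, computed as
    arccos(sqrt(delta_k)) where delta_k is the smallest eigenvalue of
    P_new' P_old P_old' P_new (Krzanowski's criterion)."""
    M = P_new.T @ P_old @ P_old.T @ P_new     # k x k symmetric matrix
    delta_k = np.linalg.eigvalsh(M)[0]        # eigvalsh sorts ascending
    delta_k = min(max(delta_k, 0.0), 1.0)     # guard against round-off
    return np.arccos(np.sqrt(delta_k))

# Identical subspaces give angle 0; orthogonal subspaces give pi/2.
P1 = np.eye(4)[:, :2]
P2 = np.eye(4)[:, 2:]
print(max_subspace_angle(P1, P1))            # 0.0
print(round(max_subspace_angle(P1, P2), 6))  # 1.570796
```

In the algorithm the iterations stop once this angle drops below the tolerance (0.005 by default) or after 20 iterations.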
See Section \ref{sec:software} for the R code
carrying out MacroPCA.
MacroPCA can be carried out
in $O(nd(\min(n,d) +\log(n) + \log(d)))$
time (see\linebreak
Section \ref{A:complexity} of the Appendix)
which is not much more than the
complexity\linebreak
$O(nd\min(n,d))$ of classical PCA.
Figure \ref{fig:times} shows times as a
function of $n$ and $d$ indicating that
MacroPCA is quite fast.
The fraction of NA's in the data had no
substantial effect on the computation
time, as seen in Figure
\ref{fig:timesNAs}
in Section \ref{A:complexity}.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]
{MCAR_A09_Comptimes_in_n.pdf} &
\includegraphics[width=0.45\textwidth]
{MCAR_A09_Comptimes_in_d.pdf}
\vspace{-0.4cm}
\end{tabular}
\caption{Computation times of MacroPCA in
seconds on Intel i7-4800MQ at 2.70 GHz,
as a function of the number of
cases $n$ (left) and of the dimension
$d$ (right).}
\label{fig:times}
\end{figure}
Note that PCA loadings are highly influenced by
the variables with the largest variability.
For this reason the MacroPCA code provides the option
to divide each variable by a robust scale, which does
not increase the computational complexity.
\section{Outlier detection}
\label{sec:detection}
MacroPCA provides several tools for outlier detection.
We illustrate them on a dataset collected by
\cite{Alfons:robustHD} from the website of the
Top Gear car magazine.
It contains data on 297 cars, with 11 continuous
variables.
Five of these variables (price, displacement, BHP,
torque, top speed) are highly skewed, and were
logarithmically transformed.
The dataset contains 95 missing cells, which is only
2.9\% of the $297 \times 11 = 3267$ cells.
We retained two principal components ($k=2$).
\begin{figure}[ht!]
\centering
\includegraphics[width=1.0\textwidth]
{TopGear_IPCA_MacroPCA_residualMap_cropped}
\vspace{-1.0cm}
\caption{Residual map of selected rows from Top Gear
data: (left) when using ICPCA; (right) when using
MacroPCA. The numbers shown in the cells are
the original data values (with price in units
of 1000 UK Pounds).}
\label{fig:Cars1}
\end{figure}
The right hand panel of Figure~\ref{fig:Cars1} shows
the results of MacroPCA by a modification of the
cell map introduced by \citet{Rousseeuw:DDC}.
The computations were performed on all 297 cars, but
in order to make the map fit on a page it only shows
24 cars, including some of the more eventful cases.
The color of the cells stems from the standardized
residual matrix $\bR_{n,d}$ obtained by MacroPCA.
Cells with $|r_{ij}| \ls \sqrt{\chi^2_{1,0.99}} = 2.57$
are considered regular and colored yellow in the
residual map, whereas the missing values are white.
Outlying residuals receive a color which ranges from light
orange to red when $r_{ij} > 2.57$ and from light purple
to dark blue when $r_{ij} < -2.57$\;.
So a dark red cell indicates that its observed value is
much higher than its fitted value, while a dark blue
cell means the opposite.
To the right of each row in the map is a circle
whose color varies from white to black according
to the orthogonal distance $\cod_i$ given by
\eqref{eq:od} compared to the cutoff
\eqref{eq:cutoffOD}.
Cases with $\cod_i \ls c_{\od}$ lie close to the PCA
subspace and receive a white circle.
The others are given darker shades of gray up to black
according to their $\cod_i$\;.
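As a rough sketch of how such a cutoff is obtained, the following mimics \eqref{eq:cutoffOD} with the median and MAD standing in for the univariate MCD location and scale estimators (a simplification of what MacroPCA actually uses):

```python
import numpy as np

PHI_INV_099 = 2.3263478740408408  # Phi^{-1}(0.99), the 0.99 normal quantile

def od_cutoff(od):
    """Cutoff for orthogonal distances: od**(2/3) is roughly Gaussian for
    clean cases, so estimate its center and scale robustly and transform
    the 0.99 Gaussian quantile back to the original units."""
    z = np.asarray(od) ** (2.0 / 3.0)
    loc = np.median(z)
    scale = 1.4826 * np.median(np.abs(z - loc))  # consistency-corrected MAD
    return (loc + scale * PHI_INV_099) ** 1.5

rng = np.random.default_rng(0)
od = np.abs(rng.normal(size=1000)) ** 1.5  # clean-looking distances
c_od = od_cutoff(od)
print(np.mean(od > c_od))                  # only a few percent are flagged
```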
On these data we also ran the ICPCA method, which handles
missing values in classical PCA.
It differs from MacroPCA in some important ways: the
initial imputations are by nonrobust column means, the
iterations carry out CPCA and do not exclude outlying
rows, and the residuals are standardized by the
nonrobust standard deviation.
By itself ICPCA does not provide a residual map, but
we can construct one anyway by plotting the nonrobust
standardized residuals with the same color scheme,
yielding the left panel of Figure~\ref{fig:Cars1}.
The ICPCA algorithm finds high orthogonal distances
(dark circles)
for the BMW i3, the Chevrolet Volt, the Renault Twizzy
and the Vauxhall Ampera.
These are hybrid or purely electrical cars with a high
or missing MPG (miles per gallon).
Note that the Ssangyong Rodius and Renault Twizzy get
blue cells for their acceleration time of zero seconds,
which is physically impossible.
On this dataset the ICPCA algorithm provides decent results
because the total number of outliers is small compared
to the size of the data, and indeed the residual map of
all 297 cars was mostly yellow.
But MacroPCA (right panel) detects more deviating behavior.
The orthogonal distance of the hybrid Citroen DS5 and the
electrical Mitsubishi i-MiEV are now on the high side,
and the method flags the Bugatti Veyron and Pagani Huayra
supercars as well as the Land Rover Defender and
Mercedes-Benz G all-terrain vehicles.
It also flags more cells, giving a more complete
picture of the special characteristics of some cars.
\begin{figure}[!ht]
\centering
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]
{TopGear_ICPCA_outlierMap_cropped} &
\includegraphics[width=0.48\textwidth]
{TopGear_MacroPCA_outlierMap_cropped}
\end{tabular}
\vspace{-0.7cm}
\caption{Outlier map of Top Gear data: (left) when
using ICPCA; (right) when using MacroPCA.}
\label{fig:Cars2}
\end{figure}
We can also compute the {\it score distance} of each
case, which is the robustified Mahalanobis distance of
its projection on the PCA subspace among all such
projected points.
It is easily computed as
\begin{equation}
\csd_i = \sqrt{\sum_{j=1}^k (\ct_{ij})^2/\hlam_j}
\label{eq:sd}
\end{equation}
where $\ct_{ij}$ are the scores and $\hlam_j$ the
eigenvalues obtained by MacroPCA.
This allows us to construct a PCA outlier map of cases
as introduced in ~\citet{Hubert:ROBPCA}, which plots
the orthogonal distances $\cod_i$ on the vertical axis
versus the score distances $\csd_i$\;.
The MacroPCA outlier map of these data is the right
panel of Figure~\ref{fig:Cars2}.
The vertical line indicates the cutoff
$c_{\sd} = \sqrt{\chi^2_{k,0.99}}$
and the horizontal line is the cutoff $c_{\od}$\,.
Regular cases are those with a small
$\csd_i \ls c_{\sd}$ and a small $\cod_i \ls c_{\od}$\;.
Cases with large $\csd_i$ and small $\cod_i$ are called
good leverage points.
The cases with large $\cod_i$ can be divided into
orthogonal outliers (when their $\csd_i$ is small) and
bad leverage points (when their $\csd_i$ is large too).
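These four regions of the outlier map can be summarized in a small classification helper. The sketch below is illustrative; the function name and labels are ours, and the $\chi^2$ quantile is hard-coded for $k=2$.

```python
import numpy as np

CHI2_2_099 = 9.21034  # 0.99 quantile of the chi-square with k = 2 df

def classify_cases(sd, od, c_sd, c_od):
    """Label each case by its position in the PCA outlier map."""
    sd, od = np.asarray(sd), np.asarray(od)
    labels = np.empty(sd.shape, dtype=object)
    labels[(sd <= c_sd) & (od <= c_od)] = "regular"
    labels[(sd >  c_sd) & (od <= c_od)] = "good leverage"
    labels[(sd <= c_sd) & (od >  c_od)] = "orthogonal outlier"
    labels[(sd >  c_sd) & (od >  c_od)] = "bad leverage"
    return labels

c_sd = np.sqrt(CHI2_2_099)  # cutoff sqrt(chi2_{k,0.99}) with k = 2
print(classify_cases(sd=[1.0, 5.0, 1.0, 5.0], od=[0.5, 0.5, 3.0, 3.0],
                     c_sd=c_sd, c_od=2.0))
```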
We see several orthogonal outliers such as the Vauxhall
Ampera as well as some bad leverage points, especially
the BMW i3.
There are also some good leverage points.
The left panel displays the outlier map for ICPCA.
It flags the BMW i3 as an orthogonal outlier.
This behavior is typical because a bad leverage point
will attract the fit of classical methods, making it
appear less special.
For the same reason ICPCA considers some of the good
leverage points as regular cases.
That the ICPCA outlier map is still able to flag some
outliers is due to the fact that this dataset
only has a small percentage of outlying rows.
\section{Online data analysis}
\label{sec:predict}
Applying MacroPCA to a data set $\bX_{n,d}$ yields a
PCA fit.
Now suppose that a new case (row) $\bx$ comes in, and
we would like to impute its missing values,
detect its outlying cells and impute them, estimate
its scores, and find out whether it is a rowwise outlier.
We could of course append $\bx$ to $\bX_{n,d}$ and rerun
MacroPCA, but that would be very inefficient.
Instead we propose a method to analyze $\bx$ using
only the output of MacroPCA on the initial
set $\bX_{n,d}$\;.
This can be done quite fast, which makes the procedure
suitable for online process control.
For outlier-free data with NA's this was
studied by \cite{Nelson:miss} and
\cite{Walczak:TutorialI}.
\cite{FolchFortuny:PCAmissing} call this model
exploitation, as opposed to model building
(fitting a PCA model).
Our procedure consists of two stages, along the
lines of MacroPCA.
\begin{enumerate}[label={\arabic*.}]
\item {\bf DDCpredict} is a new function
which only uses $\bx$ and the output of DDC on the
initial data $\bX_{n,d}$\;.
First the entries of $\bx$ are standardized using
the robust location and scale estimates from DDC.
Then all $x_j$ with
$|x_j| > \sqrt{\chi^2_{1,0.99}} = 2.57$
are replaced by NA's.
Next all NA's are estimated
as in DDC making use of the pre-built
coefficients $b_{jh}$ and weights $w_{jh}$\;.
Also the deshrinkage step uses the original
robust slopes.
The \textit{DDCpredict} stage yields the imputed
vector $\btx^{(0)}$ and the standardized residual
of each cell $x_j$.
\item {\bf MacroPCApredict} improves on the initial
imputation $\btx^{(0)}$\;.
The improvements are based solely on the $\bm_d$
and $\bP_{d,k}$ that were obtained by MacroPCA
on the original data $\bX_{n,d}$\;.
Step $s \gs 1$ is of the following form:
\begin{enumerate}
\item Project the imputed case $\btx^{(s-1)}$
on the MacroPCA subspace to obtain its scores
vector
$\bt^{(s)} = (\bP_{d,k})'(\btx^{(s-1)} - \bm_d)$;
\item transform the scores to the original
space, yielding
$\hat{\bx}^{(s)} = \bm_d + \bP_{d,k} \bt^{(s)}$\;;
\item Reimpute the outlying cells and missing values
of $\bx$ by the corresponding values of
$\hat{\bx}^{(s)}$, yielding $\btx^{(s)}$\,.
\end{enumerate}
These steps are iterated until convergence
(when the new imputed values are within a tolerance
of the old ones) or
the maximal number of steps (by default 20) is
reached.
We denote the final $\btx^{(s)}$ as $\btx$.
Next we create $\bcx$ by replacing
the missing values in $\bx$ by the corresponding
cells in $\btx$.
We then compute the orthogonal distance
$\OD(\bcx)$ and the score distance $\SD(\bcx)$.
If $\OD(\bcx) > c_{\od}$ the new case $\bx$ is
flagged as an orthogonal outlier.
Finally the cell residuals $\cx_j - \chx_j$
are standardized as in the last step of MacroPCA,
and used to flag outlying cells in $\bx$.
\end{enumerate}
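The iteration in stage 2 can be sketched as follows. This is an illustrative NumPy version under simplifying assumptions: `m` and `P` denote the center and loadings from a previous fit, `bad` indexes the cells (missing or flagged as outlying) to re-impute, and the function name is ours.

```python
import numpy as np

def predict_row(x, m, P, bad, max_iter=20, tol=1e-6):
    """Iteratively impute the cells `bad` of a new row x from a fitted PCA
    model (center m, d x k loading matrix P), as in MacroPCApredict."""
    xt = x.copy()
    xt[bad] = m[bad]                    # crude initial imputation
    t = np.zeros(P.shape[1])
    for _ in range(max_iter):
        t = P.T @ (xt - m)              # (a) scores of the imputed row
        xhat = m + P @ t                # (b) back-transform to R^d
        if np.max(np.abs(xhat[bad] - xt[bad])) < tol:
            break                       # imputations have converged
        xt[bad] = xhat[bad]             # (c) re-impute the flagged cells
    return xt, t

# Toy model: one-dimensional subspace spanned by (1,1,1)/sqrt(3) in R^3.
m = np.zeros(3)
P = np.ones((3, 1)) / np.sqrt(3)
x = np.array([2.0, 2.0, np.nan])
xt, t = predict_row(x, m, P, bad=np.array([2]))
print(np.round(xt, 4))  # the NA converges to 2, consistent with the model
```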
To illustrate this prediction
procedure we re-analyze the Top Gear data set.
We exclude the 24 cars shown in the residual map
of Figure~\ref{fig:Cars1}
and build the MacroPCA model on the remaining data.
This model was then provided to analyze the 24
selected cars as `new' data.
Figure~\ref{fig:Cars3} shows the result.
As before the cells are colored according to their
standardized residual, and the circles on
the right are filled according to their $\cod$.
The left panel is the MacroPCA residual map shown
in Figure~\ref{fig:Cars1}, which was obtained by
applying MacroPCA to the entire data set.
The right panel shows the result of analyzing these
24 cases using the fit obtained without them.
The residual maps are quite similar.
Note that each cell now shows its standardized
residual (instead of its data value as in
Figure~\ref{fig:Cars1}), making it
easier to see the differences.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\textwidth]
{TopGear_MacroPCApredict_residualMap_cropped}
\vspace{-1.0cm}
\caption{Top Gear data set: residual maps obtained by
(left) including and (right) excluding these 24
cars when fitting the PCA model.}
\label{fig:Cars3}
\end{figure}
\section{Simulations}
\label{sec:Simulations}
We have compared the performance of ICPCA, MROBPCA
and MacroPCA in an extensive simulation study.
Several contamination models were considered with
missing values, cellwise outliers, rowwise outliers,
and combinations of them. Only a few of the results
are reported here since the others yielded similar
conclusions.
The clean data $\bX_{n,d}^0$ are generated from a
multivariate Gaussian with $\bmu = \bzero$ and two
types of covariance matrices $\bSigma_{d,d}$.
The first one is based on the structured
correlation matrix called A09 where each off-diagonal
entry is $\rho_{i,j} = \left(-0.9\right)^{|i-j|}$.
The second type of covariance matrix is based on the
random correlation matrices of
\cite{Agostinelli:Cellwise} and will be called ALYZ.
These correlation matrices are turned into covariance
matrices with other eigenvalues.
More specifically, the diagonal elements of the
matrix $\bL_{d,d}$ from the spectral decomposition
$\bSigma_{d,d}={\bP_{d,d}}\bL_{d,d}{\bP'_{d,d}}$
are replaced by the
desired values listed below.
The specifications of the clean
data are $n=100$, $d=200$,
$\bL_{d,d} = \text{diag}(30, 25, 20, 15, 10, 5,
0.098, 0.0975, \ldots, 0.0020, 0.0015)$ and $k=6$
(since $\sum_{j=1}^{6} \lambda_j /
\sum_{j=1}^{200} \lambda_j = 91.5\% $).
MacroPCA takes less than a second for $n=100$,
$d=200$ as seen in Figure \ref{fig:times}.
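The A09-based covariance construction described above can be sketched in a few lines (illustrative; a small $d$ and a short eigenvalue list are used for display):

```python
import numpy as np

def a09_covariance(d, eigvals):
    """Correlation matrix rho_ij = (-0.9)**|i-j|, turned into a covariance
    matrix by replacing its eigenvalues with the prescribed ones while
    keeping the eigenvectors of the spectral decomposition."""
    idx = np.arange(d)
    R = (-0.9) ** np.abs(idx[:, None] - idx[None, :])
    _, P = np.linalg.eigh(R)            # spectral decomposition R = P L P'
    L = np.diag(np.asarray(eigvals, dtype=float))
    return P @ L @ P.T

Sigma = a09_covariance(5, [30, 25, 20, 15, 10])
print(np.round(np.linalg.eigvalsh(Sigma), 6))  # exactly the prescribed set
```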
In a first simulation setting, the clean data
$\bX_{n,d}^0$ are modified by replacing a random
subset of 5\%, 10\%, ... up to 30\% of the
$n \times d$ cells with NA's.
The second simulation setting generates NA's and
outlying cells by randomly replacing 20\% of the cells
$x_{ij}$ by missing values and 20\% by the value
$\gamma\sigma_j$
where $\sigma^2_{j}$ is the $j$-th diagonal element of
$\bSigma_{d,d}$ and $\gamma$ ranges from 0 to 20.
The third simulation setting generates NA's and
outlying rows.
Here 20\% of random cells are replaced by NA's
and a random subset of 20\% of the rows is replaced
by rows generated from
$N(\gamma \bv_{k+1},\bSigma_{d,d})$ where
$\gamma$ varies from 0 to 50 and $\bv_{k+1}$ is the
$(k+1)$th eigenvector of $\bSigma_{d,d}$.
The last simulation setting generates 20\% of NA's,
together with 10\% of cellwise outliers and 10\% of
rowwise outliers in the same way.
In each setting we consider the set C consisting
of the rows $i$
that were not replaced by rowwise outliers,
with $c := \#C$, and the data matrix
$\bX_{c,d}^0$ consisting of those rows of the
clean data $\bX_{n,d}^0$\,.
As a baseline for the simulation we apply
classical PCA to $\bX_{c,d}^0$ and denote the
resulting predictions by $\hat{x}_{ij}^C$ for
$i$ in $C$.
We then measure the mean squared error (MSE) from
the baseline:
\begin{equation*}
\text{MSE} = \frac{1}{c d}\,
\sum_{i \in C} \sum_{j=1}^{d}
\left(\hat{x}_{ij} - \hat{x}_{ij}^{C}\right)^2
\end{equation*}
where $\hat{x}_{ij}$ is the predicted value for
$x_{ij}$ obtained by applying the different
methods to the contaminated data.
The MSE is then averaged over 100 replications.
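The error measure can be sketched as a tiny helper (illustrative; the names are ours, and random data stand in for the simulation output):

```python
import numpy as np

def mse_vs_baseline(Xhat, Xhat_baseline, clean_rows):
    """Mean squared difference between a method's predicted values and the
    baseline predictions, over the rows not replaced by rowwise outliers."""
    D = Xhat[clean_rows] - Xhat_baseline[clean_rows]
    return np.mean(D ** 2)

rng = np.random.default_rng(1)
baseline = rng.normal(size=(100, 20))             # stand-in for x-hat^C
method = baseline + 0.1 * rng.normal(size=(100, 20))
clean_rows = np.arange(80)                        # rows kept in the set C
print(round(mse_vs_baseline(method, baseline, clean_rows), 3))
```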
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip5mm A09, fraction $\varepsilon$ of missing
values & \hskip0mm ALYZ, fraction $\varepsilon$
of missing values \\
\includegraphics[width=0.45\textwidth]
{MCAR_d200_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{MCAR_d200_ALYZ_MSE}
\vspace{-0.3cm}
\end{tabular}
\caption{Average MSE as a function of the fraction
$\eps$ of missing values. The data were generated
using A09 (left) and ALYZ (right).}
\label{fig:MCAR}
\end{figure}
Figure \ref{fig:MCAR} shows the performance of
ICPCA, MROBPCA and MacroPCA when part of the data is missing.
As CPCA and ROBPCA cannot deal with NA's, they are
not included in this comparison.
Since there are no outliers, the classical ICPCA performs
best, followed by MROBPCA and MacroPCA, which perform
similarly to each other. Both are only slightly worse than
ICPCA, especially considering the scale of the vertical
axis, which is much smaller than in the other three
simulation settings.
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip7mm A09, missing values \& cellwise
& \hskip3mm ALYZ, missing values \& cellwise \\
\includegraphics[width=0.45\textwidth]
{ICMMCAR_d200_20_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{ICMMCAR_d200_20_ALYZ_MSE}
\vspace{-0.2cm}
\end{tabular}
\caption{Average MSE for data with 20\% of missing
values and 20\% of cellwise outliers, as a
function of $\gamma$ which determines the
distance of the cellwise outliers.}
\label{fig:ICM}
\end{figure}
Now we set 20\% of the data cells to missing and
add 20\% of cellwise contamination given by $\gamma$.
Figure~\ref{fig:ICM} shows the performance of ICPCA,
MROBPCA and MacroPCA in this situation.
The MSE of both ICPCA and MROBPCA grows very fast with
$\gamma$ which indicates that these methods are not at
all robust to cellwise outliers.
Note that $d=200$, so on average
$1-(1-0.2)^{200}\approx 100\%$ of the rows are
contaminated, whereas no purely rowwise method
can handle more than 50\%.
MacroPCA is the only method that can withstand cellwise
outliers here. When $\gamma$ is smaller than 5 the MSE
goes up, but this is not surprising as in that case the
values in the contaminated cells are still close to the
clean ones. As soon as the contamination is sufficiently
far away, the MSE drops to a very low value.
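The statement that nearly all rows are touched by cellwise contamination follows from a one-line probability computation, sketched here:

```python
def frac_rows_touched(eps, d):
    """Expected fraction of rows containing at least one contaminated cell
    when each of the d cells is independently contaminated with
    probability eps."""
    return 1.0 - (1.0 - eps) ** d

print(round(frac_rows_touched(0.20, 200), 6))  # 1.0 (essentially all rows)
print(round(frac_rows_touched(0.20, 10), 4))   # 0.8926 even for d = 10
```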
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip8mm A09, missing values \& rowwise &
\hskip3mm ALYZ, missing values \& rowwise \\
\includegraphics[width=0.45\textwidth]
{THCMMCAR_d200_20_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{THCMMCAR_d200_20_ALYZ_MSE}
\vspace{-0.2cm}
\end{tabular}
\caption{Average MSE for data with 20\% of missing
values and 20\% of rowwise outliers, as a
function of $\gamma$ which determines the
distance of the rowwise outliers.}
\label{fig:THCM}
\end{figure}
Figure~\ref{fig:THCM} presents the results of ICPCA,
MROBPCA and MacroPCA when there are 20\% of missing
values combined with 20\% of rowwise contamination.
As expected, the ICPCA algorithm breaks down while
MROBPCA and MacroPCA provide very good results.
MROBPCA and MacroPCA are affected the most (though
not much) by nearby outliers, and very little by
faraway contamination.
\begin{figure}[!ht]
\centering
\vskip2mm
\begin{tabular}{cc}
\hskip 1mm A09, missing \& cellwise \& rowwise &
\hskip 1mm ALYZ, missing \& cellwise \& rowwise \\
\includegraphics[width=0.45\textwidth]
{BOTHMCAR_d200_20_A09_MSE} &
\includegraphics[width=0.45\textwidth]
{BOTHMCAR_d200_20_ALYZ_MSE}
\vspace{-0.2cm}
\end{tabular}
\caption{Average MSE for data with 20\% of missing
values, 10\% of cellwise outliers and 10\% of
rowwise outliers, as a function of $\gamma$
which determines the distance of both the
cellwise and the rowwise outliers.}
\label{fig:BOTH}
\end{figure}
Finally, Figure~\ref{fig:BOTH} presents the results
in the situation of 20\% of missing values combined
with 10\% of cellwise and 10\% of rowwise
contamination.
In this scenario the ICPCA and MROBPCA algorithms
break down whereas MacroPCA still provides reasonable
results.
In this section the missing values were generated
in a rather simple way. In Section \ref{A:MAR}
they are generated in a more challenging way but
still MAR, with qualitatively similar results.
\section{Real data examples}
\label{sec:Examples}
\subsection{Glass data}
\label{sec:glass}
The {\it glass} dataset \citep{Lemberge:PLS}
contains spectra with $d=750$ wavelengths of $n=180$
archeological glass samples.
It is available in the R package
{\it cellWise} \citep{Raymaekers:cellWise}.
The MacroPCA method selects 4 principal components
and yields a $180 \times 750$ matrix of
standardized residuals. There is not enough
resolution on a page to show so many individual
cells in a residual map.
Therefore we created a map (the top panel of
Figure \ref{fig:Glass1}) which combines the residuals
into blocks of $5 \times 5$ cells.
The color of each block now depends on the most
frequent type of outlying cell in it, the resulting
color being an average.
For example, an orange block indicates that quite a
few cells in the block were red and most of the
others were yellow.
The more red cells in the block, the darker red the
block will be.
We see that MacroPCA has flagged a lot of cells,
that happen to be concentrated in a minority of the
rows where they show patterns.
In fact, the colors indicate that some of the glass
samples (between 22 and 30) have a higher
concentration of phosphorus, whereas rows 57--63 and
74--76 have an unusually high concentration of calcium.
The bottom part of the residual map looks very
different,
due to the fact that the measuring instrument was
cleaned before recording the last 38 spectra.
One could say that those outlying rows belong to a
different population.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\textwidth]
{Glass_MacroPCA_ROBPCA_residualMap_cropped}
\vspace{-1.0cm}
\caption{Residual maps of the glass dataset when
fitting the PCA model by MacroPCA (top) and
ROBPCA (bottom).}
\label{fig:Glass1}
\end{figure}
Since the dataset has no NA's and we found that
fewer than half of the rows are outlying, it can
also be analyzed by the original ROBPCA method as
was done by \cite{Hubert:ROBPCA}, also for $k = 4$.
This detects the same rowwise outliers.
In principle ROBPCA is a purely rowwise method that
does not flag cells.
Even though ROBPCA does not produce a residual map,
we can construct one analogously to that of MacroPCA.
First we construct the residual matrix of ROBPCA,
the rows of which are given by $\bx_i - \bhx_i$
where $\bhx_i$ is the projection of $\bx_i$ on the
ROBPCA subspace.
We then standardize the residuals in each
column by dividing them by a
robust 1-step scale M-estimate.
This yields the bottom panel of
Figure \ref{fig:Glass1}.
We see that the two residual maps look quite
similar.
This example illustrates that purely rowwise robust
methods can be useful to detect cellwise outliers
when these cells occur in fewer than 50\% of the rows.
But if the cellwise outliers contaminate more rows,
this approach is insufficient.
\subsection{DPOSS data}
\label{sec:DPOSS}
In our last example we analyze data from the
Digitized Palomar Sky Survey (DPOSS) described by
\cite{Odewahn:Sky}.
This is a huge database of celestial objects,
from which we have drawn 20,000 stars at random.
Each star has been observed in the color bands
J, F, and N.
Each band has 7 variables.
Three of them measure light intensity:
for the J band they are MAperJ, MTotJ and
MCoreJ where the last letter indicates the band.
The variable AreaJ is the size of the star
based on its number of pixels.
The remaining variables IR2J, csfJ and EllipJ
combine size and shape.
(There were two more variables in the original
data, but these measured the background rather
than the star itself.)
There are substantial correlations between these
21 variables.
\begin{figure}[!ht]
\centering
\vspace{0.3cm}
\begin{tabular}{cc}
\includegraphics[width=0.485\textwidth]
{DPOSS_MacroPCA_loadings_cropped} &
\includegraphics[width=0.485\textwidth]
{DPOSS_MacroPCA_Scores12_cropped}\\
\end{tabular}
\vspace{-0.8cm}
\caption{DPOSS stars data: (left) loadings of the
first (black full line) and the second (blue
dashed line) component of MacroPCA, with vertical
lines separating the three color bands; (right)
plot of
the first two scores, with filled red circles
for stars with high orthogonal distance
$\protect\cod$ and
open black circles for the others.}
\label{fig:DPOSSloadings}
\end{figure}
In this dataset 84.6\% of the rows contain NA's
(in all, 50.2\% of the entries are missing).
Often an entire color band is missing, and
sometimes two.
We applied MacroPCA to these data, choosing $k=4$
components according to the scree plot.
The left panel of Figure \ref{fig:DPOSSloadings}
shows the loadings of the first and second
component.
It appears that the first component captures the
overall negative correlation between two groups
of variables: those measuring light intensity
(the first 3 variables in each band) and the
others (variables 4 to 7 in each band).
The right panel is the corresponding scores plot,
in which the 150 stars with the
highest orthogonal distance $\cod$ are shown in red.
Most of these stand out in the space of PC1 and
PC2 (bad leverage points), whereas some only
have a high $\cod$ (orthogonal outliers).
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]
{DPOSS_MacroPCA_residualMap_cropped}
\vspace{-0.5cm}
\caption{MacroPCA residual map of stars in the
DPOSS data, with 25 stars per row block.
The six row blocks at the top correspond
to the stars with highest
$\protect\cod$.}
\label{fig:DPOSScellmap}
\end{figure}
Figure \ref{fig:DPOSScellmap} shows the residual
map of MacroPCA, in which each row
block combines 25 stars.
The six row blocks at the top correspond to the
150 stars with the highest $\cod$.
We note that the outliers tend to be more
luminous (MTot) than expected
and have a larger Area, which suggests
giant stars.
The analogous residual map of ICPCA (not shown)
did not reveal much.
Note that the non-outlying rows in the bottom
part of the residual map are yellow, and the
missing color bands show up as blocks in
lighter yellow (a combination of yellow and
white cells).
\section{Conclusions}
\label{sec:Conclusions}
The MacroPCA method is able to handle missing
values, cellwise outliers, and rowwise outliers.
This makes it well-suited for the analysis of
possibly messy real data.
Simulation showed that its performance is similar
to a classical method in the case of outlier-free
data with missing values, and to an existing
robust method when the data only has rowwise
outliers.
The algorithm is fast enough to deal with many
variables, and we intend to speed it up
by recoding it in C.
MacroPCA can analyze new data as they come in,
only making use of its existing output obtained
from the initial dataset.
It imputes missing values in the new data,
flags and imputes outlying cells, and flags
outlying rows.
This computation is fast, so it can be used to
screen new data in quality control or even
online process control. (One can update the
initial fit offline from time to time.)
The advantage of MacroPCA is that it not only
tells us when the process goes out of control,
but also which variables are responsible.
Potential extensions of MacroPCA include methods
of PCA regression and partial least squares
able to deal with rowwise and
cellwise outliers and missing values.
\section{Software Availability}
\label{sec:software}
The R code of MacroPCA, as well as the data
sets and an example script, are available at
{\it https://wis.kuleuven.be/stat/robust/software}.
It will be incorporated in the R package
{\it cellWise} on CRAN.
\section*{Appendix}\label{ap:AppA}}{\section{}\label{ap:AppA}}
We provide results that are used throughout the paper and all the
omitted proofs. The following result is useful in bounding the eigenvalues of the linear
Newton approximation of $F_\gamma$, and is required by Theorem~\ref{th:ProxPropQuad} and Proposition~\ref{prop:PSDHess}.
\begin{lemma}\label{lem:eigen}
If $Q\in\mathbb S_+^n$ and $\mu_f=\lambda_{\min}(Q)$, $L_f=\lambda_{\max}(Q)$ then
\[
\lambda_{\min}(Q(I-\gamma Q))=\begin{cases}
\mu_f(1-\gamma\mu_f),&\mathrm{ if }\ 0<\gamma\leq 1/(L_f+\mu_f),\\
L_f(1-\gamma L_f),&\mathrm{ if }\ 1/(L_f+\mu_f)\leq\gamma< 1/L_f.
\end{cases}
\]
\end{lemma}
\begin{proof}
Since $Q$ is symmetric, it is diagonalizable: there exists an invertible matrix $S\in\mathbb R^{n\times n}$ such that $Q=SJS^{-1}$, where $J=\mathop{\rm diag}\nolimits(\lambda_1(Q),\ldots,\lambda_n(Q))$.
Therefore,
\begin{align*}
Q(I-\gamma Q)&=SJS^{-1}(I-\gamma SJS^{-1})\\
&=SJS^{-1} S(I-\gamma J)S^{-1}\\
&=SJ(I-\gamma J)S^{-1},
\end{align*}
and the eigenvalues of $Q(I-\gamma Q)$ are exactly
$$\lambda_1(Q)(1-\gamma\lambda_1(Q)),\ldots,\lambda_n(Q)(1-\gamma\lambda_n(Q)).$$
Next, consider the minimization problem $\min_{\lambda\in [\mu_f,L_f]}\phi(\lambda)\triangleq\lambda(1-\gamma\lambda)$.
Since $\gamma$ is positive, $\phi$ is concave and the minimum is attained either at $\mu_f$ or $L_f$. The proof finishes by noticing that
$$\mu_f(1-\gamma\mu_f)\leq L_f(1-\gamma L_f) \Leftrightarrow \gamma\in\left(0,1/(L_f+\mu_f)\right).$$
\iftoggle{svver}{\qed}{}
\end{proof}
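For illustration, the closed form of Lemma~\ref{lem:eigen} is easy to check numerically. The following sketch (a hypothetical NumPy example, not part of the development) compares it with the smallest eigenvalue of $Q(I-\gamma Q)$ computed directly on a random positive semidefinite $Q$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
Q = A @ A.T                               # symmetric positive semidefinite
eigs = np.linalg.eigvalsh(Q)              # ascending order
mu_f, L_f = eigs[0], eigs[-1]

def lam_min_claimed(gamma):
    # Closed form from the lemma
    if gamma <= 1.0 / (L_f + mu_f):
        return mu_f * (1.0 - gamma * mu_f)
    return L_f * (1.0 - gamma * L_f)

errs = []
for gamma in [0.2 / L_f, 1.0 / (L_f + mu_f), 0.9 / L_f]:
    M = Q @ (np.eye(n) - gamma * Q)       # = Q - gamma*Q^2, symmetric
    errs.append(abs(np.linalg.eigvalsh(M)[0] - lam_min_claimed(gamma)))
```

The branch chosen at $\gamma=1/(L_f+\mu_f)$ is immaterial, since both expressions coincide there.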
The next result gives conditions for the Lipschitz continuity of $P_\gamma$ and $G_\gamma$,
and is needed by Theorem~\ref{th:ProxPropQuad} to obtain the Lipschitz constant of
$\nabla F_\gamma$ in the case where $f$ is quadratic, and by Theorems~\ref{eq:PNMconvRate}
and~\ref{eq:PGNMconvRate} in order to assess the local convergence properties of
Algorithms~\ref{al:PNM} and~\ref{al:PGNM}.
\begin{lemma}\label{le:zNonExp}
Suppose that $\gamma<1/L_f$. Then $P_\gamma:\Re^n\to\Re^n$ is nonexpansive, i.e.,
\begin{equation}\label{eq:Pnonexp}
\|P_{\gamma}(x)-P_{\gamma}(y)\|\leq\|x-y\|,
\end{equation}
and $G_\gamma:\Re^n\to\Re^n$ is $(2/\gamma)$-Lipschitz continuous, i.e.,
\begin{equation}
\|G_{\gamma}(x)-G_{\gamma}(y)\|\leq(2/\gamma)\|x-y\|.
\end{equation}
\end{lemma}
\begin{proof}
On one hand we know that $\mathop{\rm prox}\nolimits_{\gamma g}$ is firmly nonexpansive~\cite{moreau1965proximiteet},
therefore $\mathop{\rm prox}\nolimits_{\gamma g}$ is a $1/2$-averaged operator~\cite[Rem. 4.24(iii)]{bauschke2011convex}.
On the other hand, since $\nabla f$ is the Lipschitz-continuous gradient of a convex function, it is $1/L_f$-cocoercive.
Therefore, since $\gamma<1/L_f$, the operator $x\to x-\gamma\nabla f(x)$ is $\gamma L_f/2$-averaged~\cite[Prop. 4.33]{bauschke2011convex}. Since $P_\gamma$ is the composition of two averaged operators, it is an averaged operator as well~\cite[Prop. 4.32]{bauschke2011convex}. By~\cite[Rem. 4.24(i)]{bauschke2011convex} this implies that $P_\gamma$ is nonexpansive, proving~\eqref{eq:Pnonexp}. Next, consider any $x,y\in\Re^n$:
\begin{align*}
\|G_\gamma(x)-G_\gamma(y)\|&= 1/\gamma\|P_\gamma(x)-P_{\gamma}(y)-(x-y)\|\\
&\leq 1/\gamma\left(\|P_\gamma(x)-P_{\gamma}(y)\|+\|x-y\|\right)\\
&\leq 2/\gamma\|x-y\|.
\end{align*}
\iftoggle{svver}{\qed}{}
\end{proof}
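As an illustrative check of Lemma~\ref{le:zNonExp}, the sketch below builds a made-up instance (quadratic $f$, $g=\|\cdot\|_1$, whose proximal mapping is the soft-thresholding operator of Section~\ref{ex:EllOne}), samples random pairs of points and records the Lipschitz ratios of $P_\gamma$ and $G_\gamma$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
Q = A @ A.T                                   # Hessian of f(x) = x'Qx/2 + q'x
q = rng.standard_normal(n)
L_f = np.linalg.eigvalsh(Q)[-1]
gamma = 0.9 / L_f                             # gamma < 1/L_f, as the lemma requires

grad_f = lambda x: Q @ x + q
prox_g = lambda x: np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)  # prox of gamma*||.||_1
P = lambda x: prox_g(x - gamma * grad_f(x))   # forward-backward operator P_gamma
G = lambda x: (x - P(x)) / gamma              # gradient mapping G_gamma

ratios_P, ratios_G = [], []
for _ in range(200):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    d = np.linalg.norm(x - y)
    ratios_P.append(np.linalg.norm(P(x) - P(y)) / d)
    ratios_G.append(np.linalg.norm(G(x) - G(y)) / d)
```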
The following proposition is an extension of \cite[Lemma 2.3]{beck2009fast}
that handles the case where $f$ can be strongly convex.
\begin{proposition}\label{prop:ProxBasic}
For any $\gamma\in (0,1/L_f]$, $x\in\Re^n$, $\bar{x}\in\Re^n$
\begin{equation}\label{eq:ProxBasic}
F(x)\geq F(P_\gamma(\bar{x}))+G_{\gamma}(\bar{x})'(x-\bar{x})+\tfrac{\gamma}{2}\|G_{\gamma}(\bar{x})\|^2+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2.\nonumber
\end{equation}
\end{proposition}
\begin{proof}
For any $x\in\Re^n$, $\bar{x}\in\Re^n$ we have
\begin{align*}
F(x) & \geq f(\bar{x})+\nabla f(\bar{x})'(x-\bar{x})+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2\\
& \phantom{\geq f(\bar{x})}+g(P_\gamma(\bar{x}))+(G_{\gamma}(\bar{x})-\nabla f(\bar{x}))'(x-P_\gamma(\bar{x}))\\
& = f(\bar{x})+g(P_\gamma(\bar{x}))-\nabla f(\bar{x})'(\bar{x}-P_\gamma(\bar{x}))+G_{\gamma}(\bar{x})'(x-P_\gamma(\bar{x}))+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2\\
& = F_\gamma(\bar{x})-\tfrac{\gamma}{2}\|G_\gamma(\bar{x})\|^2+G_{\gamma}(\bar{x})'(\bar{x}-P_{\gamma}(\bar{x}))+G_{\gamma}(\bar{x})'(x-\bar{x})+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2\\
& = F_\gamma(\bar{x})-\tfrac{\gamma}{2}\|G_\gamma(\bar{x})\|^2+\gamma\|G_{\gamma}(\bar{x})\|^2+G_{\gamma}(\bar{x})'(x-\bar{x})+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2\\
& \geq F(P_\gamma(\bar{x}))+\tfrac{\gamma}{2}(2-\gamma L_f)\|G_\gamma(\bar{x})\|^2+G_{\gamma}(\bar{x})'(x-\bar{x})+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2.
\end{align*}
The first inequality follows by strong convexity of $f$ and $G_{\gamma}(\bar{x})-\nabla f(\bar{x})\in\partial g(P_{\gamma}(\bar{x}))$,
the equality by the definition of $F_\gamma$ and the final inequality by Theorem~\ref{Th:PropFg}(\ref{prop:LowBnd}).
The result follows by noticing that $\gamma\in (0,1/L_f]$ implies $2-\gamma L_f\geq 1$.
\iftoggle{svver}{\qed}{}
\end{proof}
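Proposition~\ref{prop:ProxBasic} can likewise be spot-checked numerically; the sketch below (an illustrative instance: a strongly convex quadratic plus the $\ell_1$ norm) records the slack between the two sides of the inequality, which should never be negative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)                    # strongly convex quadratic part
q = rng.standard_normal(n)
eigs = np.linalg.eigvalsh(Q)
mu_f, L_f = eigs[0], eigs[-1]
gamma = 1.0 / L_f                          # gamma in (0, 1/L_f]

F = lambda x: 0.5 * x @ Q @ x + q @ x + np.abs(x).sum()   # F = f + ||.||_1
prox_g = lambda x: np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
P = lambda x: prox_g(x - gamma * (Q @ x + q))
G = lambda x: (x - P(x)) / gamma

slacks = []
for _ in range(200):
    x, xb = rng.standard_normal(n), rng.standard_normal(n)
    rhs = (F(P(xb)) + G(xb) @ (x - xb)
           + 0.5 * gamma * np.linalg.norm(G(xb)) ** 2
           + 0.5 * mu_f * np.linalg.norm(x - xb) ** 2)
    slacks.append(F(x) - rhs)
```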
An immediate result of Proposition~\ref{prop:ProxBasic} is the following.
\begin{corollary}\label{prop:GradLowBnd}
For any $\gamma\in (0,1/L_f]$, $x\in\Re^n$, it holds
\[
\|G_{\gamma}(x)\|^2\geq 2\mu_f(F(P_{\gamma}(x))-F_\star).
\]
\end{corollary}
\begin{proof}
According to Proposition~\ref{prop:ProxBasic}, if $\gamma\in (0,1/L_f]$ then for any $x,\bar{x}\in\Re^n$ we certainly have
\begin{equation}\label{eq:StronConvzbound}
F(x)\geq F(P_\gamma(\bar{x}))+G_{\gamma}(\bar{x})'(x-\bar{x})+\tfrac{\mu_f}{2}\|x-\bar{x}\|^2.
\end{equation}
Minimizing both sides with respect to $x$, the left-hand side attains the value $F_\star$,
while the right-hand side is minimized at $x=\bar{x}-\mu_f^{-1}G_{\gamma}(\bar{x})$.
Substituting in \eqref{eq:StronConvzbound} we obtain
\begin{align*}
F_\star&\geq F(P_{\gamma}(\bar{x}))-\tfrac{1}{2\mu_f}\|G_{\gamma}(\bar{x})\|^2.
\end{align*}
\iftoggle{svver}{\qed}{}
\end{proof}
The next proposition is useful for proving the global linear convergence rate of
Algorithm~\ref{al:PGNM}, in the case of $f$ strongly convex,
cf. Theorem~\ref{th:PGNMbnds2}.
\begin{proposition}\label{prop:LipLowBndDist}
For any $x\in\Re^n$, $x_\star\in X_\star$ and $\gamma\in (0,1/L_f]$
\[
F(P_{\gamma}(x))-F_\star\leq\tfrac{1}{2\gamma}(1-\gamma\mu_f)\|x-x_\star\|^2.
\]
\end{proposition}
\begin{proof}
By definition of $F_\gamma$ we have
\begin{align*}
F_\gamma(x)&{=}\min_{z\in\Re^n}\left\{f(x){+}\nabla f(x)'(z-x)+g(z){+}\tfrac{1}{2\gamma}\|z-x\|^2\right\}\\
&\leq f(x){+}\nabla f(x)'(x_\star-x)+g(x_\star)+\tfrac{1}{2\gamma}\|x_\star-x\|^2\\
&\leq f(x_\star)+g(x_\star)-\tfrac{\mu_f}{2}\|x-x_\star\|^2+\tfrac{1}{2\gamma}\|x_\star-x\|^2,
\end{align*}
where the second inequality follows from (strong) convexity of $f$. The proof finishes by invoking Theorem~\ref{Th:PropFg}(\ref{prop:LowBnd}).
\iftoggle{svver}{\qed}{}
\end{proof}
Hereafter we provide the proofs omitted in Sections~\ref{sec:LNA} and~\ref{sec:FBNCG}.
\iftoggle{svver}{\paragraph{Proof of Proposition~\ref{prop:LNAprops1}\\\\}\label{par:proofLNA1}}
{\begin{proof}[Proof of Proposition~\ref{prop:LNAprops1}]\label{par:proofLNA1}}
Let $T(x)=x-\gamma\nabla f(x)$. Then $P_\gamma$ can be expressed as the composition
of mappings $\mathop{\rm prox}\nolimits_{\gamma g}$ and $T$, \emph{i.e.},~ $P_\gamma(x)=\mathop{\rm prox}\nolimits_{\gamma g}(T(x))$.
Since $\mathop{\rm prox}\nolimits_{\gamma g}$ is (strongly) semismooth at $T(x_\star)$ we have that
$\partial_C\mathop{\rm prox}\nolimits_{\gamma g}$ is a (strong) LNA for $\mathop{\rm prox}\nolimits_{\gamma g}$ at $T(x_\star)$.
On the other hand, since $T$ is twice continuously differentiable, its Jacobian
$\nabla T(x)=I-\gamma\nabla^2 f(x)$ is a LNA of $T$ at $x_\star$.
If in addition $\nabla^2 f$ is Lipschitz continuous around $x_\star$ then
$\nabla T$ is a strong LNA of $T$ at $x_\star$~\cite[Prop. 7.2.9]{facchinei2003finite}.
Invoking~\cite[Th. 7.5.17]{facchinei2003finite} we have that
$$\mathscr{P}_\gamma(x)=\{P(I-\gamma\nabla^2 f(x))\ |\ P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x-\gamma\nabla f(x))\},$$
is a (strong) LNA of $P_\gamma$ at $x_\star$.
Next consider $G_\gamma(x)=\gamma^{-1}(x-P_\gamma(x))$. Applying~\cite[Cor. 7.5.18(a)(b)]{facchinei2003finite}
we have
$$\mathscr{G}_\gamma(x)=\{\gamma^{-1}(I-V)\ |\ V\in\mathscr{P}_\gamma(x)\},$$
is a (strong) LNA for $G_\gamma$ at $x_\star$.
Reinterpreting $\hat{\partial}^2F_\gamma(x)$ with the current notation,
\begin{align*}
\hat{\partial}^2F_\gamma(x)&=\{(I-\gamma\nabla^2f(x))Z\ |\ Z\in\mathscr{G}_\gamma(x)\}.
\end{align*}
Therefore, for any $H\in\hat{\partial}^2F_\gamma(x)$
\begin{align*}
\|\nabla F_\gamma(x)+H(x_\star-x)-\nabla F_\gamma(x_\star)\|&=\|(I-\gamma\nabla^2f(x))(G_\gamma(x)+Z(x-x_\star)-G_\gamma(x_\star))\|\\
&\leq\|G_\gamma(x)+Z(x-x_\star)-G_\gamma(x_\star)\|,
\end{align*}
where the equality follows by $0=\nabla F_\gamma(x_\star)=(I-\gamma\nabla^2 f(x_\star))G_\gamma(x_\star)$,
and the inequality by $\gamma\in (0,1/L_f)$. Since $\mathscr{G}_\gamma$ is a (strong) LNA of $G_\gamma$, the last term is $o(\|x-x_\star\|)$
(and $O(\|x-x_\star\|^2)$ in the case where $\nabla^2 f$ is locally Lipschitz continuous).
This shows that $\hat{\partial}^2F_\gamma$ is a (strong) LNA of $\nabla F_\gamma$ at $x_\star$.
\iftoggle{svver}{\qed}{\end{proof}}
\iftoggle{svver}{\paragraph{Proof of Proposition~\ref{prop:PSDHess}\\\\}\label{par:proofLNA2}}
{\begin{proof}[Proof of Proposition~\ref{prop:PSDHess}]}
Any $H\in\hat{\partial}^2 F_\gamma(x)$ can be expressed as
$$H=\gamma^{-1}(I-\gamma\nabla^2 f(x))-\gamma^{-1}(I-\gamma\nabla^2 f(x))P(I-\gamma\nabla^2 f(x))$$
for some $P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x-\gamma\nabla f(x))$. Obviously, recalling Theorem~\ref{th:JacProx}, $H$ is a symmetric matrix. We have
\begin{align*}
d'Hd & =\gamma^{-1}d'(I-\gamma\nabla^2 f(x))d-\gamma^{-1}d'(I-\gamma\nabla^2 f(x))P(I-\gamma\nabla^2 f(x))d\\
&\geq \gamma^{-1}d'(I-\gamma\nabla^2 f(x))d-\gamma^{-1}\|(I-\gamma\nabla^2 f(x))d\|^2\\
&= d'(I-\gamma\nabla^2 f(x))\nabla^2 f(x)d\\
&\geq \min\{(1-\gamma\mu_f)\mu_f,(1-\gamma L_f)L_f\}\|d\|^2,
\end{align*}
where the first inequality follows by Theorem~\ref{th:JacProx} and the second by Lemma~\ref{lem:eigen}.
On the other hand
\begin{align*}
d'Hd & =\gamma^{-1}d'(I-\gamma\nabla^2 f(x))d-\gamma^{-1}d'(I-\gamma\nabla^2 f(x))P(I-\gamma\nabla^2 f(x))d\\
&\leq\gamma^{-1}d'(I-\gamma\nabla^2 f(x))d\\
&\leq \gamma^{-1}(1-\gamma\mu_f)\|d\|^2,
\end{align*}
where the first inequality follows by Theorem~\ref{th:JacProx}.
\iftoggle{svver}{\qed}{\end{proof}}
\iftoggle{svver}{\paragraph{Proof of Lemma~\ref{lem:sharpMin}\\\\}\label{par:proofLNA3}}
{\begin{proof}[Proof of Lemma~\ref{lem:sharpMin}]}
It suffices to prove that
$\|x-x_\star\|\leq c\|\nabla F_\gamma(x)\|$ for all
$x$ with $\|x-x_\star\|\leq\delta$, for some positive
constants $c$, $\delta$. The result will then follow, since
$\|\nabla F_\gamma(x)\|=\|(I-\gamma\nabla^2f(x))G_\gamma(x)\|\leq \|G_\gamma(x)\|$,
for $\gamma\in (0,1/L_f)$. For the sake of contradiction assume that there
exists a sequence of vectors $\{x^k\}$ converging to $x_\star$ such that
$x^k\neq x_\star$ for every $k$ and
\begin{equation}\label{eq:contra}
\lim_{k\to\infty}\frac{\nabla F_\gamma(x^k)}{\|x^k-x_\star\|}=0.
\end{equation}
The assumptions of the lemma guarantee through Proposition~\ref{prop:LNAprops1} that $\hat{\partial}^2F_\gamma$ is a LNA of $\nabla F_\gamma$ at $x_\star$, therefore
$$0=\lim_{k\to\infty}\frac{\nabla F_\gamma(x^k)+H^k(x_\star-x^k)-\nabla F_\gamma(x_\star)}{\|x^k-x_\star\|}=\lim_{k\to\infty}\frac{H^k(x_\star-x^k)}{\|x^k-x_\star\|},$$
where the second equality follows from~\eqref{eq:contra}. This implies that
$$\lim_{k\to\infty}\frac{(x_\star-x^k)'H^k(x_\star-x^k)}{\|x^k-x_\star\|^2}=0.$$
But since $\hat{\partial}^2F_\gamma$ is compact-valued and outer semicontinuous at $x_\star$, and $\{x^k\}$ converges to $x_\star$, the nonsingularity assumption on the elements of $\hat{\partial}^2 F_\gamma(x_\star)$ implies through~\cite[Lem.~7.5.2]{facchinei2003finite} that for sufficiently large $k$ the smallest eigenvalue of $H^k$ is minorized by a positive number. Therefore the above limit must be positive, reaching a contradiction. Uniqueness follows from the fact that the set of zeros of $\nabla F_\gamma$ is equal to the set of optimal solutions of~\eqref{eq:GenProb}, through Theorem~\ref{Th:PropFg}\eqref{prop:DerPen}.
\iftoggle{svver}{\qed}{\end{proof}}
\iftoggle{svver}{\paragraph{Proof of Theorem~\ref{th:ComplPNM}\\\\}\label{proof:ComplPNM}}
{\begin{proof}[Proof of Theorem~\ref{th:ComplPNM}.]}
Since $\mu_f>0$ and $\zeta=0$, using Proposition~\ref{prop:PSDHess}, Eq.~\eqref{eq:CGprop} gives
\begin{equation}\label{eq:Dir1}
\nabla F_\gamma(x^k)'d^k\leq -c_1\|\nabla F_\gamma(x^k)\|^2,
\end{equation}
where $c_1=\gamma/(1-\gamma\mu_f)$,
while Eq.~\eqref{eq:boundd} gives
\begin{equation}\label{eq:Dir}
\|d^k\|\leq c_2\|\nabla F_\gamma(x^k)\|,
\end{equation}
where $c_2=(\eta+1)/\xi_1$ and $\xi_1\triangleq\min\left\{(1-\gamma\mu_f)\mu_f,(1-\gamma L_f)L_f\right\}$.
Using Eqs.~\eqref{eq:Armijo},~\eqref{eq:Dir1}, step $\tau_k=2^{-i_k}$ satisfies
$$F_\gamma(x^k+\tau_kd^k)-F_\gamma(x^k)\leq-\sigma\tau_kc_1\|\nabla F_\gamma(x^k)\|^2.$$
Due to Theorem~\ref{th:ProxPropQuad}, $\nabla F_\gamma$ is Lipschitz continuous, therefore using the descent Lemma~\cite[Prop. A.24]{bertsekas1999nonlinear}
\begin{align}
F_\gamma(x^k+2^{-i}d^k)-F_\gamma(x^k)&\leq 2^{-i}\nabla F_\gamma(x^k)'d^k+\tfrac{L_{F_\gamma}}{2}2^{-2i}\|d^k\|^2\nonumber\\
&\leq-2^{-i}c_1\|\nabla F_{\gamma}(x^k)\|^2+\tfrac{L_{F_{\gamma}}}{2}{c^2_2}2^{-2i}\|\nabla F_{\gamma}(x^k)\|^2 \nonumber\\
&\leq-2^{-i}c_1(1-\tfrac{L_{F_{\gamma}}}{2}\tfrac{c^2_2}{c_1}2^{-i})\|\nabla F_{\gamma}(x^k)\|^2\label{eq:LipDecrease}
\end{align}
where the second inequality follows by~\eqref{eq:Dir}.
Let $i_{\min}$ be the first index $i$ for which $1-\tfrac{L_{F_\gamma}}{2}\tfrac{c^2_2}{c_1}2^{-i}\geq\sigma$, \emph{i.e.},~
\begin{subequations}\label{eq:Arm}
\begin{align}
1-\tfrac{L_{F_\gamma}}{2}\tfrac{c^2_2}{c_1}2^{-i}&<\sigma,\quad 0\leq i< i_{\min}\label{eq:Arm1}\\
1-\tfrac{L_{F_\gamma}}{2}\tfrac{c^2_2}{c_1}2^{-i_{\min}} &\geq\sigma\label{eq:Arm2}
\end{align}
\end{subequations}
From~\eqref{eq:Armijo},~\eqref{eq:LipDecrease} and~\eqref{eq:Arm} we conclude that $i_k\leq i_{\min}$ and therefore $\tau_k\geq\hat{\tau}_{\min}$, where $\hat{\tau}_{\min}=2^{-i_{\min}}$; thus we have
\begin{equation}\label{eq:MinDecrease}
F_{\gamma}(x^k+\tau_kd^k)-F_{\gamma}(x^k)\leq-\sigma\hat{\tau}_{\min}c_1\|\nabla F_{\gamma}(x^k)\|^2
\end{equation}
From Eq.~\eqref{eq:Arm1} we obtain
$$\sigma > 1-\tfrac{L_{F_\gamma}}{2}\tfrac{c^2_2}{c_1}2^{-(i_{\min}-1)}=1-\tfrac{c^2_2}{c_1}L_{F_\gamma}2^{-i_{\min}}=1-\tfrac{c^2_2}{c_1}L_{F_\gamma}\hat{\tau}_{\min}$$
Hence
\begin{equation}\label{eq:minStep}
\hat{\tau}_{\min}>\frac{1-\sigma}{L_{F_\gamma}}\frac{c_1}{c^2_2}.
\end{equation}
Subtracting $F_\star$ from both sides of~\eqref{eq:MinDecrease} and using~\eqref{eq:minStep}
\begin{equation}\label{eq:OnS}
F_\gamma(x^k+\tau_kd^k)-F_\star\leq F_\gamma(x^k)-F_\star-\tfrac{\sigma(1-\sigma)}{L_{F_\gamma}}\tfrac{c^2_1}{c^2_2}\|\nabla F_\gamma(x^k)\|^2.
\end{equation}
Since $F_\gamma$ is strongly convex (cf. Theorem~\ref{th:ProxPropQuad}) we have~\cite[Th. 2.1.10]{nesterov2003introductory}
\begin{equation}\label{eq:strConvLow}
F_\gamma(x^{k})-F_\star\leq\frac{1}{2\mu_{F_\gamma}}\|\nabla F_\gamma(x^k)\|^2.
\end{equation}
Combining~\eqref{eq:OnS} and~\eqref{eq:strConvLow} we obtain
$$F_\gamma(x^{k+1})-F_\star\leq r_{F_\gamma}(F_\gamma(x^k)-F_\star)$$
where $r_{F_\gamma}= 1-\tfrac{2\mu_{F_\gamma}\sigma(1-\sigma)}{L_{F_\gamma}}\tfrac{c^2_1}{c^2_2}$, therefore
$$F_\gamma(x^{k})-F_\star\leq r_{F_\gamma}^k(F_\gamma(x^0)-F_\star).$$
Using $F(P_\gamma(x^k))\leq F_\gamma(x^k)$ (cf. Theorem~\ref{Th:PropFg}\eqref{prop:LowBnd}) we arrive at~\eqref{eq:QuadRateF}.
Using~\cite[Th. 2.1.8]{nesterov2003introductory}
$$(\mu_{F_\gamma}/2)\|x-x_\star\|^2\leq F_\gamma(x)-F_\star\leq (L_{F_\gamma}/2)\|x-x_\star\|^2$$
we obtain~\eqref{eq:QuadRatex}.
\iftoggle{svver}{\qed}{\end{proof}}
\iftoggle{svver}{\paragraph{Proof of Theorem~\ref{th:PGNMbnds1}\\\\}\label{proof:PGNMbnds1}}
{\begin{proof}[Proof of Theorem~\ref{th:PGNMbnds1}]}
If $k\notin\mathcal{K}$ and $s_k=0$, then $F(x^{ k+1})=F(P_\gamma(x^{ k}))\leq F_\gamma (x^{ k})$, where the inequality follows from~\eqref{eq:LowBnd4Gamma}.
If $k\in\mathcal{K}$ or $s_k=1$, then $F(x^{ k+1})=F(P_{\gamma}(\hat{x}^k))\leq F_\gamma(\hat{x}^k)\leq F_\gamma(x^ k)$, where the first inequality uses~\eqref{eq:LowBnd4Gamma} while the second uses the fact that $d^ k$ is a direction of descent for $F_\gamma$. Therefore, we have
\begin{equation}\label{eq:Comp1}
F(x^{k+1})\leq F_\gamma (x^{k}),\quad k\in\mathbb N.
\end{equation}
Next, for any $x\in\Re^n$
\begin{equation}\label{eq:Comp2}
F_\gamma(x)\leq\min_{z\in\Re^n}\left\{f(z)+g(z)+\tfrac{1}{2\gamma}\|z-x\|^2\right\}=F^{\gamma}(x),
\end{equation}
where the inequality uses the convexity of $f$ (recall that $F^\gamma$ is the Moreau envelope of $F=f+g$). Combining~\eqref{eq:Comp1} with~\eqref{eq:Comp2}, we obtain $F(x^{k+1})\leq F^\gamma(x^k)$.
The rest of the proof is similar to \cite[Th. 4]{nesterov2007gradient}. In particular we have
\begin{align*}
F(x^{k+1})&\leq F^\gamma(x^k)=\min_{x\in\Re^n}\left\{F(x)+\tfrac{1}{2\gamma}\|x-x^k\|^2\right\}\\
&\leq\min_{0\leq\alpha\leq 1}\left\{F(\alpha x_\star+(1-\alpha)x^k)+\tfrac{\alpha^2}{2\gamma}\|x^k-x_\star\|^2\right\}\\
&\leq\min_{0\leq\alpha\leq 1}\left\{F(x^k)-\alpha(F(x^k)-F_\star)+\tfrac{R^2}{2\gamma}\alpha^2\right\},
\end{align*}
where the last inequality follows by convexity of $F$.
If $F(x^0)-F_\star\geq R^2/\gamma$, then the optimal solution of the latter problem for $k=0$ is $\alpha=1$ and we obtain~
\eqref{eq:FirstStep}. Otherwise, the optimal solution is $\alpha=\frac{\gamma(F(x^k)-F_\star)}{R^2}\leq \frac{\gamma(F(x^0)-F_\star)}{R^2}\leq 1$ and we obtain
$$F(x^{k+1})\leq F(x^k)-\frac{\gamma(F(x^k)-F_\star)^2}{2R^2}.$$
Letting $\lambda_k=\frac{1}{F(x^k)-F_\star}$ the latter inequality is expressed as
$$\frac{1}{\lambda_{k+1}}\leq\frac{1}{\lambda_k}-\frac{\gamma}{2R^2\lambda_k^2}.$$
Multiplying both sides by $\lambda_{k}\lambda_{k+1}$ and rearranging
\begin{align*}
\lambda_{k+1}\geq\lambda_k+\frac{\gamma}{2R^2}\frac{\lambda_{k+1}}{\lambda_k}\geq \lambda_k+\frac{\gamma}{2R^2}
\end{align*}
where the latter inequality follows from the fact that $\{F(x^{k})\}_{k\in\mathbb N}$ is nonincreasing (cf.~\eqref{eq:DesPGNM}). Summing up for $0,\ldots,k-1$ we obtain
$$\lambda_k\geq\lambda_0+\frac{\gamma}{2R^2}k\geq\frac{\gamma}{2R^2}(k+2)$$
where the last inequality follows by $F(x^0)-F_\star\leq R^2/\gamma$. Rearranging, we arrive at~\eqref{eq:kStep}.
\iftoggle{svver}{\qed}{\end{proof}}
\iftoggle{svver}{\paragraph{Proof of Theorem~\ref{th:PGNMbnds2}\\\\}\label{proof:PGNMbnds2}}
{\begin{proof}[Proof of Theorem~\ref{th:PGNMbnds2}]}
If $k\notin\mathcal{K}$ and $s_k=0$, then $x^{ k+1}=P_\gamma(x^ k)$ and the decrease condition \eqref{eq:DesPGNM} holds.
Subtracting $F_\star$ from both sides and using Corollary~\ref{prop:GradLowBnd} we obtain
\begin{equation}\label{eq:StronConDec}
F(x^ k)-F_{\star}\geq (1+\gamma\mu_f)(F(x^{ k+1})-F_\star).
\end{equation}
If $k\in\mathcal{K}$ or $s_k=1$, we have $F(x^{ k+1})=F(P_\gamma(\hat{x}^ k))\leq F_\gamma(\hat{x}^ k)-\tfrac{\gamma}{2}\|G_\gamma(\hat{x}^ k)\|^2\leq F_\gamma({x}^ k)-\tfrac{\gamma}{2}\|G_\gamma(\hat{x}^ k)\|^2\leq F(x^ k)-\tfrac{\gamma}{2}\|G_\gamma(\hat{x}^k)\|^2$, where the first inequality follows from Theorem~\ref{Th:PropFg}(\ref{prop:LowBnd}), the second from~\eqref{eq:Armijo} and the descent property of $d^ k$ and the third one from Theorem~\ref{Th:PropFg}(\ref{prop:UppBnd}).
Subtracting $F_\star$ from both sides,
$$F(x^{k+1})-F_\star+\tfrac{\gamma}{2}\|G_\gamma(\hat{x}^k)\|^2\leq F(x^k)-F_\star.$$
Using Corollary~\ref{prop:GradLowBnd}, we obtain $\|G_\gamma(\hat{x}^k)\|^2\geq2\mu_f(F(P_\gamma(\hat{x}^k))-F_\star)=2\mu_f(F(x^{k+1})-F_\star)$. Combining the last two inequalities we again obtain \eqref{eq:StronConDec}, which proves \eqref{eq:strC1}. Now, from \eqref{eq:StronConDec} we obtain
\begin{align}
F(x^{ k+1})-F_{\star}&\leq(1+\gamma\mu_f)^{- k}(F(x^1)-F_\star)\nonumber\\
&= (1+\gamma\mu_f)^{- k}(F(P_{\gamma}(x^0))-F_\star)\nonumber\\
&\leq\frac{1-\gamma\mu_f}{2\gamma(1+\gamma\mu_f)^{ k}}\|x^0-x_\star\|^2\label{eq:distX1},
\end{align}
where the equality comes from the fact that $s_0=0$ and the second inequality follows from Proposition~\ref{prop:LipLowBndDist}.
Finally, putting $x=x^{k+1}$, $\bar{x}=x_\star\in X_\star$ in \eqref{eq:ProxBasic} and using $P_\gamma(x_\star)=x_\star$, $G_\gamma(x_\star)=0$, we obtain
\begin{equation}\label{eq:LowStrConvex}
F(x^{ k+1})-F_\star\geq\tfrac{\mu_f}{2}\|x^{ k+1}-x_\star\|^2.
\end{equation}
Combining~\eqref{eq:distX1} and~\eqref{eq:LowStrConvex} we arrive at~\eqref{eq:strC2}.
\iftoggle{svver}{\qed}{\end{proof}}
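The per-step contraction \eqref{eq:StronConDec} can be observed on plain forward-backward iterations (the $s_k=0$ branch of the algorithm). The following sketch uses a made-up strongly convex $\ell_1$-regularized quadratic; $F_\star$ is estimated by iterating to numerical convergence, and the ratios $(F(x^{k+1})-F_\star)/(F(x^k)-F_\star)$ are recorded:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10
A = rng.standard_normal((n, n))
Q = A @ A.T + np.eye(n)                    # mu_f >= 1, so f is strongly convex
q = rng.standard_normal(n)
eigs = np.linalg.eigvalsh(Q)
mu_f, L_f = eigs[0], eigs[-1]
gamma = 1.0 / L_f

F = lambda x: 0.5 * x @ Q @ x + q @ x + np.abs(x).sum()
prox_g = lambda x: np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)
P = lambda x: prox_g(x - gamma * (Q @ x + q))

# Estimate F_star by running the (linearly convergent) iteration to convergence
z = np.zeros(n)
for _ in range(5000):
    z = P(z)
F_star = F(z)

# Per-step contraction of F(x^k) - F_star along plain forward-backward steps
rate = 1.0 / (1.0 + gamma * mu_f)
xk = P(rng.standard_normal(n))             # x^1 = P(x^0), i.e. s_0 = 0
vals = [F(xk)]
for _ in range(30):
    xk = P(xk)
    vals.append(F(xk))
ratios = [(vals[k + 1] - F_star) / (vals[k] - F_star)
          for k in range(30) if vals[k] - F_star > 1e-9]
```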
\section{Conclusions and Future Work}
In this paper we presented a framework, based on the continuously differentiable function~\eqref{eq:Penalty} which we called
\emph{forward-backward envelope (FBE)},
to address a wide class of nonsmooth convex optimization problems in composite form.
Problems of this form arise in many fields such as control, signal and image processing, system identification and machine learning.
Using tools from nonsmooth analysis we derived two algorithms, namely FBN-CG I and II, that are Newton-like methods minimizing the FBE,
for which we proved fast asymptotic convergence rates.
Furthermore, Theorems~\ref{th:ComplPNM},~\ref{th:PGNMbnds1} and~\ref{th:PGNMbnds2} provide global complexity estimates,
making the algorithms also appealing for real-time applications. The considered approach makes it possible to exploit the sparsity patterns of many problems in the vicinity of the solution,
so that the resulting Newton system is usually of small dimension for many significant problems. This also implies that the algorithms can favorably take
advantage of warm-starting techniques. Our computational experience supports the theoretical results, and shows how in some scenarios our method
challenges other well known approaches.
The framework we introduced opens up the possibility of extending many existing and well-known algorithms, originally introduced for smooth unconstrained optimization,
to the nonsmooth or constrained case. This is the case, for example, of Newton methods based on a trust-region approach, as well as of quasi-Newton methods.
Future work includes embedding the Newton iterations in accelerated versions of the forward-backward splitting, in order to obtain better global convergence rates.
Finally, the extension of the framework to the nonconvex case (\emph{i.e.},~ to the case in which the smooth term $f$ in~\eqref{eq:GenProb} is nonconvex) can also be considered
in order to address a wider range of applications.
\section{Examples}\label{sec:Examples}
In this section we discuss the generalized Jacobian of the proximal mapping of many relevant nonsmooth functions. Some of the considered examples will be particularly useful in
Section~\ref{sec:Simulations} to test the effectiveness of Algorithms~\ref{al:PNM} and \ref{al:PGNM} on specific problems.
\subsection{Indicator functions}\label{ex:IndFun} Constrained convex problems can be cast in the composite form \eqref{eq:GenProb} by encoding the feasible set $D$ with the appropriate indicator
function $\delta_D$. Whenever $\Pi_D$, the projection onto $D$, is efficiently computable, then algorithms like the forward-backward splitting \eqref{eq:FBS} can be conveniently considered. In the following we
analyze the generalized Jacobian of some of such projections.
\subsubsection{Affine sets}\label{ex:projAff}
If $D=\{x\ |\ Ax= b\}$, $A\in\Re^{m\times n}$, then $\Pi_D(x)=x-A^{\dagger} (Ax-b)$, where $A^{\dagger}$ is the Moore-Penrose pseudoinverse of $A$. For example if $m<n$ and $A$ has full row rank, then $A^{\dagger}=A'(AA')^{-1}$. Obviously $\Pi_D$ is an affine mapping, thus everywhere differentiable with
\begin{equation}
\partial_C(\Pi_D)(x)=\partial_B(\Pi_D)(x)=\{\nabla\Pi_D(x)\}=\{I-A^{\dagger} A\}.
\end{equation}
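For illustration, the projection and its constant Jacobian can be coded directly from this formula; since $\Pi_D$ is affine, a finite-difference check recovers the Jacobian up to rounding (a hypothetical NumPy instance):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 3, 7
A = rng.standard_normal((m, n))           # full row rank with probability 1
b = rng.standard_normal(m)

A_dag = A.T @ np.linalg.inv(A @ A.T)      # A^dagger = A'(AA')^{-1} (full row rank case)
proj = lambda x: x - A_dag @ (A @ x - b)  # projection onto D = {x : Ax = b}
J = np.eye(n) - A_dag @ A                 # nabla Pi_D(x), constant in x

x = rng.standard_normal(n)
feas_err = np.linalg.norm(A @ proj(x) - b)     # projected point is feasible
h = 1e-6
fd_err = max(np.linalg.norm((proj(x + h * np.eye(n)[i]) - proj(x)) / h - J[:, i])
             for i in range(n))
```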
\subsubsection{Polyhedral sets}\label{ex:projPoly}
In this case $D=\{x\ |\ Ax=b,\ Cx\leq d\}$, with $A\in\Re^{m_1\times n}$ and $C\in\Re^{m_2\times n}$. It is well known
that $\Pi_D$ is piecewise affine. In particular let
$$\mathscr{I}_D=\left\{I\subseteq[m_2]\ \left|\begin{array}{l} \textrm{there exists a vector }x\in\Re^n\textrm{ with }Ax=b,\\ \ C_{i\cdot}x=d_i,\ i\in I,\ C_{j\cdot}x<d_j,\ j\in[m_2]\setminus I\end{array}\right.\right\}$$
For each $I\in \mathscr{I}_D$ let
\begin{align*}
F_I&=\{x\in D\ |\ C_{i\cdot}x=d_i,\ i\in I\},\\
S_I&=\mathop{\rm aff}\nolimits F_I=\{x\in \Re^n\ |\ Ax=b,\ C_{i\cdot}x=d_i,\ i\in I\},\\
N_I&=\mathop{\rm cone}\nolimits\left\{\begin{bmatrix}A'&C_{I\cdot}'\end{bmatrix}\right\},\\
C_I&=F_I+N_I.
\end{align*}
We then have $\Pi_D(x)\in\{\Pi_{S_I}(x)\ |\ I\in\mathscr{I}_D\}$, \emph{i.e.},~ $\Pi_D$ is a piecewise affine function. The affine pieces of $\Pi_D$ are the projections on the corresponding affine subspaces $S_I$, see Section~\ref{ex:projAff}. In fact for each $x\in C_I$ we have $\Pi_D(x)=\Pi_{S_I}(x)$, each $C_I$ is full dimensional and $\Re^n=\bigcup_{I\in\mathscr{I}_D}C_I$.
For each $I\in \mathscr{I}_D$ let $P_I=\nabla \Pi_{S_I}$ and for each $x\in\Re^n$ let $J(x)=\{I\in \mathscr{I}_D\ |\ x\in C_I\}$. Then
$$\partial_C(\Pi_D)(x)=\mathop{\rm conv}\nolimits\partial_B(\Pi_D)(x)=\mathop{\rm conv}\nolimits\{P_I\ | I\in J(x)\}.$$
Therefore, in order to determine an element $P$ of $\partial_B(\Pi_D)(x)$ it suffices to compute $\bar{x}=\Pi_D(x)$ and take
$P=I-B^{\dagger} B$, where
$$B=\begin{bmatrix}A\\ C_{I(x)\cdot}\end{bmatrix},$$
and $I(x)=\{i\in[m_2]\ |\ C_{i\cdot}\bar{x}=d_i\}$ is the set of inequality constraints active at $\bar{x}$.
\subsubsection{Halfspaces}
We denote $(x)_{+} = \max\{0, x\}$. If $D=\{x\ |\ a'x\leq b\}$ then
$$\Pi_D(x)=x-\left(\frac{(a'x-b)_+}{\|a\|_2^2}\right)a$$
and
$$\partial_C(\Pi_D)(x)=\begin{cases}\{I-(1/\|a\|^2)aa'\},&\textrm{ if }a'x>b,\\
\{I\},&\textrm{ if }a'x<b,\\
\mathop{\rm conv}\nolimits\{I,I-(1/\|a\|^2)aa'\},&\textrm{ if }a'x=b.
\end{cases}$$
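A direct sketch of this projection and of one element of its Clarke Jacobian (an illustrative instance; on the boundary $a'x=b$ either matrix is a valid choice):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5
a = rng.standard_normal(n)
b = 0.3

def proj_halfspace(x):
    # Pi_D for D = {x : a'x <= b}
    return x - (max(a @ x - b, 0.0) / (a @ a)) * a

def jac_halfspace(x):
    # One element of partial_C(Pi_D); at a'x = b both pieces are valid choices
    if a @ x > b:
        return np.eye(n) - np.outer(a, a) / (a @ a)
    return np.eye(n)

x = rng.standard_normal(n) + 2.0 * a      # a point (very likely) outside D
p = proj_halfspace(x)
feas = a @ p - b                          # <= 0 up to rounding
idem = np.linalg.norm(proj_halfspace(p) - p)   # projections are idempotent
```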
\subsubsection{Boxes}\label{ex:projBox}
Consider the box $D=\{x\ |\ \ell\leq x\leq u\}$, with $\ell_i\leq u_i$. We have
$$\Pi_D(x)=\min\{\max\{x,\ell\},u\}.$$
The corresponding indicator function $\delta_D$ is clearly separable, therefore (Prop.~\ref{prop:Separ}) every element $P\in \partial_B(\Pi_D)(x)$ is diagonal with
$$P_{ii}=\begin{cases}
1,&\textrm{ if } \ell_i< x_i< u_i,\\
0,&\textrm{ if } x_i<\ell_i\textrm{ or }x_i> u_i,\\
\in\{0,1\}, &\textrm{ if } x_i=\ell_i\textrm{ or }x_i= u_i.
\end{cases}$$
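The box projection and one diagonal element of $\partial_B(\Pi_D)$ can be sketched as follows (illustrative code; the strict inequalities below pick the value $0$ at active bounds, which is one of the valid choices):

```python
import numpy as np

l = -np.ones(6)
u = np.ones(6)

proj_box = lambda x: np.minimum(np.maximum(x, l), u)

def jac_box(x):
    # Diagonal element of partial_B(Pi_D): 1 on strictly inactive coordinates;
    # at x_i = l_i or x_i = u_i both 0 and 1 are valid, this choice picks 0
    return np.diag(((x > l) & (x < u)).astype(float))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0, 1.0])
p = proj_box(x)
d = np.diag(jac_box(x))
```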
\subsubsection{Unit simplex}\label{ex:projSimplex}
When $D=\left\{x\ |\ x\geq 0,\ \sum_{i=1}^nx_i=1\right\}$,
one can easily see, by writing down the optimality conditions for the corresponding projection problem, that
$$\Pi_D(x)=(x-\lambda\mathbf{1})_+,$$
where $\lambda$ solves $\mathbf{1}'(x-\lambda\mathbf{1})_+=1$. Since the unit simplex is a polyhedral set, we are dealing with a special case of Section~\ref{ex:projPoly}, where $A=\mathbf{1}_n'$, $b=1$, $C=-I_n$ and $d=0$. Therefore, to calculate an element of the generalized Jacobian of the projection, we first compute $\Pi_D(x)$ and then determine the set of active indices \mbox{$J=\{i\in[n]\ |\ (\Pi_D(x))_i=0\}$}. Let $n_J=|J|$ and $J_c=[n]\setminus J$. An element $P$ of $\partial_B(\Pi_D)(x)$ is given by
$$P_{ij}=\begin{cases}0,&\textrm{ if } i,j\in J\\
-1/(n-n_J),&\textrm{ if } i\neq j, i,j\in J_c,\\
1-1/(n-n_J),&\textrm{ if } i= j, i,j\in J_c.\end{cases}$$
Notice that $P$ is block-diagonal after a permutation of rows and columns.
The nonzero part $P_{J_cJ_c}$ is Toeplitz, so we can compute matrix vector products in $O(n_{J_c}\log n_{J_c})$ instead of $O(n_{J_c}^2)$ operations. Computing an element of the generalized Jacobian of the projection on \mbox{$D=\{x\ |\ a'x=b,\ \ell\leq x\leq u\}$} can be treated in a similar fashion.
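The simplex projection and the element $P$ described above can be sketched as follows ($\lambda$ is found by the standard sorting procedure; illustrative code, not the paper's implementation):

```python
import numpy as np

def proj_simplex(x):
    # Solve 1'(x - lam)_+ = 1 for lam by sorting (standard algorithm)
    u = np.sort(x)[::-1]
    css = np.cumsum(u) - 1.0
    k = np.nonzero(u - css / np.arange(1, x.size + 1) > 0)[0][-1]
    return np.maximum(x - css[k] / (k + 1.0), 0.0)

def jac_simplex(x):
    # One element of partial_B(Pi_D), built from the inactive set J_c
    free = proj_simplex(x) > 0
    nf = free.sum()                  # nf = n - n_J
    P = np.zeros((x.size, x.size))
    P[np.ix_(free, free)] = np.eye(nf) - np.ones((nf, nf)) / nf
    return P

rng = np.random.default_rng(7)
x = rng.standard_normal(8)
p = proj_simplex(x)
```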
\subsubsection{Euclidean unit ball}
If $g=\delta_{B_2}$, where $B_2$ is the Euclidean unit ball then
$$\Pi_{B_2}(x)=\begin{cases}
x/\|x\|_2,&\textrm{ if } \|x\|_2>1,\\
x, & \textrm{ otherwise }
\end{cases}$$
and
$$\partial_C(\Pi_{B_2})(x)=\begin{cases}
\{(1/\|x\|_2)(I-ww')\},&\textrm{ if } \|x\|_2>1,\\
\{I\},&\textrm{ if } \|x\|_2<1,\\
\mathop{\rm conv}\nolimits\{(1/\|x\|_2)(I-ww'),I\}, & \textrm{ otherwise,}
\end{cases}$$
where $w=x/\|x\|_2$.
Equality follows from the fact that $\Pi_{B_2}:\Re^n\to\Re^n$ is a piecewise smooth function.
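A finite-difference spot-check of this Jacobian formula (an illustrative sketch; note that $w=x/\|x\|_2$ is a unit vector, and the check is performed away from the sphere $\|x\|_2=1$, where $\partial_C$ is set-valued):

```python
import numpy as np

n = 5
proj_ball = lambda x: x / max(np.linalg.norm(x), 1.0)

def jac_ball(x):
    nx = np.linalg.norm(x)
    if nx <= 1.0:
        return np.eye(n)
    w = x / nx                               # unit vector w = x/||x||_2
    return (np.eye(n) - np.outer(w, w)) / nx

x = np.array([1.0, -2.0, 0.5, 0.0, 1.5])     # ||x||_2 > 1, away from the sphere
h = 1e-6
fd_err = max(np.linalg.norm((proj_ball(x + h * np.eye(n)[i]) - proj_ball(x)) / h
                            - jac_ball(x)[:, i]) for i in range(n))
```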
\subsubsection{Second-order cone}
Let $K=\{(x_0,\bar{x})\in\Re\times\Re^n\ |\ \|\bar{x}\|_2\leq x_0\}$ denote the second-order cone. Given a point $x=(x_0,\bar{x})\in\Re\times\Re^n$, each element $V\in\partial_B(\Pi_K)(x)$ has the following
representation~\cite[Lem.~2.6]{kanzow2009local}:
$$ V=0\textrm{ or } V=I_{n+1}\textrm{ or } V=\begin{bmatrix}1 & \bar{w}'\\ \bar{w}& H\end{bmatrix}, $$
for some vector $\bar{w}\in\Re^n$ with $\|\bar{w}\|_2=1$ and some matrix $H\in\Re^{n\times n}$ of the form
\begin{equation}
H=(1+\alpha)I_n-\alpha\bar{w}\bar{w}',\quad |\alpha|\leq 1. \label{eq:HSOC}
\end{equation}
More precisely:
\begin{enumerate}[(i)]
\item if $x_0\neq\pm\|\bar{x}\|_2$, then $\bar{w} = \bar{x}/\|\bar{x}\|,\ \alpha=x_0/\|\bar{x}\|,$
\item if $\bar{x}\neq 0$ and ${x}_0=+\|\bar{x}\|_2$, then $\bar{w} = \bar{x}/\|\bar{x}\|,\ \alpha=+1,$
\item if $\bar{x}\neq 0$ and ${x}_0=-\|\bar{x}\|_2$, then $\bar{w} = \bar{x}/\|\bar{x}\|,\ \alpha=-1,$
\item if $\bar{x}=0$ and $x_0=0$, then either $V=0$, or $V=I_{n+1}$, or $V$ is of the third form with $H$ as in~\eqref{eq:HSOC}
for any $\bar{w}$ with $\|\bar{w}\|=1$ and any $\alpha$ with $|\alpha|\leq 1$.
\end{enumerate}
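The representation can be checked numerically; the sketch below evaluates the third form of $V$ at a point with $0<|x_0|<\|\bar{x}\|_2$, where the projection is differentiable, and compares with a finite difference (the test point is arbitrary):

```python
import numpy as np

# Projection onto the second-order cone K = {(x0, xb) : ||xb|| <= x0}
def proj_soc(x):
    x0, xb = x[0], x[1:]
    nb = np.linalg.norm(xb)
    if x0 >= nb:
        return x.copy()
    if x0 <= -nb:
        return np.zeros_like(x)
    return 0.5 * (x0 + nb) * np.concatenate(([1.0], xb / nb))

def jac_soc(x):
    # third form of V: (1/2) [[1, w'], [w, (1+a) I - a w w']]
    x0, xb = x[0], x[1:]
    nb = np.linalg.norm(xb)
    w, a = xb / nb, x0 / nb
    H = (1 + a) * np.eye(xb.size) - a * np.outer(w, w)
    return 0.5 * np.block([[np.ones((1, 1)), w[None, :]],
                           [w[:, None], H]])

x = np.array([1.0, 3.0, 4.0])                  # |x0| = 1 < ||xb|| = 5
eps = 1e-6
J_fd = np.column_stack([(proj_soc(x + eps * e) - proj_soc(x)) / eps
                        for e in np.eye(x.size)])
print(np.max(np.abs(jac_soc(x) - J_fd)))       # agreement up to O(eps)
```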
\subsection{Vector norms}
\subsubsection{Euclidean norm}
If $g(x)=\|x\|_2$ then the proximal mapping is given by
$$\mathop{\rm prox}\nolimits_{\gamma g}(x)=\begin{cases}
(1-\gamma/\|x\|_2)x,&\textrm{ if } \|x\|_2\geq\gamma,\\
0,&\textrm{ otherwise}.\end{cases}$$
Since $\mathop{\rm prox}\nolimits_{\gamma g}$ is a $PC^1$ mapping, its $B$-subdifferential can be computed by simply computing the Jacobians of its smooth pieces. Specifically, we have
$$\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x)=\begin{cases}\left\{I-\gamma/\|x\|_2\left(I-ww'\right)\right\},&\textrm{ if } \|x\|_2>\gamma,\\
\{0\},&\textrm{ if } \|x\|_2<\gamma,\\
\left\{I-\gamma/\|x\|_2\left(I-ww'\right),0\right\},&\textrm{ otherwise},\end{cases}$$
where $w=x/\|x\|_2$.
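Since $\|\cdot\|_2$ is the support function of $B_2$, the formula above can be cross-checked against the Moreau decomposition $\mathop{\rm prox}\nolimits_{\gamma g}(x)=x-\gamma\Pi_{B_2}(x/\gamma)$; a small sketch with arbitrary data:

```python
import numpy as np

def prox_norm2(x, gamma):
    nx = np.linalg.norm(x)
    return (1 - gamma / nx) * x if nx >= gamma else np.zeros_like(x)

def proj_ball(x):
    nx = np.linalg.norm(x)
    return x / nx if nx > 1 else x

x, gamma = np.array([3.0, 4.0]), 2.0
print(prox_norm2(x, gamma))              # [1.8, 2.4]
print(x - gamma * proj_ball(x / gamma))  # identical, by Moreau decomposition
```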
\subsubsection{$\ell_1$ norm}\label{ex:EllOne}
The proximal mapping of $g(x)=\|x\|_1$ is the well known soft-thresholding operator
$$(\mathop{\rm prox}\nolimits_{\gamma g}(x))_i=\mathop{\rm sign}\nolimits(x_i)(|x_i|-\gamma)_+,\quad i\in[n].$$
Function $g$ is separable, therefore according to Proposition~\ref{prop:Separ} every element of $\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})$ is a diagonal matrix. The explicit form of the elements of $\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})$ is as follows.
Let $\alpha=\{i\ |\ |x_i|>\gamma\}$, $\beta=\{i\ |\ |x_i|=\gamma\}$, $\delta=\{i\ |\ |x_i|<\gamma\}$. Then
$P\in\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ if and only if $P$ is diagonal with elements
$$P_{ii}=\begin{cases}
1,&\textrm{ if } i\in\alpha,\\
\in\{0,1\},&\textrm{ if } i\in\beta,\\
0,&\textrm{ if } i\in\delta.
\end{cases}$$
We could also arrive at the same conclusion by applying Proposition~\ref{prop:MorDec} to the function of Section~\ref{ex:projBox} with $u=-\ell=\mathbf{1}_n$, since the $\ell_1$ norm is the conjugate of the indicator of the $\ell_\infty$-norm ball.
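The soft-thresholding operator and its diagonal $B$-subdifferential, as a sketch (at points with $|x_i|\neq\gamma$ for every $i$, so the element is unique):

```python
import numpy as np

# Soft-thresholding and, away from the thresholds |x_i| = gamma, the
# unique diagonal element of its B-subdifferential (sketch).
def soft_threshold(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def soft_threshold_jac_diag(x, gamma):
    return (np.abs(x) > gamma).astype(float)

x, gamma = np.array([3.0, -0.5, 1.2]), 1.0
print(soft_threshold(x, gamma))          # entries shrunk by gamma or zeroed
print(soft_threshold_jac_diag(x, gamma)) # [1, 0, 1]
```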
\subsubsection{Sum of norms} If $g(x)=\sum_{s\in\mathcal{S}}\|x_s\|_2$, where $\mathcal{S}$ is a partition of $[n]$, then
$$(\mathop{\rm prox}\nolimits_{\gamma g}(x))_s=\left(1-\frac{\gamma}{\|x_s\|_2}\right)_+x_s,$$
for all $s\in\mathcal{S}$. Any $P\in\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ is block diagonal with the $s$-th block equal to $I-\gamma/\|x_s\|_2\left(I-(1/\|x_s\|_2^2)x_sx_s'\right)$ if $\|x_s\|_2>\gamma$, equal to the zero matrix if $\|x_s\|_2<\gamma$, and equal to either of these two matrices if $\|x_s\|_2=\gamma$.
\subsection{Support function}\label{ex:SuppFun} Since $\sigma_C(x) = \sup_{y\in C}x'y$ is the conjugate of the indicator $\delta_C$, one can use Proposition~\ref{prop:MorDec} to find that
$$\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x) = \left\{P = I-Q : Q\in\partial_B(\Pi_C)(x/\gamma)\right\}.$$
Depending on the specific set $C$ (see Section~\ref{ex:IndFun}) one obtains the appropriate subdifferential. A particular example is the following.
\subsection{Pointwise maximum}
Function $g(x)=\max\{x_1,\ldots,x_n\}$ is conjugate to the indicator of the unit simplex already analyzed in Section~\ref{ex:projSimplex}. Applying Proposition~\ref{prop:MorDec} we obtain
$$\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x)=\{P=I-Q\ |\ Q\in\partial_B(\Pi_D)(x/\gamma)\}.$$
Then $\Pi_D(x/\gamma)=(x/\gamma-\lambda\mathbf{1})_+$ where $\lambda$ solves $\mathbf{1}'(x/\gamma-\lambda\mathbf{1})_+=1$.
Let $J=\{i\in[n]\ |\ (\Pi_D(x/\gamma))_i=0\}$, $n_J=|J|$ and $J_c=[n]\setminus J$. It follows that an element of $\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ is block-diagonal (after a reordering of variables) with
$$P_{ij}=\begin{cases}\delta_{ij},&\textrm{ if } i,j\in J,\\
1/(n-n_J),&\textrm{ if } i,j\in J_c.\end{cases}$$
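Combining the pieces, the prox of the max is obtained from the simplex projection by the Moreau decomposition, $\mathop{\rm prox}\nolimits_{\gamma g}(x)=x-\gamma\Pi_D(x/\gamma)$; the sort-based simplex projection below is a standard routine, used here only for illustration:

```python
import numpy as np

def proj_simplex(y):
    # Euclidean projection onto {u >= 0, sum(u) = 1} (standard sort-based method)
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    ks = np.arange(1, y.size + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    lam = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(y + lam, 0.0)

def prox_max(x, gamma):
    # Moreau decomposition: prox of max = x - gamma * projection onto simplex
    return x - gamma * proj_simplex(x / gamma)

x = np.array([3.0, 1.0])
print(proj_simplex(x))          # [1, 0]
print(prox_max(x, 1.0))         # [2, 1]: the largest entry is reduced
```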
\subsection{Spectral functions}
For any symmetric $n$ by $n$ matrix $X$, the eigenvalue function $\lambda:\mathbb S^n\to\Re^n$ returns the vector of its eigenvalues in nonincreasing order.
Now consider function $G:\mathbb S^n\to\bar{\Re}$
\begin{equation}\label{eq:SpecFun}
G(X)=h(\lambda(X)),\quad X\in\mathbb S^n,
\end{equation}
where $h:\Re^n\to\bar{\Re}$ is proper, closed, convex and symmetric, \emph{i.e.},~ invariant under coordinate permutations.
Functions of this form are called \emph{spectral functions}~\cite{lewis1996convex}. Being a spectral function, $G$ inherits most of the properties of $h$~\cite{lewis1996derivatives, lewis2001twice}. In particular, its proximal mapping is simply~\cite[Sec.~6.7]{parikh2013proximal}
$$\mathop{\rm prox}\nolimits_{\gamma G}(X)=Q\mathop{\rm diag}\nolimits(\mathop{\rm prox}\nolimits_{\gamma h}(\lambda(X)))Q',$$
where $X=Q\mathop{\rm diag}\nolimits(\lambda(X))Q'$ is the spectral decomposition of $X$ ($Q$ is an orthogonal matrix).
Next, we further assume that
\begin{equation}\label{eq:SymSep}
h(x)=g(x_1)+\cdots+g(x_N),
\end{equation}
where $g:\Re\to\bar{\Re}$.
Since $h$ is also separable we have that
$$\mathop{\rm prox}\nolimits_{\gamma h}(x)=(\mathop{\rm prox}\nolimits_{\gamma g}(x_1),\ldots,\mathop{\rm prox}\nolimits_{\gamma g}(x_N)),$$
therefore the proximal mapping of $G$ can be expressed as
\begin{equation}\label{eq:ProxSpec}
\mathop{\rm prox}\nolimits_{\gamma G}(X)=Q\mathop{\rm diag}\nolimits(\mathop{\rm prox}\nolimits_{\gamma g}(\lambda_1(X)),\ldots,\mathop{\rm prox}\nolimits_{\gamma g}(\lambda_n(X)))Q'.
\end{equation}
Functions of this form are called \emph{symmetric matrix-valued functions}
~\cite[Chap. V]{bhatia1997matrix},~\cite[Sec.~6.2]{horn1991topics}.
Now we can use the theory of nonsmooth symmetric matrix-valued functions developed in~\cite{chen2003analysis} to
analyze differentiability properties of $\mathop{\rm prox}\nolimits_{\gamma G}$. In particular $\mathop{\rm prox}\nolimits_{\gamma G}$ is (strongly)
semismooth at $X$ if and only if $\mathop{\rm prox}\nolimits_{\gamma g}$ is (strongly) semismooth at the eigenvalues of $X$~\cite[Prop.~4.10]{chen2003analysis}.
Moreover, for any $X\in\mathbb S^n$ and $P\in\partial_B(\mathop{\rm prox}\nolimits_{\gamma G})(X)$ we have~\cite[Lem.~4.7]{chen2003analysis}
\begin{equation}\label{eq:JacSpec}
P(S)=Q(\Omega\circ(Q'SQ))Q',\ \forall S\in\mathbb S^n,
\end{equation}
where $\circ$ denotes the Hadamard product and the matrix $\Omega\in\Re^{n\times n}$ is defined by
\begin{equation}\label{eq:GammaJac}
\Omega_{ij}=\begin{cases}
\frac{\mathop{\rm prox}\nolimits_{\gamma g}(\lambda_i)-\mathop{\rm prox}\nolimits_{\gamma g}(\lambda_j)}{\lambda_i-\lambda_j},&\textrm{ if } \lambda_i\neq\lambda_j,\\
\in\partial(\mathop{\rm prox}\nolimits_{\gamma g})(\lambda_i),&\textrm{ if } \lambda_i=\lambda_j.
\end{cases}
\end{equation}
\subsubsection{Indicator of the positive semidefinite cone}
The indicator of $\mathbb S_+^n$ can be expressed as in~\eqref{eq:SpecFun} with $h$ given by~\eqref{eq:SymSep} and $g=\delta_{\Re_+}$. Then $\mathop{\rm prox}\nolimits_{\gamma g}(x)=\Pi_{\Re_+}(x)=(x)_+$ and according to~\eqref{eq:ProxSpec} we have
$$\Pi_{\mathbb S^n_+}(X)=Q\mathop{\rm diag}\nolimits((\lambda_1)_+,\ldots,(\lambda_n)_+)Q'.$$
Let
$\alpha=\{i\ |\ \lambda_i>0\}$ and $\bar{\alpha}=[n]\setminus\alpha$. An element of $\partial_B\Pi_{\mathbb{S}_+^n}(X)$ is given by~\eqref{eq:JacSpec} with
$$\Omega =\begin{bmatrix} \Omega_{\alpha\alpha}& k_{\alpha\bar{\alpha}}\\ k_{\alpha\bar{\alpha}}'&0 \end{bmatrix},$$
where $\Omega_{\alpha\alpha}$ is a matrix of ones and $ k_{ij}=\frac{\lambda_i}{\lambda_i-\lambda_j},\ i\in\alpha,\ j\in\bar{\alpha}$. In fact we have $P(S)=H+H'$~\cite[Sec.~4]{zhao2010newton} where
$$H=Q_\alpha\left(\tfrac{1}{2}(UQ_\alpha)Q_\alpha'+( k_{\alpha\bar{\alpha}}\circ(UQ_{\bar{\alpha}}))Q_{\bar{\alpha}}'\right)$$
and $U=Q_{\alpha}'S$. Therefore we can form $P(S)$ in at most $8|\alpha|n^2$ flops. When $|\alpha|>|\bar{\alpha}|$, we can alternatively express $P(S)$ as $S-Q((E-\Omega)\circ(Q'SQ))Q'$, where $E$ is a matrix of all ones, and compute it in $8|\bar{\alpha}|n^2$ flops.
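The formulas of this subsection are easy to verify numerically; the sketch below evaluates $P(S)$ through~\eqref{eq:JacSpec} at a matrix with distinct nonzero eigenvalues (a point where the projection is differentiable) and compares with a finite difference:

```python
import numpy as np

# Sketch: directional derivative P(S) of the projection onto the PSD cone.
def proj_psd(X):
    lam, Q = np.linalg.eigh(X)
    return (Q * np.maximum(lam, 0.0)) @ Q.T

def psd_proj_dderiv(X, S):
    lam, Q = np.linalg.eigh(X)
    n = lam.size
    Om = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if lam[i] != lam[j]:
                Om[i, j] = (max(lam[i], 0.0) - max(lam[j], 0.0)) / (lam[i] - lam[j])
            else:
                Om[i, j] = 1.0 if lam[i] > 0 else 0.0   # element of d(x)_+
    return Q @ (Om * (Q.T @ S @ Q)) @ Q.T

X = np.diag([2.0, -1.0])                       # distinct nonzero eigenvalues
S = np.array([[0.3, -0.7], [-0.7, 1.1]])       # arbitrary symmetric direction
t = 1e-6
fd = (proj_psd(X + t * S) - proj_psd(X)) / t
print(np.max(np.abs(psd_proj_dderiv(X, S) - fd)))   # agreement up to O(t)
```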
\subsection{Orthogonally invariant functions}
A function $G:\Re^{m\times n}\to\bar{\Re}$ is called \emph{orthogonally invariant} if
$$G(UXV')=G(X),$$
for all $X\in\Re^{m\times n}$ and all orthogonal matrices $U\in\Re^{m\times m}$, $V\in\Re^{n\times n}$. When the elements of $X$ are allowed to be complex numbers then functions of this form are called \emph{unitarily invariant}~\cite{lewis1995convex}.
A function $h:\Re^{q}\to\bar{\Re}$ is \emph{absolutely symmetric} if $h(Qx)=h(x)$ for all $x\in\Re^q$ and any generalized permutation matrix $Q$, \emph{i.e.},~ a matrix $Q\in\Re^{q\times q}$ that has exactly one nonzero entry in each row and each column, that entry being $\pm 1$~\cite{lewis1995convex}.
There is a one-to-one correspondence between orthogonally invariant functions on $\Re^{m\times n}$ and absolutely symmetric functions on $\Re^q$. Specifically if $G$ is orthogonally invariant then
$$G(X)=h(\sigma(X)),$$
for the absolutely symmetric function $h(x)=G(\mathop{\rm diag}\nolimits(x))$. Here, for $X\in\Re^{m\times n}$, the singular value function
$\sigma:\Re^{m\times n}\to\Re^q$, $q=\min\{m,n\}$, returns the vector of singular values of $X$ in nonincreasing order.
Conversely, if $h$ is absolutely symmetric then $G(X)= h(\sigma(X))$ is orthogonally invariant.
Therefore, convex-analytic and generalized differentiability properties of orthogonally invariant functions can be
easily derived from those of the corresponding absolutely symmetric functions~\cite{lewis1995convex}.
For example, assuming for simplicity that $m\leq n$, the proximal mapping of $G$ is given by
(see e.g. \cite[Sec.~6.7]{parikh2013proximal})
$$\mathop{\rm prox}\nolimits_{\gamma G}(X)=U\mathop{\rm diag}\nolimits(\mathop{\rm prox}\nolimits_{\gamma h}(\sigma(X)))V_1',$$
where $X=U\begin{bmatrix}\mathop{\rm diag}\nolimits(\sigma(X)),&0\end{bmatrix}\begin{bmatrix}V_1,& V_2\end{bmatrix}'$ is the singular value decomposition of $X$.
If we further assume that $h$ is separable as in~\eqref{eq:SymSep} then
\begin{equation}\label{eq:ProxSpec2}
\mathop{\rm prox}\nolimits_{\gamma G}(X)=U\Sigma_g(X)V_1',
\end{equation}
where $\Sigma_g(X)=\mathop{\rm diag}\nolimits(\mathop{\rm prox}\nolimits_{\gamma g}(\sigma_1(X)),\ldots,\mathop{\rm prox}\nolimits_{\gamma g}(\sigma_m(X)))$.
Functions of this form are called \emph{nonsymmetric matrix-valued functions}. We also assume that $g$ is a non-negative function such that $g(0)=0$.
This implies that $\mathop{\rm prox}\nolimits_{\gamma g}(0)=0$ and guarantees that the nonsymmetric matrix-valued function~\eqref{eq:ProxSpec2} is well-defined~\cite[Prop.~2.1.1]{yang2009study}. Now we can use the results of~\cite[Ch. 2]{yang2009study} to draw conclusions about generalized differentiability properties of $\mathop{\rm prox}\nolimits_{\gamma G}$.
For example, through~\cite[Th. 2.27]{yang2009study} we have that
$\mathop{\rm prox}\nolimits_{\gamma G}$ is continuously differentiable at $X$ if and only if $\mathop{\rm prox}\nolimits_{\gamma g}$ is continuously differentiable at the singular values of $X$. Furthermore, $\mathop{\rm prox}\nolimits_{\gamma G}$ is (strongly) semismooth at $X$ if $\mathop{\rm prox}\nolimits_{\gamma g}$ is (strongly) semismooth at the singular values of $X$ \cite[Th. 2.3.11]{yang2009study}.
For any $X\in\Re^{m\times n}$ the generalized Jacobian $\partial_B(\mathop{\rm prox}\nolimits_{\gamma G}) (X)$ is well defined and nonempty and
any $P\in\partial_B(\mathop{\rm prox}\nolimits_{\gamma G})(X)$ acts on $H\in\Re^{m\times n}$ as~\cite[Prop.~2.3.7]{yang2009study}
\begin{equation}\label{eq:JacNS}
P(H)=U\begin{bmatrix}\left(\Omega_{1}\circ\left(\frac{H_1+H_1'}{2}\right)+\Omega_{2}\circ\left(\frac{H_1-H_1'}{2}\right)\right),&(\Omega_{3}\circ H_2)\end{bmatrix}\begin{bmatrix}V_1,&V_2\end{bmatrix}'
\end{equation}
where $H_1=U'HV_1\in\Re^{m\times m}$, $H_2=U'HV_2\in\Re^{m\times(n-m)}$
and $\Omega_{1}\in\Re^{m\times m }$, $\Omega_{2}\in\Re^{m\times m }$, $\Omega_{3}\in\Re^{m\times (n-m) }$ are given by
\begin{align*}
(\Omega_{1})_{ij}&=\begin{cases}
\frac{\mathop{\rm prox}\nolimits_{\gamma g}(\sigma_i)-\mathop{\rm prox}\nolimits_{\gamma g}(\sigma_j)}{\sigma_i-\sigma_j},&\textrm{ if } \sigma_i\neq\sigma_j,\\
\in\partial \mathop{\rm prox}\nolimits_{\gamma g}(\sigma_i),&\textrm{ if }\sigma_i=\sigma_j,
\end{cases}\\
(\Omega_{2})_{ij}&=\begin{cases}
\frac{\mathop{\rm prox}\nolimits_{\gamma g}(\sigma_i)-\mathop{\rm prox}\nolimits_{\gamma g}(-\sigma_j)}{\sigma_i+\sigma_j},&\textrm{ if } \sigma_i\neq -\sigma_j,\\
\in\partial \mathop{\rm prox}\nolimits_{\gamma g}(0),&\textrm{ if }\sigma_i=\sigma_j=0,
\end{cases}\\
(\Omega_{3})_{ij}&=\begin{cases}
\frac{\mathop{\rm prox}\nolimits_{\gamma g}(\sigma_i)}{\sigma_i},&\textrm{ if } \sigma_i\neq 0,\\
\in\partial \mathop{\rm prox}\nolimits_{\gamma g}(0),&\textrm{ if }\sigma_i=0.
\end{cases}
\end{align*}
\subsubsection{Nuclear norm} For an $m$ by $n$ matrix $X$ the nuclear norm, $G(X)=\|X\|_{*}$, is the sum of its singular values, \emph{i.e.},~ $G(X)=\sum_{i=1}^m\sigma_i(X)$
(we are again assuming, for simplicity, that $m\leq n$). The nuclear norm serves as a convex surrogate for the rank of a matrix.
It has found many applications in systems and control theory, including system identification and model reduction~\cite{fazel2001rank,fazel2002matrix,fazel2004rank, liu2009interior,recht2010guaranteed}.
Other fields of application include \emph{matrix completion problems} arising in machine learning~\cite{srebro2004learning,rennie2005fast}
and computer vision~\cite{tomasi1992shape,morita1997sequential}, and \emph{nonnegative matrix factorization problems} arising in data mining~\cite{elden2007matrix}.
The nuclear norm can be expressed as $G(X)=h(\sigma(X))$, where $h(x)=\|x\|_1$.
Clearly, $h$ is absolutely symmetric and separable. Specifically, it takes the form~\eqref{eq:SymSep} with $g=|\cdot|$, for which $0\in\mathop{\rm dom}\nolimits g$ and $0\in\partial g(0)$.
The proximal mapping of the absolute value is the soft-thresholding operator.
In fact, since the case of interest here is $x\geq 0$ (because $\sigma_i(X)\geq 0$), we have $\mathop{\rm prox}\nolimits_{\gamma g}(x)=(x-\gamma)_+$. Consequently, the proximal mapping of
$\|X\|_*$ is given by~\eqref{eq:ProxSpec2} with
$$\Sigma_g(X)=\mathop{\rm diag}\nolimits((\sigma_1(X)-\gamma)_+,\ldots,(\sigma_m(X)-\gamma)_+).$$
For $x\in\Re_+$ we have that
\begin{equation}\label{eq:subSoft}
\partial(\mathop{\rm prox}\nolimits_{\gamma g})(x)=\begin{cases}
0,&\textrm{ if } 0\leq x<\gamma,\\
[0,1],&\textrm{ if } x=\gamma,\\
1,&\textrm{ if } x>\gamma.
\end{cases}
\end{equation}
Let $\alpha=\{i\ |\ \sigma_i(X)>\gamma\}$, $\beta=\{i\ |\ \sigma_i(X)=\gamma\}$ and $\delta=\{i\ |\ \sigma_i(X)<\gamma\}$.
Taking into account~\eqref{eq:subSoft}, an element $P$ of the $B$-subdifferential $\partial_B(\mathop{\rm prox}\nolimits_{\gamma G})(X)$ satisfies~\eqref{eq:JacNS} with
\begin{align*}
\Omega_1&=
\begin{bmatrix}\omega_{\alpha\alpha}^1&\omega_{\alpha\beta}^1&\omega_{\alpha\delta}^1\\
(\omega_{\alpha\beta}^1)'&\omega_{\beta\beta}^1&0\\
(\omega_{\alpha\delta}^1)'&0&0\end{bmatrix},
\quad&
\begin{array}{ll}\omega^1_{ij}=1, &i\in\alpha, j\in\alpha\cup\beta,\\
\omega^1_{ij}=\frac{\sigma_i(X)-\gamma}{\sigma_i(X)-\sigma_j(X)},& i\in\alpha, j\in\delta,\\
\omega_{ij}^1=\omega_{ji}^1\in[0,1],&i,j\in\beta\end{array}\\
\Omega_2&=
\begin{bmatrix}\omega_{\alpha\alpha}^2&\omega_{\alpha\beta}^2&\omega_{\alpha\delta}^2\\
(\omega_{\alpha\beta}^2)'&0&0\\
(\omega_{\alpha\delta}^2)'&0&0\end{bmatrix},
\quad&
\begin{array}{ll}\\
\omega^2_{ij}=\frac{(\sigma_i(X)-\gamma)_++(\sigma_j(X)-\gamma)_+}{\sigma_i(X)+\sigma_j(X)},& i\in\alpha, j\in [m],\\
\\\end{array}\\
\Omega_3&=
\begin{bmatrix}\omega_{\alpha [n-m]}^3\\
0\end{bmatrix},
\quad&
\begin{array}{ll}\\
\omega^3_{ij}=\frac{\sigma_i(X)-\gamma}{\sigma_i(X)},& i\in\alpha, j\in [n-m].\\%\bar{\beta}=\{1,\ldots,n-m\}.\\
\\
\end{array}
\end{align*}
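Putting the pieces together, the prox of the nuclear norm is singular value soft-thresholding, cf.~\eqref{eq:ProxSpec2}; a minimal sketch:

```python
import numpy as np

def prox_nuclear(X, gamma):
    # singular value soft-thresholding: U diag((sigma_i - gamma)_+) V'
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - gamma, 0.0)) @ Vt

X = np.diag([3.0, 1.0])
print(prox_nuclear(X, 2.0))      # diag(1, 0): small singular values vanish
```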
\section{Forward-backward envelope}\label{sec:FBE}
In the following we indicate by $X_\star$ and $F_\star$, respectively, the
set of solutions of problem~\eqref{eq:GenProb} and its optimal objective value.
Forward-backward splitting for solving~\eqref{eq:GenProb} relies on computing,
at every iteration, the following update
\begin{equation}\label{eq:FBS}
x^{k+1} = \mathop{\rm prox}\nolimits_{\gamma g}(x^k-\gamma\nabla f(x^k)),
\end{equation}
where the \emph{proximal mapping}~\cite{moreau1965proximiteet} of $g$
is defined by
\begin{equation}\label{eq:prox}
\mathop{\rm prox}\nolimits_{\gamma g}(x) \triangleq\operatornamewithlimits{argmin}_u\left\{g(u)+\tfrac{1}{2\gamma}\|u-x\|^2\right\}.
\end{equation}
The value function of the optimization problem~\eqref{eq:prox} defining the proximal mapping
is called the \emph{Moreau envelope} and is denoted by $g^\gamma$, \emph{i.e.},~
\begin{equation}\label{eq:MoreauEnv}
g^{\gamma}(x) \triangleq\inf_u\left\{g(u)+\tfrac{1}{2\gamma}\|u-x\|^2\right\}.
\end{equation}
Properties of the Moreau envelope and the proximal mapping are well documented in the literature \cite{bauschke2011convex,rockafellar2011variational,combettes2005signal,combettes2011proximal}.
For example, the proximal mapping is single-valued, continuous and nonexpansive (Lipschitz continuous with constant $1$)
and the envelope function $g^{\gamma}$ is convex, continuously differentiable, with $\gamma^{-1}$-Lipschitz continuous gradient
\begin{equation}\label{eq:nabla_e}
\nabla g^{\gamma}(x)=\gamma^{-1}(x-\mathop{\rm prox}\nolimits_{\gamma g}(x)).
\end{equation}
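The gradient formula~\eqref{eq:nabla_e} is easy to confirm numerically for, say, $g=\|\cdot\|_1$ (a sketch; the test point is arbitrary and away from the kinks of the prox):

```python
import numpy as np

def prox_l1(x, gamma):
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def moreau_l1(x, gamma):
    # Moreau envelope: the infimum is attained at the prox
    p = prox_l1(x, gamma)
    return np.abs(p).sum() + np.sum((p - x) ** 2) / (2 * gamma)

x, gamma, eps = np.array([1.5, -0.3, 0.8]), 0.5, 1e-6
grad = (x - prox_l1(x, gamma)) / gamma            # formula for grad of envelope
grad_fd = np.array([(moreau_l1(x + eps * e, gamma) - moreau_l1(x, gamma)) / eps
                    for e in np.eye(x.size)])
print(np.max(np.abs(grad - grad_fd)))             # agreement up to O(eps)
```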
We will next proceed to the reformulation of~\eqref{eq:GenProb} as the minimization of an unconstrained continuously differentiable function.
It is well known \cite{bauschke2011convex} that an optimality condition
for~\eqref{eq:GenProb} is
\begin{equation}\label{eq:OptCond}
x=\mathop{\rm prox}\nolimits_{\gamma g}(x-\gamma\nabla f(x)).
\end{equation}
Since $f\in\mathcal{S}_{\mu_f,L_f}^{2,1}(\Re^n)$, we have that $\|\nabla^2 f(x)\|\leq L_f$ \cite[Lem. 1.2.2]{nesterov2003introductory}, therefore $I-\gamma\nabla^2 f(x)$ is symmetric
and positive definite whenever $\gamma\in(0,1/L_f)$.
Premultiplying both sides of~\eqref{eq:OptCond} by $\gamma^{-1}(I-\gamma\nabla^2 f(x))$, $\gamma\in(0,1/L_f)$,
one obtains the equivalent condition
\begin{equation}\label{eq:OptCondScaled}
\gamma^{-1}(I-\gamma\nabla^2f(x))(x-\mathop{\rm prox}\nolimits_{\gamma g}(x-\gamma\nabla f(x))) = 0.
\end{equation}
The left-hand side of equation~\eqref{eq:OptCondScaled} is the gradient of the function that we call \emph{forward-backward envelope},
indicated by $F_\gamma$. Using~\eqref{eq:nabla_e} to integrate~\eqref{eq:OptCondScaled}, one obtains
the following definition.
\begin{definition}
Let $F(x) = f(x)+g(x)$, where $f\in\mathcal{S}_{\mu_f,L_f}^{2,1}(\Re^n)$,
$g\in\mathcal{S}^0(\Re^n)$. The forward-backward envelope of $F$ is given by
\begin{equation}\label{eq:Penalty}
F_\gamma(x)\triangleq f(x)-\tfrac{\gamma}{2}||\nabla f(x)||_2^2+g^{\gamma}(x-\gamma\nabla f(x)).
\end{equation}
\end{definition}
Alternatively, one can express $F_\gamma$ as the value function of the minimization
problem that yields forward-backward splitting. In fact
\begin{subequations}
\begin{align}
F_\gamma(x)&=\min_{u\in\Re^n}\left\{f(x)+\nabla f(x)'(u-x)+g(u)+\tfrac{1}{2\gamma}\|u-x\|^2\right\}\label{eq:Fmin}\\
&=f(x)+g(P_{\gamma}(x))-\gamma\nabla f(x)'G_{\gamma}(x)+\tfrac{\gamma}{2}\|G_{\gamma}(x)\|^2,
\end{align}
\end{subequations}
where
\begin{align*}
P_{\gamma}(x)&\triangleq\mathop{\rm prox}\nolimits_{\gamma g}(x-\gamma\nabla f(x)),\\
G_{\gamma}(x)&\triangleq\gamma^{-1}(x-P_{\gamma}(x)).
\end{align*}
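For a concrete instance (a lasso-type problem with random data, assumed only for illustration), one can verify that definition~\eqref{eq:Penalty} and the value-function form give the same number, and that $F_\gamma\leq F$:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 5)); b = rng.standard_normal(10)
L_f = np.linalg.norm(A, 2) ** 2
gamma = 0.9 / L_f                                   # gamma < 1/L_f

f      = lambda x: 0.5 * np.sum((A @ x - b) ** 2)   # f(x) = 0.5||Ax-b||^2
gradf  = lambda x: A.T @ (A @ x - b)
prox_g = lambda y, t: np.sign(y) * np.maximum(np.abs(y) - t, 0.0)  # g = ||.||_1

def moreau_g(y, t):
    p = prox_g(y, t)
    return np.abs(p).sum() + np.sum((p - y) ** 2) / (2 * t)

def F_gamma(x):                                     # envelope definition
    return f(x) - 0.5 * gamma * np.sum(gradf(x) ** 2) \
           + moreau_g(x - gamma * gradf(x), gamma)

x = rng.standard_normal(5)
P = prox_g(x - gamma * gradf(x), gamma)
G = (x - P) / gamma
value_fn = f(x) + np.abs(P).sum() - gamma * gradf(x) @ G + 0.5 * gamma * G @ G
print(abs(F_gamma(x) - value_fn))                   # the two forms coincide
print(F_gamma(x) <= f(x) + np.abs(x).sum())         # F_gamma <= F
```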
One distinctive feature of $F_\gamma$ is the fact that it is real-valued despite the fact that $F$ can be extended-real-valued.
In addition, $F_\gamma$ enjoys favorable properties, summarized in the next theorem.
\begin{theorem}\label{Th:PropFg}
The following properties of $F_\gamma$ hold:
\begin{enumerate}[\rm (i)]
\item\label{prop:DerPen} $F_\gamma$ is continuously differentiable with
\begin{equation}\label{eq:DerPen}
\nabla F_\gamma(x)=\left(I-\gamma\nabla^2 f(x)\right)G_{\gamma}(x).
\end{equation}
If $\gamma\in (0,1/L_f)$ then the set of stationary points of $F_\gamma$ equals $X_\star$.
\item\label{prop:UppBnd} For any $x\in\Re^n$, $\gamma>0$
\begin{equation}\label{eq:UppBnd}
F_\gamma(x)\leq F(x)-\tfrac{\gamma}{2}\|G_{\gamma}(x)\|^2.
\end{equation}
\item\label{prop:LowBnd} For any $x\in\Re^n$, $\gamma>0$
\begin{equation}\label{eq:LowBnd}
F(P_{\gamma}(x))\leq F_\gamma(x)-\tfrac{\gamma}{2} \left(1-{\gamma}L_f\right)\|G_{\gamma}(x)\|^2.
\end{equation}
In particular, if $\gamma\in\left(0,1/L_f\right]$ then
\begin{equation}\label{eq:LowBnd4Gamma}
F(P_{\gamma}(x))\leq F_\gamma(x).
\end{equation}
\item\label{cor:Equivalence} If $\gamma\in (0,1/L_f)$ then $X_\star=\operatornamewithlimits{argmin} F_\gamma$.
\end{enumerate}
\end{theorem}
\begin{proof}
Part (i) has already been proven.
Regarding (ii), from the optimality condition for the problem defining the proximal mapping we have
\[
G_{\gamma}(x)-\nabla f(x)\in\partial g(P_{\gamma}(x)),
\]
\emph{i.e.},~ $G_{\gamma}(x)-\nabla f(x)$ is a subgradient of $g$ at $P_{\gamma}(x)$. From the subgradient inequality
\begin{align*}
g(x)&\geq g(P_{\gamma}(x))+(G_{\gamma}(x)-\nabla f(x))'(x-P_{\gamma}(x))\\
&=g(P_{\gamma}(x))-\gamma\nabla f(x)'G_{\gamma}(x)+\gamma\|G_{\gamma}(x)\|^2
\end{align*}
Adding $f(x)$ to both sides proves the claim.
For part (iii), we have
\begin{align*}
F_\gamma (x)&=f(x)+\nabla f(x)'(P_{\gamma}(x)-x)+g(P_{\gamma}(x)){+}\tfrac{\gamma}{2}\|G_{\gamma}(x)\|^2\\
&\geq f(P_{\gamma}(x))+g(P_{\gamma}(x))-\tfrac{L_f}{2}\|P_{\gamma}(x)-x\|^2+\tfrac{\gamma}{2}\|G_{\gamma}(x)\|^2.
\end{align*}
where the inequality follows by Lipschitz continuity of $\nabla f$ and the descent lemma, see e.g.~\cite[Prop. A.24]{bertsekas1999nonlinear}. Since $\|P_{\gamma}(x)-x\|=\gamma\|G_{\gamma}(x)\|$, the right-hand side equals $F(P_{\gamma}(x))+\tfrac{\gamma}{2}(1-\gamma L_f)\|G_{\gamma}(x)\|^2$, proving~\eqref{eq:LowBnd}. For part (iv), putting $x_\star\in X_\star$ in~\eqref{eq:UppBnd} and~\eqref{eq:LowBnd} and using $x_\star=P_\gamma(x_\star)$ we obtain $F(x_\star)=F_\gamma(x_\star)$. Now, for any $x\in\Re^n$ we have $F_\gamma(x_\star)=F(x_\star)\leq F(P_\gamma(x))\leq F_\gamma(x)$, where the first inequality follows by optimality of $x_\star$ for $F$, while the second inequality follows by~\eqref{eq:LowBnd}. This shows that every $x_\star\in X_\star$ is also a (global) minimizer of $F_\gamma$. The proof finishes by recalling that the set of minimizers of $F_\gamma$ is a subset of the set of its stationary points, which by (i) equals $X_\star$.
\iftoggle{svver}{\qed}{}
\end{proof}
Parts (i) and (iv) of Theorem~\ref{Th:PropFg} show that if $\gamma\in (0,1/L_f)$, the nonsmooth problem~\eqref{eq:GenProb} is completely equivalent to the unconstrained minimization of the continuously differentiable function $F_\gamma$, in the sense that the sets of minimizers and optimal values are equal. In other words we have
$$\operatornamewithlimits{argmin} F=\operatornamewithlimits{argmin} F_\gamma,\qquad \inf F = \inf F_\gamma.$$
Part (ii) shows that an $\epsilon$-optimal solution $x$ of $F$ is automatically $\epsilon$-optimal for $F_\gamma$, while part (iii) implies that from an $\epsilon$-optimal solution for $F_\gamma$ we can directly obtain an $\epsilon$-optimal solution for $F$, provided that $\gamma$ is chosen sufficiently small, \emph{i.e.},~
\begin{align*}
F(x)-F_\star&\leq\epsilon\implies F_\gamma(x)-F_\star\leq\epsilon,\\
F_\gamma(x)-F_\star&\leq\epsilon\implies F(P_\gamma(x))-F_\star\leq\epsilon.
\end{align*}
Notice that part (iv) of Theorem~\ref{Th:PropFg} states that if $\gamma\in (0,1/L_f)$, then not only does the set of stationary points of $F_\gamma$ agree with $X_\star$ (cf. Theorem~\ref{Th:PropFg}(\ref{prop:DerPen})), but so does its set of minimizers; \emph{i.e.},~ although $F_\gamma$ may not be convex, its set of stationary points turns out to be equal to the set of its minimizers. However, in the particular but important case where $f$ is convex quadratic, the FBE is convex with Lipschitz continuous gradient, as the following theorem shows.
\begin{theorem}\label{th:ProxPropQuad}
If $f(x)=\tfrac{1}{2}x'Qx+q'x$ and $\gamma\in(0,1/L_f)$, then $F_\gamma\in\mathcal{S}^{1,1}_{\mu_{F_\gamma},L_{F_\gamma}}(\Re^n)$, where
\begin{subequations}
\begin{align}
L_{F_\gamma}&=2(1-\gamma\mu_f)/\gamma,\label{eq:LipF}\\
\mu_{F_\gamma}&=\min\{(1-\gamma\mu_f)\mu_f,(1-\gamma L_f)L_f\}\label{eq:muF}
\end{align}
\end{subequations}
and $\mu_f=\lambda_{\min}(Q)\geq 0$, $L_f=\lambda_{\max}(Q)$.
\end{theorem}
\begin{proof}
Let
\begin{align*}
\psi_1(x) & \triangleq f(x)-(\gamma/2)\|\nabla f(x)\|^2=(1/2)x'Q(I-\gamma Q)x+q'(I-\gamma Q)x-(\gamma/2)q'q,\\
\psi_2(x) & \triangleq g^\gamma(x-\gamma\nabla f(x)).
\end{align*}
Due to Lemma~\ref{lem:eigen} (in the Appendix), $\psi_1$ is strongly convex with modulus $\mu_{F_\gamma}$.
Function $\psi_2(x)$ is convex, as the composition of the convex function $g^\gamma$
with the linear mapping $x-\gamma\nabla f(x)$.
Therefore, $F_\gamma(x)=\psi_1(x)+\psi_2(x)$ is strongly convex with convexity
parameter $\mu_{F_\gamma}$.
On the other hand, for every $x_1,x_2\in\Re^n$
\begin{align*}
\|\nabla F_\gamma(x_1)-\nabla F_\gamma(x_2)\| &\leq\|I-\gamma Q\|\|G_{\gamma}(x_1)-G_{\gamma}(x_2)\|\\
&\leq (2(1-\gamma\mu_f)/\gamma)\|x_1-x_2\|,
\end{align*}
where the second inequality is due to Lemma~\ref{le:zNonExp} in the Appendix.
\iftoggle{svver}{\qed}{}
\end{proof}
Notice that if $\mu_f>0$ and we choose $\gamma=1/(L_f+\mu_f)$, then $L_{F_\gamma}=2L_f$ and $\mu_{F_\gamma}=L_f\mu_f/(L_f+\mu_f)$, so $L_{F_\gamma}/\mu_{F_\gamma}=2(L_f/\mu_f+1)$. In other words the condition number of $F_\gamma$ is roughly double compared to that of $f$.
\subsection{Interpretations}
It is apparent from~\eqref{eq:FBS} and~\eqref{eq:OptCond} that FBS is a Picard
iteration for computing a fixed point of the nonexpansive mapping $P_\gamma$.
It is well known that fixed-point iterations may exhibit slow asymptotic
convergence. On the other hand, Newton methods achieve much faster asymptotic
convergence rates. However, in order to devise globally convergent Newton-like
methods one needs a merit function on which to perform a line search, in order
to determine a step size that guarantees sufficient decrease and damps the
Newton steps when far from the solution. This is exactly the role that FBE plays
in this paper.
Another interesting observation is that the FBE provides a link between gradient methods and FBS, just like
the Moreau envelope~\eqref{eq:MoreauEnv} does for the proximal point algorithm~\cite{rockafellar1976monotone}.
To see this, consider the problem
\begin{equation}\label{eq:NSprob}
\minimize\ g(x)
\end{equation}
where $g\in\mathcal{S}^0(\Re^n)$. The proximal point algorithm for
solving~\eqref{eq:NSprob} is
\begin{equation}\label{eq:ProxMin}
x^{k+1}=\mathop{\rm prox}\nolimits_{\gamma g}(x^k).
\end{equation}
It is well known that the proximal point algorithm can be interpreted as
a gradient method for minimizing the Moreau envelope of $g$,
cf.~\eqref{eq:MoreauEnv}. Indeed, due to~\eqref{eq:nabla_e},
iteration~\eqref{eq:ProxMin} can be expressed as
$$x^{k+1}=x^k-\gamma\nabla g^\gamma(x^k).$$
This simple idea provides a link between nonsmooth and smooth optimization and
has led to the discovery of a variety of algorithms for problem~\eqref{eq:NSprob},
such as semismooth Newton methods~\cite{fukushima1996globally},
variable-metric~\cite{bonnans1995family} and quasi-Newton methods~\cite{mifflin1998quasi},
and trust-region methods~\cite{sagara2005trust}, to name a few.
However, when dealing with composite problems, even if $\mathop{\rm prox}\nolimits_{\gamma g}$ and $g^\gamma$
are cheaply computable, computing proximal mapping and Moreau envelope of
$(f+g)$ is usually as hard as solving~\eqref{eq:GenProb} itself.
On the other hand, forward-backward splitting takes advantage of the
structure of the problem
by operating separately on the two summands, cf.~\eqref{eq:FBS}.
The question that naturally arises is the following:
\begin{quote}
\emph{Is there a continuously differentiable function that provides an
interpretation of FBS as a gradient method, just like the Moreau envelope does
for the proximal point algorithm and problem~\eqref{eq:NSprob}?}
\end{quote}
The forward-backward envelope provides an affirmative answer. Specifically, FBS
can be interpreted as the following (variable metric) gradient method on the FBE:
$$x^{k+1}=x^k-\gamma(I-\gamma\nabla^2 f(x^k))^{-1}\nabla F_\gamma(x^k).$$
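For quadratic $f$ this identity is exact and can be checked in a few lines (a sketch with arbitrary random data and $g=\|\cdot\|_1$):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
B = rng.standard_normal((n, n)); Q = B.T @ B       # f(x) = 0.5 x'Qx + q'x
q = rng.standard_normal(n)
gamma = 0.5 / np.linalg.eigvalsh(Q).max()
prox = lambda y: np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)  # g = ||.||_1

x = rng.standard_normal(n)
P = prox(x - gamma * (Q @ x + q))                  # forward-backward step
grad_F = (np.eye(n) - gamma * Q) @ ((x - P) / gamma)
step = x - gamma * np.linalg.solve(np.eye(n) - gamma * Q, grad_F)
print(np.max(np.abs(step - P)))                    # ~ machine precision
```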
Furthermore, the following properties holding for $g^\gamma$
\begin{equation*}
g^{\gamma} \leq g,\quad\inf g^{\gamma} = \inf g,\quad\operatornamewithlimits{argmin} g^{\gamma} = \operatornamewithlimits{argmin} g.
\end{equation*}
correspond to Theorem~\ref{Th:PropFg}(\ref{prop:LowBnd}) and Theorem~\ref{Th:PropFg}(\ref{cor:Equivalence})
for the FBE.
The relationship between Moreau envelope and forward-backward envelope is then apparent.
This opens the possibility of extending FBS and devising new
algorithms for problem~\eqref{eq:GenProb} by simply reconsidering
and appropriately adjusting methods for unconstrained minimization of
continuously differentiable functions, the most well studied problem in
optimization. In this work we exploit one of the numerous alternatives, by
devising Newton-like algorithms that are able to achieve fast asymptotic
convergence rates.
The next section deals with the other obstacle that needs to be overcome,
\emph{i.e.},~ constructing a second-order expansion for the $\mathcal{C}^1$
(but not $\mathcal{C}^2$) function $F_\gamma$ around any optimal solution,
that behaves similarly to the Hessian for $\mathcal{C}^2$ functions and allows
us to devise algorithms with fast local convergence.
\section{Forward-Backward Newton-CG Methods}\label{sec:FBNCG}
Having established the equivalence between minimizing $F$ and $F_\gamma$, as well as a LNA for $\nabla F_\gamma$, it is now very easy to design globally convergent Newton-like algorithms with fast asymptotic convergence rates for computing an $x_\star\in X_\star$. Algorithm~\ref{al:PNM} is a standard line-search method for minimizing $F_\gamma$, where a conjugate gradient method is employed to solve (approximately) the corresponding regularized Newton system. Therefore our algorithm does not require forming an element of the generalized Hessian of $F_\gamma$ explicitly. It only requires the computation of the corresponding matrix-vector product and is thus suitable for large-scale problems. Similarly, there is no need to form explicitly the Hessian of $f$ in order to compute the directional derivative $\nabla F_\gamma(x^k)'d^k$ needed in the backtracking procedure for computing the stepsize~\eqref{eq:Armijo}; only matrix-vector products with $\nabla^2 f(x)$ are required.
Under nonsingularity of the elements of $\hat{\partial}^2 F_\gamma(x_\star)$, eventually the stepsize becomes equal to 1 and Algorithm~\ref{al:PNM} reduces to a regularized version of the (undamped) linear Newton method \cite[Alg. 7.5.14]{facchinei2003finite} for solving $\nabla F_\gamma(x)=0$.
\begin{algorithm}
\LinesNumbered
\DontPrintSemicolon
\caption{Forward-Backward Newton-CG Method (FBN-CG I)} \label{al:PNM}
\KwIn{$\gamma\in (0,1/L_{f})$, $\sigma\in \left(0,1/2\right)$, $\bar{\eta}\in (0,1)$, $\zeta\in (0,1)$, $\rho\in(0,1]$, $x^0\in\Re^n$, $k=0$}
Select an $H^ k\in \hat{\partial}^2 F_{\gamma}(x^ k)$. Apply CG to
\begin{equation}\label{eq:RegNewtSys}
(H^ k+\delta_ k I)d^ k=-\nabla F_\gamma(x^ k)
\end{equation}
to compute a $d^ k\in\Re^n$ that satisfies
\begin{equation}\label{eq:InRegNewtSys}
\|(H^ k+\delta_ k I)d^ k+\nabla F_\gamma(x^ k)\|\leq \eta_ k\|\nabla F_\gamma(x^ k)\|,
\end{equation}
where
\begin{subequations}
\begin{align}
\delta_ k&=\zeta\|\nabla F_\gamma(x^ k)\|,\label{eq:deltas}\\
\eta_ k&=\min\{\bar{\eta},\|\nabla F_\gamma(x^ k)\|^\rho\}.\label{eq:etas}
\end{align}
\end{subequations}\;
Compute
$\tau_ k=\max\{2^{-i}\ |\ i=0,1,2,\ldots\}$ such that
\begin{equation}\label{eq:Armijo}
F_{\gamma}({x}^{ k}+\tau_ k d^ k)\leq F_{\gamma}({x}^{ k})+\sigma \tau_ k\nabla F_{\gamma}({x}^{ k})'d^{ k}.
\end{equation}\;
${x}^{ k+1}\gets {x}^{ k}+\tau_ k d^ k$\;
$ k\gets k+1$ and go to Step 1.\;
\end{algorithm}
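A compact numerical sketch of Algorithm~\ref{al:PNM} for a lasso instance follows. The problem data, the $\ell_1$ regularizer and all parameter values are assumptions made for illustration; the generalized Hessian element uses the diagonal $P\in\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})$ of Section~\ref{ex:EllOne}:

```python
import numpy as np

# minimize 0.5||Ax-b||^2 + lam*||x||_1 with a line-search Newton-CG
# on the forward-backward envelope (illustrative sketch)
rng = np.random.default_rng(3)
m, n, lam = 20, 10, 0.5
A = rng.standard_normal((m, n)); b = rng.standard_normal(m)
Q, q = A.T @ A, -A.T @ b                     # grad f(x) = Qx + q
gamma = 0.9 / np.linalg.eigvalsh(Q).max()    # gamma < 1/L_f
sigma, eta_bar, zeta, rho = 0.1, 0.5, 0.5, 0.5

prox = lambda y: np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

def fbe(x):                                  # value-function form of F_gamma
    g = Q @ x + q
    P = prox(x - gamma * g); G = (x - P) / gamma
    return 0.5*np.sum((A@x - b)**2) + lam*np.abs(P).sum() - gamma*g@G + 0.5*gamma*G@G

def grad_fbe(x):
    g = Q @ x + q
    return (np.eye(n) - gamma * Q) @ ((x - prox(x - gamma * g)) / gamma)

def hess_matvec(x, d, delta):
    # (element of the generalized Hessian of F_gamma + delta*I) applied to d
    y = x - gamma * (Q @ x + q)
    p = (np.abs(y) > gamma * lam).astype(float)   # P in dB(prox)(y), diagonal
    IgQ = np.eye(n) - gamma * Q
    return IgQ @ ((d - p * (IgQ @ d)) / gamma) + delta * d

def cg(mv, rhs, tol, maxit=200):
    x = np.zeros_like(rhs); r = rhs.copy(); p = r.copy(); rs = r @ r
    for _ in range(maxit):
        Ap = mv(p); a = rs / (p @ Ap)
        x += a * p; r -= a * Ap; rs_new = r @ r
        if np.sqrt(rs_new) <= tol:
            break
        p = r + (rs_new / rs) * p; rs = rs_new
    return x

x = np.zeros(n)
for k in range(50):
    g = grad_fbe(x); ng = np.linalg.norm(g)
    if ng < 1e-10:
        break
    delta, eta = zeta * ng, min(eta_bar, ng ** rho)
    d = cg(lambda v: hess_matvec(x, v, delta), -g, eta * ng)
    tau = 1.0                                # backtracking (Armijo) line search
    while fbe(x + tau * d) > fbe(x) + sigma * tau * (g @ d) and tau > 1e-12:
        tau *= 0.5
    x = x + tau * d
print(np.linalg.norm(grad_fbe(x)))           # tiny at a solution
```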
The next theorem delineates the basic convergence properties of Algorithm~\ref{al:PNM}.
\begin{theorem}\label{th:ConvAlg1}
Every accumulation point of the sequence $\{x^ k\}$ generated by Algorithm~\ref{al:PNM} belongs to $X_\star$.
\end{theorem}
\begin{proof}
We will first show that the sequence $\{d^ k\}$ is \emph{gradient related to $\{x^ k\}$} \cite[Sec. 1.2]{bertsekas1999nonlinear}. That is, for any subsequence
$\{x^ k\}_{ k\in \mathcal N}$ that converges to a nonstationary point of $F_\gamma$, \emph{i.e.},~
\begin{equation}\label{eq:gradRel0}
\lim_{ k\to\infty, k\in\NN} \|\nabla F_\gamma(x^ k)\|=\kappa\neq 0,
\end{equation}
the corresponding subsequence $\{d^ k\}_{ k\in\NN}$ is bounded and satisfies
\begin{equation}\label{eq:gradRel}
\limsup_{ k\to\infty, k\in\NN} \nabla F_\gamma(x^ k)'d^{ k}<0.
\end{equation}
Without loss of generality we can restrict to subsequences for which $\nabla F_\gamma(x^ k)\neq 0$, for all $ k\in \mathcal N$.
Suppose that $\{x^ k\}_{ k\in \mathcal N}$ is one such subsequence.
Due to~\eqref{eq:deltas}, we have $\delta_ k>0$ for all $ k\in \mathcal N$.
Matrix $H^k$ is positive semidefinite due to Proposition~\ref{prop:PSDHess},
therefore $H^ k+\delta_ k I$ is nonsingular for all $k\in \mathcal N$ and
$$\|(H^ k+\delta_ k I)^{-1}\|\leq\delta_ k^{-1}=\frac{1}{\zeta\|\nabla F_\gamma(x^ k)\|}.$$
Now, direction $d^ k$ satisfies
\begin{equation*}
d^ k=(H^ k+\delta_ k I)^{-1}(r^ k-\nabla F_\gamma(x^ k)),
\end{equation*}
where $r^ k=(H^ k+\delta_ k I)d^ k+\nabla F_\gamma(x^ k)$.
Therefore
\begin{align}
\|d^ k\|&\leq\|(H^ k+\delta_ k I)^{-1}\|(\|r^ k\|+\|\nabla F_\gamma(x^ k)\|)\label{eq:boundd}\\
&\leq\frac{1}{\zeta\|\nabla F_\gamma(x^ k)\|}(\eta_ k\|\nabla F_\gamma(x^ k)\|+\|\nabla F_\gamma(x^ k)\|)\leq(1+\bar{\eta})/\zeta,\nonumber
\end{align}
proving that $\{d^ k\}_{ k\in\mathcal{N}}$ is bounded.
According to \cite[Lemma A.2]{dembo1983truncated}, when CG is applied to~\eqref{eq:RegNewtSys} we have that
\begin{equation}\label{eq:CGprop}
\nabla F_\gamma(x^ k)'d^ k\leq-\frac{1}{\|H^ k+\delta_ k I\|}\|\nabla F_\gamma(x^ k)\|^2.
\end{equation}
Using~\eqref{eq:deltas}
and Proposition~\ref{prop:PSDHess}, we have that
$$\|H^ k+\delta_ k I\|\leq \gamma^{-1}+\zeta\| \nabla F_\gamma(x^ k)\|,$$
therefore
\begin{equation}\label{eq:SuffDec}
\nabla F_\gamma(x^ k)'d^ k\leq-\frac{\|\nabla F_\gamma(x^ k)\|^2}{ \gamma^{-1}+\zeta\|\nabla F_\gamma(x^ k)\|},\quad\forall k\in\NN.
\end{equation}
As $ k(\in\NN)\to\infty$, the right hand side of~\eqref{eq:SuffDec} converges to $-\kappa^2/(\gamma^{-1}+\zeta\kappa)$, which is either a finite negative number (if $\kappa$ is finite) or $-\infty$. In any case, this together with~\eqref{eq:SuffDec} confirms that~\eqref{eq:gradRel} holds, proving that $\{d^ k\}$ is gradient related to $\{x^ k\}$. All the assumptions of \cite[Prop.~1.2.1]{bertsekas1999nonlinear} hold, therefore every accumulation point of $\{x^ k\}$ is a stationary point of $F_\gamma$, which by Theorem~\ref{Th:PropFg}(\ref{cor:Equivalence}) is also a minimizer of $F$.
\iftoggle{svver}{\qed}{}
\end{proof}
The next theorem shows that under a nonsingularity assumption on $\hat{\partial}^2 F_\gamma(x_\star)$, the asymptotic rate of convergence of the sequence generated by Algorithm~\ref{al:PNM} is at least superlinear.
\begin{theorem}\label{eq:PNMconvRate}
Suppose that $x_\star$ is an accumulation point of the sequence $\{x^ k\}$ generated by Algorithm~\ref{al:PNM}. If $\mathop{\rm prox}\nolimits_{\gamma g}$ is semismooth at $x_\star-\gamma\nabla f(x_\star)$ and every element of $\hat{\partial}^2 F_\gamma(x_\star)$ is nonsingular, then the entire sequence converges to $x_\star$ and the convergence rate is Q-superlinear. Furthermore, if $\mathop{\rm prox}\nolimits_{\gamma g}$ is strongly semismooth at $x_\star-\gamma\nabla f(x_\star)$ and $\nabla^2 f$ is locally Lipschitz continuous around $x_\star$ then $\{x^k\}$ converges to $x_\star$ with Q-order at least $\rho$.
\end{theorem}
\begin{proof}
Theorem~\ref{th:ConvAlg1} asserts that $x_\star$ must be a stationary point for $F_\gamma$. Due to Proposition~\ref{prop:LNAprops1}, $\hat{\partial}^2 F_\gamma$ is a LNA of $\nabla F_\gamma$ at $x_\star$.
Due to Lemma~\ref{lem:sharpMin}, $x_\star$ is the globally unique minimizer of $F$. Therefore, by Theorem~\ref{th:ConvAlg1} every subsequence must converge to this unique accumulation point, implying that the entire sequence converges to $x_\star$.
Furthermore, for any $ k$
\begin{align}
\|\nabla F_\gamma(x^ k)\|&\leq \|I-\gamma\nabla^2 f(x^ k)\|\|G_\gamma(x^ k)\|\nonumber\\
&\leq \|G_\gamma(x^ k)-G_\gamma(x_\star)\|\leq 2\gamma^{-1}\|x^ k-x_\star\|,\label{eq:Calmness}
\end{align}
where the second inequality follows from $\|I-\gamma\nabla^2 f(x^ k)\|\leq 1$ (recall that $\gamma<1/L_f$) together with $G_\gamma(x_\star)=0$, and the third from Lemma~\ref{le:zNonExp} (in the Appendix).
We know that $d^ k$ satisfies $(H^ k+\delta_ k I) d^ k+\nabla F_\gamma(x^ k)=r^ k$. Therefore, for sufficiently large $ k$, we have
\small
\begin{align}
\|x^ k+d^ k-x_\star\|&=\|x^ k+(H^ k+\delta_ k I)^{-1}(r^ k-\nabla F_\gamma(x^ k))-x_\star\|\nonumber\\
& =\|(H^ k+\delta_ k I)^{-1}(H^ k(x^ k-x_\star)-\nabla F_\gamma(x^ k)+ \delta_ k(x^ k-x_\star)+r^ k)\|\nonumber\\
& \leq \|(H^ k+\delta_ k I)^{-1}\|\left(\|H^ k(x^ k-x_\star)+\nabla F_\gamma(x_\star)-\nabla F_\gamma(x^ k)\|\right.\nonumber\\
& \phantom{\leq \|(H^ k+\delta_ k I)^{-1}\|\left(\right.} \left.+~\delta_ k\|x^ k-x_\star\|+\|r^ k\|\right)\nonumber\\
& \leq\kappa\left(\|H^ k(x^ k-x_\star)+\nabla F_\gamma(x_\star)-\nabla F_\gamma(x^ k)\|\right.\nonumber\\
& \phantom{\leq\kappa\left(\right.}\left.+~2\zeta\gamma^{-1}\|x^ k-x_\star\|^2+(2\gamma^{-1})^{1+\rho}\|x^ k-x_\star\|^{1+\rho}\right)\label{eq:convRateEq}
\end{align}
\normalsize
where the last inequality follows by~\eqref{eq:deltas},~\eqref{eq:etas},~\eqref{eq:Calmness}.
Therefore, since $\hat{\partial}^2 F_\gamma$ is a LNA of $\nabla F_\gamma$ at $x_\star$, we have
\begin{equation}\label{eq:Qsup}
\|x^ k+d^ k-x_\star\|=o(\|x^ k-x_\star\|),
\end{equation}
while if it is a strong LNA we have
\begin{equation}\label{eq:Qquad}
\|x^ k+d^ k-x_\star\|=O(\|x^ k-x_\star\|^{1+\rho}).
\end{equation}
In other words, $\{d^ k\}$ is \emph{superlinearly convergent with respect to} $\{x^ k\}$~\cite[Sec. 7.5]{facchinei2003finite}.
Eventually, we have
\begin{align}
\nabla F_\gamma(x^k)'d^k+{d^k}'(H^k+\delta_kI)d^k&\leq\eta_k\|\nabla F_\gamma(x^k)\|\|d^k\|\leq\|\nabla F_\gamma(x^k)\|^{\rho+1}\|d^k\|\nonumber\\
&\leq(2\gamma^{-1})^{\rho+1}\|x^k-x_\star\|^{\rho+1}\|d^k\|\nonumber\\
&=O(\|d^k\|^{\rho+2}),\label{eq:unitStepBasic}
\end{align}
where the first inequality follows by~\eqref{eq:InRegNewtSys}, the second by~\eqref{eq:etas}, the third inequality follows by~\eqref{eq:Calmness} and the equality follows from the fact that $\{d^ k\}$ is superlinearly convergent with respect to $\{x^ k\}$, which implies $\|x^k-x_\star\|=O(\|d^k\|)$~\cite[Lem.~7.5.7]{facchinei2003finite}.
Since $\hat{\partial}^2 F_\gamma$ is a LNA of $\nabla F_\gamma$ at $x_\star$, it has nonempty compact images and is upper semicontinuous at $x_\star$. This, together with the fact that $\{x^ k\}$ converges to $x_\star$ and the nonsingularity assumption on the elements of $\hat{\partial}^2 F_\gamma(x_\star)$ imply through \cite[Lem.~7.5.2]{facchinei2003finite} that for sufficiently large $ k$, $H^ k$ is nonsingular and there exists a $\kappa>0$ such that
$$\max\{\|H^ k\|,\|(H^ k)^{-1}\|\}\leq\kappa.$$
Therefore, eventually we have $\lambda_{\min}(H^ k+\delta_ k I)\geq\lambda_{\min}(H^ k)\geq\kappa^{-1}$.
The last inequality together with~\eqref{eq:unitStepBasic} imply that there exists a $\theta>0$ such that eventually
\begin{equation}\label{eq:unitStep}
\nabla F_\gamma(x^k)'d^k\leq -\theta \|d^k\|^2.
\end{equation}
Following the same line of proof as in~\cite[Prop.~7.4.10]{facchinei2003finite}, it can be shown that
\begin{equation}\label{le:2ndOrd}
\lim_{\stackrel{\|d\|\to 0}{H\in\hat{\partial}^2 F_\gamma(x_\star+d)}}\frac{F_{\gamma}(x_\star+d)-F_{\gamma}(x_\star)-\nabla F_\gamma(x_\star)'d-\tfrac{1}{2}d'Hd}{\|d\|^2}=0.
\end{equation}
We remark here that~\cite[Prop.~7.4.10]{facchinei2003finite} assumes semismoothness of $\nabla F_\gamma$ at $x_\star$ and proves~\eqref{le:2ndOrd} with $\partial_C(\nabla F_\gamma)$ in place of $\hat{\partial}^2 F_\gamma$, but exactly the same arguments apply for any LNA of $\nabla F_\gamma$ at $x_\star$ even without the semismoothness assumption.
Using~\eqref{eq:unitStep},~\eqref{le:2ndOrd} and exactly the same arguments as in the proof of~\cite[Prop.~8.3.18(d)]{facchinei2003finite} or~\cite[Th. 3.2]{facchinei1995minimization} we have that eventually
\begin{equation}
F_\gamma(x^ k+d^ k)\leq F_\gamma(x^ k)+\sigma\nabla F_\gamma(x^ k)'d^ k,
\end{equation}
which means that there exists a positive integer $\bar{k}$ such that $\tau_k=1$, for all $k\geq \bar{k}$. Therefore, for all $k\geq \bar{k}$
$$x^{k+1}=x^k+d^k.$$
This together with~\eqref{eq:Qsup},~\eqref{eq:Qquad} proves the corresponding convergence rates for $\{x^k\}$.
\iftoggle{svver}{\qed}{}
\end{proof}
When $f$ is strongly convex quadratic, Theorem~\ref{th:ProxPropQuad} guarantees that $F_\gamma$ is strongly convex and we can give a complexity estimate for Algorithm~\ref{al:PNM}. In particular, the global convergence rate for the function values and the iterates is linear.
\begin{theorem}\label{th:ComplPNM}
Suppose that $f$ is quadratic and $\mu_f>0$. If $\zeta=0$ then
\begin{subequations}
\begin{align}
F(P_\gamma(x^k))-F_\star\leq r_{F_\gamma}^k(F_\gamma(x^0)-F_\star),\label{eq:QuadRateF}\\
\|x^{k}-x_\star\|^2\leq \frac{L_{F_\gamma}}{\mu_{F_\gamma}}r_{F_\gamma}^k\|x^0-x_\star\|^2\label{eq:QuadRatex}
\end{align}
\end{subequations}
where $r_{F_\gamma}= 1-2\left(\frac{\mu_{F_\gamma}}{L_{F_\gamma}}\right)^3\frac{\sigma(1-\sigma)}{1+\bar{\eta}}$.
\end{theorem}
\begin{proof}
See Appendix.
\end{proof}
Algorithm~\ref{al:PNM} exhibits fast asymptotic convergence rates provided that the elements of $\hat{\partial}^2 F_\gamma(x_\star)$ are nonsingular,
but not much can be said about its global convergence rate, unless $f$ is convex quadratic. Even in this favorable case the corresponding complexity
estimates are very loose due to the variable metric used by the algorithm, cf. Theorem~\ref{th:ComplPNM}.
Another reason for the failure to derive meaningful complexity estimates is the fact that Algorithm~\ref{al:PNM} ``forgets'' about the convex
structure of $F$, since it tries to minimize directly $F_\gamma$ which can be nonconvex and its gradient may not be globally Lipschitz continuous.
Specifically, Algorithm~\ref{al:PNM} may fail to be a descent method for $F$ (although it satisfies that property for $F_\gamma$). Furthermore, the
iterates $x^ k$ produced by Algorithm~\ref{al:PNM} may lie outside $\mathop{\rm dom}\nolimits g$ (but $P_\gamma(x^ k)\in\mathop{\rm dom}\nolimits g$, see Theorem~\ref{Th:PropFg}(\ref{prop:LowBnd})).
In this section, we show how Algorithm~\ref{al:PNM} can be modified so as to be able to derive global complexity estimates, similar to the ones for the
proximal gradient method, and at the same time retain fast asymptotic convergence rates. The key idea is to inject a forward-backward step after the
Newton step (cf. Alg.~\ref{al:PGNM}) and analyze the consequences of this choice on $F$, directly. This guarantees that the sequence of function values
for both $F$ and $F_\gamma$ are monotone nonincreasing.
\begin{algorithm}
\LinesNumbered
\DontPrintSemicolon
\caption{Forward-Backward Newton-CG Method II (FBN-CG II)} \label{al:PGNM}
\KwIn{$\gamma\in (0,1/L_{f})$, $\sigma\in \left(0,{1}/{2}\right)$, $\mathcal{K}\subseteq\mathbb N$, $ k=0$, $s_{0}=0$, $x^{0}\in\mathop{\rm dom}\nolimits g$}
\uIf{$ k\in\mathcal{K}$ or $s_{ k}=1$}{
Execute steps 1 and 2 of Algorithm~\ref{al:PNM} to compute direction $d^k$ and step $\tau_k$\;
$\hat{x}^ k\gets {x}^{ k}+\tau_ k d^ k$\;
\lIf{$\tau_ k=1$}{$s_{ k+1}\gets 1$} \lElse{$s_{ k+1}\gets 0$}
}
\Else{
$\hat{x}^{ k}\gets x^{ k}$, $s_{ k+1}\gets 0$
}
$x^{ k+1}\gets \mathop{\rm prox}\nolimits_{\gamma g}(\hat{x}^ k-\gamma\nabla f(\hat{x}^ k))$\;
$ k\leftarrow k+1$ and go to Step 1.\;
\end{algorithm}
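The control flow of Algorithm~\ref{al:PGNM} can be sketched as follows (a generic skeleton with hypothetical names, not the authors' code): whatever trial point $\hat{x}^ k$ is produced, a forward-backward step is always applied last, so every iterate lies in $\mathop{\rm dom}\nolimits g$; with $\mathcal{K}=\emptyset$ the scheme reduces to plain FBS.

```python
import numpy as np

def fbs_step(x, gradf, prox, gamma):
    # forward-backward step: prox_{gamma*g}(x - gamma*grad f(x))
    return prox(x - gamma * gradf(x), gamma)

def fbn_cg2(x0, gradf, prox, gamma, newton_step=None, K=frozenset(),
            max_iter=20000, tol=1e-10):
    """Control-flow skeleton of FBN-CG II. `newton_step(x)` is assumed to
    return the trial point x_hat = x + tau*d and the stepsize tau from
    steps 1-2 of FBN-CG I; when it is absent, or iteration k is not
    scheduled, the method falls back to a plain forward-backward step."""
    x, s = np.asarray(x0, dtype=float), 0
    for k in range(max_iter):
        if newton_step is not None and (k in K or s == 1):
            x_hat, tau = newton_step(x)
            s = 1 if tau == 1.0 else 0      # remember whether tau_k = 1
        else:
            x_hat, s = x, 0
        x = fbs_step(x_hat, gradf, prox, gamma)  # always end with FBS
        # stop when the fixed-point residual ||x - P_gamma(x)|| is small
        if np.linalg.norm(x - fbs_step(x, gradf, prox, gamma)) < tol:
            break
    return x
```

With `newton_step=None` this is exactly the FBS iteration, which is the degenerate case $\mathcal{K}=\emptyset$ handled separately in the convergence proof below.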
We show below that the sequence of iterates $\{x^ k\}_{k\in\mathbb N}$ produced by Algorithm~\ref{al:PGNM} enjoys the same favorable convergence and local convergence rate properties as that of Algorithm~\ref{al:PNM}.
\begin{theorem}
Every accumulation point of the sequence $\{x^ k\}$ generated by Algorithm~\ref{al:PGNM} belongs to $X_\star$.
\end{theorem}
\begin{proof}
If $\mathcal{K}=\emptyset$ then Algorithm~\ref{al:PGNM} is equivalent to FBS and the result has already been proved in \cite[Th. 1.2]{beck2010gradient}. Let us then assume
$\mathcal{K}\neq\emptyset$ and distinguish between two cases. First, we deal with the case where $ k\notin\mathcal{K}$ and $s_{ k}=0$.
Putting $x=\bar{x}=x^{ k}$ in~\eqref{eq:ProxBasic} we obtain
\begin{equation}\label{eq:DesPGNM}
F(x^{ k+1})-F(x^{ k})\leq -\tfrac{\gamma}{2}\|G_{\gamma}(x^{ k})\|^2.
\end{equation}
For the case where $ k\in\mathcal{K}$ or $s_{ k}=1$, unless $\nabla F_{\gamma}(x^ k)=0$ (which means that $x^ k$ is a minimizer of $F$), we have $F_{\gamma}(\hat{x}^ k)< F_{\gamma}(x^ k)$ due to~\eqref{eq:Armijo}. Using parts \eqref{prop:UppBnd} and~\eqref{prop:LowBnd} of Theorem~\ref{Th:PropFg} we obtain
\begin{align*}
F(x^{ k+1})&=F(P_\gamma(\hat{x}^ k))\leq F_\gamma(\hat{x}^ k)\\
&\leq F_\gamma(x^ k)\leq F(x^ k)-\tfrac{\gamma}{2}\|G_{\gamma}(x^ k)\|^2
\end{align*}
and again we arrive at \eqref{eq:DesPGNM}.
Summing up, Eq.~\eqref{eq:DesPGNM} is satisfied for every $k\in\mathbb N$.
Since $\{F(x^k)\}$ is monotonically nonincreasing, it converges to a finite value
(since $F$ is proper and bounded below by $F_\star$),
therefore $\{F(x^k)-F(x^{k+1})\}$
converges to zero. This implies through~\eqref{eq:DesPGNM} that
$\{\|G_{\gamma}(x^{ k})\|^2\}$ converges to zero. Since
$\|G_{\gamma}({\cdot})\|^2$ is a continuous nonnegative function which becomes
zero if and only if $x\in X_\star$, it follows that
every accumulation point of $\{x^{ k}\}$ belongs to $X_\star$.
\iftoggle{svver}{\qed}{}
\end{proof}
\begin{theorem}\label{eq:PGNMconvRate}
Suppose $\mathcal{K}$ is infinite. Under the assumptions of Theorem~\ref{eq:PNMconvRate} the same results
apply also to the sequence of iterates produced by Algorithm~\ref{al:PGNM}.
\end{theorem}
\begin{proof}
Following exactly the same steps as in the proof of Theorem~\ref{eq:PNMconvRate} we can show that $\{d^k\}$ is superlinearly convergent with respect to $\{x^k\}$. Indeed, the derivation is independent of the algorithmic scheme and is only related to how the direction $d^k$ is generated. This means that unit stepsize is eventually accepted, \emph{i.e.},~ there exists a positive integer $\bar{k}$ such that $s_{ k}=1$ for all $k\geq\bar{k}$. Therefore, eventually the iterates are given by
$$x^{ k+1}=P_\gamma(x^ k+d^ k),\qquad k\geq\bar{k}.$$
Due to nonexpansiveness of $P_\gamma$ we have
$$\|x^{ k+1}-x_\star\|=\|P_\gamma(x^ k+d^ k)-P_\gamma(x_\star)\|\leq\|x^k+d^k-x_\star\|.$$
The proof finishes by invoking~\eqref{eq:convRateEq}.
\iftoggle{svver}{\qed}{}
\end{proof}
As the next theorem shows, Algorithm~\ref{al:PGNM} not only enjoys fast asymptotic convergence rate properties but also comes with the following global complexity estimate.
\begin{theorem}\label{th:PGNMbnds1}
Let $\{x^{ k}\}$ be a sequence generated by Algorithm \ref{al:PGNM}. Assume that the level sets of $F$ are bounded, \emph{i.e.},~ $\|x-x_\star\|\leq R$ for some $x_\star\in X_\star$ and all $x\in\Re^n$ with $F(x)\leq F(x^0)$. If $F(x^0)-F_\star\geq R^2/\gamma$ then
\begin{equation}\label{eq:FirstStep}
F(x^1)-F_\star\leq\frac{R^2}{2\gamma}.
\end{equation}
Otherwise, for any $ k\in\mathbb N$ we have
\begin{equation}\label{eq:kStep}
F(x^{ k})-F_\star\leq\frac{2R^2}{\gamma(k+2)}.
\end{equation}
\end{theorem}
\begin{proof}
See Appendix.
\end{proof}
When $f$ is strongly convex the global rate of convergence is linear. The next theorem gives the corresponding complexity estimates.
\begin{theorem}\label{th:PGNMbnds2}
If $f\in\mathcal{S}_{\mu_f,L_f}^{1,1}(\Re^n)$, $\mu_f>0$, then
\begin{subequations}
\begin{align}
F\left(x^ k\right)-F_\star&\leq(1+\gamma\mu_f)^{- k}(F(x^0)-F_\star),\label{eq:strC1}\\
\|x^{ k+1}-x_\star\|^2&{\leq}\frac{1-\gamma\mu_f}{\gamma\mu_f(1+\gamma\mu_f)^{k}}\|x^{0}-x_\star\|^2.\label{eq:strC2}
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
See Appendix.
\end{proof}
\begin{remark}\label{re:LSgamma}
We should remark that Theorems~\ref{th:PGNMbnds1} and~\ref{th:PGNMbnds2} remain valid even if
$L_f$ (and thus $\gamma$) is unknown and instead a backtracking line search procedure similar to those described in~\cite{beck2009fast,nesterov2007gradient}, is performed to determine a suitable value for $\gamma$.
\end{remark}
\section{Introduction}
The focus of this work is on efficient Newton-like algorithms for convex
optimization problems in composite form, \emph{i.e.},~
\begin{equation}\label{eq:GenProb}
\minimize\ F(x) = f(x)+g(x),
\end{equation}
where $f\in\mathcal{S}_{\mu_f,L_f}^{2,1}(\Re^n)$\footnote{$\mathcal{S}_{\mu,L}^{2,1}(\Re^n)$: class of twice continuously differentiable,
strongly convex functions with modulus of strong convexity $\mu\geq 0$, whose gradient is Lipschitz continuous with constant $L\geq 0$.} and
$g\in\mathcal{S}^0(\Re^n)$\footnote{$\mathcal{S}^0(\Re^n)$: class of proper, lower semicontinuous, convex functions from $\Re^n$ to $\overline{\Re} = \Re\cup\{+\infty\}$.}
has a cheaply computable proximal mapping~\cite{moreau1965proximiteet}.
Problems of the form~\eqref{eq:GenProb} are abundant in many scientific areas
such as control, signal processing, system identification, machine learning and
image analysis, to name a few. For example, when $g$ is the indicator of a
convex set then~\eqref{eq:GenProb} becomes a constrained optimization problem,
while for $f(x)=\|Ax-b\|_2^2$ and $g(x)=\lambda\|x\|_1$ it becomes the
$\ell_1$-regularized least-squares problem which is the main building block of
compressed sensing. When $g$ is equal to the nuclear norm, then
problem~\eqref{eq:GenProb} can model low-rank matrix recovery problems.
Finally, conic optimization problems such as LPs, SOCPs and SDPs can be brought
into the form of~\eqref{eq:GenProb}, see \cite{lan2011primal}.
Perhaps the most well known algorithm for problems in the form \eqref{eq:GenProb}
is the forward-backward splitting (FBS) or proximal gradient method
\cite{lions1979splitting, combettes2011proximal}, a generalization of the
classical gradient and gradient projection methods to problems involving a
nonsmooth term. Accelerated versions of FBS, based on the work of Nesterov
\cite{nesterov2007gradient,beck2009fast,tseng2008accelerated}, have also gained
popularity.
Although these algorithms share favorable global
convergence rate estimates of order $O(\epsilon^{-1})$ or $O(\epsilon^{-1/2})$
(where $\epsilon$ is the solution accuracy), they are first-order methods and
therefore usually effective at computing solutions of low or medium accuracy only.
An evident remedy is to include second-order information by replacing the
Euclidean norm in the proximal mapping with the $Q$-norm,
where $Q$ is the Hessian of $f$ at $x$ or some approximation of it, mimicking
Newton or quasi-Newton methods for unconstrained problems.
This route is followed in the recent work of \cite{becker2012quasi, Lee2012ProximalNIPS}.
However, a severe limitation of the approach is that, unless $Q$ has a special
structure, the linearized subproblem is very hard to solve. For example, if
$F$ models a QP, the corresponding subproblem is as hard as the original problem.
In this paper we follow a different approach by defining a function, which
we call \emph{forward-backward envelope (FBE)}, that has favorable properties and
can serve as a real-valued, smooth, exact penalty function
for the original problem. Our approach combines and extends ideas stemming from
the literature on merit functions for \emph{variational inequalities}
(VIs) and \emph{complementarity problems} (CPs), specifically the reformulation of a VI as a constrained continuously differentiable optimization problem
via the regularized gap function \cite{fukushima1992equivalent} and as an unconstrained continuously differentiable optimization problem via the
D-gap function \cite{yamashita1997unconstrained} (see \cite[Ch. 10]{facchinei2003finite} for a survey and \cite{Li2007exact}, \cite{patrinos2011global}
for applications to constrained optimization and model predictive control of dynamical systems).
Next, we show that one can design Newton-like methods to minimize the FBE by using tools from nonsmooth analysis. Unlike the approaches of
\cite{becker2012quasi, Lee2012ProximalNIPS}, where the corresponding subproblems are expensive to solve, our algorithms require only the solution
of a usually small linear system to compute the Newton direction. However, this work focuses on devising algorithms that have good {complexity guarantees}
provided by a global (non-asymptotic) convergence rate while achieving $Q$-superlinear or $Q$-quadratic\footnote{A sequence $\{x^k\}_{k\in\mathbb N}$ converges
to $x_\star$ with $Q$-superlinear rate if $\frac{\|x^{k+1}-x_\star\|}{\|x^k-x_\star\|}\to 0$. It converges to $x_\star$ with $Q$-quadratic rate if there
exists a $\bar{k}>0$ such that $\frac{\|x^{k+1}-x_\star\|}{\|x^k-x_\star\|^2}\leq M$, for some $M>0$ and all $k\geq\bar{k}$.}
asymptotic convergence rates in the nondegenerate cases. We show that one can achieve this goal by interleaving Newton-like iterations on the FBE
and FBS iterations. This is possible by relating directions of descent for the considered penalty function with those for the original nonsmooth function.
The main contributions of the paper can be summarized as follows. We show
how Problem~\eqref{eq:GenProb} can be reformulated as the unconstrained minimization
of a real-valued, continuously differentiable function, the FBE,
providing a framework that allows to extend classical algorithms for smooth
unconstrained optimization to nonsmooth or constrained problems in
composite form~\eqref{eq:GenProb}. Moreover, based on this framework, we
devise efficient proximal Newton algorithms with $Q$-superlinear or $Q$-quadratic asymptotic
convergence rate to solve~\eqref{eq:GenProb}, with global complexity bounds.
The conjugate gradient (CG) method is employed to compute efficiently
an approximate Newton direction at every iteration. Therefore our algorithms are
able to handle large-scale problems since they require only the calculation of matrix-vector
products and there is no need to form explicitly the generalized Hessian matrix.
The outline of the paper is as follows. In Section~\ref{sec:FBE} we introduce the
FBE, a continuously differentiable penalty function for
\eqref{eq:GenProb}, and discuss some of its properties. In Section~\ref{sec:LNA} we
discuss the generalized differentiability properties of the gradient of the FBE and introduce
a linear Newton approximation (LNA) for it, which plays a role similar to that of the Hessian
in the classical Newton method. Section~\ref{sec:FBNCG} is the core of the
paper, presenting two algorithms for solving Problem~\eqref{eq:GenProb} and discussing their
local and global convergence properties. In Section~\ref{sec:Examples} we
consider some examples of $g$ and discuss the generalized Jacobian of
their proximal operator, on which the LNA
is based. Finally, in Section~\ref{sec:Simulations}, we consider some practical
problems and show how the proposed methods perform in solving them.
\section{Second-order Analysis of $F_\gamma$}\label{sec:LNA}
As it was shown in Section~\ref{sec:FBE}, $F_\gamma$ is continuously differentiable over $\Re^n$. However $F_\gamma$ fails to be $\mathcal{C}^2$ in most cases:
since $g$ is nonsmooth, its Moreau envelope $g^\gamma$ is hardly ever $\mathcal{C}^2$. For example, if $g$ is real-valued then $g^\gamma$
is $\mathcal{C}^2$ and $\mathop{\rm prox}\nolimits_{\gamma g}$ is $\mathcal{C}^1$ if and only if $g$ is $\mathcal{C}^2$~\cite{lemarechal1997practical}.
Therefore, we hardly ever have the luxury of assuming continuous differentiability of $\nabla F_\gamma$ and we must resort to generalized notions of
differentiability stemming from nonsmooth analysis. Specifically, our analysis is largely based upon generalized differentiability properties
of $\mathop{\rm prox}\nolimits_{\gamma g}$ which we study next.
\subsection{Generalized Jacobians of proximal mappings}
Since $\mathop{\rm prox}\nolimits_{\gamma g}$ is globally Lipschitz continuous, by Rademacher's theorem
\cite[Th.~9.60]{rockafellar2011variational} it is almost everywhere differentiable.
Recall that Rademacher's theorem asserts that if a mapping $G:\Re^n\to\Re^m$ is locally Lipschitz continuous on $\Re^n$, then it is almost
everywhere differentiable, \emph{i.e.},~ the set $\Re^n\setminus C_G$ has measure zero, where $C_G$ is the subset of points in $\Re^n$ for which $G$
is differentiable. Hence, although the Jacobian of $\mathop{\rm prox}\nolimits_{\gamma g}$ in the classical
sense might not exist everywhere, generalized differentiability notions, such as the $B$-subdifferential and the generalized Jacobian of Clarke,
can be employed to provide a local first-order approximation of $\mathop{\rm prox}\nolimits_{\gamma g}$.
\begin{definition}\label{def:Jacs}
Let $G:\Re^n\to\Re^m$ be locally Lipschitz continuous at $x\in\Re^n$. The B-subdifferential (or limiting Jacobian) of $G$ at $x$ is
\begin{equation*}
\partial_B G(x)\triangleq\left\{H\in\Re^{m\times n}\left|\right.\exists\ \{x^k\}\subset C_G\textrm{ with }x^k\to x, \nabla G(x^k)\to H\right\},
\end{equation*}
whereas the (Clarke) generalized Jacobian of $G$ at $x$ is
$$\partial_C G(x)\triangleq\mathop{\rm conv}\nolimits(\partial_BG(x)).$$
\end{definition}
If $G:\Re^n\to\Re^m$ is locally Lipschitz on $\Re^n$ then $\partial_CG(x)$ is a nonempty, convex and compact subset of $m$ by $n$
matrices, and as a set-valued mapping it is outer-semicontinuous at every $x\in\Re^n$.
The next theorem shows that the elements of the generalized Jacobian of the proximal mapping are symmetric and positive semidefinite.
Furthermore, it provides a bound on the magnitude of their eigenvalues.
\begin{theorem}\label{th:JacProx}
Suppose that $g\in\mathcal{S}^0(\Re^n)$ and $x\in\Re^n$. Every $P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ is a symmetric positive semidefinite matrix that satisfies $\|P\|\leq 1$.
\end{theorem}
\begin{proof}
Since $g$ is convex, its Moreau envelope is a convex function as well, therefore every element of $\partial_C(\nabla g^\gamma)(x)$ is a
symmetric positive semidefinite matrix (see e.g. \cite[Sec.~8.3.3]{facchinei2003finite}). Due to~\eqref{eq:nabla_e}, we have that
$\mathop{\rm prox}\nolimits_{\gamma g}(x)=x-\gamma\nabla g^\gamma(x)$,
therefore
\begin{equation}\label{eq:JacProx}
\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)= I-\gamma \partial_C(\nabla g^\gamma)(x).
\end{equation}
The last relation holds with equality (as opposed to inclusion in the general case) due to the fact that one of the summands is continuously differentiable.
Now from~\eqref{eq:JacProx} we easily infer that every element of $\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ is a symmetric matrix.
Since $\nabla g^\gamma(x)$ is Lipschitz continuous with Lipschitz constant $\gamma^{-1}$, using \cite[Prop.~2.6.2(d)]{clarke1990optimization},
we infer that every $H\in \partial_C(\nabla g^\gamma)(x)$ satisfies $\|H\|\leq\gamma^{-1}$. Now, according to \eqref{eq:JacProx}, it holds
$$P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)\iff P=I-\gamma H,\quad H\in\partial_C(\nabla g^\gamma)(x).$$
Therefore,
$$ d'Pd=\|d\|^2-\gamma d'Hd\geq\|d\|^2-\gamma\gamma^{-1}\|d\|^2 = 0,\quad\forall P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x).$$
On the other hand, since $\mathop{\rm prox}\nolimits_{\gamma g}$ is Lipschitz continuous with Lipschitz constant 1, using \cite[Prop.~2.6.2(d)]{clarke1990optimization}
we obtain that $\|P\|\leq 1$, for all $P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)$.
\iftoggle{svver}{\qed}{}
\end{proof}
An interesting property of $\partial_C\mathop{\rm prox}\nolimits_{\gamma g}$, documented in the following proposition,
is useful whenever $g$ is (block) separable, \emph{i.e.},~ $g(x)=\sum_{i=1}^N g_i(x_i)$, $x_i\in\Re^{n_i}$, $\sum_{i=1}^N n_i=n$.
In such cases every $P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ is a (block)
diagonal matrix. This has favorable computational implications especially for large-scale problems.
For example, if $g$ is the $\ell_1$ norm or the indicator function of a box,
then the elements of $\partial_C\mathop{\rm prox}\nolimits_{\gamma g}(x)$ (or $\partial_B\mathop{\rm prox}\nolimits_{\gamma g}(x)$) are
diagonal matrices with diagonal elements in $[0,1]$ (or in $\{0,1\}$).
\begin{proposition}[separability]\label{prop:Separ}
If $g:\Re^n\to\overline{\Re}$ is (block) separable then every element of $\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ and $\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x)$ is (block) diagonal.
\end{proposition}
\begin{proof}
Since $g$ is block separable, its proximal mapping has the form
$$\mathop{\rm prox}\nolimits_{\gamma g}(x)=(\mathop{\rm prox}\nolimits_{\gamma g_1}(x_1),\ldots,\mathop{\rm prox}\nolimits_{\gamma g_N}(x_N)).$$
The result follows directly by Definition~\ref{def:Jacs}.
\iftoggle{svver}{\qed}{}
\end{proof}
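For the two examples just mentioned, an element of $\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})$ can be written down explicitly. The sketch below (our notation) constructs one such element for the $\ell_1$ norm and for the indicator of a box, so that the conclusions of Theorem~\ref{th:JacProx} and Proposition~\ref{prop:Separ} can be checked numerically: the matrices are diagonal, symmetric, positive semidefinite and of norm at most one.

```python
import numpy as np

def jac_prox_l1(z, gamma, lam):
    # one element of the B-subdifferential of soft-thresholding at z:
    # diagonal, entry 1 where |z_i| > gamma*lam and 0 elsewhere
    # (at a tie |z_i| = gamma*lam both 0 and 1 are valid choices)
    return np.diag((np.abs(z) > gamma * lam).astype(float))

def jac_proj_box(z, lo, hi):
    # element of the B-subdifferential of the projection onto [lo, hi]:
    # diagonal 0/1, with 1 exactly on the inactive coordinates
    return np.diag(((z > lo) & (z < hi)).astype(float))
```

Both matrices are diagonal because the underlying $g$ is separable, in line with Proposition~\ref{prop:Separ}.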
The following proposition provides a connection between the generalized Jacobian of the proximal mapping for a convex function and that of its conjugate, stemming from the celebrated Moreau's decomposition~\cite[Th. 14.3]{bauschke2011convex}.
\begin{proposition}[Moreau's decomposition]\label{prop:MorDec}
Suppose that $g\in\mathcal{S}^0(\Re^n)$. Then
\begin{align*}
\partial_B(\mathop{\rm prox}\nolimits_{\gamma g^\star})(x)&=\{P=I-Q\left|\right.Q\in\partial_B(\mathop{\rm prox}\nolimits_{g/\gamma})(x/\gamma)\},\\
\partial_C(\mathop{\rm prox}\nolimits_{\gamma g^\star})(x)&=\{P=I-Q\left|\right.Q\in\partial_C(\mathop{\rm prox}\nolimits_{g/\gamma})(x/\gamma)\}.
\end{align*}
\end{proposition}
\begin{proof}
Using Moreau's decomposition we have
$$\mathop{\rm prox}\nolimits_{\gamma g^\star}(x)=x-\gamma\mathop{\rm prox}\nolimits_{g/\gamma}(x/\gamma).$$
The first result follows directly by Definition~\ref{def:Jacs}, since $\mathop{\rm prox}\nolimits_{\gamma g^\star}$ is expressed as the difference of two functions, one of which is continuously differentiable.
The second result follows from the fact that, with a little abuse of notation,
$$\mathop{\rm conv}\nolimits\{I-Q\left|\right.Q\in\partial_B(\mathop{\rm prox}\nolimits_{g/\gamma})(x/\gamma)\} = I-\mathop{\rm conv}\nolimits(\partial_B(\mathop{\rm prox}\nolimits_{g/\gamma})(x/\gamma)).$$
\iftoggle{svver}{\qed}{}
\end{proof}
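As an illustration (our code), take $g=\lambda\|\cdot\|_1$, so that $g^\star$ is the indicator of the $\ell_\infty$-ball of radius $\lambda$: Moreau's decomposition turns $\mathop{\rm prox}\nolimits_{\gamma g^\star}$ into the projection onto $[-\lambda,\lambda]^n$, and the relation $P=I-Q$ between the two generalized Jacobians can be checked coordinatewise.

```python
import numpy as np

def soft(z, t):
    # prox of t*||.||_1 (soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_conj_l1(x, gamma, lam):
    # Moreau's decomposition: prox_{gamma*g^*}(x) = x - gamma*prox_{g/gamma}(x/gamma)
    # for g = lam*||.||_1, prox_{g/gamma} is soft-thresholding with threshold
    # lam/gamma, so the right-hand side collapses to the projection onto
    # the box [-lam, lam]^n (independent of gamma)
    return x - gamma * soft(x / gamma, lam / gamma)
```

Away from the tie points $|x_i|=\lambda$, the Jacobian of the projection is $I-Q$ with $Q$ a diagonal $0/1$ Jacobian of $\mathop{\rm prox}\nolimits_{g/\gamma}$ at $x/\gamma$, exactly as Proposition~\ref{prop:MorDec} predicts.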
\emph{Semismooth} mappings~\cite{qi1993nonsmooth} are precisely Lipschitz continuous mappings for which the generalized Jacobian (and consequently the $B$-subdifferential) furnishes a first-order approximation.
\begin{definition}\label{def:Semismooth}
Let $G:\Re^n\to\Re^m$ be locally Lipschitz continuous
at $\bar{x}$. We say that $G$ is \emph{semismooth} at $\bar{x}$ if
$$\|G(x)+H(\bar{x}-x)-G(\bar{x})\|=o(\|x-\bar{x}\|)\ \textrm{as}\ x\to\bar{x},\ \forall H\in\partial_CG(x)$$
whereas $G$ is said to be \emph{strongly semismooth} if $o(\|x-\bar{x}\|)$ can be replaced with $O(\|x-\bar{x}\|^2)$.
\end{definition}
We remark that the original definition of semismoothness given by~\cite{mifflin1977semismooth} requires $G$ to be directionally differentiable at $x$. The definition given here is the one employed by~\cite{gowda2004inverse}. Another remark worth making is that $\partial_C G(x)$ can be replaced with the smaller set $\partial_B G(x)$ in Definition~\ref{def:Semismooth}.
Fortunately, the class of semismooth mappings is rich enough to include proximal mappings of most of the functions arising in interesting applications. For example, \emph{piecewise smooth ($PC^1$) mappings} are semismooth everywhere. Recall that a continuous mapping $G:\Re^n\to\Re^m$ is $PC^1$ if there exists a finite collection of smooth mappings $G_i:\Re^n\to\Re^m$, $i=1,\ldots,N$ such that
$$G(x)\in\{G_1(x),\ldots,G_N(x)\},\quad\forall x\in\Re^n.$$
The definition of $PC^1$ mappings given here is less general than the one of, e.g.,~\cite[Ch. 4]{scholtes2012introduction} but it suffices for our purposes.
For every $x\in\Re^n$ we introduce the set of essentially active indices\footnote{$[N]\triangleq\{1,\ldots,N\}$ for any positive integer $N$.}
$$I_G^e(x)=\{i\in[N]\ |\ x\in\mathop{\rm cl}\nolimits(\mathop{\rm int}\nolimits\{x\ |\ G(x)=G_i(x)\})\}.$$
In other words, $I_G^e(x)$ contains only indices of the pieces $G_i$ for which there exists a full-dimensional set on which $G$ agrees with $G_i$. In accordance with Definition~\ref{def:Jacs}, the generalized Jacobian of $G$ at $x$ is the convex hull of the Jacobians of the essentially active pieces, \emph{i.e.},~\cite[Prop.~4.3.1]{scholtes2012introduction}
\begin{equation}\label{eq:PCJac}
\partial_C G(x)=\mathop{\rm conv}\nolimits\{\nabla G_i(x)\ |\ i\in I_G^e(x)\}.
\end{equation}
As it will be clear in Section~\ref{sec:Examples}, in many interesting cases $\mathop{\rm prox}\nolimits_{\gamma g}$ is $PC^1$ and thus semismooth. Furthermore, through~\eqref{eq:PCJac} an element of $\partial_C\mathop{\rm prox}\nolimits_{\gamma g}(x)$ can be easily computed once $\mathop{\rm prox}\nolimits_{\gamma g}(x)$ has been computed.
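For instance, for $g=\lambda\|\cdot\|_1$ the proximal mapping is componentwise soft-thresholding, which is piecewise affine; the following minimal sketch (function names are ours) evaluates the mapping and one element of its generalized Jacobian obtained through~\eqref{eq:PCJac}:

```python
import numpy as np

def prox_l1(x, t):
    """Soft-thresholding: prox of t*||.||_1 at x (piecewise affine, hence PC^1)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def jac_prox_l1(x, t):
    """One element of the generalized Jacobian of prox_l1(., t) at x:
    diagonal, with entry 1 where the threshold is strictly inactive."""
    return np.diag((np.abs(x) > t).astype(float))
```

On $x=(1.5,-0.2,0.7)$ with $t=0.5$ this returns $(1,0,0.2)$ and the diagonal Jacobian $\mathrm{diag}(1,0,1)$.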
A special but important class of convex functions whose proximal mapping is $PC^1$ are piecewise quadratic (PWQ) functions. A convex function $g\in\mathcal{S}^0(\Re^n)$ is called PWQ if $\mathop{\rm dom}\nolimits g$ can be represented as the union
of finitely many polyhedral sets, relative to each of which $g(x)$ is given by an expression of the form $(1/2)x'Qx+q'x+c$ ($Q\in\Re^{n\times n}$ must necessarily be symmetric positive semidefinite)~\cite[Def.~10.20]{rockafellar2011variational}. The class of PWQ functions is quite general since it includes e.g. polyhedral norms, indicators and support functions of polyhedral sets, and it is closed under addition, composition with affine mappings, conjugation, inf-convolution and inf-projection~\cite[Prop.~10.22, Proposition~11.32]{rockafellar2011variational}. It turns out that the proximal mapping of a PWQ function is \emph{piecewise affine} (PWA)~\cite[12.30]{rockafellar2011variational} ($\Re^n$ is partitioned in polyhedral sets relative to each of which $\mathop{\rm prox}\nolimits_{\gamma g}$ is an affine mapping), hence strongly semismooth~\cite[Prop.~7.4.7]{facchinei2003finite}.
Another example of a proximal mapping that is strongly semismooth is the projection operator onto symmetric cones~\cite{sun2002semismooth}.
We refer the reader to \cite{mifflin1999properties,meng2008lagrangian,meng2009moreau,meng2005semismoothness} for conditions that guarantee semismoothness of the proximal mapping for more general convex functions.
\subsection{Approximate generalized Hessian for $F_\gamma$}
Having established properties of generalized Jacobians for proximal mappings, we are now in position to construct a generalized Hessian for $F_\gamma$ that will allow the development of Newton-like methods with fast asymptotic convergence rates. The obvious route to follow is to assume that $\nabla F_\gamma$ is semismooth and employ $\partial_C(\nabla F_\gamma)$ as a generalized Hessian for $F_\gamma$. However, semismoothness would require extra assumptions on $f$. Furthermore, the form of $\partial_C(\nabla F_\gamma)$ is quite complicated involving third-order partial derivatives of $f$.
On the other hand, what is really needed to devise Newton-like algorithms with fast local convergence rates is a \emph{linear Newton approximation (LNA)}, cf. Definition~\ref{def:LNA}, at some stationary point of $F_\gamma$, which by Theorem \ref{Th:PropFg}\eqref{cor:Equivalence} is also a minimizer of $F$, provided that $\gamma\in (0,1/L_f)$.
The approach we follow is largely based on \cite{sun1997computable}, \cite[Prop.~10.4.4]{facchinei2003finite}.
The following definition is taken from~\cite[Def.~7.5.13]{facchinei2003finite}.
\begin{definition}\label{def:LNA}
Let $G:\Re^n\to\Re^m$ be continuous on $\Re^n$. We say that $G$ admits a linear Newton approximation at a vector $\bar{x}\in\Re^n$ if there exists a set-valued mapping $\mathscr{G}:\Re^n\rightrightarrows\Re^{m\times n}$ that has nonempty compact images, is upper semicontinuous at $\bar{x}$ and for any $H\in\mathscr{G}(x)$
$$\|G(x)+H(\bar{x}-x)-G(\bar{x})\|= o(\|x-\bar{x}\|)\ \textrm{ as } x\to\bar{x}.$$
If instead
$$\|G(x)+H(\bar{x}-x)-G(\bar{x})\|= O(\|x-\bar{x}\|^2)\ \textrm{ as } x\to\bar{x},$$
then we say that $G$ admits a strong linear Newton approximation at $\bar{x}$.
\end{definition}
Arguably the most notable example of a LNA for semismooth mappings is the generalized Jacobian, cf. Definition~\ref{def:Jacs}. However, semismooth mappings can admit LNAs different from the generalized Jacobian. More importantly, mappings that are not semismooth may also admit a LNA.
It turns out that we can define a LNA for $\nabla F_\gamma$ at any stationary
point, whose elements have a simpler form than those of $\partial_C(\nabla F_\gamma)$,
without assuming semismoothness of $\nabla F_\gamma$. We call it \emph{approximate
generalized Hessian} and it is given by
\begin{equation*}\label{eq:LNAF}
\hat{\partial}^2 F_\gamma(x)\triangleq\{\gamma^{-1}(I-\gamma\nabla^2 f(x))(I-P(I-\gamma\nabla^2 f(x)))\ |\ P\in\partial_C(\mathop{\rm prox}\nolimits_{\gamma g})(x-\gamma\nabla f(x))\}.
\end{equation*}
The key idea in the definition of $\hat{\partial}^2 F_\gamma$, reminiscent of the Gauss-Newton method for nonlinear least-squares problems, is to omit terms vanishing at $x_\star$ that contain third-order derivatives of $f$. The following proposition shows that $\hat{\partial}^2 F_\gamma$ is indeed a LNA of $\nabla F_\gamma$ at any $x_\star\in X_\star$.
\begin{proposition}\label{prop:LNAprops1}
Let $T(x)=x-\gamma\nabla f(x)$, $\gamma\in(0,1/L_f)$ and $x_\star\in X_\star$. Then
\begin{enumerate}[\rm (i)]
\item if $\mathop{\rm prox}\nolimits_{\gamma g}$ is semismooth at $T({x}_\star)$, then $\hat{\partial}^2 F_\gamma$
is a LNA for $\nabla F_\gamma$ at ${x}_\star$,
\item if $\mathop{\rm prox}\nolimits_{\gamma g}$ is strongly semismooth at $T({x}_\star)$, and $\nabla^2 f$
is locally Lipschitz around $x_\star$, then $\hat{\partial}^2 F_\gamma$
is a strong LNA for $\nabla F_\gamma$ at ${x}_\star$.
\end{enumerate}
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
The next proposition shows that every element of $\hat{\partial}^2 F_\gamma(x)$ is a symmetric positive semidefinite matrix, whose eigenvalues are lower and upper bounded uniformly over all $x\in\Re^n$.
\begin{proposition}\label{prop:PSDHess}
Any $H\in\hat{\partial}^2 F_\gamma(x)$ is symmetric positive semidefinite and satisfies
\begin{equation}
\xi_1\|d\|^2\leq d'Hd\leq \xi_2\|d\|^2,\ \forall d\in\Re^n,
\end{equation}
where
$\xi_1\triangleq\min\left\{(1-\gamma\mu_f)\mu_f,(1-\gamma L_f)L_f\right\}$,
$\xi_2\triangleq\gamma^{-1}(1-\gamma\mu_f)$.
\end{proposition}
\begin{proof}
See Appendix.
\end{proof}
The next lemma shows uniqueness of the solution of~\eqref{eq:GenProb} under a nonsingularity assumption on the elements of $\hat{\partial}^2F_\gamma(x_\star)$. Its proof is similar to~\cite[Lem.~7.2.10]{facchinei2003finite}, however $\nabla F_\gamma$ is not required to be locally Lipschitz around $x_\star$.
\begin{lemma}\label{lem:sharpMin}
Let $x_\star\in X_\star$. Suppose that $\gamma\in(0,1/L_f)$, $\mathop{\rm prox}\nolimits_{\gamma g}$ is semismooth at $x_\star-\gamma\nabla f(x_\star)$ and every element of $\hat{\partial}^2F_\gamma(x_\star)$ is nonsingular. Then $x_\star$ is the unique solution of~\eqref{eq:GenProb}. In fact, there exist positive constants $\delta$ and $c$ such that
$$\|x-x_\star\|\leq c\|G_\gamma(x)\|,\ \mathrm{for\ all\ } x\ \mathrm{with}\ \|x-x_\star\|\leq\delta.$$
\end{lemma}
\begin{proof}
See Appendix.
\end{proof}
\section{Simulations}\label{sec:Simulations}
This section is devoted to the application of Algorithms~\ref{al:PNM} and~\ref{al:PGNM} to some practical problems. Based on the results obtained in Section~\ref{sec:Examples},
we discuss the Newton system for each of the examples, and compare the proposed approach against other algorithms on the basis of numerical results obtained with {\sc{Matlab}}.
\subsection{Box constrained QPs}
A quadratic program with box constraints
can be reformulated in the form \eqref{eq:GenProb} by adding to the cost the indicator of the feasible set, namely $\delta_{[l,u]}$. Then
$$ f(x) = \frac{1}{2}x'Qx + q'x,\quad g(x) = \delta_{[l,u]}(x). $$
The B-subdifferential, in this case, is composed of diagonal matrices, with diagonal
elements in $\{0,1\}$, cf. Section~\ref{ex:projBox}. More precisely, in Algorithm~\ref{al:PNM}, we can split
variable indices in the two sets
\begin{align*}
\alpha &= \left\{i\ \left.\right|\ l_i < \left[x-\gamma\nabla f(x)\right]_i < u_i\right\},\\
\bar\alpha &= \left\{1,\ldots, n\right\}\setminus \alpha,
\end{align*}
and choose $P=\mathop{\rm diag}\nolimits(p_1,\ldots,p_n)$, with $p_i=1$ if $i\in\alpha$ and $p_i=0$ otherwise.
Then the Newton system~\eqref{eq:RegNewtSys} reduces to the block-triangular form
$$ \begin{bmatrix} I_{|\bar\alpha|} & \\ \gamma Q_{\alpha\bar\alpha} & \gamma Q_{\alpha\alpha} \end{bmatrix} d^k = P_{\gamma}(x^k) - x^k.$$
This can be solved by forward substitution, where only the $|\alpha|$-by-$|\alpha|$ block is solved via CG.
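The procedure just described can be sketched as follows; this is a simplified illustration with our own naming, in which the $\alpha$-block is solved directly rather than by CG:

```python
import numpy as np

def newton_direction_box_qp(Q, q, l, u, x, gamma):
    """Semismooth Newton direction for f(x)=0.5 x'Qx+q'x, g = indicator of [l,u].
    Solves the block-triangular system by forward substitution; the alpha-block
    is solved directly here (the paper uses CG for that block)."""
    grad = Q @ x + q
    z = x - gamma * grad                 # forward step x - gamma*grad f(x)
    Pg = np.clip(z, l, u)                # P_gamma(x): projection onto the box
    rhs = Pg - x
    alpha = (z > l) & (z < u)            # strictly inactive bounds
    abar = ~alpha
    d = np.empty_like(x)
    d[abar] = rhs[abar]                  # identity block: d_abar = rhs_abar
    # alpha rows: gamma*Q[a,abar] d_abar + gamma*Q[a,a] d_a = rhs_a
    d[alpha] = np.linalg.solve(
        gamma * Q[np.ix_(alpha, alpha)],
        rhs[alpha] - gamma * Q[np.ix_(alpha, abar)] @ d[abar])
    return d
```

For $Q=2I$, $q=(-2,-6)$, box $[0,1]^2$ and $\gamma=0.2$, a single step from $x=(0.5,0.5)$ lands on the solution $(1,1)$.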
We tested the proposed algorithms against the commercial QP solver {\sc{Gurobi}}, {\sc{Matlab}}'s built-in ``quadprog'' solver,
the accelerated forward-backward splitting~\cite{nesterov2007gradient}
(with constant stepsize) and the alternating directions method of multipliers (ADMM)~\cite{boyd2011distributed}. The latter was both implemented using a direct solver, which requires
the initial computation of the Cholesky factor of $Q$, and the conjugate gradient method. Random problems were generated with chosen size, density and condition number,
as explained in~\cite{gonzaga2013optimal}. Figures~\ref{fig:QPbox_size_cond}-\ref{fig:QP_time_vs_obj} show the results obtained:
the proposed algorithms are generally faster than the others, and also appear to scale
well with respect to problem size and condition number.
\begin{figure}[tp!]
\center
\begin{tabular}{cc}
\subfloat[Problem size]{\includegraphics[width=0.49\textwidth]{figures/QPbox_size_vs_time_bw}} &
\subfloat[Condition number]{\includegraphics[width=0.49\textwidth]{figures/QPbox_cond_vs_time_bw}}
\end{tabular}
\caption{Box constrained QPs. Average running times over a sample of $20$ random instances, with increasing problem size and condition number.}
\label{fig:QPbox_size_cond}
\end{figure}
\subsection{General QPs}
If we consider the more general quadratic programming problem with constraint $l\leq Ax \leq u$,
$A\in\Re^{m\times n}$, then the projection onto the feasible set is not
explicitly computable like in the previous example.
Formulating the Fenchel dual, and letting $w$ be the dual variable, one can tackle the composite problem with
$$ f(w) = \frac{1}{2}(A'w+q)'Q^{-1}(A'w+q),\qquad g(w) = \sigma_{[l,u]}(w).$$
Also in this case $\mathop{\rm prox}\nolimits_{\gamma g}(w) = w-\Pi_{[\gamma l,\gamma u]}(w)$
has its B-subdifferential composed of binary diagonal matrices, cf. Section~\ref{ex:SuppFun}:
\begin{align*}
\bar\alpha &= \left\{i\ \left.\right|\ \gamma l_i \leq \left[w-\gamma\nabla f(w)\right]_i \leq \gamma u_i\right\},\\
\alpha &= \left\{1,\ldots, m\right\}\setminus \bar\alpha.
\end{align*}
Choosing $P=\mathop{\rm diag}\nolimits(p_1,\ldots,p_n)$, with $p_i=1$ if $i\in\alpha$ and $p_i=0$ otherwise,
just like in the previous case system~\eqref{eq:RegNewtSys} is block-triangular:
$$ \begin{bmatrix} I_{|\bar\alpha|} & \\ \gamma A_{\alpha}Q^{-1}A_{\bar\alpha}' & \gamma A_{\alpha}Q^{-1}A_{\alpha}' \end{bmatrix} d = P_{\gamma}(w) - w. $$
Here subscripts denote \emph{row} subsets. The latter is solved by forward
substitution, and the $|\alpha|$-by-$|\alpha|$ block is solved via CG.
Note that all the products with $Q^{-1}$ are merely formal, and require a
previous computation of the Cholesky factor of $Q$. Figure~\ref{fig:QP_time_vs_obj} compares Algorithm~\ref{al:PNM} and \ref{al:PGNM} to the accelerated
version of FBS~\cite{nesterov2007gradient} and to ADMM~\cite{boyd2011distributed}, in terms of objective value decrease.
\begin{figure}[tp!]
\center
\begin{tabular}{cc}
\subfloat[Box constrained QP, $n=1500$ and $\kappa = 10^4$.]{\includegraphics[width=0.49\textwidth]{figures/QPbox_n_1500_cond_1e4_bw}} &
\subfloat[General QP, $n=1000$, $m=2000$ and $\kappa = 10^3$.]{\includegraphics[width=0.49\textwidth]{figures/QPdual_n_1000_cond_1000_dens_20_bw}}
\end{tabular}
\caption{QPs. Comparison of the methods applied to a box constrained QP (primal) and to a general QP (dual).}
\label{fig:QP_time_vs_obj}
\end{figure}
\subsection{\texorpdfstring{$\ell_1$}{l1}-regularized least squares}
This is a classical problem arising in many fields like statistics, machine learning, signal and image processing.
The purpose is to find a sparse solution to an underdetermined linear system.
We have
$$ f(x) = \frac{1}{2}\|Ax-b\|_2^2,\qquad g(x) = \lambda\|x\|_1,$$
where $A\in\Re^{m\times n}$ with $m < n$. The $\ell_1$-regularization term is known to promote sparsity in the solution vector $x^*$.
As we mentioned in Section~\ref{ex:EllOne}, the proximal mapping of the $\ell_1$ norm is the soft-thresholding operator, whose generalized Jacobian is diagonal.
Specifically, if
\begin{align*}
\alpha &= \left\{i\ \left.\right|\ \left|\left[x-\gamma\nabla f(x)\right]_i\right| > \gamma\lambda\right\},\\
\bar\alpha &= \left\{1,\ldots, n\right\}\setminus \alpha,
\end{align*}
then $P=\mathop{\rm diag}\nolimits(p_1,\ldots,p_n)$, with $p_i=1$ if $i\in\alpha$ and $p_i=0$ otherwise, is an element of $\partial_B(\mathop{\rm prox}\nolimits_{\gamma g})(x-\gamma\nabla f(x))$.
The simplified system~\eqref{eq:RegNewtSys} reduces then to
\begin{equation}
\begin{bmatrix}
I_{|\bar\alpha|} & \\
\gamma A_{\alpha}'A_{\bar\alpha} & \gamma A_{\alpha}'A_{\alpha}
\end{bmatrix}d = P_{\gamma}(x)-x.
\end{equation}
Here subscripts denote \emph{column} subsets.
The dimension of the problem to solve at each iteration is then $|\alpha|$: the smaller this set is, the cheaper
the computation of the Newton direction is.
Noting that, at any given $x$, larger values of $\lambda$ allow for a smaller size of $\alpha$, and that decreasing $\lambda$ decreases the objective value,
we can set up a simple continuation scheme in order to keep the size of $\alpha$ small:
starting from a relatively large value of $\lambda = \lambda_{\mbox{\tiny max}} > \lambda_0$, we decrease it every time a certain criterion is met until $\lambda = \lambda_0$,
using the solution at each stage to warm-start the next. Specifically, we set $\lambda_{\mbox{\tiny max}} = \|A'b\|_{\infty}$,
which is the threshold above which the null solution is optimal. For an in-depth analysis of such continuation techniques on this type of problems, see \cite{xiao2013proximal}.
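The continuation scheme can be sketched as below; the inner solver is left abstract (any warm-started method such as FBN-CG), and the geometric decrease factor `eta` is our own choice, not a value prescribed here:

```python
import numpy as np

def l1_ls_continuation(A, b, lam0, solve, eta=0.5):
    """Continuation on lambda for 0.5*||Ax-b||^2 + lam*||x||_1.
    `solve(A, b, lam, x0)` is any inner solver warm-started at x0; lambda is
    decreased geometrically from ||A'b||_inf (above which the zero solution
    is optimal) down to the target value lam0."""
    lam = np.linalg.norm(A.T @ b, np.inf)
    x = np.zeros(A.shape[1])
    while lam > lam0:
        lam = max(eta * lam, lam0)   # shrink lambda, never below the target
        x = solve(A, b, lam, x)      # warm start from the previous stage
    return x
```

Any inner solver can be plugged in; with $A=I$ the inner problem has the closed-form soft-thresholding solution, which makes the loop easy to check.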
We compared our method to SpaRSA~\cite{wright2009sparse},
YALL1~\cite{yang2011alternating} and l1\_ls~\cite{koh2007interior}.
The algorithms were tested against the datasets available at
\url{wwwopt.mathematik.tu-darmstadt.de/spear}~\cite{lorenz2013constructing}.
These include datasets with different sizes and dynamic ranges of the solution.
In each test we obtained a reference solution by running the method extensively, with
a very small tolerance as stopping criterion. Then we set all the algorithms to stop
as soon as the primal objective value reached a threshold at a relative distance
$\epsilon_r = 10^{-8}$ from the reference solution.
Figure~\ref{fig:l1ls_1} reports the performance profiles \cite{dolan2012benchmarking}
of the algorithms considered on the aforementioned problem set.
A point $(r,f)$ on a line indicates that the correspondent algorithm had a
performance ratio\footnote{An algorithm has a performance ratio $r$, with respect
to a problem, if its running time is $r$ times the running time of the top
performing algorithm among the ones considered.} at most $r$ in a fraction $f$ of problems.
It appears that the forward-backward Newton-CG method is very stable compared to the other algorithms considered.
The benefits of the continuation scheme are evident from Figure \ref{fig:l1ls_3}, where the size of the linear system solved by FBN-CG at every iteration is shown.
\begin{figure}[tp!]
\center
\includegraphics[width=0.7\textwidth]{figures/L1LS_spear_1_274_color}
\caption{$\ell_1$-regularized least squares. Performance profiles of the algorithms on the SPEAR test set with $\lambda_0=10^{-3}\lambda_{\mbox{\tiny max}}$.
The FBN-CG methods considered perform continuation on $\lambda$.}
\label{fig:l1ls_1}
\end{figure}
\begin{figure}[tp!]
\center
\begin{tabular}{cc}
\subfloat[\emph{spear\_inst\_1}]{\includegraphics[width=0.49\textwidth]{figures/L1LS_size_inst1_lambda_1}} &
\subfloat[\emph{spear\_inst\_91}]{\includegraphics[width=0.49\textwidth]{figures/L1LS_size_inst91_lambda_1}} \\
\subfloat[\emph{spear\_inst\_131}]{\includegraphics[width=0.49\textwidth]{figures/L1LS_size_inst131_lambda_1}} &
\subfloat[\emph{spear\_inst\_151}]{\includegraphics[width=0.49\textwidth]{figures/L1LS_size_inst151_lambda_1}}
\end{tabular}
\caption{$\ell_1$-regularized least squares. Size of the linear system solved, by FBN-CG with and without warm-starting, compared to the full problem size.}
\label{fig:l1ls_3}
\end{figure}
\subsection{\texorpdfstring{$\ell_1$}{l1}-regularized logistic regression}
This is another example of sparse fitting problem, although here the solution is used to perform binary classification.
The composite objective function consists of
$$ f(x) = \sum_{i=1}^m\log(1+e^{-a_{i}'x}),\qquad g(x) = \lambda\|x_{[n-1]}\|_1,$$
and again the $\ell_1$-regularization enforces sparsity in the solution.
We have
$$(\mathop{\rm prox}\nolimits_{\gamma g}(x))_i = \begin{cases}\mathop{\rm sign}\nolimits(x_i)(|x_i|-\lambda\gamma)_{+}, & i = 1,\ldots,n-1, \\ x_i, & i = n.\end{cases}$$
Let $A\in\Re^{m\times n}$ be the feature matrix with rows $a_{i}$ having the trailing feature (the \emph{bias} term) equal to one.
If we set $\sigma(x) = (1+e^{-Ax})^{-1}$ and let $\Sigma(x) = \mathop{\rm diag}\nolimits(\sigma(x)\circ (1-\sigma(x)))$,
then the Newton system~\eqref{eq:RegNewtSys} is
$$ \begin{bmatrix} I_{|\bar\alpha|} & \\ \gamma A_{\alpha}'\Sigma(x)A_{\bar\alpha} & \gamma A_{\alpha}'\Sigma(x)A_{\alpha} \end{bmatrix} d = P_{\gamma}(x) - x, $$
where this time
\begin{align*}
\alpha &= \left\{i\ \left.\right|\ \left|\left[x-\gamma\nabla f(x)\right]_i\right| > \gamma\lambda\right\}\cup\left\{n\right\},\\
\bar\alpha &= \left\{1,\ldots, n\right\}\setminus \alpha.
\end{align*}
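As an illustration, the ingredients of the system above can be assembled as follows (a sketch with our own naming; the bias is taken to be the trailing component, consistent with the formulation above):

```python
import numpy as np

def logreg_pieces(A, x, gamma, lam):
    """Quantities entering the l1-logistic Newton system: the diagonal of
    Sigma(x), the gradient of f, and the active set (bias always active)."""
    s = 1.0 / (1.0 + np.exp(-A @ x))   # sigma(x), componentwise
    Sigma = s * (1.0 - s)              # diagonal of Sigma(x)
    grad = -A.T @ (1.0 - s)            # gradient of sum_i log(1+exp(-a_i'x))
    z = x - gamma * grad               # forward step
    alpha = np.abs(z) > gamma * lam
    alpha[-1] = True                   # the bias term is never thresholded
    return Sigma, grad, alpha
```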
We compared FBN-CG to the accelerated FBS~\cite{nesterov2007gradient}.
A continuation technique, similar to that described for the previous example, is employed in order to keep $|\alpha|$ small.
As in the previous example, an approximate solution to the problem was first
computed by means of extensive runs of one of the methods, and then the algorithms were set to stop
once at a relative distance of $\epsilon_r = 10^{-8}$ from it.
Table~\ref{tbl:l1lr_1} shows
how the methods scale with the number of features $n$,
for sparse random datasets with $m = n/10$ and $\approx 50$ nonzero features
per row. The datasets were generated as described
in~\cite[Sec.~4.2]{koh2007interior}. It is apparent that FBN-CG improves
markedly on the accelerated version of forward-backward splitting.
\begin{table}[tp!]
\center
\begin{tabular}{|r|rr|rr|rr|}
\hline
& \multicolumn{2}{c|}{FBN-CG I} & \multicolumn{2}{c|}{FBN-CG II} & \multicolumn{2}{c|}{Accel. FBS} \\
\hline \hline
$n$ & time & iter. & time & iter. & time & iter.\\
\hline
100 & 0.04 & 51.1 & 0.05 & 57.3 & 0.06 & 292.4\\
215 & 0.05 & 52.8 & 0.06 & 61.0 & 0.11 & 462.1\\
464 & 0.06 & 54.4 & 0.09 & 69.4 & 0.18 & 647.2\\
1000 & 0.08 & 62.2 & 0.12 & 74.4 & 0.33 & 962.3\\
2154 & 0.27 & 98.8 & 0.35 & 108.2 & 0.82 & 1553.2\\
4641 & 0.95 & 151.1 & 0.94 & 142.2 & 3.58 & 2451.3\\
10000 & 2.40 & 217.7 & 2.54 & 207.0 & 9.36 & 3553.6\\
\hline
\end{tabular}
\caption{$\ell_1$-regularized logistic regression. Average running time (in seconds) and average number of iterations, for random datasets
with $m = n/10$ and increasing $n$, $\lambda = 1$.}
\label{tbl:l1lr_1}
\end{table}
\subsection{Matrix completion}
We consider the problem of recovering the entries of a matrix, which is known to have small rank, from a sample of them. One may refer to~\cite{candes2009exact} for
a detailed theoretical analysis of the problem.
Since we are now dealing with matrix variables, we conveniently adopt the notation of \emph{vector representation} of the matrix $X$, denoted by $\rm vec(X)$,
\emph{i.e.},~ the $mn$-dimensional vector obtained by stacking the columns of $X$.
The problem is formulated in a composite form as
$$ f(X) = \frac{1}{2}\|\mathcal{A}(X)-b\|^2,\qquad g(X) = \lambda\|X\|_*.$$
The linear mapping $\mathcal{A}:\Re^{m\times n}\to\Re^k$ is represented as a $k$-by-$mn$
matrix $A$ acting on $\rm vec(X)$. The problem is nothing more than a least squares problem with a nuclear norm regularization term, having
$\nabla f(X) = A'(A\rm vec(X)-b)$ and $\nabla^2 f(X) = A'A$.
For a matrix completion task, matrix $A$ is a binary matrix that selects $k$ elements from $X$.
Hence $\nabla^2 f(X)$ is actually diagonal, with $k$ nonzero elements:
$$ A'A = \mathop{\rm diag}\nolimits(h_1,\ldots,h_{mn}),\quad h_i = \begin{cases}1 & i \mbox{ is selected by } A,\\0 & \mbox{otherwise}.\end{cases}$$
The proximal mapping associated with $g = \|\cdot\|_*$ is the soft-thresholding applied to the singular values of the matrix argument. Its $B$-subdifferential elements
act on $m$-by-$n$ matrices as expressed in \eqref{eq:JacNS}: if we consider, again, vector representations
the linear mapping $P$ is explicitly expressed by some symmetric and positive semi-definite matrix $Q\in\Re^{mn\times mn}$ with eigenvalues in the interval $[0,1]$.
Hence we can express \eqref{eq:RegNewtSys} as follows:
\begin{equation} (G-GQG + \delta I)\rm vec(D) = -G\rm vec(X-P_{\gamma}(X)),\label{eq:NewtonSystemMC1}\end{equation}
where $$ G = I-\gamma \nabla^2 f(x) = I_{mn} - \gamma A'A = \mathop{\rm diag}\nolimits(g_1,\ldots,g_{mn}), $$
has diagonal elements $1-\gamma$ and $1$. Note however that we do not need to form the system \eqref{eq:NewtonSystemMC1} in order to compute residuals and carry out CG iterations, as matrix $Q$ is indeed
very large and dense. Instead, one can observe that pre-multiplication of $\rm vec(D)$ by a diagonal matrix $G$ is equivalent to the Hadamard product $\widehat{G}\circ D$, where
$$ \widehat{G} = \begin{bmatrix}g_1 & g_{m+1} & \cdots & g_{(n-1)m+1} \\ \vdots & \vdots & & \vdots \\ g_m & g_{2m} & \cdots & g_{nm} \end{bmatrix}. $$
Furthermore, with arguments similar to the ones in~\cite{zhao2010newton}, the computational effort needed to evaluate $P$ can be drastically reduced
due to the sparsity pattern of matrices $\Omega_1, \Omega_2, \Omega_3$ in \eqref{eq:JacNS}.
Hence it is convenient to compute residuals according to the following rewriting of \eqref{eq:NewtonSystemMC1}:
\begin{equation} \widehat{G}\circ D-\widehat{G}\circ P(\widehat{G}\circ D) + \delta D = -\widehat{G}\circ(X-P_{\gamma}(X)).\label{eq:NewtonSystemMC2}\end{equation}
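Inside CG, the left-hand side of~\eqref{eq:NewtonSystemMC2} can thus be applied matrix-free; a sketch, where `P` is a stand-in callable for the Jacobian of the singular-value soft-thresholding:

```python
import numpy as np

def mc_matvec(D, Ghat, P, delta):
    """Apply D -> Ghat*D - Ghat*P(Ghat*D) + delta*D (Hadamard products),
    without ever forming the mn-by-mn system matrix."""
    GD = Ghat * D                  # Hadamard product replaces the diagonal G
    return GD - Ghat * P(GD) + delta * D
```

With `P` the identity and `Ghat` all ones, the operator reduces to multiplication by `delta`, which gives an easy sanity check.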
Even in this case, as in the previous examples, we can warm-start our methods by approximately solving the problem for $\lambda_{\mbox{\tiny max}} \geq \lambda > \lambda_{0}$ and updating $\lambda$
in a continuation scheme until the final stage, in which $\lambda = \lambda_{0}$.
We considered the accelerated proximal gradient
with line search (APGL)~\cite{toh2010accelerated} and the linearized alternating
direction method (LADM)~\cite{yang2013linearized} in performing our tests. Both the methods also
implement continuation on their parameters. Table~\ref{tbl:mc_1} shows the
average performance, in terms of number of iterations and SVD computations,
on random matrices generated according to~\cite{toh2010accelerated}.
FBN-CG always succeeds at finding a low-error solution within a moderate number
of iterations and SVD computations, which is not the case for LADM. Regarding
APGL, it is worth noticing that it takes advantage of several
acceleration techniques specific to this problem, which we have not considered
for our algorithm. The drawback of our method is that at every iteration,
the computation of \eqref{eq:JacNS} requires a full SVD as opposed to a
decomposition in reduced form. Whether this can be avoided, and how this would
affect the overall method, requires further investigation.
\begin{table}
\center
\begin{tabular}{|r||r|r|r|r|r|r|}
\hline
& $m$ ($=n$) & density & iterations & SVDs & error \\
\hline
\multirow{3}{*}{FBN-CG I} & 100 & 0.56 & 67.3 & 86.2 & 6.89e-04\\
& 200 & 0.35 & 76.8 & 100.3 & 3.56e-04\\
& 500 & 0.20 & 83.8 & 96.8 & 1.92e-04\\
\hline
\multirow{3}{*}{FBN-CG II} & 100 & 0.56 & 54.1 & 126.1 & 6.89e-04\\
& 200 & 0.35 & 65.6 & 153.3 & 3.56e-04\\
& 500 & 0.20 & 71.0 & 151.2 & 1.92e-04\\
\hline
\multirow{3}{*}{APGL} & 100 & 0.56 & 92.4 & 92.4 & 5.94e-04\\
& 200 & 0.35 & 94.9 & 94.9 & 3.56e-04\\
& 500 & 0.20 & 67.3 & 67.3 & 1.92e-04\\
\hline
\multirow{3}{*}{LADM} & 100 & 0.56 & 183.2 & 183.2 & 4.58e-03\\
& 200 & 0.35 & 494.2 & 494.2 & 7.57e-03\\
& 500 & 0.20 & 1000.0 & 1000.0 & 2.70e-02\\
\hline
\end{tabular}
\caption{Matrix completion. Average performance on $10$ randomly generated instances $M$ with $\mbox{rank}(M) = 10$, $\lambda = 10^{-2}$.
The density column refers to the fraction of observed coefficients. APGL and LADM require one SVD per iteration.
The error reported is $\|X-M\|_F/\|M\|_F$, the relative distance between the computed solution $X$ and the original matrix $M$.}
\label{tbl:mc_1}
\end{table}
\section{Introduction}
The technique of employing Imaging Air Cherenkov Telescopes (IACTs) has been
successfully used for ground based gamma-ray astronomy for about
two decades, revealing more than 70 sources in the course of this
period. The successful race started with the discovery of the first
TeV gamma-ray signal from the Crab Nebula - now regarded as the standard
candle in gamma-ray astronomy - by the Whipple Collaboration in 1989
\cite{whipple}. Over 20 years following that event, the sensitivity
of the IACTs has improved dramatically, leading to a large increase
in the rate of scientific discovery made with these instruments.
In the beginning a few tens of hours were needed to detect
a significant signal from the Crab Nebula, whereas today's
installations operating in the same energy range require only
a few minutes for the same signal strength.
The main
improvements are essentially due to a) a finer pixel size of photo sensors
in the imaging camera, b) an improved trigger, c) a larger size of telescopes
and improved optics
providing stronger signals and revealing more structures in the images
(which helps to further suppress the background) and
d) the use of multiple telescopes operating in
coincidence mode (the so-called stereo mode of observations).
However, the field of view (FoV) of IACTs has not undergone a similar evolution.
The largest FoV telescope had a field of about $7^\circ$
\cite{sinitsyna}.
Contemporary IACTs typically cover a $(2-4)^{\circ}$ wide FoV.
A wider FoV would enable sensitive
all-sky surveys to be conducted within a relatively short time frame.
In optical astronomy the trend to larger FoV is evident, just to mention
the LSST \cite{lsst} and the Pan-STARRS projects \cite{panstarrs}.
An interesting example of the use of even
moderately wide FoV telescopes has been demonstrated by the H.E.S.S.
collaboration. While performing observations of scheduled astronomical targets,
H.E.S.S. has discovered several
new sources in the $\sim 4^\circ$ effective FoV of their instrument.
Subsequently, when scanning the galactic plane, multiple sources were
found in the $\sim 300$ square degree band surveyed by
H.E.S.S. - i.e. the region scanned was much larger than a
single H.E.S.S. FoV \cite{hess}.
Along with the advantages of the wide FoV there are
also a number of drawbacks, such as: i) compared to currently
used simple prime focus constructions,
they have a more complex optical and mechanical
design, ii) the imaging camera will have a large transverse size and
thus can vignette a significant fraction of the mirror and iii) the
imaging camera will be composed of a very large number of light
sensors and one will therefore need a large number of readout
channels. These factors tend to make a wide FoV telescope expensive.
In the following we shall present a concept for a wide FoV IACT, for which
the complications due to i) and ii) are minimal. The challenge of building
a camera with a large number of channels cannot be by-passed.
\section{Wide FoV}
For the successful operation
of an IACT, one needs to provide a relatively high optical
resolution to efficiently select the rare gamma shower images from the few
orders of magnitude more frequent images induced by hadron
(background) showers. Although the images of gamma showers
tend to be small in size compared to those of hadrons,
such a selection is not straightforward
because the distributions of the parameters used to describe
their images (see, for example, \cite{carmona}) overlap significantly.
The differences in shape parameters of gamma
and hadron images are in the range of $(0.1-0.2)^{\circ}$ for the TeV
energy range and they are a few times less for the (sub) 100~GeV energy
range. Therefore, for a successful image discrimination
the telescope should provide a Point Spread Function
(PSF) that is $\leq (0.1-0.2)^{\circ}$ for the TeV energy range
and a few times less for the (sub) 100~GeV range.
The simplest and most straightforward way to design a large FoV
telescope is using the prime-focus design, i.e. with
just a single (primary) mirror surface of a
required minimum $f/D$.
Five telescopes of different prime-focus designs were simulated in
\cite{schliesser_mirzoyan}.
In that study ten optical resolutions in the range of
$(0.01-0.1)^{\circ}$ RMS, with a step size of $(0.01)^{\circ}$,
were simulated.
It has been shown in \cite{schliesser_mirzoyan}, for example, that
by using an F/2.7 optics,
one can design a $10^{\circ}$ wide FoV telescope of
parabolic design that can provide a resolution of $0.05^\circ$
everywhere in the FoV.
In the same study it was shown that a Davies-Cotton telescope of
F/2.5 and even an F/2 optics of elliptical design can provide the
same $10^{\circ}$ wide FoV at a resolution of $0.05^\circ$,
albeit at the expense of a higher degree of
isochronous distortion.
In the recent work \cite{vassiliev} the authors described an
interesting
$15^{\circ}$ wide FoV aplanatic two-mirror telescope design
for gamma-ray astronomy.
They also showed that one
can improve the angular resolution for gamma events
as the optical resolution in the FoV approaches a limit
of $1'$.
An alternative way of constructing a wide angle optics is to follow
the design of the EUSO mission \cite{petrolini},
which has refracting optics that
allows a full FoV of $60^{\circ}$
or even larger. The GAW telescope for TeV gamma astronomy is following
that design in their construction \cite{GAW}. Two double-sided Fresnel
lenses were planned to be used in the optical design of EUSO.
The disadvantages of that design were considered to be the relatively high
light losses, especially for relatively large incident angles of light.
Also, distortion of images because
of scattering of light by the Fresnel lenses must be carefully taken
into account. One needs to construct
the refractive optics from materials that for the given thickness do not
substantially absorb the short-wave near UV light in the wavelength
range of 330-400 nm.
A variation of the EUSO-type solution could be to construct a stationary
wide FoV telescope or a telescope that includes some kind of
secondary optics.
\subsection{Vignetting in prime focus design}
The plate scale of a telescope, giving
the ratio of angular distance on the sky to length
in the focal plane (in deg./m), is
\begin{equation}
\delta = \frac{57.3}{f}
\end{equation}
where $f$ is the telescope focal length.
For a given detector field of view $\theta$, the diameter
of the detector assembly is
\begin{equation}
d = \frac{\theta}{\delta}
\end{equation}
For a prime focus telescope of a given focal ratio, $F = f/D$,
where $D$ is the telescope diameter, the vignetting factor,
i.e., the fractional shadowing of the aperture by the detector assembly, is given by
\begin{equation}
Vignetting = \left( d / D \right)^2 =
\left( \frac{\theta \times f}{57.3} / D \right)^2 =
\left( \theta / 57.3 \right)^{2} \times F^{2}
\end{equation}
Thus, for example, an F/2 telescope with $\theta=15^\circ$
will have a vignetting factor of 27~\%, whereas a camera with a field of view
of $\theta=10^\circ$ will have a vignetting factor of just 12~\%.
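The vignetting numbers above can be reproduced directly from the formula; a minimal sketch (the helper function and its name are ours, not part of any telescope software):

```python
def vignetting(theta_deg, f_ratio):
    """Fractional shadowing of a prime-focus mirror by its camera.

    theta_deg : full field of view in degrees
    f_ratio   : focal ratio F = f / D
    """
    # plate scale is 57.3/f deg/m, so the camera diameter is
    # d = theta * f / 57.3, and the vignetting factor is (d / D)^2
    return (theta_deg / 57.3) ** 2 * f_ratio ** 2

print(vignetting(15.0, 2.0))  # ~0.27 for F/2 and a 15 deg FoV
print(vignetting(10.0, 2.0))  # ~0.12 for F/2 and a 10 deg FoV
```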
We have mentioned above that prime focus telescopes of F/2-F/2.7 of
a few different designs can provide a FoV of $10^\circ$
with the desired optical resolution. For simplicity
let us consider an F/2 design. Compared with an F/1 design, we see that:
\begin{itemize}
\item the imaging camera will be 2 times further away from the mirror,
\item the imaging camera pixels must have a 2 times larger linear size,
\item the imaging camera weight may increase by more than a factor of 4,
\item the camera support mechanics must be significantly stiffer and
\item the vignetting by the camera will increase by a factor of $\approx$ 4.
\end{itemize}
There are therefore several reasons to consider designs which are faster.
Generally, a well designed telescope that includes more than one optical
element will offer some advantages not available in the case of prime
focus designs. Those advantages could be a) the wider FoV, b) a higher
and more homogeneous spatial resolution across the FoV, c) faster
optics/more compact size and d) lower isochronous distortion.
In the following we want to concentrate on a specific wide FoV telescope
solution that comprises more than one optical element.
\section{The Schmidt Telescope}
Of all telescope designs, the Schmidt telescope, and solutions
derived from it, provides the widest FoV.
Specifically, Schmidt type systems provide by far the largest number
of spatially resolved pixels on the focal plane
per optical element \cite{schroeder}.
Moreover, it is possible to work at very fast F-ratios, below F/1,
thereby minimizing the obscuration and weight of the camera, and the
overall size and weight of the system.
We have developed
a design of a 7~m Schmidt telescope that is suitable as an IACT.
Our design consists of simple optical components, is
compact (low $f/D$) and is realistic to implement.
The primary design characteristics of the telescope
are given in table \ref{tel_description.tab}. The optical parameters
of the telescope are listed in table \ref{Schmidt_parameters.tab}.
The classical Schmidt telescope uses a spherical mirror and
an aspheric refractive corrector plate, normally referred to as the
'Schmidt corrector', which is located at the centre of curvature of
the mirror. The Schmidt telescope has a curved focal plane, which is
confocal with
the mirror. The entrance pupil (the stop) is located at the Schmidt corrector.
In order to accept light without vignetting from directions that are
relatively far away from the optical axis, the mirror must be
somewhat larger than the Schmidt corrector.
Thus, for a given incidence angle, only a part of the mirror,
equivalent to the size of the Schmidt corrector, is used.
The Schmidt corrector pre-deforms the impinging
wavefront so that after reflection on the spherical mirror,
it is free of spherical aberration. As the only 'on-axis' aberration of
a spherical mirror is spherical aberration, the Schmidt telescope
is therefore nominally aberration free at the wavelength for which
the Schmidt corrector is optimized. At an off-axis field position,
the Schmidt corrector makes some angle with the chief ray,
the chief ray being defined as the ray that passes through the centre of
the entrance pupil, i.e. the centre of the Schmidt corrector.
Thus, while the spherical mirror appears the same for any field position,
the Schmidt corrector will be at an angle to the beam for an off-axis field
position. It will therefore not correct perfectly for spherical aberration
at off-axis field positions. However, because the Schmidt corrector
is a thin plate, aberrations increase very slowly with the field angle.
We see that the Schmidt telescope is free of 3rd order coma and astigmatism,
exactly because the entrance pupil/corrector is placed
at the centre of curvature of the spherical mirror.
A simplified version of the Schmidt-type telescope is used by the
AUGER collaboration for their air fluorescence telescopes: the
corrector plate is replaced by an aperture diaphragm, combined with
an annular Schmidt corrector \cite{klages}.
This aperture eliminates the 3rd order coma. The remaining spherical
aberration is acceptable for the given $f/D$, and satisfactory
for the requirements of fluorescence telescopes.
\section{The layout of an IACT Schmidt telescope}
In a Schmidt telescope,
the corrector is a very weak aspheric transparent optical element.
Because the corrector is weak, chromatic effects are moderate
for a telescope with $f/D$ $\approx$ 1.
\begin{figure}[t]
\begin{center}
\includegraphics[totalheight=10cm]{Mirzoyan_Andersen_fig1.eps}
\end{center}
\caption{\it
Layout of the Schmidt-type IACT. The mirror and the focal plane have
their centre of curvature at the centre of the corrector plate.
In the insert in the upper left corner, the Schmidt corrector is shown
with the aspheric shape magnified by a factor of 20. Both the nominal
corrector (shaded line) and a Fresnel version are shown.
\label{Mirzoyan_Andersen_fig1} }
\end{figure}
We have developed a specific Schmidt type design, optimizing it
for the use as wide FoV IACT (Fig. \ref{Mirzoyan_Andersen_fig1}),
using the ZEMAX optics design software.
The design has an F-ratio of 0.80, an entrance aperture
of 7~m, a total length of 11.2~m and a FoV of $15^\circ$ diameter, with
a polychromatic image quality that is well below $1^{'}$ RMS radius
across the entire field. This is achieved with a corrector of acrylic
plastic and a weakly aspheric mirror.
\begin{table}[h]
\caption{IACT Schmidt telescope primary design parameters.}
\vspace{4mm}
\begin{center}
\begin{tabular}{l l} \hline
Diameter & ~~~7.0~m \\
F-ratio & ~~~0.8 \\
Focal length & ~~~5.6~m \\
Field of View & ~15$^\circ$ \\
Resolution (RMS) & $<$~1{\mbox{$^{\prime}$}} \\
An-isochronicity & $\leq$ 0.03~ns \\
\hline
\end{tabular}
\vspace{2mm}
\label{tel_description.tab}
\end{center}
\end{table}
\begin{table}[h]
\begin{center}
\caption[]{Optical parameters of a 7~m F/0.8 15$^\circ$ FoV Schmidt telescope.}
\vspace{4mm}
\begin{tabular}{l c c c c c} \hline
Component & Radius of & Axial &Diameter& 4th order & 6th order
\vspace{-2mm} \\
& curvature & spacing& & aspheric & aspheric
\vspace{-2mm}\\
& $[$m$]$ &$[$m$]$ &$[$m$]$&$[$m$^{-3}]$&$[$m$^{-5}]$\\
\hline
Corrector \vspace{-2mm} \\
~~surface 1 &$\infty$ & \multicolumn{1}{r}{0.0250}~ & 7.00 &\vspace{-3mm} \\
~~surface 2 & \multicolumn{1}{r}{-103.00}~ & \multicolumn{1}{r}{10.7600}~ & 7.00 & 3.20$\times 10^{-4}$& ~~6.75$\times 10^{-6}$ \vspace{2mm} \\
Mirror & \multicolumn{1}{r}{-10.88}~ & \multicolumn{1}{r}{-5.2885}~ & 9.95 & 8.30$\times 10^{-7}$& -8.50$\times 10^{-8}$\\
Camera & \multicolumn{1}{r}{-5.60}~ & & 1.47 & & \\
\hline
\end{tabular}
\vspace{-4mm}
\label{Schmidt_parameters.tab}
\end{center}
\vspace{4mm}
\end{table}
The optical parameters corresponding to the design are given in
table \ref{Schmidt_parameters.tab}.
Each row of this table fully specifies an optical surface in the telescope.
The first column identifies the component. The refractive corrector naturally has two
surfaces, while the mirror and focal plane are single surfaces.
The second column specifies the radius of curvature of the surface,
with a negative sign signifying that the center of curvature is located
towards the object.
The third column gives the distance between the vertices of the current
surface and the next surface, i.e., the spacing of surfaces along the
optical axis.
The 4th column gives the diameter (clear aperture) of the surface.
The 5th and 6th columns give the 4th and 6th order polynomial
coefficients of the corresponding aspheric surfaces.
The aspheric deformation of the mirror is very weak and can be obtained
by optimally tilting segments of a segmented
spherical mirror, as long as the segment size is not more than about 60~cm.
The nominal isochronous distortion is less than 0.01 ns anywhere in the field.
Spot diagrams are shown in Fig. \ref{Mirzoyan_Andersen_fig2} and the
corresponding RMS spot sizes vs FoV angle are given in Fig.
\ref{Mirzoyan_Andersen_fig3}. We note that good imaging characteristics
are naturally linked to low isochronous distortion through Fermat's principle.
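Several of the tabulated values can be cross-checked from the primary parameters alone; a sketch of the arithmetic (our own, thin-element approximation, using only the entries of tables \ref{tel_description.tab} and \ref{Schmidt_parameters.tab}):

```python
F, D, theta = 0.8, 7.0, 15.0       # F-ratio, aperture (m), FoV (deg), from table 1

f = F * D                          # focal length: 5.6 m, as listed
plate_scale = 57.3 / f             # deg/m
d_cam = theta / plate_scale        # camera diameter: ~1.47 m, as in table 2
camera_vignetting = (d_cam / D) ** 2   # ~4.4 %, consistent with the <5 % quoted in the text

print(round(f, 1), round(d_cam, 2), round(100 * camera_vignetting, 1))
```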
\begin{figure}[h!]
\begin{center}
\includegraphics[totalheight=7cm]{Mirzoyan_Andersen_fig2.eps}
\end{center}
\caption{\it
Spot diagram for 6 field positions, from on-axis to $7.5^\circ$.
The box size is 5 arcmin. \label{Mirzoyan_Andersen_fig2} }
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[totalheight=7cm]{Mirzoyan_Andersen_fig3.eps}
\end{center}
\caption{\it
RMS spot size vs. FoV angle. The solid line is for polychromatic light.
\label{Mirzoyan_Andersen_fig3} }
\end{figure}
The physical length of a Schmidt telescope is twice its focal length,
in our design equivalent to that of an F/1.6 prime focus telescope, which
falls far short of delivering a comparable FoV
with a similar image quality and isochronicity. Because of the very
fast F-ratio, the camera has a diameter of less than 1.5~m, despite the
large FoV. The resulting vignetting is less than 5~\%.
If the entire $15^\circ$ FoV is to be fully illuminated
by the light passing through the Schmidt corrector, the mirror
must have a diameter of 9.95~m, implying that only 50\% of the mirror
surface is ``actively used'' to observe a given point in the sky. By allowing
for some vignetting, the mirror diameter can be reduced down to 7~m.
This is illustrated in Fig. \ref{Mirzoyan_Andersen_fig4}.
A useful compromise could be a mirror with a diameter of 8~m,
which would result in 13--14\% vignetting at the very edge of the field.
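The quoted mirror diameter can be estimated geometrically: an edge-of-field beam of the corrector's diameter is displaced on the mirror by roughly the corrector-to-mirror spacing times the field half-angle. A rough sketch (our own estimate, using the 10.76~m spacing from table \ref{Schmidt_parameters.tab}; the small discrepancy with respect to 9.95~m reflects the exact ray tracing):

```python
import math

D_corr = 7.0          # corrector (entrance pupil) diameter, m
L = 10.76             # corrector-to-mirror axial spacing, m (table 2)
half_field = math.radians(15.0 / 2)

# beam footprint shifted by ~L*tan(theta/2) on each side of the mirror
D_mirror = D_corr + 2 * L * math.tan(half_field)   # ~9.8 m, close to the 9.95 m design value
used_fraction = (D_corr / 9.95) ** 2               # ~50 % of the mirror serves one field point

print(round(D_mirror, 2), round(used_fraction, 2))
```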
\begin{figure}[h!]
\begin{center}
\includegraphics[totalheight=7cm]{Mirzoyan_Andersen_fig4.eps}
\end{center}
\caption{\it Vignetting as a function of mirror size.
\label{Mirzoyan_Andersen_fig4} }
\end{figure}
\section{The practical implementation of a Schmidt IACT}
The mirror of our baseline design is
essentially the same as those
already implemented
in large Cherenkov telescopes, such as for example MAGIC,
H.E.S.S. or VERITAS, except that
the specification of the alignment of
the mirror segments is a factor of two to three tighter,
in order to match the nominal performance of the design. Our Schmidt
design is ideally suited for implementing an auto-collimation system
for closed loop control of the mirror alignment. If a light source
is located at the centre of curvature of the mirror, which is by
design also the vertex of the corrector plate, light reflected from
the mirror should return to this point. Any deviation from this
signifies a deviation from the nominal shape of the mirror. It is
straightforward to construct a device with a single light source and a
single camera, which will be able to monitor all mirror segments
that are not obscured by the camera (about 88\% of the segments),
in real time, ensuring that the high resolution of the telescope
can be maintained under all conditions.
The Schmidt corrector is very forgiving with respect to misalignment.
The centre of the corrector plate should nominally be located at
the centre of curvature of the mirror, and it should be perpendicular
to the optical axis. The given design allows for a shift along the optical
axis of 90~mm and a decenter of 10~mm,
without increasing the spot size beyond $1^{'}$ RMS anywhere in the field.
This is illustrated in Fig. \ref{Mirzoyan_Andersen_fig5}, where the
consequence of a focus offset is also shown. Tilts of the corrector by
more than one degree are required, before the spot size increases to beyond
$1^{'}$ RMS.
The Schmidt corrector does therefore not pose any new stringent demands on
alignment. The main challenge, with respect to alignment, remains the correct
focussing of the camera across the wide FoV.
\begin{figure}[h!]
\begin{center}
\includegraphics[totalheight=7cm]{Mirzoyan_Andersen_fig5.eps}
\end{center}
\caption{\it The maximum RMS spot size over the field of view, as a function of
corrector radial displacement, longitudinal displacement and focus offset.
\label{Mirzoyan_Andersen_fig5} }
\end{figure}
The corrector plate is an element which has not yet been implemented
on a scale comparable to what is required for the optical system discussed
here. The corrector plate has a maximum thickness of 17~mm, which would
imply significant attenuation of the UV radiation. The most practical
solution is to implement the corrector as a Fresnel-like
lens, whereby
the thickness of the acrylic corrector can be minimized. Specifically, we
consider bonding acrylic wedge segments onto the downwards facing surface of a
substrate of 5~mm thick Borofloat sheets.
This would allow for good UV transparency, even below 330~nm.
Both acrylic plastic and Borofloat are inexpensive materials
which are produced on an industrial scale in large dimensions.
We note that the use of a Fresnel-like lens implies
increased isochronous distortion, to a level of 0.03 ns, i.e., a level
which is still very acceptable. Because the aspheric corrector lens is
very weak, implementing it as a Fresnel lens implies vignetting and scattering
of light on a level well below 0.1\%, even at the edge of the field.
Thus, the use of a Fresnel lens for the corrector plate does not have
disadvantages that affect the performance of the telescope on a measurable
scale.
The large Fresnel Schmidt corrector should be implemented as a segmented lens,
where
segments of a size of $\sim 50$~cm can be held in a
spider's-web-like mesh made from a light-weight material. This
mesh will further introduce some vignetting, at the level of a few percent.
Because the position of the Schmidt corrector is very forgiving, there
is no need for an active control of the segmented corrector.
The dominant aberration in the telescope is chromatic aberration in the
corrector. For the field size and F-ratio used here, this has no practical
importance. It is possible to increase the field of view, and/or use a faster
design, by introducing an achromatic corrector plate. This could be done
by using a combination of polystyrene and acrylic plastic materials.
The main challenge would lie in maintaining a high UV transmittance.
With an achromatic corrector plate, a FoV of up to 25 degrees would become
feasible.
\section{A short discussion on the cost of a Wide FoV telescope}
The telescope suggested above has $1^{'}$ resolution anywhere
in the $15^\circ$ wide FoV. The physical size of $1^{'}$ corresponds
to $\sim 1.6~mm$ in the focal plane. This hints
at the possible size of the light sensors that can be used in such
a high resolution system. The currently operating or planned IACTs
use $\sim 10^{3}$ pixels in their imaging cameras of a few
degree aperture. The 28~m diameter H.E.S.S.~II telescope will have
2048 pixels in its $3.2^\circ$ FoV
camera \cite{HESS-II}.
The wide FoV telescope may need,
depending on the pixel size and selected FoV, about two orders
of magnitude more pixels for the camera. If the light sensor element
(that may include a light collector) size is about $5~mm$ one will
need $7\times 10^{4}$ pixels, and if the
light sensor element size is $10~mm$ one will need
$\sim 1.8 \times 10^{4}$ pixels for covering a $15^\circ$ wide FoV.
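These pixel counts follow from the plate scale of the F/0.8 design; a sketch of the arithmetic (our own, assuming square pixels tiling a circular focal surface; the small differences from the round numbers in the text come from this packing assumption):

```python
import math

f = 5.6                                            # focal length, m
mm_per_arcmin = f * math.radians(1 / 60) * 1000    # ~1.6 mm: physical size of 1' in the focal plane

d_cam = 15.0 / (57.3 / f)                          # camera diameter for a 15 deg FoV, ~1.47 m

def n_pixels(pix_mm):
    # number of pixels of linear size pix_mm filling the circular focal surface
    return math.pi / 4 * (d_cam * 1000 / pix_mm) ** 2

print(round(mm_per_arcmin, 2))    # ~1.63
print(round(n_pixels(5)))         # ~7e4 pixels for 5 mm sensor elements
print(round(n_pixels(10)))        # ~1.7e4 pixels for 10 mm sensor elements
```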
Usually the readout and the camera of a telescope are
the most expensive items in the total cost. Therefore it will
be mandatory to look for cost-efficient light sensors and a
readout system. Multi-channel bialkali PMTs or
UV enhanced SiPMs can be used as light sensors.
The relatively high cost of a wide FoV telescope can be seen as
compensated by the fact that one will need only one mechanical mount
for covering a huge area in the sky.
\section{Summary and Conclusions}
We have presented the principal design of a 7~m wide FoV IACT,
which has excellent
imaging characteristics over a $15^\circ$ field diameter.
The basic design is that of a Schmidt type telescope, F/0.8,
i.e., comparatively fast.
This design allows one to obtain an optical
spot size of $1^{'}$ RMS everywhere in the field, with an isochronous distortion
below 0.03~ns in case a Fresnel lens is used as a corrector.
It is straightforward to scale this design to larger apertures. The only
aspect which changes by scaling is the isochronous distortion. For a 20~m
diameter telescope, the isochronous distortion amounts to 0.06~ns.
The limiting factor in the baseline design proposed here is chromatic
aberration in the corrector plate. This can be overcome by implementing
an achromatic corrector plate. The main challenge will however lie in
filling the focal plane with detectors which would fully utilize the
resolution provided by the telescope.
\section{Acknowledgements}
We are grateful to Ms. S. Rodriguez for critically reading this
manuscript. We thank the anonymous referee for providing comments
which have greatly helped to clarify the paper.
\end{document}
\section{Introduction}
\thispagestyle{empty}
\let\thefootnote\relax\footnotetext{This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 754475). The author also acknowledges the support of ISF grant 871/17.}
Let $\bar{X}_{d-1,d}$ be the space of rank-$\left(d-1\right)$ discrete
subgroups of $\mathbb{R}^{d}$ and let $X_{d-1,d}$ be the space of their
homothety classes, that is
\[
X_{d-1,d}\overset{\text{def}}{=}\bar{X}_{d-1,d}\diagup\sim,
\]
where,
\[
\Lambda\sim\alpha\cdot\Lambda,\ \alpha\in\mathbb{R}^{\times}.
\]
For $\Lambda\in\bar{X}_{d-1,d}$, we will denote by $\left[\Lambda\right]$
its image in $X_{d-1,d}$, namely
\[
\left[\Lambda\right]\overset{\text{def}}{=}\left\{ \alpha\cdot\Lambda\mid\alpha\in\mathbb{R}^{\times}\right\} .
\]
The\emph{ covolume} function, denoted by
\[
\text{cov}:\ \bar{X}_{d-1,d}\to\mathbb{R}_{+},
\]
is the function that assigns to each $\Lambda\in\bar{X}_{d-1,d}$
the $\left(d-1\right)$-volume of a fundamental parallelotope. It
is explicitly given by
\begin{equation}
\text{cov}(\Lambda)=\sqrt{\det b^{t}b},\ \Lambda\in\bar{X}_{d-1,d},\label{eq:covol def}
\end{equation}
where $b\in M_{d\times d-1}(\mathbb{R})$ is a matrix whose columns form a
$\mathbb{Z}$-basis for $\Lambda$. Let $\primlatof{d-1}\subseteq\bar{X}_{d-1,d}$
be the set of all subgroups that arise as the intersection of $\mathbb{Z}^{d}$
with a rational hyperplane. Of particular interest to us are the subsets
of $X_{d-1,d}$ defined by
\[
\primlatof{d-1}(T)\overset{\text{def}}{=}\left\{ \left[\Lambda\right]\in X_{d-1,d}\mid\Lambda\in\primlatof{d-1},\ \text{cov}(\Lambda)=T\right\} ,\ T\in\mathbb{R}^{\times}.
\]
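The covolume formula \eqref{eq:covol def} is easy to evaluate numerically. For instance, for the rank-$2$ subgroup $\mathbb{Z}^{3}\cap\{x\mid x_{1}+x_{2}+x_{3}=0\}$ one obtains $\sqrt{3}$, the norm of the primitive normal vector $(1,1,1)$; a sketch with NumPy (our own code, for illustration only):

```python
import numpy as np

def cov(b):
    """Covolume of the discrete subgroup spanned by the columns of b (d x (d-1))."""
    return np.sqrt(np.linalg.det(b.T @ b))

# Z-basis (1,-1,0), (0,1,-1) of Z^3 intersected with the hyperplane x1 + x2 + x3 = 0
b = np.array([[1.0, 0.0],
              [-1.0, 1.0],
              [0.0, -1.0]])
print(cov(b))  # sqrt(3) ~ 1.732, the norm of (1, 1, 1)
```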
The sets $\primlatof{d-1}(T),$ $T\in\mathbb{R}^{\times}$ are finite (see
e.g. Lemma \ref{lem:bijection orthogonal vectors}) and we denote
by $\mu_{T},$ $T\in\mathbb{R}^{\times}$, the uniform probability counting
measures on them. In this note we prove that certain subsequences
of $\left\{ \mu_{T}\right\} $ converge to a probability measure which
we denote by $\mu_{\text{polar}}$. The measure $\mu_{\text{polar}}$
may be described through disintegration (see e.g. \cite{sergant_shapira})
as follows. Let $\text{Gr}_{d-1}(\mathbb{R}^{d})$ be the space of hyperplanes in
$\mathbb{R}^{d}$, and for $\mathcal{P}\in\text{Gr}_{d-1}(\mathbb{R}^{d})$ let
\begin{equation}
X_{\mathcal{P}}\overset{\text{def}}{=}\left\{ \left[\Lambda\right]\in X_{d-1,d}\mid\Lambda\subseteq\mathcal{P}\right\} .\label{eq:X_p}
\end{equation}
A choice of a linear isomorphism between $\mathcal{P}$ and $\mathbb{R}^{d-1}$
gives an identification of $X_{\mathcal{P}}$ with $X_{d-1}=\text{PGL}{}_{d-1}(\mathbb{R})\diagup\text{PGL}{}_{d-1}(\mathbb{Z})$.
Through this identification, the $\text{PGL}{}_{d-1}(\mathbb{R})$-invariant
probability measure $\mu_{X_{d-1}}$ may be pushed to a measure on
$X_{\mathcal{P}}.$ It turns out that the latter push-forward to $X_{\mathcal{P}}$
is independent of the chosen isomorphism (see e.g. part 3 of Lemma
\ref{lem:horizontal lattices identification with =00005BX_d-1=00005D}),
hence this process defines a measure $\mu_{\mathcal{P}}$ on $X_{\mathcal{P}}$.
Let $\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}$ be the $\text{SO}{}_{d}(\mathbb{R})$-invariant
probability measure on $\text{Gr}_{d-1}(\mathbb{R}^{d})$. We define
\[
\mu_{\text{polar}}\overset{\text{def}}{=}\int_{\text{Gr}_{d-1}(\mathbb{R}^{d})}\mu_{\mathcal{P}}\ d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(\mathcal{P}).
\]
For a prime $p$, let
\begin{equation}
\mathbb{D}(p)\overset{\text{def}}{=}\left\{ m\in\mathbb{N}\mid p\nmid m\right\} .\label{eq:D(P)}
\end{equation}
We prove
\begin{thm}
\label{thm:maintheorem} The convergence
\[
\mu_{T_{j}}\overset{\text{weak * }}{\longrightarrow}\mu_{\text{polar}},
\]
holds for:
\begin{enumerate}
\item $d=4$, and $\left\{ T_{j}^{2}\right\} \subseteq\mathbb{D}(p)/8\mathbb{N}$,
for any odd prime $p$.
\item $d=5$, and $\left\{ T_{j}^{2}\right\} \subseteq\mathbb{D}(p)$, for
any odd prime $p$.
\item $d>5$, and $\left\{ T_{j}^{2}\right\} \subseteq\mathbb{N}$.
\end{enumerate}
\end{thm}
\begin{rem*}
It should be possible to prove a version of Theorem \ref{thm:maintheorem}
for $d=3$ by relying on the work of \cite{AES3d}. Also, it seems
that the unnecessary congruence conditions in dimensions $4$ and
$5$ can be removed and an effective estimate on the convergence can
be obtained by exploiting a theorem of Einsiedler, R{\"u}hr and Wirth
found in \cite{Effective_aes_Ruhr}. In order to do so, one should
replace Theorem \ref{thm:AESgrids thm} (of \cite{AESgrids}) stated
in this note, with the corresponding theorems of \cite{AES3d} and
\cite{Effective_aes_Ruhr} and go along the lines of sections \ref{sec:The-p-adic-factory}
and \ref{sec:proof of the equidisitrbution}.
\end{rem*}
\subsection{Background for Theorem \ref{thm:maintheorem} }
W. Schmidt in \cite{Schmidt1998,Schmidt2015} computed the distribution
of the homothety classes of the $\left(d-1\right)$-integral subgroups
through the filtration
\[
\left\{ \left[\Lambda\right]\in X_{d-1,d}\mid\Lambda\in\primlatof{d-1},\ \ \text{cov}(\Lambda)\leq T\right\} ,\ \text{as }T\to\infty.
\]
Hence, Theorem \ref{thm:maintheorem} should be viewed as a sparse
version of Schmidt's result. Later, Aka, Einsiedler and Shapira in
\cite{AESgrids,AES3d} computed the limiting distribution of the image
of the sets $\primlatof{d-1}(T)$ in the space $\text{Gr}_{d-1}(\mathbb{R}^{d})\times\text{O}_{d}(\mathbb{R})\backslash X_{d-1,d}$.
Since there is a natural surjection $\pi:X_{d-1,d}\to\text{Gr}_{d-1}(\mathbb{R}^{d})\times\text{O}_{d}(\mathbb{R})\backslash X_{d-1,d}$,
Theorem \ref{thm:maintheorem} implies the main result of \cite{AESgrids}.
We also note that the type of problem considered here may be viewed
as a natural generalization of the problem considered by Y. Linnik
regarding the equidistribution of the projection to the unit sphere
of primitive integer vectors on large spheres (see \cite{Lin68},
and also \cite{elenberg_michel_venaktesh} for a modern review).
\subsection{Organization of the note and proof ideas}
We provide a novel interpretation of $X_{d-1,d}$ as a double coset
space (see Proposition \ref{prop:Tthe polar coordinates}) which allows
us to use the methods and results of \cite{AESgrids} in order to prove
Theorem \ref{thm:maintheorem}. In overview, the method of \cite{AESgrids}
allows us to interpret the sets $\primlatof{d-1}(T)$ as compact orbits
in an S-arithmetic space and to relate their natural measure to the
measures $\mu_{T}$. A key theorem of \cite{AESgrids} states that
those orbits equidistribute, which eventually allows us to deduce Theorem
\ref{thm:maintheorem}. The organization of the note is as follows.
\begin{itemize}
\item In Section \ref{sec:Polar-coordinates-on} we describe $X_{d-1,d}$
as a coset space, and as a double coset space. We also discuss the
measure $\mu_{\text{polar }}$ in detail.
\item In Section \ref{sec:The-p-adic-factory} we discuss the method that
is used to ``generate'' elements of $\primlatof{d-1}$ by the $p$-adics.
\item In Section \ref{sec:proof of the equidisitrbution} we discuss the
resulting measures and conclude the proof.
\end{itemize}
\section{\label{sec:Polar-coordinates-on} $X_{d-1,d}$ as a Homogenous space
and its polar coordinates}
\subsection{The transitive action of $\text{SL}{}_{d}(\protect\mathbb{R})$}
The group $\text{SL}{}_{d}(\mathbb{R})$ acts from the left on $X_{d-1,d}$
by
\[
g\cdot[\Lambda]=[g\Lambda],\ g\in\text{SL}{}_{d}(\mathbb{R}),\ [\Lambda]\in X_{d-1,d},
\]
where
\[
g\Lambda=\left\{ gv\mid v\in\Lambda\right\} .
\]
Let $e_{i},$ $i\in\{1,..,d\}$ be the standard basis vectors of $\mathbb{R}^{d}$
and note that for $g\in\text{SL}{}_{d}(\mathbb{R})$, the set $\left\{ ge_{1},..,ge_{d-1}\right\} $
consists of the first $\left(d-1\right)$ columns of $g$. Since basis
vectors of any $\Lambda\in\bar{X}_{d-1,d}$ can be taken to be the first
$\left(d-1\right)$ columns of some $\text{SL}{}_{d}(\mathbb{R})$ matrix,
we deduce that the $\text{SL}{}_{d}(\mathbb{R})$ orbit of
\[
x_{0}\overset{\text{def}}{=}\left[\text{span }_{\mathbb{Z}}\{e_{1},..,e_{d-1}\}\right]
\]
equals $X_{d-1,d}$. A computation shows that
\[
Q_{d-1,d}\overset{\text{def}}{=}\left\{ \left(\begin{array}{cc}
\lambda\gamma & *\\
0_{1\times d-1} & 1/\det(\lambda\gamma)
\end{array}\right)\mid\lambda\in\mathbb{R}^{\times},\gamma\in\text{GL}{}_{d-1}(\mathbb{Z})\right\}
\]
is the stabilizer of $x_{0}$. Therefore, we get the identification
\[
X_{d-1,d}=\text{SL}{}_{d}(\mathbb{R})\diagup Q_{d-1,d}.
\]
\begin{rem*}
In terms of the coset space, the collection $\left\{ \left[\Lambda\right]\in X_{d-1,d}\mid\Lambda\in\mathbb{Z}_{\text{prim}}^{d-1,d}\right\} $
is identified with the orbit $\text{SL}{}_{d}(\mathbb{Z})x_{0}$.
\end{rem*}
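As a quick numerical sanity check (our own code; the matrix entries are chosen arbitrarily), one can verify that an element of $Q_{d-1,d}$ fixes $x_{0}$: its first $d-1$ columns have vanishing last coordinate and, after dividing by $\lambda$, form an integer matrix of determinant $\pm1$, so they span $\lambda\cdot\text{span}_{\mathbb{Z}}\{e_{1},..,e_{d-1}\}$:

```python
import numpy as np

lam = 2.0
gamma = np.array([[1, 1],
                  [0, 1]])             # an element of GL_2(Z), det = 1
g = np.zeros((3, 3))
g[:2, :2] = lam * gamma
g[:2, 2] = [0.7, -1.3]                 # the unconstrained * block
g[2, 2] = 1 / np.linalg.det(lam * gamma)

assert abs(np.linalg.det(g) - 1) < 1e-9           # g lies in SL_3(R)
cols = g[:, :2]                                    # images of e_1, e_2
assert np.allclose(cols[2, :], 0)                  # stay in the hyperplane x_3 = 0
assert np.allclose(cols[:2, :] / lam, np.round(cols[:2, :] / lam))   # integer after scaling
assert abs(abs(np.linalg.det(cols[:2, :] / lam)) - 1) < 1e-9         # a GL_2(Z) change of basis
```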
\subsubsection{The measure $\mu_{\text{polar }}$}
Let $P_{d-1,d}$ be the parabolic group,
\[
P_{d-1,d}\overset{\text{def}}{=}\left\{ \left(\begin{array}{cc}
m & *\\
0_{1\times d-1} & 1/\det m
\end{array}\right)\mid m\in\text{GL}{}_{d-1}(\mathbb{R})\right\} .
\]
\begin{lem}
\label{lem:horizontal lattices identification with =00005BX_d-1=00005D}
Let \emph{$g\in\text{SL}{}_{d}(\mathbb{R})$}, and let $\mathcal{P}=\text{span}_{\mathbb{R}}\{ge_{1},..,ge_{d-1}\}$.
Then:
\begin{enumerate}
\item \label{enu:lattices in hyperplane 1} The subset $X_{\mathcal{P}}$
\emph{(}see \eqref{eq:X_p}\emph{)} is identified with $gP_{d-1,d}\diagup Q_{d-1,d}$.
\item \label{enu:lattices in hyperplane 2}The map\emph{
\[
\varphi_{g}:\text{PGL}{}_{d-1}(\mathbb{R})\diagup\text{PGL}{}_{d-1}(\mathbb{Z})\to X_{\mathcal{P}},
\]
}which sends\emph{
\[
\mathbb{R}^{\times}m\cdot\text{PGL}{}_{d-1}(\mathbb{Z})\mapsto g\left(\begin{array}{cc}
m & 0_{d-1\times1}\\
0_{1\times d-1} & 1/\det m
\end{array}\right)Q_{d-1,d},
\]
}is a homeomorphism.
\item \label{enu:lattices in hyperplane 3}Assume that $gP_{d-1,d}=g'P_{d-1,d}$.
Then,
\begin{equation}
\left(\varphi_{g}\right)_{*}\mu_{X_{d-1}}=\left(\varphi_{g'}\right)_{*}\mu_{X_{d-1}},\label{eq:pushforwards equal}
\end{equation}
where $\mu_{X_{d-1}}$ is the \emph{$\text{PGL}{}_{d-1}(\mathbb{R})$}-invariant
probability measure on $X_{d-1}$.
\end{enumerate}
\end{lem}
\begin{proof}
We observe that
\[
\left\{ g\in\text{SL}_{d}(\mathbb{R})\mid\mathcal{P}=\text{span}_{\mathbb{R}}\{ge_{1},..,ge_{d-1}\}\right\} =gP_{d-1,d},
\]
hence \eqref{enu:lattices in hyperplane 1} follows. Next, in order
to prove \eqref{enu:lattices in hyperplane 2}, we note that the map
\[
\phi_{1}:\text{PGL}{}_{d-1}(\mathbb{R})\diagup\text{PGL}{}_{d-1}(\mathbb{Z})\to P_{d-1,d}\diagup Q_{d-1,d},
\]
that sends,
\[
\mathbb{R}^{\times}m\cdot\text{PGL}{}_{d-1}(\mathbb{Z})\mapsto\left(\begin{array}{cc}
m & 0_{d-1\times1}\\
0_{1\times d-1} & 1/\det m
\end{array}\right)Q_{d-1,d},\ \ m\in\text{GL}{}_{d-1}(\mathbb{R}),
\]
is a homeomorphism, and the map
\[
\phi_{2}:P_{d-1,d}\diagup Q_{d-1,d}\to gP_{d-1,d}\diagup Q_{d-1,d},
\]
defined by multiplication from the left by $g,$ is also a homeomorphism.
Hence $\phi_{2}\circ\phi_{1}$ is a homeomorphism, which proves \eqref{enu:lattices in hyperplane 2}.
Finally, we prove \eqref{enu:lattices in hyperplane 3}. We let $g,g'\in\text{SL}{}_{d}(\mathbb{R})$
be such that
\[
g'=gp,
\]
for some $p\in P_{d-1,d}$, where $p=\left(\begin{array}{cc}
m_{p} & v\\
0_{1\times d-1} & 1/\det m_{p}
\end{array}\right)$. Then, a short calculation shows that
\begin{equation}
\varphi_{g'}\left(\mathbb{R}^{\times}m\text{PGL}{}_{d-1}(\mathbb{Z})\right)=\varphi_{g}\left(\mathbb{R}^{\times}m_{p}m\text{PGL}{}_{d-1}(\mathbb{Z})\right),\label{eq:ef_g and ef_g_tilde}
\end{equation}
hence, since $\mu_{X_{d-1}}$ is $\text{PGL}{}_{d-1}(\mathbb{R})$-invariant,
we obtain \eqref{eq:pushforwards equal}.
\end{proof}
\begin{rem*}
In the rest, we shall abuse notation and denote the measure $\left(\varphi_{id}\right)_{*}\mu_{X_{d-1}}$
on $P_{d-1,d}\diagup Q_{d-1,d}$ by $\mu_{X_{d-1}}$.
\end{rem*}
We denote
\[
K_{d-1,d}^{\pm}\overset{\text{def}}{=}P_{d-1,d}\cap\text{SO}{}_{d}(\mathbb{R}),
\]
which is isomorphic to $\text{O}_{d-1}(\mathbb{R})$, and identify $\text{Gr}_{d-1}(\mathbb{R}^{d})$
with $K_{d-1,d}^{\pm}\diagdown\text{SO}{}_{d}(\mathbb{R})$ via the map
\[
K_{d-1,d}^{\pm}\rho\mapsto\text{span}_{\mathbb{R}}\{\rho^{-1}e_{1},..,\rho^{-1}e_{d-1}\}.
\]
\begin{rem*}
To ease the notation, we will omit in the following the indices $d-1,d$
from $P_{d-1,d}$, $Q_{d-1,d}$ and $K_{d-1,d}^{\pm}$.
\end{rem*}
Let $\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}$ be the $\text{SO}{}_{d}(\mathbb{R})$-invariant
probability measure and for $f\in C_{c}(X_{d-1,d})$ we define $\hat{f}\in C_{c}(\text{Gr}_{d-1}(\mathbb{R}^{d}))$
by
\[
\hat{f}(K^{\pm}\rho)\overset{\text{def}}{=}\left(\varphi_{\rho^{-1}}\right)_{*}\mu_{X_{d-1}}(f),
\]
which is well defined by part 3 of Lemma \ref{lem:horizontal lattices identification with =00005BX_d-1=00005D}.
Then, the measure $\mu_{\text{polar }}$ is given by
\begin{equation}
\mu_{\text{polar }}(f)\overset{\text{def}}{=}\int_{\text{Gr}_{d-1}(\mathbb{R}^{d})}\hat{f}(K^{\pm}\rho)\ d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(K^{\pm}\rho).\label{eq:mu polar section 2}
\end{equation}
Note that the description \eqref{eq:mu polar section 2} of $\mu_{\text{polar }}$
yields the same measure defined in the introduction, although stated
slightly differently. We chose this description since it is well suited
to the proof of Lemma \ref{lem:mult push nu polar to mu polar}.
\subsection{Alternative description of $X_{d-1,d}$ via polar coordinates}
Here we shall give a description of the elements of $X_{d-1,d}$ by
their orientation and by their shape, hence the name of \emph{polar
coordinates}. Those coordinates will serve as a bootstrap to the technique
of \cite{AESgrids}.
\subsubsection{The multiplication map}
Let $\Delta K^{\pm}$ be the diagonal embedding of $K^{\pm}$ in $\text{SO}{}_{d}(\mathbb{R})\times P$,
which is defined by
\[
\Delta K^{\pm}\overset{\text{def}}{=}\left\{ (k,k)\mid k\in K_{d-1,d}^{\pm}\right\} \leq\text{SO}{}_{d}(\mathbb{R})\times P.
\]
The following double coset space,
\[
X_{d-1,d}^{\text{polar}}\overset{\text{def}}{=}\Delta K^{\pm}\diagdown\left(\text{SO}{}_{d}(\mathbb{R})\times P\diagup Q\right),
\]
will be shown to be homeomorphic to $X_{d-1,d}$. Consider the map
\[
\mathcal{M}\ :\ X_{d-1,d}^{\text{polar}}\to X_{d-1,d},
\]
defined by
\[
\mathcal{M}\left(\Delta K^{\pm}(\rho,\eta Q)\right)=\rho^{-1}\eta Q.
\]
It is well defined since if $(\rho',\eta'Q)=(k\rho,k\eta Q),$ then
$\rho'^{-1}\eta'Q=\rho^{-1}\eta Q$.
\begin{prop}
\label{prop:Tthe polar coordinates}The map $\mathcal{M}$ is a homeomorphism.
\end{prop}
\begin{proof}
To prove injectivity, we assume that
\[
\mathcal{M}\left(\Delta K^{\pm}(\rho_{1},\eta_{1}Q)\right)=\mathcal{M}\left(\Delta K^{\pm}(\rho_{2},\eta_{2}Q)\right),
\]
which is equivalent to
\[
\rho_{1}^{-1}\eta_{1}q=\rho_{2}^{-1}\eta_{2},
\]
for some $q\in Q$. Then,
\[
\text{SO}{}_{d}(\mathbb{R})\ni\rho_{2}\rho_{1}^{-1}=\eta_{2}q^{-1}\eta_{1}^{-1}\in P,
\]
hence there is a $k\in K^{\pm}$ such that
\[
\rho_{2}\rho_{1}^{-1}=\eta_{2}q^{-1}\eta_{1}^{-1}=k,
\]
which in turn implies
\[
\Delta K^{\pm}(\rho_{1},\eta_{1}Q)=\Delta K^{\pm}(\rho_{2},\eta_{2}Q).
\]
To prove continuity of $\mathcal{M}$, we consider the following commuting
diagram
\[
\xymatrix{\text{SO}{}_{d}(\mathbb{R})\times P\ar[d]\ar[r] & \text{SL}{}_{d}(\mathbb{R})\ar[d]\\
X_{d-1,d}^{\text{polar}}\ar[r]_{\ \ \mathcal{M}} & X_{d-1,d}
}
\]
where the vertical maps are the natural projections, and the horizontal
upper arrow sends
\[
(\rho,\eta)\mapsto\rho^{-1}\eta.
\]
Note that the resulting map from $\text{SO}{}_{d}(\mathbb{R})\times P$ to
$X_{d-1,d}$ is a composition of continuous maps, hence is continuous.
Therefore, by the universal property of the quotient space, $\mathcal{M}$
is continuous. Next, we compute the inverse of $\mathcal{M}$ and show it
is continuous. Let $A\leq\text{SL}{}_{d}(\mathbb{R})$ be the diagonal subgroup
with positive entries, and $N\leq\text{SL}{}_{d}(\mathbb{R})$ be the group
of upper triangular unipotent matrices. The map (Iwasawa decomposition)
\[
\psi:\text{SO}{}_{d}(\mathbb{R})\times A\times N\to\text{SL}{}_{d}(\mathbb{R}),
\]
given by
\[
\psi(\rho,a,n)=\rho an,
\]
is a homeomorphism (see e.g. \cite{Bekka_mayer}, Chapter 5). Consider
the following commuting diagram,
\[
\xymatrix{\text{SO}{}_{d}(\mathbb{R})\times P\ar[d] & \text{SO}{}_{d}(\mathbb{R})\times A\times N\ar_{p\ }[l] & \text{SL}{}_{d}(\mathbb{R})\ar[l]_{\ \ \ \ \ \ \ \ \ \psi^{-1}}\ar[d]\\
X_{d-1,d}^{\text{polar}} & & X_{d-1,d}\ar[ll]
}
\]
where the map $p$ is defined by
\[
p(\rho,a,n)\overset{\text{def}}{=}\left(\rho^{-1},an\right),
\]
and the horizontal maps are the natural projections. The map corresponding
to the lower horizontal arrow sends
\[
\rho anQ\mapsto\Delta K^{\pm}\left(\rho^{-1},anQ\right),
\]
which is clearly an inverse for $\mathcal{M}$. Since the resulting map
from $\text{SL}_{d}(\mathbb{R})$ to $X_{d-1,d}^{\text{polar}}$ is a composition
of continuous maps, we get that it is continuous, hence by the universal
property of the quotient space, $\mathcal{M}^{-1}$ is continuous.
\end{proof}
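\begin{rem*}
To illustrate the Iwasawa decomposition used in the proof, take $d=2$
and $g=\left(\begin{array}{cc}
1 & 0\\
1 & 1
\end{array}\right)$. Applying Gram--Schmidt to the columns of $g$ gives
\[
g=\underbrace{\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
1 & -1\\
1 & 1
\end{array}\right)}_{\rho}\underbrace{\left(\begin{array}{cc}
\sqrt{2} & 0\\
0 & 1/\sqrt{2}
\end{array}\right)}_{a}\underbrace{\left(\begin{array}{cc}
1 & 1/2\\
0 & 1
\end{array}\right)}_{n},
\]
so $\psi^{-1}(g)=(\rho,a,n)$ is computed explicitly by Gram--Schmidt;
this is one way to see that $\psi^{-1}$ is continuous.
\end{rem*}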
\subsubsection{The measure $\mu_{\text{polar }}$ through the polar coordinates}
Consider the map
\[
q_{\Delta K^{\pm}}:\text{SO}{}_{d}(\mathbb{R})\times P\diagup Q\to X_{d-1,d}^{\text{polar}},
\]
that divides from the left by $\Delta K^{\pm}$. We define
\begin{equation}
\nu_{\text{polar}}\overset{\text{def}}{=}\left(q_{\Delta K^{\pm}}\right)_{*}\mu_{\text{SO}{}_{d}(\mathbb{R})}\otimes\mu_{X_{d-1}}.\label{eq:nu_polar definition}
\end{equation}
\begin{lem}
\label{lem:mult push nu polar to mu polar}It holds that $\mathcal{M}_{*}\nu_{\text{polar}}=\mu_{\text{polar}}$.
\end{lem}
\begin{proof}
First, recall that for $\varphi\in C_{c}(\text{SO}{}_{d}(\mathbb{R}))$,
\[
\int_{\text{SO}{}_{d}(\mathbb{R})}\varphi\ d\mu_{\text{SO}{}_{d}(\mathbb{R})}=\int_{\text{Gr}_{d-1}(\mathbb{R}^{d})}\left(\int_{K^{\pm}}\varphi(k\rho)\ d\mu_{K^{\pm}}(k)\right)d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(K^{\pm}\rho).
\]
Hence for $f\in C_{c}(X_{d-1,d}^{\text{polar}})$,
\[
\nu_{\text{polar}}(f)=\int f(q_{\Delta K^{\pm}}(\rho,\eta Q))d\mu_{X_{d-1}}(\eta Q)d\mu_{\text{SO}{}_{d}(\mathbb{R})}(\rho)=
\]
\begin{equation}
\int\left(\int f(q_{\Delta K^{\pm}}(k\rho,\eta Q))d\mu_{X_{d-1}}(\eta Q)d\mu_{K^{\pm}}(k)\right)d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(K^{\pm}\rho).\label{eq:integral x_d-1,K_Gr_d-1}
\end{equation}
Note that
\[
q_{\Delta K^{\pm}}(k\rho,\eta Q)=q_{\Delta K^{\pm}}(\rho,k^{-1}\eta Q),
\]
whence, by \eqref{eq:integral x_d-1,K_Gr_d-1},
\[
\nu_{\text{polar}}(f)=\int\left(\int f(q_{\Delta K^{\pm}}(\rho,k^{-1}\eta Q))d\mu_{X_{d-1}}(\eta Q)\right)d\mu_{K^{\pm}}(k)d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(K^{\pm}\rho).
\]
The measure $\mu_{X_{d-1}}$ is $K^{\pm}$ invariant, so that
\[
\nu_{\text{polar}}(f)=\int\left(\int f(q_{\Delta K^{\pm}}(\rho,\eta Q))d\mu_{X_{d-1}}(\eta Q)\right)d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(K^{\pm}\rho).
\]
Finally, the push-forward by $\mathcal{M}$ gives
\[
\mathcal{M}_{*}\nu_{\text{polar}}(f)=\int\left(\int f(\rho^{-1}\eta Q)d\mu_{X_{d-1}}(\eta Q)\right)d\mu_{\text{Gr}_{d-1}(\mathbb{R}^{d})}(K^{\pm}\rho),
\]
where
\[
\int f(\rho^{-1}\eta Q)d\mu_{X_{d-1}}(\eta Q)=\left(\varphi_{\rho^{-1}}\right)_{*}\mu_{X_{d-1}}(f).
\]
In view of the definition \eqref{eq:mu polar section 2}, the proof
is now done.
\end{proof}
\begin{rem*}
The whole discussion of this section can be adjusted with no trouble
to the spaces $X_{k,d}$ of homothety classes of rank-$k$ discrete
subgroups of $\mathbb{R}^{d}$, for $1\leq k<d$.
\end{rem*}
\section{\label{sec:The-p-adic-factory}The p-adic factory of primitive integral
subgroups }
\subsection{\label{subsec:The-mechanism}The mechanism}
In order to better connect our discussion to the one of \cite{AESgrids},
we shall recall the description of the elements $\Lambda\in\primlatof{d-1}$
with fixed covolume as orthogonal lattices to integer vectors of fixed
norm. Let $\mathbb{Z}_{\text{prim}}^{d}$ be the set of integral primitive
vectors. For a primitive integer vector $v\in\mathbb{Z}_{\text{prim }}^{d}$,
let $v^{\perp}\in Gr_{d-1}(\mathbb{Q}^{d})$ be the orthogonal hyperplane
to $v$. We define the \emph{orthogonal lattice }to $v$ by
\[
\Lambda_{v}\overset{\text{def}}{=}v^{\perp}\cap\mathbb{Z}^{d}.
\]
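For instance, for $d=3$ and the primitive vector $v=(1,2,2)$ we have
\[
\Lambda_{v}=\mathbb{Z}(2,-1,0)+\mathbb{Z}(0,1,-1),
\]
and the Gram matrix $\left(\begin{array}{cc}
5 & -1\\
-1 & 2
\end{array}\right)$ of this basis has determinant $9$, so that $\text{cov}(\Lambda_{v})=3=\left\Vert v\right\Vert $.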
Note that the map that sends $\mathbb{Z}_{\text{prim }}^{d}\ni v\mapsto\Lambda_{v}$
is onto $\primlatof{d-1}$. In addition, let
\[
\mathbb{S}^{d-1}(T)\overset{\text{def}}{=}\left\{ v\in\mathbb{Z}_{\text{prim }}^{d}\mid\left\Vert v\right\Vert =T\right\} .
\]
Then, we have the following bijection.
\begin{lem}
\label{lem:bijection orthogonal vectors}The map
\[
\Lambda_{*}:\mathbb{S}^{d-1}(T)\to\primlatof{d-1}(T)
\]
which sends $v\mapsto\Lambda_{v}$ is a bijection.
\end{lem}
\begin{proof}
See \cite{AESgrids}, introduction.
\end{proof}
Let $v\in\mathbb{Z}_{\text{prim}}^{d}$ and let $H_{v}\leq\text{SO}_{d}$
be the subgroup stabilizing $v$. We define $g_{v}\in\text{SL}{}_{d}(\mathbb{Z})$
to be a matrix whose first $d-1$ columns form a positively oriented
basis for $\Lambda_{v}$. Note that $H_{v}$ and $g_{v}^{-1}H_{v}g_{v}$
are both linear algebraic groups defined over $\mathbb{Q}$, and observe that
$g_{v}^{-1}H_{v}g_{v}\leq\text{ASL}{}_{d-1}$, where
\[
\text{ASL}_{d-1}=\left\{ \left(\begin{array}{cc}
g & *\\
0_{1\times d-1} & 1
\end{array}\right)\mid g\in\text{SL}{}_{d-1}\right\} .
\]
For what follows, we denote by $\mathbb{Q}_{p}$ the field of p-adic numbers
and by $\mathbb{Z}_{p}$ the ring of p-adic integers. Now, recall that $\text{ASL}_{d-1}(\mathbb{Q}_{p})=\text{ASL}_{d-1}(\mathbb{Z}_{p})\text{ASL}_{d-1}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$
(see \cite{AESgrids} Section 6.3) and assume that $h\in H_{v}(\mathbb{Q}_{p})\cap\text{SO}{}_{d}(\mathbb{Z}_{p})\text{SO}{}_{d}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$.
Then, we may write
\begin{equation}
h=c_{1}\gamma_{1},\ \ c_{1}\in\text{SO}{}_{d}(\mathbb{Z}_{p}),\ \gamma_{1}\in\text{SO}_{d}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right),\label{eq:rotation decomposition}
\end{equation}
and
\begin{equation}
g_{v}^{-1}hg_{v}=c_{2}\gamma_{2}^{-1},\ c_{2}\in\text{ASL}_{d-1}(\mathbb{Z}_{p}),\ \gamma_{2}\in\text{ASL}_{d-1}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right).\label{eq:lattice decompostion}
\end{equation}
The following lemma, and its corollary, show the principle which is
used to generate elements $\Lambda\in\primlatof{d-1}$ of a fixed
covolume.
\begin{lem}
\label{lem:genrated vectors and sl_d(Z) matrices}It holds that $\gamma_{1}g_{v}\gamma_{2}\in\text{SL}_{d}(\mathbb{Z})$.
\end{lem}
\begin{proof}
We observe from \eqref{eq:rotation decomposition} and \eqref{eq:lattice decompostion}
that
\[
\text{SL}_{d}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)\ni\gamma_{1}g_{v}\gamma_{2}=c_{1}^{-1}g_{v}c_{2}\in\text{SL}_{d}(\mathbb{Z}_{p}).
\]
Since $\mathbb{Z}_{p}\cap\mathbb{Z}\left[\frac{1}{p}\right]=\mathbb{Z}$, the statement follows.
\end{proof}
\begin{rem*}
Although not explicitly stated in \cite{AESgrids}, the proof of Lemma
\ref{lem:genrated vectors and sl_d(Z) matrices} can be readily deduced
from the proof of Proposition 6.2 of \cite{AESgrids}.
\end{rem*}
\begin{cor}
\label{cor:generated orthogonal lattices} Let $\Lambda$ be the $\mathbb{Z}$-span
of the first $\left(d-1\right)$ columns of $\gamma_{1}g_{v}\gamma_{2}$.
It holds that $\gamma_{1}v\in\mathbb{Z}_{\text{prim}}^{d}$ and $\Lambda=\Lambda_{\gamma_{1}v}$.
Importantly,
\[
\text{cov}(\Lambda)=\text{cov}(\Lambda_{v}).
\]
\end{cor}
\begin{proof}
Since $\gamma_{1}g_{v}\gamma_{2}\in\text{SL}_{d}(\mathbb{Z})$, the basis
of $\Lambda$ can be completed to a basis of $\mathbb{Z}^{d}$, which implies
that $\Lambda\in\primlatof{d-1}$. Next, a computation that uses \eqref{eq:covol def}
shows
\begin{equation}
\text{cov}(\Lambda)=\text{cov}(\Lambda_{v}).\label{eq:cov ofz-span of d-1 coloms of gamma_1g_vgamma_2 equal to lambda_v}
\end{equation}
Now observe that $\Lambda\subseteq\left(\gamma_{1}v\right)^{\perp}$. Hence by
Lemma \ref{lem:bijection orthogonal vectors} and \eqref{eq:cov ofz-span of d-1 coloms of gamma_1g_vgamma_2 equal to lambda_v}
we deduce $\gamma_{1}v\in\mathbb{Z}_{\text{prim}}^{d}$ and $\Lambda=\Lambda_{\gamma_{1}v}$.
\end{proof}
\subsection{The S-arithmetic orbits and their projection to the reals}
To ease the notation, we introduce
\[
\mathbb{G}_{1}\overset{\text{def}}{=}\text{SO}{}_{d},\ \mathbb{G}_{2}\overset{\text{def}}{=}\text{ASL}{}_{d-1},\ \mathbb{G}\overset{\text{def}}{=}\mathbb{G}_{1}\times\mathbb{G}_{2}.
\]
For an odd prime $p$ let
\[
\mathcal{Y}_{p}\overset{\text{def}}{=}\mathbb{G}(\mathbb{R}\times\mathbb{Q}_{p})/\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right),
\]
where by $\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$ we mean the
diagonal embedding of each $\mathbb{G}_{i}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$
factor into $\mathbb{G}_{i}(\mathbb{R}\times\mathbb{Q}_{p}).$ Consider the set
\[
\mathcal{U}\overset{\text{def}}{=}\mathbb{G}(\mathbb{R}\times\mathbb{Z}_{p})\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)\subseteq\mathcal{Y}_{p}.
\]
We now recall the (well known) construction of the projection to the
real coordinate. If
\[
\left((g_{1,\infty},g_{1,p}),(g_{2,\infty},g_{2,p})\right)\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)\in\mathcal{U},
\]
then we may write for $i\in\{1,2\}$,
\[
g_{i,p}=c_{i,p}\gamma_{i,p},\ \ c_{i,p}\in\mathbb{G}_{i}\left(\mathbb{Z}_{p}\right),\ \gamma_{i,p}\in\mathbb{G}_{i}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right).
\]
Then, the map $q_{\infty}:\mathcal{U}\to\mathbb{G}(\mathbb{R})/\mathbb{G}(\mathbb{Z})$ is defined by
\begin{equation}
q_{\infty}\left(\left((g_{1,\infty},g_{1,p}),\ (g_{2,\infty},g_{2,p})\right)\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)\right)\overset{\text{def}}{=}(g_{1,\infty}\gamma_{1,p}^{-1},\ g_{2,\infty}\gamma_{2,p}^{-1})\mathbb{G}\left(\mathbb{Z}\right).\label{eq:q_infty def}
\end{equation}
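\begin{rem*}
The map $q_{\infty}$ is well defined. Indeed, the decomposition $g_{i,p}=c_{i,p}\gamma_{i,p}$
is unique up to replacing $(c_{i,p},\gamma_{i,p})$ by $(c_{i,p}\delta^{-1},\delta\gamma_{i,p})$
for $\delta\in\mathbb{G}_{i}(\mathbb{Z}_{p})\cap\mathbb{G}_{i}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)=\mathbb{G}_{i}(\mathbb{Z})$,
which is absorbed in the quotient by $\mathbb{G}(\mathbb{Z})$, while replacing
the representative $(g_{i,\infty},g_{i,p})$ by $(g_{i,\infty}\gamma_{i},g_{i,p}\gamma_{i})$
with $\gamma_{i}\in\mathbb{G}_{i}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$
leaves $g_{i,\infty}\gamma_{i,p}^{-1}$ unchanged.
\end{rem*}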
\subsubsection{The S-arithmetic orbit and its decomposition}
Let $v\in\mathbb{Z}_{\text{prim}}^{d}$ and let $g_{v}$ be as defined in Section \ref{subsec:The-mechanism}.
We define the following diagonal embedding of $H_{v}$,
\[
L_{v}\overset{\text{def}}{=}\left\{ \left(h,g_{v}^{-1}hg_{v}\right)\mid h\in H_{v}\right\} \leq\mathbb{G}.
\]
We choose some $k_{v}\in\text{SO}{}_{d}(\mathbb{R})$ such that
\[
k_{v}v=e_{d},
\]
and we denote by $a_{v}$ the diagonal matrix with entries $(\left\Vert v\right\Vert ^{-1/\left(d-1\right)},\dots,\left\Vert v\right\Vert ^{-1/\left(d-1\right)},\left\Vert v\right\Vert )$.
These choices imply that $a_{v}k_{v}g_{v}\in\text{ASL}_{d-1}(\mathbb{R})$.
The following orbit is of main importance,
\begin{equation}
O_{v,p}\overset{\text{def}}{=}\left((k_{v},e_{p}),\ (a_{v}k_{v}g_{v},e_{p})\right)\cdot L_{v}(\mathbb{R}\times\mathbb{Q}_{p})\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right).\label{eq:the s-arithemtic orbit}
\end{equation}
We consider the following decomposition of $H_{v}(\mathbb{Q}_{p})$ into double
cosets
\begin{equation}
H_{v}(\mathbb{Q}_{p})=\bigsqcup_{h\in M}H_{v}\left(\mathbb{Z}_{p}\right)hH_{v}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right),\label{eq:decomposition of SO_V_perp}
\end{equation}
where $M$ is a set of representatives of the double coset space.
We note that the collection of representatives is finite (see \cite{AESgrids},
Section 6.2). We denote
\[
K\overset{\text{def}}{=}H_{e_{d}}(\mathbb{R})\cong\text{SO}{}_{d-1}(\mathbb{R}),
\]
and
\[
\Delta K\times L_{v}(\mathbb{Z}_{p})\overset{\text{def}}{=}\left\{ \left((k,h),(k,g_{v}^{-1}hg_{v})\right)\mid k\in K,\ h\in H_{v}(\mathbb{Z}_{p})\right\} .
\]
\begin{lem}
\label{lem:decomposition of o_v_p}It holds that
\begin{equation}
O_{v,p}=\bigsqcup_{h\in M}O_{v,p,h},\label{eq:decomposition _v,p}
\end{equation}
where
\[
O_{v,p,h}=\left(\Delta K\times L_{v}(\mathbb{Z}_{p})\right)\left((k_{v},h),\ (a_{v}k_{v}g_{v},g_{v}^{-1}hg_{v})\right)\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right).
\]
\end{lem}
\begin{proof}
This follows from a simple computation which uses \eqref{eq:decomposition of SO_V_perp},
and the observation that
\[
k_{v}H_{v}(\mathbb{R})k_{v}^{-1}=K.
\]
\end{proof}
Let $q_{1}:\mathcal{U}\to\mathbb{G}_{1}(\mathbb{Q}_{p})$ be the projection to the p-adic coordinate
of $\mathbb{G}_{1}(\mathbb{R}\times\mathbb{Q}_{p})$, and define
\[
M_{0}\overset{\text{def}}{=}\left\{ h\in M\mid h\in q_{1}(\mathcal{U})\right\} .
\]
We observe that $L_{v}(\mathbb{Z}_{p})(h,g_{v}^{-1}hg_{v})\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$
is either contained in $\mathcal{U}$ or disjoint from it. In particular
\begin{equation}
L_{v}(\mathbb{Z}_{p})(h,g_{v}^{-1}hg_{v})\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)\subseteq\mathcal{U}\iff h\in M_{0}.\label{eq:L_v contained if}
\end{equation}
\begin{cor}
It holds that
\begin{equation}
O_{v,p}\cap\mathcal{U}=\bigsqcup_{h\in M_{0}}O_{v,p,h}.\label{eq:decomposition of O_v,p cap U}
\end{equation}
\end{cor}
\begin{proof}
This follows from the definition of $\mathcal{U}$, decomposition \eqref{eq:decomposition _v,p}
and observation \eqref{eq:L_v contained if}.
\end{proof}
This allows for the following nice description of $q_{\infty}(\mathcal{U}\cap O_{v,p})$.
\begin{prop}
\label{prop:properties of p-adic projection}For $h\in M_{0}$, choose
$\gamma_{i}(h)\in\mathbb{G}_{i}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$,
$i\in\left\{ 1,2\right\} $, by \eqref{eq:rotation decomposition}
and \eqref{eq:lattice decompostion}. The following holds:
\begin{enumerate}
\item For $h\in M_{0}$,
\begin{equation}
q_{\infty}\left(O_{v,p,h}\right)=\Delta K\left(k_{v}\gamma_{1}^{-1}(h),a_{v}k_{v}g_{v}\gamma_{2}(h)\right)\mathbb{G}(\mathbb{Z}).\label{eq:Image of a fiber of an orbit to the real place}
\end{equation}
\item If $h,h'\in M_{0}$ and $h\neq h'$, then
\begin{equation}
Kk_{v}\gamma_{1}^{-1}(h)\mathbb{G}_{1}(\mathbb{Z})\cap Kk_{v}\gamma_{1}^{-1}(h')\mathbb{G}_{1}(\mathbb{Z})=\emptyset,\label{eq:K cosets empty intersection}
\end{equation}
in particular
\begin{equation}
q_{\infty}\left(O_{v,p,h}\right)\cap q_{\infty}\left(O_{v,p,h'}\right)=\emptyset.\label{eq:q_infty projection empty intersection}
\end{equation}
\item For $h\in M_{0}$,
\[
q_{\infty}^{-1}\left(\Delta K\left(k_{v}\gamma_{1}^{-1}(h),a_{v}k_{v}g_{v}\gamma_{2}(h)\right)\mathbb{G}(\mathbb{Z})\right)\bigcap O_{v,p}=O_{v,p,h}.
\]
\end{enumerate}
\end{prop}
\begin{proof}
For $h\in M_{0}$, we write
\begin{equation}
h=c_{1}(h)\gamma_{1}(h),\ \ g_{v}^{-1}hg_{v}=c_{2}(h)\gamma_{2}^{-1}(h),\label{eq:decomposition of h}
\end{equation}
where $(c_{1}(h),c_{2}(h))\in\mathbb{G}(\mathbb{Z}_{p})$ and $(\gamma_{1}(h),\gamma_{2}(h))\in\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right).$
\begin{enumerate}
\item Since $\left((\gamma_{1}^{-1}(h),\gamma_{1}^{-1}(h))\ ,(\gamma_{2}(h),\gamma_{2}(h))\right)\in\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)$,
we see that
\begin{align*}
 & \Delta K\times L_{v}(\mathbb{Z}_{p})\cdot\left((k_{v},h)\ ,(a_{v}k_{v}g_{v},g_{v}^{-1}hg_{v})\right)\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right)\\
 & =\Delta K\times L_{v}(\mathbb{Z}_{p})\left((k_{v}\gamma_{1}^{-1}(h),c_{1}(h)),\ (a_{v}k_{v}g_{v}\gamma_{2}(h),c_{2}(h))\right)\mathbb{G}\left(\mathbb{Z}\left[\frac{1}{p}\right]\right).
\end{align*}
Hence by definition \eqref{eq:q_infty def},
\[
q_{\infty}\left(O_{v,p,h}\right)=\Delta K\left(k_{v}\gamma_{1}^{-1}(h),a_{v}k_{v}g_{v}\gamma_{2}(h)\right)\mathbb{G}(\mathbb{Z}).
\]
\item The proof of \eqref{eq:K cosets empty intersection} is a routine
check, hence we omit its details and leave them for the reader (one
may also look at the proof of Proposition 6.2 in \cite{AESgrids}).
Note that \eqref{eq:q_infty projection empty intersection} follows
from \eqref{eq:K cosets empty intersection}.
\item This fact follows immediately from the last two items and \eqref{eq:decomposition of O_v,p cap U}.
\end{enumerate}
\end{proof}
\subsection{\label{subsec:Equivalence-class-of-lattices}The resulting elements
of $\protect\primlatof{d-1}$}
The following commuting diagram will be important for us,
\begin{equation}
\xymatrix{\mathcal{U}\ar^{q_{\infty\ \ \ \ \ \ \ }}[r] & \mathbb{G}(\mathbb{R})\diagup\mathbb{G}(\mathbb{Z})\ar[r]_{q_{\Delta K\ \ \ \ \ }} & \Delta K\diagdown\mathbb{G}(\mathbb{R})\diagup\mathbb{G}(\mathbb{Z})\\
& \mathbb{G}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})\ar_{id\times q_{P\diagup Q}^{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}}[d]\ar[u]^{\pi_{\mathbb{G}_{1}(\mathbb{Z})}}\ar[r]_{\tilde{q}_{\Delta K}\ \ \ \ \ } & \Delta K\diagdown\mathbb{G}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})\ar_{\tilde{q}}[d]\ar[u]^{\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}}\ar[dr]^{\tilde{\mathcal{M}}}\\
& \mathbb{G}_{1}(\mathbb{R})\times P\diagup Q\ar[r]_{q_{\Delta K^{\pm}}} & X_{d-1,d}^{\text{polar}}\ar[r]_{\mathcal{M}} & X_{d-1,d}
}
\label{eq:main diagram}
\end{equation}
The maps $\pi_{\mathbb{G}_{1}(\mathbb{Z})}$ and $\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}$ are obtained
by dividing from the right by $\mathbb{G}_{1}(\mathbb{Z})$. The maps $q_{\Delta K}$,
$\tilde{q}_{\Delta K}$ and $q_{\Delta K^{\pm}}$ are obtained by
dividing from the left by $\Delta K$ and $\Delta K^{\pm}$ correspondingly.
The map
\[
q_{P\diagup Q}^{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}:\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})\to P\diagup Q,
\]
is naturally defined by $q_{P\diagup Q}^{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}\left(g\mathbb{G}_{2}(\mathbb{Z})\right)\overset{\text{def}}{=}gQ$, since
$\mathbb{G}_{2}(\mathbb{Z})\leq Q$. The maps $\tilde{q}$ and $\tilde{\mathcal{M}}$ are
defined so that the diagram commutes. Now, denote
\begin{equation}
\tilde{R}_{v}\overset{\text{def}}{=}\tilde{q}_{\Delta K}\left(\pi_{\mathbb{G}_{1}(\mathbb{Z})}^{-1}\left(q_{\infty}(\mathcal{U}\cap O_{v,p})\right)\right),\label{eq:R^tilde_v}
\end{equation}
then, by Proposition \ref{prop:properties of p-adic projection},
we get that $\tilde{R}_{v}$ is the finite collection of points
\[
\tilde{O}_{v,h,\gamma}\overset{\text{def}}{=}\Delta K\left(k_{v}\gamma_{1}^{-1}(h)\gamma,a_{v}k_{v}g_{v}\gamma_{2}(h)\mathbb{G}_{2}(\mathbb{Z})\right),\ \ \gamma\in\mathbb{G}_{1}(\mathbb{Z}),\ h\in M_{0},
\]
where $\gamma_{i}(h)$, $i\in\left\{ 1,2\right\} $, are defined in
\eqref{eq:decomposition of h}. Denote
\[
\mathcal{L}(v)\overset{\text{def}}{=}\tilde{\mathcal{M}}(\tilde{R}_{v})\subseteq X_{d-1,d}.
\]
\begin{lem}
\label{lem:elements of L_v}It holds that
\[
\mathcal{L}(v)=\left\{ \left[\Lambda(v,h,\gamma)\right]\overset{\text{def}}{=}\gamma^{-1}\gamma_{1}(h)g_{v}\gamma_{2}(h)Q\mid h\in M_{0},\ \gamma\in\mathbb{G}_{1}(\mathbb{Z})\right\} .
\]
Importantly $\left[\Lambda(v,h,\gamma)\right]=\left[\Lambda_{\gamma^{-1}\gamma_{1}(h)v}\right]$
and as a consequence $\mathcal{L}(v)\subseteq\primlatof{d-1}(\left\Vert v\right\Vert )$.
\end{lem}
\begin{proof}
We have
\[
\tilde{\mathcal{M}}\left(\tilde{O}_{v,h,\gamma}\right)=\gamma^{-1}\gamma_{1}(h)k_{v}^{-1}\left(a_{v}k_{v}g_{v}\gamma_{2}(h)Q\right),
\]
and we note that $a_{v}k_{v}g_{v}\gamma_{2}(h)Q=k_{v}g_{v}\gamma_{2}(h)Q$,
which gives
\[
\tilde{\mathcal{M}}\left(\tilde{O}_{v,h,\gamma}\right)=\gamma^{-1}\gamma_{1}(h)\left(k_{v}^{-1}k_{v}\right)g_{v}\gamma_{2}(h)Q=\left[\Lambda(v,h,\gamma)\right].
\]
By Corollary \ref{cor:generated orthogonal lattices},
\begin{equation}
\left[\Lambda(v,h,\gamma)\right]=\left[\Lambda_{\gamma^{-1}\gamma_{1}(h)v}\right],\label{eq:the resulting primitive lattice as orthogonal latice}
\end{equation}
and also $\mathcal{L}(v)\subseteq\primlatof{d-1}(\left\Vert v\right\Vert )$.
\end{proof}
\subsubsection{Refinement of Theorem \ref{thm:maintheorem}}
For everything that follows we fix a prime $p\neq2$ and assume that
$d\geq4$ is a natural number.
\begin{defn}
We shall say that $v\in\mathbb{Z}_{\text{prim}}^{d}$ is admissible, if either
of the following holds
\begin{enumerate}
\item $d=4$, and $\left\Vert v\right\Vert ^{2}\in\mathbb{D}(p)\smallsetminus8\mathbb{N}$.
\item $d=5$, and $\left\Vert v\right\Vert ^{2}\in\mathbb{D}(p)$.
\item $d>5$, and $v$ is any primitive vector.
\end{enumerate}
\end{defn}
In section \ref{sec:proof of the equidisitrbution} we shall conclude
the following theorem.
\begin{thm}
\label{thm:refinement of main theorem} Let $\left\{ v_{i}\right\} _{i=1}^{\infty}$
be a sequence of admissible vectors such that
\[
\left\Vert v_{i}\right\Vert \to\infty,
\]
and let $\mu_{v_{i}}$ be the uniform counting measure supported
on $\mathcal{L}(v_{i})$. Then
\[
\mu_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\mu_{\text{polar}}.
\]
\end{thm}
Theorem \ref{thm:refinement of main theorem} implies Theorem \ref{thm:maintheorem}
by the following. In \cite{AESgrids} (see Section 5.1 and Proposition
6.2 there), an equivalence relation on the primitive vectors lying
on spheres was introduced. It was shown that the equivalence class
of $v\in\mathbb{Z}_{\text{prim}}^{d}$ is exactly
\[
\left\{ \gamma^{-1}\gamma_{1}(h)v\right\} _{\gamma\in\mathbb{G}_{1}(\mathbb{Z}),\ h\in M_{0}}.
\]
Hence by Lemma \ref{lem:elements of L_v}, if $v\sim u$ then $\mathcal{L}(v)=\mathcal{L}(u)$.
\section{\label{sec:proof of the equidisitrbution}The resulting measures}
\subsection{A further refinement}
We define $\tilde{\nu}_{v}$ to be the uniform measure on the finite
set $\tilde{R}_{v}$ (defined in \eqref{eq:R^tilde_v}). We also define
the measure
\begin{equation}
\tilde{\nu}_{\text{polar}}=\left(\tilde{q}_{\Delta K}\right)_{*}\mu_{\mathbb{G}_{1}(\mathbb{R})}\otimes\mu_{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})},\label{eq:nu_tilde_polar}
\end{equation}
where $\mu_{\mathbb{G}_{1}(\mathbb{R})}$ and $\mu_{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}$
are the Haar probability measure on $\mathbb{G}_{1}(\mathbb{R})$ and the $\mathbb{G}_{2}(\mathbb{R})$-invariant
probability measure on $\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})$, respectively. We will prove
the following.
\begin{thm}
\label{thm:convergence of nu tilde-1}Let $\left\{ v_{i}\right\} _{i=1}^{\infty}$
be a sequence of admissible vectors such that
\[
\left\Vert v_{i}\right\Vert \to\infty.
\]
Then
\begin{equation}
\tilde{\nu}_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\tilde{\nu}_{\text{polar}}.\label{eq:nu_tilde convergence to natural measure}
\end{equation}
\end{thm}
Theorem \ref{thm:convergence of nu tilde-1} implies Theorem \ref{thm:refinement of main theorem}
by the following two lemmata.
\begin{lem}
\label{lem:push of counting measure nu_tilde_v}The map $\tilde{\mathcal{M}}$,
when restricted to $\tilde{R}_{v}$, is a bijection onto $\mathcal{L}(v)$.
In particular,
\begin{equation}
\tilde{\mathcal{M}}_{*}\tilde{\nu}_{v}=\mu_{v}.\label{eq:mult_tild_nu_tild_v is mu_v}
\end{equation}
\end{lem}
\begin{proof}
The map is clearly onto. In order to prove injectivity, we recall
that part 2 of Proposition \ref{prop:properties of p-adic projection}
states that
\[
Kk_{v}\gamma_{1}^{-1}(h)\mathbb{G}_{1}(\mathbb{Z})\cap Kk_{v}\gamma_{1}^{-1}(h')\mathbb{G}_{1}(\mathbb{Z})=\emptyset,\ \ h\neq h',\ h,h'\in M_{0},
\]
which implies that for different representatives $h,h'\in M_{0}$,
the corresponding rank-$\left(d-1\right)$ subgroups defined by \eqref{eq:the resulting primitive lattice as orthogonal latice}
lie inside different hyperplanes. Finally, since bijectivity is established,
we immediately get \eqref{eq:mult_tild_nu_tild_v is mu_v}.
\end{proof}
\begin{lem}
\label{lem:push of nu_tilde_v_polar}It holds that $\tilde{\mathcal{M}}_{*}\tilde{\nu}_{\text{polar}}=\mu_{\text{polar }}$.
\end{lem}
\begin{proof}
Since $\tilde{\mathcal{M}}=\mathcal{M}\circ\tilde{q}$ and since $\mathcal{M}_{*}\nu_{\text{polar}}=\mu_{\text{polar }}$,
it is left to prove $\tilde{q}_{*}\tilde{\nu}_{\text{polar}}=\nu_{\text{polar}}.$
We note that
\[
\left(q_{P\diagup Q}^{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}\right)_{*}\mu_{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}=\mu_{X_{d-1}},
\]
and observe by Diagram \eqref{eq:main diagram} that
\begin{equation}
\tilde{q}\circ\tilde{q}_{\Delta K}=q_{\Delta K^{\pm}}\circ\left(id\times q_{P\diagup Q}^{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}\right).\label{eq:q_tilde equalities}
\end{equation}
Therefore,
\[
\tilde{q}_{*}\tilde{\nu}_{\text{polar}}\underbrace{=}_{\text{definition of \ensuremath{\tilde{\nu}_{\text{polar}}}}}\left(\tilde{q}\right)_{*}\left(\left(\tilde{q}_{\Delta K}\right)_{*}\mu_{\mathbb{G}_{1}(\mathbb{R})}\otimes\mu_{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}\right)\underbrace{=}_{\eqref{eq:q_tilde equalities}}
\]
\[
\left(q_{\Delta K^{\pm}}\right)_{*}\left(id\times q_{P\diagup Q}^{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}\right)_{*}\mu_{\mathbb{G}_{1}(\mathbb{R})}\otimes\mu_{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}=
\]
\[
\left(q_{\Delta K^{\pm}}\right)_{*}\mu_{\mathbb{G}_{1}(\mathbb{R})}\otimes\mu_{X_{d-1}}\underbrace{=}_{\text{\eqref{eq:nu_polar definition}}}\nu_{\text{polar }}.
\]
\end{proof}
\subsection{Proof of Theorem \ref{thm:convergence of nu tilde-1}}
To summarize, we have
\[
\text{Theorem \ref{thm:convergence of nu tilde-1} \ensuremath{\implies}Theorem \ref{thm:refinement of main theorem} \ensuremath{\implies}Theorem \ref{thm:maintheorem}}.
\]
Hence this section serves as the last step of the proof of Theorem
\ref{thm:maintheorem}.
\subsubsection{The key Theorem of \cite{AESgrids}}
The orbit $O_{v,p}$ defined in \eqref{eq:the s-arithemtic orbit}
is a compact orbit (see \cite{AESgrids}, Section 3.2) and we denote
by $\mu_{O_{v,p}}$ the $L_{v}(\mathbb{R}\times\mathbb{Q}_{p})$-invariant probability
measure supported on $O_{v,p}$. Also, let $\mu_{\mathcal{Y}_{p}}$ be the
$\mathbb{G}(\mathbb{R}\times\mathbb{Q}_{p})$-invariant probability measure on $\mathcal{Y}_{p}$.
The following theorem, which was proved in \cite{AESgrids}, is key
in order to prove Theorem \ref{thm:convergence of nu tilde-1}.
\begin{thm}
\label{thm:AESgrids thm}Let $\left\{ v_{i}\right\} _{i=1}^{\infty}$
be a sequence of admissible vectors such that
\[
\left\Vert v_{i}\right\Vert \to\infty.
\]
Then
\[
\mu_{O_{v_{i},p}}\overset{\text{weak * }}{\longrightarrow}\mu_{\mathcal{Y}_{p}}.
\]
\end{thm}
We define the measure $\eta_{v}$ on $O_{v,p}\cap\mathcal{U}$
by
\begin{equation}
\eta_{v}\overset{\text{def}}{=}\mu_{O_{v,p}}\mid_{\mathcal{U}}.\label{eq:eta_v definition}
\end{equation}
Since $\mathcal{U}$ is a clopen set, it follows from Theorem \ref{thm:AESgrids thm}
that
\[
\eta_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\mu_{\mathcal{Y}_{p}}\mid_{\mathcal{U}}.
\]
Also, since $q_{\infty}$ is a proper map, we get
\[
\left(q_{\infty}\right)_{*}\eta_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\left(q_{\infty}\right)_{*}\mu_{\mathcal{Y}_{p}}\mid_{\mathcal{U}}.
\]
Importantly, $\mu_{\mathcal{Y}_{p}}\mid_{\mathcal{U}}$ is $\mathbb{G}(\mathbb{R})$-invariant, hence
so is $\left(q_{\infty}\right)_{*}\mu_{\mathcal{Y}_{p}}\mid_{\mathcal{U}}$. Therefore
we deduce the following.
\begin{cor}
\label{cor:s-arithemetic measures converge to haar}It holds that
\begin{equation}
\left(q_{\infty}\right)_{*}\eta_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\mu_{\mathbb{G}(\mathbb{R})/\mathbb{G}(\mathbb{Z})},\label{eq:weak convergence to haar of eta}
\end{equation}
where $\mu_{\mathbb{G}(\mathbb{R})/\mathbb{G}(\mathbb{Z})}$ is the $\mathbb{G}(\mathbb{R})$-invariant probability
on $\mathbb{G}(\mathbb{R})/\mathbb{G}(\mathbb{Z})$.
\end{cor}
Next, note that Proposition \ref{prop:properties of p-adic projection}
shows that the measure $\left(q_{\infty}\right)_{*}\eta_{v}$ is supported
on a finite union of $\Delta K$ orbits
\[
\bigsqcup_{h\in M_{0}}q_{\infty}\left(O_{v,p,h}\right),
\]
and by applying further $q_{\Delta K}$, we get that $\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v}$
is supported on a finite set
\[
R_{v}\overset{\text{def}}{=}q_{\Delta K}\circ q_{\infty}(O_{v,p}),
\]
which consists of the elements
\[
\tilde{O}_{v,h}=\Delta K\left(k_{v}\gamma_{1}^{-1}(h),a_{v}k_{v}g_{v}\gamma_{2}(h)\right)\mathbb{G}(\mathbb{Z}),\ h\in M_{0}.
\]
On the other hand, note that
\[
\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\left(\tilde{R}_{v}\right)=R_{v},
\]
so that $\left(\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v}$
has the same support as that of $\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v}$.
The following lemma connects those two measures.
\begin{lem}
\label{lem:difference of measure is zero}It holds that
\[
\left(\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v_{i}}-\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v_{i}}\overset{\text{weak *}}{\longrightarrow}0.
\]
\end{lem}
In Subsection \ref{subsec:lemma of difference of meas} we will explain
how Lemma \ref{lem:difference of measure is zero} follows from \cite{AESgrids}.
Before that, we explain how Lemma \ref{lem:difference of measure is zero}
and the preceding discussion imply Theorem \ref{thm:convergence of nu tilde-1}.
\begin{proof}[Proof of Theorem \ref{thm:convergence of nu tilde-1}]
By Corollary \ref{cor:s-arithemetic measures converge to haar},
it follows that
\[
\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v_{i}}=\left(q_{\Delta K}\right)_{*}\left((q_{\infty})_{*}\eta_{v_{i}}\right)\overset{\text{weak *}}{\longrightarrow}\left(q_{\Delta K}\right)_{*}\mu_{\mathbb{G}(\mathbb{R})\diagup\mathbb{G}(\mathbb{Z})}.
\]
Hence we get from Lemma \ref{lem:difference of measure is zero} that
\[
\left(\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\left(q_{\Delta K}\right)_{*}\mu_{\mathbb{G}(\mathbb{R})\diagup\mathbb{G}(\mathbb{Z})}.
\]
Observe that (see Diagram \eqref{eq:main diagram})
\[
\left(q_{\Delta K}\right)_{*}\mu_{\mathbb{G}(\mathbb{R})\diagup\mathbb{G}(\mathbb{Z})}=\left(\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\left(\left(\tilde{q}_{\Delta K}\right)_{*}\mu_{\mathbb{G}_{1}(\mathbb{R})}\otimes\mu_{\mathbb{G}_{2}(\mathbb{R})\diagup\mathbb{G}_{2}(\mathbb{Z})}\right),
\]
so that by the definition of $\tilde{\nu}_{\text{polar}}$ (see \eqref{eq:nu_tilde_polar}),
we get that
\[
\left(\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\left(\tilde{\pi}_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{\text{polar}}.
\]
Now, since the measures $\tilde{\nu}_{v}$ and $\tilde{\nu}_{\text{polar}}$
are both $\mathbb{G}_{1}(\mathbb{Z})$-invariant and since $\mathbb{G}_{1}(\mathbb{Z})$ is finite,
we also obtain that
\[
\tilde{\nu}_{v_{i}}\overset{\text{weak *}}{\longrightarrow}\tilde{\nu}_{\text{polar}}.
\]
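To make the last step explicit (a standard averaging argument): for a compactly supported continuous $f$, the $\mathbb{G}_{1}(\mathbb{Z})$-average
\[
\bar{f}(x)=\frac{1}{\left|\mathbb{G}_{1}(\mathbb{Z})\right|}\sum_{\gamma\in\mathbb{G}_{1}(\mathbb{Z})}f(\gamma x)
\]
descends to a continuous function $\check{f}$ on the quotient, and invariance of the measures gives
\[
\int f\,d\tilde{\nu}_{v_{i}}=\int\bar{f}\,d\tilde{\nu}_{v_{i}}=\int\check{f}\,d\left(\pi_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v_{i}}\longrightarrow\int\check{f}\,d\left(\pi_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{\text{polar}}=\int f\,d\tilde{\nu}_{\text{polar}}.
\]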
\end{proof}
\subsubsection{\label{subsec:lemma of difference of meas}Outline of the proof for
Lemma \ref{lem:difference of measure is zero}}
Let $\lambda_{v}$ be the uniform counting measure on the set $R_{v}.$
The idea of the proof of Lemma \ref{lem:difference of measure is zero}
is to show that
\begin{equation}
\left(\pi_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v}-\lambda_{v}\to0,\ \text{and\ }\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v}-\lambda_{v}\to0.\label{eq:difference of counting measures}
\end{equation}
We denote
\begin{equation}
\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v}=\sum_{h\in M_{0}}\alpha_{v,h}\delta_{\tilde{O}_{v,h}},\label{eq:rh_Dk_rh_infty_eta}
\end{equation}
where
\[
\alpha_{v,h}\overset{\text{def}}{=}\eta_{v}(q_{\infty}^{-1}(O_{v,h}))\underbrace{=}_{\eqref{eq:eta_v definition}\text{, and Proposition \ref{prop:properties of p-adic projection}}}\eta_{v}(O_{v,p,h}).
\]
It follows that
\[
\eta_{v}(O_{v,p,h})=\frac{\alpha}{\left|\text{stab}_{\Delta K\times L_{v}(\mathbb{Z}_{p})}(k_{v},h,a_{v}k_{v}g_{v},g_{v}^{-1}hg_{v})\mathbb{G}(\mathbb{Z}(\frac{1}{p}))\right|},
\]
where $\alpha=\alpha(v)$ normalizes $\left(q_{\Delta K}\circ q_{\infty}\right)_{*}\eta_{v}$
to be a probability measure. Also, let
\[
\left(\pi_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v}=\sum_{h\in M_{0}}\beta_{v,h}\delta_{\tilde{O}_{v,h}},
\]
where
\[
\beta_{v,h}=\frac{\beta}{\left|\text{stab}_{\mathbb{G}_{1}(\mathbb{Z})}(Kk_{v}\gamma_{1}^{-1}(h))\right|},
\]
and $\beta=\beta(v)$ normalizes the measure $\left(\pi_{\mathbb{G}_{1}(\mathbb{Z})}\right)_{*}\tilde{\nu}_{v}$
to a probability measure. Let
\[
M_{v}=\max_{h\in M_{0}}\alpha_{v,h},
\]
and
\[
N_{v}=\max_{h\in M_{0}}\beta_{v,h}.
\]
Also let
\[
E=\left\{ \Delta K(\rho,\eta)\mathbb{G}(\mathbb{Z})\mid\left|\text{stab}_{\mathbb{G}_{1}(\mathbb{Z})}(K\rho)\right|>1\right\} .
\]
The following statements were proven in \cite{AESgrids}.
\begin{lem}
\label{lem:weight lemma from aes-1}The following holds,
\begin{enumerate}
\item For all $h\in M_{0}$ such that $O_{v,h}\notin E,$ it holds that
$\alpha_{v,h}=M_{v}$ and $\beta_{v,h}=N_{v}$.
\item $\frac{\left|R_{v}\cap E\right|}{|R_{v}|}\to0$.
\end{enumerate}
\end{lem}
\begin{proof}
See Lemmata 6.3 and 6.4 of \cite{AESgrids}.
\end{proof}
It is immediate that Lemma \ref{lem:weight lemma from aes-1} implies
the limits \eqref{eq:difference of counting measures}.
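For completeness, here is a sketch of that implication, under the assumption that the atoms $\left\{ \tilde{O}_{v,h}\right\} _{h\in M_{0}}$ enumerate the points of $R_{v}$. Write $\varepsilon_{v}=\left|R_{v}\cap E\right|/\left|R_{v}\right|$. Since every atom outside $E$ carries the maximal weight $M_{v}$, summing the weights gives $M_{v}\left|R_{v}\setminus E\right|\le1$ and $M_{v}\ge1/\left|R_{v}\right|$, so the total mass of the difference measure satisfies
\[
\sum_{h\in M_{0}}\left|\alpha_{v,h}-\lambda_{v}(\tilde{O}_{v,h})\right|\le\underbrace{\sum_{h:\,\tilde{O}_{v,h}\in E}\alpha_{v,h}}_{\le\,\varepsilon_{v}}+\underbrace{\lambda_{v}(E)}_{=\,\varepsilon_{v}}+\underbrace{\left|R_{v}\setminus E\right|\left(M_{v}-\frac{1}{\left|R_{v}\right|}\right)}_{\le\,\varepsilon_{v}}\le3\varepsilon_{v}\longrightarrow0,
\]
and the same estimate holds verbatim with $\beta_{v,h}$ and $N_{v}$ in place of $\alpha_{v,h}$ and $M_{v}$.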
\subsubsection*{Acknowledgements}
I thank Uri Shapira for proposing this problem, for his valuable support,
and for many discussions. I also thank Cheng Zheng and Rene R{\"u}hr for
many important discussions on this project.
\section{Introduction}
In the past few years, the on-surface synthesis of carbon-based nanoarchitectures out of single building-block precursors has attracted great attention \cite{Franc2011,Fan2015,Robert2015,Talirz2016,Nacci2016,Shen2017}. Recent advances in this active field are the synthesis of atomically precise graphene nanoribbons (GNR) \cite{Cai2010,Ruffieux2016}, and covalent \cite{Bieri2009,Lafferentz2012,Lipton-Duffin2009,Basagni2015,Moreno2018} and metal-organic coordination \cite{Park2011,Saywell2014,Dong2016} polymeric chains and networks. One of the most common strategies for GNR synthesis has been the use of non-planar, halogen-based hydrocarbons in a two-step reaction: i) dehalogenative homocoupling polymerization (Ullmann coupling), and ii) formation of planar aromatic skeletons by cyclodehydrogenation \cite{Cai2010,Bjork2011,Talirz2016}. This strategy has led to GNRs with armchair, zigzag, and more complex edge structures \cite{Kimouche2015,Liu2015,Liu2016a,Ruffieux2016,Talirz2016,Oteyza2016}, to their atomically controlled functionalization \cite{Kawai2015,Nguyen2016,Carbonell-Sanroma2017a}, or to GNR heterostructures \cite{Chen2015,Dienel2015,Carbonell-Sanroma2017}.
Despite the above successful examples, predicting the final product from a given precursor and catalytic surface is far out of reach of present understanding. This is due to the subtle correlation of parameters such as the monomer structure and adsorption configuration \cite{Jacobse2016,Schulz2017}, the surface crystal structure and reactivity \cite{Oteyza2016,Simonov2018}, or the presence of byproducts \cite{Bjork2013,Batra2014} and metal adatoms \cite{Simonov2018,Patera2017,Schulz2017}. An illustrative case of the critical role of the interplay between monomer and surface properties on the final product is that of dibromo bianthracene (DBBA) precursors. $10,10'$-Dibromo $9,9'$-bianthracene undergoes different reaction paths depending on the catalytic substrate. Whereas the conventional Ullmann coupling route leads to straight armchair GNRs on Au(111) \cite{Cai2010,Simonov2014,Talirz2016} and Au(110) \cite{Massimi2015}, dehydrogenative cross coupling polymerization on Cu(111) leads to chiral zigzag edge GNRs \cite{Han2014,Han2015,SanchezSanchez2016}. On Ag(111), armchair GNRs are obtained through a very different path where polymerization occurs only after turning monomers into graphene platelets by a simultaneous dehalogenation and cyclodehydrogenation \cite{Huang2012}. This covalent coupling of individual platelets can be, however, blocked by the strong interaction with the substrate, as found for Cu(110) \cite{Simonov2015}. Finally, moving the Br atoms from the $10,10'$ to the $2,2'$ sites minimizes the effect of the surface, univocally guiding the reaction path through the Ullmann coupling route on Au(111), Ag(111), and Cu(111) \cite{Oteyza2016}. Understanding the role of the monomer and the substrate in hierarchical on-surface reactions is therefore decisive for synthesizing atomically precise GNRs, the main activity in this field, but also for designing new synthetic routes for novel polymeric structures.
Here we present a systematic study of the thermally assisted reactions and the resulting structures obtained with the same monomer on Au(111), Ag(111), Ag(100), and Cu(111). For our studies we use the home-synthesized $2,2'$-diphenyl $10,10'$-dibromo $9,9'$-bianthracene (DP-DBBA), which can be seen as a derivative of $10,10'$-dibromo $9,9'$-bianthracene (DBBA), the most studied precursor for GNR synthesis. The temperature dependent X-ray (XPS) and ultraviolet (UPS) photoemission, and scanning tunneling microscopy (STM) experiments carried out on the different substrates give us insight on the role of the substrate, whereas comparison with the available literature on the on-surface reactions of DBBA on each of the above substrates enables us to discriminate pure monomer effects. The obtained final product is different for each substrate, and each of them differs also from that obtained with DBBA on the same substrate. The products we obtain range from coordination and covalent polymer chains, where the precursor arrangement dictates their chiral imprint, to graphene nanoribbons and nanoporous graphene.
\section{Results and discussion}
\subsection{The molecular precursor}
The precursor monomer used in this work, $2,2'$-diphenyl $10,10'$-dibromo $9,9'$-bianthracene (DP-DBBA), is a derivative of the well-studied $10,10'$-dibromo $9,9'$-bianthracene (DBBA) in the synthesis of graphene nanoribbons\cite{Cai2010, Ruffieux2016}, with one additional phenyl group added at opposite sides of each anthracene branch. The molecule is synthesized in a two-step procedure starting from $2,2'$-dibromo-$9,9'$-bianthracene (1, Fig.~\ref{Fig1}a). First, double Pd-catalyzed Suzuki coupling of compound 1 with two equivalents of phenyl boronic acid affords diphenyl derivative 2. Then, careful dibromination leads to the isolation of compound DP-DBBA in good yield (see Supplementary Materials of Ref. \cite{Moreno2018} for details in the synthesis).
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=8.5 cm]{Fig1.png}
\caption{Synthesis and structure of monomer DP-DBBA. (a) Synthetic route to obtain $2,2'$-diphenyl $10,10'$-dibromo $9,9'$-bianthracene (DP-DBBA). (b) Top and side views of the gas-phase relaxed structure of DP-DBBA, where the staggered conformation arising from the steric hindrance between hydrogen atoms can be clearly visualized (the high-end aromatic rings are highlighted in green).}
\label{Fig1}
\end{center}
\end{figure}
Bianthryl derivatives are characterized by the staggered geometry induced by the steric repulsion between hydrogen atoms, which twists the anthracene subunits around the C-C bond connecting them, as can be seen in the schematic model of Fig. \ref{Fig1}b. Interestingly, the phenyl substituents imprint chirality on the DP-DBBA molecule, which can be transferred to the polymer chains upon Ullmann coupling depending on the chiral mixture of precursor monomers, as will be shown later.
\subsection{Au(111): From covalent polymeric chains, to nanoribbons and nanoporous graphene}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=17.5 cm]{Fig2.png}
\caption{\scriptsize \textbf {Generation of polymeric chains, graphene nanoribbons and nanoporous graphene on Au(111).} \textbf{a}, Br 3d and \textbf{b} C 1s core-level spectra and \textbf{c}, UV photoemission spectroscopy of the as-deposited monomer precursor at RT and after progressive stepwise annealing stages. The C 1s spectra are fitted considering C-C, C-H and C-Br deconvoluted components. Raw data are displayed by gray dots and the fitting results and subcomponents by solid lines. \textbf{d}, Constant current STM topographic images of as-deposited precursor molecules: overview of 14x7 nm$^{2}$ (top) and zoomed region of 2.5x2.5 nm$^{2}$ (bottom) with the single monomer structure template overlaid (It=0.1nA, Vs= 0.7V), \textbf{e} polymeric chains after mild annealing at 200$^{\circ}$C and 300$^{\circ}$C with the polymer model overlaid and the side view at the bottom (7x1.5 nm$^{2}$,It=0.1nA, Vs= 0.7V), \textbf{f} single graphene nanoribbon after 400$^{\circ}$C (9x3 nm$^{2}$,It=0.1nA, Vs= 0.7V), \textbf{g}, Laplace filtered constant-height STM image of nanoporous graphene formation after further annealing at 450$^{\circ}$C and its overlapped structural model [80x80 nm$^{2}$, I$_{t}$=0.1nA, V$_{s}$=1.0V], \textbf{h} overview of monomers deposited with the sample held above 300$^{\circ}$C (30x30 nm$^{2}$,It=0.7nA, Vs= 0.2V) and \textbf{i} zoom in \textbf{h} highlighting the linking of several planar monomer units (6x6 nm$^{2}$). All images were recorded at 5K.}
\label{Fig2}
\end{center}
\end{figure}
Au(111) is the most studied surface for the on-surface synthesis of covalent nanostructures, in particular GNRs \cite{Talirz2016}. The latter is mainly due to the fact that the majority of halogenated hydrocarbon precursors undergo stepwise Ullmann coupling and cyclodehydrogenation reactions with clearly separated thermal windows. The new monomer employed in this study allows for a third hierarchical reaction step that couples adjacent GNRs via dehydrogenative cross coupling to give rise to a nanoporous graphene, as we have recently reported.\cite{Moreno2018}
We use temperature-dependent XPS and UPS spectra to track the thermal reactions that give rise to the different intermediate and final structures. Debromination of the precursors can be followed by tracking energy shifts of the Br $3d$ core level, whereas the spectral decomposition of the C $1s$ multiplet can be used to follow cyclodehydrogenation by measuring the ratio between C-C and C-H bond contributions, and the formation of C-metal coordination bonds.
Figures~\ref{Fig2}a and b show XPS spectra of the Br $3d$ and C $1s$ core levels and corresponding fits for different temperatures.
The spectrum recorded after deposition at room temperature (RT) at the Br $3d$ region shows the spin-orbit split $3d_{5/2}-3d_{3/2}$ doublet at 70.9 and 69.9 eV, which can be assigned to C-bonded Br,\cite{DiGiovannantonio2013,Gutzler2014,Massimi2015,Batra2014,Basagni2015,Oteyza2016} indicating that the monomer is still brominated upon deposition. When the sample is annealed at 200$^{\circ}$C, a significantly quenched Br-C doublet coexists with a more intense doublet that appears at 2.1 eV lower binding energy (BE), which can be assigned to Br-Au bond formation.\cite{DiGiovannantonio2013,Gutzler2014,Massimi2015,Batra2014,Basagni2015,Oteyza2016} This transition is attributed to the partial cleavage of C-Br bonds and the passivation of Br radicals by surface Au atoms. At 300$^{\circ}$C, the sole presence of the Br-Au doublet indicates that the monomers are completely debrominated. Finally, the absence of any peak at 400$^{\circ}$C indicates that Br adatoms have desorbed from the surface in this last temperature window.
In the C $1s$ region, the small energy differences between the C-C, C-H, and C-Br bond contributions result in a single broad peak that is difficult to deconvolute by fitting, forcing us to reduce the degrees of freedom by assuming some relative ratios\cite{note1}. At RT, a good fit is obtained by assuming the relative intensities of the pristine monomer stoichiometry (C-H:C-C:C-Br=26:12:2). The best fit at 200$^{\circ}$C and 300$^{\circ}$C is obtained by maintaining the relative weight of C-H bonds and converting the ratio of cleaved C-Br obtained from the Br $3d$ analysis into C-C bonds. This reveals two important points. First, that the C radicals formed by the Br cleavage are saturated by C-C coupling, as expected for the Ullmann polymerization. Second, that cyclodehydrogenation has not yet taken place. At 400$^{\circ}$C, however, the C-H:C-C bond ratio has to be modified to the one corresponding to GNRs (C-H:C-C=12:24), a clear fingerprint of the cyclodehydrogenative aromatization. Further annealing at 450$^{\circ}$C does not alter the C $1s$ spectrum (not shown), so the edge dehydrogenation that takes place to create the nanoporous graphene is beyond our XPS sensitivity.
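The fitting constraint described above amounts to simple bookkeeping of the 40 carbon atoms per monomer. A minimal sketch of this bookkeeping (a hypothetical helper, not the actual fitting code; the debrominated fraction $x$ is assumed to be extracted from the Br $3d$ doublet areas):

```python
def c1s_fractions(debrominated_fraction):
    """Expected relative C 1s component areas for DP-DBBA (40 C atoms,
    intact stoichiometry C-H:C-C:C-Br = 26:12:2), assuming each cleaved
    C-Br bond is converted into a C-C bond (Ullmann coupling) and no
    cyclodehydrogenation has yet occurred."""
    x = debrominated_fraction       # fraction of cleaved C-Br bonds (from Br 3d)
    c_br = 2.0 * (1.0 - x)          # remaining C-Br bonds
    c_c = 12.0 + 2.0 * x            # original C-C plus new inter-monomer bonds
    c_h = 26.0                      # C-H bonds untouched below ~400 C
    total = c_br + c_c + c_h        # always 40
    return {"C-H": c_h / total, "C-C": c_c / total, "C-Br": c_br / total}

# Intact monomer at RT vs. fully debrominated polymer at 300 C
print(c1s_fractions(0.0))   # C-Br area fraction 2/40 = 0.05
print(c1s_fractions(1.0))   # C-C area fraction 14/40 = 0.35
```

At 400$^{\circ}$C the constraint would switch to the GNR stoichiometry (C-H:C-C=12:24) instead.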
The effect of the intramolecular transformations described above on the frontier orbitals can be tracked by UPS, as shown in Fig. \ref{Fig2}c. Upon deposition at RT, we can identify a prominent molecular orbital peak at 1.6 eV. The peak broadens after each reaction step identified in the XPS analysis, in agreement with the gradual delocalization of molecular orbitals expected from polymerization and aromatization.
Comparing all XPS and UPS data displayed in Figs. \ref{Fig2}a-c together, we notice a coincident shift of $\sim0.2$ eV of all peaks in the thermal regime of 200-400$^{\circ}$C. The overall shift of the spectra is a signature of a non-local effect such as a work function change, which can be attributed to the presence of Br adatoms in this temperature range.\cite{Pham2016,Oteyza2016} Br is indeed known to substantially modify the work function of gold. \cite{Bertel1980}
We combine our spectroscopic analysis with real space structural information provided by scanning tunneling microscopy (STM). Monomers deposited with the sample held at room temperature are observed as double bright protrusions in STM images, as shown in Fig.~\ref{Fig2}d. They display an apparent height of about 0.23 nm and a center-to-center distance of 0.74 nm. Considering the gas-phase relaxed structure displayed in Fig.~\ref{Fig1}, we can relate the measured inter-lobe distance with the two opposite, high end benzene rings of the scissor-like staggered anthracenes, and hence conclude that the monomer binds to the surface from the functional aryl group. Annealing at around 200$^{\circ}$C promotes the formation of 1D chains observed as a zigzag of bright protrusions with an apparent height of about 0.31 nm and a periodicity of 0.84 nm. These values are in excellent agreement with the periodicity of 0.85 nm for C-C bonded DP-DBBA depicted in the schematics of Fig.~\ref{Fig2}e, and also with previously reported values for DBBA \cite{Cai2010}. The zigzag correlation confirms that the protrusions correspond to the non-functionalized side of the anthracene, since the presence of the aryl group at the high end would lead to the alignment of the protrusion pairs perpendicular to the chain. Further annealing above 400$^{\circ}$C leads to protrusionless 1D chain structures with a reduced apparent height of 0.18 nm (see Fig. \ref{Fig2}f). The planarization of the backbone is in agreement with the aromatization of the polymers upon cyclodehydrogenation and the consequent formation of GNRs \cite{Bjork2011}. Similar values were found for other types of on-surface synthesized GNRs \cite{Cai2010}.
Interestingly, the cyclization of the functional aryl group in our precursor leads to an atomic structure that differs substantially from that of the more conventional armchair GNRs obtained using plain DBBA, giving rise to characteristic periodic cove-shaped edges, which allow the formation of nanoporous graphene (Fig.~\ref{Fig2}g) after a subsequent annealing at 450$^{\circ}$C.\cite{Moreno2018}
The hierarchical reactions that lead to the synthesis of straight GNRs with well-defined edge structures such as the one shown in Fig. \ref{Fig2}f require clearly separated thermal windows for the dehalogenation and cyclodehydrogenation steps. Mixing the two steps by, for instance, raising the dehalogenation temperature by replacing Br with Cl leads to a disordered polymerization of individual graphene platelets formed by the aromatization of single precursors.\cite{Jacobse2016} Here we show that the two steps can be kinetically mixed by modifying the annealing method. Long, straight GNRs are obtained by slowly annealing a precursor-covered sample to 400$^{\circ}$C within 1 hour. In contrast, precursor deposition with the sample held at 300$^{\circ}$C leads to disordered polymer chains very similar to those obtained with chlorinated DCBA precursors in sequential annealing steps. The chains, as shown in Figs. \ref{Fig2}h and i, seem to be formed by randomly linked planar units of the size of the monomers, with a lateral size of 1.5 nm along the long axis direction, and a height of 0.15 nm.
\subsection{Ag(111): From racemic to chiral metal coordinated polymers}
GNRs have been synthesized on Ag(111) using DBBA \cite{Huang2012,Oteyza2016} and tetraphenyl-triphenylene \cite{Cai2010} precursors. Although a priori the results could be expected due to the similarity with Au(111) in lattice structure and chemical reactivity, a comparison of the intermediate structures obtained by DBBA reveals different chemical paths followed on each substrate. On Ag(111), the precursors are already debrominated and dehydrogenated at 180$^{\circ}$C, but instead of forming C-C coupled polymer chains as on Au(111), they self-assemble into a hexagonal pattern of flattened graphene platelets.\cite{Huang2012} The platelets polymerize into GNRs by increasing the annealing temperature to 380$^{\circ}$C. The results presented in the following illustrate how modifying the precursor from DBBA to DP-DBBA can further alter this path, giving rise to well-defined polymeric intermediates but inhibiting the formation of GNRs.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=17.5 cm]{Fig3.png}
\caption{\scriptsize \textbf {Temperature-assisted on-surface reactions on Ag(111).} \textbf{a}, Br 3d and \textbf{b} C 1s core-level spectra and \textbf{c}, UV photoemission spectroscopy of the as-deposited monomer precursor at RT and after progressive stepwise annealing stages. The C 1s spectra are fitted considering C-C, C-H and C-Br deconvoluted components. Raw data are displayed by gray dots and the fitting results and subcomponents by solid lines. The subtracted background is marked by dotted lines. Constant current STM topographic images of as-deposited precursor molecules: \textbf{d}, overview of 56x56 nm$^{2}$ (top) and zoomed region of 5.6x5.6 nm$^{2}$ (bottom) (It=0.1nA, Vs=0.8V), \textbf{e} annealed at 150$^{\circ}$C: overview of 56x56 nm$^{2}$ (top) and zoomed region of 5x5 nm$^{2}$ (bottom) (It=0.1nA, Vs=1.0V), \textbf{f} annealed at 175$^{\circ}$C: overview of 34x34 nm$^{2}$ (top) and zoomed region of 8.5x8.5 nm$^{2}$ (bottom) (It=0.1nA, Vs= 1.2V), \textbf{g} annealed at 200$^{\circ}$C (45x45 nm$^{2}$, inset:10.5x10.5 nm$^{2}$; It=0.1nA, Vs= 1.2V) and \textbf{h} annealed at 250$^{\circ}$C (63x63 nm$^{2}$, inset:15x15 nm$^{2}$; It=0.1nA, Vs= 0.7V). All images were recorded at 78K.}
\label{Fig3}
\end{center}
\end{figure}
Figures~\ref{Fig3}a and b show the evolution of the Br $3d$ and C $1s$ core levels as a function of annealing temperature. As on Au(111), we find that the precursor adsorbs brominated at RT. This is in agreement with the intact adsorption of DBBA reported previously in Ref. \cite{Shen2017a}, where the bromine atoms are placed in the same $10,10'$ site. However, we note that moving the Br to the $2,2'$ site has been shown to lower the debromination temperature down to RT, reflecting a strong sensitivity of the dissociation barrier to structural details of the molecule on this substrate.\cite{Oteyza2016} At 150$^{\circ}$C, the coexistence of both Br-C and Br-Ag doublets separated by 2.2 eV reveals the onset of debromination, which is already completed at 175$^{\circ}$C. This lower dehalogenation temperature as compared to Au has also been found for DBBA.\cite{Huang2012} Finally, Br adsorbates desorb between 300$^{\circ}$C and 400$^{\circ}$C.
C $1s$ core level fitting (Fig.~\ref{Fig3}b) confirms the intact adsorption of as-deposited monomers at RT. A good fit is obtained by constraining the relative C-Br, C-H and C-C peak intensities to the precursor stoichiometry. After annealing at 150$^{\circ}$C, however, the behaviour differs from that on Au(111). Here the fitting requires an additional component at the lower BE tail, which is a signature of C-Ag bonds.\cite{Gutzler2014, Basagni2015,Massimi2015,Simonov2015,Smerieri2016, Pis2016} Comparing the similar branching ratios of C-Br/C-Ag and Br-C/Br-Ag bonds, we can conclude that the debrominated C radicals are passivated by metal coordination rather than covalently coupled to other molecules, as found on Au(111). On the other hand, a reduced intensity of the C-H component obtained at this temperature sets the onset of a partial dehydrogenation that is gradually completed at 400$^{\circ}$C (full series not shown). This is in contrast to DBBA, where dehydrogenation is already completed at 180$^{\circ}$C.\cite{Huang2012} In the thermal range of 150-300$^{\circ}$C we also observe subtle peak shifts that can be related to the structural transformations induced by the gradual dehydrogenation. Finally, the broad dominant C-C peak observed above 200$^{\circ}$C suggests that disordered covalent structures with different C-C bonding configurations are formed in the high-temperature regime (spectrum at 400$^{\circ}$C shown as an example).
Valence band spectra displayed in Fig. \ref{Fig3}c exhibit a clear molecular orbital at around 2.2 eV binding energy, 0.6 eV higher than on Au(111), which correlates with the work function difference between the two materials. In contrast to what is found on Au(111), we do not observe any significant broadening of the molecular orbital level up to 175$^{\circ}$C. The absence of any signature of orbital delocalization is in line with the inhibition of covalent polymerization suggested by the C $1s$ peak fitting.
Similar to the case of Au, we also observe a general shift of all XPS and UPS peaks of 0.4 eV to lower binding energies in the thermal window where Br adatoms are present on the surface, attributed again to changes in the work function.
The two-dimensional self-assembled structures observed by STM after deposition at room temperature differ from the arrays of parallel armchairlike chains found for DBBA (Fig. \ref{Fig3}d).\cite{Shen2017a} Since molecules adsorb intact at this temperature, we assume that the assembly is driven by weak, Br-mediated interactions.\cite{Chung2011,Fan2014a,Fan2015a,Kawai2015a,Morchutt2015,Huang2016} It is only after annealing to 150$^{\circ}$C that the 2D clusters transform into similar armchairlike chains. However, the precursor arrangement within the chain cannot be the one proposed for DBBA,\cite{Shen2017a} since that would overlap the functional phenyl groups of neighbouring molecules. Instead, we propose an alternative structure that consists of Ag-coordinated chains (see Fig. \ref{Fig3}e), which would in turn be consistent with the detection of the C-Ag component with XPS. We note that this new arrangement also consists of alternating molecules with opposite chiral conformation, as the one proposed for DBBA. Interestingly, the racemic chains seem to transform into chiral chains after annealing to 200$^{\circ}$C, as shown in Fig. \ref{Fig3}f. This chiral phase separation, probably triggered by the different selectivity in the interactions of the partially dehydrogenated molecules, implies considerable molecular reorganization. The STM appearance of the chiral chains emulates the covalently coupled polymers found on Au(111), with molecules aligned with the anthracenes perpendicular to the chain (see Fig. \ref{Fig2}e). The measured periodicity of 0.95 nm, however, is 0.1 nm larger than that of the C-C coupled polymers, a difference that can be attributed to metal coordination.\cite{Oteyza2016} This is again supported by the sizeable C-Ag component found in the corresponding C $1s$ XPS spectrum, and by the fact that metal-coordinated intermediates seem to be rather ubiquitous in Ullmann coupling on Ag(111).\cite{Park2011,Gutzler2014,Oteyza2016,Dong2016} Finally, annealing to higher temperatures leads to disordered 2D and 1D polymerization instead of the well-defined GNRs obtained with DBBA, as can be seen in Figs. \ref{Fig3}g and h.
\subsection{Ag(100): From non-planar to aromatic metal coordinated polymers}
On-surface reactions on Ag(100) have been studied very recently using the prototypical DBBA as precursor.\cite{Smalley2017} In this study, although long-range self-assembled molecular structures are observed at RT, annealing to 200$^{\circ}$C already results in the desorption of most of the material, leaving a few undefined clusters on the surface. It is only by direct deposition at 400$^{\circ}$C that polymerization gives rise to irregular chains that the authors assign to disordered GNRs. In contrast to these results, the higher stability we find for DP-DBBA on this substrate leads to well-defined polymeric structures at temperatures up to 200$^{\circ}$C, as shown in Fig.~\ref{Fig4}.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=17.5 cm]{Fig4.png}
\caption{\scriptsize \textbf {Temperature-assisted on-surface reactions on Ag(100).} \textbf{a}, Br 3d and \textbf{b} C 1s core-level spectra of the as-deposited monomer precursor at RT and after progressive stepwise annealing stages. The C 1s spectra are fitted considering C-C, C-H and C-Br deconvoluted components. Raw data are displayed by gray dots and the fitting results and subcomponents by solid lines. The subtracted background is marked by dotted lines. Constant current STM topographic images of: \textbf{c}, as-deposited precursor molecules, overview of 140x140 nm$^{2}$ (inset) and zoomed region of 14x14 nm$^{2}$ (It=0.1nA, Vs=1.0V), \textbf{d} annealed at 100$^{\circ}$C: overview of 33.6x33.6 nm$^{2}$ (inset) and zoomed region of 5.6x5.6 nm$^{2}$ with the model proposed overlaid (It=0.1nA, Vs=1.2V), \textbf{e} annealed at 200$^{\circ}$C: overview of 84.4x84.4 nm$^{2}$ (inset) and zoomed region of 14x14 nm$^{2}$ (It=0.10nA, Vs= 1.0V) with the metal-organic model proposed overlaid, \textbf{f} annealed at 300$^{\circ}$C (42x42 nm$^{2}$, It=0.1nA, Vs= 1.2V) and (inset) annealed at 400$^{\circ}$C (42x42 nm$^{2}$, It=0.1nA, Vs= 0.4V). All images were recorded at 78K.}
\label{Fig4}
\end{center}
\end{figure}
XPS spectra obtained after deposition at RT reveal a lower debromination barrier on Ag(100) as compared to Ag(111) and Au(111). The Br $3d$ doublet shown in Fig. \ref{Fig4}a lies at the energy of the metal-bonded component, indicating that the molecule debrominates upon adsorption at RT. The higher temperature sequence shows that Br is stable on the surface up to T = 200$^{\circ}$C. Debromination below RT has also been observed
for DBBA on Cu(110)\cite{Simonov2015} and Cu(111),\cite{Simonov2014} but our observation is in contrast to that reported for Ag(100),\cite{Smalley2017} where the authors concluded that DBBA adsorbed intact based on the absence of Br atoms in STM images.
The C $1s$ spectrum obtained at RT can only be fitted by reducing the C-H/C-C ratio well below the values of the intact precursor and adding a C-Ag component, suggesting that both debromination and partial dehydrogenation have taken place simultaneously, and that C radicals are saturated by metal coordination (see Fig. \ref{Fig4}b). The bond ratio is maintained at 100$^{\circ}$C, but upon increasing the annealing temperature to 200$^{\circ}$C the molecule undergoes complete dehydrogenation, according to the disappearance of the C-H component. The C-Ag component remains roughly constant in intensity. Annealing to higher temperatures only leads to a broadening of the C-C peak and a decrease of the total intensity, signatures of partial desorption and bond disorder.
The core level shift observed between 100$^{\circ}$C and 200$^{\circ}$C cannot be related to Br induced work function modifications, since similar amounts of Br adatoms are found at the two temperatures. Alternatively, we can attribute the shift to a stronger interaction with the underlying metal of the planar aromatic structures obtained from the dehydrogenation, as revealed by the STM images discussed below. The more subtle shifts observed above this temperature are, on the other hand, an indication of further structural transformations occurring in this temperature range.
The 1D chains imaged by STM after deposition indicate that polymeric chains are already formed at RT, as shown in Fig. \ref{Fig4}c. The position of the protrusion pairs follows that of the chiral arrangement of the C-C coupled polymers on Au(111) and the metal coordinated ones found at 200$^{\circ}$C on Ag(111). The periodicity of 0.94 nm that we find points towards metal coordination, in agreement with the C $1s$ peak analysis (following the same arguments as for the Ag(111) case). We note that the protrusion pairs can be classified in two different heights of 0.22 and 0.31 nm, which we attribute to the partial dehydrogenation found by XPS. The structure of the partially dehydrogenated molecule, however, remains unresolved. We note here that, although the reduction of C-H bonds found in the C $1s$ spectra could imply the coexistence of fully dehydrogenated planar and hydrogenated non-planar species, we discard this scenario since all species exhibit the double protrusion appearance characteristic of the staggered anthracene units. Planar oval-shaped structures are observed, however, at 200$^{\circ}$C, where XPS data show no sizeable contribution of C-H bonds (see Fig. \ref{Fig4}b). The measured dimensions of 1.0 nm and 1.4 nm along the short and long axis directions, respectively, are in good agreement with the size of the graphene platelet that would result from the cyclodehydrogenation of a single precursor. The measured apparent height of 0.15 nm is also very similar to that of GNRs and the disordered chains of graphene platelets formed on Au(111) (Figs. \ref{Fig2}f-h). The planar units form large 2D islands that resemble arrays of parallel chains (inset Fig. \ref{Fig4}b). From the intrachain periodicity of 1.06 nm, and the remaining C-Ag contribution in the C $1s$ peak, we conclude that the graphene platelets within the chain are also linked by metal coordination.
The interchain distance of 1.42 nm is very similar to those found for other self-assembled structures formed by DBBA-based graphene platelets, where the interactions seem to be mediated by intercalated Br adatoms.\cite{Huang2012,Simonov2015} Based on that, and supported by the fact that the adatoms we observe outside the clusters cannot account for all the released Br, we propose the structural configuration superimposed to the STM image in Fig. \ref{Fig4}e, where Br atoms are intercalated within the Ag-bonded chains. Indeed, a very similar configuration has been proposed for arrays of Ag-coordinated dibromo anthracene chains formed in Ag(111).\cite{Park2011} Heating the sample to higher temperatures (300-400$^{\circ}$C) leads to the fragmentation of the coordinated polymers and the desorption of roughly half of the organic material (Fig. ~\ref{Fig4}f and inset), in good agreement with the C1s core level broadening and quenching.
\subsection{Cu(111): Disordered polymerization}
The polymerization mechanism followed by the DBBA precursor on Cu(111) leads to chiral zigzag GNR structures instead of the armchair ones obtained on Au(111) and Ag(111).\cite{Han2014,SanchezSanchez2016} On this substrate, the C radicals resulting from dehalogenation are not relevant in the polymerization process, and instead neighbouring molecules link via dehydrogenative cross coupling of their low-lying phenyls. The onset for the formation of the chiral GNRs is around 250$^{\circ}$C,\cite{Han2014,SanchezSanchez2016} and they are stable up to at least 500$^{\circ}$C.\cite{Han2014,SanchezSanchez2016} Our studies reveal a lower stability for DP-DBBA on this surface, as we show in the following.
Similar to the case of Ag(100), monomers are debrominated upon deposition on Cu(111) at RT (Fig.~\ref{Fig5}a), as indicated by the position of the Br $3d$ doublet at 68.6 and 69.6 eV. Br adatoms remain on the surface after annealing to 150$^{\circ}$C, where we stopped the series based on the highly disordered structures already observed by STM (Fig.~\ref{Fig5}d). The C $1s$ fit depicts a scenario where the monomers are already substantially dehydrogenated and metal coordinated at RT (Fig.~\ref{Fig5}b). The dehydrogenation process continues at 150$^{\circ}$C, where, in addition, almost half of the species are already desorbed.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=17.5 cm]{Fig5.png}
\caption{\scriptsize \textbf {Temperature-assisted on-surface reactions on Cu(111).} \textbf{a}, Br $3d$ and \textbf{b}, C $1s$ core-level spectra of the as-deposited monomer precursor at RT and progressive stepwise annealing stages. Constant current STM topographic images of \textbf{c}, as-deposited precursor monomers (17x17 nm$^{2}$), \textbf{d}, annealed at 150$^{\circ}$C (56x56 nm$^{2}$), \textbf{e}, annealed at 250$^{\circ}$C (56x56 nm$^{2}$) and \textbf{f}, a zoomed region of 12.5x12.5 nm$^{2}$. All images were recorded at I$_{t}$=0.1 nA, V$_{s}$=1.0 V and 78 K.}
\label{Fig5}
\end{center}
\end{figure}
STM images obtained at RT present a disordered surface with apparently random structures. We find a few protrusion pairs with the characteristic apparent height and protrusion pair separation of the monomer, of 0.25-0.30 nm and 0.74 nm respectively (circles in Fig. \ref{Fig5}c). Yet, the distance between most protrusions exceeds the molecular size, and their width of about 1 nm indicates that each of them is an individual reacted monomer. Annealing to 150$^{\circ}$C and 250$^{\circ}$C leads to substantial desorption and the gradual formation of disordered chains that could be attributed to an ill-defined polymerization.
\section{Summary}
We have carried out a systematic, multitechnique study of the on-surface reactions and resulting structures obtained by using DP-DBBA, a newly synthesized bianthryl derivative, as precursor, in comparison with previous studies using the well-known DBBA monomer.
Hierarchical Ullmann coupling and cyclodehydrogenation on Au(111) leads to the formation of well-defined straight GNRs with cove-shaped edges when the precursor is deposited at RT and annealed at low rates. Further annealing leads to an additional dehydrogenative step in which GNRs couple laterally and form nanoporous graphene. Deposition above 300$^{\circ}$C, however, results in the mixing of both reaction steps and randomly oriented covalent linking of fully aromatized monomers that form disordered GNRs.
On Ag(111) simultaneous dehalogenation and dehydrogenation set in at 150$^{\circ}$C. Instead of Ullmann polymerization, the C radicals are linked by Ag coordination, giving rise to metal-organic polymeric chain structures whose chirality evolves with temperature. The inhibition of covalent C-C coupling results in an inferior stability of the coordinated chains as compared to the GNRs obtained on Au(111).
Metal coordinated polymer chains are already formed at RT on the more reactive Ag(100). A different thermal evolution as compared to Ag(111) leads to full aromatization of the monomer units at 200$^{\circ}$C, which form large 2D islands of arrays of Ag-coordinated chains. The lack of covalent linking limits the stability of these structures to below 300$^{\circ}$C.
Finally, on Cu(111) we do not observe any ordered polymerization, but rather irregular chains of likely covalently bonded species.
The radically different structures obtained on each surface, and the comparison to similar studies on DBBA, illustrate the difficulty of predicting on-surface reactions and the resulting structures. Whereas DBBA leads to well-defined (albeit different types of) GNRs on Au(111), Ag(111) and Cu(111), DP-DBBA only forms GNRs on Au(111). Instead, the latter gives rise to metal coordinated chains on the Ag substrates. Conversely, the lack of any ordered structure obtained with DBBA on Ag(100) is in contrast with the long-range ordered polymeric chains obtained with DP-DBBA. Altogether, our study highlights the decisive role of both the design of the monomer and the choice of substrate in the on-surface synthesis of covalent nanoarchitectures.
\section{Experimental methods}
The experiments were performed in two separate UHV systems, one dedicated to STM, the other to high-resolution core-level XPS and UPS. Both UHV systems (base pressure $<5\times 10^{-10}$ mbar) were equipped with an ion sputter gun for surface cleaning.
Sample preparation was carried out identically in both systems. Clean metal surfaces were obtained by cycles of $Ar^{+}$ bombardment ($<$1 keV) at RT and annealing at 400-450$^{\circ}$C for 5-10 min in ultra-high vacuum. Surface cleanliness was confirmed with x-ray photoelectron spectroscopy (XPS) prior to monomer deposition. Monomers were sublimated at 304$^{\circ}$C, with deposition times between 3 and 10 min for coverages up to the monolayer. XPS measurements were carried out using a Specs Phoibos 150 hemispherical energy analyser with a monochromatic X-ray source (Al K$_{\alpha}$ line with an energy of 1486.6 eV and 400 W), and energies were referenced to the Fermi level. UPS experiments were performed with incident light from the He I emission at 21.2 eV. The fits were performed with Voigt functions (Lorentzian-Gaussian curves), with a Gaussian-Lorentzian ratio of 0.3. Fitting was performed using the XPST macro for IGOR (Dr. Martin Schmid, Philipps University Marburg) with the minimum number of peaks required to minimize the R-factor. Temperatures were measured with an infrared pyrometer at the XPS setup, which was previously calibrated by fixing a thermocouple directly to the sample holder. In the STM setup, the temperature was directly measured with a thermocouple spot-welded to the sample holder during annealing. STM images were processed with WSxM software \cite{Horcas2007}. For the synthesis of DP-DBBA from 2,2'-dibromo-9,9'-bianthracene see the Supplementary Materials of Ref. \cite{Moreno2018}.
\begin{acknowledgement}
C.M. was supported by the Agency for Management of University and Research grants (AGAUR) of the Catalan government through the FP7 framework program of the European Commission under Marie Curie COFUND action 600385. Funded by the CERCA Programme / Generalitat de Catalunya. ICN2 is supported by the Severo Ochoa program from Spanish MINECO (Grant No. SEV-2013-0295). We acknowledge support from the Ministerio de Ciencia e Innovaci\'{o}n (MAT2013-46593-C6-2-P, MAT2013-46593-C6-5-P, MAT2016-78293-C6-2-R, MAT2016-78293-C6-4-R), Ag\`{e}ncia de Gesti\'{o} d'Ajuts Universitaris i de Recerca-AGAUR (2014 SGR715), the EU project PAMS (610446), the Xunta de Galicia (Centro singular de investigacion de Galicia accreditation 2016-2019, ED431G/09), and the European Regional Development Fund.
\end{acknowledgement}
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{57}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Franc and Gourdon(2011)Franc, and Gourdon]{Franc2011}
Franc,~G.; Gourdon,~A. {Covalent networks through on-surface chemistry in
ultra-high vacuum: state-of-the-art and recent developments}. \emph{Physical
Chemistry Chemical Physics} \textbf{2011}, \emph{13}, 14283\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fan \latin{et~al.}(2015)Fan, Gottfried, and Zhu]{Fan2015}
Fan,~Q.; Gottfried,~J.~M.; Zhu,~J. Surface-Catalyzed C-C Covalent Coupling
Strategies toward the Synthesis of Low-Dimensional Carbon-Based
Nanostructures. \emph{Acc. Chem. Res.} \textbf{2015}, \emph{48},
2484--2494\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Robert and Angelika(2015)Robert, and Angelika]{Robert2015}
Robert,~L.; Angelika,~K. On-Surface Reactions. \emph{ChemPhysChem}
\textbf{2015}, \emph{16}, 1582--1592\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Talirz \latin{et~al.}(2016)Talirz, Ruffieux, and Fasel]{Talirz2016}
Talirz,~L.; Ruffieux,~P.; Fasel,~R. On-Surface Synthesis of Atomically Precise
Graphene Nanoribbons. \emph{Advanced Materials} \textbf{2016}, \emph{28},
6222--6231\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nacci \latin{et~al.}(2016)Nacci, Hecht, and Grill]{Nacci2016}
Nacci,~C.; Hecht,~S.; Grill,~L. The Emergence of Covalent On-Surface
Polymerization. On-Surface Synthesis. Cham, 2016; pp 1--21\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shen \latin{et~al.}(2017)Shen, Gao, and Fuchs]{Shen2017}
Shen,~Q.; Gao,~H.-Y.; Fuchs,~H. Frontiers of on-surface synthesis: From
principles to applications. \emph{Nano Today} \textbf{2017}, \emph{13},
77--96\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Cai \latin{et~al.}(2010)Cai, Ruffieux, Jaafar, Bieri, Braun,
Blankenburg, Muoth, Seitsonen, Saleh, Feng, Mullen, and Fasel]{Cai2010}
Cai,~J.; Ruffieux,~P.; Jaafar,~R.; Bieri,~M.; Braun,~T.; Blankenburg,~S.;
Muoth,~M.; Seitsonen,~A.~P.; Saleh,~M.; Feng,~X.; Mullen,~K.; Fasel,~R.
Atomically precise bottom-up fabrication of graphene nanoribbons.
\emph{Nature} \textbf{2010}, \emph{466}, 470--473\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ruffieux \latin{et~al.}(2016)Ruffieux, Wang, Yang,
S\'anchez-S\'anchez, Liu, Dienel, Talirz, Shinde, Pignedoli, Passerone,
Dumslaff, Feng, Müllen, and Fasel]{Ruffieux2016}
Ruffieux,~P.; Wang,~S.; Yang,~B.; S\'anchez-S\'anchez,~C.; Liu,~J.; Dienel,~T.;
Talirz,~L.; Shinde,~P.; Pignedoli,~C.~A.; Passerone,~D.; Dumslaff,~T.;
Feng,~X.; Müllen,~K.; Fasel,~R. On-surface synthesis of graphene nanoribbons
with zigzag edge topology. \emph{Nature} \textbf{2016}, \emph{531},
489--492\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bieri \latin{et~al.}(2009)Bieri, Treier, Cai, Ait-Mansour, Ruffieux,
Groning, Groning, Kastler, Rieger, Feng, Mullen, and Fasel]{Bieri2009}
Bieri,~M.; Treier,~M.; Cai,~J.; Ait-Mansour,~K.; Ruffieux,~P.; Groning,~O.;
Groning,~P.; Kastler,~M.; Rieger,~R.; Feng,~X.; Mullen,~K.; Fasel,~R. Porous
graphenes: two-dimensional polymer synthesis with atomic precision.
\emph{Chem. Commun.} \textbf{2009}, 6919--6921\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lafferentz \latin{et~al.}(2012)Lafferentz, Eberhardt, Dri, Africh,
Comelli, Esch, Hecht, and Grill]{Lafferentz2012}
Lafferentz,~L.; Eberhardt,~V.; Dri,~C.; Africh,~C.; Comelli,~G.; Esch,~F.;
Hecht,~S.; Grill,~L. Controlling on-surface polymerization by hierarchical
and substrate-directed growth. \emph{Nat Chem} \textbf{2012}, \emph{4},
215--220\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Lipton-Duffin \latin{et~al.}(2009)Lipton-Duffin, Ivasenko, Perepichka,
and Rosei]{Lipton-Duffin2009}
Lipton-Duffin,~J.~A.; Ivasenko,~O.; Perepichka,~D.~F.; Rosei,~F. Synthesis of
Polyphenylene Molecular Wires by Surface-Confined Polymerization.
\emph{Small} \textbf{2009}, \emph{5}, 592--597\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Basagni \latin{et~al.}(2015)Basagni, Sedona, Pignedoli, Cattelan,
Nicolas, Casarin, and Sambi]{Basagni2015}
Basagni,~A.; Sedona,~F.; Pignedoli,~C.~A.; Cattelan,~M.; Nicolas,~L.;
Casarin,~M.; Sambi,~M. Molecules Oligomers Nanowires Graphene Nanoribbons: A
Bottom-Up Stepwise On-Surface Covalent Synthesis Preserving Long-Range Order.
\emph{Journal of the American Chemical Society} \textbf{2015}, \emph{137},
1802--1808, PMID: 25582946\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Moreno \latin{et~al.}(2018)Moreno, Vilas-Varela, Kretz, Garcia-Lekue,
Costache, Paradinas, Panighel, Ceballos, Valenzuela, Pe{\~{n}}a, and
Mugarza]{Moreno2018}
Moreno,~C.; Vilas-Varela,~M.; Kretz,~B.; Garcia-Lekue,~A.; Costache,~M.~V.;
Paradinas,~M.; Panighel,~M.; Ceballos,~G.; Valenzuela,~S.~O.; Pe{\~{n}}a,~D.;
Mugarza,~A. {Bottom-up synthesis of multifunctional nanoporous graphene}.
\emph{Science} \textbf{2018}, \emph{360}, 199--203\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Park \latin{et~al.}(2011)Park, Kim, Chung, Yoon, Kim, Han, and
Kahng]{Park2011}
Park,~J.; Kim,~K.~Y.; Chung,~K.-H.; Yoon,~J.~K.; Kim,~H.; Han,~S.; Kahng,~S.-J.
Interchain Interactions Mediated by Br Adsorbates in Arrays of Metal-Organic
Hybrid Chains on Ag(111). \emph{J. Phys. Chem. C} \textbf{2011}, \emph{115},
14834--14838\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Saywell \latin{et~al.}(2014)Saywell, Gre, Franc, Gourdon, Bouju, and
Grill]{Saywell2014}
Saywell,~A.; Gre,~W.; Franc,~G.; Gourdon,~A.; Bouju,~X.; Grill,~L. Manipulating
the Conformation of Single Organometallic Chains on Au(111). \emph{J. Phys.
Chem. C} \textbf{2014}, \emph{118}, 1719--1728\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dong \latin{et~al.}(2016)Dong, Gao, and Lin]{Dong2016}
Dong,~L.; Gao,~Z.; Lin,~N. Self-assembly of metal-organic coordination
structures on surfaces. \emph{Progress in Surface Science} \textbf{2016},
\emph{91}, 101--135\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bj\"ork \latin{et~al.}(2011)Bj\"ork, Stafström, and Hanke]{Bjork2011}
Bj\"ork,~J.; Stafström,~S.; Hanke,~F. Zipping Up: Cooperativity Drives the
Synthesis of Graphene Nanoribbons. \emph{J. Am. Chem. Soc.} \textbf{2011},
\emph{133}, 14884--14887\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kimouche \latin{et~al.}(2015)Kimouche, Ervasti, Drost, Halonen, Harju,
Joensuu, Sainio, and Liljeroth]{Kimouche2015}
Kimouche,~A.; Ervasti,~M.~M.; Drost,~R.; Halonen,~S.; Harju,~A.;
Joensuu,~P.~M.; Sainio,~J.; Liljeroth,~P. Ultra-narrow metallic armchair
graphene nanoribbons. \emph{Nat Commun} \textbf{2015}, \emph{6}, 10177\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2015)Liu, Li, Tan, Giannakopoulos, Sanchez-Sanchez,
Beljonne, Ruffieux, Fasel, Feng, and Mullen]{Liu2015}
Liu,~J.; Li,~B.-W.; Tan,~Y.-Z.; Giannakopoulos,~A.; Sanchez-Sanchez,~C.;
Beljonne,~D.; Ruffieux,~P.; Fasel,~R.; Feng,~X.; Mullen,~K. Toward Cove-Edged
Low Band Gap Graphene Nanoribbons. \emph{J. Am. Chem. Soc.} \textbf{2015},
\emph{137}, 6097--6103\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Liu \latin{et~al.}(2016)Liu, Dienel, Liu, Groening, Cai, Feng,
Müllen, Ruffieux, and Fasel]{Liu2016a}
Liu,~J.; Dienel,~T.; Liu,~J.; Groening,~O.; Cai,~J.; Feng,~X.; Müllen,~K.;
Ruffieux,~P.; Fasel,~R. Building Pentagons into Graphenic Structures by
On-Surface Polymerization and Aromatic Cyclodehydrogenation of
Phenyl-Substituted Polycyclic Aromatic Hydrocarbons. \emph{J. Phys. Chem. C}
\textbf{2016}, \emph{120}, 17588--17593\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[de~Oteyza \latin{et~al.}(2016)de~Oteyza, Garc\'ia-Lekue, Vilas-Varela,
Merino-D\'iez, Carbonell-Sanrom\'a, Corso, Vasseur, Rogero, Guiti\'an,
Pascual, Ortega, Wakayama, and Pena]{Oteyza2016}
de~Oteyza,~D.~G.; Garc\'ia-Lekue,~A.; Vilas-Varela,~M.; Merino-D\'iez,~N.;
Carbonell-Sanrom\'a,~E.; Corso,~M.; Vasseur,~G.; Rogero,~C.; Guiti\'an,~E.;
Pascual,~J.~I.; Ortega,~J.~E.; Wakayama,~Y.; Pena,~D. Substrate-Independent
Growth of Atomically Precise Chiral Graphene Nanoribbons. \emph{ACS Nano}
\textbf{2016}, \emph{10}, 9000--9008\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kawai \latin{et~al.}(2015)Kawai, Saito, Osumi, Yamaguchi, Foster,
Spijker, and Meyer]{Kawai2015}
Kawai,~S.; Saito,~S.; Osumi,~S.; Yamaguchi,~S.; Foster,~A.~S.; Spijker,~P.;
Meyer,~E. Atomically controlled substitutional boron-doping of graphene
nanoribbons. \emph{Nat Commun} \textbf{2015}, \emph{6}, 8098\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nguyen \latin{et~al.}(2016)Nguyen, Toma, Cao, Pedramrazi, Chen, Rizzo,
Joshi, Bronner, Chen, Favaro, Louie, Fischer, and Crommie]{Nguyen2016}
Nguyen,~G.~D.; Toma,~F.~M.; Cao,~T.; Pedramrazi,~Z.; Chen,~C.; Rizzo,~D.~J.;
Joshi,~T.; Bronner,~C.; Chen,~Y.-C.; Favaro,~M.; Louie,~S.~G.;
Fischer,~F.~R.; Crommie,~M.~F. Bottom-Up Synthesis of N = 13 Sulfur-Doped
Graphene Nanoribbons. \emph{The Journal of Physical Chemistry C}
\textbf{2016}, \emph{120}, 2684--2687\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Carbonell-Sanrom\'a \latin{et~al.}(2017)Carbonell-Sanrom\'a, Hieulle,
Vilas-Varela, Brandimarte, Iraola, Barragán, Li, Abadia, Corso,
Sánchez-Portal, Peña, and Pascual]{Carbonell-Sanroma2017a}
Carbonell-Sanrom\'a,~E.; Hieulle,~J.; Vilas-Varela,~M.; Brandimarte,~P.;
Iraola,~M.; Barragán,~A.; Li,~J.; Abadia,~M.; Corso,~M.;
Sánchez-Portal,~D.; Peña,~D.; Pascual,~J.~I. Doping of Graphene Nanoribbons
via Functional Group Edge Modification. \emph{ACS Nano} \textbf{2017}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chen \latin{et~al.}(2015)Chen, Cao, Chen, Pedramrazi, Haberer,
de~OteyzaDimas~G., Fischer, Louie, and Crommie]{Chen2015}
Chen,~Y.-C.; Cao,~T.; Chen,~C.; Pedramrazi,~Z.; Haberer,~D.;
de~OteyzaDimas~G.,; Fischer,~F.~R.; Louie,~S.~G.; Crommie,~M.~F. Molecular
bandgap engineering of bottom-up synthesized graphene nanoribbon
heterojunctions. \emph{Nat Nano} \textbf{2015}, \emph{10}, 156--160\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dienel \latin{et~al.}(2015)Dienel, Kawai, Sode, Feng, Mullen,
Ruffieux, Fasel, and Groing]{Dienel2015}
Dienel,~T.; Kawai,~S.; Sode,~H.; Feng,~X.; Mullen,~K.; Ruffieux,~P.; Fasel,~R.;
Groing,~O. Resolving Atomic Connectivity in Graphene Nanostructure Junctions.
\emph{Nano Lett.} \textbf{2015}, 6571--6579\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Carbonell-Sanrom\'a \latin{et~al.}(2017)Carbonell-Sanrom\'a,
Brandimarte, Balog, Corso, Kawai, Garcia-Lekue, Saito, Yamaguchi, Meyer,
Sánchez-Portal, and Pascual]{Carbonell-Sanroma2017}
Carbonell-Sanrom\'a,~E.; Brandimarte,~P.; Balog,~R.; Corso,~M.; Kawai,~S.;
Garcia-Lekue,~A.; Saito,~S.; Yamaguchi,~S.; Meyer,~E.; Sánchez-Portal,~D.;
Pascual,~J.~I. Quantum Dots Embedded in Graphene Nanoribbons by Chemical
Substitution. \emph{Nano Lett.} \textbf{2017}, \emph{17}, 50--56\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Jacobse \latin{et~al.}(2016)Jacobse, van~den Hoogenband, Moret,
Klein~Gebbink, and Swart]{Jacobse2016}
Jacobse,~P.~H.; van~den Hoogenband,~A.; Moret,~M.-E.; Klein~Gebbink,~R. J.~M.;
Swart,~I. Aryl Radical Geometry Determines Nanographene Formation on Au(111).
\emph{Angewandte Chemie} \textbf{2016}, \emph{128}, 13246--13249\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Schulz \latin{et~al.}(2017)Schulz, Jacobse, Canova, van~der Lit, Gao,
van~den Hoogenband, Han, Klein~Gebbink, Moret, Joensuu, Swart, and
Liljeroth]{Schulz2017}
Schulz,~F.; Jacobse,~P.~H.; Canova,~F.~F.; van~der Lit,~J.; Gao,~D.~Z.; van~den
Hoogenband,~A.; Han,~P.; Klein~Gebbink,~R. J.~M.; Moret,~M.-E.;
Joensuu,~P.~M.; Swart,~I.; Liljeroth,~P. Precursor Geometry Determines the
Growth Mechanism in Graphene Nanoribbons. \emph{J. Phys. Chem. C}
\textbf{2017}, \relax
\mciteBstWouldAddEndPunctfalse
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Simonov \latin{et~al.}(2018)Simonov, Generalov, Vinogradov, Svirskiy,
Cafolla, McGuinness, Taketsugu, Lyalin, M{\aa}rtensson, and
Preobrajenski]{Simonov2018}
Simonov,~K.~A.; Generalov,~A.~V.; Vinogradov,~A.~S.; Svirskiy,~G.~I.;
Cafolla,~A.~A.; McGuinness,~C.; Taketsugu,~T.; Lyalin,~A.;
M{\aa}rtensson,~N.; Preobrajenski,~A.~B. {Synthesis of armchair graphene
nanoribbons from the 10,10′-dibromo-9,9′-bianthracene molecules on
Ag(111): the role of organometallic intermediates}. \emph{Scientific Reports}
\textbf{2018}, \emph{8}, 3506\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bj\"ork \latin{et~al.}(2013)Bj\"ork, Hanke, and Stafström]{Bjork2013}
Bj\"ork,~J.; Hanke,~F.; Stafström,~S. Mechanisms of Halogen-Based Covalent
Self-Assembly on Metal Surfaces. \emph{J. Am. Chem. Soc.} \textbf{2013},
\emph{135}, 5768--5775\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Batra \latin{et~al.}(2014)Batra, Cvetko, Kladnik, Adak, Cardoso,
Ferretti, Prezzi, Molinari, Morgante, and Venkataraman]{Batra2014}
Batra,~A.; Cvetko,~D.; Kladnik,~G.; Adak,~O.; Cardoso,~C.; Ferretti,~A.;
Prezzi,~D.; Molinari,~E.; Morgante,~A.; Venkataraman,~L. Probing the
mechanism for graphene nanoribbon formation on gold surfaces through X-ray
spectroscopy. \emph{Chem. Sci.} \textbf{2014}, \emph{5}, 4419--4423\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Patera \latin{et~al.}(2017)Patera, Zou, Dri, Africh, Repp, and
Comelli]{Patera2017}
Patera,~L.~L.; Zou,~Z.; Dri,~C.; Africh,~C.; Repp,~J.; Comelli,~G. {Imaging
on-surface hierarchical assembly of chiral supramolecular networks}.
\emph{Physical Chemistry Chemical Physics} \textbf{2017}, \emph{19},
24605--24612\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Simonov \latin{et~al.}(2014)Simonov, Vinogradov, Vinogradov,
Generalov, Zagrebina, Martensson, Cafolla, Carpy, Cunniffe, and
Preobrajenski]{Simonov2014}
Simonov,~K.~A.; Vinogradov,~N.~A.; Vinogradov,~A.~S.; Generalov,~A.~V.;
Zagrebina,~E.~M.; Martensson,~N.; Cafolla,~A.~A.; Carpy,~T.; Cunniffe,~J.~P.;
Preobrajenski,~A.~B. Effect of Substrate Chemistry on the Bottom-Up
Fabrication of Graphene Nanoribbons: Combined Core-Level Spectroscopy and STM
Study. \emph{J. Phys. Chem. C} \textbf{2014}, \emph{118}, 12532--12540\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Massimi \latin{et~al.}(2015)Massimi, Ourdjini, Lafferentz, Koch,
Grill, Cavaliere, Gavioli, Cardoso, Prezzi, Molinari, Ferretti, Mariani, and
Betti]{Massimi2015}
Massimi,~L.; Ourdjini,~O.; Lafferentz,~L.; Koch,~M.; Grill,~L.; Cavaliere,~E.;
Gavioli,~L.; Cardoso,~C.; Prezzi,~D.; Molinari,~E.; Ferretti,~A.;
Mariani,~C.; Betti,~M.~G. {Surface-Assisted Reactions toward Formation of
Graphene Nanoribbons on Au(110) Surface}. \emph{The Journal of Physical
Chemistry C} \textbf{2015}, \emph{119}, 2427--2437\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Han \latin{et~al.}(2014)Han, Akagi, Federici~Canova, Mutoh, Shiraki,
Iwaya, Weiss, Asao, and Hitosugi]{Han2014}
Han,~P.; Akagi,~K.; Federici~Canova,~F.; Mutoh,~H.; Shiraki,~S.; Iwaya,~K.;
Weiss,~P.~S.; Asao,~N.; Hitosugi,~T. Bottom-Up Graphene-Nanoribbon
Fabrication Reveals Chiral Edges and Enantioselectivity. \emph{ACS Nano}
\textbf{2014}, \emph{8}, 9181--9187\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Han \latin{et~al.}(2015)Han, Akagi, Federici~Canova, Shimizu, Oguchi,
Shiraki, Weiss, Asao, and Hitosugi]{Han2015}
Han,~P.; Akagi,~K.; Federici~Canova,~F.; Shimizu,~R.; Oguchi,~H.; Shiraki,~S.;
Weiss,~P.~S.; Asao,~N.; Hitosugi,~T. Self-Assembly Strategy for Fabricating
Connected Graphene Nanoribbons. \emph{ACS Nano} \textbf{2015}, \emph{9},
12035--12044\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[S\'anchez-S\'anchez \latin{et~al.}(2016)S\'anchez-S\'anchez, Dienel,
Deniz, Ruffieux, Berger, Feng, Müllen, and Fasel]{SanchezSanchez2016}
S\'anchez-S\'anchez,~C.; Dienel,~T.; Deniz,~O.; Ruffieux,~P.; Berger,~R.;
Feng,~X.; Müllen,~K.; Fasel,~R. Purely Armchair or Partially Chiral:
Noncontact Atomic Force Microscopy Characterization of
Dibromo-Bianthryl-Based Graphene Nanoribbons Grown on Cu(111). \emph{ACS
Nano} \textbf{2016}, \emph{10}, 8006--8011\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Huang \latin{et~al.}(2012)Huang, Wei, Sun, Wong, Feng, Neto, and
Wee]{Huang2012}
Huang,~H.; Wei,~D.; Sun,~J.; Wong,~S.~L.; Feng,~Y.~P.; Neto,~A. H.~C.; Wee,~A.
T.~S. Spatially Resolved Electronic Structures of Atomically Precise Armchair
Graphene Nanoribbons. \emph{Sci. Rep.} \textbf{2012}, \emph{2}, 983\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Simonov \latin{et~al.}(2015)Simonov, Vinogradov, Vinogradov,
Generalov, Zagrebina, Svirskiy, Cafolla, Carpy, Cunniffe, Taketsugu, Lyalin,
Martensson, and Preobrajenski]{Simonov2015}
Simonov,~K.~A.; Vinogradov,~N.~A.; Vinogradov,~A.~S.; Generalov,~A.~V.;
Zagrebina,~E.~M.; Svirskiy,~G.~I.; Cafolla,~A.~A.; Carpy,~T.;
Cunniffe,~J.~P.; Taketsugu,~T.; Lyalin,~A.; Martensson,~N.;
Preobrajenski,~A.~B. From Graphene Nanoribbons on Cu(111) to Nanographene on
Cu(110): Critical Role of Substrate Structure in the Bottom-Up Fabrication
Strategy. \emph{ACS Nano} \textbf{2015}, \emph{9}, 8997--9011, PMID:
26301684\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Di~Giovannantonio \latin{et~al.}(2013)Di~Giovannantonio, El~Garah,
Lipton-Duffin, Meunier, Cardenas, Fagot~Revurat, Cossaro, Verdini,
Perepichka, Rosei, and Contini]{DiGiovannantonio2013}
Di~Giovannantonio,~M.; El~Garah,~M.; Lipton-Duffin,~J.; Meunier,~V.;
Cardenas,~L.; Fagot~Revurat,~Y.; Cossaro,~A.; Verdini,~A.; Perepichka,~D.~F.;
Rosei,~F.; Contini,~G. Insight into Organometallic Intermediate and Its
Evolution to Covalent Bonding in Surface-Confined Ullmann Polymerization.
\emph{ACS Nano} \textbf{2013}, \emph{7}, 8190--8198\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gutzler \latin{et~al.}(2014)Gutzler, Cardenas, Lipton-Duffin,
El~Garah, Dinca, Szakacs, Fu, Gallagher, Vondracek, Rybachuk, Perepichka, and
Rosei]{Gutzler2014}
Gutzler,~R.; Cardenas,~L.; Lipton-Duffin,~J.; El~Garah,~M.; Dinca,~L.~E.;
Szakacs,~C.~E.; Fu,~C.; Gallagher,~M.; Vondracek,~M.; Rybachuk,~M.;
Perepichka,~D.~F.; Rosei,~F. Ullmann-type coupling of brominated
tetrathienoanthracene on copper and silver. \emph{Nanoscale} \textbf{2014},
\emph{6}, 2660--2668\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[not()]{note1}
We note here that the energy sequence of C-H and C-C bonds we use is the one
leading to the best fits, and coincides with several previous studies
\cite{Scholl2004,Lee2007,Pis2016,Smerieri2016,Savu2012}, although several
other studies reverse this order \cite{Schmitz2011,Simonov2014, Simonov2015,
Massimi2015, Basagni2015}\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Pham \latin{et~al.}(2016)Pham, Song, Nguyen, Li, Studener, and
Stöhr]{Pham2016}
Pham,~T.~A.; Song,~F.; Nguyen,~M.-T.; Li,~Z.; Studener,~F.; Stöhr,~M.
Comparing Ullmann Coupling on Noble Metal Surfaces: On-Surface Polymerization
of 1,3,6,8-Tetrabromopyrene on Cu(111) and Au(111). \emph{Chemistry – A
European Journal} \textbf{2016}, \emph{22}, 5937--5944\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Bertel and Netzer(1980)Bertel, and Netzer]{Bertel1980}
Bertel,~E.; Netzer,~F.~F. Adsorption of bromine on the reconstructed Au(100)
surface: LEED, thermal desorption and work function measurements.
\emph{Surf. Sci.} \textbf{1980}, \emph{97}, 409--424\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shen \latin{et~al.}(2017)Shen, Tian, Huang, He, Xie, Song, Lu, Wang,
and Gao]{Shen2017a}
Shen,~Y.; Tian,~G.; Huang,~H.; He,~Y.; Xie,~Q.; Song,~F.; Lu,~Y.; Wang,~P.;
Gao,~Y. Chiral Self-Assembly of Nonplanar 10,10$'$-Dibromo-9,9$'$-bianthryl
Molecules on Ag(111). \emph{Langmuir} \textbf{2017}, \emph{33}, 2993--2999\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Smerieri \latin{et~al.}(2016)Smerieri, Pis, Ferrighi, Nappini, Lusuan,
Di~Valentin, Vaghi, Papagni, Cattelan, Agnoli, Magnano, Bondino, and
Savio]{Smerieri2016}
Smerieri,~M.; Pis,~I.; Ferrighi,~L.; Nappini,~S.; Lusuan,~A.; Di~Valentin,~C.;
Vaghi,~L.; Papagni,~A.; Cattelan,~M.; Agnoli,~S.; Magnano,~E.; Bondino,~F.;
Savio,~L. Synthesis of graphene nanoribbons with a defined mixed edge-site
sequence by surface assisted polymerization of (1{,}6)-dibromopyrene on
Ag(110). \emph{Nanoscale} \textbf{2016}, \emph{8}, 17843--17853\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Pis \latin{et~al.}(2016)Pis, Ferrighi, Nguyen, Nappini, Vaghi,
Basagni, Magnano, Papagni, Sedona, Di~Valentin, Agnoli, and Bondino]{Pis2016}
Pis,~I.; Ferrighi,~L.; Nguyen,~T.~H.; Nappini,~S.; Vaghi,~L.; Basagni,~A.;
Magnano,~E.; Papagni,~A.; Sedona,~F.; Di~Valentin,~C.; Agnoli,~S.;
Bondino,~F. Surface-Confined Polymerization of Halogenated Polyacenes: The
Case of Dibromotetracene on Ag(110). \emph{J. Phys. Chem. C} \textbf{2016},
\emph{120}, 4909--4918\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chung \latin{et~al.}(2011)Chung, Park, Kim, Yoon, Kim, Han, and
Kahng]{Chung2011}
Chung,~K.-H.; Park,~J.; Kim,~K.~Y.; Yoon,~J.~K.; Kim,~H.; Han,~S.; Kahng,~S.-J.
Polymorphic porous supramolecular networks mediated by halogen bonds on
Ag(111). \emph{Chem. Commun.} \textbf{2011}, \emph{47}, 11492--11494\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fan \latin{et~al.}(2014)Fan, Wang, Han, Zhu, Kuttner, Hilt, and
Gottfried]{Fan2014a}
Fan,~Q.; Wang,~C.; Han,~Y.; Zhu,~J.; Kuttner,~J.; Hilt,~G.; Gottfried,~J.~M.
Surface-Assisted Formation, Assembly, and Dynamics of Planar Organometallic
Macrocycles and Zigzag Shaped Polymer Chains with C-Cu-C Bonds. \emph{ACS
Nano} \textbf{2014}, \emph{8}, 709--718\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fan \latin{et~al.}(2015)Fan, Wang, Liu, Zhao, Zhu, and
Gottfried]{Fan2015a}
Fan,~Q.; Wang,~T.; Liu,~L.; Zhao,~J.; Zhu,~J.; Gottfried,~J.~M. Tribromobenzene
on Cu(111): Temperature-dependent formation of halogen-bonded,
organometallic, and covalent nanostructures. \emph{J. Chem. Phys.}
\textbf{2015}, \emph{142}, 101906\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kawai \latin{et~al.}(2015)Kawai, Sadeghi, Xu, Peng, Orita, Otera,
Goedecker, and Meyer]{Kawai2015a}
Kawai,~S.; Sadeghi,~A.; Xu,~F.; Peng,~L.; Orita,~A.; Otera,~J.; Goedecker,~S.;
Meyer,~E. Extended Halogen Bonding between Fully Fluorinated Aromatic
Molecules. \emph{ACS Nano} \textbf{2015}, \emph{9}, 2574--2583\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Morchutt \latin{et~al.}(2015)Morchutt, Bjork, Krotzky, Gutzler, and
Kern]{Morchutt2015}
Morchutt,~C.; Bjork,~J.; Krotzky,~S.; Gutzler,~R.; Kern,~K. Covalent coupling
via dehalogenation on Ni(111) supported boron nitride and graphene.
\emph{Chem. Commun.} \textbf{2015}, \emph{51}, 2440--2443\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Huang \latin{et~al.}(2016)Huang, Tan, He, Liu, Sun, Zhao, Zhou, Tian,
Wong, and Wee]{Huang2016}
Huang,~H.; Tan,~Z.; He,~Y.; Liu,~J.; Sun,~J.; Zhao,~K.; Zhou,~Z.; Tian,~G.;
Wong,~S.~L.; Wee,~A. T.~S. Competition between Hexagonal and Tetragonal
Hexabromobenzene Packing on Au(111). \emph{ACS Nano} \textbf{2016},
\emph{10}, 3198--3205\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Smalley \latin{et~al.}(2017)Smalley, Lahti, Pussi, Dhanak, and
Smerdon]{Smalley2017}
Smalley,~S.; Lahti,~M.; Pussi,~K.; Dhanak,~V.~R.; Smerdon,~J.~A.
Dibromobianthryl ordering and polymerization on Ag(100). \emph{J. Chem.
Phys.} \textbf{2017}, \emph{146}, 184701\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Horcas \latin{et~al.}(2007)Horcas, Fern\'andez, G\'omez-Rodr\'iguez,
Colchero, G\'omez-Herrero, and Baro]{Horcas2007}
Horcas,~I.; Fern\'andez,~R.; G\'omez-Rodr\'iguez,~J.~M.; Colchero,~J.;
G\'omez-Herrero,~J.; Baro,~A.~M. WSXM: A software for scanning probe
microscopy and a tool for nanotechnology. \emph{Rev. Sci. Instrum.}
\textbf{2007}, \emph{78}, 013705\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction}
Let $F\in\Diff\Cd n$ be a germ of a holomorphic diffeomorphism. A {\em stable set} of $F$ is a subset $B$ of an open neighborhood $V$ of $0$ on which $F$ is defined such that $B$ is invariant, i.e. $F(B)\subset B$, and the orbit of each point of $B$ converges to $0$. If $B$ is an analytic, locally closed submanifold of $V$ then we say that $B$ is a {\em stable manifold} of $F$ (in $V$).
In the case of one-dimensional diffeomorphisms, the existence of stable manifolds depends mainly on the multiplier $\lambda=F'(0)\in\mathbb C$. More precisely, $F$ has non-trivial stable manifolds when $F$ is
{\em (hyperbolic) attracting} ($|\lambda|<1$), in which case a whole neighborhood of $0\in\mathbb C$ is a stable manifold, or {\em rationally neutral} ($\lambda$ is a root of unity)
and non-periodic, in which case the ``attracting petals'' of Leau-Fatou Flower Theorem \cite{Lea, Fat} are stable manifolds. In the remaining cases, {\em (hyperbolic) repelling} ($|\lambda|>1$), periodic or {\em irrationally neutral} ($|\lambda|=1$ and $\lambda$ is not a root of unity), the origin itself is the only stable manifold of $F$ in any neighborhood (a result by P\'{e}rez-Marco
\cite{Per} in the last case).
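These one-dimensional phenomena are easy to probe numerically. The following Python sketch (the maps $F(z)=z/2+z^2$ and $G(z)=z-z^2$, the initial points and the thresholds are illustrative choices, not objects from this paper) iterates an attracting germ and a parabolic germ and checks the expected convergence to $0$:

```python
# Illustrative examples (not taken from the paper): an attracting germ
# F(z) = z/2 + z^2 with |F'(0)| = 1/2 < 1, and a parabolic (rationally
# neutral, non-periodic) germ G(z) = z - z^2, whose attracting petal
# contains a segment of the positive real axis near 0 (Leau-Fatou).

def orbit(f, z0, n):
    """Return the n-th iterate f^n(z0)."""
    z = z0
    for _ in range(n):
        z = f(z)
    return z

F = lambda z: z / 2 + z ** 2      # attracting: geometric convergence
G = lambda z: z - z ** 2          # parabolic: convergence like 1/n

zF = orbit(F, 0.1 + 0.1j, 50)
zG = orbit(G, 0.1, 50)            # starting point inside the petal

assert abs(zF) < 1e-10            # whole neighborhood is a stable set
assert 0 < zG < 0.1               # slow monotone convergence to 0
```

The attracting orbit decays geometrically, while the parabolic orbit inside the petal decays only like $1/n$, in accordance with the Leau-Fatou description.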
In the two-dimensional case, the problem of the existence of stable manifolds of $F$ has been addressed by several authors. The existence of one-dimensional stable manifolds, usually called {\em parabolic curves} (when they do not contain the origin), has been studied for example by Ueda \cite{Ued2} when $F$ is semihyperbolic; by \'{E}calle \cite{Eca}, Hakim \cite{Hak}, Abate \cite{Aba}, Abate, Bracci and Tovena \cite{Aba-B-T}, Molino \cite{Mol}, Brochero, Cano and L\'{o}pez \cite{Bro-C-L} and L\'{o}pez and Sanz \cite{Lop-S} when $F$ is tangent to the identity; by Bracci and Molino \cite{Bra-M} when $F$ is quasi-parabolic. The existence of open stable manifolds has been treated for example by Ueda \cite{Ued1} in the semihyperbolic case; by Weickert \cite{Wei}, Hakim \cite{Hak}, Vivas \cite{Viv}, Rong \cite{Ron} in the tangent to the identity case.
In this paper, we study the case of a planar diffeomorphism $F\in\Diff\Cd 2$ and we look for stable manifolds consisting of orbits which are asymptotic to a given invariant formal curve $\Gamma$. Going one step further, our interest is to describe a family of such stable manifolds whose union ``captures'' any orbit asymptotic to $\Gamma$. Following the terminology adopted by Ueda in \cite{Ued1}, we construct a ``base of the set of orbits asymptotic to $\Gamma$'' which is a union of stable manifolds. Our assumptions in order to guarantee the existence of such stable manifolds are just the necessary conditions inherited from the one-dimensional dynamics induced by $F$ on $\Gamma$. No further hypotheses on the linear part $DF(0)$ are required.
Let us describe our main result in more precise terms. At the end of the introduction we discuss its relation with some of the results which appear in the references mentioned above.
\strut
Recall that a formal curve $\Gamma$ at $0\in\mathbb C^2$ is a reduced principal ideal of $\mathbb C[[x,y]]$. It is called irreducible if $\Gamma$ is a prime ideal.
We say that $\Gamma$ is {\em invariant} by $F$, or $F$-invariant, if $\Gamma \circ F = \Gamma$.
If $\Gamma$ is irreducible and $F$-invariant then we can consider the {\em restriction} $F|_\Gamma$,
which is a formal diffeomorphism in one variable (see Section 2).
A formal irreducible curve $\Gamma_0$ is called $m$-{\it periodic} if $\Gamma_0 \circ F^{m} = \Gamma_0$
and $m$ is the minimal positive integer with this property. In that case, the formal curve
\[ \Gamma= \bigcap_{j=0}^{m-1} \Gamma_0 \circ F^{j} \]
is $F$-invariant. Let us point out that if $\Gamma_0$ defines an analytic curve $V(\Gamma_0)$ then
$V (\Gamma) = \cup_{j=0}^{m-1} F^{j}(V(\Gamma_0))$. Thus $V(\Gamma)$ is the minimal $F$-invariant curve
containing $V(\Gamma_0)$. Equivalently, $\Gamma$ is the maximal $F$-invariant ideal contained in $\Gamma_0$, being this conclusion also valid in the formal setting.
We say that $\Gamma$ is the {\it invariant curve associated to} $\Gamma_0$.
In this case, the irreducible components of $\Gamma$ are the $m$-periodic curves
$\Gamma_j := \Gamma_0 \circ F^{j}$ for $j = 0, ...,m-1$.
Given an $m$-periodic curve $\Gamma_0$ of $F$, a non-trivial orbit $O$ of $F$ is said to be {\em asymptotic} to
the associated invariant curve $\Gamma$ if it converges to the origin and, for any finite composition of blow-ups of points
$\sigma : M \to {\mathbb C}^{2}$, the $\omega$-limit of the lifted sequence $\sigma^{-1}(O)$
is contained in the finite set determined by the components of $\Gamma$ in the exceptional divisor $\sigma^{-1} (0)$ (see Section ~\ref{sec:blow-ups} for details).
Our main result is the following:
\begin{Theorem}\label{th:main}
Consider $F\in\Diff\Cd 2$ and let $\Gamma_0$ be a formal $m$-periodic curve of $F$ whose associated invariant
curve is denoted by $\Gamma$. Assume that the restriction
$F^{m}|_{\Gamma_0}$
is either attracting or rationally neutral and non-periodic.
Then, in any sufficiently small open neighborhood $V$ of $0$, there exists a non-empty finite family of pairwise disjoint stable manifolds $S_1,...,S_r\subset V$ of $F$ of pure positive dimension and with finitely many connected components such that the orbit of every point in $S_j$ is asymptotic to $\Gamma$ and such that any orbit of $F$ asymptotic to $\Gamma$ is eventually contained in
$S_1\cup\dots\cup S_r$.
\end{Theorem}
It is worth mentioning that a diffeomorphism $F\in\Diff\Cd 2$ always has a
formal periodic curve by a result of Rib\'{o}n \cite{Rib}, although such curves may all be divergent and
non-invariant.
\begin{Remark}\label{rk:periodic-to-invariant}
In order to show Theorem \ref{th:main} it suffices to consider irreducible invariant curves,
i.e. $m = 1$.
Indeed, assume that $\Gamma_0$ is $m$-periodic and apply the theorem to $F^{m}$ and the $F^m$-invariant irreducible curve $\Gamma_0$.
Let $\mathcal{F}_{0}=\{S_1, ..., S_r \}$ be a family of stable manifolds of $F^{m}$
obtained for a domain $V$ in which every $F^{j}$, for $j = 1, ...,m - 1$, is defined and injective, and put
$\mathcal{F}=\{\cup_{j=0}^{m-1}F^{j}(S_1),\ldots,\cup_{j=0}^{m-1}F^{j}(S_r)\}$. Then $\mathcal{F}$ is a family with the required properties of Theorem~\ref{th:main} for $F$ and the invariant curve $\Gamma$. Notice that, since each component of $\Gamma$ is invariant by $F^m$, the points determined by $\Gamma$ in the exceptional divisor after blow-ups are fixed points for the corresponding transform of $F^m$ (see Section ~\ref{sec:blow-ups}). Thus, an orbit $O=\{F^n(p)\}_{n\geq 0}$ of $F$ is asymptotic to $\Gamma$ if and only if each one of the $m$ orbits $O_j=\{F^{nm+j}(p)\}_{n\geq 0}$ of $F^m$ for $j=0,...,m-1$ is asymptotic to one and only one of the components of $\Gamma$. Hence, the orbit under $F^m$ of a point in $F^j(S_i)$ is asymptotic to $\Gamma_j=F^j(\Gamma_0)$ for any $j=0,...,m-1$ and any $i=1,...,r$ and thus $F^j(S_i)\cap F^{k}(S_{l})=\emptyset$ whenever $i\neq l$ and $j,k\in\{0,\dots, m-1\}$.
\end{Remark}
As a consequence of Remark~\ref{rk:periodic-to-invariant}, we assume from now on that all formal irreducible periodic curves are
invariant.
Roughly speaking, Theorem~\ref{th:main} can be interpreted by saying that the condition ensuring the existence of stable manifolds in dimension $1$ also provides
(applied to $F|_\Gamma$) stable manifolds of orbits asymptotic to $\Gamma$. More precisely, if $\Gamma$ were convergent, the hypotheses in Theorem~\ref{th:main} would be necessary conditions in order to have stable orbits {\em inside} $\Gamma$.
Although these hypotheses are not necessary in general, if they are not satisfied then
one can find simple examples where no orbit asymptotic to $\Gamma$ exists.
In the case where $F|_\Gamma$ is hyperbolic, being attracting is a necessary condition for having orbits asymptotic to $\Gamma$ (see Section~\ref{sec:hyperbolic}). In the case where $F|_\Gamma$ is periodic (and hence rationally neutral), since the set of fixed points of a diffeomorphism is an analytic set, either $F$ is itself periodic or $\Gamma$ is convergent. In the first case, there are no non-trivial orbits converging to the origin; in the second case, there are examples with no asymptotic orbits (for instance $F(x,y)=(-x,2y)$ and $\Gamma=(y)$) and examples with asymptotic orbits (for instance $F=\mbox{Exp}(y(x^2\partial/\partial x+y\partial/\partial y))$ and $\Gamma=(y)$).
In the case where $F|_\Gamma$ is irrationally neutral, although we can also find simple linear examples with no asymptotic orbits, we do not know if there are examples with asymptotic orbits.
\strut
In the proof of Theorem \ref{th:main}, we consider separately the two situations for $F|_\Gamma$,
namely hyperbolic or rationally neutral, since the arguments and the structure of the stable manifolds $S_j$ are
notably different in both cases.
In Section~\ref{sec:hyperbolic} we study the case where $F|_\Gamma$ is hyperbolic attracting. The result is a consequence of the classical Stable Manifold and Hartman-Grobman
Theorems for diffeomorphisms. We show that $\Gamma$ is an analytic curve which eventually contains any orbit of $F$ asymptotic to $\Gamma$.
Indeed, the hyperbolic case can be characterized in terms of the family
of stable manifolds ${\mathcal F}=\{S_1, \dots, S_r\}$
provided by Theorem ~\ref{th:main} in the following way: $F|_\Gamma$ is hyperbolic if and only if $\overline{S}_j$ is a germ of analytic curve at $0$ for some $1 \leq j \leq r$ and in this case ${\mathcal F} = \{\Gamma \setminus \{0\} \}$.
We also prove that
$\Gamma$ is either non-singular or a cusp $y^{p} = x^{q}$ in some coordinates and that, in this last case, $F$ is analytically linearizable.
The case where $F|_\Gamma$ is rationally neutral is more involved and is treated in Sections~\ref{sec:reduction}, \ref{sec:saddle-direction}, \ref{sec:node-direction} and \ref{sec:conclusion}.
Observe first that, considering an iterate of $F$ and using similar arguments to the ones in Remark~\ref{rk:periodic-to-invariant}, we may assume that $F|_\Gamma$ is a {\em parabolic} formal diffeomorphism, i.e. $(F|_\Gamma)'(0)=1$.
In Section~\ref{sec:reduction}, we show that, after finitely many blow-ups along $\Gamma$,
we can consider analytic coordinates $(x,y)$ at the origin such that $\Gamma$ is non-singular and tangent to the $x$-axis and $F$ is of the form
\begin{equation}\label{eq:RS-form}
\begin{array}{l}
x\circ F(x,y)=x-x^{k+p+1}+O(x^{k+p+1}y,x^{2k+2p+1})\\
y\circ F(x,y)= \mu(y+x^ka(x)y+O(x^{k+p+1}y,x^{k+p+2}))
\end{array}
\end{equation}
where $k\geq 1$, $p\geq 0$ and $a(x)$ is a polynomial of degree at most $p$ with $a(0)\neq 0$. Notice that $k$ and $p$ depend only on $F$ and $\Gamma$, since $k+1$ is the order of $F$ and $k+p+1$ is the order of the restriction $F|_{\Gamma}$.
Let $A(x)=A_0+A_1x+\cdots+A_px^p$ be the polynomial defined by the formula
$$\log\mu+x^k\left(A_0+A_1x+\cdots+A_px^p\right)=J_{k+p}\left(\log\left(\mu\left(1+x^ka(x)\right)\right)\right),$$
where $J_m$ denotes the truncation of a series up to degree $m$. The idea behind this definition is that the
jets of order $k+p+1$ of $F$
and of the exponential of the vector field
$$
Z=-x^{k+p+1}\frac{\partial}{\partial x}+(\log\mu+x^kA(x))y\frac{\partial}{\partial y}
$$
coincide, and the dynamics of $F$ and $\EXP(Z)$
are somewhat related.
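The truncated logarithm defining $A(x)$ can be computed with elementary power-series arithmetic. The following pure-Python sketch (the values $k=1$, $p=1$, $a(x)=2+3x$ are hypothetical sample choices; the constant term $\log\mu$ is omitted) recovers $A_0=a_0$ and $A_1=a_1-a_0^2/2$:

```python
# Sketch: compute the jet J_{k+p}(log(1 + x^k a(x))) appearing in the
# definition of A(x).  Sample values k = 1, p = 1, a(x) = 2 + 3x.
from fractions import Fraction

def mul(u, v, d):
    """Truncated product of coefficient lists up to degree d."""
    w = [Fraction(0)] * (d + 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            if i + j <= d:
                w[i + j] += ui * vj
    return w

def log1p_series(u, d):
    """Jet of log(1 + u(x)) at degree d, where u(0) = 0."""
    out = [Fraction(0)] * (d + 1)
    power = [Fraction(1)] + [Fraction(0)] * d      # u^0
    for n in range(1, d + 1):
        power = mul(power, u, d)                   # u^n truncated
        sign = Fraction((-1) ** (n + 1), n)
        out = [o + sign * c for o, c in zip(out, power)]
    return out

k, p = 1, 1
a0, a1 = Fraction(2), Fraction(3)
u = [Fraction(0), a0, a1]            # x^k a(x) = 2x + 3x^2
L = log1p_series(u, k + p)           # jet of log(1 + x a(x))
A = L[k:k + p + 1]                   # coefficients A_0, ..., A_p
assert A == [a0, a1 - a0 ** 2 / 2]   # A_0 = a_0, A_1 = a_1 - a_0^2/2
```

Exact rational arithmetic makes the comparison with the closed-form coefficients unambiguous.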
Let us describe briefly the behavior of the orbits of the toy model $\EXP(Z)$
converging to the origin and asymptotic to the invariant curve $y=0$, which plays the role of $\Gamma$. Given such an orbit $O=\{(x_n,y_n)\}$, the sequence $\{x_n\}$ is an orbit of the one-dimensional parabolic diffeomorphism $x\mapsto\EXP(-x^{k+p+1}\frac{\partial}{\partial x})$ and hence it converges to $0\in\mathbb C$ along a well defined real limit direction, necessarily one of the $k+p$ half-lines $\xi\mathbb R^+$ with $\xi^{k+p}=1$, called the {\em attracting directions} (they correspond to the central directions of the attracting petals in Leau-Fatou Flower Theorem). On the other hand, $Z$ has a first integral $H(x,y)=yh(x)$, where
$$
h(x)=\exp\left(\int \frac{\log\mu+x^kA(x)}{x^{k+p+1}}dx\right),
$$
and the behavior of the orbits of $\EXP(Z)$, since they are contained in fibers of $H$, depends on the asymptotics of $H$ in a neighborhood of the corresponding attracting direction $\ell$. Making a linear change of variables so that $\ell=\mathbb R^+$, we say that $\ell$ is a {\em node} direction if
$\left(\ln|\mu|,\Real\left(A_0\right),...,\Real\left(A_{p-1}\right)\right)<0$
in the lexicographic order. Otherwise, we say that $\ell$ is a {\em saddle} direction.
Consider the simplest case where $|\mu|\neq 1$ (i.e. $F$ is {\em semi-hyperbolic}). Then $\ell$ is a saddle or a node direction if $|\mu|>1$ or $|\mu|<1$, respectively. There exists a sector $\Omega\subset\mathbb C$ bisected by $\ell$ in which either $h(x)$ or $1/h(x)$ is a flat function depending on whether $\ell$ is a saddle or a node direction, respectively. Thus, the fibers of $H$ in $\Omega\times\mathbb C$ behave correspondingly as a saddle (only $y=0$ is bounded) or a node (any fiber is bounded and asymptotic to $y=0$). In the general case, one can show a similar description for the fibers of $H$ in $\Omega\times\mathbb C$, where $\Omega$ is a domain
of $\mathbb C$ containing $\ell$ which is not necessarily a sector. Moreover, $\Omega\times\mathbb C$ eventually contains any orbit $\{(x_n,y_n)\}$ of $\EXP(Z)$ such that $\{x_n\}$ has $\ell$ as a limit direction. We obtain that $\Omega\times\mathbb C$ (respectively $\Omega\times\{0\}$) is a stable manifold of $\EXP(Z)$ when $\ell$ is a node direction (respectively saddle direction) composed of orbits asymptotic to the curve $y=0$. The family of these stable manifolds satisfies the conclusions of Theorem~\ref{th:main}.
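The invariance of $H$ and the node behavior are easy to verify in the simplest instance of the toy model. The sketch below (with hypothetical parameters $k=1$, $p=0$, $\mu=1/2$, $A_0=1$, for which the time-one map $\EXP(Z)$ and the factor $h$ are explicit) checks that $H$ is constant along an orbit and that the orbit converges to the origin asymptotically to $y=0$:

```python
# Sketch (hypothetical parameters): for k = 1, p = 0 the field is
#   Z = -x^2 d/dx + (log(mu) + A0*x) y d/dy,
# whose time-one map is Exp(Z)(x, y) = (x/(1+x), mu*(1+x)**A0 * y),
# and H(x, y) = y * h(x) with h(x) = x**A0 * mu**(-1/x) is a first
# integral.  With mu = 1/2, R+ is a node direction, so nearby orbits
# converge to 0 asymptotically to the curve y = 0.
mu, A0 = 0.5, 1.0

def expZ(x, y):
    return x / (1 + x), mu * (1 + x) ** A0 * y

def H(x, y):
    return y * x ** A0 * mu ** (-1 / x)

x, y = 0.3, 0.2
h0 = H(x, y)
for _ in range(50):
    x, y = expZ(x, y)

assert abs(H(x, y) - h0) < 1e-9 * max(1.0, abs(h0))  # H is conserved
assert x < 0.02 and abs(y) < 1e-10                   # node: orbit -> 0
```

Replacing $\mu=1/2$ by $\mu=2$ turns $\mathbb R^+$ into a saddle direction, and only the orbits on $y=0$ remain bounded.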
For a general diffeomorphism $F$ written in the reduced form \eqref{eq:RS-form}, we obtain a similar description of the orbits asymptotic to $\Gamma$. In fact, we construct a family $\{S_\ell\}$ of stable manifolds of $F$, where $\ell$ varies in the set of attracting directions $\ell=\xi\mathbb R^+$, with $\xi^{k+p}=1$, satisfying the assertion of Theorem~\ref{th:main}. The case of a saddle direction is treated in Section~\ref{sec:saddle-direction}, where we obtain that $S_\ell$ is one-dimensional and simply connected (a so-called parabolic curve). The case of a node direction is studied in Section~\ref{sec:node-direction}, where we obtain that $S_\ell$ is a simply connected open set.
As a consequence of our main result, in Section~\ref{sec:conclusion} we prove the following theorem, which generalizes results in \cite{Bra-M} and \cite{Lop-S}.
\begin{Theorem}\label{th:generalizing-Lopez-Sanz}
Let $\Gamma$ be an irreducible formal invariant curve of $F\in\Diff\Cd 2$ such that $F|_\Gamma$ is parabolic, with $F|_{\Gamma}\neq\id$, and assume that $\spec(DF(0))=\{1,\mu\}$, with $|\mu|\geq 1$. Then there exists a parabolic curve for $F$, which is asymptotic to $\Gamma$.
\end{Theorem}
We end this introduction discussing some special situations for the diffeomorphism $F$ already treated in the literature and their relation with our approach to find stable manifolds.
\begin{itemize}
\item In the semi-hyperbolic attracting case ($|\mu|<1$), every attracting direction is a node direction. We obtain $r=k+p$ open stable manifolds whose union forms a base for the set of orbits of $F$ asymptotic to $\Gamma$. This case is the one considered by Ueda in \cite{Ued1}, and our unified point of view recovers his result (observe that in the semi-hyperbolic case, the Poincar\'{e}-Dulac normal form $\tilde F$ of $F$ has a unique formal invariant curve $\tilde\Gamma$ such that the restriction $\tilde F|_{\tilde\Gamma}$ is parabolic and hence so does $F$).
\item In the semi-hyperbolic repelling case ($|\mu|>1$), every attracting direction is a saddle direction and we obtain $r=k+p$ parabolic curves, defined as graphs of holomorphic functions over open sectors in the $x$-variable, whose union is a base of the set of orbits asymptotic to $\Gamma$. This case is also treated by Ueda in \cite{Ued2} and we again recover his conclusion.
\item In the case $\spec(DF(0))=\{1\}$ and $p=0$ (\emph{Briot-Bouquet} case), every attracting direction is a saddle direction. We obtain, as in \'Ecalle \cite{Eca} and Hakim \cite{Hak}, that there exist $k$ parabolic curves of $F$ whose union is a base of convergent orbits asymptotic to $\Gamma$ (notice that the tangent direction of $\Gamma$ in this case is a ``characteristic direction'' of $F$). This result was used by Abate \cite{Aba} (see also \cite{Bro-C-L}) to show that every tangent to the identity diffeomorphism with isolated fixed point has a parabolic curve.
\item In the case $\spec(DF(0))=\{1,\mu\}$, with $|\mu|=1$, $\mu$ is not a root of unity and $p=0$, every attracting direction is a saddle direction. In this case, Bracci and Molino \cite{Bra-M} proved the existence of $k$ parabolic curves of $F$. Since in this case there exists a formal invariant curve $\Gamma$ such that $F|_\Gamma$ is parabolic, using the Poincar\'e-Dulac normal form, our approach recovers their result and generalizes it to the case $p>0$.
\item In the case $\spec(DF(0))=\{1\}$ and $\Real(A_0)>0$, a particular case of a saddle direction, L\'{o}pez and Sanz proved in \cite{Lop-S} the existence of a parabolic curve of $F$ asymptotic to $\Gamma$. Following the same arguments (which are in turn a modification of Hakim's proof in \cite{Hak}) we recover that result and generalize it for an arbitrary saddle direction.
\item In the case $\spec(DF(0))=\{1\}$ and $\Real(A_0)<0$, a particular case of a node direction, Rong proved in \cite{Ron} the existence of an open stable manifold. Notice that, since $A_0\neq0$, applying Briot-Bouquet's theorem \cite{Bri-B} to the infinitesimal generator of $F$ we conclude that there always exists a formal invariant curve $\Gamma$ such that $F|_\Gamma$ is parabolic. Hence, our approach recovers Rong's result and generalizes it for an arbitrary node direction.
\end{itemize}
\section{Diffeomorphisms, invariant curves and blow-ups}\label{sec:blow-ups}
Let $F\in\Diff\Cd 2$ be a germ of a holomorphic diffeomorphism at the origin of $\mathbb C^2$. In this article we make repeated use of the behavior of $F$ under blow-up. Although this material is quite well known (see for instance \cite{Rib}), we summarize its principal properties in order to fix notation and establish a convenient terminology.
Let $\pi:\wt{\mathbb C^2}\to\mathbb C^2$ be the blow-up at the origin of $\mathbb C^2$ and denote by $E=\pi^{-1}(0)$ the exceptional divisor. Then $\wt{F}=\pi^{-1}\circ F\circ\pi$ extends to an injective holomorphic map in a neighborhood of $E$ in $\wt{\mathbb C^2}$ that leaves the divisor $E$ invariant and so that $\wt{F}|_E$ is the projectivization of the linear map $DF(0)$ in the identification $E\simeq\mathbb{P}^1_\mathbb C$. Hence, a point $p\in E$ is a fixed point for $\wt{F}$ if and only if $p$ corresponds to the projectivization of an invariant line $\ell$ of $DF(0)$. In this case we will say, in analogy with the standard terminology for curves, that $p$ is a {\em first infinitely near fixed point} of $F$ and that the germ $F_p$ of $\wt{F}$ at $p$ is the {\em transform} of $F$ at $p$. Repeating the operation of blowing-up, we can recursively define sequences $\{p_n\}_{n\geq 0}$ of {\em infinitely near fixed points} of $F$ and corresponding {\em transforms} $F_{p_n}$ putting $p_0=0$ and, for $n\geq 1$, taking $p_n$ as a first infinitely near point of $F_{p_{n-1}}$ (considered as an element of $\Diff\Cd 2$ after taking analytic coordinates at $p_{n-1}$).
Let us recall how the eigenvalues of the differential of a diffeomorphism vary under blow-ups. Let $\lambda,\mu$ be the eigenvalues of $DF(0)$ and let $p$ be an infinitely near fixed point of $F$ corresponding to an invariant line $\ell$ of $DF(0)$ associated to the eigenvalue $\lambda$; then the differential of the transform $F_p$ has eigenvalues $\{\lambda,\mu/\lambda\}$, where $\mu/\lambda$ is the eigenvalue associated to the tangent direction of the exceptional divisor $E$ at $p$. This can be seen by the following simple computation. Choose coordinates $(x,y)$ at $0\in\mathbb C^2$ such that $\ell$ is tangent to the $x$-axis and write
$
F(x,y)=(F_1(x,y),F_2(x,y))
$
and $DF(0)(x,y)=(\lambda x+ay,\mu y)$, where $a\in\mathbb C$.
Consider coordinates $(x',y')$ at $p$ so that $\pi$ is written as
$
\pi(x',y')=(x',x'y').
$
Then $\wt{F}=\pi^{-1}\circ F\circ\pi$ is written locally at $p$ as
\begin{equation}\label{eq:blow-up-F}
\wt{F}(x',y')=\left(F_1(x',x'y'),\frac{F_2(x',x'y')}{F_1(x',x'y')}\right),
\end{equation}
so that we obtain $D\wt{F}(p)(x',y')=(\lambda x',\frac{\mu}{\lambda} y'+b x')$ for some $b\in\mathbb C$, which gives the result (notice that $E=\{x'=0\}$ in these coordinates).
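This eigenvalue computation can be double-checked by finite differences in the simplest case where $F$ equals its linear part (the numerical values of $\lambda$, $\mu$, $a$ below are arbitrary illustrative choices):

```python
# Sketch: for F(x, y) = (lam*x + a*y, mu*y), the lifted map (1)
# simplifies to
#   Ft(x', y') = (x'*(lam + a*y'), mu*y' / (lam + a*y')),
# which extends over x' = 0; its differential at p = (0, 0) should
# have eigenvalues lam and mu/lam.
lam, mu, a = 2.0, 3.0, 0.7

def Ft(xp, yp):
    return xp * (lam + a * yp), mu * yp / (lam + a * yp)

# Finite-difference partial derivatives of Ft at the origin.
eps = 1e-7
J = [[(Ft(eps, 0)[i] - Ft(0, 0)[i]) / eps for i in range(2)],
     [(Ft(0, eps)[i] - Ft(0, 0)[i]) / eps for i in range(2)]]
# J[r][i] approximates the derivative of component i in direction r;
# the matrix is (numerically) diagonal here, so its diagonal entries
# are the eigenvalues of D(Ft)(0).
assert abs(J[0][0] - lam) < 1e-5          # eigenvalue lam
assert abs(J[1][1] - mu / lam) < 1e-5     # eigenvalue mu/lam
assert abs(J[0][1]) < 1e-5 and abs(J[1][0]) < 1e-5
```

The second eigenvalue $\mu/\lambda$ is the one attached to the tangent direction of $E=\{x'=0\}$, as stated above.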
\strut
Let $\Gamma$ be an (irreducible) {\em formal curve} at $0\in\mathbb C^2$. By definition, once we fix coordinates $(x,y)$ at the origin, $\Gamma$ is a principal ideal of $\mathbb C[[x,y]]$, generated by an irreducible non-constant series $f(x,y)$. The {\em multiplicity} of $\Gamma$ is the positive integer $\nu=\nu(\Gamma)$ such that $f\in\mathfrak{m}^{\nu}\setminus\mathfrak{m}^{\nu+1}$, where $\mathfrak{m}$ is the maximal ideal of $\mathbb C[[x,y]]$. The formal curve $\Gamma$ is {\em non-singular} if and only if $\nu=1$. If we write $f=f_\nu+f_{\nu+1}+\cdots$ as a sum of homogeneous polynomials, then $f_\nu=(ax+by)^\nu$ where $a,b\in\mathbb C$ are not both zero. The line $ax+by=0$ is the {\em tangent line} of $\Gamma$ (in the coordinates $(x,y)$).
A formal curve $\Gamma$ is uniquely determined by a {\em parametrization}, i.e. a pair $\gamma(s)=(\gamma_1(s),\gamma_2(s))\in\mathbb C[[s]]^2\setminus\{0\}$ with $\gamma(0)=(0,0)$ such that $h\in\Gamma$ if and only if $h(\gamma(s))=0$. We can always consider a parametrization $\gamma(s)$ which is {\em irreducible} (i.e. it cannot be written as $\gamma(s)=\sigma(s^l)$ where $\sigma(s)$ is another parametrization of $\Gamma$ and $l>1$). In fact, if $\gamma(s)$ is an irreducible parametrization of $\Gamma$ then any other parametrization $\tilde{\gamma}(s)$ of $\Gamma$ is written as $\tilde{\gamma}(s)=\gamma(\theta(s))$ for a unique $\theta(s)\in\mathbb C[[s]]$ with $\theta(0)=0$. If $\gamma(s)$ is irreducible, the multiplicity $\nu$ of $\Gamma$ is the minimum of the orders of the series $\gamma_1(s),\gamma_2(s)\in\mathbb C[[s]]$ and the tangent line is given by $[\gamma_1(s)/s^\nu,\gamma_2(s)/s^\nu]|_{s=0}\in\mathbb{P}^1_\mathbb C$.
A formal curve $\Gamma$ is also uniquely determined by its sequence $\{q_n\}_{n\geq 0}$ of {\em infinitely near points}, obtained by blow-ups as follows. Put $q_0=0$. If $\pi:\wt{\mathbb C^2}\to\mathbb C^2$ is the blow-up of $\mathbb C^2$ at the origin, $q_1\in\pi^{-1}(0)$ is the point corresponding to the tangent line of $\Gamma$ in the identification $\pi^{-1}(0)\simeq\mathbb{P}^1_\mathbb C$. There is a unique irreducible formal curve $\Gamma_1$ at $q_1$ such that $\Gamma_1$ is different from the exceptional divisor at $q_1$ and which satisfies $\pi^*\Gamma\subset\Gamma_1$, where $\pi^*\Gamma=\{h\circ\pi\,:\,h\in\Gamma\}$, called the {\em strict transform} of $\Gamma$. Then, recursively for $n\geq 2$, $q_n$ is the point corresponding to the tangent line of $\Gamma_{n-1}$ and $\Gamma_n$ is the strict transform of $\Gamma_{n-1}$ by the blow-up at $q_{n-1}$.
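As a standard illustration (a worked example included here for the reader's convenience, not part of the statements above), consider the cusp:

```latex
For the cusp $\Gamma=(y^2-x^3)$, an irreducible parametrization is
$\gamma(s)=(s^2,s^3)$, so $\nu(\Gamma)=2$ and the tangent line is
$y=0$; hence $q_1$ is the point of $E$ corresponding to the direction
of the $x$-axis. In the chart where $\pi(x,y')=(x,xy')$ one computes
\[
  y^2-x^3 \;=\; x^2\bigl(y'^2-x\bigr),
\]
so the strict transform is $\Gamma_1=(y'^2-x)$: it is non-singular and
tangent to the exceptional divisor $\{x=0\}$, which determines $q_2$,
and one further blow-up makes the transform transverse to the divisor.
```
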
\strut
In the following proposition, we present several equivalent definitions for a formal curve to be invariant for a diffeomorphism. Although quite well known, we include its proof for the sake of completeness.
\begin{proposition}\label{pro:invariant}
Consider $F\in\Diff\Cd 2$ and let $\Gamma$ be an irreducible formal curve at the origin of $\mathbb C^2$. The following properties are equivalent:
\begin{enumerate}[(a)]
\item For any $h\in\Gamma$, one has $h\circ F\in\Gamma$.
\item Given a parametrization $\gamma(s)$ of $\Gamma$, there exists $\theta(s)\in\mathbb C[[s]]$ with $\theta(0)=0$ and $\theta'(0)\neq 0$ such that $F\circ\gamma(s)=\gamma\circ\theta(s)$.
\item The sequence of infinitely near points of $\Gamma$ is a sequence of infinitely near fixed points of $F$.
\end{enumerate}
If any of the conditions above holds, we say that $\Gamma$ is an {\em invariant formal curve} of $F$.
\end{proposition}
\begin{proof}
Notice first that in (a) it is sufficient to consider $h$ a fixed generator of $\Gamma$. Also, in (b) it suffices to consider $\gamma(s)$ a fixed irreducible parametrization: if $\tilde{\gamma}(s)$ is another parametrization, then $\tilde{\gamma}(s)=\gamma(\tau(s))$ where $\tau(s)\in\mathbb C[[s]]$ has order $l>0$. Hence, assuming (b) for $\gamma(s)$, $F\circ\tilde{\gamma}(s)=\gamma(\theta(\tau(s)))$ and, since $\theta\circ\tau(s)$ and $\tau(s)$ have the same order, there exists some $\alpha(s)\in\mathbb C[[s]]$ with $\alpha(0)=0$ and $(\alpha'(0))^l=\theta'(0)\neq 0$ such that $\theta\circ\tau(s)=\tau\circ\alpha(s)$. This shows property (b) for $\tilde{\gamma}(s)$.
Let us prove the equivalence between (a) and (b). Let $h$ be a generator of $\Gamma$ and let $\gamma(s)$ be an irreducible parametrization of $\Gamma$. Then we have
property (a) if and only if $h\circ F(\gamma(s))=0$, which is equivalent to saying that $F\circ\gamma(s)$ is a parametrization of $\Gamma$, which, in turn, is equivalent to the existence of some $\theta(s)\in\mathbb C[[s]]$ with $\theta(0)=0$ such that $F\circ\gamma(s)=\gamma(\theta(s))$. The additional condition $\theta'(0)\neq 0$ in this last case is a consequence of the fact that the minimum of the orders of the components of $F\circ\gamma(s)$ and of $\gamma(s)$ are the same.
Let us prove the equivalence between (b) and (c). First, assume that property (b) holds and let $\gamma(s)$ be an irreducible parametrization of $\Gamma$.
On the one hand, property (b) for $\gamma(s)$ implies that the tangent line of $\Gamma$ is an invariant line of $DF(0)$. Thus, if $q_1$ is the first infinitely near point of $\Gamma$, $q_1$ is an infinitely near fixed point of $F$. On the other hand, one can see that $\tilde{\gamma}(s)=\pi^{-1}\circ\gamma(s)$ is a parametrization of the strict transform $\Gamma_1$ of $\Gamma$ by the blow-up $\pi$ at the origin which moreover satisfies $F_{q_1}\circ\tilde{\gamma}(s)=\tilde{\gamma}\circ\theta(s)$. Repeating the argument, we prove (c). Now assume that (c) holds. Notice that the last argument presented above shows that property (b) is stable both under blow-up and blow-down, i.e. property (b) holds for $F$ and $\Gamma$ at the origin if and only if it holds for the transform $F_{q_1}$ of $F$ and the strict transform $\Gamma_1$ of $\Gamma$ at the first infinitely near point $q_1$ of $\Gamma$. Then, using reduction of singularities of formal curves, we can assume that $\Gamma$ is non-singular. Let us show in this case that (c) implies (a), which is equivalent to (b). Consider formal coordinates $(\hat{x},\hat{y})$ such that $\Gamma$ is generated by $\hat{y}$ and write $F=(F_1(\hat{x},\hat{y}),F_2(\hat{x},\hat{y}))$ in those coordinates. The sequence of infinitely near points of $\Gamma$ is given by the centers $q_n$ of the charts $(\hat{x}_n,\hat{y}_n)$ for which the corresponding composition of blow-ups is written as $(\hat{x}_n,\hat{y}_n)\mapsto(\hat{x}_n,(\hat{x}_n)^n\hat{y}_n)$ and the expression of the corresponding transformed diffeomorphism at $q_n$ is obtained repeating $n$ times the computation in \eqref{eq:blow-up-F}. In particular, if $q_n$ is an infinitely near fixed point of $F$ then $F_2(\hat{x},\hat{x}^n\hat{y})$ is divisible by $\hat{x}^n$ for any $n$. Thus $\hat{y}$ divides $F_2(\hat{x},\hat{y})$, which shows property (a).
\end{proof}
If $\Gamma$ is a formal invariant curve of a diffeomorphism $F$, the series $\theta(s)\in\mathbb C[[s]]$ given by property (b) in Proposition~\ref{pro:invariant} can be considered as a formal diffeomorphism in one variable, i.e. $\theta(s)\in\widehat{\Diff}(\mathbb C,0)$. Note that the class of formal conjugacy of $\theta(s)$ is independent of the chosen parametrization $\gamma(s)$ in (b). Any representative of this class will be called the {\em restriction} of $F$ to $\Gamma$ and denoted by $F|_\Gamma$. Notice that if $\alpha\in\mathbb Z$ then $\Gamma$ is invariant by $F^\alpha$ and
$$
(F|_\Gamma)^\alpha=F^\alpha|_\Gamma.
$$
The number $\lambda_\Gamma=\theta'(0)\in\mathbb C^*$, called the {\em inner eigenvalue}, is intrinsically defined and invariant under blow-ups (since $\theta(s)$ is stable under blow-ups as mentioned in the proof of Proposition~\ref{pro:invariant}).
On the other hand, let $\lambda(\Gamma)$ be the eigenvalue of the differential $DF(0)$ corresponding to the tangent direction of $\Gamma$, that we call the {\em tangent eigenvalue}. The relation between the inner and the tangent eigenvalues is given by the following lemma, which can be proved by a simple computation.
\begin{lemma}\label{lm:restricted-eigenvalue}
If $\nu$ is the multiplicity of $\Gamma$ and $\lambda_\Gamma$, $\lambda(\Gamma)$ are respectively the inner and the tangent eigenvalues of $\Gamma$, then we have $(\lambda_\Gamma)^\nu=\lambda(\Gamma)$.
\end{lemma}
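The computation behind Lemma~\ref{lm:restricted-eigenvalue} can be sketched as follows, under a convenient normalization: choose linear coordinates in which the tangent line of $\Gamma$ is $\{y=0\}$, so that an irreducible parametrization has the form $\gamma(s)=(s^\nu+O(s^{\nu+1}),O(s^{\nu+1}))$ and $\{y=0\}$ is an eigenline of $DF(0)$ with eigenvalue $\lambda(\Gamma)$. Comparing the lowest-order terms of the first component in $F\circ\gamma=\gamma\circ\theta$ gives
$$
\lambda(\Gamma)s^{\nu}+O(s^{\nu+1})=x\circ F\circ\gamma(s)=x\circ\gamma\circ\theta(s)=\theta(s)^{\nu}+O(s^{\nu+1})=(\lambda_\Gamma)^{\nu}s^{\nu}+O(s^{\nu+1}),
$$
whence $(\lambda_\Gamma)^{\nu}=\lambda(\Gamma)$.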
In particular, $\lambda_\Gamma=\lambda(\Gamma)$ if $\Gamma$ is non-singular. The equality is not necessarily true when $\Gamma$ is singular. Consider for instance the linear diffeomorphism
$
F(x,y)=(x,-y)
$. For any odd natural number $n\geq 3$, the curve $\Gamma_{n}$ generated by the polynomial $x^n-y^2$ is invariant by $F$ and tangent to the $x$-axis, an eigendirection with associated eigenvalue equal to $1$, whereas $\lambda_{\Gamma_{n}}=-1$ for any such $n$.
Notice that this example also shows that
the tangent eigenvalue $\lambda(\Gamma)$ is not invariant under blow-up (after some blow-ups, the formal curve becomes non-singular and hence $\lambda_\Gamma$ and $\lambda(\Gamma)$ eventually coincide).
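In the example above, the restriction and the eigenvalues can be computed directly from the irreducible parametrization $\gamma(s)=(s^2,s^n)$ of $\Gamma_n$: since $n$ is odd,
$$
F\circ\gamma(s)=(s^2,-s^n)=\left((-s)^2,(-s)^n\right)=\gamma(-s),
$$
so $\theta(s)=-s$ and $\lambda_{\Gamma_n}=-1$, while the multiplicity is $\nu=2$ and $(\lambda_{\Gamma_n})^{\nu}=1=\lambda(\Gamma_n)$, in agreement with Lemma~\ref{lm:restricted-eigenvalue}.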
\begin{definition}\label{def:restricted-Gamma}
Let $\Gamma$ be a formal invariant curve of $F\in\Diff\Cd 2$ and let $\lambda_\Gamma$ be the inner eigenvalue. We say that $\Gamma$ is {\em hyperbolic} if $|\lambda_\Gamma|\neq 1$ (\emph{attracting} if $|\lambda_\Gamma|<1$ and \emph{repelling} if $|\lambda_\Gamma|>1$), and that $\Gamma$ is \emph{rationally neutral} if $\lambda_\Gamma$ is a root of unity; in the particular case $\lambda_{\Gamma}=1$, we say that $\Gamma$ is \emph{parabolic}.
\end{definition}
Notice that the condition of $\Gamma$ being hyperbolic, rationally neutral or parabolic is stable under blow-ups.
\strut
We discuss now the concept of asymptotic orbit which appears in the statement of Theorem~\ref{th:main}. In fact, we will consider such property for larger stable sets of a diffeomorphism $F\in\Diff\Cd 2$. Recall from the introduction that a {\em stable manifold} of $F$ (in $U$) is an analytic locally closed submanifold $S$ in a neighborhood $U$ where $F$ is defined such that $F(S)\subset S$ and such that, for any point $a=a_0\in S$, the orbit $\{a_n=F^n(a)\}_n$ converges to the origin. The smallest non-trivial example of a zero-dimensional stable manifold is an orbit which converges and is not reduced to the origin, called a {\em (non-trivial) stable orbit} of $F$. Another interesting example is a {\em parabolic curve}, defined as a connected and simply connected stable manifold of pure dimension one not containing the origin.
\begin{definition}\label{def:asymptotic-stable-manifolds}
Let $S$ be a stable set of $F$ such that $0\not\in S$. We say that $S$ has the property of {\em iterated tangents} if the following holds: if $\pi_1:M_1\to\mathbb C^2$ is the blow-up at the origin and $S_1=\pi_1^{-1}(S)$, then $\overline{S_1}\cap\pi_1^{-1}(0)$ is a single point $p_1$; if $\pi_2:M_2\to M_1$ is the blow-up at $p_1$ and $S_2=\pi_2^{-1}(S_1)$, then $\overline{S_2}\cap\pi_2^{-1}(p_1)$ is a single point $p_2$; and so on. The sequence of points $\{p_n\}_n$ so constructed is called the sequence of {\em iterated tangents} of the stable set $S$. Given a formal curve $\Gamma$ at $0\in\mathbb C^2$, we say that $S$ is {\em asymptotic to} $\Gamma$ if $S$ has the property of iterated tangents and its sequence of iterated tangents is equal to the sequence of infinitely near points of $\Gamma$.
\end{definition}
Notice that if $S$ is a stable manifold with the property of iterated tangents, then any stable orbit $O\subset S$ also has the property (and the same sequence of iterated tangents), but the converse does not need to be true. On the other hand, if $\{p_n\}$ is the sequence of iterated tangents of a stable manifold $S$, then each $p_n$ is a fixed point of the corresponding transform of $F$ at the point $p_n$. Thus, by Proposition~\ref{pro:invariant}, if $S$ is asymptotic to a formal curve $\Gamma$ then $\Gamma$ is an invariant curve of $F$.
Stable orbits of a diffeomorphism need not have the property of iterated tangents. We can take for instance a linear diffeomorphism
$
F(x,y)=(ax,ae^{2\pi i\theta}y)
$,
where $a\in\mathbb C$ satisfies $0<|a|<1$ and $\theta$ is irrational. Since the origin is a global attractor for $F$, any orbit of $F$ is a stable orbit, but only those orbits contained in one of the (invariant) coordinate axes have the property of iterated tangents. In fact, if $\{(x_n,y_n)\}$ is an orbit of $F$ with $x_ny_n\neq 0$ for any $n$, we have $[x_n:y_n]=[c:e^{2\pi in\theta}]\in\mathbb{P}_{\mathbb C}^1$ for some non-zero constant $c$, which has infinitely many accumulation points when $n$ goes to infinity.
On the other hand, there may exist stable orbits with iterated tangents which are not asymptotic to any formal curve. As an example, we can consider a linear diffeomorphism
$
F(x,y)=(ax+ay,ay)
$,
where $0<|a|<1$. The orbits of $F$ are asymptotic to the exceptional divisor after a blow-up at the origin, but they are not asymptotic to a formal curve in the ambient space. More precisely, the unique formal invariant curve $\Gamma$ of $F$ is the $x$-axis. Any non-trivial orbit $O$ of $F$ is stable and tangent to $\Gamma$, i.e. its transform $\pi^{-1}(O)$ by the blow-up $\pi$ at the origin is a stable orbit of the transformed diffeomorphism $F_{p_1}$, where $p_1$ corresponds to $[1:0]$. One can see that if $O$ is not contained in $\Gamma$ then $\pi^{-1}(O)$ is asymptotic to the exceptional divisor $E=\pi^{-1}(0)$.
It is worth noticing that the property of being asymptotic to a formal curve $\Gamma$ in Definition~\ref{def:asymptotic-stable-manifolds} actually corresponds to the standard analytic meaning of having $\Gamma$ as ``asymptotic expansion''. To fix ideas, if $\Gamma$ is non-singular and we consider a parametrization of the form $\gamma(s)=(s,h(s))$ where $h(s)=\sum_{n\geq 1}h_ns^n\in\mathbb C[[s]]$, then a non-trivial orbit $O=\{(x_n,y_n)\}$ is asymptotic to $\Gamma$ if and only if for any $N\in\mathbb N$ there exist some $C_N>0$ and some $n_0=n_0(N)\in\mathbb N$ such that, for any $n\geq n_0$, we have
$$
\left|y_n-(h_1x_n+h_2x_n^2+\cdots+h_Nx_n^N)\right|\leq C_N|x_n|^{N+1}.
$$
A similar condition (see \cite{Lop-S}) can be considered for a parabolic curve asymptotic to a formal curve $\Gamma$. It is worth remarking that our definition of parabolic curve asymptotic to a formal curve coincides with that of ``robust parabolic curve'' in \cite{Aba-T}.
\strut
We can now restate our main result Theorem~\ref{th:main}. Since we use different arguments, we consider the two different situations in separate statements.
\begin{theorem}[$\Gamma$-hyperbolic case]\label{th:main2-hyperbolic}
Let $F\in\Diff\Cd 2$ and let $\Gamma$ be an invariant formal curve of $F$. Assume that $\Gamma$ is hyperbolic attracting. Then $\Gamma$ is a germ of an analytic curve at the origin such that a (sufficiently small) representative of it is a stable manifold of $F$ and contains the germ of any orbit of $F$ asymptotic to $\Gamma$.
\end{theorem}
\begin{theorem}[$\Gamma$-rationally neutral case]\label{th:main2-parabolic}
Consider $F\in\Diff\Cd 2$ and let $\Gamma$ be an invariant formal curve of $F$. Assume that $\Gamma$ is rationally neutral and that the restricted diffeomorphism $F|_\Gamma$ is not periodic. Then, for any sufficiently small neighborhood $V$ of the origin, there exists a non-empty finite family of mutually disjoint stable manifolds $\{S_1,...,S_r\}$ in $V$ of pure positive dimension satisfying:
\begin{enumerate}[(i)]
\item Every orbit in the union $S=\bigcup_{j=1}^r S_j$ is asymptotic to $\Gamma$.
\item $S$ contains the germ of any orbit of $F$ asymptotic to $\Gamma$.
\item If $n$ is the order of the inner eigenvalue $\lambda_\Gamma$ as a root of unity, then each $S_j$ is a finite union of $n$ connected and simply connected mutually disjoint stable manifolds $S_{j1},\ldots,S_{jn}$ of the iterated diffeomorphism $F^n$ (i.e. either parabolic curves or open stable sets of $F^n$). In fact, $S_{ji}=F(S_{j,i-1})$ for $i=2,...,n$ and for any $j$.
\end{enumerate}
Moreover, if $\dim(S_j)=1$ then $S_{j}$ is asymptotic to $\Gamma$. If $\dim(S_j)=2$, one can also choose $S_{j}$ to be asymptotic to $\Gamma$.
\end{theorem}
In Section~\ref{sec:hyperbolic} we prove Theorem~\ref{th:main2-hyperbolic} and other related questions concerning the case where $\Gamma$ is hyperbolic. The proof of Theorem~\ref{th:main2-parabolic} is more involved and will be carried out in Sections~\ref{sec:reduction} to \ref{sec:conclusion}. As mentioned in the introduction, by the same arguments used in Remark~\ref{rk:periodic-to-invariant}, to show Theorem~\ref{th:main2-parabolic} it suffices to consider the case $\lambda_{\Gamma}=1$ ($\Gamma$-parabolic case).
\section{$\Gamma$-hyperbolic case}\label{sec:hyperbolic}
In this section, we assume that $\Gamma$ is a hyperbolic formal invariant curve of $F \in \Diff\Cd 2$, i.e. $|(F|_{\Gamma})'(0)|\neq1$.
We prove Theorem~\ref{th:main2-hyperbolic} and other results related to this case. They are consequences of classical theorems involving local hyperbolic dynamics
and normal forms. To summarize, we first show that $\Gamma$ is an analytic curve at the origin as a consequence of the Stable Manifold Theorem. Moreover, some manipulations regarding the Poincar\'{e}-Dulac normal form allow us to show that
$\Gamma$ is either non-singular or a cusp $y^{p} = x^{q}$ in some coordinates and that, in this last case, $F$ is analytically linearizable. In the attracting case $|(F|_{\Gamma})'(0)|<1$, we obtain, as an application of the Hartman-Grobman Theorem, that all stable orbits of $F$ which are asymptotic to $\Gamma$ are contained in $\Gamma$. This result forbids the existence of two-dimensional stable manifolds formed by orbits asymptotic to $\Gamma$,
which can appear in the parabolic case $\lambda_{\Gamma}=1$ as we shall see in Section~\ref{sec:node-direction}. Finally, we characterize the attracting hyperbolic case as the unique one in which there exists an analytic curve at the origin which is a stable set.
\begin{proposition}
\label{pro:anahyp1}
Let $\Gamma$ be a formal invariant curve of $F \in \Diff\Cd2$.
Suppose that $\spec (DF (0)) = \{ \lambda ({\Gamma}), \mu \}$ where the tangent eigenvalue $\lambda(\Gamma)$ satisfies
$|\lambda ({\Gamma})|< \min (1, |\mu|)$.
Then $\Gamma$ is a non-singular analytic curve. Moreover it is the unique formal periodic curve
whose tangent line is not the eigenspace associated to $\mu$.
\end{proposition}
\begin{proof}
Set $\lambda=\lambda(\Gamma)$, and denote by $\{p_n\}_{n\ge0}$ the sequence of infinitely near points of $\Gamma$. To prove the uniqueness statement, we will show that the sequence $\{p_n\}$ depends only on $F$.
The eigenvalues $\lambda$ and $\mu$ are different, thus there are two eigenspaces of dimension $1$.
Since $\Gamma$ is not tangent to the eigenspace associated to $\mu$ by hypothesis, it follows
that the tangent line of $\Gamma$ is the eigenspace of $DF(0)$ associated to $\lambda$. In particular such
direction, and then $p_1$, depend only on $DF (0)$.
If $F_{p_1}$ is the transform of $F$ at $p_1$, we have that $\spec (DF_{p_1}(p_1)) = \{ \lambda, \mu/ \lambda \}$. If $\Gamma_1$ is the strict transform of $\Gamma$, then by the invariance of the inner eigenvalue under blow-ups $|\lambda_{\Gamma_1}|=|\lambda_{\Gamma}|<1$ and, since $\lambda (\Gamma_1)$ is a power of
$\lambda_{\Gamma_1}$ and $|\mu / \lambda | > 1$, it follows that
$\lambda = \lambda (\Gamma_1)$.
Therefore $\Gamma_1$ is tangent to the eigenspace of $DF_{p_1}(p_1)$ associated to $\lambda$
and hence $p_2$ depends only on $DF_{p_1}(p_1)$ and then on $F$. By induction, denoting by $F_{p_{j+1}}$ the transform of $F_{p_j}$ and by $\Gamma_{j+1}$ the strict transform of $\Gamma_j$, we obtain that
\begin{equation}
\label{equ:varspec}
\spec (DF_{p_{j+1}}(p_{j+1})) = \left\{ \lambda, \frac{\mu}{\lambda^{j+1}} \right\}
\end{equation}
and then $\lambda$ is the eigenvalue
associated to the tangent line of $\Gamma_{j+1}$ at $p_{j+1}$ for any $j \geq 0$. In particular, the sequence $\{p_n\}_{n}$ of infinitely near points of $\Gamma$ depends only on $F$. Moreover, since the tangent line of $\Gamma_j$ is not tangent to the exceptional divisor for all $j$, it follows that $\Gamma$ is non-singular.
Since $|\lambda^{p}|< \min (1, |\mu^{p}|)$ for any $p \in {\mathbb N}$, the same argument applied to $F^{p}$ shows that $\Gamma$ is the unique
formal $F^{p}$-invariant curve that is not tangent to
the eigenspace of $\mu$; this gives the uniqueness among formal periodic curves.
Moreover the properties $|\lambda| < 1$ and $|\lambda| < |\mu|$ imply that
$\Gamma$ is a non-singular analytic curve by the
Stable Manifold Theorem \cite[Theorem 6.1]{Rue}.
\end{proof}
Next we see that
any formal hyperbolic invariant curve can be reduced to the setting of Proposition \ref{pro:anahyp1}
via blow-ups.
\begin{proposition}
\label{pro:anahyp}
Let $\Gamma$ be a formal hyperbolic invariant curve of $F \in \Diff\Cd2$. Then $\Gamma$ is an analytic curve.
\end{proposition}
\begin{proof}
Suppose $\mathrm{spec} (DF(0)) = \{\lambda, \mu\}$ with $\lambda = \lambda ({\Gamma})$.
We can suppose $|\lambda|<1$ up to replacing $F$ with $F^{-1}$ if $|\lambda|>1$. Also, by reduction of singularities, we may assume that $\Gamma$ is non-singular. Thus, using the same notations as in the proof of Proposition \ref{pro:anahyp1}, $\lambda_{\Gamma_j}=\lambda$ for any $j$ and the divisor at $p_j$ has inner eigenvalue $\mu/\lambda^j$. Take $j \in {\mathbb N}$ such that $|\lambda|<|\mu/\lambda^{j}|$.
Therefore $\Gamma_j$ and then $\Gamma$ are analytic by Proposition \ref{pro:anahyp1}.
\end{proof}
\begin{cor}
Let $\Gamma$ be an analytic curve at the origin that is a stable set for $F \in \Diff\Cd2$.
Then $|\lambda ({\Gamma})| < 1$.
\end{cor}
\begin{proof}
The result is a consequence of the analogous one for the one-dimensional diffeomorphism $f=F|_{\Gamma}$.
Up to conjugacy we can suppose $f\in\Diff(\mathbb C,0)$.
Let $U$ be a bounded open neighborhood of the origin that is a stable set for $f$.
Cauchy's integral formula implies that the sequence of derivatives of the sequence $\{f^{n}\}_{n \geq 1}$ is
uniformly bounded in compact subsets of $U$. Thus the sequence $\{f^{n}\}_{n \geq 1}$ is normal and as a consequence
$\{f^{n}\}_{n \geq 1}$ converges to $0$ uniformly in compact subsets of $U$.
Another application of Cauchy's integral formula shows $\lim_{n \to \infty} (f^{n})' (0)=0$.
Since $(f^{n})' (0) = \lambda_{\Gamma}^{n}$, we deduce $|\lambda_{\Gamma}|<1$ and hence, by Lemma~\ref{lm:restricted-eigenvalue}, $|\lambda ({\Gamma})|=|\lambda_{\Gamma}|^{\nu}<1$, where $\nu$ is the multiplicity of $\Gamma$.
\end{proof}
As a consequence of Proposition \ref{pro:anahyp},
we see that the unique manifold asymptotic to an attracting hyperbolic invariant curve is the curve itself.
\begin{proposition}
\label{pro:hypsad}
Let $\Gamma$ be an invariant curve of $F \in \Diff\Cd2$ with $|\lambda ({\Gamma})|< 1$.
Then any stable orbit of $F$ asymptotic to $\Gamma$ is contained in $\Gamma$.
\end{proposition}
\begin{proof}
Using the arguments of Proposition~\ref{pro:anahyp} and the notations of the proof of Proposition \ref{pro:anahyp1}, consider $j\in\mathbb N$ such that $|\mu/\lambda^{j}|>1$. Equation \eqref{equ:varspec} and the Hartman-Grobman Theorem imply that the unique orbits of $F_{p_j}$ that converge to $p_j$ are those contained in $\Gamma_j$.
Consider a point $q$ whose orbit $O=\{F^n(q)\}_n$ is asymptotic to $\Gamma$, and denote by $\pi_l:M_l\to M_{l-1}$ the blow-up at $p_{l-1}$ for $1\le l\le j$, where $M_0=\mathbb C^2$ and $p_0=0$. Since $O$ is asymptotic to $\Gamma$, $(\pi_{1} \circ \cdots \circ \pi_{j})^{-1}(F^n(q))$ tends to $p_j$ when $n \to \infty$, so $(\pi_1\circ\cdots\circ\pi_j)^{-1}(O)\subset\Gamma_j$ and therefore $O$ is contained in $\Gamma$.
\end{proof}
\begin{remark}
\label{rem:nasm2}
Let $\Gamma$ be an invariant curve of $F \in \Diff\Cd2$ with $|\lambda ({\Gamma})| < 1$.
Even if there are no asymptotic stable manifolds of dimension $2$, let us consider the
``closest'' case. This is the (hyperbolic)
node case, corresponding to $|\mu| < |\lambda ({\Gamma})| <1$. In this case, the tangent line $\ell$ of
$\Gamma$ is an attractor for the dynamics induced by $DF(0)$ in the space
$\mathbb P_{\mathbb C}^1$ of directions. In fact, the origin is an attractor for the map $F$ and any orbit converges to the
origin with tangent $\ell$. Of course, this convergence is not asymptotic since the hierarchy of the eigenvalues is disrupted by
blow-up. More precisely, the inequality $|\mu| < |\lambda ({\Gamma})|$ is not stable under blow-up;
indeed, this instability is the key fact in the proof of Proposition \ref{pro:hypsad}. On the other hand, in the $\Gamma$-parabolic case $\lambda({\Gamma})=1$,
since the inner eigenvalue is stable under blow-ups, the tangent eigenvalue of $\Gamma$ and all of its strict transforms are equal to 1, and then equation \eqref{equ:varspec} implies that the inequality $|\mu|<1$ is preserved under blow-ups at the infinitely near points of $\Gamma$.
Then, with the notations of Proposition \ref{pro:anahyp1}, the tangent line $\ell_{j}$ of $\Gamma_{j}$ is always an attractor for the action induced by
$DF_{p_j}(p_j)$ in the space of directions at $p_j$ for $j \geq 0$.
Since all the iterated tangents are attractors, it becomes possible to find
open stable manifolds in which all the orbits are asymptotic to $\Gamma$ (we will show in Section~\ref{sec:node-direction} that they actually exist). Let us remark that asymptotic convergence is
not necessarily related to the dynamics of $DF(0)$, for instance it can also happen in the case
$\lambda ({\Gamma})=1$ and $|\mu|=1$ as we will see in the next sections.
\end{remark}
The next result shows that if a formal hyperbolic invariant curve is singular, then
both the dynamics and the curve are very special.
\begin{proposition}
\label{pro:nsis}
Let $\Gamma$ be a hyperbolic invariant curve of $F \in \Diff\Cd2$.
Suppose that $\Gamma$ is singular.
Then there exist coprime natural numbers $q>p>1$ such that,
up to an analytic change of coordinates, we have that
$F(x,y) = (\lambda ({\Gamma}) x, \mu y)$, where $\lambda ({\Gamma})^{q} = \mu^{p}$, and that
$\Gamma$ is the curve $y^{p} = x^{q}$.
\end{proposition}
\begin{proof}
We denote $\lambda = \lambda ({\Gamma})$ and $\mathrm{spec} (DF(0)) = \{\lambda, \mu\}$.
We can suppose $|\lambda|<1$ without loss of generality.
By Proposition \ref{pro:anahyp1}, we have $|\mu| \leq |\lambda|<1$.
Since the eigenvalues of $DF(0)$ have modulus less than $1$, the diffeomorphism $F$ is analytically
conjugated to its Poincar\'{e}-Dulac normal form (cf. \cite[Theorem 5.17]{Ily-Y}).
This normal form is either $F(x,y)=(\lambda x, \mu y)$ or $F(x,y)=(\lambda x, \mu (y + x^{m}))$, where $\mu = \lambda^{m}$. Let us show that the second case is impossible. Indeed, in that case
$$ F(x,y) = (\lambda x, \mu y) \circ \EXP \left( x^{m} \frac{\partial}{\partial y} \right) =
\EXP \left( x^{m} \frac{\partial}{\partial y} \right) \circ (\lambda x, \mu y)$$
is the Jordan-Chevalley decomposition of $F$ (see \cite{Rib}). Since any invariant curve of $F$ is also invariant by the unipotent part $F_u(x,y)=\EXP \left( x^{m} \frac{\partial}{\partial y} \right)$ and then by the vector field $X=x^{m}\frac{\partial}{\partial y}$ (cf. \cite[Propositions 2 and 3]{Rib}),
we deduce that $x=0$ is the unique $F$-invariant curve, which is impossible since $\Gamma$ is singular.
Let
$\gamma(s) = (s^p, \gamma_{2}(s))$ be an irreducible parametrization
of $\Gamma$, where
$\gamma_{2}(s) = \sum_{j=1}^{\infty} c_{j} s^{j} \in {\mathbb C}[[s]]$.
Since $F ( \gamma (s)) = (\lambda s^p, \mu \gamma_2(s))$ is again a parametrization of $\Gamma$, we obtain that
$\mu\gamma_2(s)=\gamma_2(\lambda^{1/p}s)$, where $\lambda^{1/p}$ is a $p$-th root of $\lambda$.
Hence $c_{j} \neq 0$ implies $\mu^{p} = \lambda^{j}$
for any $j \in {\mathbb N}$.
We deduce that $\gamma(s)= (s^{p}, c_{q} s^{q})$ for some $q \in {\mathbb N}$
with $\lambda^{q} = \mu^{p}$. Since $\gamma$ is irreducible, $p$ and $q$ are coprime. Moreover, $q > p$, because $|\mu| < |\lambda|$, and $p>1$, because $\Gamma$ is singular. The curve $\Gamma$ is equal to $y^{p} = x^{q} c_{q}^{p}$, and then conjugated by a linear map $(x,y)\mapsto (\alpha x, y)$ to $y^{p} = x^{q}$.
\end{proof}
\begin{remark}
Let $\Gamma$ be a hyperbolic invariant curve of $F \in \Diff\Cd2$.
Then there exists a non-singular hyperbolic invariant curve $\Gamma'$ such that
$\lambda ({\Gamma}) = \lambda ({\Gamma'})$.
Indeed, if $\Gamma$ is singular then we can suppose
$F(x,y) = (\lambda ({\Gamma}) x, \mu y)$, by Proposition \ref{pro:nsis}, and then we define $\Gamma' = \{y=0\}$.
\end{remark}
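A concrete instance of the normal form in Proposition~\ref{pro:nsis}, with a hypothetical parameter $t$: for $0<|t|<1$, set $\lambda=t^2$ and $\mu=t^3$, so that $\lambda^3=\mu^2$, and let $\Gamma$ be the cusp $y^2=x^3$ (here $p=2$, $q=3$). Then $\Gamma$ is invariant by $F(x,y)=(\lambda x,\mu y)$, since
$$
F\circ\gamma(s)=(t^2s^2,t^3s^3)=\gamma(ts)\quad\text{for the parametrization }\gamma(s)=(s^2,s^3),
$$
so $\lambda_\Gamma=t$ and, consistently with Lemma~\ref{lm:restricted-eigenvalue}, $(\lambda_\Gamma)^2=t^2=\lambda(\Gamma)$.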
\section{$\Gamma$-parabolic case: reduction of the diffeomorphism}\label{sec:reduction}
Consider a diffeomorphism $F\in\Diff\Cd 2$ and a formal invariant curve $\Gamma$ which is parabolic (i.e. $(F|_\Gamma)'(0)=1$) and such that $F|_\Gamma\neq\id$. Note that the tangent eigenvalue $\lambda(\Gamma)$ is 1, by Lemma~\ref{lm:restricted-eigenvalue}, and put $\spec(DF(0))=\{1,\mu\}$.
\begin{definition}\label{def:reducedform}
We say that the pair $(F,\Gamma)$ is {\em reduced} if $\Gamma$ is non-singular and there exist coordinates
$(x,y)$ at $0\in\mathbb C^2$ such that $F$ is written as
\begin{align*}
x\circ F\,(x,y)&= x-x^{k+p+1}+O(x^{k+p+1}y,x^{2k+2p+1})\\
y\circ F\,(x,y)&=\mu\left[y+x^ka(x)y+O(x^{k+p+1}y)+b(x)\right],
\end{align*}
where $k\geq 1$, $p\geq 0$, $b(x)\in\mathbb C\{x\}$ and $a(x)$ is a polynomial of degree at most $p$ with $a(0)\neq 0$, and such that $\Gamma$ has order of contact at least $k+p+2$ with the $x$-axis. The polynomial $\mu\left(1+x^ka(x)\right)$ is called the \emph{principal part} of the pair $(F,\Gamma)$.
\end{definition}
Observe that the integers $k$ and $p$ are independent of the coordinates $(x,y)$, since $k+1$ is the order of $F$ and $k+p+1$ is the order of the formal diffeomorphism $F|_{\Gamma}$.
\begin{remark}\label{rem:orderofcontact}
Suppose that $(F,\Gamma)$ is reduced, with the same notations of Definition~\ref{def:reducedform}, and denote by $m\ge k+p+2$ the order of contact of $\Gamma$ with the $x$-axis, so that $\Gamma$ admits a parametrization of the form $\gamma(s)=\left(s,\gamma_2(s)\right)$, where the order of $\gamma_2(s)$ is $m$. Then, the order $\nu_0(b)$ of $b$ at 0 satisfies $\nu_0(b)=m$, in the case $\mu\neq1$, and $\nu_0(b)\ge m+k$, in the case $\mu=1$.
\end{remark}
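A sketch of the computation behind Remark~\ref{rem:orderofcontact}: writing $\gamma_2(s)=c_ms^m+\cdots$ and using the reduced form of Definition~\ref{def:reducedform}, the invariance relation $y\circ F(\gamma(s))=\gamma_2\bigl(x\circ F(\gamma(s))\bigr)$ expands as
$$
\mu\left[\gamma_2(s)+b(s)\right]+O(s^{m+k})=\gamma_2\left(s-s^{k+p+1}+\cdots\right)=\gamma_2(s)+O(s^{m+k+p}),
$$
so that $\mu\, b(s)=(1-\mu)\gamma_2(s)+O(s^{m+k})$; hence $\nu_0(b)=m$ if $\mu\neq1$ and $\nu_0(b)\geq m+k$ if $\mu=1$.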
In this section, we will prove that there exists a finite sequence of changes of coordinates and blow-ups at the infinitely near points of $\Gamma$ such that the pair $(\wt F,\wt\Gamma)$, where $\wt F$ is the transform of $F$ and $\wt\Gamma$ is the strict transform of $\Gamma$, is reduced.
Observe first that, after a finite number of blow-ups centered at the infinitely near points of $\Gamma$, we can assume that $\Gamma$ is non-singular and transversal to the exceptional divisor, which is given by $\{x=0\}$ in some analytic coordinates $(x,y)$. In these coordinates, $\Gamma$ admits a parametrization $\gamma(s)$ of the form $\gamma(s)=\left(s,\gamma_2(s)\right)$, with $\gamma_2(s)\in s\mathbb C[[s]]$.
Denote by $r+1\ge2$ the order of the formal diffeomorphism $F|_\Gamma$ (which is well defined since $F|_\Gamma\neq\id$) and consider the change of variables $(x,y)\mapsto \left(x, y-J_{r+1}\gamma_2(x)\right)$, where $J_l$ denotes the jet of order $l$. In the new coordinates, the curve $\Gamma$ admits a parametrization
$\bar\gamma(s)=(s,\bar\gamma_2(s))$, where the order of $\bar\gamma_2$ is at least $r+2$. Since $F(s,\bar\gamma_2(s))=(\theta(s),\bar\gamma_2(\theta(s)))$ for some $\theta(s)=s+\alpha s^{r+1}+\cdots$ with $\alpha\neq0$, we conclude that $F$ is written in the new coordinates as
\begin{align*}
x\circ F(x,y)&=x+\alpha x^{r+1}+O(y,x^{r+2})\\
y\circ F(x,y)&=\mu\Bigl[y+y\sum_{j\ge1}c_jx^j+O(y^2,x^{r+2})\Bigr].
\end{align*}
Set $t=\min\{j\ge 1:c_j\neq0\}$, if the series $\sum_{j\ge1}c_jx^j$ does not vanish, and $t=\infty$ otherwise. Put $k=r$ and $p=0$ if $t\ge r$, and $k=t$ and $p=r-t$ otherwise. We have then
\begin{align*}
x\circ F(x,y)&=x+\alpha x^{k+p+1}+O(y,x^{k+p+2})\\
y\circ F(x,y)&=\mu\Bigl[y+cx^ky+O(x^{k+1} y, y^2,x^{k+p+2})\Bigr],
\end{align*}
where $k\ge1$, $p\ge0$, $\alpha\neq0$ and, if $p\ge1$, then $c\neq0$; moreover, the order of contact of $\Gamma$ with the $x$-axis is at least $k+p+2$.
Consider now the sequence $\phi$ of blow-ups centered at the first $k+p+1$ infinitely near points of $\Gamma$. Observe that each of these blow-ups increases the exponent of $x$ in every term in $x\circ F$ with positive degree in the variable $y$ and every term in $y\circ F$ with degree at least 2 in the variable $y$. Moreover, the coefficient $c$ does not change if $p\ge1$. Hence, the transform $\wt F$ of $F$ by $\phi$ is written in some coordinates $(x,y)$ as
\begin{align*}
x\circ \wt F(x,y)&=x+x^{k+p+1}\left(\alpha+O(x,y)\right)\\
y\circ \wt F(x,y)&=\mu\Bigl[y+ax^ky+O(x^{k+1}y,x^{k+p+1}y^2, x)\Bigr],
\end{align*}
where again $k\ge1$, $p\ge0$, $\alpha\neq0$ and, if $p\ge1$, then $a=c\neq0$. In these coordinates, the strict transform $\wt\Gamma$ of $\Gamma$ is parametrized by $\wt\gamma(s)=(s,\wt\gamma_2(s))$, where $\wt\gamma_2(s)$ has order at least 1. Finally, after a polynomial change of coordinates of the form $(x,y)\mapsto\left(\beta x+P(x),y-J_{k+p+1}\wt\gamma_2(x)\right)$, where $\beta\in\mathbb C^*$ and $P(x)\in x^2\mathbb C[x]$, we obtain
\begin{align*}
x\circ \wt F\,(x,y)&=x-x^{k+p+1}+O(x^{k+p+1}y,x^{2k+2p+1})\\
y\circ \wt F\,(x,y)&=\mu\left[y+x^ka(x)y+O(x^{k+p+1}y)+b(x)\right],
\end{align*}
where $k\ge1$, $p\ge0$, $a(x)$ is a polynomial of degree at most $p$ such that $a(0)\neq0$ in case $p\ge1$ and the order of contact of $\wt\Gamma$ with the $x$-axis is at least $k+p+2$. Hence, $(\wt F,\wt \Gamma)$ is reduced unless $a(0)=0$; in this case, necessarily $p=0$, and we get a reduction for $(\wt F,\wt \Gamma)$ after a change of coordinates to increase by one unit the order of contact of $\wt\Gamma$ with the $x$-axis and a blow-up.
\strut
Consider a reduced pair $(F,\Gamma)$. We define the \emph{attracting directions} of $(F,\Gamma)$ as the $k+p$ half lines $\xi\mathbb R^+$, where $\xi^{k+p}=1$. This definition is motivated by the following: when $\Gamma$ is convergent, the one-dimensional diffeomorphism $F|_{\Gamma}$ is of the form $F|_{\Gamma}(x)=x-x^{k+p+1}+O(x^{k+p+2})$ so, by the Leau-Fatou Flower Theorem, the real tangents of its orbits are exactly the attracting directions of $(F,\Gamma)$, and we find stable manifolds of dimension one in sectors bisected by each one of them.
We will classify the attracting directions into two types as follows. Consider $A_0, A_1,...,A_p\in\mathbb C$ such that
$$\log\mu+x^k\left(A_0+A_1x+\cdots+A_px^p\right)=J_{k+p}\left(\log\left(\mu\left(1+x^ka(x)\right)\right)\right),$$
where $\mu\left(1+x^ka(x)\right)$ is the principal part of the pair $(F,\Gamma)$. Note that $A_0=a(0)\neq0$. The polynomial $\log\mu+x^k\left(A_0+A_1x+\cdots+A_px^p\right)$ is called the \emph{infinitesimal principal part} of $(F,\Gamma)$. Observe that, if we put $F_{\id}(x,y)=(x,\mu^{-1}y)\circ F(x,y)$, then $F_{\id}$ is tangent to the identity and the jet of order $k+p+1$ of its infinitesimal generator $X$ is exactly
$$J_{k+p+1}X=-x^{k+p+1}\parc x+x^k\left(A_0+A_1x+\cdots+A_px^p\right)y\parc y.$$
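As an illustration, with hypothetical data $k=1$, $p=1$ and $a(x)=a_0+a_1x$ (so $A_0=a_0\neq0$), the expansion of the logarithm gives
$$
J_{2}\left(\log\left(\mu\left(1+xa(x)\right)\right)\right)=\log\mu+a_0x+\left(a_1-\tfrac{1}{2}a_0^2\right)x^2,
$$
that is, $A_0=a_0$ and $A_1=a_1-a_0^2/2$.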
\begin{definition}
An attracting direction $\ell=\xi\mathbb R^+$ is a \emph{node direction} for $(F,\Gamma)$ if
$$\left(\ln|\mu|,\Real\left(\xi^{k}A_0\right),...,\Real\left(\xi^{k+p-1}A_{p-1}\right)\right)<0$$
in the lexicographic order; otherwise, it is a \emph{saddle direction}. In the case $|\mu|=1$, we define the \emph{first asymptotic significant order} of $\ell$ as $p$, if $\Real(\xi^{k+j}A_j)=0$ for all $0\le j\le p-1$, or as the first index $0\le r_{\ell}\le p-1$ such that $\Real(\xi^{k+r_{\ell} }A_{r_{\ell}})\neq0$, otherwise.
\end{definition}
Note that, when $|\mu|\neq1$, all attracting directions have the same character: they are node directions in case $|\mu|<1$ and saddle directions in case $|\mu|>1$. In the case $|\mu|=1$ and $p=0$, every attracting direction $\ell$ is a saddle direction, with first asymptotic significant order $r_{\ell}=0$.
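For instance, with the hypothetical data $k=1$, $p=1$, $\mu=1$ and $A_0=1+i$, the attracting directions are $\xi\mathbb R^+$ with $\xi^{2}=1$, i.e. $\xi=\pm1$, and the lexicographic condition involves the pair $(\ln|\mu|,\Real(\xi A_0))$:
$$
\xi=1:\ (0,\Real(A_0))=(0,1)\not<0,\qquad \xi=-1:\ (0,\Real(-A_0))=(0,-1)<0,
$$
so $\mathbb R^+$ is a saddle direction (with first asymptotic significant order $r_\ell=0$) and $-\mathbb R^+$ is a node direction.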
In the next two sections we will prove the existence, for a reduced pair $(F,\Gamma)$, of a stable manifold of $F$ in a neighborhood of every attracting direction $\ell$, which has dimension one or two depending on whether $\ell$ is a saddle or a node direction.
\begin{remark}\label{rem:furtherreductions}
In order to study asymptotic properties along $\Gamma$ it will be interesting to consider further refinements of a reduced pair $(F,\Gamma)$, in which the order of contact of $\Gamma$ with the $x$-axis can be assumed to be arbitrarily high. Let us explain how to obtain such transformations. Let $\gamma(s)=(s,\gamma_2(s))$ be a parametrization of $\Gamma$, where the order of $\gamma_2(s)$ is at least $k+p+2$. Given $m\ge2$, a change of coordinates $(x,y)\mapsto(x,y-J_{k+p+m-1}\gamma_2(x))$ transforms $F$ into
\begin{align*}
x\circ F\,(x,y)&= x-x^{k+p+1}+O(x^{k+p+1}y,x^{2k+2p+1})\\
y\circ F\,(x,y)&=\mu\left[y(1+x^ka(x))+O(x^{k+p+1}y,x^{k+p+m})\right].
\end{align*}
Notice that this change of coordinates preserves the principal part (and hence the infinitesimal one) of $(F,\Gamma)$ for all $m\ge2$.
For technical reasons, we will also need to impose the conditions
$$x\circ F(x,y)=x-x^{k+p+1}+O(x^{2k+p+1}y,x^{2k+2p+1})\quad\text{ and }\quad\Real(A_p)>0$$
on a reduced pair $(F,\Gamma)$, where $A_p$ is the coefficient of the term of degree $k+p$ in the infinitesimal principal part of $(F,\Gamma)$. These two conditions can be obtained after a polynomial change of variables as above, to increase the order of contact of $\Gamma$ with the $x$-axis, and a finite number of blow-ups at the infinitely near points of $\Gamma$, each of which increases by one unit both $A_p$ and the exponent of $x$ in the terms in $x\circ F$ with positive degree in the variable $y$. Observe that these blow-ups only change the coefficient $A_p$ in the infinitesimal principal part of $(F,\Gamma)$ and leave the other ones unaltered. Therefore, although the infinitesimal principal part changes, the node or saddle character of each attracting direction does not.
\end{remark}
\section{$\Gamma$-parabolic case: existence of parabolic curves}\label{sec:saddle-direction}
In this section, we prove that if $\Gamma$ is a parabolic formal invariant curve of $F\in\Diff\Cd2$ such that $(F,\Gamma)$ is reduced, then for every saddle attracting direction there exists a one-dimensional stable manifold of $F$ asymptotic to $\Gamma$.
\begin{theorem}\label{the:saddle}
Consider $F\in\Diff\Cd2$ and a formal invariant curve $\Gamma$ of $F$ such that the pair $(F,\Gamma)$ is in reduced form in some coordinates $(x,y)$. For each attracting direction of $(F,\Gamma)$ which is a saddle direction, there exists a parabolic curve of $F$ asymptotic to $\Gamma$. More precisely, if $\ell$ is a saddle attracting direction of $(F,\Gamma)$, then there exist a connected and simply connected domain $R\subset \mathbb C$, with $0\in\partial R$, that contains $\ell$ and a
holomorphic map $\varphi:R\to\mathbb C$ such that the set
$$S=\left\{(x,\varphi(x)):x\in R\right\}$$
is a parabolic curve of $F$ asymptotic to $\Gamma$. Moreover, if $\{(x_n,y_n)\}$ is an orbit of $F$ asymptotic to $\Gamma$ such that $\{x_n\}$ has $\ell$ as tangent direction, then $(x_n,y_n)\in S$ for all $n$ sufficiently big.
\end{theorem}
The rest of the section is devoted to the proof of Theorem~\ref{the:saddle}. The strategy of the proof is analogous to the one used in \cite{Lop-S}, which is inspired by the techniques used by Hakim in \cite{Hak}.
Up to a linear change of coordinates, we can assume without loss of generality that $\ell=\mathbb R^+$; in the case $|\mu|=1$, we denote by $r$ its first significant order. For $d,e, \varepsilon>0$, we define the set $R_{d,e,\varepsilon}$ as follows.
\begin{itemize}
\item If $|\mu|>1$ or $|\mu|=1$ and $r=0$, then
$$
R_{d,e,\varepsilon}=\{x\in\mathbb C: |x|<\varepsilon, -d\Real(x)<\Imag (x)<e\Real(x)\}.
$$
\item If $|\mu|=1$, $r\ge1$ and $\Imag(a(0))>0$, then
$$
R_{d,e,\varepsilon}=\{x\in\mathbb C: |x|<\varepsilon, -d\Real(x)<\Imag (x)<e \Real(x)^{r+1}\}.
$$
\item If $|\mu|=1$, $r\ge1$ and $\Imag(a(0))<0$, then
$$
R_{d,e,\varepsilon}=\{x\in\mathbb C: |x|<\varepsilon, -d\Real(x)^{r+1}<\Imag (x)<e\Real(x)\}.
$$
\end{itemize}
As mentioned in Remark~\ref{rem:furtherreductions}, to prove the asymptoticity of the parabolic curve we will need to consider successive changes of coordinates in which the order of contact of $\Gamma$ with the $x$-axis is arbitrarily high. Therefore, we consider an arbitrary $m\ge p+2$. By Remark~\ref{rem:furtherreductions}, after a polynomial change of variables and a finite sequence of blow-ups centered at the infinitely near points of $\Gamma$ we can find some coordinates $(\xx m,\yy m)$, with $(x,y)=\phi(\xx m,\yy m)=\left(\xx m,\xx m^t\yy m+P(\xx m)\right)$ for some $t\in\mathbb N$ and some polynomial $P$ of order at least $k+p+2$, such that $F$ is written
\begin{align*}
\xx m\circ F\,(\xx m,\yy m)&=F_1\,(\xx m,\yy m)=\xx m-\xx m^{k+p+1}+O(\xx m^{2k+p+1}\yy m,\xx m^{2k+2p+1}) \\
\yy m\circ F\,(\xx m,\yy m)&=F_2\,(\xx m,\yy m)=\mu\left[\yy m+\xx m^k a(\xx m)\yy m+O(\xx m^{k+p+1}\yy m,\xx m^{k+p+m})\right],
\end{align*}
$\Gamma$ has order of contact at least $k+p+m$ with the $\xx m$-axis and $\Real(A_p)>0$, where $A_p$ is the coefficient of the term of degree $k+p$ in the infinitesimal principal part of $(F,\Gamma)$. Note that it suffices to prove Theorem~\ref{the:saddle} in the new coordinates $(\xx m,\yy m)$. In fact, if $S_m$ is a parabolic curve of the transform of $F$ by $\phi$, then $\phi(S_m)$ is a parabolic curve of $F$. Moreover, $\phi(S_m)$ is asymptotic to $\Gamma$ if and only if $S_m$ is asymptotic to the strict transform of $\Gamma$; and, since $\phi$ preserves the $x$-variable, both the fact that $S_m$ is a graph over a domain $R\subset\mathbb{C}$ and the property that $S_m$ eventually contains any asymptotic orbit whose sequence of first components is tangent to $\ell$ are preserved by $\phi$. For simplicity, we also denote the new coordinates by $(x,y)$. By the definition of a saddle direction, we have that either $|\mu|>1$ or $|\mu|=1$ and $\Real(A_j)=0$ for $j=0, \dots, r-1$ and $\Real(A_r)>0$, where
$$\log\mu+x^kA(x)=\log\mu+x^k\left(A_0+A_1x+\dots+A_px^p\right)$$
is the infinitesimal principal part of $(F,\Gamma)$. Notice that $A_0=a(0)\neq0$.
We shall need the following technical lemmas.
\begin{lemma}\label{lem:holconj}
Suppose $|\mu|=1$ and $r>0$. Then there exists a germ of diffeomorphism of the form $\rho(x)=x+\sum_{j=2}^{\infty}\rho_j x^j$ such that
$$
A_0\rho(x)^k=x^{k}A(x),
$$
with $\rho_j\in\mathbb R$ for any $2\leq j\leq r$ and $\rho_{r+1}\not\in\mathbb R$. Moreover, $\Imag(A_0)\Imag(\rho_{r+1})<0$.
\end{lemma}
\begin{proof}
The existence of $\rho$ follows since the vanishing order and the principal terms of $A_0x^k$ and $x^kA(x)$ at $0$ coincide. The properties of $\rho_j$ for $2\le j\le r+1$ follow easily by solving
$$
A_0 \left(x + \sum_{j=2}^{\infty} \rho_j x^{j}\right)^k = x^{k}(A_0+A_1x+\cdots+A_px^p)
$$
recursively. Indeed, we obtain $A_1=kA_0\rho_2$ and $A_j=A_0(k\rho_{j+1}+P_j(\rho_2,\ldots,\rho_j))$ for any $2 \leq j \leq p$, where $P_j$ is a polynomial with real coefficients.
\end{proof}
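For the reader's convenience, the sign assertion can be made explicit; the following sketch (added here, not part of the original proof) uses the standing hypotheses of this section, namely $\Real(A_j)=0$ for $0\le j\le r-1$ and $\Real(A_r)>0$ along a saddle direction:

```latex
% Sketch of the sign computation. Since r\ge1, \Real(A_0)=\dots=\Real(A_{r-1})=0;
% write A_0=i\alpha with \alpha=\Imag(A_0)\neq0. For j<r the recursion gives
% \rho_{j+1}=\frac1k\left(A_j/A_0-P_j(\rho_2,\dots,\rho_j)\right)\in\mathbb R,
% since A_j/A_0 is a quotient of purely imaginary numbers. For j=r, as P_r is real,
\[
\Imag(\rho_{r+1})=\frac1k\,\Imag\!\left(\frac{A_r}{A_0}\right)
=\frac{\Imag\left(A_r\overline{A_0}\right)}{k\,|A_0|^2}
=\frac{-\alpha\,\Real(A_r)}{k\,|A_0|^2},
\]
% so that
\[
\Imag(A_0)\,\Imag(\rho_{r+1})=\frac{-\alpha^2\,\Real(A_r)}{k\,|A_0|^2}<0,
\]
% because \Real(A_r)>0 along a saddle direction. (For a node direction
% \Real(A_r)<0 and the product is positive, as noted in the next section.)
```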
\begin{lemma}\label{lem:imagecurve}
Suppose $r>0$. Consider a real analytic curve $\kappa$ at $0\in\mathbb C$ given by
$$\left\{x\in\mathbb C:\Imag (x) = \kappa_{r+1}\Real(x)^{r+1}+\kappa_{r+2}\Real(x)^{r+2} + \cdots\right\}.$$
Let $\rho(x)=x+\sum_{j=2}^{\infty}\rho_{j} x^{j}$, where $\rho_2, \ldots, \rho_r \in\mathbb R$ and $\rho_{r+1}\not\in\mathbb R$. Then $\rho (\kappa)$ is of the form $\{x\in\mathbb C:\Imag (x) = (\kappa_{r+1} + \Imag(\rho_{r+1}))\Real(x)^{r+1}+\cdots\}$.
\end{lemma}
\begin{proof}
Let $\tau(\Real(x))= \Real(x)+ i\sum_{j=r+1}^{\infty}\kappa_j\Real(x)^{j}$ be a parametrization of $\kappa$. The jet of order $r+1$ of the parametrization $\rho\circ\tau$ of the curve $\rho\circ\kappa$ is given by
$$
J_{r+1}(\rho\circ\tau) = \Real(x) + \sum_{j=2}^{r} \rho_j \Real (x)^{j} + \Real(\rho_{r+1}) \Real(x)^{r+1} + i \left(\kappa_{r+1} + \Imag(\rho_{r+1})\right)\Real(x)^{r+1},
$$
and the result follows.
\end{proof}
\begin{lemma}\label{rem:Rdecontainedsaddledomain}
If $|\mu|=1$, then
$$
R_{d,e,\varepsilon}\subset\{x\in\mathbb C:\Real(x^kA(x))>0\}
$$
for $d,e,\varepsilon$ sufficiently small.
\end{lemma}
\begin{proof}
The result is clear if $r=0$. Suppose $r\ge1$, and assume without loss of generality that $\Imag(A_0)<0$. Let $\rho$ be the diffeomorphism of Lemma~\ref{lem:holconj}. Since $r\ge1$ implies $\Real(A_0)=0$, we have $\Real(x^kA(x))=\Real\left(A_0\rho(x)^k\right)=-\Imag(A_0)\Imag\left(\rho(x)^k\right)$, so it suffices to show that $\rho(R_{d,e,\varepsilon})\subset\{x\in\mathbb C:\Imag(x^k)>0\}$. By Lemma~\ref{lem:imagecurve}, the set $\rho(R_{d,e,\varepsilon})$ is enclosed between two curves of the form
$$
\Imag (x)=(-d+\Imag(\rho_{r+1}))\Real(x)^{r+1}+\cdots \quad \hbox{and} \quad \Imag (x) = 2e\Real(x).
$$
Since $\Imag(\rho_{r+1})>0$, if $d,e,\varepsilon$ are small enough we conclude that $\Imag(x^k)>0$ for any $x\in \rho(R_{d,e,\varepsilon})$.
\end{proof}
\begin{lemma}\label{lem:rde}
$F_1(R_{d,e,\varepsilon}\times B(0,\varepsilon))\subset R_{d,e, \varepsilon}$ for $d,e,\varepsilon>0$ sufficiently small.
\end{lemma}
\begin{proof}
The set $R_{d,e,\varepsilon}$ is the intersection of the three sets $A= \{x\in\mathbb C: |x| < \varepsilon, \Real(x)>0\}$, $B=\{x\in\mathbb C:\Imag (x)>-d\Real(x)^{\alpha}\}$ and $C=\{x\in\mathbb C:\Imag (x)<e\Real(x)^{\beta}\}$, where either $\{\alpha,\beta\} = \{1,r+1\}$ or $\alpha=\beta=1$. Let us show that $F_1(x,y)$ belongs to those sets for any $(x,y)\in R_{d,e,\varepsilon}\times B(0,\varepsilon)$ if $d,e,\varepsilon>0$ are sufficiently small.
Note that $F_1(x,y)= x - x^{k+p+1} + O(x^{k+p+2})$ for any $(x,y)\in R_{d,e,\varepsilon}\times B(0,\varepsilon)$. Thus, $\Real(F_1(x,y))=\Real (x)+O(x^{k+p+1})$ in $R_{d,e,\varepsilon}\times B(0,\varepsilon)$, so it is positive if $d,e,\varepsilon>0$ are sufficiently small. Since $F_1(x,y)/x = 1 - x^{k+p} + O(x^{k+p+1})$, we deduce $|F_1(x,y)| \leq |x|$ if $(x,y) \in R_{d,e,\varepsilon}\times B(0,\varepsilon)$ for $d,e,\varepsilon>0$ small enough. In particular, $F_1(x,y)\in A$ for any $(x,y) \in R_{d,e,\varepsilon}\times B(0,\varepsilon)$.
Let us show $ F_1(R_{d,e,\varepsilon}\times B(0,\varepsilon))\subset B$. Fix $0 < \delta <1$ such that $(k+p+1) \delta > \alpha$. We split $R_{d,e,\varepsilon}\times B(0,\varepsilon)$ in two subsets, namely $R_{1} = \{(x,y)\in R_{d,e,\varepsilon}\times B(0,\varepsilon): \Imag (x) < -\delta d\Real(x)^{\alpha}\}$ and $R_{2} = \left(R_{d,e,\varepsilon}\times B(0,\varepsilon)\right) \setminus R_{1}$.
In $R_{2}$, we have
\begin{align*}
\Imag(F_1(x,y))+d\Real(F_1(x,y))^{\alpha}&=\Imag (x) + d \Real (x)^{\alpha} + O(x^{k+p+1})\\
&\geq d (1-\delta)\Real(x)^{\alpha} + O(x^{k+p+1}) >0
\end{align*}
if $d,e,\varepsilon>0$ are small enough, since $\alpha<k+p+1$. Thus we obtain $F_1(R_{2}) \subset B$. Let us focus on $R_{1}$. First we consider the case $\alpha=1$. The inequality
$$
\Imag(\log (F_1(x,y))-\log x) = \Imag\left( \log \frac{F_1(x,y)}{x} \right) = - \Imag (x^{k+p}) + O(x^{k+p+1}) >0
$$
holds in $R_{1}$ for $d,e,\varepsilon>0$ small enough; it implies that $\arg(F_1(x,y))>\arg(x)$ so
$$
\frac{\Imag(F_1(x,y))+d\Real(F_1(x,y))}{\Real (F_1(x,y))} > \frac{\Imag (x)+ d \Real (x)}{\Real (x)}
$$
and in particular $ F_1(R_{1}) \subset B$. Suppose $\alpha>1$. Given $(x,y) \in R_{1}$, we denote $\gamma = \Imag (x)/ \Real(x)^{\alpha}$, which satisfies $-d < \gamma < -\delta d$. We have, writing $x=\Real(x)+i\gamma\Real(x)^\alpha$, that
\begin{align*}
\Imag(F_1(x,y))&=\Imag(x)-\Imag(x^{k+p+1})+O(x^{k+p+2})\\
&=\Imag(x)-\gamma(k+p+1)\Real(x)^{k+p+\alpha}+O(x^{k+p+\alpha+1})
\end{align*}
and that
\begin{align*}
\Real(F_1(x,y))^\alpha&=\left(\Real(x)-\Real(x^{k+p+1})+O(x^{k+p+2})\right)^\alpha\\
&=\Real(x)^\alpha-\alpha\Real(x)^{k+p+\alpha}+O(x^{k+p+\alpha+1}).
\end{align*}
Therefore,
\begin{align*}
\Imag(F_1(x,y))+d\Real(F_1(x,y))^{\alpha}
&= \Imag (x) +d\Real(x)^{\alpha}\\
&\quad- [(k+p+1)\gamma+d\alpha]\Real(x)^{k+p+\alpha}+ O(x^{k+p+\alpha+1})
\end{align*}
for $(x,y) \in R_{1}$. We denote $\delta'= d [ (k+p+1)\delta - \alpha]$, which satisfies $\delta'>0$ by the choice of $\delta$. We obtain
$$
\Imag(F_1(x,y))+d\Real(F_1(x,y))^{\alpha} \ge \Imag (x) + d\Real(x)^{\alpha} + \delta'\Real(x)^{k+p+\alpha}+ O(x^{k+p+\alpha+1})>0
$$
for all $(x,y) \in R_{1}$ if $d,e,\varepsilon>0$ are small enough. In particular, $F_1(R_{1})\subset B$.
Analogously we can show that $F_1(R_{d,e,\varepsilon}\times B(0,\varepsilon)) \subset C$, and the Lemma is proved.
\end{proof}
We consider $0<\varepsilon<1$ and fix $d,e>0$ small enough so that Lemmas \ref{rem:Rdecontainedsaddledomain} and \ref{lem:rde} hold (notice that this does not depend on $m$). Consider the Banach space
$$\mc B^m_\varepsilon=\left\{u\in\mathcal{O}(R_{d,e,\varepsilon},\mathbb C)\,:
\sup\left\{\frac{|u(x)|}{|x|^{m-1}}\colon x\in R_{d,e,\varepsilon}\right\}<\infty\right\}$$
with the norm $\|u\|=\sup\Bigl\{\frac{|u(x)|}{|x|^{m-1}}:x\in R_{d,e,\varepsilon}\Bigr\}$ and its closed subset
$$\mc{H}^m_\varepsilon=\{u\in\mc B^m_\varepsilon\colon \|u\|\leq 1, |u'(x)|\le|x|^{m-p-2} \, \forall x\in R_{d,e,\varepsilon}\}.$$
If we denote $f_u(x)=F_1(x,u(x))$, then $f_u(R_{d,e,\varepsilon})\subset R_{d,e,\varepsilon}$ for every $u\in\mc{H}^m_\varepsilon$, by Lemma~\ref{lem:rde}. Moreover, as in the Leau-Fatou Flower Theorem, there exists a constant $C>0$ such that if $x_0\in R_{d,e,\varepsilon}$ and $u\in\mc{H}^m_\varepsilon$, and we denote $x_j=f_u(x_{j-1})$, then
\begin{equation}\label{eq:dynamics1}
\lim_{j\to\infty}(k+p)jx_j^{k+p}=1\quad\hbox{and}\quad |x_j|^{k+p}\le C\frac{|x_0|^{k+p}}{1+j|x_0|^{k+p}}
\end{equation}
for all $j\in\mathbb N$. Therefore, if $u\in\mc{H}^m_\varepsilon$ is a solution of the equation
\begin{equation}\label{eq:invariance}
u(f_u(x))=F_2(x,u(x)),
\end{equation}
then the set $S_m=\left\{(x,u(x)):x\in R_{d,e,\varepsilon}\right\}$ is a parabolic curve of $F$.
Define
$$
E(x)=\exp\left(-\int\frac{A(x)}{x^{p+1}}dx\right).
$$
We have, as in \cite[Lemma 3.7]{Lop-S},
\begin{equation}\label{eq:ExEF1x}
E(x)E(F_1(x,y))^{-1}=\exp(-x^{k}A(x))+O(x^{k+p+1},x^ky).
\end{equation}
\begin{lemma}\label{lem:sumofx_j}
If $\varepsilon>0$ is small enough, then for any $u\in\mc{H}^m_\varepsilon$, writing $x_j=f_{u}(x_{j-1})$ for ${j\geq 1}$, we have:
\begin{enumerate}[(i)]
\item For any real number $s> k+p$ there exists a constant $K_s>0$, independent of $u$, such that for any $x_0\in R_{d,e,\varepsilon}$,
$$
\sum_{j\ge0}|x_j|^s\le K_s|x_0|^{s-k-p}.
$$
\item There exists a constant $M>0$ independent of $u$ such that, for any $x_0\in R_{d,e,\varepsilon}$ and for any $j\ge0$,
$$
\left|\mu^{-j}E(x_0)E(x_j)^{-1}\right|\le M.
$$
\end{enumerate}
\end{lemma}
\begin{proof}
Part (i) follows from the inequality in \eqref{eq:dynamics1}, as in \cite[Corollary 4.3]{Hak}. To prove part (ii), observe that
$$
E(x_0)E(x_1)^{-1}=\exp\left(-x_0^kA(x_0)\right)+\theta_u(x_0),
$$
where $|\theta_u(x_0)|\le K|x_0|^{k+p+1}$ for any $x_0\in R_{d,e,\varepsilon}$ and any $u\in\mc{H}^m_\varepsilon$, with some $K>0$ independent of $u$.
If $|\mu|>1$, since $\left|\exp\left(-x_0^kA(x_0)\right)\right|\le\exp\left(K'\varepsilon^k\right)$
for some $K'>0$, we have $\left|\mu^{-1}\exp\left(-x_0^kA(x_0)\right)\right|\le1$ if $\varepsilon$ is small enough. If $|\mu|=1$, since $R_{d,e,\varepsilon}\subset\{x\in\mathbb C:\Real(x^kA(x))>0\}$ by Lemma~\ref{rem:Rdecontainedsaddledomain}, we have $\left|\mu^{-1}\exp\left(-x_0^kA(x_0)\right)\right|\le1$.
Therefore, for $\varepsilon>0$ small enough, we obtain
$$
\left|\mu^{-j}E(x_0)E(x_j)^{-1}\right|\leq \prod_{l=0}^{j-1} (1+K|x_l|^{k+p+1})\leq \prod_{l=0}^{\infty} (1+K|x_l|^{k+p+1}).
$$
The convergence of the infinite product follows from part (i).
\end{proof}
Define
$$
H(x,y)=y-\mu^{-1}E(x)E(F_1(x,y))^{-1}F_2(x,y)\in\mathbb C\{x,y\}.
$$
Using equation~\eqref{eq:ExEF1x}, the identity
$\mu\left(1+x^ka(x)\right)=J_{k+p}\left(\exp\left(\log \mu+x^kA(x)\right)\right)$ and the expression of $F_2$,
we obtain that
$$
H(x,y)=O(x^{k+p+1}y,x^ky^2,x^{k+p+m}).
$$
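For completeness, the order of vanishing of $H$ can be seen by a direct substitution; the following computation is a sketch added for convenience:

```latex
% Sketch: combining \eqref{eq:ExEF1x} with the expression of F_2 and the
% identity \mu(1+x^ka(x))=J_{k+p}\left(\exp(\log\mu+x^kA(x))\right), we get
\begin{align*}
\mu^{-1}E(x)E(F_1(x,y))^{-1}F_2(x,y)
&=\left(\exp\left(-x^kA(x)\right)+O(x^{k+p+1},x^ky)\right)\\
&\qquad\times\left(y\left(1+x^ka(x)\right)+O(x^{k+p+1}y,x^{k+p+m})\right)\\
&=y+O(x^{k+p+1}y,\,x^ky^2,\,x^{k+p+m}),
\end{align*}
% since 1+x^ka(x)=\exp(x^kA(x))+O(x^{k+p+1}); subtracting this from y
% yields H(x,y)=O(x^{k+p+1}y,x^ky^2,x^{k+p+m}).
```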
\begin{proposition}\label{pro:contraction-map}
If $\varepsilon>0$ is sufficiently small, then for any $u\in\mc H^m_\varepsilon$ and any $x_0\in R_{d,e,\varepsilon}$, writing $x_j=f_{u}(x_{j-1})$ for ${j\geq 1}$, the series
$$
Tu(x_0)=\sum_{j\ge0}\mu^{-j}E(x_0)E(x_j)^{-1}H(x_j,u(x_j))
$$
is normally convergent and defines an element $Tu\in\mc H^m_\varepsilon$. Moreover, $T\colon u\mapsto Tu$ is a contracting map from $\mc H^m_{\varepsilon}$ to itself and $u\in\mc{H}^m_\varepsilon$ is a fixed point of $T$ if and only if $u$ satisfies equation \eqref{eq:invariance}.
\end{proposition}
\begin{proof}
The normal convergence of the series $Tu(x_0)$ and the fact that $Tu\in\mc{H}^m_\varepsilon$ for all $u\in\mc{H}^m_\varepsilon$, if $\varepsilon$ is sufficiently small, are proved as in \cite[Proposition 3.9]{Lop-S}.
To show that $T$ is a contraction, consider $u,v\in \mc{H}^m_\varepsilon$ and write $Tu(x_0)-Tv(x_0)=U_1+U_2$, with
\begin{align*}
U_1&=\sum\limits_{j\ge0}\mu^{-j}E(x_0)E(x_j)^{-1}\left[H(x_j,u(x_j))-H(z_j,v(z_j))\right]\\
U_2&=\sum\limits_{j\ge0}\mu^{-j}\left[E(x_0)E(x_j)^{-1}-E(x_0)E(z_j)^{-1}\right]H(z_j,v(z_j)),
\end{align*}
where $x_j=f_u^j(x_0)$ and $z_j=f_v^j(x_0)$.
Arguing as in \cite[Proposition 3.9]{Lop-S}, we prove that there exists $B_1>0$ such that
$|U_1|\le B_1|x_0|^m\|u-v\|$. To bound $U_2$, write
$$
r(x)=-\int\dfrac{A(x)}{x^{p+1}}dx=\dfrac{1}{x^p}\left(p^{-1}A_0+(p-1)^{-1}A_1x+\dots+A_{p-1}x^{p-1}\right)-A_p\log x.
$$
As an application of Taylor's formula, we obtain
$$
r(x_1)=r(x_0)+x_0^kA(x_0)+\theta_u(x_0),
$$
where $|\theta_u(x_0)|\le c|x_0|^{k+p+1}$ for some constant $c>0$ independent of $u$.
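The Taylor step can be spelled out as follows (a sketch; recall that $x_1=f_u(x_0)$ with $|u(x_0)|\le|x_0|^{m-1}$, so $x_1-x_0=-x_0^{k+p+1}+O(x_0^{2k+2p+1})$):

```latex
% Sketch of the Taylor computation. Since r'(x)=-A(x)/x^{p+1} and
% r''(x)=O(x^{-p-2}),
\begin{align*}
r(x_1)-r(x_0)&=r'(x_0)(x_1-x_0)+O\!\left(x_0^{-p-2}(x_1-x_0)^2\right)\\
&=-\frac{A(x_0)}{x_0^{p+1}}\left(-x_0^{k+p+1}+O(x_0^{2k+2p+1})\right)+O\!\left(x_0^{2k+p}\right)\\
&=x_0^kA(x_0)+O\!\left(x_0^{k+p+1}\right),
\end{align*}
% where the last step uses k\ge1, hence 2k+p\ge k+p+1.
```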
If we put
$$
E(x_0)E(x_j)^{-1}-E(x_0)E(z_j)^{-1}=\exp a-\exp b,
$$
with $a=r(x_0)-r(x_j)$ and $b=r(x_0)-r(z_j)$, we have
\begin{align*}
|\mu|^{-j}\left|E(x_0)E(x_j)^{-1}-E(x_0)E(z_j)^{-1}\right|
&=|\mu|^{-j}\left|\exp a-\exp b\right|\\
&\le |\mu|^{-j}|a-b|\max\limits_{\zeta\in[a,b]}|\exp\zeta|.
\end{align*}
If $|\mu|=1$, since $\Real(x^kA(x))>0$ for all $x\in R_{d,e,\varepsilon}$ by Lemma~\ref{rem:Rdecontainedsaddledomain}, we have that $\Real(r(x_0)-r(x_1))\le|\theta_u(x_0)|$
and therefore
$$
\Real(r(x_0)-r(x_j))\le \sum_{l=0}^{j-1}c|x_l|^{k+p+1}\le 1
$$
if $\varepsilon$ is sufficiently small, by Lemma~\ref{lem:sumofx_j}. Analogously, $\Real(r(x_0)-r(z_j))\le 1$, and hence $[a,b]\subset\{x\in\mathbb C:\Real(x)\le1\}$ so
$$
|\mu|^{-j}\max\limits_{\zeta\in[a,b]}|\exp\zeta|\le \mathrm{e}.
$$
If $|\mu|>1$, there exists a constant $K>0$ such that $|x^kA(x)|\le K\varepsilon^k$ for all $x\in R_{d,e,\varepsilon}$, so
$$
\Real(r(x_0)-r(x_j))\le\sum_{l=0}^{j-1}\left(K\varepsilon^k+c|x_l|^{k+p+1}\right)\le jK\varepsilon^k+1
$$
if $\varepsilon$ is small enough, by Lemma~\ref{lem:sumofx_j}. Analogously, $\Real(r(x_0)-r(z_j))\le jK\varepsilon^k+1$, and hence
$$
|\mu|^{-j}\max\limits_{\zeta\in[a,b]}|\exp\zeta|\le |\mu|^{-j}\exp\left(jK\varepsilon^k\right)\mathrm{e}=\exp\left((K\varepsilon^k-\ln|\mu|)j\right)\mathrm{e}\le \mathrm{e}
$$
for $\varepsilon>0$ sufficiently small. Therefore,
$$
|\mu|^{-j}\left|E(x_0)E(x_j)^{-1}-E(x_0)E(z_j)^{-1}\right|\le \mathrm{e}|r(z_j)-r(x_j)|
$$
and, arguing as in \cite[Proposition 3.9]{Lop-S}, there exists a constant $B_2>0$ such that $|U_2|\le B_2|x_0|^m\|u-v\|$. Therefore,
$|Tu(x_0)-Tv(x_0)|\le(B_1+B_2)|x_0|^m\|u-v\|$, so $T$ is a contraction if $\varepsilon$ is small enough.
Finally, rewriting
\begin{align*}
Tu(x_0)
&=\displaystyle E(x_0)\sum_{j\ge0}\left(\mu^{-j}E(x_j)^{-1}u(x_j)-\mu^{-(j+1)}E(x_{j+1})^{-1}F_2(x_j, u(x_j))\right)\\
&=u(x_0)-\mu^{-1}E(x_0)E(x_1)^{-1}F_2(x_0,u(x_0))+\mu^{-1}E(x_0)E(x_1)^{-1}Tu(x_1)
\end{align*}
we conclude that $u\in\mc{H}^m_\varepsilon$ satisfies equation \eqref{eq:invariance} if and only if $u$ is a fixed point of~$T$.
\end{proof}
The existence of a solution $u\in\mc{H}^m_\varepsilon$ of equation \eqref{eq:invariance} (and hence of a parabolic curve for $F$) follows from Proposition~\ref{pro:contraction-map}, by the Banach fixed point theorem. The property of the parabolic curve being asymptotic to $\Gamma$ can be proved exactly as in \cite{Lop-S} (showing that $S_m=S_{m'}$ for $m'\geq m$ by uniqueness of the fixed point and that $S_m$ is tangent to $\Gamma$ up to an order which increases with $m$).
To complete the proof of Theorem~\ref{the:saddle}, it only remains to show that if $\{(x_j,y_j)\}$ is an orbit of $F$ asymptotic to $\Gamma$ such that $\{x_j\}$ has $\mathbb R^+$ as tangent direction, then $(x_j,y_j)\in S_m$ for $j$ sufficiently big. To prove it, we will need the two following lemmas.
\begin{lemma}\label{lem:tangency}
If $\{(x_j,y_j)\}$ is a stable orbit of $F$ such that $\{x_j\}$ has $\mathbb R^+$ as tangent direction and $|y_j|<|x_j|^{p+1}$ for all $j$, then
$$\lim_{j\to\infty} \frac{\Imag(x_j)}{\Real(x_j)^{r+1}}=0.$$
\end{lemma}
\begin{proof}
We denote by $-\rho+(k+p+1)/2$ the coefficient of $x^{2k+2p+1}$ in $F_1(x,y)$ and consider
$$
\psi(x)=\frac1{(k+p)x^{k+p}}+\rho\log x.
$$
Using the fact that $|y_j|<|x_j|^{p+1}$ for all $j$, we can see that $\psi(x_1)=\psi(x_0)+1+O(x_0^{k+p+1})$, so $\psi(x_j)-j$ remains bounded as $j\to\infty$, by Lemma~\ref{lem:sumofx_j}. Therefore,
$$
\frac1{(k+p)x_j^{k+p}}=\left(j+O(1)\right)\left(1+O(x_j^{k+p}\log x_j)\right).
$$
Since $\lim_{j\to\infty}(k+p)jx_j^{k+p}=1$, by \eqref{eq:dynamics1}, we get
$$
\frac{1}{(k+p)x_j^{k+p}}=\left(j+O(1)\right)\left(1+O\left(\frac 1j\log j\right)\right)=j+O(\log j)
$$
and hence
$$
x_j=(k+p)^{-1/(k+p)}j^{-1/(k+p)}\left(1+O \left(\frac{\log j}{j} \right)\right).
$$
The quotient $\Imag (x_j)/\Real(x_j)^{r+1}$ then satisfies
$$
\frac{\Imag(x_j)}{\Real(x_j)^{r+1}}
=\frac{(k+p)^{-\frac1{k+p}}j^{-\frac1{k+p}}O\left(\frac{\log j}{j} \right)}{(k+p)^{-\frac{r+1}{k+p}}j^{-\frac{r+1}{k+p}}\left(1+O \left(\frac{\log j}{j} \right)\right)}
=(k+p)^{\frac{r}{k+p}} j^{\frac{r}{k+p}} O \left( \frac{\log j}{j} \right).
$$
Since $r<k+p$, $\Imag(x_j)/\Real(x_j)^{r+1}$ tends to $0$ when $j \to \infty$.
\end{proof}
\begin{lemma}\label{lem:est}
If $|\mu|=1$ there exists a constant $c>0$ such that, if $d,e,\varepsilon$ are small enough, then for every $x\in R_{d,e,\varepsilon}$ we have
$$\Real(x^{k} A(x))\geq c|x|^{k+r}.$$
\end{lemma}
\begin{proof}
If $r=0$, we have
$$
\Real(x^k A(x))\geq\Real(A_0x^k)/2\geq c|x|^k
$$
for $x\in R_{d,e,\varepsilon}$ if $d,e,\varepsilon$ are small enough, where $c=\Real(A_0)/3$.
If $r>0$, using the diffeomorphism $\rho(x)=x+\sum_{j\ge2}\rho_jx^j$ of Lemma~\ref{lem:holconj}, it suffices to show that $\Real(A_0x^{k})\geq c|x|^{k+r}$ for every $x\in\rho(R_{d,e,\varepsilon})$, for some $c>0$. Without loss of generality, we can assume $\Imag(A_0)<0$, so $\Imag(\rho_{r+1})>0$. The set $\rho(R_{d,e,\varepsilon})$ is enclosed between two curves of the form
$$
\Imag (x)=(-d+\Imag(\rho_{r+1}))\Real(x)^{r+1}+\cdots \quad \hbox{and} \quad \Imag (x) = 2e\Real(x),
$$
by Lemma~\ref{lem:imagecurve}. Notice that $-d +\Imag(\rho_{r+1})$ is positive if $d$ is sufficiently small. The elements of $\rho(R_{d,e,\varepsilon})$ satisfy $d'|x|^{r} <\arg x <\pi/(2k)$ for some $d'>0$ if $d,e,\varepsilon$ are small enough. Then, since sine is an increasing function in $(0,\pi/2)$, we obtain
$$
\Real(A_0x^k)=-\Imag(A_0)|x|^k\sin(k \arg x) \geq -\Imag(A_0)|x|^k \sin (kd'|x|^r)\geq c|x|^{k+r}
$$
in $\rho(R_{d,e,\varepsilon})$ if $d,e,\varepsilon$ are small enough, where $c= - \Imag(A_{0})kd'/2$.
\end{proof}
Let $\{(x_j,y_j)\}$ be an orbit of $F$ asymptotic to $\Gamma$, such that $\{x_j\}$ has $\mathbb R^+$ as tangent direction. We consider the sectorial change of coordinates $(x,y)\in R_{d,e,\varepsilon}\times B(0,\varepsilon)\mapsto (x,y-u(x))$, where $u\in\mc{H}^m_\varepsilon$ is the solution of equation \eqref{eq:invariance}, so that the parabolic curve $S_m$ becomes the $x$-axis and $F$ is written as
\begin{align*}
F_1(x,y)&=x-x^{k+p+1}+O(x^{2k+p+1}y,x^{2k+2p+1}) \\
F_2(x,y)&=\mu y\left[1+x^k a(x)+O(x^{k+p+1})\right].
\end{align*}
Since $\{(x_j,y_j)\}$ is asymptotic to $\Gamma$ and $S_m=(y=0)$ is also asymptotic to $\Gamma$, we have that $|y_j|<|x_j|^{p+1}$ if $j$ is big enough. Then, by Lemma~\ref{lem:tangency}, for any $d,e,\varepsilon>0$ we have that $x_j\in R_{d,e,\varepsilon}$ if $j$ is big enough. Hence, we have
$$
|\mu|\left|1+x_j^k a(x_j)+O(x_j^{k+p+1})\right|=
|\mu|\left|\exp\left(x_j^kA(x_j)\right)+O(x_j^{k+p+1})\right|>1
$$
for $j$ big enough, since either $|\mu|>1$ or $|\mu|=1$ and $\Real(x_j^kA(x_j))\ge c|x_j|^{k+r}$, by Lemma~\ref{lem:est}. Therefore, the orbit $\{(x_j,y_j)\}$ can only converge to 0 if $y_j=0$ for all $j$ big enough. This ends the proof of Theorem~\ref{the:saddle}.
\section{$\Gamma$-parabolic case: existence of open stable manifolds}\label{sec:node-direction}
In this section, we show that if $\Gamma$ is a parabolic formal invariant curve of $F\in\Diff\Cd2$ such that $(F,\Gamma)$ is reduced, then for every node attracting direction there exists a two-dimensional stable manifold of $F$ in which every orbit is asymptotic to $\Gamma$.
\begin{theorem}\label{the:node}
Consider $F\in\Diff\Cd2$ and a formal invariant curve $\Gamma$ of $F$ such that the pair $(F,\Gamma)$ is in reduced form in some coordinates $(x,y)$. For each attracting direction of $(F,\Gamma)$ which is a node direction, there exists an open stable manifold of $F$ where every orbit is asymptotic to $\Gamma$. More precisely, if $\ell$ is a node attracting direction of $(F,\Gamma)$, then there exist a connected and simply connected domain $R\subset \mathbb C$, with $0\in\partial R$, that contains $\ell$ and some integers $M\ge k+p+2$ and $q\ge p+1$ such that the set
$$S=\left\{(x,y):x\in R, \left|y-J_M\gamma_2(x)\right|<|x|^q\right\},$$
where $\gamma(s)=(s,\gamma_2(s))$ is a parametrization of $\Gamma$, is an open stable manifold of $F$ where every orbit is asymptotic to $\Gamma$. Moreover, if $\{(x_n,y_n)\}$ is an orbit of $F$ asymptotic to $\Gamma$ such that $\{x_n\}$ has $\ell$ as tangent direction, then $(x_n,y_n)\in S$ for all $n$ sufficiently big.
\end{theorem}
The rest of the section is devoted to the proof of Theorem~\ref{the:node}. Up to a linear change of coordinates, we can assume without loss of generality that $\ell=\mathbb R^+$; in the case $|\mu|=1$, we denote by $r$ its first significant order. Observe that $r<p$. For $d,e, \varepsilon>0$, we define the set $R_{d,e,\varepsilon}$ as follows.
\begin{itemize}
\item If $|\mu|<1$ or $|\mu|=1$ and $r=0$, then
$$
R_{d,e,\varepsilon}=\{x\in\mathbb C: |x|<\varepsilon, -d\Real (x)<\Imag (x)<e\Real(x)\}.
$$
\item If $|\mu|=1$, $r\ge1$ and $\Imag(a(0))>0$, then
$$
R_{d,e,\varepsilon}=\{x\in\mathbb C: |x|<\varepsilon, -d\Real (x)^{r+1}<\Imag (x)<e\Real (x)\}.
$$
\item If $|\mu|=1$, $r\ge1$ and $\Imag(a(0))<0$, then
$$
R_{d,e,\varepsilon}=\{x\in\mathbb C: |x|<\varepsilon, -d\Real (x)<\Imag (x)<e \Real(x)^{r+1}\}.
$$
\end{itemize}
As mentioned in Remark~\ref{rem:furtherreductions}, to prove the asymptoticity of the orbits inside the stable manifold we will need to consider successive changes of coordinates in which the order of contact of $\Gamma$ with the $x$-axis is arbitrarily high. Therefore, we consider an arbitrary $m\ge p+2$. By Remark~\ref{rem:furtherreductions}, after a polynomial change of variables and a finite sequence of blow-ups we can find some coordinates $(\xx m,\yy m)$, with $(x,y)=\phi(\xx m,\yy m)=\left(\xx m,\xx m^t\yy m+J_M\gamma_2(\xx m)\right)$ for some $t\ge0$ and some $M\ge k+p+2$, such that $F$ is written
\begin{align*}
\xx m\circ F\,(\xx m,\yy m)&=F_1\,(\xx m,\yy m)=\xx m-\xx m^{k+p+1}+O(\xx m^{2k+p+1}\yy m,\xx m^{2k+2p+1}) \\
\yy m\circ F\,(\xx m,\yy m)&=F_2\,(\xx m,\yy m)=\mu\left[\yy m+\xx m^k a(\xx m)\yy m+O(\xx m^{k+p+1}\yy m,\xx m^{k+p+m})\right],
\end{align*}
$\Gamma$ has order of contact at least $k+p+m$ with the $\xx m$-axis (in this case, unlike the case of a saddle attracting direction, we do not need the condition $\Real(A_p)>0$ on the coefficient $A_p$ in the infinitesimal principal part). We define, for $d,e,\varepsilon>0$,
$$S_{d,e,\varepsilon}^{m}=
\left\{(\xx m,\yy m)\in\mathbb C^2:\xx m\in R_{d,e,\varepsilon},|\yy m|<|\xx m|^{p+1}\right\}.$$
If we show that, in the coordinates $(\xx m,\yy m)$, the set $S_{d,e,\varepsilon}^{m}$ is a stable manifold where every orbit is asymptotic to $\Gamma$ and which eventually contains every orbit $\{(x_n,y_n)\}$ asymptotic to $\Gamma$ such that $\{x_n\}$ has $\ell$ as tangent direction, then the set $\phi(S_{d,e,\varepsilon}^{m})$ will satisfy the required properties of Theorem~\ref{the:node} in the coordinates $(x,y)$. We will work therefore in the coordinates $(\xx m,\yy m)$, that we still denote $(x,y)$ for simplicity. By the definition of a node direction, we have that either $|\mu|<1$ or $|\mu|=1$ and $\Real(A_j)=0$ for $j=0, \dots, r-1$ and $\Real(A_r)<0$, where
$$\log\mu+x^kA(x)=\log\mu+x^k\left(A_0+A_1x+\dots+A_px^p\right)$$
is the infinitesimal principal part of $(F,\Gamma)$. Note that $A_0=a(0)\neq0$.
\begin{proposition}\label{pro:basin}
If $d,e,\varepsilon>0$ are small enough, then
$$F(S^m_{d,e,\varepsilon})\subset S^m_{d,e,\varepsilon}.$$
\end{proposition}
\begin{proof}
Arguing exactly as in Lemma~\ref{lem:rde}, we have that
$$F_1(S^m_{d,e,\varepsilon})\subset R_{d,e,\varepsilon}
$$
if $d,e,\varepsilon>0$ are sufficiently small. If $(x,y)\in S^m_{d,e,\varepsilon}$, using the identity $\mu\left(1+x^ka(x)\right)=J_{k+p}\left(\mu\exp\left(x^kA(x)\right)\right)$, we have that \begin{align*}
\left|\frac{F_2(x,y)}{F_1(x,y)^{p+1}} \right|
& = \left|\frac{\mu y\left(\exp(x^k A(x))+O(x^{k+p+1})\right) + O(x^{k+p+m})}{(x-x^{k+p+1}+ O(x^{2k+2p+1} ))^{p+1}}\right|\\
&\leq |\mu|\left|\frac{y}{x^{p+1}}\right| \left|\exp (x^kA(x))+O(x^{k+p+1})
\right||1+ O(x^{k+p})|+O(x^{k+m-1})\\
&<|\mu|\left|\exp (x^kA(x))+O(x^{k+p+1})
\right||1+ O(x^{k+p})|+O(x^{k+m-1}).
\end{align*}
If $|\mu|<1$, we conclude that $\left|F_2(x,y)/F_1(x,y)^{p+1}\right|< 1$ if $\varepsilon>0$ is small enough, so $F(S^m_{d,e,\varepsilon})\subset S^m_{d,e,\varepsilon}$. If $|\mu|=1$, arguing as in Lemma~\ref{lem:est} (with the only difference that in this case $\Real(A_r)<0$ and $\Imag (A_0)\Imag(\rho_{r+1})>0$, where $\rho$ is the diffeomorphism of Lemma~\ref{lem:holconj}), there exists a constant $c>0$ such that
$$\Real(x^{k}A(x))\leq -c|x|^{k+r}$$
for all $x\in R_{d,e,\varepsilon}$, if $d,e,\varepsilon$ are small enough. Then, we get
\begin{align*}
\left|\frac{F_2(x,y)}{F_1(x,y)^{p+1}} \right| &\leq \left(1-c|x|^{k+r} + |O(x^{k+r+1})|\right) |1+ O(x^{k+p})| + O(x^{k+m-1}) \\
&\leq 1 -c |x|^{k+r} + O(x^{k+r+1})<1
\end{align*}
for any $(x,y) \in S^m_{d,e,\varepsilon}$, if $d,e,\varepsilon>0$ are small enough, so $F(S^m_{d,e,\varepsilon}) \subset S^m_{d,e,\varepsilon}$.
\end{proof}
Consider $d,e,\varepsilon>0$ such that Proposition~\ref{pro:basin} holds. For any $(x_0,y_0)\in S^m_{d,e,\varepsilon}$, arguing as in the classical Leau-Fatou Flower Theorem, we have that $\lim_{j\to\infty}(k+p)jx_j^{k+p}=1$, where $(x_j,y_j)=F(x_{j-1},y_{j-1})$, and therefore, by the definition of $S^m_{d,e,\varepsilon}$, we have that $\lim_{j\to\infty}(x_j,y_j)=0$, so $S^m_{d,e,\varepsilon}$ is a stable manifold of $F$. Moreover, if
$\{(x_j,y_j)\}$ is an orbit of $F$ asymptotic to $\Gamma$ such that $\{x_j\}$ has $\mathbb R^+$ as tangent direction, then $x_j\in R_{d,e,\varepsilon}$ if $j$ is big enough, by Lemma~\ref{lem:tangency}, and $|y_j|<|x_j|^{p+1}$ if $j$ is big enough, since the order of contact of $\Gamma$ with the $x$-axis is at least $k+p+m$. Hence, $(x_j,y_j)\in S^m_{d,e,\varepsilon}$ if $j$ is sufficiently big.
The rest of the proof is devoted to showing that every orbit in $S^m _{d,e,\varepsilon}$ is asymptotic to $\Gamma$. We define, as in the proof of Theorem~\ref{the:saddle},
$$E(x)=\exp\left(-\int\frac{A(x)}{x^{p+1}}dx\right).$$
\begin{lemma}\label{lem:merconj}
Suppose $|\mu|=1$ and $r>0$. Then there exists a germ of diffeomorphism of the form $\zeta(x)=x+\sum_{j=2}^{\infty}\zeta_j x^j$ such that
$$
-\frac{A_0}{p\zeta(x)^p}=\int\frac{J_{p-1}A(x)}{x^{p+1}} dx,
$$
with $\zeta_j\in\mathbb R$ for any $2\leq j\leq r$ and $\zeta_{r+1}\not\in\mathbb R$. Moreover, $\Imag(A_0)\Imag(\zeta_{r+1})<0$.
\end{lemma}
\begin{proof}
The existence of $\zeta$ follows from the fact that the meromorphic functions $-A_0/(px^p)$ and $\int\frac{J_{p-1}A(x)}{x^{p+1}} dx$ have the same principal term. The properties of $\zeta_j$ for $2\le j\le r+1$ follow easily by solving the equation recursively. Indeed, we obtain $A_1 =-(p-1)A_0\zeta_2 $ and $A_j=A_0\left(-(p-j)\zeta_{j+1}+ P_{j}(\zeta_2, \ldots,\zeta_j)\right)$ for any $2\leq j <p$, where $P_j$ is a polynomial with real coefficients.
\end{proof}
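As for Lemma~\ref{lem:holconj}, the sign assertion admits a short verification; the following sketch uses the standing hypotheses of this section ($\Real(A_j)=0$ for $j<r$, $\Real(A_r)<0$ along a node direction, and $r<p$):

```latex
% Sketch. Write A_0=i\alpha with \alpha=\Imag(A_0)\neq0. Solving
% A_r=A_0\left(-(p-r)\zeta_{r+1}+P_r(\zeta_2,\dots,\zeta_r)\right) with P_r real,
\[
\Imag(\zeta_{r+1})=-\frac{1}{p-r}\,\Imag\!\left(\frac{A_r}{A_0}\right)
=\frac{\alpha\,\Real(A_r)}{(p-r)\,|A_0|^2},
\qquad
\Imag(A_0)\,\Imag(\zeta_{r+1})=\frac{\alpha^2\,\Real(A_r)}{(p-r)\,|A_0|^2}<0,
\]
% since \Real(A_r)<0 along a node direction and p-r>0 (recall r<p).
```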
\begin{lemma}\label{lem:tangency2}
Let $(x_0,y_0) \in S^m_{d,e,\varepsilon}$ and set $(x_j,y_j)=F^j(x_0,y_0)$ for any $j \geq 0$. Then
$$
\lim_{j\to\infty} |\mu|^j\left|\frac{E(x_0)^{-1}E(x_j)}{x_j^l}\right|=0
$$
for any $l\geq 0$.
\end{lemma}
\begin{proof}
Assume first that $|\mu|<1$. From equation~\eqref{eq:ExEF1x}, we obtain
$$\mu E(x_0)^{-1}E(x_1)=\mu\exp\left(x_0^kA(x_0)\right)+\theta(x_0),$$
where $|\theta(x_0)|\le K|x_0|^{k+p+1}$ for some $K>0$. Then, $\left|\mu E(x_0)^{-1}E(x_1)\right|\le \delta$ for some $\delta<1$, if $\varepsilon$ is small enough. Hence,
$$|\mu|^j\left|\frac{E(x_0)^{-1}E(x_j)}{x_j^l}\right|\le \delta^j\frac1{|x_j|^l},$$
which tends to 0 when $j\to\infty$: indeed, since $\lim_{j\to\infty}(k+p)jx_j^{k+p}=1$, we have $|x_j|^{-l}=O(j^{l/(k+p)})$, which grows only polynomially in $j$, while $\delta^j$ decays exponentially.
Assume now that $|\mu|=1$. We define the set $\widetilde{R}_{d,e,\varepsilon}\subseteq R_{d,e,\varepsilon} $ as follows. Let $\zeta(x)=x+\sum_{j\ge2}\zeta_j x^j$ be the diffeomorphism of Lemma~\ref{lem:merconj}. If $r=0$, then $
\widetilde R_{d,e,\varepsilon}=R_{d,e,\varepsilon}$. If $r\ge1$ and $\Imag(A_0)>0$, then
$$
\widetilde R_{d,e,\varepsilon}=R_{d,e,\varepsilon}\cap \{x\in\mathbb C: \Imag (x)<\widetilde e\Real (x)^{r+1}\},
$$
where $0<\widetilde e< -\Imag(\zeta_{r+1})$. If $r\ge1$ and $\Imag(A_0)<0$, then
$$
\widetilde R_{d,e,\varepsilon}=R_{d,e,\varepsilon}\cap\{x\in\mathbb C:\Imag (x)>-\widetilde d\Real(x)^{r+1}\},
$$
where $\Imag (\zeta_{r+1})>\widetilde d>0$. Notice that, by Lemma~\ref{lem:tangency}, $x_j\in \widetilde R_{d,e,\varepsilon}$ for $j$ sufficiently big.
If $r=0$, then we have
$$
|E(x)|\leq\exp\left(\frac{\mathrm{Re}(A_0)}{2p} \frac{1}{|x|^{p}}\right)
$$
for each $x\in\widetilde R_{d,e,\varepsilon}$, if $d,e,\varepsilon$ are small enough, and then $\lim_{j\to\infty} |E(x_j)/x_j^l|=0$ for any $l\geq 0$.
If $r>0$, then thanks to Lemma~\ref{lem:merconj} it suffices to show
$$
\lim_{x\to0\atop x\in\zeta(\widetilde R_{d,e,\varepsilon})}\left|\frac{E(\zeta^{-1}(x))}{\zeta^{-1}(x)^l}\right|=0
$$
for any $l\geq 0$. Notice that $E(\zeta^{-1}(x))=\exp\left(A_0/(px^p)- A_p \log x+\nu(x)\right)$ where $\nu$ is a holomorphic function defined in a neighborhood of $0$. Hence it suffices to prove
$$
\lim_{x\to0\atop x\in\zeta(\widetilde R_{d,e,\varepsilon})}\left|\frac{\exp(A_0/(px^p))}{x^l}\right| =0
$$
for any $l\geq 0$. We have
$$
\left|\exp\left(\frac{A_0}{px^p} \right)\right|
= \exp\left(\Real\left(\frac{A_0}{p x^p} \right)\right) = \exp\left(\frac{1}{p|x|^{2p}}\Real(\overline{A_0}x^p) \right).
$$
The inequality $\Real(\overline{A_0}x^p)\leq -c|x|^{p+r}$ holds in a neighborhood of $0$ in $\zeta (\widetilde R_{d,e,\varepsilon})$ for some $c>0$, arguing as in the proof of Lemma~\ref{lem:est}. Since
$$
\left|\exp\left(\frac{A_0}{p x^p}\right) \frac{1}{x^l} \right| \leq \exp\left(\frac{-c}{p|x|^{p-r}}\right) \frac{1}{|x|^l},
$$
which tends to 0 when $x\to0$, we obtain $\lim_{j\to\infty} |E(x_j)/x_j^l|=0$ for any $l\geq 0$.
\end{proof}
Consider $(x_0,y_0)\in S^m_{d,e,\varepsilon}$ and denote $(x_j,y_j)=F^j(x_0,y_0)$ for $j \geq 0$. Let us prove that the orbit $\{(x_j,y_j)\}$ is asymptotic to $\Gamma$. Recall that we are considering coordinates $(x,y)=(\xx m,\yy m)$ for which the order of contact of $\Gamma$ with the $x$-axis is at least $k+p+m$. In other words, if $\gamma(s)=(s,\gamma_2(s))$ is a parametrization of $\Gamma$, then $\gamma_2$ is at least of order $k+p+m$. We will show that, given any $N\ge m+1$, we have
$$\left|y_j-J_{k+p+N-1} \gamma_2(x_j)\right|<|x_j|^{N+1}$$
if $j$ is big enough. If we work in the coordinates $(\xx N,\yy N)$ given by $(\xx N,\yy N)=\left(\xx m, \yy m-J_{k+p+N-1}\gamma_2(\xx m)\right)$, that we will still denote $(x,y)$ for simplicity, we need to show that
$\left|y_j\right|<|x_j|^{N+1}$ if $j$ is big enough. Observe that, since the order of $\gamma_2(s)$ is at least $k+p+m$ in the coordinates $(\xx m,\yy m)$, in the new coordinates $(x,y)$ we have $|y_j|<2|x_j|^{p+1}$.
Note that, because of Lemma~\ref{lem:tangency}, $x_j\in R_{d,e,\varepsilon}$ for any $d,e,\varepsilon>0$, if $j$ is big enough. If we denote
$$D_{d,e,\varepsilon}=\left\{(x,y)\in\mathbb C^2: x\in R_{d,e,\varepsilon}, |y|<|x|^{N+1}\right\},$$
then, with the same proof as that of Proposition~\ref{pro:basin}, we have that $F(x,y)\in D_{d,e,\varepsilon}$ for any $(x,y)\in D_{d,e,\varepsilon}$, if $d,e,\varepsilon>0$ are small enough. Therefore, it suffices to show that $(x_j,y_j)\in D_{d,e,\varepsilon}$ for infinitely many indices $j\in\mathbb N$. Suppose this last property does not hold. Then, up to replacing $(x_0,y_0)$ with one of its iterates, there exists a domain
$$U=\{(x,y)\in\mathbb C^2:|x|^{N+1}\leq |y|<2|x|^{p+1}\}$$
such that $(x_j,y_j) \in U$ for any $j \geq 0$.
Let us see how $y/E(x)$ changes under iteration. We set
$$
H(x,y)=y-\mu^{-1}E(x)E(F_1(x,y))^{-1}F_2(x,y).
$$
As in the proof of Theorem~\ref{the:saddle}, we have
$$
1-\mu^{-1}\left( \frac{F_2(x,y)}{E(F_1(x,y))}\right) {\left(\frac{y}{E(x)} \right)}^{-1}
=
\frac{H(x,y)}{y}= O(x^{k+p+1},x^ky, x^{k+p+m}y^{-1}),
$$
so $H(x,y)/y=O(x^{k+p+1})$ for every $(x,y)\in U$. Therefore we obtain
$$
\left| \frac{y_{j+1}}{E(x_{j+1})} \right| = |\mu|\left(1+ O(x_j^{k+p+1})\right) \left| \frac{y_j}{E(x_j)} \right|
$$
for any $j \geq 0$. This leads us to
$$
\left| \frac{y_j}{E(x_j)} \right| = |\mu|^j\left(1+ O(x_0)\right) \left| \frac{y_0}{E(x_0)}\right|
$$
for any $j \geq 0$ by Lemma~\ref{lem:sumofx_j}. Then we obtain
$$
\left|\frac{y_j}{x_j^{N+1}} \right|=\left| \frac{y_j}{E(x_j)} \right| \left| \frac{E(x_j)}{x_j^{N+1}} \right|
\leq
2|\mu|^j\left| \frac{y_0}{E(x_0)}\right| \left|\frac{E(x_j)}{x_j^{N+1}} \right|
$$
for any $j \geq 0$. Applying Lemma~\ref{lem:tangency2}, we obtain that $\lim_{j \to \infty} y_j/x_j^{N+1}=0$, contradicting the fact that $(x_j,y_j) \in U$ for any~${j\geq 0}$. This shows that every orbit in $S^m_{d,e,\varepsilon}$ is asymptotic to $\Gamma$ and ends the proof of Theorem~\ref{the:node}.
\begin{remark}\label{rem:asymp-basin}
The open stable manifold $S$ obtained in Theorem \ref{the:node} is not asymptotic to $\Gamma$.
Let us see that we can replace $S$ with another stable manifold that is asymptotic to $\Gamma$ and
contains eventually every orbit $\{(x_n,y_n)\}$ asymptotic to $\Gamma$ such that $\{x_n\}$ has $\ell$
as a tangent direction. Denote
$$U_{j}= S\cap \left\{ (x,y)\in\mathbb C^2: \varepsilon/ 2^{j+2} <|x| < \varepsilon/ 2^j\right\}$$
for $j \geq 0$. We have $U_{j} \cap U_{j+1} \neq \emptyset$ by construction
and $F(U_{j}) \cap U_{j} \neq \emptyset$ because $F(S)\subset S$ and $|x\circ F(x,y)-x|\le c|x|^{k+p+1}$ for some $c>0$ and for all $(x,y)\in S$.
For any $N\ge 1$, we define
$$V_N = \left\{(x,y)\in\mathbb C^2: \left|y-J_N\gamma_2(x)\right|<|x|^N\right\},$$
where $\gamma(s)=(s,\gamma_2(s))$ is a parametrization of $\Gamma$.
Fix $j \geq 0$.
There exists
$k_j \in {\mathbb N}$
such that
$$F^{k} (x,y) \in V_{1} \cap \dots \cap V_{j+1}$$
for all $(x,y) \in U_j$ and $k \geq k_{j}$. The property is clear in a neighborhood of any single point
$(x,y) \in \overline{U_j}$, and hence it holds for every point of $U_j$ by compactness of $\overline{U_j}$.
We define $W_j = \cup_{k=k_j}^{\infty} F^{k} (U_j)$ for $j \geq 0$ and $W= \cup_{j=0}^{\infty} W_j$.
By construction the set $W$ is an open set.
Moreover $F(U_{j}) \cap U_{j} \neq \emptyset$ implies that $W_j$ is connected.
The sets $W_j$ and $W_{j+1}$ have common points for any $j \geq 0$
since $U_{j} \cap U_{j+1} \neq \emptyset$. Thus $W$ is a connected open set.
Finally we claim that given any $N \geq 1$, a neighborhood of $0$ in $W$ is contained in $V_{N}$. Fix $N \geq 1$.
By compactness of $\overline{U_j}$ for $j \geq 0$ we obtain that a neighborhood of $0$ in
$W_0 \cup \dots \cup W_{N-2}$ is contained in $V_{N}$. By construction
$\cup_{k=N-1}^{\infty} W_k$ is contained in $V_{N}$ and hence a neighborhood of $0$ in $W$ is contained
in $V_{N}$. By the previous discussion the set $W$ is asymptotic to $\Gamma$. Now, given any orbit $\{ (x_n,y_n) \}$ satisfying the hypotheses in Theorem \ref{the:node}
we know that $(x_j,y_j)$ belongs to $S$ for $j$ sufficiently big.
This implies that there exist $j_0, k_0 \in {\mathbb N}$ such that
$(x_{j_0},y_{j_0}) \in U_{k_0}$. Clearly the orbit $\{ (x_n,y_n) \}$ is eventually contained in $W_{k_0}$ and then
in $W$.
\end{remark}
\section{$\Gamma$-parabolic case: conclusion}\label{sec:conclusion}
As a consequence of the results obtained in Sections~\ref{sec:reduction}, \ref{sec:saddle-direction} and \ref{sec:node-direction}, we have the following result, from which Theorem~\ref{th:main2-parabolic} and Theorem~\ref{th:generalizing-Lopez-Sanz} follow.
\begin{theorem}
Consider $F\in\Diff\Cd2$ and let $\Gamma$ be an invariant formal curve of $F$, such that $(F|_\Gamma)'(0)=1$ and $F|_\Gamma\neq\id$. Denote by $r+1$ the order of $F|_\Gamma$. Then, for any sufficiently small neighborhood of the origin, there exists a family $\{S_1,\dots,S_r\}$ of connected and simply connected mutually disjoint stable manifolds of pure positive dimension where every orbit is asymptotic to $\Gamma$ and such that $S_1\cup\dots\cup S_r$ contains the germ of any orbit of $F$ asymptotic to $\Gamma$. If $\dim(S_j)=1$ then $S_j$ is asymptotic to $\Gamma$ and if $\dim(S_j)=2$ then $S_j$ can be chosen to be asymptotic to $\Gamma$. Moreover, if $\spec(DF(0))=\{1,\mu\}$, with $|\mu|\ge1$, then at least $\lceil r/4 \rceil$
stable manifolds $S_j$ have dimension one, where $\lceil r/4 \rceil$ is the least integer greater than or equal to $r/4$.
\end{theorem}
\begin{proof}
Let $\phi$ be a sequence of holomorphic changes of coordinates and blow-ups such that the pair $(\wt F,\wt\Gamma)$ is reduced, where $\wt F$ is the transform of $F$ and $\wt\Gamma$ is the strict transform of $\Gamma$. Denote by $k+1$ and by $k+p+1$ the orders of $F$ and of $F|_\Gamma$, respectively. Notice that $k+p=r$, since the restriction $F|_\Gamma$ is preserved under blow-ups. Since $\phi(\wt S)$ is a stable manifold of $F$ for every stable manifold $\wt S$ of $\wt F$, the existence of the family $\{S_1,\dots,S_r\}$ of pairwise disjoint connected and simply connected stable manifolds where every orbit is asymptotic to $\Gamma$ follows immediately from Theorems~\ref{the:saddle} and \ref{the:node}. The one-dimensional stable manifolds are asymptotic to $\Gamma$, by Theorem~\ref{the:saddle}, and the two-dimensional ones can be chosen to be asymptotic to $\Gamma$, by Remark~\ref{rem:asymp-basin}.
Let $O$ be an orbit of $F$ asymptotic to $\Gamma$. In some coordinates $(x,y)$, the transform $\wt F$ of $F$ satisfies $x\circ \wt F(x,y)=x-x^{k+p+1}+O(x^{k+p+1}y,x^{2k+2p+1})$. Since $\phi^{-1}(O)=\{(x_n,y_n)\}$ is an orbit of $\wt F$ asymptotic to $\wt \Gamma$, we have that $|y_n|\le|x_n|$ if $n$ is big enough, so, arguing as in Leau-Fatou Flower Theorem, the sequence $\{x_n\}$ has one of the attracting directions of $(\wt F,\wt \Gamma)$ as tangent direction. Applying Theorems~\ref{the:saddle} and \ref{the:node}, we conclude that $O$ is eventually contained in $S_1\cup\dots \cup S_{k+p}$.
To complete the proof of the Theorem, assume that $\spec(DF(0))=\{1,\mu\}$, with $|\mu|\ge1$. Observe that, since the inner eigenvalue is 1, this condition is stable under blow-up. To prove that in this case at least one of the stable manifolds $S_1,\dots, S_{k+p}$ has dimension one, it suffices to show that at least one of the attracting directions of $(\wt F,\wt \Gamma)$ is a saddle direction, by Theorem~\ref{the:saddle}. If $|\mu|>1$ or $|\mu|=1$ and $p=0$, every attracting direction is a saddle direction, so every $S_j$ has dimension one. Assume that $|\mu|=1$ and $p\ge1$, and let $\log\mu+x^k\left(A_0+A_1x+\dots+A_px^p\right)$ be the infinitesimal principal part of $(\wt F,\wt\Gamma)$. Notice that $A_0\neq 0$.
We denote by $a$ the number of attracting directions $\xi \mathbb R^+$ such that
$\Real (\xi^{k} A_0) >0$. The number $a$ is a lower bound for the number of saddle directions, and is equal to
$\sharp \{ 0 \leq j < r : \Real ( e^{\frac{2 \pi i j k}{r}} A_0) >0 \}$.
We denote $g= \gcd (r, k)$, $r' = r/g$ and $k' = k/g$. Notice that $r'\geq 2$ since $p\geq 1$. Also, since $r'$ and $k'$ are coprime, $\eta$ is a root of unity of order $r'$ if and only if so is $\eta^{k'}$. Hence, we obtain
\[ a = g \, \sharp \{ 0 \leq j < r' : \Real ( e^{\frac{2 \pi i j k'}{r'}} A_0) >0 \}
=g \, \sharp \{ 0 \leq j < r' : \Real ( e^{\frac{2 \pi i j }{r'}} A_0) >0 \}. \]
Suppose $r' \neq 2$. There are at least $\lceil r'/4\rceil$ roots of unity $\xi$ of order $r'$
such that $ \Real ( \xi A_0) >0$.
Hence we obtain $a \geq g r'/4 = r/4$.
Suppose $r' =2$. This case happens if and only if $k=p$.
Hence either $\Real (A_0)\neq0$ (and then there are $k$ one-dimensional stable manifolds and $k$ two-dimensional ones) or $\Real(\xi^k A_0)=0$ for any attracting direction $\xi\mathbb R^+$. In this last case, if $A_j=0$ for all $1\le j\le p-1$ then every attracting direction is a saddle direction, so every $S_j$ has dimension one. Otherwise, we consider the first index $t$, with $1\le t \le p-1$, such that $A_t \neq0$.
Analogously as above there are at least
$\sharp \{ 0 \leq j < r : \Real ( e^{\frac{2 \pi i j (k+t)}{r}} A_{t}) >0 \}$ saddle directions.
We denote $g' = \gcd (r,k+t) = \gcd (2k, k+t)$.
Since $g'$ divides both $2k$ and $k+t$, with $k<k+t<2k$ and $1\le t\le k-1$, we have $g'<k$, so $r/g' = 2k/g' > 2$. Hence we can apply the argument in the previous paragraph to show that
there are at least $g' (r/g')/4 = r/4$ saddle directions.
\end{proof}
\section{Conclusions}
\label{sec:discussion}
Egocentric pose estimation is challenging due to self-occlusions and distorted views. To address these challenges, we design a spatio-temporal transformer-based 3D HPE model powered by learnable parameters --- the \emph{feature map tokens} --- which enable spatio-temporal attention and significantly reduce errors caused by self-occlusion.
Our proposed Ego-STAN models achieve significant performance improvements on the \xR{-EgoPose} dataset by consistently outperforming the current SOTA, while simultaneously reducing the number of trainable parameters, making them suitable for cutting-edge motion tracking applications such as activity recognition, surgical training, and immersive \xR{} applications. Our future work will investigate the role of each component via extensive ablation studies.
\section{Ego-STAN: Ego-centric Spatio-Temporal Self-Attention Network}
\edit{We now provide a brief overview of the proposed \textbf{Ego}centric \textbf{S}patio-\textbf{T}emporal Self-\textbf{A}ttention \textbf{N}etwork (Ego-STAN) model, which jointly address the self-occlusion and the distortion introduced by the ego-centric views.
Ego-STAN (shown in Fig.~\ref{fig:Ego-STAN}) consists of four modules. Of these, the goals of the \textbf{feature extraction} and \textbf{spatio-temporal Transformer} modules are to address the self-occlusion problem by aggregating information from multiple time steps, while the \textbf{heatmap reconstruction} and \textbf{3D pose estimator} modules aim to accomplish uncertainty saturation with lighter 2D heatmap-to-3D lifting architectures.}
\begingroup
\renewcommand{\arraystretch}{1.25}
\begin{table*}[t]
\centering
\caption{\edit{Quantitative evaluation against the popular outside-in 3D HPE work of Martinez \textit{et al.} \cite{martinez2017simple} and the SOTA egocentric 3D HPE work of Tome \textit{et al.} \cite{tome2019xr}. Our proposed Ego-STAN variations have the highest accuracy across nine actions with the feature map token (FMT) variant having the lowest overall MPJPE (lower is better). We report our results as the average over three different random seeds. Our proposed three variants have a very low standard deviation of 2.06, 0.04, and 0.07 for Avg, Slice and FMT respectively.}}
\vspace{-2pt}
\resizebox{\textwidth}{!}
{\begin{tabular}{ll|cccccccccc}
\toprule
\textbf{Approach} & \textbf{\parbox{1.7cm}{Evaluation \\ error (mm)}} & \textbf{Game} & \textbf{Gest.} & \textbf{Greet} & \textbf{\parbox{1cm}{Lower \\ Stretch}} & \textbf{Pat} & \textbf{React} & \textbf{Talk} & \textbf{\parbox{1cm}{Upper \\ Stretch}} & \textbf{Walk} & \textbf{All}\\
\midrule
& Upper body & 58.5 & 66.7 & 54.8 & 70.0 & 59.3 & 77.8 & 54.1 & 89.7 & 74.1 & 79.4 \\
Martinez \cite{martinez2017simple}& Lower body & 160.7 & 144.1 & 183.7 & 181.7 & 126.7 & 161.2 & 168.1 & 159.4 & 186.9 & 164.8\\
& Average & 109.6 & 105.4 & 119.3 & 125.8 & 93.0 & 119.7 & 111.1 & 124.5 & 130.5 & 122.1\\
\hline
\multirow{3}{*}{\parbox{2.0cm}{Tome \cite{tome2019xr} \\ single-branch}}& Upper body & 114.4 & 106.7 & 99.3 & 90.0 & 99.1 & 147.5 & 95.1 & 119.0 & 104.3 & 112.5 \\
& Lower body & 162.2 & 110.2 & 101.2 & 175.6 & 136.6 & 203.6 & 91.9 & 139.9 & 159.0 & 148.3\\
& Average & 138.3 & 108.5 & 100.3 & 133.3 & 117.8 & 175.6 & 93.5 & 129.0 & 131.9 & 130.4\\
\hline
\multirow{3}{*}{\parbox{2.0cm}{Tome \cite{tome2019xr} \\ dual-branch}}& Upper body & 48.8 & 50.0 & 43.0 & 36.8 & 48.6 & 56.4 & 42.8 & 49.3 & 43.2 & 50.5 \\
& Lower body & 65.1 & 50.4 & 46.1 & 65.2 & 70.2 & 65.2 & 45.0 & 58.8 & 72.2 & 65.9\\
& Average & 56.0 & 50.2 & 44.6 & 51.5 & 59.4 & 60.8 & 43.9 & 53.9 & 57.7 & 58.2\\
\hline
\multirow{3}{*}{\parbox{2.0cm}{\textbf{Ego-STAN} \\ \textbf{Slice (Ours)}}} & Upper body & {27.2} & {30.0} & {36.3} & {24.0} & {21.3} & {25.4} & {25.3} & {34.2} & {25.5} & {30.2} \\
& Lower body & {38.5} & \textbf{30.9} & \textbf{33.2} & {54.5} & \textbf{32.1} & {35.6} & \textbf{29.5} & {64.0} & \textbf{55.9} & {55.5}\\
& Average & {32.9} & {30.4} & {34.8} & {39.2} & \textbf{26.7} & {30.5} & \textbf{27.4} & {49.1} & \textbf{40.7} & {42.8} \\
\hline
\multirow{3}{*}{\parbox{2.0cm}{\textbf{Ego-STAN} \\ \textbf{Avg. (Ours)}}} & Upper body & \textbf{25.4} & \textbf{26.7} & \textbf{31.2} & {25.9} & \textbf{20.7} & \textbf{23.3} & \textbf{23.9} & {33.7} & {26.7} & {29.9} \\
& Lower body & \textbf{38.1} & {32.7} & {35.0} & {54.7} & {34.6} & \textbf{34.3} & {31.2} & {61.2} & {57.2} & {54.3}\\
& Average & \textbf{31.7} & \textbf{29.7} & \textbf{33.1} & {40.3} & {27.7} & \textbf{28.8} & {27.5} & {47.4} & {42.0} & {42.1} \\
\hline
\multirow{3}{*}{\parbox{2.0cm}{\textbf{Ego-STAN} \\ \textbf{FMT (Ours)}}} & Upper body & {25.8} & {28.7} & {35.4} & \textbf{23.4} & {22.6} & {24.1} & {25.9} & \textbf{30.9} & \textbf{25.2} & \textbf{28.2} \\
& Lower body & {40.3} & {34.5} & {38.3} & \textbf{54.4} & {35.9} & {35.0} & {33.4} & \textbf{57.6} & {56.5} & \textbf{52.6}\\
& Average & {33.1} & {31.6} & {36.9} & \textbf{38.9} & {29.2} & {29.6} & {29.7} & \textbf{44.3} & {40.9} & \textbf{40.4} \\\hline
\end{tabular}}
\label{tab:mainresults}
\vspace{-10pt}
\end{table*}
\endgroup
\vspace{5pt}
\noindent\textbf{3.1.~Feature extraction module.}
\edit{The feature extraction module in \figref{fig:Ego-STAN} extracts semantically rich feature maps from ego-centric images via multiple non-linear convolutional filters. Building on a ResNet-101 \cite{he2016deep} backbone for extracting image-level features, we also introduce \emph{feature map tokens} (FMT), a specialized set of learnable parameters utilized by our Transformer to connect valuable pose information across time-steps. The FMT consists of multiple feature map points, which are formed by a weighted sum across spatial and temporal dimensions, corresponding to a particular location in an image for an intermediate 2D heatmap representation. Therefore, each unit of the FMT $\mathbf{K}$ learns how to represent accurate semantic features for the heatmap reconstruction module. By combining information from different time steps, Ego-STAN accomplishes 2D heatmap estimation even in challenging cases where views suffer from extreme occlusions.}
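As a rough, framework-agnostic sketch (not the authors' implementation --- the token layout, layer sizes, and all numeric values below are invented for illustration), one FMT unit can be viewed as a learnable query vector that attends over all $T\times S$ spatio-temporal feature tokens via scaled dot-product attention and returns their weighted sum:

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def fmt_attention(fmt_query, tokens):
    """One FMT unit: a learnable query vector (length D) attends over
    all T*S feature tokens (each length D) and returns their weighted sum."""
    d = len(fmt_query)
    scores = [sum(q * v for q, v in zip(fmt_query, tok)) / math.sqrt(d)
              for tok in tokens]
    weights = softmax(scores)
    out = [sum(w * tok[i] for w, tok in zip(weights, tokens))
           for i in range(d)]
    return out, weights

# Toy sizes (all hypothetical): T = 3 frames, S = 4 positions, D = 2 channels.
tokens = [[0.1 * (t + s), 0.2 * t] for t in range(3) for s in range(4)]
query = [1.0, -0.5]  # one learnable FMT parameter vector (assumed values)
out, weights = fmt_attention(query, tokens)
```

In the actual model one such weighted sum is learned per spatial location of the intermediate heatmap representation, so the FMT output forms a full feature map rather than a single vector.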
\noindent\textbf{3.2. Spatio-temporal Transformer.}
\edit{Self-attention learns to map the pairwise relationships between \emph{input tokens} in the sequence. This is especially important because it allows the feature map token to look across all of the input tokens in the sequence, which are distributed spatially and temporally, and to learn where to pay attention. As a result, feature map tokens create an accurate semantic map for heatmap reconstruction (further discussed in Sec. 3.3).} \edit{To assess the impact of FMT, we consider two variants of the spatio-temporal model without FMT. Since we are interested in estimating the 3D pose of the current frame given a sequence of frames from the past, the first variant, called \emph{slice}, takes the indices of the tokens that correspond to the current frame in the token sequence. The second variant, \emph{avg}, reduces the temporal dimensionality by taking the mean of spatially co-located but temporally separated tokens.}
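The two FMT-free reductions can be sketched on a toy $(T, S, D)$ token array; the shapes and values here are illustrative only, not the actual model dimensions:

```python
def slice_tokens(tokens_tsd):
    """'Slice' variant: keep only the tokens of the current (last) frame."""
    return tokens_tsd[-1]

def avg_tokens(tokens_tsd):
    """'Avg' variant: average spatially co-located tokens across time steps."""
    T = len(tokens_tsd)
    S = len(tokens_tsd[0])
    D = len(tokens_tsd[0][0])
    return [[sum(tokens_tsd[t][s][d] for t in range(T)) / T
             for d in range(D)]
            for s in range(S)]

# Toy token array: T = 3 frames, S = 2 spatial positions, D = 2 channels.
tokens = [[[float(t), float(t + s)] for s in range(2)] for t in range(3)]
sliced = slice_tokens(tokens)  # S tokens of the current frame (t = 2)
avged = avg_tokens(tokens)     # S tokens, temporal mean per position
```

Both reductions output $S$ tokens, which is what the downstream heatmap reconstruction module consumes in place of the FMT output.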
\noindent\textbf{3.3. Heatmap reconstruction module.}
\label{sec:heatmaprecon}
\edit{ We leverage deconvolution layers to reconstruct ground truth 2D heatmaps, $\mathbf{M}\in \mathbb{R}^{h \times w \times J}$, for each major joint ($J$) in the human body, similar to the approach of \cite{tome2019xr}. The 2D heatmap reconstruction module, having an estimated 2D heatmap $\mathbf{\widehat{M}} \in \mathbb{R}^{h \times w \times J}$, is trained via a mean square error loss:}
\begin{equation}
\mathcal{L}_{2D} (\mathbf{M}, \mathbf{\widehat{M}}) = \texttt{MSE}(\mathbf{M}, \mathbf{\widehat{M}}).
\label{eq:L2D}
\end{equation}
\noindent\textbf{3.4. 3D pose estimation module.}
\label{sec:hm2pose}
\edit{We leverage a simple convolution block followed by linear layers to lift the 2D heatmaps to 3D poses. As opposed to the SOTA egocentric pose estimator \cite{tome2019xr}, which uses a dual branched auto-encoder structure aimed at preserving the uncertainty information from 2D heatmaps, we (somewhat surprisingly) find that such a complex auto-encoder design is in fact not required, and our simple architecture accomplishes this task more accurately.} \edit{To estimate the 3D pose using the reconstructed 2D heatmaps \eqref{eq:L2D}, we use three different types of loss functions -- i) squared $\ell_2$-error $\mathcal{L}_{\ell_2}(\cdot)$, ii) cosine similarity $\mathcal{L}_{\theta}(\cdot)$, and iii) $\ell_1$-error $\mathcal{L}_{\ell_1}(\cdot)$ between $\mathbf{\widehat{P}}$ and $\mathbf{P}$. These loss functions impose the closeness between $\mathbf{P}$ and $\mathbf{\widehat{P}}$ in multiple ways. As a result, our 3D loss for regularization parameters $\lambda_{\theta}$ and $\lambda_{\ell_1}$ is}
\begin{equation}
\mathcal{L}_{3D}(\mathbf{P}, \mathbf{\widehat{P}}) = \mathcal{L}_{\ell_2}(\mathbf{P},\mathbf{\widehat{P}}) + \lambda_{\theta}\mathcal{L}_\theta(\mathbf{P},\mathbf{\widehat{P}})+\lambda_{\ell_1}\mathcal{L}_{\ell_1}(\mathbf{P},\mathbf{\widehat{P}})
\label{eq:L3D}
\end{equation}
\vspace*{-10pt}
\begin{equation}
\begin{gathered}
\mathcal{L}_{\ell_2}(\mathbf{P},\mathbf{\widehat{P}}) := \lVert \mathbf{\widehat{P}} - \mathbf{P} \rVert^{2}_2 ,
\mathcal{L}_\theta(\mathbf{P},\mathbf{\widehat{P}}) := \textstyle\sum_{i=1}^{J} \tfrac{\langle
\mathbf{P}_i,\mathbf{\widehat{P}}_i\rangle}{\lVert\mathbf{P}_i\rVert_2\lVert\mathbf{\widehat{P}}_i\rVert_2},\\
\text{and~} \mathcal{L}_{\ell_1}(\mathbf{P}, \mathbf{\widehat{P}}) := \textstyle\sum_{i=1}^{J} \lVert \mathbf{\widehat{P}}_i - \mathbf{P}_i \rVert_1. \notag
\end{gathered}
\end{equation}
\edit{Thus, the overall loss function to train Ego-STAN comprises the 2D heatmap reconstruction loss and the 3D loss, as shown in \eqref{eq:L2D} and \eqref{eq:L3D}, respectively.}
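For concreteness, the three terms of \eqref{eq:L3D} can be written out directly on toy data (the joint coordinates and the unit regularization weights below are invented; the exact sign and weighting of the cosine term used in training is a hyperparameter choice that we do not fix here):

```python
import math

def l2_sq(P, Phat):
    # Squared l2-error summed over all joints and coordinates.
    return sum((ph - p) ** 2
               for pj, phj in zip(P, Phat)
               for p, ph in zip(pj, phj))

def cos_term(P, Phat):
    # Sum over joints of the cosine similarity between P_i and Phat_i.
    return sum(sum(a * b for a, b in zip(pj, phj))
               / (math.hypot(*pj) * math.hypot(*phj))
               for pj, phj in zip(P, Phat))

def l1_term(P, Phat):
    # l1-error summed over all joints and coordinates.
    return sum(abs(ph - p)
               for pj, phj in zip(P, Phat)
               for p, ph in zip(pj, phj))

def loss_3d(P, Phat, lam_theta=1.0, lam_l1=1.0):
    return (l2_sq(P, Phat)
            + lam_theta * cos_term(P, Phat)
            + lam_l1 * l1_term(P, Phat))

# Toy example with J = 2 joints (coordinates are made up):
P    = [(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
Phat = [(2.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
total = loss_3d(P, Phat)
```

Each term penalizes the discrepancy between $\mathbf{P}$ and $\mathbf{\widehat P}$ in a different geometry: squared magnitude, angular alignment per joint, and absolute coordinate error.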
\section{Experiments}
\edit{We now compare the performance of Ego-STAN against the SOTA egocentric HPE methods. We consider the xR-EgoPose dataset \cite{tome2019xr}, which is, to the best of our knowledge, the only dataset that contains sequential egocentric views. We report the mean per-joint position error (MPJPE) metric.}
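MPJPE is simply the Euclidean distance between estimated and ground-truth 3D joint positions, averaged over joints (and, in practice, over frames); a minimal sketch with invented coordinates:

```python
import math

def mpjpe(pred, gt):
    """Mean per-joint position error: the Euclidean distance between
    predicted and ground-truth 3D joints, averaged over all joints."""
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return sum(dists) / len(dists)

# Hypothetical 3-joint pose, coordinates in millimetres (made up):
gt   = [(0.0, 0.0, 0.0), (0.0, 100.0, 0.0), (0.0, 0.0, 50.0)]
pred = [(3.0, 4.0, 0.0), (0.0, 100.0, 0.0), (0.0, 0.0, 60.0)]
err = mpjpe(pred, gt)  # (5 + 0 + 10) / 3 = 5.0 mm
```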
\tabref{tab:mainresults} shows the MPJPE achieved by Ego-STAN and its variants on the xR-EgoPose Test Set, as compared with the SOTA egocentric HPE models proposed by Tome \textit{et al.}~\cite{tome2019xr} (a dual-branch autoencoder model, and its single-branch variant), and a popular outside-in baseline \cite{martinez2017simple}. Ego-STAN variants perform the best across different actions as shown in \tabref{tab:mainresults}, with Ego-STAN FMT achieving the best average performance. Ego-STAN FMT outperforms the dual-branch model proposed in \cite{tome2019xr} by a substantial \textbf{17.8~mm (30.6\%)}, averaged over all actions and joints (\tabref{tab:mainresults}).
For a qualitative comparison, we show the estimation results on a few highly self-occluded frames in \figref{fig:qualitative}, which further demonstrates the superior properties of Ego-STAN FMT over the SOTA egocentric HPE methods. Moreover, as compared to 141 million parameters in SOTA, Ego-STAN only requires 110 million parameters, resulting in a 22\% reduction while achieving significant improvement in the pose estimation performance.
\section{Introduction}
\label{sec:intro}
\edit{Virtual immersive technologies, such as augmented, virtual, and mixed reality environments (\xR{}), which rely on user-centric customizations and viewpoint rendering, have underscored the need for accurate human pose estimation (HPE) to support a wide range of applications including medical training \cite{vaughan2020scoring} and architecture \cite{grepon2021architectural}, among others.
Although accurate 3D HPE can be performed using motion capture systems by attaching sensors to major human joints, this approach may not be suitable for real-world settings, due to the cost and additional infrastructural needs \cite{h36m_pami}. To this end, image-based 3D HPE has emerged as a practical alternative \cite{li2015maximum, sun2017compositional, tekin2016structured, pavlakos2018ordinal}.}
\begin{figure*}
\centerline{\includegraphics[width=0.9\textwidth]{seq_hm_direct_horizontal_SR.drawio.pdf}}
\caption{\textbf{Ego-STAN Overview}. The proposed Ego-STAN model captures the dynamics of human motion in ego-centric images using Transformer-based spatio-temporal modeling. Our Transformer architecture leverages \emph{feature map tokens} to facilitate spatio-temporal attention to semantically rich feature maps. Our heatmap reconstruction module estimates the 2D heatmap using deconvolutions, which are used by the 3D pose estimator to estimate the 3D joint coordinates.}
\label{fig:Ego-STAN}
\vspace{-12pt}
\end{figure*}
\edit{The image-based 3D HPE literature is primarily devoted to settings where a static camera observes an entire scene \cite{chen20173d, martinez2017simple, zhou2019hemlets}; however, this modality is not ideal for applications requiring higher and more robust (low-variance) accuracies.
The egocentric modality offers some respite, allowing mobility and the flexibility to focus on the subject even in cluttered environments \cite{tome2019xr, xu2019mo, wang2021estimating}. Notwithstanding these advantages, egocentric views introduce challenges of \emph{distortion} (e.g. lower body joints are visually much smaller than the upper body joints) and self-occlusion (e.g. lower body joints heavily occluded by the upper torso); see \figref{fig:qualitative}.}
\edit{
Recent works on egocentric HPE propose using a dual-branch autoencoder-based 2D heatmap to 3D pose estimator \cite{tome2019xr}, and extra camera information \cite{9423180} from static egocentric images. However, self-occlusions are challenging to address with these image-only approaches.
}
\edit{The question we aim to address is: \textit{how can we accurately estimate 3D human pose from egocentric images while jointly addressing the distortion and self-occlusions caused by these views}? Motivated from recent works for static cameras, which leverage spatio-temporal modeling to improve HPE \cite{zheng20213d}, we build Egocentric Spatio-Temporal Self-Attention Network (Ego-STAN) which leverages \emph{feature map tokens} (FMT), a specialized spatio-temporal attention, heatmap-based representations, and 2D heatmap to 3D HPE module. Ego-STAN achieves an \textbf{overall improvement of 30.6\%} mean per-joint position error (MPJPE) compared to the SOTA \cite{tome2019xr}, while \textbf{leading to a 22\% reduction in the trainable parameters} on the $x$R-EgoPose dataset \cite{tome2019xr}; see \figref{fig:qualitative} for a comparative analysis. }
\section{Related Work}
The Mo2Cap2 dataset was one of the first large egocentric HPE synthetic datasets \cite{xu2019mo}; however, it is not amenable to spatio-temporal modeling since it only consists of static images. The xR-EgoPose dataset \cite{tome2019xr}, which offers egocentric image sequences, was introduced to mitigate issues encountered in Mo2Cap2 by improving the quality of synthetic images, reflecting more realistic settings. The authors also introduced single- and dual-branch auto-encoder structures based on 2D heatmap and 3D pose reconstruction, following which \cite{9423180} leverages extra camera parameters to mitigate the fish-eye distortion.
More recently, utilizing \cite{tome2019xr} as a submodule, GlobalPose \cite{wang2021estimating} developed a sequential variational auto-encoder-based model to address depth ambiguity and temporal instability in egocentric HPE. Ego-STAN can be used with such methods.
\section{Introduction}
Although fractals are characterized by high visual complexity, their information content is low: they can be easily generated via simple, recursive algorithms. B. Mandelbrot elaborated on the self-similar geometry of fractals and their mathematical description in his books \cite{mandelbrot1975objets, mandelbrot1982fractal, mandelbrot1987fractals}. Following this framework, M. Berry reported that the diffracted waves from fractal structures exhibit spatiotemporal intensity spiking in their linear propagation dynamics. To emphasize their uniqueness, he referred to the diffracted waves from fractals as `diffractals' \cite{berry1979diffractals}. Fractal geometries and diffractal scattering have attracted widespread attention in many branches of science, with engineering applications such as digital image processing, especially image compression \cite{jacquin1993fractal, zhao2005fractal}, and antenna design \cite{puente1998behavior, sharma2017journey, werner2003overview, werner1999fractal, maraghechi2010enhanced}. Such applications exploit a high level of information redundancy, which is organized in strongly-corrugated spatial patterns.
The implications of sparsity and redundancy in diffractals for communications systems are underexplored, and there is strong potential for their application in the area of wireless communications given the drive to increase data rates. In the past, communication networks have embraced other advanced modulation and multiplexing schemes \cite{winzer2009modulation, winzer2008advanced}. Commonly used multiplexing techniques in optical fiber communication today include space-division multiplexing (SDM) or spatial multiplexing, wavelength-division multiplexing (WDM) using disjoint frequency bins, orthogonal frequency division multiplexing (OFDM) or coherent WDM (CoWDM) using spectrally overlapping yet orthogonal subcarriers, and polarization-division multiplexing (PDM) using both orthogonal polarizations supported by a single-mode fiber for independent bit streams \cite{kaminow2013optical, winzer2009modulation}. Among these approaches, spatial multiplexing has recently drawn significant interest, as the technology is still under development \cite{li2011focus, morioka2013recent}, particularly with FSO systems.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{Figs/diffractal.png}
\caption{Concept of diffractal space-division-multiplexing (DSDM) in optical wireless communication through atmosphere turbulence. This schematic illustrates the data kernel pattern “J” and transmitted patterns with fractal order FO = 1,2,3. Only a partial off-axis portion of the far-field beam is detected and reconstructed. In this case, the received image is optically deconvolved with a lens. The FO = 3 data is reconstructed accurately.}
\label{fig:diffractal}
\end{figure*}
One potential approach for FSO spatial multiplexing uses beams with orbital angular momentum (OAM) \cite{gibson2004free, wang2012terabit, willner2015optical}. Since OAM states are mutually orthogonal, they are simultaneously transmitted or multiplexed along the same beam axis and demultiplexed at the receiver. For the same carrier frequency, the system’s {\it aggregate capacity} is equal to the number of system state modes. OAM-multiplexed systems have achieved Tbit/s-scale transmission rates over free space \cite{wang2012terabit}. Additionally, J. Wang {\it et al}. have experimentally demonstrated a free-space data link with an aggregate transmission capacity of 1.036 Pbit/s and a high spectral efficiency of 112.6 bit/s/Hz using 26 OAM modes simultaneously with other multiplexing technologies \cite{wang2014n}. However, since multiple OAM states are multiplexed along the same beam axis, coaxial propagation and reception are required, which means that coherent, OAM-multiplexed links are sensitive to misalignment compared to non-OAM, single-beam communication links \cite{willner2017recent, xie2015performance}. This is an important challenge for FSO and will become worse in the presence of atmospheric turbulence. Propagating through atmospheric turbulence, the intensity profile of Gaussian and OAM beams can be significantly corrupted, making it harder to align and track using their intensity gradient \cite{mahdieh2008numerical, zhang2020alignment}, and greater efforts are necessary to evaluate and attenuate the receiver error \cite{li2018joint, liu2019deep, ren2013atmospheric}.
Another approach to FSO, which may be used in tandem with OAM for SDM, is multiple-input multiple-output (MIMO), where multiple independent bit streams are transmitted simultaneously and multiple aperture elements are employed at the transmitter/receiver; conventional, line-of-sight (LOS) MIMO SDM systems have been claimed to outperform OAM \cite{amhoud2019oam, goldsmith2003capacity, trichili2016optical}. As a well-established technique in radio wireless systems \cite{ergen2009multiple, tse2005fundamentals}, the MIMO approach could provide capacity gains relative to single-aperture systems and increase link robustness for FSO communications \cite{ren2015free}. However, in practice, MIMO is prone to interference between the transmitted and received beams at different aperture elements; this interference arises when the apertures are not sufficiently spatially separated \cite{goldsmith2003capacity, larsson2014massive, lu2014overview}.
In this paper, we demonstrate a novel approach: diffractal space-division multiplexing (DSDM), which is illustrated in Fig. \ref{fig:diffractal}. The unique properties of diffractal redundancy enable the simultaneous transmission of multiple independent bit streams \cite{moocarme2015robustness, Weng2020}; in the far field, arbitrary parts of a diffractal contain sufficient information to recreate the entire original (sparse) signal \cite{Verma2013, Verma2012}. Transmitted beams with higher fractal orders achieve higher reconstruction accuracy [see right column of Fig. \ref{fig:diffractal}]; in prior work, this result has been demonstrated experimentally with a 4-F system \cite{moocarme2015robustness}. Since DSDM does not rely on wavelength or polarization, it could be used with WDM and PDM techniques to further improve system capacity. Additionally, DSDM may be used to improve data transmission capacity in adverse environments in a manner analogous to other FSO techniques in which different parts of a signal are referenced to reduce receiver error \cite{Watnik2020}.
One reason DSDM may be underexplored is diffraction itself: diffractals generate a wide cone of high spatial frequencies as they propagate, which runs counter to many paradigms for FSO. Additionally, the strong diffraction from irregularly corrugated beams is a challenge to simulate reliably. Nevertheless, DSDM presents several important advantages over other FSO multiplexing technologies:
\begin{itemize}
\item Robust to misalignment: with DSDM, receivers may sample arbitrary beam parts, entirely off-axis.
\item Wide reception cone: DSDM enables a roaming area for the non-coaxial transmitter and receiver.
\item Simple design: compared to MIMO, DSDM uses a single transmitter/receiver aperture pair.
\item Robust to turbulence: diffractals provide redundant encoding to capture multiple bits per frame.
\item Swift decoding: DSDM uses optical processing for demultiplexing and a simple soft thresholding for reconstruction.
\item Simple receiver requirements: the same optics may be used to demultiplex all data channels.
\item High detection sensitivity: focusing lens enables capture of low intensity optical signals.
\item Scalability: aggregate capacity is only limited by number of pixels available at the transmitter.
\end{itemize}
More generally, DSDM may be relevant to other applications where the alignment between transmitter and receiver is not fixed, when a receiver “roaming area” is needed, or when an object or data needs to be encrypted, marked, or tracked. Spatial kernel patterns may be used as channel codes or to enable additional channel coding for error correction. In order to advance ideas on redundant spatial diffraction encoding, here we establish basic parameters for fractal propagation and diffraction encoding to the ``far field'': we measure the reconstruction accuracy and robustness of DSDM.
\textcolor{black}{This article is organized to elaborate on novel opportunities for DSDM in FSO communication systems. In Sec. 2, we show a simple transmitter and receiver design and illustrate the experimental implementation of DSDM using a spatial light modulator (SLM). We illustrate diffractal propagation characteristics, which determine the roaming area described in later analyses. In Sec. 3, we illustrate design considerations tied to the receiver size and the propagation distance between transmitter and receiver, which influence the communication channel's performance. In Sec. 4, we show that, even in the presence of turbulence, the accuracy of the DSDM system remains high with higher-fractal-order beams. In fact, with only 81$\times$81-pixel transmitters, we achieve a 10$^{-3}$ bit error rate (BER) at 5 dB signal-to-noise ratio (SNR), where the receiver collects only 30\% of the off-axis roaming area at 2.5 km propagation distances, without any error-correcting schemes. This result indicates outstanding possibilities for robust FSO with fractal-based, diffraction-encoded signals.}
\section{Diffractal Space-Division Multiplexing}
\begin{figure}[bt!]
\centering
\includegraphics[width=\linewidth]{Figs/Experimetal.png}
\caption{(a) Schematic of the experimental implementation of DSDM. M1 and M2 are mirrors. L1 and L2 are telescoping lenses with pinhole PH to provide spatial filtering. P1 and P2 are orthogonal polarizers \textcolor{black}{aligned with the spatial light modulator (SLM), which has pixel width $a_{pixel} = 36~\mu$m.} A convex lens focuses and decodes data onto the sensor. (Note: the intensity of the light at the receiver is too low to be measured by the optical camera without the lens.) Detector images (b1-b3) show excellent reconstruction of the transmitted beams for fractal orders FO = 3, 4, and 5, in spite of their off-axis locations.}
\label{fig:Experimental}
\end{figure}
The DSDM system involves: multiplexing (where the transmitted data is a fractal built from the kernel data), diffraction encoding (as the beam propagates to the receiver), and demultiplexing (composed of coherent optical and electronic processing). \textcolor{black}{DSDM deploys the intensity modulation/direct detection (IM/DD) scheme, which is common for FSO communication systems \cite{ghassemlooy2019optical}.} In Fig. \ref{fig:Experimental}(a), we show an experimental setup with a transmissive spatial light modulator (SLM, Holoeye LC2012). The SLM amplitude-modulates with on/off keying (OOK), where a ‘1’ is an in-phase optical pulse occupying the bit duration and a ‘0’ is the absence of a pulse. For fractal orders (FO) of 3, 4, and 5, the beams transmitted by the SLM are 1, 3, and 9 mm in diameter.
At a distance of 10 m from the SLM, a fraction of the laser beam is captured off-axis by a camera placed in the focal plane of a lens. The beam diverges to over 10 times its initial width, and the light intensity at the receiver is too low to be detected without the focusing lens. Only a small portion of the far-field beam is captured, yet the detector sensor records the demultiplexed data for FO = 3, 4, and 5 [Figs. \ref{fig:Experimental}(b1-b3)], which easily reproduce the transmitted ``J'' kernel. This diffractal transmitter/receiver scheme is explained further below.
\subsection{Multiplexer: fractal, spatially-modulated transmission}
The kernel OOK data is a binary $s \times s$ array. The transmitted data is produced with a fractal mask or screen pattern generated with the Kronecker product [see left column of Fig. \ref{fig:diffractal}]. The Kronecker product of the kernel with itself places a copy of the kernel at the location of each bit ‘1’ and an all-zeros matrix at each bit ‘0’. When the beam's fractal order is equal to $n$, the Kronecker product repeats $n$ times. We define the smallest sub-square of the fractal as the pixel; therefore, each transmitted pattern contains $s^n \times s^n = s^{2n}$ pixels. Details of this kernel-dependent diffraction are provided in Sec. 1.1 of the supplemental document.
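The repeated Kronecker construction above can be sketched in a few lines; a minimal example (the kernel bit pattern and function name are illustrative, not taken from the paper's code):

```python
import numpy as np

def make_diffractal(kernel, fractal_order):
    """Repeat the Kronecker product of a binary kernel with itself.

    FO = 1 is the kernel itself; FO = n yields an s^n x s^n pixel pattern.
    """
    pattern = kernel
    for _ in range(fractal_order - 1):
        pattern = np.kron(pattern, kernel)
    return pattern

# hypothetical 3 x 3 kernel (6 'on' bits)
kernel = np.array([[1, 1, 1],
                   [1, 0, 0],
                   [1, 0, 1]])
fo3 = make_diffractal(kernel, 3)   # 27 x 27 = 3^3 x 3^3 pixels
```

Because each ‘1’ spawns a copy of the kernel, the number of ‘on’ pixels grows as (kernel ones)$^n$, which is the redundancy the receiver later exploits.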
\begin{figure*}[tbh!]
\centering
\includegraphics[width=\linewidth]{Figs/BeamDiverge2.png}
\caption{Propagated beam profile of electric-field intensity at $z$ = 0, 840, and 1676 m for the (a1) ``X'' and (a2) ``R'' kernels.
MFR as a function of propagation distance $z$ for (b) different data kernels with fixed fractal order FO = 3 and (c) different fractal orders FO = 2, 3, and 4 of the kernel ``J''. }
\label{fig:BeamDiverge}
\end{figure*}
\subsection{Diffractal beam divergence}
Diffractal beam divergence is an important design consideration: it defines the effective roaming area for receivers. From diffraction theory, a spatially-corrugated beam such as a fractal diverges faster than a Gaussian-profiled beam. We numerically propagate diffractals with $s=3$. The total transmitted power is unit normalized, and the transmitted beam area and pixel power vary with fractal order and kernel shape. The pixel size ($W_{px}$ = 2 mm) and wavelength ($\lambda =1550$ nm) remain fixed regardless of beam shape. Care is necessary to ensure that the power is either conserved in the simulations or repeatable with doubled boundary widths.
For the calculation of diffracted beam radius, the concept of beam mode field radius (MFR) is employed and given by the equation:
\begin{equation}
\textrm{MFR}=\sqrt{\frac{\iint_{-\infty}^{\infty}|u(x,y)|^2 (x^2+y^2)\,dx\,dy}{\iint_{-\infty}^{\infty}|u(x,y)|^2 \,dx\,dy}}
\end{equation}
where $x$ and $y$ are the transverse spatial coordinates and $u(x,y)$ is the electric field of the FSO beam. The fraction of the total beam power within the maximum roaming area is less than or equal to $1-1/e^2$. Different kernel shapes experience different degrees of diffraction; the range of diffraction for different $3\times3$ kernels varies by a factor of 2. In Fig. \ref{fig:BeamDiverge}(a), the kernel ``R'' diffracts at half the rate of the kernel ``X''. The upper limit for the beam divergence speed is that of a single pixel; a diffractal with the kernel ``X'' spreads almost as much as a single pixel at the same propagation distance. We observe that a diffractal's beam divergence scales approximately with the number of internal edges in the kernel shape, or with the largest independent block length of the kernel, and is thereby tied to the image complexity that characterizes all diffractals.
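Equation (1) is straightforward to evaluate on a sampled field; a minimal sketch (grid parameters are illustrative):

```python
import numpy as np

def mode_field_radius(u, x):
    """MFR of a 2-D complex field u sampled on the square grid given by x (1-D)."""
    X, Y = np.meshgrid(x, x)
    intensity = np.abs(u)**2
    return np.sqrt(np.sum(intensity * (X**2 + Y**2)) / np.sum(intensity))

# sanity check: for a Gaussian field of waist w0, Eq. (1) gives MFR = w0/sqrt(2)
x = np.linspace(-5, 5, 501)
X, Y = np.meshgrid(x, x)
w0 = 1.0
u = np.exp(-(X**2 + Y**2) / w0**2)   # field amplitude; intensity ~ exp(-2r^2/w0^2)
mfr = mode_field_radius(u, x)
```

The Gaussian check follows from the second moment of the intensity and is a convenient unit test when comparing diffractal MFR curves against a Gaussian reference.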
In fact, even the slowest diffractals diverge at a rate much greater than a Gaussian of the same initial width due to their highly corrugated structure. For example, the kernel ``R'' diverges at a rate 26 times faster than a Gaussian beam with the same waist radius. In Fig. \ref{fig:BeamDiverge}(b), the initial beam radius of the kernel ``R'' is around 10 cm. This beam radius increases by a factor of $\sqrt{2}$ after a propagation distance of $z$ = 0.8 km. Meanwhile, the Rayleigh length of a Gaussian beam with a 10 cm waist radius is 21 km, given by $z_{R}=\pi w_{0}^{2}/\lambda$ \cite{damask2004polarization}, where $w_{0}$ is the beam waist radius. This extreme divergence of diffractals relative to the divergence of Gaussian beams is a result of the high degree of structure intrinsic to fractal-modulated beams.
Diffractals uniquely exhibit non-Gaussian beam diffraction statistics \cite{berry1979diffractals}, meaning that their propagation in the near field is populated with spatiotemporal spikes. However, many aspects of their propagation at long distances are similar to Gaussian beam propagation. For example, the diffractal beam radius MFR increases linearly at longer propagation distances. The MFR at longer distances ultimately scales in proportion with the size of a single pixel or with the largest independent block length of the kernel rather than the initial beam waist. In other words, with the same kernel shape and pixel size, beams with different fractal orders have different initial beam waists but similar far-field MFR. This convergence, which depends on the kernel shape but not on fractal order is illustrated in Fig. \ref{fig:BeamDiverge}(c).
\subsection{Diffractal propagation to the ``far field''}
With DSDM, diffraction provides part of the signal's spatial encoding; the accuracy of the reconstructed data depends on how far the beam travels between the transmitter and receiver. We use the concept of a ``Fraunhofer distance'' to quantify the diffraction distance to the far field,
\begin{equation}
z_{DFF} > 2 s^{2n} L_{dfpx},
\label{zdff}
\end{equation}
where $n$ is the fractal order and $L_{dfpx} = \pi (W_{px}/2)^2/\lambda$ is the confocal parameter for a single pixel of width $W_{px}$. We note that the strong spiking spatiotemporal behavior described by \cite{berry1979diffractals} is part of the diffraction-encoding for DSDM and occurs when $z<z_{DFF}$. Our simulations suggest that, in order to fully take advantage of the diffractal encoding, the receiver should be at a distance $z>z_{DFF}$ from the transmitter. At the same time, as we demonstrate in the following section, good reconstruction is still achieved at $z<z_{DFF}$ when the detector is sufficiently large.
\textcolor{black}{The propagation distance influences the DSDM diffraction encoding as well as the roaming area. By increasing the FO, we do not significantly increase the roaming area [Fig. \ref{fig:BeamDiverge}(c)]; however, we do increase $z_{DFF}$ and require longer distances for diffraction encoding.} To ensure proper diffraction encoding at shorter distances $z$, a smaller pixel size $W_{px}$ may be used; nevertheless, this correspondingly increases the MFR.
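Equation (2) is easy to evaluate for candidate designs; a sketch using the simulation parameters quoted earlier ($s=3$, $W_{px}=2$ mm, $\lambda=1550$ nm):

```python
import math

def z_dff(s, n, w_px, lam):
    """Far-field (Fraunhofer-style) distance of Eq. (2)."""
    L_dfpx = math.pi * (w_px / 2)**2 / lam   # single-pixel confocal parameter
    return 2 * s**(2 * n) * L_dfpx

# each added fractal order multiplies the far-field distance by s^2 = 9
d3 = z_dff(s=3, n=3, w_px=2e-3, lam=1550e-9)
d4 = z_dff(s=3, n=4, w_px=2e-3, lam=1550e-9)
```

The $s^2$ growth per fractal order makes explicit why larger FO demands either longer links or smaller pixels.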
\subsection{Detection and demultiplexing}
Longer propagation distances result in larger roaming areas. At the receiver, a portion of this roaming area is captured. Fig. \ref{fig:demultiplexing}(a) shows the definitions of receiver detector width (DW) and roaming radius (R) with respect to the beam mode field radius (MFR). The green dotted circle represents the possible roaming area and the green solid circle is defined as the maximum roaming area with radius equal to $\sqrt{2}$MFR.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{Figs/demultiplexing.png}
\caption{(a) The far-field beam pattern for the kernel ``J'' and definitions of coverage area (CW), receiver width (DW), and roaming radius (R). Several randomly-positioned receivers are shown (white squares). (b) From left to right, optical deconvolution from an off-axis receiver with DW = 0.75 m \textcolor{black}{to the sensor. The captured area represents 5.8\% of the total beam power and 16\% of the coverage area. (c) 9 sub-blocks of the received image. During reconstruction, each sub-block is thresholded to ‘1’ or ‘0’ depending on the sub-block intensity.}}
\label{fig:demultiplexing}
\end{figure}
DSDM demultiplexing is performed both optically (with a convex lens and Fourier-plane camera) and electronically (with a simple soft-thresholding algorithm). A vignetted lens focuses light onto a sensor.
Diffractal propagation is unique because in the focal plane of an arbitrarily placed lens, a pattern similar to the initial data kernel is produced, even when the lens is off-axis and captures only a fraction of the far-field beam. The receiver (lens and camera) may move freely within the maximum roaming area. The kernel reconstruction is performed by a simple, soft thresholding of the intensity profile of the received image.
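The optical demultiplexing step can be emulated numerically: a thin lens maps the field across its aperture to (a scaled version of) its Fourier transform in the back focal plane. A minimal sketch of this step, with sampling and scale constants omitted for clarity:

```python
import numpy as np

def focal_plane_intensity(aperture_field):
    """Intensity in the back focal plane of a thin lens, up to scale factors."""
    # centered 2-D Fourier transform of the field captured by the aperture
    far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture_field)))
    return np.abs(far)**2
```

In a full simulation, `aperture_field` would be the cropped far-field diffractal at the receiver position; the resulting intensity pattern is what the sensor records and what the thresholding step operates on.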
The received image is separated into 9 sub-blocks; each sub-block represents one bit. Based on the mean and variance of each sub-block relative to the background, each sub-block is thresholded to either ‘1’ or ‘0’ [see Fig. \ref{fig:demultiplexing}(c)]. More details regarding the reconstruction algorithm are provided in Sec. 1.2 of the supplemental document.
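The thresholding step can be sketched as follows; the midpoint threshold rule here is an illustrative simplification of the mean-and-variance test described in the supplement:

```python
import numpy as np

def reconstruct_kernel(image, s=3):
    """Map each of the s x s sub-blocks of the detector image to one bit."""
    h, w = image.shape
    bh, bw = h // s, w // s
    # mean intensity of each sub-block
    block_means = np.array([[image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                             for j in range(s)] for i in range(s)])
    # simple midpoint rule (assumed here) between darkest and brightest blocks
    threshold = 0.5 * (block_means.max() + block_means.min())
    return (block_means > threshold).astype(int)
```

Applied to a clean detector image whose bright and dark sub-blocks are well separated, this recovers the 9-bit kernel directly.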
\section{Design Considerations}
\textcolor{black}{The roaming radius, FO, and receiver size all play a critical role in DSDM. In the sections below, we consider these parameters where the receiver aperture is significantly smaller than the diffracted beam or the maximum roaming area $R$ = $\sqrt{2}$MFR. To draw statistics, we sample over 4000 random locations across the beam to calculate the kernel bit-error-rate (K-BER), which varies spatially. The K-BER is the accuracy calculated for a fixed kernel instead of a random pattern; it measures the accuracy expected if DSDM is applied for channel marking or tracking.}
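The spatial averaging behind these statistics can be sketched generically; here `bit_errors_at` is a placeholder for the full crop-lens-threshold chain evaluated at one receiver position (not the paper's code):

```python
import numpy as np

def average_kber(bit_errors_at, r_max, n_samples=4000, bits_per_frame=9,
                 rng=None):
    """Monte Carlo K-BER over receiver centers uniform on a disc of radius r_max."""
    rng = np.random.default_rng(0) if rng is None else rng
    # uniform sampling over the disc area (sqrt transform on the radius)
    r = r_max * np.sqrt(rng.random(n_samples))
    theta = 2 * np.pi * rng.random(n_samples)
    xs, ys = r * np.cos(theta), r * np.sin(theta)
    errors = [bit_errors_at(x, y) for x, y in zip(xs, ys)]
    return np.mean(errors) / bits_per_frame
```

The square-root radial transform keeps receiver positions uniform over the roaming disc rather than clustered near the axis.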
\begin{figure}[tp]
\centering
\includegraphics[width=\linewidth]{Figs/RoamingRadius2.png}
\caption{Illustrations of the transmitted kernel “J” with fractal order (FO) of 4 at propagation distance $z$ = 2.5 km. (a) K-BER vs. roaming radius ($R$) for different receiver widths (DW). (b) Deconvolved and (c) reconstructed data over 25 equally-spaced patches across the coverage area. \textcolor{black}{The dashed squares indicate a possible roaming area wherein the sampled, reconstructed 9-bit images are all correct.}}
\label{fig:RoamingRadius}
\end{figure}
\subsection{Roaming radius}
\textcolor{black}{Not surprisingly, the error probability increases as the receiver moves away from the far-field beam's central axis; however, it decreases with larger receiver areas. The roaming radius of the receiver directly influences the K-BER performance.}
Figure \ref{fig:RoamingRadius} illustrates the reconstruction and K-BER performance as a function of the roaming radius $R$ at a propagation distance of $z$ = 2.5 km without turbulence, where the kernel is “J” and FO = 4. Figure \ref{fig:RoamingRadius}(b) shows the pattern observed in the Fourier plane of the receiver lens for different sampling locations of the coverage area. Figure \ref{fig:RoamingRadius}(c) shows the reconstructed data for the corresponding images in Fig. \ref{fig:RoamingRadius}(b). In general, as the receiver moves farther from the center of the diffracted beam, the K-BER gradually increases; the highest reconstruction accuracies are sampled on-axis. Additionally, as the receiver samples a larger area, it is able to roam a larger radius with a low K-BER.
\textcolor{black}{The limited K-BER performance is largely set by compression noise \cite{jiang2014signal}, the reconstruction error that arises from using only a portion of the entire diffracted beam. As long as the receiver aperture is smaller than the diffracted beam, compression noise exists, regardless of whether there is additional noise; it decreases as the receiver size increases.} \textcolor{black}{For small roaming radius $R$, the K-BER drops sharply, and this drop occurs at smaller $R$ for larger receiver sizes, since a larger receiver aperture covers most of the high-intensity central area of the far-field beam. Not surprisingly, DSDM with smaller roaming areas and larger receivers has the best performance.}
\subsection{Influence of Fractal Order}
DSDM performance is significantly improved when we increase the FO of the transmitted data kernel. As FO increases, the accuracy over the roaming area increases and smaller DWs are possible. The far field beam exhibits smaller self-similar speckle features and information is encoded at higher spatial frequencies. Therefore, when the FO is large, the detector image produced from an arbitrary subsection of the roaming area closely resembles the transmitted data.
\begin{figure}[hbtp]
\centering
\includegraphics[width=\linewidth]{Figs/FractalOrder.png}
\caption{(a) K-BER trends with respect to fractal order FO = 2,3,4 as a function of receiver aperture size DW at $z = 10$ km. (b-e) The deconvolved images on an optical sensor and the reconstructed data for fractal orders FO = 2 (b-c) and FO = 4 (d-e) for different patches of the coverage area.}
\label{fig:FractalOrder}
\end{figure}
Figure \ref{fig:FractalOrder}(a) shows the K-BER versus DW at a propagation distance of $z$ = 10 km and compares the K-BER performance of FO = 2, 3, and 4 beams. The trend clearly shows that higher fractal orders achieve higher accuracy: the K-BER for FO = 4 is lower than the K-BER for FO = 2 and 3. In order to reach the same K-BER level of 10$^{-3}$, the FO = 3 channel needs a receiver size about 1.6 times larger than that of the FO = 4 channel. To put the receiver sizes into perspective, in Fig. \ref{fig:FractalOrder}(a), when FO = 4, a K-BER of 10$^{-3}$ is achieved with a receiver size less than 25\% of the maximum roaming area (receiver area = (DW)$^2$ = (2.5 m)$^2$ = 6.25 m$^{2}$; maximum roaming area = 25 m$^{2}$). \textcolor{black}{The Fourier-plane detector images carry more self-similar, iterated features with FO = 4 than with FO = 2 [Fig. \ref{fig:FractalOrder}(b-e)].} The greater degree of redundancy in these features leads to smaller K-BER with higher FO.
Figure \ref{fig:FractalOrder} illustrates that DSDM with larger FO is an effective way to improve system performance. However, larger FO beams require more transmitted pixels, which require longer propagation distances for encoding [Eq. \ref{zdff}]. As noted above, smaller pixels may be used to decrease the necessary propagation distance but this also increases the rate of beam divergence. Thus, careful design of beam divergence and receiver area is needed for higher-FO DSDM.
\begin{figure*}[tbp]
\centering
\includegraphics[width=\linewidth]{Figs/ReceiverSize.png}
\caption{ (a) K-BER vs. receiver width (DW). The trend decreases with larger DW and depends on the kernel data. There are three marked regions (I, II, and III). The slight rise in K-BER (marked zone II) is a result of minimal diffraction encoding at the shorter propagation distance $z = 2.5$ km. (b) Far-field pattern for the kernel ``R'' showing the coverage width (CW) and 3 different receiver areas. (c) Corresponding detector patterns and reconstructed data for the receiver areas (c1) DW1 = 0.2 m, (c2) DW2 = 0.3 m, and (c3) DW3 = 0.4 m. }
\label{fig:RecieverSize}
\end{figure*}
\subsection{Influence of Receiver Size and Kernel Shape}
\textcolor{black}{One main advantage of DSDM is that the receiver aperture can be much smaller than the whole diffracted beam; however, this advantage varies with the diffraction encoding and the kernel data. In the previous trend (for the kernel ``J''), where the K-BER vs. DW relationship is smooth [Fig. \ref{fig:FractalOrder}(a)], the propagation distance $z = $10 km puts the receiver approximately in the far field, or $z\approx z_{DFF}$. At a shorter distance $z=2.5$ km, the K-BER vs. DW for different FO = 4 kernels shows more subtle features [Fig. \ref{fig:RecieverSize}(a)]. While this figure shows that a larger receiver size results in a lower error probability, at this shorter $z$ we observe features of partial diffraction encoding, where the beam has not reached the ``far field''.} The inflection points in the curves in Fig. \ref{fig:RecieverSize}(a) are one feature of partial diffraction encoding: in contrast to the smooth curve of Fig. \ref{fig:FractalOrder}(a), two obvious turning points separate the trend lines into three regions.
\textcolor{black}{The K-BER performance in Region I is again limited by compression noise \cite{jiang2014signal}. By comparing Fig. \ref{fig:RecieverSize}(c1) and (c3), we see the influence of compression noise: a detector image from a larger receiver, with DW = 0.4 m, contains more detailed information than one from a smaller receiver, with DW = 0.2 m. The upper-left corners of the receivers of different sizes are located at the same place [see white squares in Fig. \ref{fig:RecieverSize}(b)].} \textcolor{black}{In Region III, the K-BER drops sharply with increasing receiver size, as before. Results indicate that when DW > 0.45 m (where the receiver size is 30\% of the maximum roaming area), the K-BER is below the forward error correction limit of $10^{-3}$ \cite{Yan2014}. This indicates that the K-BER is reduced simply by increasing DW.}
\textcolor{black}{However, the trend lines in Region II in Fig. \ref{fig:RecieverSize}(a) are flattened or slightly raised, which appears to violate the trend described above. In this region, the K-BER performance is dominated by partial diffraction encoding, i.e., the beam has not propagated far enough to reach the ``far field''. Region II is not readily observed at the longer propagation distance of $z=$ 10 km, which is closer to $z_{DFF}$ or the ``far field'' [Fig. \ref{fig:FractalOrder}(a)]. Figure \ref{fig:RecieverSize}(b) shows different receiver sizes DW = 0.2, 0.3, and 0.4 m in the roaming area. Accurate reconstruction is achieved when DW = 0.2 and 0.4 m [Fig. \ref{fig:RecieverSize}(c1,c3)].} One important area of future work will be the reconstruction of beams with partial diffraction encoding such as those in Region II. An illustration is shown in Fig. \ref{fig:RecieverSize}(c2), where a receiver with DW = 0.3 m is located in the top left of the far field. In this case, the bottom-right corner of the receiver samples only a part of the high-intensity central area. This area remains localized and is as large as the original, transmitted beam. Since the intensity of the central portion is much higher than that of the other sampled parts, the upper-left corner of the deconvolved image is much brighter. With our on/off threshold reconstruction algorithm, only the brightest area is considered a ‘1’, whereas the other, darker areas are ‘0’. This sampling, in combination with the current threshold algorithm, results in a higher error probability with DW = 0.3 m than with DW = 0.2 m.
We note that, at many instances in our study, the numerical reconstruction algorithm fails to identify the kernel pattern even though the detector images would easily be classified by human visual inspection. We tried other reconstruction algorithms besides the threshold-based one used here; kernel reconstruction algorithms based on intensity differentials and image boundaries do, in some cases, reduce the error probability compared to our simple intensity-threshold approach. The simplest reconstruction algorithm, however, distills the clearest understanding of the diffraction encoding, which is one aim of this article. In the future, we anticipate that more advanced reconstruction algorithms will significantly improve the accuracies beyond the results presented here.
\section{Robustness to turbulence}
In the presence of atmospheric turbulence, DSDM has the advantage of redundant encoding. We simulate the transmitter-receiver propagation in the presence of weak and strong atmospheric turbulence with random phase screens \cite{khare2020orbital}. Two common parameters of atmospheric turbulence--the refractive-index structure parameter $C_{n}^{2}$ and the propagation distance $z$--are varied to simulate different turbulence strengths.
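A common FFT-based recipe for such random phase screens is to filter white noise with a Kolmogorov spectrum. The sketch below follows this standard practice and is not necessarily the exact screen generator of \cite{khare2020orbital}; the normalization is approximate:

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, rng=None):
    """Random phase screen with a Kolmogorov spectrum via spectrally filtered noise.

    n: grid size, dx: grid spacing [m], r0: Fried parameter [m].
    """
    rng = np.random.default_rng() if rng is None else rng
    fx = np.fft.fftfreq(n, d=dx)
    fxx, fyy = np.meshgrid(fx, fx)
    f = np.sqrt(fxx**2 + fyy**2)
    f[0, 0] = np.inf                                   # suppress the singular DC term
    psd = 0.023 * r0**(-5.0 / 3.0) * f**(-11.0 / 3.0)  # Kolmogorov phase PSD
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    df = 1.0 / (n * dx)
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n**2
    return np.real(screen)
```

In a split-step propagation, one such screen is applied per step, with $r_0$ set from $C_n^2$ and the step length.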
\textcolor{black}{Moreover, the Rytov variance is a fundamental scaling parameter that characterizes the strength of the wave fluctuations; it is defined by $\sigma _{I}^{2}=1.23C_{n}^{2}k^{7/6}L^{11/6}$, where $k=2\pi/\lambda$ is the optical wavenumber, $\lambda$ is the wavelength, and $L$ is the propagation distance \cite{khare2020orbital}. Another scaling parameter, the signal-to-noise ratio (SNR), is defined as:}
\begin{equation}
\mathrm{SNR}=10\log_{10}\left( \frac{\mathrm{Signal}}{\mathrm{Noise}} \right) =10\log _{10}\left( \frac{\sum_{i=1}^{N}\sum_{j=1}^{N}|u_{AT}|^{2}}{\sum_{i=1}^{N}\sum_{j=1}^{N}| u_{AT}-u_{vac} | ^2}\right)
\label{eq:SNR}
\end{equation}
where $u_{AT}$ and $u_{vac}$ are the complex, electric-field profiles of the diffracted beams with and without atmospheric turbulence.
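Both scaling parameters are direct to compute; a sketch that also reproduces the weak-turbulence strength quoted below (the numerical check is approximate):

```python
import numpy as np

def snr_db(u_at, u_vac):
    """SNR of Eq. (3) from field profiles with and without turbulence."""
    signal = np.sum(np.abs(u_at)**2)
    noise = np.sum(np.abs(u_at - u_vac)**2)
    return 10 * np.log10(signal / noise)

def rytov_variance(cn2, lam, z):
    """Rytov variance 1.23 Cn^2 k^{7/6} L^{11/6}."""
    k = 2 * np.pi / lam
    return 1.23 * cn2 * k**(7.0 / 6.0) * z**(11.0 / 6.0)

# weak-turbulence case quoted in the text: Cn^2 = 1e-15 m^(-2/3), z = 2.5 km
sigma_i2 = rytov_variance(1e-15, 1550e-9, 2.5e3)   # approximately 0.11
```

These helpers make the turbulence-strength labels in the figures reproducible from the stated $C_n^2$, $\lambda$, and $z$.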
\textcolor{black}{K-BER performance under different atmospheric turbulence strengths is simulated and estimated in Fig. \ref{fig:BERspatially}(a1,a2). The propagation distance is $z$ = 2.5 km, and the receiver width DW ranges from 5 to 60 cm. The K-BER vs. DW trends are similar to Fig. \ref{fig:RecieverSize}(a), except that the turbulence phase screens are added during propagation.} The K-BER is again calculated by averaging 4000 single-aperture receivers at random locations within the maximum roaming area. Figure \ref{fig:BERspatially}(a1) shows the K-BER under weak turbulence, where $C_n^2 = 10^{-15}\,m^{-2/3}$ and the scintillation index is $\sigma_I^2=0.11$, corresponding to an SNR of 5 dB. Figure \ref{fig:BERspatially}(a2) shows the K-BER under strong turbulence, where $C_n^2 = 10^{-14}\,m^{-2/3}$ and the scintillation index is $\sigma_I^2 = 1.11$, corresponding to an SNR close to 0.01 dB. \textcolor{black}{The K-BER performance under weak turbulence is almost the same as that with no turbulence, illustrating that the DSDM system is robust to noise at 5 dB SNR. With strong turbulence, obvious differences arise in Region III, where the receiver width DW > 0.4 m. A larger receiver collects more noise compared to a smaller receiver, which results in greater distortion in the deconvolved images and a higher K-BER. }
\begin{figure}[htb]
\centering
\includegraphics[width=\linewidth]{Figs/BERspatially.png}
\caption{(a) K-BER performance under (a1) weak turbulence conditions, $C_n^2 = 10^{-15}\,m^{-2/3}$ with scintillation index $\sigma_I^2=0.11$, corresponding to an SNR of 5 dB, and (a2) strong turbulence conditions, $C_n^2 = 10^{-14}\,m^{-2/3}$ with scintillation index $\sigma_I^2 = 1.11$, corresponding to an SNR of 0 dB. The fractal order is FO = 4 and the propagation distance is $z = 2.5$ km. (b) The received bit error at different locations within the maximum roaming area radius of 2.8 m with increasing receiver size (DW) in columns and fractal order in rows. Here, $z = 10$ km.}
\label{fig:BERspatially}
\end{figure}
The distortion from atmospheric turbulence in the demultiplexed beams is overcome not only by increasing the DW of the receiver but also by increasing the fractal order of the transmitted beam. To illustrate this effect, we plot bit-error values spatially at equally distributed locations within the roaming area at a propagation distance of $z = 10$ km for different FOs [Fig. \ref{fig:BERspatially}(b)]. \textcolor{black}{A longer propagation distance is necessary to show the effect when the fractal order is increased to FO > 5; a larger FO requires longer distances for diffraction encoding. Unlike previous figures that show the average K-BER from 4000 randomly-positioned receivers, here we show the bit error from 40$\times$40 single receivers shifted in position at evenly distributed locations over the roaming area.} In Fig. \ref{fig:BERspatially}(b), different colors represent 0 to 9 received error-bit values (there are 9 bits in each kernel).
By column, receiver widths from 0.26 to 2.32 m are tested. For the same FO, larger receiver widths correspond to fewer error bits, consistent with the declining curves in Fig. \ref{fig:BERspatially}(a1,a2). By row, we show FO from 2 to 6. For the same receiver size, as the FO increases, the bit error decreases. Larger FOs generally improve DSDM robustness to atmospheric turbulence; however, a larger FO increases the distance needed for diffraction encoding, $z_{DFF}$ [Eq. \ref{zdff}]. This issue of diffraction encoding is highlighted with FO = 4, 5, 6 at DW = 0.77 m in Fig. \ref{fig:BERspatially}(b). Higher FOs have a smaller K-BER up to FO = 5, but when the FO increases to 6, the corresponding bit-error value increases rather than decreases. \textcolor{black}{This increase appears to break the trend whereby smaller error accompanies higher FO. In fact, the distance $z$ = 10 km is significantly less than the minimum diffraction-encoding distance for FO = 6 and is not far enough for the FO = 6 beam to reach the ``far field''. Again, our results indicate that DSDM is still promising when the propagation distance is less than the Fraunhofer diffraction length, $z<z_{DFF}$ [Eq. \ref{zdff}]}.
\section{Discussion and Conclusion}
DSDM leverages the fact that fractal patterns of kernel data are redundantly encoded over large areas as they propagate to the far field. As a result, a small portion of the far-field carries information to reproduce the original kernel data. Provided that the receiver detector has sufficient sensitivity, the best reconstruction accuracy is achieved from beams that have propagated to the far field. However, the far field\textemdash the point beyond which the radiation pattern scales but does not change shape with propagation\textemdash is not yet explicitly defined for diffractals. Additionally, in this article, we have shown that diffractal beam divergence and propagation to the far field depend strongly on kernel shape. Our results are relevant to computational sensing, imaging, and communication systems.
Although diffractals exhibit considerably more stable propagation in the far field, we are able to implement DSDM with larger DW in the near field ($z<z_{DFF}$). As the beam propagates to the far field, the intensity patterns exhibit spatiotemporal spiking as part of the process of diffraction encoding. We provide an analysis of the dependence of DSDM on the receiver size and the influence of fractal order. In many cases with incomplete diffraction encoding, the detector images are inaccurately classified by our linear threshold algorithm but easily classified by visual inspection. Judging from \cite{Doster2017, Xiong2020}, the incorporation of a neural network or optimization scheme could improve the BER by several orders of magnitude over these already-promising results.
In conclusion, we show enormous potential for DSDM in FSO communication and channel marking, where only a few percent of the off-axis diffracted beam power is needed to reconstruct spatially encoded kernel data. DSDM may be used in practical free-space propagation systems to achieve high transmission capacity in combination with other degrees of freedom, such as polarization and wavelength multiplexing. With DSDM, information is redundantly encoded spatially so that, with a sufficiently large receiver, communication is robust to atmospheric turbulence. With $81\times 81$ transmitted pixels, we achieve a BER of $10^{-3}$ under weak turbulent conditions (5 dB SNR) when the receiver sizes are 30$\%$ of the roaming area over propagation distances of 2.5 km. These simulation results would improve further with the use of higher-FO beams. Higher-FO beams are technologically feasible now but beyond our current simulation capability. To implement DSDM experimentally over similar distances with higher FO, smaller pixels ensure proper diffraction encoding. The effect of spatially modulating beams with smaller pixels is a larger reception-cone area, which, far from being disadvantageous, may be valuable for FSO systems where the transmitter and receiver are roaming or not coaxial.
\section{Backmatter}
LTV gratefully acknowledges funding from DARPA YFA D19AP00036.
The authors thank Dr. Yingbo Hua for helpful discussions. The authors acknowledge editing support from Ben Stewart (linkedin:benjamin-w-stewart).
\section{Introduction}
This article is devoted to the study of stabilization for the wave equation
with external force on a compact Riemannian manifold with boundary. In the
first part of this paper, we consider the following wave equation with
linear internal damping and external force:
\begin{equation}
\left\{
\begin{array}{ll}
\partial _{t}^{2}u-\Delta u+a\left( x\right) \partial _{t}u=f\left( t,x\right) & \RIfM@\expandafter\text@\else\expandafter\mbox\fi{in }
\mathbb{R}
_{+}\times M, \\
u=0 & \RIfM@\expandafter\text@\else\expandafter\mbox\fi{on }
\mathbb{R}
_{+}\times \partial M, \\
\left( u\left( 0\right) ,\partial _{t}u\left( 0\right) \right) =\left(
u_{0},u_{1}\right) &
\end{array}
\right.  \label{sys:linear}
\end{equation}
Here $M=\left( M,q\right) $ is a compact, connected Riemannian manifold of
dimension $d$, with $C^{\infty }$ boundary $\partial M$, where $q$ denotes a
Riemannian metric of class $C^{\infty }$. $\Delta $ denotes the
Laplace--Beltrami operator on $M$, $a\left( x\right) $ is a nonnegative
function in $C^{\infty }\left( M\right) $, and $f$ is a function in
$L^{2}\left( \mathbb{R}_{+}\times M\right) $.
We define the energy space
\begin{equation*}
\mathcal{H}=H_{0}^{1}\left( M\right) \times L^{2}(M),
\end{equation*}
where
\begin{equation*}
H_{0}^{1}\left( M\right) =\left\{ u\in H^{1}\left( M\right) :u|_{\partial M}=0\right\} ,
\end{equation*}
which is a Hilbert space. Linear semigroup theory applied to (\ref{sys:linear}) provides the existence of a unique solution $u$ in the class
\begin{equation*}
u\in C^{0}\left( \mathbb{R}_{+},H_{0}^{1}\left( M\right) \right) \cap C^{1}\left( \mathbb{R}_{+},L^{2}\left( M\right) \right) .
\end{equation*}
With (\ref{sys:linear}) we associate the energy functional given by
\begin{equation*}
E_{u}\left( t\right) =\frac{1}{2}\int_{M}\left\vert \nabla u\left(
t,x\right) \right\vert ^{2}+\left\vert \partial _{t}u\left( t,x\right)
\right\vert ^{2}\,dx.
\end{equation*}
The energy $E_{u}(t)$ is topologically equivalent to the norm on the space
$\mathcal{H}$. Under these assumptions, the energy functional satisfies the
following identity:
\begin{equation}
E_{u}\left( t\right) +\int_{s}^{t}\int_{M}a\left( x\right) \left\vert
\partial _{t}u\right\vert ^{2}\,dx\,d\sigma =E_{u}\left( s\right)
+\int_{s}^{t}\int_{M}f\,\partial _{t}u\,dx\,d\sigma  \label{energy identity}
\end{equation}
for every $t\geq s\geq 0$.
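The identity (\ref{energy identity}) follows from a standard computation:
multiplying the equation in (\ref{sys:linear}) by $\partial _{t}u$,
integrating over $M$, and using Green's formula together with the Dirichlet
boundary condition gives
\begin{equation*}
\frac{d}{dt}E_{u}\left( t\right) =\int_{M}\partial _{t}u\left( \partial
_{t}^{2}u-\Delta u\right) dx=\int_{M}f\,\partial _{t}u\,dx-\int_{M}a\left(
x\right) \left\vert \partial _{t}u\right\vert ^{2}dx;
\end{equation*}
integrating in time from $s$ to $t$ then yields (\ref{energy identity}).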
The topic of interest is the rate of decay of the energy functional. This
problem has a very long history. The connection between controllability,
observability and stabilization was discovered in \cite{russell} and
effectively used in the context of linear PDE systems.
When $f=0$ and the damping term acts on the whole manifold, the problem has
been studied by many authors; see \cite{andrade} and the references therein.
For the wave equation with a localized linear damping term, we mention the
works of Rauch--Taylor \cite{rauch taylor} and Bardos et al.\ \cite{blr}, in
which microlocal techniques are used, in particular the notion of geometric
control. We also cite the works of Lasiecka et al.\ \cite{las-tri-ya} and
Triggiani--Yao \cite{tri-yao}, in which another approach based on Riemannian
geometry is presented.
Particular attention has been paid to the case when $M$ is a bounded domain
and the damping is linearly bounded; see \cite{las1}, \cite{komor} and the
references therein. Under certain geometric conditions, the energy functional
decays exponentially. Damping that does not satisfy such a linear bound near
the origin (e.g.\ when the damping has polynomial, exponential or logarithmic
behavior near the origin) results in a weaker form of energy decay that can
be expressed by algebraic, logarithmic (or possibly slower) rates
\cite{las-tat}, \cite{fab}, \cite{mart}. Finally, we mention the work of
Cavalcanti et al.\ \cite{cavalcante} when $M$ is a compact manifold with or
without boundary.
When $f\neq 0$, the literature is less furnished; we especially mention the
works of Haraux \cite{har} and Zhu \cite{zhu}, where the damping is globally
distributed.
We should also remark that cases where the support of the dissipation may be
arbitrarily small require more regular initial data and result in very slow
(logarithmic or slower) decay rates, as shown in \cite{daou1} and the
references therein.
We assume that the geodesics of $\bar{M}$ have no contact of infinite order
with $\partial M.$ Let $\omega $ be an open subset of $M$ and consider the
following assumption:
(G) $\left( \omega ,T\right) $ geometrically controls $M$, i.e. every
generalized geodesic of $M$, travelling with speed $1$ and issued at $t=0$,
enters the set $\omega $ in a time $t<T$.

This condition is called the Geometric Control Condition (see e.g.\ \cite{blr}).
We shall relate the open subset $\omega $ to the damper $a$ by
\begin{equation*}
\omega =\{x\in M:a(x)>0\}.
\end{equation*}
Under assumption (G) it was proved in \cite{blr,lebeau} that the energy
decays exponentially; moreover, if there exists a maximal generalized
geodesic of $M$ that never meets the support of the damper $a$, then the
exponential decay of the energy fails for initial data in the energy space.
It is known that the exponential decay of the energy is equivalent to the
following observability inequality:
\begin{description}
\item[(A) Linear observability inequality] There exist positive constants $T$
and $\alpha =\alpha (T)$ such that for every initial condition $\varphi
=\left( u_{0},u_{1}\right) \in \mathcal{H}$ the corresponding solution $v$
satisfies
\begin{equation}
E_{v}(t)\leq \alpha \int_{t}^{t+T}\int_{M}a(x)\left\vert \partial
_{t}v\right\vert ^{2}\,dx\,ds
\end{equation}
for every $t\geq 0$.
\end{description}
In this paper, under assumption (G), we show that in the non-autonomous
case the corresponding observability inequality reads as follows:
\begin{description}
\item[(\textbf{B}) Non-autonomous linear observability inequality] There
exist positive constants $T$ and $\alpha =\alpha (T)$ such that for every
initial condition $\varphi =\left( u_{0},u_{1}\right) \in \mathcal{H}$ the
corresponding solution $v$ satisfies
\begin{equation}
E_{v}(t)\leq \alpha \int_{t}^{t+T}\int_{M}a(x)\left\vert \partial
_{t}v\right\vert ^{2}+\left\vert f\left( s,x\right) \right\vert ^{2}\,dx\,ds
\end{equation}
for every $t\geq 0$.
\end{description}
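A sketch of how (\textbf{B}) is used: the energy identity
(\ref{energy identity}) on $[t,t+T]$ gives
\begin{equation*}
\int_{t}^{t+T}\int_{M}a\left( x\right) \left\vert \partial _{t}u\right\vert
^{2}dx\,ds=E_{u}\left( t\right) -E_{u}\left( t+T\right)
+\int_{t}^{t+T}\int_{M}f\,\partial _{t}u\,dx\,ds,
\end{equation*}
and inserting this into (\textbf{B}) yields the difference inequality
\begin{equation*}
E_{u}\left( t+T\right) \leq \left( 1-\frac{1}{\alpha }\right) E_{u}\left(
t\right) +\int_{t}^{t+T}\int_{M}\left\vert f\right\vert
^{2}dx\,ds+\int_{t}^{t+T}\int_{M}f\,\partial _{t}u\,dx\,ds,
\end{equation*}
whose iteration, after the cross term is bounded by Young's inequality and
absorbed, produces the ODE-type bound of Theorem \ref{t:1}.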
From the observability inequality above, we infer that the rate of decay of
the energy will depend on $\int_{M}\left\vert f\left( t,x\right)
\right\vert ^{2}dx$. We now state the main result of the first part of the
paper:
\begin{theorem}
\label{t:1}Let $u(t)$ be the solution to the linear problem
(\ref{sys:linear}) with initial condition $\left( u_{0},u_{1}\right) \in
\mathcal{H}$. We assume that $\left( \omega ,T\right) $ satisfies assumption
(G) and
\begin{equation*}
\Gamma \left( t\right) =C_{1,T}\int_{M}\left\vert f\left( t,x\right)
\right\vert ^{2}dx\in L^{1}\left( \mathbb{R}_{+}\right)
\end{equation*}
with $C_{1,T}\geq 1$. Then
\begin{equation*}
E_{u}\left( t\right) \leq 4e^{T}\left( S\left( t-T\right)
+\int_{t-T}^{t}\Gamma \left( s\right) ds\right) ,\qquad t\geq T,
\end{equation*}
where $S\left( t\right) $ is the solution of the following ordinary
differential equation:
\begin{equation}
\frac{dS}{dt}+\frac{1}{TC_{T}}S=\Gamma \left( t\right) ,\qquad S\left(
0\right) =E_{u}\left( 0\right) ,  \label{sharp ODE}
\end{equation}
where $C_{T}\geq 1$.
\end{theorem}
\subsection{\textbf{Applications for the linear case}}
Setting
\begin{equation*}
\Gamma \left( t\right) =C_{1,T}\int_{M}\left\vert f\left( t,x\right)
\right\vert ^{2}dx
\end{equation*}
with $C_{1,T}\geq 1$, the ODE $\left( \ref{sharp ODE}\right) $ governing the
energy bound reduces to
\begin{equation}
\frac{dS}{dt}+CS=\Gamma \left( t\right) \label{equation linear}
\end{equation}
where the constant $C>0$ does not depend on $E_{u}\left( 0\right) $.
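By variation of constants, the solution of (\ref{equation linear}) is given
explicitly by
\begin{equation*}
S\left( t\right) =e^{-Ct}S\left( 0\right) +\int_{0}^{t}e^{-C\left(
t-s\right) }\Gamma \left( s\right) ds,
\end{equation*}
so the decay of $S$, and hence of the energy bound, is dictated by the decay
of $\Gamma $.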
\begin{enumerate}
\item If there are constants $M>0$ and $\theta >0,$ such tha
\begin{equation*}
\Gamma \left( t\right) \leq Me^{-\theta t}
\end{equation*
We hav
\begin{equation*}
\int_{t-T}^{t}e^{-\theta s}ds\leq \frac{1}{\theta }\left[ e^{\theta T}-1
\right] e^{-\theta t},t\geq T
\end{equation*}
Multiplying both sides of (\ref{equation linear}) by $\exp (Ct)$ and integrating
from $0$ to $t$, we obtain:
\begin{enumerate}
\item $C>\theta $:
\begin{equation*}
E_{u}\left( t\right) \leq c\left( 1+E_{u}\left( 0\right) \right) e^{-\theta
t},t\geq 0
\end{equation*}
\item $C=\theta $:
\begin{equation*}
E_{u}\left( t\right) \leq c\left( 1+E_{u}\left( 0\right) \right) \left(
1+t\right) e^{-\theta t},t\geq 0
\end{equation*}
\item $C<\theta $:
\begin{equation*}
E_{u}\left( t\right) \leq c\left( 1+E_{u}\left( 0\right) \right)
e^{-Ct},t\geq 0
\end{equation*}
\end{enumerate}
\item If there are constants $M>0$ and $\theta >1,$ such that
\begin{equation*}
\Gamma \left( t\right) \leq M\left( 1+t\right) ^{-\theta }
\end{equation*}
We have
\begin{equation*}
\int_{t-T}^{t}\left( 1+s\right) ^{-\theta }ds\leq T\left( 1+t-T\right)
^{-\theta },t\geq T
\end{equation*}
In order to obtain the rate of decay in this case, we use Proposition \ref
{lemma ode}. Then
\begin{equation*}
E_{u}\left( t\right) \leq c\left( 1+t-T\right) ^{-\theta },t\geq T
\end{equation*}
where $c>0$ depends on $E_{u}\left( 0\right) .$
\end{enumerate}
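The three regimes above can be checked numerically on the linear ODE; the following is only an illustrative sketch (forward Euler, with my own choices $M=1$, $\theta =1/2$ and arbitrary step size and horizon, none of which come from the text):

```python
import math

def solve_linear_ode(C, gamma, S0, t_end=40.0, dt=1e-3):
    """Forward-Euler integration of S'(t) + C*S(t) = gamma(t), S(0) = S0."""
    S, t = S0, 0.0
    while t < t_end - dt / 2:
        S += dt * (gamma(t) - C * S)
        t += dt
    return S

M, theta = 1.0, 0.5
gamma = lambda t: M * math.exp(-theta * t)

# C > theta: the forcing rate theta dominates, S(t) ~ e^{-theta t}
S_fast = solve_linear_ode(2.0, gamma, 1.0)
# C < theta: the ODE rate C dominates, S(t) ~ e^{-C t}
S_slow = solve_linear_ode(0.2, gamma, 1.0)

print(S_fast * math.exp(theta * 40.0))  # stays bounded (exact limit M/(C-theta) = 2/3)
print(S_slow * math.exp(0.2 * 40.0))    # stays bounded (exact limit 1 + M/(theta-C) = 13/3)
```

In both rescalings the product stays bounded: the slower of the two rates $C$ and $\theta $ dictates the asymptotics, which is exactly the case distinction above.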
\begin{remark}
If we consider the following system
\begin{equation*}
\left\{
\begin{array}{ll}
\partial _{t}^{2}u-\Delta u+a\left( x\right) g\left( \partial _{t}u\right)
=f\left( t,x\right) &
\mathbb{R}
_{+}\times M \\
u=0 &
\mathbb{R}
_{+}\times \partial M \\
\left( u\left( 0\right) ,\partial _{t}u\left( 0\right) \right) =\left(
u_{0},u_{1}\right) \in H_{0}^{1}\left( M\right) \times L^{2}\left( M\right)
&
\end{array}
\right.
\end{equation*}
with $g$ a continuous, monotone increasing function, vanishing at the origin
and linearly bounded. Then the result of the theorem above remains true.
\end{remark}
\subsection{The nonlinear case}
In the second part of the paper we study the rate of decay of the energy
functional of solution of the wave equation with nonlinear damping and
external force. More precisely, we consider the following system
\begin{equation}
\left\{
\begin{array}{ll}
\partial _{t}^{2}u-\Delta u+a\left( x\right) g\left( \partial _{t}u\right)
=f\left( t,x\right) &
\mathbb{R}
_{+}\times M \\
u=0 &
\mathbb{R}
_{+}\times \partial M \\
\left( u\left( 0\right) ,\partial _{t}u\left( 0\right) \right) =\left(
u_{0},u_{1}\right) \in H_{0}^{1}\left( M\right) \times L^{2}\left( M\right)
&
\end{array}
\right. \label{sys:nonlinear}
\end{equation}
$g$ is a continuous, monotone increasing function vanishing at the origin.
Moreover, we assume that there exists a positive constant $m$ such that
\begin{equation}
\frac{1}{m}\left\vert s\right\vert ^{2}\leq g\left( s\right) s\leq
m\left\vert s\right\vert ^{2},\quad \left\vert s\right\vert >\eta
\label{behavior infinity}
\end{equation}
for some $\eta >0.$ Here $a\left( x\right) $ is a non-negative function in
$C^{\infty }\left( M\right) $ and $f$ is in $L^{2}\left(
\mathbb{R}
_{+}\times M\right) .$
Nonlinear semigroup theory applied to (\ref{sys:nonlinear}) provides the
existence of a unique solution $u$ in the class
\begin{equation*}
u\in C^{0}\left(
\mathbb{R}
_{+},H_{0}^{1}\left( M\right) \right) \cap C^{1}\left(
\mathbb{R}
_{+},L^{2}\left( M\right) \right)
\end{equation*}
Under these assumptions on the behavior of the damping, the energy
functional satisfies the following identity
\begin{equation}
E_{u}\left( t\right) +\int_{s}^{t}\int_{M}a\left( x\right) g\left( \partial
_{t}u\right) \partial _{t}udxd\sigma =E_{u}\left( s\right)
+\int_{s}^{t}\int_{M}f\partial _{t}udxd\sigma
\end{equation}
for every $t\geq s\geq 0.$
It is well known that, for the nonlinear problem without an external force, the
corresponding observability inequality \cite{las-tat,MID} reads as
follows:
\begin{description}
\item[(C) Nonlinear Observability Inequality] There exists a constant $T>0$
and a concave, continuous, monotone increasing function $h:\mathbb{R}
_{+}\rightarrow \mathbb{R}_{+}$, $h(0)=0$ (possibly dependent on $T$) such
that the solution $u(t,x)$ to the nonlinear problem (\ref{sys:nonlinear})
with initial data $\varphi =\left( u_{0},u_{1}\right) $ and $f\equiv 0$
satisfies
\begin{equation}
E_{u}\left( t\right) \leq h\left( \int_{t}^{t+T}\int_{\Omega }a(x)g(\partial
_{t}u)\partial _{t}u\;dxds\right) , \label{observability nonlinear}
\end{equation}
for every $t\geq 0.$
\end{description}
The function $h(s)$ in (\ref{observability nonlinear}) depends on the
nonlinear map $g(s)$, and ultimately determines the decay rates for the
energy $E_{u}\left( t\right) $. The energy decay for the \emph{nonlinear}
problem will be determined from the following ODE
\begin{equation}
S_{t}+h^{-1}\left( CS\right) =0,\quad S\left( 0\right) =E_{u}\left( 0\right)
\label{ODE}
\end{equation}
We show that, under the assumption (G), we obtain the following
observability inequality:
\begin{description}
\item[(D) Nonlinear Non-autonomous Observability Inequality] There exists a
constant $T>0$ and a concave, continuous, monotone increasing function $h:
\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$, $h(0)=0$ (possibly dependent on
$T$) such that the solution $u(t,x)$ to the nonlinear problem (\ref
{sys:nonlinear}) with initial data $\varphi =\left( u_{0},u_{1}\right) $
satisfies
\begin{equation}
E_{u}\left( t\right) \leq h\left( \int_{t}^{t+T}\int_{\Omega }a(x)g(\partial
_{t}u)\partial _{t}u\;dxds+\int_{t}^{t+T}\int_{M}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds\right) ,
\end{equation}
for every $t\geq 0.$
\end{description}
Before giving the main result of this section, we will define some needed
functions. According to \cite{las-tat} there exists a strictly increasing
function $h_{0}$ with $h_{0}\left( 0\right) =0$ such that
\begin{equation*}
h_{0}\left( g\left( s\right) s\right) \geq \epsilon _{0}\left( \left\vert
s\right\vert ^{2}+\left\vert g\left( s\right) \right\vert ^{2}\right) ,\quad
\left\vert s\right\vert \leq \eta
\end{equation*}
for some $\epsilon _{0},\eta >0.$ For the construction of such a function we
refer the interested reader to \cite{las-tat,daou 2}. With this function, we
define
\begin{equation*}
h=I+\mathfrak{m}_{a}\left( M_{T}\right) h_{0}\circ \frac{I}{\mathfrak{m}
_{a}\left( M_{T}\right) }
\end{equation*}
where
\begin{equation*}
\mathfrak{m}_{a}=a\left( x\right) dxdt\text{ and }M_{T}=\left( 0,T\right)
\times M
\end{equation*}
We can now proceed to state the main result of the second part of the paper.
\begin{theorem}
\label{t:2}Let $u(t)$ be the solution to the nonlinear problem (\ref
{sys:nonlinear}) with initial condition $\left( u_{0},u_{1}\right) \in
\mathcal{H}$. We assume that $\left( \omega ,T\right) $ satisfies the
assumption (G) and
\begin{equation*}
\Gamma \left( t\right) =2\int_{M}\left\vert f\left( t,x\right) \right\vert
^{2}dx+\psi ^{\ast }\left( \left\Vert f\left( t,.\right) \right\Vert
_{L^{2}\left( M\right) }\right) \in L^{1}\left(
\mathbb{R}
_{+}\right)
\end{equation*}
where $\psi ^{\ast }$ is the convex conjugate of the function $\psi ,$
defined by
\begin{equation*}
\psi \left( s\right) =\left\{
\begin{array}{lc}
\frac{1}{2T}h^{-1}\left( \frac{s^{2}}{8C_{T}e^{T}}\right) & s\in
\mathbb{R}
_{+} \\
+\infty & s\in
\mathbb{R}
_{-}^{\ast }
\end{array}
\right.
\end{equation*}
with $C_{T}\geq 1.$ Then
\begin{equation*}
E_{u}\left( t\right) \leq 4e^{T}\left( S\left( t-T\right)
+\int_{t-T}^{t}\Gamma \left( s\right) ds\right) ,\qquad t\geq T
\end{equation*}
where $S\left( t\right) $ is the solution of the following ordinary
differential equation
\begin{equation}
\frac{dS}{dt}+\frac{1}{4T}h^{-1}\left( \frac{1}{K}S\right) =\Gamma \left(
t\right) ,\qquad S\left( 0\right) =E_{u}\left( 0\right) .
\end{equation}
with $K\geq C_{T}.$
\end{theorem}
\subsubsection{Applications for the nonlinear case}
\begin{proposition}
\label{lemma ode}Let $p$ be a differentiable, positive, strictly increasing
function on $
\mathbb{R}
_{+}.$ We assume that there exists $m_{1}>0$ such that $p\left( x\right)
\leq m_{1}x$ for every $x\in \left[ 0,\eta \right] $ for some $0<\eta \ll 1$,
and that the following property
\begin{equation}
p\left( Kx\right) \geq mp\left( K\right) p\left( x\right)
\label{Lem:p lower bound}
\end{equation}
holds for some $m>0$ and for every $\left( K,x\right) \in \left[ 1,+\infty
\right[ \times
\mathbb{R}
_{+}.$ Let $\Gamma \in C^{1}\left(
\mathbb{R}
_{+}\right) $ and let $S$ satisfy the following differential inequality
\begin{equation*}
\frac{dS}{dt}+p\left( S\right) \leq \Gamma \left( t\right) ,\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }S\left(
0\right) \geq 0.
\end{equation*}
\begin{enumerate}
\item Assume $\Gamma \left( t\right) =0$ for every $t\geq 0$ and that
$S\left( 0\right) >0.$ Then
\begin{equation*}
S\left( t\right) \leq \psi ^{-1}\left( t\right) ,\text{ for every }t\geq 0
\end{equation*
where
\begin{equation*}
\psi \left( x\right) =\int_{x}^{S\left( 0\right) }\frac{ds}{p\left( s\right)
}.
\end{equation*}
for $x\in \left] 0,S\left( 0\right) \right] .$
\item Assume $\Gamma \left( t\right) >0$ for every $t\geq 0.$
\begin{enumerate}
\item There exist $c>0$ and $\kappa \geq 1$ such that
\begin{equation}
\frac{d}{dt}p^{-1}\left( \Gamma \left( t\right) \right) +c\Gamma \left(
t\right) <0,\text{ for every }t\geq 0 \label{application lemma 1}
\end{equation}
and
\begin{equation}
\begin{array}{c}
mp\left( \kappa \right) -\kappa c-1\geq 0 \\
\kappa p^{-1}\circ \Gamma \left( 0\right) \geq S\left( 0\right)
\end{array}
\label{application assumption}
\end{equation}
Then
\begin{equation*}
S\left( t\right) \leq \kappa \psi ^{-1}\left( ct\right) ,\text{ for every }
t\geq 0
\end{equation*}
where
\begin{equation*}
\psi \left( x\right) =\int_{x}^{p^{-1}\circ \Gamma \left( 0\right) }\frac{ds
}{p\left( s\right) }.
\end{equation*}
for $x\in \left] 0,p^{-1}\circ \Gamma \left( 0\right) \right] .$
\item There exist $c>0$ and $\kappa \geq 1$ such that
\begin{equation*}
\frac{d}{dt}p^{-1}\left( \Gamma \left( t\right) \right) +c\Gamma \left(
t\right) \geq 0,\text{ for every }t\geq 0
\end{equation*}
and
\begin{equation*}
\begin{array}{c}
mp\left( \kappa \right) -c\kappa -1\geq 0 \\
\kappa p^{-1}\circ \Gamma \left( 0\right) \geq S\left( 0\right)
\end{array}
\end{equation*}
Then
\begin{equation*}
S\left( t\right) \leq \kappa p^{-1}\circ \Gamma \left( t\right) ,\text{ for
every }t\geq 0
\end{equation*}
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proof}
$\left. {}\right. $
\begin{enumerate}
\item If $S\left( 0\right) =0,$ since $S$ is positive and decreasing, then
$S\left( t\right) =0$ for every $t\geq 0.$ We assume that $S\left( 0\right)
>0 $. Let $\psi $ be the function defined by
\begin{equation*}
\psi \left( x\right) =\int_{x}^{S\left( 0\right) }\frac{ds}{p\left( s\right)
}.
\end{equation*}
Then $\psi $ is a strictly decreasing function on $\left( 0,S\left( 0\right)
\right) $ and $\underset{x\rightarrow 0}{\lim }\psi \left( x\right) =+\infty
.$ We have
\begin{equation*}
\frac{d}{dt}\psi \circ S\left( t\right) \geq 1
\end{equation*}
Integrating from $0$ to $t,$ we obtain
\begin{equation*}
\psi \circ S\left( t\right) \geq t,t\geq 0
\end{equation*}
Since $\psi $ is decreasing,
\begin{equation*}
S\left( t\right) \leq \psi ^{-1}\left( t\right) ,t\geq 0
\end{equation*}
\item $\left. {}\right. $
\begin{enumerate}
\item Let $\psi $ be the function defined by
\begin{equation*}
\psi \left( x\right) =\int_{x}^{p^{-1}\circ \Gamma \left( 0\right) }\frac{ds
}{p\left( s\right) }.
\end{equation*}
Then $\psi $ is a strictly decreasing function on $\left] 0,p^{-1}\circ
\Gamma \left( 0\right) \right] $ and $\underset{x\rightarrow 0}{\lim }\psi
\left( x\right) =+\infty .$ We have
\begin{equation*}
\frac{d}{dt}\psi \circ p^{-1}\circ \Gamma \left( t\right) =-\frac{\frac{d}{dt
}p^{-1}\left( \Gamma \left( t\right) \right) }{\Gamma \left( t\right) }
\end{equation*}
From $\left( \ref{application lemma 1}\right) ,$ we infer that
\begin{equation*}
\frac{d}{dt}\psi \circ p^{-1}\circ \Gamma \left( t\right) \geq c
\end{equation*}
Integrating from $0$ to $t,$ we obtain
\begin{equation*}
\psi \circ p^{-1}\circ \Gamma \left( t\right) \geq ct
\end{equation*}
This gives
\begin{equation}
\Gamma \left( t\right) \leq p\circ \psi ^{-1}\left( ct\right) ,\text{ for
every }t\geq 0 \label{application 11}
\end{equation}
Setting
\begin{equation*}
y\left( t\right) =\kappa \psi ^{-1}\left( ct\right) ,t\geq 0.
\end{equation*}
we have
\begin{equation*}
y^{\prime }\left( t\right) +p\left( y\left( t\right) \right) =-c\kappa
p\circ \psi ^{-1}\left( ct\right) +p\left( \kappa \left( \psi ^{-1}\left(
ct\right) \right) \right)
\end{equation*}
Using $\left( \ref{Lem:p lower bound}\right) $ and $\left( \ref{application
11}\right) ,$ we obtain
\begin{eqnarray*}
y^{\prime }\left( t\right) +p\left( y\left( t\right) \right) &\geq &\left(
mp\left( \kappa \right) -c\kappa \right) p\circ \psi ^{-1}\left( ct\right) \\
&\geq &\left( mp\left( \kappa \right) -c\kappa \right) \Gamma \left( t\right)
\end{eqnarray*}
Then $\left( \ref{application assumption}\right) $ gives
\begin{equation*}
\begin{array}{c}
y^{\prime }\left( t\right) +p\left( y\left( t\right) \right) \geq \Gamma
\left( t\right) \\
y\left( 0\right) \geq S\left( 0\right)
\end{array}
\end{equation*}
The result follows from the following lemma:
\begin{lemma}
\label{lemma application}Let $p_{i}$ $\left( i=1,2\right) $ be positive,
strictly increasing functions on $
\mathbb{R}
_{+}.$ Suppose that $S$ and $y$ are absolutely continuous functions and
satisfy
\begin{equation}
\frac{dS}{dt}+p_{1}\left( S\right) \leq \Gamma \left( t\right) \text{ on }
\left[ 0,+\infty \right[ . \label{ode lemma application 1}
\end{equation}
and
\begin{equation}
\frac{dy}{dt}+p_{2}\left( y\right) \geq \Gamma _{1}\left( t\right) \text{ on
}\left[ 0,+\infty \right[ . \label{ode lemma application 2}
\end{equation}
where $\Gamma ,\Gamma _{1}\in L^{1}([0,\infty ))$, $\Gamma _{1}\geq
\Gamma \geq 0$ and $p_{1}\geq p_{2}\geq 0$. In addition, if
\begin{equation*}
y\left( 0\right) \geq S\left( 0\right)
\end{equation*}
then
\begin{equation*}
y\left( t\right) \geq S\left( t\right) ,\text{ for }t\geq 0
\end{equation*}
\end{lemma}
First we finish the proof of the proposition, then we give the proof of the
lemma.
\item Setting
\begin{equation*}
y\left( t\right) =\kappa p^{-1}\circ \Gamma \left( t\right) ,t\geq 0.
\end{equation*}
we have
\begin{equation*}
y^{\prime }\left( t\right) +p\left( y\left( t\right) \right) =\kappa \left(
p^{-1}\circ \Gamma \right) ^{\prime }\left( t\right) +p\left( \kappa
p^{-1}\circ \Gamma \left( t\right) \right)
\end{equation*}
Using $\left( \ref{Lem:p lower bound}\right) $ and the fact that
\begin{equation*}
\frac{d}{dt}p^{-1}\left( \Gamma \left( t\right) \right) +c\Gamma \left(
t\right) \geq 0
\end{equation*}
for some $c>0,$ we obtain
\begin{equation*}
y^{\prime }\left( t\right) +p\left( y\left( t\right) \right) \geq \left(
mp\left( \kappa \right) -c\kappa \right) \Gamma \left( t\right)
\end{equation*}
Then $\left( \ref{application assumption}\right) $ gives
\begin{equation*}
\begin{array}{c}
y^{\prime }\left( t\right) +p\left( y\left( t\right) \right) \geq \Gamma
\left( t\right) \\
y\left( 0\right) \geq S\left( 0\right)
\end{array}
\end{equation*}
The result follows from Lemma \ref{lemma application}.
\end{enumerate}
\end{enumerate}
\end{proof}
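As a quick numerical illustration of case 1 of Proposition \ref{lemma ode}, take the sample choice $p(s)=s^{2}$ (my choice, not from the text); then $\psi (x)=1/x-1/S(0)$, so $\psi ^{-1}(t)=S(0)/(1+S(0)t)$, and the worst case $S^{\prime }=-S^{2}$ of the differential inequality should remain below this bound:

```python
def decay_bound_check(S0=2.0, t_end=10.0, dt=1e-4):
    """Integrate the worst case S' = -S^2 of S' + p(S) <= 0 with p(s) = s^2
    and return the largest observed ratio S(t) / psi^{-1}(t), where
    psi^{-1}(t) = S0 / (1 + S0*t) is the bound given by the proposition."""
    S, t, max_ratio = S0, 0.0, 0.0
    while t < t_end:
        S += dt * (-S * S)
        t += dt
        max_ratio = max(max_ratio, S * (1.0 + S0 * t) / S0)
    return max_ratio

print(decay_bound_check())  # stays <= 1, i.e. S(t) <= psi^{-1}(t)
```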
The proof of Lemma \ref{lemma application} is borrowed from \cite{zhu}.
\begin{proof}[Proof of lemma \protect\ref{lemma application}]
Suppose that there exists $t_{0}$ in $\left[ 0,+\infty \right[ ,$ such that
\begin{equation*}
S\left( t_{0}\right) =y\left( t_{0}\right) \text{ and }S\left( t\right)
>y\left( t\right) \text{ on }\left] t_{0},t_{0}+\epsilon \right]
\end{equation*}
for some $\epsilon >0.$ Integrating $\left( \ref{ode lemma application 1}
\right) $ and $\left( \ref{ode lemma application 2}\right) $ from $t_{0}$ to
$t_{0}+\epsilon ,$ we obtain
\begin{equation*}
S\left( t_{0}+\epsilon \right) -S\left( t_{0}\right)
+\int_{t_{0}}^{t_{0}+\epsilon }p_{1}\left( S\left( t\right) \right) dt\leq
\int_{t_{0}}^{t_{0}+\epsilon }\Gamma \left( t\right) dt
\end{equation*}
and
\begin{equation*}
y\left( t_{0}+\epsilon \right) -y\left( t_{0}\right)
+\int_{t_{0}}^{t_{0}+\epsilon }p_{2}\left( y\left( t\right) \right) dt\geq
\int_{t_{0}}^{t_{0}+\epsilon }\Gamma _{1}\left( t\right) dt
\end{equation*}
Therefore
\begin{equation*}
S\left( t_{0}+\epsilon \right) +\int_{t_{0}}^{t_{0}+\epsilon }p_{1}\left(
S\left( t\right) \right) dt\leq y\left( t_{0}+\epsilon \right)
+\int_{t_{0}}^{t_{0}+\epsilon }p_{2}\left( y\left( t\right) \right) dt
\end{equation*}
which gives
\begin{eqnarray*}
S\left( t_{0}+\epsilon \right) -y\left( t_{0}+\epsilon \right) &\leq
&\int_{t_{0}}^{t_{0}+\epsilon }p_{2}\left( y\left( t\right) \right)
-p_{1}\left( S\left( t\right) \right) dt \\
&\leq &\int_{t_{0}}^{t_{0}+\epsilon }p_{1}\left( y\left( t\right) \right)
-p_{1}\left( S\left( t\right) \right) dt\leq 0
\end{eqnarray*}
which contradicts the fact that $S\left( t\right) >y\left( t\right) $ on
$\left] t_{0},t_{0}+\epsilon \right] .$
\end{proof}
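Lemma \ref{lemma application} can be sanity-checked numerically on the equality cases of the two differential inequalities; all concrete data below are my own illustrative choices ($p_{1}(s)=s^{2}+s$, $p_{2}(s)=s$, $\Gamma (t)=e^{-t}$, $\Gamma _{1}(t)=2e^{-t}$), not taken from the text:

```python
import math

def comparison_gap(S0=1.0, y0=1.0, t_end=20.0, dt=1e-3):
    """Integrate S' = Gamma - p1(S) and y' = Gamma1 - p2(y) with
    Gamma1 >= Gamma >= 0, p1 >= p2 on R_+ and y(0) >= S(0);
    return the minimum of y(t) - S(t), which the lemma predicts is >= 0."""
    p1 = lambda s: s * s + s   # sample p1 >= p2 on R_+
    p2 = lambda s: s
    S, y, t = S0, y0, 0.0
    min_gap = y - S
    while t < t_end:
        S += dt * (math.exp(-t) - p1(S))
        y += dt * (2.0 * math.exp(-t) - p2(y))
        t += dt
        min_gap = min(min_gap, y - S)
    return min_gap

print(comparison_gap())  # nonnegative: y dominates S, as the lemma asserts
```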
Setting
\begin{equation*}
\Gamma \left( t\right) =2\int_{M}\left\vert f\left( t,x\right) \right\vert
^{2}dx+\psi ^{\ast }\left( \left\Vert f\left( t,.\right) \right\Vert
_{L^{2}\left( M\right) }\right)
\end{equation*}
where $\psi ^{\ast }$ is the convex conjugate of the function $\psi ,$
defined by
\begin{equation*}
\psi \left( s\right) =\left\{
\begin{array}{lc}
\frac{1}{2T}h^{-1}\left( \frac{s^{2}}{8C_{T}e^{T}}\right) & s\in
\mathbb{R}
_{+} \\
+\infty & s\in
\mathbb{R}
_{-}^{\ast }
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\psi ^{\ast }\left( s\right) =\underset{y\in
\mathbb{R}
}{\sup }\left[ sy-\psi \left( y\right) \right]
\end{equation*}
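When $h^{-1}$ has no convenient closed form, the conjugate $\psi ^{\ast }$ can be approximated by a direct grid search over the supremum; the sketch below uses the illustrative test function $\psi (y)=y^{2}/2$ (not the $\psi $ of the theorem) and exploits the fact that $\psi =+\infty $ on the negative axis, so only $y\geq 0$ is scanned:

```python
import math

def conjugate(psi, s, y_max=50.0, n=200_000):
    """Grid approximation of psi*(s) = sup_{y >= 0} [s*y - psi(y)];
    psi is taken to be +infinity for y < 0, so only y >= 0 is scanned."""
    best = -math.inf
    for k in range(n + 1):
        y = y_max * k / n
        best = max(best, s * y - psi(y))
    return best

# For psi(y) = y^2/2 the conjugate at s >= 0 is s^2/2.
print(conjugate(lambda y: 0.5 * y * y, 3.0))  # ≈ 4.5
```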
\begin{description}
\item[Superlinear damping] Assume
\begin{equation*}
g\left( s\right) =\left\{
\begin{array}{cc}
s^{2}e^{-\frac{1}{s^{2}}} & 0\leq s<1 \\
-s^{2}e^{-\frac{1}{s^{2}}} & -1<s<0
\end{array}
\right. .
\end{equation*}
We choose $h_{0}^{-1}\left( s\right) =s^{3/2}e^{-\frac{1}{s}},$ $0<s<\eta
\ll 1$ and
\begin{equation*}
K\gg \max \left( E_{u}\left( 0\right) +\left\Vert \Gamma \right\Vert
_{L^{1}\left(
\mathbb{R}
_{+}\right) },C_{T}\right) .
\end{equation*}
We have
\begin{equation*}
\psi ^{\ast }\left( \left\Vert f\left( t,.\right) \right\Vert _{L^{2}\left(
M\right) }\right) \leq C\left( \left\Vert f\left( t,.\right) \right\Vert
_{L^{2}\left( M\right) }\left\vert \ln \left( \left\Vert f\left( t,.\right)
\right\Vert _{L^{2}\left( M\right) }\right) \right\vert ^{-\frac{1}{2}
}+\left\Vert f\left( t,.\right) \right\Vert _{L^{2}\left( M\right)
}^{2}\right) .
\end{equation*}
The ODE of Theorem \ref{t:2} governing the energy bound reduces
to
\begin{equation*}
\frac{dS}{dt}+CS^{3/2}e^{-\frac{1}{S}}\leq \Gamma \left( t\right)
\end{equation*}
where $C>0$ depends on $E_{u}\left( 0\right) .$ If there are constants $M>0$
and $\theta >1,$ such that
\begin{equation*}
\Gamma \left( t\right) \leq M\left( 1+t\right) ^{-\theta }
\end{equation*}
then
\begin{equation*}
E_{u}\left( t\right) \leq \frac{c_{0}}{\ln \left( ct+c_{1}\right) },t\geq T
\end{equation*}
with $c,c_{0},c_{1}>0$. These constants may depend on $E_{u}\left( 0\right) .$
\item[Sublinear near the origin] Assume $g\left( s\right) s\simeq \left\vert
s\right\vert ^{1+r_{0}},$ $\left\vert s\right\vert <1,$ $r_{0}\in \left(
0,1\right) .$ We choose $h_{0}\left( s\right) =s^{2r_{0}/\left(
1+r_{0}\right) }$ for $0\leq s\leq 1$ and
\begin{equation*}
K\gg \max \left( E_{u}\left( 0\right) +\left\Vert \Gamma \right\Vert
_{L^{1}\left(
\mathbb{R}
_{+}\right) },C_{T}\right) .
\end{equation*}
We have
\begin{equation*}
\psi ^{\ast }\left( \left\Vert f\left( t,.\right) \right\Vert _{L^{2}\left(
M\right) }\right) \leq C\left( \left\Vert f\left( t,.\right) \right\Vert
_{L^{2}\left( M\right) }^{r_{0}+1}+\left\Vert f\left( t,.\right) \right\Vert
_{L^{2}\left( M\right) }^{2}\right)
\end{equation*}
The ODE of Theorem \ref{t:2} governing the energy bound reduces
to
\begin{equation*}
\frac{dS}{dt}+CS^{\left( 1+r_{0}\right) /2r_{0}}\leq \Gamma \left( t\right)
\end{equation*}
where $C>0$ depends on $E_{u}\left( 0\right) .$
\begin{enumerate}
\item If there are constants $M>0$ and $\theta >1,$ such that
\begin{equation*}
\Gamma \left( t\right) \leq M\left( 1+t\right) ^{-\theta }
\end{equation*}
Then
\begin{enumerate}
\item $\theta \in \left] 1,\frac{1+r_{0}}{1-r_{0}}\right] $:
\begin{equation*}
E_{u}\left( t\right) \leq c\left( 1+t-T\right) ^{-\frac{2r_{0}\theta }{
1+r_{0}}},t\geq T
\end{equation*}
where $c>0.$
\item $\theta \geq \frac{1+r_{0}}{1-r_{0}}$:
\begin{equation*}
E_{u}\left( t\right) \leq c\left( t-T\right) ^{-\frac{2r_{0}}{1-r_{0}}},t>T
\end{equation*}
with $c>0$ depending on $E_{u}\left( 0\right) $.
\end{enumerate}
\item If there are constants $M>0$ and $\theta >0,$ such that
\begin{equation*}
\Gamma \left( t\right) \leq Me^{-\theta t}
\end{equation*}
Then
\begin{equation*}
E_{u}\left( t\right) \leq c\left( t-T\right) ^{-\frac{2r_{0}}{1-r_{0}}},t>T
\end{equation*}
with $c>0$ depending on $E_{u}\left( 0\right) .$
\end{enumerate}
\end{description}
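The algebraic rate in the sublinear case can be observed on the reduced ODE with $\Gamma \equiv 0$; a rough sketch (forward Euler, with my illustrative choice $r_{0}=1/2$, for which the predicted exponent $2r_{0}/(1-r_{0})$ equals $2$):

```python
import math

def sublinear_decay_exponent(r0=0.5, S0=1.0, dt=1e-3):
    """Integrate S' = -S^q with q = (1+r0)/(2*r0) (the Gamma = 0 case of the
    reduced ODE) and estimate the algebraic decay exponent from the ratio
    S(100)/S(200); the exact solution S(t) = (S0^{1-q} + (q-1)t)^{-1/(q-1)}
    decays like t^{-2*r0/(1-r0)}."""
    q = (1.0 + r0) / (2.0 * r0)
    S = S0
    n_half, n_full = int(100.0 / dt), int(200.0 / dt)
    S_half = None
    for n in range(n_full):
        S += dt * (-(S ** q))
        if n + 1 == n_half:
            S_half = S
    return math.log(S_half / S) / math.log(2.0)

print(sublinear_decay_exponent())  # close to the predicted exponent 2
```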
\section{The linear case: Proof of theorem \protect\ref{t:1}}
\subsection{Preliminary results}
\begin{proposition}
Let $u$ be a solution of $\left( \ref{sys:linear}\right) $ with initial data
in the energy space. Then
\begin{equation}
E_{u}\left( t\right) \leq \left( 1+\frac{1}{\epsilon }\right) e^{\epsilon
\left( t-s\right) }\left( E_{u}\left( s\right)
+\int_{s}^{t}\int_{M}\left\vert f\left( \sigma ,x\right) \right\vert
^{2}dxd\sigma \right) \label{energy bound linear}
\end{equation}
for every $\epsilon >0$ and for every $t\geq s\geq 0.$
\end{proposition}
\begin{proof}
Let $t\geq s\geq 0.$ From the energy identity,
\begin{equation*}
E_{u}\left( t\right) \leq E_{u}\left( s\right)
+\int_{s}^{t}\int_{M}f\partial _{t}udxd\sigma
\end{equation*}
Using Young's inequality,
\begin{equation*}
E_{u}\left( t\right) \leq E_{u}\left( s\right) +\frac{1}{\epsilon }
\int_{s}^{t}\int_{M}\left\vert f\left( \sigma ,x\right) \right\vert
^{2}dxd\sigma +\epsilon \int_{s}^{t}E_{u}\left( \sigma \right) d\sigma
\end{equation*}
for every $\epsilon >0.$ Now Gronwall's lemma gives
\begin{equation*}
E_{u}\left( t\right) \leq e^{\epsilon \left( t-s\right) }\left( E_{u}\left(
s\right) +\frac{1}{\epsilon }\int_{s}^{t}\int_{M}\left\vert f\left( \sigma
,x\right) \right\vert ^{2}dxd\sigma \right) .
\end{equation*}
\end{proof}
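The Young-plus-Gronwall argument above can be checked on its worst case, where both inequalities become equalities; in the sketch below the forcing mass $F(t)=e^{-t}$ is an arbitrary stand-in for $\int_{M}\left\vert f\right\vert ^{2}dx$, and the step size is an illustrative choice:

```python
import math

def gronwall_check(eps=0.5, E0=1.0, t_end=5.0, dt=1e-4):
    """Integrate the worst case E' = F(t)/eps + eps*E of the pre-Gronwall
    inequality and verify E(t) <= e^{eps t} (E0 + (1/eps) int_0^t F)."""
    F = lambda t: math.exp(-t)      # illustrative forcing mass
    E, t, intF = E0, 0.0, 0.0
    ok = True
    while t < t_end:
        E += dt * (F(t) / eps + eps * E)
        intF += dt * F(t)
        t += dt
        ok = ok and E <= math.exp(eps * t) * (E0 + intF / eps)
    return ok

print(gronwall_check())  # True: the Gronwall bound dominates the worst case
```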
The result below is a generalisation of the comparison lemma of Lasiecka and
Tataru \cite{las-tat}.
\begin{lemma}
\label{lemma las tat}Let $T>0$ and
\begin{itemize}
\item $\Gamma \in L_{loc}^{1}\left(
\mathbb{R}
_{+}\right) $ and non-negative. Set $\delta \left( t\right)
=\int_{t}^{t+T}\Gamma \left( s\right) ds$.
\item $W\left( t\right) $ be a non-negative, continuous function for $t\in
\mathbb{R}_{+}$. Moreover we assume that there exists a positive, monotone,
increasing function $\alpha $ such that
\begin{equation*}
W\left( t\right) \leq \alpha \left( t-s\right) \left[ W\left( s\right)
+\int_{s}^{t}\Gamma \left( \sigma \right) d\sigma \right] ,\text{ for every }
t\geq s\geq 0.
\end{equation*}
\item Suppose that $\ell $ and $I-\ell :
\mathbb{R}
_{+}\rightarrow \mathbb{R}$ are increasing functions with $\ell (0)=0$ and
\begin{equation}
W\left( \left( m+1\right) T\right) +\ell \left\{ W\left( mT\right) +\delta
\left( mT\right) \right\} \leq W\left( mT\right) +\delta \left( mT\right)
\label{lemma las tat inequality}
\end{equation}
for $m=0,1,2,\ldots $ where $\ell \left( s\right) $ does not depend on $m.$ Then
\begin{equation*}
W\left( t\right) \leq \alpha \left( T\right) \left( S\left( t-T\right)
+2\int_{t-T}^{t}\Gamma \left( s\right) ds\right) ,\qquad \forall t\geq T
\end{equation*}
where $S\left( t\right) $ is a positive solution of the following nonlinear
differential equation
\begin{equation}
\frac{dS}{dt}+\frac{1}{T}\ell \left( S\right) =\Gamma \left( t\right)
;\qquad S(0)=W(0). \label{Ode lema}
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
To prove this result we use induction. Assume that $W\left( mT\right) \leq
S\left( mT\right) $ and prove that $W\left( \left( m+1\right) T\right) \leq
S\left( \left( m+1\right) T\right) $ where $S\left( t\right) $ is the
solution of $\left( \ref{Ode lema}\right) .$
Integrating the equation $\left( \ref{Ode lema}\right) $ from $mT$ to
$\left( m+1\right) T$ yields
\begin{equation}
S\left( \left( m+1\right) T\right) =S\left( mT\right) -\frac{1}{T}
\int_{mT}^{\left( m+1\right) T}\ell \left( S\left( t\right) \right)
dt+\delta \left( mT\right) \label{lem las tat 1}
\end{equation}
On the other hand, we have
\begin{equation*}
\frac{d}{dt}\left( S-\int_{0}^{t}\Gamma \left( s\right) ds\right) =-\frac{1}{
T}\ell \left( S\right) \leq 0.
\end{equation*}
Therefore, for $t_{1}\geq t_{2}$,
\begin{equation*}
S\left( t_{1}\right) \leq S\left( t_{2}\right) +\int_{t_{2}}^{t_{1}}\Gamma
\left( s\right) ds
\end{equation*}
Since the function $\ell $ is increasing,
\begin{eqnarray*}
\ell \left( S\left( t\right) \right) &\leq &\ell \left( S\left( mT\right)
+\int_{mT}^{t}\Gamma \left( s\right) ds\right) ,\text{ for }mT\leq t\leq
\left( m+1\right) T \\
&\leq &\ell \left( S\left( mT\right) +\delta \left( mT\right) \right)
\end{eqnarray*}
Using now $\left( \ref{lem las tat 1}\right) ,$ we obtain
\begin{equation*}
S\left( \left( m+1\right) T\right) \geq S\left( mT\right) +\delta \left(
mT\right) -\ell \left( S\left( mT\right) +\delta \left( mT\right) \right)
\end{equation*}
Since the function $I-\ell $ is increasing,
\begin{equation*}
S\left( \left( m+1\right) T\right) \geq W\left( mT\right) +\delta \left(
mT\right) -\ell \left( W\left( mT\right) +\delta \left( mT\right) \right)
\end{equation*}
Then $\left( \ref{lemma las tat inequality}\right) $ gives
\begin{equation*}
S\left( \left( m+1\right) T\right) \geq W\left( \left( m+1\right) T\right) .
\end{equation*}
Setting $t=mT+\tau ,$ with $0\leq \tau <T,$ we obtain
\begin{eqnarray*}
W\left( t\right) &\leq &\alpha \left( \tau \right) \left[ W\left( t-\tau
\right) +\int_{t-\tau }^{t}\Gamma \left( s\right) ds\right] \\
&\leq &\alpha \left( \tau \right) \left( S\left( t-\tau \right)
+\int_{t-\tau }^{t}\Gamma \left( s\right) ds\right) \\
&\leq &\alpha \left( T\right) \left( S\left( t-T\right)
+2\int_{t-T}^{t}\Gamma \left( s\right) ds\right) ,\text{ for every }t\geq T.
\end{eqnarray*}
\end{proof}
\begin{proposition}
We assume that $\left( \omega ,T\right) $ satisfies the assumption (G).
Then there exists $\hat{C}_{T}>0$ such that the following inequality
\begin{equation}
E_{u}\left( t\right) \leq \hat{C}_{T}\left[ \int_{t}^{t+T}\int_{M}a\left(
x\right) \left\vert \partial _{t}u\right\vert ^{2}+\left\vert f\left(
s,x\right) \right\vert ^{2}dxds\right] \label{observability linear}
\end{equation}
holds for every $t\geq 0$, for every solution $u$ of $\left( \ref{sys:linear}
\right) $ with initial data in the energy space $\mathcal{H}$, and for every $f$
in $L^{2}\left(
\mathbb{R}
_{+}\times M\right) .$
\end{proposition}
\begin{proof}
To prove this result we argue by contradiction. We assume that there exist a
sequence $\left( u_{n}\right) _{n}$ of solutions of $\left( \ref{sys:linear}
\right) $ with initial data in the energy space, a
non-negative sequence $\left( t_{n}\right) _{n}$ and $f_{n}$ in
$L^{2}\left(
\mathbb{R}
_{+}\times M\right) ,$ such that
\begin{equation}
\begin{array}{l}
E_{u_{n}}\left( t_{n}\right) \geq n\int_{t_{n}}^{t_{n}+T}\int_{M}a\left(
x\right) \left\vert \partial _{t}u_{n}\right\vert ^{2}+\left\vert
f_{n}\left( t,x\right) \right\vert ^{2}dxdt
\end{array}
\label{contradiction argument}
\end{equation}
Moreover, $u_{n}$ has the following regularity:
\begin{equation}
u_{n}\in C\left(
\mathbb{R}
_{+},H_{0}^{1}\left( M\right) \right) \cap C^{1}\left(
\mathbb{R}
_{+},L^{2}\left( M\right) \right)
\end{equation}
Set $\alpha _{n}=\left( E_{u_{n}}\left( t_{n}\right) \right) ^{1/2}>0$
and $v_{n}\left( t,x\right) =\frac{u_{n}\left( t_{n}+t,x\right) }{\alpha _{n}
}.$ Then $v_{n}$ satisfies
\begin{equation}
\left\{
\begin{array}{ll}
\partial _{t}^{2}v_{n}-\Delta v_{n}+a\left( x\right) \partial _{t}v_{n}=
\frac{1}{\alpha _{n}}f_{n}\left( t_{n}+t,x\right) &
\mathbb{R}
_{+}\times M \\
v_{n}=0 &
\mathbb{R}
_{+}\times \partial M \\
\left( v_{n}\left( 0\right) ,\partial _{t}v_{n}\left( 0\right) \right) =
\frac{1}{\alpha _{n}}\left( u_{n}\left( t_{n}\right) ,\partial
_{t}u_{n}\left( t_{n}\right) \right) &
\end{array}
\right. \label{sys:vn}
\end{equation}
Moreover,
\begin{equation*}
E_{v_{n}}\left( 0\right) =1
\end{equation*}
and
\begin{equation}
\begin{array}{l}
1\geq n\int_{0}^{T}\int_{M}a\left( x\right) \left\vert \partial
_{t}v_{n}\right\vert ^{2}+\left\vert \frac{1}{\alpha _{n}}f_{n}\left(
t_{n}+t,x\right) \right\vert ^{2}dxdt
\end{array}
\label{contradiction argument1}
\end{equation}
From the inequality above, we infer that
\begin{equation}
\begin{array}{l}
\int_{0}^{T}\int_{M}a\left( x\right) \left\vert \partial
_{t}v_{n}\right\vert ^{2}dxdt\underset{n\rightarrow \infty }{\rightarrow }0,
\\
\int_{0}^{T}\int_{M}\left\vert \frac{1}{\alpha _{n}}f_{n}\left(
t_{n}+t,x\right) \right\vert ^{2}dxdt\underset{n\rightarrow \infty }{
\rightarrow }0
\end{array}
\label{l2 contradiction argument}
\end{equation}
We have
\begin{equation}
v_{n}\in C\left( \left[ 0,T\right] ,H_{0}^{1}\left( M\right) \right) \cap
C^{1}\left( \left[ 0,T\right] ,L^{2}\left( M\right) \right)
\end{equation}
Therefore,
\begin{equation}
E_{v_{n}}\left( t\right) =E_{v_{n}}\left( 0\right)
-\int_{0}^{t}\int_{M}a\left( x\right) \left\vert \partial
_{t}v_{n}\right\vert ^{2}dxd\sigma +\int_{0}^{t}\int_{M}\frac{1}{\alpha _{n}}
f_{n}\left( t_{n}+\sigma ,x\right) \partial _{t}v_{n}dxd\sigma
\label{energy identity vn}
\end{equation}
and using $\left( \ref{energy bound linear}\right) ,$ we infer that
\begin{eqnarray*}
E_{v_{n}}\left( t\right) &\leq &2e^{T}\left( E_{v_{n}}\left( 0\right)
+\int_{0}^{T}\int_{M}\left\vert \frac{1}{\alpha _{n}}f_{n}\left(
t_{n}+t,x\right) \right\vert ^{2}dxdt\right) \\
E_{v_{n}}\left( t\right) &\leq &2e^{T}\left( 1+\frac{1}{n}\right) ,\text{
for every }t\in \left[ 0,T\right]
\end{eqnarray*}
This estimate shows that the sequence $\left( v_{n},\partial
_{t}v_{n}\right) $ is bounded in $L^{\infty }\left( \left( 0,T\right) ,
\mathcal{H}\right) $; then it admits a subsequence, still denoted by $\left(
v_{n},\partial _{t}v_{n}\right) $, that converges weakly-$\ast $ to $\left(
v,\partial _{t}v\right) $ in $L^{\infty }\left( \left( 0,T\right) ,\mathcal{H
}\right) .$ Passing to the limit in the system satisfied by $v_{n},$ we
obtain
\begin{equation}
\left\{
\begin{array}{ll}
\partial _{t}^{2}v-\Delta v=0 & \left] 0,T\right[ \times M \\
\partial _{t}v=0 & \left] 0,T\right[ \times \omega \\
\left( v\left( 0\right) ,\partial _{t}v\left( 0\right) \right) \in \mathcal{H
} &
\end{array}
\right. \label{sys limite}
\end{equation}
and the solution $v$ is in the class
\begin{equation*}
C\left( \left[ 0,T\right] ,H_{0}^{1}\left( M\right) \right) \cap C^{1}\left(
\left[ 0,T\right] ,L^{2}\left( M\right) \right)
\end{equation*}
We deduce as in J. Rauch and M. Taylor \cite{rauch taylor} or C. Bardos, G.
Lebeau, J. Rauch \cite{blr} that the set of such solutions is finite
dimensional and admits an eigenvector $v$ for $\Delta $. By unique
continuation for second order elliptic operator, we get $\partial _{t}v=0$.
Multiplying the equation by $v$ and integrating, we obtain $v=0.$ Now we
prove that $v_{n}\rightarrow 0$ in the strong topology of $
H_{loc}^{1}\left( \left( 0,T\right) ,H^{1}\left( M\right) \right) .$ For
that we use the notion of microlocal defect measures. These measures were
introduced by P. G\'{e}rard \cite{ge1} and L. Tartar \cite{tatar}. Let $\mu $
be the microlocal defect measure associated to the sequence $\left(
v_{n}\right) .$ From $\left( \ref{l2 contradiction argument}\right) $ we
infer that the support of $\mu $ is contained in the characteristic set of the
wave operator and that it propagates along the geodesic flow (G. Lebeau \cite
{lebeau}). Therefore
\begin{equation*}
v_{n}\rightarrow 0\text{ in }H_{loc}^{1}\left( \left( 0,T\right) ,H^{1}\left(
\omega \right) \right)
\end{equation*}
Now the assumption (G), combined with the propagation of $\mu $ along the
geodesic flow, gives
\begin{equation*}
v_{n}\rightarrow 0\text{ in }H_{loc}^{1}\left( \left( 0,T\right) ,H^{1}\left(
M\right) \right)
\end{equation*}
This gives
\begin{equation}
\int_{0}^{T}\varphi \left( t\right) E_{v_{n}}\left( t\right) dt\underset{
n\rightarrow \infty }{\rightarrow }0 \label{energy vn integral}
\end{equation}
for every $\varphi \in C_{0}^{\infty }\left( \left[ 0,T\right] \right) .$ On
the other hand, $E_{v_{n}}\left( 0\right) =1,$ therefore, from $\left( \ref
{energy identity vn}\right) $ and the fact that
\begin{equation*}
\int_{0}^{T}\int_{M}\left\vert \partial _{t}v_{n}\right\vert ^{2}dxdt\leq
2Te^{T}\left( 1+\frac{1}{n}\right)
\end{equation*}
we deduce that $E_{v_{n}}\left( t\right) \underset{n\rightarrow \infty }{
\rightarrow }1,$ for every $t\in \left[ 0,T\right] .$ Since $E_{v_{n}}\left(
t\right) \leq 2,$ by Lebesgue's dominated convergence theorem
\begin{equation*}
\int_{0}^{T}\varphi \left( t\right) E_{v_{n}}\left( t\right) dt\underset{
n\rightarrow \infty }{\rightarrow }\int_{0}^{T}\varphi \left( t\right) dt,
\end{equation*}
for every $\varphi \in C_{0}^{\infty }\left( \left[ 0,T\right] \right) .$ We
obtain a contradiction by choosing $\varphi $ such that $\int_{0}^{T}\varphi
\left( t\right) dt>0.$
\end{proof}
\subsection{Proof of Theorem \protect\ref{t:1}.}
Let $u$ be a solution of $\left( \ref{sys:linear}\right) $ with initial data
in the energy space. From the energy identity we have
\begin{equation*}
\int_{t}^{t+T}\int_{M}a\left( x\right) \left\vert \partial _{t}u\right\vert
^{2}dxdt=E_{u}\left( t\right) -E_{u}\left( t+T\right)
+\int_{t}^{t+T}\int_{M}f\left( s,x\right) \partial _{t}udxds
\end{equation*}
therefore, using Young's inequality,
\begin{eqnarray*}
\int_{t}^{t+T}\int_{M}a\left( x\right) \left\vert \partial _{t}u\right\vert
^{2}dxdt &\leq &E_{u}\left( t\right) -E_{u}\left( t+T\right) \\
&&+\epsilon \int_{t}^{t+T}\int_{M}\left\vert \partial _{t}u\right\vert
^{2}dxds+\frac{1}{\epsilon }\int_{t}^{t+T}\int_{M}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds
\end{eqnarray*}
for every $\epsilon >0.$ Now using the observability estimate $\left( \ref
{observability linear}\right) $ and $\left( \ref{energy bound linear}\right)
,$ we can show that
\begin{equation}
\int_{t}^{t+T}\int_{M}\left\vert \partial _{t}u\right\vert ^{2}dxds\leq
2Te^{T}\hat{C}_{T}\left[ \int_{t}^{t+T}\int_{M}a\left( x\right) \left\vert
\partial _{t}u\right\vert ^{2}+\left\vert f\left( s,x\right) \right\vert
^{2}dxds\right] \label{observability}
\end{equation}
Then setting $\epsilon =\frac{1}{4Te^{T}\hat{C}_{T}},$ we infer that
\begin{equation*}
\frac{1}{2}\int_{t}^{t+T}\int_{M}a\left( x\right) \left\vert \partial
_{t}u\right\vert ^{2}dxdt\leq E_{u}\left( t\right) -E_{u}\left( t+T\right)
+\left( 1+Te^{T}\hat{C}_{T}\right) \int_{t}^{t+T}\int_{M}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds
\end{equation*}
Hence with $\tilde{C}_{T}=\left( 1+Te^{T}\hat{C}_{T}\right) ,$ we have
\begin{equation*}
\int_{t}^{t+T}\int_{M}a\left( x\right) \left\vert \partial _{t}u\right\vert
^{2}dxdt\leq 2\left[ E_{u}\left( t\right) -E_{u}\left( t+T\right) +\tilde{C}
_{T}\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert ^{2}dxds
\right]
\end{equation*}
Now from the observability estimate $\left( \ref{observability}\right) ,$
\begin{equation*}
E_{u}\left( t\right) \leq 2\hat{C}_{T}\left[ E_{u}\left( t\right)
-E_{u}\left( t+T\right) +\left( \tilde{C}_{T}+1\right)
\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert ^{2}dxds
\right]
\end{equation*}
with $\hat{C}_{T}\geq 1.$ Setting $C_{1,T}=2\left( \tilde{C}_{T}+1\right) ,$
we remark that
\begin{equation*}
2\hat{C}_{T}\left[ E_{u}\left( t\right) -E_{u}\left( t+T\right)
+C_{1,T}\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dxds\right] \geq C_{1,T}\int_{t}^{t+T}\int_{M}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds
\end{equation*}
Then for $C_{T}=4\hat{C}_{T}$, we have
\begin{eqnarray*}
E_{u}\left( t\right) +C_{1,T}\int_{t}^{t+T}\int_{M}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds &\leq &C_{T}\left[ E_{u}\left( t\right)
-E_{u}\left( t+T\right) \right. \\
&&\left. +C_{1,T}\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right)
\right\vert ^{2}dxds\right]
\end{eqnarray*}
Therefore
\begin{eqnarray*}
&&E_{u}\left( t+T\right) +\frac{1}{C_{T}}\left[ E_{u}\left( t\right)
+C_{1,T}\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dxds\right] \\
&\leq &E_{u}\left( t\right) +C_{1,T}\int_{t}^{t+T}\int_{M}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds
\end{eqnarray*}
Setting $t=mT,$ with $m\in
\mathbb{N}
$ and using Lemma \ref{lem las tat 1}, we conclude that
\begin{equation*}
E_{u}\left( t\right) \leq 4e^{T}\left( S\left( t-T\right)
+\int_{t-T}^{t}\Gamma \left( s\right) ds\right) ,\qquad \forall t\geq T
\end{equation*}
where $S\left( t\right) $ is a positive solution of the following linear
differential equation
\begin{equation}
\frac{dS}{dt}+\frac{1}{C_{T}T}S=\Gamma \left( t\right) ;\qquad
S(0)=E_{u}(0)
\end{equation}
and
\begin{equation*}
\Gamma \left( s\right) =C_{1,T}\int_{M}\left\vert f\left( s,x\right)
\right\vert ^{2}dx
\end{equation*}
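Since this equation is linear in $S,$ it can be integrated explicitly with
the integrating factor $e^{t/\left( C_{T}T\right) }$ (a standard variation
of constants computation, added here only for illustration), which makes the
exponential decay rate visible:
\begin{equation*}
S\left( t\right) =e^{-t/\left( C_{T}T\right) }E_{u}\left( 0\right)
+\int_{0}^{t}e^{-\left( t-\sigma \right) /\left( C_{T}T\right) }\Gamma
\left( \sigma \right) d\sigma .
\end{equation*}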
\section{The nonlinear case: Proof of Theorem \protect\ref{t:2}}
This part is devoted to the proof of theorem \ref{t:2}. First we give the
following energy inequality.
\begin{proposition}
Let $u$ be a solution of $\left( \ref{sys:nonlinear}\right) $ with initial
data in the energy space. Then the following inequality
\begin{equation}
E_{u}\left( t\right) \leq \left( 1+\frac{1}{\epsilon }\right) e^{\epsilon
\left( t-s\right) }\left( E_{u}\left( s\right)
+\int_{s}^{t}\int_{M}\left\vert f\left( \sigma ,x\right) \right\vert
^{2}dxd\sigma \right) \label{energy bound nonlinear}
\end{equation}
holds for every $\epsilon >0$ and for every $t\geq s\geq 0.$
\end{proposition}
For the proof of $\left( \ref{energy bound nonlinear}\right) ,$ one has only
to proceed as in the proof of $\left( \ref{energy bound linear}\right) .$
Now we give the proof of Theorem \ref{t:2}.
\begin{proof}[Proof of Theorem \protect\ref{t:2}]
Let $u$ be the solution of $\left( \ref{sys:nonlinear}\right) $ with initial
condition $\left( u_{0},u_{1}\right) $ in the energy space $\mathcal{H}.$
Let $t\geq 0$ and $\phi =u\left( t+\cdot \right) $ be the solution of
\begin{equation}
\begin{array}{l}
\left\{
\begin{array}{ll}
\partial _{s}^{2}\phi -\Delta \phi +a\left( x\right) g\left( \partial
_{s}\phi \right) =f\left( t+s,x\right) &
\mathbb{R}
_{+}\times \Omega \\
\phi =0 &
\mathbb{R}
_{+}\times \partial \Omega \\
\left( \phi \left( 0\right) ,\partial _{s}\phi \left( 0\right) \right)
=\left( u\left( t\right) ,\partial _{t}u\left( t\right) \right) &
\end{array}
\right.
\end{array}
\label{sys:nonautonomous translate}
\end{equation}
We argue as in \cite{MID}. Define $z=\phi -v$, where $v$ is the solution of $
\left( \ref{sys:linear}\right) $ with initial data $\left( u\left( t\right)
,\partial _{t}u\left( t\right) \right) $ and $f=f\left( t+\cdot ,\cdot
\right) .$\ Then $z$ satisfies the system
\begin{equation*}
\left\{
\begin{array}{ll}
\partial _{t}^{2}z-\Delta z+a\left( x\right) g\left( \partial _{t}\phi
\right) -a(x)\partial _{t}v=0 &
\mathbb{R}
_{+}\times \Omega \\
z=0 &
\mathbb{R}
_{+}\times \partial \Omega \\
\left( z\left( 0\right) ,\partial _{t}z\left( 0\right) \right) =0 &
\end{array}
\right.
\end{equation*}
Let $T>0$ be such that $\left( \omega ,T\right) $ satisfies the assumption $
\left( \text{G}\right) .$ It is clear that $a\left( x\right) \left( g\left(
\partial _{t}\phi \right) -\partial _{t}v\right) \in L^{2}\left( \left(
0,T\right) \times \Omega \right) .$ This observation permits us to apply the
energy identity, whence
\begin{eqnarray*}
E_{z}(T) &=&\int_{0}^{T}\int_{M}a\left( x\right) \left( \partial
_{t}v-g(\partial _{t}\phi )\right) \partial _{t}z\;dxdt \\
&=&-\int_{0}^{T}\int_{M}a\left( x\right) \left( \left\vert \partial
_{t}v\right\vert ^{2}+g(\partial _{t}\phi )\partial _{t}\phi \right)
\;dxdt+\int_{0}^{T}\int_{M}g\left( \partial _{t}\phi \right) \partial
_{t}v+\partial _{t}v\partial _{t}\phi d\mathfrak{m}_{a}
\end{eqnarray*}
The monotonicity of $g$ ($g\left( s\right) s\geq 0$) and the estimate above
give the following estimate:
\begin{equation*}
\int_{0}^{T}\int_{M}a\left( x\right) \left( \left\vert \partial
_{t}v\right\vert ^{2}+g(\partial _{t}\phi )\partial _{t}\phi \right)
dxdt\leq \int_{0}^{T}\int_{M}g\left( \partial _{t}\phi \right) \partial
_{t}v+\partial _{t}v\partial _{t}\phi d\mathfrak{m}_{a}
\end{equation*}
Now the observability estimate $\left( \ref{observability linear}\right) $
gives
\begin{equation*}
E_{u}\left( t\right) =E_{v}\left( 0\right) \leq \hat{C}_{T}\left(
\int_{0}^{T}\int_{M}g\left( \partial _{t}\phi \right) \partial
_{t}v+\partial _{t}v\partial _{t}\phi d\mathfrak{m}_{a}+\int_{0}^{T}\int_{M}
\left\vert f\left( t+s,x\right) \right\vert ^{2}dxds\right)
\end{equation*}
for some $\hat{C}_{T}\geq 1.$ From the estimate above we infer that
\begin{equation*}
E_{u}\left( t\right) \leq \hat{C}_{T}\left( \int_{t}^{t+T}\int_{M}g\left(
\partial _{t}u\right) \partial _{t}\tilde{v}+\partial _{t}\tilde{v}\partial
_{t}ud\mathfrak{m}_{a}+\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right)
\right\vert ^{2}dxds\right)
\end{equation*}
where $\tilde{v}\left( s\right) =v\left( s-t\right) ,$ $s\geq t\geq 0.$
\begin{lemma}
\label{lem:concave estimates global}Setting
\begin{equation*}
M_{s,t}=[s,t]\times \Omega ,\text{ }t\geq s\geq 0,\text{ and }M_{0,t}=M_{t}.
\end{equation*}
Let $t\geq 0.$ For $i=0,1$ let
\begin{equation*}
M_{t}^{0}=\left\{ \left( s,x\right) \in \left[ t,t+T\right] \times \Omega :
\text{ }\left\vert \partial _{s}u\left( s,x\right) \right\vert <\eta
_{0}\right\} ,\quad M_{t}^{1}=M_{t,t+T}\setminus M_{t}^{0}
\end{equation*}
and define
\begin{equation*}
\Theta \left( M_{t,t+T}\right) =\int_{M_{t,t+T}}\left\vert \partial _{s}u\left(
s\right) \partial _{s}v\left( s-t\right) \right\vert d\mathfrak{m}_{a},\quad
\Psi \left( M_{t}^{i}\right) =\int_{M_{t}^{i}}\left\vert g\left( \partial
_{s}u\left( s\right) \right) \partial _{s}v\left( s-t\right) \right\vert d
\mathfrak{m}_{a},
\end{equation*}
where $u$ and $v$ denote respectively the solutions of $\left( \ref
{sys:nonlinear}\right) $ and $\left( \ref{sys:linear}\right) $ with
initial data $\left( u_{0},u_{1}\right) $ and $\left( v_{0},v_{1}\right)
=\left( u\left( t\right) ,\partial _{t}u\left( t\right) \right) .$
\begin{enumerate}
\item The following inequality holds for every $\epsilon >0$:
\begin{eqnarray}
\Psi \left( M_{t}^{0}\right) +\Theta \left( M_{t}^{0}\right) &\leq
&\epsilon E_{u}\left( t\right) +\frac{C}{\epsilon }\mathfrak{m}_{a}\left(
M_{T}\right) h_{0}\left( {\frac{1}{\mathfrak{m}_{a}\left( M_{T}\right) }}
\int_{M_{t,t+T}}g\left( \partial _{s}u\right) \partial _{s}u\;d\mathfrak{m}
_{a}\right) \label{Omega zero estimate global} \\
&&+\epsilon \int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dxds
\end{eqnarray}
with $C>0.$
\item Estimate on the damping near infinity. The following inequality
\begin{eqnarray}
\Psi \left( M_{t}^{1}\right) +\Theta \left( M_{t}^{1}\right) &\leq
&\epsilon E_{u}\left( t\right) +C\epsilon ^{-1}\left(
\int_{M_{t,t+T}}g\left( \partial _{s}u\right) \partial _{s}u\;d\mathfrak{m}
_{a}\right) \label{super nonlinear h1 global} \\
&&+\epsilon \int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dxds
\end{eqnarray}
holds for every $\epsilon >0$ with $C>0$.
\end{enumerate}
\end{lemma}
For the proof of the lemma above, one has only to proceed as in \cite[Lemma
3.1, cases 1 and 2]{MID} and to use the energy inequality $\left( \ref{energy
bound linear}\right) $.
Now using $\left( \ref{Omega zero estimate global}\right) $ and $\left( \ref
{super nonlinear h1 global}\right) ,$ we deduce that
\begin{equation*}
E_{u}\left( t\right) \leq \tilde{C}_{T}\left( \epsilon E_{u}\left( t\right)
+C_{T,\epsilon }h\left( \int_{M_{t,t+T}}g\left( \partial _{s}u\right)
\partial _{s}ud\mathfrak{m}_{a}+\int_{M_{t,t+T}}\left\vert f\left(
s,x\right) \right\vert ^{2}dxds\right) \right)
\end{equation*}
for every $\epsilon >0,$ where the function $h=I+\mathfrak{m}_{a}\left(
M_{T}\right) h_{0}\circ {\frac{I}{\mathfrak{m}_{a}\left( M_{T}\right) }}$
and $\tilde{C}_{T}\geq 1$. Setting $\epsilon $ small enough, e.g. $\epsilon =
\frac{1}{2\tilde{C}_{T}},$ we obtain
\begin{equation}
E_{u}\left( t\right) \leq C_{T}h\left( \int_{t}^{t+T}\int_{M}g\left(
\partial _{s}u\right) \partial _{s}ud\mathfrak{m}_{a}+\int_{t}^{t+T}\int_{M}
\left\vert f\left( s,x\right) \right\vert ^{2}dxds\right) \label{proof 1}
\end{equation}
for some $C_{T}\geq 1.$ This gives
\begin{equation}
E_{u}\left( t\right) +\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right)
\right\vert ^{2}dxds\leq 2C_{T}h\left( \int_{t}^{t+T}\int_{M}g\left(
\partial _{s}u\right) \partial _{s}ud\mathfrak{m}_{a}+\int_{t}^{t+T}\int_{M}
\left\vert f\left( s,x\right) \right\vert ^{2}dxds\right)
\label{Proof: observability nonlinear}
\end{equation}
On the other hand, the energy identity gives
\begin{equation}
\int_{t}^{t+T}\int_{M}a\left( x\right) g\left( \partial _{t}u\right)
\partial _{t}udxd\sigma \leq E_{u}\left( t\right) -E_{u}\left( t+T\right)
+\int_{t}^{t+T}\int_{M}\left\vert f\left( \sigma ,x\right) \partial
_{t}u\right\vert dxd\sigma \label{proof 2}
\end{equation}
Let $\psi $ be defined by
\begin{equation*}
\psi \left( s\right) =\left\{
\begin{array}{lc}
\frac{1}{2T}h^{-1}\left( \frac{s^{2}}{8C_{T}e^{T}}\right) & s\in
\mathbb{R}
_{+} \\
+\infty & s\in
\mathbb{R}
_{-}^{\ast }
\end{array}
\right.
\end{equation*}
It is clear that $\psi $ is a convex and proper function. Hence, we can apply
Young's inequality \cite{rockfellar}:
\begin{eqnarray*}
\int_{t}^{t+T}\int_{M}\left\vert f\left( \sigma ,x\right) \partial
_{t}u\right\vert dxd\sigma &\leq &\int_{t}^{t+T}\left\Vert f\left( \sigma
,.\right) \right\Vert _{L^{2}}\left\Vert \partial _{t}u\left( \sigma
,.\right) \right\Vert _{L^{2}}d\sigma \\
&\leq &\int_{t}^{t+T}\psi ^{\ast }\left( \left\Vert f\left( \sigma ,.\right)
\right\Vert _{L^{2}}\right) +\psi \left( \left\Vert \partial _{t}u\left(
\sigma ,.\right) \right\Vert _{L^{2}}\right) d\sigma
\end{eqnarray*}
where $\psi ^{\ast }$ is the convex conjugate of the function $\psi ,$
defined by
\begin{equation*}
\psi ^{\ast }\left( s\right) =\underset{y\in
\mathbb{R}
}{\sup }\left[ sy-\psi \left( y\right) \right]
\end{equation*}
Using the energy inequality $\left( \ref{energy bound nonlinear}\right) $
and the observability estimate $\left( \ref{Proof: observability nonlinear}
\right) ,$ we infer that
\begin{equation*}
\int_{t}^{t+T}\psi \left( \left\Vert \partial _{t}u\left( \sigma ,.\right)
\right\Vert _{L^{2}}\right) d\sigma \leq \frac{1}{2}\left(
\int_{t}^{t+T}\int_{M}g\left( \partial _{s}u\right) \partial _{s}ud\mathfrak{
m}_{a}+\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dxds\right)
\end{equation*}
Then $\left( \ref{proof 2}\right) $ gives
\begin{eqnarray}
\int_{t}^{t+T}\int_{M}a\left( x\right) g\left( \partial _{t}u\right)
\partial _{t}udxd\sigma &\leq &2\left( E_{u}\left( t\right) -E_{u}\left(
t+T\right) +\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dxds\right. \\
&&\left. +\int_{t}^{t+T}\psi ^{\ast }\left( \left\Vert f\left( \sigma
,.\right) \right\Vert _{L^{2}}\right) d\sigma \right)
\end{eqnarray}
The inequality above, combined with the observability estimate $\left( \ref
{proof 1}\right) $ and the fact that $h=I+\mathfrak{m}_{a}\left( M_{T}\right)
h_{0}\circ {\frac{I}{\mathfrak{m}_{a}\left( M_{T}\right) }}$ is increasing,
gives
\begin{equation*}
E_{u}\left( t\right) \leq C_{T}h\left( 4\left( E_{u}\left( t\right)
-E_{u}\left( t+T\right) +\int_{t}^{t+T}\int_{M}\left\vert f\left( s,x\right)
\right\vert ^{2}dxd\sigma +\int_{t}^{t+T}\psi ^{\ast }\left( \left\Vert
f\left( \sigma ,.\right) \right\Vert _{L^{2}}\right) d\sigma \right) \right)
\end{equation*}
Setting
\begin{equation*}
\Gamma \left( s\right) =2\int_{M}\left\vert f\left( s,x\right) \right\vert
^{2}dx+\psi ^{\ast }\left( \left\Vert f\left( s,.\right) \right\Vert
_{L^{2}}\right)
\end{equation*}
we therefore have
\begin{equation*}
E_{u}\left( t\right) +\int_{t}^{t+T}\Gamma \left( s\right) ds\leq Kh\left(
4\left( E_{u}\left( t\right) -E_{u}\left( t+T\right) +\int_{t}^{t+T}\Gamma
\left( s\right) ds\right) \right)
\end{equation*}
with $K\geq C_{T}.$ Setting $\theta \left( t\right) =\int_{t}^{t+T}
\Gamma \left( s\right) ds,$ we thus obtain
\begin{equation*}
E_{u}\left( t+T\right) +\frac{1}{4}h^{-1}\left( \frac{1}{K}\left(
E_{u}\left( t\right) +\theta \left( t\right) \right) \right) \leq
E_{u}\left( t\right) +\theta \left( t\right)
\end{equation*}
for every $t\geq 0.$ Taking $t=mT,$ $m\in
\mathbb{N}
,$ we get
\begin{equation*}
E_{u}\left( \left( m+1\right) T\right) +\frac{1}{4}h^{-1}\left( \frac{1}{K}
\left( E_{u}\left( mT\right) +\theta \left( mT\right) \right) \right) \leq
E_{u}\left( mT\right) +\theta \left( mT\right)
\end{equation*}
Setting $W\left( t\right) =E_{u}\left( t\right) ,$ $\ell \left( s\right) =
\frac{1}{4}h^{-1}\circ \frac{I}{K}$ and $\Gamma \left( t\right)
=2\int_{M}\left\vert f\left( t,x\right) \right\vert ^{2}dx+\psi ^{\ast
}\left( \left\Vert f\left( t,.\right) \right\Vert _{L^{2}}\right) .$ It is
clear that the functions $\ell $ and $I-\ell $ are increasing on the
positive axis and $\ell \left( 0\right) =0.$ The function $\Gamma \in
L_{loc}^{1}\left(
\mathbb{R}
_{+}\right) $ and is nonnegative on $
\mathbb{R}
_{+}$. According to Lemma \ref{lemma las tat},
\begin{equation*}
E_{u}\left( t\right) \leq 4e^{T}\left( S\left( t-T\right)
+\int_{t-T}^{t}\Gamma \left( s\right) ds\right) ,\qquad \forall t\geq T
\end{equation*}
where $S\left( t\right) $ is the solution of the following nonlinear
differential equatio
\begin{equation}
\frac{dS}{dt}+\frac{1}{T}\ell \left( S\right) =\Gamma \left( t\right)
;\qquad S(0)=W(0).
\end{equation}
\end{proof}
Active optical fibres are becoming more and more important in the
field of detection and measurement of ionising radiation and
particles. Light is generated inside the fibre either through
interaction with the incident radiation (scintillating fibres) or
through absorption of primary light (wavelength-shifting
fibres). Plastic fibres with large core diameters, i.e.\ where the
wavelength of the light being transmitted is much smaller than the
fibre diameter, are commercially available and readily fabricated,
have good timing properties and allow a multitude of different
geometrical designs. The low costs of plastic materials make it
possible for many present day or future experiments to use such fibres
in large quantities (see~\cite{Leutz1995} for a review of
active fibres in high energy physics). For the construction of the
highly segmented tracking detector of the ATLAS experiment approved
for the LHC collider at CERN more than 600,000 wavelength-shifting
fibres have been used~\cite{ATLAS1994}. Our work is also motivated by
the fact that spiral fibres embedded in scintillators are being used
for calorimetric measurements in long base line neutrino oscillation
experiments, most recently in the MINOS experiment~\cite{MINOS1998}.
The treatment of small diameter optical fibres involves
electromagnetic theory applied to dielectric waveguides, which was
first achieved by Snitzer~\cite{Snitzer1961} and Kapany {\it et
al}~\cite{Kapany1963}. Although this approach provides insight into
the phenomenon of total internal reflection and eventually leads to
results for the field distributions and electromagnetic radiation for
curved fibres, it is advantageous to use ray optics for applications
to large diameter fibres where the waveguide analysis is an
unnecessary complication. In ray optics a light ray may be categorised
by its path along the fibre. The path of a meridional ray is confined
to a single plane, all other modes of propagation are known as skew
rays. The optics of meridional rays in fibres was developed in the
1950s~\cite{Kapany1957} and can be found in numerous textbooks, e.g.\
in~\cite{Kapany1967,Allan1973,Ghatak1998}. Since then, the scientific
and technological progress in the field of fibre optics has been
enormous. Despite the extensive coverage of theory and experiment in
this field, only fragmentary studies on the trapping efficiencies and
refraction of skew rays in curved multimode fibres could be
found~\cite{Winkler1979,Badar1989,Badar1991A,Badar1991B}. We have
therefore performed a three-dimensional simulation of photons
propagating in simple circularly curved fibres in order to quantify
the losses and to establish the dependence of these losses on the
angle of the bend. We have also briefly investigated the time
dispersion in fibres. For our calculations a common type of fibre in
particle physics is assumed, specified by a polystyrene core of
refractive index $n_{\it core}=$ 1.6 and a thin polymethylmethacrylate
(PMMA) cladding of refractive index $n_{\it clad}=$ 1.49, where the
indices are given at a wavelength of 590\,nm. Another common cladding
material is fluorinated polymethacrylate with $n_{\it clad}=
1.42$. Typical diameters are in the range of 0.5 -- 1.5\,mm.
This paper is organised as follows: section~2 describes the analytical
expressions of trapping efficiencies for skew and meridional rays in
active, i.e.\ light generating, fibres. The analytical description of
skew rays is too complex to be solved for sharply curved fibres and
the necessity of a simulation becomes evident. In section~3 a
simulation code is outlined that tracks light rays in cylindrical
fibres governed by a set of geometrical rules derived from the laws of
optics. section~4 presents the results of the simulations. These
include distributions of the characteristic properties which describe
light rays in straight and curved fibres, where special emphasis is
placed on light losses due to the sharp bending of fibres. Light
dispersion is briefly reviewed in the light of the results of the
simulation. The last section provides a short summary.
\section{Trapping of photons}
When using scintillating or wavelength-shifting fibres in charged
particle detectors the trapped light as a fraction of the intensity of
the emitted light is very important in determining the light yield of
the system. For very low light intensities as encountered in many
particle detectors the photon representation is more appropriate to
use than a description by light rays. Whether the fibres are
scintillating or wavelength-shifting one is only ever concerned with a
few 10's or 100's of photons propagating in the fibre and single
photon counting is often necessary.
The geometrical path of any rays in optical fibres, including skew
rays, was first analysed in a series of papers by
Potter~\cite{Potter1961} and Kapany~\cite{Kapany1961}. The treatment
of angular dependencies in our paper is based on that. The angle
$\gamma$ is defined as the angle of the projection of the light ray in
a plane perpendicular to the axis of the fibre with respect to the
normal at the point of reflection. One may describe $\gamma$ as a
measure of the `skewness' of a particular ray, since meridional rays
have this angle equal to zero. The polar angle, $\theta^\prime$, is
defined as the angle of the light ray in a plane containing the fibre
axis and the point of reflection with respect to the normal at the
point of reflection. It can be shown that the angle of incidence at
the walls of the cylinder, $\alpha$, is given by $\cos{\alpha}=
\cos{\theta^\prime}\, \cos{\gamma}$. The values of the two orthogonal
angles $\theta^\prime$ and $\gamma$ will be preserved independently
for a particular photon at every reflection along its path.
In general for any ray to be internally reflected within the cylinder
of the fibre, the inequality $\sin{\alpha} \geq
\sin{\theta^\prime_{\it crit}} = n_{\it clad}/n_{\it core}$ must be
fulfilled, where the critical angle, $\theta^\prime_{\it crit}$, is
given by the index of refraction of the fibre core, $n_{\it core}$,
and that of the cladding, $n_{\it clad}$. In the meridional
approximation the above equations lead to the well known critical
angle condition for the polar angle, $\theta^\prime \ge
\theta^\prime_{\it crit}$, which describes an acceptance cone of
semi-angle, $\theta\ [= \pi/2 - \theta^\prime]$, with respect to the
fibre axis (see for example~\cite{Potter1961} and references
therein). Thus, in this approximation all light within the forward
cone will be considered as trapped and undergo multiple total internal
reflections to emerge at the end of the fibre.
For the further discussion in this paper it is convenient to use the
axial angle, $\theta$, as given by the supplement of $\theta^\prime$,
and the skew angle, $\gamma$, to characterise any light ray in terms
of its orientation, see figure~\ref{fig:description} for an
illustration.
The flux transmitted by a fibre is determined by an integration over
the angular distribution of the light emitted within the acceptance
domain, i.e.\ the phase space of possible propagation modes. Using the
expression given by Potter {\it et al}~\cite{Potter1963} and setting the
transmission function, which parameterises the light attenuation, to
unity the light flux can be written as follows:
\begin{equation}
\eqalign{
F & = F_m + F_s\\
& = 4 \rho^2 \int_{\theta= 0}^{\theta_{\it crit}}
\int_{\gamma= 0}^{\pi/2} \int_{\phi= 0}^{\pi/2}
I(\theta,\phi)\, \cos^2{\gamma}\, d\gamma\, d\Omega\ +\\
& 4 \rho^2 \int_{\theta= \theta_{\it crit}}^{\pi/2}
\int_{\gamma= \overline{\gamma}(\theta)}^{\pi/2}
\int_{\phi= 0}^{\pi/2}
I(\theta,\phi)\, \cos^2{\gamma}\, d\gamma\, d\Omega\ ,
}
\end{equation}
where $d\Omega$ is the element of solid angle,
$\overline{\gamma}(\theta)$ refers to the maximum axial angle allowed
by the critical angle condition, $\rho$ is the radius of a cylindrical
fibre and $I(\theta,\phi)$ is the angular distribution of the emitted
light in the fibre core. The two terms, $F_m$ and $F_s$, refer to
either the meridional or skew cases, respectively. The lower limit of
the integral for $F_s$ is $\overline{\gamma}=
\arccos{(\sin{\theta_{\it crit}}/\sin{\theta})}$.
The trapping efficiency for forward propagating photons,
$\epsilon^{1/2}$, may be defined as the fraction of totally internally
reflected photons. The formal expression for the trapping efficiency,
including skew rays, is derived by dividing the transmitted flux by
the total flux through the cross-section of the fibre core, $F_0$.
For isotropic emission of fluorescence light the total flux equals $4
\pi^2 \rho^2 I_0$. Then, the first term of equation~(1) gives the
trapping efficiency in the meridional approximation,
\begin{equation}
\epsilon^{1/2}_m = F_m/F_0 = \frac{1}{2} (1 -
\cos{\theta_{\it crit}}) \approx
\frac{\theta^2_{\it crit}}{4}\ ,
\label{eq:omega_m}
\end{equation}
where all photons are considered to be trapped if $\theta \le
\theta_{\it crit}$, independent of their actual skew angles.
The integration of the second term of equation~(1) gives the
contributions of all skew rays to the trapping efficiency. Integrating
by parts, one obtains
\begin{equation}
\hspace{-1cm} \epsilon^{1/2}_s = \frac{1}{2} \cos{\theta_{\it crit}} -
\frac{\cos^2{\theta_{\it crit}} \sin{\theta_{\it crit}}}{2\pi}
\int_0^1 \frac{dt}{\sqrt{(1-t)\,t}\, \left(1-t \cos^2{\theta_{\it
crit}} \right)}\ ,
\end{equation}
with $t= \cos^2{\theta}/\cos^2{\theta_{\it crit}}$. Complex
integration leads to the result:
\begin{equation}
\epsilon^{1/2}_s = \frac{1}{2} (1 - \cos{\theta_{\it crit}})
\cos{\theta_{\it crit}}\ .
\label{eq:omega_s}
\end{equation}
The total initial trapping efficiency is then:
\begin{equation}
\epsilon^{1/2} = \frac{1}{2} (1 - \cos^2{\theta_{\it crit}})
\approx \frac{\theta^2_{\it crit}}{2}\ ,
\label{eq:omega_tot}
\end{equation}
which is approximately twice the trapping efficiency in the meridional
approximation for small critical angles. The trapping efficiency of
rays is crucially dependent on the circular symmetry of the
core-cladding interface. Any ellipticity or variation in the fibre
diameter will lead to the refraction of some skew rays. Furthermore,
skew rays have a much longer optical path length, suffer from more
reflections and therefore get attenuated more quickly; see section~4
for a quantitative analysis of this effect. In conclusion, for long
fibres the effective trapping efficiency is closer to
$\epsilon^{1/2}_m$ than to $\epsilon^{1/2}$. Formula~\ref{eq:omega_m}
yields a trapping efficiency of $\epsilon^{1/2}_m=$ 3.44\% for plastic
fibres with $n_{\it core}=$ 1.6 and $n_{\it clad}=$ 1.49. For these
``standard'' parameters the efficiency in formula~\ref{eq:omega_s}
evaluates to $\epsilon^{1/2}_s=$ 3.20\% and in
formula~\ref{eq:omega_tot} to $\epsilon^{1/2}=$ 6.64\%.
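These three numbers can be reproduced in a few lines. The following snippet
(our own illustrative code, with variable names that are not taken from the
text) evaluates the formulas above for the standard fibre parameters:

```python
# Trapping efficiencies of a straight fibre, using the relation
# cos(theta_crit) = sin(theta'_crit) = n_clad / n_core.
n_core, n_clad = 1.60, 1.49
cos_tc = n_clad / n_core

eps_m = 0.5 * (1.0 - cos_tc)             # meridional approximation
eps_s = 0.5 * (1.0 - cos_tc) * cos_tc    # skew-ray contribution
eps_tot = 0.5 * (1.0 - cos_tc ** 2)      # total initial trapping efficiency

print(f"meridional {100 * eps_m:.2f}%, skew {100 * eps_s:.2f}%, "
      f"total {100 * eps_tot:.2f}%")
```

Note that the identity $\epsilon^{1/2}_m + \epsilon^{1/2}_s =
\epsilon^{1/2}$ holds exactly, which provides a convenient consistency
check of the three expressions.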
\section{Description of the tracking code}
The aim of the program is to track light rays through a fibre. Since
the analytic analysis of the passage of skew rays along a curved fibre
is exceedingly complex we treat the problem with a Monte Carlo
technique. This type of numerical integration using random numbers is
a standard method in the field of particle physics and is now
practical given the CPU power currently available. On its path the ray
is subject to attenuation, parameterised firstly by an effective
absorption coefficient and secondly by a reflection coefficient. At
the core-cladding interface the ray can be reflected totally or
partially internally. In the latter case a random number is compared
to the reflection probability to select reflected rays.
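As a small illustration of this technique (a self-contained sketch with
names of our own choosing, not the program described in this paper), the
forward trapping efficiency of a straight fibre can be estimated by
generating photons at random positions on the cross-section with isotropic
directions and testing the total internal reflection condition at the first
wall crossing; for the standard parameters the estimate reproduces the
analytic value of about 6.64\% derived in section~2:

```python
import math
import random

def forward_trapping_fraction(n_core, n_clad, n_photons=200_000, seed=1):
    """Estimate the fraction of isotropically emitted photons that are
    totally internally reflected while travelling forward, for a straight
    cylindrical fibre of unit radius (emission points uniform over the
    cross-section)."""
    rng = random.Random(seed)
    sin_crit = n_clad / n_core   # total reflection needs sin(alpha) >= this
    trapped = 0
    for _ in range(n_photons):
        # emission point: uniform in the unit disc (rejection sampling)
        while True:
            x, y = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
            if x * x + y * y <= 1.0:
                break
        # isotropic direction: uniform in cos(polar angle) and azimuth
        dz = rng.uniform(-1.0, 1.0)
        st = math.sqrt(1.0 - dz * dz)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        dx, dy = st * math.cos(phi), st * math.sin(phi)
        if dz <= 0.0:
            continue             # only forward-going photons are counted
        if st == 0.0:
            trapped += 1         # purely axial ray never meets the wall
            continue
        # first crossing of (x, y) + m (dx, dy) with the unit circle
        a = dx * dx + dy * dy
        b = 2.0 * (x * dx + y * dy)
        c = x * x + y * y - 1.0
        m = (-b + math.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
        wx, wy = x + m * dx, y + m * dy     # wall point = unit normal
        cos_alpha = abs(dx * wx + dy * wy)  # cosine of angle of incidence
        if 1.0 - cos_alpha * cos_alpha >= sin_crit * sin_crit:
            trapped += 1
    return trapped / n_photons

est = forward_trapping_fraction(1.60, 1.49)
print(f"Monte Carlo estimate: {100 * est:.2f}%")
```

Because the angles $\theta^\prime$ and $\gamma$ are preserved at every
reflection in a straight fibre, it is sufficient to test the reflection
condition once per photon.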
Light rays are randomly generated on the circular cross-section of a
fibre with radius $\rho$. An arbitrary ray is defined by its axial and
azimuthal or skew angle. An advantage of this method is that any
distribution of light rays can easily be generated. The axis of the
fibre is defined by a curve $z= f(s)$ where $s$ is the arc length. For
$s < 0$, it is a straight fibre along the negative $z$-axis and for $0
< s < L_F$, the fibre is curved in the $xz$-plane with a radius of
curvature $R_{\it curve}$. In particular, the curve $f(s)$ is
tangential to the $z$-axis at $s = 0$.
Light rays are represented as lines and determined by two points,
$\vec{r}$ and $\vec{r}^{\,\prime}$. The points of incidence of rays
with the core-cladding interface are determined by solving the
appropriate systems of algebraic equations. In the case of a straight
fibre the geometrical representation of a straight cylinder is used
resulting in the quadratic equation
\begin{equation}
(x + (x^\prime - x)\times m)^2 + (y +
(y^\prime - y)\times m)^2 - \rho^2 = 0\ .
\end{equation}
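In code this amounts to solving a quadratic in $m$; the sketch below
(illustrative only, with assumed names) returns the forward intersection
with an infinite cylinder of radius $\rho$ whose axis is the $z$-axis:

```python
import math

def cylinder_intersection(r, r_prime, rho):
    """Point of incidence of the ray through r towards r_prime with an
    infinite cylinder of radius rho around the z-axis (forward solution)."""
    x, y, z = r
    dx, dy, dz = (r_prime[0] - x, r_prime[1] - y, r_prime[2] - z)
    a = dx * dx + dy * dy
    b = 2.0 * (x * dx + y * dy)
    c = x * x + y * y - rho * rho
    if a == 0.0:
        return None            # ray parallel to the fibre axis: no crossing
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None            # ray misses the cylinder entirely
    m = (-b + math.sqrt(disc)) / (2.0 * a)  # positive root: forward crossing
    return (x + m * dx, y + m * dy, z + m * dz)
```

For a starting point inside the core the two roots have opposite signs, so
the positive root singles out the forward wall crossing.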
The positive solution for the parameter $m$ defines the point of
incidence, $\vec{r}_R$, on the cylinder wall. In the case of a fibre
curved in a circular path, the cylinder equation is generalized by the
torus equation
\begin{equation}
\hspace{-1cm} \eqalign{ ( R_{\it curve} - ( (x + (x^\prime -
x)\times m + R_{\it curve})^2\\
+ (z + (z^\prime - z)\times m)^2 )^{1/2} )^2\\
+ (y + (y^\prime - y)\times m)^2 - \rho^2 = 0\ .}
\end{equation}
The coefficients of this fourth degree polynomial are real and depend
only on $R_{\it curve}$ and the vector components of $\vec{r}$ and
$\vec{r}^{\,\prime}$ up to the fourth power. In most cases there are
two real roots, one for the core-cladding intersection in the forward
direction and one at $\vec{r}$ if the initial point already lies on
the cylinder wall. The roots are found using Laguerre's
method~\cite{Recipes1992}. It requires complex arithmetic, even while
converging to real roots, and an estimate for the root to be
found. The routine implements a stopping criterion in case of
non-convergence because of round-off errors. The initial estimate is
given by the intersection point of the light ray and a straight
cylinder that has been rotated and translated to the previous
reflection point. A driver routine is used to apply Laguerre's method
to all four roots and to perform the deflation of the remaining
polynomial. Finally the roots are sorted by their real part. The
smallest positive, real solution for $m$ is then used to determine the
reflection point, $\vec{r}_R$.
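The same search can be sketched with a general polynomial root finder standing in for Laguerre's method (a Python illustration, not the Fortran routine described above). The quartic is obtained by squaring the torus equation, which can introduce sign-branch roots on the far side of the torus, so in general candidates should be checked against the unsquared equation:

```python
import numpy as np

def torus_hit(r, rp, R_curve, rho):
    """Smallest positive parameter m at which the ray r + m*(rp - r)
    meets the torus (R - sqrt((x+R)^2 + z^2))^2 + y^2 = rho^2."""
    r, rp = np.asarray(r, float), np.asarray(rp, float)
    d = rp - r
    # Quadratic-in-m polynomials (highest degree first) for each term
    px = np.array([d[0]**2, 2.0*d[0]*(r[0] + R_curve), (r[0] + R_curve)**2])
    py = np.array([d[1]**2, 2.0*d[1]*r[1], r[1]**2])
    pz = np.array([d[2]**2, 2.0*d[2]*r[2], r[2]**2])
    s = px + pz                       # (x(m) + R)^2 + z(m)^2
    u = s + py + np.array([0.0, 0.0, R_curve**2 - rho**2])
    # Squared torus equation: u(m)^2 - 4 R^2 s(m) = 0, a real quartic
    quartic = np.polysub(np.polymul(u, u), 4.0 * R_curve**2 * s)
    roots = np.roots(quartic)
    real = roots[np.abs(roots.imag) < 1e-9].real
    pos = sorted(m for m in real if m > 1e-9)
    return pos[0] if pos else None
```

For a ray launched from a point on the torus centre-line, the nearest forward wall crossing is recovered directly as the smallest positive real root.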
After the point of incidence has been found, the reflection length and
absorption probability can be calculated. The angle of incidence,
$\alpha$, is given by $\cos{\alpha} = \vec{r}_{in} \cdot \vec{n}$,
where $\vec{n}$ denotes the unit vector normal to the core-cladding
interface at the point of reflection and $\vec{r}_{in}=
(\vec{r}-\vec{r}_R)/|\vec{r}-\vec{r}_R|$ is the unit incident
propagation vector. Now the reflection probability corresponding to
this angle $\alpha$ is determined. If the ray is partially or
totally internally reflected, the total number of reflections is
increased and the unit propagation vector after reflection,
$\vec{r}_{\it out}$, is then calculated by mirroring $\vec{r}_{in}$
with respect to the normal vector: $\vec{r}_{\it out} = \vec{r}_{in} -
2 \vec{n} \cos{\alpha}$. The program returns in a loop to the
calculation of the next reflection point. When the ray is absorbed on
its path or not reflected at the reflection point the next ray is
generated at the fibre entrance end. A scheme of the main steps of the
program can be found in figure~\ref{fig:scheme}. At any point of the
ray's path, its axial, azimuthal and skew angles are obtained from
scalar products of the ray vector with the coordinate axes, using
projections onto planes perpendicular and parallel to the fibre axis,
respectively. The transmitted flux of a specific fibre, taking
all losses caused by bending, absorption and reflections into account,
is calculated from the number of lost rays compared to the number of
rays reaching the fibre exit end.
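The reflection step itself is a small amount of vector algebra. A sketch using the forward unit propagation direction and the unit surface normal (the sign conventions here are ours, not necessarily those of the Fortran code):

```python
import numpy as np

def reflect_step(r_prev, r_R, n):
    """Mirror the unit propagation vector at the reflection point r_R.

    d      forward unit propagation vector before the reflection
    cos_a  cosine of the angle between the ray and the unit normal n
    d_out  mirrored unit vector, d - 2 (d . n) n
    """
    r_prev, r_R, n = (np.asarray(v, float) for v in (r_prev, r_R, n))
    d = (r_R - r_prev) / np.linalg.norm(r_R - r_prev)
    cos_a = d @ n
    d_out = d - 2.0 * cos_a * n   # mirror with respect to the normal
    return d, cos_a, d_out
```

The mirror operation preserves the modulus of the vector, so the outgoing direction is again a unit vector.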
This method gives rise to an efficient simulation technique for fibres
with constant curvature. It is possible to extend the method for the
study of arbitrarily curved fibres by using small segments of constant
curvature. In the current version of the program light rays are
tracked in the fibre core only and no tracking takes place in the
surrounding cladding, corresponding to infinite cladding thickness. In
long fibres cladding modes will eventually be lost, but for lengths $<
1$\,m they can contribute to the transmission function. The
simulation code is written in Fortran and it takes about 1.5\,ms to
track a skew ray through a curved fibre.
\section{Results of the tracking code}
Figure~\ref{fig:description} shows the passage of a skew ray along a
straight fibre. The light ray has been generated off-axis with an
axial angle of $\theta= 0.42$\,rad and would not be trapped if it were
meridional. In general, the projection of a meridional ray on a plane
perpendicular to the fibre axis is a straight line, whereas the
projection of a skew ray changes its orientation with every
reflection. In the special case of a cylindrical fibre all meridional
rays pass through the fibre axis. The figure illustrates the
preservation of the skew angle, $\gamma$, during the propagation.
\subsection{Trapping efficiency and acceptance domain}
Figure~\ref{fig:phasespace}(a) shows the total acceptance domain and
its splitting into the meridional and skew regions in the meridional
ray approximation. The phase space density, i.e.\ the number of
trapped rays per angular interval, is represented by proportional
boxes. The density increases with $\cos^2{\gamma}$ and $\sin{\theta}$.
The contours relate to sharply curved fibres and are explained in
section~\ref{section:curvedfibres}. Figure~\ref{fig:phasespace}(b)
shows a projection of the phase space onto the $\sin\theta$-axis. A
peak around the value $\sin{\theta_{\it crit}}$ is apparent. A skew
ray can be totally internally reflected at larger angles $\theta$ than
meridional rays, and the minimum permitted
skew angle, $\overline{\gamma}$, at a given axial angle, $\theta$, is
determined by the critical angle condition: $\cos{\overline{\gamma}}=
\sin{\theta_{\it crit}} / \sin{\theta}$.
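This acceptance condition is easy to encode. A Python sketch, where $\theta_{\it crit}$ is derived from assumed indices $n_{\it core}=1.60$ and $n_{\it clad}=1.49$ (typical polystyrene-core values, not quoted in this section):

```python
import numpy as np

theta_crit = np.arccos(1.49 / 1.60)   # assumed core/cladding indices

def is_trapped(theta, gamma, theta_crit=theta_crit):
    """Trapping test for a ray with axial angle theta and skew angle
    gamma: always trapped below theta_crit, otherwise only if the skew
    angle exceeds the minimum permitted value, i.e.
    cos(gamma) <= sin(theta_crit) / sin(theta)."""
    if theta <= theta_crit:
        return True
    return np.cos(gamma) <= np.sin(theta_crit) / np.sin(theta)
```

A sufficiently skew ray is trapped at axial angles where a meridional ray would already be refracted out.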
Photons are generated randomly on the cross-section of the fibre with
an isotropic angular distribution in the forward direction. The figure
gives values for the two trapping efficiencies which can be determined
by integrating over the two angular regions. The integrals are
identical in value to the expressions in formulae~\ref{eq:omega_m}
and~\ref{eq:omega_s}. It is obvious from the critical angle condition
that a photon emitted close to the cladding has a higher probability
to be trapped than when emitted close to the centre of the fibre. For
a given axial angle the range of possible azimuthal angles, in which
the photons are uniformly distributed, for the photon to get trapped
increases with the radial position, $\hat{\rho}$, of the light emitter
in the fibre core. It can be deduced from figure~\ref{fig:trap-r} that
the meridional approximation is a good estimate for $\epsilon$ if the
photons originate at radial positions $\hat{\rho} < 0.8$. The trapping
of skew rays only becomes significant for photons originating at
radial positions $\hat{\rho} \ge 0.9$. This fact has been discussed
before, e.g.\ in~\cite{Johnson1994}. Figure~\ref{fig:trap-theta} shows
the trapping efficiency as a function of the axial angle. All
photons with axial angles below $\theta_{\it crit}$ are trapped in the
fibre, whereas photons with larger angles are trapped only if their
skew angle exceeds the minimum permitted skew angle. It can be seen
that the trapping efficiency falls off very steeply with the axial
angle.
\subsection{Propagation of photons}
The analysis of trapped photons is based on the total photon path
length per axial fibre length, $P$, the number of internal reflections
per axial fibre length, $\eta$, and the optical path length between
successive internal reflections, $l_R$, where we follow the
nomenclature of Potter and Kapany. It should be noted that these three
variables are not independent as $P= \eta \times l_R$.
Figure~\ref{fig:pathlength} shows the distribution of the normalised
path length, $P(\theta)$, for photons reaching the exit end of
straight and curved fibres of 0.6\,mm radius. The figure also gives
results for curved fibres of two different radii of curvature. The
distribution of path lengths shorter than the path length for
meridional photons propagating at the critical angle is almost
flat. It can easily be shown that the normalised path length along a
straight fibre is given by the secant of the axial angle and is
independent of other fibre dimensions: $P(\theta)= \sec\theta$. In
case of the curved fibre the normalised path length of the trapped
photons is less than the secant of the axial angle and photons on near
meridional paths are refracted out of the fibre most.
The distribution of the normalised number of reflections,
$\eta(\theta)$, for photons reaching the exit end of straight and
curved fibres is shown in figure~\ref{fig:reflections}. Again, the
figure gives results for curved fibres of two different radii of
curvature. The number of reflections a photon experiences scales with
the reciprocal of the fibre radius. In the meridional approximation
the normalised number of reflections is related by simple trigonometry
to the axial angle and the fibre radius: $\eta_m(\theta) =
\tan{\theta}/2\rho$. The distribution of $\eta_m$, based on the
distribution of axial angles for the trapped photons, is represented
by the dashed line. The upper limit, $\eta(\theta_{\it crit})$, is
indicated in the plot by a vertical line. The number of reflections
made by a skew ray, $\eta_s(\theta)$, can be calculated for a given
skew angle: $\eta_s(\theta)= \eta_m(\theta) / \cos{\gamma}$. It is
clear that this number increases significantly if the skew angle
increases. From the distributions it can be seen that in curved fibres
the trapped photons experience fewer reflections on average.
Figure~\ref{fig:rlambda}(a) shows the distribution of the reflection
length, $l_R(\theta)$, for photons reaching the exit end of fibres of
radius $\rho= 0.6$\,mm. The reflection length will scale with the
fibre radius. The left figure shows $l_R(\theta)$ for four different
over-all fibre lengths, $L_F=$ 0.5, 1, 2 and 3\,m, and the attenuation
characteristics of the fibre are made apparent by the non-vanishing
attenuation parameters used. Short reflection lengths correspond to
long optical path lengths and large numbers of reflections. Because of
the many reflections and the long total paths traversed, these photons
will be attenuated faster than photons with larger reflection
lengths. This reveals the high attenuation of rays with large skew
angles. In the meridional approximation the reflection length is
related to the axial angle by: $l_R= 2\rho/\sin{\theta}$. In the
figure the minimum reflection length allowed by the critical angle
condition is shown by a vertical line at $l_R(\theta_{\it crit})=
3.29$\,mm. On average photons propagate with smaller reflection
lengths along the curved fibre. Figure~\ref{fig:rlambda}(b) shows the
distribution of $l_R(\theta)$ in curved fibres of two different radii
of curvature, $R_{\it curve}=$ 2 and 8\,cm. The fibre radius is
identical to the one used in figure (a) and the over-all fibre length
is 0.5\,m. It can be seen that the sharp peak in the photon
distribution flattens with decreasing radius of curvature, so that the
region of highest attenuation is close to the reflection length for
photons propagating at the critical angle.
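In the meridional approximation the three propagation quantities follow from simple trigonometry and satisfy the identity $P = \eta\, l_R$ exactly; with the chord geometry giving $l_R = 2\rho/\sin\theta$, the quoted limit $l_R(\theta_{\it crit}) = 3.29$\,mm for $\rho = 0.6$\,mm is reproduced (refractive indices $n_{\it core}=1.60$, $n_{\it clad}=1.49$ assumed here):

```python
import numpy as np

rho = 0.6e-3                           # fibre radius [m]
theta_crit = np.arccos(1.49 / 1.60)    # assumed indices

def meridional(theta, rho=rho):
    """Normalised path length, reflections per axial length and
    reflection length for a meridional ray at axial angle theta."""
    P   = 1.0 / np.cos(theta)           # P(theta) = sec(theta)
    eta = np.tan(theta) / (2.0 * rho)   # reflections per axial length
    l_R = 2.0 * rho / np.sin(theta)     # optical path between reflections
    return P, eta, l_R
```

The identity $P = \eta\, l_R$ holds for every axial angle, which is a convenient consistency check on any tracking code.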
In contrast to the analysis of straight fibres an approximation of the
sharply curved fibre by meridional rays is not a very good one, since
only a very small fraction of the light rays have paths lying in the
bending plane. It is clear that when a fibre is curved the path
length, the number of reflections and the reflection length of a
particular ray in the fibre are affected, which is clearly seen in
figures~\ref{fig:pathlength},~\ref{fig:reflections} and
\ref{fig:rlambda}(b). The over-all fibre length for the curved fibres
in these calculations is 0.5\,m and the fibres are curved for their
entire length. The average optical path length and the average number
of reflections in a fibre curved over a circular arc are less than
those for the same ray in a straight fibre for those photons which
remain trapped.
\subsection{Light attenuation}
Light attenuation in active fibres has many sources, among them
self-ab\-sorp\-tion, optical non-uni\-formities, reflection losses and
absorption by impurities. The two main sources of attenuation in this
type of fibres are the absorption of scintillation light and Rayleigh
scattering from small density fluctuations. The self-absorption is due
to the overlap of the emission and absorption bands of the fluorescent
dyes. The cumulative effect of these attenuation processes can be
conveniently parameterised by an effective attenuation length over
which the signal amplitude is attenuated to 1$/e$ of its original
value, a method often applied in high energy physics applications. The
attenuation of active fibres at wavelengths close to their emission band
($400-600$\,nm) is much higher than in wavelength regions of interest
for standard applications of communication fibres where mainly
infrared light is transmitted ($0.8-0.9\,\mu$m and $1.2-1.5\,\mu$m).
Restricting the analysis to these processes, the transmission through
an active fibre can be represented for any given axial angle by $T=
\exp\left[- P(\theta) L_F/\Lambda_{\it bulk}\right]\, \times
q^{\eta(\theta) L_F}$, where the exponential function describes light
losses due to bulk absorption and scattering (bulk absorption length
$\Lambda_{\it bulk}$), and the second factor describes light losses
due to imperfect reflections (reflection coefficient $q$) which can be
caused by a rough surface or variations in the refractive indices. A
comparison of some of our own measurements to determine the
attenuation length of plastic fibres with other available data
indicates that a reasonable value for the bulk absorption length is
$\Lambda_{\it bulk} \sim 3$\,m. Most published data suggest a
deviation of the reflection coefficient, which parameterises the
internal reflectivity, from unity between $5 \times 10^{-5}$ and $6.5
\times 10^{-5}$ \cite{Ambrosio1991}. A reasonable value of $q= 0.9999$
is used in the simulation to account for all losses proportional to
the number of reflections.
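A direct transcription of this attenuation model, using the quoted $\Lambda_{\it bulk} = 3$\,m and $q = 0.9999$ together with the meridional expressions for $P$ and $\eta$ (a sketch; names are illustrative):

```python
import numpy as np

Lambda_bulk = 3.0      # bulk absorption length [m]
q           = 0.9999   # reflection coefficient
rho         = 0.6e-3   # fibre radius [m]

def transmission(theta, L_F):
    """Transmission of a meridional ray at axial angle theta through a
    straight fibre of length L_F: bulk term times reflection term."""
    P   = 1.0 / np.cos(theta)             # path length per axial length
    eta = np.tan(theta) / (2.0 * rho)     # reflections per axial length
    return np.exp(-P * L_F / Lambda_bulk) * q**(eta * L_F)
```

On-axis rays suffer bulk absorption only; steeper rays lose additionally through their many imperfect reflections, so the transmission falls monotonically with the axial angle.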
Internal reflections that are less than total give rise to so-called
leaky or non-guided modes, where part of the electromagnetic energy is
radiated away. Rays in these modes populate a region defined by axial
angles above the critical angle and skew angles slightly larger than
the ones for totally internally reflected photons. These modes are
taken into account by using the Fresnel equation for the reflection
coefficient, $\langle R \rangle$, averaged over the parallel and
orthogonal plane of polarisation
\begin{equation}
\langle R \rangle = \frac{1}{2} \left( R_{||} + R_\perp \right) =
\frac{1}{2} \left( \frac{\tan^2(\alpha - \beta)}
{\tan^2(\alpha + \beta)} + \frac{\sin^2(\alpha - \beta)}
{\sin^2(\alpha + \beta)} \right)\ ,
\end{equation}
where $\alpha$ is the angle of incidence and $\beta$ is the refraction
angle. However, it is obvious that non-guided modes are lost quickly
in a small fibre. This is best seen in the fraction of non-guided to
guided modes, $f$, which decreases from $f = 11\%$ at the first
reflection to $f = 2.5\%$ at the second and to $f < 1\%$ at
subsequent reflections. Since the average reflection length of
non-guided modes is $l_R \approx 1.5$\,mm those modes do not
contribute to the flux transmitted by fibres longer than a few
centimeters. The absorption and emission processes in fibres are
spread out over a wide band of wavelengths and the attenuation is
known to be wavelength dependent. For simplicity only monochromatic
light is assumed in the simulation and highly wavelength-dependent
effects like Rayleigh scattering are not included explicitly.
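The averaged Fresnel coefficient can be sketched as follows, with the refraction angle obtained from Snell's law and total internal reflection returned as $\langle R\rangle = 1$ (indices are again the assumed values, and the $0/0$ limit at normal incidence is handled explicitly):

```python
import numpy as np

def avg_reflectance(alpha, n_core=1.60, n_clad=1.49):
    """Polarisation-averaged Fresnel reflectance for an internal ray
    hitting the core-cladding interface at angle of incidence alpha
    (measured from the normal)."""
    if alpha == 0.0:
        # normal-incidence limit of the Fresnel formulae
        return ((n_core - n_clad) / (n_core + n_clad))**2
    s = (n_core / n_clad) * np.sin(alpha)
    if s >= 1.0:
        return 1.0                       # total internal reflection
    beta = np.arcsin(s)                  # refraction angle (Snell's law)
    R_par  = np.tan(alpha - beta)**2 / np.tan(alpha + beta)**2
    R_perp = np.sin(alpha - beta)**2 / np.sin(alpha + beta)**2
    return 0.5 * (R_par + R_perp)
```

Well below the critical angle the interface is almost perfectly transmitting, which is why non-guided modes die out after a few reflections.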
A question of practical importance for the estimation of the light
output of a particular fibre application is its transmission
function. In the meridional approximation, and approximating
$-\ln{q}$ by $1-q$, the attenuation length can be written
as
\begin{equation}
\Lambda_m = \cos{\theta_{\it crit}}\, \left[ 1/\Lambda_{\it bulk} +
(1-q)\sin{\theta_{\it crit}}/2\rho \right]^{-1}\ .
\end{equation}
Only for small diameter fibres ($D \sim 0.1\,$mm) are the attenuation
lengths due to imperfect reflections of the same order as the
absorption lengths. Because of the large radii of the fibres discussed,
reflection losses are not relevant for the transmission function and
the attenuation length contracts to $\Lambda_m= \Lambda_{\it bulk}
\cos{\theta_{\it crit}}$. For the simulated bulk absorption length
this evaluates to $\Lambda_m= 2.8$\,m. The transmission function
outside the meridional approximation can be found by integrating over
the normalised path length distribution, where $dN$ represents the
number of photons per path length interval $dP$, weighted by the
exponential bulk absorption factor:
\begin{equation}
T = \frac{1}{N} \int_{P=0}^{\infty} dN/dP\, e^{-P L_F/
\Lambda_{\it bulk}}\, dP\ .
\end{equation}
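As a cross-check of the numbers quoted above, the meridional attenuation length can be evaluated directly with $\Lambda_{\it bulk}=3$\,m, $q=0.9999$, $\rho=0.6$\,mm and the assumed indices; dropping the reflection term reproduces $\Lambda_m \approx 2.8$\,m, while keeping it gives a modest reduction:

```python
import numpy as np

Lambda_bulk, q, rho = 3.0, 0.9999, 0.6e-3
theta_crit = np.arccos(1.49 / 1.60)       # assumed indices

# Full meridional attenuation length (bulk plus reflection losses)
Lambda_m = np.cos(theta_crit) / (1.0 / Lambda_bulk
                                 + (1.0 - q) * np.sin(theta_crit)
                                   / (2.0 * rho))

# Limit in which reflection losses are dropped
Lambda_m_bulk = Lambda_bulk * np.cos(theta_crit)
```

For these parameters the reflection term changes $\Lambda_m$ only at the ten-percent level, consistent with neglecting it for large-radius fibres.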
Figure~\ref{fig:absorption} shows this transmission function versus
the ratio of fibre to absorption length, $L_F/\Lambda_m$. A simple
exponential fit, $T \propto \exp\left[-L_F/\Lambda_{\it eff}\right]$,
applied to the simulated light transmissions for a varying fibre
length results in an effective attenuation length of $\Lambda_{\it
eff}= 2.4$\,m. For $L_F/\Lambda_m \ge 0.2$ this description is
sufficiently accurate to parameterise the transmission function, at
smaller values for $L_F/\Lambda_m$ the light is attenuated faster. The
difference of order 15\% to the meridional attenuation length is
attributed to the tail of the path length distribution.
Measurements of the light attenuation in fibres prove this simple
model of a single attenuation length to be wrong. A dependence of the
attenuation length on distance is usually
observed~\cite{Davis1989}. The most important cause of this effect is
the fact that the short wavelength components of the scintillation
light are dominantly absorbed. This leads to a shift of the average
wavelength in the emission spectrum towards longer wavelengths and to
an increase in the effective attenuation length.
\subsection{Trapping efficiency and transition losses in sharply curved
fibres}
\label{section:curvedfibres}
One of the most important practical issues in implementing optical
fibres into compact particle detector systems is macro-bending
losses. In general, some design parameters of fibre applications,
especially if the over-all size of the detector system is important,
depend crucially on the minimum permissible radius of curvature. By
using waveguide analysis transition and bending losses have been
thoroughly investigated and a loss formula in terms of the Poynting
vector can be derived~\cite{Marcuse1976,Gambling1979}. Those studies
are difficult to extend to multimode fibres since a large number of
guided modes has to be considered. When applying ray optics to curved
multimode fibres the use of a two-dimensional model is
common~\cite{Badar1989,Badar1991A,Badar1991B}. In contrast, our
simulation method follows a three-dimensional approach.
Photons are lost from a fibre core both by refraction and
tunnelling. In the simulation only refracting photons were considered.
The angle of incidence of a light ray at the tensile (outer) side of
the fibre is always smaller than at the compressed side and photons
propagate either by reflections on both sides or in the extreme
meridional case by reflections on the tensile side only. If the fibre
is curved over an arc of constant radius of curvature photons can be
refracted, and will then no longer be trapped, at the very first
reflection point on the tensile side. Therefore, the trapping efficiency
for photons entering a curved section of fibre towards the tensile
side is reduced most. Figure~\ref{fig:trap-bend} quantifies the
dependence of the trapping efficiency on the azimuthal angle, $\Psi$,
between the bending plane and the photon path for a curved fibre with
a radius of curvature $R_{\it curve}=$ 2\,cm. The azimuthal angle is
orthogonal to the axial angle and $\Psi = 0\,$rad corresponds to
photons emitted towards the tensile side of the fibre.
Figure~\ref{fig:bradius} displays the explicit dependence of the
transmission function for fibres curved over circular arcs of
90$^\circ$ on the radius of curvature to fibre radius ratio for
different fibre radii, $\rho=$ 0.2, 0.6, 1.0 and 1.2\,mm. No further
light attenuation is assumed. Evidently, the number of photons which
are refracted out of a sharply curved fibre increases very rapidly
with decreasing radius of curvature. The losses are dependent only on
the curvature to fibre radius ratio, since no inherent length scale is
involved, justifying the introduction of this scaling variable. The
light loss due to bending of the fibre is about 10\% for a radius of
curvature of 65 times the fibre radius.
The use of the meridional approximation in the bending plane in place
of a three dimensional fibre is justified by the losses being
dominantly caused by meridional rays~\cite{Gloge1972,Winkler1979}.
Figure~\ref{fig:bentfibre} shows a section of a curved fibre and the
passage of a meridional ray in the bending plane with maximum axial
angle. In this model photons are guided for axial angles
\begin{equation}
\cos\theta > \cos\theta_O= \frac{R_{\it curve} + 2 \rho}{R_{\it curve} + \rho}
\cos\theta_{\it crit}\ ,
\label{eq:transmission}
\end{equation}
where the subscript $O$ refers to the outer fibre wall. A
transmission function can be estimated by assuming that all photons
with axial angles $\theta > \theta_O$ are refracted out of the fibre:
\begin{equation}
T= 1 - \frac{1}{1 + R_{\it curve}/\rho}\
\frac{\cos\theta_{\it crit}}{1 - \cos\theta_{\it crit}}\ .
\end{equation}
This transmission function is shown in figure~\ref{fig:bradius} as a
dashed line and it overestimates the light losses due to the larger axial
angles allowed for skew rays. A comparable theoretical calculation
using a two-dimensional slab model and a generalized Fresnel
transmission coefficient has been performed by Badar {\it et
al}~\cite{Badar1991A}. Their plot of the power contained in the fibre
core as a function of the radius of curvature (figure~5) is similar to
our results on the transmission function in the meridional
approximation. In~\cite{Winkler1979} a ray optics calculation for
curved multimode fibres involving skew rays is presented. That
paper does not discuss the transmission function; instead, a
plot of the power remaining in a curved fibre versus distance is shown,
which gives complementary information.
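The meridional estimate depends only on the ratio $R_{\it curve}/\rho$ and is easily evaluated (the assumed indices give $\cos\theta_{\it crit} = 1.49/1.60$); as noted, it overestimates the losses found in the full three-dimensional simulation:

```python
cos_tc = 1.49 / 1.60           # assumed n_clad / n_core

def bend_transmission(ratio):
    """Meridional estimate of the transmitted fraction for a fibre bent
    with radius of curvature R_curve = ratio * rho."""
    return 1.0 - (1.0 / (1.0 + ratio)) * cos_tc / (1.0 - cos_tc)
```

At $R_{\it curve}/\rho = 65$ the estimate predicts roughly 20\% loss, about twice the 10\% found by the simulation, and at a ratio of 20 it gives $T \approx 0.35$.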
For photons entering a curved section of fibre the first point of
reflection on the tensile side defines the transition angle,
$\Phi_{\it trans}$, measured from the plane of entry. The angular
range of transition angles associated with each ray is called the
transition region of the fibre. For photons emitted towards the
tensile side the transition angle is related to the axial angle and
since the angular phase space density of trapped photons is highest
close to the critical angle a good estimate is $\Phi_{\it trans}=
\theta_{\it crit} - \theta_O$. For a fibre radius $\rho=$ 0.6\,mm and
radii of curvature $R_{\it curve}=$ 1, 2, and 5\,cm the above formula
leads to transition angles $\Phi_{\it trans}=$ 0.19, 0.08 and
0.03\,rad, respectively. We attribute these discrete angles to beams
emitted in the bending plane. Photons emitted from the fibre axis
towards the compressed side are not lost at this side, however, they
experience at least one reflection on the tensile side if the bending
angle exceeds the limit $\Phi_{\it limit} = \arccos\left[R_{\it
curve}/(R_{\it curve} + 2\, \rho)\right] \approx \arccos\left[1 - 2\,
\rho / R_{\it curve}\right]$. A transition in the transmission
function should occur at bending angles between $\Phi_{\it limit}/2$,
where all photons emitted towards the tensile side have experienced a
reflection, and $\Phi_{\it limit}$, where this is true for all
photons. Figure~\ref{fig:bending} shows the transmission as a function
of bending angle, $\Phi$, for a standard fibre as defined before. Once
a sharply curved fibre with a ratio $R_{\it curve}/\rho > 83$ is bent
through angles $\Phi \sim \pi/8$\,rad light losses do not increase any
further. The transition region ranges from $\sim 0.44$ to $\sim
1.06$\,rad and is indicated in the figure by arrows. At much smaller
ratios $R_{\it curve}/\rho$ the model is no longer valid to describe
this behaviour.
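The transition angles and the limiting bending angle quoted above can be checked directly; with $\rho = 0.6$\,mm and the assumed indices the three quoted values 0.19, 0.08 and 0.03\,rad are recovered:

```python
import numpy as np

rho = 0.6e-3                           # fibre radius [m]
theta_crit = np.arccos(1.49 / 1.60)    # assumed indices

def phi_trans(R_curve):
    """Estimated transition angle theta_crit - theta_O for a bend of
    radius of curvature R_curve."""
    theta_O = np.arccos((R_curve + 2.0 * rho) / (R_curve + rho)
                        * np.cos(theta_crit))
    return theta_crit - theta_O

def phi_limit(R_curve):
    """Bending angle beyond which every photon has experienced at least
    one reflection on the tensile side."""
    return np.arccos(R_curve / (R_curve + 2.0 * rho))
```

The transition angle shrinks rapidly with increasing radius of curvature, while $\Phi_{\it limit}$ stays much larger.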
Experimental results on losses in curved multimode fibres along with
corresponding predictions are best known for silica fibres with core
radii $\rho \approx 50\,\mu$m. Calculations on the basis of ray
optics for a plastic fibre with $\rho = 0.49\,$mm can be found
in~\cite{Badar1991B}. Our result on the transmission function in the
meridional approximation $T= 0.35$ at $\rho/R_{\it curve}= $20 is in
good agreement with the two-dimensional calculation. The larger value
of $T= 0.65$ predicted by the simulation is explained by the small
loss of skew rays, clearly seen in figure~\ref{fig:rlambda}. It should
be noted that the difference between finite and infinite cladding and
the appearance of oscillatory losses in the transition region has not
been investigated in the simulation.
Figure~\ref{fig:phasespace} shows contours of the angular phase space
for photons which were trapped in the straight fibre section but are
refracted out of sharply curved fibres with radii of curvature $R_{\it
curve}=$ 2 and 5\,cm. The contours demonstrate that only skew rays
from a small region close to the boundary curve are getting lost. The
smaller the radius of curvature, the larger the affected phase space
region.
\subsection{Light dispersion}
The timing resolution of scintillators is often of paramount
importance, but a pulse of light, consisting of several photons
propagating along a fibre, broadens in time. In active fibres, three
effects are responsible for the time distribution of photons reaching
the fibre exit end. Firstly the decay time of the fluorescent dopants,
usually of the order of a few nanoseconds, secondly the chromatic
dispersion in a dispersive medium, and thirdly the fact that photons
on different paths have different transit times to reach the fibre
exit end, known as inter-modal dispersion.
The chromatic dispersion is due to the spectral width,
$\Delta\lambda$, of the emitter. It is the combination of material
dispersion and waveguide dispersion. If the core refractive index
depends explicitly on the wavelength, $n(\lambda)$, photons of
different wavelengths have different propagation velocities along the
same path; this is called material dispersion. The broadening of a pulse is
given by $\Delta \tau= L_F/c_{\it core} \left( \lambda^2
d^2n/d\lambda^2 \right) \Delta\lambda/\lambda$~\cite{Ghatak1998}. The
FWHM of the emission peaks of scintillating or wavelength-shifting
fibres is approximately $40-50$\,nm. The material dispersion in the
polymers used (mostly polystyrene) is of the order of
ns$/$(nm\,km) and thus negligible for multimode fibres.
The transit time in ray optics is simply given by $\tau=
P(\theta)/c_{\it core}$, where $c_{\it core}$ is the speed of light in
the fibre core. The simulation results on the transit time are shown
in figure~\ref{fig:timing}. The full widths at half maximum (FWHM) of
the pulses in the time spectrum are presented for four different fibre
lengths. The resulting dispersion has to be compared with the time
dispersion in the meridional approximation which is simply the
difference between the shortest transit time $\tau(\theta= 0)$ and the
longest transit time $\tau(\theta= \theta_{\it crit})$: $\Delta \tau=
L_F/c_{\it core}\ (\sec{\theta_{\it crit}}-1)$, where $L_F$ is the
total axial length of the fibre. The dispersion evaluates for the
different fibre lengths to 197\,ps for 0.5\,m, 393\,ps for 1\,m,
787\,ps for 2\,m and 1181\,ps for 3\,m. Those numbers are in good
agreement with the simulation, although there are tails associated to
the propagation of skew rays. With the attenuation parameters of our
simulation the fraction of photons arriving later than $\tau(\theta=
\theta_{\it crit})$ decreases from 37.9\% for a 0.5\,m fibre to 32\%
for a 3\,m fibre due to the stronger attenuation of the skew rays in
the tail. Due to inter-modal dispersion the pulse broadening is quite
significant.
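The quoted meridional dispersion values follow directly from $\Delta\tau = L_F/c_{\it core}\,(\sec\theta_{\it crit} - 1)$, where $\sec\theta_{\it crit} = n_{\it core}/n_{\it clad}$ (indices assumed as before):

```python
c_vac  = 2.99792458e8          # speed of light in vacuum [m/s]
n_core, n_clad = 1.60, 1.49    # assumed refractive indices
c_core = c_vac / n_core        # speed of light in the core

def delta_tau(L_F):
    """Meridional spread of transit times [s] for fibre length L_F [m]."""
    return L_F / c_core * (n_core / n_clad - 1.0)
```

The spread scales linearly with the fibre length, reproducing the sequence 197, 393, 787 and 1181\,ps for 0.5, 1, 2 and 3\,m.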
\section{Summary}
We have simulated the propagation of photons in straight and curved
optical fibres. The simulations have been used to evaluate the loss of
photons propagating in fibres curved in a circular path in one
plane. The results show that loss of photons due to the curvature of
the fibre is a simple function of radius of curvature to fibre radius
ratio and is $< 10\%$ if the ratio is $> 65$. The simulations also
show that for larger ratios this loss takes place in a transition
region ($\Phi \sim \pi/8$) during which a new distribution of photon
angles is established. Photons which survive the transition region
then propagate without further losses.
We have also used the simulation to investigate the dispersion of
transit times of photons propagating in straight fibres. For fibre
lengths between 0.5 and 3\,m we find that approximately two thirds of
the photons arrive within the spread of transit times which would be
expected from the use of the simple meridional ray approximation and
the refractive index of the fibre core. The remainder of the photons
arrive in a tail at later times due to their helical paths in the
fibre. The fraction of photons in the tail of the distribution
decreases only slowly with increasing fibre length and will depend on
the attenuation parameters of the fibre.
We find that when realistic bulk absorption and reflection losses are
included in the simulation for a straight fibre, the overall
transmission cannot be described by a simple exponential function of
propagation distance because of the large spread in optical path
lengths between the most meridional and most skew rays.
We anticipate that these results on the magnitude of transition losses
will be of use for the design of particle detectors incorporating
sharply curved active fibres.
\ackn This research was supported by the UK Particle Physics and Astronomy
Research Council (PPARC).
\section*{References}
\section{Introduction}
An elastic beam on a foundation is a model that can be found in a
broad range of applications: railway tracks, buried pipelines,
sandwich panels, coated solids in material, network beams, floating
structures... The usual way to model the interaction between the
beam and the foundation is to replace the latter with a set of
independent springs whose restoring force is a linear \cite[see
e.g.][]{Winkler1867,Lekkerkerker62,Naschie1974,Lee96,Kounadis2006,Koiter2009,Challamel2011,Suhir2012}
or a nonlinear \cite[see e.g.][]{Cox40,Potier15,Hunt93,
HuntBlack,Wadee97,Wadee18,Whiting17,Netto99,
Zhang2005,Jang2011,Lagrange2013} function of the local deflection of
the beam. In both cases, the nonlinear effects, from the beam's
deformation and/or from the restoring force, play a crucial role in
the buckling and the post-buckling behaviors. In particular, for a
softening nonlinear foundation, the equilibrium curves of the beam
may exhibit a maximum load (i.e. limit point) at which the structure
loses its stability. Small imperfections, arising from various
sources, usually have an appreciable effect on this maximum load.
The papers on deterministic imperfection sensitivity include those
of \cite[][]{Reissner70,Reissner71,Sheinman91,
Sheinman93,Hunt93,Whiting17,Lee96,Kounadis2006,Lagrange2012} and
extensive references for the stochastic imperfection sensitivity are
compiled in \cite[][]{Elishakoff2001}. As a general rule, the
maximum load at which the beam becomes unstable diminishes with
increasing imperfection size. Considering a finite beam on a
bi-linear/exponential foundation, \cite{Lagrange2012} has shown the
existence of a critical imperfection size ${A_{0c}}$ such that: if
$A_0<A_{0c}$, then the maximum load diminishes with the imperfection
size, from the critical buckling load predicted by the classical
linear analysis \cite[see][]{Potier15}, for $A_0=0$, to the Euler
load for $A_0=A_{0c}$. In this case, the decay rate is sensitive to
the restoring force model. For $A_0>A_{0c}$, the maximum load is the
Euler load (i.e. buckling load of a beam with no foundation).
In the present paper we aim to extend these results to two restoring
force models with more general softening behaviors. We derive an
analytical expression for $A_{0c}$ and study the evolution of the
maximum load with the imperfection size and the stiffness reduction.
\section{Formulation of the problem}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig1.eps}
\end{center}
\caption{Sketch of a beam on a nonlinear foundation. The beam has an
imperfect shape ${W_0}$ and its lateral displacement is $W$. The
compressive force is ${\bf{P}}$ and the restoring force per unit
length is ${-\bf{\overline{P}}}$.}\label{ProblemeModele}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig2.eps}
\end{center}
\caption{Dimensionless restoring force $p_k$. Red line: bi-linear
model. Blue line: hyperbolic model. The stiffness ratio is
$k\leq1$.}\label{Fig2_4}
\end{figure}
We consider the effects of a compressive load $P$ on a beam of
length $L$, with bending stiffness $EI$, lying on a foundation that
provides a restoring force per unit length $\overline{P}$ (see Fig.
\ref{ProblemeModele}). The beam and the foundation are assumed to be
well bonded at their interface and remain bonded during deformation.
Thus, interfacial slip or debonding is not considered. The
mobilization of the foundation (also named the yield point) is denoted
$\Delta$, its linear stiffness $K_0$, and its nonlinear stiffness
$K$.
In its initial configuration, the beam has an imperfect shape
$W_0=A_0\sin \left(\pi X/L\right)$, where $A_0$ is the imperfection
size and $X$ is the longitudinal coordinate.
We introduce the characteristic length $L_{c} = \left(
{{{EI}}/{K_0}} \right)^{{1}/{4}}$ and the non-dimensional quantities
\begin{eqnarray}\label{GrandeursAdimBis2}
l &=& \frac{{ L }}{{L_{c} }},\quad x= \frac{{ X }}{{L_{c} }}, \quad
w = \frac{{W}}{{\Delta}}, \quad {w_0} = \frac{{{W_0}}}{{\Delta}},
\quad a_0=\frac{{A_0}}{\Delta}, \quad\lambda = \frac{P
{L_{c}}^2}{{EI }}, \quad k=\frac{K}{K_0}, \quad
p_k=\frac{\overline{P}}{K_0\Delta},\nonumber\\
\end{eqnarray}
as the dimensionless beam length, longitudinal coordinate, lateral
deflection (measured from the initial configuration), imperfection
shape, imperfection size, compressive load, stiffness ratio and
restoring force respectively.
Two models for the restoring force $p_k$ are considered in this
article (see Fig. \ref{Fig2_4}). The first one is
\begin{eqnarray}\label{RestoringForceDim}
p_k \left( w \right) = - w - (1 - k)\left( {{\mathop{\rm sgn}}
\left( w \right) - w} \right){\rm{H}}\left( {\left| w \right| - 1}
\right),
\end{eqnarray}
where $\rm{sgn}$ denotes the sign function and $\rm{H}$ is the
Heaviside function, defined as ${\rm{H}}\left( \left| w \right| -
1\right)=0$ if $\left| w \right|<1$ and $1$ if $\left| w \right|>1$.
This bi-linear restoring force refers to a foundation whose
stiffness is instantaneously reduced from $1$ to $k\leq1$ when
$\left| w \right|>1$. The particular case $k=1$ corresponds to a linear foundation.
Reference \cite[][]{Lagrange2012} considered the particular case of
$k=0$. Here, we extend the study to $k\leq1$, which leads to more
general results.
To reflect the experimental tests on railway tracks performed by
\cite{Birmann}, also reported in \cite{Kerr2,Tvergaard14}, who
showed that the lateral friction force acting on a track is a smooth
function of the lateral displacement, we introduce a hyperbolic
profile defined as
\begin{eqnarray}\label{TanhDim}
p_k \left( w \right) = - kw - (1 - k)\tanh (w).
\end{eqnarray}
This restoring force is a regularization of the bi-linear model as
they share the same asymptotic behaviors.
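The two restoring force models can be sketched numerically; the following illustrative implementation (function names are ours) is useful for checking that both models coincide with a linear foundation for small deflections and share the same asymptote at large $|w|$:

```python
import math

def p_bilinear(w, k):
    # Bi-linear model: stiffness drops from 1 to k once |w| exceeds the
    # mobilization threshold |w| = 1.
    if abs(w) <= 1.0:
        return -w
    return -w - (1.0 - k) * (math.copysign(1.0, w) - w)

def p_hyperbolic(w, k):
    # Smooth regularization sharing the bi-linear asymptotes.
    return -k * w - (1.0 - k) * math.tanh(w)
```

For $|w|>1$ the bi-linear expression reduces to $-kw-(1-k)\,{\rm sgn}(w)$, which is exactly the large-$|w|$ limit of the hyperbolic model.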
We assume that $\lambda$ and $p_k$ are conservative forces, that
strains are small compared to unity and that the kinematics of the
beam is given by the classical Euler-Bernoulli assumption. The
imperfection is also assumed to be small so that terms with higher
powers of $w_0$ or its derivatives are neglected in the expression
of the potential energy. Under these assumptions, the potential
energy $V$ with low-order geometrically nonlinear terms is
\cite[see][]{Potier15}
\begin{eqnarray}\label{EnergiePotentielle}
V = \int\limits_0^l {\left[ {\frac{1}{2}{w^{''}} ^2
-\lambda\,\left({\frac{1}{2}{w^{'}}^2 +{{w_0}^{'}}{{w}^{'}} }\right)
- \int\limits_0^w { p_k \left( t \right)dt} } \right]{{\text{d}x}}},
\end{eqnarray}
where a prime denotes differentiation with respect to $x$. The first
term in the integral is the elastic bending energy, the second is
the work done by the load $\lambda$, and the last term is the energy
stored in the elastic foundation.
The equilibrium states are given by the critical values of $V$.
Assuming a simply supported beam, the boundary conditions are
$w\left( 0 \right)= w\left( l \right)=0$. Variation of
(\ref{EnergiePotentielle}) for an arbitrary kinematically admissible
virtual displacement $\delta w$ yields the weak formulation of the
equilibrium problem
\begin{eqnarray}\label{VariationEnergiePotentielle}
\int\limits_0^l \left[w^{''''} + \lambda\,\left( {w^{''} + w_0
^{''} } \right) - {p_k} \left( w \right) \right]\delta
w\,\text{d}x=0,
\end{eqnarray}
which is equivalent to the stationary Swift-Hohenberg equation
\begin{eqnarray}\label{EqDiff}
w^{''''} + \lambda\,\left( {w^{''} + w_0 ^{''} } \right) - {p_k}
\left( w \right)=0,
\end{eqnarray}
along with static boundary conditions $w''\left( 0 \right)=
w''\left( l \right)=0$.
This boundary value problem is nonlinear because of the restoring
force and its solutions are highly sensitive to the length $l$, as
shown in \cite[][]{Lee96}. Therefore, it is unrealistic to describe
the behavior of the system over a large range of variation for $l$.
As done in \cite{Lagrange2012}, this study is restricted to a finite
length beam where $l<\sqrt{2}\pi$. For such values of $l$ a
classical linear analysis \cite[see][]{Potier15} shows that the
first buckling mode is the most unstable one and appears for
$\lambda_c=\lambda_e+\lambda_e^{-1 }$, where $\lambda_e=\left( \pi
/l \right)^2$ is the Euler load.
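These quantities are elementary to evaluate; for the length $l=3$ used in the figures of this paper (an illustrative check):

```python
import math

l = 3.0                                # dimensionless length used in the figures
assert l < math.sqrt(2.0) * math.pi    # validity range of the analysis

lam_e = (math.pi / l) ** 2             # Euler load, ~1.0966
lam_c = lam_e + 1.0 / lam_e            # first-mode buckling load, ~2.0085
```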
\section{Solving methods}
To solve (\ref{VariationEnergiePotentielle}) we apply a Galerkin
method with a trigonometric test function $w$ of amplitude $y$: $w=y
\sin \left({\pi\,x }/{l}\right)$, assuming that the deflection has
the same shape as the first buckling mode and the initial
imperfection. For more details about the principle of the method,
the reader is referred to \cite{Lagrange2012}, where the procedure
has already been used. In that paper, this method has been shown to
be reliable in the prediction of the equilibrium paths of the
system, for $k=0$. We shall see in the present paper that it is
actually reliable for any $k\leq1$, thereby extending the results of
\cite{Lagrange2012}.
The insertion of $\delta w=\delta y \sin \left({\pi\,x }/{l}\right)$
in (\ref{VariationEnergiePotentielle}) yields
\begin{eqnarray}\label{IntegraleWhiting}
\int\limits_0^l {\sin \left( {\frac{\pi }{l}x} \right)\left[
{w^{''''} + \lambda \left( {w^{''} + w_0 ^{''} } \right) -
p_k\left( w \right)} \right]{{\text{d}x}} = 0}.
\end{eqnarray}
Splitting the restoring force into a linear term and a nonlinear term
$N\left(w\right)$ leads to
$p_k\left(w\right)=-w-\left(1-k\right)N\left(w\right)$. With this
decomposition and $w=y \sin \left({\pi\,x }/{l}\right)$,
(\ref{IntegraleWhiting}) can be rewritten as
\begin{eqnarray}\label{Galerkin}
\lambda_k =\frac{1}{a_{0} +y} \left[\lambda_c\,
y+\left(1-k\right)\frac{Q\left(y\right)}{\lambda_{e} } \right],
\end{eqnarray}
where the subscript $k$ denotes the dependence of $\lambda$ on the
parameter $k$. The function $Q$ takes into account the nonlinear
behavior of the restoring force and is given by
\begin{eqnarray}\label{1.7}
Q\left(y\right)=\frac{2}{l} \int_{0}^{l}\sin \left(\frac{\pi }{l}
x\right)N\left(y\sin \left(\frac{\pi }{l}
x\right)\right){{\text{d}x}}.
\end{eqnarray}
For the restoring force models (\ref{RestoringForceDim}) and
(\ref{TanhDim}), $Q$ is negative and decreases monotonically to the
asymptote $-y+{4}/{\pi}$ as $y \to +\infty$. Thus $\lambda_k$ is
maximum for $k=1$ (linear foundation) and has a horizontal
asymptote $\lambda _k^\infty= \lambda _e + k\lambda _e ^{ - 1}$ as
$y \to +\infty$.
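The Galerkin prediction (\ref{Galerkin}) for the hyperbolic model can be sketched numerically as follows, with $Q$ evaluated by midpoint quadrature after the substitution $u=\pi x/l$ (the discretization parameters are arbitrary choices of ours):

```python
import math

def lambda_k(y, k, a0, l=3.0, m=4000):
    # Galerkin load lambda_k(y) for the hyperbolic model, for which
    # N(w) = tanh(w) - w; Q(y) = (2/pi) * integral of sin(u)*N(y*sin(u)) du.
    lam_e = (math.pi / l) ** 2        # Euler load
    lam_c = lam_e + 1.0 / lam_e       # classical buckling load
    du = math.pi / m
    q = 0.0
    for i in range(m):
        u = (i + 0.5) * du            # midpoint rule
        s = math.sin(u)
        q += s * (math.tanh(y * s) - y * s) * du
    Q = (2.0 / math.pi) * q
    return (lam_c * y + (1.0 - k) * Q / lam_e) / (a0 + y)
```

For $k=1$ the nonlinear term drops out and $\lambda_1=\lambda_c\,y/(a_0+y)$; for large $y$ the value approaches the asymptote $\lambda_e+k\lambda_e^{-1}$, as stated above.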
Equilibrium paths predicted by (\ref{Galerkin}) are traced out in
the plane $\left(y={\rm{max}}\left(w\right), \lambda\right)$ by
gradually incrementing $y$ and evaluating $\lambda_k$, $k$ and $a_0$
being fixed. Predictions are compared with a numerical solution of
(\ref{EqDiff}), using MATLAB's boundary value solver {\it{bvp4c}}
\cite[this is a finite difference code that implements a collocation
formula, details of which can be found in][]{Matlab}.
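A comparable computation can be sketched with SciPy's {\it solve\_bvp} (a collocation solver analogous to {\it bvp4c}); this is our illustrative analogue, not the code used in the paper. The linear case $k=1$ is convenient for validation, since the modal solution is then exact with amplitude $y=\lambda a_0/(\lambda_c-\lambda)$:

```python
import numpy as np
from scipy.integrate import solve_bvp

l, k, a0, lam = 3.0, 1.0, 0.1, 1.0       # linear foundation (k = 1) test case

def p_k(w):
    # hyperbolic restoring force (reduces to -w for k = 1)
    return -k * w - (1.0 - k) * np.tanh(w)

def rhs(x, y):
    # first-order system for w'''' = -lam*(w'' + w0'') + p_k(w)
    w0pp = -a0 * (np.pi / l) ** 2 * np.sin(np.pi * x / l)   # w0''
    return np.vstack([y[1], y[2], y[3],
                      -lam * (y[2] + w0pp) + p_k(y[0])])

def bc(ya, yb):
    # simply supported ends: w = w'' = 0 at x = 0 and x = l
    return np.array([ya[0], yb[0], ya[2], yb[2]])

x = np.linspace(0.0, l, 101)
sol = solve_bvp(rhs, bc, x, np.zeros((4, x.size)), tol=1e-8)
y_max = sol.y[0].max()                    # modal amplitude
```

Here the expected amplitude is $0.1/(\lambda_c-1)\approx0.0992$, which the solver reproduces.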
\section{Results}
\begin{figure}
\begin{center}
\includegraphics[width=1.\textwidth]{Fig3.eps}
\end{center}
\caption{Equilibrium paths of a finite length beam on a nonlinear
foundation. Circles: numerical predictions. Lines: Galerkin
solution. In red: bi-linear restoring force model. In blue:
hyperbolic model. (a) $k=-2$, (b) $k=0$, (c) $k=0.75$, (d) $k=1$. On
each subfigure, the equilibrium paths are plotted (from top to
bottom) for $a_0=0$, $a_0=0.0238$, $a_0=0.595$ and $a_0=1.19$, as
shown in (d). The length of the beam is $l=3$. }\label{Paths}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig4.eps}
\end{center}
\caption{Maximum load $\lambda_m$ that the beam may support versus
the imperfection size $a_0$ and the foundation stiffness ratio $k$.
In red: bi-linear restoring force model. In blue: hyperbolic model.
Vertical dotted lines correspond to the critical imperfection sizes
$a_{0c}$ for $k=0.75$ and $k=0$. The length of the beam is $l=3$.
}\label{Limit_point}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig5.eps}
\end{center}
\caption{Diagram of existence of a limit point for an imperfect
finite beam on a bi-linear/hyperbolic foundation. $k$ is the
stiffness ratio of the foundation and $a_0$ the imperfection size.
$\lambda_e=(\pi/l)^2$ is the Euler load,
$\lambda_c=\lambda_e+\lambda_e^{-1}$ and $\lambda_{k}^{\infty}=
\lambda_e+k\lambda_e^{-1}$.}\label{Diagram}
\end{figure}
The equilibrium paths predicted by the Galerkin method and the
numerical solution are shown in Fig. \ref{Paths}. Excellent
agreement between the predictions is found for both restoring force
models (the relative error between the two methods being less than
$0.1\%$). Since the Galerkin method was initiated with a test
function having the same shape as the imperfection, we conclude that
the deflection is just an amplification of the initial curvature. In
other words, in the range $l<\sqrt{2}\pi$, no localized buckling is
observed for a beam on a bi-linear or hyperbolic foundation. This
behavior has also been reported by \cite{Lee96} for a linear
foundation, showing a tendency toward localization when increasing
the beam length.
As expected, the equilibrium paths traced out for the hyperbolic
restoring force are below those traced out for the bi-linear force,
the hyperbolic profile modeling a softer foundation than the
bi-linear one. However, the choice of the restoring force has little
influence on the shape of the equilibrium paths.
For small $a_0$, the equilibrium paths first increase to a maximum
$\lambda_m$ that is smaller than (or equal to, in the case of $a_0=0$)
the buckling load $\lambda_c$. Then, the paths asymptotically
decrease to $\lambda_{k}^{\infty}$. In the case of $a_0=0$, $k=1$,
they remain equal to $\lambda_c$. For high $a_0$, the equilibrium
paths increase monotonically to the asymptote
$\lambda_{k}^{\infty}\leq\lambda_c$. The asymptotic value
$\lambda_{k}^{\infty}=\lambda_c$ is reached for $a_0=0$ and $k=1$.
Note that for $k<-{\lambda_e}^2$, $\lambda_{k}^{\infty}$ is
negative, so that equilibrium states with $\lambda<0$ are predicted
(see Fig. \ref{Paths}(a)). Physically, when $k<-{\lambda_e}^2$ the
restoring force $p_k$ may become negative so that springs are
compressed, pushing up the beam. In this situation, the restoring
force has a destabilizing effect on the beam. To counteract this
effect, a tensile force $\lambda<0$ has to be applied.
The evolution of $\lambda_m$ versus $a_0$ is shown in Fig.
\ref{Limit_point}. A gradual drop in the maximum load admissible by
the structure from $\lambda_c$ to $\lambda_{k}^{\infty}$ is observed
when increasing $a_0$ (resp. decreasing $k$). This gradual drop is
highly sensitive to the restoring force model. A log scale applied
on Fig. \ref{Limit_point} shows that, for small imperfection sizes,
the decay rate does not depend on $k$: $\lambda _m - \lambda _c$
scales as $-a_0$, for the bi-linear model and as $-a_0 ^{2/3}$ for
the hyperbolic model.
For $a_0$ larger than a critical value $a_{0c}$, the equilibrium
paths do not have a limit point anymore. Actually, a path with no
limit point may be seen as a path with a limit point at
$\left(\infty,\lambda_{k}^{\infty}\right)$. Thus, $a_{0c}$ may be
obtained from (\ref{Galerkin}) by enforcing $y\rightarrow\infty$ in
$d\lambda_k/dy=0$. Both restoring force models lead to
\begin{eqnarray}\label{a0c}
a_{0c}=\frac{4}{\pi}\frac{1-k}{{\lambda_e}^{2}+k},
\end{eqnarray}
whose dimensional equivalent form is
\begin{eqnarray}\label{A0_c}
A_{0c} = \frac{4}{\pi }\frac{{\left( {K_0 - K} \right)\Delta
}}{{{{\pi ^4 EI}}/{{L^4 }} + K}}.
\end{eqnarray}
The critical imperfection size predicted by \cite{Lagrange2012} is
therefore recovered in the particular case $K=0$, showing that
$A_{0c}$ only depends on the limiting plateau $K_0\,\Delta$ of the
restoring force \cite[as stated in][]{Maltby7}.
Finally, since $a_0>0$, equation (\ref{a0c}) shows that if
$k<-{\lambda_e}^2$ then the equilibrium paths always have a limit
point $\lambda_{k}^{\infty}<\lambda_m<\lambda_c$.
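Equation (\ref{a0c}) is straightforward to evaluate; for $l=3$ it gives the critical imperfection sizes marked by the dotted lines in Fig. \ref{Limit_point} (an illustrative check):

```python
import math

def a0c(k, l=3.0):
    # critical imperfection size, dimensionless form of Eq. (a0c)
    lam_e = (math.pi / l) ** 2
    return (4.0 / math.pi) * (1.0 - k) / (lam_e ** 2 + k)
```

For $l=3$ this gives $a_{0c}\approx1.059$ for $k=0$ and $a_{0c}\approx0.163$ for $k=0.75$; the linear foundation $k=1$ gives $a_{0c}=0$, and the denominator confirms that no positive critical size exists when $k<-{\lambda_e}^2$, consistent with the discussion above.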
\section{Conclusion}
In this paper, we considered the buckling of an imperfect finite
beam on a bi-linear/hyperbolic foundation. The imperfection has been
introduced as an initial curvature of size $a_0$ and the foundation
stiffness ratio as a parameter $k\leq1$, extending the result of
\cite{Lagrange2012} derived for $k=0$.
Equilibrium paths of the beam have been predicted using a Galerkin
method initiated with a single trigonometric function which has the
same shape as the imperfection. Predictions compare well with a
numerical solution and lead to the conclusion that only periodic
buckling can arise for a finite beam on a bi-linear/hyperbolic
foundation, as also observed by \cite{Lee96} for a linear
foundation.
We have shown the existence of a critical imperfection size $a_{0c}
= 4\left( {1 - k} \right)\left[ {\pi \left( {\lambda _e^2 + k}
\right)} \right]^{ - 1}$, independent of the restoring force model,
such that:
\begin{itemize}
\item if $a_0<a_{0c}$, then the maximum load diminishes with
increasing imperfection size, from $\lambda _c=\lambda_e+\lambda_e^{-1}$, for
$a_0=0$, to $\lambda_e+k\lambda_e^{-1}$ for $a_0=a_{0c}$,
$\lambda_e=(\pi/l)^2$ being the Euler load. The decay rate has been
shown to be sensitive to the restoring force model. In the limit of
small $a_0$, $\lambda _m - \lambda _c\sim -a_0$ for the bi-linear
model and $\lambda _m - \lambda _c\sim -a_0 ^{2/3}$ for the
hyperbolic model.
\item If $a_0>a_{0c}$, then the
maximum load simply corresponds to $\lambda_e+k\lambda_e^{-1}$.
\end{itemize}
Finally, we have shown that if $k<-{\lambda_e}^2$ then an imperfect
finite beam on a bi-linear/hyperbolic foundation can support a
compressive load larger than $\lambda_{k}^{\infty}$, and smaller
than $\lambda_c$, whatever the imperfection size. This feature is
of great practical interest for engineers since $a_0$ is usually hard to
evaluate. The main results from this study are summarized in Fig.
\ref{Diagram}.
In the present paper, a bi-linear restoring force model for the
foundation has been used but plasticity effects that would emerge
from loading/unloading cycles have not been considered. Future works
will have to highlight the way those effects could modify the
maximum load that the beam can support. A basic model would consist
of considering a permanent deflection as an imperfection whose size
would grow after each cycle. In that case, from the present
study, a decrease of the maximum load is expected after each
cycle, at least as long as the accumulated deflection remains
smaller than a threshold equivalent to $a_{0c}$.
The author thanks Dr. M. Brojan for introducing him to the
hyperbolic restoring force model, and Dr. Alban Sauret and Dr. Jay
Miller for their insightful comments on this paper.
\bibliographystyle{aipnum4-1}
\section{Introduction}
Gravity-darkening is an important piece of the stellar structure and evolution
that has been studied for almost a hundred years (von Zeipel 1924). Gravity-darkening
exponents (GDE) are key tools for analysing light curves of eclipsing binaries or
in isolated rotating stars through long-baseline optical and infrared interferometry.
Considering stars in strict radiative equilibrium
(pseudo-barotrope), in 1924 von Zeipel showed that the variation of brightness
over the surface is proportional to the effective gravity, that is,
\begin{equation}
{F} = -{4 a c T^{3}\over{3 \kappa \rho}}{dT\over{d\psi}} {g^{\beta_{1}}}
,\end{equation}
\noindent
or equivalently
\begin{equation}
{{T_{\rm eff}}^4} \propto {g^{\beta_1}},
\end{equation}
\noindent
where $\psi$ is the potential, $g$ the local gravity, T the local
temperature, $\kappa$ the opacity, $\rho$ the local density, $a$
the radiation pressure constant, $c$ the velocity of light in vacuum, T$_{\rm eff}$
the effective temperature, and $\beta_1$=1.0 is the GDE, which is a bolometric quantity.
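As a quick numerical illustration of Eq. (2): under von Zeipel's law ($\beta_1=1$), a pole with a 20\% higher effective gravity than the equator is only about 4.7\% hotter (the gravity contrast here is an arbitrary illustrative value, not a measurement):

```python
# T_eff^4 proportional to g^beta1 with beta1 = 1 (von Zeipel).
beta1 = 1.0
g_ratio = 1.20                        # assumed g_pole / g_equator
t_ratio = g_ratio ** (beta1 / 4.0)    # resulting T_eff contrast, ~1.047
```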
Although we are still far from fully understanding the gravity-darkening phenomenon,
in the last 22 years, several
theoretical papers related to the GDE have appeared in the literature and have
shed some light on the scenario, such as Claret (1998, 2000, 2012), Espinosa
Lara and Rieutord (2011, 2012), and McGill, Sigut, and Jones (2013). For a historical
summary of the theoretical research on the GDE, see Claret (1998, 2015).
From now on we also designate the GDE as $\beta_1$.
Another important and complementary ingredient in the analysis of
systems with non-spherical configurations are the so-called gravity-darkening
coefficients (GDCs). As we can only observe
determined band-limited stellar flux (not the bolometric), the
introduction of these coefficients is necessary in order to model a distorted star. Such a coefficient can be written as (Claret\& Bloemen 2011):
\begin{eqnarray}
y(\lambda, T_{\rm eff }, \log [A/H], \log g, V_{\xi}) = \nonumber\\
\left(\frac{d\ln T_{\rm eff }}{{d\ln g}}\right)
\left(\frac{\partial{\ln I_o(\lambda)}}{{\partial{\ln T_{\rm eff }}}}\right)_{g}
+ \left(\frac{\partial{\ln I_o(\lambda)}}
{\partial{\ln g}}\right)_{T_{\rm eff}},
\end{eqnarray}
\noindent
where $\lambda$ is the wavelength, $I_o(\lambda)$ the intensity at a given
wavelength at the centre of the stellar disc, and $V_{\xi}$ is the microturbulent velocity.
We note that the expression $\left(\frac{d\ln T_{\rm eff }}{{d\ln g}}\right)$ can
be written as $\beta_1/4$.
In order to progress in our investigation of stellar
configurations distorted by rotation and/or tides, we recently studied
the GDCs for the case of compact stars (Claret et al. 2020). In that study
we computed GDC for DA, DB, and DBA white dwarf models, covering the
transmission curves of the Sloan, UBVRI, Kepler, TESS, Gaia, and HiPERCAM
photometric systems. These computations are available for log [H/He] =
-10.0 (DB), -2.0 (DBA), and He/H = 0 (DA). The covered range
of log g was 5.0-9.5, while for the effective temperatures the respective
range was 3750 K-100000 K.
From an observational point of view, discoveries of binary systems whose components
are tidally and/or rotationally distorted white dwarfs are ongoing (see e.g.
Burdge et al. (2019a, 2019b, 2020), Kupfer et al. (2020) and references therein).
However, as far as we know, there are no specific GDE calculations
for white dwarf sequences or even for an individual model.
In this short paper we present, for the first time, GDEs for three
white dwarf cooling sequences. We also introduce some improvements to
our methods for calculating the GDE:
$\beta_1$ is computed as a function of the optical depth, that is, $\beta_1 =
\beta_1(\tau)$. In addition, we introduce an extra condition in our
calculations, namely the relationship between the local gravities and the corresponding
optical depths for a given equipotential surface.
In the following sections, we discuss these points in more detail.
The paper is organised as follows: Sect. 2 is dedicated to describing our methods and some
aspects of the distorted stellar configurations. In Sect. 3 we discuss both observational and
theoretical evidence for deviations from von Zeipel's theorem (1924) and
present our results and conclusions.
\section{ The numerical method}
Our numerical method is based on the triangles strategy
introduced by Kippenhahn et al. (1967). A complete description of our method can be
found in Claret (1998), but for the sake of clarity, we summarise it below. To save
computing time, Kippenhahn et al. (1967) introduced an ingenious method: when an
evolutionary sequence is being calculated, if the external boundary conditions are
unchanged at the fitting point (envelope-interior), the outer layer integrations
must be the same as the previous ones. Three envelopes corresponding to three points
in the Hertzsprung-Russell (HR) diagram around the current values of luminosity and effective temperature are computed.
As the model evolves, its properties are checked to see if they remain within this
triangle. If so, the boundary conditions are unchanged. It is important to highlight that
this strategy is valid only if the triangle is sufficiently small. This warning is
particularly valid for convective envelopes. To simulate a distorted star, we are
interested in several triangles showing different effective temperature distributions
over the surface. Therefore, we increase the number of triangles in the HR diagram, that is,
we compute several envelopes with different temperature distributions
but imposing the same physical conditions at a given interior point. Figure 2
in Claret (2000) shows a simplified scheme of our method where only three points
are shown in the HR diagram. Once the triangles have been computed for
each point of the evolutionary track we can derive $\beta_1$ by differentiating
the neighbouring envelopes. Such a procedure is performed for the next evolutionary
track point and so on.
Our method has some advantages: (a) it can be applied to convective and/or
radiative envelopes; (b) one can investigate the influence of the optical depth
in the GDE by changing the fitting point to impose the boundary conditions,
without loss of generality; (c) the GDE can be computed as a function of initial mass,
chemical composition, evolutionary stage (in the present paper up to the white
dwarf phase), and other ingredients of the input physics, and (d) more realistic atmosphere
models can easily be incorporated as external boundary conditions,
as done in Claret (2012).
We introduce an extra condition in our procedure: the
relationship between local gravity and optical depth over a given equipotential surface.
Indeed, solving the hydrostatic equilibrium equation for two points on an equipotential,
characterised by [$g(\mu), \tau(\mu,\psi)$],
we obtain $g(\mu)\tau(\mu, \psi) = g(\mu_o) \tau (\mu_o, \psi)$. In this relationship,
$g$ is the local gravity, $\psi$ is the total potential (rotational one included),
$\tau$ is the optical depth, and $\mu$ is given by $\cos(\phi)$ where $\phi$
is the angle between the radiation field and the $z$ axis. The subscript $o$
indicates the reference point, for example the pole.
To guarantee that the triangle technique represents an equipotential, we have taken
as reference the absolute dimensions for each point of the evolutionary models,
that is, [$g(\mu_o), \tau(\mu_o,\psi)$]. The equipotentials are then configured by introducing
additional triangles from this point.
As outlined in Sect. 1, for stars in strict radiative equilibrium, von Zeipel's theorem predicts an exponent $\beta_1$ = 1.0. However, if we inspect a typical HR diagram in the log g $\times$ T$_{\rm eff}$ plane
for stars with different initial masses, for example, 1.0 M$_{\odot}$ (convection
predominates in the envelope) and 10.0 M$_{\odot}$ (envelope predominantly in radiative
equilibrium), it can be verified that both models have different average slopes.
If we make a simple analogy between these two slopes and the GDE given by Eq. 2, the
respective $\beta_1$ would be different, with the one corresponding to the star with
1.0 M$_{\odot}$ being smaller than that of the model with 10.0 M$_{\odot}$; see Fig. 3 by Claret (1998) for a graphic example. This figure seems to indicate that stars with convective
envelopes do not strictly obey von Zeipel's theorem. Additional and more elaborate evidence has also been found to support these statements,
such as that outlined in Claret (2012, 2015) for example. In these latter studies, significant deviations were
found when the GDEs are computed for the upper layers of a distorted star.
On the other hand, Kopal (1959) derived the following equation for the stellar distortions:
\begin{eqnarray}{{g-g_o}\over{g_o}}={\sum_j}\left(1-{5\over{\Delta_j}}\right)
\left({r\over{a_1}}-1\right)
,\end{eqnarray}
\noindent
where g$_o$ is the reference local gravity, $\Delta_{j}$ = 1 + 2 k$_j$,
and k$_j$ is the apsidal motion constant
of order $j$. Therefore, the radius of an equipotential $r$ (order 2)
can be written as
\begin{equation}
r = a_1\left(1 - f_2 P_2(\theta, \phi)\right),
\end{equation}
\noindent
with
\begin{eqnarray}f_2 = {5\omega^2a_1^3\over{3GM_{\psi}(2+\eta_2)}},
\end{eqnarray}
\noindent
where $\omega$ is the angular velocity, P$_2(\theta, \phi)$
is the second surface harmonic, $a_1$ is the mean radius of the level surface,
$\eta_2$ is the logarithmic derivative of the amplitudes of the surface distortions defined
through Radau's differential equation, and $M_{\psi}$ is the mass enclosed by an
equipotential. Claret (2000) has shown that there is a close connection
between the GDE and the shape of the distorted stellar configuration,
its internal structure, and the details of the rotation law.
In fact, for stellar masses around 1.5 M$_{\odot}$ there is a change in the
predominant source of thermonuclear energy from the proton--proton chain to the CNO
cycle. This change causes a readjustment of the mass
concentration through the parameter $\eta_2$.
We reiterate the fact that $\eta_2$ is connected to k$_2$
through a simple equation: $k_2=(3-\eta_2(R))/(2(2+\eta_2(R)))$, where
$\eta_2(R)$ is evaluated at the stellar surface.
This readjustment changes
how a star reacts under distortions and consequently affects the parameter $\beta_1$ (see Eq. 3 and also Fig. 1 by Claret (2000), where log k$_2$ and $\beta_1$
are shown for ZAMS models with masses varying from 0.075 to 40.0 M$_{\odot}$).
In addition, convection also begins to contribute significantly
to the total flux in this range of effective temperatures.
For masses greater than 1.5 M$_{\odot}$ the mass concentration decreases almost
linearly with the stellar mass.
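The $k_2$--$\eta_2$ relation quoted above is easily checked; in particular, $\eta_2(R)=0$ recovers the classical homogeneous-sphere value $k_2=0.75$ (a minimal sketch):

```python
def k2_from_eta2(eta2_R):
    # k2 = (3 - eta2(R)) / (2 * (2 + eta2(R))), eta2 evaluated at the surface
    return (3.0 - eta2_R) / (2.0 * (2.0 + eta2_R))
```

As the relation shows, $k_2$ decreases monotonically as $\eta_2(R)$ grows, that is, with increasing central mass concentration.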
\section {Discussion of the results and final remarks}
Some years ago, Claret (2016, see the corresponding Eqs. 8 and 9) adopted
an expression for the convective efficiency (ratio of the convective to the
total flux, denoted by the symbol $f$) to investigate the GDEs.
Here we generalise that result for a range of opacities
(through parametrised formulae) as
indicated by Eq. 7:
\begin{eqnarray}
{f} \approx A \gamma \left({r\over R}\right)^2 \left({3 \Gamma_1\over{5 \mu_1 \beta}}\right)^{1/2}
\left({2 c_P \mu_1 \beta\over{5}}\right) T^{1/2}
\left[ g\over{T_{\rm eff}^{{(4n+4+\left|n+s\right|)}}} \right].
\end{eqnarray}
\noindent
In the above equation, $A$ is a constant, $R$ the star radius, $r$ the
radial coordinate, $\mu_1$
the mean molecular weight, $\beta$ the ratio of gas to total pressure,
$c_P$ the specific heat at constant
pressure, and $\Gamma_1$ is the
adiabatic exponent. The parameter $\gamma$ is given by
\begin{eqnarray}
{\gamma} = {1\over 2} {\overline{v}\over{v_s}} {\Delta T\over T},
\end{eqnarray}
\noindent
with $\overline{v}$ being the mean convective velocity,
$\Delta$T the excess of temperature of a rising element over the
mean temperature of the surrounding environment, and $v_s$ is the velocity of sound.
Here, we assume that for the derivation of Eq. 7 the opacity can be written as
$\kappa \approx \kappa' P^{n} T^{-s}$ or $\kappa\approx\kappa''\rho^{n} T^{-n-s}$,
where $\kappa'$ and $\kappa''$ are constants, and $n >$ 0 and $s <$ 0,
assuming a perfect gas equation of state.
Although we do not use this expression directly, because it is only
an approximation, it is useful for analysing the correlation between $f$ (through
the role of the opacities) and $\beta_1$, as we see in the following paragraphs.
On the other hand, the evolutionary track from the pre-main sequence (PMS) up to the white dwarf stage
was computed using the MESA code (Paxton et al. 2011, 2013, and 2015), version 7385.
The basic input physics of the MESA code is given in the
above references.
Another set of DA- and DB-type white dwarf models was computed with
LPCODE (Althaus et al. 2001a, 2001b). The models were followed from the
zero age main sequence (ZAMS) up to the white dwarf stage (Renedo et al. 2010).
This code considers modern input physics such as a detailed network for
thermonuclear reactions, OPAL radiative opacities, full-spectrum turbulence
theory of convection, detailed equations of state, and
neutrino emission rates.
We first discuss the
cases of `pure' DA- and DB-type white dwarfs, without considering the previous
evolutionary phases. We analyse models with 1.0 M$_{\odot}$ although the results are
similar if we consider models with different initial masses. Figure 1 illustrates the resulting GDE
computed at optical depth $\tau$ = 100.0 (thick continuous line) using the modified triangle-strategy method. The value of log g during the evolution
of this cooling sequence varies approximately between 8.3 and 8.7.
As expected, for high effective temperatures where radiative transfer
predominates and at large optical depth, the results are compatible with the equation of diffusion, that is, with those resulting from the
von Zeipel (1924) approach. However, as the model evolves, the influence of convection
begins to appear which translates into a drop of $\beta_1$.
The drop-off threshold is located at log T$_{\rm eff}$ $\approx$ 4.12, and
we have found large deviations from
von Zeipel's theorem for effective temperatures lower than 10000 K.
We note that this transition temperature is slightly different from that typical of main sequence
stars (logT$_{\rm eff}$ $\approx$ 3.90). This is due to differences in chemical
compositions and degree of compactness (log g).
The dashed (log g = 8.5) and thin continuous (log g=8.0) lines are useful
to establish a more direct connection between
the contribution of the convection to the flux and $\beta_1$. Such a contribution
can be set in terms of the ratio F$_c(\tau)$/F$_r(\tau)$
where F$_c(\tau)$ is the convective flux and F$_r(\tau)$ is the radiative one for a
given optical depth (these data were kindly provided by E. Cukanovaite 2020;
see also Cukanovaite et al. 2019).
The F$_c(\tau)$/F$_r(\tau)$ ratio is slightly different from that given by Eq. 7
but shows the same general behaviour, taking into account the different scales: the F$_c(\tau)$/F$_T(\tau)$ ratio
varies from 0 to 1.0, F$_T(\tau)$ being the total flux.
We can use Eq. 7
to help us visualise the behaviour of $\beta_1$ and its relation to opacity.
The connection between $\beta_1$ and the convective contribution to the
flux is evident; it is especially clear near the maxima and minima.
The behaviour of the GDE shown in Fig. 1 (and that of the following figures)
results from the combination of the different sources of opacity,
but we can gain some insight into their influence on the GDE by adopting some opacity laws in their parameterised form.
Firstly, we analyse the effects of opacity in
the colder models. For this range of effective temperatures
one of the predominant sources of opacity is the negative hydrogen ion whose dependence
on $\rho$ and T is given by $\kappa \approx \kappa_1 \rho^{1/2} T^{7.7}$, where
$\kappa_1$ is a constant. We note the strong dependence of the negative hydrogen ion opacity
on temperature.
The expression
$g/T_{\rm eff}^{\,(4n+4+\left|n+s\right|)}$
in Eq. 7 drives the behaviour of the gravity-darkening exponent, and,
introducing the relevant values of $n$ and $s$
for such models at a given convective efficiency, we get $\beta_1\approx$ 0.30.
In addition, the opacities $ff$, $bf,$ and
$bb,$ because of electronic transitions, are given by Kramers law:
$\kappa \approx \kappa_2 \rho T^{-7/2}$, where
$\kappa_2$ is a constant. The approximate corresponding value of $\beta_1$
in this case is $\approx$ 0.35.
Considering that Eq. 7 is just a rough approximation, these results are surprising: they
predict deviations from von Zeipel's theorem and, in addition, they are in
reasonable agreement with the values of $\beta_1$ found in the literature for
late-type stars (semi-empirical and theoretical ones; see below). On the other hand, one of the
main sources of continuum opacity in hot star atmospheres is the so-called
Thomson scattering. As is well known, this opacity is `grey' because there is no
dependence on temperature or density, and it can be written as follows for the case of complete ionisation:
$\kappa \approx 0.2 (1.0+X)$, where $X$ is the hydrogen content. Using the same
treatment as for the negative hydrogen ion case and Kramers law, we obtain
$\beta_1 \approx 1.00$, which is in good agreement with the predictions of
von Zeipel's theorem for hot stars. Again, it is gratifying that Eq. 7 is
capable of predicting
the typical values of $\beta_1$, even considering its limitations.
In addition to the analysis of the influence of convective flux on $\beta_1$,
there are other ways to correlate $\beta_1$ with some physical magnitudes.
Considering the additive properties of the specific entropy in the nonrelativistic
case we have
\begin{eqnarray}
s = B + {N_ok\over{\mu_i}}\ln{T^{3/2}\over{\rho}} +
{N_ok\over{\mu_e}} \left[ {5\over{3}} {F_{3/2}(\alpha)\over{F_{1/2}(\alpha)}}
+ \alpha\right]
+ {4a\over{3}} {T^3\over{\rho}}
,\end{eqnarray}
\noindent
where the symbols have the following meaning: B is a constant, $\alpha$ the
degeneracy parameter, N$_o$ is Avogadro's number, $k$ the Boltzmann constant,
$\mu_i$ the mean molecular weight per ion, and $\mu_e$ is the mean molecular
weight per electron. The functions $F_{1/2}(\alpha)$ and $F_{3/2}(\alpha)$
are auxiliary functions that can be written as
\begin{eqnarray}
F_{1/2}(\alpha)= \int_{0}^{\infty} {u^{1/2}du\over{e^{\alpha+u} +1}},
\end{eqnarray}
\noindent
and
\begin{eqnarray}
F_{3/2}(\alpha)= \int_{0}^{\infty} {u^{3/2}du\over{e^{\alpha+u} +1}},
\end{eqnarray}
\noindent
where $u=p^2/(2 m kT)$, with $p$ being the particle momentum and $m$ its mass.
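The auxiliary integrals are straightforward to evaluate numerically; as a consistency check, in the non-degenerate limit (large $\alpha$) the ratio $F_{3/2}/F_{1/2}$ tends to $3/2$. A minimal sketch (the function name and quadrature choices are ours, not part of the models used here):

```python
import numpy as np

def fermi_integral(order, alpha, t_max=30.0, n=4001):
    """F_order(alpha) = int_0^inf u**order / (exp(alpha + u) + 1) du
    (Eqs. 10-11).  The substitution u = t**2 removes the u**(1/2)
    endpoint singularity; the resulting smooth, even, fast-decaying
    integrand is handled accurately by a plain trapezoid rule."""
    t = np.linspace(0.0, t_max, n)
    f = 2.0 * t**(2.0 * order + 1.0) / (np.exp(np.minimum(alpha + t * t, 700.0)) + 1.0)
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1]))
```

For $\alpha = 10$ this reproduces the classical limit $F_{1/2}(\alpha) \approx e^{-\alpha}\sqrt{\pi}/2$ to better than 0.1 per cent.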
The three components in Eq. 9 above can be easily identified: the second term
is the entropy due to ions, the third is connected to electrons, and the
fourth to radiation. We can see in Fig. 2 how the GDE is related to
the entropy for the same models and conditions
shown in Fig. 1. As mentioned, the convection onset is in
log T$_{\rm eff}$ $\approx$ 4.12 (log T$_{\rm eff~onset}$)
where a sudden variation of the entropy is indicated by a vertical arrow. This
effective temperature is in agreement with those by Tremblay (2020).
There are three other inflexion points (also marked with vertical arrows).
These four points shown in Fig. 2 indicate that the entropy varies with the
effective temperature (also with log g), which in turn drives the behaviour of the GDE.
The changes in $\beta_1$ with entropy are not surprising.
Indeed, the differential of the entropy is given by $dS=c_p (\nabla-\nabla_{adia})$,
where $\nabla=d\ln T/d\ln P$ and $\nabla_{adia}=(d\ln T/d\ln P)_{adia}$. On the other hand,
it can be shown that, alternatively, the convective flux F$_c(\tau) \propto(\nabla-\nabla_{adia})^x$,
where $x$ = 3 for small convective efficiency or $x$ = 3/2 for large convective efficiency. Thus, under certain physical conditions, the entropy can also be used as a convective stability criterion.
Another point to note is that the entropy does not significantly depend on the
optical depth for effective temperatures $\leq$ T$_{\rm eff~onset}$. Indeed, the curves
for $\tau $ = 100.0 and 500.0 coincide for T$_{\rm eff}$ $\leq$ T$_{\rm eff~onset}$.
This implies that the behaviour of the resulting GDEs should not vary significantly,
at least within the range of optical depths, log g, and effective temperatures explored here. We note that, for hot models, because
the entropy
hardly varies with T$_{\rm eff}$, the corresponding GDE values are almost
constant and equal to 1.0, which restores von Zeipel's theorem.
These results are more general than those obtained decades ago, in which the GDE
value was constant and approximately equal to 0.32 for envelopes in
convective equilibrium.
Figure 3 shows a comparison between the GDE for DB and DA models at
$\tau$ = 100.0. The profiles of $\beta_1$ are very similar, with the exception
of the transition zone where there is a shift due to the difference in the
chemical composition of the models and its influence on pressure and
temperature.
An interesting comparison that can be made is related to the masses of
white dwarfs. In Fig. 4 we show the evolution of the GDE for DA models
with masses 1 M$_{\odot}$ (continuous line) and 0.52 M$_{\odot}$ (dashed line). The
calculations were also performed for $\tau$ = 100.0. Because of
the dependence of the onset of convection on the local gravity, the
GDEs for the model with 0.52 M$_{\odot}$ are shifted towards lower temperatures,
in reasonable agreement with other studies (Cunningham et al. 2019;
Tremblay 2020).
As indicated, another set of evolutionary models was generated with
the MESA module with an initial mass of 2.0 M$_{\odot}$, X = 0.703, and Y = 0.277.
For stars with convective envelopes, we employed the
standard mixing-length formalism (B\"ohm-Vitense 1958) with $\alpha_{MLT}$ = 1.80.
The opacity tables adopted are those given by Grevesse and Sauval (1998).
The models were followed from the PMS up to the cooling white dwarf stage. Figure 5
shows the complete evolutionary track in the HR diagram. The final mass in
the cooling stage is 0.56 M$_{\odot}$. The results concerning gravity-darkening
are shown in Fig. 6 where
we add the profile of $\beta_1$ shown in Fig. 1 for comparison.
Within our present level of approximation, the differences are small, being more
appreciable only in the interval around logT$_{\rm eff}$ $\approx$ 3.90.
As we can see in Figures 1-4 and 6, there are clear indications of deviations
from the approach by von Zeipel.
Deviations from von Zeipel's theorem were previously
found using suitable evolutionary models in stars evolving towards the branch
of red giants and/or in low-mass main sequence stars
(low effective temperatures) where the flux is predominantly convective
in their envelopes (Claret 1998, 2000).
Additional evidence for deviations from von Zeipel's theorem comes from 3D simulations
of cold stars (Ludwig, Freytag \& Steffen 1999).
A more complete historical review of this subject can be found in the above references.
On the other hand, there is an analytical approach by which we can explore the behaviour
of GDEs in compact stars whose outermost layers are in radiative equilibrium.
Such an analysis was carried out in Claret (2012)
for main sequence stars. Here we adapt the physical conditions
for the case of white dwarfs. Such an equation can be written
as (see Appendix A)
\begin{eqnarray}
\beta_1 \approx 1 + \left[{{2+N}-\chi(4+N)\over{2+N}} -
{{2+N+\chi(-4+8\alpha_1-5N)}\over{2+N+\chi(\kappa_{\rho}-\kappa_T)(N-2\alpha_1)}}\right]
\nonumber\\
\times\, {\tau_p\over{\tau_e}}.
\end{eqnarray}
Equation 12 opens up some possibilities for investigating
the distribution of rotational velocities through the
parameter $\alpha_1$ and the geometry through $N$. We note that
this equation is valid only for hot stars.
We note that the
ratio $\tau_p/\tau_e \leq1.0$. For example, we would restore
the predictions of von Zeipel's theorem for the cases
$\alpha_1= N/2$ or for uniform rotation
and no $\theta$ dependence.
An interesting feature of Eq. 12 can be explored and is
related to $\kappa_{\rho}$ and $\kappa_T$.
Depending on the behaviour of these two variables, we could have values
of $\beta_1$ significantly smaller than 1.0.\footnote{We have also found significant deviations from von
Zeipel’s theorem using our modified numerical method, at the upper layers of hot white dwarfs.}
For example, for $\alpha_1$=-1.0, N = 1, $\chi$ = 0.2, $\tau_p$/$\tau_e$= 0.71,
and $\beta_1$ = 0.80, the resulting condition would be
\begin{eqnarray}
-6.0 \lessapprox \left[\left({{\partial ln\kappa}\over{\partial ln \rho}}\right)_{T} -
\left({{\partial ln\kappa}\over{\partial ln T}}\right)_{\rho}\right]
\lessapprox -5.0.
\end{eqnarray}
\noindent
Such a condition is approximately fulfilled for envelopes with effective
temperatures in the interval 6000 K $\lessapprox$ T$_{\rm eff}$ $\lessapprox$ 13000 K. There are no semi-empirical data yet for white dwarfs in this effective temperature range for comparison, but it is interesting to note that values
of $\beta_1$ smaller than 1.0 were detected using long-baseline
optical/infrared interferometry in isolated fast rotators.
Some of these systems show effective temperatures within the
mentioned range.
A summary of their properties can be found in Table 1 of Claret (2016).
Values of the GDE smaller than 1.0 were also observationally detected in main sequence and/or
subgiants stars in binary systems.
However, a direct connection between the results from long-baseline
interferometry and from eclipsing binaries (mostly main sequence stars) and those
provided by Eq. 12 (compact ones) is not straightforward. However, it can give us some clue for future research.
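The numbers quoted above can be verified directly from Eq. 12. Taking the quoted $\alpha=-1.0$ as the rotation parameter $\alpha_1$ of Eq. 12, a target of $\beta_1 = 0.80$ is reproduced by $\kappa_\rho - \kappa_T \approx -5.7$, inside the interval of Eq. 13. A minimal sketch (variable names are ours):

```python
import numpy as np

def beta1(d, alpha1=-1.0, N=1.0, chi=0.2, tau_ratio=0.71):
    """Gravity-darkening exponent of Eq. 12; d stands for the opacity
    derivative difference kappa_rho - kappa_T."""
    term1 = (2.0 + N - chi * (4.0 + N)) / (2.0 + N)
    term2 = ((2.0 + N + chi * (-4.0 + 8.0 * alpha1 - 5.0 * N))
             / (2.0 + N + chi * d * (N - 2.0 * alpha1)))
    return 1.0 + (term1 - term2) * tau_ratio

def solve_d(beta_target, lo=-10.0, hi=-5.01):
    """Bisection on the branch left of the pole of Eq. 12, where, for
    the default parameters, beta1 decreases monotonically with d."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if beta1(mid) > beta_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With the default parameters, `solve_d(0.80)` returns approximately $-5.70$, consistent with the interval quoted in Eq. 13.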
A turning point concerning semi-empirical $\beta_1$ was the pioneering paper by Rafert \& Twigg (1980) using eclipsing
binaries. Such research was followed
by others, such as Pantazis and Niarchos (1998), Niarchos (2000), and Djurasevic et al.
(2003, 2006), for example, who explored a wide range of effective temperatures.
The semi-empirical GDEs
derived by these latter authors for hot stars are scattered around the classical
von Zeipel value. Some of these systems show $\beta_1$ as low as 0.60. Another
important result of these investigations is that the derived values for systems
with cooler components also contradict the predictions of von Zeipel's theorem.
These semi-empirical GDE values are quite significant but do not yet
constitute a critical test of the theory of temperature distribution on a
distorted stellar surface. Although these
observations are not conclusive, they seem to
indicate a transition zone for the GDE. That zone approximately coincides
with the change in the prevalence of the process of energy transport in the envelopes,
that is, radiative to convective, as indicated by the Fig. 3 in Claret (2003). Such a transition zone is also predicted in the present
paper for compact stars.
Finally, it would be very interesting and useful
if observers were to focus their attention on close binary systems constituted by
white dwarfs distorted by rotation and tides, so that the validity of the
preliminary calculations presented here can be verified.
To constrain the GDE values observationally, it would be necessary
to investigate eclipsing binary white dwarfs that are double-lined and bright enough to obtain good radial-velocity semi-amplitudes for both
components. We hope that such systems will be found in the not too distant future.
\begin{figure}
\includegraphics[height=8.cm,width=6cm,angle=-90]{fig1.eps}
\caption{DA-type white dwarf models (1.0 M$_{\odot}$). The thick solid line
represents the GDE as a function of effective temperature. The continuous
thin line indicates the ratio F$_c(\tau)$/F$_r(\tau)$ for log g = 8.5
and the dashed one denotes the same but for log g = 8.0. All calculations
were performed at $\tau$ = 100.0. We note that the T$_{\rm eff}$ range
of the atmosphere models by Cukanovaite (2020)
is limited to T$_{\rm eff}$ $\leq$ 60000 K. The arrow indicates the direction of time
evolution.}
\end{figure}
\begin{figure}
\includegraphics[height=8.cm,width=6cm,angle=-90]{fig2AA.eps}
\caption{Entropy for the same models and conditions
shown in Fig. 1 (red lines). The vertical arrows indicate the points where there
is a marked variation of the entropy with effective temperature.
The solid line represents the entropy for $\tau$ = 100 while the
dashed-dotted line denotes $\tau $ = 500; both for log g = 8.5.
The horizontal arrow indicates
the direction of time evolution.}
\end{figure}
\begin{figure}
\includegraphics[height=8.cm,width=6cm,angle=-90]{fig3.eps}
\caption{Comparison between the GDE for a DB model (solid line, 1.0 M$_{\odot}$)
and DA (dashed line, 0.5 M$_{\odot}$). All calculations were performed
at $\tau$ = 100.0. The horizontal arrow indicates the direction of time evolution.}
\end{figure}
\begin{figure}
\includegraphics[height=8.cm,width=6cm,angle=-90]{fig4.eps}
\caption{Effect of log g on the onset of convection for DA models.
The continuous line represents a sequence of DA model (1.0 M$_{\odot}$)
while the dashed one also denotes DA models but with 0.52 M$_{\odot}$.
The horizontal arrow indicates the direction of time evolution.}
\end{figure}
\begin{figure}
\includegraphics[height=8.cm,width=6cm,angle=-90]{fig5.eps}
\caption{HR diagram for an initial mass of 2.0 M$_{\odot}$ from
the PMS to cooling white dwarf stage. Initial chemical composition
X = 0.703, Y = 0.277, $\alpha_{MLT}$ = 1.80.}
\end{figure}
\begin{figure}
\includegraphics[height=8.cm,width=6cm,angle=-90]{fig6.eps}
\caption{GDE for the white dwarf cooling sequence for the models shown
Fig. 5 (continuous line).
The DA model shown in Fig. 1 is represented by a dashed line.
The horizontal arrow indicates the direction of time evolution. }
\end{figure}
\begin{acknowledgements}
I would like to thank E. Cukanovaite for providing models of the
structure of white dwarfs atmospheres and an anonymous referee for his/her helpful suggestions.
The Spanish MEC (ESP2017-87676-C5-2-R, PID2019-107061GB-C64, and
PID2019-109522GB-C52) is gratefully acknowledged for its
support during the development of this work. A.C. also
acknowledges financial support from the State Agency for
Research of the Spanish MCIU through the “Center of
Excellence Severo Ochoa” award for the Instituto de
Astrofísica de Andalucía (SEV-2017-0709). This research has made
use of the SIMBAD database, operated at the CDS, Strasbourg,
France, and of NASA's Astrophysics Data System Abstract Service.
\end{acknowledgements}
\section{methods}
\subsection{Experimental setup}
The experiment consists of a standard cQED setup~\cite{Armen09,Mabu96} utilizing laser cooled $^{133}$Cs atoms and a high-finesse Fabry-Perot optical resonator. We attempt to drive only the $(6S_{1/2},F=4,m_F=+4)\rightarrow(6P_{3/2},F=5,m_F=+5)$ atomic cycling transition at 852nm through frequency- and polarization-selectivity so that the atom may be approximated as a TLS. The use of an improved cavity geometry and mirror mounting scheme, as compared to a previous experimental study of single-atom cavity QED in the strong driving regime~\cite{Armen09}, was crucial in enabling us to observe clear binary phase modulation.
Inside a UHV ($\approx10^{-9}$Torr) chamber and placed on a multi-stage vibration-isolation stack, the Fabry-Perot optical resonator is formed by two high-reflectivity (8ppm transmission, 2ppm loss), 10cm radii of curvature dielectric mirrors with roughly 27$\mu$m of separation, yielding a 300,000-finesse optical resonator for the standing wave, TEM$_{00}$, 18$\mu$m-waist transverse spatial mode with a field decay rate of $\kappa = 2\pi\times9.3$MHz. We took particular care to mount the mirrors in a rotationally-symmetric manner to minimize stress-induced birefringence in the mirror coatings, allowing for full polarization-selectivity of the atomic transitions. The cavity length is tuned and actively stabilized by two shear-mode piezoelectric plates underlying the two mirror mounts. The precise cavity length and resonance frequency are continually stabilized by the Pound-Drever-Hall~\cite{PDH} method using an additional laser probe detuned from the desired probe/cavity resonance frequency by two cavity free spectral ranges (at an optical wavelength of roughly 826nm, which interacts negligibly with Cs).
A Doppler-limited, magneto-optically trapped ensemble of $\sim10^6$ atoms is formed roughly 1cm above the cavity mode in the UHV chamber. After cooling, the ensemble trap is switched off, allowing the cold atoms to fall under gravity towards the cavity mode and by the time they reach the cavity mode their free-fall velocity tends to dominate any residual thermal motion. Due to the strong coupling between the targeted atomic transition and the cavity mode (with calculated maximum value max$_r\{g(r)\}\equiv g_0=2\pi\times56.8$MHz at the cavity anti-nodes, using the dipole strength of the atomic transition and cavity mode volume), individual atom transits are detected by monitoring the ($g(r)$-dependent) cavity transmission amplitude using a relatively weak and near-resonant probe, a free space balanced photodetector, and an actively phase-locked optical local oscillator. Once a near-maximally coupled atom has been detected, the probe power and frequency shift to the desired experimental levels and data acquisition is initiated. Although multiple atom transits per drop may be visible, the atomic ensemble is sufficiently diffuse such that no more than one atom is simultaneously present in the cavity mode and we acquire data from only one transit per ensemble drop. Due to the many sensitive stabilization requirements and slow drifts in the experimental apparatus, data are usually analyzed in groups of 50 atom transits, over which experimental stability can be confidently maintained.
Within a group of 50 transits records, photocurrent segments are selected for analysis in a two-stage process. First, photocurrent segments in each transit with above-shot noise variance (corresponding to TLS-induced binary phase switching) are algorithmically identified using HMM methods. Typically these segments persist for several tens of microseconds. Next, some subjective selection of these algorithmically-identified segments is required to limit the analysis to segments over which the variance is both reasonably high and constant in time, corresponding to switching signals in which the TLS maintained near-maximal coupling throughout. Independent trials of this selection process were found to result in similar final results. Despite this attempt to compensate for the time-dependent atom-field coupling (due to a position-dependent $g(r)$), modulation in the switching variance is typically apparent over timescales greater than a few microseconds; in fact, a significant fraction of the transit segments display near-sinusoidal modulation in the switching variance, corresponding to atomic motion through several standing wave anti-nodes.
\subsection{The Jaynes-Cummings model and simulation}
The Jaynes-Cummings Hamiltonian~\cite{Berman,JC} is widely used to describe the internal dynamics of a cQED system driven by a coherent probe (using $\hbar=1$):
\begin{equation}
H = \Delta \sigma^\dag\sigma+\Theta a^\dag a + ig(r)(a^\dag \sigma-a\sigma^\dag) + i\mathcal{E}(a^\dag-a)
\end{equation}
where $a$ is the annihilation operator for the cavity mode, $\sigma = \vert g\rangle\langle e\vert$ is the TLS lowering operator and $^\dag$ signifies the Hermitian conjugate. $\Delta$ is the detuning between the probe and the atomic transition frequencies and $\Theta$ is the detuning between the cavity mode resonance and the probe ($\Theta=0$ in all systems considered here). The third, `coupling' term represents the interaction between the atomic transition and cavity mode and describes the process by which quanta of energy are exchanged (at rate $\propto g(r)$) between the TLS and mode. The final term represents the coherent driving term, with amplitude $\mathcal{E}$.
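For concreteness, the Hamiltonian can be assembled on a truncated Fock space with standard Kronecker products; at resonance ($\Delta=\Theta=0$) and without drive, the one-excitation eigenvalues are the vacuum Rabi doublet $\pm g$. A minimal sketch (an illustrative construction, not the code used for the actual simulations):

```python
import numpy as np

def jc_hamiltonian(n_max, delta, theta, g, eps):
    """Driven Jaynes-Cummings Hamiltonian (hbar = 1) on a Fock space
    truncated at n_max photon states, tensored with the two-level atom."""
    a_m = np.diag(np.sqrt(np.arange(1.0, n_max)), k=1)  # mode annihilation
    sig = np.array([[0.0, 1.0], [0.0, 0.0]])            # |g><e| (TLS lowering)
    a = np.kron(a_m, np.eye(2)).astype(complex)
    s = np.kron(np.eye(n_max), sig).astype(complex)
    ad, sd = a.conj().T, s.conj().T
    return (delta * sd @ s + theta * ad @ a
            + 1j * g * (ad @ s - a @ sd)
            + 1j * eps * (ad - a))
```

Each term is Hermitian by construction (the coupling and drive operators are anti-Hermitian, and the factor $i$ restores Hermiticity), so the eigenvalues are real.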
The physical damping processes included in the overall dynamics model include the decay of cavity photons out of the resonator at a rate of $2\kappa$ per intra-cavity photon and the spontaneous emission of an excited TLS at a rate of $2\gamma_\perp$. Standard quantum trajectory theory~\cite{Carm93} based on these processes underlies much of the theoretical analysis and numerical simulation. For example, simulation of a photocurrent given a set of experimental parameters first involves the calculation of a possible trajectory for the internal quantum state vector of the cQED system $\vert\psi_c(t)\rangle$ by numerically integrating the stochastic Schr\"{o}dinger equation \cite{Tan99,Carm93}
\begin{eqnarray}
d\vert\psi_c(t)\rangle &=& -(iH+\kappa a^\dag a+\gamma_\perp\sigma^\dag\sigma)\vert\psi_c(t)\rangle dt+\left(\sqrt{2\kappa}\langle\psi_c(t)\vert a+a^\dag\vert\psi_c(t)\rangle dt+dW_t^{(1)}\right)\sqrt{2\kappa}a\vert\psi_c(t)\rangle+\nonumber\\
&&\left(\sqrt{2\gamma_\perp}\langle\psi_c(t)\vert \sigma+\sigma^\dag\vert\psi_c(t)\rangle dt+dW_t^{(2)}\right)\sqrt{2\gamma_\perp}\sigma\vert\psi_c(t)\rangle
\end{eqnarray}
where $\{dW_t^{(1)},dW_t^{(2)}\}$ are randomly generated, independent Wiener increments and the state vector is forcibly re-normalized after each recursive update. The simulated photocurrent $dQ(t)$ may then be calculated using this state trajectory and calibrated detection efficiency $\eta$
by
\begin{equation}
dQ(t) = \sqrt{\eta}\left(\sqrt{2\kappa}\langle\psi_c(t)\vert a+a^\dag\vert\psi_c(t)\rangle dt+dW_t^{(1)}\right)+\sqrt{1-\eta}dW_t^{(3)}
\end{equation}
where $dW_t^{(3)}$ is a third, independent Wiener increment. Although this model may be generalized to include a time dependence in the coupling rate $g(r)$ and a more realistic, multi-level atomic structure, all simulations here utilized the maximal $g_0$ for the static coupling rate and assumed a TLS atomic model.
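A first-order Euler-Maruyama discretization of the two equations above might look as follows (an illustrative sketch with arbitrary step sizes, not the production simulation code):

```python
import numpy as np

def simulate_homodyne(H, a, s, kappa, gamma, eta, dt, n_steps, rng):
    """Integrates the stochastic Schrodinger equation and the photocurrent
    increments dQ by first-order Euler-Maruyama steps, with forced
    renormalization of the state vector after each update."""
    psi = np.zeros(H.shape[0], dtype=complex)
    psi[0] = 1.0                                  # start in the ground state
    drift = -1j * H - kappa * a.conj().T @ a - gamma * s.conj().T @ s
    dQ = np.empty(n_steps)
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), size=3)
        x_a = np.real(psi.conj() @ (a + a.conj().T) @ psi)  # <a + a^dag>
        x_s = np.real(psi.conj() @ (s + s.conj().T) @ psi)  # <sigma + sigma^dag>
        psi = psi + drift @ psi * dt \
            + (np.sqrt(2 * kappa) * x_a * dt + dW[0]) * np.sqrt(2 * kappa) * (a @ psi) \
            + (np.sqrt(2 * gamma) * x_s * dt + dW[1]) * np.sqrt(2 * gamma) * (s @ psi)
        psi /= np.linalg.norm(psi)                # forced renormalization
        dQ[k] = np.sqrt(eta) * (np.sqrt(2 * kappa) * x_a * dt + dW[0]) \
              + np.sqrt(1.0 - eta) * dW[2]
    return dQ
```

The operators `H`, `a`, and `s` would be built as in the Hamiltonian above; the three independent Wiener increments map directly onto $dW_t^{(1)}$, $dW_t^{(2)}$, and $dW_t^{(3)}$.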
\subsection{Hidden Markov model analysis}
A hidden Markov model (HMM) \cite{HMM} is a common method for analyzing systems with a dynamically evolving state that is monitored through noisy observations. The model is Markovian in the sense that the probability that the system will be in a particular state in the next time step is a function only of its current state. Our HMM analysis of experimental and simulated photocurrents assumes only two possible internal states (each corresponding to one of the two atomic dressed states), with transitions between them at fixed (and generally asymmetric) rates. Inference of the `hidden' internal state trajectory is made using the photocurrent measurements and models for the mean transition rates and measurement distributions associated with each internal state. We expect these photocurrent distributions to be normal, with the same variance (reflecting optical shot noise as well as $g(r)$ variations), but differing means (corresponding to the associated positive and negative phase shifts).
The two state HMM with normally-distributed photocurrent `emissions' most likely to produce a particular photocurrent segment ({\it i.e.} the maximum likelihood estimator for the HMM) is calculated according to the Baum-Welch expectation maximization algorithm \cite{Welch03,HMM}. Essentially, this algorithm consists of first assuming some set of HMM parameters: a pair of state transition rates and a mean and variance of the emissions distributions associated with each state. Then, for the particular photocurrent segment, an estimate of the hidden state trajectory is made and, using this estimate, the segment-average transition rates and emission statistics are calculated. It can be shown that these inferred, average HMM parameters comprise a model with a necessarily higher likelihood than the originally assumed one. Thus, the procedure is iterated, each time using the inferred, segment-averaged HMM parameters from the previous trajectory estimate until the model parameters converge. Once this maximum likelihood HMM is identified, the likelihoods of models in the vicinity of the most likely model are also calculated. The Viterbi state trajectory estimate (such as used in Figure \ref{fig:Viterbi_Fit}) maximizes the probability of the entire state trajectory, given a HMM and a photocurrent segment \cite{Viterbi,HMM}.
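As an illustration of the state-estimation step, a log-domain Viterbi decoder for a two-state HMM with Gaussian emissions of common variance can be written compactly (a generic textbook implementation, not our analysis code):

```python
import numpy as np

def viterbi_2state(y, trans, means, var):
    """Most likely hidden-state path given photocurrent samples y, a 2x2
    transition matrix `trans`, per-state emission means, and a common
    emission variance (log-domain Viterbi recursion)."""
    n = len(y)
    log_t = np.log(trans)
    log_em = -0.5 * (y[:, None] - means[None, :]) ** 2 / var  # up to a constant
    delta = np.empty((n, 2))
    back = np.zeros((n, 2), dtype=int)
    delta[0] = np.log(0.5) + log_em[0]            # uniform initial-state prior
    for t in range(1, n):
        cand = delta[t - 1][:, None] + log_t      # cand[i, j]: from state i to j
        back[t] = cand.argmax(axis=0)
        delta[t] = cand.max(axis=0) + log_em[t]
    path = np.zeros(n, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(n - 2, -1, -1):                # backtrack
        path[t] = back[t + 1, path[t + 1]]
    return path
```

For well-separated emission means, the decoded path closely tracks the true switching record even in the presence of shot noise.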
\begin{acknowledgments}
JK would like to thank Arka Majumdar and Ramon van Handel for helpful discussions. This work is supported by DARPA-MTO under Award No. FA8650-10-1-7007. DSP acknowledges the support of a Stanford Graduate Fellowship.
\end{acknowledgments}
\section{Additional information}
The authors declare no competing financial interests.
\section{Introduction}
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{images/lagrange.pdf}
\caption{The architecture of the proposed prediction-driven optimization framework to maximize the market influence of air carrier $C_k$. Note that the pre-trained neural network-based market share prediction models constitute the objective function. The gradients of the budget constraint and the objective function can flow from the top to the bottom to optimize the weighted adjacency matrix because all intermediate modules are differentiable.} \label{fig:archi}
\end{figure}
Ever since the deregulation in 1978, there has been huge competition among US air carriers (airlines) for air passenger transportation. 771 million passengers were transported in 2018 alone and the largest air carrier produced a revenue of more than 43 billion dollars for the period between September 2017 and September 2018\footnote{\url{https://www.transtats.bts.gov}}. It is one of the largest domestic markets in the world and there is a huge demand to improve their services. Consequently, many computational methods have also been proposed to predict market share, ticket price, demand, etc. and allocate resources (e.g., aircraft) on those air passenger markets accordingly~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217,10.1007/978-3-319-54430-4_3,8606022}.
The market influence is sometimes strategically more important than profits. Typically, there are two ways to expand a business: i) a strategic merger with other strong competitors, and ii) a strategic play to maximize the market influence~\cite{ciliberto2018market}. Our paper is closely related to the latter strategy.
We propose a novel way of \emph{unifying both data mining and mathematical optimization methods} to maximize air carrier's influence on the air transportation market. In this paper, we define the influence of an air carrier as \emph{the number of passengers transported by the air carrier} which can be calculated by the total demand multiplied with the air carrier's market share.
\begin{table*}[t]
\small
\setlength{\tabcolsep}{2pt}
\begin{center}
\caption{Comparison table between two related papers~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217} and this work. Since we do not assume the route independence, our problem setting is more realistic, making many existing optimization algorithms designed based on the assumption inapplicable to our work.}\label{tbl:cmp}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Comparison items} & \textbf{Existing work}~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217} & \textbf{Our work} \\ \hline
Market Share Prediction Model & Standard multi-logit model & Deep learning model \\ \hline
Conventional Air Carrier Performance Features & Yes & Yes \\ \hline
Transportation Network Features & No & Yes \\ \hline
Removal of Route Independence Assumption & No & Yes \\ \hline
Optimization Technique & Classical combinatorial optimization techniques & Our proposed adaptive gradient ascent \\ \hline
How to integrate prediction and optimization & Black-box query to prediction model & \begin{tabular}[c]{@{}c@{}}White-box search\end{tabular} \\ \hline
\end{tabular}
\end{center}
\end{table*}
Since the market influence of an air carrier in a route can be calculated by the total demand (passenger numbers) in the route multiplied with the air carrier's market share, predicting market share is a key step in our work. Conventional features (e.g., average ticket price, flight frequency, and on-time performance) have been widely used to predict the market share~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217,suzuki2000relationship,wei2005impact}. For instance, an air carrier's market share on a route will increase if the ticket price is decreased and the flight frequency is increased. However, some researchers have recently paid attention to air carriers' transportation network connectivity, which is highly likely to be correlated with market share~\cite{10.1007/978-3-642-21786-9_61,doi:10.1057/ejis.2010.11}. As a response, we design a neural network-based prediction model that uses a wide variety of conventional and transportation network features, such as degree centrality, PageRank, and so forth. It is worth mentioning that we train a prediction model for each route.
On top of the market share prediction models, we build a budget-constrained optimization module to maximize the market influence by optimizing the transportation network (more precisely, flight frequency values over 2,262 routes), which is an Integer Knapsack problem (cf. Fig.~\ref{fig:archi}). Our objective function consists of the market share prediction models on those routes, and our constraint is the budget limit of an air carrier. The objective is not in a simple form but rather a complex composition of inter-correlated neural networks, because changing the frequency on a route will influence market shares on neighboring routes as well. Therefore, it is very hard to solve with existing techniques that assume routes are independent of each other (see the discussion in Section~\ref{sec:opt}).
We test our optimization framework with 2,262 routes. To achieve such high scalability, we design a method of \underline{\textbf{A}}daptive \underline{\textbf{G}}radient \underline{\textbf{A}}scent (AGA). In our experiments, the proposed optimization method solves the very large-scale optimization problem much faster than existing algorithms. However, one main challenge in our approach is how to consider the budget constraint in the proposed gradient-based optimization technique --- each air carrier has a limited budget to operate flights. It is not straightforward to consider the budget constraint with gradient-based optimization methods. However, our proposed AGA method is able to dynamically manipulate gradients to ensure the budget limit, i.e., dynamically impose a large penalty if any cost overrun occurs, in such a way that one gradient ascent update theoretically guarantees a decrease in the total cost. Therefore, a series of updates can eventually address the cost overrun problem.
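The idea of gradient ascent with a dynamically adjusted budget penalty can be sketched generically as follows. In this toy version (the surrogate objective, step sizes, and dual-style penalty update are our illustrative choices, not the exact AGA rule), a differentiable objective is maximized under a linear budget constraint:

```python
import numpy as np

def penalized_gradient_ascent(grad_f, cost, budget, x0, lr=0.05, n_iter=5000):
    """Projected gradient ascent on f with an adaptive penalty weight:
    whenever cost @ x exceeds the budget, the multiplier lam grows, so
    subsequent updates are pushed back toward the feasible region."""
    x = x0.copy()
    lam = 0.0
    for _ in range(n_iter):
        overrun = cost @ x - budget
        lam = max(0.0, lam + lr * overrun)        # raise penalty on overrun
        x = np.maximum(0.0, x + lr * (grad_f(x) - lam * cost))
    return x
```

With a concave toy objective $f(x)=\sum_i w_i \log(1+x_i)$ and unit costs, the iterate settles near the budget boundary, allocating more to higher-weight routes and nothing to routes whose marginal gain is below the penalty level.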
In our experiments, our customized prediction model shows much better accuracy on many routes than existing methods. In particular, our median root-mean-square error is more than two times better than that of the best baseline. Our proposed AGA method is able to maximize the market influence on all those routes 690 times faster than a greedy algorithm, while achieving a better optimized influence.
\begin{figure}[t]
\centering
\subfigure[Existing Black-box Search Methods]{\includegraphics[width=0.95\columnwidth]{images/BS.pdf}}
\subfigure[Proposed White-box Search Method]{\includegraphics[width=0.95\columnwidth]{images/WS.pdf}}
\caption{The comparison with existing black-box search methods in~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217} and the white-box search method proposed in this work. Our AGA optimization algorithm enables the white-box search concept to be used in this work.} \label{fig:ws}
\end{figure}
\section{Related Work}
We introduce a selected set of related works on air market prediction and optimization.
\subsection{Market Share Prediction}
Many prediction models have been proposed, such as~\cite{suzuki2000relationship,wei2005impact,An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}, to name a few. However, they share many common design points. First, almost all of them use the multi-logit regression model, the standard model for predicting air transportation market shares. We also use the same multi-logit regression (see Section~\ref{sec_pred} for details) after some extensions. Suzuki considers air carriers' frequency, delay, and safety~\cite{suzuki2000relationship}, whereas Wei et al. study the effect of aircraft size and seat availability on market share and consider other variables such as price and frequency~\cite{wei2005impact}. There are more similar works~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. In our paper, we consider transportation network features in addition to those conventional air carrier performance features.
\subsection{Flight Frequency Optimization}\label{sec:opt}
A similar flight frequency optimization problem, maximizing profits, was solved in~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. In their work, An et al. showed that the frequency--market share curve is very hard to approximate with existing approximation methods such as piece-wise linear approximation~\cite{646812}. They then designed a heuristic algorithm, called GroupGreedy, which runs an exact algorithm in each subset of routes (because running the exact algorithm for the entire route set is prohibitive). Each subset consists of a few routes, and running the exact algorithm within a small subset provides a tolerable degree of scalability in general. However, they were able to test with \emph{at most about 30 routes} due to the prohibitively long execution time even with GroupGreedy, so its scalability is not satisfactory. We test with 2,262 routes in this paper --- i.e., the problem search space size is $\mathcal{O}(n^{30})$ in their work vs. $\mathcal{O}(n^{2,262})$ in ours.
In addition, we found that GroupGreedy cannot be used with our prediction model because of the network features --- An et al. did not consider network features and assumed that each route is independent~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. Under this assumption, they optimize each route separately. In reality, however, changing the frequency in a route is likely to influence the market shares in other routes because routes are often inter-correlated. Thus, GroupGreedy, which relies on the independence assumption, is not applicable to our work. Our work does not make this assumption and is therefore more realistic.
From the Knapsack perspective, dropping the independence assumption makes the problem much more complicated because the value (i.e., market share) of a product (i.e., route) becomes non-deterministic and is influenced by other products (i.e., routes). This makes the current problem more realistic than those studied in the previous work by An et al., but it also prevents us from applying many existing Knapsack algorithms, which were designed for the simplest case where product values are fixed and independent of each other~\cite{axiotis_et_al:LIPIcs:2019:10595}.
One more significant difference is that the optimization algorithm in the related work queries its prediction models, whereas both optimization and prediction are integrated on TensorFlow in this new paper. In Table~\ref{tbl:cmp}, we summarize the differences between the previous work and ours. In addition, Fig.~\ref{fig:ws} compares their fundamental difference in algorithm design philosophy. The existing methods are representative black-box search methods that adopt a query-response strategy. In this new work, however, the gradients directly flow to update frequencies, so its runtime is inherently faster than that of existing methods.
\section{Preliminaries}
We introduce our dataset and the state-of-the-art market share prediction model. Our main dataset is the air carrier origin and destination survey (DB1B) dataset released by the US Department of Transportation's Bureau of Transportation Statistics (BTS)~\cite{bts} and some safety dataset by the National Transportation Safety Board (NTSB)~\cite{ntsb}. We refer to Appendix for detailed dataset information.
\subsection{Market Share Prediction Model}\label{sec_pred}
In this subsection, we describe a popular existing market share prediction model for air transportation markets. Given a route $r$, the following multinomial logistic regression model is to predict the market share of air carrier $C_k$ in the route:
\begin{align}\label{eq:logit}{\color{black}
m_{r,k} = \frac{ e^{\sum_{j} w_{r,j} \cdot f_{r,k,j}}}{\sum_{i} e^{\sum_{j} w_{r,j} \cdot f_{r,i,j}}} = \frac{\exp(\mathbf{w}_r \cdot \mathbf{f}_{r,k})}{\sum_i \exp(\mathbf{w}_r \cdot \mathbf{f}_{r,i})},}
\end{align}where $m_{r,k}$ means the market share of air carrier $C_k$ in route $r$; $f_{r,k,j}$ is the $j$-th feature of air carrier $C_k$ in route $r$; and $w_{r,j}$ represents the sensitivity of market share to feature $f_{r,k,j}$ in route $r$ that can be learned from data.
A set of features for air carrier $C_k$ in route $r$ can be represented by a vector $\mathbf{f}_{r,k}$ (see Appendix~\ref{sec:features} for a complete list of $\mathbf{f}_{r,k}$ in our work). We use bold font to denote vectors.
The rationale behind the multi-logit model is that $\exp(\mathbf{w}_r \cdot \mathbf{f}_{r,k})$ can be interpreted as passengers' valuation score of air carrier $C_k$, and the market share is obtained by normalizing these valuation scores --- this concept is not proposed by us but is widely used for air carrier market share prediction in Business, Operations Research, etc.~\cite{hansen1990airline,An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217,suzuki2000relationship,wei2005impact}.
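As a concrete illustration, the multi-logit share of Eq.~\eqref{eq:logit} can be computed in a few lines of NumPy; the weights and feature values below are made-up numbers for illustration, not values from our dataset:

```python
import numpy as np

def market_share(w_r, F_r):
    """Multi-logit market share (sketch of Eq. (1)).

    w_r : (J,) route-level feature weights
    F_r : (K, J) feature matrix; row k holds features f_{r,k} of carrier C_k
    Returns a (K,) vector of market shares m_{r,k} that sums to 1.
    """
    scores = F_r @ w_r           # passengers' valuation scores w_r . f_{r,k}
    scores -= scores.max()       # subtract max for numerical stability
    e = np.exp(scores)
    return e / e.sum()

# Hypothetical route with two carriers and two features
# (e.g., frequency and a price-related feature).
w = np.array([0.5, -0.2])
F = np.array([[10.0, 3.0],
              [ 8.0, 2.0]])
m = market_share(w, F)
```

Carrier 0 has the higher valuation score here, so it receives the larger share; the two shares always sum to one by construction.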
\section{Proposed Prediction Method}
We design a neural network-based market share prediction model with transportation network features.
\subsection{Air Carrier Transportation Network}
There are more than 2,000 routes (e.g., from LAX to JFK) in the US and this creates one large transportation network. Transportation network $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is a directed graph among airports (i.e., vertices) in $\mathcal{V}$. In particular, we are interested in an air carrier-specific directed transportation network $\mathcal{G}_k$ weighted by its flight frequency values. Thus, $\mathcal{G}_k$ represents the connectivity of air carrier $C_k$ and its edge weight on a certain directional edge means the flight frequency of the air carrier in the route. $\mathcal{G}_k$ can be represented by a weighted adjacency (or frequency) matrix $\mathcal{A}_k$, where each element is a flight frequency from one airport to another.
\subsection{Network Features}\label{sec:netf}
In this section, we introduce the network features we added to improve the prediction model.
\begin{figure}[t]
\centering
\footnotesize
\includegraphics[width=0.95\columnwidth,trim={2cm 1cm 2cm 2cm},clip]{netfeat_passenger.png}
\caption{Number of passengers vs. network features. The result summarizes all the airports.}
\label{fig:netfeat_passenger}
\end{figure}
\subsubsection{Degree Centrality}
As mentioned in earlier works, transportation network connectivity is important in air transportation markets~\cite{10.1007/978-3-642-21786-9_61,doi:10.1057/ejis.2010.11}. For instance, the higher the degree centrality of an airport in $\mathcal{G}_k$, the more options passengers have to fly, so the air carrier's market share tends to increase on routes departing from a high degree centrality airport. Therefore, we study how the degree centrality of the source and destination airports influences the market share.
Given $\mathcal{A}_k$, the out-degree (resp. in-degree) centrality of the $i$-th airport is the sum of the $i$-th row (resp. column). This feature calculation can thus be implemented very easily on TensorFlow or other deep learning platforms.
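For instance, with a toy 3-airport frequency matrix (hypothetical numbers), the two centralities are just the row and column sums:

```python
import numpy as np

# Weighted adjacency (frequency) matrix A_k of a toy 3-airport network;
# A[i, j] is the flight frequency from airport i to airport j.
A = np.array([[0., 5., 2.],
              [3., 0., 0.],
              [1., 4., 0.]])

out_degree = A.sum(axis=1)   # row sums: flights departing each airport
in_degree = A.sum(axis=0)    # column sums: flights arriving at each airport
```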
\subsubsection{Ego Network Density}
Ego network is very popular for social network analysis~\cite{NIPS2012_4532}. We introduce the concept of ego network first.
\begin{definition}
Given a vertex $v$, its \emph{ego network} is an induced subgraph of $v$ and its neighbors. The vertex $v$ is called \emph{ego vertex} (i.e., ego airport in our case). Note that ego networks are also weighted with flight frequency values. The density of an ego network is defined as the sum of edge weights divided by $n(n-1)$ where $n$ is the number of vertices in the ego network.
\end{definition}
By this definition, an airport's ego network density is high when the airport and its neighboring airports are all well connected to one another. It is natural for passengers to transit at an airport whose connections are well prepared for their final destinations.
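A direct implementation of this definition, again on hypothetical frequencies:

```python
import numpy as np

def ego_density(A, v):
    """Density of the ego network of airport v.

    The ego network is the induced subgraph on v and its (in- or out-)
    neighbors; density = sum of edge weights / (n * (n - 1)),
    where n is the number of vertices in the ego network.
    """
    n_total = A.shape[0]
    neighbors = {v} | {j for j in range(n_total)
                       if A[v, j] > 0 or A[j, v] > 0}
    idx = sorted(neighbors)
    sub = A[np.ix_(idx, idx)]          # induced subgraph on v and neighbors
    n = len(idx)
    if n < 2:
        return 0.0                     # isolated airport: density is zero
    return sub.sum() / (n * (n - 1))

# Airports 0 and 1 are mutually connected; airport 2 is isolated.
A = np.array([[0., 5., 0.],
              [3., 0., 0.],
              [0., 0., 0.]])
density_hub = ego_density(A, 0)
```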
\subsubsection{PageRank}
PageRank was originally proposed to derive a vertex's importance score based on the random web surfer model~\cite{Page99thepagerank} --- i.e., a web surfer performs a random walk following hyperlinks. We think PageRank is suitable to analyze multi-stop passengers for the following reason.
After normalizing $\mathcal{A}_k$ row-wise, it becomes a transition probability matrix: each entry is the probability that a random passenger moves along the corresponding route. Thus, PageRank is able to capture the importance of an airport.
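A power-iteration sketch of this computation on a toy frequency matrix; the damping factor 0.85 is the conventional PageRank choice, not a value tuned for our data:

```python
import numpy as np

def pagerank(A, d=0.85, iters=100):
    """PageRank over a frequency matrix via power iteration.

    Rows of A are normalized into transition probabilities; dangling
    airports (all-zero rows) are sent uniformly to every airport.
    """
    n = A.shape[0]
    row_sums = A.sum(axis=1, keepdims=True)
    P = np.where(row_sums > 0,
                 A / np.where(row_sums == 0, 1, row_sums),  # normalized rows
                 1.0 / n)                                   # dangling rows
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * (r @ P)   # random-surfer update
    return r

A = np.array([[0., 5., 2.],
              [3., 0., 0.],
              [1., 4., 0.]])
r = pagerank(A)
```

The scores form a probability vector; airport 2, which receives the least traffic, gets the lowest score in this toy network.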
Fig.~\ref{fig:netfeat_passenger} depicts the relationships between the network features introduced above and the total number of passengers transported in and out airports by a certain air carrier. We used the DB1B data released by the BTS for the first quarter of 2018 to draw this figure. As shown in Fig.~\ref{fig:netfeat_passenger}, the number of passengers in each airport is highly correlated with the network features (i.e., in-degree, out-degree, ego network density, and PageRank). In conjunction with other classical air carrier performance features, these network features can improve the prediction accuracy by a non-trivial margin.
\subsection{Neural Network-based Prediction}\label{sec:nnmodel}
Whereas many existing methods rely on classical machine learning approaches, we use the following neural network to predict market shares:
\begin{align}\begin{split}\label{eq:nn}
\mathbf{h}^{(1)}_{r,k} &= \sigma(\mathbf{f}_{r,k}\mathbf{W}^{(0)} + \mathbf{b}^{(0)}),\textrm{ for initial layer}\\
\mathbf{h}^{(i+1)}_{r,k} &= \mathbf{h}^{(i)}_{r,k} + \sigma(\mathbf{h}^{(i)}_{r,k}\mathbf{W}^{(i)} + \mathbf{b}^{(i)}),\textrm{ if }i \geq 1
\end{split}\end{align}where $\sigma$ is the ReLU activation. $\mathbf{W}^{(0)} \in \mathcal{R}^{19 \times d}$, $\mathbf{b}^{(0)} \in \mathcal{R}^{d}$, $\mathbf{W}^{(i)} \in \mathcal{R}^{d \times d}$, and $\mathbf{b}^{(i)} \in \mathcal{R}^{d}$ are parameters to learn. Note that we use residual connections after the initial layer. For the final activation, we again use multi-logit regression: in Eq.~\eqref{eq:logit}, we replace $\mathbf{f}_{r,k}$ with $\mathbf{h}^{l}_{r,k}$, the last hidden vector of our proposed neural network, to predict $m_{r,k}$ as follows:
\begin{align}\label{eq:nn2}
m_{r,k} = \frac{\exp(\mathbf{w}_r \cdot \mathbf{h}^{l}_{r,k})}{\sum_i \exp(\mathbf{w}_r \cdot \mathbf{h}^{l}_{r,i})},
\end{align}where $\mathbf{w}_r$ is a trainable parameter. We use $\bm{\theta}_r$ to denote all the parameters of route $r$ in Eqs.~\eqref{eq:nn} and~\eqref{eq:nn2}.
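A NumPy sketch of the forward pass in Eqs.~\eqref{eq:nn} and~\eqref{eq:nn2}; the weights here are random placeholders for illustration, whereas the real model is trained on TensorFlow:

```python
import numpy as np

def predict_share(F_r, Ws, bs, w_r):
    """Forward pass of the residual prediction network (Eqs. (2)-(3) sketch).

    F_r : (K, 19) carrier feature matrix for one route
    Ws, bs : layer parameters; Ws[0] is (19, d), the rest are (d, d)
    w_r : (d,) final multi-logit weight vector
    """
    relu = lambda x: np.maximum(x, 0.0)
    h = relu(F_r @ Ws[0] + bs[0])            # initial layer
    for W, b in zip(Ws[1:], bs[1:]):
        h = h + relu(h @ W + b)              # residual connection
    scores = h @ w_r                         # valuation scores
    e = np.exp(scores - scores.max())
    return e / e.sum()                       # multi-logit market shares

rng = np.random.default_rng(0)
K, d = 4, 16                                 # 4 carriers, hidden size 16
F = rng.normal(size=(K, 19))
Ws = ([rng.normal(scale=0.1, size=(19, d))]
      + [rng.normal(scale=0.1, size=(d, d)) for _ in range(2)])
bs = [np.zeros(d) for _ in range(3)]
w = rng.normal(size=d)
m = predict_share(F, Ws, bs, w)
```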
One thing to mention is that all the network features can be properly calculated on TensorFlow from $\mathcal{A}_k$ before being fed into the neural network. This is the case during the frequency optimization phase which will be described shortly. By changing a frequency in $\mathcal{A}_k$, the entire network feature can be recalculated before the neural network processing as shown in Fig.~\ref{fig:archi}. Therefore, the gradients can directly flow from the prediction models to the frequency matrix through the network feature calculation part. Hereinafter, we use a function $m_{r,k}(\mathcal{A}_k;\bm{\theta}_r)$ after partially omitting features (such as ticket price, aircraft size, etc.) to denote the predicted market share. Note that the omitted features and $\bm{\theta}_r$ are considered constant while optimizing frequencies in the next section. We sometimes omit all the inputs and use $m_{r,k}$ for brevity.
\section{Proposed Optimization Method}
{\color{black} Among many features, the flight frequency is an actionable feature that we are interested in adjusting --- see Appendix~\ref{sec:features} for a complete list of features considered in this work. An actionable feature is one that an air carrier can freely decide for its own purposes. Many other features, such as delay time, safety, and so on, cannot be solely decided by an air carrier. Hereinafter, we use $f_{r,k,freq}$ to denote the flight frequency value of air carrier $C_k$ in route $r$. These frequency values among airports constitute $\mathcal{A}_k$.
\subsection{Problem Definition}\label{sec:def}
We solve the following optimization problem to maximize the market influence of air carrier $C_k$ (i.e., the number of passengers transported by $C_k$) on those routes in $\mathcal{R}$. Given its total budget $budget_k$, we optimize the flight frequency values of the air carrier over multiple routes in $\mathcal{R}$ as follows:
\begin{align}\begin{split}\label{eq:obj}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r \in \mathcal{R}}& \sum_{r \in \mathcal{R}} demand_r \times m_{r,k} \\
\textrm{subject to }& \sum_{r \in \mathcal{R}}cost_{r,k} \times f_{r,k,freq} \leq budget_k,
\end{split}\end{align} where $m_{r,k}$ is the predicted market share of $C_k$ in route $r$ (by our neural network model), $demand_r$ is the total number of passengers in route $r$ from the DB1B dataset, and $cost_{r,k}$ is the unit operational cost of air carrier $C_k$ in route $r$. $f_{r}^{max}$ is the maximum flight frequency in route $r$ observed in the DB1B dataset; adopting $f_{r}^{max}$ is our heuristic to prevent overshooting a practically meaningful frequency limit. Note that different air carriers have different unit operational costs in a route $r$ because their efficiency differs and they purchase fuel at different prices --- we extract this information from the DB1B dataset.
Eq.~\eqref{eq:obj} shows how we can effectively merge data mining and mathematical optimization. The proposed problem is a non-linear optimization and a special case of the Integer Knapsack and resource allocation problems, which are NP-hard~\cite{Arora:2009:CCM:1540612}. The theoretical complexity of the problem is $\mathcal{O}(\prod_{r\in \mathcal{R}}f_{r}^{max})$, which can be written simply as $\mathcal{O}(n^{2,262})$ after assuming $n=f_{r}^{max}$ in each route, for ease of discussion, because $|\mathcal{R}|=2,262$.
\begin{theorem}
The market influence maximization is NP-hard.
\end{theorem}
\subsection{Overall Architecture}
In Fig.~\ref{fig:archi}, the overall architecture of the proposed optimization idea is shown. The overall workflow is as follows:
\begin{enumerate}
\item Train the market share prediction model in each route, which considers transportation network features.
\item Fix the prediction models and update the frequency matrix $\mathcal{A}_k$ using the proposed AGA optimizer. We consider other features (such as ticket price, aircraft size, etc.) are fixed while optimizing frequencies.
\end{enumerate}
The adoption of network features makes many classical combinatorial optimization techniques inapplicable to our work because the route-independence assumption no longer holds. Even worse, our objective function consists of highly non-linear neural networks. Therefore, our problem becomes a challenging non-linear optimization problem. We describe below how to solve such a large-scale and difficult optimization problem.
\subsection{Gradient-based Optimization}\label{sec:sol}
We solve the problem in Eq.~\eqref{eq:obj} on a deep learning platform using our AGA method in Algorithm~\ref{alg:adaptive-gd}. One remaining problem in this approach is how to handle the budget constraint. We design two workarounds based on i) the Lagrangian function (LF) and ii) the rectified linear unit (ReLU).\medskip
In our heuristic, we convert the integer frequency variables to real variables and use the \texttt{clip\_by\_value} function of TensorFlow to restrict the frequency in route $r$ to $[0,f_{r}^{max}]$ during the optimization process. As the frequencies optimized by our method are real numbers, \emph{we round them down to integers} at the end of the optimization process so as not to violate the budget limit, i.e., a continuous relaxation of the integer frequencies. We now describe how to solve the continuous-relaxed problem.
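A minimal NumPy sketch of this relaxation step; the caps and raw values below are hypothetical, and \texttt{np.clip} plays the role of TensorFlow's \texttt{clip\_by\_value}:

```python
import numpy as np

# Continuous relaxation sketch: frequencies are optimized as reals,
# clipped into [0, f_max] per route, and floored back to integers at
# the very end so rounding can never exceed the budget.
f_max = np.array([120., 60., 300.])     # per-route frequency caps
freq = np.array([131.7, -2.3, 45.9])    # raw values after a gradient step

clipped = np.clip(freq, 0.0, f_max)     # restrict each route to [0, f_max]
final = np.floor(clipped)               # round DOWN to stay feasible
```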
\subsubsection{Lagrangian Function (LF)-based Heuristic: }
{\color{black}The method of Lagrange multipliers is a popular method for maximizing concave functions (or some special non-concave functions) with constraints~\cite{10.5555/993483,10.1561/2200000016}. However, we cannot apply the method to our work because our objective function consists of highly non-linear neural networks. Therefore, we adopt only the Lagrangian function from the method and develop our own heuristic search method.} The following Lagrangian function can be defined in our case:
\begin{align}\label{eq:lag_l}\begin{split}
L = o(\mathcal{A}_k) - \lambda c(\mathcal{A}_k),
\end{split}\end{align} where $\lambda$ is called a Lagrange multiplier, and
\begin{align}\begin{split}
o(\mathcal{A}_k) &= \sum_{r \in \mathcal{R}} demand_r \times m_{r,k},\\
c(\mathcal{A}_k) &= \sum_{r \in \mathcal{R}}\big(cost_{r,k} \times f_{r,k,freq}\big) - budget_k.
\end{split}\end{align}
{\color{black}Basically, the Lagrange multiplier $\lambda$ can be systematically determined if the objective function $o(\mathcal{A}_k)$ has a simple form, in which case we can find the optimal solution of the original constrained problem. However, this is not the case in our work due to the complicated nature of neural networks and of the objective function built from them; moreover, our goal is to solve the optimization problem on TensorFlow for the purpose of increasing scalability, aided by our scalable AGA optimization technique.
Thus, we propose the following regularized problem and develop a heuristic search method}:
\begin{align}\label{eq:lag2}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \min_{\lambda}\quad L + \delta \lambda^2
\end{align}where $\delta \geq 0$ is a weight for the regularization term. {\color{black}Note that our definition is different from the original Lagrangian function.} The inner minimization part has been added by us to prevent $\lambda$ from becoming too large. One way to solve Eq.~\eqref{eq:lag2} is to alternately optimize the flight frequencies (i.e., the outer maximization) and $\lambda$ (i.e., the inner minimization), which implies that Eq.~\eqref{eq:lag2} is basically a two-player max-min game. We further improve Eq.~\eqref{eq:lag2} and derive, in Eq.~\eqref{eq:lag_max_min_simp} below, a simpler but equivalent formulation that does not require alternating maximization and minimization.
\begin{theorem}\label{th:optimization}
Let $\mathcal{A}_k$ be a matrix of flight frequencies. The optimal solution of the max-min problem in Eq.~\eqref{eq:lag2} is the same as the optimal solution of the following problem:
\begin{align}\label{eq:lag_max_min2}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad o(\mathcal{A}_k) -\frac{c(\mathcal{A}_k)^2}{4\delta}.
\end{align}
\end{theorem}
For simplicity, let $\beta = \frac{1}{2\delta}$ and we can rewrite Eq.~\eqref{eq:lag_max_min2} as follows:
\begin{align}\label{eq:lag_max_min_simp}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \bar{L}_{Lagrange},
\end{align}where $\bar{L}_{Lagrange} = o(\mathcal{A}_k) -\beta\frac{c(\mathcal{A}_k)^2}{2}$.
Note that maximizing Eq.~\eqref{eq:lag_max_min_simp} is equivalent to solving the max-min problem in Eq.~\eqref{eq:lag2} so we implement only Eq.~\eqref{eq:lag_max_min_simp} and optimize it using the proposed AGA method that will be described in the next subsection.
\subsubsection{Rectified Linear Unit (ReLU)-based Heuristic: }
In neural networks, ReLU rectifies an input value by taking its positive part. This property can be used to impose a penalty only when the budget limit constraint is violated, as follows:
\begin{align}\label{eq:lag_max_min_simp2}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \bar{L}_{ReLU},
\end{align}where $\bar{L}_{ReLU} = o(\mathcal{A}_k) - \beta R(c(\mathcal{A}_k))$ and $R(\cdot)$ is the rectified linear unit.
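The two penalized objectives can be sketched as follows; $o(\mathcal{A}_k)$, $c(\mathcal{A}_k)$, and $\beta$ are passed in as plain numbers here, whereas in our implementation they come from the TensorFlow graph:

```python
def penalized_objective(o_val, c_val, beta, method="relu"):
    """Sketch of the budget-penalized objectives L_Lagrange and L_ReLU.

    o_val : market influence o(A_k)
    c_val : cost overrun c(A_k) = total cost - budget
    """
    if method == "lagrange":
        return o_val - beta * c_val ** 2 / 2.0   # quadratic penalty
    return o_val - beta * max(c_val, 0.0)        # ReLU: penalize overruns only

# Under budget (c < 0): the ReLU variant leaves the objective untouched.
under = penalized_objective(100.0, -5.0, beta=2.0, method="relu")
# Over budget (c > 0): both variants subtract a penalty.
over_relu = penalized_objective(100.0, 5.0, beta=2.0, method="relu")
over_lag = penalized_objective(100.0, 5.0, beta=2.0, method="lagrange")
```

Note the difference when under budget: the Lagrangian variant still penalizes the squared slack, while the ReLU variant does not.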
\subsection{$\beta$ Selection and Adaptive Gradient Ascent}
{\color{black}We propose the AGA method, which uses the gradients of $\bar{L}_{Lagrange}$ or $\bar{L}_{ReLU}$ w.r.t. the flight frequencies to optimize them. In both methods, the coefficient $\beta$ needs to be \emph{dynamically} adjusted to enforce the budget limit rather than being fixed to a constant. For example, if $\beta$ is not large enough, one gradient ascent update will keep increasing flight frequencies even after a cost overrun. Whenever there is a cost overrun, $\beta$ should be set to a value large enough that the total cost decreases.
\begin{figure}[t]
\centering
\footnotesize
\subfigure[$\beta=1$ is not enough to decrease cost]{\includegraphics[width=0.46\columnwidth]{gau2.pdf}}\hfill
\subfigure[$\beta=5$ is enough to decrease cost]{\includegraphics[width=0.45\columnwidth]{gau3.pdf}}
\caption{Suppose that there is a small cost overrun with $\mathcal{A}^{(i)}$, which denotes the frequency matrix at the $i$-th gradient ascent iteration. The norm of $\vect{c}'$ is smaller than that of $\vect{o}'$, and the gradient ascent update cannot remove the cost overrun if $\beta$ is small (e.g., $\beta=1$ in (a)). However, if $\beta$ is large enough (e.g., $\beta=5$ in (b)), the gradient ascent update can reduce the cost overrun. Note that $\mathcal{A}^{(i+1)}$ is located behind $\mathcal{A}^{(i)}$ w.r.t. the blue dotted line perpendicular to $\vect{c}'$ in (b), which means a reduced cost overrun. We dynamically adjust $\beta$ to decrease the cost whenever there is a cost overrun, while sacrificing the objective as little as possible.}
\label{fig:ga}
\end{figure}
For the sake of convenience, we use $\vect{o}'$ and $\vect{c}'$ to denote the gradients of the objective and of the cost overrun penalty term, respectively:
\begin{align*}
\vect{o}' &= \nabla o(\mathcal{A}_k),\\
\vect{c}' &= \begin{cases}\nabla\frac{c(\mathcal{A}_k)^2}{2},\textrm{ if the Lagrangian function-based method}\\
\nabla R(c(\mathcal{A}_k)),\textrm{ if the ReLU-based method}.
\end{cases}
\end{align*}
Fig.~\ref{fig:ga} illustrates why we need to adjust $\beta$. As shown, if the directions of the two gradients $\vect{c}'$ and $\vect{o}'-\beta \vect{c}'$ (where $\beta=5$) are opposite, the cost overrun decreases after one gradient ascent update. If $\beta$ is too small, the cost overrun does not decrease in the example.
}
We do not distinguish between $\bar{L}_{Lagrange}$ and $\bar{L}_{ReLU}$ in this section because the proposed algorithm is applicable to both the Lagrangian function-based and ReLU-based methods; we denote them simply as $\bar{L}$.
The gradient of $\bar{L}$ w.r.t. $\mathcal{A}_k$ is made of two components, $\vect{o}'$ and $-\beta\vect{c}'$, where $\vect{o}'$ increases the market influence and $-\beta\vect{c}'$ reduces the cost overrun. Typically, the market influence increases as the frequencies in $\mathcal{A}_k$ increase. So $\beta$ needs to be properly selected such that, once the total cost exceeds the budget during the gradient-based update process, the frequencies are updated (by the proposed AGA method) so as to reduce the cost.
This requires that the overall gradient $\vect{o}'-\beta \vect{c}'$ suppress any increase in $c(\mathcal{A}_k)$.
More precisely, it requires that the directional derivative of $c(\mathcal{A}_k)$ along the vector $\vect{o}'-\beta \vect{c}'$ (i.e., the dot product of $\vect{o}'-\beta\vect{c}'$ and $\vect{c}'$) be negative --- if two vectors point in opposing directions, their dot product is negative.
\begin{algorithm}[t!]
\SetAlgoLined
\caption{Adaptive gradient ascent (AGA)}\label{alg:adaptive-gd}
\KwIn{$\gamma$}
\KwOut{$\mathcal{A}_k$}
Initialize $\mathcal{A}_k$\tcc*[r]{Initialize freqs}
$\beta \gets 0$\tcc*[r]{Initialize $\beta$}
\While {until convergence}{
$\mathcal{A}_k \gets \mathcal{A}_k+\gamma \nabla \bar{L}$\tcc*[r]{Gradient ascent}\label{alg:opt}
\eIf{$c(\mathcal{A}_k)>0$}{
$\beta \gets$Eq.~\eqref{eq:betafinal}
}{
$\beta \gets 0$\;
}
}
\end{algorithm}
Therefore, we want $\vect{c}' \cdot (\vect{o}'-\beta \vect{c}') < 0$. Rewriting this inequality w.r.t. $\beta$, we have
\begin{align}\label{th:beta_selection}
\beta > \frac{\vect{c}' \cdot \vect{o}'}{\vect{c}' \cdot \vect{c}'}.
\end{align}
Note that Eq.~\eqref{th:beta_selection} does not include the equality condition but requires that $\beta$ be strictly larger than its right-hand side. To this end, we introduce a positive value $\epsilon>0$ as follows:
\begin{align}
\beta = \frac{\vect{o}'\cdot\vect{c}'}{\vect{c}'\cdot\vect{c}'}+\epsilon,
\end{align}where $\epsilon$ is a positive hyper-parameter in our method.
On the other hand, we need to ensure that $\beta$ is getting closer to zero when the algorithm is approaching an optimal solution of $\mathcal{A}_k$. To do this, we further modify it as follows:
\begin{align}\label{eq:betafinal}
\beta = \frac{\vect{o}'\cdot\vect{c}'}{\vect{c}'\cdot\vect{c}'}+c(\mathcal{A}_k)\epsilon.
\end{align}
Note that $c(\mathcal{A}_k)\epsilon$ becomes negligible if $c(\mathcal{A}_k)$ is very small. This specific setting prevents an ill-chosen large $\epsilon$ from decreasing the flight frequencies too much when the cost overrun is very small, i.e., $c(\mathcal{A}_k) \approx 0$.
The proposed AGA method is presented in Algorithm~\ref{alg:adaptive-gd}. The optimization of frequencies occurs at line~\ref{alg:opt}; the other lines dynamically adjust $\beta$. We take a solution around epoch 500 at a point where the cost overrun is not positive; 500 epochs are enough to reach a solution point in our experiments.
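The following NumPy sketch mirrors Algorithm~\ref{alg:adaptive-gd} with the ReLU penalty on a toy concave influence function (our real objective is the neural prediction model); the costs, budget, and hyper-parameters below are illustrative only:

```python
import numpy as np

def influence(f):
    """Toy concave market influence o(f), standing in for the neural models."""
    return float(np.sum(10.0 * f - 0.1 * f ** 2))

def aga(cost, budget, f_max, gamma=0.05, eps=0.05, iters=2000):
    """Adaptive gradient ascent (Algorithm 1 sketch) with the ReLU penalty."""
    f = np.zeros_like(cost)
    best = np.zeros_like(cost)
    beta = 0.0
    for _ in range(iters):
        o_grad = 10.0 - 0.2 * f                              # gradient of o
        c_val = float(cost @ f - budget)                     # overrun c(A_k)
        c_grad = cost if c_val > 0 else np.zeros_like(cost)  # grad of ReLU(c)
        # Gradient ascent step, clipped into [0, f_max] per route.
        f = np.clip(f + gamma * (o_grad - beta * c_grad), 0.0, f_max)
        c_val = float(cost @ f - budget)
        if c_val > 0:
            # Adaptive beta selection: large enough to decrease the cost.
            o_g, c_g = 10.0 - 0.2 * f, cost
            beta = (o_g @ c_g) / (c_g @ c_g) + c_val * eps
        else:
            beta = 0.0
            cand = np.floor(f)          # feasible integer candidate
            if influence(cand) > influence(best):
                best = cand             # keep the best feasible solution seen
    return best

cost = np.array([1.0, 2.0, 4.0])        # per-route unit operational costs
f_opt = aga(cost, budget=60.0, f_max=np.full(3, 50.0))
```

The returned solution is always integer-valued and within budget, because candidates are recorded only at iterations whose cost overrun is non-positive.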
{\color{black}
\begin{theorem}
Algorithm~\ref{alg:adaptive-gd} is able to find a feasible solution of the original problem in Eq.~\eqref{eq:obj}.
\end{theorem}
}
\section{Experiments}
In this section, we introduce the experimental environments and results for both the prediction and the optimization. {\color{black}We collected our data for 10 years from the website~\cite{bts}. We predict the market share and optimize the flight frequency in the last month of the dataset after training with the data of all other months.}
In our dataset, there are 2,262 routes and more than 10 air carriers. We predict and optimize for the top-4 air carriers among them considering their influences on the US domestic air markets. We ignore other regional/commuter level air carriers.
Our detailed software and hardware environments are as follows: Ubuntu 18.04.1 LTS, Python ver. 3.6.6, Numpy ver. 1.14.5, Scipy ver. 1.1.0, Pandas ver. 0.23.4, Matplotlib ver.3.0.0, Tensorflow-gpu ver. 1.11.0, CUDA ver. 10.0, NVIDIA Driver ver. 417.22. Three machines with i9 CPU and GTX1080Ti are used.
\subsection{Market Share Prediction}
\subsubsection{Baseline Methods}
We compare our proposed model with two baseline prediction models.
Model1~\cite{suzuki2000relationship} considers an air carrier's frequency, delay, and safety. Model2~\cite{wei2005impact} studies the effect of aircraft size and seat availability on market share and considers all other variables such as price and frequency. Model1 and Model2 are conventional methods based on multi-logit regression and are trained using numerical solvers. Model3 is our neural network-based model, which also uses the network features.
{\color{black}To train the market share prediction models, we use a learning rate of 1e-4 that decays with a ratio of 0.96 every 100 epochs. The number of layers in our neural network is $l=\{3,4,5\}$ and the dimensionality of the hidden vector is $d=\{16, 32\}$. We train each model for 1,000 epochs, using the Xavier initializer~\cite{Glorot10understandingthe} for initializing weights and the Adam optimizer for updating them. We use cross validation to choose the best configuration: given a training set with $N$ months, we choose a random month, train with the other $N-1$ months, and validate with the selected month. We repeat this $N$ times.
In addition, we test other standard regression algorithms as well. In particular, we are interested in testing some robust regression algorithms such as TheilSen, AdaBoost Regression, and RandomForest Regression. We also use the same cross validation method.}
\begin{figure}\centering
\footnotesize
\includegraphics[width=1\columnwidth]{pred_baseline.png}
\caption{Histogram of RMSE scores --- lower values are preferred. X-axis is the RMSE score and Y-axis is the number of routes.}
\label{fig:pred_baseline}
\end{figure}
\begin{table}\centering
\footnotesize
\caption{Median/Average RMSE and $R^2$. The up-arrow (resp. down-arrow) means higher (resp. lower) is better. The best results are indicated in bold font.}
\begin{tabular}{@{}cc|ccc@{}}\hline
&&Median RMSE $\downarrow$ &$R^2\uparrow$ &Mean RMSE $\downarrow$\\\hline
\multirow{6}{*}{10 routes}
&TheilSen &0.048 &0.944 &0.052\\
&AdaBoost &0.029 &0.970 &0.027\\
&RandomForest &0.029 &\textbf{0.979} &0.025\\
&Model1~\cite{suzuki2000relationship} &0.026 &0.965 &\textbf{0.024}\\
&Model2~\cite{wei2005impact} &0.035 &0.953 &0.030\\
&{\bf Model3 (Ours)} &\textbf{0.023} &0.899 &0.026\\
&Model3 (No Net.) &0.026 &0.884 &0.027\\\hline
\multirow{6}{*}{1,000 routes}
&TheilSen &0.080 &0.855 &0.087\\
&AdaBoost &0.021 &0.964 &0.033\\
&RandomForest &0.024 &0.968 &0.033\\
&Model1~\cite{suzuki2000relationship} &0.021 &0.957 &0.033\\
&Model2~\cite{wei2005impact} &0.020 &0.983 &0.035\\
&{\bf Model3 (Ours)} &\textbf{0.010} &\textbf{0.988} &\textbf{0.025}\\
&Model3 (No Net.) &0.019 &0.978 &0.030\\\hline
\multirow{6}{*}{2,262 routes}
&TheilSen &0.0813 &0.707 &0.088\\
&AdaBoost &0.017 &0.933 &\textbf{0.031}\\
&RandomForest &0.014 &0.942 &\textbf{0.031}\\
&Model1~\cite{suzuki2000relationship} &0.033 &0.944 &0.041\\
&Model2~\cite{wei2005impact} &0.030 &0.976 &0.033\\
&{\bf Model3 (Ours)} &\textbf{0.007} &\textbf{0.983} &0.038\\
&Model3 (No Net.) &0.013 &0.969 &0.040\\
\hline
\end{tabular}
\label{table:avg_rmse_predbaseline}
\end{table}
\subsubsection{Experimental Results}
Fig.~\ref{fig:pred_baseline} shows the histograms of RMSE scores for Model1, 2, and 3. We experimented with three scenarios (i.e., the top-10, top-1,000, and top-2,262 routes in terms of the number of passengers). Our Model3 shows a higher density in low-RMSE regions than the other models.
The median/average root-mean-square error (RMSE) and $R^2$ scores are summarized in Table~\ref{table:avg_rmse_predbaseline}. Our Model3 has much better median RMSE and $R^2$ scores than the other models (especially for the largest-scale prediction with 2,262 routes). Sometimes our mean RMSE is worse than that of the baselines; however, we consider this insignificant because our low median RMSE shows that our model is better in the majority of routes. In particular, we achieve a median RMSE of 0.007 for the 2,262-route predictions vs. 0.030 by Model2. RandomForest also shows reasonable accuracy in many cases.
For the top-10 routes, most models perform well, and our advantage is smaller because it is not easy for our model to compute reliable network features with only 10 routes. However, our main goal is to predict accurately at a larger scale.
{\color{black}We also compare the accuracy of our proposed model without the network features, denoted ``No Net.'' in the table. When we do not use any network features, the accuracy of the market share predictions decreases slightly. Considering the scale of the market, however, even a few percentage points of error can result in a big loss in the optimization phase. Therefore, our proposed prediction model is the most suitable one for defining the objective function of our proposed optimization problem.}
\subsection{Market Influence Maximization}
\subsubsection{Baseline Methods}\label{sec:base}
Dynamic programming, branch and bound, and GroupGreedy were used to solve a similar problem in~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. However, all these algorithms assume that routes are independent, which is not the case in our work because we use the network features. Therefore, their methods are not applicable to our work (see Section~\ref{sec:opt}).
Instead, we describe two baseline methods: a greedy and an exhaustive algorithm. Greedy methods are effective in many optimization problems; in particular, greedy provides an approximation ratio of about 63\% (i.e., $1-1/e$) for submodular maximization. Unfortunately, our optimization is not submodular. Due to its simplicity, however, we compare with the following greedy method, which iteratively chooses the route with the maximum marginal increment of market influence and increases its flight frequency by $\alpha$. In general, the step size $\alpha$ is 1; for faster convergence, however, we test $\alpha \in \{1, 10\}$. The complexity of the greedy algorithm is $\mathcal{O}( \frac{budget_k\cdot N_k}{\alpha \cdot avg\_cost_{k}})$, where $N_k$ is the number of routes and $avg\_cost_{k}$ is the average cost for air carrier $k$ over the routes. However, this greedy is still a black-box method whose efficiency is worse than our white-box method.
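The greedy baseline above can be sketched as follows. Note that \texttt{influence} and \texttt{cost} are hypothetical stand-ins for our neural market-influence predictor and per-route operating costs, so this is an illustrative sketch rather than the exact implementation:

```python
def greedy_frequencies(num_routes, budget, cost, influence, alpha=1):
    """Iteratively raise the frequency of the route with the largest
    marginal influence gain until no affordable route improves it.
    `influence(freqs)` and `cost` are hypothetical placeholders."""
    freqs = [0] * num_routes  # Zero_Init
    spent = 0.0
    while True:
        best_gain, best_r = 0.0, None
        base = influence(freqs)
        for r in range(num_routes):
            if spent + alpha * cost[r] > budget:
                continue  # this increment would exceed the budget
            freqs[r] += alpha
            gain = influence(freqs) - base  # black-box query
            freqs[r] -= alpha
            if gain > best_gain:
                best_gain, best_r = gain, r
        if best_r is None:
            break  # no affordable route improves the influence
        freqs[best_r] += alpha
        spent += alpha * cost[best_r]
    return freqs
```

Each iteration queries the prediction model once per candidate route, which is why the greedy runtime grows with both the budget and the number of routes.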
We can also use a brute-force algorithm when the number of routes is small. Given three routes $\{r_1, r_2, r_3\}$, for instance, the number of possible solutions is $f_{r_1}^{max} \times f_{r_2}^{max} \times f_{r_3}^{max}$. This is already a very large search space because each $f_{r_i}^{max}$ is several hundred for a popular route in a month. However, we do not need to test solutions one by one. We create a large tensor of $|\mathcal{R}| \times |\mathcal{R}| \times q$ dimensions, where $q$ is the number of queries, and evaluate $q$ candidate solutions at the same time. In general, GPUs can process such a large query quickly. Even with GPUs, however, we cannot query more than a few routes because the search space grows exponentially. We also use step sizes $\alpha\in\{5,10\}$; $\alpha=1$ is not feasible in the brute-force search even with state-of-the-art GPUs. Thus, the complexity becomes $\mathcal{O}(\frac{f_{r_1}^{max}}{\alpha} \times \frac{f_{r_2}^{max}}{\alpha} \times \frac{f_{r_3}^{max}}{\alpha})$.
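A plain CPU loop version of the exhaustive baseline can be sketched as below (in practice we evaluate the same grid as one large batched tensor query on GPUs). The \texttt{influence} and \texttt{cost} arguments are hypothetical stand-ins for our prediction model and route costs:

```python
from itertools import product

def brute_force(max_freqs, budget, cost, influence, alpha=5):
    """Enumerate every frequency combination on a grid of step `alpha`
    and keep the best budget-feasible one. Illustrative sketch only."""
    best_val, best_sol = float("-inf"), None
    grids = [range(0, m + 1, alpha) for m in max_freqs]
    for sol in product(*grids):
        if sum(f * c for f, c in zip(sol, cost)) > budget:
            continue  # infeasible under the budget constraint
        val = influence(list(sol))
        if val > best_val:
            best_val, best_sol = val, list(sol)
    return best_sol, best_val
```

The loop makes the exponential growth explicit: adding a route multiplies the number of grid points by another factor of $f_{r}^{max}/\alpha$.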
\begin{table}[t]
\centering
\footnotesize
\caption{Optimization results for the top-3 routes. Multiplying by 10 will lead to the real scale of passenger numbers because the DB1B database includes 10\% random samples of air tickets. LF and ReLU mean our Lagrangian function and ReLU-based methods, respectively.}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multirow{9}{*}{\rotatebox[origin=c]{90}{\# of Passengers}}
&\multicolumn{1}{|c|}{Ground Truth} &4,960 &307 &1,792 &3,124\\
&\multicolumn{1}{|c|}{LF, Real\_Init (Ours)} &4,964 &308 &1,842 &3,126 \\
&\multicolumn{1}{|c|}{ReLU, Real\_Init (Ours)} &4,961 &\textbf{310} &1,854 &\textbf{3,144} \\
&\multicolumn{1}{|c|}{LF, Zero\_Init (Ours)} &4,970 &308 &\textbf{1,891} &3,139 \\
&\multicolumn{1}{|c|}{ReLU, Zero\_Init (Ours)} &4,961 &\textbf{310} &\textbf{1,891} &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=1$} &4,967 &\textbf{310} &\textbf{1,891} &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=10$} &\textbf{4,972} &\textbf{310} &\textbf{1,891} &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Brute-force, Zero\_Init, $\alpha=5$} & \textbf{4,972} & N/A & N/A & N/A\\
&\multicolumn{1}{|c|}{Brute-force, Zero\_Init, $\alpha=10$} & \textbf{4,972} &\textbf{310}&\textbf{1,891}&\textbf{3,144}\\\hline
\end{tabular}
\label{table:r3}
\vspace{1.5em}
\centering
\footnotesize
\caption{Optimized number of passengers for the top-10 routes.}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multirow{7}{*}{\rotatebox[origin=c]{90}{\# of Passengers}}
&\multicolumn{1}{|c|}{Ground Truth} &16,924 &4,022 &20,064 &29,419\\
&\multicolumn{1}{|c|}{LF, Real\_Init (Ours)} &18,612 &4,054 &20,552 &30,220 \\
&\multicolumn{1}{|c|}{ReLU, Real\_Init (Ours)} &18,618 &\textbf{5,024} &\textbf{20,703} &\textbf{30,269}\\
&\multicolumn{1}{|c|}{LF, Zero\_Init (Ours)}&18,583 &4,259 &20,549 &30,074 \\
&\multicolumn{1}{|c|}{ReLU, Zero\_Init (Ours)} &\textbf{18,643} &\textbf{5,024} &20,323 &\textbf{30,269}\\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=1$}&17,016 &\textbf{5,024} &20,515 &29,519 \\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=10$} &18,078 &\textbf{5,024} &20,515 &\textbf{30,269}\\ \hline
\end{tabular}
\label{table:r10}
\vspace{1.5em}
\centering
\footnotesize
\caption{Running time (in sec.) for the top-10 routes.}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multicolumn{2}{c|}{LF, Real\_Init (Ours)} &40.77 &42.52 &41.86 &41.70 \\
\multicolumn{2}{c|}{ReLU, Real\_Init (Ours)}&\textbf{40.75} &41.48 &\textbf{40.67} &44.30 \\
\multicolumn{2}{c|}{LF, Zero\_Init (Ours)}&43.10 &42.45 &40.94 &40.37\\
\multicolumn{2}{c|}{ReLU, Zero\_Init (Ours)}&40.98 &\textbf{39.90} &40.49 &\textbf{40.31} \\
\multicolumn{2}{c|}{Greedy, Zero\_Init, $\alpha=1$}&910.12 &191.12 &1,074.95 &1,001.04 \\
\multicolumn{2}{c|}{Greedy, Zero\_Init, $\alpha=10$}&89.47 &20.14 &107.82 &101.01\\ \hline
\end{tabular}
\label{table:r10_time}
\end{table}
\begin{table*}
\centering
\footnotesize
\caption{Optimized number of passengers for the top-1,000 and 2,262 routes. Greedy with $\alpha=1$ is not feasible in this scale of experiments.}
\begin{tabular}{@{}c|cccccccc@{}}\hline
&\multicolumn{2}{c}{Carrier 1}&\multicolumn{2}{c}{Carrier 2}&\multicolumn{2}{c}{Carrier 3}&\multicolumn{2}{c}{Carrier 4}\\
\cmidrule{2-9}
&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes\\ \hline
\multicolumn{1}{c|}{LF, Real\_Init (Ours)} &429,581 &487,475 &\textbf{225,623} &307,815 &388,864 &447,633 &546,742 &723,522\\
\multicolumn{1}{c|}{ReLU, Real\_Init (Ours)} &431,261 &489,684 &\textbf{225,623} &\textbf{307,881} &\textbf{390,239} &\textbf{448,421} &547,623 &725,526\\
\multicolumn{1}{c|}{LF, Random\_Init (Ours)} &426,683 &\textbf{498,511} &225,057 &306,268 &373,756 &435,277 &548,310 &\textbf{726,142}\\
\multicolumn{1}{c|}{ReLU, Random\_Init (Ours)} &\textbf{434,154} & 492,784 &224,603 &306,963 &380,318 &447,656 &\textbf{549,092} &721,100\\
\multicolumn{1}{c|}{Greedy, Zero\_Init, $\alpha=10$}&428,196&N/A&225,322&N/A&385,607&N/A&516,348&N/A\\ \hline
\end{tabular}
\label{table:opt_r1000_2262}
\end{table*}
\begin{table*}
\centering
\footnotesize
\caption{Running time (in sec.) for the top-1,000 and 2,262 routes scenarios.}
\begin{tabular}{@{}c|cccccccc@{}}\hline
&\multicolumn{2}{c}{Carrier 1}&\multicolumn{2}{c}{Carrier 2}&\multicolumn{2}{c}{Carrier 3}&\multicolumn{2}{c}{Carrier 4}\\
\cmidrule{2-9}
&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes\\ \hline
\multicolumn{1}{c|}{LF, Real\_Init (Ours)} &440.21 &875.15 &450.30 &964.00 &\textbf{451.10}& {951.56} &{439.56} &{940.17}\\
\multicolumn{1}{c|}{ReLU, Real\_Init (Ours)} &439.18 &{908.66} &{452.39} &947.35 &453.50& \textbf{937.25} &438.75 &950.60\\
\multicolumn{1}{c|}{LF, Random\_Init (Ours)}&\textbf{438.06} &\textbf{878.73} &451.15 &947.35 &453.39 &949.80 &440.17 &948.20\\
\multicolumn{1}{c|}{ReLU, Random\_Init (Ours)} &442.12 &891.05 &\textbf{448.85} &\textbf{936.29} &452.47 &967.23 &\textbf{438.49} &\textbf{928.13}\\
\multicolumn{1}{c|}{Greedy, Zero\_Init, $\alpha=10$}&84,643.31&N/A&13,414.46&N/A&35,116.56&N/A&302,272.43&N/A\\ \hline
\end{tabular}
\label{table:time_r1000_2262}
\end{table*}
\subsubsection{Hyperparameter Setup}
For all methods, we constrain the flight frequency $f_{r,k,freq}$ of air carrier $C_k$ in route $r$ to be at or below the maximum frequency $f_{r}^{max}$ observed in the DB1B database. This is important to ensure feasible frequency values because overly high frequencies may not be acceptable in practice due to the limited capacity of airports. This restriction can be implemented using the $clip\_by\_value(\cdot)$ function of TensorFlow.
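The frequency cap amounts to an element-wise clamp applied after each gradient step; on TensorFlow, \texttt{tf.clip\_by\_value} performs this on the frequency tensor. A hypothetical pure-Python equivalent, shown only for illustration:

```python
def clip_frequencies(freqs, max_freqs):
    """Clamp each frequency into [0, f_r^max], mirroring what
    tf.clip_by_value does on the frequency tensor after an update."""
    return [min(max(f, 0.0), m) for f, m in zip(freqs, max_freqs)]
```

Clamping after every update keeps the iterate feasible without changing the gradient computation itself.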
In addition, we need to properly initialize frequency values in Algorithm~\ref{alg:adaptive-gd}. We test two ways to initialize frequencies: i) Real\_Init initializes the flight frequency values with the ground-truth values observed in the dataset, and ii) Zero\_Init initializes all the frequencies to zeros. In all methods, we set the total budget to the ground-truth budget.
We tested $\epsilon \in \{1, 100, 1000\}$ but found no significant difference in the achieved final optimization values. For the following experiments, we choose $\epsilon=1000$ to speed up the optimization process. We use a learning rate $\gamma$ of 10 and run 500 epochs.
One more thing to note is that the DB1B database includes 10\% random samples of air tickets\footnote{See the overview section in \url{https://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=125}.} so our reported passenger numbers multiplied by 10 give the real scale. In this paper, we list values in the original scale of the DB1B database for better reproducibility.
\subsubsection{Experimental Results}
We first compare all the aforementioned methods on a small problem with only 3 routes. In particular, the brute-force search is possible only for this small problem.
For the top-3 route optimization, we choose the top-3 biggest routes and the top-4 air carriers in terms of the number of passengers transported, and optimize the flight frequencies in the 3 routes for each air carrier for the last month of our dataset. In Table~\ref{table:r3}, detailed optimized market influence values are listed for the various methods. Surprisingly, all methods achieve similar values; we believe all of them solve this small-sized problem well. However, the brute-force method is not feasible in some cases where the maximum frequency limits $f_{r}^{max}$ in the routes are large --- we mark those whose runtime is prohibitively large with `N/A'.
Experimental results of the top-10 route optimization are summarized in Table~\ref{table:r10}. Our method based on the ReLU activation produces the best results for all the top-4 air carriers. Our Lagrangian function (LF)-based optimization also produces many reasonable results better than Greedy. Greedy shows the worst performance in this experiment. In Table~\ref{table:r10_time}, their runtimes are also reported. Our method is 2-22 times faster than Greedy, except for Carrier 2 with $\alpha=10$.
For the top-1,000 and 2,262 routes, experimental results are listed in Tables~\ref{table:opt_r1000_2262} and~\ref{table:time_r1000_2262}. Our methods produce the best optimized values in the least amount of time. In particular, our method is about 690 times faster than Greedy with $\alpha=10$ for Carrier 4. Greedy is not feasible for 2,262 routes.
Our method shows a \emph{(sub-)linear} increase of runtime w.r.t. the number of routes. It takes about 40 seconds for the top-10 routes and 400 seconds for the top-1,000 routes. When the problem size becomes two orders of magnitude larger, from 10 to 1,000, the runtime increases only by one order of magnitude. Considering that we solve an NP-hard problem, the sub-linear runtime increase is an outstanding achievement. Moreover, our method consistently shows the best optimized values in almost all cases.
{\color{black}Greedy is slower than our method due to its high complexity $\mathcal{O}( \frac{budget_k\cdot N_k}{\alpha \cdot avg\_cost_{k}})$ as described in Sec.~\ref{sec:base}. When the budget limit $budget_k$ and the number of routes $N_k$ are large, it should query the prediction models many times, which significantly delays its solution search time. Therefore, Greedy is a classical black-box search method whose efficiency is much worse than our proposed method. One can consider our method as a white-box search method because the gradients flow directly to update flight frequencies.}
\section{Conclusion}
We presented a prediction-driven optimization framework for maximizing air carriers' market influence, which includes neural network-based market share prediction models by adding transportation network features and innovates large-scale optimization techniques through the proposed AGA method. Our approach suggests a way to unify data mining and mathematical optimization.
Our network feature-based prediction shows better accuracy than existing methods. Our AGA method can optimize for all the US domestic routes in our dataset at the same time whereas state-of-the-art methods are applicable to at most tens of routes.
\begin{acks}
Noseong Park is the corresponding author. This work of Jinsung Jeon, Seoyoung Hong, and Noseong Park was supported by the Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University)). The work of Thai Le and Dongwon Lee was in part supported by NSF awards \#1909702, \#1940076, \#1934782, and \#2114824.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
Ever since the deregulation in 1978, there has been huge competition among US air carriers (airlines) for air passenger transportation. 771 million passengers were transported in 2018 alone, and the largest air carrier produced a revenue of more than 43 billion dollars for the period between September 2017 and September 2018\footnote{\url{https://www.transtats.bts.gov}}. It is one of the largest domestic markets in the world and there is a huge demand to improve its services. Consequently, many computational methods have been proposed to predict market share, ticket price, demand, etc., and to allocate resources (e.g., aircraft) on those air passenger markets accordingly~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217,10.1007/978-3-319-54430-4_3,8606022}.
Inspired by those works, we propose a novel approach that \emph{unifies data mining and mathematical optimization methods} to maximize an air carrier's influence on the air transportation market. In this paper, we define the \emph{influence of an air carrier} as \emph{the number of passengers transported by the air carrier}\footnote{An alternative definition of the market influence in our context is the market share multiplied by the total market demand (cf. Eq.~\eqref{eq:obj}).} --- in our experiments, we also show that our framework can be easily modified to maximize profits (instead of the market influence) and conduct some profit maximization experiments.
The market influence is sometimes strategically more important than profits. Typically, there are two ways to expand business: i) a strategic merger with other strong competitors, and ii) a strategic play to maximize the market influence~\cite{ciliberto2018market}. Our paper is closely related to the latter strategy.
Our contributions lie in the following three steps: i) designing a market share prediction method, ii) designing a large-scale optimization technique whose scalability is significantly larger than state-of-the-art methods (i.e., optimizing for 2,262 routes in this work vs. 30 routes in our previous works~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}), and iii) integrating them into a unifying framework, i.e., prediction-driven optimization. \emph{Among all the steps, our main contributions are at Steps ii) and iii)}. Our work shows how to systematically merge data mining and optimization approaches.
Conventional features (e.g., average ticket price, flight frequency, and on-time performance) have been widely used to predict the market share~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217,suzuki2000relationship,wei2005impact}. For instance, an air carrier's market share on a route will increase if its ticket price is decreased and its flight frequency is increased. However, some researchers recently paid attention to air carriers' transportation network connectivity, which is likely to be correlated with market share~\cite{10.1007/978-3-642-21786-9_61,doi:10.1057/ejis.2010.11}. In response, we improve state-of-the-art prediction models based on a wide variety of conventional and transportation network features, such as degree centrality and PageRank. It is worth mentioning that we train a prediction model for each route and therefore have in total 2,262 market share prediction models.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{images/lagrange.pdf}
\caption{The architecture of the proposed prediction-driven optimization framework to maximize the market influence of air carrier $C_k$. Note that the pre-trained neural network-based market share prediction models are sub-routines of the objective function. The gradients of the budget constraint and the objective function can flow from the top to the bottom because all intermediate modules are differentiable --- we implement this framework on TensorFlow. The weighted adjacency matrix can be optimized to maximize the market influence without violating the budget limitation, which is more complicated than training neural network weights given a loss due to the constraint. Our main contributions are highlighted in blue.} \label{fig:archi}
\end{figure}
On top of the market share prediction models, we build a budget-constrained optimization module to maximize the market influence by optimizing the transportation network (more precisely, flight frequency values over 2,262 routes), which is an Integer Knapsack problem. Our objective for optimization consists of the market share prediction models in those routes, and our constraint is an air carrier's budget limit. Our objective is not a simple form but rather a complex composition of inter-correlated deep learning prediction models, because changing the frequency in one route will influence market shares on neighboring routes. Therefore, it is very hard to solve with existing techniques that assume routes are independent (see discussions in Section~\ref{sec:opt}).
We test our optimization framework with 2,262 routes. To achieve such a high scalability, we design a deep learning platform-based optimization method\footnote{Deep learning platforms such as TensorFlow, PyTorch, etc. can be regarded as (non-constrained) optimization frameworks for a math function that consists of differentiable operators.}. In our experiments, the proposed optimization method solves the very large-scale optimization problem much faster than existing algorithms.
However, one main challenge in our approach is how to consider the budget constraint --- each air carrier has a limited budget to operate flights. It is not straightforward to consider the budget constraint on deep learning platforms. Therefore, we also design a gradient-based optimizer that is able to consider it as shown in Fig.~\ref{fig:archi}, which we call \emph{adaptive gradient ascent}.
After collecting 10 years of data from several US government agencies, we test our method with up to 2,262 routes. Our customized prediction model shows much better accuracy in many routes than existing methods. In particular, our median root-mean-square error (RMSE) is more than two times better than that of the best baseline method. Our proposed adaptive gradient ascent is able to maximize the market influence on all those routes 690 times faster, with a better optimized influence, than a greedy algorithm.
\section{Related Work}
We introduce a selected set of related works about air market predictions and optimizations.
\subsection{Market Share Prediction}
Many prediction models have been proposed~\cite{suzuki2000relationship,wei2005impact,An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}, to name a few. However, they share many common design points. First, almost all of them use the multi-logit regression model, a standard model to predict air transportation market shares. We also use the same multi-logit regression (see Section~\ref{sec_pred} for its details) after some extensions. Suzuki considers air carriers' frequency, delay, and safety~\cite{suzuki2000relationship}, whereas Wei et al. study the effect of aircraft size and seat availability on market share and consider other variables such as price and frequency~\cite{wei2005impact}. There are more similar works~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. In our paper, we consider transportation network features in addition to those conventional air carrier performance features.
\subsection{Flight Frequency Optimization}\label{sec:opt}
We previously solved a similar flight frequency optimization problem to maximize profits~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. We showed that the frequency-market share curve is very hard to approximate with existing approximation methods such as piece-wise linear approximation~\cite{646812}. We then designed a heuristic algorithm, called GroupGreedy, which runs an exact algorithm within each subset of routes (because running the exact algorithm for the entire route set is prohibitive). Each subset consists of a few routes, and running the exact algorithm within a small subset generally provides a tolerable degree of scalability. However, we were able to test with \emph{at most about 30 routes} due to the prohibitively long execution time even with GroupGreedy, and we think its scalability is still not satisfactory. We test with 2,262 routes in this paper --- i.e., the problem search space size is $\mathcal{O}(n^{30})$ in the previous work vs. $\mathcal{O}(n^{2,262})$ in this work.
In addition, we found that GroupGreedy cannot be used with our prediction model because of the network features --- in our past works~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}, we did not consider network features, assumed each route is independent, and optimized each route separately. In reality, however, changing the frequency in one route is likely to influence the market shares in other routes because routes are often inter-correlated. Thus, GroupGreedy, which is based on the independence assumption, is not applicable to our prediction models. This paper does not make the independence assumption, so this work is more realistic.
\begin{table*}[t]
\begin{center}
\caption{Comparison table between our previous works~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217} and this new work. Because we do not adopt the route independence assumption, our problem setting is more realistic, making many existing algorithms designed based on the assumption inapplicable.}\label{tbl:cmp}
\begin{tabular}{|c|c|c|}
\hline
\rowcolor[HTML]{9AFF99}
\textbf{Comparison items} & \textbf{Previous}~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217} & \textbf{New} \\ \hline
Market Share Prediction Model & Standard Multi-logit Model & Deep Learning Model \\ \hline
Conventional Air Carrier Performance Features & Yes & Yes \\ \hline
Transportation Network Features & No & Yes \\ \hline
Removal of Route Independence Assumption & No & Yes \\ \hline
Optimization Technique & Classical combinatorial optimization techniques & Our proposed adaptive gradient ascent \\ \hline
How to integrate prediction and optimization & Black-box query to prediction model & \begin{tabular}[c]{@{}c@{}}Fully integrated on TensorFlow\end{tabular} \\ \hline
\end{tabular}
\end{center}
\end{table*}
From the Knapsack perspective, after dropping the independence assumption, the problem becomes much more complicated because the value (i.e., market share) of a product (i.e., route) becomes non-deterministic and is influenced by other products (i.e., routes). This makes the current problem more realistic than those studied in our previous works. However, this change also prevents us from applying many existing Knapsack algorithms invented for the simplest case where product values are fixed~\cite{axiotis_et_al:LIPIcs:2019:10595}.
Another significant difference is that the optimization algorithm in our previous works queries its prediction model, whereas both optimization and prediction are integrated on TensorFlow in this new paper. In Table~\ref{tbl:cmp}, we summarize the differences between our previous works and the current work. In addition, Fig.~\ref{fig:ws} compares their fundamental difference in algorithm design philosophy. Previous works are representative black-box search methods adopting the query-response strategy. In this new work, however, the gradients flow directly to update frequencies, so its runtime is inherently faster than that of existing methods.
\begin{figure}[t]
\centering
\subfigure[Existing Black-box Search Methods]{\includegraphics[width=0.9\columnwidth]{images/BS.pdf}}
\subfigure[Proposed White-box Search Method]{\includegraphics[width=0.9\columnwidth]{images/WS.pdf}}
\caption{The comparison with existing black-box search methods in~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217} and the white-box search method proposed in this work. Our adaptive gradient ascent algorithm enables the white-box search concept to be used in this work.} \label{fig:ws}
\end{figure}
\subsection{Deep Learning Platform for Optimization}
Deep learning platforms such as TensorFlow can be used to solve unconstrained optimization problems. In fact, training a neural network is nothing but minimizing its loss (objective) function, given a differentiable math function (the neural network), using stochastic gradient descent (SGD). Therefore, we can utilize such platforms to solve very large-scale optimization problems. In particular, state-of-the-art GPUs have thousands of CUDA cores that accelerate the computation. The following example shows how to solve an optimization problem with no constraints using TensorFlow.
\begin{example}
Suppose the following simple unconstrained maximization problem, which involves only a single variable $x$.
\begin{align}\label{eq:eg}
\textrm{maximize } \log(x) - e^x
\end{align}
Its corresponding TensorFlow implementation is shown in Code~\ref{lst:code}. To implement Eq.~\eqref{eq:eg} on TensorFlow, we first define the variable $x$ (line 2), followed by the objective function (line 4). Up to line 6, we only specify how to solve Eq.~\eqref{eq:eg}; no real computation happens. At line 8, TensorFlow starts solving. This programming paradigm is popular in deep learning.
\begin{lstlisting}[basicstyle=\small,escapechar=@,mathescape,label={lst:code},numbers=left,columns=fullflexible,frame=single,caption=TensorFlow implementation of Equation~\eqref{eq:eg}]
@{\color{blue}// Define one scalar variable}@
x = tf.get_variable("x", [1])
@{\color{blue}// Define an objective function}@
obj = tf.log(x) - tf.exp(x)
opt = GradientDescentOptimizer(learning_rate=0.01)
opt_op = opt.maximize(obj, [x])
@{\color{blue}// Maximize the objective using SGD}@
opt_op.run()
\end{lstlisting}
\end{example}
One pitfall of solving optimization problems on a deep learning platform is that constraints cannot be readily considered. The standard form of an optimization problem is as follows:
\begin{align*}\begin{split}
\textrm{minimize } & Objective\\
\textrm{subject to } & Constraints.
\end{split}\end{align*}
However, there are several mathematical tricks to convert a constrained optimization problem into an unconstrained one, such as the method of Lagrange multipliers. We also found that the rectified linear unit (ReLU) activation can be used for this purpose. Our approach is based on these tricks. However, a naive use of them does not guarantee convergence to meaningful solutions. We investigate this issue and design our own \emph{adaptive gradient ascent} to stabilize the optimization process.
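As a rough illustration of the ReLU trick, a budget constraint $cost(x) \le budget$ can be folded into the objective as a one-sided penalty. Here \texttt{objective}, \texttt{cost}, and the multiplier \texttt{lam} are hypothetical placeholders, not our exact formulation:

```python
def relu_penalized(objective, cost, budget, lam):
    """Turn a constrained maximization into an unconstrained one:
    subtract lam * ReLU(cost(x) - budget), which is zero while the
    budget holds and grows linearly with the violation otherwise."""
    def penalized(x):
        violation = max(cost(x) - budget, 0.0)  # ReLU of the violation
        return objective(x) - lam * violation
    return penalized
```

Because the penalty is differentiable almost everywhere, the resulting function can be handed directly to a gradient-based optimizer on a deep learning platform.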
\subsection{Existing Techniques to Solve Combinatorial Optimization with Deep Learning Technology}
We are not the first to propose solving (or predicting solutions of) combinatorial optimization problems with deep learning technology. However, existing methods cannot be applied to our work for various reasons.
Li et al. \emph{predict} solutions of some NP-hard problems, such as Satisfiability, Maximal Independent Set (MIS), and Minimum Vertex Cover (MVC), with graph convolutional networks (GCNs)~\cite{Li:2018:COG:3326943.3326993}. Note that we use the term `predict' because they really predict solutions rather than solving the problems. Dai et al. also predict for MVC and the Traveling Salesman Problem (TSP)~\cite{Dai:2017:LCO:3295222.3295382} with deep learning technology. Mittal et al. proposed a similar GCN-based method to predict for MVC~\cite{2019arXiv190303332M}. However, those problem definitions do not include any constraints; they usually seek maximal or minimal solutions. Thus, their methods cannot be used to solve our problem. In addition, we are more interested in \emph{solving} rather than \emph{predicting}.
There are many prior works to predict for TSP~\cite{NIPS2015_5866,DBLP:journals/corr/BelloPLNB16,DBLP:journals/corr/ZophL16}. Many of them require training samples of TSP instance-solution pairs. To populate such training samples, they need to run combinatorial optimization algorithms on many instances, which is time-consuming.
The recently-proposed method by Bello et al.~\cite{DBLP:journals/corr/BelloPLNB16} predicts for TSP and Knapsack problems without requiring the generation of such training samples. However, they still require many TSP or Knapsack instances to train. In their work, a reinforcement learning algorithm with an actor-critic architecture, where the critic returns a reward (i.e., objective value) of the problem, is used for training. However, their algorithm should still be trained with many problem instances to stabilize the solution quality.
On the contrary, what we do in this work is very different from theirs. First, we do not require many training instances but have only one Integer Knapsack instance with very specific objectives (i.e., market shares) to solve. Most importantly, their scalability (in terms of network size) was not tested up to the scale that we need, i.e., 100 products in their work vs. 2,262 routes in our work.
\section{Preliminaries}
We introduce our dataset and the state-of-the-art market share prediction model.
\subsection{Dataset}
Our main dataset is the air carrier origin and destination survey (DB1B) dataset released by the US Department of Transportation's Bureau of Transportation Statistics (BTS)~\cite{bts}. They release 10\% of tickets sold in the US every quarter for research purposes, in conjunction with detailed air carrier information. Itemized operational expenses of air carriers are well summarized in the dataset; for instance, we can see how much each air carrier paid for fuel and attendants and what kinds of aircraft were used by a certain air carrier in a certain route. Air carriers' performance is also one important type of information in the dataset. We also use safety data from the National Transportation Safety Board (NTSB)~\cite{ntsb}.
\subsection{Market Share Prediction Model}\label{sec_pred}
In this subsection, we describe a popular market share prediction model for air transportation markets. Given a route $r$, the following multinomial logistic regression model predicts the market share of air carrier $C_k$ in the route:
\begin{align}\label{eq:logit}{\color{black}
m_{r,k} = \frac{ e^{\sum_{j} w_{r,j} \cdot f_{r,k,j}}}{\sum_{i} e^{\sum_{j} w_{r,j} \cdot f_{r,i,j}}} = \frac{exp(\mathbf{w}_r \cdot \mathbf{f}_{r,k})}{\sum_i exp(\mathbf{w}_r \cdot \mathbf{f}_{r,i})},}
\end{align}where $m_{r,k}$ means the market share of air carrier $C_k$ in route $r$; $f_{r,k,j}$ is the $j$-th feature of air carrier $C_k$ in route $r$; and $w_{r,j}$ represents the sensitivity of market share to feature $f_{r,k,j}$ in route $r$ that can be learned from data.
A set of features for air carrier $C_k$ in route $r$ can be represented by a vector $\mathbf{f}_{r,k}$ (see Section~\ref{sec:features} for the complete definition of $\mathbf{f}_{r,k}$ in our work). We use bold font to denote vectors throughout the paper.
The rationale behind the multi-logit model is that $exp(\mathbf{w}_r \cdot \mathbf{f}_{r,k})$ can be interpreted as passengers' valuation score about air carrier $C_k$ and the market share can be calculated by the normalization of those passengers' valuation scores --- this concept is not proposed by us but widely used for the air carrier market share prediction in Business, Operations Research, etc~\cite{hansen1990airline,An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217,suzuki2000relationship,wei2005impact}.
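For concreteness, Eq.~\eqref{eq:logit} is just a softmax over the valuation scores $\mathbf{w}_r \cdot \mathbf{f}_{r,k}$; a minimal NumPy sketch (the feature values and weights below are made up for illustration):

```python
import numpy as np

def market_shares(F, w):
    """Eq. (logit): normalized exponentials of valuation scores w . f_{r,k}."""
    scores = F @ w                     # one valuation score per air carrier
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    return e / e.sum()

# Hypothetical 2-feature vectors for three carriers on one route.
F = np.array([[1.0, 0.5],
              [0.5, 1.0],
              [0.2, 0.2]])
w = np.array([2.0, 1.0])
m = market_shares(F, w)  # shares are positive and sum to one
```

The carrier with the highest valuation score receives the largest predicted share.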
\section{Market Share Prediction}
We improve the prediction model by i) adding several transportation network features, and ii) designing a neural network-based model.
\subsection{Air Carrier Transportation Network}
There are more than 2,000 routes (e.g., from LAX to JFK) in the US, and together they create one large transportation network. A transportation network $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is a directed graph among airports (i.e., vertices) in $\mathcal{V}$. In particular, we are interested in an air carrier-specific directed transportation network $\mathcal{G}_k$ weighted by flight frequency values. Thus, $\mathcal{G}_k$ represents the connectivity of air carrier $C_k$, and the weight of a directed edge is the flight frequency of the air carrier on the corresponding route. $\mathcal{G}_k$ can be represented by a weighted adjacency (or frequency) matrix $\mathcal{A}_k$, where each element is a flight frequency from one airport to another.
\subsection{Network Features}\label{sec:netf}
In this section, we introduce the network features we added to improve the prediction model.
\begin{figure}[t]
\centering
\footnotesize
\includegraphics[width=1\columnwidth,trim={2cm 1cm 2cm 2cm},clip]{netfeat_passenger.png}
\caption{Number of passengers vs. network features. The result summarizes all the airports.}
\label{fig:netfeat_passenger}
\end{figure}
\subsubsection{Degree Centrality}
As mentioned in earlier works, transportation network connectivity is important in air transportation markets~\cite{10.1007/978-3-642-21786-9_61,doi:10.1057/ejis.2010.11}. For instance, the higher the degree centrality of an airport in $\mathcal{G}_k$, the more options passengers have to fly. Thus, the air carrier's market share tends to increase on routes departing from a high-degree-centrality airport. Therefore, we study how the degree centrality of the source and destination airports influences the market share.
Given $\mathcal{A}_k$, the out-degree (resp. in-degree) centrality of the $i$-th airport is the sum of the $i$-th row (resp. column). Thus, this feature calculation can be implemented very easily on TensorFlow or other deep learning platforms.
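As a quick sketch with NumPy (the 4-airport frequency matrix below is hypothetical):

```python
import numpy as np

# Hypothetical 4-airport frequency matrix A_k (rows: origin, cols: destination).
A_k = np.array([
    [0, 5, 2, 0],
    [4, 0, 0, 3],
    [2, 0, 0, 1],
    [0, 3, 0, 0],
], dtype=float)

out_degree = A_k.sum(axis=1)  # weighted out-degree: sum of each row
in_degree = A_k.sum(axis=0)   # weighted in-degree: sum of each column
```

The same two reductions translate directly into `tf.reduce_sum(A_k, axis=...)` calls.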
\subsubsection{Ego Network Density}
Ego networks are widely used in social network analysis~\cite{NIPS2012_4532}. We first introduce the concept of the ego network.
\begin{definition}
Given a vertex $v$, its \emph{ego network} is an induced subgraph of $v$ and its neighbors. The vertex $v$ is called \emph{ego vertex} (i.e., ego airport in our case). Note that ego networks are also weighted with flight frequency values. The density of an ego network is defined as the sum of edge weights divided by $n(n-1)$ where $n$ is the number of vertices in the ego network.
\end{definition}
By definition, an airport's ego network density is high when the airport and its neighboring airports are all well connected to each other. It is natural that passengers transit through an airport whose connections serve their final destinations well.
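A minimal sketch of the weighted ego network density defined above, assuming the frequency matrix is a NumPy array (the 3-airport matrix is a toy example of ours):

```python
import numpy as np

def ego_network_density(A, v):
    """Sum of edge weights in the induced ego subgraph, divided by n(n-1)."""
    nbrs = np.where((A[v] > 0) | (A[:, v] > 0))[0]  # in- or out-neighbors of v
    nodes = np.unique(np.concatenate(([v], nbrs)))
    sub = A[np.ix_(nodes, nodes)]                   # induced weighted subgraph
    n = len(nodes)
    return sub.sum() / (n * (n - 1)) if n > 1 else 0.0

# Hypothetical 3-airport frequency matrix.
A = np.array([[0., 2., 0.],
              [2., 0., 1.],
              [0., 1., 0.]])
d0 = ego_network_density(A, 0)  # ego network {0, 1}
d1 = ego_network_density(A, 1)  # ego network {0, 1, 2}
```

Note that, because the graph is weighted by flight frequencies, the density can exceed one.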
\subsubsection{PageRank}
PageRank was originally proposed to derive a vertex's importance score based on the random web surfer model~\cite{Page99thepagerank}, i.e., a web surfer performs a random walk following hyperlinks. We think PageRank is suitable for analyzing multi-stop passengers for the following reason: after row-wise normalization of $\mathcal{A}_k$, each element is the transition probability that a random passenger moves along the corresponding route. Thus, PageRank is able to capture the importance of an airport.
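A minimal power iteration sketch of this idea; we assume every airport has at least one outgoing flight so that the row-wise normalization is well defined:

```python
import numpy as np

def pagerank(A, d=0.85, iters=100):
    """PageRank by power iteration on the row-normalized frequency matrix."""
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)   # row-wise transition probabilities
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        p = (1.0 - d) / n + d * (P.T @ p)  # standard damped update
    return p

# Airport 0 receives flights from every other airport; it should rank highest.
A = np.array([[0., 1., 1.],
              [1., 0., 0.],
              [1., 0., 0.]])
p = pagerank(A)
```

Since each row of `P` sums to one, the scores `p` remain a probability distribution across iterations.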
Fig.~\ref{fig:netfeat_passenger} depicts the relationships between the network features introduced above and the total number of passengers transported in and out airports by a certain air carrier. We used the DB1B data released by the BTS for the first quarter of 2018 to draw this figure. As shown in Fig.~\ref{fig:netfeat_passenger}, the number of passengers in each airport is highly correlated with the network features (i.e., in-degree, out-degree, ego network density, and PageRank). In conjunction with other classical air carrier performance features, these network features can improve the prediction accuracy by a non-trivial margin.
\subsection{Final Feature Set in our Prediction}\label{sec:features}
The complete elements of $\mathbf{f}_{r,k}$ we use for our prediction are as follows so $\mathbf{f}_{r,k}$ is a 19-dimensional vector:
\begin{enumerate}
\item $f_{r,k,0}$: Average ticket price
\item $f_{r,k,1}$: Flight frequency
\item $f_{r,k,2}$: Delay ratio
\item $f_{r,k,3}$: Average delayed time in minutes
\item $f_{r,k,4}$: Flight cancel ratio
\item $f_{r,k,5}$: Flight divert ratio
\item $f_{r,k,6}$: Total number of fatal cases
\item $f_{r,k,7}$: Total number of serious accident cases
\item $f_{r,k,8}$: Total number of minor accident cases
\item $f_{r,k,9}$: Average aircraft size in terms of number of seats per flight
\item $f_{r,k,10}$: Average seat availability percentage which is not occupied by connecting passengers
\item $f_{r,k,11}$: In-degree of the source airport
\item $f_{r,k,12}$: In-degree of the destination airport
\item $f_{r,k,13}$: Out-degree of the source airport
\item $f_{r,k,14}$: Out-degree of the destination airport
\item $f_{r,k,15}$: PageRank of the source airport
\item $f_{r,k,16}$: PageRank of the destination airport
\item $f_{r,k,17}$: Ego network density of the source airport
\item $f_{r,k,18}$: Ego network density of the destination airport
\end{enumerate}
{\color{black} Among the many features, the flight frequency $f_{r,k,1}$ is the actionable feature that we are interested in adjusting. An actionable feature is one that an air carrier can decide freely for its own purposes. Many other features, such as delay time, safety, and so on, cannot be solely decided by an air carrier. Hereinafter, we use $f_{r,k,freq}$ (instead of $f_{r,k,1}$, for clarity) to denote a flight frequency value of air carrier $C_k$ in route $r$.} Note that these frequency values among airports constitute $\mathcal{A}_k$.
\subsection{Neural Network-based Prediction}\label{sec:nnmodel}
Whereas many existing methods rely on classical machine learning approaches, we use the following neural network for prediction:
\begin{align}\begin{split}\label{eq:nn}
\mathbf{h}^{(1)}_{r,k} &= \sigma(\mathbf{f}_{r,k}\mathbf{W}^{(0)} + \mathbf{b}^{(0)}),\textrm{ for initial layer}\\
\mathbf{h}^{(i+1)}_{r,k} &= \mathbf{h}^{(i)}_{r,k} + \sigma(\mathbf{h}^{(i)}_{r,k}\mathbf{W}^{(i)} + \mathbf{b}^{(i)}),\textrm{ if }i \geq 1
\end{split}\end{align}where $\sigma$ is ReLU. $\mathbf{W}^{(0)} \in \mathcal{R}^{19 \times d}$, $\mathbf{b}^{(0)} \in \mathcal{R}^{d}$, $\mathbf{W}^{(i)} \in \mathcal{R}^{d \times d}$, $\mathbf{b}^{(i)} \in \mathcal{R}^{d}$ are parameters to learn. Note that we use residual connections after the initial layer. For the final activation, we also use the multi-logit regression. From Eq.~\eqref{eq:logit}, we replace $\mathbf{f}_{r,k}$ with $\mathbf{h}^{l}_{r,k}$, which denotes the last hidden vector of our proposed neural network, to predict $m_{r,k}$ as follows:
\begin{align}\label{eq:nn2}
m_{r,k} = \frac{exp(\mathbf{w}_r \cdot \mathbf{h}^{l}_{r,k})}{\sum_i exp(\mathbf{w}_r \cdot \mathbf{h}^{l}_{r,i})},
\end{align}where $\mathbf{w}_r$ is a trainable parameter. We use $\bm{\theta}_r$ to denote all the parameters of route $r$ in Eqs.~\eqref{eq:nn} and~\eqref{eq:nn2}.
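As a sketch, the forward pass of Eqs.~\eqref{eq:nn} and~\eqref{eq:nn2} for one route can be written as follows (the hidden width, layer count, and random parameters below are placeholders, not trained values):

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda x: np.maximum(x, 0.0)
d = 8  # hidden width (hypothetical)

# Hypothetical parameters theta_r for one route r.
W0, b0 = rng.normal(size=(19, d)) * 0.1, np.zeros(d)
Ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(2)]
bs = [np.zeros(d) for _ in range(2)]
w_r = rng.normal(size=d)

def hidden(f):
    """Eq. (nn): initial layer followed by residual layers."""
    h = relu(f @ W0 + b0)
    for W, b in zip(Ws, bs):
        h = h + relu(h @ W + b)  # residual connection
    return h

# Eq. (nn2): multi-logit over the last hidden vectors of all carriers.
F = rng.normal(size=(4, 19))  # feature vectors f_{r,k} for four carriers
scores = np.array([w_r @ hidden(f) for f in F])
m = np.exp(scores - scores.max())
m = m / m.sum()               # predicted market shares
```

In the actual model the same computation is expressed in TensorFlow so that gradients can flow back to the inputs.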
One thing to mention is that all the network features can be properly calculated on TensorFlow from $\mathcal{A}_k$ before being fed into the neural network. This is the case during the frequency optimization phase, which will be described shortly. After changing a frequency in $\mathcal{A}_k$, all the network features can be recalculated before the neural network processing, as shown in Fig.~\ref{fig:archi}. Therefore, the gradients can flow directly from the prediction models to the frequency matrix through the network feature calculation part. Hereinafter, we use a function $m_{r,k}(\mathcal{A}_k;\bm{\theta}_r)$, after partially omitting features (such as ticket price, aircraft size, etc.), to denote the predicted market share. Note that the omitted features and $\bm{\theta}_r$ are considered constant while optimizing frequencies in the next section. We sometimes omit all the inputs and use $m_{r,k}$ for brevity. See Table~\ref{tbl:notation} for notations.
\begin{table}[t]
\begin{center}
\caption{Notation table}\label{tbl:notation}
\begin{tabular}{|c|c|}
\hline
\rowcolor[HTML]{9AFF99}
\textbf{Symbol} & \textbf{Meaning} \\ \hline
$C_k$ & An air carrier \\ \hline
$r$, $\mathcal{R}$ & \begin{tabular}[c]{@{}p{5.5cm}@{}}$r$ means a route (e.g., from LAX to JFK). $\mathcal{R}$ is the set of all US domestic air routes.\end{tabular} \\ \hline
$\mathcal{A}_k$ & \begin{tabular}[c]{@{}p{5.5cm}@{}}Frequency matrix of air carrier $C_k$. Its one element is a flight frequency value from an airport to other airport (i.e., route) provided by $C_k$. We optimize this matrix to maximize the market influence of air carrier $C_k$.\end{tabular} \\ \hline
$f_{r,k,1}$ or $f_{r,k,freq}$ & \begin{tabular}[c]{@{}p{5.5cm}@{}}Frequency of air carrier $C_k$ in route $r$. This is an element of $\mathcal{A}_k$. Both $f_{r,k,1}$ or $f_{r,k,freq}$ have the same meaning. For clarity, however, we use $f_{r,k,freq}$.\end{tabular} \\ \hline
$m_{r,k}(\mathcal{A}_k;\bm{\theta}_r)$ or $m_{r,k}$ & \begin{tabular}[c]{@{}p{5.5cm}@{}}Predicted market share of $C_k$ in route $r$. For brevity but without loss of clarity, we sometimes use $m_{r,k}$.\end{tabular} \\ \hline
$demand_r$& \begin{tabular}[c]{@{}p{5.5cm}@{}}The total number of passengers in route $r$.\end{tabular} \\ \hline
$cost_{r,k}$& \begin{tabular}[c]{@{}p{5.5cm}@{}}The unit operational cost (or cost per flight) of $C_k$ in route $r$.\end{tabular} \\ \hline
$budget_k$& \begin{tabular}[c]{@{}p{5.5cm}@{}}The total operational budget of $C_k$\end{tabular} \\ \hline
market influence of $C_k$& \begin{tabular}[c]{@{}p{5.5cm}@{}}The number of passengers transported by $C_k$ in route $r$, i.e., $demand_r \times m_{r,k}$\end{tabular} \\ \hline
\end{tabular}
\end{center}
\end{table}
\section{Market Influence Maximization}
In this section, we describe our main contribution to solve the market influence maximization problem.
\subsection{Problem Definition}\label{sec:def}
We solve the following optimization problem to maximize the market influence of air carrier $C_k$ (i.e., the number of passengers transported by $C_k$) on those routes in $\mathcal{R}$. Given its total budget $budget_k$, we optimize the flight frequency values of the air carrier over multiple routes in $\mathcal{R}$ as follows:
\begin{align}\begin{split}\label{eq:obj}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r \in \mathcal{R}}& \sum_{r \in \mathcal{R}} demand_r \times m_{r,k} \\
\textrm{subject to }& \sum_{r \in \mathcal{R}}cost_{r,k} \times f_{r,k,freq} \leq budget_k,
\end{split}\end{align} where $m_{r,k}$ is the predicted market share of $C_k$ in route $r$ (by our neural network model), $demand_r$ is the number of total passengers in route $r$ from the DB1B dataset, and $cost_{r,k}$ is the unit operational cost of air carrier $C_k$ in route $r$. $f_{r}^{max}$ is the maximum flight frequency in route $r$ observed in the DB1B dataset. The adoption of $f_{r}^{max}$ is our heuristic to prevent overshooting a practically meaningful frequency limit. Note that different air carriers have different unit operational costs in a route $r$ as their efficiency is different and they purchase fuel in different prices --- we extract this information from the DB1B dataset.
Eq.~\eqref{eq:obj} shows how we can effectively merge data mining and mathematical optimization. The proposed problem is basically a non-linear optimization and a special case of Integer Knapsack and resource allocation problems which are all NP-hard~\cite{Arora:2009:CCM:1540612}. The theoretical complexity of the problem is $\mathcal{O}(\prod_{r\in \mathcal{R}}f_{r}^{max})$, which can be simply written as $\mathcal{O}(n^{2,262})$ after assuming $n=f_{r}^{max}$ in each route for ease of discussion because $|\mathcal{R}|=2,262$.
\begin{theorem}
The proposed market influence maximization is NP-hard.
\end{theorem}
\begin{proof}
We prove the theorem by showing that an arbitrary Integer Knapsack problem instance can be reduced to a special case of our market influence maximization problem.
In an Integer Knapsack problem, there are $n$ product types, and each product type $p$ has a value $v_p$ and a cost $c_p$. In particular, there exists a sufficient number of product instances for each product type, so we can choose multiple instances of a certain product type. Given a budget $B$, we choose as many instances as we want such that the sum of the chosen products' values is maximized.
This problem instance can be reduced to a market influence maximization by letting a product type $p$ be a route $r$, $c_p$ be $cost_{r,k}$, and $v_p$ be a deterministic increment of market influence by increasing the frequency by one.
Therefore, the proposed market influence maximization problem is NP-hard.
\end{proof}
\begin{table*}[t]
\centering
\caption{The comparison between the method of Lagrange multiplier and our method}\label{tbl:lag}
\begin{tabular}{|>{\color{black}}c|>{\color{black}}l|}
\hline
\rowcolor[HTML]{9AFF99}
\textbf{Method}& \multicolumn{1}{>{\color{black}}c|}{\textbf{How it works}} \\ \hline
The method of Lagrange multiplier & \begin{tabular}[c]{@{}p{13.2cm}@{}}$\boldsymbol{\cdot}$ Given a constrained optimization problem, $\max o(\mathcal{A}_k)$ subject to $c(\mathcal{A}_k) \leq 0$, it optimizes $L = o(\mathcal{A}_k) - \lambda c(\mathcal{A}_k)$. \\ $\boldsymbol{\cdot}$ If $o(\mathcal{A}_k)$ is concave, there exists a well-developed method to decide such a Lagrange multiplier $\lambda$ that the optimal solution of $L$ is the same as that of the original constrained problem~\cite{10.5555/993483,10.1561/2200000016}.\end{tabular} \\ \hline
Our proposed method & \begin{tabular}[c]{@{}p{13.2cm}@{}}$\boldsymbol{\cdot}$ $o(\mathcal{A}_k)$ is not concave in our case, so we use $L + \delta \lambda^2$ and derive simpler forms in Eqs.~\eqref{eq:lag_max_min_simp} and~\eqref{eq:lag_max_min_simp2}.\\
$\boldsymbol{\cdot}$ We add $\delta \lambda^2$ to regularize large $\lambda$ because we cannot use the original method of Lagrange multiplier.\\
$\boldsymbol{\cdot}$ After that, we focus on efficiently solving them by dynamically controlling $\beta$, which we call \emph{adaptive gradient ascent}. This method is one of our main contributions in this work. \\ $\boldsymbol{\cdot}$ The entire process happens on TensorFlow.\end{tabular} \\ \hline
\end{tabular}
\end{table*}
\subsubsection{The difficulty of the proposed problem}\label{sec:diff}
Recall that in our prediction, $m_{r_i,k}$ is not independent of $m_{r_j,k}$, where $i \neq j$, because a flight frequency change on $r_i$ can (adversely) change some network features related to $r_j$. This prevents the use of many existing techniques that optimize routes one by one, because a later route can influence an earlier route during the optimization process. We have to optimize the adjacency matrix $\mathcal{A}_k$ as a whole, which is what we do in this paper. We will describe our approach shortly.
{\color{black}We also show that the proposed market influence objective function is not concave in our case. Let $o(\cdot)$ be our market influence objective. Given two solutions (weighted adjacency matrices) $\mathcal{A}$ and $\mathcal{B}$ of air carrier $C_k$, let $S$ be a set of all possible linear combinations of $\mathcal{A}$ and $\mathcal{B}$, i.e., $S=\{\mathcal{C} | \mathcal{C} = w\cdot \mathcal{A} + (1-w)\cdot \mathcal{B}, \forall w \in [0,1]\}$, so $S$ is an interval in our solution space. For any two $\mathcal{A}_{1} \in S$ and $\mathcal{A}_{2} \in S$, the following Sierpinski inequality~\cite{convex} should be met if $o(\cdot)$ is concave:
$$o\Big(\frac{\mathcal{A}_1 + \mathcal{A}_2}{2}\Big) \geq \frac{o(\mathcal{A}_1) + o(\mathcal{A}_2)}{2}.$$
However, we have many observations in our dataset where the inequality does not hold, because the market share--frequency curves are all intercorrelated and frequently very complicated. Fig.~\ref{fig:non-concave}, where $\mathcal{A}_3 = \frac{\mathcal{A}_1 + \mathcal{A}_2}{2}$ and $o(\mathcal{A}_1) = o(\mathcal{A}_2)$, shows one such example in our data. As shown in the figure, $$o(\mathcal{A}_3) < \frac{o(\mathcal{A}_1) + o(\mathcal{A}_2)}{2},$$ which is the opposite of the above Sierpinski inequality. Therefore, our objective is not concave on our data.
Because i) the adoption of network features makes many combinatorial optimization techniques inapplicable to our work and ii) our objective function is not concave, it is hard to design an efficient search method.}
\begin{figure}[t]
\centering
\footnotesize
\includegraphics[width=0.8\columnwidth,trim={0.5cm 0 2cm 0.9cm},clip]{Picture5.png}
\caption{The (predicted) transported passenger numbers of linear combinations of two frequency matrices for a certain air carrier}
\label{fig:non-concave}
\end{figure}
\subsection{Overall Architecture}
In Fig.~\ref{fig:archi}, the overall architecture of the proposed optimization idea is shown. The overall workflow is as follows:
\begin{enumerate}
\item Train the market share prediction model in each route.
\item Fix the prediction models and update the frequency matrix $\mathcal{A}_k$ using the proposed adaptive gradient ascent. We consider other features (such as ticket price, aircraft size, etc.) are fixed while optimizing frequencies.
\end{enumerate}
\subsection{How to Solve}\label{sec:sol}
We solve the problem in Eq.~\eqref{eq:obj} on a deep learning platform using our adaptive gradient ascent in Algorithm~\ref{alg:adaptive-gd}. One remaining problem is how to enforce the budget constraint. We design two workarounds based on i) the Lagrange multiplier method and ii) the rectified linear unit (ReLU).\medskip
\subsubsection{Lagrange Multiplier Method: }
{\color{black}The method of Lagrange multiplier is a popular method to maximize concave functions (or some special non-concave functions) with constraints~\cite{10.5555/993483,10.1561/2200000016}. However, we cannot apply the method in a na{\"\i}ve way because our objective function is not concave. Therefore, we adopt only the Lagrangian function from the method and develop our own heuristic search method.} With the Lagrange multiplier method, we can construct the following Lagrangian function:
\begin{align}\label{eq:lag_l}\begin{split}
L = o(\mathcal{A}_k) - \lambda c(\mathcal{A}_k),
\end{split}\end{align} where $\lambda$ is called a Lagrange multiplier, and
\begin{align}\begin{split}
o(\mathcal{A}_k) &= \sum_{r \in \mathcal{R}} demand_r \times m_{r,k},\\
c(\mathcal{A}_k) &= \sum_{r \in \mathcal{R}}\big(cost_{r,k} \times f_{r,k,freq}\big) - budget_k.
\end{split}\end{align}
{\color{black}Basically, the Lagrange multiplier $\lambda$ can be systematically decided if the objective function $o(\mathcal{A}_k)$ is concave, and we can then find the optimal solution of the original constrained problem. However, this is not the case in our work, and our goal is to solve the optimization problem on TensorFlow for the purpose of increasing scalability.
Thus, we propose the following regularized problem and develop a heuristic search method}:
\begin{align}\label{eq:lag2}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \min_{\lambda}\quad L + \delta \lambda^2
\end{align}where $\delta \geq 0$ is a weight for the regularization term. {\color{black}Note that our definition is different from the original Lagrangian\footnote{{\color{black}One can consider that the relationship between the original Lagrangian and our proposed regularized Lagrangian is similar to that between linear regression and ridge regression. Ridge regression minimizes prediction errors in conjunction with the sum of squared parameters to learn, which linear regression does not have~\cite{10.2307/1271436}.}}.} The inner minimization part has been added by us to prevent $\lambda$ from becoming too large. One way to solve Eq.~\eqref{eq:lag2} is to alternately optimize the flight frequencies (i.e., the outer maximization) and $\lambda$ (i.e., the inner minimization), which implies that Eq.~\eqref{eq:lag2} is basically a two-player max-min game.
We further improve Eq.~\eqref{eq:lag2} and derive a simpler but equivalent formulation that does not require the alternating maximization and minimization shortly in Eq.~\eqref{eq:lag_max_min_simp}.
As the frequencies optimized by our method will be real numbers, \emph{we round them down to integers} so as not to violate the budget limit $budget_k$ --- one can view our method as a relaxation from integer frequencies to continuous values.
\begin{theorem}\label{th:optimization}
Let $\mathcal{A}_k$ be a matrix of flight frequencies. The proposed max-min method in Eq.~\eqref{eq:lag2} is equivalent to $\underset{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}{\max}\ o(\mathcal{A}_k) -\frac{c(\mathcal{A}_k)^2}{4\delta}$.
\end{theorem}
\begin{proof}
First we rewrite Eq.~\eqref{eq:lag2} as follows:
\begin{align}\label{eq:lag_max_min}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \min_{\lambda}\quad o(\mathcal{A}_k) - \lambda c(\mathcal{A}_k) + \delta \lambda^2.
\end{align}
Let us fix $\mathcal{A}_k$ then Eq.~\eqref{eq:lag_max_min} becomes a quadratic function (parabola) w.r.t. $\lambda$. It is already known that the optimal solution to minimize the quadratic function given a fixed $\mathcal{A}_k$ is achieved when its derivative w.r.t. $\lambda$ is zero, i.e., $\dv{o(\mathcal{A}_k) - \lambda c(\mathcal{A}_k) + \delta \lambda^2}{\lambda} = - c(\mathcal{A}_k) + 2\delta \lambda = 0$. Therefore, the optimal form of $\lambda$ can be derived as $\hat{\lambda} = \frac{c(\mathcal{A}_k)}{2\delta}$.
Let us substitute $\lambda$ for its optimal form $\hat{\lambda}$ in Eq.~\eqref{eq:lag_max_min} and the inner minimization will disappear as follows:
\begin{align}\label{eq:lag_max_min2}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad o(\mathcal{A}_k) -\frac{c(\mathcal{A}_k)^2}{4\delta}
\end{align}
\end{proof}
For simplicity, let $\beta = \frac{1}{2\delta}$ and we can rewrite Eq.~\eqref{eq:lag_max_min2} as follows:
\begin{align}\label{eq:lag_max_min_simp}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \bar{L}_{Lagrange},
\end{align}where $\bar{L}_{Lagrange} = o(\mathcal{A}_k) -\beta\frac{c(\mathcal{A}_k)^2}{2}$.
Note that maximizing Eq.~\eqref{eq:lag_max_min_simp} is equivalent to solving the max-min problem in Eq.~\eqref{eq:lag2} so we implement only Eq.~\eqref{eq:lag_max_min_simp} and optimize it using the proposed adaptive gradient ascent that will be described in the next subsection.
We finalize our Lagrange multiplier-based method with a remark on the equivalence between Eq.~\eqref{eq:lag2} and Eq.~\eqref{eq:lag_max_min_simp}. In Eq.~\eqref{eq:lag_max_min_simp}, there is a squared penalty, which means we strongly emphasize the budget constraint term during the optimization process whenever there is any cost overrun. This supports the appropriateness of our regularized method in Eq.~\eqref{eq:lag2}. In particular, the optimal form $\hat{\lambda}$ can be derived because Eq.~\eqref{eq:lag_max_min} is a parabolic function w.r.t. $\lambda$ thanks to the regularization term $\delta \lambda^2$, which leads to Eq.~\eqref{eq:lag_max_min_simp}.\medskip
\subsubsection{ReLU-based Method: }
ReLU is used to rectify an input value by taking its positive part for neural networks. This property can be used to impose a penalty if the budget limit constraint is violated as follows:
\begin{align}\label{eq:lag_max_min_simp2}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r\in \mathcal{R}}\quad \bar{L}_{ReLU},
\end{align}where $\bar{L}_{ReLU} = o(\mathcal{A}_k) - \beta R(c(\mathcal{A}_k))$ and $R(\cdot)$ is the rectified linear unit.
\begin{comment}
Using the gradient ascent-based method, a matrix of flight frequencies $\mathcal{A}_k$ is updated as follows:
\begin{align}\label{eq.opt-relu-gd}
\mathcal{A}_k^{(i+1)}&= \left\{ \begin{array}{ll}
\mathcal{A}_k^{(i)} + \gamma \cdot o'(\mathcal{A}_k^{(i)}) & c(\mathcal{A}_k^{(i)}) \leq 0 \\
\mathcal{A}_k^{(i)} + \gamma(o'(\mathcal{A}_k^{(i)}) - \beta c'(\mathcal{A}_k^{(i)})) & c(\mathcal{A}_k^{(i)}) > 0\\
\end{array}\right.
\end{align}
where $\gamma$ is a learning rate set by the user and $\mathcal{A}_k^{(i)}$ is the matrix $\mathcal{A}_k$ at the $i$-th iteration of the optimization process.
\end{comment}
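For reference, the two penalized objectives $\bar{L}_{Lagrange}$ and $\bar{L}_{ReLU}$ differ only in the penalty term; a scalar sketch (the numeric inputs in the test are ours):

```python
def penalized_objectives(o, c, beta):
    """Scalar versions of the two penalized objectives:
    L_Lagrange = o - beta * c^2 / 2    (Eq. lag_max_min_simp)
    L_ReLU     = o - beta * max(c, 0)  (Eq. lag_max_min_simp2)
    where o = o(A_k) is the market influence and c = c(A_k) the cost overrun."""
    return o - beta * c**2 / 2.0, o - beta * max(c, 0.0)
```

Note that the squared penalty is also nonzero when the carrier is under budget ($c < 0$), whereas the ReLU penalty vanishes there; in practice the adaptive $\beta$ described next is set to zero whenever there is no cost overrun.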
\subsection{$\beta$ Selection and Adaptive Gradient Ascent Method}
{\color{black}We propose an adaptive gradient ascent method, which uses the gradients of $\bar{L}_{Lagrange}$ or $\bar{L}_{ReLU}$ to optimize flight frequencies. In both methods, the coefficient $\beta$ needs to be \emph{dynamically} adjusted to enforce the budget limit rather than being fixed to a constant. For example, one gradient ascent update can still increase flight frequencies even after a cost overrun if $\beta$ is not large enough. Whenever there is any cost overrun, $\beta$ should be set to a value large enough that the total cost decreases.
\begin{figure}[t]
\centering
\footnotesize
\includegraphics[width=0.5\columnwidth]{gau3.pdf}
\caption{Suppose that there is a small cost overrun with $\mathcal{A}^{(i)}$, which denotes a frequency matrix at $i$-th gradient ascent iteration. The scale of $\vect{c}'$ is smaller than that of $\vect{o}'$ and the gradient ascent update cannot remove the cost overrun if $\beta$ is small. However, if $\beta$ is large enough (e.g., $\beta=5$ in this example), the gradient ascent update can reduce the cost overrun. Note that $\mathcal{A}^{(i+1)}$ is located behind $\mathcal{A}^{(i)}$ w.r.t. the blue dotted line perpendicular to $\vect{c}'$, which means a reduced cost overrun.}
\label{fig:ga}
\end{figure}
For the sake of our convenience, we will use $\vect{o}'$ and $\vect{c}'$ to denote the gradients of objective and cost overrun penalty term as follows:
\begin{align*}
\vect{o}' &= \nabla o(\mathcal{A}_k),\\
\vect{c}' &= \begin{cases}\nabla\frac{c(\mathcal{A}_k)^2}{2},\textrm{ if the Lagrange multiplier-based method}\\
\nabla R(c(\mathcal{A}_k)),\textrm{ if the ReLU-based method}.
\end{cases}
\end{align*}
Fig.~\ref{fig:ga} shows an illustration of why we need to adjust $\beta$. As shown, if the directions of the two gradients $\vect{c}'$ and $\vect{o}'-\beta \vect{c}'$, where $\beta=5$, are opposite, the cost overrun will decrease after one gradient ascent update.
}
We also do not distinguish between $\bar{L}_{Lagrange}$ and $\bar{L}_{ReLU}$ in this section because the algorithm proposed in this section is commonly applicable to both the Lagrange multiplier and ReLU-based methods. We denote them simply as $\bar{L}$ in this section.
The gradients of $\bar{L}$ w.r.t. $\mathcal{A}_k$ are made of two components $\vect{o}'$ and $-\beta\vect{c}'$, where $\vect{o}'$ increases the market influence and $-\beta\vect{c}'$ reduces the cost overrun. Typically, the market influence increases as the frequencies $\mathcal{A}_k$ increase. So $\beta$ needs to be properly selected such that the frequencies are updated (by the proposed adaptive gradient ascent) to reduce the cost once the total cost exceeds the budget during the gradient-based update process.
This requires that the overall gradient $\vect{o}'-\beta \vect{c}'$ suppresses an increase in $c(\mathcal{A}_k)$.
More precisely, it requires that the directional derivative of $c(\mathcal{A}_k)$ along the vector $\vect{o}'-\beta \vect{c}'$ (i.e., the dot product of $\vect{o}'-\beta\vect{c}'$ and $\vect{c}'$) is negative --- if two vectors are orthogonal, their dot product is 0, and if the angle between them is obtuse, their dot product is negative. Based on Theorem~\ref{th:beta_selection}, we obtain
\begin{align}\label{eq:opt-over-budget}
\beta > \frac{\vect{c}' \cdot \vect{o}'}{\vect{c}' \cdot \vect{c}'},
\end{align}where `$\cdot$' means the vector dot product.
\begin{definition}
The directional derivative of $c({\mathcal{A}_k})$ w.r.t. $\mathcal{A}_k$ along the vector $\vect{o}'-\beta \vect{c}'$ is defined as its dot product $\vect{c}' \cdot (\vect{o}'-\beta \vect{c}')$~\cite{kaplan1991advanced}. This is directly from the definition of the dot product as follows:
\begin{align*}
\nabla_{\vect{o}'-\beta \vect{c}'} c(\mathcal{A}_k) &= \nabla c(\mathcal{A}_k) \cdot (\vect{o}'-\beta \vect{c}')= \vect{c}' \cdot (\vect{o}'-\beta \vect{c}').
\end{align*}
\end{definition}
\begin{theorem}\label{th:beta_selection}
$\vect{c}' \cdot (\vect{o}'-\beta \vect{c}')<0$ if and only if $\beta > \frac{\vect{c}' \cdot \vect{o}'}{\vect{c}' \cdot \vect{c}'} $.
\end{theorem}
\begin{proof}
From $\vect{c}' \cdot (\vect{o}'-\beta \vect{c}') < 0$, rewrite the inequality w.r.t. $\beta$ and we have
\begin{align*}
\beta > \frac{\vect{c}' \cdot \vect{o}'}{\vect{c}' \cdot \vect{c}'}.
\end{align*}
\end{proof}
Note that Eq.~\eqref{eq:opt-over-budget} does not include the equality condition but requires that $\beta$ is strictly larger than its right-hand side. To this end, we introduce a positive value $\epsilon>0$ as follows:
\begin{align}
\beta = \frac{\vect{o}'\cdot\vect{c}'}{\vect{c}'\cdot\vect{c}'}+\epsilon,
\end{align}where $\epsilon$ is a positive hyper-parameter in our method.
On the other hand, we need to ensure that $\beta$ is getting closer to zero when the algorithm is approaching an optimal solution of $\mathcal{A}_k$. To do this, we further modify it as follows:
\begin{align}\label{eq:betafinal}
\beta = \frac{\vect{o}'\cdot\vect{c}'}{\vect{c}'\cdot\vect{c}'}+c(\mathcal{A}_k)\epsilon.
\end{align}
Note that $c(\mathcal{A}_k)\epsilon$ becomes a very trivial value if $c(\mathcal{A}_k)$ is very small. This specific setting prevents the situation that an ill-chosen large $\epsilon$ decreases flight frequencies too much given a very small cost overrun $c(\mathcal{A}_k) \approx 0$.
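The resulting $\beta$ selection in Eq.~\eqref{eq:betafinal} is a one-liner; a sketch with the gradients flattened into vectors (the numeric gradients below are made up):

```python
import numpy as np

def select_beta(o_grad, c_grad, c_val, eps=0.1):
    """Eq. (betafinal), applied only when there is a cost overrun (c(A_k) > 0)."""
    if c_val <= 0:
        return 0.0
    return float(o_grad @ c_grad) / float(c_grad @ c_grad) + c_val * eps

o_grad = np.array([3.0, 1.0])  # hypothetical gradient of o(A_k)
c_grad = np.array([1.0, 2.0])  # hypothetical gradient of the penalty term
beta = select_beta(o_grad, c_grad, c_val=1.0)
# Directional derivative of c along o' - beta*c' is negative, as required.
dd = c_grad @ (o_grad - beta * c_grad)
```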
The proposed adaptive gradient ascent is presented in Algorithm~\ref{alg:adaptive-gd}. The optimization of frequencies occurs at line~\ref{alg:opt}; the other lines dynamically adjust hyper-parameters to prevent cost overruns. We take a solution at around 500 epochs when the cost overrun is not positive; 500 epochs are enough to reach a solution point in our experiments.
{\color{black}
\begin{theorem}
Algorithm~\ref{alg:adaptive-gd} is able to find a feasible solution of the original problem in Eq.~\eqref{eq:obj}.
\end{theorem}
\begin{proof}
In Thm.~\ref{th:beta_selection}, we choose a $\beta$ configuration that meets $\vect{c}' \cdot (\vect{o}'-\beta \vect{c}')<0$. The frequency matrix $\mathcal{A}_k$ is updated by the gradient ascent rule, denoted $\mathcal{A}_k = \mathcal{A}_k + \gamma(\vect{o}'-\beta \vect{c}')$. Because the dot product of $\vect{o}'-\beta \vect{c}'$ and $\vect{c}'$ is negative, the gradient ascent decreases the cost overrun term $c(\mathcal{A}_k)$, as illustrated in Fig.~\ref{fig:ga}.
Therefore, after applying the proposed gradient ascent multiple times, any cost overrun can be removed. Our algorithm stops at the first solution whose cost overrun is not positive after at least 500 epochs. Therefore, Algorithm~\ref{alg:adaptive-gd} is able to find a feasible solution that meets the budget constraint, and its termination is guaranteed.
\end{proof}
}
\begin{algorithm}[t]
\small
\SetAlgoLined
\caption{Adaptive gradient ascent}\label{alg:adaptive-gd}
\KwIn{$\gamma$}
\KwOut{$\mathcal{A}_k$}
Initialize $\mathcal{A}_k$\tcc*[r]{Initialize freqs}
$\beta \gets 0$\tcc*[r]{Initialize $\beta$}
\While {until convergence}{
$\mathcal{A}_k \gets \mathcal{A}_k+\gamma \nabla \bar{L}$\tcc*[r]{Gradient ascent}\label{alg:opt}
\eIf{$c(\mathcal{A}_k)>0$}{
$\beta \gets$Eq.~\eqref{eq:betafinal}
}{
$\beta \gets 0$\;
}
}
\end{algorithm}
\begin{example}
We introduce an example of maximizing a quadratic objective with our method, shown in Fig.~\ref{fig:eg}. It maximizes $x_1^2 + x_2^2$ subject to the constraint $(x_1-1)^2 + (x_2-1)^2 \leq 1$. Our proposed algorithm dynamically adjusts $\beta$ to pull the trajectory toward the gray circle, which represents the feasible region of the example problem. Note that the trajectory remains near the boundary of the feasible region once it successfully enters it, because we can enable/disable $\beta$ as necessary.
\end{example}
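The dynamics of this example can be reproduced with a short sketch. The step size, $\epsilon$, and iteration count below are illustrative choices for this toy problem, not the settings used in our experiments:

```python
import numpy as np

def adaptive_ascent(x0, gamma=0.005, eps=10.0, iters=8000):
    """Algorithm 1 on the toy problem: maximize x1^2 + x2^2 subject to
    (x1-1)^2 + (x2-1)^2 <= 1, tracking the best feasible point seen."""
    x = np.array(x0, dtype=float)
    best, best_val = x.copy(), float('-inf')
    for _ in range(iters):
        o = 2.0 * x                        # objective gradient o'
        c = np.sum((x - 1.0) ** 2) - 1.0   # cost overrun c(x)
        cg = 2.0 * (x - 1.0)               # constraint gradient c'
        if c > 0:                          # infeasible: enable beta
            beta = np.dot(o, cg) / np.dot(cg, cg) + c * eps
        else:                              # feasible: plain ascent
            beta = 0.0
            val = float(np.sum(x ** 2))
            if val > best_val:             # record best feasible point
                best, best_val = x.copy(), val
        x = x + gamma * (o - beta * cg)    # gradient ascent step
    return best, best_val
```

The trajectory oscillates around the boundary and approaches the constrained optimum $x = (1 + 1/\sqrt{2},\, 1 + 1/\sqrt{2})$ with objective value $3 + 2\sqrt{2} \approx 5.83$.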
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth,trim={0.3cm 0.3cm 0.7cm 0.3cm},clip]{images/Picture4.png}
\caption{Trajectory of optimizing $x_1^2 + x_2^2$ with a constraint $(x_1-1)^2 + (x_2-1)^2 \leq 1$ from a random initialization point. Note that it converges to the optimal solution. Gray circle means there is no cost overrun and blue circle means $\mathbf{b}'\cdot\mathbf{o}' < 0$, i.e., their directions are opposite to each other. Green arrow represents $\mathbf{o}'$ and red arrow $\mathbf{b}'$. Note that our algorithm pulls the trajectory toward the gray circle as soon as there is any small cost overrun.}\label{fig:eg}
\end{figure}
\section{Experiments}
In this section, we introduce the experimental environments and results for both the prediction and the optimization. {\color{black}We collected 10 years of data from the website~\cite{bts}. We predict the market share and optimize the flight frequency for the last month of the dataset after training with the data of all other months.}
In our dataset, there are 2,262 routes and more than 10 air carriers. We predict and optimize for the top-4 air carriers among them, considering their influence on the US domestic air markets, and we ignore other regional/commuter-level air carriers.
Our detailed software and hardware environments are as follows: Ubuntu 18.04.1 LTS, Python ver. 3.6.6, Numpy ver. 1.14.5, Scipy ver. 1.1.0, Pandas ver. 0.23.4, Matplotlib ver. 3.0.0, Tensorflow-gpu ver. 1.11.0, CUDA ver. 10.0, NVIDIA Driver ver. 417.22. Three machines with i9 CPU and GTX1080Ti are used.
\end{enumerate}
{\color{black}
\subsection{Data Crawling}
Our main data source is the famous DB1B database~\cite{bts} released by the Bureau of Transportation Statistics (BTS). They collect all the domestic air tickets sold in the US and some additional management information and release the following three main tables: Coupon, Market, and Ticket. The Coupon table, which contains 880,384,622 rows in total, provides coupon-specific information for each domestic itinerary of the Origin and Destination Survey, such as the operating carrier, origin and destination airports, number of passengers, fare class, coupon type, trip break indicator, and distance. The Market table, which has 535,639,256 rows, contains directional market characteristics of each domestic itinerary of the Origin and Destination Survey, such as the reporting carrier, origin and destination airport, prorated market fare, number of market coupons, market miles flown, and carrier change indicators. The Ticket table, which has 303,276,607 rows, contains summary characteristics of each domestic itinerary of the Origin and Destination Survey, including the reporting carrier, itinerary fare, number of passengers, originating airport, roundtrip indicator, and miles flown. These three tables share a set of common columns, i.e., primary-foreign key relationships in a database, and thus can be merged into one large table. Airline names sometimes change, so we use the unique identifiers assigned by the US government rather than the names.}
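A minimal sketch of this merge using pandas. The shared key column name `ItinID` is an assumption about the schema, used here for illustration only:

```python
import pandas as pd

def merge_db1b(coupon, market, ticket, keys=('ItinID',)):
    """Join the three DB1B tables on their shared key columns
    (the primary-foreign key relationships described above)."""
    merged = coupon.merge(market, on=list(keys), suffixes=('', '_mkt'))
    return merged.merge(ticket, on=list(keys), suffixes=('', '_tkt'))
```

The suffixes keep overlapping non-key columns distinguishable after the join.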
\subsection{Market Share Prediction}
\subsubsection{Baseline Methods}
We compare our proposed model with two baseline prediction models.
Model1~\cite{suzuki2000relationship} considers an air carrier's frequency, delay, and safety. Model2~\cite{wei2005impact} studies the effect of aircraft size and seat availability on market share and considers all other variables such as price and frequency. Model1 and Model2 are conventional methods based on multi-logit regression and are trained using numerical solvers. Model3 is our neural network-based model, which uses the network features as well.
{\color{black}To train the market share prediction models, we use a learning rate of 1e-4 that decays by a ratio of 0.96 every 100 epochs. The number of layers in our neural network is $l=\{3,4,5\}$ and the dimensionality of the hidden vector is $d=\{16, 32\}$. We train 1,000 epochs for each model and use the Xavier initializer~\cite{Glorot10understandingthe} for initializing weights and the Adam optimizer for updating weights. We use cross validation to choose the best configuration: given a training set with $N$ months, we choose a random month and validate with the selected month after training with all other $N-1$ months. We repeat this $N$ times.
In addition, we test other standard regression algorithms. In particular, we are interested in robust regression algorithms such as TheilSen, AdaBoost Regression, and RandomForest Regression. We use the same cross validation method for them as well.}
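The month-wise cross validation above can be sketched as a deterministic enumeration of the per-month splits (each month serves once as the validation month):

```python
def month_cv_splits(months):
    """Leave-one-month-out cross validation: for each of the N months,
    yield (training months, validation month), where the training set
    is the other N-1 months."""
    for i, m in enumerate(months):
        yield months[:i] + months[i + 1:], m
```

Iterating over the generator produces exactly $N$ train/validation pairs.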
\begin{figure}\centering
\footnotesize
\includegraphics[width=1\columnwidth]{pred_baseline.png}
\caption{Histogram of RMSE scores --- lower values are preferred. X-axis is the RMSE score and Y-axis is the number of routes.}
\label{fig:pred_baseline}
\end{figure}
\begin{table}\centering
\footnotesize
\caption{Median/Average RMSE and $R^2$. The up-arrow (resp. down-arrow) means higher (resp. lower) is better. The best results are indicated in bold font.}
\begin{tabular}{@{}cc|ccc@{}}\hline
&&Median RMSE $\downarrow$ &$R^2\uparrow$ &Mean RMSE $\downarrow$\\\hline
\multirow{6}{*}{10 routes}
&TheilSen &0.048 &0.944 &0.052\\
&AdaBoost &0.029 &0.970 &0.027\\
&RandomForest &0.029 &\textbf{0.979} &0.025\\
&Model1~\cite{suzuki2000relationship} &0.026 &0.965 &\textbf{0.024}\\
&Model2~\cite{wei2005impact} &0.035 &0.953 &0.030\\
&{\bf Model3 (Ours)} &\textbf{0.023} &0.899 &0.026\\
&Model3 (No Net.) &0.026 &0.884 &0.027\\\hline
\multirow{6}{*}{1,000 routes}
&TheilSen &0.080 &0.855 &0.087\\
&AdaBoost &0.021 &0.964 &0.033\\
&RandomForest &0.024 &0.968 &0.033\\
&Model1~\cite{suzuki2000relationship} &0.021 &0.957 &0.033\\
&Model2~\cite{wei2005impact} &0.020 &0.983 &0.035\\
&{\bf Model3 (Ours)} &\textbf{0.010} &\textbf{0.988} &\textbf{0.025}\\
&Model3 (No Net.) &0.019 &0.978 &0.030\\\hline
\multirow{6}{*}{2,262 routes}
&TheilSen &0.0813 &0.707 &0.088\\
&AdaBoost &0.017 &0.933 &\textbf{0.031}\\
&RandomForest &0.014 &0.942 &\textbf{0.031}\\
&Model1~\cite{suzuki2000relationship} &0.033 &0.944 &0.041\\
&Model2~\cite{wei2005impact} &0.030 &0.976 &0.033\\
&{\bf Model3 (Ours)} &\textbf{0.007} &\textbf{0.983} &0.038\\
&Model3 (No Net.) &0.013 &0.969 &0.040\\
\hline
\end{tabular}
\label{table:avg_rmse_predbaseline}
\end{table}
\subsubsection{Experimental Results}
Fig.~\ref{fig:pred_baseline} shows the histogram of RMSE scores for Model1, 2, and 3. We experimented with three scenarios (i.e., the top-10, top-1,000, and top-2,262 routes in terms of the number of passengers). Our Model3 shows a higher density in low-RMSE regions than the other models.
The median/average root-mean-square error (RMSE) and $R^2$ scores are summarized in Table~\ref{table:avg_rmse_predbaseline}. Our Model3 has much better median RMSE and $R^2$ scores than the other models (especially for the largest-scale prediction with 2,262 routes). Our mean RMSE is sometimes worse than the baselines', but we do not consider this significant because the low median RMSE indicates that Model3 is better than the others for the majority of routes. In particular, we achieve a median RMSE of 0.007 for the 2,262-route predictions vs. 0.030 by Model2. RandomForest also shows reasonable accuracy in many cases.
For the top-10 routes, most models perform well. This is because it is not easy for our model to build reliable network features with only 10 routes. However, our main goal is to predict accurately at a larger scale.
{\color{black}We also compare the accuracy of our proposed model without the network features, denoted by ``No Net.'' in the table. When we do not use any network features, the accuracy of the market share predictions slightly decreases. Considering the scale of the market, however, an error of even a few percentage points can result in a big loss in the optimization phase. Therefore, our proposed prediction model is the most suitable for defining the objective function of our proposed optimization problem.}
\subsection{Market Influence Maximization}
\subsubsection{Baseline Methods}\label{sec:base}
We describe three baseline methods: greedy, dynamic programming, and an exhaustive algorithm. Greedy methods are effective in many optimization problems; in particular, greedy provides an approximation ratio of around 63\% for submodular maximization. Unfortunately, our optimization problem is not submodular. Due to its simplicity, however, we compare with the following greedy method, which iteratively chooses the route with the maximum marginal increment of market influence and increases its flight frequency by $\alpha$. In general, the step size $\alpha$ is 1; for faster convergence, however, we test $\alpha =\{1, 10\}$. The complexity of the greedy algorithm is $\mathcal{O}( \frac{budget_k\cdot N_k}{\alpha \cdot avg\_cost_{k}})$, where $N_k$ is the number of routes and $avg\_cost_{k}$ is the average cost for air carrier $k$ over the routes.
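A sketch of this greedy baseline. Here `influence` is a stand-in for querying the trained prediction model, and the toy interface (dicts keyed by route) is ours:

```python
def greedy_frequencies(routes, influence, cost, budget, alpha=1):
    """Repeatedly add alpha flights to the route with the largest
    marginal influence gain, while the budget allows."""
    freqs = {r: 0 for r in routes}
    spent = 0.0
    while True:
        best_r, best_gain = None, 0.0
        base = influence(freqs)
        for r in routes:
            if spent + alpha * cost[r] > budget:
                continue                      # this move breaks the budget
            freqs[r] += alpha                 # trial increment
            gain = influence(freqs) - base
            freqs[r] -= alpha                 # undo the trial
            if gain > best_gain:
                best_r, best_gain = r, gain
        if best_r is None:                    # no affordable positive gain
            break
        freqs[best_r] += alpha
        spent += alpha * cost[best_r]
    return freqs
```

Each outer iteration queries the model once per route, which is the source of the $\mathcal{O}( \frac{budget_k\cdot N_k}{\alpha \cdot avg\_cost_{k}})$ complexity above.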
Dynamic programming, branch and bound, and GroupGreedy were used to solve a similar problem in~\cite{An:2016:MFM:2939672.2939726,An:2017:DFA:3055535.3041217}. However, all these algorithms assume that routes are independent, which is not the case in our work because we use the network features. Therefore, their methods are not applicable to our setting (see Sections~\ref{sec:opt} and~\ref{sec:diff} for more detailed descriptions). Most importantly, these methods can solve maximization problems with at most 30 routes.
Instead, we can use a brute-force algorithm when the number of routes is small. Given three routes $\{r_1, r_2, r_3\}$, for instance, the number of possible solutions is $f_{r_1}^{max} \times f_{r_2}^{max} \times f_{r_3}^{max}$. This is already a very large search space because each $f_{r_i}^{max}$ is several hundred for a popular route in a month. However, we do not need to test solutions one by one. We create a large tensor of $|\mathcal{R}| \times |\mathcal{R}| \times q$ dimensions, where $q$ is the number of queries, and query $q$ solutions at the same time. In general, GPUs can process such large queries quickly. Even with GPUs, however, we cannot query more than a few routes because the search space volume grows exponentially. We also use the step sizes $\alpha=\{5,10\}$; $\alpha=1$ is not feasible in the brute-force search even with state-of-the-art GPUs. Thus, the complexity becomes $\mathcal{O}(\frac{f_{r_1}^{max}}{\alpha} \times \frac{f_{r_2}^{max}}{\alpha} \times \frac{f_{r_3}^{max}}{\alpha})$.
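The brute-force grid search can be sketched as a plain CPU enumeration (in practice we batch these queries as tensor operations on GPUs; the interface here is illustrative):

```python
from itertools import product

def brute_force(max_freqs, influence, cost, budget, alpha=10):
    """Grid search with step alpha over all routes' frequencies,
    keeping the best budget-feasible solution."""
    grids = [range(0, m + 1, alpha) for m in max_freqs]
    best, best_val = None, float('-inf')
    for f in product(*grids):                 # every grid point
        if sum(c * x for c, x in zip(cost, f)) > budget:
            continue                          # budget violated
        v = influence(f)
        if v > best_val:
            best, best_val = f, v
    return best, best_val
```

The number of grid points is $\prod_i f_{r_i}^{max}/\alpha$, which is why the method is only feasible for a few routes.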
\begin{table}[t]
\centering
\footnotesize
\caption{Optimization results for the top-3 routes. Multiplying by 10 will lead to the real scale of passenger numbers because the DB1B database includes 10\% random samples of air tickets.}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multirow{9}{*}{\rotatebox[origin=c]{90}{\# of Passengers}}
&\multicolumn{1}{|c|}{Ground Truth} &4,960 &307 &1,792 &3,124\\
&\multicolumn{1}{|c|}{Lagrange, Real\_Init (Ours)} &4,964 &308 &1,842 &3,126 \\
&\multicolumn{1}{|c|}{ReLU, Real\_Init (Ours)} &4,961 &\textbf{310} &1,854 &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Lagrange, Zero\_Init (Ours)} &4,970 &308 &\textbf{1,891} &3,139 \\
&\multicolumn{1}{|c|}{ReLU, Zero\_Init (Ours)} &4,961 &\textbf{310} &\textbf{1,891} &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=1$} &4,967 &\textbf{310} &\textbf{1,891} &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=10$} &\textbf{4,972} &\textbf{310} &\textbf{1,891} &\textbf{3,144} \\
&\multicolumn{1}{|c|}{Brute-force, Zero\_Init, $\alpha=5$} & \textbf{4,972} & N/A & N/A & N/A\\
&\multicolumn{1}{|c|}{Brute-force, Zero\_Init, $\alpha=10$} & \textbf{4,972} &\textbf{310}&\textbf{1,891}&\textbf{3,144}\\\hline
\end{tabular}
\label{table:r3}
\vspace{1.5em}
\centering
\footnotesize
\caption{Optimized number of passengers for the top-10 routes.}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multirow{7}{*}{\rotatebox[origin=c]{90}{\# of Passengers}}
&\multicolumn{1}{|c|}{Ground Truth} &16,924 &4,022 &20,064 &29,419\\
&\multicolumn{1}{|c|}{Lagrange, Real\_Init (Ours)} &18,612 &4,054 &20,552 &30,220 \\
&\multicolumn{1}{|c|}{ReLU, Real\_Init (Ours)} &18,618 &\textbf{5,024} &\textbf{20,703} &\textbf{30,269}\\
&\multicolumn{1}{|c|}{Lagrange, Zero\_Init (Ours)}&18,583 &4,259 &20,549 &30,074 \\
&\multicolumn{1}{|c|}{ReLU, Zero\_Init (Ours)} &\textbf{18,643} &\textbf{5,024} &20,323 &\textbf{30,269}\\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=1$}&17,016 &\textbf{5,024} &20,515 &29,519 \\
&\multicolumn{1}{|c|}{Greedy, Zero\_Init, $\alpha=10$} &18,078 &\textbf{5,024} &20,515 &\textbf{30,269}\\ \hline
\end{tabular}
\label{table:r10}
\vspace{1.5em}
\centering
\footnotesize
\caption{Running time (in sec.) for the top-10 routes.}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multicolumn{2}{c|}{Lagrange, Real\_Init (Ours)} &40.77 &42.52 &41.86 &41.70 \\
\multicolumn{2}{c|}{ReLU, Real\_Init (Ours)}&\textbf{40.75} &41.48 &\textbf{40.67} &44.30 \\
\multicolumn{2}{c|}{Lagrange, Zero\_Init (Ours)}&43.10 &42.45 &40.94 &40.37\\
\multicolumn{2}{c|}{ReLU, Zero\_Init (Ours)}&40.98 &\textbf{39.90} &40.49 &\textbf{40.31} \\
\multicolumn{2}{c|}{Greedy, Zero\_Init, $\alpha=1$}&910.12 &191.12 &1,074.95 &1,001.04 \\
\multicolumn{2}{c|}{Greedy, Zero\_Init, $\alpha=10$}&89.47 &20.14 &107.82 &101.01\\ \hline
\end{tabular}
\label{table:r10_time}
\end{table}
\begin{table*}
\centering
\footnotesize
\caption{Optimized number of passengers for the top-1,000 and 2,262 routes. Greedy with $\alpha=1$ is not feasible in this scale of experiments.}
\begin{tabular}{@{}c|cccccccc@{}}\hline
&\multicolumn{2}{c}{Carrier 1}&\multicolumn{2}{c}{Carrier 2}&\multicolumn{2}{c}{Carrier 3}&\multicolumn{2}{c}{Carrier 4}\\
\cmidrule{2-9}
&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes\\ \hline
\multicolumn{1}{c|}{Lagrange, Real\_Init (Ours)} &429,581 &487,475 &\textbf{225,623} &307,815 &388,864 &447,633 &546,742 &723,522\\
\multicolumn{1}{c|}{ReLU, Real\_Init (Ours)} &431,261 &489,684 &\textbf{225,623} &\textbf{307,881} &\textbf{390,239} &\textbf{448,421} &547,623 &725,526\\
\multicolumn{1}{c|}{Lagrange, Random\_Init (Ours)} &426,683 &\textbf{498,511} &225,057 &306,268 &373,756 &435,277 &548,310 &\textbf{726,142}\\
\multicolumn{1}{c|}{ReLU, Random\_Init (Ours)} &\textbf{434,154} & 492,784 &224,603 &306,963 &380,318 &447,656 &\textbf{549,092} &721,100\\
\multicolumn{1}{c|}{Greedy, Zero\_Init, $\alpha=10$}&428,196&N/A&225,322&N/A&385,607&N/A&516,348&N/A\\ \hline
\end{tabular}
\label{table:opt_r1000_2262}
\end{table*}
\begin{table*}
\centering
\footnotesize
\caption{Running time (in sec.) for the top-1,000 and 2,262 routes scenarios.}
\begin{tabular}{@{}c|cccccccc@{}}\hline
&\multicolumn{2}{c}{Carrier 1}&\multicolumn{2}{c}{Carrier 2}&\multicolumn{2}{c}{Carrier 3}&\multicolumn{2}{c}{Carrier 4}\\
\cmidrule{2-9}
&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes&1000 routes &2262 routes\\ \hline
\multicolumn{1}{c|}{Lagrange, Real\_Init (Ours)} &440.21 &875.15 &450.30 &964.00 &\textbf{451.10}& {951.56} &{439.56} &{940.17}\\
\multicolumn{1}{c|}{ReLU, Real\_Init (Ours)} &439.18 &{908.66} &{452.39} &947.35 &453.50& \textbf{937.25} &438.75 &950.60\\
\multicolumn{1}{c|}{Lagrange, Random\_Init (Ours)}&\textbf{438.06} &\textbf{878.73} &451.15 &947.35 &453.39 &949.80 &440.17 &948.20\\
\multicolumn{1}{c|}{ReLU, Random\_Init (Ours)} &442.12 &891.05 &\textbf{448.85} &\textbf{936.29} &452.47 &967.23 &\textbf{438.49} &\textbf{928.13}\\
\multicolumn{1}{c|}{Greedy, Zero\_Init, $\alpha=10$}&84,643.31&N/A&13,414.46&N/A&35,116.56&N/A&302,272.43&N/A\\ \hline
\end{tabular}
\label{table:time_r1000_2262}
\end{table*}
\subsubsection{Parameter Setup}
For all methods, we keep the flight frequency $f_{r,k,freq}$ of air carrier $C_k$ in route $r$ at or below the maximum frequency $f_{r}^{max}$ observed in the DB1B database. This is very important for ensuring feasible frequency values, because overly high frequencies may not be accepted in practice due to the limited capacity of airports. This restriction can be implemented using the $clip\_by\_value(\cdot)$ function of Tensorflow.
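A NumPy stand-in for this clipping step (the actual implementation uses TensorFlow's $clip\_by\_value(\cdot)$; the function name here is ours):

```python
import numpy as np

def project_frequencies(freqs, f_max):
    """Box projection applied after each ascent step, keeping every
    route's frequency within [0, f_r^max]."""
    return np.clip(np.asarray(freqs, dtype=float),
                   0.0, np.asarray(f_max, dtype=float))
```

Applying the projection after every gradient step keeps all intermediate solutions within the per-route frequency bounds.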
In addition, we need to properly initialize frequency values in Algorithm~\ref{alg:adaptive-gd}. We test two ways to initialize frequencies: i) Real\_Init initializes the flight frequency values with the ground-truth values observed in the dataset --- this initialization method is only for our method because Greedy monotonically increases the frequency values; ii) Zero\_Init initializes all the frequencies to zeros (for the Greedy and Brute-force) or random values close to zero (for our method). In all methods, we set the total budget to the ground-truth budget.
We tested $\epsilon =\{1, 100, 1000\}$ but found no significant difference in the achieved final optimization values. For the following experiments, we choose $\epsilon=1000$ to speed up the optimization process. We use 10 as the learning rate $\gamma$ and run 500 epochs.
Note that the DB1B database includes 10\% random samples of air tickets\footnote{See the overview section in \url{https://www.transtats.bts.gov/DatabaseInfo.asp?DB_ID=125}.}, so our reported passenger numbers multiplied by 10 give the real scale. In this paper, we list values in the original scale of the DB1B database for better reproducibility.
\subsubsection{Experimental Results}
We first compare all the aforementioned methods in a small sized problem with only 3 routes. Especially, the brute-force search is possible only for this small problem.
For the top-3 route optimization, we choose the top-3 biggest routes and the top-4 air carriers in terms of the number of passengers transported, and optimize the flight frequencies in the 3 routes for each air carrier for the last month of our dataset. In Table~\ref{table:r3}, the detailed optimized market influence values are listed for the various methods. Surprisingly, all methods achieve similar values; we think all of them are good at solving this small problem. However, the brute-force method is not feasible in some cases where the maximum frequency limits $f_{r}^{max}$ in the routes are large --- we mark with `N/A' those whose runtime is prohibitively large.
Experimental results of the top-10 route optimization are summarized in Table~\ref{table:r10}. Our method based on the ReLU activation produces the best results for all the top-4 air carriers. The Lagrange multiplier-based optimization also produces many reasonable results better than Greedy's. Greedy shows the worst performance in this experiment. In Table~\ref{table:r10_time}, the runtimes are also reported. Our method is 2--22 times faster than Greedy, except for Carrier 2 with $\alpha=10$.
For the top-1,000 and 2,262 routes, experimental results are listed in Tables~\ref{table:opt_r1000_2262} and~\ref{table:time_r1000_2262}. Our methods produce the best optimized values in the least amount of time. In particular, our method is about 690 times faster than Greedy with $\alpha=10$ for Carrier 4. Greedy is not feasible for 2,262 routes.
Our method shows a \emph{(sub-)linear} increase of runtime w.r.t. the number of routes. It takes about 40 seconds for the top-10 routes and 400 seconds for the top-1,000 routes: when the problem size becomes two orders of magnitude larger, from 10 to 1,000, the runtime increases only by one order of magnitude. Considering that we solve an NP-hard problem, this sub-linear runtime growth is an outstanding achievement. Moreover, our method consistently shows the best optimized values in almost all cases.
{\color{black}Greedy is slower than our method due to its high complexity $\mathcal{O}( \frac{budget_k\cdot N_k}{\alpha \cdot avg\_cost_{k}})$, as described in Sec.~\ref{sec:base}. When the budget limit $budget_k$ and the number of routes $N_k$ are large, it must query the prediction models many times, which significantly delays its solution search. Greedy is thus a classical black-box search method whose efficiency is much worse than our proposed method's. One can consider our method a white-box search method because the gradients flow directly to update the flight frequencies.}
\subsection{Profit Maximization}
In this subsection, we demonstrate that our framework can be readily generalized to maximize profits. To this end, we modify the objective function in Eq.~\eqref{eq:obj} as follows with the same budget constraint:
\begin{align*}\begin{split}
\max_{f_{r}^{max} \geq f_{r,k,freq}\geq 0, r \in \mathcal{R}}& \sum_{r \in \mathcal{R}} price_{r, k} \times demand_r \times m_{r,k}\\ & - \sum_{r \in \mathcal{R}}cost_{r,k} \times f_{r,k,freq},
\end{split}\end{align*}where $price_{r, k}$ is the average ticket price of air carrier $C_k$ at route $r$. Thus, the new objective function indicates that we subtract the cost term from the revenue term, which yields profits.
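The modified objective can be computed as in the sketch below, where the array names mirror the symbols above (arrays are indexed by route; this is an illustration, not our released implementation):

```python
import numpy as np

def profit(price, demand, share, cost, freq):
    """Profit objective: per-route revenue price * demand * market share,
    minus the frequency-cost term."""
    revenue = np.sum(np.asarray(price) * np.asarray(demand) * np.asarray(share))
    expense = np.sum(np.asarray(cost) * np.asarray(freq))
    return float(revenue - expense)
```

The optimization then proceeds exactly as before, with the same budget constraint and the same adaptive gradient ascent.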
After modifying the objective function, we perform experiments with the top-1,000 routes; the results are presented in Table~\ref{table:r1000_profit}. Compared to Greedy with Zero\_Init and $\alpha=10$, our methods always give better optimization results. The running time is similar to that of the influence maximization and is thus omitted here.
\begin{table}
\centering
\footnotesize
\caption{Monthly profits for the top-1,000 routes (in million dollars).}
\begin{tabular}{@{}cc|cccc@{}}\hline
&&Carrier&Carrier&Carrier&Carrier\\
&&1&2&3&4\\ \hline
\multicolumn{2}{c|}{Lagrange, Zero\_Init (Ours)}&960.37 &\textbf{254.71} &\textbf{764.35} &\textbf{1,507.49} \\
\multicolumn{2}{c|}{ReLU, Zero\_Init (Ours)} &\textbf{961.63} &254.55 &758.83 &\textbf{1,507.49}\\
\multicolumn{2}{c|}{Greedy, Zero\_Init, $\alpha=10$}&916.73 &253.83 &743.44 &1,333.05\\ \hline
\end{tabular}
\label{table:r1000_profit}
\end{table}
\section{Conclusion}
We presented a prediction-driven optimization framework for maximizing air carriers' market influence, which customizes existing market share prediction models by adding transportation network features and advances large-scale optimization through the proposed adaptive gradient method. Our approach suggests a way to unify data mining and mathematical optimization.
Our network feature-based prediction shows better accuracy than existing methods. Our adaptive gradient method can optimize for all the US domestic routes in our dataset at the same time whereas state-of-the-art methods are applicable to at most tens of routes.
\bibliographystyle{IEEEtran}
\label{sec:intro}
Recent work has revived ideas from the late 1980's and early 1990's \cite{Coleman:1988cy,Giddings:1988cx,Giddings:1988wv,Polchinski:1994zs} (based in turn on the earlier refs. \cite{Hawking:1987mz,Giddings:1987cg,Lavrelashvili:1987jg,Hawking:1988ae}) suggesting that spacetime wormholes elucidate the quantum physics of black holes
\cite{Saad:2018bqo,Saad:2019lba,Blommaert:2019wfy,Saad:2019pqd,Penington:2019kki,Marolf:2020xie,Blommaert:2020seb,Bousso:2020kmy,Stanford:2020wkf}, and other issues involving gravitational entropy \cite{Chen:2020tes}. Here the term spacetime wormhole is used to denote a
configuration in the quantum gravity path integral for which two separate connected components of the spacetime boundary are connected through the bulk of spacetime; see \cref{fig:wormholes}.
Furthermore, because boundary partition functions tend not to factorize in the presence of spacetime wormholes (see e.g., \cite{Maldacena:2004rf}), the above suggestions have led to renewed discussion of the nature of the AdS/CFT correspondence. Indeed, as described in the above references it may be that a given bulk theory is dual to an ensemble of field theories -- though see also comments in \cite{Pollack:2020gfa}, \cite{McNamara:2020uza}, \cite{Belin:2020hea}, \cite{Liu:2020jsv} and \cite{Altland:2020ccq}. The physics of spacetime wormholes and their possible implications are thus very much a part of current research.
\begin{figure}[h]
\centering
\begin{tikzpicture}[scale=0.6]
\coordinate (u1) at (0,6);
\coordinate (u2) at (-3,0);
\coordinate (u3) at (4,-1);
\draw[ultra thick, black] ($(u1) + (-1.5,0)$) .. controls (-1,3) and (-4.5,2) .. ($(u2) +(-1.8,0)$ );
\draw[ultra thick, black] ($(u1) + (1.5,0)$) .. controls (1.2,4) and (6.5,1) .. ($(u3) +(1.6,0)$ );
\draw[ultra thick, black] ($(u2) +(1.8,0)$ ) .. controls (-1.8,1.3) and (-1,1.95) .. ($(u1)+(0,-4)$) .. controls (1, 1.95) and (3.5,0.5) .. ($(u3) +(-1.6,0)$ );
\draw[ultra thick, blue!70, fill=rust!20] (u1) ellipse [x radius = 1.5, y radius = 0.6];
\draw[ultra thick, blue!70, fill=rust!20] (u2) ellipse [x radius = 1.8, y radius = 0.7];
\draw[ultra thick,blue!70, fill=rust!20] (u3) ellipse [x radius = 1.6, y radius = 0.5];
\end{tikzpicture}
\caption{Schematic illustration of a spacetime wormhole connecting three boundaries.}
\label{fig:wormholes}
\end{figure}
However, attempting to understand issues involving spacetime wormholes brings one face-to-face with the absence of a fully developed and universally accepted set of rules for manipulating and interpreting quantum gravity path integrals. This deficit can lead to much confusion in both the technical investigation of such path integrals and in communicating the results. Our work below seeks to aid both tasks by pointing out certain discrete choices that must be made in order to define a theory of quantum gravity from diagrams like those in \cref{fig:wormholes}, and by exploring certain implications of such choices.
In particular, one may note that the spacetimes of \cref{fig:wormholes} are similar to the diagrams of worldsheet string theory and to the Feynman diagrams of QFT. Taken together with other parallels between free particles in Minkowski space and simple models of quantum gravity, this feature is often taken to suggest that the full structure of quantum gravity will again resemble that of string theory or QFT \cite{Kuchar1981,Caderni:1984pw,Moss:1986yd,McGuigan:1988vi,Banks:1988je,Rubakov:1988jf,Giddings:1988wv,Hosoya:1988aa,Strominger:1988si,Hawking:1991vs,Lyons:1991im}, in which context the process of building the corresponding theory of quantum gravity
is often called `third quantization' (also see the recent discussion in \cite{Giddings:2020yes}). However, we avoid using this term below due to its status as a work-in-progress and the resulting lack of a clear definition in the literature.\footnote{For similar reasons, we also avoid use of the terms which have previously been applied, e.g., the ``multiverse field theory'' parenthetically mentioned in \cite{Giddings:1988wv}, the ``universal field theory'' of \cite{Banks:1988je}, and the ``universe field theory" coined in \cite{Anous:2020lka}.}
We emphasize below that such QFT-like approaches correspond only to certain possible choices that might be made in interpreting the diagrams of \cref{fig:wormholes}. In contrast, distinctly different choices were (perhaps implicitly) made in recent comparisons of Jackiw-Teitelboim quantum gravity with ensembles of matrix models \cite{Saad:2018bqo,Saad:2019lba,Blommaert:2019wfy,Saad:2019pqd,Penington:2019kki,Blommaert:2020seb}, and (more explicitly) in general
arguments \cite{Marolf:2020xie} that spacetime wormholes lead to superselection sectors for boundary partition functions associated with states of a so-called `baby universe' Hilbert space;\footnote{Similar-but-different conclusions were reached in \cite{Coleman:1988cy,Giddings:1988cx,Giddings:1988wv} by following the QFT-like 3rd quantization paradigm together with an additional `locality' assumption that restricts attention to an abelian sub-algebra, though this assumption can then be questioned as in
\cite{Giddings:1988wv}. See also further comments in \cref{sec:Disc}.} see also \cite{Balasubramanian:2020jhl,Gesteau:2020wrk} and the Lorentz-signature discussion of baby universes in \cite{Marolf:2020rpm}.
Indeed, a key feature of the worldline formulation of QFT \cite{Feynman:1951gn} is that the boundary conditions on worldline path integrals are associated with the arguments of correlation functions, and thus with the non-abelian algebra of quantum fields (see e.g., \cite{Strassler:1992zr,Schubert:2001he} for applications). But the arguments of \cite{Marolf:2020xie} for baby-universe superselection sectors imply that boundary conditions of quantum gravity path integrals correspond to the arguments of correlation functions associated with an abelian algebra of operators that can be simultaneously diagonalized on the baby universe Hilbert space $\mathcal{H}_\mathrm{BU}$. This apparent tension was highlighted in \cite{Anous:2020lka}. To resolve this, below we focus on carefully identifying the steps in the QFT-like constructions that deviate from the framework defined in \cite{Marolf:2020xie}. In particular, we will see that this issue is not related to any choice of spacetime signature, as both Lorentz- and Euclidean-signature constructions can in principle lead to either sort of algebra. Instead, the critical issue is whether the inner product on the quantum gravity Hilbert space is constructed from an adjoint (or ${\sf CPT} $ conjugation) operation that leaves the set of allowed boundary conditions invariant.
We will proceed by example, exploring a series of constructions one might use to relate path integral amplitudes to some quantum gravity inner product in various simple models. Our main goal is to illustrate some key places where choices must be made, and where QFT-like approaches deviate from the framework of \cite{Marolf:2020xie}. But this is only one step in analyzing the treatment of quantum gravity path integrals. We will thus not concern ourselves with making the models particularly realistic. In particular, we will mostly study models which do not allow universes to split and join, so that our path integrals reduce to a collection of cylinders. Though it is important to investigate such splitting and joining interactions in detail in the future, at least in perturbation theory, it is clear that adding such interactions to our models should not change any qualitative conclusions.
In fact, we will consider models of quantum gravity in which spacetime is one-dimensional (so that the above cylinders degenerate to become just line segments). Such models may thus be called ``worldline theories''. This choice was made for simplicity and also for ease of comparison with QFT. Since there is no concept of spatial boundary for one-dimensional Lorentzian spacetimes, our models are most analogous to studies of closed universes in higher dimensions. In particular, we encourage the reader to think of the quantum gravity Hilbert spaces described below as analogues of the `baby universe sectors' of higher-dimensional models discussed in \cite{Coleman:1988cy,Giddings:1988cx,Giddings:1988wv,Polchinski:1994zs,Marolf:2020xie}. We will thus often refer to them as baby universe Hilbert spaces below. Comments on higher dimensional cases are interspersed throughout the text, but a full treatment of higher dimensional cases may require additional inputs beyond those discussed here.
\begin{table}[]
\centering
\begin{tabular}{l|c|c|c}
Choices & ESFTs \cref{sec:EST}& QFT-like \cref{sec:EQFT,sec:LQFT} & GATs \cref{sec:GAT} \\
\hline \hline
1) Proper Time (Lapse) Range ($\mathbb{R}^\pm$, $\mathbb{R}$) & $\mathbb{R}^+$ & $\mathbb{R}^+$ & $\mathbb{R}$ \\
2) Spacetime (Worldline) Signature (E,L) & E & E/L & L \\
3) Restricted Boundary Conditions & No & Yes & No \\
4) ${\sf CPT} $ requires extra $\mathbb{Z}_2$? & No & For E target & No \\
5) Target Space Signature & E & E/L & any but E \\
\end{tabular}
\caption{The choices explored below associated with transforming a worldline path integral into a candidate quantum gravity Hilbert space, and the options chosen to define so-called Euclidean Statistical Field Theories (ESFTs), QFT-like theories, and Group Averaged Theories (GATs). E and L denote Euclidean and Lorentzian signatures respectively. Other terminology will be explained in the sections below. }
\label{tab:choices}
\end{table}
The set of options we explore is enumerated in \cref{tab:choices}, though we defer a full explanation of the terms used there to the sections below. We make no claim that this list is exhaustive. In particular, we primarily consider unoriented spacetimes, only occasionally commenting on the possibility of including an orientation (e.g., in parallel with the treatment of \cite{Stanford:2019vob} for Jackiw-Teitelboim gravity).
We furthermore make no claim that any of the options described below correspond precisely to the way that specific models were studied in \cite{Marolf:2020xie} or in
\cite{Saad:2018bqo,Saad:2019lba,Blommaert:2019wfy,Saad:2019pqd,Penington:2019kki,Blommaert:2020seb}. We thus defer any discussion of the detailed connection between those references and the approaches below to the discussion in \cref{sec:Disc}.
We begin in \cref{sec:reviewSS} below by reviewing the argument from \cite{Marolf:2020xie} that quantum gravity quantities associated with asymptotic boundaries define an algebra of simultaneously-diagonalizable operators on a so-called `baby universe' Hilbert space. This also provides an opportunity to survey the main structures required to define a quantum gravity Hilbert space from a gravitational path integral. Some further preliminaries for one-dimensional gravity theories are then described in \cref{sec:general}.
We then proceed by examining various types of constructions in turn in \cref{sec:EST,sec:EQFT,sec:LQFT,sec:GAT}, stepping through the ingredients and the choices to be made in each case. For each type of construction, we focus on simple examples and indicate various potential generalizations without dwelling on the details. We conclude in \cref{sec:Disc} with a final discussion emphasizing open issues and the relation of our models to higher dimensional quantum gravity.
\section{Boundary observables and Baby Universes}
\label{sec:reviewSS}
To set the stage for our discussion we begin by recapitulating the essence of the argument from \cite{Marolf:2020xie}. The path integral for any quantum system defines a map from a set of boundary conditions to numbers, the amplitudes. The amplitudes depend on a prescription for the dynamics (the set of configurations to be summed over in the path integral and the corresponding weights, typically specified by an action), as well as the allowed set of boundary conditions. We may superpose boundary conditions, turning the set of allowed boundary conditions into a vector space on which the map to amplitudes acts linearly.
\begin{figure}
\centering
\begin{tikzpicture}[scale=1]
\coordinate (b) at (7,-1);
\coordinate (k) at (7,1);
\coordinate (a) at (0,0);
\draw[ball color=red!20, fill opacity=0.2,thick,black] (a) circle (2cm);
\draw[thick, ball color= green!30,fill opacity=0.4] (a) ellipse (2cm and 0.3cm);
\draw[ball color=red!20, fill opacity=0.2, thick,black] ($(k)+(2,0)$) arc (0:180:2);
\draw[thick,ball color= green!30,fill opacity=0.65, black] (k) ellipse (2cm and 0.3cm);
\draw[ball color=red!20,fill opacity=0.2, thick,black] ($(b)+(2,0)$) arc (0:-180:2);
\draw[thick, ball color= green!30,fill opacity=0.45, black] (b) ellipse (2cm and 0.3cm);
\node at ($(k)$) [below=8pt] {$\bra{\Phi}$};
\node at ($(b)$) [above=8pt] {$\ket{\Phi}$};
\node at ($(a) +(2,0)$) [right] {$\braket{\Phi}$};
\end{tikzpicture}
\caption{Slicing open a quantum amplitude to reveal the bra and ket components. }
\label{fig:slicing}
\end{figure}
Given the quantum path integral, we can extract from it a Hilbert space by cutting it open along a codimension-1 slice. This follows from the convolutional property of path integrals; each cut corresponds to a resolution of the identity. The two halves of the path integral produced by the cut each use the boundary conditions appropriate to that part of the path integral to construct a state in the Hilbert space or in its dual; i.e., a ket-vector or a bra-vector. The full path integral constructed by sewing them back together then computes the inner product between these bra- and ket- states, see \cref{fig:slicing}. In other words, we split the boundary conditions into `bra' and `ket' pieces (typically corresponding to future and past, respectively, when our cut is at some fixed time in QM or QFT), and the amplitudes define a bilinear product between these pieces. To obtain a Hilbert space (with an inner product between two ket-vectors, say) we must have an anti-linear map that turns a ket boundary condition for the path integral into a bra boundary condition and that squares to the identity. We may think of this as prescribing the action of a {\sf CPT} map on the space of allowed boundary conditions for the path integral.
Now, if the quantum gravity path integral is a sum over all topologies, then it naturally allows topologies with an arbitrary number of boundaries. As a result, if $\BC$ is the space of allowed boundary conditions at a single boundary, then
any list $b_1,\ldots b_n$ of elements $b_i \in \BC$ (of any length $n$) will define an allowed boundary condition for our quantum gravity theory. Furthermore, the path integral is to be computed by summing over \emph{all} spacetimes with boundaries matching $b_1,\ldots , b_n$. Thus the ordering of the boundary conditions plays no role, and two lists that differ by a permutation should be viewed as defining the same boundary conditions. Using $\expval{ b_1,\ldots, b_n}$ to denote the path integral with boundary conditions $b_1,\ldots, b_n$, we may then write
\begin{equation}
\label{eq:BUperm}
\expval{ b_1, b_2, \ldots, b_n }
= \expval{ b_{\sigma(1)}, b_{\sigma(2)}, \ldots, b_{\sigma(n)} } \,,
\end{equation}
where $\sigma \in S_n$ is a permutation of the boundary conditions. This turns the vector space of boundary conditions into an abelian algebra, with the product defined by disjoint union of boundaries.
As noted above, we should be able to cut open such a path integral to define a state. Due to the presence of boundaries, there are various ways in which we could introduce such a cut. For simplicity let us introduce a cut that does not intersect any of the existing boundaries $b_1, b_2, \ldots, b_n$, but merely partitions them into two disjoint subsets. Since each piece of the resulting path integral should define a state on the cut, there should be states $\ket{ a_1,\ldots , a_m}$ associated with arbitrary lists of boundary conditions, where the symmetry \eqref{eq:BUperm} means that we must identify states that differ only by the ordering of the boundary conditions $a_1,\ldots ,a_m $. And since the above cuts are closed surfaces, we should think of these states as describing closed universes without boundary. As a result, it is traditional to call this the Hilbert space $\mathcal{H}_\mathrm{BU}$ of `baby universes,' with the idea that such closed universes may have somehow been `produced' by some larger (infinite) `parent universe' having a non-trivial asymptotic boundary.
For each allowed boundary condition $b \in \BC$, it is then natural to define an operator $\hat b$ on $\mathcal{H}_\mathrm{BU}$ that simply inserts an additional boundary with the stated boundary condition; i.e.,
\begin{equation}
\hat b\ket{ a_1,\ldots, a_m } = \ket{b, a_1,\ldots, a_m }.
\end{equation}
Since the ordering of the boundary conditions is unimportant, it is manifest that any two such operators commute:
\begin{equation}\label{eq:commute}
\hat b_1 \hat b_2 \ket{a_1,\ldots , a_m} = \ket{b_1, b_2, a_1,\ldots , a_m}= \ket{b_2, b_1, a_1,\ldots , a_m}
= \hat b_2 \hat b_1 \ket{a_1,\ldots , a_m}.
\end{equation}
Finally, there is one special state $\ket{\mathrm{HH}}$ (the Hartle-Hawking no-boundary state) which corresponds to the absence of boundaries, $m=0$. All the states of $\mathcal{H}_\mathrm{BU}$ are then generated by the action of the algebra of boundary-inserting operators $\hat{b}$ acting on $\ket{\mathrm{HH}}$,
\begin{equation}
\ket{b_1,\ldots, b_n} = \hat{b}_1 \cdots \hat{b}_n \ket{\mathrm{HH}},
\end{equation}
and linear combinations.
As noted above, this is not quite enough to define the inner product on $\mathcal{H}_\mathrm{BU}$, since in addition we must choose an anti-linear `${\sf CPT} $' operation acting on boundary conditions. Indeed, we will see below that a single set of amplitudes may be associated with several different Hilbert spaces, by making a different choice of this conjugation operation. This choice is equivalent to defining the adjoint of the boundary-inserting operators $\hat{b}$ (and the conjugate to the no-boundary state). We then have
\begin{equation} \label{eq:QGIP1}
\begin{aligned}
\braket{ a_1,\ldots , a_m }{b_1,\ldots , b_n }
&=
\mel{\mathrm{HH}}{ \,\hat{a}_1^\dag \cdots \hat{a}_m^\dag \,\hat{b}_1 \cdots \hat{b}_n \,}{\mathrm{HH}} \\
&=
\expval{ a_1^\dag, \ldots , a_m^\dag, b_1,\ldots , b_n }\,,
\end{aligned}
\end{equation}
where the second line is just a path integral amplitude written in the same notation as in \eqref{eq:BUperm}, and where we have assumed that the application of {\sf CPT} to a list of boundary conditions $a_1,\ldots a_m$ is given by applying {\sf CPT} to the individual members of the list. This means that for any $b\in\BC$, $\hat{b}^\dag$ acts by inserting some boundary $b^\dag\in\BC$ (reusing the adjoint notation from the operator interpretation), so $\dag$ acts on the space of connected boundaries and extends to multiple boundaries in the simplest possible manner.\footnote{In particular, this means that the no-boundary condition is left invariant, so the norm of $\ket{\mathrm{HH}}$ is given by the path integral over closed spacetimes with no boundary whatsoever. We may also choose to normalize $\ket{\mathrm{HH}}$, which means defining the amplitudes to include a denominator of the path integral over closed spacetimes. Equivalently, we can integrate only over spacetimes without closed components, in the same way that vacuum diagrams are removed by normalization in QFT.\label{foot:HHnorm}}
This defines a sesquilinear product on the space of boundaries. For this to give a sensible Hilbert space, we must require that it is positive semi-definite; that is, the norm of any state thus computed is nonnegative. Under that assumption, we can define $\mathcal{H}_\mathrm{BU}$ as the completion of the span of states $\left| b_1,\ldots, b_n \right\rangle$ (i.e., the completion of polynomials in boundary conditions $b_i$) with the given inner product.\footnote{Defining $\mathcal{H}_\mathrm{BU}$ from the amplitudes of the algebra $\BC$ in the Hartle-Hawking state in this way is closely analogous to the GNS construction \cite{Gelfand:1943imb,Segal:1947irr} (see also \cite{Gesteau:2020wrk,Anous:2020lka}), but not technically identical since we do not a priori have a norm on the space of operators.} Since the inner product is required only to be positive \emph{semi}-definite, nontrivial linear combinations of the states $\left| b_1,\ldots, b_n \right\rangle$ can be `null': they have zero norm, and hence in the completion $\mathcal{H}_\mathrm{BU}$ are equal to the zero state. One can informally say that $\mathcal{H}_\mathrm{BU}$ is constructed as a quotient by such null states (though it is not technically necessary to invoke such a quotient to define the completion).
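The appearance of null states can be made concrete in a small toy computation (a hypothetical example, not a model analyzed in the text): take a single boundary condition $b$ whose amplitudes are the moments $\expval{b^k} = \sum_i p_i \lambda_i^k$ of a probability distribution supported on just two values $\lambda_i$. The Gram matrix of the three states $\hat b^m \ket{\mathrm{HH}}$ for $m=0,1,2$ then has rank two, so one nontrivial combination is null:

```python
# Hypothetical toy data: a single boundary condition b whose amplitudes
# are moments of a two-point probability distribution.
p = [0.25, 0.75]      # probabilities (assumed for illustration)
lam = [1.0, 3.0]      # the two possible values of the boundary insertion

def amplitude(k):
    """Path-integral amplitude <b^k> = sum_i p_i * lam_i^k."""
    return sum(pi * li**k for pi, li in zip(p, lam))

# Gram matrix G[m][n] = <HH| b-hat^m b-hat^n |HH> for the three states
# b-hat^m |HH|, m = 0, 1, 2 (here b-hat is self-adjoint).
G = [[amplitude(m + n) for n in range(3)] for m in range(3)]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

# Only two underlying values => the three states are linearly dependent,
# so the Gram matrix is degenerate and a null combination exists.
print(det3(G))  # → 0.0
```

The completion described above then yields a two-dimensional Hilbert space, with the null direction identified with the zero vector.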
However, there are two more issues that we should consider. The first is to show that our boundary-inserting operators $\hat{b}$ are truly well-defined on $\mathcal{H}_\mathrm{BU}$. The potential issue here arises due to the above quotient by null states, since $\hat{b}$ is a well-defined operator on $\mathcal{H}_\mathrm{BU}$ only if it preserves the space of null states. But this is straightforward to show from \eqref{eq:QGIP1}. The key observation is that $\hat{b}$ acting to the right is equivalent to $\hat{b}^\dag$ acting to the left, and we have assumed that $\hat{b}^\dag$ acts by adding some boundary $b^\dag$:
%
\begin{equation}
\mel{ a_1,\ldots , a_m }{\hat{b}}{b_1,\ldots , b_n}
= \mel{ a_1,\ldots , a_m }{(\hat{b}^\dag)^\dag}{b_1,\ldots , b_n}
= \braket{ a_1,\ldots , a_m, b^\dag }{b_1,\ldots , b_n}.
\end{equation}
%
As a result, for any null state $\ket{N}$ and any boundaries $a_1,\ldots , a_m$ we have
\begin{equation}
\label{eq:null}
\mel{ a_1,\ldots , a_m }{ \hat b }{N } =
\braket{ a_1,\ldots , a_m, b^\dag}{N } =0,
\end{equation}
where the last equality holds because $\ket{N}$ is null. This means that the overlap of $\hat{b} \ket{N}$ with any state is zero, so $\hat{b} \ket{N}$ is also null, and thus $\hat b$ preserves the null space as desired.
Secondly, we would also like to show that the $\hat{b}$ can be simultaneously diagonalized. In general these operators are not Hermitian, but they are \emph{normal}, meaning that $\hat{b}$ commutes with its adjoint $\hat b^\dagger$, $\commutator{\hat{b}}{\hat{b}^\dag}=0$. This follows from the fact that $\hat{b}^\dagger$ also acts by inserting a boundary $b^\dag$, so we can apply \eqref{eq:commute} with $b_1=b$ and $b_2=b^\dag$. This means that we can apply the spectral theorem, so $\hat{b}$ is diagonalizable. In fact, all of the operators $\hat{b}$ and $\hat{b}^\dagger$ commute, and hence all $\hat{b}$ can be simultaneously diagonalized as desired. The baby universe Hilbert space $\mathcal{H}_\mathrm{BU}$ has a basis of simultaneous eigenvectors $\ket{\alpha}$ for all boundary-inserting operators $\hat{b}$, labeled by some (continuous or discrete) parameters $\alpha$:
\begin{equation}\label{eq:alphastates}
\hat{b} \ket{\alpha} = b_\alpha \ket{\alpha}
\end{equation}
for some $b_\alpha\in\mathbb{C}$, for all $b\in \BC$. These $\alpha$-states give superselection sectors for the commutative algebra generated by boundary-inserting operators.
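Expanding $\ket{\mathrm{HH}}$ in this eigenbasis makes the physical content of these superselection sectors explicit. Assuming for simplicity a discrete set of $\alpha$-states and a normalized no-boundary state (cf.\ \cref{foot:HHnorm}), every path integral amplitude becomes an ensemble average over sectors:

```latex
\begin{equation}
\expval{b_1,\ldots,b_n}
= \mel{\mathrm{HH}}{\,\hat{b}_1 \cdots \hat{b}_n\,}{\mathrm{HH}}
= \sum_\alpha p_\alpha \, b_{1,\alpha} \cdots b_{n,\alpha} \,,
\qquad
p_\alpha = \abs{\braket{\alpha}{\mathrm{HH}}}^2 \geq 0 \,,
\end{equation}
```

with $\sum_\alpha p_\alpha = 1$. The eigenvalues $b_\alpha$ thus behave like classical random variables drawn with probabilities $p_\alpha$, as emphasized in \cite{Marolf:2020xie}.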
It is clear that the above argument is very general. The key point is simply that quantum gravity inner products are given by path integral amplitudes as in \eqref{eq:QGIP1} for some set of single-boundary boundary conditions $\BC$ that is invariant under the action of $\dag$. While this appears to us to be a natural condition to impose on theories of quantum gravity, as we review below it is certainly \emph{not} the case for the construction of QFT from worldline path integrals. This is illustrated by the examples below in which we also discuss certain other choices that must be made to define a Hilbert space from quantum-gravity-like path integrals.
\section{One-dimensional Theories of Gravity}
\label{sec:general}
We now describe a general framework that forms the backbone of our one-dimensional quantum gravity models. Thinking of a gravitational theory as a path integral over spacetimes, we first describe the amplitudes resulting from the sum over one-dimensional manifolds. For the sake of simplicity we will focus on the non-interacting limit (the free theory), where our worldlines do not interact with each other. We then discuss the choice of `matter' degrees of freedom which live on these spacetimes, giving the prominent example of a minisuperspace model obtained as a dimensional reduction of a theory in higher dimensions.
These ingredients will however not suffice to characterize our quantum gravitational theories completely: there will be additional discrete choices that need to be made in order to fully specify the model, as summarized in \cref{tab:choices}. We will explore these choices in detail in \cref{sec:EST,sec:EQFT,sec:LQFT,sec:GAT}.
\subsection{Amplitudes from the sum over one-dimensional spacetimes}
\label{sec:1damps}
For given boundary conditions, our one-dimensional gravity amplitudes will be defined by an integral over the compatible one-dimensional manifolds, perhaps with some `matter' quantum mechanics living on those manifolds.
Fortunately one-dimensional manifolds and metrics are simple: we have only intervals or circles, parameterized by their total
proper length $T>0$. Here we treat the real line as the $T\rightarrow \infty$
limit of an interval.
Since we are interested in the dependence on boundary conditions, we may ignore the circles (which in any case only contribute an overall normalization; see \cref{foot:HHnorm}) and restrict to intervals. For the most part we take the spacetimes we sum over to be simply a union of intervals, though we will take occasion to comment on the generalization to graphs, where we allow several intervals to be sewn together at their boundaries. Throughout our discussion, we will use quantum gravity terminology so that the one-dimensional manifold is a `spacetime,' and a single such manifold represents a `universe' (and not a `particle'). It will, however, sometimes be convenient to also refer to the spacetime as the `worldline' of the universe.
The boundary of a one-dimensional spacetime is a (zero-dimensional) collection of points. As above, we use $\BC$ to denote the set of allowed boundary conditions at any single such point.\footnote{If we were to sum over oriented spacetimes, we should also assign an orientation to the boundary, which means a choice of sign for each point.} Note that in one-dimensional gravity theories these will be conditions on the `matter' fields alone as the space of zero-dimensional metrics is trivial. For example, when one defines a one-dimensional quantum gravity model by Kaluza-Klein reduction of a higher-dimensional theory, most of the features of the gravitational theory in fact become part of the Kaluza-Klein matter sector, with only an overall notion of proper time left to be treated as one-dimensional gravity.
Following the discussion of \cref{sec:reviewSS}, a general quantum gravity boundary condition is an unordered list of elements of $\BC$. The associated quantum gravity amplitude is to be defined by summing over all one-dimensional manifolds compatible with the boundary conditions. Since for now our one-dimensional manifolds are collections of $n$ intervals, in the absence of inter-universe interactions, the number of boundaries must be an even number $2n$ for the amplitude to be nonzero. The topology of spacetime is specified by grouping the boundaries into $n$ pairs, where an interval connects the points in each pair.\footnote{For oriented spacetimes, the endpoints of an interval must have opposite orientation. The spacetime topologies are equivalent to maps from negatively oriented points to positively oriented points, which must be equal in number.} Each resulting pair of boundary conditions then specifies a single-universe amplitude computed by integrating over metrics and corresponding matter fields on the interval. The full quantum gravity amplitude is then formed by multiplying together the $n$ single-universe amplitudes defined by each pairing and then summing over pairings. For $b_1,\ldots, b_{2n} \in \BC$, the associated quantum gravity amplitude may thus be written
\begin{equation}
\label{eq:QGA}
\expval{ b_1,\ldots,b_{2n}} = \sum_{\substack{\text{pairings of} \\\{b_1,\ldots,b_{2n}\}}} \ \prod_{\substack{ \mathrm{pairs} \\ \{b_i,b_j\}} } \ASU(b_i,b_j)\ ,
\end{equation}
where $\ASU(b_i,b_j) = \langle b_i b_j\rangle$ is the `single-universe amplitude' associated with integrating over the parameters of a single interval with the boundary conditions specified by the pair $b_i,b_j$. As in \cite{Marolf:2020xie}, we treat all boundaries as being distinguishable so that there are no additional symmetry factors in \eqref{eq:QGA}. Our non-interacting universe assumption implies that \eqref{eq:QGA} has the form of Wick contractions, so that in simple cases the quantum gravity amplitudes can be written as Gaussian integrals over the space of boundary conditions with a covariance matrix specified by the single-universe amplitudes $\ASU$, and the baby universe Hilbert space will be a Fock space built on a single-universe Hilbert space.\footnote{ Should we include interactions between worldlines we would have to consider summing over graphs with intermediate splitting and joining of worldlines. This can be accounted for perturbatively by including in \eqref{eq:QGA} the sums over appropriate collections of graphs with fixed boundaries. In this case the quantum gravity Hilbert space is no longer the Fock space over a single-universe Hilbert space.
}
Note in particular that \eqref{eq:QGA} is invariant under arbitrary permutations of the single-universe boundary conditions
$b_1,\ldots,b_{2n}$. This may remind the reader of bosonic quantum field theory, and thus raise questions of whether other possibilities might be allowed as well. One might also ask about further modifications of \eqref{eq:QGA}. While such questions may be of interest, we will not explore them below. Instead, we take the structure embodied in \eqref{eq:QGA} as given and consider the additional choices that must be made both to define the single-universe amplitudes used in \eqref{eq:QGA} (discussed below) and to construct the quantum gravity Hilbert space from the above quantum gravity amplitudes (discussed in \cref{sec:EST,sec:GAT}).
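The sum over pairings in \eqref{eq:QGA} is easy to realize algorithmically. The following minimal sketch (in Python, with a hypothetical symmetric matrix `A` of single-universe amplitudes) implements the standard Wick-contraction recursion: pair the first boundary with each of the others and recurse on the rest.

```python
def pairing_sum(boundaries, A):
    """Sum over perfect matchings of `boundaries`, weighting each pair
    (i, j) by the symmetric single-universe amplitude A[i][j].
    Amplitudes with an odd number of boundaries vanish."""
    if not boundaries:
        return 1.0  # empty product: the no-boundary amplitude, normalized to 1
    if len(boundaries) % 2:
        return 0.0
    first, rest = boundaries[0], boundaries[1:]
    total = 0.0
    # Pair the first boundary with each remaining one and recurse:
    # exactly the Wick-contraction recursion for Gaussian moments.
    for k, other in enumerate(rest):
        total += A[first][other] * pairing_sum(rest[:k] + rest[k+1:], A)
    return total

# Hypothetical single-universe amplitudes for three boundary conditions.
A = [[2.0, 0.5, 1.0],
     [0.5, 3.0, 0.25],
     [1.0, 0.25, 1.5]]

# Four boundaries b_0, b_1, b_2, b_0: the three pairings contribute
# A[0][1]A[2][0] + A[0][2]A[1][0] + A[0][0]A[1][2].
print(pairing_sum([0, 1, 2, 0], A))  # → 1.5
```

Since boundaries are distinguishable, no symmetry factors appear: each of the $(2n-1)!!$ pairings contributes exactly once.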
To construct the full amplitudes it remains to define the single-universe amplitudes $\ASU$, which requires two ingredients. The first is a definition of the integral over spacetime (worldline) metrics on each interval. In practice, this will involve specifying whether the signature is Lorentzian or Euclidean and choosing a range of integration for the length (or proper time) $T$ of each interval. The second is the specification of the `matter' model and its associated dynamics and boundary conditions.
Explicitly, we can write $\ASU$ with an integral over spacetimes (labeled by their length $T$) and over matter fields, labeled $x$:
\begin{equation}\label{eq:asudef}
\ASU(b_1,b_2) = \int_D dT \int_{b_1}^{b_2} \mathcal{D}x \, e^{\eta\, S_\mathrm{matter}[x;T]} \,,
\end{equation}
where $\eta = +i$ and $\eta = -1$ for Lorentzian and Euclidean worldlines, respectively, and $b_1,b_2$ indicate some choice of boundary conditions for matter fields $x$ at each end of the worldline. We may want to add a `gravitational action' depending only on metrics, but the only local possibility in one dimension is a `cosmological constant' proportional to $T$, which we have chosen to absorb as a constant shift of the matter Lagrangian. The flat measure $dT$ over metrics is fixed by locality \cite{Cohen:1985sm} (and may also be obtained by starting with a general metric and gauge-fixing one-dimensional diffeomorphisms by the Faddeev-Popov method).\footnote{Non-trivial measures for the proper-time integration have been previously considered, see e.g., \cite{Abel:2019ufz,Abel:2019zou}.} For further discussion of the measure we refer the reader to \cite{DeBoer:1995hv,Bastianelli:2006rx,Bastianelli:2013tsa,Edwards:2019eby}. Finally, we must choose a range of integration $D$ for the lapse or proper time $T$, with the only choices respecting locality being $\mathbb{R}$, $\mathbb{R}^+$ or $\mathbb{R}^-$.
To define the amplitudes, the main choices open to us are explicit in \eqref{eq:asudef}:
\begin{itemize}
\item The worldline signature $\eta$.
\item The range of proper time $D$.
\item The matter dynamics (fields $x$ and action $S_\mathrm{matter}$).
\item The allowed boundary conditions $b$ for matter.
\end{itemize}
As we will see in the course of our discussion, it is possible to construct examples that differ from the worldline formalism of QFTs by our choice of the domain $D$.
In addition, to define the quantum gravity Hilbert space we must specify the ${\sf CPT} $ operation $\dag$. It is important to note that $\dag$ is not in general determined by the choices above, and that it plays a critical role in the theory. In particular, the quantum gravity theories of sections \ref{sec:EST} and \ref{sec:EQFT} are distinguished only by the choice of $\dag$.
We outline the class of matter theories we study below; the remainder of the paper is then organized by the various permutations of the other choices, and is devoted to discussing their consequences for the resulting baby universe Hilbert space.
There are some additional choices that are less important for our purposes, and will only be mentioned parenthetically. First, we could choose our spacetimes to carry an orientation; we will mostly concentrate on unoriented worldlines, which requires restricting to matter theories with time-reversal symmetry (c.f., the discussion of JT gravity in two dimensions \cite{Stanford:2019vob}). Likewise, we could choose to have worldline supersymmetry, which for instance provides an example (once we make some of our discrete choices) of a topological sigma model \cite{Witten:1982im} (see \cite{Birmingham:1991ty} for the classic review of these developments). Finally, a more radical generalization is to sum not only over disjoint unions of intervals but general graphs, which we will have occasion to comment on in various cases.
\subsection{The matter theory}
\label{sec:matterchoice}
The main choice which determines the amplitudes $\ASU$ will be the matter theory. We can describe this either by a path integral, specifying matter fields and Lagrangian (as we have done above), or by a Hilbert space of matter states and a Hamiltonian $H$. In the Hamiltonian formulation, the contribution to $\ASU$ from an interval with proper length $T$ will be given by matrix elements of $e^{-iHT}$ or $e^{-HT}$ for Lorentzian or Euclidean spacetimes respectively, between states determined by the boundary conditions.
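As a minimal numerical illustration of the Euclidean option (using a hypothetical two-state matter system in place of $L^2(\mathcal{M}_{\mathrm{target}})$), the matrix elements of $e^{-HT}$ follow from the spectral decomposition of $H$, and sewing two intervals reproduces a single longer one:

```python
import math

# Hypothetical two-state matter Hamiltonian (symmetric, positive):
# eigenvalues 1 and 3 with eigenvectors (1,1)/sqrt(2) and (1,-1)/sqrt(2).
H = [[2.0, -1.0], [-1.0, 2.0]]
energies = [1.0, 3.0]
s = 1.0 / math.sqrt(2.0)
vectors = [[s, s], [s, -s]]

def euclidean_kernel(x, y, T):
    """Matrix element <x| exp(-H T) |y> via the spectral decomposition."""
    return sum(math.exp(-E * T) * v[x] * v[y]
               for E, v in zip(energies, vectors))

# Composing two intervals of lengths T1 and T2 reproduces one interval of
# length T1 + T2 -- the convolution/sewing property used above to cut
# path integrals open along a slice.
T1, T2 = 0.3, 0.7
sewn = sum(euclidean_kernel(0, z, T1) * euclidean_kernel(z, 1, T2)
           for z in range(2))
direct = euclidean_kernel(0, 1, T1 + T2)
print(abs(sewn - direct))  # ≈ 0 up to rounding
```

The Lorentzian matrix elements of $e^{-iHT}$ would follow from the same decomposition with `math.exp(-E * T)` replaced by the complex phase `cmath.exp(-1j * E * T)`.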
In describing each class of examples below, we will focus mostly on a matter theory taking the form of a sigma-model, so that the field content is a map from the one-dimensional spacetime to some target space $\mathcal{M}_{\mathrm{target}}$. We take this target space to be equipped with a metric $g_{\mu\nu}(x)$, and thus with a Laplacian (or wave operator) $\nabla^2$. The matter action takes the form
\begin{equation}\label{eq:smatter}
S_\mathrm{matter} = \int d\tau \left[ \tfrac{1}{2} g_{\mu\nu} \dv{x^\mu}{\tau}\, \dv{x^\nu}{\tau} - U(x)\right] ,
\end{equation}
allowing for the possibility of a potential $U(x)$ on $\mathcal{M}_{\mathrm{target}}$. Equivalently, the Hamiltonian is
\begin{equation}\label{eq:Hmatter}
H = -\tfrac{1}{2}\nabla^2 + U(x) \,,
\end{equation}
acting on the Hilbert space $L^2(\mathcal{M}_{\mathrm{target}})$.
The most general possible boundary condition is to give a wavefunction on $\mathcal{M}_{\mathrm{target}}$, that is, a state in the matter Hilbert space $L^2(\mathcal{M}_{\mathrm{target}})$. This is a linear combination of boundary conditions that fix the fields to take a specific value $x\in \mathcal{M}_{\mathrm{target}}$ at the corresponding endpoint of spacetime, corresponding to delta-function wavefunctions $|x\rangle_\mathrm{matter}$, where we use the matter label to avoid confusion with $\mathcal{H}_\mathrm{BU}$. Thus, we will label our boundary conditions as points $x$, so the amplitudes will be denoted $\langle x_1,\ldots,x_n\rangle$ for lists of points in $\mathcal{M}_{\mathrm{target}}$.
Models of this kind naturally arise in the so-called mini-superspace truncation of gravity \cite{Misner:1972ab,Misner:1973zz} in which one restricts a higher-dimensional model of gravity to e.g., homogeneous spacetimes. At this stage we will not restrict the signature of $\mathcal{M}_{\mathrm{target}}$, so in particular the Hamiltonian may not be bounded below.\footnote{ We will take the metric to have mostly positive signature for Lorentzian targets.}
We will however require two properties that are \emph{not} obviously natural in the most straightforward construction of such minisuperspace models. In particular, we will take $\mathcal{M}_{\mathrm{target}}$ to be geodesically complete, so that $\nabla^2$ defines an essentially self-adjoint operator on $L^2(\mathcal{M}_{\mathrm{target}})$. If $\mathcal{M}_{\mathrm{target}}$ is Lorentzian then $\nabla^2$ will have a continuous spectrum, while for a Euclidean target one may end up with a discrete spectrum if $\mathcal{M}_{\mathrm{target}}$ is compact.
\subsection{Mini-superspace models}
\label{sec:minisuper}
To provide additional context for this treatment of the one-dimensional gravitational sector, let us briefly discuss an example of a minisuperspace model (see e.g., \cite{Ryan:1975jw,Kodama:1997tk}). The dynamics of spatially homogeneous $3+1$ Lorentz signature Einstein-Hilbert gravity on a 3-torus with vanishing cosmological constant, the \emph{Bianchi I model}, is closely related to that of a free massless particle in 2+1 Minkowski space (say, with inertial coordinates $x^0, x^1, x^2$), see \cite{Misner:1974qy}. In particular, with $Y^i$ for $i=1,2,3$ being coordinates on the spatial 3-torus, consider the cosmological geometry with metric:
\begin{equation}
\gamma_{ab}\, dX^a\, dX^b = - N_0(t)^2\, dt^2 + e^{2 x^0(t)}\, (e^{2X(t)})_{ij} \,dY^i dY^j
\end{equation}
%
Here $X(t)$ is a diagonal matrix after fixing the diffeomorphism symmetries other than time reparameterization; specifically we take
$X(t) = \operatorname{diag}\{x^1(t) +\sqrt{3}\, x^2(t), x^1(t) - \sqrt{3}\, x^2(t), -2x^1(t)\}$ to describe the anisotropies. The overall scale-factor of the torus is given by $e^{x^0}$. Using the standard lapse function $N_0$ that measures proper time and momenta $p_0, p_1,p_2$ conjugate to $x^0, x^1, x^2$, viz., $p^\mu = \dv{x^\mu}{t}$, the Einstein-Hilbert Lagrangian may be written as (nb: $\mu\in \{0,1,2\}$)
\begin{equation}
\sqrt{-\gamma}\ {}^\gamma R = \dot{x}^\mu p_\mu - N_0\,H_0\,.
\end{equation}
%
Here we introduced, $H_0$, the Hamiltonian constraint, given by
%
\begin{equation}
H_0 = \frac{e^{-3x_0} }{24} (-p_0^2 + p_1^2 + p_2^2) \,.
\end{equation}
%
Now, $H_0$ vanishes on-shell due to the equation of motion obtained by varying $N_0$. Owing to the prefactor $e^{-3x_0}$, the constraint $H_0$ tends to generate evolution that reaches a cosmological singularity at $x_0 = -\infty$.
However, using a rescaled lapse $N = \frac{1}{12} \, N_0\, e^{-3x_0}$ and the associated rescaled constraint
\begin{equation}
\label{eq:rescaledB1}
H =\frac{1}{2}\left( -p_0^2 + p_1^2 + p_2^2\right) ,
\end{equation}
%
we may recast the dynamics in the advertised form of a standard massless particle in 2+1 Minkowski space $\mathbb{R}^{2,1}$.
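As a quick consistency check, the rescaled lapse and constraint leave the constraint term in the Lagrangian unchanged:

```latex
\begin{equation}
N\,H
= \left(\tfrac{1}{12}\, N_0\, e^{-3x_0}\right)
\cdot \tfrac{1}{2}\left(-p_0^2 + p_1^2 + p_2^2\right)
= N_0\,\frac{e^{-3x_0}}{24}\left(-p_0^2 + p_1^2 + p_2^2\right)
= N_0\, H_0 \,,
\end{equation}
```

so the two descriptions impose the same constraint surface and differ only in the associated notion of time.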
In particular, reduction to 0+1 dimensions yields a `matter' theory defined by \eqref{eq:rescaledB1}, which we may think of as a sigma-model with target space $\mathcal{M}_{\mathrm{target}} = \mathbb{R}^{2,1}$. The gravitational sector of the reduced theory naturally has Lorentz signature, and we {\it define} the `proper time' $T$ of the reduced theory by $dT = N dt$. Note that this differs from the natural notion of proper time $N_0dt$ associated with the 3+1 geometry, though it corresponds to the usual notion for a massless particle on $\mathbb{R}^{2,1}$.
We offer this model as an illustration of one way to obtain Lorentz signature target spaces for the worldline theory. Note that from the higher dimensional point of view we are only attempting to keep track of a subset of gravitational degrees of freedom in the minisuperspace approximation. Furthermore, one also holds the higher dimensional topology fixed. Given the natural tendency of gravitational dynamics to lead to cosmological singularities, ${\cal M}_{\text{target}}$ is typically not geodesically complete when the metric on it is defined by the constraint associated with evolution in proper time. For now we note only that this difficulty can typically be circumvented by using a rescaled `conformal time' dynamics near the singularity (which amounts to the use of singularity-avoiding coordinates), so that we can readily construct gravity-inspired models of the mathematical form described here. But we will return to discuss the physics of this rescaling in \cref{sec:Disc}.
\section{Euclidean Statistical Theories}
\label{sec:EST}
For our first two examples, discussed in this section and the next, we will choose the worldline quantum gravity theory to have Euclidean signature. While this is perhaps not as interesting as the Lorentzian theories discussed later from the point of view of higher-dimensional models of quantum gravity, we present it first as a clean and familiar setting to illustrate the impact of certain choices made to define the Hilbert space.
In fact, in both this section and in \cref{sec:EQFT} we will use precisely the same amplitudes, identified with correlation functions of a Euclidean field theory, but nonetheless construct two different Hilbert spaces. The choices we make here will lead to a baby universe Hilbert space $\mathcal{H}_\mathrm{BU}$ which is very natural if we interpret our field theory as a classical statistical model. A different set of choices (designed to make contact with a quantum field theory Hilbert space) will be discussed in \cref{sec:EQFT}.
\subsection{The amplitudes}
\label{sec:ESTamp}
Taking our spacetimes to have Euclidean signature means that the matter amplitudes on an interval of length $T$ are given by the matrix elements $\langle x | e^{-HT} | y \rangle_\mathrm{matter}$. Our use of unoriented worldlines means that we have an $x \leftrightarrow y$ symmetry, so these matrix elements are symmetric and real. After integrating over one-dimensional metrics, the single-universe amplitudes $\ASU(x,y)$ that enter the quantum gravity amplitudes \eqref{eq:asudef} thus take the form
\begin{equation}
\label{eq:ESTASU}
\ASU(x,y) = \int_{\cal D} dT \, \mel{x}{e^{-HT}}{y}_\mathrm{matter}
\end{equation}
for some choice of integration domain ${\cal D}\subset {\mathbb R}$. The three local options are just the positive reals ${\mathbb R}^+$, the negative reals ${\mathbb R}^-$, or the entire real line. But for convergence we require that the Hamiltonian $H$ be positive and that $\mathcal{D} = \mathbb{R}^+$ (or that $H$ be negative, in which case we may replace $H\to -H$ w.l.o.g.). For a sigma-model as in \eqref{eq:Hmatter}, this means that the target space $\mathcal{M}_{\mathrm{target}}$ must have Euclidean signature. With this choice, the resulting amplitudes are given by the matrix elements of the inverse of the Hamiltonian:
\begin{equation}
\label{eq:ESTASU2}
\ASU(x,y) = \int_{{\mathbb R}^+} dT \,\mel{x}{e^{-HT}}{y}_\mathrm{matter} = \mel{x}{\frac{1}{H}}{y}_\mathrm{matter}.
\end{equation}
In other words, $\ASU(x,y)$ is the Green's function for the Hamiltonian $H$ (with appropriate fall-off conditions if $\mathcal{M}_{\mathrm{target}}$ is non-compact).
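As an illustrative numerical check of \eqref{eq:ESTASU2}, consider a free particle on $\mathcal{M}_{\mathrm{target}}=\mathbb{R}$ with the assumed normalization $H = p^2 + m^2$ (conventions in \eqref{eq:Hmatter} may differ by factors of 2). The heat kernel is Gaussian, and the proper-time integral reproduces the familiar Green's function $e^{-m|x-y|}/(2m)$:

```python
import math

# Sketch: verify A_SU(x,y) = <x|1/H|y> for H = p^2 + m^2 on the real line
# (an assumed normalization) by integrating the heat kernel over proper time.

def heat_kernel(x, y, m, T):
    """<x|exp(-H T)|y> for H = p^2 + m^2 on R: a Gaussian in (x - y)."""
    return math.exp(-m * m * T - (x - y) ** 2 / (4.0 * T)) / math.sqrt(4.0 * math.pi * T)

def A_SU(x, y, m, n=200000, T_max=60.0):
    """Midpoint-rule approximation to the integral of the heat kernel over T in (0, T_max]."""
    h = T_max / n
    return h * sum(heat_kernel(x, y, m, (k + 0.5) * h) for k in range(n))

m, x, y = 1.3, 0.2, 1.7
exact = math.exp(-m * abs(x - y)) / (2.0 * m)  # Green's function of p^2 + m^2 on R
assert abs(A_SU(x, y, m) - exact) < 1e-5
```

The exponential decay of the exact answer in $m|x-y|$ reflects the usual Euclidean suppression of long worldlines.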
For general amplitudes, we recall that \eqref{eq:QGA} is computed from \eqref{eq:ESTASU2} via Wick contractions, and can thus be written in terms of an appropriate Gaussian path integral over a real scalar field $\Phi$ on $\mathcal{M}_{\mathrm{target}}$:
\begin{equation}
\label{eq:ESTQGA}
\begin{gathered}
\expval{ x_1,x_2,\ldots, x_n} =
\mathcal{N} \int {\cal D}\Phi \, \ \Phi(x_1) \cdots \Phi(x_n) \; e^{-I[\Phi]},\\
\text{where}\quad I[\Phi] = \int_{\mathcal{M}_{\mathrm{target}}}dx\bigg[\tfrac{1}{2}(\partial \Phi(x))^2+U(x) \Phi(x)^2\bigg],
\end{gathered}
\end{equation}
with $\mathcal{N}$ a normalization constant. In other words, our quantum gravity amplitudes are the correlation functions of a free Euclidean field theory on $\mathcal{M}_{\mathrm{target}}$, consisting of a scalar field $\Phi$ with action $I$.
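For example, the four-boundary amplitude reduces to a sum over the three ways of pairing the boundaries into worldline intervals,
\begin{equation}
\expval{x_1,x_2,x_3,x_4} = \ASU(x_1,x_2)\,\ASU(x_3,x_4)
+ \ASU(x_1,x_3)\,\ASU(x_2,x_4) + \ASU(x_1,x_4)\,\ASU(x_2,x_3),
\end{equation}
which is just Wick's theorem for the Gaussian integral \eqref{eq:ESTQGA}.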
\subsection{The baby universe Hilbert space}
\label{sec:ESTBU}
As already emphasized, the amplitudes alone do not provide sufficient information to construct the baby universe Hilbert space $\mathcal{H}_\mathrm{BU}$. In particular, in order to define the inner product we must additionally choose a conjugation operation, `$\dag$', acting on boundary conditions. Here, we will make the most obvious choice, that $\dag$ acts trivially on the $x$ boundary conditions:
\begin{equation}
x^\dag = x.
\end{equation}
From this, we can construct and interpret the baby universe Hilbert space. As noted earlier, since the general inner product is computed by Wick contractions, $\mathcal{H}_\mathrm{BU}$ is a Fock space built on the single-universe Hilbert space, spanned by $ \ket{x}$ for $x\in\mathcal{M}_{\mathrm{target}}$. A general single universe state is a superposition $\int dx\, F_1(x) \ket{x}$ for some complex-valued function $F_1$ on $\mathcal{M}_{\mathrm{target}}$. The inner product of two such states $F_1,G_1$ is constructed from the single-universe amplitude \eqref{eq:ESTASU2}, as
$\int dx dy \,G_1^*(x)\ASU(x,y)F_1(y)$. This is positive-definite, which can be seen by decomposing $F_1$ in a basis of eigenfunctions of $H$ and using positivity of the corresponding eigenvalues. In particular, there are no nontrivial null (zero-norm) states.
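Explicitly, expanding $F_1$ in eigenstates $H\psi_n = E_n\psi_n$ with $E_n>0$ (sums understood as integrals over any continuous spectrum),
\begin{equation}
\int dx\, dy\, F_1^*(x)\,\ASU(x,y)\,F_1(y) = \sum_n \frac{\left|\braket{\psi_n}{F_1}_\mathrm{matter}\right|^2}{E_n} > 0 .
\end{equation}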
There is in fact a nicer characterization of the full Hilbert space $\mathcal{H}_\mathrm{BU}$, as a space of (complex-valued) functionals $F$ of our scalar field $\Phi$ on $\mathcal{M}_{\mathrm{target}}$. Formally, we may write a functional $F$ in terms of its Taylor expansion,
\begin{equation}
F[\Phi] = F_0 + \int dx\, F_1(x)\Phi(x) + \tfrac{1}{2}\!\int dx_1 dx_2\, F_2(x_1,x_2)\Phi(x_1)\Phi(x_2) + \cdots,
\end{equation}
where $F_n$, the $n^\text{th}$ functional derivative of $F$, is a symmetric function $\mathcal{M}_{\mathrm{target}}^n\to \mathbb{C}$. In particular, the one-universe Hilbert space considered above corresponds to linear functionals, where all $F_n$ vanish for $n\neq 1$. We can then write any state of $\mathcal{H}_\mathrm{BU}$ in terms of a functional $F$, as
\begin{equation}\label{eq:functionalState}
\ket{F} = F_0 \ket{\mathrm{HH}} + \int dx\, F_1(x) \ket{x} + \tfrac{1}{2}\!\int dx_1 dx_2\, F_2(x_1,x_2)\ket{x_1,x_2} + \cdots,
\end{equation}
and the inner product of two such states is given by
\begin{equation}
\braket{G}{F} = \mathcal{N} \int {\cal D}\Phi \, G^*[\Phi] \,F[\Phi] \; e^{-I[\Phi]}.
\end{equation}
This holds term-by-term in the Taylor expansion, since both Gaussian integrals and our quantum gravity amplitudes \eqref{eq:QGA} are computed by Wick contractions.
In this form, we may see that $\mathcal{H}_\mathrm{BU}$ has a simple interpretation if we take the path integral to define a classical statistical system, such as the continuum limit of an Ising-like model. The path integral $\int \mathcal{D}\Phi \,\cdots e^{-I[\Phi]}$ defines a probability distribution (perhaps a Boltzmann distribution where $I[\Phi]$ is $\beta$ times the energy of the given field configuration). The functionals $F$ are then the observables of such a model, which are random variables depending on the probability distribution. The inner product then gives us the covariance matrix of these random variables,
\begin{equation}
\left\langle G \middle|F \right\rangle = \operatorname{Cov}(F,G).
\end{equation}
Thus, $\mathcal{H}_\mathrm{BU}$ is a standard construction in probability theory, the Hilbert space of random variables (of finite variance).
Moreover, it is now simple to interpret the superselection sectors (or $\alpha$-states) of $\mathcal{H}_\mathrm{BU}$: they are states where the field $\Phi$ takes a definite value, and the eigenvalues of boundary operators $\hat{x}$ are given by $\Phi(x)$. The action $e^{-I[\Phi]}$ gives the square of the overlap of (appropriately normalized) $\alpha$ states with the no-boundary state $\ket{\mathrm{HH}}$, so the probability distribution of superselection sectors is precisely identified with the original distribution defining the classical statistical theory.
\subsection{Generalizations}
\label{sec:ESTgeneralizations}
The construction above is illustrated by many familiar examples in which $\mathcal{M}_{\mathrm{target}}$ is compact, but we could just as well choose a non-compact target geometry. For definiteness, consider the Wick rotation of the Bianchi I model introduced in \cref{sec:matterchoice} -- the replacement $x^0 \rightarrow i x_3$ yields\footnote{Here we again use the `rescaled lapse' $N = \frac{1}{12}\, e^{-3x_0}N_0$.}
\begin{equation}
\label{ESTH}
H = \frac{1}{2}\left(p_1^2 + p_2^2 + p_3^2\right)
\end{equation}
in terms of the Euclidean target space ${\cal M}_{\text{target}} = {\mathbb R}^3$.
From \eqref{eq:ESTASU2}, it is clear that we may generalize this approach to allow any matter theory with a positive-definite Hamiltonian, a sufficiently well-defined resolvent operator, and time-reflection symmetry. Notably, this excludes the Lorentz-signature target spaces that naturally occur in quantum gravity models. However, if there is an appropriate ${\mathbb{Z}}_2$ symmetry as above, one can Wick-rotate such models to Euclidean signature and then apply the above approach. As before, the time-reflection symmetry is required due to our choice to sum over unoriented spacetimes.
Considering instead oriented spacetimes removes this requirement (for example, allowing us to include a background magnetic field for our particle), both giving a time orientation for the dynamics and an orientation (a sign) for boundary conditions. In that case, we denote a Dirichlet boundary condition with positive orientation by $\Phi(x)$, and one with negative orientation by $\bar{\Phi}(x)$. Any spacetime is a union of intervals connecting a $\Phi$ boundary with a $\bar{\Phi}$ boundary, so the amplitudes are
\begin{equation}
\label{eq:ESTQGAC}
\begin{split}
\Big\langle \Phi(x_1) \cdots \Phi(x_n) &\,\bar{\Phi}(y_1)\cdots \bar{\Phi}(y_m)\Big\rangle
= \\
& {\cal N} \int {\cal D}\Phi {\cal D}\bar \Phi\ \Phi(x_1)\cdots \Phi(x_n) \,\bar{\Phi}(y_1)\cdots \bar{\Phi}(y_m) \,
e^{-\frac{1}{2}\, \int_{\mathcal{M}_{\mathrm{target}}} (\partial \bar{\Phi})(\partial \Phi)},
\end{split}
\end{equation}
which is just the path integral over a complex scalar field on $\mathcal{M}_{\mathrm{target}}$. In this case we choose $[\Phi(x)]^\dag = \bar{\Phi}(x)$ and $[\bar{\Phi}(x)]^\dag = \Phi(x)$ to make the quantum gravity inner product positive-definite.
The generalization of this case to allow general graphs is now straightforward and familiar. We merely replace the quadratic action in either \eqref{eq:ESTQGA} or \eqref{eq:ESTQGAC} with a more general functional of $\Phi$. Of course, the Euclidean fields $\Phi(x)$ remain simultaneously diagonalizable.
Such models have been discussed in the past, see e.g., \cite{Symanzik:1966euc,Nelson:1973mak}.
We can arrive at interesting possibilities in some cases by restricting the allowed set of boundary conditions. Consider a case where we take $\mathcal{M}_{\mathrm{target}}$ to be Euclidean hyperbolic space $\mathbb{H}_{d}$, and require boundary conditions for the worldlines to end on the asymptotic boundary $S^{d-1} = \partial \mathbb{H}_d$. Take the non-interacting case (without vertices) and with constant potential $U=m^2$. The resulting quantum gravitational amplitudes are the conformally invariant correlators of Euclidean mean field theory\footnote{This depends on a choice of conformal frame (i.e., choice of metric within the conformal class of the boundary $\textbf{S}^{d-1} = \partial \mathbb{H}_d$), related to how we regulate the infinite length of worldlines.} (or a `generalized free theory') defined on the conformal boundary $\partial \mathcal{M}_{\mathrm{target}}$ \cite{Maxfield:2017rkn}. We thus arrive at a critical Euclidean statistical theory defined on the boundary of target space. This example is interesting primarily as the free limit of a gravitational theory with AdS asymptotics, for which the boundary Euclidean statistical theory has a local description by the AdS/CFT correspondence: see further discussion in \cref{sec:Disc}.
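For instance, normalizing the worldline Hamiltonian as $H = p^2 + m^2$ on a unit-radius $\mathbb{H}_d$ (the precise factors here are a choice of convention), the resulting boundary two-point function is that of a generalized free field of fixed scaling dimension,
\begin{equation}
\expval{O(x)\,O(y)} \propto \frac{1}{|x-y|^{2\Delta}}, \qquad \Delta = \frac{d-1}{2} + \sqrt{\frac{(d-1)^2}{4} + m^2},
\end{equation}
the standard relation between bulk mass and boundary dimension in Euclidean AdS$_d$.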
\section{Euclidean approach to QFT-like theories}
\label{sec:EQFT}
In \cref{sec:ESTamp}, we discussed how the amplitudes of a one-dimensional theory of Euclidean quantum gravity lead to the correlation functions of a Euclidean QFT. In \cref{sec:ESTBU} we then constructed a Hilbert space from these amplitudes. But the result of our construction was not what one usually calls `the Hilbert space of the QFT'. In this section, we spell out the different choices that lead from the correlation functions \eqref{eq:ESTQGA} to the more familiar notions of QFT Hilbert space. Such constructions have a long history \cite{Schwinger:1958mma}, and are associated in particular with the Osterwalder-Schrader reconstruction theorem \cite{Osterwalder:1973dx,Osterwalder:1974tc} (see e.g., \cite{Glimm:1987ng}). Our main aim here is to understand these ideas in the context of the argument of \cite{Marolf:2020xie} reviewed in \cref{sec:reviewSS}.
\subsection{Hilbert space construction}
From the gravitational perspective, since we are taking the same amplitudes as in \cref{sec:EST}, we should ask what other choices were made to construct the Hilbert space, and consider other options. The most obvious is our adjoint operation $\dag$. A simple possible generalization of the choice made in \cref{sec:ESTBU} is to define $\dag$ to act locally on $\mathcal{M}_{\mathrm{target}}$, so $x^\dag = \sigma(x)$ for some function $\sigma:\mathcal{M}_{\mathrm{target}}\to\mathcal{M}_{\mathrm{target}}$. Since the adjoint must square to the identity, so must $\sigma$, so it is required to be an involution: $\sigma(\sigma(x))=x$. This means that $\sigma$ can be represented as a self-adjoint and unitary operator acting on the Hilbert space $L^2(\mathcal{M}_{\mathrm{target}})$ of our matter theory. To make the inner product on the one-universe sector conjugate symmetric ($\ASU(\sigma(x),y) = \ASU(\sigma(y),x)^*$), we also require this operator to commute with the matter Hamiltonian $H$, so $\sigma$ should be a $\mathbb{Z}_2$ symmetry of $\mathcal{M}_{\mathrm{target}}$ (preserving the metric and potential). This is of course a constraint on $\mathcal{M}_{\mathrm{target}}$ as well as $\sigma$, since not every matter theory will admit such a symmetry.
The simplest example to have in mind is the product space $\mathcal{M}_{\mathrm{target}} = \Sigma\times \mathbb{R}$ (with a constant potential $U=m^2$), where $\sigma$ acts trivially on $\Sigma$, and reflects the coordinate $t_E$ for the $\mathbb{R}$ factor: $\sigma(t_E)=-t_E$. We will see that this is the most important example for the application to QFT, since the corresponding amplitudes are the Euclidean correlation functions for the vacuum state of a scalar field of mass $m$ on the spatial manifold $\Sigma$, with $t_E$ interpreted as the Euclidean time. A specific example of this is the Bianchi I model with $\mathcal{M}_{\mathrm{target}} = \mathbb{R}^3$, after the Wick rotation $x^0 \to -i\,t_E$.
However, this presents an immediate problem for our inner product on the one-universe Hilbert space. This inner product is computed by the matrix elements of $\sigma \frac{1}{H}$, but this will never be a positive operator, so the inner product will not be positive-definite. The reason is that we can diagonalize $\sigma$ and $H$ simultaneously, so we may choose eigenstates of $H$ with definite parity $\pm 1$ under $\sigma$. But $H$ is a positive operator, so all the eigenstates with negative parity will have negative norm. If all eigenstates have positive parity, it means that $\sigma$ is the identity and we return to the construction of \cref{sec:ESTBU}.
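For instance, for any wavefunction $f$ that is odd under $\sigma$ (i.e., $f\circ\sigma = -f$), commuting $\sigma$ through $H$ gives
\begin{equation}
\braket{f}{f} = \mel{f\circ\sigma}{\tfrac{1}{H}}{f}_\mathrm{matter} = -\mel{f}{\tfrac{1}{H}}{f}_\mathrm{matter} < 0 ,
\end{equation}
so any $\sigma$-odd state has negative norm.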
Inspired by constructions of the QFT Hilbert space, we evade this by restricting the set of allowed boundary conditions. Specifically, we restrict attention to the case when the involution $\sigma$ fixes a hypersurface $\Sigma$, and $\mathcal{M}_{\mathrm{target}} -\Sigma$ has two connected components $\mathcal{M}_{\mathrm{target}}^-$ and $\mathcal{M}_{\mathrm{target}}^+=\sigma(\mathcal{M}_{\mathrm{target}}^-)$. We then define our Hilbert space to be spanned by states $|x_1,\ldots,x_n\rangle$, but now only allowing $x_i\in\mathcal{M}_{\mathrm{target}}^-$. We illustrate this in \cref{fig:EFT}. In the simple example $\mathcal{M}_{\mathrm{target}} = \Sigma\times \mathbb{R}$, the hypersurface $\Sigma$ lies at the moment of time-reflection symmetry $t_E=0$, $\mathcal{M}_{\mathrm{target}}^+$ and $\mathcal{M}_{\mathrm{target}}^-$ are the regions $t_E>0$ and $t_E<0$ respectively, and we define states with operator insertions at negative $t_E$ only.
\begin{figure}
\centering
\begin{subfigure}[t]{\textwidth}\centering
\includegraphics[width=.4\textwidth]{EQFTfigtarget}
\caption{A target space $\mathcal{M}_{\mathrm{target}}$ with reflection symmetry $\sigma$. The reflection fixes the surface $\Sigma$, which splits the target space into two pieces $\mathcal{M}_{\mathrm{target}}^\pm$.}
\label{fig:EFTa}
\end{subfigure}
\hfill
\begin{subfigure}[t]{\textwidth}\centering
\raisebox{30pt}{\scalebox{1.5}{$|x\rangle\sim$}}
\includegraphics[width=.15\textwidth]{EQFTfigx}
\raisebox{30pt}{\scalebox{1.5}{, $|y\rangle\sim$}}
\includegraphics[width=.15\textwidth]{EQFTfigy}
\raisebox{30pt}{\scalebox{1.5}{$\longrightarrow\langle y|x\rangle\sim$}}
\raisebox{-10pt}{\includegraphics[width=.2\textwidth]{EQFTfigIP}}
\caption{The single-universe states $|x\rangle$ and $|y\rangle$ are defined by a choice of points $x$ and $y$ (indicated by the crosses $\times$) in the half-space $\mathcal{M}_{\mathrm{target}}^-$. The inner product $\langle y|x\rangle$ is computed by the worldline path integral, with boundary conditions specifying that worldlines end at $x$ and $\sigma(y) = y^\dag$. More general states are defined by including multiple insertions in $\mathcal{M}_{\mathrm{target}}^-$, and most generally by superpositions (both summing over the number of insertions and integrating over their locations with some weighting).}
\label{fig:EFTb}
\end{subfigure}
\caption{\label{fig:EFT}}
\end{figure}
Note that with this restriction on boundary conditions, our adjoint operation $x^\dag = \sigma(x)$ does not preserve the space $\mathcal{M}_{\mathrm{target}}^-$ of single-universe boundary conditions to which \eqref{eq:QGIP1} applies. This construction therefore violates an implicit assumption of \cite{Marolf:2020xie}. We will later see some implications for the operators $\hat{x}$.
To see that these choices result in a positive definite inner product and to make contact with QFT constructions, we return to the path integral expression \eqref{eq:ESTQGA} for the amplitudes. We may write our inner product as
\begin{equation}
\label{eq:EQFTIP}
\braket{y_1,\ldots , y_n }{ x_1,\ldots, x_m} =
{\cal N} \, \int {\cal D}\Phi \ \Phi(\sigma(y_1)) \cdots \Phi(\sigma(y_n)) \Phi(x_1) \cdots \Phi(x_m)\;
e^{-I[\Phi]},
\end{equation}
where $x_1,\ldots, x_m,y_1,\ldots ,y_n$ are points in $\mathcal{M}_{\mathrm{target}}^-$. To consider general states (linear combinations of states $|x_1,\ldots,x_m\rangle$), we may denote them as functionals of fields as in \eqref{eq:functionalState}, with
\begin{equation}
\langle G|F\rangle = \mathcal{N} \int \piD{\Phi} \,G^*[\Phi\circ\sigma]F[\Phi]e^{-I[\Phi]},
\end{equation}
but now $F,G$ are functionals depending only on restriction of the field $\Phi$ to $\mathcal{M}_{\mathrm{target}}^-$. Since $F[\Phi]$ depends only on the field in $\mathcal{M}_{\mathrm{target}}^-$ and $G^*[\Phi\circ\sigma]$ only on the field in $\mathcal{M}_{\mathrm{target}}^+$, we may split the path integral into two pieces, integrating separately over fields $\Phi^\pm$ restricted to the respective regions. These are only identified at the common boundary $\Sigma$, so we have $\left.\Phi^\pm\right|_\Sigma=\Phi_\Sigma$, and the inner product is written as the residual path integral on $\Sigma$:
\begin{gather}
\langle G|F\rangle =
\int_\Sigma {\cal D}\Phi_\Sigma \Psi_G^*[\Phi_\Sigma] \Psi_F[\Phi_\Sigma] , \label{eq:wavefuntionalIP} \\
\text{ where}\quad \Psi_F[\Phi_\Sigma] = \sqrt{\mathcal{N}} \int_{\Phi^-|_\Sigma = \Phi_\Sigma} \piD{\Phi^-} F[\Phi^-] e^{-I_-[\Phi^-]}. \label{eq:wavefunctional}
\end{gather}
The path integral defining $\Psi_F$ is performed over fields $\Phi^-$ on $\mathcal{M}_{\mathrm{target}}^-$ with the specified values on $\Sigma$, and $I_-$ is the action in \eqref{eq:ESTQGA}, but with the integration restricted to $\mathcal{M}_{\mathrm{target}}^-$. In this form, the inner product is manifestly positive semidefinite, since the norm of the state $F$ is given by integrating the positive functional $|\Psi_F(\Phi_\Sigma)|^2$ with respect to a positive measure.\footnote{Interestingly, it is more complicated to show positive-definiteness working directly within the worldline formalism. This is related to the comment in \cite{Marolf:2020xie} that it is unclear what conditions on the quantum gravity path integral are required for positive-definiteness of the full quantum gravity inner product. We include a worldline argument for positive-definiteness of the inner product in \cref{app:WPD} for the Gaussian case.}
Indeed, we can interpret \eqref{eq:wavefunctional} as the path integral computation of a Schr\"odinger-picture wavefunctional on the surface $\Sigma$, and \eqref{eq:wavefuntionalIP} as the inner product of two such wavefunctionals. In the example $\mathcal{M}_{\mathrm{target}} = \Sigma\times \mathbb{R}$, the no-boundary state $|\mathrm{HH}\rangle$ constructs the vacuum of the QFT at time $t_E=0$, and nontrivial boundary conditions for worldlines produce excited states by inserting (Euclidean time-ordered) operators in the lower half-space $t_E<0$. Our choices are essentially equivalent to the Osterwalder-Schrader construction of the QFT Hilbert space on the spatial slice $\Sigma$ from Euclidean correlation functions.
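In the free example $\mathcal{M}_{\mathrm{target}} = \Sigma\times\mathbb{R}$ with constant $U = m^2$, the half-space path integral \eqref{eq:wavefunction} with $F = 1$ is Gaussian and can be evaluated explicitly, giving (up to normalization, and with factors depending on the normalization of $U$) the familiar free-field vacuum wavefunctional
\begin{equation}
\Psi_{\mathrm{HH}}[\Phi_\Sigma] \propto \exp\left(-\tfrac{1}{2}\int_\Sigma \Phi_\Sigma\, \sqrt{-\nabla_\Sigma^2 + m^2}\;\, \Phi_\Sigma\right).
\end{equation}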
\subsection{Null states}
Unlike in \cref{sec:ESTBU}, the inner product defined above admits nontrivial null states. As observed in \cite{Anous:2020lka}, we may think of these as arising from the QFT field equations, which we may here write as $H\Phi(x)=0$ where $H$ is the matter Hamiltonian for our one-dimensional quantum gravity theory. Importantly, this holds only at separated points: when field insertions collide this may be violated by contact terms. Indeed, these contact terms explain why the field equations did \emph{not} give rise to null states in \cref{sec:ESTBU}. However, with our restricted boundary conditions $x,y\in\mathcal{M}_{\mathrm{target}}^-$, operators $\Phi(x)$ and $\Phi(\sigma(y))$ can never collide, so inner products do not produce contact terms.
More explicitly, let us focus on the one-universe Hilbert space, and consider states $|f\rangle = \int dx\, f(x) |x\rangle$, where $f$ has compact support contained in $\mathcal{M}_{\mathrm{target}}^-$. In particular, we may choose a function $f = H h$, where the support of $h$ is contained in $\mathcal{M}_{\mathrm{target}}^-$.
Now the inner product with another state $|g\rangle= \int dx\, g(x) |x\rangle$ is given by
\begin{equation}\label{eq:nullEQFT}
\langle g|f\rangle = \langle g\circ\sigma |\frac{1}{H}|f\rangle_\mathrm{matter} = \langle g\circ\sigma |h\rangle_\mathrm{matter} =0,
\end{equation}
where $|f\rangle_\mathrm{matter}$ is the state in the matter Hilbert space with wavefunction $f$ (with an $L^2$ inner product on $\mathcal{M}_{\mathrm{target}}$). The final inner product vanishes because the support of $h$ and $g\circ\sigma$ are disjoint, lying within $\mathcal{M}_{\mathrm{target}}^-$ and $\mathcal{M}_{\mathrm{target}}^+$ respectively. Hence, $|f\rangle$ has vanishing inner product with any state, so it must be null: $|f\rangle=0$. To connect this with the field equations $H\Phi(x) =0$, the state is associated with the functional $\int dx\, H h(x) \Phi(x)=\int dx\, h(x) H\Phi(x)$ (where we may `integrate by parts' because $H$ is a symmetric operator on $L^2(\mathcal{M}_{\mathrm{target}})$).
While at first the states of the one-universe Hilbert space would appear to be determined by functions $f$ on $\mathcal{M}_{\mathrm{target}}^-$, such null states mean that the independent states in $\mathcal{H}_\mathrm{BU}$ are in fact determined by far less data: a function only on $\Sigma$ (see appendix \ref{app:WPD} for a non-redundant characterization of states in terms of a function $\Sigma\to \mathbb{C}$). Thus, the QFT Hilbert space constructed in this section is in a sense much smaller than that discussed in \cref{sec:EST}, with one-universe states determined by a function on the submanifold $\Sigma$ of $\mathcal{M}_{\mathrm{target}}$.
\subsection{Operator algebra}
Finally, we briefly discuss the construction of operators in this formalism. From the general discussion of \cref{sec:reviewSS}, one might expect to have boundary-inserting operators $\hat{x}$ acting on the baby universe Hilbert space. But in fact, these operators are not well-defined, since they do not preserve the space of null states. For example, consider acting on the null state $|f\rangle$ with $f=Hh$ in \eqref{eq:nullEQFT} with $\hat{x}$, inserting a boundary at a point $x\in \mathcal{M}_{\mathrm{target}}^-$, and take the overlap with the no-boundary state:
\begin{equation}
\langle \mathrm{HH} | \hat{x} |f\rangle = \langle x|\frac{1}{H}|f\rangle_\mathrm{matter}= \langle x|h\rangle_\mathrm{matter} = h(x).
\end{equation}
This will be nonzero for some choice of $h$, so $\hat{x}|f\rangle$ is not a null state. As a result, the boundary-inserting operator $\hat{x}$ does not give a well-defined operator on the baby universe Hilbert space, where we have performed a quotient by null states.
Such a result was possible only because our Hilbert space does not obey the axioms of \cite{Marolf:2020xie}. The particular failure is the invariance of the set of allowed boundary conditions under the CPT operation $\dag$, which means that the argument of \eqref{eq:null} is inapplicable.
This is in fact perfectly in line with expectations from QFT.\footnote{This is associated with the well-known fact that while the Osterwalder-Schrader reconstruction directly reconstructs the states of QFT, it does not provide a similarly direct construction of the operator algebra.} We would expect the operators $\hat{x}$ to be associated with the quantum fields $\hat{\Phi}(x)$, but these do not give a well-defined operator algebra on a Euclidean space. For example, if $\mathcal{M}_{\mathrm{target}} = \Sigma\times \mathbb{R}$ and we use the usual quantization with respect to Euclidean time $t_E$, products of field operators are sensible only when they appear in Euclidean time order.\footnote{More precisely, the domain of $\hat{\Phi}(x)$ and the image of $\hat{\Phi}(y)$ are disjoint if $x$ is in the Euclidean future of $y$ (except in the case when $\mathcal{M}_{\mathrm{target}}$ is one-dimensional).}\footnote{There are some boundary-inserting operators $\hat{x}$ that are well-defined (though unbounded) on the baby universe Hilbert space, namely when $x$ lies on the $\sigma$-invariant slice $\Sigma$. Since this set is invariant under our CPT operation, the general argument of \cite{Marolf:2020xie} applies. And indeed, these operators are self-adjoint and mutually commuting, hence simultaneously diagonalizable (with eigenstates corresponding to delta-function wavefunctionals in \eqref{eq:wavefuntionalIP}).}
\subsection{Generalizations}
\label{sec:EQFTgeneralizations}
The above discussion is readily generalized to any context where quantum field theory is well-defined and has the required ${\mathbb Z}_2$ reflection symmetry. For example, we may discuss complex scalar fields by using oriented worldlines, or generalize the amplitudes to general Feynman graphs in the manner of interacting quantum field theory.
Note, however, that gravitational models will not typically admit a time-reflection symmetry (for example, the Kantowski-Sachs model introduced later in \eqref{eq:HKS}). Indeed, a $\mathbb{Z}_2$ symmetry reversing time would usually be a relation between the physics of large universes and of small universes, so there are few semiclassical gravitational models to which this formalism could apply. However, it might be interesting to consider whether --- at least in some cases --- string-inspired models with some suitable notion of T-duality (see e.g.\ \cite{Polchinski:1998rr}) could provide the required symmetry.
\section{Group Averaged Theories}
\label{sec:GAT}
We now turn to the case where the worldline is taken to have Lorentz signature. We will begin by discussing a framework that may be unfamiliar to many practitioners of QFT or string theory. It is however inspired by a popular approach to studying single-universe quantum gravity models by treating them as constrained systems \cite{Landsman:1993xe,Marolf:1994wh,Ashtekar:1995zh,Marolf:1996gb,Reisenberger:1996pu,Hartle:1997dc,Marolf:2000iq,Shvedov:2001ai} and by treatments \cite{Higuchi:1991tm} of linearization-instabilities in quantum gravity.
The single-universe amplitudes $\ASU(x,y)$ that enter the quantum gravity amplitudes \eqref{eq:QGA} take the form
\begin{equation}
\ASU(x,y) = \int_{\cal D} dT \langle x | e^{-iHT} | y \rangle_\mathrm{matter}
\end{equation}
for some choice of integration domain ${\cal D}\subset {\mathbb R}$. The three options respecting worldline locality are the positive reals ${\mathbb R}^+$, the negative reals ${\mathbb R}^-$, or the full real line $\mathbb{R}$.
We here discuss the latter choice (${\cal D} = {\mathbb R}$), which fits into the so-called `group averaging' paradigm discussed in \cite{Higuchi:1991tm,Landsman:1993xe,Marolf:1994wh,Ashtekar:1995zh,Marolf:1996gb,Reisenberger:1996pu,Hartle:1997dc,Marolf:2000iq,Shvedov:2001ai}
(sometimes under other names). With this choice, we may write
\begin{equation}
\label{eq:ASUGA}
\ASU(x,y) = \mel{x}{ \delta(H) }{ y}_\mathrm{matter} \,,
\end{equation}
which shows that choosing ${\cal D} = {\mathbb R}$ imposes the constraint $H=0$ in a strong sense.\footnote{This raises the issue that is sometimes called the problem of time in quantum gravity, which is resolved by realizing that the true dynamics of such systems are encoded in relational observables. See e.g. \cite{DeWitt:1962cg,DeWitt:1967yk,Rovelli:1990jm,Rovelli:1989jn,Rovelli:1990ph,Smolin:1993ka,Marolf:1994wh} for classic discussions.}
As an example, consider the minisuperspace truncation of the Bianchi I model discussed in \cref{sec:general}, where the Hamiltonian \eqref{eq:rescaledB1} (in the rescaled lapse variable) is that of a free massless particle in $\mathbb{R}^{2,1}$. The single-universe amplitudes compute the matrix elements of $\delta(H)$, which imposes the on-shell condition $p^2=0$. A general linear combination $\int_{\mathbb{R}^{2,1}} dy\, f(y) \ket{y}$ of the allowed boundary conditions is effectively projected into the space of solutions of the quantum Hamiltonian constraint $H\ket{\psi} =0$.
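Indeed, the wavefunction defined by such a superposition,
\begin{equation}
\psi(x) \equiv \int_{\mathbb{R}^{2,1}} dy\, \ASU(x,y)\, f(y) = \mel{x}{\delta(H)}{f}_\mathrm{matter},
\end{equation}
manifestly satisfies $H\psi = 0$.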
From \eqref{eq:ASUGA} it is manifest that our single-universe amplitudes are the matrix elements of a positive operator on the matter Hilbert space. As a result, we can define a positive-definite quantum gravity inner product by taking ${\sf CPT} $ to act trivially on $\mathcal{M}_{\mathrm{target}}$; i.e., $x^\dag = x$. In contrast, integrating $T$ only over a half-line would generate complex single-universe amplitudes from which the construction of a good Hilbert space is more complicated (see \cref{sec:LQFT}).
Since \eqref{eq:ASUGA} projects onto solutions of the constraint $H|\psi\rangle =0$, we may obtain a more explicit description of our theory by working directly with such solutions. In particular, for the Bianchi I model, since
\begin{equation}
\delta(H) = \frac{1}{|p_0|} \,\delta\left(p_0 - \sqrt{p_1^2+p_2^2}\right) + \frac{1}{|p_0|}\, \delta\left(p_0 + \sqrt{p_1^2+p_2^2}\right),
\end{equation}
it suffices to use the plane wave solutions
\begin{equation}
\braket{ x}{p_1,p_2;\,\eta } = e^{- i\, \eta\, x^0\, \sqrt{p_1^2 + p_2^2}} \ e^{i(x^1p_1+x^2p_2)}
\end{equation}
for $\eta =\pm$ and to replace \eqref{eq:ASUGA} by the `projected' amplitudes
\begin{equation}
\label{eq:projASUGA}
\tASU(p',p) = \frac{1}{|p_0|} \, \delta(p_1-p_1')\,\delta(p_2-p_2')\, \delta_{\eta \eta'},
\end{equation}
where on the left-hand side $p = (p_1,p_2,\eta)$ and $p'=(p_1',p_2',\eta')$.
In particular, we emphasize that the group-averaging approach keeps both positive- and negative-frequency solutions to the constraint and treats both on an equal footing.
The full quantum gravity Hilbert space can now be described succinctly by using the observation that \eqref{eq:QGA} is just the result of performing a Gaussian integral over the space of plane wave solutions $p$ with covariance given by the right-hand side of \eqref{eq:projASUGA}. Consistent with the general argument from \cite{Marolf:2020xie}, the allowed boundary conditions $p$ then define a set of simultaneously-diagonalizable operators on this Hilbert space.
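The role of this Gaussian structure can be illustrated in a finite-dimensional toy model (our own illustration; the $3\times 3$ covariance below is an arbitrary stand-in for the right-hand side of \eqref{eq:projASUGA}): moments of a Gaussian measure with prescribed covariance reproduce the sum-over-pairings (Wick/Isserlis) form expected of multi-universe amplitudes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary symmetric positive-definite stand-in for the covariance
# on the space of constraint solutions (purely illustrative).
A = rng.normal(size=(3, 3))
C = A @ A.T + 3.0 * np.eye(3)

# Monte Carlo estimate of the fourth moment E[z_0 z_0 z_1 z_1].
z = rng.multivariate_normal(np.zeros(3), C, size=400_000)
mc_fourth = np.mean(z[:, 0] ** 2 * z[:, 1] ** 2)

# Wick/Isserlis: the fourth moment is a sum over pairings of covariances,
#   E[z_i z_j z_k z_l] = C_ij C_kl + C_ik C_jl + C_il C_jk.
wick_fourth = C[0, 0] * C[1, 1] + 2.0 * C[0, 1] ** 2
```

The two estimates agree to Monte Carlo accuracy; higher moments similarly reduce to sums of products of two-point functions, which is the sense in which the two-point data \eqref{eq:projASUGA} determine the full Hilbert space.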
\subsection{Generalizations}
\label{sec:GATgeneralizations}
The above construction can be used in great generality. As is clear from \eqref{eq:ASUGA}, it requires only a matter Hamiltonian $H$ with continuous spectrum that includes zero and whose matrix elements are symmetric (so that $\ASU(x,y) = \ASU(y,x)$). The latter requirement is tantamount to requiring the matter to have a time-reversal symmetry, and is a consequence of our summing over unoriented spacetimes. This condition would be dropped if we instead summed over oriented spacetimes.
In particular, the above conditions on $H$ allow the case of sigma-models with general (geodesically complete) Lorentzian target space, or in fact any signature with additional `time' directions (or indeed none) so long as the sign of $H$ remains indefinite. We are similarly free to add a potential to $H$ that preserves continuity of the spectrum and the inclusion of the eigenvalue zero. Using a rescaled notion of lapse as described in \cref{sec:minisuper}, this allows one to treat rather general minisuperspace models \cite{Marolf:1994wh,Marolf:1994ss}.
Of perhaps greater interest is the generalization to allow graphs instead of summing only over strict one-dimensional manifolds. This might be expected to reproduce more of the features of higher-dimensional gravity suggested by the diagrams of \cref{fig:wormholes}. Since this is not the focus of the current paper, we will content ourselves with noting that some such generalizations are straightforward. For example, in writing down the final quantum gravity Hilbert space one can readily replace the Gaussian integral over solutions to the constraint $H|\psi\rangle=0$ with some non-Gaussian integral over the same space of solutions. Similarly, one can allow the constraint to depend on the `coupling constants' $g$ that control any non-Gaussianities, so that the space of random variables at each $g$ is defined by solving a new constraint $H_g|\psi \rangle =0$. In each case, the allowed boundary conditions continue to define simultaneously diagonalizable sets of operators. We leave for future investigation whether the diagrammatics of such theories matches expectations from the quantum gravity path integral, though we will return to comment further on the physics of quantum gravity constraints in \cref{sec:Disc}.
\section{Lorentzian approach to QFT-like theories}
\label{sec:LQFT}
The construction of \cref{sec:GAT} led us to a Hilbert space for (amongst other things) worldline gravity with Lorentz-signature target space. For example, the matter Hamiltonian of the Bianchi I model is given by the wave operator on 3-dimensional Minkowski spacetime. However, we did not arrive at the quantum field theory Hilbert space (for a free field on $\mathbb{R}^{2,1}$ in the Bianchi I case, for example) as one might have expected from the worldline formalism of QFT \cite{Strassler:1992zr,Schubert:2001he}. In this section we examine the choices one might take (different from \cref{sec:GAT}) to pursue the analogy with QFT.
We first note that the amplitudes defined from our GAT are not the usual correlation functions of QFT. The one-universe amplitudes \eqref{eq:ASUGA} are the matrix elements of $\delta(H)$, while the usual two-point functions of free fields are Green's functions of the matter Hamiltonian (inverses of $H$). This is a result of integrating the lapse $T$ over the entire real line, $\mathcal{D}=\mathbb{R}$. We can consider instead the choice of integrating only over a half-line, say $\mathcal{D}=\mathbb{R}^+$:
\begin{equation}
\label{eq:ASUQFT}
\ASU(x,y) = \int_{{\mathbb R}^+} dT \mel{x}{e^{-iHT}}{y} = \mel{x}{ \frac{1}{i(H-i\epsilon)} }{y}.
\end{equation}
Here the integral gives a well-defined distribution (associated with the Fourier transform of a step function), which we have written in terms of an $i\epsilon$ prescription. This in fact gives a standard description of $iG_F$, where $G_F$ is the Feynman Green's function (see e.g., \cite{dewitt1965dynamical}). In particular, the path integral is well-defined for Hamiltonians with a continuous spectrum, which need not be bounded, and computes matrix elements between states with an appropriately smooth and rapidly-decaying energy representation. The result can also be written in the form
\begin{equation}
\label{eq:pole}
\frac{1}{i(H-i\epsilon)} = \mathcal{P}\frac{1}{i H} + \pi\, \delta(H),
\end{equation}
where $\mathcal{P}$ denotes the principal value distribution.
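At the level of a single energy eigenvalue $E$ of $H$, both \eqref{eq:ASUQFT} and the decomposition \eqref{eq:pole} are elementary to check numerically. The sketch below (an illustrative aside; the values of $E$ and $\epsilon$ are arbitrary) compares the regulated $T$ integral with $1/(i(E-i\epsilon))$, and verifies that the real part is a nascent delta function of weight $\pi$:

```python
import numpy as np

eps, E = 0.05, 0.7

# \int_0^infty dT exp(-i E T - eps T) for a single energy eigenvalue E.
T = np.linspace(0.0, 400.0, 2_000_001)       # exp(-eps*400) ~ 2e-9: tail negligible
f = np.exp(-1j * E * T - eps * T)
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(T))   # trapezoid rule
closed_form = 1.0 / (1j * E + eps)           # i.e. 1/(i(E - i*eps))

# Re[1/(i(E - i*eps))] = eps/(E^2 + eps^2) is a nascent delta function:
# its integral over E approaches pi as eps -> 0.
Eg = np.linspace(-50.0, 50.0, 2_000_001)
lorentzian = eps / (Eg**2 + eps**2)
weight = np.sum(0.5 * (lorentzian[1:] + lorentzian[:-1]) * np.diff(Eg))
```

The imaginary part, $-E/(E^2+\epsilon^2)$, similarly tends to (minus) the principal value of $1/E$, reproducing the term $\mathcal{P}\,\frac{1}{iH}$.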
Due to the imaginary part of \eqref{eq:pole}, using \eqref{eq:QGIP1} as written, with ${\sf CPT} $ acting trivially on ${\cal M}_{\text{target}}$, no longer defines a real and positive quantum gravity inner product. Yet QFTs do have a positive inner product as well as a notion of $ {\sf CPT} $-conjugation (e.g., for real scalar fields $\Phi(x)^\dagger = \Phi(x)$). Furthermore, in the free case that is relevant when we exclude non-trivial graphs, it has precisely the structure described by \eqref{eq:QGA}.
The explanation is that in QFT the amplitude \eqref{eq:ASUQFT} does not define an inner product between states $\Phi(x)|\Omega\rangle$ and $\Phi(y)|\Omega\rangle$ for some `vacuum' $|\Omega\rangle$. That inner product would be given by the expectation value of $\Phi(y)\Phi(x)$, with operators ordered as written (a Wightman function). But \eqref{eq:ASUQFT} would instead be interpreted as a time-ordered correlation function: the expectation value of $\mathcal{T} \left\{\Phi(x)\Phi(y)\right\}$, where the ordering of operators depends on their order in Lorentzian time. Choosing the lapse to run over the negative reals gives us the anti-time-ordered correlation function (the complex conjugate of the time-ordered correlator). In this QFT language, our choice of integrating over the entire real line in \cref{sec:GAT} gives the sum of these, which is the expectation value of the anti-commutator $\Phi(x)\Phi(y)+\Phi(y)\Phi(x)$.
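These orderings are easy to make explicit for a single harmonic mode of frequency $\omega$ (a stand-in for one momentum mode of the free field; the numbers below are illustrative). The Wightman function is $W(t) = e^{-i\omega t}/2\omega$, the time-ordered correlator is $e^{-i\omega|t|}/2\omega$, and the sum of the time-ordered and anti-time-ordered correlators reproduces the anticommutator expectation value:

```python
import numpy as np

omega = 1.3
t = np.linspace(-10.0, 10.0, 2001)

wightman = np.exp(-1j * omega * t) / (2 * omega)          # <0| x(t) x(0) |0>
time_ordered = np.exp(-1j * omega * np.abs(t)) / (2 * omega)
anti_time_ordered = np.conj(time_ordered)

# <0| {x(t), x(0)} |0> = W(t) + W(-t) = cos(omega t)/omega
anticommutator = wightman + np.conj(wightman)
```

In particular, the time-ordered correlator equals $\theta(t)W(t)+\theta(-t)W(-t)$, and (time-ordered) $+$ (anti-time-ordered) equals the anticommutator, mirroring the statement in the text.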
To recover the QFT (Wightman) correlation functions, we must therefore apply \eqref{eq:ASUQFT} only when $y$ lies to the future of $x$, and instead use its conjugate (integrate over $T\in\mathbb{R}^-$) if $x$ lies to the past of $y$. But this condition is not symmetric in $x$ and $y$: the unordered list of boundary conditions $x$ used in the formalism of \cref{sec:reviewSS} (in particular \eqref{eq:BUperm}) is not sufficient information to tell us how to construct the correlation function.
When the target space $\mathcal{M}_{\mathrm{target}}$ is Minkowski space, we recover the familiar correlation functions of free QFT in the vacuum state, and may construct from them the associated Hilbert space \cite{Wightman:1956zz}. However, this construction of a Hilbert space does not work for general $\mathcal{M}_{\mathrm{target}}$, since it is not guaranteed that \eqref{eq:ASUQFT} will be consistent with a time-ordered correlation function in any state.\footnote{We thank E.~Witten for discussions bringing this point to light.} The problem is that the correlation function $\langle \mathcal{O}^\dag \mathcal{O}\rangle$ may not be positive (or even real) for general smearings $\mathcal{O} = \int f(x) \Phi(x) dx$ of $\Phi$, even once we interpret \eqref{eq:ASUQFT} as a time-ordered correlator. To construct a Hilbert space requires some condition on $\mathcal{M}_{\mathrm{target}}$ to ensure positivity in this sense, for example that $\mathcal{M}_{\mathrm{target}}$ admits a time-reversal symmetry. In such a case, we obtain not only the Hilbert space of free QFT, but also a particular Gaussian state $|\Omega\rangle$ of the theory.
With these choices, the Hilbert space becomes the usual bosonic Fock space associated with a real scalar field on $\mathcal{M}_{\mathrm{target}}$. We then find operators $\hat \phi(x)$ associated with each boundary condition $x \in \BC = \mathbb{R}^{2,1}$, but their algebra is famously non-abelian and the operators cannot be simultaneously diagonalized. We see that this is a result of deviating from the structure assumed in the arguments of \cite{Marolf:2020xie}. Of course, one may nevertheless choose to focus on some abelian subalgebra, perhaps the one defined by choosing\footnote{Note that restricting to a constant value of $x^0$ means that one can work entirely with \eqref{eq:ASUQFT} without requiring the complex conjugate. As a result, with this restriction the formalism fits within the framework of \cite{Marolf:2020xie}, and the resulting abelian algebra is consistent with the general argument of \cite{Marolf:2020xie}.} $x^0=0$. This seems to be the approach taken in \cite{Coleman:1988cy,Giddings:1988cx,Polchinski:1994zs}, though the more general `3rd quantization' described in \cite{Giddings:1988wv} allows the algebra of operators on $\mathcal{H}_\mathrm{BU}$ associated with boundary quantities to be non-abelian.
For an alternative approach that remains somewhat closer to the axiomatic framework of \cite{Marolf:2020xie}, one may assign an additional parameter to each boundary condition that determines their relative ordering. We must then define an adjoint operation $\dag$ that reverses that ordering, and additionally place a constraint on boundary conditions so that `bra' boundaries are always ordered after `ket' boundary conditions. One can think of this as formally endowing each operator with an `$i \epsilon$' deformation in imaginary time, with correlation functions always in Euclidean time order. The adjoint takes $\epsilon\to-\epsilon$, and we restrict boundary conditions to negative $\epsilon$. The result is much like the Euclidean construction in \cref{sec:EQFT}, and in particular the structure of null states and the operator algebra is similar. Equivalently, we may think of the target space as having many `time-folds' in a Schwinger-Keldysh type contour, and our boundary conditions label which contour an operator insertion lies on (cf., \cite{Haehl:2017qfl}). Correlation functions are then always contour-ordered, the adjoint reverses the order of contours, and we restrict our states to be defined by insertions on the appropriate half of the contours.
\subsection{Generalizations}
\label{sec:LQFTgeneralizations}
We expect that this approach can be generalized significantly. However, to maintain contact with a worldline quantum gravity formalism one would like to continue to use \eqref{eq:ASUQFT} (or its conjugate) to define the single-universe amplitudes. An important question is then to understand the class of models for which the resulting inner product is positive definite. When this is the case, we expect that our procedure defines the usual Hilbert space for the associated free quantum field theory.\footnote{The idea is that, so long as the metric and any potential are smooth, in the far UV our construction will agree with the case of Minkowski-space QFT. This is well-known to give the correct UV behavior for any QFT. And, in the absence of IR divergences, this condition determines a unique space of QFT states; see e.g., \cite{Wald:1995yp}. So if our procedure defines a Hilbert space, it must be the usual one of QFT.}
Nevertheless, at least a few minimal properties would seem to be required for success.
The first is that there be some notion of target space $\mathcal{M}_{\mathrm{target}}$, where in particular $\mathcal{M}_{\mathrm{target}}$ is a time-orientable Lorentz-signature manifold. While the definition of the amplitude in \eqref{eq:ASUQFT} makes sense in any signature, only time-orientable Lorentzian spacetimes have a partial order imposed by the causal structure. This partial order is required to interpret the amplitudes as expectation values of time-ordered products, and hence would appear to be essential to construct a QFT-like Hilbert space. In addition, in Minkowski space the discussion is simplified by the fact that $H$ is an essentially self-adjoint operator on $L^2(\mathcal{M}_{\mathrm{target}})$. When this is not the case, the role of boundary conditions defined by the details of the matter path integral will be more important, and we expect it to be necessary to choose boundary conditions that make $H$ self-adjoint. Essential self-adjointness is to be expected when $\mathcal{M}_{\mathrm{target}}$ is both globally hyperbolic and geodesically complete, though more generally it should be expected to fail.
In addition to such mathematical questions, additional physical considerations may be relevant as well. For example, as pointed out long ago in \cite{Marolf:1994wh}, the physics appropriate for a QFT may not always agree with the physics appropriate for a theory of quantum gravity. To illustrate this issue, let us follow \cite{Marolf:1994wh} and consider a gravitational model in which universes start from a big bang, expand to some maximum size, and then collapse to a big crunch. A simple example is given by the so-called Kantowski-Sachs model of anisotropic vacuum gravity on ${\bf S}^1 \times S^2 \times \mathbb{R}$, which in the end differs from \eqref{eq:rescaledB1} only by using symmetry to set $x_2=0$ and by adding an external potential \cite{Ashtekar:1993wb}:
\begin{equation}
\label{eq:HKS}
H = -p_0^2 + p_1^2 - 48 \,e^{2 \, (2 \, x^0- x^1)}.
\end{equation}
The overall classical dynamics is clear from the fact that any future-directed timelike curve on ${\cal M}_{\text{target}} = \mathbb{R}^{1,1}$ has increasing $2\,x^0 - x^1$. So classical solutions cannot be completely described by such future-directed timelike curves, since in the far future this would require the spacelike condition $p_1^2 - p_0^2 > 0$. Instead, while one might begin with such a curve (say, emerging from a big bang at $x^0 = -\infty$), the dynamics forces the trajectory to become spacelike at some point, and in fact to turn around so that $x^0$ then begins to decrease. Eventually, the trajectory becomes timelike again, but it is now past-directed on ${\cal M}_{\text{target}} = \mathbb{R}^{1,1}$ and eventually results in a big crunch at $x^0 = -\infty$.
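This classical behavior is straightforward to exhibit numerically. The sketch below (our own illustration; the initial data are arbitrary choices deep in the regime where the potential is negligible) integrates Hamilton's equations for \eqref{eq:HKS} at unit lapse and confirms that $x^0$ increases, turns around, and then decreases, with the constraint $H=0$ preserved along the flow:

```python
import numpy as np

def flow(state):
    """Hamilton's equations for H = -p0^2 + p1^2 - 48 exp(2(2x0 - x1))."""
    x0, x1, p0, p1 = state
    V = 48.0 * np.exp(2.0 * (2.0 * x0 - x1))
    return np.array([-2.0 * p0,   # dx0/dt =  dH/dp0
                      2.0 * p1,   # dx1/dt =  dH/dp1
                      4.0 * V,    # dp0/dt = -dH/dx0
                     -2.0 * V])   # dp1/dt = -dH/dx1

def hamiltonian(state):
    x0, x1, p0, p1 = state
    return -p0**2 + p1**2 - 48.0 * np.exp(2.0 * (2.0 * x0 - x1))

# Illustrative initial data: far in the past, x0 increasing (p0 < 0),
# with p1 > 0 fixed by the constraint H = 0.
x0, x1, p0 = -5.0, 0.0, -1.0
p1 = np.sqrt(p0**2 + 48.0 * np.exp(2.0 * (2.0 * x0 - x1)))
state = np.array([x0, x1, p0, p1])

dt, steps = 1.0e-3, 12_000
history = np.empty((steps + 1, 4))
history[0] = state
for n in range(steps):            # classic fourth-order Runge-Kutta
    k1 = flow(state)
    k2 = flow(state + 0.5 * dt * k1)
    k3 = flow(state + 0.5 * dt * k2)
    k4 = flow(state + dt * k3)
    state = state + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    history[n + 1] = state

x0_traj = history[:, 0]
turnaround = int(np.argmax(x0_traj))
```

The scale-factor-like coordinate $x^0$ reaches its maximum at an interior time and decreases thereafter, matching the turnaround described above.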
One might thus expect the quantum version of this model to display similar behavior. But the quantum field theory associated with \eqref{eq:HKS} is very different. The scalar experiences an external potential which becomes unboundedly negative at large positive $x^0$. This results in enormous particle creation, which in a gravitational interpretation would imply creation of a large number of universes at large $x^0$. Thus, rather than large $x^0$ (scale factor) being forbidden as in the classical theory, large scale factor seems to be dynamically preferred in the QFT-like quantum treatment.
\section{Discussion}
\label{sec:Disc}
In the above, we have enumerated various choices that could be made in order to build a theory of one-dimensional quantum gravity from worldline path integrals. We concentrated on situations where splitting and joining of universes is forbidden, so that the final quantum gravity inner product could be represented by a Gaussian integral. However, generalizations to include interactions between worldlines were briefly discussed in sections \ref{sec:ESTgeneralizations}, \ref{sec:EQFTgeneralizations}, \ref{sec:GATgeneralizations}, and \ref{sec:LQFTgeneralizations}. At least in perturbation theory the addition of such interactions cannot change our qualitative results.
The various choices that we examined are summarized in \cref{sec:intro} in \cref{tab:choices}, along with the particular options that lead to Euclidean Statistical Theories (ESTs), Group Averaged Theories (GATs), and to QFT-like theories. Both ESTs and GATs fall within the framework described in \cite{Marolf:2020xie}, and in particular define the quantum gravity inner product from the path integral with no restriction on boundary conditions. Since the GATs are Lorentzian and the ESTs are Euclidean, this demonstrates that the framework accommodates path integrals defined by spacetimes of either signature.\footnote{See also the recent Lorentz-signature discussion of baby universes and superselection in \cite{Marolf:2020rpm}.} In contrast, in either signature, the QFT-like approaches relate the final inner product to path integral amplitudes only with some restriction on the possible boundary conditions which is not invariant under the CPT operation used to define the inner product. Since this restriction violates the framework described in \cite{Marolf:2020xie}, for such cases path integral boundary conditions need not define simultaneously-diagonalizable operators and hence also do not lead to superselection sectors.
Of course, given a non-abelian algebra of operators, it is possible to select an abelian sub-algebra. As a result, even using a QFT-like approach, one could attempt to claim that natural boundary objects correspond to such a sub-algebra of the possible baby universe operators; one may thus consider them to be superselected. This appears to be the approach that was taken in \cite{Coleman:1988cy,Giddings:1988cx}. However, as emphasized in \cite{Giddings:1988wv}, such approaches involve assumptions about the nature of the final results -- in particular, a certain form of locality was assumed in \cite{Coleman:1988cy,Giddings:1988cx}. Reasoning of this kind thus led \cite{Giddings:1988wv} to question the supposed locality, and to suggest that strict superselection may not hold. This stands in sharp contrast with the treatment of \cite{Marolf:2020xie} reviewed in \cref{sec:reviewSS}, which argued that properties of the quantum gravity path integral require \emph{all} boundary quantities to be simultaneously diagonalizable in $\mathcal{H}_\mathrm{BU}$.
By describing the above choices, we hope to reduce confusion in the literature. For example, the comments of \cite{McNamara:2020uza} concerning 2d theories of gravity arising from Wilson loops in Yang-Mills theories appear to be couched in the EST framework; it is unclear to us whether analogous comments hold in GAT-like constructions.
\paragraph{Implications for quantum gravity:}
In comparing the options described here, the idea that general path integral boundary conditions define the quantum gravity inner product appears natural from many perspectives and is related to historic discussions of quantum gravity and constrained systems
\cite{Halliwell:1989dy,Halliwell:1990qr,Higuchi:1991tm,Landsman:1993xe,Marolf:1994wh,Ashtekar:1995zh,Marolf:1996gb,Reisenberger:1996pu,Hartle:1997dc,Marolf:2000iq,Shvedov:2001ai}; see also \cite{Jafferis:2017tiu}. This was the point of view taken in \cite{Marolf:2020xie}, and was used as the rationale there to justify the assumptions made in that work. However, we also noted that one may adapt arguments from \cite{Marolf:1994wh} to show that -- at least for certain models of gravitational physics -- applying a QFT-like approach to quantum gravity leads to physics very different from the classical theory, even in an apparently semiclassical domain. In particular, for any cosmological model in which the universe always reaches a maximum size and then recollapses, QFT-like approaches lead to divergent `pair production' of universes with very large size. Thus, while the classical physics forbids universes of arbitrarily large size, the physics of QFT-like quantum gravity would be dominated by arbitrarily large universes. In contrast, the physics of GAT quantum gravity for those models again forbids large universes and is thus consistent with the classical results. We take this as a further argument against the use of QFT-like constructions in quantum gravity.
\paragraph{Other discrete choices:}
While we have explored several important discrete choices in the construction of candidate quantum gravity theories, it is important to emphasize that there are many other places at which further choices could be introduced and whose consequences remain to be explored. For example, as described in \cref{sec:reviewSS} our quantum gravity amplitudes were taken to be completely symmetric. This naturally reminds one of bosonic quantum field theory, and thus leads to the question of whether other forms of multi-universe statistics might be allowed and what consequences they might entail. One could also, as noted above, include worldline supersymmetry. Similarly, for simplicity we considered only the case of unoriented spacetimes (worldlines), while summing instead over oriented worldlines should lead to slight modifications in parallel with the two-dimensional discussion in \cite{Stanford:2017thb}.
Indeed, many additional choices naturally arise when one generalizes the theory to allow splitting and joining of universes by summing over graphs. For example, in that context one may elect to treat `internal' lines differently than `external' lines. One may also allow the matter Hamiltonian $H$ to have some explicit dependence on the `coupling constants' $g$ associated with the graph vertices, or perhaps to further generalize the structures discussed above.
\paragraph{Interpretation of low dimensional topological models:}
However, the most intriguing questions that stem from this work revolve around the relationship between the models discussed here and the topological model of \cite{Marolf:2020xie}, the treatments of JT gravity in \cite{Saad:2018bqo,Saad:2019lba,Blommaert:2019wfy,Saad:2019pqd,Penington:2019kki,Marolf:2020xie,Blommaert:2020seb,Bousso:2020kmy,Stanford:2020wkf}, and higher dimensional gravity more generally. For example, having noted that our EST and GAT constructions both fall within the general framework described in \cite{Marolf:2020xie}, one might wonder which best corresponds to the way in which the topological model of \cite{Marolf:2020xie} was explicitly solved in that work. However, one should recall that the main differences between ESTs and GATs involved the treatment of the constraint $H$ and the associated integration domain for the proper time $T$, and that neither $H$ nor $T$ appears at all in a topological model like that considered in \cite{Marolf:2020xie}. As a result, it is far from clear whether such models can be meaningfully associated with either construction.
Now, in considering either the topological model of \cite{Marolf:2020xie} or the treatments of JT gravity in \cite{Saad:2018bqo,Saad:2019lba,Blommaert:2019wfy,Saad:2019pqd,Penington:2019kki,Marolf:2020xie,Blommaert:2020seb,Bousso:2020kmy,Stanford:2020wkf}, one might note that all of these works focus on Euclidean path integrals and thus be tempted to associate them with the construction of ESTs in \cref{sec:EST} in contrast to the Lorentzian construction of GATs in \cref{sec:GAT}. However, in \cref{sec:EST} and \cref{sec:GAT}, the terms Lorentzian and Euclidean were used to describe the natural contours of integration that {\it define} the desired path integral, while in many other contexts one uses the term to describe the sorts of spacetimes that one uses to evaluate the path integral. This distinction is illustrated by standard non-relativistic quantum mechanics, which one may choose to think is fundamentally defined by a real-time (`Lorentz signature') path integral, but which is usefully evaluated using Euclidean methods in contexts that involve quantum tunneling under a classical barrier. In essence, the point is that one may typically deform the contour of integration into the complex plane to rewrite an originally-Lorentzian path integral in a Euclidean form, or to
rewrite an originally-Euclidean path integral in a Lorentzian form.\footnote{One may also note that, even in standard quantum mechanics, one is free to study `Euclidean' quantities like $e^{-HT}$ in the `Lorentzian' theory. }
And it was shown in \cite{Marolf:1996gb} that this could be done for the GAT path integral by taking the contour to rotate in different directions depending on the particular matter boundary conditions chosen.
As a result, the mere use of Euclidean techniques in the above references is not sufficient to conclude that their treatment is more closely related to our ESTs than to our GATs. Indeed, we note that at least one element of the treatment in \cite{Saad:2018bqo,Saad:2019lba,Blommaert:2019wfy,Saad:2019pqd,Penington:2019kki,Marolf:2020xie,Blommaert:2020seb,Bousso:2020kmy,Stanford:2020wkf} appears to resemble the GAT construction. Namely, motivated by the fact that the dilaton $\phi$ appears linearly in the JT gravity action, the above references take functional integration over the dilaton to exactly enforce the metric equation of motion $R=-2$. Thus they choose an integration contour for $\phi$ much like the GAT contour for the proper time $T$ (which resulted in exactly enforcing the equation of motion $H=0$ by producing a $\delta(H)$). However, we leave for future work any more detailed analysis of this JT path integral prescription and possible connection with GATs (or with generalizations thereof).
\paragraph{Minisuperspace models of quantum gravity:}
Indeed, an important question is the extent to which physics of higher-dimensional quantum gravity is in fact similar to {\it any} of the models described here. We consider this to be an open question, with much to be investigated. In particular, in our work above, we took the matter Hamiltonian $H$ to be a self-adjoint operator. While this seems natural from the perspective of familiar matter quantum mechanics, we believe it to be rather less obvious from the perspective of higher-dimensional quantum gravity. To this end we remind the reader that when one defines a one-dimensional quantum gravity model by Kaluza-Klein reduction of a higher-dimensional theory, most of the features and complications of the gravitational theory then become part of the Kaluza-Klein matter sector, with only some overall notion of proper time left to be treated as one-dimensional gravity.
Let us thus consider again the diagrams of \cref{fig:wormholes} associated with the splitting and joining of universes. From the higher-dimensional perspective, these are smooth Euclidean geometries. And in general, one expects smooth Euclidean solutions to be associated with tunneling amplitudes between e.g., classically-allowed Lorentzian configurations.
We can discuss this in more detail in a simple (and famous) minisuperspace model associated with the Hartle-Hawking wavefunction of the universe. To this end, consider spatially compact homogeneous isotropic universes with topology ${\bf S}^3 \times {\mathbb R}$ in the presence of a {\it positive} cosmological constant, but with no explicit matter. Such cosmologies are described by a minisuperspace model having a single degree of freedom $a$ (`the scale factor'), which upon Kaluza-Klein reduction becomes our matter sector. The scale factor takes values in ${\mathbb R}^+$, and the corresponding `matter' Hamiltonian generating evolution in proper time is then $H = p_a^2 + V(a)$, with $V(a) = 1- \frac{4}{3}\, a^2$. Note that $H$ is {\it not} essentially self-adjoint, as classical solutions associated with energies greater than unity reach the boundary at $a=0$ at finite time, and more generally quantum wavefunctions have a finite probability to tunnel under the potential barrier to reach $a=0$. Indeed, this phenomenon is associated with the Hartle-Hawking no-boundary proposal for the wavefunction of the universe \cite{Hartle:1983ai}, or perhaps more directly with the Vilenkin tunneling-from-nothing wavefunction \cite{Vilenkin:1982de,Vilenkin:1984wp}.
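As a small quantitative aside (our own numerical illustration using only the potential quoted above), the amplitude for this tunneling is controlled at zero energy by the WKB barrier integral $B = \int_0^{a_*}\sqrt{V(a)}\,da$, where $a_* = \sqrt{3}/2$ is the classical turning point with $V(a_*)=0$; for $V(a) = 1 - \tfrac{4}{3}a^2$ the integral evaluates in closed form to $\sqrt{3}\,\pi/8 \approx 0.68$:

```python
import math

def V(a):
    """Minisuperspace potential V(a) = 1 - (4/3) a^2 from the text."""
    return 1.0 - (4.0 / 3.0) * a * a

a_star = math.sqrt(3.0) / 2.0    # classical turning point, V(a_star) = 0

# Zero-energy WKB barrier integral B = \int_0^{a_star} sqrt(V(a)) da,
# via midpoint quadrature (well-behaved at the sqrt endpoint).
N = 200_000
h = a_star / N
B = sum(math.sqrt(V((k + 0.5) * h)) for k in range(N)) * h

# Closed form via the substitution a = (sqrt(3)/2) sin(theta).
exact = math.sqrt(3.0) * math.pi / 8.0
```

Here the exponent $B$ enters the WKB suppression of tunneling through the barrier separating $a=0$ from the classically allowed region $a>a_*$.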
Of course, in the strict semiclassical limit $\ell_p\rightarrow 0$ such tunneling should not occur. In this limit, we should be able to describe a general theory of quantum gravity by cylinder diagrams, which on Kaluza-Klein reduction to one dimension are associated with some matter Hamiltonian $H_0$. Since the above tunneling turns off in this limit, this limiting matter Hamiltonian should in fact be essentially self-adjoint. In the example above, this might be because the semiclassical limit is associated with some preferred boundary condition that defines the essentially self-adjoint $H_0$ from $H$. And it may be useful to explore the perturbative expansion in $\ell_p$, where a similar structure should arise at all orders.
However, at the non-perturbative level any notion of an essentially self-adjoint constraint defined on a single-universe Hilbert space may cease to be relevant. It would be extremely interesting to understand the structure that replaces it, as well as the associated physics that results.
\paragraph{Higher-dimensional generalizations:}
A natural higher-dimensional generalization of the ideas discussed above would be to consider two-dimensional spacetimes, where the allowed boundaries are line segments or closed loops. As a prominent example, we may regard string theory as a two-dimensional theory of gravity, taking a perspective in which the worldsheet is viewed as spacetime. From this perspective, target space is merely a manifold for matter fields. Boundary conditions correspond to asymptotic states of open or closed strings. Many of the above constructions generalize naturally to such cases (see e.g., \cite{Hawking:1991vs,Lyons:1991im} for related comments).
Such a two-dimensional `quantum gravity' could arise even for rather prosaic systems by characterizing the dynamics in terms of surface-like degrees of freedom. For instance, Ising-like models with $\mathbb{Z}_2$-valued spins may be described in terms of domain wall variables (the surfaces across which spins flip), cf., \cite{Polyakov:1987ez,Fradkin:1980gt,Iqbal:2020msy}. Likewise, the effective dynamics of QCD flux tubes is captured by a two-dimensional theory of the confining string \cite{Aharony:2013ipa,Dubovsky:2015zey}. These theories are gravitational in the sense that they sum over two-manifolds modulo diffeomorphisms (though the fields living on the manifolds do not include a dynamical metric). In particular, the dynamics of the domain walls typically involves a sum over topologies. If interpreted as a two-dimensional theory of gravity living on the domain walls, these theories clearly fit very naturally into the EST paradigm of \cref{sec:EST}. We can then give boundary conditions by specifying loops in target space on which domain walls end. The amplitudes correspond in the statistical theory to correlation functions of `defect' loop operators ('t Hooft loops for the $\mathbb{Z}_2$ spin symmetry). This is analogous to the worldline description of a theory with particle-like excitations as a one-dimensional theory of gravity, where we integrate in the worldline metric. Integrating in two-dimensional gravitational dynamics is significantly more involved. In the case of the effective QCD string, the Nambu-Goto dynamics may be recast as a $T\overline{T}$-deformed free boson theory, which could be interpreted as a two-dimensional model coupled to gravity \cite{Dubovsky:2018bmo,Cardy:2018sdv,Callebaut:2019omt}.
Another example in this vein is given by the genus expansion of large-$N$ gauge theories, which one might hope to describe as a two-dimensional theory of gravity. This hope is realized concretely for $\mathcal{N}=4$ Yang-Mills via the AdS/CFT correspondence: we may describe this theory as two-dimensional gravity (a string theory), though the target space is $\mathcal{M}_{\mathrm{target}} = \mathrm{AdS}_5\times \mathbf{S}^5$ rather than simply the four-dimensional spacetime on which the gauge theory resides. Since the target space contains dynamical gravity, gauge-invariant boundary conditions are associated with strings ending on the (fixed) asymptotic boundary of $\mathcal{M}_{\mathrm{target}}$. For example, we have boundary conditions given by vertex operators corresponding to local operator insertions in the boundary CFT, and by loops on which strings end on the boundary corresponding to Wilson loops. This situation was described in \cite{McNamara:2020uza}. In particular, they interpreted JT gravity as a description of worldsheets in a topological string theory, and the corresponding amplitudes as correlation functions of Wilson loops in the dual Chern-Simons gauge theory. Again, this example fits naturally with the EST considerations of \cref{sec:EST}, and in particular as a higher-dimensional generalization of worldline descriptions of AdS/CFT noted at the end of that section. As presented there, the restriction of boundary conditions to the asymptotic boundary of AdS was rather artificial, but this restriction is expected to become a requirement of gauge invariance once our worldline particles include an interacting graviton.
Having noted the similarity between one-dimensional (worldline) and string theories, we should point out a notable difference. With the exception of the GAT theories in \cref{sec:GAT}, our constructions above were sensitive to the off-shell content of the matter theory: that is, we were not limited to states on the worldline satisfying the constraint $H=0$. This can be traced back to the boundary of the integral over the lapse $T$, corresponding to `short' worldlines with $T=0$. However, string theory is different in this respect: amplitudes are sensible only with on-shell string states corresponding to physical vertex operators. This is ultimately due to the gauging of Weyl symmetry, via which any string state can be thought of as specified at infinite distance on the worldsheet. One can attempt to eschew stringy constructions and either quantize a particular class of diffeomorphism-invariant two-dimensional gravitational dynamics on two-surfaces, or quantize a gauge-fixed system where we only consider rigid geometric structures. For example, we can take two-geometries to be cylinders and impose as in the GAT construction independent delta-function constraints for time translations and spatial rotations (this bears some resemblance to ambitwistor string constructions \cite{Mason:2013sva,Adamo:2013tsa}). Whether such models make sense, and how one might interpret off-shell string field theory in the language described above, deserve further investigation.\footnote{We believe that the Hilbert space of conventional critical string theory is obtained from a QFT-like construction for the worldsheet quantum gravity theory and thus does not have superselection sectors. On the other hand, for non-critical strings it is unclear to us whether there exists a scheme that allows for a construction involving superselection sectors and baby universes. 
} A relevant perspective may be offered by the worldsheet description of out-of-equilibrium dynamics, as recently investigated in \cite{Horava:2020she,Horava:2020val}.
\paragraph{Spacetime D-branes:}
Above, we computed amplitudes defined by the path integral over all worldlines with fixed boundaries. A natural generalization allows the inclusion of \emph{dynamical} boundaries. This means that the worldline path integral includes a sum over configurations with additional boundaries of a specified type. We call such boundaries `spacetime D-branes', since they are analogous to D-branes in string theory considered from the perspective of the worldsheet. Similar boundaries have interesting and important effects in JT gravity and related models \cite{Saad:2019lba,Blommaert:2019wfy,Marolf:2020xie,Blommaert:2020seb}. How are such boundaries interpreted in the context of the worldline models discussed here?
Amplitudes with a spacetime D-brane can be described by including the exponential $\exp(\lambda \hat{b})$, where $\hat{b}$ is a boundary-inserting operator of the relevant type, and $\lambda$ a `coupling' specifying the amplitude for spacetime to end on the brane. From the perspective of the target-space path integral over fields $\Phi(x)$ (such as \eqref{eq:ESTQGA}), this becomes the insertion of an exponential $e^{\int J(x)\Phi(x)}$, which in QFT terms is a source for the field $\Phi$. In this way, a spacetime D-brane can be interpreted as a deformation of the field theory action, or correspondingly (at least in the Euclidean statistical models of section \ref{sec:EST}) as a deformation of the distribution over $\alpha$-states. Another natural example (if we augment our worldine theory to describe gauge fields in target space) is a Wilson loop, which can be described as a spacetime D-brane whereby worldlines can end with fields taking values on a specified closed curve in target space.
Furthermore, one might consider the spacetime D-branes themselves to be dynamical objects, integrating over different possible choices of such boundaries. For instance, the Wilson lines in the example above may be interpreted as the trajectory of a heavy charged `probe' particle: we may promote this particle to be dynamical by an appropriate integration over spacetime D-branes corresponding to all possible heavy particle trajectories. This gives one way to interpret the discussion of \cite{Hawking:1991vs}. A more careful consideration of such dynamical boundaries in worldline theories may help to elucidate their role more generally in higher-dimensional models.
\acknowledgments
We thank Tarek Anous, Steve Giddings, Seth Koren, Jorrit Kruthoff, and Raghu Mahajan for conversations motivating much of this work. We also thank the participants of the 2020 KITP Gravitational Holography program for motivating questions during related talks.
EC and MR were supported by U.S. Department of Energy grant DE-SC0009999 and by funds from the University of California.
DM and HM were supported by NSF grant PHY1801805 and by funds from the University of California. H.M.~was also supported in part by a DeBenedictis Postdoctoral Fellowship.
\section{Introduction}
\label{sec:intro}
Quantum field theories with a topological term (``$\theta$-term'') in
the action have proved to be particularly challenging to
investigate. Such theories are related to a few important open
problems in theoretical physics, including the so-called ``strong $CP$
problem'' in strong interactions, and to interesting phenomena in
condensed matter physics, such as the quantum Hall effect (for a
recent review on theories with $\theta$-term, see Ref.~\cite{VP}).
On the one hand, topological properties are intrinsically nonperturbative,
thus requiring a nonperturbative approach to the study of these
systems. On the other hand, the most effective of these approaches,
namely the numerical study by means of simulations in lattice field
theory, cannot be directly applied to these systems, due to the
presence of a so-called {\it sign problem}. In fact, the complex
nature of their Euclidean action prevents the computation of the
relevant functional integrals by means of the usual
importance-sampling techniques. Numerical investigations have then
required the use of techniques that make it possible to avoid the sign problem,
usually based on analytic continuation or on the resummation of the
contributions of the various topological sectors to the partition
function~\cite{BPW,PS,IKY,BISY,AN,AANV,ADGV0}. The basic idea of these
techniques is to modify or split the functional integral, in such a
way that the resulting expression(s) have a positive-definite
integration measure, and therefore can be treated with the usual
numerical techniques. The difficulty of dealing with an oscillatory
integrand is, however, not completely overcome, but simply shifted to
the problem of reconstructing the original functional integral, which
is usually a very delicate issue from the numerical point of view.
It is worth noting that, besides having their own theoretical interest,
theories with a $\theta$-term share the sign problem with
finite-density QCD, and so the development of techniques and
algorithms to solve or by-pass the sign problem can have positive
consequences on the study of the QCD phase diagram by means of
numerical simulations.
Among the various existing models, the two-dimensional $O(3)$
nonlinear sigma model with $\theta$-term ($O(3)_\theta$NL$\sigma$M)
deserves particular interest. It has been shown long ago by
Haldane~\cite{Haldane:1982rj,Haldane:1983ru} that chains of quantum
spins with antiferromagnetic interactions, in the semiclassical limit
of large but finite spin $S$, are related to this model at coupling
$g^2=4/[S(S+1)]$, and at $\theta=0$ or $\pi$ if the spin is
respectively integer or half-integer. Haldane conjectured that quantum
spin chains for half-integer spins show a gapless spectrum, and
correspondingly that a second-order phase transition takes place
in the $O(3)_\theta$NL$\sigma$M at $\theta=\pi$, with vanishing of the
mass gap and recovery of parity. Arguments supporting this conjecture
have been provided in Ref.~\cite{Affleck:1991tj}. Moreover, in
Ref.~\cite{Affleck:1987ch} it has been argued that the critical theory
for generic half-integer spin antiferromagnets is the $SU(2)$
Wess-Zumino-Novikov-Witten (WZNW) model~\cite{WZNW1,WZNW2,WZNW3} at topological
coupling $k=1$, which in turn should determine the behaviour of the
mass gap near $\theta=\pi$.
Numerical investigations of Haldane's conjecture have been performed,
following basically three different strategies.
A first strategy~\cite{BPW,Bogli,dFPW} is
based on the determination of the probability distribution of the
topological charge by means of simulations at $\theta=0$, which allows
in principle to reconstruct the expectation values of the various
observables at $\theta\ne 0$. In order to achieve the very high
accuracy required by this approach, the authors of
Refs.~\cite{BPW,Bogli,dFPW} have employed a constrained
(``topological''~\cite{Bietenholz:2010xg})
action on a triangular lattice, which
allows simulations by means of an efficient Wolff cluster
algorithm~\cite{Wolff:1988uh}. The parameters of the action were
chosen in order to be in the weak-coupling regime.
Using finite-size scaling theory, the authors of Refs.~\cite{BPW,Bogli,dFPW} found
a second-order phase transition at $\theta=\pi$, in agreement with
Haldane's conjecture, and finite-size scaling in good agreement with
the assumption of a WZNW-type critical behaviour.
A second strategy~\cite{Alles:2007br} is based on the determination of
the mass gap at imaginary values of $\theta$, that can be obtained
directly by means of numerical simulations, and the subsequent
analytic continuation to real values of $\theta$, in order to check if
the mass gap vanishes at some point. The authors of
Ref.~\cite{Alles:2007br} found indeed that the mass gap
vanishes at $\theta=\theta_c$ for some real $\theta_c$, and moreover
that $\theta_c=\pi$ within the errors, again in agreement with
Haldane's conjecture.
Finally, the third strategy~\cite{ADGV1,ADGV2,CP1,AFV} makes use again of
numerical simulations at imaginary values of $\theta$, in order to
determine the topological charge density, and of a controlled way of
performing the analytic continuation to real $\theta$ that greatly
reduces the uncertainties connected to this process.
Applying this strategy to the $CP^1$ model, that is expected to be
equivalent to the $O(3)$ model, the authors of Ref.~\cite{CP1} found a
richer phase structure, with a first-order phase transition at
$\theta=\pi$ for $\beta\lesssim 0.5$, and a line of second-order phase
transitions with recovery of parity for $0.5 \lesssim \beta \lesssim
1.5$, with continuously varying critical exponent. At $\beta \simeq
1.5$ the critical exponent becomes $2$, and parity is recovered
analytically.
In this paper we investigate this issue further, by
applying the strategy of Refs.~\cite{ADGV1,ADGV2,CP1,AFV} directly to the
$O(3)_\theta$NL$\sigma$M. Our aim is to understand the origin of the
discrepancy between the results of Refs.~\cite{BPW,Bogli,dFPW} and
those of Ref.~\cite{CP1}. Such a discrepancy could be of physical
origin, due to the actual inequivalence of the $O(3)$ and $CP^1$
models, contrary to the standard wisdom; or it could be of technical
origin, due to shortcomings of the employed strategy in dealing with
these models.
The plan of the paper is the following. In Section \ref{sec:method} we
briefly review the method of Refs.~\cite{ADGV1,ADGV2,CP1,AFV}. In Section
\ref{sec:o3model} we describe the model of interest, discussing in
particular the theoretical
prediction for the critical behaviour of the model at $\theta=\pi$ in
the continuum, and working out the consequences for the observables
relevant to our method. In Section \ref{sec:num_sim} we describe
the $O(3)_\theta$NL$\sigma$M on the lattice, and we discuss the
results of our numerical simulations. Finally, Section \ref{sec:concl}
is devoted to our conclusions and to an outlook on open
problems. Details of the numerical analysis are reported in the
Appendix.
\section{Theories with topological term in the action and the method
of scaling transformations}
\label{sec:method}
In this Section we briefly describe the relevant formalism and
notation that will be used in the rest of the paper. The partition
function of a theory with a topological term in the action is of the
general form
\begin{equation}
\label{eq:top1}
Z(\theta) = \int {\cal D}\phi\, e^{- S[\phi] + i\theta Q[\phi]} = e^{-VF(\theta)}\,,
\end{equation}
where $\phi$ denotes the degrees of freedom of the model, ${\cal D}\phi$
is the appropriate functional measure, $S$ is the non-topological
part of the action, and $Q$ is the quantised topological charge,
taking only integer values; moreover, $F(\theta)$ is the free energy
density and $V$ the volume of the system. Clearly, $Z(\theta)$ is a
periodic function of $\theta$, $Z(\theta+2\pi)=Z(\theta)$. In the
interesting cases, the integration measure is invariant under parity
(${\cal P}$), and $S$ is ${\cal P}$-even, while $Q$ is ${\cal
P}$-odd. As a consequence, $Z(-\theta)=Z(\theta)$; combining this
with periodicity, we have that $Z(\pi+\theta)=Z(\pi-\theta)$.
While at $\theta\ne 0,\pi$ parity is explicitly
broken, at $\theta=\pi$ any ${\cal P}$-odd observable has vanishing
expectation value in a finite volume; nevertheless, a phase transition
may take place at this point. A convenient order parameter is given by
the topological charge density,
\begin{equation}
\label{eq:orpar}
\mathbf{O}(\theta)\equiv -i\f{\langle Q \rangle_{i\theta}}{V} = -\f{1}{V}\f{\partial\log
Z(\theta)}{\partial\theta} =\f{\partial F(\theta)}{\partial\theta}\,,
\end{equation}
where we have introduced the notation
\begin{equation}
\label{eq:notation}
\langle {\cal O}[\phi] \rangle_{i\theta} = Z(\theta)^{-1}\int {\cal D}\phi\, e^{-
S[\phi] + i\theta Q[\phi]} {\cal O}[\phi]\,,
\end{equation}
for the expectation value of the observable ${\cal O}[\phi]$.
In the limit of infinite volume, a nonzero value of $\mathbf{O}(\theta=\pi)$
indicates a first-order phase transition, while a divergent
susceptibility $\mathbf{O}'(\theta=\pi)$
indicates a second-order phase transition, and so on.
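These properties are conveniently illustrated on a toy ensemble with quantized charge. The following Python sketch (the Gaussian weight for $Q$ is a hypothetical choice, not derived from any model discussed here) checks the $2\pi$-periodicity of $Z(\theta)$, the reflection symmetry about $\theta=\pi$, and the vanishing of the ${\cal P}$-odd order parameter at $\theta=\pi$ in a finite volume:

```python
import numpy as np

# Toy partition function with quantized topological charge:
# Z(theta) = sum_Q p(Q) exp(i theta Q), integer Q, parity-even weight
# p(Q) = p(-Q).  The Gaussian weight below is a hypothetical choice,
# for illustration only.
Qs = np.arange(-20, 21)
p = np.exp(-Qs**2 / 2.0)

def Z(theta):
    # the imaginary part cancels by the symmetry p(Q) = p(-Q)
    return np.sum(p * np.exp(1j * theta * Qs)).real

def order_parameter(theta, V=1.0, eps=1e-6):
    # O(theta) = dF/dtheta, with F = -(1/V) log Z, by central differences
    F = lambda t: -np.log(Z(t)) / V
    return (F(theta + eps) - F(theta - eps)) / (2.0 * eps)
```

Since $Q$ takes integer values and $p(Q)$ is even, $Z$ is real, $2\pi$-periodic and symmetric about $\theta=\pi$, so the order parameter vanishes there at finite volume, exactly as stated above.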
In order to reconstruct the behaviour of the order parameter near
$\theta=\pi$ using numerical simulations, one has to start from
imaginary values of the vacuum angle $\theta=-ih$, with $h\in
\mathbb{R}$.
It has been suggested in Ref.~\cite{ADGV1} that a convenient observable is the
quantity
\begin{equation}
\label{eq:method1}
y(z)=\f{\langle Q \rangle_h}{V\tanh\f{h}{2}}\,,\qquad
z=\cosh\f{h}{2}\,,\quad z\ge 1\,.
\end{equation}
It is immediate to see that under analytic continuation $h\to i\theta$
one has
\begin{equation}
\label{eq:method1bis}
y(z)=-i\f{\langle Q
\rangle_{i\theta}}{V\tan\f{\theta}{2}}=\f{\mathbf{O}(\theta)}{\tan\f{\theta}{2}}\,,
\qquad z=\cos\f{\theta}{2}\,,\quad z\le 1\,,
\end{equation}
i.e., in terms of $z$ the analytic continuation is simply an
extrapolation from $z\ge 1$ to $z\le 1$. Notice that $y(1)=\f{2\langle Q^2
\rangle_{h=0}}{V}$, and $y(0)=0$, with $z=0$ corresponding to
$\theta=\pi$.\footnote{One can have $y(0)\ne 0$ only if the
topological charge density diverges at $\theta=\pi$, which seems
unlikely.}
The use of this observable is
suggested by the antiferromagnetic one-dimensional Ising model, where
the role of the $\theta$-term is played by the coupling with an
external imaginary magnetic field (for an even number of sites). This
model is exactly solvable, and one finds that $y$ actually depends
only on a specific combination of $z$ and of the antiferromagnetic
coupling $F$, namely $y(z,F)=Y((e^{-4F}-1)^{-\f{1}{2}}z)$.
Although this property is exclusive to the one-dimensional Ising
model, one can nevertheless expect that a similar smooth relation
exists between $y(z)$ and $y_\lambda(z)\equiv y(e^{\f{\lambda}{2}}z)$
also in other models with $\theta$-term. The assumption usually made
is that $y(z)$ is a monotonically increasing function of $z$,
vanishing only for $z=0$ (i.e., the order parameter does not vanish
for $\theta\in(0,\pi)$); this is indeed the case for the models where
the exact solution is known. The quantity $y_\lambda$ is then a
monotonic function $y_\lambda(y)$ of $y$, with the property that
$y_\lambda=0$ at $y=0$, so that starting from the smallest values of
$y$ that can be obtained by numerical simulations at real $h$, one can
therefore reliably extrapolate towards $y=y_\lambda=0$, i.e., in the
region corresponding to real $\theta=-ih$. This is the advantage of
this method, based on scaling transformations, over other approaches
that involve an uncontrolled analytic continuation from imaginary
values of $\theta$. Having reconstructed $y_\lambda(y)$ in this
region, one can then easily reconstruct the order parameter at real
$\theta$. Clearly, the closer one gets to $y=0$, the better the
extrapolation is expected to be: this method is then expected to work
well in situations where the density of topological objects is small,
such as asymptotically free models at weak coupling.
If one is interested only in the critical behaviour at $\theta=\pi$,
it is possible to determine the critical exponent without explicitly
reconstructing the order parameter. Consider the effective exponent
$\gamma_\lambda(y)$,
\begin{equation}
\label{eq:gammal}
\gamma_\lambda(y) \equiv \f{2}{\lambda}\log\f{y_\lambda(y)}{y}\,.
\end{equation}
Assuming a critical behaviour $\mathbf{O} \propto z^\epsilon$ near $z=0$,
i.e., $\mathbf{O} \propto (\pi-\theta)^\epsilon$ near $\theta=\pi$,
one immediately sees that $y \propto z^{\epsilon+1}$ near $z=0$, and so
\begin{equation}
\label{eq:critexp}
\gamma \equiv \lim_{y\to 0} \gamma_\lambda(y) = \f{2}{\lambda}\lim_{z\to 0}
\log\f{e^{(1+\epsilon)\f{\lambda}{2}}z^{1+\epsilon}}{z^{1+\epsilon}}
= 1+\epsilon\,.
\end{equation}
Analogously, assuming that $y_\lambda(y)$ is analytic at $y=0$, one
can obtain $\gamma$ from the relation $\gamma = \f{2}{\lambda}
\log\big(\f{d y_\lambda}{dy}\big|_{y=0}\big)$.
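To illustrate how Eq.~\eqref{eq:critexp} extracts the critical exponent, one may consider a model observable with an assumed pure power-law behaviour $y\propto z^{1+\epsilon}$ (the values of $\epsilon$ and $\lambda$ below are arbitrary choices, for illustration only):

```python
import numpy as np

eps = 1.0 / 3.0        # assumed critical exponent, for illustration only
lam = 0.4              # scaling parameter lambda, arbitrary choice

def y(z):
    # model observable with pure power-law behaviour y ~ z^(1+eps)
    return z**(1.0 + eps)

def gamma_lambda(z):
    # effective exponent gamma_lambda = (2/lambda) log( y_lambda / y ),
    # with y_lambda(z) = y(exp(lambda/2) z)
    y_lam = y(np.exp(lam / 2.0) * z)
    return (2.0 / lam) * np.log(y_lam / y(z))
```

For a pure power law the effective exponent is constant, $\gamma_\lambda(y)=1+\epsilon$ for every $y$; logarithmic corrections distort this simple picture, as discussed in the next Section.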
The method outlined above has been checked against explicitly
solvable models, and successfully applied to models where the exact
solution is not known (see Refs.~\cite{ADGV1,ADGV2,CP1,AFV}). One
implicit assumption of this method is that the function $y_\lambda(y)$
has a ``reasonable'' behaviour near $y=0$, i.e., it can be well
approximated by polynomials, or ratios of polynomials, or other
``simple'' functions. If this is the case, the critical exponent can
then be obtained with fair accuracy. What has not been done yet is the
evaluation of the impact of logarithmic corrections on the reliability
of the extrapolation. The result Eq.~\eqref{eq:critexp} holds
independently of logarithmic corrections to the critical behaviour,
i.e., it holds even if $\mathbf{O} \propto z^\epsilon\log(1/z)^{-\beta}$;
nevertheless, the way in which the limit is approached in this case
can make the extrapolation more difficult. This issue will be
discussed further in the next Section.
\section{The $O(3)$ nonlinear sigma model with a topological
term}
\label{sec:o3model}
In this Section we briefly recall the main properties of the $O(3)$
nonlinear sigma model with a topological term
($O(3)_\theta$NL$\sigma$M) in two dimensions, and we work out the
consequences of the expected critical behaviour at $\theta=\pi$ for
the method of scaling transformations described in the previous
Section.
\subsection{Critical behaviour at $\theta=\pi$}
\label{sec:crit0}
The degrees of freedom of the $O(3)_\theta$NL$\sigma$M in two
dimensions are real three-com\-po\-nent spin variables $\vec{s}(x)$ of modulus
one, $\vec{s}(x)^2=1$, ``living'' at the point $x\in
\mathbb{R}^2$. Expectation values are defined in terms of functional
integrals as follows,
\begin{equation}
\label{eq:model}
\begin{aligned}
\langle {\cal O}[\vec{s}] \rangle_{i\theta} &\equiv Z(\theta)^{-1}\int {\cal D}\vec{s}\,
e^{- S[\vec{s}] +
i\theta Q[\vec{s}]}\,{\cal O}[\vec{s}]\,, \\
Z(\theta) &= \int {\cal D}\vec{s}\, e^{- S[\vec{s}] + i\theta Q[\vec{s}]}\,,
\end{aligned}
\end{equation}
where the measure is given by ${\cal D}\vec{s} = \prod_x d^3\vec{s}(x)
\delta(1-\vec{s}(x)^2)$. In the continuum,
\begin{equation}
\label{eq:action0}
S[\vec{s}]= \f{1}{2g^2}\int d^2x\, \partial_\mu \vec{s}(x) \cdot\partial_\mu \vec{s}(x)\,,
\end{equation}
and the topological charge $Q[\vec{s}]$ is given by
\begin{equation}
\label{eq:charge}
Q[\vec{s}] = \f{1}{8\pi}\int d^2x\, \vec{s}(x)\cdot
\epsilon^{\mu\nu}\partial_\mu\vec{s}(x) \wedge \partial_\nu\vec{s}(x)\,.
\end{equation}
Here $\mu,\nu=1,2$, and sum over repeated indices is understood; the
antisymmetric symbol $\epsilon^{\mu\nu}$ is defined as
$\epsilon^{12}=-\epsilon^{21}=1$, $\epsilon^{11}=\epsilon^{22}=0$.
While the theory possesses a mass gap at $\theta=0$, it has been
argued that the mass gap $m(\theta)$ vanishes as $\theta\to\pi$ with
the following behaviour~\cite{AGSZ}:
\begin{equation}
\label{eq:massgap}
m(\theta) \propto |\pi-\theta|^{\f{2}{3}} \left\vert \log\f{1}{|\pi-\theta|}
\right\vert^{-\f{1}{2}} \mathop =_{\theta<\pi,\,\theta\simeq\pi}
(\pi-\theta)^{\f{2}{3}} \left( \log\f{1}{\pi-\theta}
\right)^{-\f{1}{2}}\,,
\end{equation}
where we have neglected subleading terms.\footnote{From now on we will
work in the interval $\theta\in[0,\pi]$, so that we can discard the
absolute values.} This prediction follows from
the following considerations for the continuum theory (see
Refs.~\cite{Affleck:1987ch,AGSZ,CM}). Near
$\theta=\pi$, the effective action for the $O(3)$ sigma model is given
by the $SU(2)_1$ Wess-Zumino-Novikov-Witten (WZNW)
model~\cite{WZNW1,WZNW2,WZNW3}, with a marginally irrelevant,
parity-preserving perturbation, and a relevant, parity-breaking
perturbation, whose coupling $\tilde g$ is a function
of $(\pi-\theta)$ that vanishes at $\theta=\pi$.
Renormalisation-group
arguments relate as follows the coupling and the correlation length
$\xi$ of the system~\cite{AGSZ},
\begin{equation}
\label{eq:RG1}
\f{1}{\tilde g} \propto \xi^{\f{3}{2}}(\log\xi)^{-\f{3}{4}}\times
\left[1+{\cal O}\left((\log\xi)^{-1}\right)\right]\,;
\end{equation}
neglecting subleading terms, as $m\propto\xi^{-1}$, one finds
\begin{equation}
\label{eq:RG2}
m \propto \tilde g^{\f{2}{3}} \left(\log\f{1}{\tilde
g}\right)^{-\f{1}{2}}\,.
\end{equation}
It is usually assumed that $\tilde g \propto (\pi-\theta) + \ldots$,
so that Eq.~\eqref{eq:massgap} immediately follows.
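The step from Eq.~\eqref{eq:RG1} to Eq.~\eqref{eq:RG2} can be checked numerically. In the sketch below all proportionality constants are set to one (an arbitrary normalisation), and the slow residual drift of the ratio is due to the neglected $\log\log$ corrections:

```python
import numpy as np

xi = np.logspace(3, 8, 6)                # correlation lengths, scaling region
g_tilde = np.log(xi)**0.75 / xi**1.5     # Eq. (RG1): 1/g ~ xi^(3/2) (log xi)^(-3/4)
m = 1.0 / xi                             # mass gap ~ inverse correlation length

# Eq. (RG2): m ~ g^(2/3) (log 1/g)^(-1/2); the ratio should vary slowly,
# approaching a constant up to log log corrections
ratio = m / (g_tilde**(2.0/3.0) * np.log(1.0/g_tilde)**(-0.5))
```

Over five decades of $\xi$ the ratio changes by only a few per cent, confirming the leading-order relation Eq.~\eqref{eq:RG2}.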
Following Kadanoff, one expects that near the critical point
$\theta=\pi$ the free energy density $F(\theta)$
is proportional to the square of the inverse of the
correlation length, that in turn is proportional to the inverse mass
gap, so that
\begin{equation}
\label{eq:fed}
F(\theta) \propto \f{1}{\xi(\theta)^2} \propto m(\theta)^2\,.
\end{equation}
The order parameter for parity breaking $\mathbf{O}(\theta)$, defined in
Eq.~\eqref{eq:orpar}, is therefore expected to show the following
behaviour near $\theta=\pi$,
\begin{equation}
\label{eq:orpar2}
\mathbf{O}(\theta) \propto \f{\partial m(\theta)^2}{\partial\theta} \propto
(\pi-\theta)^{\f{1}{3}}\left(\log\f{1}{\pi-\theta}\right)^{-1}\,,
\end{equation}
where we have neglected subleading terms. This behaviour is
conveniently rewritten as follows in terms of the variable
$z=\cos\f{\theta}{2}$ ($z\le 1$),
\begin{equation}
\label{eq:orpar3}
\mathbf{O}(\theta) \propto z^{\f{1}{3}}\left(\log\f{1}{z}\right)^{-1}\,,
\quad z\ll 1\,.
\end{equation}
The critical behaviour is therefore a second-order phase transition,
with recovery of parity, with a critical exponent $\epsilon=\f{1}{3}$.
For later use, it is convenient to work out the first correction to
the leading behaviour Eq.~\eqref{eq:RG2}. This does not require the
knowledge of the ${\cal O}((\log \xi)^{-1})$ terms in
Eq.~\eqref{eq:RG1}; we have
\begin{equation}
\label{eq:RG2bis}
m \propto \tilde g^{\f{2}{3}} \left(\log\f{1}{\tilde
g}\right)^{-\f{1}{2}} \left[1- \f{3}{8}
\f{\log\log\f{1}{\tilde g}}{\log\f{1}{\tilde g}} +
\f{1}{\log\f{1}{\tilde g}}r\left(\log\textstyle\f{1}{\tilde g}\right)
\right]\,,
\end{equation}
where the function
$r(x)$ is of the form $r(x)= r_0 + (r_1\log(x)+r_2)/x+\ldots$,
and we have omitted subleading terms at large $x$. We will assume that
subleading terms in the relation $\tilde g \propto (\pi-\theta) +
\ldots$ are suppressed as powers of $\pi-\theta$, so
that they can be safely ignored in the analysis of the following
subsections. Using Eq.~\eqref{eq:RG2bis}, we find for
the order parameter
\begin{equation}
\label{eq:RG3}
\mathbf{O} \propto
z^{\f{1}{3}}
\left(\log\f{1}{z}\right)^{-1}\left[1-
\f{3}{4}\f{\log\log\f{1}{z}}{\log\f{1}{z}} +
\f{1}{\log\f{1}{z}}\tilde
r\left(\log\textstyle\f{1}{z}\right)
\right]\,,
\end{equation}
where again $\tilde r(x)$ is of the form $\tilde r(x)= \tilde r_0 +
(\tilde r_1\log(x)+ \tilde r_2)/x+\ldots$.
\subsection{Effect of logarithmic corrections on the effective
exponent. Extension of the method}
\label{sec:crit}
We can now work out how the predicted behaviour of the order parameter
near the critical point reflects on the effective exponent
$\gamma_\lambda$ defined in Eq.~\eqref{eq:gammal}. Using
Eq.~\eqref{eq:RG3}, one finds that near $z=0$ the quantity $y(z)$ has the
following behaviour:
\begin{equation}
\label{eq:log1}
y(z) = y_0\, z^{\f{4}{3}} \left(\log\f{1}{z}\right)^{-1}c(z)\,,
\end{equation}
where $y_0$ is some constant and $c(z)=1-
\f{3}{4}\f{\log\log\f{1}{z}}{\log\f{1}{z}} + \ldots$ (see
Eq.~\eqref{eq:RG3}); the dots stand for subleading terms. We
have therefore for $y_\lambda(z)$
\begin{equation}
\label{eq:log2}
\begin{aligned}
y_\lambda(z)&= e^{\f{\lambda}{2}\f{4}{3}}y_0\, z^{\f{4}{3}}
\left(\log\f{1}{z}-\f{\lambda}{2}\right)^{-1}c(e^{\f{\lambda}{2}}z)\\
&= e^{\f{\lambda}{2}\f{4}{3}}y_0\, z^{\f{4}{3}}
\left(\log\f{1}{z}\right)^{-1}\left(1+
\f{\lambda}{2}\f{1}{\log\f{1}{z}} +\ldots\right)c(e^{\f{\lambda}{2}}z) \\
&
=e^{\f{\lambda}{2}\f{4}{3}}\left(1+
\f{\lambda}{2}\f{1}{\log\f{1}{z}}+\ldots\right)y(z) \,,
\end{aligned}
\end{equation}
and so
\begin{equation}
\label{eq:log4}
\gamma_\lambda(y)= \f{4}{3} + \f{2}{\lambda}\log\left(1+
\f{\lambda}{2}\f{1}{\log\f{1}{z}}\right)+\ldots
= \f{4}{3} +\f{1}{\log\f{1}{z}}+\ldots\,.
\end{equation}
Taking the logarithm on both sides of Eq.~\eqref{eq:log1}, we find
to leading order $\log\f{1}{z}=\f{3}{4}\log\f{1}{y(z)}+\ldots$,
and plugging this into Eq.~\eqref{eq:log4} we finally obtain
\begin{equation}
\label{eq:log7}
\gamma_\lambda(y)=
\f{4}{3}\left(1
+\f{1}{\log\f{1}{y}}\right)+o\left(\f{1}{\log\f{1}{y}}\right)\,.
\end{equation}
The derivative of this function is infinite at the origin: as a
consequence, the effective exponent changes abruptly at very small
$y$, going from $\gamma_\lambda= \f{4}{3}\simeq 1.33$ at $y=0$ to
$\gamma_\lambda\simeq 1.43$ at $y=10^{-6}$ (see
Fig.~\ref{fig:thpred}). From a practical point of
view, this behaviour makes it very hard to obtain the correct
extrapolation from numerical data: one would need high precision data
at very small values of $y$ in order to figure out the logarithmic
behaviour.
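The size of the effect can be quantified directly from Eq.~\eqref{eq:log7}; a minimal numerical sketch, keeping only the leading logarithm displayed there:

```python
import numpy as np

def gamma_eff(y):
    # leading behaviour of the effective exponent, Eq. (log7):
    # gamma_lambda(y) = (4/3) (1 + 1/log(1/y))
    return (4.0 / 3.0) * (1.0 + 1.0 / np.log(1.0 / y))

ys = np.array([1e-2, 1e-4, 1e-6, 1e-8])
vals = gamma_eff(ys)   # slow drift towards the limit 4/3 ~ 1.333
```

Even at $y=10^{-8}$ the effective exponent is still about $1.41$, far from its limiting value $\f{4}{3}\simeq 1.33$, which illustrates why the extrapolation is so delicate in practice.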
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.49\textwidth]{log_corr.eps}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{log_corr_2.eps}}
\caption{(a) Theoretical prediction for $\gamma_\lambda(y)$ up to order
${\cal O}\left(\log\f{1}{y(z)}\right)$. \\ (b) Theoretical prediction
for $\bar\gamma_\lambda(\bar y)$ up to order
${\cal O}\left(\f{\log\log\f{1}{\bar y(z)}}{(\log\f{1}{\bar
y(z)})^2}\right)$.}
\label{fig:thpred}
\end{figure}
It is possible to modify the method of scaling transformations
discussed above, in order to reduce the effect of the logarithmic
corrections. Indeed, it suffices to consider a new function $\bar
y(z)$, obtained by multiplying $y$ by an appropriate factor, designed
to cancel the
logarithmic corrections at $\theta=\pi$. A convenient choice is
\begin{equation}
\label{eq:extmet4}
\bar y(z)\equiv
y(z)\,\cosh\f{h}{2}\,\log\left(1+\f{1}{\cosh\f{h}{2}}\right)
=
\f{\langle Q
\rangle_h}{V\tanh\f{h}{2}}
\,\cosh\f{h}{2}\,\log\left(1+\f{1}{\cosh\f{h}{2}}
\right)
\,.
\end{equation}
It is easy to show that the extra term behaves as
\begin{equation}
\label{eq:extmet2}
\cosh\f{h}{2} \log\left(1+\f{1}{\cosh\f{h}{2}}\right)
\mathop\to_{h\to i\theta}
\cos\f{\theta}{2}\log\left(1+\f{1}{\cos\f{\theta}{2}}\right)
\mathop\to_{\theta\to\pi} z\log\f{1}{z}\,;
\end{equation}
therefore, near $z=0$ we have that $\bar y(z) \propto z^{\f{7}{3}}$,
without logarithmic corrections.\footnote{\label{foot:logs} More
generally, if the order parameter behaves as $\mathbf{O}
\propto z^{\epsilon}\left(\log\f{1}{z}\right)^{-\rho}$, one can
define
$$\bar y(z,\rho)=\f{\langle Q
\rangle_h}{V\tanh\f{h}{2}}\left[\cosh\f{h}{2}\,
\log\left(1+\f{1}{\cosh\f{h}{2}}
\right)\right]^\rho
$$
in order to take care of logarithmic factors.
One has that $\bar
y(z,\rho)\propto z^{\epsilon + 1 + \rho}$ near $z=0$, without
logarithmic corrections. }
Compared to similar functions yielding the desired logarithmic term,
this choice has the advantage that the behaviour
Eq.~\eqref{eq:extmet2} of the extra factor has only corrections of
order ${\cal O}(z^2)$ near $z=0$, and that the large-$h$ behaviour of
$\bar y$ is the same as that of the topological charge density, so
avoiding possible distortions in the numerical analysis. Moreover, the
extra factor is a monotonically increasing function of $z$, so that it
cannot modify the monotonicity properties of $y(z)$ (which is assumed
to be a monotonically increasing function for the whole method to work).
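The small-$z$ behaviour Eq.~\eqref{eq:extmet2} of the extra factor, and in particular the stated ${\cal O}(z^2)$ accuracy and monotonicity, can be verified numerically:

```python
import numpy as np

def extra_factor(z):
    # cos(theta/2) log(1 + 1/cos(theta/2)) written in terms of z = cos(theta/2)
    return z * np.log(1.0 + 1.0 / z)

z = np.array([1e-2, 1e-3, 1e-4])
leading = z * np.log(1.0 / z)          # expected small-z behaviour, Eq. (extmet2)
correction = extra_factor(z) - leading # analytically z log(1+z) = z^2 + O(z^3)
```

The deviation from $z\log\f{1}{z}$ is indeed $z^2$ to high accuracy, and the factor increases monotonically with $z$, as required for the method.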
It is straightforward now to work out the theoretical prediction for
the behaviour of the new effective exponent
\begin{equation}
\label{eq:new_eff_exp}
\bar\gamma_\lambda(\bar y)\equiv \f{2}{\lambda}\log\f{\bar
y_\lambda}{\bar y}\,.
\end{equation}
Clearly,
\begin{equation}
\label{eq:new_eff_exp2}
\bar\gamma \equiv \lim_{\bar y\to 0} \bar\gamma_\lambda(\bar y) = \gamma+1=
\epsilon+2 = \f{7}{3}\,.
\end{equation}
Near $z=0$, we have that (see Eq.~\eqref{eq:RG3})
\begin{equation}
\label{eq:log1bis}
\bar y(z) = \bar y_0\, z^{\f{7}{3}}\left[1-
\f{3}{4}\f{\log\log\f{1}{z}}{\log\f{1}{z}} + \ldots
\right]\,,
\end{equation}
where $\bar y_0$ is some constant, and also
\begin{equation}
\label{eq:log2bis}
\begin{aligned}
\bar y_\lambda(z)&= e^{\f{\lambda}{2}\f{7}{3}}\bar y_0\, z^{\f{7}{3}}
\left[1-
\f{3}{4}\f{\log(\log\f{1}{z} -\f{\lambda}{2})
}{\log\f{1}{z}-\f{\lambda}{2}} + \ldots \right]\\
&= e^{\f{\lambda}{2}\f{7}{3}}\bar y_0\, z^{\f{7}{3}}
\left[1-
\f{3}{4}\left(\f{\log\log\f{1}{z}}{\log\f{1}{z}}+
\f{\lambda}{2}\f{\log\log\f{1}{z}}{(\log\f{1}{z})^2} +\ldots
\right)+\ldots \right]\\
&= e^{\f{\lambda}{2}\f{7}{3}}\left[1-
\f{3}{4}\f{\lambda}{2}\f{\log\log\f{1}{z}}{(\log\f{1}{z})^2} +\ldots
\right]\bar y(z)\,,
\end{aligned}
\end{equation}
where the terms neglected in the last step are of order ${\cal
O}([\log(1/z)]^{-2})$. Taking logarithms on both sides of
Eq.~\eqref{eq:log1bis} one immediately sees that to leading order
$\log\f{1}{z} = \f{3}{7}\log\f{1}{\bar y}+\ldots$, and so one finds
that\footnote{This
result holds with a milder assumption on the relation between
$\tilde g$ and $\pi -\theta$, namely that $\tilde g\propto \pi
-\theta +\ldots$ with subleading terms suppressed with
respect to $\f{\log\log\f{1}{\pi-\theta}}{\log\f{1}{\pi-\theta}}$.
}
\begin{equation}
\label{eq:log3bis}
\bar\gamma_\lambda (\bar y)=
\f{7}{3} - \f{49}{12}\,\f{\log\log\f{1}{\bar
y}}{(\log\f{1}{\bar y})^2}+\ldots\,.
\end{equation}
Although there still are logarithmic effects in the approach to the
limit value, the ``jump'' of the function between $\bar y=0$ and $\bar
y=10^{-6}$ is half as much as that of $\gamma_\lambda$ predicted above
(see Fig.~\ref{fig:thpred}). Moreover, it is easy to see that
corrections to the leading-order relation between $\log\f{1}{z}$ and
$\log\f{1}{\bar y}$ are vanishing as $\bar y\to 0$, i.e.,
$\log\f{1}{z} = \f{3}{7}\log\f{1}{\bar y}+o(1)$, while in the
relation between $\log\f{1}{z}$ and $\log\f{1}{y}$ there are also
subleading but divergent terms as $y\to 0$. The bottom line is that
the use of $\bar\gamma_\lambda$ and $\bar y$ instead of
$\gamma_\lambda$ and $y$ is expected to improve the numerical
analysis.
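To illustrate this point, the following sketch (a toy numerical check, not part of the derivation) builds $\bar y(z)$ from the leading terms of Eq.~\eqref{eq:log1bis} and evaluates the effective exponent, assumed here to be defined as $\bar\gamma_\lambda = \f{2}{\lambda}\log\left[\bar y_\lambda/\bar y\right]$ in analogy with $\gamma_\lambda$:

```python
import math

def ybar(z, y0=1.0):
    # leading terms of Eq. (log1bis): ybar(z) = y0 z^{7/3} [1 - (3/4) loglog(1/z)/log(1/z)]
    L = math.log(1.0 / z)
    return y0 * z ** (7.0 / 3.0) * (1.0 - 0.75 * math.log(L) / L)

def gamma_eff(z, lam=0.5):
    # effective exponent (2/lam) log(ybar(z_lam)/ybar(z)), with log(1/z_lam) = log(1/z) - lam/2
    return (2.0 / lam) * math.log(ybar(math.exp(lam / 2.0) * z) / ybar(z))

for z in (1e-10, 1e-20, 1e-40):
    print(z, gamma_eff(z))   # slowly approaches 7/3 from below, as in Eq. (log3bis)
```

The slow, purely logarithmic drift of the printed values towards $\f{7}{3}$ is the behaviour quantified by Eq.~\eqref{eq:log3bis}.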
In concluding this Section, we want to add a few remarks.
First of all, we want to stress that the results of this Section are expected
to hold in the continuum limit, and they are based on the fundamental
assumption that the critical theory at $\theta = \pi$ is the $SU(2)$ WZNW model
at topological coupling $k = 1$. Furthermore, the derivation above is correct
provided that the free energy is properly renormalised. Indeed, it is known
that the topological susceptibility, as well as the higher moments of the
topological charge distribution, diverge in the continuum limit
of the $O(3)$ nonlinear sigma model (at $\theta = 0 $)
\cite{Luscher,BDSL,Nogradi}.
As suggested in~\cite{BDSL} and recently confirmed in~\cite{Nogradi} by
means of numerical simulations, these divergences can be traced back to the
first coefficient in the Fourier expansion of
$F(\theta)$, i.e.,
\begin{equation}
\label{eq:fourier}
F(\theta) = \sum_{n=1}^\infty f_n [1 -
\cos(n\theta)] \,,
\end{equation}
with
$f_1$ divergent and $f_n$ finite for $n>1$. In Eq.~\eqref{eq:fed} one
should therefore use $F^R(\theta) = F(\theta) - f_1^{\rm DIV}[1 -
\cos(\theta)]$, with $f_1^{\rm DIV}$ the divergent part of $f_1$; the
following results therefore hold for the renormalised quantity
$y^R =
\f{\partial F^R}{\partial \theta}\left(\tan\frac{\theta}{2}\right)^{-1}$,
and the related quantities $\gamma_\lambda^R$, $\bar y^R$ and
$\bar\gamma_\lambda^R$, in the continuum.
However, Haldane's conjecture is formulated for small but finite lattice
spacing $a$, where $f_1^{\rm DIV}=f_1^{\rm DIV}(a)$ is still
finite. We are therefore
interested in the critical behaviour of the model at $\theta = \pi$ and at
finite $a$, so that the observable of interest
$y_L = \f{\partial
F_L}{\partial \theta}\left(\tan\frac{\theta}{2}\right)^{-1}$
(we are using now the
subscript $L$ for lattice quantities) is a well defined, finite quantity, which
is a function of $z=\cos\frac{\theta}{2}$ and $a$, $y_L=y_L(z,a)$. Separating
now the renormalised continuum contribution from the rest, we have $y_L(z,a) =
y^R(z) + f_1^{\rm DIV}(a)\sin\theta + \delta y_L(z,a)$, where $\delta y_L$
are finite corrections that vanish as $a\to 0$. It is now evident that the
divergent term does not affect the critical behaviour at $\theta = \pi$ at
finite $a$: as it is $\propto z^2$, it is subleading with respect to $y^R(z)
\propto z^{\frac{4}{3}}$, provided that the theoretical prediction holds;
obviously, the prescription used to define the divergent part is irrelevant. On
the other hand, the corrections $\delta y_L$ may change the critical behaviour
at finite lattice spacing (and are indeed expected to do so at strong coupling):
therefore, Eqs.~\eqref{eq:log7} and \eqref{eq:log3bis} will describe
the critical behaviour of the model at
finite $a$ only if $y^R(z)$ is the leading contribution.
We notice also that the prediction for the quantity $\bar y(z)$ has
been derived using
the behaviour of the mass gap near the critical point $\theta =
\pi$, i.e., near $z = 0$, so that it is not expected to hold for $z
\ge 1$, where
numerical simulations are feasible. On the other hand, due to its expected
smoothness, the prediction for $\bar y_\lambda(\bar y)$ should hold more
generally in the region of small $\bar y$, which is accessible to numerical
simulations at sufficiently small values of the coupling.
\section{Numerical simulations on the lattice}
\label{sec:num_sim}
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.25\textwidth]{unit_square.eps}}
\hspace{0.15\textwidth}
\subfigure[]{\includegraphics[width=0.35\textwidth]{spherical_2.eps}}
\caption{(a) A unit square of the direct lattice, i.e., a site of
the dual lattice. \\ (b) Spherical triangle corresponding to the
spins $\vec{s}_1$, $\vec{s}_2$ and $\vec{s}_3$.}
\label{fig:1}
\end{figure}
In this Section we describe the setup of our numerical simulations on
the lattice, and we discuss our results on the critical behaviour of
the two-dimensional $O(3)_\theta$NL$\sigma$M at $\theta=\pi$ at finite
lattice spacing.
\subsection{The $O(3)_\theta$NL$\sigma$M on the lattice}
\label{sec:lat_O3}
In order to compute numerically the functional integrals of
Eq.~\eqref{eq:model}, one replaces the continuum by a square lattice
$\Lambda$ of finite size $V$, properly discretising the action. The
simplest choice for $S$ is
\begin{equation}
\label{eq:s0lat}
S[\vec{s}] \to \frac{1}{g^2}\sum_{x\in\Lambda}\sum_{\mu=1}^2
[1-\vec{s}(x)\cdot\vec{s}(x+\hat\mu)] = 2\beta V
+ \beta S_{\rm latt}[\vec{s}]\,, \quad
\beta = 1/g^2\,.
\end{equation}
Here $\hat\mu$ is a unit lattice vector in direction $\mu$.
The lattice action $S_{\rm latt}[\vec{s}]$ is identical to the energy of
the Heisenberg statistical model, so that the resulting expression for
$Z(\theta=0)$ gives (up to an irrelevant constant) the partition
function of this model at temperature $1/\beta$ (in units of the
Boltzmann constant). As regards the topological charge, we have used
the geometrical definition of Ref.~\cite{BL},
\begin{equation}
\label{eq:top_charge}
Q_{\rm geom}[\vec{s}] = \sum_{x^*\in\Lambda^*} q(x^*)\,, \qquad
q(x^*)=\f{1}{4\pi}\left[(\sigma A)(\vec{s}_1,\vec{s}_2,\vec{s}_3) + (\sigma
A)(\vec{s}_1,\vec{s}_3,\vec{s}_4)\right]\,,
\end{equation}
where $x^*$ are sites of the dual lattice $\Lambda^*$ (i.e., squares
of the direct lattice $\Lambda$), and $\vec{s}_i=\vec{s}(x_i)$ are the spin
variables living on the corners $x_i$ of the squares (ordered
counterclockwise starting from the bottom left corner, see
Fig.~\ref{fig:1} (a)). Here we have denoted by $(\sigma
A)(\vec{s}_1,\vec{s}_2,\vec{s}_3)$ the signed area of the spherical triangle having
as vertices $\vec{s}_1$, $\vec{s}_2$, and $\vec{s}_3$ (see Fig.~\ref{fig:1} (b)): the
absolute value of the area $A$ and its sign $\sigma$, i.e., the
orientation of the spherical triangle, are given respectively by
\begin{equation}
\label{eq:area}
A = \alpha_1 + \alpha_2 + \alpha_3 - \pi\,, \qquad
\sigma = {\rm sign}\left[\vec{s}_1\cdot (\vec{s}_2\wedge \vec{s}_3)\right]\,,
\end{equation}
with $\alpha_i$ the angles at the corners of the spherical
triangle; the two terms $q(x^*)= q_1(x^*)+q_2(x^*)$ in
Eq.~\eqref{eq:top_charge} correspond to the two
triangles in which each square on the lattice is divided. In terms of the
spin variables one has
\begin{equation}
\label{eq:signedarea}
\begin{aligned}
\exp\left\{\textstyle\f{i}{2}(\sigma A)\right\} &= \rho^{-1}\left[ 1
+ \vec{s}_1\cdot \vec{s}_2 + \vec{s}_2\cdot \vec{s}_3 + \vec{s}_3\cdot \vec{s}_1 + i \vec{s}_1\cdot
(\vec{s}_2\wedge \vec{s}_3)\right] \,,\\
\rho^2 &= 2(1+\vec{s}_1\cdot \vec{s}_2)(1+\vec{s}_2\cdot \vec{s}_3)(1+\vec{s}_3\cdot \vec{s}_1)\,.
\end{aligned}
\end{equation}
Except for the exceptional configurations
\begin{equation}
\label{eq:except}
\vec{s}_1\cdot (\vec{s}_2\wedge \vec{s}_3) = 0\,, \quad 1+ \vec{s}_1\cdot \vec{s}_2 + \vec{s}_2\cdot \vec{s}_3
+ \vec{s}_3\cdot \vec{s}_1 \le 0\,,
\end{equation}
for which the topological charge is not defined, one has $\sigma=\pm
1$ and $ A < 2\pi$. One verifies directly that $Q_{\rm geom}$ has the
correct continuum limit; moreover, imposing periodic boundary
conditions, it necessarily takes only integer values.\footnote{It is
worth mentioning that the Mermin-Wagner-Hohenberg
theorem~\cite{MWH1,MWH2,MWH3}, that forbids the possibility of spontaneous
magnetisation in the model at $\theta=0$, can be easily extended to
$\theta\ne 0$ if the geometric definition $Q_{\rm geom}$ of the
charge is used.}
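As an illustration, Eqs.~\eqref{eq:top_charge} and \eqref{eq:signedarea} translate directly into code; the following Python/NumPy sketch (function names are ours) extracts $(\sigma A)$ from the phase of the right-hand side of Eq.~\eqref{eq:signedarea}:

```python
import numpy as np

def signed_area(s1, s2, s3):
    """Signed area (sigma A) of the spherical triangle with vertices s1, s2, s3,
    read off from exp{i (sigma A)/2} in Eq. (signedarea)."""
    w = (1.0 + s1 @ s2 + s2 @ s3 + s3 @ s1
         + 1j * np.dot(s1, np.cross(s2, s3)))
    rho = np.sqrt(2.0 * (1.0 + s1 @ s2) * (1.0 + s2 @ s3) * (1.0 + s3 @ s1))
    return 2.0 * np.angle(w / rho)   # well defined away from the exceptional configurations

def plaquette_charge(s1, s2, s3, s4):
    # q(x*) of Eq. (top_charge): the unit square split into triangles (123) and (134)
    return (signed_area(s1, s2, s3) + signed_area(s1, s3, s4)) / (4.0 * np.pi)
```

For the octant triangle $\vec s_1 = \hat x$, $\vec s_2 = \hat y$, $\vec s_3 = \hat z$ one recovers $(\sigma A)=\pi/2$, one eighth of the total solid angle, while a plaquette of aligned spins carries zero charge.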
\begin{table}[t]
\centering
\begin{tabular}{c|c|c}
$\beta$ & $V$ & statistics \\ \hline
0.9 & $100^2$ & $2\cdot 10^6$ \\
1.2 & $100^2$ & $2\cdot 10^6$ \\
1.5 & $100^2$ & $2\cdot 10^6$ \\
1.6 & $200^2$ & $4\cdot 10^6$ \\
1.7 & $350^2$ & $2\cdot 10^6$
\end{tabular}
\caption{Details of the simulations.}
\label{tab:tech}
\end{table}
Regarding the numerical simulation of the system, the non-linear
dependence of $Q_{\rm geom}$ on the spins makes it hard to envisage fast
algorithms; we have therefore used a simple Metropolis algorithm,
supplemented by a ``partial over-relaxation'' algorithm to accelerate
the decorrelation between configurations. This ``partial
over-relaxation'' algorithm simply consists in proposing the usual
over-relaxation step, used when simulating the model at $\theta=0$,
and then subjecting it to a Metropolis accept/reject step.
Notwithstanding its simplicity, this algorithm turns out to be rather
efficient in decorrelating configurations, especially when $\beta$ is
large and the topological content of the configurations changes
rarely.
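A single-site update of this kind can be sketched as follows (a schematic illustration with our own names: the local field \texttt{h} is the sum of the neighbouring spins, and \texttt{delta\_S\_top} stands for the change of the $\theta$-dependent part of the action, which the full program computes from $Q_{\rm geom}$):

```python
import numpy as np

def overrelax_proposal(s, h):
    """Reflect the spin s about the local field h (sum of neighbouring spins).
    This leaves the theta=0 part of the action invariant, since s.h is unchanged."""
    hn = h / np.linalg.norm(h)
    return 2.0 * np.dot(s, hn) * hn - s

def partial_overrelax_site(s, h, delta_S_top, rng):
    """Propose the over-relaxed spin and accept/reject it with a Metropolis test
    built only from the change of the topological part of the action."""
    s_new = overrelax_proposal(s, h)
    if rng.random() < min(1.0, np.exp(-delta_S_top(s_new))):
        return s_new
    return s
```

Since the reflection preserves the $\theta=0$ action exactly, only the topological term enters the accept/reject step.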
\subsection{Numerical analysis}
\label{sec:num_an}
We have performed numerical simulations of the
$O(3)_\theta$NL$\sigma$M at various values of the coupling. For each
value of $\beta$ we have chosen 45 values of $h$, in such a way that
the topological charge was measured for both $z=\cosh\f{h}{2}$ and
$z_\lambda=e^{\f{\lambda}{2}}z\equiv\cosh\f{h_\lambda}{2}$; we used
$\lambda=0.5$. The (real) values of $h=i\theta$ that we used lie
below the line of (possible) phase transitions determined in
Ref.~\cite{BD}, so that the region where our simulations were
performed and the real-$\theta$ axis belong to the same analyticity
domain in the complex-$\theta$ plane. The statistical error on the
topological charge has been determined through binning. The lattice
size was chosen in order for the finite size effects to be negligible
(see Tab.~\ref{tab:tech}).
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{bargammal_bayesian_fit_beta1.5.eps}
\caption{Plot of the effective exponent $\bar\gamma_\lambda(\bar y)$
for $\beta=1.5$, together with the result of a Bayesian fit at
fixed $\bar\gamma=\bar\gamma_\lambda(0)$ (solid line,
Tab.~\ref{tab:bfit1.5bis} (left)) and with free $\bar\gamma$ (long-dashed
line, Tab.~\ref{tab:bfit1.5bis} (right)).}
\label{fig:fitbays1.5}
\end{figure}
We have then analysed the results for the effective exponent
$\bar\gamma_\lambda$ by means of Bayesian fits~\cite{bayes}, based on
the theoretical prediction described in Section \ref{sec:crit}. In
a nutshell, a Bayesian fit takes into account our knowledge (the
so-called {\it priors}) about the parameters that we are fitting. A
detailed account of the analysis can be found in Appendix
\ref{sec:appendix}: here we will mainly discuss the results.
The fits were based on the following general form of
$\bar\gamma_\lambda$,
\begin{equation}
\label{eq:bay_zero}
\bar\gamma_\lambda(\bar y) = \bar\gamma + F\left(\log\f{\bar
y_0}{\bar y},\{a_j^{(k)}\}\right) \,,
\end{equation}
with $F(x,\{a_j^{(k)}\})\to 0$ as $x\to\infty$, that can be derived from the
expected critical behaviour at $\theta=\pi$ (neglecting terms that
vanish as power laws). The values of the parameters $\bar y_0$ and
$a_j^{(k)}$ are not determined by the theoretical analysis, and have
been fitted to the lattice data.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{bargammal_bayesian_fit_beta1.6.eps}
\caption{Plot of the effective exponent $\bar\gamma_\lambda(\bar y)$
for $\beta=1.6$, together with the result of a Bayesian fit at
fixed $\bar\gamma=\bar\gamma_\lambda(0)$ (solid line,
Tab.~\ref{tab:bfit1.6bis} (left)) and with free
$\bar\gamma$ (long-dashed line,
Tab.~\ref{tab:bfit1.6bis} (right), and short-dashed line, Tab.~\ref{tab:b3fit1.6bis}).}
\label{fig:fitbays1.6}
\end{figure}
A first analysis has been carried out by fixing $\bar\gamma$ to the
theoretical value, $\bar\gamma=\f{7}{3}$, and fitting the other
parameters, starting with $\bar y_0$ only and progressively adding
terms, in order of relevance. We have then used the information
obtained on $\bar y_0$ to tune the priors for a second fit, letting
all the parameters free to vary. The results are reported in
Tabs.~\ref{tab:bfit1.5bis}, \ref{tab:bfit1.6bis} and
\ref{tab:bfit1.7bis}, for $\beta=1.5$, $\beta=1.6$ and $\beta=1.7$,
respectively. Finally, at $\beta=1.6$ we have also tried a
fit using information on $a^{(1)}_0$, obtained from the fit at fixed
$\bar\gamma$, in order to set the corresponding priors: the results
are reported in Tab.~\ref{tab:b3fit1.6bis}. The results of the fit
with the largest number of parameters are shown in
Figs.~\ref{fig:fitbays1.5}, \ref{fig:fitbays1.6} and
\ref{fig:fitbays1.7}, for $\beta=1.5$, $\beta=1.6$ and $\beta=1.7$,
respectively.
From the results of the analysis described above, we conclude that the
lattice data are compatible, within the errors, with the critical
behaviour predicted from the WZNW model, at $\beta=1.5$, $\beta=1.6$
and $\beta=1.7$. On the other hand, data at $\beta=0.9$ and
$\beta=1.2$ led to bad-quality fits when fixing $\bar\gamma$ to the
theoretical value, and to a value of $\bar\gamma$ considerably smaller
than the theoretical prediction when allowed to float ($\sim
1.9$ for $\beta=0.9$, $\sim 2$ for $\beta=1.2$). This shows that the
WZNW-like critical behaviour does not hold at small $\beta$, breaking
down at some critical value yet to be determined, but does not allow
us to draw any conclusion on the details of what happens as one lowers
$\beta$. The problem is that our analysis {\it assumes} a given
logarithmic factor in the critical behaviour of the topological charge
at $\theta=\pi$, rather than obtaining it from the numerical data. A
few attempts have shown that if we vary the exponent of the
logarithmic factor in $\bar y$, as described in footnote
\ref{foot:logs}, we still obtain a good fit but the value for the
critical exponent $\bar\gamma$ resulting from the fit changes,
too. For this reason, we have not attempted a more quantitative
analysis at $\beta=0.9$ and $\beta=1.2$.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{bargammal_bayesian_fit_beta1.7.eps}
\caption{Plot of the effective exponent $\bar\gamma_\lambda(\bar y)$
for $\beta=1.7$, together with the result of a Bayesian fit at
fixed $\bar\gamma=\bar\gamma_\lambda(0)$ (solid line,
Tab.~\ref{tab:bfit1.7bis} (left)) and with free
$\bar\gamma$ (long-dashed line, Tab.~\ref{tab:bfit1.7bis} (right)).}
\label{fig:fitbays1.7}
\end{figure}
Summarising, our results are compatible with one of the two following
scenarios.
\begin{enumerate}
\item There is a critical value $\beta=\beta_c$, above which the
critical behaviour of the $O(3)_\theta$NL$\sigma$M at $\theta=\pi$
is exactly the one predicted by the WZNW model at topological
coupling $k=1$.
\item The critical behaviour becomes exactly the one predicted by the
WZNW model only at infinite $\beta$, but for $\beta$ large enough,
$\beta\gtrsim\tilde\beta_c$, the difference is not appreciable
numerically.
\end{enumerate}
As for what happens at small $\beta$, there are various
possibilities. As one expects the system to undergo a first-order
phase transition at $\theta=\pi$ at strong coupling, in the case of
the first scenario above there may be a sharp change in the nature of
the transition from first order to WZNW-like second order, with the
order parameter vanishing at $\theta=\pi$ as $\mathbf{O}\propto
(\pi-\theta)^\epsilon$ with $\epsilon=\f{1}{3}$. It is
however also possible that the nature of the transition changes
continuously, i.e., the critical exponent $\epsilon$ varies from 0 to
$\f{1}{3}$ as $\beta$ increases, either reaching $\f{1}{3}$ at some
finite value of $\beta=\beta_c$, or only asymptotically, as in the
second scenario above.
\section{Conclusions}
\label{sec:concl}
In this paper we have studied the critical behaviour of the
two-dimensional $O(3)$ nonlinear sigma model with $\theta$-term
($O(3)_\theta$NL$\sigma$M) at $\theta=\pi$, by means of numerical
simulations at imaginary $\theta$. Using the method of
Refs.~\cite{ADGV1,ADGV2,CP1,AFV}, it is possible in principle to
reconstruct the behaviour of the topological charge density for real
$\theta$, and so investigate the issue of parity symmetry breaking at
$\theta=\pi$. The theoretical expectation is that parity symmetry is
recovered at $\theta=\pi$ through a second-order phase transition,
the behaviour at the critical point being determined by the $SU(2)$
Wess-Zumino-Novikov-Witten (WZNW) model~\cite{WZNW1,WZNW2,WZNW3} at
topological coupling $k=1$.
Assuming that this is the case, one can show that the method of
Refs.~\cite{ADGV1,ADGV2,CP1,AFV} is unlikely to yield the correct
critical exponent, as the large logarithmic violations to scaling at
the critical point make it difficult to reconstruct the critical
behaviour from the numerical data. Assuming that the logarithmic
violations are known, it is however easy to modify the method in order
to overcome this problem. We have then been able to show that our
numerical results for sufficiently large $\beta$, i.e., for
sufficiently weak coupling, are compatible with
the expected WZNW-like behaviour at $\theta=\pi$, in agreement with
previous numerical investigations~\cite{BPW,Bogli,dFPW}.
Several issues remain open. Although the modified method allows one to
take care of logarithmic violations, it is necessary to know them in
advance in order for it to work properly. In fact,
an incorrect assumption on these logarithmic violations
could not be detected from the numerical analysis, and so would lead
to an incorrect evaluation of the critical exponent. The bottom line
is that our modified method can be used to {\it test} a theoretical
expectation on the critical behaviour of a model with results from
numerical investigation, but would not lead to conclusive results if
one rather tried to {\it determine} the critical behaviour from the
numerical data.
For this reason, we have not been able to determine the critical
behaviour of the $O(3)_\theta$NL$\sigma$M at smaller values of
$\beta$, although we have been able to exclude that it is the same as
in the WZNW model. Also, due to the numerical errors, we are not able
to tell if the critical behaviour is {\it exactly} WZNW-like, starting
from some critical value of $\beta$, or if it becomes WZNW-like only
{\it asymptotically}. Further investigations are therefore required, in
order to unveil completely the phase diagram of the
$O(3)_\theta$NL$\sigma$M.
Being in agreement with Refs.~\cite{BPW,Bogli,dFPW}, the results of this
paper are obviously in disagreement with those obtained in
Ref.~\cite{CP1} for the $CP^1$ model. There are basically two
possibilities: either the $O(3)$ and $CP^1$ models are not
equivalent, contrary to the standard wisdom; or they are equivalent,
and the results obtained in Ref.~\cite{CP1} are affected by the
numerical problems related to the logarithmic violations, discussed in
Section~\ref{sec:crit}. In order to settle this issue, a new analysis
of the numerical data of Ref.~\cite{CP1} is required, along the lines
developed in this paper, which will be discussed in a forthcoming
publication.
\section*{Acknowledgments}
This work was funded by an INFN-MICINN collaboration (under grant
AIC-D-2011-0663), MICINN (under grant FPA2009-09638 and
FPA2008-10732), DGIID-DGA (grant 2007-E24/2), and by the EU under
ITN-STRONGnet (PITN-GA-2009-238353). EF is supported by
the MICINN Ramon y Cajal program. MG is supported
by MICINN under the CPAN project CSD2007-00042 from the
Consolider-Ingenio2010 program.
\newpage
\section{Introduction}
Entanglement is a non-local property of quantum states, which plays an important role in quantum information. Let us consider a quantum system which can be divided into two parts, $A$ and $B$, such that the total Hilbert space is a direct product of the two subsystems, $H_\mathrm{tot} = H_{A} \otimes H_{B}$. For a quantum state $\ket{\Psi} \in H_\mathrm{tot}$, the reduced density matrix of $A$ is defined, by tracing out the degrees of freedom of subsystem $B$, as $\rho_A = \tr_B (\ket{\Psi} \bra{\Psi})$. Then the entanglement entropy (von Neumann entropy) of $A$ is defined as
\begin{equation}
\mathcal{S}_{A} \equiv - \tr(\rho_{A} \, \ln \rho_{A}),
\end{equation}
which measures the correlations between $A$ and $B$.
On the other hand, in many examples the holographic principle~\cite{tHooft:1993dmi, Susskind:1994vu} has successfully connected two very different theories: gravity in the bulk and quantum field theory (QFT) on the boundary. One of the most famous examples is the AdS/CFT correspondence~\cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj}. In this framework, Ryu and Takayanagi proposed that the entanglement entropy in a QFT between a spatial region $\Sigma$ and its complement can be computed from~\cite{Ryu:2006bv, Ryu:2006ef}
\begin{equation}
\mathcal{S}_{\Sigma} = \frac{\mathcal{A}_{\Gamma}}{4},
\end{equation}
where $\mathcal{A}_{\Gamma}$ is the area of the extremal surface (the surface of minimal area) in the bulk whose boundary is anchored on the entangling surface.
The Ryu-Takayanagi proposal provides a very different and, in many situations, convenient approach to study the entanglement entropy. The calculations of the holographic entanglement entropy in vacuum were first carried out in~\cite{Ryu:2006bv, Ryu:2006ef}. The references~\cite{Nishioka:2009un, Takayanagi:2012kg, Rangamani:2016dms} provide nice reviews of the study of entanglement entropy by holographic methods. Although the entanglement entropy is very different from the thermal entropy, the thermodynamics of entanglement entropy in thermal states has been analyzed by exploring the holographic entanglement entropy of black holes in the small $R$ limit~\cite{Bhattacharya:2012mi,Fischler:2013, Chaturvedi:2016kbk, Karar:2018ecr, Saha:2019ado, Caputa:2013lfa,Kundu:2016,Nadi:2019bqu,Maulik:2021,Singh:2021}. Moreover, it has been proved that the extremal surface cannot penetrate the horizon of static black holes~\cite{Hubeny:2012ry}, which implies that in the large $R$ limit the area of the extremal surface is dominated by the IR region after regularization, because a large part of the extremal surface runs along the event horizon. One can therefore expect that, after regularization, the leading behavior of the area of the extremal surface in the large $R$ limit is identical to the Bekenstein-Hawking entropy, i.e., the thermal entropy. There are some discussions of the entanglement entropy in the large $R$ limit in~\cite{Fischler:2013,Caputa:2013lfa,Liu:2013una, Chaturvedi:2016kbk,Kundu:2016,Saha:2019ado, Karar:2018ecr}.
The entanglement entropy is always divergent in quantum field theories if we take the continuum limit. The leading behavior of the entanglement entropy is proportional to the volume of the entangling surface owing to short-range correlations; this is the so-called area law~\cite{Bombelli:1986,Srednicki:1993,Eisert:2010}, while the terms which are independent of the cutoff are attributed to long-range entanglement. These finite terms of the entanglement entropy in a CFT are related to the central charge, featuring the number of degrees of freedom. On top of that, the holographic entanglement entropy is a fine-grained entropy, defined fully quantum mechanically at the microscopic level, whereas the Bekenstein-Hawking entropy, the thermal entropy, is a coarse-grained entropy, based on an effectively semi-classical description. Thus, we expect the holographic entanglement entropy of a black hole to be very different from the thermal entropy in the small $R$ limit, while in the large $R$ limit the two entropies become approximately the same.
In this paper, we study the holographic entanglement entropy between a strip region of width $2R$ and its complement in an $(n + 1)$-dimensional strongly coupled large-$N$ CFT on $\mathbb{R}^{1,n}$ with chemical potential and angular momentum, in the small and large $R$ limits. The gravitational dual is an $(n + 2)$-dimensional cylindrical Kerr-Newman black hole, for which the boundary of the spatial region is topologically flat $\mathbb{R}^n$. In section \ref{sec2}, we introduce the cylindrical Kerr-Newman-AdS black hole metric and highlight several important properties of this spacetime. In section \ref{sec3}, we set up the integral formula for the area when $\Sigma$ is a strip. Although there is a rotation in the $\phi$ direction, the extremal surface $\Gamma$ still preserves the translational symmetry along the $\phi$ direction, and the corresponding constant of motion simplifies our calculations. However, it is still difficult to obtain analytic results in the general case. Here, we only focus on the small and large $R$ limits, which are derived in sections \ref{sec4} and \ref{sec5}, respectively. In the end, we summarize our analytical results and briefly discuss the outlook in section \ref{sec6}.
\section{Cylindrical Kerr-Newman-AdS Black Hole}\label{sec2}
Let us consider an observer moving with the velocity $\Omega= a/\Theta$ in an $(n + 2)$-dimensional charged AdS black brane, that is, a Lorentz boost along the spatial direction $\phi$
\begin{equation} \label{ct}
dt \; \rightarrow \; \Theta \, dt - a \, d\phi, \qquad d\phi \; \rightarrow \; \Theta \, d\phi - a \, dt,
\end{equation}
where $a \in (-\infty, \infty)$ is a rotation parameter when the spatial coordinate $\phi$ is identified as $\phi \sim \phi + 2\pi$ and $\Theta = \sqrt{1 + a^2}$. The line element of a cylindrical Kerr-Newman black brane becomes~\cite{Awad:2002cz}
\begin{equation} \label{metric}
ds^{2} = \frac{L^2}{z^2} \left[ - h(z) (\Theta \, dt - a \, d\phi)^2 + \frac{dz^2}{h(z)} + (\Theta \, d\phi - a \, dt)^2 + d\sigma_{n-1}^2 \right],
\end{equation}
where $d\sigma_{n-1}^2$ is the line element of $(n-1)$-dimensional Euclidean space and the emblackening function is
\begin{equation} \label{hmq}
h(z) = 1 - m z^{n+1} + q^2 z^{2 n},
\end{equation}
including mass and charge parameters. The associated gauge potential is
\begin{equation}
A = \sqrt{\frac{n}{2 (n - 1)}} \, q \, (z_h^{n-1} - z^{n-1}) (\Theta \, dt - a \, d\phi).
\end{equation}
By integrating the flux of the electromagnetic tensor on the boundary of a spacelike hypersurface, we get the density of the electromagnetic charge~\cite{Dehghani:2003}
\begin{equation}
\mathcal{Q}=\frac{q\Theta}{4 \pi G_N}\frac{n(n-1)}{2}\,.
\end{equation}
The time-like Killing vector $\partial_t$ corresponds to the conserved quantity, namely the energy density
\begin{equation}\label{energy density}
\mathcal{E} = \frac{m}{16 \pi G_N} \left[ (n + 1) \Theta^2 - 1 \right],
\end{equation}
and the Killing vector along the azimuthal direction $\partial_\phi$ implies the conservation of angular momentum. The angular momentum density is
\begin{equation}
\mathcal{J} = \frac{n + 1}{16 \pi G_N} \, m \, a \, \Theta.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{1.png}
\includegraphics[width=0.5\textwidth]{2.png}
\caption{{\it Left}: The global structure of $(n+2)-$dimensional cylindrical spacetime. $\Sigma$ is the considered spatial region, $\Gamma_\Sigma$ is the extremal surface (blue curve) for the spatial region $\Sigma$. {\it Right}: The extremal surface in an $(n+1)-$dimensional spatial slice as $\Sigma$ is a strip.}
\end{figure}
According to the definition of the horizon, $h(z_h) = 0$, we can express mass in terms of horizon radius $z_h$
\begin{equation}
m = \frac{q^2 z_h^{2 n} + 1}{z_h^{n + 1}}.
\end{equation}
For fixed $m$, there are two corresponding horizons, the inner and outer horizons, in the non-extremal case. However, the tip of the extremal surface, $z_b$, cannot be greater than the outer event horizon radius, so we take $z_h$ to be the radius of the outer horizon in the following. Therefore, we can rewrite the emblackening function~(\ref{hmq}) as
\begin{equation}
h(z) = 1 - g(z), \qquad g(z) = \frac{z^{n+1}}{z_h^{n+1}} + q^2 z^{n+1} \left( z_h^{n-1} - z^{n-1} \right).
\end{equation}
The temperature of the black brane~(\ref{metric}) is
\begin{equation} \label{t}
T = -\frac{h'(z_h)}{4 \pi \Theta} = \frac{n+1}{4 \pi \Theta z_h} \left( 1 - \frac{n-1}{n+1} q^2 z_h^{2 n} \right).
\end{equation}
There is an upper bound on the charge parameter
\begin{equation}
q \leq \sqrt{\frac{n + 1}{n - 1}} \frac{1}{z_h^{n}},
\end{equation}
which ensures a non-negative temperature, $T \geq 0$; the equality holds when the black brane is extremal. In addition, the electromagnetic field in the bulk geometry corresponds to the chemical potential of the field theory on the boundary
\begin{equation}
\mu = \lim_{z \to 0} A_t(z) = \sqrt{\frac{n}{2 (n - 1)}} \, q \, \Theta \, z_h^{n-1}.
\end{equation}
Thus, we can write the holographic entanglement entropy in terms of chemical potential $\mu$, temperature $T$~\cite{Kundu:2016} and angular velocity $\Omega$.
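As a quick numerical cross-check of Eqs.~(\ref{hmq}) and (\ref{t}), the following sketch (with arbitrary sample values of $n$, $z_h$, $q$ and $a$) verifies that $h(z_h)=0$ and that the closed-form temperature agrees with $-h'(z_h)/(4\pi\Theta)$ evaluated by a finite difference:

```python
import math

n, zh, q, a = 3, 0.8, 0.3, 0.5                 # arbitrary sample parameters
Theta = math.sqrt(1.0 + a * a)
m = (q**2 * zh**(2 * n) + 1.0) / zh**(n + 1)   # mass from the horizon condition

def h(z):
    # emblackening function, Eq. (hmq)
    return 1.0 - m * z**(n + 1) + q**2 * z**(2 * n)

# closed form of Eq. (t) vs a central finite difference of h at z = zh
T_closed = (n + 1) / (4 * math.pi * Theta * zh) * (1 - (n - 1) / (n + 1) * q**2 * zh**(2 * n))
eps = 1e-6
T_numeric = -(h(zh + eps) - h(zh - eps)) / (2 * eps) / (4 * math.pi * Theta)

# chemical potential read off from the gauge potential at the boundary
mu = math.sqrt(n / (2 * (n - 1))) * q * Theta * zh**(n - 1)
print(h(zh), T_closed, T_numeric, mu)
```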
\section{Integration Forms of Area and Boundary}\label{sec3}
In the following calculations, we will consider a strip boundary defined by
\begin{equation}
\Sigma \equiv \left\{ x_i, \phi \Big| x_i \in \left[ - \frac{W}{2}, \frac{W}{2} \right], \, \phi \in \left[ -\phi_R, \phi_R \equiv \frac{R}{\ell} \right] \right\},
\end{equation}
where $\ell$ is the radius of the cylinder. For simplicity, we let $\ell$ be $1$. The associated extremal surface $\Gamma$ in the bulk can be parameterized by $\xi^i = (\phi, \vec{x})$ with embedding $y^{\mu}(\xi^i) = (z(\phi), \phi, \vec{x})$, and the induced metric is
\begin{equation}
\gamma_{ij} = g_{\mu\nu} \frac{\partial y^{\mu}}{\partial \xi^{i}} \frac{\partial y^{\nu}}{\partial \xi^{j}}.
\end{equation}
Since the bulk metric is independent of the coordinate $\phi$, the extremal surface is symmetric with respect to $\phi$. Therefore, it is
convenient to place the tip of the extremal surface at $z_b = z(\phi = 0)$.
Moreover, one should introduce a UV cutoff $z_\mathrm{UV}$ to regularize the area of the extremal surface near the boundary $\phi = \phi_R$, i.e. $z(\phi_R) = 0$. The area of the extremal surface $\mathcal{A}_{\Gamma}$ is then
\begin{eqnarray}
\mathcal{A}_{\Gamma} = L^n\mathcal{A}_{\partial \Sigma} \int_0^{\phi_R - \phi(z_\mathrm{UV})} d\phi \, \mathcal{L}(\dot{z}, z; \phi),
\end{eqnarray}
where $\mathcal{A}_{\partial \Sigma} = 2 W^{n-1}, \, \dot{z} = \partial z/\partial \phi$ and the corresponding Lagrangian is
\begin{equation}
\mathcal{L}(\dot{z}, z; \phi) = \frac1{z^n} \, \sqrt{\frac{\dot{z}^2}{h(z)} + \mathcal{C}(z)}, \qquad \mathcal{C}(z) \equiv \Theta^2 - a^2 h(z) = 1 + a^2 g(z).
\end{equation}
Consequently, the associated Hamiltonian is
\begin{equation} \label{h}
\mathcal{H} = \frac{\partial \mathcal{L}}{\partial \dot{z}} \dot{z} - \mathcal{L} = - \frac{\mathcal{C}(z)}{z^{2n} \mathcal{L}}.
\end{equation}
Since the Hamiltonian does not explicitly depend on $\phi$, $\mathcal{H}$ is a constant of motion along the $\phi$ direction; in other words, there is a translational symmetry along the $\phi$ direction. This symmetry enables us to compute the value of the Hamiltonian by evaluating it at the tip of the minimal surface, $z = z_{b}$, where $\dot z = 0$:
\begin{equation} \label{hs}
\mathcal{H}(z = z_{b}) = - \frac{\sqrt{\mathcal{C}(z_b)}}{z_b^n} < 0.
\end{equation}
From~(\ref{h}), we can obtain the equation of motion (EoM) for $z(\phi)$ to minimize the area $\mathcal{A}_{\Gamma}$
\begin{equation} \label{zdot}
\dot{z} = \mp \frac{\mathcal{C}(z) \sqrt{h(z)} \sqrt{1 - \frac{z^{2n} \mathcal{C}(z_b)}{z_b^{2n} \mathcal{C}(z)}}}{(-\mathcal{H}) z^n}, \qquad \forall \phi \gtrless 0.
\end{equation}
Equation~(\ref{zdot}) shows that the minimal surface is symmetric under $\phi \to -\phi$. Therefore, we can rewrite the area formula as
\begin{eqnarray} \label{area int}
\mathcal{A}_{\Gamma} = L^n\mathcal{A}_{\partial \Sigma} \int_{z_\mathrm{UV}}^{z_b} \frac{1}{z^n \sqrt{h(z)} \sqrt{1 - \frac{z^{2n} \mathcal{C}(z_b)}{z_b^{2n} \mathcal{C}(z)}}} \, dz.
\end{eqnarray}
Also, we can obtain the relation between $\phi_R$ and $z_b$ through
\begin{equation} \label{raphi}
\phi_R= \int_0^{z_b} \frac{(-\mathcal{H}) z^n}{\mathcal{C}(z) \sqrt{h(z)} \sqrt{1 - \frac{z^{2n} \mathcal{C}(z_b)}{z_b^{2n} \mathcal{C}(z)}}} \, dz.
\end{equation}
It is difficult to compute the integral~(\ref{area int}) explicitly. We therefore focus on the two specific limits of small and large $R$.
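Before specializing, one can verify numerically that the first integral works as claimed: substituting $\dot z$ from Eq.~(\ref{zdot}) back into the Lagrangian reproduces the constant Hamiltonian~(\ref{hs}) at every $z$ along the surface. A short sketch with arbitrary sample parameters:

```python
import math

n, zh, q, a = 2, 1.0, 0.2, 0.5            # arbitrary sample parameters
zb = 0.5                                   # tip of the extremal surface

def g(z):
    return z**(n + 1) / zh**(n + 1) + q**2 * z**(n + 1) * (zh**(n - 1) - z**(n - 1))

def h(z):                                  # emblackening function
    return 1.0 - g(z)

def C(z):                                  # script C(z) = 1 + a^2 g(z)
    return 1.0 + a * a * g(z)

H = -math.sqrt(C(zb)) / zb**n              # Hamiltonian at the tip, Eq. (hs)

def zdot(z):                               # Eq. (zdot), branch phi > 0
    root = math.sqrt(1.0 - z**(2 * n) * C(zb) / (zb**(2 * n) * C(z)))
    return -C(z) * math.sqrt(h(z)) * root / ((-H) * z**n)

def lagrangian(z):                         # Lagrangian with zdot substituted
    return math.sqrt(zdot(z)**2 / h(z) + C(z)) / z**n

for z in (0.1, 0.3, 0.45):
    print(z, H, -C(z) / (z**(2 * n) * lagrangian(z)))   # the last two columns agree
```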
\section{\texorpdfstring{Entanglement Entropy in Small $R$ Limit}{Entanglement Entropy in Small R Limit}}\label{sec4}
To evaluate the area of the extremal surface in the small or large $R$ limits, it is convenient to introduce new variables as
\begin{equation} \label{change}
\eta = \frac{z_b}{z_h}, \qquad u = \frac{z}{z_b},
\end{equation}
and then
\begin{equation} \label{bu}
\mathcal{C}(u) = 1 + a^2 g(u), \qquad g(u) = \eta^{n+1} u^{n+1} \left[ 1 + q^2 z_h^{2n} (1 - \eta^{n - 1} u^{n - 1}) \right].
\end{equation}
In the small $R$ limit, the tip of the extremal surface $z_b$ is far away from the position of the horizon $z_h$, that is, $0 < z_b \ll z_h$ or $0 < \eta \ll 1$. Thus we are going to find the leading behavior of the area~(\ref{area int}) and of $\phi_R$~(\ref{raphi}) in the limit $\eta \rightarrow 0$.
\subsection{\texorpdfstring{Relation Between $\phi_R$ and $\eta$}{Relation Between phi R and eta}}
Expanding the integrand of~(\ref{raphi}) for small $\eta$, we get (for $n \ge 2$)
\begin{equation} \label{phi small r beta exp}
\frac{\phi_R}{(-\mathcal{H})} =\int_0^1 \left[ C_{n+1}(u) \eta^{n+1} +\left(C_{2n+2}^{(1)}(u) + C_{2n+2}^{(2)}(u)\right) \eta^{2n+2}+ C_{3n+1}(u) \eta^{3n+1} + O\left( \eta^{3n+3} \right) \right] du,
\end{equation}
where the first two leading orders are
\begin{eqnarray}
&& C_{n+1}(u) = \frac{u^n z_h^{n+1}}{\sqrt{1 - u^{2n}}},
\\
&& C_{2n+2}^{(1)}(u) = \frac{1 - 2 a^2}2 z_h^{n+1} (q^2 z_h^{2n} + 1) \frac{u^{2n+1}}{\sqrt{1 - u^{2n}}}, \label{c60}
\\
&& C_{2n+2}^{(2)}(u) = \frac{a^2}2 z_h^{n+1} (q^2 z_h^{2n} + 1) \frac{u^{3n} - u^{4n+1}}{(1 - u^{2n})^{3/2}}, \label{c61}\\
&& C_{3n+1}(u) =\frac{\left(a^2-1\right) q^2 u^{3 n} z_h^{3 n+1}}{2 \sqrt{1-u^{2 n}}}.\label{c62}
\end{eqnarray}
We employ the Euler integral of the first kind\footnote{The integral form of the beta function is
$$ \int_0^1 x^{\mu - 1} (1 - x^\lambda)^{\nu - 1} \, dx = \frac{B(\mu/\lambda, \nu)}{\lambda}, \quad \forall \; \mu > 0, \; \nu > 0, \; \lambda > 0, $$
with $B\left( \frac{\mu}{\lambda}, \nu \right) = \frac{\Gamma(\mu/\lambda) \Gamma(\nu)}{\Gamma(\mu/\lambda + \nu)}$.} to evaluate the integrals
\begin{eqnarray}
\int_0^1 C_{n+1}(u) \, du &=&\tilde R_0 z_h^{n+1}, \label{c3r}
\\
\int_0^1 C_{2n+2}^{(1)}(u) \, du &=& \left(1 - 2 a^2\right) \left( q^2 z_h^{2n} + 1 \right) \tilde{R}_1 z_h^{n+1} , \label{c61r}
\\
\int_0^1 C_{2n+2}^{(2)}(u) \, du &=& \frac{a^2(n+1)}{2n} (q^2 z_h^{2n} + 1) \left( 4\tilde{R}_1-\tilde{R}_0 \right) z_h^{n+1}, \label{c622r}
\\
\int_0^1 C_{3n+1}(u) \, du &=&\frac{1}{2} \left(a^2-1\right) \frac{n+1}{2n+1}\, q^2 \tilde{R}_0 z_h^{3 n+1}. \label{c63r}
\end{eqnarray}
where $\tilde{R}_0 \equiv \frac{\sqrt{\pi} \, \Gamma\left( \frac12 + \frac1{2 n} \right)}{\Gamma\left( \frac{1}{2 n} \right)}$ and $\tilde{R}_1 \equiv \frac{\sqrt{\pi} \, \Gamma\left( \frac1{n} \right)}{2 n (n + 2)\Gamma\left( \frac12 + \frac1{n} \right)}$. Note that the divergences of the two beta-function-like contributions in (\ref{c622r}) cancel out to give a finite result.
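As a quick numerical sanity check, separate from the derivation (the value $n=2$ is chosen purely for illustration), the footnote's Euler-integral formula and the resulting values of $\tilde{R}_0$ and $\tilde{R}_1$ appearing in (\ref{c3r}) and (\ref{c61r}) can be verified with a short script:

```python
import math

def euler_integral(mu, lam, nu, steps=200_000):
    # Midpoint rule after the substitution x = sin(t)^(2/lam), which maps
    # int_0^1 x^(mu-1) (1-x^lam)^(nu-1) dx to
    # (2/lam) * int_0^{pi/2} sin(t)^(2*mu/lam - 1) * cos(t)^(2*nu - 1) dt
    # and removes the endpoint singularity for the exponents used here.
    a, b = 2.0 * mu / lam - 1.0, 2.0 * nu - 1.0
    h = (math.pi / 2.0) / steps
    s = sum(math.sin((k + 0.5) * h) ** a * math.cos((k + 0.5) * h) ** b
            for k in range(steps))
    return (2.0 / lam) * h * s

n = 2  # sample dimension for the check
R0 = math.sqrt(math.pi) * math.gamma(0.5 + 1 / (2 * n)) / math.gamma(1 / (2 * n))
R1 = math.sqrt(math.pi) * math.gamma(1 / n) / (2 * n * (n + 2) * math.gamma(0.5 + 1 / n))

lhs0 = euler_integral(n + 1, 2 * n, 0.5)      # int_0^1 u^n / sqrt(1-u^{2n}) du, should be R0
lhs1 = euler_integral(2 * n + 2, 2 * n, 0.5)  # int_0^1 u^{2n+1} / sqrt(1-u^{2n}) du, should be 2*R1
print(abs(lhs0 - R0) < 1e-6, abs(lhs1 - 2 * R1) < 1e-6)
```

The substitution tames the inverse-square-root behavior at the upper endpoint, so a plain midpoint rule converges quickly.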
To compute~(\ref{c61}), we expand $(1 - u^{2n})^{-3/2}$ as a binomial series,
\begin{eqnarray}
\int_0^1 \frac{u^{3 n} - u^{4 n + 1}}{(1 - u^{2 n})^{3/2}} du &=& \sum_{j=0}^{\infty} \frac{2 \Gamma\left( j + \frac{3}{2} \right)}{\sqrt{\pi} \, \Gamma(j + 1)} \int_0^1 u^{2 n j + 3 n} (1 - u^{n + 1}) \, du
\nonumber\\
&=& \frac{n+1}{n}\left(4\tilde{R}_1 - \tilde{R}_0\right).
\end{eqnarray}
In addition, the Hamiltonian in the small-$\eta$ limit behaves as
\begin{equation} \label{h small}
\mathcal{H} = - \frac{1}{z_h^n \eta^n} - \frac{a^2}{2 z_h^n} (q^2 z_h^{2n} + 1) \eta + \frac{1}{2} a^2 q^2 z_h^n \eta^n + O(\eta^{n+2}).
\end{equation}
Plugging the results~(\ref{c3r}), (\ref{c61r}), (\ref{c622r}), (\ref{c63r}) and~(\ref{h small}) into~(\ref{phi small r beta exp}), we find $\phi_R$ in the small-$\eta$ limit,
\begin{align} \label{small phi}
\phi_R &= \tilde{R}_0 z_h \eta + \left[\tilde{R}_1+ \frac{2a^2}{n} \left(\tilde{R}_1-\frac{\tilde{R}_0}4\right) \right] (q^2 z_h^{2n} + 1) z_h \eta^{n + 2} \\
&-\left\{\frac{1}{2} a^2 q^2 \tilde{R}_0 z_h^{2 n+1}+\tilde{R}_1 z_h \left[\frac{a^2}{2 n}\left(\frac{(n+1) \tilde{R}_0}{\tilde{R}_1}-4\right)-1\right] \left(q^2 z_h^{2 n}+1\right)\right\} \eta^{2n+1}+O(\eta^{2n+3})\nonumber.
\end{align}
In the Schwarzschild case~\cite{Saha:2019ado}, $\phi_R=z_h\sum_{j=0}^{\infty}\tilde{R}_j\eta^{j(n+1)+1}$ with $\tilde{R}_j=\frac{1}{2n}\frac{\Gamma\left(j+\frac{1}2\right)\Gamma\left(\frac{1}2+\frac{1}{2n}+\frac{j(n+1)}{2n}\right)}{\Gamma\left(j+1\right)\Gamma\left(1+\frac{1}{2n}+\frac{j(n+1)}{2n}\right)}$. It is easy to see the corrections in (\ref{small phi}) due to the rotation parameter $a$ and the charge parameter $q$.
\subsection{Area}
Now we move on to the integral for the area. First, we expand the integrand in the $\eta \to 0$ limit,
\begin{equation} \label{area small r beta exp}
\mathcal{A}_{\Gamma} = L^n\mathcal{A}_{\partial \Sigma} \int_0^1 \left[ \tilde{C}_{-n+1}(u) \eta^{-n+1} + \left(\tilde{C}_2^{(1)}(u) + \tilde{C}_2^{(2)}(u)\right) \eta^2 +\tilde{C}_{n+1}(u) \eta^{n+1}+ O(\eta^{n+3}) \right] \, du,
\end{equation}
where
\begin{eqnarray}
&& \tilde{C}_{-n + 1}(u) = \frac{1}{z_h^{n-1} u^n \sqrt{1 - u^{2n}}}\label{divc1},
\\
&& \tilde{C}_2^{(1)}(u) = \frac{1}{2 z_h^{n-1}} (q^2 z_h^{2n} + 1) \frac{u}{\sqrt{1 - u^{2n}}}, \label{tc60}
\\
&& \tilde{C}_2^{(2)}(u) = \frac{a^2}{2 z_h^{n-1}} (q^2 z_h^{2n} + 1) \frac{u^n - u^{2n+1}}{(1 - u^{2n})^{3/2}}, \label{tc61}
\\
&& \tilde{C}_{n+1}(u) = -\frac{\left(a^2+1\right) q^2 u^n z_h^{n+1}}{2 \sqrt{1-u^{2 n}}}. \label{tc62}
\end{eqnarray}
This is a little different from the integration for $\phi_R$: here the leading-order contribution is divergent, because the integrand blows up at $u = 0$. We introduce the UV cut-off $u_\mathrm{UV} = z_\mathrm{UV}/z_b$ and use the relation
\begin{equation}
\frac1{n - 1} - \int_0^1 \frac1{u^n} \left( \frac1{\sqrt{1 - u^{2 n}}} - 1 \right) du = \frac{\sqrt{\pi} \, \Gamma\left( \frac12 + \frac1{2 n} \right)}{(n - 1) \Gamma\left( \frac1{2 n} \right)}.
\end{equation}
The divergent term comes from~(\ref{divc1}), and we can extract it explicitly,
\begin{eqnarray} \label{tc3r}
z_h^{n-1} \int_0^1 \tilde{C}_{-n + 1}(u) \, du &=& \int_{u_\mathrm{UV}}^1 \frac{du}{u^n} + \int_0^1 \frac{du}{u^n} \left( \frac{1}{\sqrt{1 - u^{2 n}}} - 1 \right)
\nonumber\\
&=& \frac1{n - 1} \left( \frac1{u_\mathrm{UV}} \right)^{n-1} - \frac{\tilde{R}_0}{(n - 1)}.
\end{eqnarray}
The remaining integrals are similar to those in the calculation of $\phi_R$, and it is easy to obtain
\begin{eqnarray}
\int_0^1 \tilde{C}_2^{(1)}(u) \, du &=& \frac{q^2 z_h^{2 n} + 1}{2 z_h^{n - 1}} (n+2)\tilde{R}_1 , \label{tc61r}
\\
\int_0^1 \tilde{C}_2^{(2)}(u) \, du &=& \frac{a^2}{2n z_h^{n-1}} (q^2 z_h^{2n} + 1) \left(2(n+2)\tilde{R}_1-\tilde{R}_0 \right), \label{tc622r}
\\
\int_0^1 \tilde{C}_{n+1}(u) \, du &=& -\frac{1}{2} \left(a^2+1\right)q^2 \tilde{R}_0 z_h^{n+1}. \label{tc63r}
\end{eqnarray}
Plugging the results~(\ref{tc3r}), (\ref{tc61r}), (\ref{tc622r}) and~(\ref{tc63r}) into~(\ref{area small r beta exp}), we obtain the area of the extremal surface in terms of $\eta$ in the small $R$ limit. Since~(\ref{small phi}) gives us the relation between $\eta$ and $\phi_R$, we can write down the entanglement entropy in terms of $\phi_R$,
\begin{align} \label{ee in small}
&\delta\mathcal{S}_{\Sigma}\equiv \mathcal{S}_{\Sigma}-\mathcal{S}_{\Sigma(0)}= \frac{L^n\mathcal{A}_{\partial \Sigma}}{4G_N}m\left(\frac{n}{2}+a^2 \right) \frac{\tilde{R}_1}{\tilde{R}_0^2} \phi_R^2\\
&+\frac{L^n\mathcal{A}_{\partial \Sigma}}{4G_N}\left\{m z_h^{1-n} \left[a^2 \left(\frac{(n+1) \tilde{R}_0}{\tilde{R}_1}-4\right)-2 n\right]+n q^2 \left[a^2 \left(\frac{\tilde{R}_0}{\tilde{R}_1}-1\right)-1\right]\right\} \frac{\tilde{R}_1\phi_R^{n+1}}{2 n \tilde{R}_0^{n+1}}+O(\phi_R^{n+3})\nonumber
\end{align}
where $4G_N\mathcal{S}_{\Sigma(0)}/(L^n\mathcal{A}_{\partial \Sigma})=\frac{1}{(n-1)z_\mathrm{UV}^{n - 1}} - \frac{\tilde{R}_0^{n}}{(n - 1) \phi_R^{n-1}}$ is the area of the extremal surface in vacuum. The result recovers previous studies~\cite{Bhattacharya:2012mi,Fischler:2013, Chaturvedi:2016kbk, Karar:2018ecr, Saha:2019ado,Kundu:2016,Nadi:2019bqu}. Because AdS (in vacuum) is a maximally symmetric spacetime, the spacetime geometry felt by any inertial observer is the same. Accordingly, no rotation parameter appears in $\mathcal{S}_{\Sigma(0)}$, and the rotation effect drops out of the metric tensor (\ref{metric}) when $h\rightarrow 1$. Apart from this, if we restore the radius of the cylinder $\ell$, we find that the leading behavior is $\delta\mathcal{S}_{\Sigma} \propto \ell^2{\phi_R}^2$. The reason the leading term is quadratic in $\phi_R$ is that, once we recognize that the entanglement entropy $\delta\mathcal{S}_{\Sigma}$ should be proportional to the area of $\partial\Sigma$ and to the energy density $\mathcal{E}$ (recall $\mathcal{E}\propto m$ in (\ref{energy density})), dimensional analysis forces $\delta\mathcal{S}_{\Sigma}$ to be proportional to ${\phi_R}^2$. The charge parameter appears only at subleading order because $2n>n+1$ for $n\geq2$. The same behavior occurs in the entanglement wedge cross section (EWCS) of the charged black hole~\cite{Velni:2020,Sahraei:2020}. Note that~(\ref{ee in small}) is valid for both the extremal and the non-extremal case.
\section{\texorpdfstring{Entanglement Entropy in Large $R$ Limit}{Entanglement Entropy in Large R Limit}}\label{sec5}
In the large $R$ limit, $\eta \to 1$, the tip of the extremal surface lies close to the event horizon, namely
\begin{equation}\label{large_e}
z_b = z_h (1 - \epsilon), \qquad \text{for $0 < \epsilon \ll 1$}.
\end{equation}
Moreover, we expect that in the large $R$ limit the leading term of the ``regularized'' extremal surface area, i.e.~after removing the UV divergence, is approximately the area of the event horizon; therefore we have
\begin{equation} \label{event}
\mathcal{A}_\Gamma \Big|_\mathrm{reg.} \simeq \frac{\Theta \, \phi_R}{z_h} \frac{L^n\mathcal{A}_{\partial \Sigma}}{z_h^{n - 1}} \quad \Rightarrow \quad \frac{\mathcal{A}_{\Gamma}}{L^n\mathcal{A}_{\partial \Sigma}} - \frac{1}{\tilde{z}_\mathrm{UV}^{n-1}} \simeq \frac{\Theta \, \phi_R}{z_h^n}.
\end{equation}
Our goal is to find the leading behavior of~(\ref{area int}) and~(\ref{raphi}) in the large $R$ expansion, assuming the corresponding integrals are dominated by the region $z \simeq z_b$ in the large $R$ limit. We can carry out the integration after expanding the integrand in the IR region. This approximation is valid for finding the leading term, which can be justified by checking the results in the small $a$ and $q$ limits. In this thesis, we only show how to deal with the integral~(\ref{raphi}) in the large $R$ limit; it is easy to verify~(\ref{event}) by applying the same method to the integral~(\ref{area int}).
Observe that near the horizon $z = z_h$ the leading term of the emblackening factor $h(z)$ is proportional to the black-brane temperature via~(\ref{t}) in the non-extremal case, and to the second derivative of $h(z)$ when the black brane is extremal, i.e.
\begin{equation}\label{h_ir}
h(z) \simeq p z_h^q (1 -\eta u)^q~~~\text{where}~~~\begin{cases}
q=1,~~p= 4 \pi \Theta T& \text{at $T\neq 0$}\\
q=2,~~p= \frac{n(n+1)}{z_h^2}& \text{at $T=0$}
\end{cases}\,.
\end{equation}
Here we perform the coordinate transformation (\ref{change}). Recognizing that the main contribution to the integral comes from the IR region, we find the leading behavior by expanding the integrand at $u= 1$ and then carrying out the integration. More precisely, we have
\begin{equation}\label{b_ir}
\mathcal{C}(u) = \mathcal{C}(1) + O(u-1) = \left[\Theta^2 + O(u-1)\right]+ O(\epsilon), \quad \mathcal{C}(1) = \Theta^2 + O(\epsilon),
\end{equation}
and
\begin{equation}\label{main structure}
1 - \frac{z^{2n} \mathcal{C}(z_b)}{z_b^{2n} \mathcal{C}(z)}=1-u^{2n}\frac{\mathcal{C}(1)}{\mathcal{C}(u)} = \left\{r\left(1-u\right)+O\left[\left(u-1\right)^2\right]\right\}+ O(\epsilon)
\end{equation}
where
\begin{equation}\label{r_ir}
r=2 n + \frac{a^2 \partial_uh(u)}{\Theta^2}= 2 n - 4\pi z_h T\frac{a^2}{\Theta}+O(\epsilon)\,.
\end{equation}
Note that we have ignored the terms of order $O(\epsilon)$ in (\ref{main structure}). One can check, by keeping the $O(\epsilon)$ terms in (\ref{main structure}) and integrating them, that they do not contribute to the leading behavior of $\phi_R$. With this understanding, via (\ref{h_ir}), (\ref{b_ir}), (\ref{main structure}) and (\ref{r_ir}), we obtain
\begin{equation}\label{int_r_large_r1}
\phi_R\simeq \frac{z_b}{\sqrt{prz_h^q}\Theta}\int_0^1\frac{1}{(1-\eta u)^{\frac{q}{2}}\sqrt{1-u}}\,du\,.
\end{equation}
To deal with the integral (\ref{int_r_large_r1}), we apply the binomial identity
\begin{equation}
\frac{1}{(1-\eta u)^{\frac{q}{2}}}=\sum _{j=0}^{\infty } \binom{-\frac{q}{2}}{j}(-1)^j \eta ^j u^j\,,
\end{equation}
and then the integral~(\ref{int_r_large_r1}) becomes an Euler integral,
\begin{equation}\label{r_ir_sum}
\phi_R\simeq \frac{z_h}{\sqrt{prz_h^q}\Theta}\sum _{j=0}^{\infty } \binom{-\frac{q}{2}}{j} \frac{\sqrt{\pi } \Gamma (j+1)}{\Gamma \left(j+\frac{3}{2}\right)}(-1)^j \eta ^j =\frac{2z_h}{\sqrt{prz_h^q}\Theta}\, _2F_1\left(1,\frac{q}{2};\frac{3}{2};\eta \right)\,.
\end{equation}
The summation in (\ref{r_ir_sum}) is exactly the hypergeometric function.\footnote{The hypergeometric function is defined by $$\,_2F_1(a,b;c;z)\equiv\sum _{j=0}^{\infty } \frac{ \left(a\right)_j \left(b\right)_j}{\left(c\right)_j}\frac{z^j}{j!},$$ where $$\left(q\right)_j\equiv\frac{\Gamma(q+j)}{\Gamma(q)}.$$} Hence, for finite temperature, (\ref{r_ir_sum}) becomes
\begin{equation} \label{true phi}
\phi_R \simeq - \frac{z_h \, \ln\epsilon}{\sqrt{8 \pi n \Theta z_h \, T}} \tilde{f}(a, q),
\end{equation}
where
\begin{equation}
\tilde{f}(a, q) \equiv \frac{\sqrt{2 n}}{\sqrt{2 n + a^2(n - 1) (1 + q^2 z_h^{2n})}}\simeq \frac{\sqrt{2 n}}{\sqrt{2 n - 4\pi z_h T\frac{a^2}{\Theta}}}.
\end{equation}
Note that the rotation-charge coupling function $\tilde{f}(a, q)$ satisfies $\tilde{f}(0, q) = 1$.
For zero temperature, we conclude that
\begin{equation} \label{ext}
\phi_R \simeq \frac{\pi z_h}{n \Theta \sqrt{2 (n + 1) \epsilon}}.
\end{equation}
Here we have used (\ref{large_e}) to convert $\eta$ to $\epsilon$. It is easy to check that our results (\ref{true phi}) and (\ref{ext}) are consistent with~\cite{Fischler:2013,Liu:2013una, Chaturvedi:2016kbk,Kundu:2016, Karar:2018ecr}.
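As an illustrative numerical check (the values of $\eta$ and $\epsilon$ below are arbitrary choices), the resummation (\ref{r_ir_sum}) and the $-\ln\epsilon$ behavior used in (\ref{true phi}) can be tested for the non-extremal exponent $q=1$ of (\ref{h_ir}), where $\,_2F_1(1,\tfrac12;\tfrac32;\eta) = \operatorname{artanh}(\sqrt{\eta})/\sqrt{\eta}$:

```python
import math

def series_sum(q_exp, eta, terms=4000):
    # Partial sum of sum_j binom(-q/2,j) (-1)^j eta^j sqrt(pi) Gamma(j+1)/Gamma(j+3/2),
    # accumulated via the term ratio to avoid overflowing Gamma for large j.
    total, term = 0.0, 2.0              # j = 0 term: sqrt(pi)/Gamma(3/2) = 2
    for j in range(terms):
        total += term
        term *= (q_exp / 2.0 + j) * eta / (j + 1.5)
    return total

eta = 0.9                               # illustrative value, 0 < eta < 1
closed = 2.0 * math.atanh(math.sqrt(eta)) / math.sqrt(eta)  # 2 * 2F1(1,1/2;3/2;eta)
ok_sum = abs(series_sum(1, eta) - closed) < 1e-8

eps = 1e-8                              # eta = 1 - eps: 2 * 2F1 ~ -ln(eps) up to O(1)
hyp = 2.0 * math.atanh(math.sqrt(1 - eps)) / math.sqrt(1 - eps)
ok_log = abs(hyp / (-math.log(eps)) - 1.0) < 0.1
print(ok_sum, ok_log)
```

The first check confirms the binomial resummation into $\,_2F_1$; the second confirms the logarithmic divergence as $\eta\to 1$ that produces the $-\ln\epsilon$ factor.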
\section{Conclusions and Outlooks}\label{sec6}
We derived the holographic entanglement entropy in the small and large $R$ limits for Kerr-Newman-AdS black holes, taking the shape of the subsystem to be a strip. After regularization (removing the UV divergent term), we find that in the small $R$ limit (\ref{ee in small}) the effects of the rotation parameter $a$ and the mass parameter $m$ always enter at quadratic order in $\phi_R$, while in the large $R$ limit $q$ and $m$ are absorbed into the position of the horizon $z_h$; recall that $\Theta$ is related to the rotation parameter $a$. In fact, this is due to the length contraction of the event horizon in the $\phi$ direction under the Lorentz boost, that is, $\phi_R\rightarrow \phi'_R=\Theta \phi_R$. Note that both results, (\ref{ee in small}) and (\ref{event}), exhibit the extensive property of entanglement entropy.
The entanglement entropy is a measure that quantifies how much entanglement exists between $\Sigma$ and $\Sigma^c$. For a subsystem in a pure state there is no uncertainty: we never lose any information, and the von Neumann entropy vanishes. Moreover, before the observation, we can regard the entanglement entropy $\mathcal{S}_\Sigma$ as the information about $\Sigma^c$ that is lost to an observer standing in $\Sigma$, who has no access to $\Sigma^c$. This indicates that, in a non-vacuum state, two observers standing in $\Sigma$ but with different angular velocities will suffer different information losses from $\Sigma^c$. Our results indicate that the equilibrium state depends only on the thermodynamic variables of the grand canonical ensemble $(\phi_R,\mathcal{E},\mathcal{Q},\mathcal{J})$.
In the future, it may be interesting to consider a time-dependent background, modeled by in-falling dust collapsing to form a Kerr-Newman-AdS black hole. The time-dependent generalization of the holographic proposal was developed in~\cite{Hubeny:2007xt}, and there are several examples of time-dependent cases~\cite{Liu:2013iza, Liu:2013qca, Albash:2010mv,Alishahiha:2014, Fonda:2014ula,Sun:2021}.
\begin{acknowledgments}
I would like to thank Professor Chiang-Mei Chen for the valuable discussion. The work was supported in part by the Ministry of Science and Technology, Taiwan.
\end{acknowledgments}
Variational Inference (VI) \citep{Jordan1999, wainwright_jordan_2014} is a powerful method to approximate intractable integrals. As an alternative strategy to Markov chain Monte Carlo (MCMC) sampling, VI is fast, relatively straightforward for monitoring convergence and typically easier to scale to large data \citep{bleivi} than MCMC. The key idea of VI is to approximate difficult-to-compute conditional densities of latent variables, given observations, via use of optimization. A family of distributions is assumed for the latent variables, as an approximation to the exact conditional distribution. VI aims at finding the member, amongst the selected family, that minimizes the Kullback-Leibler (KL) divergence from the conditional law of interest.
Let $x$ and $z$ denote, respectively, the observed data and latent variables. The goal of the inference problem is to identify the conditional density (assuming a relevant reference measure, e.g.~Lebesgue) of latent variables given observations, i.e. $p(z| x)$.
Let $\mathcal{L}$ denote a family of densities defined over the space of latent variables -- we denote members of this family as $q=q(z)$ below. The goal of VI is to find the element of the family closest in KL divergence to the true $p(z| x)$. Thus, the original inference problem can be rewritten as an optimization one: identify $q^*$ such that
\begin{equation}
\label{eq:min}
q^* = \operatornamewithlimits{argmin}\limits_{q\in \mathcal{L}}\textrm{KL}(q\mid p(\cdot| x))
\end{equation}
for the KL-divergence defined as
\begin{align*}
\textrm{KL}(q\mid p(\cdot | x)) &= \mathbb{E}_{q}[\log q(z)] - \mathbb{E}_q[\log p(z| x)] \\ &= \mathbb{E}_q[\log q(z)] - \mathbb{E}_q[\log p(z,x)] + \log p(x),
\end{align*}
with $\log p(x)$ being constant w.r.t.~$z$. Notation $\mathbb{E}_q$ refers to expectation taken over $z\sim q$. Thus, minimizing the KL divergence is equivalent to maximising the evidence lower bound, ELBO$(q)$, given by
\begin{equation}
\textrm{ELBO}(q) = \mathbb{E}_q[\log p(z,x)] - \mathbb{E}_q[\log q(z)].
\label{elbo1}
\end{equation}
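To make the identity $\textrm{ELBO}(q) = \log p(x) - \textrm{KL}(q\mid p(\cdot|x))$ concrete, here is a small sketch on a toy conjugate-Gaussian model (the model, the single observation and the variational parameters are all illustrative choices), estimating the ELBO by plain Monte Carlo and checking the identity against the closed-form evidence:

```python
import math, random
random.seed(0)

x = 1.3                                   # a single toy observation

def log_p_joint(z):                       # log p(z,x) for z ~ N(0,1), x|z ~ N(z,1)
    return -0.5 * z * z - 0.5 * (x - z) ** 2 - math.log(2 * math.pi)

def log_q(z, m, s):                       # variational density q = N(m, s^2)
    return -0.5 * ((z - m) / s) ** 2 - math.log(s) - 0.5 * math.log(2 * math.pi)

def elbo_mc(m, s, N=200_000):             # plain Monte Carlo estimate of the ELBO
    acc = 0.0
    for _ in range(N):
        z = random.gauss(m, s)
        acc += log_p_joint(z) - log_q(z, m, s)
    return acc / N

# For this model: p(z|x) = N(x/2, 1/2) and log p(x) = log N(x; 0, 2).
log_evidence = -0.25 * x * x - 0.5 * math.log(4 * math.pi)
m, s = 0.3, 1.1                           # an arbitrary, suboptimal member of the family
kl = math.log(math.sqrt(0.5) / s) + (s ** 2 + (m - x / 2) ** 2) - 0.5
elbo_hat = elbo_mc(m, s)
print(abs(elbo_hat + kl - log_evidence) < 0.02)   # ELBO(q) + KL = log p(x)
```

Since $\log p(x)$ is fixed, any increase in the estimated ELBO corresponds to a decrease in the KL divergence, which is the sense in which the two optimization problems coincide.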
Let $\mathsf{S}_p\subseteq \mathbb{R}^{m}$, $m\ge 1$, denote the support of the target $p(z|x)$, and
$\mathsf{S}_{q}\subseteq \mathbb{R}^{m}$ the support of a variational density $q\in\mathcal{L}$ -- assumed to be common over all members
$q\in\mathcal{L}$. Necessarily, $\mathsf{S}_p\subseteq \mathsf{S}_q$, otherwise the KL-divergence will diverge to $+\infty$.
Many VI algorithms focus on the mean-field variational family, where variational densities in $\mathcal{L}$ are assumed to factorise over blocks of $z$. That is,
\begin{equation}
\label{eq:meanfield}
q(z) = \prod_{i=1}^b q_i(z_i),\quad \mathsf{S}_q = \mathsf{S}_{q_{1}}\times \cdots \times \mathsf{S}_{q_{b}}, \quad
z=(z_1,\ldots, z_{b})\in \mathsf{S}_q, \,\,\,z_i\in \mathsf{S}_{q_{i}},
\end{equation}
for individual supports $\mathsf{S}_{q_{i}}\subseteq\mathbb{R}^{m_i}$, $m_i\ge 1$, $1\le i\le b$, for some $b\ge 1$, and $\sum_{i}m_i =m$.
It is advisable that highly correlated latent variables are placed in the same block to improve the
performance of the VI method.
There are, in general, two types of approaches to maximise ELBO in VI: a co-ordinate ascent approach and a gradient-based one. Co-ordinate ascent VI (CAVI) \citep{bishop_2006} is amongst the most commonly used algorithms in this context. To obtain a local maximiser for ELBO, CAVI sequentially optimizes each factor of the mean-field variational density, while holding the others fixed. Analytical
calculations on function space -- involving variational derivatives -- imply that,
for given fixed $q_1,\ldots, q_{i-1},q_{i+1},\ldots, q_b$,
ELBO$(q)$ is maximised for
\begin{equation}
\label{eq:recursion}
q_i(z_i)\propto \exp\big\{\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]\big\},
\end{equation}
\noindent
where $z_{-i}:=(z_{i_-},z_{i_+})$ denotes vector $z$ having removed component $z_i$,
with ${i_-}$ (resp.~${i_+}$) denoting the ordered indices that are smaller (resp.~larger) than~$i$; $\mathbb{E}_{-i}$ is the expectation taken under $z_{-i}$ following its variational distribution, denoted $q_{-i}$.
The above suggest immediately an iterative algorithm,
guaranteed to provide values for ELBO$(q)$ that cannot decrease as the updates are carried out.
The expected value
$\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]$ can be difficult to derive analytically.
Also, CAVI typically requires traversing the entire dataset at each iteration, which can be overly computationally expensive for large datasets.
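The update (\ref{eq:recursion}) takes a simple closed form in the textbook case of a bivariate Gaussian target with unit marginal variances and correlation $\rho$: each factor update is the Gaussian conditional evaluated at the other block's current variational mean. A minimal sketch (parameter values are arbitrary), which also displays the well-known variance underestimation of the mean-field optimum:

```python
# CAVI for a toy bivariate Gaussian target p(z) = N((mu1, mu2), Sigma) with
# unit marginal variances and correlation rho. The coordinate updates are the
# Gaussian conditionals at the other factor's current mean, and the fixed-point
# variational variances are 1 - rho^2 (< 1: the mean-field underestimation of
# the marginal variance).
mu1, mu2, rho = 1.0, -2.0, 0.8        # illustrative parameter choices
m1, m2 = 0.0, 0.0                     # initial variational means
for _ in range(50):                   # CAVI sweeps; the error contracts by rho^2
    m1 = mu1 + rho * (m2 - mu2)       # update q_1 holding q_2 fixed
    m2 = mu2 + rho * (m1 - mu1)       # update q_2 holding q_1 fixed
v = 1.0 - rho ** 2                    # variational variance at the fixed point
print(abs(m1 - mu1) < 1e-6, abs(m2 - mu2) < 1e-6, v < 1.0)
```

Each sweep can only increase the ELBO, consistent with the monotonicity noted above, and the means converge geometrically to the true marginal means.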
Gradient-based approaches, which can potentially scale up to large data -- alluding here to recent Stochastic-Gradient-type methods -- can be an effective alternative for ELBO optimisation. However, such algorithms
have their own challenges, e.g.
in the case of reparameterization Variational Bayes (VB), analytical derivation of gradients of the log-likelihood can often be problematic, while in the case of score-function VB the requirement of the gradient of $\log q$ restricts the range of the family $\mathcal{L}$ we can choose from.
In real-world applications, hybrid methods combining Monte Carlo with recursive algorithms are common, e.g., Auto-Encoding Variational Bayes,
Doubly-Stochastic Variational Bayes for non-conjugate inference, Stochastic Expectation-Maximization (EM) \citep{Beaumont2025, Sisson1760, mcem}. In VI, Monte Carlo is often used to estimate the expectation within CAVI or the gradient within derivative-driven methods.
This is the case, e.g., for Stochastic VI \citep{svi} and Black-Box VI (BBVI) \citep{bbvi}.
BBVI is used in this work as a representative of gradient-based VI algorithms. It allows carrying out VI over a wide range of complex models. The variational density $q$ is typically chosen within a parametric family, so finding $q^*$
in~(\ref{eq:min}) is equivalent to determining an optimal set of parameters that characterize $q_i=q_i(\cdot|\lambda_i)$, $\lambda_{i}\in \Lambda_i\subseteq \mathbb{R}^{d_i}$, $1\le d_i$, $1\le i\le b$, with $\sum_{i=1}^{b}d_i=d$. The gradient of ELBO w.r.t.~the variational parameters $\lambda=(\lambda_1,\ldots,\lambda_b)$ equals
\begin{equation}
\label{eq:mainBB}
\nabla_{\lambda} \textrm{ELBO}(q) := \mathbb{E}_q\big[\nabla_{\lambda}\log q(z| \lambda)\{\log p(z,x)-\log q(z| \lambda)\}\big]
\end{equation}
and can be approximated by black-box Monte Carlo estimators as, e.g.,
\begin{equation}
\label{eq:BBVIest}
\widehat{\nabla_{\lambda}\textrm{ELBO}(q)} := \tfrac{1}{N}\sum^N_{n=1}\big[\nabla_{\lambda}\log q(z^{(n)}| \lambda)\{\log p(z^{(n)},x)-\log q(z^{(n)}|\lambda)\}\big],
\end{equation}
with $z^{(n)} \stackrel{iid}{\sim} q(z| \lambda)$, $1\le n\le N$, $N\ge 1$. The approximated gradient of ELBO can then be used within a stochastic optimization procedure to update $\lambda$ at the $k$th iteration with
\begin{equation}
\lambda_{k+1} \leftarrow \lambda_k + \rho_k \widehat{\nabla_{\lambda_k}\textrm{ELBO}(q)},
\label{eq:BBVIit}
\end{equation}
where $\{\rho_k\}_{k\ge 0}$ is a Robbins-Monro-type step-size sequence \citep{robbins1951}.
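A minimal sketch of the estimator (\ref{eq:BBVIest}) and the update (\ref{eq:BBVIit}) on a toy model -- a Gaussian prior and likelihood whose posterior $N(x/2, 1/2)$ is known in closed form; the observation, the step-size schedule and the sample sizes are illustrative choices:

```python
import math, random
random.seed(1)

x = 1.3                                      # toy observation (illustrative)

def log_p_joint(z):                          # z ~ N(0,1), x|z ~ N(z,1)
    return -0.5 * z * z - 0.5 * (x - z) ** 2 - math.log(2 * math.pi)

m, log_s = 0.0, 0.0                          # variational parameters lambda = (m, log s)
N = 100                                      # Monte Carlo samples per iteration
for k in range(2000):
    s = math.exp(log_s)
    g_m = g_ls = 0.0
    for _ in range(N):
        z = random.gauss(m, s)
        log_q = -0.5 * ((z - m) / s) ** 2 - log_s - 0.5 * math.log(2 * math.pi)
        w = log_p_joint(z) - log_q           # bracketed term of the score estimator
        g_m += (z - m) / s ** 2 * w          # score of log q w.r.t. m
        g_ls += (((z - m) / s) ** 2 - 1) * w # score of log q w.r.t. log s
    rho = 0.1 / (1 + k) ** 0.7               # Robbins-Monro step-size sequence
    m += rho * g_m / N
    log_s += rho * g_ls / N

# the exact posterior for this model is N(x/2, 1/2)
print(abs(m - x / 2) < 0.1, abs(math.exp(log_s) - math.sqrt(0.5)) < 0.1)
```

Parameterizing the scale through $\log s$ keeps the iterates inside the valid parameter space without any projection step; only evaluations of $\log p(z,x)$ and the score of $\log q$ are needed, which is the ``black-box'' property.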
As we will see in later sections, BBVI is accompanied by generic
variance reduction methods, as the variability of (\ref{eq:BBVIest})
for complex models can be large.
\begin{remark}[Hard Constraints]
\label{rem:issue}
Though gradient-based VI methods are sometimes more straightforward to apply than co-ordinate ascent ones -- e.g.~combined with the use of modern approaches for automatic differentiation \citep{advi} -- co-ordinate ascent methods can still be important for models with \emph{hard constraints}, where gradient-based algorithms are laborious to apply. (We adopt the viewpoint here that one chooses variational densities that respect the constraints of the target, for improved accuracy.) Indeed, notice in the brief description we have given above for CAVI and BBVI that the two methodologies are structurally different, as CAVI does not necessarily need to be built up via the introduction of an exogenous variational parameter $\lambda$. Thus, in the context of a support for the target $p(z|x)$ that involves complex constraints,
a CAVI approach overcomes this issue naturally by blocking together the $z_i$'s responsible for the constraints. In contrast, introduction of the variational parameter $\lambda$ creates sometimes severe
complications in the development of the derivative-driven algorithm, as normalising constants that depend on $\lambda$ are extremely difficult to calculate analytically, let alone differentiate. Thus, a main argument spanning this work -- and illustrated within it --
is that co-ordinate-ascent-based VI methods have a critical role to play amongst VI approaches for important classes of statistical models.
\end{remark}
The main contributions of the paper are:
\begin{itemize}
\item[(i)]
We discuss, and then apply a Monte Carlo CAVI (MC-CAVI) algorithm in a sequence of problems of increasing complexity, and study its performance. As the name suggests, MC-CAVI
uses the Monte Carlo principle for the approximation of the difficult-to-compute conditional expectations, $\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]$, within CAVI.
\item[(ii)]
We provide a justification for the algorithm by showing analytically that, under suitable regularity conditions, MC-CAVI will get arbitrarily close to a maximiser of the ELBO with high probability.
\item[(iii)] We contrast MC-CAVI with MCMC and BBVI through simulated and real examples, some of which involve hard constraints; we demonstrate MC-CAVI's effectiveness in an important application imposing such hard constraints, with real data
in the context of Nuclear Magnetic Resonance (NMR) spectroscopy.
\end{itemize}
\begin{remark}
Inserting Monte Carlo steps within a VI approach (that might use a mean field or another approximation) is not uncommon in the VI literature. E.g.,
\cite{forb:07} employ an MCMC procedure in the context of a Variational EM (VEM), to obtain estimates of the normalizing constant for Markov Random Fields -- they provide asymptotic results for the correctness of the complete algorithm;
\cite{tran:16} apply Mean-Field Variational Bayes (VB)
for Generalised Linear Mixed Models, and use Monte Carlo
for the approximation of analytically intractable required expectations under the variational densities;
several references for related works are given in the above papers.
Our work focuses on MC-CAVI, and develops theory that is appropriate for this VI method. This algorithm has \emph{not} been studied analytically in the literature, thus the development of its theoretical justification -- even if it borrows elements from Monte Carlo EM -- is new.
\end{remark}
The rest of the paper is organised as follows.
Section \ref{sec:MCCAVI} presents briefly the MC-CAVI algorithm. It also provides -- in a specified setting -- an analytical result illustrating non-accumulation of Monte Carlo errors in the execution of the recursions of the algorithm. That is, with probability arbitrarily close to 1, the variational solution provided by MC-CAVI can be as close as required to that of CAVI, for a large enough Monte Carlo sample size, regardless of the number of algorithmic iterations.
Section \ref{sec:numerics} shows two numerical examples, contrasting MC-CAVI with alternative algorithms.
Section \ref{sec:nmr} presents an implementation of MC-CAVI in a real, complex, challenging posterior distribution arising in metabolomics. This is a practical application, involving hard constraints, chosen to illustrate the potential of MC-CAVI in this context. We finish with some conclusions in Section \ref{sec:discussion}.
\section{MC-CAVI Algorithm}
\label{sec:MCCAVI}
\subsection{Description of the Algorithm}
\label{subsec:CAVI}
We begin with a description of the basic CAVI algorithm.
A double subscript will be used to identify block variational densities: $q_{i,k}(z_i)$ (resp.~$q_{-i,k}(z_{-i})$) will refer to the density of the $i$th block (resp.~all blocks but the $i$th), after $k$ updates have been carried out on that block density (resp.~$k$ updates have been carried out on the blocks preceding the $i$th, and $k-1$ updates on the blocks following the $i$th).
\begin{itemize}
\item Step 0: Initialize probability density functions $q_{i,0}(z_i)$, $i=1,\ldots, b$.
\item Step $k$:
For $k\ge 1$, given $q_{i,k-1}(z_i)$, $i=1,\ldots, b$,
execute:
\begin{itemize}
\item For $i=1,\ldots, b$, update:
\begin{align*}
\log q_{i,k}(z_i) = const. + \mathbb{E}_{-i,k}[\log p(z,x)],
\end{align*}
with $\mathbb{E}_{-i,k}$ taken
w.r.t.~$z_{-i}\sim q_{-i,k}$.
\end{itemize}
\item Iterate until convergence.
\end{itemize}
\noindent
Assume that the expectations $\mathbb{E}_{-i}[\log p(z,x)]$, $\{i:i\in\mathcal{I}\}$, for an index set $\mathcal{I}\subseteq\{1,\ldots, b\}$,
can be obtained analytically, over all updates of the variational density $q(z)$; and that this is not the case for $i\notin\mathcal{I}$. Intractable integrals can be approximated via a Monte Carlo method. (As we will see in the applications in the sequel, such a Monte Carlo device typically uses samples from an appropriate MCMC algorithm.)
In particular, for $i\notin \mathcal{I}$, one obtains $N\ge 1$ samples from the current $q_{-i}(z_{-i})$ and uses the standard Monte Carlo estimate
\begin{equation*}
\widehat{\mathbb{E}}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]
= \frac{\sum_{n=1}^{N} \log p(z_{i_-}^{(n)},z_{i},z_{i_+}^{(n)},x)}{N}.
\end{equation*}
%
Implementation of such an approach gives rise to MC-CAVI,
described in Algorithm~\ref{MC-CAVI}.
\begin{algorithm}[!h]
\SetAlgoLined
\vspace{0.2cm}
\SetKwInOut{Input}{Require}
\SetKw{KwBy}{by}
\Input{Number of iterations $T$.\vspace{0.2cm}}
\Input{Number of Monte Carlo samples $N$.\vspace{0.2cm}}
\Input{$\mathbb{E}_{-i} [\log p(z_{i_-},z_i, z_{i_+},x)]$ in closed form, for $i\in \mathcal{I}$.\vspace{0.2cm}}
Initialize $q_{i,0}(z_i)$, $i=1,\ldots, b$.\vspace{0.2cm} \\
\For{$k= 1:T$\vspace{0.2cm} }{
\For{$i=1:b$\vspace{0.2cm} }{
If $i\in\mathcal{I}$, set
$q_{i,k}(z_i) \propto \exp \big\{ \mathbb{E}_{-i,k}[\log p(z{_{i_-}},z_i, z{_{i_+}},x)] \big\} $ \vspace{0.2cm} \;
If $i\notin\mathcal{I}$:\\
Obtain $N$ samples, $(z_{i_{-},k}^{(n)},z_{i_{+},k-1}^{(n)})$, $1\le n \le N$, from
%
$q_{-i,k}(z_{-i})$.
\\
Set $$q_{i,k}(z_i) \propto \exp \big\{ \tfrac{\sum_{n=1}^{N} \textrm{log}\; p(z_{i_-,k}^{(n)},z_{i},z_{i_+,k-1}^{(n)},x)}{N} \big\}.$$
%
}}
\caption{MC-CAVI}\label{MC-CAVI}
\end{algorithm}
\hfill \\
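A minimal sketch of Algorithm~\ref{MC-CAVI} on a toy bivariate Gaussian target with unit marginal variances and correlation $\rho$, for which the exact CAVI updates are the Gaussian conditional-mean recursions with variance $1-\rho^2$. Purely for illustration, the expectation needed by the first block is treated as intractable and replaced by a Monte Carlo average over draws from the current second factor, so $\mathcal{I}=\{2\}$; all numerical values are arbitrary:

```python
import math, random
random.seed(2)

# Toy target: bivariate Gaussian, unit marginal variances, correlation rho,
# mean (mu1, mu2). Exact CAVI: m1 <- mu1 + rho (m2 - mu2),
# m2 <- mu2 + rho (m1 - mu1), with fixed variational variance 1 - rho^2.
# We pretend E_{q_2}[z_2] is intractable and estimate it from N draws of q_2.
mu1, mu2, rho, N = 1.0, -2.0, 0.8, 2000
v = 1.0 - rho ** 2                        # conditional variance (kept analytic)
m1, m2 = 0.0, 0.0
for _ in range(100):                      # T = 100 MC-CAVI iterations
    mc_m2 = sum(random.gauss(m2, math.sqrt(v)) for _ in range(N)) / N
    m1 = mu1 + rho * (mc_m2 - mu2)        # Monte Carlo update of q_1 (i not in I)
    m2 = mu2 + rho * (m1 - mu1)           # analytic update of q_2 (i in I)
print(abs(m1 - mu1) < 0.1, abs(m2 - mu2) < 0.1)
```

The iterates fluctuate around the CAVI fixed point with a spread controlled by $N$, which is the behavior the theoretical result of this section formalizes.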
\subsection{Applicability of MC-CAVI}
We discuss here the class of problems for which MC-CAVI can be applied.
It is desirable to avoid settings where the number of samples or statistics to be stored
in memory increases with the iterations of the algorithm.
To set up the ideas, we begin with CAVI itself. Motivated by the
standard exponential family of distributions, we work as follows.
Consider the case when the target density $p(z,x)\equiv f(z)$ -- we omit reference to the data $x$ in what follows, as $x$ is fixed and irrelevant for our purposes (notice that $f$ is not required to integrate to $1$) -- is assumed to have the structure,
\begin{align}
\label{eq:class}
f(z) = h(z)\exp\big\{ \langle \eta, T(z) \rangle - A(\eta) \big\},\quad z\in \mathsf{S}_p,
\end{align}
for $s$-dimensional constant vector $\eta=(\eta_1,\ldots, \eta_s)$, vector function $T(z)=(T_1(z),\ldots, T_{s}(z))$, with some $s\ge 1$, and relevant scalar functions $h>0$, $A$; $\langle \cdot,\cdot \rangle$ is the standard inner product in $\mathbb{R}^{s}$.
Also, we are given the choice of block-variational densities $q_1(z_1),\ldots, q_b(z_b)$ in (\ref{eq:meanfield}). Following the definition of CAVI from Section \ref{subsec:CAVI} --
assuming that the algorithm can be applied, i.e.~all required expectations can be obtained analytically --
the number of `sufficient' statistics, say $T_{i,k}$, giving rise to the definition of $q_{i,k}$,
will always be upper bounded by $s$. Thus, in our working scenario, CAVI will be applicable with
a computational cost that is upper bounded by a constant within the class of target distributions in
(\ref{eq:class}) -- assuming relevant costs for calculating expectations remain bounded over the algorithmic iterations.
Moving on to MC-CAVI, following the definition of index set $\mathcal{I}$ in Section \ref{subsec:CAVI},
recall that a Monte Carlo approach is required when updating $q_i(z_i)$ for $i\notin \mathcal{I}$, $1\le i \le b$. In such a scenario, controlling computational costs amounts to having a target (\ref{eq:class}) admitting the factorisations,
\begin{equation}
\label{eq:fact}
h(z) \equiv h_i(z_i)h_{-i}(z_{-i}),\quad T_{l}(z) \equiv T_{l,i}(z_{i})T_{l,-i}(z_{-i}), \,\,\,1\le l\le s,
\quad\,\, \textrm{for all }\,i\notin \mathcal{I}.
\end{equation}
Once (\ref{eq:fact}) is satisfied, we do not need to store all $N$ samples from $q_{-i}(z_{-i})$, but simply some relevant averages keeping the cost per iteration for the algorithm bounded. We stress that the combination of characterisations in (\ref{eq:class})-(\ref{eq:fact}) is very general and will typically be satisfied for most practical statistical models.
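When the factorisation in (\ref{eq:fact}) holds, the update of $q_i$ depends on the $N$ samples only through sample averages of the factors $T_{l,-i}(z_{-i})$, which can be accumulated in constant memory. A minimal sketch follows; the statistics $T_1(z)=z$, $T_2(z)=z^2$ are illustrative choices, not taken from a specific model.

```python
import numpy as np

class RunningMean:
    """Constant-memory running average; avoids storing all N samples."""
    def __init__(self):
        self.n, self.mean = 0, 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

# Accumulate, e.g., the averages of T_1(z) = z and T_2(z) = z^2 on the fly.
rng = np.random.default_rng(0)
draws = rng.normal(2.0, 1.0, size=5000)
t1, t2 = RunningMean(), RunningMean()
for z in draws:
    t1.update(z)
    t2.update(z * z)
```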
\subsection{Theoretical Justification of MC-CAVI}
An advantageous feature of MC-CAVI versus derivative-driven VI methods is its structural similarity with Monte Carlo Expectation-Maximization (MCEM). Thus, one can build on results in the MCEM literature to prove asymptotic properties of MC-CAVI; see e.g.~\cite{mc-em, boot:99, levi:01, fort:03}.
To avoid technicalities related to working on general spaces of probability density functions, we begin by assuming a parameterised setting for the variational densities -- as in the BBVI case --
with the family of variational densities being closed under CAVI or (more generally) MC-CAVI updates.
\begin{assumption}[Closedness of Parameterised $q(\cdot)$ Under Variational Update]
\label{ass:family}
For the CAVI or the MC-CAVI algorithm, each $q_{i,k}(z_i)$ density obtained during the iterations of the algorithm, $1\leq i\leq b$, $k\ge 0$, is of the parametric form
$$q_{i,k}(z_i) = q_i(z_i|\lambda_{i}^{k}),$$ for a unique $\lambda_{i}^{k}\in \Lambda_i\subseteq \mathbb{R}^{d_i}$, for some $d_i\ge 1$, for all $1\le i \le b$. \\
(Let $d=\sum_{i=1}^b {d_i}$ and $\Lambda =\Lambda_1 \times \cdots \times \Lambda_b $.)
\end{assumption}
\noindent Under Assumption \ref{ass:family}, CAVI and MC-CAVI correspond to well-defined maps
$M:\Lambda\mapsto\Lambda$, $\mathcal{M}_N:\Lambda\mapsto\Lambda$ respectively, so that,
given current variational parameter $\lambda$, one step of the algorithms can be expressed in terms of
a new parameter $\lambda'$ (different for each case) obtained via the updates
\begin{equation*}
\textrm{CAVI:}\,\,\,\,\lambda' = M(\lambda); \qquad \textrm{MC-CAVI:}\,\,\,\,\lambda' =
\mathcal{M}_N(\lambda).
\end{equation*}
\indent For an analytical study of the convergence properties of CAVI itself and relevant regularity conditions, see e.g.~\cite[Proposition 2.7.1]{bert:99},
or numerous other resources in numerical optimisation.
Expressing the MC-CAVI update -- say, the $(k+1)$th one -- as
\begin{equation}
\label{eq:perturb}
\lambda^{k+1} = M(\lambda^k) + \{ \mathcal{M}_N(\lambda^k) - M(\lambda^k) \},
\end{equation}
it can be seen as a random perturbation of a CAVI step. In the rest of this section we will explore the asymptotic properties of MC-CAVI. We follow closely the approach in \cite{mc-em} -- as it provides a less technical procedure, compared e.g.~to \cite{fort:03} or other works about MCEM -- making all appropriate adjustments to fit the derivations into the setting of the MC-CAVI methodology along the way. We denote by $M^{k}$, $\mathcal{M}_N^{k}$, the $k$-fold composition of $M$, $\mathcal{M}_{N}$ respectively, for $k\ge 0$.
\begin{assumption}
\label{ass:regular}
$\Lambda$ is an open subset of $\mathbb{R}^{d}$, and the
mappings $\lambda\mapsto \textrm{ELBO}(q(\lambda))$, $\lambda\mapsto M(\lambda)$ are continuous on $\Lambda$.
\end{assumption}
\noindent If $M(\lambda)=\lambda$ for some $\lambda\in \Lambda$, then $\lambda$ is a fixed point of $M(\cdot)$.
A given $\lambda^*\in \Lambda$ is called an isolated local maximiser of the ELBO$(q(\cdot))$ if there is a neighborhood of
$\lambda^*$ over which $\lambda^*$ is the unique maximiser of the ELBO$(q(\cdot))$.
\begin{assumption}[Properties of $M(\cdot)$ Near a Local Maximum]
\label{ass:M}
Let $\lambda^*\in\Lambda$ be an isolated local maximum of ELBO$(q(\cdot))$. Then,
\begin{itemize}
\item[(i)] $\lambda^*$ is a fixed point of $M(\cdot)$;
\item[(ii)]there is a neighborhood $V\subseteq \Lambda$ of $\lambda^*$ over which $\lambda^*$ is a unique maximum, such that
$\textrm{ELBO}(q(M(\lambda)))>\textrm{ELBO}(q(\lambda))$ for any $\lambda\in V\backslash\{\lambda^*\}$.
\end{itemize}
\end{assumption}
\noindent
Notice that the above assumption refers to the deterministic update $M(\cdot)$, which performs co-ordinate ascent; thus requirements (i), (ii) are fairly weak for such a recursion.
The critical technical assumption required for delivering the convergence results in the rest of this section is the following one.
\begin{assumption}[Uniform Convergence in Probability on Compact Sets]
\label{ass:technical}
For any compact set $C\subseteq\Lambda$ the following holds: for any $\varrho,\varrho'>0$, there exists a positive integer $N_0$,
such that for all $N\ge N_0$ we have,
\begin{equation*}
\inf_{\lambda\in C} \mathrm{Prob}\,\big[\, \big| \mathcal{M}_N(\lambda)-M(\lambda) \big| < \varrho \, \big]
> 1-\varrho' .
\end{equation*}
\end{assumption}
\noindent
It is beyond the scope of this paper to examine Assumption \ref{ass:technical} in more depth. We will only stress that Assumption \ref{ass:technical} is the sufficient structural condition
that allows one to extend the closeness between CAVI and MC-CAVI updates over a single algorithmic step into
closeness over an arbitrary number of steps.
We continue with a definition.
\begin{define}
\label{def:stable}
A fixed point $\lambda^*$ of $M(\cdot)$ is said to be asymptotically stable if,
\begin{itemize}
\item[(i)] for any neighborhood $V_1$ of $\lambda^*$, there is a neighborhood $V_2$ of $\lambda^*$ such that for all~$k\ge 0$ and all $\lambda\in V_2$, $M^k(\lambda)\in V_1$;
\item[(ii)] there exists a neighbourhood $V$ of $\lambda^*$ such that
$\lim_{k\rightarrow\infty}M^k(\lambda)=\lambda^*$ if $\lambda\in V$.
\end{itemize}
\end{define}
We will state the main asymptotic result for MC-CAVI in Theorem \ref{th:stable} that follows; first we require Lemma
\ref{lem:stable}.
\begin{lemma}
\label{lem:stable}
Let Assumptions \ref{ass:family}-\ref{ass:M} hold.
If $\lambda^*$ is an isolated local maximiser of $\textrm{ELBO}(q(\cdot))$, then $\lambda^*$ is an asymptotically stable fixed point of $M(\cdot)$.
\end{lemma}
The main result of this section is as follows.
\begin{theorem}
\label{th:stable}
Let Assumptions \ref{ass:family}-\ref{ass:technical} hold and $\lambda^*$ be an isolated local maximiser of $\mathrm{ELBO}(q(\cdot))$. Then there exists a neighbourhood, say $V_1$, of $\lambda^*$ such that for starting values
$\lambda\in V_1$ of the MC-CAVI algorithm and for all $\epsilon_1>0$, there exists a $k_0$ such that
%
\begin{equation*}
\lim_{N\rightarrow \infty}\mathrm{Prob}\,\big(\,|\mathcal{M}_N^{k}(\lambda)-\lambda^* | < \epsilon_1 \textrm{ for some } k\leq k_0\,\big)= 1.
\end{equation*}
\end{theorem}
\noindent The proofs of Lemma \ref{lem:stable} and Theorem \ref{th:stable} can be found in Appendices \ref{sec:lem} and \ref{sec:theorem}, respectively.
\subsection{Stopping Criterion and Sample Size}
The method requires the specification of the Monte Carlo size $N$ and a stopping rule.
\subsubsection*{Principled - but Impractical - Approach}
As the algorithm approaches a local maximum, changes in ELBO should be getting closer to zero.
To evaluate the performance of MC-CAVI, one could, in principle, attempt to monitor the evolution of ELBO during the algorithmic iterations.
For current variational distribution $q=(q_1,\ldots, q_b)$, assume that MC-CAVI is about to update $q_i$
with $q'_i= q'_{i,N}$, where the addition of the second subscript at this point emphasizes the dependence of the new value for $q_i$ on the Monte Carlo size $N$. Define,
\begin{equation*}
\Delta\mathrm{ELBO}(q, N) = \mathrm{ELBO}(q_{i-},q'_{i,N},q_{i+}) - \mathrm{ELBO}(q).
\end{equation*}
If the algorithm is close to a local maximum, $\Delta$ELBO$(q, N)$ should be close to zero, at least for sufficiently large $N$. Given such a choice of $N$, an MC-CAVI recursion should be terminated once $\Delta$ELBO$(q, N)$ is smaller than a user-specified tolerance threshold.
Assume that the random variable
$\Delta$ELBO$(q, N)$ has mean $\mu = \mu(q, N)$ and variance $\sigma^2 = \sigma^2(q, N)$.
Chebyshev's inequality implies that, with probability greater than or equal to $(1-1/K^2)$, $\Delta$ELBO$(q, N)$ lies within the interval $(\mu-K\sigma, \mu + K\sigma)$, for any real $K>0$. Assume that one fixes a large enough $K$.
The choice of $N$ and of a stopping criterion should be based on the requirements:
\begin{itemize}
\item[(i)] $\sigma\leq \nu$, with $\nu$ a predetermined level of tolerance;
\item[(ii)] the effective range $(\mu-K\sigma, \mu + K\sigma)$ should include zero, implying that $\Delta$ELBO$(q, N)$ differs from zero by less than $2K\sigma$.
\end{itemize}
Requirement (i) provides a rule for the choice of $N$ -- assuming it is applied over all $1\le i \le b$, for $q$ in areas close to a maximiser -- and requirement (ii) a rule for defining a stopping criterion. Unfortunately, the above considerations -- based on the proper term ELBO$(q)$ that VI aims to maximise --
involve quantities that are typically impossible to obtain analytically or to approximate at reasonable cost.
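Requirements (i)-(ii) can nevertheless be sketched as a simple check on repeated draws of $\Delta\mathrm{ELBO}(q,N)$. The sketch below is purely illustrative -- in practice such draws are exactly what is unavailable, which is the point made above -- and the two synthetic arrays stand in for behaviour near to, and far from, a maximum.

```python
import numpy as np

def stop_check(delta_elbos, nu, K):
    """Check (i) sigma <= nu and (ii) 0 lies in (mu - K*sigma, mu + K*sigma)."""
    mu = np.mean(delta_elbos)
    sigma = np.std(delta_elbos, ddof=1)
    return bool((sigma <= nu) and (mu - K * sigma < 0.0 < mu + K * sigma))

# Near a maximum: small, zero-centred changes -> criterion satisfied.
near = np.array([0.001, -0.001, 0.002, -0.002])
# Far from a maximum: large positive changes -> keep iterating.
far = np.array([5.0, 5.1, 4.9, 5.0])
```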
\subsubsection*{Practical Considerations}
Similarly to MCEM, it is recommended that $N$ be increased as the algorithm becomes more stable.
It is computationally inefficient to start with a large value of $N$ when the current variational distribution can be far from the maximiser. In practice, one may monitor the convergence of the algorithm by plotting relevant \emph{statistics} of the variational distribution versus the number of iterations. We can declare that convergence has been reached when such traceplots show relatively small random fluctuations (due to the Monte Carlo variability) around a fixed value. At this point, one may terminate the algorithm or continue with a larger value of $N$, which will further decrease the traceplot variability. In the applications we encounter in the sequel, we typically have $N\le 100$,
so calculating, for instance, Effective Sample Sizes to monitor the mixing performance of the MCMC steps is not practical.
\section{Numerical Examples -- Simulation Study}
\label{sec:numerics}
In this section we illustrate MC-CAVI with two simulated examples.
First, we apply MC-CAVI and CAVI on a simple model to highlight main features and implementation strategies.
Then, we contrast MC-CAVI, MCMC, BBVI in a complex scenario with hard constraints.
\subsection{Simulated Example 1}
\label{sec:example1}
We generate $n=10^3$ data points from $\mathrm{N}(10,100)$ and fit the semi-conjugate Bayesian model
\begin{align*}
\textrm{\underline{Example Model 1}} \\
{x_1, \ldots, x_n} &\sim \mathrm{N}(\vartheta,\tau^{-1}), \\
\vartheta &\sim \mathrm{N}(0,\tau^{-1}), \\
\tau &\sim \textrm{Gamma}(1,1).
\end{align*}
Let $\bar{x}$ be the data sample mean. In each iteration, the CAVI density function -- see (\ref{eq:recursion}) -- for $\tau$ is that of the Gamma distribution $\textrm{Gamma}(\tfrac{n+3}{2},\zeta)$, with
\begin{align*}
\zeta = 1 + \tfrac{(1+n)\mathbb{E}(\vartheta^2)-2(n\bar{x})\mathbb{E}(\vartheta)+\sum^n_{j=1}x^2_j}{2},
\end{align*}
%
whereas for $\vartheta$ that of the normal distribution $\mathrm{N}(\frac{n\bar{x}}{1+n},\frac{1}{(1+n)\mathbb{E}(\tau)})$.
%
$(\mathbb{E}(\vartheta),\mathbb{E}(\vartheta^2))$ and $\mathbb{E}(\tau)$ denote the relevant expectations under the current CAVI distributions for $\vartheta$ and $\tau$ respectively; the former are initialized at 0 -- there is no need to initialise $\mathbb{E}(\tau)$ in this case. Convergence of CAVI can be monitored, e.g., via
the sequence of values of $\theta := (1+n)\mathbb{E}(\tau)$ and $\zeta$. If the change in values of these two parameters is smaller than, say, $0.01\%$, we declare convergence. Figure \ref{viresult1} shows the traceplots of $\theta$, $\zeta$.
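The CAVI recursion just described can be sketched as follows. This is a re-implementation under the stated priors; the data are regenerated here with an arbitrary seed, so the exact numbers differ slightly from those reported in this section.

```python
import numpy as np

# Regenerate the data of Simulated Example 1 (arbitrary seed).
rng = np.random.default_rng(0)
x = rng.normal(10.0, 10.0, size=1000)       # n = 1000 draws from N(10, 100)
n, xbar, sum_sq = len(x), x.mean(), float(np.sum(x ** 2))

E_vth, E_vth2 = 0.0, 0.0                    # initialise E(vartheta), E(vartheta^2) at 0
zeta_old, theta_old = np.inf, np.inf
iters = 0
for _ in range(100):
    iters += 1
    # CAVI density for tau: Gamma((n + 3)/2, zeta)
    zeta = 1.0 + ((1 + n) * E_vth2 - 2 * n * xbar * E_vth + sum_sq) / 2.0
    E_tau = ((n + 3) / 2.0) / zeta
    # CAVI density for vartheta: N(n*xbar/(1 + n), 1/((1 + n)*E_tau))
    mean, var = n * xbar / (1 + n), 1.0 / ((1 + n) * E_tau)
    E_vth, E_vth2 = mean, mean ** 2 + var
    theta = (1 + n) * E_tau                 # monitored quantity theta = (1 + n) E(tau)
    # Declare convergence when both monitored values change by less than 0.01%.
    if abs(zeta - zeta_old) < 1e-4 * abs(zeta_old) and \
       abs(theta - theta_old) < 1e-4 * abs(theta_old):
        break
    zeta_old, theta_old = zeta, theta
```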
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{0001}
\includegraphics[scale=0.35]{0002}
\end{center}
\caption{Traceplots of $\zeta$ (left), $\theta$ (right) from application of CAVI on Simulated Example~1.}
\label{viresult1}
\end{figure}
Convergence is reached within 0.0017secs\footnote{A Dell Latitude E5470 with Intel(R) Core(TM) i5-6300U CPU @ 2.40GHz is used for all experiments in this paper.}, after precisely two iterations, due to the simplicity of the model. The resulting CAVI distribution for $\vartheta$ is $\mathrm{N}(9.6,0.1)$, and for $\tau$ it is Gamma$(501.5,50130.3)$, so that $\mathbb{E}(\tau) \approx 0.01$.
Assume now that $q(\tau)$ were intractable.
Since $\mathbb{E}(\tau)$ is required to update the approximate distribution of $\vartheta$, an MCMC step can be employed to sample $\tau_1,\ldots, \tau_{N}$ from $q(\tau)$ to produce the Monte Carlo estimator $\widehat{\mathbb{E}}(\tau)=\sum^{N}_{j=1}\tau_j/N$. Within this MC-CAVI setting, $\widehat{\mathbb{E}}(\tau)$ will replace the exact
${\mathbb{E}}(\tau)$ during the algorithmic iterations.
$(\mathbb{E}(\vartheta),\mathbb{E}(\vartheta^2))$ are initialised as in CAVI.
For the first 10 iterations we set $N=10$, and for the remaining ones, $N=10^3$ to reduce variability.
We monitor the values of $\widehat{\mathbb{E}}(\tau)$ shown
in Figure \ref{mcmcresult1}.
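This Monte Carlo variant can be sketched as below. Since $q(\tau)$ is in fact a known Gamma distribution, the sketch draws from it directly as a stand-in for the MCMC step; this keeps the code self-contained while reproducing the $N$-schedule described above (seed and final averaging choice are illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 10.0, size=1000)   # n = 1000 draws from N(10, 100)
n, xbar, sum_sq = len(x), x.mean(), float(np.sum(x ** 2))

E_vth, E_vth2 = 0.0, 0.0                # initialised as in CAVI
trace = []
for k in range(30):
    N = 10 if k < 10 else 1000          # small N early on, larger N afterwards
    zeta = 1.0 + ((1 + n) * E_vth2 - 2 * n * xbar * E_vth + sum_sq) / 2.0
    # Stand-in for an MCMC run targeting q(tau) = Gamma((n + 3)/2, zeta):
    taus = rng.gamma(shape=(n + 3) / 2.0, scale=1.0 / zeta, size=N)
    E_tau_hat = taus.mean()
    trace.append(E_tau_hat)
    mean, var = n * xbar / (1 + n), 1.0 / ((1 + n) * E_tau_hat)
    E_vth, E_vth2 = mean, mean ** 2 + var

# Final estimator: average the last 10 traceplot values to remove MC noise.
E_tau_final = float(np.mean(trace[-10:]))
```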
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{0003}
\end{center}
\vspace{-0.4cm}
\caption{Traceplot of $\widehat{\mathbb{E}}(\tau)$ generated by MC-CAVI for Simulated Example 1, using $N=10$ for the first 10 iterations of the algorithm, and $N=10^3$ for the rest.}
\label{mcmcresult1}
\end{figure}
The figure shows that MC-CAVI has stabilized after about $15$ iterations; algorithmic time was 0.0114secs. To remove some Monte Carlo variability, the final estimator of $\mathbb{E}(\tau)$
is produced by averaging the last 10 values of its traceplot,
which gives $\widehat{\mathbb{E}}(\tau) = 0.01$, i.e.~a value very close to the one obtained by CAVI. The estimated distribution of $\vartheta$ is $\mathrm{N}(9.6,0.1)$, the same as with CAVI.
The performance of MC-CAVI depends critically on the choice of $N$. Let A be the value of $N$ in the burn-in period, B the number of burn-in iterations and C the value of $N$ after burn-in. Figure \ref{mitertune} shows trace plots of $\widehat{\mathbb{E}}(\tau)$ under different settings of the triplet A-B-C.
\begin{figure}
\includegraphics[scale=0.35]{00031}
\includegraphics[scale=0.35]{00032}
\caption{Traceplot of $\widehat{\mathbb{E}}(\tau)$ under different settings of A-B-C (respectively, the value of $N$ in the burn-in period, the number of burn-in iterations and the value of $N$ after burn-in) for Simulated Example 1.}
\label{mitertune}
\end{figure}
\begin{figure}
\includegraphics[scale=0.45]{time_var}
\includegraphics[scale=0.45]{time_size}
\caption{Plot of convergence time versus variance of $\widehat{\mathbb{E}}(\tau)$ (left panel) and versus Monte Carlo sample size $N$ (right panel).}
\label{convergence_plot}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|l|l|l|l|l|l|}
\hline
A-B-C & $10$-$10$-$10^5$ & $10^3$-$10$-$10^5$ & $10^5$-$10$-$10^5$ & $10$-$30$-$10^5$ & $10$-$50$-$10^5$ \\ \hline
time (secs) & 0.4640 & 0.4772 & 0.5152 & 0.3573 & 0.2722 \\ \hline
$\widehat{\mathbb{E}}(\tau)$ & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 \\ \hline
\end{tabular}
\caption{Results of MC-CAVI for Simulated Example 1.}
\label{my-label}
\end{table}
As with MCEM, $N$ should typically be set to a small number at the beginning of the iterations so that the algorithm can reach fast a region of relatively high probability. $N$ should then be increased to reduce algorithmic variability close to the convergence region.
Figure \ref{convergence_plot} shows plots of convergence time versus variance of $\widehat{\mathbb{E}}(\tau)$ (left panel) and versus $N$ (right panel). In VI, iterations are typically terminated when the (absolute) change in the monitored estimate is less than a small threshold. In MC-CAVI the estimate fluctuates around the limiting value after convergence. In the simulation in Figure \ref{convergence_plot}, we terminate the iterations when the difference between the estimated mean (disregarding the first half of the chain) and the true value ($0.01$) is less than $10^{-5}$. Figure \ref{convergence_plot} shows that: (i) convergence time decreases when the variance of $\widehat{\mathbb{E}}(\tau)$ decreases, as anticipated;
(ii) convergence time decreases when $N$ increases. In (ii), the decrease is most evident when $N$ is still relatively small.
After $N$ exceeds $200$, convergence time remains almost fixed, as the benefit brought by decrease of variance is offset by the cost of extra samples. (This is also in agreement with the policy of $N$ set to a small value at the initial iterations of the algorithm.)
\subsection{Variance Reduction for BBVI}
In non-trivial applications, the variability of the initial estimator $\nabla_{\lambda}\widehat{\textrm{ELBO}}(q)$ within BBVI in (\ref{eq:BBVIest}) will typically be large, so variance reduction approaches such as Rao-Blackwellization and control variates \citep{bbvi} are also used.
Rao-Blackwellization \citep{raoblack} reduces variances by analytically calculating conditional expectations.
In BBVI, within the
factorization framework of (\ref{eq:meanfield}), where
$\lambda = (\lambda_1,\ldots, \lambda_b)$, and recalling identity (\ref{eq:mainBB}) for the gradient,
a Monte Carlo estimator for the gradient with respect to $\lambda_i$, $i\in\{1,\ldots, b\}$, can be simplified as
\begin{equation}
\label{raograd}
\nabla_{\lambda_i}\widehat{\textrm{ELBO}}(q_i) = \tfrac{1}{N}\sum^N_{n=1}\big[\nabla_{\lambda_i}\log q_i(z_i^{(n)}|\lambda_i)\{\log c_i(z_i^{(n)},x)-\log q_i(z_i^{(n)}| \lambda_i)\}\big],
\end{equation}
with $z_i^{(n)} \stackrel{iid}{\sim} q_i(z_i|\lambda_i)$, $1\le n\le N$, and,
\begin{align*}
c_i(z_i,x):= \exp\big\{\mathbb{E}_{-i}[\log p(z_{i_-},z_{i},z_{i_+},x)]\big\}.
\end{align*}
%
Depending on the model at hand, term $c_i(z_i,x)$ can be obtained analytically
or via a double Monte Carlo procedure (for estimating $c_i(z_i^{(n)},x)$,
over all $1\le n\le N$) -- or a combination thereof.
In BBVI, control variates \citep{ross_2002} can be defined on a per-component basis and be applied to the Rao-Blackwellized noisy gradients of ELBO in (\ref{raograd}) to provide the estimator,
\begin{equation}
\label{deltaelbo}
\nabla_{\lambda_i}\widehat{\textrm{ELBO}}(q_i) = \tfrac{1}{N}\sum^N_{n=1}\big[\nabla_{\lambda_i}\log q_i(z_i^{(n)}| \lambda_i)\{\log c_i(z_i^{(n)},x)-\log q_i(z_i^{(n)}| \lambda_i)-\widehat{a}^*_i\}\big],
\end{equation}
for the control,
\begin{equation*}
\widehat{a}^*_i := \frac{\sum^{d_i}_{j=1}\widehat{\textrm{Cov}}(f_{i,j},g_{i,j})}{\sum^{d_i}_{j=1}\widehat{\textrm{Var}}(g_{i,j})},
\end{equation*}
where $f_{i,j}$, $g_{i,j}$ denote the $j$th co-ordinate of the vector-valued functions $f_i$, $g_i$ respectively,
given below,
\begin{align*}
g_i(z_i)&:= \nabla_{\lambda_i}\log q_i(z_i| \lambda_i), \\
f_i(z_i)&:= \nabla_{\lambda_i}\log q_i(z_i| \lambda_i)\{\log c_i(z_i,x)-\log q_i(z_i| \lambda_i)\}.
\end{align*}
\begin{comment}
confronted with options: (i) An effortless variational distribution, such as normal distribution or gamma distribution, can be easy to sample from, but with high probability, the samples would not meet the hard constraints; (ii) A carefully designed variational distribution with the hard constraints usually is only available up to a proportionality constant, which causes complication in evaluating the score function required in Eq.\ref{deltaelbo}.
\end{comment}
\subsection{Simulated Example 2: Model with Hard Constraints}
In this section, we discuss the performance and challenges of MC-CAVI, MCMC, BBVI for models where the support of the posterior -- thus, also the variational distribution --
involves hard constraints.
Here, we provide an example which offers a simplified version of the NMR problem discussed in Section~\ref{sec:nmr} but allows for the implementation of BBVI, as the involved normalising constants can be easily computed. Moreover, as with other gradient-based methods, BBVI requires tuning the step-size sequence $\{\rho_k\}$ in (\ref{eq:BBVIit}), which might be a laborious task, in particular for increasing dimension. Although there are several proposals aimed at optimising the choice of $\{\rho_k\}$ (\citealp{Bottou2012,advi}), MC-CAVI does not face such a tuning requirement.
We simulate data according to the following scheme: observations
$\{y_j\}$ are generated from $\mathrm{N}(\vartheta + \kappa_j,\theta^{-1})$, $j = 1,\ldots,n$, with $\vartheta = 6$, $\kappa_j = 1.5\cdot \sin(-2\pi+4\pi(j-1)/{n})$, $\theta = 3$, $n = 100$.
We fit the following model:
\begin{align*}
\textrm{\underline{Example Model 2}} \\
y_j \mid \vartheta, \kappa_j, \theta&\sim \mathrm{N}(\vartheta + \kappa_j,\theta^{-1}), \\[0.1cm]
\vartheta &\sim \mathrm{N}(0,10),\\[0.1cm]
\kappa_j \mid \psi_j &\sim \mathrm{TN}(0,10,-\psi_j,\psi_j),\\
\psi_j \hspace{0.1cm} &\!\!\stackrel{i.i.d.}{\sim} \mathrm{TN}(0.05,10,0,2),\quad j = 1,\ldots,n, \\[0.1cm]
\theta &\sim \mathrm{Gamma}(1,1).
\end{align*}
\subsubsection*{MCMC}
\label{sec:MCMC}
We use a standard Metropolis-within-Gibbs. We set $y = (y_1, \ldots, y_{n})$, $\kappa = (\kappa_1, \ldots, \kappa_{n})$ and $\psi = (\psi_1, \ldots, \psi_{n})$.
Notice that we have the full conditional distributions,
\begin{align*}
p(\vartheta| y,\theta, \kappa, \psi) &= \mathrm{N}\big(\tfrac{\sum^{n}_{j=1}(y_j-\kappa_j)\theta}{\frac{1}{10}+{n}\theta},\tfrac{1}{\frac{1}{10}+{n}\theta}\big),\\[0.1cm]
p(\kappa_j| y,\theta,\vartheta, \psi)&= \mathrm{TN}\big(\tfrac{(y_j-\vartheta)\theta}{\frac{1}{10}+\theta},\tfrac{1}{\frac{1}{10}+\theta},-\psi_j,\psi_j\big) ,\\[0.1cm]
p(\theta|y,\vartheta, \kappa, \psi) &= \mathrm{Gamma}\big(1+\tfrac{n}{2},1+\tfrac{\sum^{n}_{j=1}(y_j-\vartheta-\kappa_j)^2}{2}\big).
\end{align*}
(Above, and in similar expressions written in the sequel, equality is meant to be properly understood as stating that `the density
on the left is equal to the density of the distribution on the right'.)
For each $\psi_j$, $1\le j\le {n}$, the full conditional is,
\begin{equation*}
p(\psi_j | y,\theta,\vartheta, \kappa) \propto \frac{ \phi(\tfrac{\psi_j-\frac{1}{20}}{\sqrt{10}})}{\Phi(\tfrac{\psi_j}{\sqrt{10}})-\Phi(\tfrac{-\psi_j}{\sqrt{10}})}\,
\mathbb{I}\,[\,|\kappa_j|<\psi_j<2\,],\quad j = 1,\ldots,{n},
\end{equation*}
where $\phi(\cdot)$ is the density of $\mathrm{N}(0,1)$ and $\Phi(\cdot)$ its cdf.
The Metropolis-Hastings proposal for $\psi_j$ is a uniform variate from
$\textrm{U}(0,2)$.
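This Metropolis-Hastings step for $\psi_j$ can be sketched as an independence sampler with the $\mathrm{U}(0,2)$ proposal against the unnormalised full conditional above; the value of $\kappa_j$ used in the demonstration is illustrative.

```python
import math
import numpy as np

SQRT10 = math.sqrt(10.0)

def norm_pdf(x):  # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_psi_conditional(psi, kappa):
    """Unnormalised log density of p(psi_j | y, theta, vartheta, kappa)."""
    if not (abs(kappa) < psi < 2.0):
        return -math.inf
    return (math.log(norm_pdf((psi - 0.05) / SQRT10))
            - math.log(norm_cdf(psi / SQRT10) - norm_cdf(-psi / SQRT10)))

def mh_step_psi(psi, kappa, rng):
    """One independence MH step with a U(0, 2) proposal."""
    prop = rng.uniform(0.0, 2.0)
    log_ratio = log_psi_conditional(prop, kappa) - log_psi_conditional(psi, kappa)
    return prop if math.log(rng.uniform()) < log_ratio else psi

rng = np.random.default_rng(0)
kappa, psi = 0.3, 1.0          # illustrative kappa_j; valid starting psi_j
chain = []
for _ in range(2000):
    psi = mh_step_psi(psi, kappa, rng)
    chain.append(psi)
```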
\subsubsection*{MC-CAVI}
For MC-CAVI, the logarithm of the joint distribution is given by,
\begin{align*}
\log p(y,\vartheta, \kappa, \psi,\theta) &= const. + \tfrac{n}{2}\log \theta - \tfrac{\theta\sum^{n}_{j=1}(y_j - \vartheta-\kappa_j)^2}{2} - \tfrac{\vartheta^2}{2\cdot 10}
-\theta-\sum^{n}_{j=1}\tfrac{\kappa_j^2+(\psi_j-\frac{1}{20})^2}{2\cdot 10}
\\[-0.4cm]
&\qquad \qquad \qquad -\sum^{n}_{j=1} \log(\Phi(\tfrac{\psi_j }{\sqrt{10}})-\Phi(\tfrac{-\psi_j }{\sqrt{10}})),
\end{align*}
under the constraints,
\begin{align*}
|\kappa_j|<\psi_j<2, \quad j = 1,\ldots,{n}.
\end{align*}
To comply with the above constraints, we factorise the variational distribution as,
\begin{align}
\label{eq:parts}
q(\vartheta,\theta, \kappa, \psi)=q(\vartheta)q(\theta)\prod^{n}_{j=1}q(\kappa_j,\psi_j).
\end{align}
Here, for the relevant iteration $k$, we have,
\begin{align*}
q_k(\vartheta) &=
\mathrm{N}\big(\tfrac{\sum^{n}_{j=1}(y_j-\mathbb{E}_{k-1}(\kappa_j))\mathbb{E}_{k-1}(\theta)}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)},\tfrac{1}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)}\big),\\[0.2cm]
q_k(\theta) &=
\mathrm{Gamma}\big(1+\tfrac{n}{2}, 1+\tfrac{\sum^{n}_{j=1}\mathbb{E}_{k,k-1}((y_j-\vartheta-\kappa_j)^2)}{2}\big), \\[0.3cm]
q_k(\kappa_j,\psi_j) &\propto \exp\big\{-
\tfrac{\mathbb{E}_{k}(\theta) (\kappa_j-(y_j-\mathbb{E}_{k}(\vartheta)))^2}{2} -\tfrac{\kappa_j^2+(\psi_j-\frac{1}{20})^2}{2\cdot 10} \big\} \big/
\big(\Phi(\tfrac{\psi_j }{\sqrt{10}})-\Phi(\tfrac{-\psi_j }{\sqrt{10}})\big)\\ &\qquad \qquad\qquad\qquad\qquad\qquad
\qquad\cdot \mathbb{I}\,[\,|\kappa_j|<\psi_j<2\,],\qquad 1\le j\le {n}.
\end{align*}
The quantity $\mathbb{E}_{k,k-1}((y_j-\vartheta-\kappa_j)^2)$ used in the second line above means that the expectation is considered under $\vartheta\sim q_k(\vartheta)$ and (independently) $\kappa_{j}\sim q_{k-1}(\kappa_{j},\psi_j)$.
Then, MC-CAVI develops as follows:
\begin{itemize}
\item Step 0: For $k=0$, initialize
$\mathbb{E}_{0}(\theta)=1$, $\mathbb{E}_{0}(\vartheta)=4$, $\mathbb{E}_{0}(\vartheta^2)=17$.
\item Step $k$:
For $k\ge 1$, given $\mathbb{E}_{k-1}(\theta)$, $\mathbb{E}_{k-1}(\vartheta)$,
execute:
\begin{itemize}
\item For $j=1,\ldots, {n}$, apply an MCMC algorithm -- with invariant law
$q_{k-1}(\kappa_j,\psi_j)$ -- consisting of a number, $N$, of Metropolis-within-Gibbs iterations carried out over the relevant full conditionals,
%
\begin{align*}
q_{k-1}(\psi_j| \kappa_j) &\propto\frac{\phi(\tfrac{\psi_j-\frac{1}{20}}{\sqrt{10}})}{\Phi(\tfrac{\psi_j}{\sqrt{10}})-\Phi(\tfrac{-\psi_j}{\sqrt{10}})}\,
\mathbb{I}\,[\,|\kappa_j|<\psi_j<2\,], \\[0.3cm]
q_{k-1}(\kappa_j|\psi_j)&= \mathrm{TN}\big(\tfrac{(y_j-\mathbb{E}_{k-1}(\vartheta))\mathbb{E}_{k-1}(\theta)}{\frac{1}{10}+\mathbb{E}_{k-1}(\theta)},\tfrac{1}{\frac{1}{10}+\mathbb{E}_{k-1}(\theta)},-\psi_j,\psi_j\big).
\end{align*}
As with the full conditional $p(\psi_j | y,\theta,\vartheta,\kappa)$
within the MCMC sampler, we use a uniform proposal $\mathrm{U}(0,2)$
at the Metropolis-Hastings step applied for $q_{k-1}(\psi_j| \kappa_j)$. For each $k$, the $N$ iterations begin from the $(\kappa_j,\psi_j)$-values obtained at the end of the corresponding MCMC iterations at step $k-1$, with the very first initial values being $(\kappa_j,\psi_j)=(0,1)$.
Use the $N$ samples to obtain $\mathbb{E}_{k-1}(\kappa_j)$ and $\mathbb{E}_{k-1}(\kappa_j^2)$.
\item Update the variational distribution for $\vartheta$,
\begin{align*}
q_{k}(\vartheta) &= \mathrm{N}\big(\tfrac{\sum^{n}_{j=1}(y_j-\mathbb{E}_{k-1}(\kappa_j))\mathbb{E}_{k-1}(\theta)}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)},\tfrac{1}{\frac{1}{10}+{n}\mathbb{E}_{k-1}(\theta)}\big)
\end{align*}
and evaluate $\mathbb{E}_{k}(\vartheta)$, $\mathbb{E}_{k}(\vartheta^2)$.
\item Update the variational distribution for $\theta$,
\begin{align*}
q_{k}(\theta)&= \mathrm{Gamma}\big(1+\tfrac{n}{2},1+\tfrac{\sum^{n}_{j=1}\mathbb{E}_{k,k-1}((y_j-\vartheta-\kappa_j)^2)}{2}\big)
\end{align*}
and evaluate $\mathbb{E}_{k}(\theta)$.
\end{itemize}
\item Iterate until convergence.
\end{itemize}
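One inner MCMC run for a single pair $(\kappa_j,\psi_j)$ can be sketched as follows. The current values $y_j=6$, $\mathbb{E}_{k-1}(\vartheta)=4$, $\mathbb{E}_{k-1}(\theta)=1$ are illustrative, and the truncated-normal draw uses plain rejection sampling, which is adequate for a sketch.

```python
import math
import numpy as np

SQRT10 = math.sqrt(10.0)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def log_q_psi(psi, kappa):
    """Unnormalised log q_{k-1}(psi_j | kappa_j)."""
    if not (abs(kappa) < psi < 2.0):
        return -math.inf
    return (-0.5 * ((psi - 0.05) / SQRT10) ** 2
            - math.log(norm_cdf(psi / SQRT10) - norm_cdf(-psi / SQRT10)))

def sample_tn(rng, mean, var, lo, hi):
    """Rejection sampler for N(mean, var) truncated to (lo, hi)."""
    sd = math.sqrt(var)
    while True:
        draw = rng.normal(mean, sd)
        if lo < draw < hi:
            return draw

y_j, E_vth, E_th = 6.0, 4.0, 1.0          # illustrative current values
m = (y_j - E_vth) * E_th / (0.1 + E_th)   # conditional mean of kappa_j
v = 1.0 / (0.1 + E_th)                    # conditional variance of kappa_j

rng = np.random.default_rng(0)
kappa, psi = 0.0, 1.0                     # very first initial values
ks = []
for _ in range(500):                      # N Metropolis-within-Gibbs sweeps
    kappa = sample_tn(rng, m, v, -psi, psi)
    prop = rng.uniform(0.0, 2.0)          # MH update of psi with U(0,2) proposal
    if math.log(rng.uniform()) < log_q_psi(prop, kappa) - log_q_psi(psi, kappa):
        psi = prop
    ks.append(kappa)

E_kappa, E_kappa2 = float(np.mean(ks)), float(np.mean(np.square(ks)))
```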
\begin{comment}
\begin{align*}
q(\vartheta)&= \mathrm{N}(\tfrac{\sum^n_{i=i}(y_i-\mathbb{E}[\kappa_i])\mathbb{E}[\theta]}{\frac{1}{100}+n\mathbb{E}[\theta]},\tfrac{1}{\frac{1}{100}+n\mathbb{E}[\theta]}),\\[0.2cm]
q(\theta)&= \mathrm{Gamma}(1+\tfrac{n}{2},1+\tfrac{\sum^n_{i=1}\mathbb{E}(y_i-\vartheta-\kappa_i)^2}{2}),\\[0.3cm]
q(\kappa_i\mid \psi_i)&= \mathrm{TN}(\tfrac{(y_i-\mathbb{E}[\vartheta])\mathbb{E}[[\theta]}{\frac{1}{100}+\mathbb{E}[\theta]},\tfrac{1}{\frac{1}{100}+\mathbb{E}[\theta]},-\psi_i,\psi_i),\\[0.3cm]
q(\psi_i\mid \kappa) &= \mathrm{TN}(\tfrac{1}{20},10,\max\{0,\kappa_i\},2).
\end{align*}
\end{comment}
\subsubsection*{BBVI}
For BBVI we assume a variational distribution $q(\theta,\vartheta, \kappa, \psi\,|\,\boldsymbol{\alpha},\boldsymbol{\gamma})$
that factorises as in the case of MC-CAVI in (\ref{eq:parts}), where
\begin{align*}
\boldsymbol{\alpha} &= (\alpha_{\vartheta}, \alpha_{\theta}, \alpha_{\kappa_1}, \ldots, \alpha_{\kappa_{n}}, \alpha_{\psi_1}, \ldots, \alpha_{\psi_{n}})\ , \\ \boldsymbol{\gamma} &= (\gamma_{\vartheta}, \gamma_{\theta}, \gamma_{\kappa_1}, \ldots, \gamma_{\kappa_{n}}, \gamma_{\psi_1}, \ldots, \gamma_{\psi_{n}})
\end{align*}
to be the variational parameters.
Individual marginal distributions are chosen to agree -- in type -- with the model priors. In particular, we set,
\begin{align*}
q(\vartheta) &= \mathrm{N}(\alpha_{\vartheta},\exp(\gamma_{\vartheta})),\\[0.2cm]
q(\theta) &= \mathrm{Gamma}(\exp(\alpha_{\theta}),\exp(\gamma_{\theta})), \\[0.2cm]
q(\kappa_j,\psi_j) &= \mathrm{TN}(\alpha_{\kappa_j},\exp(2\gamma_{\kappa_j}),-\psi_j,\psi_j)\otimes \mathrm{TN}(\alpha_{\psi_j},\exp(2\gamma_{\psi_j}),0,2), \quad 1\leq j \leq {n}.
\end{align*}
It is straightforward to derive the required gradients (see Appendix \ref{sec:gradient} for the analytical expressions).
BBVI is applied using Rao-Blackwellization and control variates for variance reduction. The algorithm is as follows,
\begin{itemize}
\item Step 0: Set $\eta = 0.5$; initialise $\boldsymbol{\alpha}^0 = 0$, $\boldsymbol{\gamma}^0 = 0$, with the exception of $\alpha^0_{\vartheta}=4$.
\item Step $k$:
For $k\ge 1$, given $\boldsymbol{\alpha}^{k-1}$ and $\boldsymbol{\gamma}^{k-1}$
execute:
\begin{itemize}
\item Draw $(\vartheta^i, \theta^i, \kappa^i,\psi^i)$, for $1\leq i \leq N$, from $q_{k-1}(\vartheta)$, $q_{k-1}(\theta)$, $q_{k-1}(\kappa,\psi)$.
\item With the samples, use (\ref{deltaelbo}) to evaluate:
\begin{align*}
&\nabla^{k}_{\alpha_{\vartheta}}\widehat{\textrm{ELBO}}(q(\vartheta)),\quad \nabla^{k}_{\gamma_{\vartheta}}\widehat{\textrm{ELBO}}(q(\vartheta)), \\ &\nabla^{k}_{\alpha_{\theta}}\widehat{\textrm{ELBO}}(q(\theta)),\quad \nabla^{k}_{\gamma_{\theta}}\widehat{\textrm{ELBO}}(q(\theta)), \\ &\nabla^{k}_{\alpha_{\kappa_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)), \quad \nabla^{k}_{\gamma_{\kappa_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)),\quad 1\leq j \leq n, \\ &\nabla^{k}_{\alpha_{\psi_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)),\quad
\nabla^{k}_{\gamma_{\psi_j}}\widehat{\textrm{ELBO}}(q(\kappa_j,\psi_j)), \quad 1\leq j \leq n.
\end{align*}
(Here, superscript $k$ at the gradient symbol $\nabla$ specifies the BBVI iteration.)
\item Evaluate $\boldsymbol{\alpha}^{k}$ and $\boldsymbol{\gamma}^{k}$:
\begin{align*}
(\boldsymbol{\alpha},\boldsymbol{\gamma})^{k} &= (\boldsymbol{\alpha},\boldsymbol{\gamma})^{k-1} + \rho_k\nabla^{k}_{(\boldsymbol{\alpha},\boldsymbol{\gamma})}\widehat{\textrm{ELBO}}(q),
\end{align*}
where $q = (q(\vartheta), q(\theta), q(\kappa_1, \psi_1), \ldots, q(\kappa_n, \psi_n))$. For the learning rate, we employed the AdaGrad algorithm \citep{duchi2011adaptive} and set $\rho_k = \eta \, \textrm{diag}(G_k)^{-1/2}$, where $G_k$ is the sum, over the first $k$ iterations, of the outer products of the gradient, and $\textrm{diag}(\cdot)$ maps a matrix to its diagonal version.
\end{itemize}
\item Iterate until convergence.
\end{itemize}
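The AdaGrad rule used in the last step can be sketched with the diagonal accumulator; the one-dimensional quadratic objective used for the check below is an illustrative stand-in for the ELBO, not the model above.

```python
import numpy as np

def adagrad_step(lam, grad, G_diag, eta=0.5, eps=1e-8):
    """One AdaGrad ascent step: rho_k = eta * diag(G_k)^{-1/2}, where G_diag
    accumulates the diagonal of the sum of outer products of past gradients."""
    G_diag = G_diag + grad ** 2
    lam = lam + eta * grad / (np.sqrt(G_diag) + eps)
    return lam, G_diag

# Toy check: ascend ELBO(lam) = -lam^2, whose gradient is -2*lam,
# so the iterates should approach the maximiser lam = 0.
lam = np.array([3.0])
G = np.zeros(1)
for _ in range(500):
    lam, G = adagrad_step(lam, -2.0 * lam, G)
```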
\subsubsection*{Results}
The three algorithms have different stopping criteria. We run each for $100$secs for parity. A summary of results is given in Table \ref{resulttable}. Model fitting plots and algorithmic traceplots are shown in Figure \ref{resultplot}.
\begin{table}[!h]
\begin{tabular}{|l|l|l|l|}
\hline
& MCMC & MC-CAVI & BBVI \\ \hline
Iterations & \begin{tabular}[c]{@{}l@{}}No. Iterations = 2,500\\ Burn-in = 1,250\end{tabular} & \begin{tabular}[c]{@{}l@{}}No. Iterations = 300\\ $N = 10$\\ Burn-in = 150\end{tabular} & \begin{tabular}[c]{@{}l@{}}No. Iterations = 100\\ $N = 10$\end{tabular} \\ \hline
$\vartheta$ & 5.927 (0.117) & 5.951 (0.009) & 6.083 (0.476) \\ \hline
$\theta$ & 1.248 (0.272) & 8.880 (0.515) & 0.442 (0.172) \\ \hline
\end{tabular}
\caption{Summary of results: the last two rows show the average for the corresponding parameter (row) and algorithm (column) after burn-in, with the corresponding standard deviation in brackets. All algorithms were executed for $10^2$ secs. The first row gives some algorithmic details.}
\label{resulttable}
\end{table}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.35]{2019mcmc}
\includegraphics[scale=0.35]{2019mcmcbeta}
\includegraphics[scale=0.35]{2019mcmctau}
\includegraphics[scale=0.35]{2019cavi}
\includegraphics[scale=0.35]{2019cavimu}
\includegraphics[scale=0.35]{2019cavitau}
\includegraphics[scale=0.35]{newbbvi}
\includegraphics[scale=0.35]{newbbvi_mu}
\includegraphics[scale=0.35]{newbbvi_tau}
\end{center}
\vspace{-0.3cm}
\caption{Model fit (left panel), traceplots of $\vartheta$ (middle panel) and traceplots of $\theta$ (right panel) for the three algorithms: MCMC (first row), MC-CAVI (second row) and BBVI (third row) -- for Example Model 2 -- when allowed $100$ secs of execution. In the plots showing model fit, the green line represents the data without noise and the orange line the data with noise; the blue line shows the corresponding posterior means and the grey area the pointwise 95\% posterior credible intervals.}
\label{resultplot}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.5]{density_comp}
\end{center}
\vspace{-0.3cm}
\caption{Density plots for the true posterior of $\vartheta$ (blue line) -- obtained via an expensive MCMC -- and the corresponding approximate distribution provided by MC-CAVI.}
\label{resultdensity}
\end{figure}
\noindent Table \ref{resulttable} indicates that all three algorithms approximate the posterior mean of $\vartheta$ effectively; the estimate from MC-CAVI has smaller variability than that of BBVI, while the opposite holds for the variability in the estimates of $\theta$.
Figure \ref{resultplot} shows that the traceplots for BBVI are unstable, a sign that the gradient estimates have high variability. In contrast, MCMC and MC-CAVI perform rather well. Figure \ref{resultdensity} shows the `true' posterior density of $\vartheta$ (obtained from an expensive MCMC with 10,000 iterations, 5,000 of which were burn-in) and the corresponding approximation obtained via MC-CAVI. In this case, the variational approximation is quite accurate in its estimate of the mean but underestimates the posterior variance (as is typical for a VI method). We mention that for BBVI we also tried normal laws as variational distributions -- as this is the standard choice in the literature -- however, in this case, the performance of BBVI deteriorated even further.
\section{Application to $^1$H NMR Spectroscopy}
\label{sec:nmr}
We demonstrate the utility of MC-CAVI in a statistical model
proposed in the field of metabolomics by \cite{batmanmodel}, and used in NMR (Nuclear Magnetic Resonance) data analysis.
Proton nuclear magnetic resonance ($^1$H NMR) is an extensively used technique for measuring the abundance (concentration) of a number of metabolites in complex biofluids.
NMR spectra are widely used in metabolomics to obtain profiles of the metabolites present in biofluids.
An NMR spectrum can contain information on a few hundred compounds.
Resonance peaks generated by each compound must be identified in the spectrum after
deconvolution. The spectral signature of a compound is given by a combination of peaks that are not necessarily close to each other. Such compounds can generate hundreds of resonance peaks, many of which overlap. This causes difficulty in peak identification and deconvolution. The analysis of an NMR spectrum is further complicated by fluctuations in peak positions among spectra, induced by uncontrollable variations in experimental conditions and the chemical properties of the biological samples, e.g.~the pH.
Nevertheless, extensive information on the patterns of spectral resonance generated by human metabolites is now available in online databases. By incorporating this information into a Bayesian model, we can deconvolve resonance peaks from a spectrum and obtain explicit concentration estimates for the corresponding metabolites. Spectral resonances that cannot be deconvolved in this way may also be of scientific interest; these are modelled in \cite{batmanmodel} using wavelet basis functions.
More specifically,
an NMR spectrum is a collection of
peaks convoluted with various horizontal translations and vertical scalings,
with each peak having the form of a Lorentzian curve. A number of metabolites of interest
have known NMR spectral shapes, with the heights of the peaks or their widths in a particular experiment providing information about the abundance of each metabolite.
The zero-centred, standardized Lorentzian function is defined as:
\begin{equation}
\ell_\gamma(x) = \frac{2}{\pi}\frac{\gamma}{4x^2+\gamma^2},
\end{equation}
where $\gamma$ is the peak width at half height.
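As a quick numerical check of this definition (a standalone sketch, independent of any of the implementations discussed here), the curve integrates to one and $\gamma$ is indeed the full width at half height:

```python
import numpy as np

def lorentzian(x, gamma):
    """Zero-centred, standardized Lorentzian: (2/pi) * gamma / (4x^2 + gamma^2)."""
    return (2.0 / np.pi) * gamma / (4.0 * x ** 2 + gamma ** 2)

gamma = 0.01
x = np.linspace(-5.0, 5.0, 2_000_001)
area = lorentzian(x, gamma).sum() * (x[1] - x[0])  # Riemann sum, ~1 (unit area)
peak = lorentzian(0.0, gamma)                      # 2 / (pi * gamma)
half = lorentzian(gamma / 2.0, gamma)              # half the peak height
```

The three quantities confirm that the normalisation constant and the half-height width are consistent with the formula above.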
An example of a $^1$H NMR spectrum is shown in Figure \ref{nmrexample}. The $x$-axis of the spectrum measures chemical shift in parts per million (ppm) and corresponds to the resonance frequency; the $y$-axis measures relative resonance intensity.
Each spectrum peak corresponds to magnetic nuclei resonating at a particular frequency in the biological mixture, with every metabolite having a characteristic molecular $^1$H NMR `signature'; the result is a convolution of Lorentzian peaks that appear in specific positions in $^1$H NMR spectra. Each metabolite in the experiment usually gives rise to more than one `multiplet' in the spectrum --
i.e.~a linear combination of Lorentzian functions, symmetric around a central point.
The spectral signatures (i.e.~multiplet patterns) of many metabolites are stored in public databases.
The aims of the analysis are: (i) to deconvolve resonance peaks in the spectrum and assign them to particular metabolites; (ii) to estimate the abundances of the catalogued metabolites; (iii) to model the component of a spectrum that cannot be assigned to known compounds. \cite{batmanmodel} propose a two-component joint model for a spectrum, in which the metabolites whose peaks we wish to assign explicitly are modelled parametrically, using information from the online databases, while the unassigned spectrum is modelled using wavelets.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.7]{nmr_example.png}
\end{center}
\vspace{-0.7cm}
\caption{An example of a $^1$H NMR spectrum.}
\label{nmrexample}
\end{figure}
\subsection{The Model}
We now describe the model of \cite{batmanmodel}. The available data are represented by the pair $(\mathbf{x},\mathbf{y})$, where $\mathbf{x}$ is a vector of $n$ ordered points (of the order $10^3-10^4$) on the chemical shift axis -- often regularly spaced -- and $\mathbf{y}$ is the vector of the corresponding resonance intensity measurements (scaled, so that they sum up to $1$).
The conditional law of $\mathbf{y}\,|\,\mathbf{x}$ is modelled under the assumption that the $y_i\,|\,\mathbf{x}$ are independent normal variables with
\begin{equation}
\mathbb{E}\,[\,y_i\,|\,\mathbf{x}\,] = \phi(x_i) + \xi(x_i), \quad 1\leq i \leq n.
\end{equation}
Here, the $\phi$ component of the model represents signatures that we wish to assign to target
metabolites.
The $\xi$ component models the signatures of the remaining metabolites present in the spectrum but not explicitly modelled. We refer to the latter as the residual spectrum, and we highlight that it is important to account for it, as it can reveal information not captured by $\phi(\cdot)$. The function $\phi$ is constructed parametrically, using results from the physical theory of NMR and information available in online databases or from expert knowledge, while $\xi$ is modelled semiparametrically with wavelets generated by a mother wavelet (symlet-6) that resembles the Lorentzian curve.
More analytically,
\begin{equation*}
\phi(x_i) = \sum_{m=1}^{M}t_m(x_i)\beta_{m}
\end{equation*}
where $M$ is the number of metabolites modelled explicitly and $\beta = (\beta_{1},\ldots,\beta_{M})^{\top}$ is a parameter vector corresponding to metabolite concentrations.
Function $t_m(\cdot)$ represents a continuous template function that specifies the NMR signature of metabolite $m$ and it is defined as,
\begin{equation}
t_m(\delta) = \sum_u \sum^{V_{m,u}}_{v=1}z_{m,u}\,\omega_{m,u,v}\,\ell_{\gamma}(\delta-\delta^*_{m,u}-c_{m,u,v}),\quad \delta>0,
\end{equation}
where $u$ is an index running over all multiplets assigned to metabolite $m$,
$v$ is an index representing a peak in a multiplet and $V_{m,u}$ is the number of peaks in multiplet $u$ of metabolite $m$. In addition,
$\delta^*_{m,u}$ specifies the theoretical position on the chemical shift axis of the centre of mass of the $u$th multiplet of the $m$th metabolite; $z_{m,u}$ is a positive quantity, usually equal to the number of protons in a molecule of
metabolite $m$ that contribute to the resonance signal of multiplet $u$; $\omega_{m,u,v}$ is the weight determining the relative heights of the peaks of the multiplet; $c_{m,u,v}$ is the translation determining the horizontal offset of the peaks from the centre of mass of the multiplet. Both $\omega_{m,u,v}$ and $c_{m,u,v}$ can be computed from empirical estimates of the so-called $J$-coupling constants; see \cite{hore2015nuclear} for more details. The $z_{m,u}$'s and the $J$-coupling constants can be found in online databases or obtained from expert knowledge.
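A minimal sketch of evaluating such a template is given below, for a hypothetical doublet (two equal peaks split symmetrically about the multiplet's centre of mass); all numerical values are illustrative and not taken from any database.

```python
import numpy as np

def lorentzian(x, gamma):
    # Zero-centred, standardized Lorentzian with width gamma at half height.
    return (2.0 / np.pi) * gamma / (4.0 * x ** 2 + gamma ** 2)

def template(delta, gamma, multiplets):
    """t_m(delta) = sum_u sum_v z_u * w_{u,v} * l_gamma(delta - dstar_u - c_{u,v}).

    `multiplets` lists, per multiplet u: (z_u, dstar_u, weights, offsets)."""
    out = np.zeros_like(delta, dtype=float)
    for z, dstar, weights, offsets in multiplets:
        for w, c in zip(weights, offsets):
            out += z * w * lorentzian(delta - dstar - c, gamma)
    return out

# Hypothetical doublet: z = 3 protons, centre of mass at 1.33 ppm,
# two equal peaks (weights 0.5) offset by +/- 0.005 ppm.
delta = np.linspace(1.2, 1.45, 5001)
y = template(delta, gamma=0.002,
             multiplets=[(3.0, 1.33, [0.5, 0.5], [-0.005, 0.005])])
```

Since each Lorentzian has unit area, the template integrates (approximately) to $z_u$ times the sum of the weights, here $3$.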
The residual spectrum is modelled through wavelets,
\begin{equation*}
\xi(x_i) = \sum_{j,k}\varphi_{j,k}(x_i)\vartheta_{j,k}
\end{equation*}
where
$\varphi_{j,k}(\cdot)$
denote the orthogonal wavelet functions generated by the symlet-6 mother wavelet, see \cite{batmanmodel} for full details; here,
$\vartheta = (\vartheta_{1,1},\ldots,\vartheta_{j,k},\ldots)^{\top}$ is the vector of wavelet coefficients. Indices $j,k$ correspond to the $k$th wavelet in the $j$th scaling level.
Finally, overall, the model for an NMR spectrum can be re-written in matrix form as:
\begin{equation}
\mathcal{W}(\mathbf{y} -\mathbf{T} \beta) = \mathbf{I}_{n_1} \vartheta + \boldsymbol{\epsilon}, \quad \boldsymbol{\epsilon} \sim \mathrm{N}(0,\mathbf{I}_{n_1}/\theta),
\label{nmrlikelihood}
\end{equation}
where $\mathcal{W}\in \mathbb{R}^{n\times {n_1}}$ is the inverse wavelet transform,
$M$ is the total number of known metabolites,
$\mathbf{T}$ is an $n \times M$ matrix with its $(i,m)$th entry equal to $t_m(x_i)$
and $\theta$ is a scalar precision parameter.
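To make the matrix form concrete, here is a toy generative sketch of (\ref{nmrlikelihood}) followed by a naive least-squares recovery of $\beta$. For illustration only, the wavelet transform $\mathcal{W}$ is taken to be the identity, and the templates and concentrations are made up (they are not those of the NMR application).

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 200, 3                      # spectral points, metabolites (toy sizes)

# Hypothetical template matrix T (n x M): one Lorentzian-like peak per
# metabolite, and made-up true concentrations beta.
x = np.linspace(0.0, 1.0, n)
T = np.stack([0.01 / ((x - c) ** 2 + 0.01) for c in (0.25, 0.5, 0.75)], axis=1)
beta = np.array([1.0, 0.5, 2.0])

# Take W = identity for illustration, so the residual spectrum equals the
# wavelet-coefficient vector; `prec` plays the role of the precision theta.
vartheta = 0.05 * rng.standard_normal(n)
prec = 1e4
y = T @ beta + vartheta + rng.standard_normal(n) / np.sqrt(prec)

# Naive recovery of beta by least squares, ignoring the residual component.
beta_hat, *_ = np.linalg.lstsq(T, y, rcond=None)
```

Even this naive fit recovers $\beta$ reasonably well here; the point of the full Bayesian model is to handle the residual component, the positivity constraints and the identifiability issues discussed next.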
\subsection{Prior Specification}
\label{sec:priordist}
\cite{batmanmodel} assign the following prior distributions to the parameters of the Bayesian model.
For the concentration parameters, the prior is
\begin{equation*}
\beta_m \sim \mathrm{TN}(e_m,1/s_m,0,\infty),
\end{equation*}
where $e_m = 0$ and $s_m = 10^{-3}$, for all $m=1,\ldots, M$. Moreover,
\begin{align*}
\gamma &\sim \mathrm{LN}(0,1); \\
\delta^*_{m,u} &\sim \mathrm{TN}(\hat{\delta}^*_{m,u},10^{-4},\hat{\delta}^*_{m,u}-0.03,\hat{\delta}^*_{m,u}+0.03),
\end{align*}
where LN denotes a log-normal distribution and $\hat{\delta}^*_{m,u}$ is the estimate for $\delta^*_{m,u}$ obtained from the online database HMDB \citep[see][]{hmdb1, hmdb2, hmdb3, hmdb4}. In the regions of the spectrum where both parametric (i.e.~$\phi$) and semiparametric (i.e.~$\xi$) components need to be fitted, the likelihood is unidentifiable. To tackle this problem, \cite{batmanmodel} opt for shrinkage priors for the wavelet coefficients and include a vector of hyperparameters $\psi$ -- each component $\psi_{j,k}$ of which corresponds to a wavelet coefficient -- to penalize the semiparametric component. To reflect prior knowledge that NMR spectra are usually restricted to the half plane above the chemical shift axis, \cite{batmanmodel} introduce a vector of hyperparameters $\tau$, each component of which, $\tau_i$, corresponds to a spectral data point, to further penalize spectral reconstructions in which some components of $\mathcal{W}^{-1}\boldsymbol{\vartheta}$ are less than a small negative threshold. In conclusion, \cite{batmanmodel} specify the following joint prior density for $(\vartheta,
\psi,\tau,\theta)$,
\begin{align*}
p(\vartheta, \psi, \tau,\theta) &\propto \theta^{a+\tfrac{n+n_1}{2}-1} \Big\{\prod_{j,k}\psi^{c_j-0.5}_{j,k}\exp\big(-\tfrac{\psi_{j,k} d_j}{2}\big)\Big\}\\
&\qquad \qquad \times \exp\Big\{-\tfrac{\theta}{2}\Big(e+\sum_{j,k}\psi_{j,k}\,\vartheta^2_{j,k}+ r\sum^{n}_{i=1}(\tau_i-h )^2\Big)\Big\} \\
&\qquad \qquad \qquad \qquad \times \mathbbm{1}\,\big\{\,\mathcal{W}^{-1} \vartheta\geq \tau,\,\, h\mathbf{1}_{n}\geq \tau\,\big\},
\end{align*}
where $ \psi$ introduces local shrinkage for the marginal prior of $\vartheta$ and $\tau$ is a vector of $n$ truncation limits, which bounds $\mathcal{W}^{-1} \vartheta$ from below. The truncation imposes an identifiability constraint: without it, when the signature template does not match the shape of the spectral data, the mismatch would be compensated by negative wavelet coefficients, so that a seemingly ideal overall model fit is achieved even though the signature template is erroneously assigned and the concentrations of metabolites are overestimated. Finally, we set $c_j = 0.05$, $d_j = 10^{-8}$, $h = -0.002$, $r = 10^5$, $a = 10^{-9}$, $e = 10^{-6}$; see \cite{batmanmodel} for more details.
\subsection{Results}
BATMAN is an $\mathsf{R}$ package for estimating metabolite concentrations from NMR spectral data using a specifically designed MCMC algorithm \citep{batman} to perform posterior inference from the Bayesian model described above.
We implement a MC-CAVI version of BATMAN
and compare its performance with the original MCMC algorithm.
Details of the implementation of MC-CAVI are given in Appendix \ref{sec:BATMAN}.
Due to the complexity of the model and the data size, it is challenging for both algorithms to reach convergence. We ran the two methods, MC-CAVI and MCMC, for approximately the same amount of time, to analyse a full spectrum with 1,530 data points, modelling 10 metabolites parametrically. We fixed the number of iterations for MC-CAVI to 1,000, with a burn-in of 500 iterations;
we set the Monte Carlo size to $N=10$ for all iterations.
The execution time for the MC-CAVI algorithm was $2,048$ secs.
For the MCMC algorithm, we fixed the number of iterations to 2,000, with a burn-in of 1,000 iterations; its execution time was $2,098$ secs.
In $^1$H NMR analysis, $\beta$ (the concentration of metabolites in the biofluid) and $\delta^*_{m,u}$ (the peak positions) are the most important parameters from a scientific point of view. Traceplots of four examples ($\beta_3$, $\beta_4$, $\beta_9$ and $\delta_{4,1}$) are shown in Figure \ref{paracomparison}. These four parameters are chosen due to the different performance of the two methods, which are closely examined in Figure \ref{detailcomparison}. For $\beta_3$ and $\beta_9$,
the traceplots are still far from convergence for MCMC, while they move in the correct direction (see Figure \ref{paracomparison}) under MC-CAVI. For $\beta_4$ and $\delta_{4,1}$, both parameters reach a stable regime very quickly under MC-CAVI, whereas the same parameters only make local moves under MCMC. For the remaining parameters in the model, both algorithms give similar results.
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.4]{newinclude1.png}
\includegraphics[scale=0.4]{newinclude2.png}
\includegraphics[scale=0.4]{newinclude3.png}
\includegraphics[scale=0.4]{newinclude4.png}
\end{center}
\vspace{-0.3cm}
\caption{Traceplots of parameter value against number of iterations, after the burn-in period, for $\beta_3$ (upper left panel), $\beta_4$ (upper right panel), $\beta_9$ (lower left panel) and $\delta_{4,1}$ (lower right panel). The $y$-axis corresponds to the obtained parameter values (the mean of the distribution $q$ for MC-CAVI, and traceplots for MCMC). The red line shows the results from MC-CAVI and the blue line those from MCMC. Both algorithms were executed for (approximately) the same amount of time.}
\label{paracomparison}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.3]{cavifitting.png}
\includegraphics[scale=0.3]{mcmcfitting.png}
\end{center}
\caption{Comparison of MC-CAVI and MCMC in terms of spectral fit. The upper panel shows the spectral fit from the MC-CAVI algorithm; the lower panel shows the spectral fit from the MCMC algorithm. The $x$-axis corresponds to chemical shift measured in ppm; the $y$-axis corresponds to standard density.}
\label{speccomparison}
\end{figure}
Figure \ref{speccomparison} shows the fit obtained from both algorithms, while Table \ref{betatable} reports posterior estimates for $\beta$.
From Figure \ref{speccomparison}, it is evident that the overall performance of MC-CAVI is similar to that of MCMC: in most areas, the metabolite fit (orange line) captures the shape of the original spectrum quite well. Table \ref{betatable} shows that, in line with standard VI behaviour, MC-CAVI underestimates the variance of the posterior density. We examine in more detail the posterior distributions of the $\beta$ coefficients for which the posterior means obtained with the two algorithms differ by more than 1.0e-4. Figure \ref{detailcomparison} shows that, around ppm values of 2.14 and 3.78 -- spectral regions where many peaks overlap, making peak deconvolution challenging -- MC-CAVI manages to capture the shapes of the peaks while MCMC does not. This is probably due to the faster convergence of MC-CAVI. Figure \ref{detailcomparison} also shows that, in areas with no overlapping peaks (e.g.~around ppm values of 2.66 and 7.53), MC-CAVI and MCMC produce similar results.
\begin{table}[!h]
\centering
\begin{center}
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
& & $\beta_1$ & $\beta_2$ & $\boldsymbol{\beta_3}$ & $\boldsymbol{\beta_4}$ & $\beta_5$ \\ \hline
\multirow{2}{*}{MC-CAVI} & mean & 6.0e-6 & 7.8e-5 & 1.4e-3 & 4.2e-4 & 2.6e-5 \\ \cline{2-7}
& sd & 1.8e-11 & 4.0e-11 & 1.3e-11 & 1.0e-11 & 6.2e-11 \\ \hline
\multirow{2}{*}{MCMC} & mean & 1.2e-5 & 4.0e-5 & 1.5e-3 & 2.1e-5 & 3.4e-5 \\ \cline{2-7}
& sd & 1.1e-10 & 5.0e-10 & 1.6e-9 & 6.4e-10 & 3.9e-10 \\ \hline
& & $\beta_6$ & $\beta_7$ & $\beta_8$ & $\boldsymbol{\beta_9}$ & $\beta_{10}$ \\ \hline
\multirow{2}{*}{MC-CAVI} & mean & 6.1e-4 & 3.0e-5 & 1.9e-4 & 2.7e-3 & 1.0e-3 \\ \cline{2-7}
& sd & 1.5e-11 & 1.6e-11 & 3.9e-11 & 1.6e-11 & 3.6e-11 \\ \hline
\multirow{2}{*}{MCMC} & mean & 6.0e-4 & 3.0e-5 & 1.8e-4 & 2.5e-3 & 1.0e-3 \\ \cline{2-7}
& sd & 2.3e-10 & 7.5e-11 & 3.7e-10 & 5.1e-9 & 7.9e-10 \\ \hline
\end{tabular}
\caption{Estimation of $\beta$ obtained with MC-CAVI and MCMC. (The coefficients of $\beta$ for which the posterior means obtained with the two algorithms differ by more than 1.0e-4 are shown in bold.)}
\label{betatable}
\end{center}
\end{table}
\begin{figure}[!h]
\begin{center}
\includegraphics[scale=0.5]{detail2.png}
\includegraphics[scale=0.5]{detail4.png}
\includegraphics[scale=0.5]{detail1.png}
\includegraphics[scale=0.5]{detail3.png}
\end{center}
\caption{Comparison of the metabolite fits obtained with MC-CAVI and MCMC. The $x$-axis corresponds to chemical shift measured in ppm; the $y$-axis corresponds to standard density. The upper left panel shows the area around ppm value 2.14 ($\beta_4$ and $\beta_9$); the upper right panel the area around ppm 2.66 ($\beta_6$); the lower left panel the area around ppm value 3.78 ($\beta_3$ and $\beta_9$); the lower right panel the area around ppm 7.53 ($\beta_{10}$).}
\label{detailcomparison}
\end{figure}
Comparing MC-CAVI and MCMC's performance in the case of the NMR model, we can draw the following conclusions:
\begin{itemize}
\item In NMR analysis, if many peaks overlap (see Figure \ref{detailcomparison}), MC-CAVI can provide better results than MCMC.
\item In high-dimensional models, where the number of parameters grows with the size of data, MC-CAVI can converge faster than MCMC.
\item The choice of $N$ is important for optimising the performance of MC-CAVI. Building on results derived for other Monte Carlo methods (e.g.~MCEM), it is reasonable to choose a relatively small Monte Carlo size at the beginning, when the algorithm can be far from regions of parameter space of high posterior probability, and to increase it gradually, with the maximum size used once the algorithm has reached a mode.
\end{itemize}
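The last point can be realised with a simple capped, increasing schedule for the Monte Carlo size. The sketch below is illustrative (the experiments above used a fixed $N$), and the function and parameter names are ours.

```python
def mc_size_schedule(k, n_min=5, n_max=100, ramp=200):
    """Monte Carlo size N at MC-CAVI iteration k: start small while the
    algorithm may be far from high-posterior regions, grow linearly over
    `ramp` iterations, then stay at the cap n_max."""
    return min(n_max, n_min + (n_max - n_min) * k // ramp)
```

For instance, with the defaults the schedule starts at $N=5$, reaches the cap $N=100$ at iteration $200$, and stays there afterwards.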
\section{Discussion}
\label{sec:discussion}
As a combination of VI and MCMC, MC-CAVI provides a powerful inferential tool, particularly in high-dimensional settings where full posterior inference is computationally demanding and the application of optimization and noisy-gradient-based approaches, e.g.~BBVI, is hindered by the presence of hard constraints. The MCMC step of MC-CAVI is necessary to deal with parameters for which VI approximating distributions are difficult or impossible to derive, for example because no closed-form expression for the normalising constant is available. General Monte Carlo algorithms, such as sequential Monte Carlo and Hamiltonian Monte Carlo, can be incorporated within MC-CAVI. Compared with MCMC, the VI step of MC-CAVI speeds up convergence and provides reliable estimates in a shorter time; moreover, MC-CAVI scales better in high-dimensional settings. As an optimization algorithm, MC-CAVI is easier to monitor for convergence than MCMC. Finally, MC-CAVI offers a flexible alternative to BBVI. The latter algorithm, although very general and suitable for a large range of complex models, depends crucially on the quality of the approximation to the true target provided by the variational distribution, which in high-dimensional settings (in particular with hard constraints) is very difficult to assess.
\section*{Acknowledgments}
We thank two anonymous referees for their comments that greatly improved the content of the paper.
AB acknowledges funding by the Leverhulme Trust Prize.
\section{INTRODUCTION}
Star clusters are considered important tracers for understanding the formation and evolution of their host galaxies \citep{san10}. Star cluster systems have traditionally been separated into two populations--globular clusters and open clusters (GCs and OCs)--based on their ages, masses, metallicities, and positions. However, more recent studies have found that the distinction between GCs and OCs has become increasingly blurred \citep[see][for details]{perina10}.
\citet{gk52} listed photometric colors and magnitudes for star clusters in the Magellanic Clouds and the Fornax dwarf system and divided them into two groups. They found that star clusters in the blue group have central condensation properties similar to those of the red group, which were considered GCs; however, they could not be identified with the Galactic OCs. \citet{hodge61} termed 23 clusters in the Large Magellanic Cloud (LMC)--differing from GCs in their relative youth and from OCs in their richness and shape--as ``young populous clusters'', which were called ``young massive clusters'' (YMCs) or ``blue luminous compact clusters'' (BLCCs) by \citet{Fusi05}. Actually, the blue integrated colors of a cluster may be influenced by several factors, such as poor metallicity (the luminosity of the horizontal branch), young age (the position of the main-sequence turnoff stars), and some exotic stellar populations (e.g., blue stragglers, Wolf-Rayet stars). However, several studies \citep[e.g.,][]{Williams01,Beasley04} have reached similar conclusions that the exceedingly blue colors of BLCCs are a direct consequence of their young ages \citep[see][for details]{Fusi05}.
M31 is the largest galaxy in the Local Group and has a large number of star clusters, including young clusters that have been studied by many authors. \citet{bohlin88,bohlin93} listed 11 objects in M31 classified as blue clusters using UV colors,
most of which have been proved to be young clusters \citep{Fusi05,cald09,perina09,perina10}, except for
B133 and B145, which were shown to be a star and an old GC \citep{cald09}, respectively. \citet{cald09, cald11} derived ages and masses for a large sample of young clusters, and found that these star clusters are less than 2 Gyr old; most of them have ages between $10^8$ and $10^9$ yr and masses ranging from $2.5 \times 10^2~{M_\odot}$ to $1.5 \times 10^5~{M_\odot}$. These authors also stated that the young star clusters in M31 show a range of structures: most have low concentrations typical of OCs in the Milky Way (MW), but a few have high concentrations similar to the MW GCs. \citet{vanse09} carried out a survey of compact star clusters in the southwest part of M31, and suggested a rich intermediate-mass star cluster population in M31, with a typical age range of 30 Myr $-$ 3 Gyr, peaking at $\sim$ 70 Myr. In order to ascertain the properties of the BLCCs, \citet{perina09,perina10} performed an imaging survey of 20 BLCCs lying in the disk of M31 using the Wide Field and Planetary Camera-2 (WFPC2) on the {\it Hubble Space Telescope} ({\it HST}). Another key aim of this {\it HST} survey was to determine the fraction of contamination of BLCCs by asterisms, since \citet{Cohen05} suggested that a large fraction of the putative BLCCs may in fact be just asterisms. \citet{Cohen05} presented $K'$ images of six very young or young star clusters in M31, observed with the Keck laser guide star adaptive optics system, and indicated that the four youngest of these six objects are asterisms. However, \citet{cald09} concluded, based on spectra, that these four objects are true clusters. The {\it HST} images \citep{perina09,perina10} showed that nineteen of the twenty surveyed candidates are real star clusters, and one (NB67) is a bright star.
\citet{barmby09} measured surface brightness profiles for 23 bright young star clusters using images from the WFPC2, including the sample clusters of \citet{perina09,perina10}, and derived the structural properties by fitting the surface brightness profiles to several structural models. The authors stated that the sample young clusters are expected to dissolve within a few Gyr and will not survive to become old GCs, and that young star clusters in M31 and MCs follow the same fundamental plane relations as old GCs of M31, MCs, the MW and NGC 5128, regardless of their host galaxy environments. \citet{johnson12} presented a M31 stellar cluster catalog utilizing the Panchromatic Hubble Andromeda Treasury survey data, which will cover $\sim1/3$ of M31 disk with multiple filters and allow the identification of thousands of star clusters.
The large population of young star clusters reflects a high efficiency of cluster formation, possibly triggered by a recent interaction event between M31 and a satellite galaxy \citep{Gordon06, block06}, suggesting that young star clusters should be associated with the star-forming (SF) regions of M31. \citet{fan10} found that young clusters ($<$ 2 Gyr) are spatially coincident with M31's disk, including the 10 kpc ring and the outer ring \citep{Gordon06}. Although these authors also found young star clusters in the halo of M31, all of the clusters outside the optical disk of M31 are old globular clusters \citep[see][for details]{perina11}. \citet{kang12} stated that the kinematics of most young star clusters are consistent with a thin, rotating disk component \citep[see also][]{rey07}. The distribution of young star clusters has a distinct peak around $10-12$ kpc from the center of the M31 disk, and some young star clusters show a concentration around the 10 kpc ring splitting regions near M32, most of them with systematically younger ages ($< 100$ Myr). \citet{kang12} also stated that the young star clusters show a spatial distribution similar to OB stars, UV SF regions, and dust, all of which are important tracers of disk structures.
Several criteria have been developed for selecting young clusters from integrated spectra and colors. \citet{Fusi05} comprehensively studied the properties of 67 very blue and likely YMCs in M31, selected according to their color $[(B-V)_0\leq 0.45]$ and/or the strength of the $H\beta$ spectral index ($\rm H\beta \geq 3.5$ \AA). \citet{Peacock10} presented a catalog of M31 GCs based on images from the SDSS and the Wide Field CAMera on the United Kingdom Infrared Telescope, and selected a population of young clusters with the definition $[(g-r)_0<0.3]$. \citet{kang12} published a catalog of M31 young clusters ($\leq 1$ Gyr) and supported the selection criteria $[(NUV-r)_0\leq 2.5]$ and $[(FUV-r)_0\leq 3.0]$ \citep{bohlin93,rey07}. These criteria may play an important role in distinguishing young from old clusters when ages cannot be derived accurately.
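These published cuts can be combined into a simple screening rule. The sketch below only illustrates the criteria quoted above (the function and argument names are ours); it is not a calibrated classifier.

```python
def is_young_candidate(B_V0=None, Hbeta=None, g_r0=None,
                       NUV_r0=None, FUV_r0=None):
    """Flag a cluster as a young-cluster candidate if it passes any of the
    literature cuts quoted in the text; measurements that are missing
    (None) are skipped."""
    cuts = [
        (B_V0, lambda v: v <= 0.45),   # (B-V)_0, Fusi Pecci et al. (2005)
        (Hbeta, lambda v: v >= 3.5),   # H-beta index in Angstrom, same work
        (g_r0, lambda v: v < 0.3),     # (g-r)_0, Peacock et al. (2010)
        (NUV_r0, lambda v: v <= 2.5),  # (NUV-r)_0, Kang et al. (2012)
        (FUV_r0, lambda v: v <= 3.0),  # (FUV-r)_0, Kang et al. (2012)
    ]
    return any(test(v) for v, test in cuts if v is not None)
```

For example, a cluster with $(B-V)_0 = 0.3$ would be flagged as a young candidate, while one with $(B-V)_0 = 0.8$ and $(g-r)_0 = 0.6$ would not.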
The formation and disruption of young star clusters represent a latter-day example of the hierarchical formation of galaxies \citep{fall04}. Motivated by this, we describe some basic properties of young star clusters in M31, such as their positions, the distributions of their ages and masses, and the correlations of ages and masses with structural parameters, which may provide important information about the processes involved in their formation and disruption.
In this paper, we will provide photometry of a set of young star clusters in M31 using images obtained with the Beijing--Arizona--Taiwan--Connecticut (BATC) Multicolor Sky Survey Telescope. By comparing the observed SEDs with the {\sc galev} simple stellar population (SSP) models, we derive their ages and masses. This paper is organized as follows. In Section 2 we present the BATC observations of the sample clusters, the relevant data-processing steps, and the {\sl GALEX} (FUV and NUV), optical broad-band, SDSS $ugriz$ and 2MASS NIR data that are subsequently used in our analysis. In Section 3 we derive ages and masses of the sample clusters. A discussion on the sample young clusters ($<$ 2 Gyr) will be given in Section 4.
Finally, we will summarize our results in Section 5.
\section{SAMPLE OF STAR CLUSTERS, OBSERVATIONS, AND DATA REDUCTION}
\subsection{Sample of Star Clusters}
The sample of star clusters in this paper is selected from \citet{cald09,cald11}, who presented a series of studies of M31 young and old clusters, respectively. We selected the 178 young clusters given in \citet{cald09,cald11}; fortunately, all of these young clusters have been observed with the 15 intermediate-band filters of the BATC photometric system. However, there are 42 clusters for which we cannot obtain accurate photometric measurements, for the following reasons: (a) some clusters have one or more very bright nearby objects; (b) some clusters are very close to other objects; (c) some clusters are very faint and have a low signal-to-noise ratio (SNR); (d) some clusters are superimposed on a bright background; (e) some clusters are superimposed on a strongly variable background.
In addition, there are several remarkable clusters with ``adhered'' \citep{vanse09} objects in our images, such as M088 and its neighbor M089, and G099 and C037-G099x \citep[see also][]{narbutis08}.
In a previous paper, \citet{ma11} presented the SEDs in the 15 intermediate-band filters of the BATC photometric system for the YMC VDB0-B195D and determined its age and mass by comparing its SEDs with theoretical evolutionary population synthesis models. Thus, here we analyze the multicolor photometric properties of the remaining 135 clusters.
\subsection{Archival Images of the BATC Sky Survey for M31 Field}
The M31 field is part of a galaxy calibration program of BATC Multicolor Sky Survey. The BATC program uses the 60/90 cm Schmidt Telescope at the Xinglong Station of the National Astronomical Observatories, Chinese Academy of Sciences (NAOC). This system includes 15 intermediate-band filters,
covering a wavelength range from 3000 to 10000 \AA~\citep[see][for details]{fan96}. Before February 2006, a Ford Aerospace $2{\rm k}\times2{\rm k}$ thick CCD camera was used, which has a pixel size of 15 $\mu\rm{m}$ and a field of view of $58^{\prime} \times 58^{\prime}$, resulting in a resolution of $1''.67~\rm{pixel}^{-1}$. After February 2006, a new $4{\rm k}\times4{\rm k}$ CCD with a pixel size of 12 $\mu$m was installed, with a resolution of $1''.36$ pixel$^{-1}$ \citep{fan09}. We obtained 143.9 hours of imaging of the M31 field, covering about 6 square degrees and consisting of 447 images, through the set of 15 filters in five observing runs from 1995 to 2008, spanning 13 years \citep[see][for details]{fan09,wang10}.
Figure 1 shows the spatial distribution of the sample clusters and the M31 fields observed with the BATC multicolor system, in which each box indicates the $58^{\prime}$ $\times$ $58^{\prime}$ field of view of the thick CCD camera. All the sample star clusters are indicated with black dots, with high-accuracy coordinates from \citet{cald09,cald11}, which are based on images from the Local Group Galaxies Survey \citep{massey06} and the Digitized Sky Survey.
\begin{figure}
\figurenum{1} \resizebox{\hsize}{!}{\rotatebox{0}{\includegraphics{fig1.eps}}}
\caption{Spatial distribution of 135 sample star clusters indicated by black dots. A box represents a field view of $58^{\prime}$ $\times $ $58^{\prime}$. The large ellipse
is the M31 disk/halo boundary as defined by \citet{rac91}; the two small ellipses are the $D_{25}$ isophotes of NGC 205 (northwest) and M32 (southeast).} \label{fig:fig1}
\end{figure}
\subsection{Integrated Photometry of the Sample Star Clusters}
We processed all the CCD images with standard procedures, including bias subtraction and flat-fielding, using PIPELINE I, an automatic data-reduction pipeline developed for the BATC Multicolor Sky Survey. BATC magnitudes are defined and obtained in a similar way as for the spectrophotometric AB magnitude system \citep{ma09b}. In order to improve the image quality, multiple images taken through the same filter were combined into one, on which the magnitudes of the sample star clusters were determined. The absolute flux of the combined images in the central field of M31 (M31-1 in Figure 1) was calibrated using observations of standard stars, while the absolute flux of the combined images of the M31-2 to M31-7 fields was calibrated with secondary standard transformations based on the M31-1 field \citep[see][for details]{fan09}.
We performed standard aperture photometry of our sample objects using the {\sc PHOT} routine in {\sc DAOPHOT} \citep{stet87}. To ensure that we adopted the most appropriate photometric radius to include all the light from each object, we used 9 different aperture sizes (radii of $r_{\rm ap}=2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0$ pixels for the old CCD; for the new CCD, the radii in pixels were chosen to subtend the same angular sizes as on the old CCD) to determine the magnitudes. We also carefully examined the aperture radii on the images visually in order not to include light from extraneous objects. The local sky background was measured in an annulus with an inner radius of $r_{\rm ap}+1.0$ pixel and a width of 4.0 pixels for the old CCD, and with an inner radius of $r_{\rm ap}+1.0$ pixel and a width of 5.0 pixels for the new CCD.
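The aperture-plus-annulus scheme described above can be sketched in a few lines. This is a simplified, illustrative stand-in for DAOPHOT's {\sc PHOT}, not the actual pipeline code; the function name and toy image are ours, and the median sky estimator is one of several reasonable choices:

```python
import numpy as np

def aperture_flux(image, xc, yc, r_ap, sky_rin, sky_width):
    """Sum counts in a circular aperture and subtract the local sky,
    estimated as the median in a surrounding annulus (simplified
    stand-in for DAOPHOT's PHOT logic)."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    r = np.hypot(x - xc, y - yc)

    in_ap = r <= r_ap
    in_sky = (r >= sky_rin) & (r < sky_rin + sky_width)

    sky_per_pix = np.median(image[in_sky])           # local sky level
    return image[in_ap].sum() - sky_per_pix * in_ap.sum()

# toy image: flat sky of 10 counts plus a 1000-count point source
img = np.full((51, 51), 10.0)
img[25, 25] += 1000.0
f = aperture_flux(img, 25, 25, r_ap=4.0, sky_rin=5.0, sky_width=4.0)
```

With a perfectly flat sky, the annulus median recovers the background exactly and the net flux equals the injected 1000 counts.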
There are 40 clusters in this paper in common with our series of previous papers
\citep{jiang03, ma06, ma09b, fan09, wang10}, and photometric data for these 40 clusters were also
derived in those studies. We found that most of the photometric data obtained here are in good agreement with the earlier measurements. We checked the images of the clusters with photometric discrepancies and found that most of them are located near the bulge (B091) or in disk regions with a bright or variable background (e.g., B210, M020, M023). The discrepancies arise from different choices of the photometric aperture and the background annulus.
The SEDs of the sample clusters in M31 are listed in Table 1. Column (1) gives the cluster names. Columns (2) to (16) present the magnitudes in the 15 BATC passbands. The $1\sigma$ magnitude uncertainties from {\sc DAOPHOT} are listed for each object on a second line for the corresponding passbands. For some objects, the magnitudes in some filters could not be obtained because of low SNR. Note that magnitudes with an uncertainty larger than 0.3 mag are not used in the following analysis, although they are listed in Table 1. Column (17) gives the photometric aperture adopted in this paper.
\subsection{{\sl GALEX} UV, Optical Broad-band, SDSS, and 2MASS NIR Photometry}
As our series of papers has pointed out, accurate photometry in numerous passbands enables accurate age determinations for star clusters \citep{degrijs03, anders04}. \citet{kavirag07} showed that UV photometry is powerful for age estimation of young stellar populations, and that the combination of UV photometry with optical observations enables one to break the age-metallicity degeneracy.
\citet{jong96} and \citet{anders04} showed that the age-metallicity degeneracy can be partially broken by adding NIR photometry to optical colors \citep[see][and references therein]{ma09b}. Several previous studies \citep{bh00,gall04,rey07,Peacock10,kang12} have provided magnitudes for star clusters in different passbands, which will be used to estimate ages of sample star clusters in this paper.
The latest Revised Bologna Catalogue of M31 GCs and candidates (hereafter RBC v.4) \citep{gall04,gall06,gall07,gall09} includes {\sl GALEX} (FUV and NUV) fluxes from \citet{rey07}, as well as optical broad-band and 2MASS NIR magnitudes, for 2045 objects.
For $UBVRI$ magnitudes given in RBC v.4, the relevant photometric uncertainties are not listed. Therefore, we adopted the original $UBVRI$ measurements of \citet{bh00} as our preferred reference, including their published photometric errors. For the remaining objects,
the $UBVRI$ magnitudes from RBC v.4 were adopted, with the photometric uncertainties set following \citet{gall04}, i.e., $\pm 0.08$ mag in $U$ and $\pm 0.05$ mag in $BVRI$
\citep{ma09b}.
In RBC v.4, the 2MASS $JHK_{\rm s}$ magnitudes were transformed to the CIT photometric system \citep{gall04}. However, we needed the original 2MASS $JHK_{\rm s}$ data to compare the observed SEDs with the SSP models, so we reversed the transformation using the equations given by \citet{Carpenter01}. RBC v.4 does not list magnitude errors for the $JHK_{\rm s}$ bands, so we estimated them by comparing the photometric data with Figure 2 of \citet{Carpenter01}, which shows the photometric error as a function of magnitude for stars brighter than the observational completeness limits \citep{ma09b, wang10}. In addition, since RBC v.4 provides $JHK_{\rm s}$ magnitudes for only a small number of our sample star clusters, we also adopted $JHK_{\rm s}$ magnitudes from the 2MASS-6X-PSC catalog, which is based on exposures 6 times the normal 7.2 s on most fields of M31 \citep{nantais06}. The 2MASS-6X-PSC catalog gives three kinds of magnitudes: the ``default'' magnitude and the $r=4''$ and $r=10''$ aperture magnitudes. We found that the $r=4''$ aperture magnitudes agree well with the magnitudes in RBC v.4, while the other two kinds show large discrepancies with RBC v.4. However, for the $r=4''$ aperture magnitudes, the dispersions become considerable for $JHK_{\rm s}$ magnitudes fainter than $m = 16$ mag. In this paper, we preferentially adopted the 2MASS $JHK_{\rm s}$ magnitudes in RBC v.4; for the remaining star clusters, we adopted the $r=4''$ aperture magnitudes in the 2MASS-6X-PSC catalog when they are brighter than $m=16$ mag.
\citet{Peacock10} performed SDSS $ugriz$ photometry for 1595 M31 clusters and cluster candidates using the program SExtractor \citep[see][for details]{bertin96} on drift-scan images of M31 obtained with the SDSS 2.5-m telescope; these magnitudes are on the AB photometric system. By transforming between the $ugriz$ and $UBVRI$ bands following \citet{Jester05}, we found magnitude offsets ($\geq 0.5 \rm~mag$) between the SDSS and $UBVRI$ photometry for 35 objects. Because the SDSS $ugriz$ magnitudes provide a more homogeneous set of photometric measurements, we adopted the SDSS magnitudes and discarded the $UBVRI$ magnitudes for these objects in the following analysis. These 35 objects are flagged with an ``a'' in Column 1 of Table 2.
\citet{kang12} presented a catalog of 700 confirmed star clusters in M31, providing the most extensive and up-to-date UV integrated photometry on the AB photometric system based on {\sl GALEX} imaging, superseding the UV photometry published by \citet{rey07}, which was included in RBC v.4. Therefore, we used the FUV and NUV magnitudes from \citet{kang12} as the UV photometry in our SED-fitting process.
We list the {\sl GALEX}, optical broad-band, SDSS $ugriz$, and 2MASS NIR photometry of the sample clusters in Table 2 (Columns 2 to 16), where the photometric errors are listed for each object on a second line for the corresponding passbands. As discussed above, magnitudes with an uncertainty larger than 0.3 mag are not used in the following analysis.
\subsection{Comparison with Previously Published Photometry}
To check our photometry, we transformed the BATC intermediate-band system to the broad-band
system using the relationships between these two systems derived by \citet{zhou03}:
\begin{equation}
B=m_{d}+0.2201(m_{c}-m_{e})+0.1278\pm0.076 \quad \mbox{and}
\end{equation}
\begin{equation}
V=m_{g}+0.3292(m_{f}-m_{h})+0.0476\pm0.027.
\end{equation}
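Equations (1) and (2) can be expressed directly in code; only the coefficients from \citet{zhou03} above are used (the function names are our own, and the quoted $\pm$ terms are the rms scatters of the fits, not additive constants):

```python
def batc_to_B(m_c, m_d, m_e):
    """Broad-band B from the BATC c, d, e magnitudes (Eq. 1);
    the fit has an rms scatter of 0.076 mag."""
    return m_d + 0.2201 * (m_c - m_e) + 0.1278

def batc_to_V(m_f, m_g, m_h):
    """Broad-band V from the BATC f, g, h magnitudes (Eq. 2);
    the fit has an rms scatter of 0.027 mag."""
    return m_g + 0.3292 * (m_f - m_h) + 0.0476
```

For a source with identical BATC magnitudes in all three bands, the color terms vanish and only the zero-point offsets remain.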
$B$-band photometry can be derived from the BATC $c, d$, and $e$ bands, while the $V$-band magnitude can be obtained from the BATC $f, g$, and $h$ bands. Figure 2 shows a comparison of the $B$ and $V$ photometry of our M31 sample objects with previous measurements from \citet{bh00} (circles) and \citet{gall04} (triangles).
There are several objects with larger offsets ($\Delta m > 0.5$ mag), shown with black solid marks in Figure 2 (M045 in the top panel; B200D, SK036A, and SK068A in the bottom panel). The SNRs of M045, B200D, and SK036A are low, and SK068A is superimposed on a bright background, so we cannot derive accurate photometry for these four star clusters.
The mean $B$ and $V$ magnitude differences (this paper minus previous measurements) are $\langle \Delta B \rangle =0.002 \pm 0.213$ mag and $\langle \Delta V \rangle =0.081 \pm 0.209$ mag, i.e., there is no systematic offset between our magnitudes and the previous determinations.
\begin{figure}
\figurenum{2}\resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig2.eps}}}
\caption{Comparison of our newly obtained star cluster photometry with previous measurements by
\citet{bh00} (circles) and \citet{gall04} (triangles). The dashed lines enclose $\pm 0.3$ mag in $B$ and $V$. The black filled circles and triangles indicate the objects with photometry offset $> 0.5$ mag with \citet{bh00} and \citet{gall04}, respectively.}
\label{fig:fig2}
\end{figure}
\subsection{Reddening Values}
We required independently determined reddening values to estimate the ages of the sample clusters robustly and accurately. Here we used \citet{kang12} and \citet{cald09} as our references. \citet{kang12} derived reddening values in three ways: 1) mean reddening values from the available literature \citep{bh00, fan08, cald09, cald11}; 2) for star clusters without reddening values in the literature, median reddening values of star clusters located within an annulus at each 2 kpc radius from the center of M31; 3) for star clusters at distances larger than 22 kpc from the center of M31, the foreground reddening value $E(B-V)=0.13$. Because all of our sample clusters have projected galactocentric radii smaller than 22 kpc, their reddening values were derived with the first two methods.
The reddening value of cluster LGS04131.1\_404612 was not given by \citet{kang12}, so we adopted $E(B-V)=0.20$ from \citet{cald09}; its reddening uncertainty was simply taken to be half the reddening value, i.e., $\sigma_{E(B-V)}=0.10$. We noticed that, for the star cluster B449, the reddening value determined by \citet{kang12} differs greatly from that determined by \citet{cald11} ($\Delta E(B-V) = 1.14$). Treating both age and reddening as free parameters, we determined the reddening value to be $E(B-V)=0.10$, in good agreement with \citet{cald11}. Therefore, in this paper, we adopted $E(B-V)=0.13$ from \citet{cald11} for B449, with an uncertainty of 0.07. Column 4 of Table 4 lists the reddening values adopted for the sample clusters, while Column 5 lists the methods used to derive them ($\rm flag=1$ and $2$ indicate that the reddening values were obtained with the first and second method of Kang et al. 2012, respectively; $\rm flag=3$ indicates that the reddening values are from Caldwell et al. 2009, 2011, only for LGS04131.1\_404612 and B449).
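These $E(B-V)$ values enter the subsequent analysis through the usual dereddening relation, $m_{\rm intr} = m_{\rm obs} - R_\lambda\,E(B-V)$, with $R_\lambda = A_\lambda/E(B-V)$ from the adopted extinction curve. A minimal sketch (the function name is ours; the only coefficient shown, $R_V=3.1$, is the standard value adopted later in the fitting):

```python
def deredden(m_obs, r_lambda, ebv):
    """Intrinsic magnitude from an observed one:
    m_intr = m_obs - R_lambda * E(B-V),
    where R_lambda = A_lambda / E(B-V) for the band in question."""
    return m_obs - r_lambda * ebv

# for the V band, R_V = A_V / E(B-V) = 3.1 by the adopted convention
m_intr = deredden(17.0, 3.1, 0.20)
```

Each band uses its own $R_\lambda$; the example above is the $V$-band case.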
\section{AGE AND MASS DETERMINATION}
\subsection{Stellar Populations and Synthetic Photometry}
In order to determine the ages and masses of the sample star clusters, we compared their SEDs with theoretical stellar population synthesis (SPS) models. We adopted the SSP models of {\sc galev} \citep[e.g.,][]{kurth99,schulz02,anders03} in this paper \citep{ma09b, wang10}, which are based on the Padova isochrones (the most recent versions use the updated Bertelli 1994 isochrones, including the thermally pulsing asymptotic giant branch [TP-AGB] phase) and a \citet{salp55} stellar initial mass function (IMF) with a lower mass limit of $0.10~{M_\odot}$ and an upper mass limit between 50 and 70 $M_\odot$, depending on metallicity. The full set of models spans the wavelength range from 91 {\AA} to 160 $\mu$m and covers ages from $4 \times 10^6$ to $1.6 \times 10^{10}$ yr, with an age resolution of 4 Myr for ages up to 2.35 Gyr and 20 Myr for greater ages. The {\sc galev} SSP models include five initial metallicities: $Z=0.0004, 0.004, 0.008, 0.02$ (solar), and 0.05.
Since our observational data were integrated luminosities through our set of filters, we convolved the {\sc galev} SSP SEDs with the {\sl GALEX} FUV and NUV, broad-band $UBVRI$, SDSS $ugriz$, BATC, and 2MASS $JHK_{\rm s}$ filter response curves to obtain synthetic ultraviolet, optical, and NIR photometry for comparison. The synthetic $i{\rm th}$ filter magnitude in the AB magnitude system can be computed as
\begin{equation}
m_i=-2.5\log\frac{\int_{\nu}F_{\nu}\varphi_{i} (\nu){\rm d}\nu}{\int_{\nu}\varphi_{i}(\nu){\rm
d}\nu}-48.60,
\end{equation}
where $F_{\nu}$ is the theoretical SED and $\varphi_{i}$ is the response curve of the $i{\rm th}$ filter of the corresponding photometric systems. Here, $F_{\nu}$ varies with age and metallicity.
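Equation (3) can be evaluated numerically for any tabulated SED and filter response curve. The sketch below uses our own helper names and a toy Gaussian response in place of a real filter curve; as a sanity check, a flat-spectrum source with $F_\nu = 3.631\times10^{-20}$ erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$ (the AB zero point) should come out at $m_{\rm AB}\approx 0$ through any filter:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal rule, written out explicitly."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_ab_mag(nu, f_nu, resp):
    """AB magnitude of a model SED through a filter response (Eq. 3):
    m = -2.5 log10( int F_nu phi dnu / int phi dnu ) - 48.60."""
    return -2.5 * np.log10(_trapz(f_nu * resp, nu) / _trapz(resp, nu)) - 48.60

# flat-spectrum (AB zero point) source through a toy Gaussian filter
nu = np.linspace(1e14, 1e15, 2001)                 # frequency grid [Hz]
resp = np.exp(-0.5 * ((nu - 5e14) / 5e13) ** 2)    # toy response curve
m = synthetic_ab_mag(nu, np.full_like(nu, 3.631e-20), resp)
```

In the actual analysis the same integral is evaluated with each {\sc galev} SED and each of the {\sl GALEX}, $UBVRI$, SDSS, BATC, and 2MASS response curves.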
\subsection{Fits}
We used a $\chi^2$ minimization test to determine which {\sc galev} SSP models are most compatible with the observed SEDs, following
\begin{equation}
\chi^2=\sum_{i=1}^{n}{\frac{[m_{\nu_i}^{\rm intr}-m_{\nu_i}^{\rm mod}(t)]^2}{\sigma_{i}^{2}}},
\end{equation}
where $m_{\nu_i}^{\rm mod}(t)$ is the integrated magnitude in the $i{\rm th}$ filter of a theoretical SSP at metallicity $Z$ and age $t$, $n$ is the number of the filters used for fitting, $m_{\nu_i}^{\rm intr}$ represents the intrinsic integrated magnitude in the same filter and
\begin{equation}
\sigma_i^{2}=\sigma_{{\rm obs},i}^{2}+\sigma_{{\rm mod},i}^{2}+(R_{\lambda_i}*\sigma_{\rm red})^2
+\sigma_{{\rm md},i}^{2}.
\end{equation}
Here, $\sigma_{{\rm obs},i}$ is the observational uncertainty, and $\sigma_{{\rm mod},i}$ is the uncertainty associated with the model itself, for the $i{\rm th}$ filter. \citet{charlot96} estimated the uncertainty associated with the term $\sigma_{{\rm mod},i}$ by comparing the colors obtained from different stellar evolutionary tracks and spectral libraries. Following
\citet{ma07,ma09a,ma09b,ma11,ma12} and \citet{wang10}, we adopted $\sigma_{{\rm mod},i}=0.05$ mag in this paper.
$\sigma_{\rm red}$ is the uncertainty in the reddening value, and $R_{\lambda_i}= A_{\lambda_i}/E(B-V)$,
where $A_{\lambda_i}$ is taken from \citet{car89}, $R_V= A_V/E(B-V)=3.1$, and $\sigma_{{\rm md},i}$ is the uncertainty of the distance modulus, which is always 0.07 from $(m-M)_0=24.47\pm0.07$ mag
\citep{McConnachie05}.
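The fit in Equations (4) and (5) amounts to a grid search over model SEDs. A minimal sketch follows (the function names and toy grid values are illustrative; only the $\sigma_{\rm mod}=0.05$ mag and $\sigma_{\rm md}=0.07$ mag defaults come from the text):

```python
import numpy as np

def total_sigma(sig_obs, r_lam, sig_red, sig_mod=0.05, sig_md=0.07):
    """Combined per-filter uncertainty of Eq. (5): observational, model
    (0.05 mag), reddening (R_lambda * sigma_red), and distance-modulus
    (0.07 mag) terms added in quadrature."""
    return np.sqrt(sig_obs**2 + sig_mod**2 + (r_lam * sig_red)**2 + sig_md**2)

def best_fit(m_intr, sig, model_grid, ages):
    """chi^2 of Eq. (4) over a grid of model SEDs;
    model_grid has shape (n_ages, n_filters)."""
    chi2 = np.sum(((m_intr - model_grid) / sig) ** 2, axis=1)
    i = int(np.argmin(chi2))
    return ages[i], chi2[i]

# toy grid of three model SEDs in two filters; the middle model
# matches the 'intrinsic' SED exactly, so it should win with chi^2 = 0
ages = np.array([0.1, 0.5, 1.0])                              # Gyr
grid = np.array([[18.0, 17.5], [17.0, 16.8], [16.0, 16.2]])
sig = total_sigma(np.array([0.05, 0.05]), np.array([3.1, 2.3]), 0.02)
age, chi2 = best_fit(np.array([17.0, 16.8]), sig, grid, ages)
```

The real grid runs over the full {\sc galev} age sequence at fixed metallicity, with one $m^{\rm intr}$ and $\sigma_i$ per filter actually used for each cluster.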
\citet{perina09,perina10} determined ages for 20 possible YMCs in M31 with metallicity as a free parameter of their fit. Their results showed that most YMCs in M31 are best fitted with solar-metallicity models. \citet{cald09} argued that the young star clusters likely have supersolar abundances. In this paper, the {\sc galev} models of solar metallicity ($Z=0.02$) were used to fit the intrinsic SEDs of all the sample clusters. As an example, we present the fits for several sample clusters in Figure 3. During the fitting, we found that, for a small number of clusters, some photometric data points cannot be fitted with any SSP model. We therefore excluded these deviating data points from the best fits. They are the $a$ magnitude of M070, the $b$ magnitude of M040, the $k$ magnitude of V133, the $m$ magnitudes of M082 and M101, the $n$ and $p$ magnitudes of B195, the $J$ magnitude of B118D, and the $H$ and $K_{\rm s}$ magnitudes of M091. We also noticed that, for some star clusters (B091, B305, B319, B392, B458, B480, B484, and DAO69), the $JHK_{\rm s}$ photometry shows obvious offsets from the {\sl GALEX} FUV and NUV bands \citep[see also][]{kang12}. Because UV photometry is powerful for age estimation of young stellar populations \citep{kavirag07}, we adopted the {\sl GALEX} FUV and NUV photometry in the SED fitting and discarded the $JHK_{\rm s}$ magnitudes for these clusters.
\begin{figure}
\figurenum{3} \resizebox{\hsize}{!}{\rotatebox{-0}{\includegraphics{fig3.eps}}}
\vspace{-2.cm}
\caption{Best-fitting
integrated SEDs of the {\sc galev} SSP models shown in relation to the intrinsic SEDs for our sample star clusters. The photometric data points are represented by the symbols with error bars (vertical error bars for uncertainties and horizontal ones for the approximate wavelength coverage for each filter). Open circles represent the calculated magnitudes of the model SEDs for each filter.}
\label{fig:fig3}
\end{figure}
The masses of the sample star clusters were determined subsequently. The {\sc galev} models provide absolute magnitudes in 77 filters for SSPs of $10^6~{M_\odot}$,
including 66 filters of the {\it HST} and the Johnson $UBVRI$ \citep{landolt83}, Cousins $RI$ \citep{landolt83}, and $JHK$ \citep{bb88} systems. The difference between the intrinsic and model absolute magnitudes provides a direct measurement of the cluster mass, in units of $10^6~{M_\odot}$ \citep[see][for details]{ma11}. We transformed the 2MASS $JHK_{\rm s}$ magnitudes to the photometric system of \citet{bb88} using the equations given by \citet{Carpenter01}, and estimated the cluster masses using magnitudes in all of the $UBVRI$ and $JHK_{\rm s}$ bands. The masses obtained from the magnitudes in different filters differ slightly, so we adopted their average as the final cluster mass.
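The mass determination described above reduces to scaling the $10^6~M_\odot$ model by the per-band magnitude offset and averaging over the available bands; a sketch (the function name and toy magnitudes are ours):

```python
import numpy as np

def cluster_mass(m_abs_intr, m_abs_model):
    """Mass in Msun from the offsets between the intrinsic absolute
    magnitudes and the model magnitudes of a 10^6 Msun SSP, averaged
    over the available bands: M = 10^6 * 10^(-0.4 * (M_intr - M_mod))."""
    dm = np.asarray(m_abs_intr) - np.asarray(m_abs_model)
    return float(np.mean(1e6 * 10 ** (-0.4 * dm)))

# a cluster 5 mag fainter than the 10^6 Msun model in every band
# corresponds to a mass 100 times smaller, i.e. 10^4 Msun
mass = cluster_mass([-5.0, -4.0], [-10.0, -9.0])
```

In practice the per-band masses are computed from the $UBVRI$ and (transformed) $JHK_{\rm s}$ magnitudes and then averaged, as stated above.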
The masses of 22 clusters could not be derived, because their $UBVRIJHK_{\rm s}$ magnitudes cannot be used: (1) no $UBVRIJHK_{\rm s}$ magnitudes are available (LGS04131.1\_404612); (2) no $JHK_{\rm s}$ magnitudes are available and the $UBVRI$ magnitudes were discarded because of the discrepancy with the SDSS $ugriz$ magnitudes (e.g., B195); (3) the $UBVRI$ magnitudes were discarded because of the discrepancy with the SDSS $ugriz$ magnitudes and the $JHK_{\rm s}$ magnitudes were discarded because of the discrepancy with the {\sl GALEX} UV magnitudes (e.g., B319). The ages and masses of the sample clusters obtained in this paper are listed in Table 4.
\subsection{Comparison with Previous Determinations}
In this paper, we determined ages and masses for 135 star clusters by comparing their multicolor photometries with theoretical SPS models. These star clusters were from \citet{cald09,cald11}, who presented a series of studies of M31 young and old clusters, respectively.
As discussed in Section 2.3, there are 40 clusters in this paper in common with our series of previous papers, and the ages of 27 of them were also derived in those studies \citep{jiang03,fan06,ma06,ma09b,wang10}.
In the study of \citet{jiang03}, the SSP models of Bruzual \& Charlot (G. Bruzual \& Charlot 1996, unpublished) were used; in the studies of \citet{fan06} and \citet{ma06}, the SSP models of \citet{bru03} were used. In addition, these three studies used only the BATC photometry in 13 passbands and did not include UV data, which are a powerful tool for age estimation of young stellar populations \citep[see][and references therein]{kang12}. We therefore re-estimated the ages of these young star clusters with more photometric data, including UV data, and with the same SSP models as used by \citet{ma09b} and \citet{wang10}. In the studies of \citet{ma09b} and \citet{wang10}, we used the metallicities obtained by \citet{bh00} and \citet{per02} when estimating the ages of star clusters. The metallicities in \citet{bh00} and \citet{per02} were determined from Lick indices calibrated on Galactic old GCs. However, \citet{Fusi05} argued that young star clusters are probably not as metal-poor as the metallicities of \citet{per02} suggest, and concluded that the $G$-band line strength tends to underestimate the [Fe/H] values of \citet{per02} by more than 1 dex \citep[see also][]{kang12}. As a result, most of the ages obtained in \citet{ma09b} and \citet{wang10} are older than those obtained in this paper. We therefore re-estimated the ages of the young star clusters in \citet{ma09b} and \citet{wang10} with the solar-metallicity SSP models.
In addition, nine clusters (B476, BH11, M026, M040, M045, M053, M057, M058, and M070) were estimated to be older than 2 Gyr in this paper and are thus considered old star clusters \citep[e.g.,][]{cald09}. Since this paper focuses on young clusters, we do not consider these nine old clusters in the following analysis. However, we point out that six of these nine star clusters were estimated to be younger than 2 Gyr in previous studies: the age of B476 was estimated to be 1.2 Gyr by \citet{cald09} (but 7.1 Gyr by Fan 2010); that of BH11, 1.6 Gyr by \citet{vanse09}; that of M040, $\sim130$ Myr by \citet{fan10} and $\sim320$ Myr by \citet{kang12}; that of M053, 1 Gyr by \citet{cald09} and $\sim140$ Myr by \citet{fan10}; that of M058, $\sim160$ Myr by \citet{fan10}; and that of M070, 1.2 Gyr by \citet{cald09} and $\sim810$ Myr by \citet{fan10}. The ages of these nine clusters derived in previous papers and here are listed in Table 3. There are three clusters with ages of 4 Myr, the lower age limit of the {\sc galev} models. Clusters KHM31-37 and V133 were estimated to be slightly older by other authors \citep{cald09,kang12}, while B196D was estimated to be slightly younger ($\sim2$ Myr) by \citet{cald09}.
In Figure 4, the ages estimated for the young ($<2$ Gyr) star clusters in this paper are compared with those from previous studies
\citep[e.g.,][]{Beasley04,vanse09,fan10,perina10,cald09,cald11,kang12}.
The star clusters with ages of 4 Myr obtained in this paper are drawn with open squares in Figure 4. There are five clusters in common between \citet{Beasley04} and this paper. We can see that the ages of \citet{Beasley04} are in good agreement with ours.
\citet{vanse09} estimated ages of star clusters located in the southern disk of M31 with $UBVRI$ SED fitting. There is an obvious offset between their estimated ages and ours, which is mainly driven by a few strongly deviating clusters. If the three clusters with ages greater than 3 Gyr in \citet{vanse09}, drawn with arrows in Figure 4, are excluded, the systematic offset is reduced to $-0.25$ Gyr.
\citet{fan10} estimated ages of star clusters in M31 with multi-band ($UBVRIJHK_{\rm s}$) SED fitting, and their results agree well with ours, with a small offset ($\sim0.02$ Gyr). \citet{perina09,perina10} determined ages of 20 possible YMCs in M31 by comparing the observed color-magnitude diagrams with the isochrones of \citet{Girardi02} for different metallicities and ages, and estimated the masses of these clusters based on Maraston's solar-metallicity SSP models with the \citet{salp55} and \citet{Kroupa01} IMFs and the IR magnitudes in the 2MASS-6X-PSC catalog. In general, the ages obtained in this paper are in good agreement with the determinations of \citet{perina10}, with a small deviation ($\sim0.06$ Gyr). There is a small systematic offset ($\sim-0.10$ Gyr) between \citet{cald09,cald11} and this paper, i.e., the ages obtained by \citet{cald09,cald11} are larger than those obtained here. \citet{perina10} found a similar systematic offset, in that the ages obtained by \citet{cald09} are larger than theirs, and suggested that this offset is caused by the supersolar-metallicity models ($Z = 0.04$) adopted by \citet{cald09} when determining the ages of the star clusters.
\citet{kang12} derived ages for young clusters by fitting the multi-band photometry with model
SEDs, and their results are in good agreement with ours.
\begin{figure}
\figurenum{4}
\resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig4.eps}}}
\caption{Comparison of the ages obtained here with those obtained by previous works:
\citet{Beasley04}, \citet{vanse09}, \citet{fan10}, \citet{perina10}, \citet{cald09,cald11}, and \citet{kang12}. In each panel, the mean value of the age differences (ours minus other study) is given, with its standard deviation ($\sigma$). The error bars of ages from each study are also shown.}
\label{fig:fig4}
\end{figure}
In Figure 5, we compared the masses of clusters obtained in this paper with those from previous studies
\citep[e.g.,][]{Beasley04,vanse09,fan10,perina10,cald09, cald11,kang12}.
The masses of two (KHM31-37 and V133) of the three clusters with ages of 4 Myr, which are drawn with open squares in Figure 5, were derived to be lower than $10^3~{M_\odot}$. The masses estimated in this paper are in good agreement with those estimated by \citet{Beasley04}, \citet{perina10}, \citet{cald09,cald11}, and \citet{kang12}. There is an obvious offset between \citet{vanse09} and this paper, which is mainly caused by a few strongly deviating clusters. The cluster with the largest discrepancy is B335, with a mass estimate of $\sim5\times 10^5~{M_\odot}$ by \citet{vanse09} and $\sim1.3\times 10^5~{M_\odot}$ in this paper. When B335 is excluded, the offset is reduced to $\sim-0.6\times 10^4~{M_\odot}$. The masses estimated by \citet{fan10} are slightly smaller than those estimated here, with an offset of $\sim1.5\times 10^4~{M_\odot}$.
\begin{figure}
\figurenum{5}
\resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig5.eps}}}
\caption{Comparison of the masses obtained here with those obtained by previous works:
\citet{Beasley04}, \citet{vanse09}, \citet{fan10}, \citet{perina10}, \citet{cald09,cald11}, and \citet{kang12}. In each panel, the mean value of the mass differences (ours minus other study) is given, with its standard deviation ($\sigma$). The error bars of masses from each study are also shown.}
\label{fig:fig5}
\end{figure}
\section{DISCUSSION OF YOUNG STAR CLUSTERS}
\subsection{Position}
Figure 6 shows the number, ages, and masses of young star clusters ($<2$ Gyr) as a function of projected radius from the center of M31, adopted as $\rm \alpha_0=00^h42^m44^s.30$ and $\rm \delta_0=+41^o16'09''.0$ (J2000.0) following \citet{hbk91} and \citet{per02}. In the top panel, the histogram of the radial distribution of young star clusters clearly shows two peaks, at $4-7$ kpc and $9-11$ kpc, while the middle and bottom panels show wide age and mass distributions in these two peak regions.
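The projected radii used here can be computed from the cluster coordinates with a small-angle approximation (a sketch with our own function name; the center coordinates follow the values above, the 785 kpc distance corresponds to the adopted $(m-M)_0=24.47$, and no disk deprojection is applied):

```python
import numpy as np

def projected_radius_kpc(ra, dec, ra0=10.68458, dec0=41.26917, d_kpc=785.0):
    """Projected distance from the M31 center in kpc: the angular
    separation (small-angle approximation) times the adopted distance.
    Coordinates are in degrees (J2000.0)."""
    dra = (ra - ra0) * np.cos(np.radians(dec0))   # RA offset on the sky
    ddec = dec - dec0
    return d_kpc * np.radians(np.hypot(dra, ddec))

# a point 0.73 deg due north of the center lies ~10 kpc out in projection
r = projected_radius_kpc(10.68458, 41.99917)
```

At the distance of M31, 1 arcmin corresponds to roughly 0.23 kpc, so the 10 kpc ring subtends about 44 arcmin, consistent with Figure 7 below.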
\begin{figure}
\figurenum{6}
\resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig6.eps}}}
\caption{
($Top~panel$) Number histogram of young star clusters against projected radius. ($Middle~panel$) Age versus projected radius for sample young clusters. ($Bottom~panel$) Mass versus projected radius for sample young star clusters. The open rectangles show the two peaks around $4-7$ kpc and $9-11$ kpc of the radial distribution for young star clusters.}
\label{fig:fig6}
\end{figure}
\citet{kang12} presented the radial distribution of clusters against the distance from the center of M31,
and found that the young clusters show two peaks, around $10-12$ kpc and $13-14$ kpc. They also found that the UV star-forming (SF) regions show two distinct peaks: a main peak at $\sim16$ kpc and a secondary peak around 11 kpc. In addition, a small peak at $5-8$ kpc can clearly be seen in the distribution of ages of UV SF regions against projected radius \citep[see Figure 19 of][]{kang12}. We argue that the peak at $4-7$ kpc obtained in this paper is associated with the peak at $5-8$ kpc for the UV SF regions, while the peak at $9-11$ kpc correlates with the well-known 10 kpc ring \citep{Gordon06}.
Figure 7 displays the spatial distribution and radial distribution of the M31 young clusters with
different age bins: (a) $t<0.1$ Gyr; (b) 0.1 Gyr $\leq t<$ 0.4 Gyr; (c) 0.4 Gyr $\leq t<$ 1 Gyr; (d) 1 Gyr $\leq t<$ 2 Gyr. In the top panel, young star clusters in different age ranges are drawn with different symbols. The inner, solid ellipse and the dashed contour represent the 10 kpc ring and the outer ring from \citet{Gordon06}, based on infrared observations with the Multiband Imaging Photometer for Spitzer (MIPS) instrument on the {\it Spitzer Space Telescope}. The 10 kpc ring is drawn with a center offset from the M31 nucleus by [$5'.5$, $3'.0$] \citep{Gordon06} and a radius of 44 arcmin (10 kpc). Several regions, drawn with open rectangles, show aggregations of young star clusters to varying extents.
However, we should point out that these aggregations of young star clusters may be caused by projection effects due to the inclination of the M31 disk. \citet{vanse09} noted two clumps of young clusters, both of which are located in one rectangle ($\sim-13$ kpc $< X <$ $-9$ kpc and $-3$ kpc $< Y <$ 0 kpc). The star clusters in this study are spatially coincident with the disk and the rings, indicating that the distribution of the young star clusters correlates with the galaxy's SF regions, consistent with previous studies \citep{fan10,kang12}. In the bottom panel, the number of young star clusters in different age bins is shown as a function of projected radial distance from the M31 center. Clusters younger than $0.1$ Gyr show the most obvious aggregation around the 10 kpc ring.
\begin{figure}
\figurenum{7}
\resizebox{\hsize}{!}{\rotatebox{0}{\includegraphics{fig7.eps}}}
\caption{Spatial distribution ($top~panel$) and radial distribution ($bottom~panel$) of M31 young star clusters in different age bins: (a) $t<$ 0.1 Gyr; (b) 0.1 Gyr $\leq t<$ 0.4 Gyr; (c) 0.4 Gyr $\leq t<$ 1 Gyr; (d) 1 Gyr $\leq t<$ 2 Gyr. The inner, solid ellipse and the dashed contour represent the 10 kpc ring and the outer ring from \citet{Gordon06}, while the dotted ellipse is the M31 disk/halo boundary as defined by \citet{rac91}. The small rectangles indicate clumps of young clusters.}
\label{fig:fig7}
\end{figure}
\citet{Gordon06} ran a number of numerical simulations of the M31--M32 and M31--NGC 205 interactions, and assumed a passage of M32 through the disk of M31 occurring 20 Myr ago, resulting in a burst of star formation that propagates outward through the disk. \citet{block06} suggested that M32 and M31 had an almost head-on collision about 210 Myr ago, and that M32 passed through the M31 disk again about 110 Myr ago \citep[see Figure 2 of][]{block06}, which induced two off-center rings--an inner ring with projected dimensions of $\sim$ 1.5 kpc and the 10 kpc ring. Both simulations reproduced the 10 kpc ring and the observed split.
We divided our sample star clusters younger than 300 Myr into six groups, and show the spatial distribution for each group in Figure 8. We can see that only star clusters with ages 50 Myr $-$ 100 Myr appear around the 10 kpc ring and the ring splitting region ($-9.5$ kpc $< X <$ $-7.5$ kpc and $-2.5$ kpc $< Y <$ $-0.5$ kpc) \citep{kang12},
indicating that 1) the 10 kpc ring may have begun to form about 100 Myr ago; 2) M32 passed through the southern part of the M31 disk about 100 Myr ago, which in turn resulted in the split in the form of a hole. This appears to be consistent with the prediction by \citet{block06} of a second passage of M32 about 110 Myr ago. After the second passage, star clusters formed around the split for a long period, since there are a number of star clusters around the split with ages younger than 50 Myr. \citet{davidge12} reported that the star formation rate (SFR) of the M31 disk would be elevated greatly and quickly after an encounter event, and would finally drop when the interstellar medium is depleted and disrupted. However, from Figure 8, we cannot find evidence of a radial trend in star cluster ages \citep[see also][]{kang12,cald09}.
\begin{figure}
\figurenum{8}
\resizebox{\hsize}{!}{\rotatebox{0}{\includegraphics{fig8.eps}}}
\caption{Spatial distribution of six groups of M31 young star clusters younger than 300 Myr, divided with same age bin of 50 Myr. The inner, solid ellipse and the dashed contour represent the 10 kpc ring and the outer ring from \citet{Gordon06}, while the dotted ellipse is the M31 disk/halo boundary as defined by \citet{rac91}.
The small rectangle represents the ring splitting region in the southern part of M31 disk.}
\label{fig:fig8}
\end{figure}
Figure 9 shows the spatial and radial distribution of the M31 young star clusters with different mass bins:
(a) $10^2~{M_\odot} \leq M < 10^3~{M_\odot}$; (b) $10^3~{M_\odot} \leq M < 10^4~{M_\odot}$;
(c) $M \geq 10^4~{M_\odot}$. In the top panel, clusters of these three groups are drawn with different marks.
The bottom panel presents the number of young clusters in different mass bins as a function of projected radial distance from the M31 center, and it shows that young clusters more massive than $10^4~{M_\odot}$ are most concentrated near the 10 kpc ring.
\begin{figure}
\figurenum{9}
\resizebox{\hsize}{!}{\rotatebox{0}{\includegraphics{fig9.eps}}}
\caption{Spatial distribution ($top~panel$) and radial distribution ($bottom~panel$) of M31 young star clusters with different mass bins: (a) $10^2~{M_\odot}$ $\leq M<$ $10^3~{M_\odot}$; (b) $10^3~{M_\odot}$ $\leq M<$ $10^4~{M_\odot}$; (c) $M \geq$ $10^4~{M_\odot}$.
The inner, solid ellipse and the dashed contour represent the 10 kpc ring and the outer ring from \citet{Gordon06}, while the dotted ellipse is the M31 disk/halo boundary as defined by \citet{rac91}. The several small rectangles show the clumps of young clusters of various extents.}
\label{fig:fig9}
\end{figure}
\subsection{Age and Mass Distribution}
Figure 10 plots the distribution of estimated ages and masses for the young star clusters. A prominent correlation can be seen: mass increases with age. There are two distinct peaks in the age histogram: a highest peak at age $\sim$ 60 Myr ($\log \rm age=7.8$) and a secondary peak around 250 Myr ($\log \rm age=8.4$). The mass distribution of the young star clusters shows a single peak around $10^4~{M_\odot}$. The mean values of the age and mass of the young clusters are about 385 Myr and $2\times 10^4~{M_\odot}$, slightly higher than the values presented by \citet{kang12}, which are 300 Myr and $10^4~{M_\odot}$, respectively. Most of our young clusters have masses ranging from $10^{3.5}~{M_\odot}$ to $10^{5}~{M_\odot}$, which are more massive than OCs in the solar neighborhood \citep{piskunov08}, but less massive than typical GCs in the MW \citep{mm05}. The lack of young clusters more massive than $10^5~{M_\odot}$ is also noted by \citet{vanse09} and \citet{cald09}, possibly caused by a low-average SFR of M31 \citep{barmby06} or by such clusters being hidden by dust clouds in the disk due to the inclination angle of M31 \citep{vanse09}.
\begin{figure}
\figurenum{10} \resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig10.eps}}}
\caption{Age and mass distribution of the sample young star clusters in this paper. The histograms for age and mass are presented with gray colors.}
\label{fig:fig10}
\end{figure}
\citet{portegies10} have listed three phases for the evolution of a young star cluster: 1) the first few Myr, during which the star formation activity is still proceeding and the star cluster is rich in gas; 2) a subsequent period after the first supernovae (some 3 Myr after formation), in which a young cluster experiences a serious loss of gas and dust, and stellar mass loss plays an important role in the cluster evolution; 3) a later stage in which stellar dynamical processes dominate the cluster evolution. The dividing line between phase 2 and phase 3 may be anywhere between 100 Myr and 1 Gyr, and most of our young clusters are in phase 2 or phase 3.
\citet{cs87} showed that after 5 Gyr, both mass and galactic location are important evolutionary parameters for GCs. \citet{spitzer58} discussed the destructive effects of encounters of clusters with giant molecular clouds (GMCs), and found that the disruption time for a star cluster varies directly with the cluster density and is about 200 Myr for a mean density of 1 $M_\odot/{\rm pc}^3$. \citet{sh58} also reported that two-body relaxation is effective at destroying low-mass clusters, which may account for the scarcity of low-mass older clusters. Actually, two-body relaxation and encounters with GMCs are also important processes that lead to young cluster disruption \citep[see][and references therein]{cald09}, while \citet{portegies10} argued that mass loss due to stellar evolution is the most important process in young cluster dissolution. It is evident that star cluster mass is one key parameter in star cluster evolution. \citet{bl03} derived an empirical relation between the disruption time and the initial mass of star clusters in the solar neighborhood, Small Magellanic Cloud (SMC), M51, and M33.
\citet{lamers05} determined a disruption time of 1.3 Gyr for a $10^4~{M_\odot}$ cluster in the solar neighborhood, while \citet{cald09} reported that most of M31 young clusters would be destroyed in the next Gyr or so, and only some massive and dense ones may survive for a longer time.
Several features are shown in Figure 10:
1) there is an obvious gap in the age distribution around 100 Myr. 2) there are few clusters older than 400 Myr (${\log \rm age=8.6}$) with masses lower than $10^4~{M_\odot}$. Although many low-mass clusters are easily disrupted, this gap may be caused by a selection effect. In fact, \citet{johnson12} found that the completeness of the M31 ground-based sample drops precipitously at $m_{\rm F475W}>18$ ($M_{\rm F475W}>-6.5$), which corresponds to about $2\times10^4~{M_\odot}$. 3) there are few clusters more massive than $10^5~{M_\odot}$, which may be caused by a low-average SFR of M31 or by obscuration by dust clouds in the M31 disk, as discussed above. 4) there is a gap of clusters with very low masses ($\sim10^3~{M_\odot}$) younger than 30 Myr (${\log \rm age=7.5}$). These clusters may be too faint to be sample objects of \citet{cald09,cald11}, indicating that our sample is not complete in these age and mass ranges \citep[see also][]{cald09}.
Figure 11 shows the age distribution in different mass intervals (top panel) and mass function in different age intervals (bottom panel). The histograms are derived using a 0.4-dex bin width with different starting values.
These distributions contain information about the formation and disruption history of star clusters \citep{fc12},
however, the interpretation of the empirical distributions of clusters depends strongly on how incompleteness affects the sample \citep{gieles07}. In the top panel, we can see an obvious gap before 40 Myr ($\log \rm age=7.6$), which is caused by a selection effect. The age distribution of the clusters does not decline monotonically, with an apparent bend around 200 Myr ($\log \rm age=8.3$). We argue that this bend near 200 Myr may be explained as a burst of cluster formation, possibly caused by a recent interaction event between M31 and one of its satellite galaxies, such as the collision between M31 and M32 about 210 Myr ago suggested by \citet{block06}. The two declining trends starting from 40 Myr and 200 Myr reflect a rapid disruption of clusters. \citet{vanse09} noted a peak of the cluster age distribution at 70 Myr, and suggested an enhanced cluster formation episode at that epoch.
In the bottom panel, the gap in the number of clusters in the low-mass region ($\log \rm mass < 3.5$) is apparently due to sample incompleteness rather than a physical effect \citep{vanse09}. The initial mass function for star clusters should be slightly steeper \citep{fc12} than what is shown here because of the short lifetimes of low-mass clusters. Recently, \citet{fc12} compared the observed age distributions and mass functions of star clusters in the MW, MCs, M83, M51, and Antennae, and found that these distributions are similar in different galaxies. However, due to the incompleteness of our cluster sample, partly caused by the exclusion of clusters for which accurate photometry cannot be derived, we do not give any empirical formulas for the age and mass distributions.
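The shifted-bin check used for Figure 11 (a fixed 0.4-dex width with different starting values, to verify that histogram features are not artifacts of bin placement) can be sketched as follows; the log-age values below are hypothetical placeholders, not the actual catalog.

```python
import numpy as np

# Hypothetical log(age/yr) values for a small cluster sample.
log_age = np.array([7.1, 7.6, 7.8, 7.8, 8.0, 8.3, 8.4, 8.4, 8.9, 9.2])

bin_width = 0.4  # dex, as used for the Figure 11 histograms
# Same bin width, shifted starting values: robust features should persist.
for start in (6.0, 6.1, 6.2, 6.3):
    edges = np.arange(start, 10.0 + bin_width, bin_width)
    counts, _ = np.histogram(log_age, bins=edges)
    print(start, counts)  # every shift still counts all 10 clusters
```

A feature (such as the bend near $\log \rm age=8.3$) that appears for every starting value is unlikely to be a binning artifact.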
\begin{figure}
\figurenum{11} \resizebox{\hsize}{!}{\rotatebox{0}{\includegraphics{fig11.eps}}}
\caption{Age distribution of the sample young star clusters with different mass intervals ($top~panel$) and mass function with different age intervals ($bottom~panel$).
The histograms are derived using a 0.4-dex bin width with different starting values.}
\label{fig:fig11}
\end{figure}
\subsection{Correlations with Structure Parameters}
In this section, we will discuss the correlations of ages and masses with structure parameters, which are derived
by King-model \citep{king66} fits for clusters in M31 \citep{bhh02, barmby07, barmby09}. Because the sample clusters are younger than 2 Gyr, the structure parameters obtained from the bluer filters are preferred \citep[see][for details]{barmby09}. There are four clusters (B315, B319, B368, and B374) which have been studied both by \citet{bhh02, barmby07} and by \citet{barmby09}, and we use the newer results of \citet{barmby09}.
Figure 12 shows structure parameters as a function of age for the young clusters in this paper. Some correlations can be seen: the concentration $c$, defined as $c\equiv\log(r_t/r_0)$, decreases with age. The trend is largely driven by clusters B342 and B368, both of which have large $c$ values (3.98 for B342 and 3.87 for B368). Both the scale radius $r_0$ and the projected core radius $R_c$ increase with age. Clusters B342 and B368 have very small $r_0$ and $R_c$ values and are drawn with arrows in Figure 12 (the values of $r_0$ and $R_c$ are $\sim$ 0.014 pc for B342 and $\sim$ 0.011 pc for B368). \citet{elson89} and \citet{elson91} discussed the trend of core radius with age, and argued that this trend may represent real evolution in the structure of clusters as they grow old, partially explained by the effect of mass segregation \citep{mg03}, or by dynamical effects such as heating by black hole (BH) binaries \citep{mackey07}. \citet{wilkinson03} also demonstrated that neither large differences in primordial binary fraction nor tidal heating due to differences in the cluster orbits could account for the observed trend. The best-fit central surface brightness $\mu_{V,0}$ shows a decreasing trend with age, and \citet{barmby09} argued that this trend is likely due to the fading of the stellar population and the increase of the core radius $R_c$ with age. We also see that the central mass density $\rho_0$ decreases with age, although the scatter is large. \citet{barmby09} presented that the central mass density shows very little trend with age for both the M31 young clusters and young clusters in the MCs. There is no obvious correlation between $t_{r,h}$, the two-body relaxation time at the model-projected half-mass radius, and age. The dashed line represents where $t_{r,h}$ equals the age. It can be seen that most clusters (except for DAO38 and M091) have ages less than $t_{r,h}$, indicating that these young clusters have not been well dynamically relaxed.
Because two-body encounters can transfer energy between individual stars and thereby drive the system toward thermal equilibrium \citep{portegies10}, we argue that these young clusters have not established thermal equilibrium.
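The two quantities used in this comparison can be sketched as follows. The $r_0$ values for B342 and B368 follow the text, and the tidal radii $r_t$ below are chosen so that $c=\log_{10}(r_t/r_0)$ reproduces the quoted concentrations; the ages and relaxation times $t_{r,h}$ are purely illustrative placeholders.

```python
import math

# name: (r0_pc, rt_pc, age_Myr, trh_Myr); ages and trh are hypothetical.
clusters = {
    "B342": (0.014, 134.0, 80.0, 500.0),
    "B368": (0.011, 81.5, 120.0, 450.0),
    "DAO38": (1.2, 30.0, 900.0, 300.0),
}

concentration = {}
relaxed = {}
for name, (r0, rt, age, trh) in clusters.items():
    concentration[name] = math.log10(rt / r0)  # King concentration c = log10(r_t / r_0)
    relaxed[name] = age > trh                  # age exceeding t_rh => dynamically relaxed

print(concentration["B342"])  # ~3.98, the value quoted for B342
print(relaxed["B342"])        # False: age below t_rh, not yet relaxed
```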
\begin{figure}
\figurenum{12} \resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig12.eps}}}
\caption{Structure parameters as a function of age for the sample young star clusters in this paper.}
\label{fig:fig12}
\end{figure}
Figure 13 shows structure parameters as a function of mass for the sample young clusters. The concentration $c$ increases with mass, although the trend is rather weak.
\citet{fc12} presented $c$ plotted against mass for clusters in the MCs, and found that there was no correlation between $c$ and mass. Actually, we found that all the clusters in \citet{fc12} have $c$ less than 2.5, much smaller than the largest value in our sample ($\sim4$). If we exclude the two clusters B342 and B368, the correlation of $c$ with mass nearly disappears. Both $r_0$ and $R_c$ increase with mass; however, the trend is largely weakened by cluster B327, which has very small values of $r_0$ and $R_c$, though larger than those of B342 and B368, which are drawn with arrows in Figure 13. Both the central surface brightness $\mu_{V,0}$ and the central mass density $\rho_0$ decrease weakly with mass, while no obvious correlation between $t_{r,h}$ and mass can be seen.
\begin{figure}
\figurenum{13} \resizebox{\hsize}{!}{\rotatebox{-90}{\includegraphics{fig13.eps}}}
\caption{Structure parameters as a function of mass for the sample young star clusters in this paper.}
\label{fig:fig13}
\end{figure}
We checked the surface brightness profiles of B327, B342, and B368 displayed in \citet{barmby09}, which have very small $r_0$ and $R_c$ and very large $\mu_{V,0}$ and $\rho_0$, and found that their core profiles are cuspy. \citet{barmby09} concluded that the cores of these clusters did not appear to be resolved in the {\it HST}/WFPC2 images and that the structural parameters for these clusters would be uncertain if the central cluster luminosity is dominated by only a few bright stars. However, if these cuspy core profiles are true integrated properties, which may be better fitted by a power-law structure model \citep[e.g.,][]{sersic68}, the three clusters may be post-core-collapse clusters \citep[see][for details]{tanvir12}.
\subsection{Young Massive Clusters}
YMCs are often related to the violent SF episodes triggered by galaxy collisions, mergers, and close encounters \citep{grijs07}. However, based on a sample of 21 nearby spirals, \citet{lr99} found that YMCs can exist in a wide variety of host galaxy environments, including quiescent galaxies, and that there is no correlation between the morphological type of the galaxies and their contents of YMCs. YMCs are dense aggregates of young stars, which are also expected to be the nurseries for many unusual objects, including exotic stars, binaries, and BHs \citep{portegies10}. Many studies \citep{barmby09, cald09, vanse09, Peacock10, perina10, portegies10, ma11}
that focused on M31 YMCs have made remarkable progress in understanding their stellar populations, structure parameters, and dynamical properties.
There are 13 YMCs in our cluster sample with a definition of age $\leq 100$ Myr and mass $\geq 10^4~{M_\odot}$ \citep{portegies10}. Figure 14 shows the spatial distribution of the 13 YMCs, while different sizes of the open circles indicate YMCs in different mass ranges.
The rectangle between the 10 kpc ring and the outer ring represents the split
in the southern part of the M31 disk, and the two black filled triangles represent M32 and NGC 205. It is not surprising to see that most of the YMCs gather around the split, indicating that there has been high-level star formation activity there, which is consistent with previous studies
\citep{Gordon06, kang12}.
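The YMC selection criterion used above (age $\leq 100$ Myr and mass $\geq 10^4~{M_\odot}$, following \citealt{portegies10}) can be sketched on a small hypothetical catalog; the entries below are placeholders, not actual M31 clusters.

```python
# Select young massive clusters (YMCs): age <= 100 Myr and mass >= 1e4 Msun.
# Catalog entries and names are hypothetical.
catalog = [
    {"name": "C1", "age_Myr": 60.0, "mass_Msun": 2.5e4},
    {"name": "C2", "age_Myr": 250.0, "mass_Msun": 5.0e4},  # too old
    {"name": "C3", "age_Myr": 40.0, "mass_Msun": 8.0e3},   # too light
    {"name": "C4", "age_Myr": 90.0, "mass_Msun": 1.2e4},
]

ymcs = [c["name"] for c in catalog
        if c["age_Myr"] <= 100.0 and c["mass_Msun"] >= 1.0e4]
print(ymcs)  # only C1 and C4 satisfy both criteria
```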
\begin{figure}
\figurenum{14} \resizebox{\hsize}{!}{\rotatebox{0}{\includegraphics{fig14.eps}}}
\caption{Spatial distribution for YMCs drawn with different sizes of the open circles indicating different mass ranges. The inner, solid ellipse and the dashed contour represent the 10 kpc ring and the outer ring from \citet{Gordon06}, while the dotted ellipse is the M31 disk/halo boundary as defined by \citet{rac91}. The small rectangle represents the ring splitting region in the southern part of
M31 disk, and the two filled black triangles represent M32 and NGC 205.}
\label{fig:fig14}
\end{figure}
\section{SUMMARY}
In this paper, we determined the ages and masses for a sample of M31 star clusters by comparing the multicolor photometry with theoretical SPS models. Multicolor photometric data are from the {\sl GALEX} FUV and NUV, broadband $UBVRI$, SDSS $ugriz$, 15 intermediate-band filters of BATC, and 2MASS $JHK_{\rm s}$, which constitute the SEDs covering $1538-20000$ \AA.
We discussed the spatial distribution, the distributions of ages and masses, and the correlations of ages and masses with structure parameters for the sample young clusters ($<2$ Gyr). The mean values of the age and mass of the young clusters are about 385 Myr and $2\times 10^4~{M_\odot}$, respectively. There are two distinct peaks in the age distribution, a highest peak at age $\sim$ 60 Myr and a secondary peak around 250 Myr, while the mass distribution shows a single peak around
$10^4~{M_\odot}$. There are several regions showing aggregations of young clusters around the 10 kpc ring and
the outer ring, indicating that the distribution of the young clusters correlates well with M31's SF regions.
The ages and masses show apparent correlations with some structure parameters. We also found a correlation between core radius $R_c$ and age, which has been studied by many authors.
Most of the young clusters have two-body relaxation times $t_{r,h}$ greater than their ages, indicating that they have not been well dynamically relaxed. We argued that these young clusters have not established thermal equilibrium.
The YMCs (age $\leq 100$ Myr and mass $\geq 10^4~{M_\odot}$) show an obvious aggregation around the split in the southern part of the M31 disk, suggesting a high efficiency of star formation, possibly triggered by a recent passage of a satellite galaxy (M32) through the M31 disk.
\acknowledgments
We would like to thank the anonymous referee for providing a rapid and thoughtful report that helped improve the original manuscript greatly. This work was supported by the Chinese National Natural Science Foundation grant Nos. 10873016, 10633020, 11073032, and 11003021, and by the National Basic Research Program of China (973 Program) No. 2007CB815403.
\section{Introduction}
The increasing amount of data generated by recent applications of distributed systems such as social media, sensor networks, and cloud-based databases has brought considerable attention to distributed data processing approaches, in particular the design of distributed algorithms that take into account the communication constraints and make coordinated decisions in a distributed manner~\cite{jad12,rah10,ala04,olf06,aum76,bor82,tsi84,gen86,coo90,deg74,gil93}. In a distributed system, the interactions between agents are usually restricted to follow certain constraints on the flow of information imposed by the network structure. Such information constraints cause the agents to be able to use only locally available information. This contrasts with centralized approaches, where all information and computation resources are available at a single location \cite{gub93,zhu05,vis97,sun04}.
One traditional problem in decision-making is that of parameter estimation or statistical learning. Given a set of noisy observations coming from a joint distribution, one would like to estimate a parameter or distribution that minimizes a certain loss function. For example, Maximum a Posteriori (MAP) or
Minimum Least Squared Error (MLSE) estimators fit a parameter to some model of the observations. Both MAP and MLSE estimators require some form of Bayesian posterior computation based on models that explain the observations for a given parameter. Computation of such posterior distributions depends on having exact models of the likelihood of the corresponding observations. This is one of the main difficulties of using Bayesian approaches in a distributed setting. A fully Bayesian approach is not possible because full knowledge of the network structure, or of other agents' likelihood models, may not be available~\cite{gal03,mos10,ace11}.
Following the seminal work of Jadbabaie et al.\ in \cite{jad12,jad13,sha13}, there have been many studies of distributed non-Bayesian update rules over networks. In this case, agents are assumed to be boundedly rational (i.e., they fail to aggregate information in a fully Bayesian way \cite{gol10}). Proposed non-Bayesian algorithms involve an aggregation step, typically consisting of weighted geometric or arithmetic average of the received beliefs~\cite{ace08,tsi84,jad03,ned13,ols14}, and a Bayesian update with the locally available data~\cite{ace11,mos14}.
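Such a non-Bayesian update, a weighted geometric average of neighbors' beliefs followed by a local Bayesian update, can be sketched on a finite hypothesis set; the two-agent network, mixing weights, and likelihood values below are illustrative and not tied to any specific paper's experiments.

```python
import numpy as np

# Each agent geometrically averages neighbors' beliefs (mixing matrix A)
# and then applies Bayes' rule with its local likelihood.
A = np.array([[0.5, 0.5],
              [0.5, 0.5]])            # doubly stochastic mixing weights
beliefs = np.full((2, 3), 1.0 / 3.0)  # 2 agents, uniform beliefs over 3 hypotheses

# likelihoods[i, theta] = p_theta^i(x_k^i) for the current observation x_k^i.
likelihoods = np.array([[0.6, 0.3, 0.1],
                        [0.5, 0.4, 0.1]])

log_mix = A @ np.log(beliefs)         # weighted geometric average, in log space
new_beliefs = np.exp(log_mix) * likelihoods
new_beliefs /= new_beliefs.sum(axis=1, keepdims=True)  # normalize per agent
print(new_beliefs)
```

Starting from uniform beliefs, one step reduces to normalizing each agent's own likelihood; subsequent steps mix in the neighbors' accumulated evidence.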
Recent studies have proposed variations of the non-Bayesian approach and proved consistent, geometric, and non-asymptotic convergence rates for a general class of distributed algorithms, ranging from asymptotic analysis \cite{sha13,lal14,qip11,qip15,sha15,rah15} to non-asymptotic bounds \cite{sha14,ned15,lal14b,ned14}, time-varying directed graphs \cite{ned15b}, and transmission and node failures \cite{su16}; see \cite{bar13,ned16c} for an extended literature review.
We build upon the work in~\cite{bir15} on non-asymptotic behaviors of Bayesian estimators to derive new non-asymptotic concentration results for distributed learning algorithms. In contrast to the existing results, which assume a finite hypothesis set, in this paper we extend the framework to countably many hypotheses and to a continuum of hypotheses. Our results show that, in general, the network structure will induce a transient time after which all agents learn at a network-independent rate, and this rate is geometric.
The contributions of this paper are as follows. We begin with a variational analysis of the Bayesian posterior and derive an optimization problem for which the posterior is a step of the Stochastic Mirror Descent method. We then use this interpretation to propose a distributed Stochastic Mirror Descent method for distributed learning. We show that this distributed learning algorithm concentrates the beliefs of all agents around the true parameter at an exponential rate. We derive high-probability non-asymptotic bounds for the convergence rate. In contrast to the existing literature, we analyze the case where the parameter space is compact. Moreover, we specialize the proposed algorithm to parametric models of an exponential family, which results in especially simple updates.
The rest of this paper is organized as follows. Section \ref{sec:setup} introduces the problem setup, it describes the networked observation model and the inference task. Section \ref{sec:variational} presents a variational analysis of the Bayesian posterior, shows the implicit representation of the posterior as steps in a stochastic program and extends this program to the distributed setup. Section \ref{sec:inference} specializes the proposed distributed learning protocol to the case of observation models that are members of the exponential family. Section \ref{sec:concentration} shows our main results about the exponential concentration of beliefs around the true parameter.
Section \ref{sec:concentration} begins by gently introducing our techniques by proving a concentration result in the case of countably many hypotheses, before turning to our main focus: the case when the set of hypotheses is a compact subset of $\mathbb{R}^d$. Finally, conclusions, open problems, and potential future work are discussed.
\textbf{\textit{Notation}}:
Random variables are denoted with upper-case letters, e.g. $X$,
while the corresponding lower-case are used for their realizations, e.g. $x$.
Time indices are denoted by subscripts, and the letter $k$ or $t$ is generally used.
Agent indices are denoted by superscripts, and the letters $i$ or $j$ are used.
We write $[A]_{ij}$ or $a_{ij}$ to denote the entry of a matrix $A$ in its $i$-th row and $j$-th column.
We use $A'$ for the transpose of a matrix $A$, and $x'$ for the transpose of a vector $x$.
The complement of a set $B$ is denoted as $B^c$.
\section{Problem Setup}\label{sec:setup}
We begin by introducing the learning problem from a centralized perspective, where all information is available at a single location. Later, we will generalize the setup to the distributed setting where only partial and distributed information is available.
Consider a probability space $(\Omega,\mathcal{F},\mathbb{P})$, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-algebra and $\mathbb{P}$ a probability measure. Assume that we observe a sequence of independent random variables $X_1,X_2,\hdots$, all taking values in some measurable space
$(\mathcal{X},\mathcal{A})$ and identically distributed with a common \textit{unknown} distribution $P$. In addition, we have a parametrized family of distributions ${{\mathscr{P} = \{P_{\theta} : \theta \in \Theta\}}}$, where the map $\Theta \to \mathscr{P}$ from parameter to distribution is one-to-one. Moreover,
the models in $\mathscr{P}$ are all dominated\footnote{A measure $\mu$ is dominated by (or absolutely continuous with respect to) a measure $\lambda$ if $\lambda(B) = 0$ implies $\mu(B)=0$ for every measurable set~$B$.} by a $\sigma$-finite measure $\lambda$, with corresponding densities $p_\theta = dP_\theta / d\lambda$. Assuming that there exists a $\theta^*$ such that $P_{\theta^*} = P$, the objective is to estimate $\theta^*$ based on the received observations $x_1,x_2,\hdots$.
Following a Bayesian approach, we begin with a prior on $\theta^*$ represented as a distribution on the space $\Theta$; then given a sequence of observations, we incorporate such knowledge into a posterior distribution following Bayes' rule. Specifically, we assume that $\Theta$ is equipped with a $\sigma$-algebra and a measure $\sigma$ and that $\mu_0$, which is our prior belief, is a probability measure on $\Theta$ which is dominated by $\sigma$. Furthermore, the densities $p_{\theta}(x)$ are measurable functions of $\theta$ for any
$x \in \mathcal{X}$, and also dominated by $\sigma$. We then define the belief $\mu_k$ as the posterior distribution given the sequence of observations up to time $k$, i.e.,
\begin{align}\label{bayes}
\mu_{k+1}(B) & \propto \int_{B} \prod\limits_{t=1}^{k+1} p_\theta(x_{t}) d\mu_0(\theta) .
\end{align} for any measurable set $B \subset \Theta$ (note that we used the independence of the observations at each time step).
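As a minimal numerical sketch of this recursion, one can discretize the parameter space and multiply the belief by the likelihood of each new sample before renormalizing; the Gaussian observation model and the true parameter below are illustrative choices, not part of the general setup.

```python
import numpy as np

# Grid-based sketch of the recursive Bayesian posterior: at each step the
# belief is multiplied by the likelihood of the new sample and renormalized.
theta_grid = np.linspace(-3.0, 3.0, 601)   # discretized parameter space Theta
belief = np.ones_like(theta_grid)
belief /= belief.sum()                     # uniform prior mu_0

rng = np.random.default_rng(0)
theta_star = 1.0
for _ in range(200):
    x = rng.normal(theta_star, 1.0)        # X_k ~ P = N(theta*, 1), i.i.d.
    likelihood = np.exp(-0.5 * (x - theta_grid) ** 2)  # p_theta(x), up to a constant
    belief *= likelihood
    belief /= belief.sum()                 # Bayes' rule on the grid

peak = theta_grid[np.argmax(belief)]
print(peak)  # the belief concentrates near theta* as samples accumulate
```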
Assuming that all observations are readily available at a centralized location, under appropriate conditions, the recursive Bayesian posterior in Eq.~\eqref{bayes} will be consistent in the sense that the beliefs $\mu_k$ will concentrate around $\theta^*$; see \cite{gho97,sch65,gho00} for a formal statement. Several authors have studied the rate at which this concentration occurs, in both asymptotic and non-asymptotic regimes \cite{bir15,gho07,riv12}.
Now consider the case where there is a network of $n$ agents observing the process $X_1,X_2,\hdots$, where $X_k$ is now a random vector belonging to the product space $\prod_{i=1}^{n}\mathcal{X}^i$, and
$X_k = [X^1_k,X^2_k,\hdots,X^n_k ]'$ consists of observations $X_k^i$ of the agents at time~$k$.
Specifically, agent $i$ observes the sequence $X_1^i,X_2^i,\hdots$,
where $X_k^i$ is now distributed according to an unknown distribution $P^i$.
Each agent $i$ has a private family of distributions ${{\mathscr{P}^i = \{P_{\theta}^i : \theta \in \Theta\}}}$ it would like to fit to the observations.
However, the goal is for {\em all} agents to agree on a {\em single} $\theta$ that best explains the complete set of observations. In other words, the agents collaboratively seek to find a $\theta^*$ that makes the distribution $\boldsymbol{P}_{\theta^*} = \prod_{i=1}^{n} P^i_{\theta^*} $ as close as possible to the unknown true distribution $P = \prod_{i=1}^n P^i$. Agents interact over a network defined by an undirected graph $\mathcal{G} = (V,E)$, where $V = \{1,2,\ldots,n\}$ is the set of agents and $E$ is a set of undirected edges,
i.e., $\left(i,j\right) \in E$ if and only if agents $i$ and $j$ can communicate with each other.
We study a simple interaction model where, at each step, agents exchange their beliefs with their neighbors in the graph. Thus at every time step $k$, agent $i$ will receive the sample $x_{k}^i$ from $X_k^i$ as well as the beliefs of its neighboring agents, i.e., it will receive $\mu_{k-1}^j$ for all $j$ such that $(i,j) \in E$. Applying a fully Bayesian approach runs into some obstacles in this setting, as agents know neither the network topology nor the private family of distributions of other agents.
Our goal is to design a learning procedure which is both distributed and consistent. That is, we are interested in a belief update algorithm that aggregates information in a non-Bayesian manner and guarantees that the beliefs of all agents will concentrate around $\theta^*$.
As a motivating example, consider the problem of distributed source localization \cite{rab04,rab05}. In this scenario, a network of $n$ agents receives noisy measurements of the distance to a source. The sensing capabilities of each sensor might be limited to a certain region.
The group objective is to jointly identify the location of the source. Figure \ref{location} shows a group of $7$ agents (circles) seeking to localize a source (star).
There is an underlying graph that indicates which nodes can exchange messages.
Moreover, each node has a sensing region indicated by the dashed circle around it.
Each agent observes signals proportional to the distance to the target. Since a target cannot be localized effectively from a single measurement of the distance, agents must cooperate to have any hope of achieving decent localization. For more details on the problem, as well as simulations of several discrete learning rules,
we refer the reader to our earlier paper~\cite{ned15} dealing with the case when the set $\Theta$ is finite.
\begin{figure}[h]
\centering
\includegraphics[width=0.3\textwidth]{location4}
\caption{Distributed source localization example.}
\label{location}
\end{figure}
\section{A variational approach to distributed Bayesian filtering}\label{sec:variational}
In this section, we make the observation that the posterior in Eq.~\eqref{bayes} corresponds to an iteration of a first-order optimization algorithm, namely Stochastic Mirror Descent~\cite{bec03,ned14b,dai15,rab15}. Closely related variational interpretations of Bayes' rule are well-known, and in particular have been given in \cite{zel88,wal06,hil12}. The specific connection to Stochastic Mirror Descent has not been noted, as far as we are aware. This connection will serve to motivate a distributed learning method which will be the main focus of the paper.
\vspace{-0.4cm}
\subsection{Bayes' rule as Stochastic Mirror Descent} Suppose we want to solve the following optimization problem
\begin{align}\label{central_dk}
\min_{\theta \in \Theta} F(\theta) & = D_{KL}(P\|P_\theta),
\end{align}
where $P$ is an unknown true distribution and $P_{\theta}$ is a parametrized family of distributions (see
Section~\ref{sec:setup}). Here, $D_{KL}(P\|Q)$ is the Kullback-Leibler (KL) divergence\footnote{$D_{KL}(P\|Q)$ between distributions $P$ and $Q$ (with $P$ dominated by $Q$) is defined to be
$D_{KL}(P\|Q) = - \mathbb{E}_P \left[\log dQ / dP\right].$
} between distributions $P$ and $Q$.
First note that we can rewrite Eq.~\eqref{central_dk} as
\begin{align*}
\min_{\theta \in \Theta} D_{KL}(P\|P_\theta) & = \min_{\pi \in \Delta_{\Theta} } \mathbb{E}_{\pi} D_{KL}(P\|P_\theta) \text{ \qquad s.t. } \theta\sim \pi \nonumber \\
& = \min_{\pi \in \Delta_{\Theta} } \mathbb{E}_{\pi} \mathbb{E}_{P} \left[-\log \frac{dP_{\theta}}{dP}\right],
\end{align*}
where $\Delta_{\Theta}$ is the set of all possible densities on the parameter space $\Theta$. Since the distribution $P$ does not depend on the parameter $\theta$, it follows that
\begin{align}\label{central_expectation}
\argmin_{\theta \in \Theta} D_{KL}(P\|P_\theta)& = \argmin_{\pi \in \Delta_{\Theta} } \mathbb{E}_{\pi} \mathbb{E}_{P} \left[-\log p_{\theta}(X)\right] \text{ \ where \ } \theta\sim \pi \text{\ and \ } X \sim P \nonumber \\
& = \argmin_{\pi \in \Delta_{\Theta} } \mathbb{E}_{P} \mathbb{E}_{\pi} \left[-\log p_{\theta}(X)\right] \text{ \ where \ } \theta\sim \pi \text{\ and \ } X \sim P .
\end{align}
The equality in Eq.~\eqref{central_expectation}, where we exchange the order of the expectations, follows from the Fubini-Tonelli theorem. Clearly, if $\theta^*$ minimizes Eq.~\eqref{central_dk}, then a distribution that puts all of its mass on $\theta^*$ minimizes Eq.~\eqref{central_expectation}.
The difficulty in evaluating the objective function in Eq.~\eqref{central_expectation} lies in the fact that the distribution $P$ is unknown. A generic approach to solving such problems is using algorithms from stochastic approximation methods, where the objective is minimized by constructing a sequence of gradient-based iterates whereby the true gradient of the objective (which is not available) is replaced with a gradient sample that is available at a given time.
A particular method that is relevant for the solution of stochastic programs of the form
\begin{align*}
\min_{x\in Z} \mathbb{E}\left[F(x,\Xi)\right],
\end{align*}
for some random variable $\Xi$ with unknown distribution, is the \textit{stochastic mirror descent} method \cite{jud08,ned14b,bec03,lan12}. The stochastic mirror descent approach constructs a sequence $\{x_k\}$
as follows:
\begin{align*}
x_{k+1} & = \argmin_{x \in Z} \left\lbrace \langle \nabla F(x,\xi_k), x\rangle
+ \frac{1}{\alpha_{k}} D_w(x,x_k)\right\rbrace ,
\end{align*}
for a realization $\xi_k$ of $\Xi$. Here, $\alpha_k>0$ is the step-size,
$\langle p, q \rangle = \int_{\Theta} p(\theta) q(\theta) d \sigma$, and $D_w(x,x_k)$ is a Bregman distance function associated with a distance-generating function $w$, i.e.,
\begin{align*}
D_w(x,z) = w(x)-w(z) - \delta w[z; x-z],
\end{align*} where $ \delta w[z; x-z]$ is the Fr\'{e}chet derivative of $w$ at $z$ in the direction of $x-z$.
For Eq.~\eqref{central_expectation}, Stochastic Mirror Descent generates a sequence of densities $\{d\mu_k\}$, as follows:
\begin{align}\label{central}
d\mu_{k+1} & = \argmin_{\pi \in \Delta_{\Theta}} \left\lbrace \langle - \log p_\theta(x_{k+1}), \pi\rangle
+ \frac{1}{\alpha_{k}} D_w(\pi,d\mu_k)\right\rbrace , \qquad \text{where } \theta \sim \pi.
\end{align} If we choose $w(x) = \int x \log x$ as the distance-generating function, then the corresponding Bregman distance is the Kullback-Leibler (KL) divergence $D_{KL}$. Additionally, by selecting $\alpha_k=1$, the solution to the optimization problem in Eq.~\eqref{central} can be computed explicitly, where for each $\theta \in \Theta$,
\begin{align*}
d\mu_{k+1}(\theta) & \propto p_\theta(x_{k+1}) d\mu_k(\theta),
\end{align*}
which is the particular definition for the posterior distribution according to Eq.~\eqref{bayes}
(a formal proof of this assertion is a special case of Proposition~\ref{mirror} shown later in the paper).
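To make this correspondence concrete, the following minimal Python sketch (an illustration, not part of the original analysis; the discretization and the Bernoulli likelihood are our own choices) performs one Bayes step on a discretized parameter space and checks numerically that the result minimizes the mirror-descent objective with the KL Bregman distance and $\alpha_k=1$:

```python
import math
import random

# Discretized parameter space Theta = {0.1, ..., 0.9} with a uniform prior.
thetas = [i / 10 for i in range(1, 10)]
prior = [1.0 / len(thetas)] * len(thetas)

def bern_lik(theta, x):
    # Bernoulli likelihood p_theta(x).
    return theta if x == 1 else 1.0 - theta

def bayes_update(prior, x):
    """One Bayes step: posterior proportional to likelihood times prior."""
    w = [bern_lik(t, x) * p for t, p in zip(thetas, prior)]
    z = sum(w)
    return [v / z for v in w]

def md_objective(pi, prior, x):
    """Mirror-descent objective: <-log p_theta(x), pi> + D_KL(pi || prior)."""
    return sum(p * (-math.log(bern_lik(t, x)) + math.log(p / q))
               for t, p, q in zip(thetas, pi, prior))

x = 1  # one observed sample
post = bayes_update(prior, x)

# The Bayes posterior should achieve the smallest objective value among
# randomly perturbed candidate densities on the simplex.
random.seed(0)
best = md_objective(post, prior, x)
for _ in range(100):
    w = [p * math.exp(0.1 * random.uniform(-1, 1)) for p in post]
    z = sum(w)
    assert md_objective([v / z for v in w], prior, x) >= best - 1e-9
```

The closed-form minimizer $\pi \propto p_\theta(x)\,d\mu_k(\theta)$ is exactly the Bayes posterior, which is what the perturbation check confirms.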
\subsection{Distributed Stochastic Mirror Descent}
Now, consider the distributed problem where the network of agents want to collectively solve the following optimization problem
\begin{align}\label{opt_problem}
\min_{\theta \in \Theta} F(\theta) & \triangleq D_{KL}\left(\boldsymbol{P}\|\boldsymbol{P}_{\theta}\right) = \sum\limits_{i=1}^n D_{KL}(P^i\|P^i_{\theta}) .
\end{align}
Recall that the distribution $\boldsymbol{P}$ is unknown (though, of course, agents gain information about it by observing samples from $X_1^i, X_2^i, \ldots$ and interacting with other agents) and that $\mathscr{P}^i$ containing all the distributions $P_{\theta}^i$ is a private family of distributions and is only available to agent $i$.
We propose the following algorithm as a distributed version of the stochastic mirror descent for the solution of problem Eq.~\eqref{opt_problem}:
\begin{align}\label{distributed}
d\mu_{k+1}^i & = \argmin_{\pi \in \Delta_{\Theta}} \Big\{\langle - \log p_\theta^i(x_{k+1}^i), \pi\rangle
+ \sum\limits_{j=1}^{n} a_{ij} D_{KL}(\pi\|d\mu_k^j) \Big\} \qquad \text{where } \theta \sim \pi,
\end{align}
with $a_{ij}>0$ denoting the weight that agent $i$ assigns to beliefs coming from its neighbor $j$. Specifically,
$a_{ij}>0$ if $(i,j)\in E$ or $j=i$, and $a_{ij}=0$ if $(i,j)\notin E$.
The optimization problem in Eq.~\eqref{distributed} has a closed-form solution.
In particular, the posterior density at each $\theta \in \Theta$ is given by
\begin{align*}
d\mu_{k+1}^i(\theta) & \propto p_{\theta}^i(x^i_{k+1}) \prod_{j=1}^{n}(d\mu_{k}^j(\theta))^{a_{ij}},
\end{align*}
or equivalently, the belief on a measurable set $B$ of an agent $i$ at time $k+1$ is
\begin{align}\label{protocol}
\mu_{k+1}^i(B) & \propto \int_{B} p_\theta^i(x_{k+1}^i) \prod_{j=1}^{n} (d\mu_k^j(\theta))^{a_{ij}} .
\end{align}
We state the correctness of this claim in the following proposition.
\begin{proposition}\label{mirror}
The probability measure $\mu_{k+1}^i$ over the set $\Theta$ defined by the update protocol in Eq.~\eqref{protocol} coincides, almost everywhere, with the update of
the distributed stochastic mirror descent algorithm applied to the optimization problem in Eq.~\eqref{opt_problem}.
\end{proposition}
\begin{proof}
We need to show that the density $d\mu_{k+1}^i$ associated with the probability measure $\mu_{k+1}^i$ defined by
Eq.~\eqref{protocol} minimizes the problem in Eq.~\eqref{distributed}.
To do so, let $G(\pi)$ be the objective function for the problem in Eq.~\eqref{distributed}, i.e.,
\begin{align*}
G(\pi) & = \langle - \log p_\theta^i(x_{k+1}^i), \pi\rangle
+ \sum\limits_{j=1}^{n} a_{ij} D_{KL}(\pi\|d\mu_k^j).
\end{align*}
Next, we add and subtract the KL divergence between $\pi$ and the density $d{\mu}_{k+1}^i$ to obtain
\begin{align*}
G(\pi) &
= \langle - \log p_\theta^i(x_{k+1}^i), \pi\rangle
+ \sum\limits_{j=1}^{n} a_{ij} D_{KL}(\pi\|d\mu_k^j) - D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right) + D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right)\\
&= \langle - \log p_\theta^i(x_{k+1}^i), \pi\rangle
+ D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right) + \sum\limits_{j=1}^{n}a_{ij} \mathbb{E}_\pi \log \frac{d{\mu}_{k+1}^i}{d{\mu}_{k}^j}.
\end{align*}
Now, from Eq.~\eqref{protocol} it follows that
\begin{align}\label{prop1}
G(\pi) &
= \langle - \log p_\theta^i(x_{k+1}^i), \pi\rangle
+ D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right) + \nonumber\\
& \qquad\qquad\sum\limits_{j=1}^{n}a_{ij} \mathbb{E}_\pi \log \left( \frac{1}{d{\mu}_{k}^j} \frac{1}{Z_{k+1}^i}\prod\limits_{l=1}^{n} \left(d{\mu}_{k}^l\right)^{a_{il}}p^i_{\theta}(x_{k+1}^i) \right) \nonumber \\
&
= \langle - \log p_\theta^i(x_{k+1}^i), \pi\rangle
+ D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right) \nonumber \\
& \qquad\qquad - \log Z_{k+1}^i + \langle \log p_\theta^i(x_{k+1}^i), \pi\rangle + \sum\limits_{j=1}^{n}a_{ij} \mathbb{E}_\pi \log \left( \frac{1}{d{\mu}_{k}^j} \prod\limits_{l=1}^{n} \left(d{\mu}_{k}^l\right)^{a_{il}} \right) \nonumber \\
&
= - \log Z_{k+1}^i + D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right) - \sum\limits_{j=1}^{n}a_{ij} \mathbb{E}_\pi \log d{\mu}_{k}^j + \sum\limits_{l=1}^{n}a_{il} \mathbb{E}_\pi \log d{\mu}_{k}^l \nonumber \\
&= - \log Z_{k+1}^i + D_{KL}\left(\pi\|d{\mu}_{k+1}^i\right)
\end{align}
where $Z_{k+1}^i = \int_{\Theta} p_\theta^i(x_{k+1}^i) \prod_{j=1}^{n} (d\mu_k^j(\theta))^{a_{ij}}$ is the corresponding normalizing constant.
The first term in Eq.~\eqref{prop1} does not depend on the distribution $\pi$. Thus,
we conclude that the solution to the problem in Eq.~\eqref{distributed} is
the density $\pi^* = d{\mu}_{k+1}^i$ as defined in Eq.~\eqref{protocol} (almost everywhere).
\qed
\end{proof}
We remark that the update in Eq.~\eqref{protocol} can be viewed as a two-step process: first, every agent constructs an aggregate belief using a weighted geometric average of its own belief and the beliefs of its neighbors; then, each agent performs a Bayes' update using the aggregated belief as a prior. We note that similar arguments in the context of distributed optimization have been proposed in \cite{rab15,li16} for general Bregman distances. In the case when the number of hypotheses is finite,
variations on this update rule were previously analyzed in \cite{sha14,ned15,lal14b}.
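For finitely many hypotheses, the two-step structure can be sketched in a few lines of Python (an illustrative fragment, not from the original text; the weight matrix, beliefs, and likelihood values in the usage example are hypothetical):

```python
import math

def distributed_update(beliefs, A, liks):
    """One step of the update rule for finitely many hypotheses:
    (1) weighted geometric average of neighbors' beliefs,
    (2) Bayes step with the agent's own likelihood."""
    n, m = len(beliefs), len(beliefs[0])
    new = []
    for i in range(n):
        # Step 1: geometric aggregation prod_j mu_k^j(theta)^{a_ij}.
        agg = [math.exp(sum(A[i][j] * math.log(beliefs[j][t])
                            for j in range(n))) for t in range(m)]
        # Step 2: Bayes update with p^i_theta(x) and normalization.
        w = [liks[i][t] * agg[t] for t in range(m)]
        z = sum(w)
        new.append([v / z for v in w])
    return new

# Two agents, two hypotheses, equal mixing weights.
A = [[0.5, 0.5], [0.5, 0.5]]
beliefs = [[0.5, 0.5], [0.5, 0.5]]
liks = [[0.9, 0.1], [0.8, 0.2]]   # p^i_theta(x) for the current observation
beliefs = distributed_update(beliefs, A, liks)
```

Starting from uniform beliefs, the geometric average is again uniform, so one step simply normalizes each agent's own likelihood vector.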
\vspace{-0.4cm}
\subsection{An example}
\begin{example}\label{ex1}
Consider a group of $4$ agents, connected over a network as shown in Figure \ref{network}.
A set of Metropolis weights for this network is given by the following matrix:
\begin{align*}
A & = \left[\begin{array}{c c c c}
2/3 & 1/6 & 0 & 1/6 \\
1/6 & 2/3 & 1/6 & 0 \\
0 & 1/6 & 2/3 & 1/6 \\
1/6 & 0 & 1/6 & 2/3
\end{array} \right].
\end{align*}
\begin{figure}[h!]
\centering
\begin{overpic}[width=0.4\textwidth]{sample_agents}
\put(4,19){{ $1$}}
\put(50,32){{ $2$}}
\put(50,7){{ $3$}}
\put(94,20){{ $4$}}
\end{overpic}
\caption{A network of $4$ agents.}
\label{network}
\end{figure}
Furthermore, assume that each agent is observing a Bernoulli random variable such that $X_k^1 \sim \text{Bern}(0.2)$, $X_k^2 \sim \text{Bern}(0.4)$, $X_k^3 \sim \text{Bern}(0.6)$ and $X_k^4 \sim \text{Bern}(0.8)$.
In this case, the parameter space is $\Theta =[0,1]$. Thus, the objective is to collectively find a parameter $\theta^*$ that best explains the joint observations in the sense of the problem in Eq. \eqref{opt_problem}, i.e.
\begin{align*}
\min_{\theta \in [0,1]} F(\theta) & = \sum_{j=1}^{4} D_{KL}(\text{Bern}(\theta^j)\|\text{Bern}(\theta)) = \sum_{j=1}^{4} \left( \theta^j \log\frac{\theta^j}{\theta} + (1-\theta^j)\log\frac{1-\theta^j}{1-\theta}\right)
\end{align*}
where $\theta^1 = 0.2$, $\theta^2 = 0.4$, $\theta^3 = 0.6$ and $\theta^4 = 0.8$.
One can see that the optimal solution is $\theta^* = 0.5$ either by solving the first-order optimality conditions explicitly or by exploiting the symmetry of the objective function.
Assume that all agents start with a common belief at time $0$ following a Beta distribution, i.e.,
$\mu_0^i = \text{Beta}(\alpha_0,\beta_0)$ (this specific choice will be motivated in the next section).
Then, the proposed algorithm in Eq. \eqref{protocol} will generate a belief at time $k+1$ that also has a Beta distribution. Moreover, $\mu^i_{k+1} = \text{Beta}(\alpha_{k+1}^i,\beta_{k+1}^i)$, where
\begin{align*}
\alpha_{k+1}^i & = \sum_{j=1}^{n}a_{ij}\alpha_{k}^j + x_{k+1}^i, \qquad
\beta_{k+1}^i = \sum_{j=1}^{n}a_{ij}\beta_{k}^j + 1 - x^i_{k+1}.
\end{align*}
\end{example}
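The Beta-parameter recursion of this example can be simulated directly; the following minimal Python sketch (an illustration, not part of the original example) uses the weight matrix and Bernoulli parameters above and checks that the posterior means of all four agents approach $\theta^* = 0.5$:

```python
import random

# Simulation of the Beta-parameter recursion over the 4-agent network.
random.seed(1)
n = 4
A = [[2/3, 1/6, 0, 1/6],
     [1/6, 2/3, 1/6, 0],
     [0, 1/6, 2/3, 1/6],
     [1/6, 0, 1/6, 2/3]]
p = [0.2, 0.4, 0.6, 0.8]   # Bernoulli parameters of the 4 agents
alpha = [1.0] * n          # common Beta(1, 1) prior
beta = [1.0] * n

for _ in range(10000):
    x = [1 if random.random() < p[i] else 0 for i in range(n)]
    alpha = [sum(A[i][j] * alpha[j] for j in range(n)) + x[i]
             for i in range(n)]
    beta = [sum(A[i][j] * beta[j] for j in range(n)) + 1 - x[i]
            for i in range(n)]

# Posterior means of the Beta beliefs; all approach theta* = 0.5.
means = [alpha[i] / (alpha[i] + beta[i]) for i in range(n)]
```

Since $A$ is doubly stochastic and the initial parameters are equal, $\alpha_k^i + \beta_k^i = k + 2$ exactly for every agent, while the averaging drives each posterior mean to the common minimizer.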
To summarize, we have given an interpretation of Bayes' rule as an instance of Stochastic Mirror Descent. We have shown how this interpretation motivates a distributed update rule. In the next section, we discuss explicit forms of this update rule for parametric models coming from exponential families.
\section{Cooperative Inference for Exponential Families}\label{sec:inference}
We begin with the observation that, for a general class of models $\{\mathscr{P}^i\}$, it is not clear whether the computation of the posterior beliefs $\mu_{k+1}^i$ is tractable. Indeed, computation of $\mu_{k+1}^i$ involves solving an integral of the form
\begin{align}\label{integral_theta}
\int_{\Theta}p_\theta^i(x_{k+1}^i) \prod_{j=1}^{n} (d\mu_k^j(\theta))^{a_{ij}}.
\end{align}
There is an entire area of research, called \textit{variational Bayes' approximations}, dedicated to efficiently approximating integrals that appear in such contexts \cite{fox12,bea03,dai16}.
The purpose of this section is to show that for exponential families~\cite{koo36,dar35} there are closed-form expressions for the posteriors.
\begin{definition}
The exponential family, for a parameter $\theta = [\theta^1,\theta^2,\hdots,\theta^s]'$, is the set of probability distributions whose density can be represented as
\begin{align*}
p_\theta(x) & = H(x) \exp(M(\theta)'T(x)-C(\theta))
\end{align*}
for specific functions $H(\cdot)$, $M(\cdot)$, $T(\cdot)$ and $C(\cdot)$, with ${{M(\theta) = [M(\theta^1),M(\theta^2),\hdots,M(\theta^s)]'}}$. The function $M(\theta)$ is usually referred to as
the natural parameter.
When $M(\theta)$ is used as a parameter itself, it is said that the distribution is in its canonical form. In this case, we can write the density as
\begin{align*}
p_M(x) & = H(x) \exp(M'T(x)-C(M)),
\end{align*}
with $M$ being the parameter.
\end{definition}
Members of the exponential family include the Normal, Poisson, Exponential, Gamma, Bernoulli, and Beta distributions, among others~\cite{gel14}. In our case,
we will take advantage of the existence of \textit{conjugate priors} for all members of the exponential family. The definition of the conjugate prior is given below.
\begin{definition}
Assume that the prior distribution $p$ on a parameter space $\Theta$ belongs to the exponential family. Then, the distribution $p$ is referred to as the \textit{conjugate prior} for a likelihood function $p_\theta(x)$ if the posterior distribution $p(\theta|x) \propto p_\theta(x) p(\theta)$ is in the same family as the prior.
\end{definition}
Thus, if the belief density at some time $k$ is a conjugate prior for our likelihood model, then our belief at time $k+1$ will be of the same class as our prior. For example, if a likelihood function follows a Gaussian form, then having a Gaussian prior will produce a Gaussian posterior. This property simplifies the structure of the belief update procedure, since we can express the evolution of the beliefs generated by the proposed algorithm in
Eq.~\eqref{protocol} in terms of the evolution of the natural parameters of the exponential-family member to which the beliefs belong.
We now proceed to provide more details.
First, the conjugate prior for a member of the exponential family can be written as
\begin{align*}
p_{\chi,\nu}(M) & = f(\chi,\nu)\exp(M'\chi - \nu C(M)),
\end{align*}
which is a distribution over the natural parameters $M$, where $\nu>0$ and $\chi \in \mathbb{R}^s$ are the parameters of the conjugate prior. Then, it can be shown that the posterior distribution,
given some observation $x$, has the same exponential form as the prior with updated parameters as
follows:
\begin{align}\label{posterior_expo}
p_{\chi,\nu}(M|x) & = p_{\chi+T(x),\nu + 1}(M) \propto p_{M}(x)\,p_{\chi,\nu}(M).
\end{align}
On the other hand, for a set of $n$ priors from the same exponential family, the weighted geometric average
also has a closed form in terms of the conjugate parameters.
\begin{proposition}\label{geo_expo}
Let $(p_{\chi^1,\nu^1}(M),\hdots,p_{\chi^n,\nu^n}(M) )$ be a set of $n$ distributions, all in the same class in the exponential family, i.e., $p_{\chi^i,\nu^i}(M)=f(\chi^i,\nu^i)\exp(M'\chi^i - \nu^i C(M))$ for $i=1,\hdots,n$. Then, for a set $(\alpha_1,\hdots,\alpha_n)$ of weights with $\alpha_i >0$ for all $i$,
the probability distribution defined as
\begin{align*}
p_{\bar \chi,\bar \nu}(M) & =\frac{\prod_{i=1}^{n}(p_{\chi^i,\nu^i}(M))^{\alpha_i}}{\int\prod_{j=1}^{n}(p_{\chi^j,\nu^j}(M))^{\alpha_j}\,dM},
\end{align*}
belongs to the same class in the exponential family with parameters $\bar \chi = \sum_{i=1}^{n}\alpha_i \chi^i$ and $\bar \nu = \sum_{i=1}^{n}\alpha_i \nu^i$.
\end{proposition}
\begin{proof}
We write the explicit geometric product, and discard the constant terms
\begin{align*}
p_{\bar \chi,\bar \nu}(M)
& \propto \prod_{i=1}^{n}(f(\chi^i,\nu^i)\exp(M'\chi^i - \nu^i C(M)))^{\alpha_i} \\
& \propto \exp\left(M' \sum_{i=1}^{n}\alpha_i \chi^i - \sum_{i=1}^{n}\alpha_i\nu^i C(M)\right) .
\end{align*}
The last line provides explicit values for the parameters of the new distribution.
\qed
\end{proof}
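As a quick numerical illustration of the proposition (a sketch of our own, with arbitrarily chosen parameters), the weighted geometric average of two Gaussian densities is again Gaussian, with the precision and precision-times-mean averaging linearly:

```python
import math

def gauss(x, m, v):
    # Gaussian density with mean m and variance v.
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# Two Gaussian "priors" and convex weights.
(m1, v1), (m2, v2) = (0.0, 1.0), (3.0, 2.0)
a1, a2 = 0.4, 0.6

# Predicted parameters: natural parameters average linearly.
prec = a1 / v1 + a2 / v2                       # averaged precision
m_bar = (a1 * m1 / v1 + a2 * m2 / v2) / prec   # averaged mean
v_bar = 1.0 / prec

# The unnormalized geometric mean must be proportional to N(m_bar, v_bar):
# the ratio below is constant in x.
ratios = [gauss(x, m1, v1) ** a1 * gauss(x, m2, v2) ** a2
          / gauss(x, m_bar, v_bar) for x in (-1.0, 0.0, 1.0, 2.5)]
```

The constant ratio is exactly the normalizing factor appearing in the denominator of the proposition.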
The relations in Eq.~\eqref{posterior_expo} and Proposition~\ref{geo_expo} allow us to write the algorithm
in~Eq.~\eqref{protocol} in terms of the natural parameters of the priors, as shown by the following proposition.
\begin{proposition}\label{prop_conjugate}
Assume that the belief density $d\mu_k^i$ at time $k$ has an exponential form with natural parameters
$\chi^i_k$ and $\nu^i_k$ for all $1 \leq i \leq n$, and that these densities are conjugate priors of the likelihood models $p^i_\theta$. Then, the belief density at time $k+1$, as computed in the update rule in Eq.~\eqref{protocol}, has the same form as the beliefs at time $k$ with the natural parameters given by
\begin{equation*}\label{protocol_natural}
\chi^i_{k+1} = \sum_{j=1}^{n}a_{ij} \chi^j_k + T^i(x^i_{k+1}), \quad
\nu_{k+1}^i= \sum_{j=1}^{n}a_{ij} \nu^j_k + 1\qquad\hbox{for all } i=1,\ldots,n.
\end{equation*}
\end{proposition}
The proof of Proposition \ref{prop_conjugate} follows immediately from Eq.~\eqref{posterior_expo} and Proposition~\ref{geo_expo}.
Proposition \ref{prop_conjugate} simplifies the algorithm in Eq.~\eqref{protocol} and facilitates its use in traditional estimation problems where members of the exponential family are used. We next illustrate this by discussing a number of distributed estimation problems with likelihood models coming from exponential families.
\vspace{-0.4cm}
\subsection{Distributed Poisson Filter}
Consider an observation model where the agent signals follow Poisson distributions, i.e., $X^i_k \sim \text{Poisson}(\lambda^i)$ for all $i$. In this case, the optimization problem to be solved is
\begin{align*}
\min_{\lambda >0} F(\lambda) & = \sum_{j=1}^{n} D_{KL}(\text{Poisson}(\lambda^j)\|\text{Poisson}(\lambda)) ,
\end{align*}
or equivalently,
$
\min_{\lambda >0} \{n\lambda-\sum\limits_{i=1}^{n} \lambda^i\log \lambda\},
$
whose minimizer is the average rate $\lambda^* = \frac{1}{n}\sum_{i=1}^{n}\lambda^i$.
The conjugate prior of a Poisson likelihood model is the Gamma distribution.
Thus, if at time $k$ the beliefs are given by $\mu_k^i = \text{Gamma}(\alpha_k^i,\beta_k^i)$ for all $i$,
then the beliefs at time $k+1$ are $\mu_{k+1}^i= \text{Gamma}(\alpha_{k+1}^i,\beta_{k+1}^i)$, where
\begin{align*}
\alpha_{k+1}^i & = \sum_{j=1}^{n}a_{ij}\alpha_{k}^j + x_{k+1}^i \qquad \text{and} \qquad
\beta_{k+1}^i = \sum_{j=1}^{n}a_{ij}\beta_{k}^j + 1 .
\end{align*}
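The Gamma-parameter recursion can be simulated directly; the following Python sketch (an illustration with hypothetical rates, not from the original text) runs it over the 4-cycle network used earlier and checks that the Gamma posterior means approach the arithmetic mean of the individual rates:

```python
import math
import random

# Distributed Poisson filter: Gamma-parameter recursion over a 4-cycle.
random.seed(2)
n = 4
A = [[2/3, 1/6, 0, 1/6],
     [1/6, 2/3, 1/6, 0],
     [0, 1/6, 2/3, 1/6],
     [1/6, 0, 1/6, 2/3]]
lam = [2.0, 4.0, 6.0, 8.0]   # hypothetical Poisson rates

def poisson_sample(rate):
    """Knuth's method; adequate for moderate rates."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

alpha = [1.0] * n   # Gamma(1, 1) priors
beta = [1.0] * n
for _ in range(5000):
    x = [poisson_sample(lam[i]) for i in range(n)]
    alpha = [sum(A[i][j] * alpha[j] for j in range(n)) + x[i]
             for i in range(n)]
    beta = [sum(A[i][j] * beta[j] for j in range(n)) + 1 for i in range(n)]

# Gamma posterior means; each approaches the average rate (here 5.0).
rates = [alpha[i] / beta[i] for i in range(n)]
```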
\vspace{-0.4cm}
\subsection{Distributed Gaussian Filter with known variance}
Assume each agent observes a signal of the form $X^i_k = \theta^i +\epsilon^i_k$, where $\theta^i$ is finite and unknown, while $\epsilon^i \sim \mathcal{N}(0,1/\tau^i)$, with $\tau^i = 1/(\sigma^i)^2$, is known by agent $i$.
The optimization problem to be solved is
\begin{align*}
\min_{\theta \in \mathbb{R}} F(\theta) & = \sum_{j=1}^{n} D_{KL}(\mathcal{N}(\theta,1/\tau^j)\|\mathcal{N}(\theta^j,1/\tau^j)) ,
\end{align*}
or equivalently
$
\min_{\theta \in \mathbb{R}} \sum_{j=1}^{n} \tau^j(\theta - \theta^j)^2.
$
In this case, the likelihood models, the prior, and the posterior are all Gaussian. Thus, if the beliefs of the agents at time $k$ are Gaussian, i.e., $\mu_k^i = \mathcal{N}(\theta^i_k,1/\tau^i_k)$ for all $i=1,\hdots,n$,
then their beliefs at time $k+1$ are also Gaussian. In particular, they are given
by ${{\mu_{k+1}^i = \mathcal{N}(\theta^i_{k+1},1/\tau^i_{k+1})}}$ for all $i=1,\hdots,n$, with
\begin{align*}
\tau_{k+1}^i & = \sum\limits_{j=1}^{n}a_{ij}\tau_k^j + \tau^i \qquad \text{and} \qquad
\theta_{k+1}^i = \frac{1}{\tau_{k+1}^i}\left( \sum\limits_{j=1}^{n} a_{ij} \tau_k^j \theta_k^j + x_{k+1}^i\tau^i \right).
\end{align*}
We note that this specific setup is known as Gaussian learning and has been studied in \cite{ned16,chu16}, where the expected parameter estimator is shown to converge at an $O(1/k)$ rate.
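A minimal simulation of this filter (our own sketch, with hypothetical means and precisions) shows each agent's estimate approaching the precision-weighted average $\sum_j \tau^j \theta^j / \sum_j \tau^j$:

```python
import random

# Distributed Gaussian filter with known variances, over a 4-cycle.
random.seed(3)
n = 4
A = [[2/3, 1/6, 0, 1/6],
     [1/6, 2/3, 1/6, 0],
     [0, 1/6, 2/3, 1/6],
     [1/6, 0, 1/6, 2/3]]
means = [1.0, 2.0, 3.0, 4.0]   # hypothetical true means theta^i
precs = [1.0, 2.0, 1.0, 2.0]   # known precisions tau^i

theta = [0.0] * n               # N(0, 1) priors
tau = [1.0] * n
for _ in range(5000):
    x = [random.gauss(means[i], (1.0 / precs[i]) ** 0.5) for i in range(n)]
    new_tau = [sum(A[i][j] * tau[j] for j in range(n)) + precs[i]
               for i in range(n)]
    theta = [(sum(A[i][j] * tau[j] * theta[j] for j in range(n))
              + x[i] * precs[i]) / new_tau[i] for i in range(n)]
    tau = new_tau

# All estimates approach sum(tau^j theta^j) / sum(tau^j) = 16/6 here.
```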
\vspace{-0.4cm}
\subsection{Distributed Gaussian Filter with unknown variance}
In this case, the agents want to cooperatively estimate the value of a variance. Specifically, based on observations of the form $X^i_k = \theta^i +\epsilon^i_k$, with $\epsilon^i_k \sim \mathcal{N}(0,1/\tau^i)$, where $\theta^i$ is known and $\tau^i$ is unknown to agent $i$, they want to solve the following problem
\begin{align*}
\min_{\tau > 0} F(\tau) & = \sum_{j=1}^{n} D_{KL}(\mathcal{N}(\theta^j,1/\tau^j)\|\mathcal{N}(\theta^j,1/\tau)) .
\end{align*}
We choose the Scaled Inverse Chi-Squared\footnote{The density function of the Scaled Inverse Chi-Squared distribution is defined for $x>0$ as ${{p_{\nu,\tau}(x) = \frac{(\tau \nu /2)^{\nu/2}}{\Gamma(\nu/2)}\frac{\exp(-\frac{\nu \tau}{2x})}{x^{1+\nu/2}}}}$.} distribution as our prior. Thus, if $\mu_k^i = \text{Scaled Inv}\text{-}\chi^2(\nu_k^i,\tau_k^i)$
for all $i$, then the beliefs at time $k+1$ are given by $\mu_{k+1}^i = \text{Scaled Inv}\text{-}\chi^2(\nu_{k+1}^i,\tau_{k+1}^i)$ for all $i$, with
\begin{align*}
\nu_{k+1}^i & = \sum\limits_{j=1}^{n}a_{ij}\nu_k^j+1 \qquad \text{and} \qquad
\tau_{k+1}^i = \frac{1}{\nu_{k+1}^i}\left( \sum\limits_{j=1}^{n}a_{ij}\nu_k^j \tau_k^j +(x_{k+1}^i - \theta^i)^2\right) .
\end{align*}
\vspace{-0.4cm}
\subsection{Distributed Gaussian Filter with unknown mean and variance}
In the preceding examples, we have considered the cases when either the mean or the variance is known. Here, we will assume that both the mean and the variance are unknown and need to be estimated.
Explicitly, we still have noisy observations ${{X^i_k = \theta^i +\epsilon^i_k}}$, with $\epsilon^i_k \sim \mathcal{N}(0,1/\tau^i)$, and want to solve
\begin{align*}
\min_{\theta \in \mathbb{R},\tau > 0} F(\theta,\tau) & = \sum_{j=1}^{n} D_{KL}(\mathcal{N}(\theta^j,1/\tau^j)\|\mathcal{N}(\theta,1/\tau)) .
\end{align*}
The Normal-Inverse-Gamma distribution serves as conjugate prior for the likelihood model over the parameters $(\theta,\tau)$. Specifically, we assume that the beliefs at time $k$ are given by
\begin{align*}
\mu_k^i &= \text{Normal-Inv-Gamma}(\theta^i_k,\tau_{k}^i,\alpha_k^i,\beta_k^i)\qquad\hbox{for all } i=1,\ldots,n.
\end{align*}
Then, the beliefs at time $k+1$ will have a Normal-Inverse-Gamma distribution with the following parameters
\begin{align*}
\tau_{k+1}^i & = \sum_{j=1}^{n}a_{ij}\tau_k^j +1, \qquad
\theta_{k+1}^i = \frac{\sum_{j=1}^{n}a_{ij}\tau_k^j\theta_k^j +x^i_{k+1}}{\tau_{k+1}^i},\\
\alpha^i_{k+1} & = \sum_{j=1}^{n}a_{ij}\alpha_k^j + 1/2 ,\qquad
\beta^i_{k+1} = \sum_{j=1}^{n}a_{ij}\beta_{k}^j + \frac{\sum_{j=1}^{n}a_{ij}\tau^j_k(x^i_{k+1} - \theta^j_k)^2}{2\tau_{k+1}^i}.
\end{align*}
\section{Belief Concentration Rates}\label{sec:concentration}
We now turn to the presentation of our main results which concern the rate at which beliefs generated by the update rule in Eq.~\eqref{protocol} concentrate around the true parameter $\theta^*$. We will break up our analysis into two cases. Initially, we will focus on the case when $\Theta$ is a countable set, and will prove a concentration result for a ball containing the optimal hypothesis having finitely many hypotheses outside it. We will use this case to gently introduce the techniques we will use. We will then turn to our main scenario of interest, namely when $\Theta$ is a compact subset of $\mathbb{R}^d$. Our proof techniques use concentration arguments for beliefs on Hellinger balls from the recent work~\cite{bir15} which,
in turn, builds on the classic paper~\cite{lecam73}.
We begin with two subsections focusing on background information, definitions, and assumptions.
\vspace{-0.4cm}
\subsection{Background: Hellinger Distance and Coverings}
We equip the set of probability distributions $\mathscr{P}$ with the Hellinger distance\footnote{The Hellinger distance between two probability distributions $P$ and $Q$ is given by,
\begin{align*}
h^2\left(P,Q\right) & = \frac{1}{2} \int \left(\sqrt{\frac{dP}{d\lambda}}-\sqrt{\frac{dQ}{d\lambda}}\right)^2d\lambda,
\end{align*}
where $P$ and $Q$ are dominated by $\lambda$. Note that this formula is for the square of the Hellinger distance.}
to obtain the {\it metric} space $\left(\mathscr{P},h\right)$. The metric space induces a topology, where we can define an open ball $\mathcal{B}_r(\theta)$ with a radius $r>0$ centered at a point $\theta \in \Theta$, which we use to construct a special covering of subsets $B\subset \mathscr{P}$.
\begin{definition}\label{h_balls}
Define an $n$-Hellinger ball of radius $r$ centered at $\theta$ as
\begin{align*}
\mathcal{B}_r(\theta) & = \left\lbrace \hat\theta \in \Theta \left| \sqrt{\frac{1}{n} \sum_{i=1}^{n} h^2\left(P^i_{\theta},P^i_{\hat\theta}\right) } \right. \leq r \right\rbrace .
\end{align*}
When no center is specified, the ball is assumed to be centered at $\theta^*$, i.e., $\mathcal{B}_r = \mathcal{B}_r(\theta^*) $.
\end{definition}
Given an $n$-Hellinger ball of radius $r$, we will use the following notation for a covering of its complement $\mathcal{B}_{r}^c$. Specifically, we are going to express $\mathcal{B}_{r}^c$ as the union of finitely many disjoint concentric annuli.
Let $r>0$ and let $\{r_l\}$ be a finite, strictly decreasing sequence such that ${r_1 =1}$ and $r_L = r$. Now, express the set $\mathcal{B}_{r}^c $ as the union of the annuli generated by the sequence $\{r_l\}$ as
\begin{align*}
\mathcal{B}_{r}^c & = \bigcup_{l = 1}^{L-1} \mathcal{F}_l,
\end{align*}
where $\mathcal{F}_l = \mathcal{B}_{r_{l}} \setminus\mathcal{B}_{r_{l+1}} $.
\vspace{-0.4cm}
\subsection{Background: Assumptions on Network and Mixing Weights}
Naturally, we need some assumptions on the matrix $A$. For one thing, the matrix $A$ has to be ``compatible'' with the underlying graph, in that information from node $i$ should not affect node $j$ if there is no edge from $i$ to $j$ in $\mathcal{G}$. At the other extreme, we want to rule out the possibility that $A$ is the identity matrix, which in terms of Eq. (\ref{protocol}) means nodes do not talk to their neighbors. Formally, we make the following assumption.
\begin{assumption}\label{assum:graph}
The graph $\mathcal{G}$ and matrix $A$ are such that:
\begin{enumerate}[(a)]
\item $A$ is doubly-stochastic with $\left[A\right]_{ij} = a_{ij} > 0$ for $i\ne j$ if and only if $(i,j)\in E$.
\item $A$ has positive diagonal entries, $a_{ii}>0$ for all $i \in V $.
\item The graph $\mathcal{G}$ is connected.
\end{enumerate}
\end{assumption}
Assumption~\ref{assum:graph} is common in the distributed optimization literature. The construction of a set of weights satisfying Assumption~\ref{assum:graph} can be done in a distributed way, for example,
by choosing the so-called ``lazy Metropolis'' matrix, which is a stochastic matrix given by
\begin{align*}
a_{ij} = \left\{ \begin{array}{l l}
\frac{1}{2 \max \left\{ d^i+1,d^j+1 \right\} } & \quad \text{if $(i,j) \in E$},\\
0 & \quad \text{if $(i,j) \notin E$},
\end{array} \right.
\end{align*}
where $d^i$ is the degree (the number of neighbors) of node $i$. Note that although the above formula only gives the off-diagonal entries of $A$, it uniquely defines the entire matrix (the diagonal elements are uniquely defined via the stochasticity of $A$). To choose the weights corresponding to a lazy Metropolis matrix, agents will need to spend an additional round at the beginning of the algorithm broadcasting their degrees to their neighbors.
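The construction above can be implemented in a few lines; the following Python sketch (illustrative, not from the original text) builds the lazy Metropolis matrix from an edge list and completes the diagonal via stochasticity:

```python
def lazy_metropolis(edges, n):
    """Lazy Metropolis matrix for an undirected graph on n nodes.
    Off-diagonal entries follow the formula above; diagonals absorb
    the remaining mass so that each row sums to one."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        w = 1.0 / (2 * max(deg[i] + 1, deg[j] + 1))
        A[i][j] = A[j][i] = w
    for i in range(n):
        A[i][i] = 1.0 - sum(A[i])   # stochasticity fixes the diagonal
    return A

# A 4-cycle: every node has degree 2, so each off-diagonal weight is
# 1/(2*3) = 1/6 and each diagonal weight is 2/3.
A = lazy_metropolis([(0, 1), (1, 2), (2, 3), (3, 0)], 4)
```

Since the matrix is symmetric and row-stochastic by construction, it is doubly stochastic, with positive diagonal, as required by Assumption 1.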
Assumption~\ref{assum:graph} guarantees that $A^t \rightarrow (1/n) {\bf 1} {\bf 1}^T$ as $t \to \infty$, where ${\bf 1}$ is the vector of all ones. We will use the following result, which provides a convergence rate for the difference
$|A^t - (1/n) {\bf 1} {\bf 1}^T|$, based on the results from~\cite{sha14} and~\cite{ned15}:
\begin{lemma}\label{shahin}
Let Assumption \ref{assum:graph} hold. Then, the matrix $A$ satisfies the following relation:
\begin{align*}
\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} \left| \left[A^{k-t}\right]_{ij} - \frac{1}{n} \right| & \leq \frac{4 \log n}{1-\delta} \qquad \mbox{ for } i = 1, \ldots, n,
\end{align*}
where $\delta = 1-\eta/(4n^2)$ with $\eta$ being the smallest positive entry of the matrix $A$. Furthermore, if $A$ is a lazy Metropolis matrix associated with the graph $\mathcal{G}$,
then $\delta = 1- 1 / \mathcal{O}(n^2)$.
\end{lemma}
\vspace{-0.4cm}
\subsection{Concentration for the Case of Countable Hypotheses}
We now turn to proving a concentration result when the set $\Theta$ of hypotheses is countable. We will consider the case of a ball in the Hellinger distance containing a countable number of hypotheses, including the correct one, and having only finitely many hypotheses outside it; we will show exponential convergence of beliefs to that ball. The purpose is to gently introduce the techniques we will use later in the case of a compact set of hypotheses.
In the case when the number of hypotheses is countable, the density update in Eq.~\eqref{protocol} can be restated in a simpler form for discrete beliefs over the parameter space $\Theta$ as
\begin{align}\label{protocol_dis}
\mu_{k+1}^i(\theta) & \propto p_\theta^i(x_{k+1}^i) \prod_{j=1}^{n} (\mu_k^j(\theta))^{a_{ij}} .
\end{align}
We will fix the radius $r$, and our goal will be to prove a concentration result for a Hellinger ball of radius $r$ around the optimal hypothesis $\theta^*$. We partition the complement of this ball $\mathcal{B}_r^c$ as described above into annuli $\mathcal{F}_l$. We introduce the notation $\mathcal{N}_l$ to denote the number of hypotheses within the annulus $\mathcal{F}_l$. We refer the reader to Figure \ref{lcovering_dis} which shows a set of probability distributions, represented as black dots, where the true distribution $P$ is represented by a star.
\begin{figure}[ht]
\centering
\begin{overpic}[width=0.4\textwidth]{covering_dis}
\put(38,39){{\small $\mathcal{B}_r $}}
\put(48,53){{\small $P$}}
\put(58,73){{\small $P_{\theta}$}}
\end{overpic}
\caption{Creating a covering for a ball $\mathcal{B}_r $. $\bigstar$ represents the correct hypothesis
$\boldsymbol{P}_{\theta^*}$, $\bullet$ indicates the locations of other hypotheses, and the dashed lines indicate the boundaries of the balls $\mathcal{B}_{r_l} $.}
\label{lcovering_dis}
\end{figure}
We will assume that the number of hypotheses outside the desired ball is finite.
\begin{assumption}\label{assum:conv_hyp}
The number of hypotheses outside $\mathcal{B}_r$ is finite.
\end{assumption}
Additionally, we impose a bound on the separation between hypotheses which will avoid some pathological cases. The separation between hypotheses is defined in terms of the Hellinger affinity between two distributions
$Q$ and $P$, given by
\[\rho(Q,P) = 1 - h^2(Q,P).\]
\begin{assumption}\label{assum:bound}
There exists an $\alpha > 0 $ such that
$\rho(P_{\theta_1}^i,P_{\theta_2}^i)>\alpha$ for any $\theta_1, \theta_2 \in \Theta$ and all $i = 1,\hdots,n$.
\end{assumption}
With these assumptions in place, our first step is a lemma that bounds concentration of log-likelihood ratios.
\begin{lemma}\label{lemmab}
Let Assumptions \ref{assum:graph}, \ref{assum:conv_hyp} and \ref{assum:bound} hold. Let $\{X^i_t\}$ be a set of independent random variables such that $X^i_t \sim P^i$ for $i=1,\hdots,n$ and $t =1,\hdots, k$, and let $\{Q^i\}$ be a set of distributions such that $Q^i$ dominates $P^i$ for each $i$. Then, for all $y\in \mathbb{R}$,
\begin{align*}
\mathbb{P}\left[\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{dQ^j}{dP^j}(X^j_t) \geq y \right]
& \leq \exp(-y / 2) \exp\left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \exp\left( -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(Q^j,P^j)\right).
\end{align*}
\end{lemma}
\begin{proof}
By the Markov inequality, the independence of the random variables $X^j_t$, and Jensen's inequality, we have
\begin{align*}
\mathbb{P}\left[\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{dQ^j}{dP^j}(X^j_t) \geq y \right] & \leq \exp(-y / 2)\mathbb{E}\left[\prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} \sqrt{ \left( \frac{dQ^j}{dP^j}(X^j_t)\right) ^{[A^{k-t}]_{ij}}}\right] \\
& \leq \exp(-y / 2) \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} \mathbb{E}\left[ \sqrt{ \left( \frac{dQ^j}{dP^j}(X^j_t)\right) }\right]^{[A^{k-t}]_{ij}} \\
& \leq \exp(-y / 2) \prod\limits_{t=1}^{k}\prod\limits_{j=1}^{n}\rho(Q^j,P^j)^{[A^{k-t}]_{ij}},
\end{align*}
where the last inequality follows from the definition of the Hellinger affinity function $\rho(Q,P)$.
Now, by adding and subtracting $\frac{1}{n}\sum_{j=1}^{n}\log \rho(Q^j,P^j)$ we have
\begin{align*}
\mathbb{P}\left[\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{dQ^j}{dP^j}(X^j_t) \geq y \right]
& \leq \exp(-y / 2) \exp \left( \sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n}([A^{k-t}]_{ij}-1/n)\log \rho(Q^j,P^j)^{} + \sum\limits_{t=1}^{k} \frac{1}{n} \sum\limits_{j=1}^{n}\log \rho(Q^j,P^j)^{} \right) \\
& \leq \exp(-y / 2) \exp \left(\log \frac{1}{\alpha}\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n}|[A^{k-t}]_{ij}-1/n| + \sum\limits_{t=1}^{k} \frac{1}{n} \sum\limits_{j=1}^{n}\log \rho(Q^j,P^j)^{} \right),
\end{align*}
where the last line follows from $\rho(P^j,Q^j)>\alpha$.
Then, from Lemma \ref{shahin} it follows that
\begin{align*}
\mathbb{P}\left[\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{dQ^j}{dP^j}(X^j_t) \geq y \right] & \leq \exp(-y / 2) \exp \left( \log \frac{1}{\alpha}\frac{4 \log n}{1- \delta} + \sum\limits_{t=1}^{k} \frac{1}{n} \sum\limits_{j=1}^{n}\log \rho(Q^j,P^j)^{} \right) \\
& \leq \exp(-y / 2) \exp\left( \log \frac{1}{\alpha}\frac{4 \log n}{1- \delta} \right) \prod\limits_{j=1}^{n}\rho^k(Q^j,P^j)^{1 /n} \\
& \leq \exp(-y / 2) \exp\left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \prod\limits_{j=1}^{n}\exp(-k h^2(Q^j,P^j))^{1 /n}.
\end{align*}
The last inequality follows from $\rho(Q^j,P^j) = 1 - h^2(Q^j,P^j)$ and $1-x \leq \exp(-x)$ for $x \in [0,1]$.
\qed
\end{proof}
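To build intuition for Lemma \ref{lemmab}, one can check the single-agent case ($n=1$, $A=[1]$), where the middle factor disappears (since $\log n = 0$) and the bound reduces to $\exp(-y/2)\exp(-k\,h^2(Q,P))$. The following Monte Carlo sketch, with Bernoulli models and purely illustrative parameters, compares the empirical tail probability with this bound:

```python
import math
import random

random.seed(0)

def h2_bernoulli(p, q):
    """Squared Hellinger distance between Bernoulli(p) and Bernoulli(q)."""
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

# Single agent (n = 1, A = [1]): the lemma's bound reduces to
#   P[ sum_t log dQ/dP(X_t) >= y ] <= exp(-y/2) * exp(-k * h^2(Q, P)).
p_true, q_alt, k, y = 0.5, 0.7, 20, 0.0
bound = math.exp(-y / 2) * math.exp(-k * h2_bernoulli(q_alt, p_true))

trials = 20000
hits = 0
for _ in range(trials):
    s = 0.0
    for _ in range(k):
        x = 1 if random.random() < p_true else 0  # X_t ~ P = Bernoulli(p_true)
        s += math.log(q_alt / p_true) if x else math.log((1 - q_alt) / (1 - p_true))
    hits += s >= y
empirical = hits / trials
print(f"empirical = {empirical:.4f}, Chernoff-type bound = {bound:.4f}")
```

As expected, the bound is loose but valid: the empirical frequency of the event sits well below it.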
We are now ready to state our first main result, which bounds the concentration of the beliefs generated by Eq.~\eqref{protocol_dis} around the optimal hypothesis for a countable hypothesis set $\Theta$. The following theorem shows that the beliefs of all agents will concentrate on the Hellinger ball $\mathcal{B}_r$ at an exponential rate.
\begin{theorem}\label{main_count}
Let Assumptions~\ref{assum:graph}, \ref{assum:conv_hyp} and \ref{assum:bound} hold,
and let $\sigma \in (0,1)$ be a desired probability tolerance.
Then, the belief sequences $\{\mu_{k}^i\}$, $i\in V$ that are generated by the update rule in Eq.~\eqref{protocol_dis},
with initial beliefs such that ${\mu_0^i(\theta^*) >\epsilon}$ for all $i$,
have the following property: for any radius $r>0$ with probability $1-\sigma$,
\begin{align*}
\mu_{k+1}^i\left(\mathcal{B}_r \right) & \geq 1 - \frac{1}{\epsilon}\chi \exp \left( -k r^2 \right) \qquad \text{ for all } i \text{ and all }k\geq N,
\end{align*}
where
\begin{align*}
N=& \inf \left\lbrace t \geq 1 \Bigg| \exp\left( \log\frac{1}{\alpha}\frac{4 \log n}{1- \delta} \right) \sum\limits_{l = 1}^{L-1}\mathcal{N}_{r_l} \exp \left( -tr_{l+1}^2 \right) < \sigma\right\rbrace,
\end{align*}
$\chi = \Sigma_{l =1}^{L-1} \exp\left(-\frac{1}{2}r_{l}^2 +\log \mathcal{N}_{r_l}\right)$,
${\delta = 1-\eta/n^2}$, and $\eta$ is the smallest positive element of the matrix $A$.
\end{theorem}
\begin{proof}
We begin by bounding the belief assigned to a measurable set $B$ such that $\theta^* \in B$. For such a set, it follows from Eq.~\eqref{protocol_dis} that
\begin{align*}
\mu_{k}^i\left(B\right) & = \frac{1}{Z_{k}^i}\sum\limits_{\theta \in B}
\left(\prod\limits_{j=1}^{n} \mu_0^j\left(\theta\right)^{\left[A^{k}\right]_{ij} } \right)
\prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p^j_{\theta}(X_{t}^j)^{\left[A^{k-t}\right]_{ij}},
\end{align*}
where $ Z_{k}^i $ is the appropriate normalization constant.
Furthermore, after a few algebraic operations we obtain
\begin{align*}
\mu_{k}^i\left(B\right) & \geq 1 - \sum\limits_{\theta \in B^c} { \prod\limits_{j=1}^{n} \left( \frac{\mu_0^j(\theta)}{\mu_0^j(\theta^*) }\right) ^{\left[A^{k}\right]_{ij} }} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} \left( \frac{p^j_{\theta}(X_{t}^j)}{p^j(X_{t}^j)}\right) ^{\left[A^{k\text{-}t}\right]_{ij}}.
\end{align*}
Moreover, since $\mu_0^i(\theta^*) > \epsilon$ for all $i=1,\hdots,n$, it follows that
\begin{align}\label{ready_for_bounds2}
\mu_{k}^i\left(B\right) & \geq 1 - \frac{1}{\epsilon}\sum\limits_{\theta \in B^c}\prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} \left( \frac{p^j_{\theta}(X_{t}^j)}{p^j(X_{t}^j)}\right) ^{\left[A^{k\text{-}t}\right]_{ij}}.
\end{align}
The relation in Eq.~\eqref{ready_for_bounds2} describes the iterative averaging of products of density functions, for which we can use Lemma \ref{lemmab} with $Q=P_\theta$ and $P = P_{\theta^*}$. Then,
\begin{align*}
\mathbb{P}\left( \left\lbrace \boldsymbol{X}^k \Bigg| \sup_{\theta \in B^c}\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{p_{\theta}^j(X^j_t)}{p^j(X^j_t)} \geq y \right\rbrace \right) & \leq \sum_{\theta \in B^c} \exp(-y / 2) \exp\left(\log\frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \exp\left( -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j)\right)
\end{align*}
and by setting $y =-k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j)$ we obtain
\begin{align*}
&\mathbb{P}\left( \left\lbrace \boldsymbol{X}^k \Bigg| \sup_{\theta \in B^c}\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{p_{\theta}^j(X^j_t)}{p^j(X^j_t)} \geq -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j) \right\rbrace \right) \\
& \qquad \qquad \leq \exp\left(\log\frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \sum_{\theta \in B^c} \exp\left( -\frac{k}{2} \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j)\right).
\end{align*}
Now, we let the set $B$ be the Hellinger ball of a radius $r$ centered at $\theta^*$
and define a cover (as described above) to exploit the representation of
$\mathcal{B}_r^c $ as the union of concentric Hellinger annuli, for which we have
\begin{align*}
&\mathbb{P}\left( \left\lbrace \boldsymbol{X}^k \Bigg| \sup_{\theta \in B^c}\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{p_{\theta}^j(X^j_t)}{p^j(X^j_t)} \geq -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j) \right\rbrace \right)\\
& \qquad \qquad \leq \exp\left( \log\frac{1}{\alpha}\frac{4 \log n}{1- \delta} \right) \sum\limits_{l = 1}^{L-1} \sum_{\theta \in \mathcal{F}_l} \exp\left( -\frac{k}{2} \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j)\right)\\
& \qquad \qquad \leq \exp\left( \log\frac{1}{\alpha}\frac{4 \log n}{1- \delta} \right) \sum\limits_{l = 1}^{L-1}\mathcal{N}_{r_l} \exp \left( -kr_{l+1}^2 \right).
\end{align*}
We are interested in finding a value of $k$ large enough such that the above probability is below $\sigma$. Thus, let us define $N$ as
\begin{align*}
N=& \inf \left\lbrace t \geq 1 \Bigg| \exp\left( \log\frac{1}{\alpha}\frac{4 \log n}{1- \delta} \right) \sum\limits_{l = 1}^{L-1}\mathcal{N}_{r_l} \exp \left( -tr_{l+1}^2 \right) < \sigma\right\rbrace .
\end{align*}
It follows that, for all $k\geq N$, with probability $1-\sigma$, for all $\theta \in \mathcal{B}_{r}^c $
\begin{align*}
\sum\limits_{t=1}^{k} \sum\limits_{j=1}^{n} [A^{k-t}]_{ij} \log \frac{p_{\theta}^j(X^j_t)}{p^j(X^j_t)} \leq -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j).
\end{align*}
Thus, from Eq.~\eqref{ready_for_bounds2} with probability $1-\sigma$ we have
\begin{align*}
\mu_{k}^i\left(\mathcal{B}_{r} \right)
& \geq 1 - \frac{1}{\epsilon}\sum_{\theta \in \mathcal{B}_{r}^c } \exp \left( -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j) \right) \\
&= 1 - \frac{1}{\epsilon}\sum\limits_{l =1}^{L-1}\sum_{\theta \in \mathcal{F}_l} \exp \left( -k \frac{1}{n}\sum\limits_{j=1}^{n} h^2(P_{\theta}^j,P^j) \right)\\
& \geq 1 -\frac{1}{\epsilon}\sum\limits_{l =1}^{L-1} \mathcal{N}_{r_{l}} \exp \left( -kr_{l+1}^2 \right)\\
& = 1 -\frac{1}{\epsilon}\sum\limits_{l = 1}^{L-1} \mathcal{N}_{r_l} \exp \left( -r_{l+1}^2 \right)\exp \left( -(k-1)r_{l+1}^2 \right)\\
& \geq 1 - \chi \frac{1}{\epsilon}\exp \left( -(k-1) r^2 \right),
\end{align*}
where $\chi = \Sigma_{l =1}^{L-1} \exp\left(-\frac{1}{2}r_{l}^2 +\log \mathcal{N}_{r_l}\right)$.
\qed
\end{proof}
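The concentration behavior established above can be illustrated by simulation. The sketch below uses a log-linear belief update of the geometric-averaging form analyzed in this section; the mixing matrix, the Bernoulli models, and all parameters are illustrative assumptions, not the paper's setup:

```python
import math
import random

random.seed(1)

n = 3                      # number of agents
A = [[0.50, 0.25, 0.25],   # doubly stochastic mixing matrix (stand-in for Assumption 1)
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
Theta = [0.3, 0.5, 0.7]    # finite hypothesis set (Bernoulli means)
theta_star = 0.5           # data-generating parameter for every agent

def loglik(theta, x):
    """Log-likelihood of a single Bernoulli observation."""
    return math.log(theta) if x else math.log(1 - theta)

# Log-beliefs, uniform prior for every agent.
L = [[math.log(1 / len(Theta))] * len(Theta) for _ in range(n)]

for _ in range(600):
    obs = [1 if random.random() < theta_star else 0 for _ in range(n)]
    new = [[0.0] * len(Theta) for _ in range(n)]
    for i in range(n):
        for t_idx, theta in enumerate(Theta):
            # geometric averaging of neighbors' posteriors in log space
            new[i][t_idx] = sum(A[i][j] * (L[j][t_idx] + loglik(theta, obs[j]))
                                for j in range(n))
        m = max(new[i])                      # normalize via log-sum-exp
        z = math.log(sum(math.exp(v - m) for v in new[i])) + m
        new[i] = [v - z for v in new[i]]
    L = new

beliefs = [[math.exp(v) for v in row] for row in L]
for i in range(n):
    print(f"agent {i}: belief on theta* = {beliefs[i][Theta.index(theta_star)]:.6f}")
```

After a few hundred steps every agent's belief mass on $\theta^*$ is essentially $1$, consistent with the exponential rate in the theorem.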
\vspace{-0.4cm}
\subsection{A Concentration Result for a Compact Set of Hypotheses}
Next we consider the case when the hypothesis set $\Theta$ is a compact subset of $\mathbb{R}^d$. We will now additionally require that the map from $\Theta$ to $\prod_{i=1}^n P_{\theta}^i$ be continuous (where the topology on the space of distributions comes from the Hellinger metric).
This will be useful in defining coverings, which will be made clear shortly.
\begin{definition}\label{separated}
Let $\left(M,d\right)$ be a metric space. A subset $S \subseteq M$ is called $\varepsilon$-separated, with $\varepsilon>0$, if $d(x,y)\geq \varepsilon$ for any distinct $x,y \in S$. Moreover, for a set
$B \subseteq M$, let $N_B(\varepsilon)$ be the smallest number of balls of radius $\varepsilon$, with centers in a set $S$,
needed to cover the set $B$, i.e., such that $B \subseteq \bigcup_{ m \in S} \mathcal{B}_{\varepsilon}\left(m\right)$.
As before, given a decreasing sequence $1=r_1 \geq r_2 \geq \cdots \geq r_L = r$, we define the annulus $\mathcal{F}_l = \mathcal{B}_{r_{l}} \setminus\mathcal{B}_{r_{l+1}}$. Furthermore, $S_{\varepsilon_l}$ will denote a maximal $\varepsilon_l$-separated subset of $\mathcal{F}_l$. Finally, $K_l = |S_{\varepsilon_l}|$.
\end{definition}
We note that, as a consequence of our assumption that the map from $\Theta$ to $\prod_{i=1}^n P_{\theta}^i$ is continuous, we have that each $K_l$ is finite (since the image of a compact set under a continuous map is compact). Thus, we have the following covering of $\mathcal{B}_{r}^c $:
\begin{align*}
\mathcal{B}_{r}^c & = \bigcup_{l = 1}^{L-1}\bigcup_{m \in S_{\varepsilon_l}} \mathcal{F}_{l,m} ,
\end{align*}
where each $\mathcal{F}_{l,m}$ is the intersection of a ball centered at a point $m \in S_{\varepsilon_l}$ with $\mathcal{F}_l$. Figure \ref{lcovering_cont} shows the elements of a covering for a set $\mathcal{B}_r^c $. The cluster of circles at the top right corner represents the balls $ \mathcal{B}_{\varepsilon_l}$ and, for a specific case on the left of the image, we illustrate the set $\mathcal{F}_{l,m}$.
\begin{figure}[h]
\centering
\begin{overpic}[width=0.4\textwidth]{covering_cont}
\put(38,29){{\small $\mathcal{B}_{r} $}}
\put(18,50){{\small $\mathcal{F}_{l,m}$}}
\put(48,53){{\small $P_{\theta^*}$}}
\end{overpic}
\caption{Creating a covering for a set $\mathcal{B}_r $. $\bigstar$ represents the correct hypothesis $\boldsymbol{P}_{\theta^*}$. }
\label{lcovering_cont}
\end{figure}
\begin{example}
We continue Example \ref{ex1} from Section \ref{sec:variational}. Suppose we are interested in analyzing the concentration of the beliefs around the true parameter $\theta^*$ on a Euclidean ball of radius $0.05$; that is, we want to see the total mass on the set $[0.45,0.55]$. This, in turn, corresponds to a Hellinger ball of radius $r=0.001254$. For this choice of $r$, we propose a covering where $r_1 = 1$, $r_2 = 1/2$, $r_3 = 1/4$, $\hdots$, $r_{10} =1/512$, $r_{11} =r$.
\begin{figure}[ht]
\centering
\begin{overpic}[width=0.7\textwidth]{hell_balls}
\put(47.7,3){{\tiny $[0.45,0.55]$}}
\put(91,58){{\tiny $\mathcal{F}_1$}}
\put(91,54.5){{\tiny $\mathcal{F}_2$}}
\put(91,51){{\tiny $\mathcal{F}_3$}}
\put(91,47.1){{\tiny $\mathcal{F}_4$}}
\put(91,43.6){{\tiny $\mathcal{F}_5$}}
\put(91,39.9){{\tiny $\mathcal{F}_6$}}
\put(91,36.3){{\tiny $\mathcal{F}_7$}}
\put(91,32.3){{\tiny $\mathcal{F}_8$}}
\put(91,28.5){{\tiny $\mathcal{F}_9$}}
\put(91,25){{\tiny $\mathcal{F}_{10}$}}
\put(93,15){{\rotatebox{90}{$\mathcal{B}_r $}}}
\end{overpic}
\caption{Hellinger distance of the density $p_\theta$ to the optimal density $p_{\theta^*}$.}
\label{hell_ball}
\end{figure}
Figure \ref{hell_ball} shows the Hellinger distance between the hypotheses $p_\theta$ and the optimal one $p_{\theta^*}$. Specifically, the $x$-axis is the value of $\theta$, and the $y$-axis shows the Hellinger distance between the distributions. Figure \ref{hell_ball} also shows the covering we defined before, as horizontal lines for each value of the sequence $r_l$, which in turn defines the annulus $\mathcal{F}_l$. The Hellinger ball of radius $r$ is also shown, with the corresponding subset of $\Theta$ where we want to analyze the belief concentration.
In this example, the parameter has dimension $1$. The number of balls needed to cover each annulus can be seen to be 2, i.e., we only need $2$ balls of radius $r_l/2$ to cover the annulus $\mathcal{F}_l$.
Thus, $K_l=2$ for $1 \leq l \leq L-1$.
\qed
\end{example}
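The radius quoted in the example matches the squared Hellinger distance at the boundary of the Euclidean ball; assuming, as the numbers suggest, that the models in Example \ref{ex1} are Bernoulli($\theta$), a quick numerical check of $r$ and of the proposed dyadic covering radii is:

```python
import math

def h2_bernoulli(p, q):
    """Squared Hellinger distance between Bernoulli(p) and Bernoulli(q)."""
    return 1.0 - (math.sqrt(p * q) + math.sqrt((1 - p) * (1 - q)))

theta_star = 0.5
# Euclidean ball [0.45, 0.55] around theta*; its boundary points give the
# Hellinger radius quoted in the example (approx 0.001254).
r = h2_bernoulli(theta_star, 0.55)
print(f"r = {r:.6f}")

# The covering radii proposed in the example: r_1 = 1, r_2 = 1/2, ...,
# r_10 = 1/512, r_11 = r.
radii = [1.0] + [2.0 ** -l for l in range(1, 10)] + [r]
print(radii)
```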
Our concentration result requires the following assumption on the densities.
\begin{assumption}\label{assum:max}
For every $i = 1,\hdots,n$ and all $\theta$, it holds that $p_\theta^i(x)\leq 1$ almost everywhere.
\end{assumption}
Assumption \ref{assum:max} will be technically convenient for us. It can be made without loss of generality in the following sense: we can always modify the underlying problem to make it hold.
Let us give an example before explaining the reasoning behind this assertion. Assume there is just one agent, and say $X \sim P$ is Gaussian with mean $\theta^* = 5$ and variance $0.01$. Our model is $P_\theta = \mathcal{N}(\theta,0.01)$ for $\theta \in \Theta = [0,10]$. Because the variance is so small, the density values exceed $1$ near the mean. Instead, let us multiply all our observations by $10$. We will then have that our observations come from $10X$, which indeed has density upper bounded by one. In turn, our model now should be $Q_\theta = \mathcal{N}(10\theta,1)$ or, alternatively, $Q_\theta = \mathcal{N}(\theta,1)$ for $\theta \in \hat \Theta = [0,100]$.
We note that this modification does not come without cost. As in the case of countable hypotheses,
our convergence rates will depend on $\alpha$, defined to be a positive number such that
$\rho(P_{\theta^1},P_{\theta^2})>\alpha$ for any $\theta^1$ and $\theta^2$.
The process we have sketched out can decrease this parameter $\alpha$.
In the general case, if each agent observes $X_t^j \sim P^j$, then there exists a large enough constant $M >1$ such that ${MX_t^j \sim Q^j}$, where the density of $Q^j$ is at most $1$. We can then have agents multiply their measurements by $M$ and redefine the densities to account for this.
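A minimal numerical illustration of this rescaling, using the Gaussian numbers above:

```python
import math

def normal_pdf_max(sigma):
    """Peak value of a N(mu, sigma^2) density (attained at the mean)."""
    return 1.0 / (sigma * math.sqrt(2.0 * math.pi))

# Before rescaling: X ~ N(5, 0.01) has sigma = 0.1 and a peak density of
# about 3.99 > 1, violating the assumption.
print(normal_pdf_max(0.1))

# After multiplying observations by M = 10: 10X ~ N(50, 1) has a peak
# density of about 0.399 <= 1, so the assumption holds.
print(normal_pdf_max(1.0))
```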
We next provide a concentration result for the logarithmic likelihood of a ratio of densities, which will serve the same technical function as Lemma \ref{lemmab} in the countable hypothesis case.
We begin by defining two measures. For a hypothesis~$\theta$ and a measurable set $B \subseteq \Theta$, let ${\boldsymbol{P}}_{B}^{\otimes k}$ be the probability distribution with density
\begin{align}\label{density_g}
g_B(\boldsymbol{x}^k) & = \frac{1}{\mu_0(B)} \int\limits_{B} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(x_t^j)d\mu_{0}(\theta).
\end{align}
Similarly, let $\bar{\boldsymbol{P}}_{B}^{\otimes k}$ be the measure with density (i.e., Radon-Nikodym derivative with respect to $\lambda^{\otimes nk}$),
\begin{align}\label{bar_density_g}
\bar g_B(\boldsymbol{x}^k) & = \frac{1}{\mu_0(B)} \int\limits_{B} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} ( p_\theta^j(x_t^j))^{[A^{k-t}]_{ij}}d\mu_{0}(\theta).
\end{align}
Note that $\bar{\boldsymbol{P}}_{B}^{\otimes k}$'s are not probability distributions due to the exponential weights. Nonetheless, they are bounded and positive. The next lemma shows the concentration of the logarithmic ratio of two weighted densities, as defined in Eq.~\eqref{bar_density_g}, for two different sets $B_1$ and $B_2$, in terms of the probability distribution ${\boldsymbol{P}}_{B_1}^{\otimes k}$.
\begin{lemma}\label{lemma2}
Let Assumptions \ref{assum:graph}, \ref{assum:bound} and \ref{assum:max} hold. Consider two measurable sets $B_1,B_2 \subset \Theta$, both with positive measure, and assume that $B_{1} \subset \mathcal{B}_{r^{1}}(\theta^{1})$ and $B_{2} \subset \mathcal{B}_{r^{2}}(\theta^{2})$, where $\mathcal{B}_{r^{1}}(\theta^{1})$ and $\mathcal{B}_{r^{2}}(\theta^{2})$ are disjoint. Then, for all $y \in \mathbb{R}$
\begin{align*}
{\mathbb{P}}_{B_1}\left[\log \frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)} \geq y\right]
& \leq \exp(- y / 2) \exp \left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \exp\left( -k \left( \sqrt{\frac{1}{n}\sum_{j=1}^{n} h^2(P^j_{\theta^1},P^j_{\theta^2})} -r^1 -r^2\right)^2\right),
\end{align*}
where ${\mathbb{P}}_{B_1}$ is the probability measure under which $\boldsymbol{X}^k$ has distribution ${\boldsymbol{P}}_{B_1}^{\otimes k}$ with density $ g_{B_1}$ as defined in Eq.~\eqref{density_g}.
\end{lemma}
\begin{proof}
By the Markov inequality, it follows that
\begin{align*}
{\mathbb{P}}_{B_1}\left[\log \frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)} \geq y\right] & \leq \exp(-y/2)\mathbb{E}_{B_1}\left[ \sqrt{\frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)}}\right] \nonumber\\
& = \exp(-y/2) \int_{\boldsymbol{\mathcal{X}}^k} \sqrt{\frac{\bar g_{B_2}(\boldsymbol{x}^k)}{\bar g_{B_1}(\boldsymbol{x}^k)}} g_{B_1}(\boldsymbol{x}^k) d\lambda^{\otimes kn}(\boldsymbol{x}^k).
\end{align*}
Now, by Assumption \ref{assum:max} it follows that $g_{B} \leq \bar g_{B}$ almost everywhere. Thus, we have
\begin{align*}
{\mathbb{P}}_{B_1}\left[\log \frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)} \geq y\right]& \leq \exp(-y/2) \int_{\boldsymbol{\mathcal{X}}^k} \sqrt{ \bar g_{B_2}(\boldsymbol{x}^k)} \sqrt{\bar g_{B_1}(\boldsymbol{x}^k)} d\lambda^{\otimes kn}(\boldsymbol{x}^k) \\
& \leq \exp(-y/2) \rho\left( \bar {\boldsymbol{P}}_{B_2}^{\otimes k},\bar{\boldsymbol{P}}_{B_1}^{\otimes k} \right),
\end{align*}
where we are interpreting the definition of the Hellinger affinity function $\rho(\cdot,\cdot)$ as a function of two bounded positive measures, not necessarily probability measures.
At this point, we can follow the same argument as in Lemma~$2$ in \cite{lecam86}, page $477$, where the Hellinger affinity of two members of the convex hull of sets of probability distributions is shown to be less than the product of the Hellinger affinity of the factors. In our particular case, the measures $\bar{\boldsymbol{P}}_{B}^{\otimes k}$ are not probability distributions, nonetheless, the same disintegration argument holds. Thus, we obtain
\begin{align*}
\rho\left( \bar {\boldsymbol{P}}_{B_2}^{\otimes k},\bar{\boldsymbol{P}}_{B_1}^{\otimes k} \right)
& \leq \prod_{t=1}^k\prod_{j=1}^{n}\rho\left( \bar{P}_{B_2}^j,\bar{P}_{B_1}^j \right) ,
\end{align*}
where $\bar{P}_{B}^j$ is the measure with Radon-Nikodym derivative $\bar g_{B}(x) = \frac{1}{\mu_{0}(B)} \int\limits_{B} (p_\theta^j(x))^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta)$ with respect to $\lambda$.
In addition, by Jensen's inequality\footnote{For a concave function $\phi$ and $\int_{\Omega}f(x)dx =1$, it holds that $\int_{\Omega}\phi(g(x))f(x)dx \leq \phi\left( \int_{\Omega}g(x)f(x)dx\right) $.}, with $x^{[ A^{k-t}]_{ij}}$ being a concave function and $1/\mu_{0}(B)\int_B d\mu_{0} = 1$, we have that
\begin{align*}
\bar{g}_{B}(x) & \leq \left( \frac{1}{\mu_{0}(B)} \int\limits_{B}p_\theta^j(x) d\mu_{0}(\theta)\right) ^{[ A^{k-t}]_{ij}} .
\end{align*}
Thus,
\begin{align*}
{\mathbb{P}}_{B_1}\left[\log \frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)} \geq y\right]
& \leq \exp(-y / 2)\prod_{t=1}^{k}\prod_{j=1}^{n} \rho({P}_{B_1}^{j},{P}_{B_2}^{j})^{[ A^{k-t}]_{ij}},
\end{align*}
where ${P}_{B}^{j}$ is the probability distribution associated with the density $ \frac{1}{\mu_{0}(B)} \int\limits_{B}p_\theta^j(x) d\mu_{0}(\theta)$.
Assumption \ref{assum:bound} and the compactness of $\Theta$ guarantee that $\rho({P}_{B_1}^{j},{P}_{B_2}^{j})>\alpha$ for some positive $\alpha$; thus, similarly to Lemma~\ref{lemmab}, we have that
\begin{align*}
\mathbb{P}_{B_1}\left[\log \frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)} \geq y\right]
& \leq \exp(-y / 2)\exp\left(\log\frac{1}{\alpha}\frac{4\log n}{1- \delta} \right) \prod_{t=1}^{k}\prod_{j=1}^{n} \rho({P}_{B_1}^{j},{P}_{B_2}^{j})^{1/n} \\
& \leq \exp(-y / 2)\exp\left(\log\frac{1}{\alpha}\frac{4\log n}{1- \delta} \right) \exp\left( -\frac{k}{n} \sum_{j=1}^{n}h^2({P}_{B_1}^{j},{P}_{B_2}^{j})\right) .
\end{align*}
Finally, by using the metric defined for the $n$-Hellinger ball and the fact that the distance between two sets $A$ and $B$ is given by $d(A,B) = \inf_{x\in A, y\in B} d(x,y)$, we have
\begin{align*}
\mathbb{P}_{B_1}\left[\log \frac{\bar g_{B_2}(\boldsymbol{X}^k)}{\bar g_{B_1}(\boldsymbol{X}^k)} \geq y\right]
& \leq \exp(-y / 2)\exp\left(\log\frac{1}{\alpha}\frac{4\log n}{1- \delta} \right) \exp\left( -k \left(\sqrt{ \frac{1}{n} \sum_{j=1}^{n}h^2({P}_{B_1}^{j},{P}_{B_2}^{j})}\right) ^2\right) \\
& \leq \exp(-y / 2)\exp\left(\log\frac{1}{\alpha}\frac{4\log n}{1- \delta} \right) \exp\left( -k \left(\sqrt{\frac{1}{n} \sum_{j=1}^{n} h^2\left(P_{\theta^1}^j,P_{\theta^2}^j\right) } -r^1 - r^2\right)^2 \right).
\end{align*}
\qed
\end{proof}
Lemma \ref{lemma2} provides a concentration result for the logarithmic ratio between two weighted densities over a pair of subsets $B_1$ and $B_2$. The terms involving the auxiliary variable $y$ and the influence of the graph, via $\delta$, are the same as in Lemma \ref{lemmab}. Moreover, the rate at which this bound decays exponentially is now influenced by the radii of the two disjoint Hellinger balls containing $B_1$ and $B_2$, respectively.
The bound provided in Lemma \ref{lemma2} is stated for random variables $\boldsymbol{X}^k$ distributed according to ${\boldsymbol{P}}_B^{\otimes k}$. Nonetheless, $\boldsymbol{X}^k$ is in fact distributed according to $\boldsymbol{P}^{\otimes k}$. Therefore, we introduce a lemma that relates the Hellinger affinity of distributions defined over subsets of $\Theta$.
\begin{lemma}\label{rhos_b}
Let Assumptions \ref{assum:graph}, \ref{assum:bound} and \ref{assum:max} hold. Consider ${ \boldsymbol{P}}_B^{\otimes k}$ as the distribution with density $ g_B$ as defined in Eq. \eqref{density_g}, for $B \subseteq \mathcal{B}_R$. Then $ {{h({\boldsymbol{P}}_{B}^{\otimes k},\boldsymbol{P}^{\otimes k}) \leq \sqrt{nk}R}}$.
\end{lemma}
\begin{proof}
By Jensen's inequality we have that
\begin{align*}
\sqrt{ g_B(\boldsymbol{x})} \geq \frac{1}{\mu_0(B)}\int_B \sqrt{\prod_{t=1}^k \prod_{j=1}^{n} p^j_\theta(x^j_t)}d\mu_0(\theta).
\end{align*}
Then, by definition of the Hellinger affinity, it follows that
\begin{align*}
\rho({\boldsymbol{P}}_{B}^{\otimes k},\boldsymbol{P}^{\otimes k})& \geq \int_{\boldsymbol{\mathcal{X}}^{\otimes k}}\sqrt{\prod_{t=1}^k \prod_{j=1}^{n} p^j(x^j_t)}\left( \frac{1}{\mu_0(B)}\int_B\sqrt{\prod_{t=1}^k \prod_{j=1}^{n} p^j_\theta(x^j_t)}d\mu_0(\theta)\right) d\lambda^{\otimes nk}(\boldsymbol{x}).
\end{align*}
By using the Fubini-Tonelli Theorem, we obtain
\begin{align*}
\rho({\boldsymbol{P}}_{B}^{\otimes k},\boldsymbol{P}^{\otimes k})
& \geq \frac{1}{\mu_0(B)}\int_B \int_{\boldsymbol{\mathcal{X}}^{\otimes k}}\sqrt{\prod_{t=1}^k \prod_{j=1}^{n} p^j(x^j_t)}\sqrt{\prod_{t=1}^k \prod_{j=1}^{n} p^j_\theta(x^j_t)}d\lambda^{\otimes nk}(\boldsymbol{x})d\mu_0(\theta)\\
& = \frac{1}{\mu_0(B)}\int_B \prod_{t=1}^k \prod_{j=1}^{n} \rho(P^j,P^j_\theta) d\mu_0(\theta)\\
& = \frac{1}{\mu_0(B)}\int_B \prod_{t=1}^k \prod_{j=1}^{n} \left( 1 - h^2(P^j,P^j_\theta)\right) d\mu_0(\theta).
\end{align*}
Finally, by the Weierstrass product inequality it follows that
\begin{align*}
\rho({\boldsymbol{P}}_{B}^{\otimes k},\boldsymbol{P}^{\otimes k}) & \geq \frac{1}{\mu_0(B)}\int_B \left( 1 - \sum_{t=1}^k \sum_{j=1}^{n} h^2(P^j,P^j_\theta)\right) d\mu_0(\theta)\\
& = \frac{1}{\mu_0(B)}\int_B \left( 1 - n\frac{1}{n}\sum_{t=1}^k \sum_{j=1}^{n} h^2(P^j,P^j_\theta)\right) d\mu_0(\theta)\\
& \geq \frac{1}{\mu_0(B)}\int_B \left( 1 - nkR^2\right) d\mu_0(\theta),
\end{align*}
where the last line follows from the fact that any distribution $\boldsymbol{P}_{\theta}$ with $\theta \in B \subseteq \mathcal{B}_R$ is, by definition of the
$n$-Hellinger ball, at a distance of at most $R$ from $\boldsymbol{P}$.
\qed
\end{proof}
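The last step of the proof relies on the Weierstrass product inequality $\prod_i(1-x_i)\geq 1-\sum_i x_i$ for $x_i \in [0,1]$, which can be spot-checked numerically:

```python
import random

random.seed(2)

def weierstrass_check(xs):
    """Verify prod(1 - x_i) >= 1 - sum(x_i) for x_i in [0, 1],
    with a small tolerance for floating-point rounding."""
    prod = 1.0
    for x in xs:
        prod *= (1.0 - x)
    return prod >= 1.0 - sum(xs) - 1e-12

# Random tuples of varying length with entries in [0, 1].
trials = [[random.random() for _ in range(random.randint(1, 10))]
          for _ in range(1000)]
print(all(weierstrass_check(xs) for xs in trials))
```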
Finally, before presenting our main result for compact sets of hypotheses, we will state an assumption regarding the necessary mass all agents should have around the correct hypothesis $\theta^*$ in their initial beliefs.
\begin{assumption}\label{assum:initial}
The initial beliefs of all agents are equal. Moreover, they have the following property: for any constants ${{C \in (0,1]}}$ and $r \in (0,1]$ there exists a finite positive integer $K$, such that
\begin{align*}
\mu_0\left( \mathcal{B}_{\frac{C}{\sqrt{k}}} \right) \geq \exp\left( -k\frac{r^2}{32}\right) \qquad \text{for all} \ k\geq K.
\end{align*}
\end{assumption}
Assumption \ref{assum:initial} implies that the initial beliefs should have enough mass around the correct hypothesis $\theta^*$ when we consider balls of small radius.
Particularly, as we take Hellinger balls of radius decreasing as $O(1/\sqrt{k})$, the corresponding initial beliefs should not decrease faster than $O(\exp(-k))$.
The assumption can almost always be satisfied by taking initial beliefs to be uniform. The reason is that, in any fixed dimension, the volume of a ball of radius $O(1/\sqrt{k})$ will usually scale as a polynomial in $1/\sqrt{k}$, whereas we only need to lower bound it by a decaying exponential in $k$. For concreteness, we show how this assumption is satisfied by an example.
\smallskip
\noindent {\bf Example:} Consider a single agent, with a uniform initial belief, receiving observations from a standard Gaussian distribution, i.e., $X_k \sim \mathcal{N}(0,1)$. The variance is known and the agent would like to estimate the mean. Thus the models are $P_\theta=\mathcal{N}(\theta,1)$.
Now, the Hellinger distance can be explicitly written as
\begin{align*}
h^2(P,P_\theta) & = 1 - \exp\left(-\frac{1}{8}\theta^2\right).
\end{align*}
Therefore, the Hellinger balls of radius $1/\sqrt{k}$ will correspond to Euclidean balls in the parameter space of radius
\begin{align*}
2\sqrt{2\log \left( \frac{1}{1-\frac{1}{k}}\right)}.
\end{align*}
A uniform initial belief implies that $\mu_0\left( \mathcal{B}_{\frac{C}{\sqrt{k}}} \right) = \Theta(\frac{1}{\sqrt{k}})$, which is larger than $\exp(-k\frac{r^2}{32})$ for sufficiently large $k$.
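As a sanity check on the affinity used in this example, the Bhattacharyya coefficient of two unit-variance Gaussians $\mathcal{N}(0,1)$ and $\mathcal{N}(\theta,1)$ equals $\exp(-\theta^2/8)$, which the following numerical integration confirms:

```python
import math

def bc_unit_gaussians(theta, lo=-20.0, hi=20.0, steps=200000):
    """Numerically integrate sqrt(phi_0(x) * phi_theta(x)) for the
    unit-variance Gaussians N(0,1) and N(theta,1) via the midpoint rule."""
    dx = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        p = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
        q = math.exp(-(x - theta) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)
        total += math.sqrt(p * q) * dx
    return total

theta = 1.5
rho = bc_unit_gaussians(theta)
# The closed form is exp(-theta^2 / 8); h^2 = 1 - rho.
print(rho, math.exp(-theta ** 2 / 8.0))
```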
\bigskip
We are ready now to state our main result regarding the concentration of beliefs around $\theta^*$ for compact sets of hypotheses.
\begin{theorem}\label{main_cont}
Let Assumptions \ref{assum:graph}, \ref{assum:bound}, \ref{assum:max} and \ref{assum:initial} hold, and let ${\sigma\in (0,1)}$ be a given probability tolerance level. Moreover, for any $r \in (0,1]$, let $\{R_k\}$ be a decreasing sequence such that ${R_k \leq \min\left\lbrace \frac{\sigma}{2\sqrt{2kn}},\frac{r}{4}\right\rbrace }$ for all $k \geq 1$. Then, the beliefs $\{\mu_k^i\},$ $i\in V,$ generated by the update rule in~Eq.~\eqref{protocol} have the following property: with probability $1-\sigma$,
\begin{align*}
\mu^i_{k+1}(\mathcal{B}_r )& \geq 1 - \chi \exp\left( - \frac{k}{16}r^2\right) \qquad \text{ for all } i \text{ and all }k\geq \max\{N,K\}
\end{align*}
where
\begin{align*}
N=& \inf \left\lbrace t \geq 1 \Bigg| \exp \left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \sum\limits_{l= 1}^{L-1} K_l\exp\left( -\frac{t}{32} r_{l+1}^2\right) < \frac{\sigma}{2} \right\rbrace,
\end{align*}
with $K$ as defined in Assumption \ref{assum:initial}, $\chi=\sum\limits_{l =1}^{L-1}\exp(- \frac{1}{16}r_{l+1}^2)$ and ${\delta = 1-\eta/n^2}$, where $\eta$ is the smallest positive element of the matrix $A$.
\end{theorem}
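The threshold $N$ above can be computed by direct search once the covering and the graph quantities are fixed. The following sketch mirrors the definition of $N$; all numerical inputs (number of agents, $\eta$, $\alpha$, radii, covering counts, tolerance) are illustrative assumptions, not values from the text:

```python
import math

def find_N(alpha, n, delta, radii, K, sigma):
    """Smallest integer t >= 1 with
    exp(log(1/alpha) * 4*log(n)/(1-delta)) * sum_l K_l * exp(-t * r_{l+1}^2 / 32)
    strictly below sigma / 2, mirroring the theorem's definition of N."""
    pre = math.exp(math.log(1.0 / alpha) * 4.0 * math.log(n) / (1.0 - delta))
    t = 1
    while True:
        total = pre * sum(K[l] * math.exp(-t * radii[l + 1] ** 2 / 32.0)
                          for l in range(len(K)))
        if total < sigma / 2.0:
            return t
        t += 1

# Illustrative numbers: 3 agents, eta = 0.25, alpha = 0.9, dyadic radii
# down to r = 1/8, and two covering balls per annulus (K_l = 2).
n, eta = 3, 0.25
delta = 1.0 - eta / n ** 2            # delta = 1 - eta / n^2, as in the theorem
radii = [1.0, 0.5, 0.25, 0.125]       # r_1 >= ... >= r_L = r
K = [2, 2, 2]                         # K_l for l = 1, ..., L-1
N = find_N(alpha=0.9, n=n, delta=delta, radii=radii, K=K, sigma=0.05)
print("N =", N)
```

Even for this tiny instance the transient $N$ is large, which reflects how strongly the bound depends on the graph term $4\log n/(1-\delta)$.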
\begin{proof}\label{proof_cont}
Let us start by analyzing the evolution of the beliefs on a measurable set $B$ with $\theta^* \in B$. From Eq.~\eqref{protocol} we have that
\begin{align*}
\mu^i_k(B) & = \int\limits_{B} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta) \Bigg/\int\limits_{\Theta} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta) \\
& \geq 1 - \int\limits_{B^c} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta) \Bigg/\int\limits_{B} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta).
\end{align*}
Now let us focus on the case where $B$ is an $n$-Hellinger ball of radius $r>0$ centered at $\theta^*$. In addition, since $R_k < r$, we get
\begin{align*}
\mu^i_k(\mathcal{B}_r ) &\geq 1 - \int\limits_{\mathcal{B}_r ^c} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta) \Bigg/\int\limits_{\mathcal{B}_{R_k} } \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta).
\end{align*}
Our goal will be to use the concentration result in Lemma \ref{lemma2}. Thus, we can multiply and divide by $\mu_0(\mathcal{B}_{R_k} )$ to obtain
\begin{align*}
\mu^i_k(\mathcal{B}_r ) &\geq 1 - \int\limits_{\mathcal{B}_r ^c} \prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta) \Bigg/ \bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k) \mu_0(\mathcal{B}_{R_k} ).
\end{align*}
Moreover, we use the covering of the set $\mathcal{B}_r^c $ to obtain,
\begin{align}\label{bound3}
\mu^i_k(\mathcal{B}_r ) &\geq 1 - \sum\limits_{l=1}^{L-1} \sum\limits_{m=1}^{K_l} \int\limits_{\mathcal{F}_{l,m}}\prod\limits_{t=1}^{k} \prod\limits_{j=1}^{n} p_\theta^j(X_t^j)^{[ A^{k-t}]_{ij}} d\mu_{0}(\theta) \Bigg/ \bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k) \mu_0(\mathcal{B}_{R_k} ) \nonumber \\
&\geq 1 - \sum\limits_{l=1}^{L-1} \sum\limits_{m=1}^{K_l} \bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k) \mu_0(\mathcal{F}_{l,m}) \Bigg/ \bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k) \mu_0(\mathcal{B}_{R_k} ) .
\end{align}
The previous relation defines a ratio between two densities, i.e., $\bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k) / \bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k)$, both for the weighted likelihood product of the observations, where the numerator is defined over the set $\mathcal{F}_{l,m}$ and the denominator over the set $\mathcal{B}_{R_k} $.
Lemma \ref{lemma2} provides a way to bound the term $\bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k) / \bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k)$ with high probability; thus
\begin{align*}
& \mathbb{P}_{\mathcal{B}_{R_k} }\left( \left\lbrace \boldsymbol{X}^k \Bigg|\sup_{l,m} \log \frac{\bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k)}{\bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k)} \geq y\right\rbrace \right)\leq \sum\limits_{l = 1}^{L-1}\sum\limits_{m=1}^{K_l}\mathbb{P}_{\mathcal{B}_{R_k} }\left(\log \frac{\bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k)}{\bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k)} \geq y \right) \\
& \leq \sum\limits_{l= 1}^{L-1}\sum\limits_{m=1}^{K_l}\exp(- y / 2) \exp \left( \log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \exp\left( -k \left( \sqrt{\frac{1}{n}\sum_{j=1}^{n} h^2(p^j_{m},p^j)} -\delta_l -R_k\right)^2\right) \\
& \leq \sum\limits_{l = 1}^{L-1}\sum\limits_{m=1}^{K_l}\exp(- y / 2) \exp \left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \exp\left( -k \left( r_{l+1} -\delta_l -R_k\right)^2\right),
\end{align*}
where $p^j_{m}$ is the density at the point $\theta = m \in S_{\varepsilon_l}$, $S_{\varepsilon_l}$ is the maximal $\varepsilon_l$-separated subset of $\mathcal{F}_l$ as in Definition \ref{separated}, and $\delta_l$ is the radius of the covering balls used for the annulus $\mathcal{F}_l$.
In particular, let us use the covering proposed in \cite{bir15}, where
$\delta_{l} = r_{l+1}/2$. From this choice of covering, we have that
\begin{align*}
r_{l+1} -\delta_l -R_k & > r_{l+1} -r_{l+1}/2 -r_{l+1}/4 \\
& = r_{l+1}/4
\end{align*}
where we have used the assumption that $R_k \leq r/4$, and hence $R_k \leq r_{l}/4$ for all $1\leq l \leq L$.
Thus, we can set $y = -\frac{k}{16}r_{l+1}^2$ and it follows that
\begin{align}\label{prob_count}
\mathbb{P}_{\mathcal{B}_{R_k} }\left( \left\lbrace \boldsymbol{X}^k \Bigg|\sup_{l,m} \log \frac{\bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k)}{\bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k)} \geq y\right\rbrace \right) & \leq \exp \left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta}\right) \sum\limits_{l=1}^{L-1}K_l \exp\left( -\frac{k}{16}r_{l+1}^2\right) .
\end{align}
The probability measure in Eq. \eqref{prob_count} is computed for $\boldsymbol{X}^k$ distributed according to $\boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k}$. Nonetheless, $\boldsymbol{X}^k$ is distributed according to the (slightly different) $\boldsymbol{P}^{\otimes k}$. Our next step is to relate these two measures.
First, we have that for any distribution $\boldsymbol{P}_{\theta} \in \mathcal{B}_{R_k} $, from the Definition \ref{h_balls} of the $n$-Hellinger ball, it holds that
\begin{align*}
\sqrt{\frac{1}{n}\sum_{j=1}^{n}h^2(P^j_{\theta},P^j)} \leq R_k,
\end{align*}
and we relate the total variation distance and the Hellinger affinity as in Lemma $1$ in \cite{lecam73}; for any measurable set $A$ it holds that
\begin{align*}
\sup_A \left( \boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k} (A) - \boldsymbol{P}^{\otimes k} (A)\right) ^2 & \leq 1 - \rho^2(\boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k},\boldsymbol{P}^{\otimes k}) ,
\end{align*}
and by definition of the Hellinger affinity we have that
\begin{align*}
\sup_A \left( \boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k} (A) - \boldsymbol{P}^{\otimes k} (A)\right) ^2& = 1 - (1-h^2(\boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k},\boldsymbol{P}^{\otimes k}))^2 \\
& \leq 2h^2(\boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k},\boldsymbol{P}^{\otimes k}),
\end{align*}
where we have first used the relation that for any $x\in \mathbb{R}$ it holds that $1-(1-x^2)^2 \leq 2x^2$. Then, from Lemma \ref{rhos_b} we have that
\begin{align*}
\sup_A \left( \boldsymbol{P}_{\mathcal{B}_{R_k} }^{\otimes k} (A) - \boldsymbol{P}^{\otimes k} (A)\right) ^2 & \leq 2knR_k^2.
\end{align*}
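The elementary bound used above, $1-(1-x^2)^2 = 2x^2 - x^4 \leq 2x^2$, can be spot-checked numerically; the following sketch is purely illustrative and not part of the proof:

```python
# Spot-check of the elementary inequality 1 - (1 - x^2)^2 <= 2 x^2,
# i.e. 2 x^2 - x^4 <= 2 x^2, over a grid of real values of x.
grid = [i / 100.0 for i in range(-200, 201)]
worst = max((1.0 - (1.0 - x * x) ** 2) - 2.0 * x * x for x in grid)
# The gap equals -x^4, so the worst case is 0, attained at x = 0.
assert worst <= 0.0
```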
Therefore, by considering the measurable subset $\Gamma^k = \left\lbrace \boldsymbol{X}^k \Bigg|\sup_{l,m} \log \frac{\bar g_{\mathcal{F}_{l,m}}(\boldsymbol{X}^k)}{\bar g_{\mathcal{B}_{R_k} }(\boldsymbol{X}^k)} \geq -\frac{k}{16}r_{l+1}^2\right\rbrace $, we have that
\begin{align*}
\mathbb{P}\left( \Gamma^k\right) & < \mathbb{P}_{\mathcal{B}_{R_k} }\left( \Gamma^k\right) + \sqrt{2kn}R_k \\
& \leq \exp \left( \log \frac{1}{\alpha} \frac{4 \log n}{1- \delta}\right) \sum\limits_{l=1}^{L-1}K_l \exp\left( -\frac{k}{16}r_{l+1}^2\right) +\frac{\sigma}{2}.
\end{align*}
Furthermore, we are interested in finding a large enough $k$ such that the probability described in Eq. \eqref{prob_count} is at most $\sigma$. Thus, we define
\begin{align*}
N \geq \inf \left\lbrace t \geq 1 \Bigg| \exp \left(\log \frac{1}{\alpha} \frac{4 \log n}{1- \delta} \right) \sum\limits_{l= 1}^{L-1}K_l\exp\left( -\frac{t}{16}r_{l+1}^2\right) < \frac{\sigma}{2} \right\rbrace .
\end{align*}
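For concreteness, this threshold can be computed numerically once the covering data are fixed. In the sketch below all parameter values ($\alpha$, $\delta$, $n$, $\sigma$, the radii $r_l$ and the covering numbers $K_l$) are assumptions chosen only for illustration:

```python
import math

# Illustrative computation of the sample-size threshold N: the smallest t
# such that  prefactor * sum_{l=1}^{L-1} K_l exp(-(t/16) r_{l+1}^2) < sigma/2.
alpha, delta, n, sigma = 0.5, 0.5, 10, 0.01   # assumed problem parameters
L = 3
r = {l: 1.0 / l for l in range(1, L + 1)}     # assumed radii r_1 > ... > r_L
K = {l: 4 ** l for l in range(1, L)}          # assumed covering numbers K_l

prefactor = math.exp(math.log(1.0 / alpha) * 4.0 * math.log(n) / (1.0 - delta))

def tail(t):
    return prefactor * sum(K[l] * math.exp(-(t / 16.0) * r[l + 1] ** 2)
                           for l in range(1, L))

N = 1
while tail(N) >= sigma / 2.0:
    N += 1
```

Since the tail is monotonically decreasing in $t$, the first $t$ for which it drops below $\sigma/2$ is exactly the infimum in the definition above.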
Moreover, from Eq.~\eqref{bound3} we obtain that with probability $1-\sigma$ for all $k \geq N$,
\begin{align*}
\mu^i_k(\mathcal{B}_r ) &\geq 1 - \sum\limits_{l= 1}^{L-1} \sum\limits_{m=1}^{K_l} \exp\left( -\frac{k}{16}r_{l+1}^2\right) \frac{\mu_0(\mathcal{F}_{l,m}) }{\mu_0(\mathcal{B}_{R_k} ) } \\
& = 1 - \sum\limits_{l =1}^{L-1} \exp\left( -\frac{k}{16}r_{l+1}^2\right) \frac{\mu_0(\mathcal{F}_{l}) }{\mu_0(\mathcal{B}_{R_k} ) } \\
& \geq 1 - \frac{1}{\mu_0(\mathcal{B}_{R_k} ) }\sum\limits_{l = 1}^{L-1} \exp\left( -\frac{k}{16}r_{l+1}^2\right) .
\end{align*}
Now, let us define $\chi=\sum\limits_{l = 1}^{L-1}\exp\left( -\frac{1}{16} r_{l+1}^2\right)$, then it follows that
\begin{align*}
\mu^i_{k}(\mathcal{B}_r )& \geq 1 - \frac{1}{\mu_0(\mathcal{B}_{R_k} ) }\sum\limits_{l = 1}^{L-1} \exp\left( -\frac{k}{16} r_{l+1}^2\right)\\
& = 1 - \frac{1}{\mu_0(\mathcal{B}_{R_k} ) }\sum\limits_{l = 1}^{L-1} \exp\left( -\frac{1}{16} r_{l+1}^2\right)\exp\left( -\frac{k-1}{16} r_{l+1}^2\right)\\
& \geq 1 - \frac{1}{\mu_0(\mathcal{B}_{R_k} ) }\chi\exp\left( -\frac{k-1}{16} r^2\right),
\end{align*}
where the last inequality follows from $r_{l}\geq r$ for all $1 \leq l \leq L$. Finally, by Assumption \ref{assum:initial} we have that, for all $k\geq K$
\begin{align*}
\mu^i_{k}(\mathcal{B}_r ) & \geq 1 - \chi \exp(- \frac{k-1}{16}r^2 + \frac{k-1}{32} r^2)\\
& = 1 - \chi \exp(- \frac{k-1}{32}r^2),
\end{align*}
or equivalently $\mu^i_{k+1}(\mathcal{B}_r ) \geq 1 - \chi \exp(- \frac{k}{32}r^2)$.
\qed
\end{proof}
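The geometric-decay step at the end of the proof relies on the factorisation $\sum_{l} \exp(-\tfrac{k}{16} r_{l+1}^2) \leq \chi \exp(-\tfrac{k-1}{16} r^2)$ with $r_{l} \geq r$; a numerical check with assumed radii:

```python
import math

# Check: exp(-k r_l^2/16) = exp(-r_l^2/16) * exp(-(k-1) r_l^2/16)
#                       <= exp(-r_l^2/16) * exp(-(k-1) r^2/16)  since r_l >= r,
# hence the sum is bounded by chi * exp(-(k-1) r^2/16).
radii = [1.0, 0.5, 0.25, 0.125]               # assumed radii, all >= r
r = min(radii)
chi = sum(math.exp(-rl ** 2 / 16.0) for rl in radii)
for k in range(1, 51):
    lhs = sum(math.exp(-k * rl ** 2 / 16.0) for rl in radii)
    assert lhs <= chi * math.exp(-(k - 1) * r ** 2 / 16.0) + 1e-12
```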
Analogous to Theorem \ref{main_count}, Theorem \ref{main_cont} provides a probabilistic concentration result for the agents' beliefs around a Hellinger ball of radius $r$ with center at $\theta^*$ for sufficiently large $k$.
\section{Conclusions}\label{sec_conclusion}
We have proposed an algorithm for distributed learning with both countable and compact sets of hypotheses. Our algorithm may be viewed as a distributed version of Stochastic Mirror Descent applied to the problem of minimizing the sum of Kullback-Leibler divergences. Our results show non-asymptotic geometric convergence rates for the concentration of beliefs around the true hypothesis.
It would be interesting to explore how variations on stochastic approximation algorithms will produce new non-Bayesian update rules for more general problems. Promising directions include acceleration results for proximal methods, other Bregman distances or constraints within the space of probability distributions.
Furthermore, we have modeled interactions between agents as exchanges of local probability distributions (i.e., beliefs) between neighboring nodes in a graph. An interesting open question is to understand to what extent this communication can be reduced when agents transmit only an approximate summary of their beliefs. We anticipate that future work will additionally consider the effect of parametric approximations allowing nodes to communicate only a finite number of parameters coming from, say, Gaussian Mixture Models or Particle Filters.
\bibliographystyle{spbasic}
We thank V.~Domcke, E.~Kiritsis and D.~Langlois for their valuable comments and for very useful and helpful discussions. M.P. would like to thank the Niels Bohr Institute and Paris Center for Cosmological Physics for hospitality at various stages of this work.
J.M. is supported by the Principal's Career Development Scholarship and the Edinburgh Global Research Scholarship. M.P. acknowledges the support of the Spanish MINECO's ``Centro de Excelencia Severo Ochoa'' Programme under grant SEV-2012-0249. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 713366.
\vspace{1cm}
\input{bibliography}
\end{document}
\subsection{Asymptotical power law solutions}
\label{sec:positive}
As already mentioned in Sec.~\ref{subsubsec:General_Exact_solution_cst}, an exactly constant $\beta$-function corresponds to (eternal) power law inflation and therefore is not a complete model. Adding contributions to the $\beta$-function leads to the definition of a consistent effective model that may naturally provide an end for inflation. Let us express the $\beta$-function as\footnote{To simplify the discussion we have chosen to explain the general picture in terms of a positive $\beta$-function, however, similar arguments apply for a negative $\beta(\phi)$.}:
\begin{equation}
\label{eq:beta_asympt_pl}
\beta(\phi) = \sqrt{6\lambda} + f(\phi) \; .
\end{equation}
Since $f(\phi)$ vanishes as we go deeper into the inflationary stage, in this regime it is the constant term that dominates, giving the asymptotic power law solution. However, as we depart from this configuration, the function $f(\phi)$ (which here is chosen to be positive) eventually becomes sufficiently large (at $\beta(\phi_{\mathrm{f}}) \simeq 1$) and puts an end to inflation. \\
Before discussing the predictions associated with these classes of models and defining some examples where the situation depicted in the previous paragraph is realized, we need to stress some points:
\begin{itemize}
\item These classes of models are intrinsically different from the classes introduced in~\cite{Binetruy:2014zya}. The presence of a constant term in the $\beta$-function implies that, going deeper into the inflationary stage, the $\beta$-function does not approach the usual dS configuration but rather a power law solution such as the one discussed in Sec.~\ref{subsubsec:General_Exact_solution_cst}.
\item As $\beta(\phi)$ does not approach zero during the inflation, Eq.~\eqref{eq:general_numberofoefoldings} directly implies that in general\footnote{It is fair to point out that this can happen even if $\beta$ goes to zero. However, in the well-defined cases considered in~\cite{Binetruy:2014zya} $N$ is always diverging approaching the fixed point.} there could be an upper bound for $N$. As a consequence, in this case it is necessary to check that the deformation allows for a sufficiently long period of inflation.
\item As during inflation the $\beta$-function asymptotes to a constant, the dual theory does not attain conformal invariance. As a matter of fact, the constant term corresponds to a relevant operator which modifies the IR behavior of the RG flow (on the other hand, the function $f(\phi)$ corresponds to the introduction of an irrelevant operator which modifies the UV behavior). The QFTs that are dual to this class of models are not CFTs but rather theories with generalized conformal invariance~\cite{Jevicki:1998ub}. These are a subclass of hyperscaling violating theories~\cite{Charmousis:2010zz,Gouteraux:2011ce,Huijse:2011ef} where the dynamical critical exponent $z=1$ but $\theta \neq 0$ ($z \neq 1$ leads to anisotropic solutions which are beyond the interest of this paper). For a discussion of the holographic interpretation of this class of theories see for example~\cite{Jevicki:1998ub,Kanitscheider:2008kd,McFadden:2010na,Dong:2012se,Alishahiha:2012cm,Gath:2012pg,Hartnoll:2016apf}.
\end{itemize}
In order to compute the predictions for $n_s$ and $r$ associated with these classes of models we can directly substitute the parameterization of Eq.~\eqref{eq:beta_asympt_pl} into Eq.~\eqref{eq:general_nsandr} to get:
\begin{align}
n_s-1&=-\left[6\lambda+2\sqrt{6\lambda}f+f^2+2f_{,\phi}\right]\;,\\
r&=8\left[6\lambda+2\sqrt{6\lambda}f+f^2\right]\;.
\end{align}
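As a quick numerical illustration, the expressions above can be evaluated for the exponential deformation $f(\phi)=e^{\gamma\phi}$; the values of $\lambda$, $\gamma$ and $\phi$ below are assumptions chosen only for the example:

```python
import math

# n_s - 1 = -(6*lam + 2*sqrt(6*lam)*f + f^2 + 2*f') ,
# r       =  8*(6*lam + 2*sqrt(6*lam)*f + f^2) ,
# evaluated for f(phi) = exp(gamma*phi) with assumed parameter values.
lam, gamma, phi = 0.002, 1.0, -5.0
f = math.exp(gamma * phi)
f_phi = gamma * math.exp(gamma * phi)        # f_{,phi}
base = 6 * lam + 2 * math.sqrt(6 * lam) * f + f ** 2
ns = 1 - (base + 2 * f_phi)                  # roughly 0.973 for these values
r = 8 * base                                 # roughly 0.11 for these values
```

Note that $(1-n_s) - r/8 = 2 f_{,\phi}$, so the departure from the rigid power law relation $n_s-1=-r/8$ directly measures the running of the deformation.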
We can directly see from these equations that the predictions for a $\beta$-function of the form~\eqref{eq:beta_asympt_pl} interpolate between the predictions of the power law class \textbf{Ib(0)} and those of the class to which $f(\phi)$ belongs. This mechanism of interpolation is analogous to the one discussed in~\cite{Binetruy:2014zya,Pieroni:2015cma} and in this case is clearly mediated by the value of $\lambda$. Indeed if $f,f_{,\phi}\ll\sqrt{6\lambda}$ at $N=60$, we recover the predictions of a power law model. On the other hand, if $f,f_{,\phi} \gg\sqrt{6\lambda}$ at $N=60$, the constant does not play a significant role and the predictions are the same as if the $\beta$-function were simply $\beta(\phi)=f(\phi)$. Notice, however, that independently of the predictions at $N\simeq 60$, the presence of the (small) constant still deeply affects the asymptotic behavior, according to the discussion of the previous paragraph.\\
Finally, let us define a minimal set of relevant examples. In order to produce a systematic analysis of the different possibilities, we can consider separately the following cases:
\begin{itemize}
\item $f_{,\phi}/f \simeq 1$, functions of the Exponential class \textbf{II($\gamma$)}:\\
$f(\phi)=e^{\gamma\phi}$ with $\gamma > 0$ and $\phi < 0$. \\
The asymptotic power law solution is recovered for $\phi \rightarrow -\infty$.
\item $f_{,\phi}/f \ll 1$, functions\footnote{In principle we could also use functions of the Fractional class \textbf{Ib(p)} (with $p<1$) but as they are not leading to interesting predictions for $n_s$ and $r$ we are not considering them in our analysis.} of the Inverse \textbf{Ib(p)} and Chaotic class \textbf{Ib(1)}:\\
$f(\phi)=\alpha\phi^{-n}$ with $\phi<0 $ and either $\alpha>0$ and $n$ even or $\alpha<0$ and $n$ odd.\\
The asymptotic power law solution is recovered for $\phi \rightarrow -\infty$.
\item $f_{,\phi}/f \gg 1$, functions of the Monomial class \textbf{Ia(q)}:\\
$f(\phi)=\alpha\phi^n$ with $\alpha>0$ and $\phi>0$.\\
The asymptotic power law solution is recovered for $\phi \rightarrow 0$.
\end{itemize}
To be consistent with the convention used in~\cite{Binetruy:2014zya,Pieroni:2015cma}, in the following we proceed by choosing the signs of the terms appearing in the $\beta$-function so as to always have $\phi>0$ and positive constants. Clearly the results are independent of this choice.
\section{Conclusions and outlook}
\label{sec:conclusions}
In this work we have studied constant-roll inflation in terms of the $\beta$-function formalism. In particular we have shown that the constant-roll condition translates into a simple first order differential equation for $\beta(\phi)$. We have derived the solutions of this equation and shown that they reproduce the constant-roll models already studied in the literature. Interestingly, among the cases discussed in this paper, we have recognized some $\beta$-functions that were already considered in the original work on this topic~\cite{Binetruy:2014zya}. This is a consequence of the generality of the results obtained in terms of this formalism (which allows performing analytical computations even beyond the usual slow-roll approximation). Finally we have discussed the interpolating behavior of some solutions. This first part of the work is intended to show the simplicity of the $\beta$-function formalism when dealing with concrete examples. \\
Having reproduced the models that are already known in the literature, we have also shown that this formalism is convenient to go beyond exact solutions. In particular we have defined a set of approximate solutions for the constant-roll equation that correspond to models with phenomenological predictions in good agreement with the latest experimental data. With this procedure we have been able to consider some new classes of models that to our knowledge are not present in the literature. For completeness (even if our approach is not based on the potential) we derived the form of the scalar potential associated with each model. The contrast between the rather complicated form of the potentials and the relative ease of performing analytical computations illustrates once again the strength of the method. \\
We have considered in greater detail the power-law solution and its different extensions. It is well known that an exact power-law model of inflation is incomplete since there is no mechanism to end the period of inflation. We showed that by adding a correction to the corresponding $\beta$-function, it becomes possible to have concrete models with a natural end to the inflationary period. For the specific case of a monomial correction to the $\beta$-function, we found the interesting feature that the total period of inflation may only be finite. Most of the asymptotic power-law models studied in this work predict cosmological parameters that are in agreement with the most recent observational constraints~\cite{Ade:2015xua,Ade:2015lrj,Array:2015xqh}, which is an interesting result from a phenomenological point of view. An intriguing subject for further work on this topic would be the search for peculiar observable signatures that would allow one to distinguish these models from standard dS asymptoting solutions. \\
In this work we illustrated the strength of the $\beta$-function approach when dealing with concrete and phenomenologically interesting models. An interesting prospect for future work would be to relate these models to some theoretically well-motivated theories (which could arise for example from the usual QFT approach). As already stated in Sec.~\ref{sec:positive}, the holographic interpretation might play a significant role in this analysis. The three dimensional QFTs that are dual to asymptotic power-law models are theories with generalized conformal invariance~\cite{Jevicki:1998ub}, which are a particular case (with $z = 1$) of theories that violate hyperscaling\footnote{In these theories the so-called hyperscaling violation exponent $\theta$ can be introduced in order to quantify the degree of violation of scale invariance.}~\cite{Jevicki:1998ub,Charmousis:2010zz,Gouteraux:2011ce,Huijse:2011ef,Kanitscheider:2008kd,Dong:2012se,Alishahiha:2012cm,Gath:2012pg,Hartnoll:2016apf}. These models are thus conceptually different from dS asymptoting solutions whose holographic dual theories are deformed CFTs. As a consequence, while phenomenologically power-law models are quite similar to (and in some cases nearly indistinguishable from) the most common realization of inflation, there are deep theoretical differences between these two classes of models.\\
Asymptotic power-law solutions are not only interesting in the context of inflationary model building but they can also be useful while studying the late time evolution of our Universe. In particular a wide set of models with exponentially flat potentials (which lead to a power-law expansion) has been studied in the literature~\cite{Capozziello:2005ra,Rubano:2001su,Rubano:2003et,Pavlov:2001dt,Rubano:2002mc,Demianski:2004qt}. The $\beta$-function formalism is not only suitable to describe inflation but rather it can also be used to describe the late time evolution of the Universe driven by quintessence\footnote{A classification of models of quintessence in terms of the $\beta$-function formalism was carried out in~\cite{Cicciarella:2016dnv}.}. In this case the situation is typically reversed and the period of accelerated expansion corresponds to an RG flow towards a fixed point. As discussed in~\cite{Cicciarella:2016dnv}, the presence of matter naturally induces a flow and thus in this case a consistent late time evolution can be attained even if the $\beta$-function is exactly constant.\\
While in this work we have only focused on the case of a single scalar field with a standard derivative term and minimal coupling to gravity, it has already been shown in~\cite{Pieroni:2015cma,Binetruy:2016hna} that the $\beta$-function formalism can be consistently extended to more general models of inflation. However, it is fair to point out that generalizations of the formalism in order to include for example multi-field models of inflation, non-minimal derivative couplings between the inflaton and gravity\footnote{As in the case of ``new Higgs'' inflation~\cite{Germani:2010gm,Germani:2010ux}.} and models of modified gravity are still lacking. The analysis carried out in~\cite{Bourdier:2013axa} and~\cite{Garriga:2014fda,Garriga:2015tea} can provide a useful guideline in order to define the first of these generalizations.
\section{Exact solutions}
\label{exact}
In this section we study the exact solutions of Eq.~\eqref{eq:constant_roll_beta}. Depending on the sign of the constant $\lambda$, we find different parameterizations for the $\beta$-functions. In particular we show that we can easily recover all the known solutions of~\cite{Motohashi:2014ppa}. We discuss these solutions in terms of the universality classes introduced in~\cite{Binetruy:2014zya} and explain that some of these cases should be interpreted as composite models that interpolate between two different classes. Interestingly, some of the cases discussed in this section have already been considered in~\cite{Binetruy:2014zya}, illustrating the generality of the results obtained using this formalism (which indeed does not require slow-roll).
\subsection{Exact dS and chaotic models - $\lambda=0$ }
\label{subsubsec:General_Exact_solution_exact_chaotic}
We start by considering the special case\footnote{This case does not properly belong to the constant-roll class of models, as from Eq.~\eqref{eq:constant_roll_definition} it follows $\ddot{\phi}=0$, which is the standard slow-roll condition.} $\lambda=0$ so that the equation for the $\beta$-function becomes $\beta_{,\phi}=\beta^2/2$. It is easy to show that this differential equation has two solutions:
\begin{equation}
\beta(\phi)=0 \; , \qquad \qquad \beta(\phi)= -\frac{2}{\phi} \; .
\end{equation}
The first case corresponds to an exact de Sitter configuration in which $a\propto e^{Ht}$, with constant $H$. This case is not dynamical (there is no flow and the condition $\beta\sim 1$ is never attained, \emph{i.e.} there is no end to inflation) and it cannot be used to describe the inflationary universe. In particular, some mechanism must be introduced in order to end the period of accelerated expansion. In terms of the $\beta$-function formalism this can be implemented by introducing one (or more) terms in the parameterization of $\beta$ which induce the flow. On the other hand, the second case is the $\beta$-function for the chaotic class \textbf{Ib(1)} (of~\cite{Binetruy:2014zya}) with $\beta_1=2$, which corresponds to the case of Chaotic inflation~\cite{Linde:1983gd}. In general this class comprises inflationary models for which the fixed point is reached at $|\phi|\to\infty$ and includes, for generic $\beta_1$, all the inflationary potentials of the form $V(\phi)=V_{\textrm{f}} \, \phi^{\beta_1}$. \\
Let us consider the generic case with $\beta_1$. The number of e-foldings is easily computed:
\begin{equation}
N=\frac{\phi^2-\phi_{\mathrm{f}}^2}{2\beta_1}\;,
\end{equation}
with $|\phi_{\mathrm{f}}|=\beta_1$. We obtain $\phi(N)=\sqrt{2\beta_1 N+\beta_1^2}$ and thus:
\begin{equation}
\beta(N)=-\frac{\beta_1}{\sqrt{2\beta_1N+\beta_1^2}}\;.
\end{equation}
Setting $\beta_1=2$, we have for $n_s$ and $r$:
\begin{align}
\label{eq:ns_and_r_chaotic}
n_s-1\simeq-\frac{2}{N}\;,&& r\simeq\frac{8}{N}\;,
\end{align}
which are exactly the predictions given by the standard chaotic model with quadratic potential~\cite{Linde:1983gd}.
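These predictions can be cross-checked numerically using $n_s-1=-(\beta^2+2\beta_{,\phi})$ and $r=8\beta^2$ (as follows from Eq.~\eqref{eq:general_nsandr}); a minimal sketch for $\beta_1=2$:

```python
import math

# Chaotic class Ib(1) with beta_1 = 2: beta(phi) = -2/phi, phi(N) = sqrt(4N+4),
# so beta^2 = 1/(N+1) and beta_{,phi} = 2/phi^2 = 1/(2(N+1)). Hence
# r = 8/(N+1) and n_s - 1 = -2/(N+1), i.e. ~ 8/N and -2/N at large N.
N = 60.0
phi = math.sqrt(2 * 2 * N + 2 ** 2)          # phi(N) for beta_1 = 2
beta = -2.0 / phi
beta_phi = 2.0 / phi ** 2                    # d beta / d phi
r = 8 * beta ** 2
ns_minus_1 = -(beta ** 2 + 2 * beta_phi)
assert abs(r - 8.0 / (N + 1)) < 1e-12
assert abs(ns_minus_1 + 2.0 / (N + 1)) < 1e-12
```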
\subsection{Constant $\beta$-function - $\lambda > 0$}
\label{subsubsec:General_Exact_solution_cst}
We now consider the case of a constant solution for Eq.~\eqref{eq:constant_roll_beta}, i.e. $\beta_{,\phi}=0$. Then $\beta(\phi)=\pm\sqrt{6\lambda}$, implying $\lambda>0$. As pointed out in~\cite{Binetruy:2014zya}, this case simply corresponds to the power law class \textbf{Ib(0)}. Notice that Power law inflation~\cite{Lucchin:1984yf} is a static solution (different from dS) of the Einstein equations and thus there is no natural end to inflation. Similarly to the case of an exact dS configuration, in order to define a complete model it is necessary to introduce a mechanism to leave the period of accelerated expansion. As later shown in Sec.~\ref{sec:models}, such a mechanism might be provided by the insertion of some corrections to the exact solution outlined in this section.\\
In the case of power law solutions the scale factor is not exponentially increasing with time but rather scales with some power of the cosmic time. In the present case it is easy to show that $a\sim t^{1/(3\lambda)}$. As the $\beta$-function does not depend on $\phi$, the same will be true for $n_s$ and $r$, which can then immediately be computed:
\begin{align}
n_s-1=-\beta^2=-6\lambda=-\frac{r}{8}\;,&&r=48\lambda\;.
\end{align}
These models are strongly disfavored by the latest cosmological data~\cite{Ade:2015xua,Ade:2015lrj,Array:2015xqh}.
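The rigid consistency relation $n_s-1=-r/8$ that characterizes this class is straightforward to exhibit (the value of $\lambda$ below is purely illustrative):

```python
# Power law class Ib(0): both observables are fixed by lambda alone,
# with the rigid relation n_s - 1 = -6*lam = -r/8 (lambda value assumed).
lam = 0.0015
ns = 1 - 6 * lam          # 0.991
r = 48 * lam              # 0.072
assert abs((ns - 1) + r / 8) < 1e-12
```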
\subsection{Hyperbolic tangent and cotangent - $\lambda>0$ }
\label{subsubsec:General_Exact_solution_tanh}
We now solve~\eqref{eq:constant_roll_beta} for positive $\lambda$ and $\beta_{,\phi}\ne 0$ to find two possible solutions:
\begin{align}
\label{eq:constant_roll_tanhbeta_and_cotanhbeta}
\beta(\phi)&=\sqrt{6\lambda}\tanh\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)\;,&&\text{and}&&\beta(\phi)=\sqrt{6\lambda}\ \text{cotanh}\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)\;.
\end{align}
Let us start by considering the first of these two solutions. In this case the functional form for $\beta(\phi)$ clearly presents a zero in $\phi = 0$. We proceed with our analysis by computing the number of e-foldings:
\begin{align}
\label{eq:N_expression_hyperbolic}
N&=-\int_{\phi_\textrm{f}}^\phi\frac{\mathrm{d}\phi'}{\beta(\phi')}=\frac{1}{6\lambda}\ln\left[\frac{\sinh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)}{\sinh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi_\textrm{f} \right)}\right]\;.
\end{align}
To have a positive number of e-foldings (corresponding to an expanding Universe) we need $|\phi_{\textrm{f}}|<|\phi|$. This corresponds to a flow towards the fixed point. As already mentioned (see footnote~\ref{flow_towards}), this is not an appropriate description of the early Universe since there is no natural end to the period of inflation. Conversely, in this case a flow away from the fixed point would describe a shrinking Universe, which of course is not relevant for the analysis carried out in this work.\\
For the second solution the fixed point is attained in the limit $\phi\rightarrow-\infty$. As a first step we obtain $\phi_\textrm{f}$ from $\beta(\phi_\textrm{f})=1$:
\begin{align}
\label{eq:constant_roll_cosh2phif}
\sqrt{6\lambda}\text{cotanh}\left(-\sqrt{\frac{3\lambda}{2}} \phi_\textrm{f} \right)&=1\;,&&\mbox{hence}&&
\cosh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi_\textrm{f} \right)=\frac{1}{1-6\lambda}\;.
\end{align}
Notice that, as $\lambda$ is positive, in order for the field to be real-valued at the end of inflation we need $0<\lambda<1/6$. As usual, we proceed by deriving the number of e-foldings:
\begin{align}
\label{eq:constant_roll_Ncotanh}
N&=\frac{1}{6\lambda}\ln\left[\frac{\cosh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)}{\cosh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi_\textrm{f} \right)}\right] =\frac{1}{6\lambda}\ln\left[(1-6\lambda)\cosh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)\right] \;,
\end{align}
where we used Eq.~\eqref{eq:constant_roll_cosh2phif}. Notice that since $\cosh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)>\cosh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi_\textrm{f} \right)$ for $|\phi|>|\phi_f|$, $N$ is positive. At this point we can express $\beta$ in terms of $N$ as:
\begin{align}
\beta(N)&=\sqrt{\frac{6\lambda}{1-(1-6\lambda)e^{-6\lambda N}}}\;.
\end{align}
We then find for the cosmological observables $n_s$ and $r$:
\begin{align}
n_s-1=6\lambda\left(1-\frac{2}{1-(1-6\lambda)e^{-6\lambda N}}\right)\;, \qquad r=8\beta^2=\frac{48\lambda}{1-(1-6\lambda)e^{-6\lambda N}}\;.
\end{align}
For completeness, we report the scalar potential associated with this parameterization of the $\beta$-function:
\begin{align}
V(\phi)&=V_\textrm{f}\left[-\lambda+(1-\lambda)\sinh^2\left(-\sqrt{\frac{3\lambda}{2}} \phi \right)\right]\;,
\end{align}
where $V_\textrm{f}$ is the normalization of the inflationary potential that as usual can be set using the COBE normalization~\cite{Ade:2015xua,Ade:2015lrj}.\\
We can appreciate the interpolating behavior of $\beta(N)$ (between the power law \textbf{Ib(0)} and the chaotic class \textbf{Ib(1)}) by taking first the limit $6\lambda N\gg1$:
\begin{align}
\beta(N)&=\sqrt{\frac{6\lambda}{1-(1-6\lambda)e^{-6\lambda N}}}\simeq\sqrt{6\lambda}\;,
\end{align}
which is the power law class \textbf{Ib(0)} with $\beta_0=-\sqrt{6\lambda}$. On the other hand for $6\lambda N\ll 1$, we have:
\begin{align}
\beta(N)&=\sqrt{\frac{6\lambda}{1-(1-6\lambda)e^{-6\lambda N}}}\simeq\sqrt{\frac{6\lambda}{1-(1-6\lambda)(1-6\lambda N)}} \simeq \frac{1}{\sqrt{N+1}}\;,
\end{align}
corresponding to the chaotic class \textbf{Ib(1)} with $\beta_1=2$, for which $n_s$ and $r$ are those given in Eq.~\eqref{eq:ns_and_r_chaotic}.
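These two limits are easy to verify numerically; the sketch below (with an assumed value of $\lambda$, subject to $0<\lambda<1/6$) evaluates the exact $\beta(N)$ of the cotangent solution in both regimes:

```python
import math

def beta_N(lam, N):
    # Exact beta(N) for the hyperbolic cotangent solution, 0 < lam < 1/6.
    return math.sqrt(6 * lam / (1 - (1 - 6 * lam) * math.exp(-6 * lam * N)))

lam = 0.01                                   # assumed value
# Deep in the flow (6*lam*N >> 1): power law plateau, beta -> sqrt(6*lam).
assert abs(beta_N(lam, 1.0e4) - math.sqrt(6 * lam)) < 1e-6
# Towards the end (6*lam*N << 1): chaotic behaviour, beta ~ 1/sqrt(N+1).
assert abs(beta_N(lam, 1.0) - 1 / math.sqrt(2.0)) < 0.05
```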
\subsection{Tangent - $\lambda<0$ }
\label{subsubsec:General_Exact_solution_tan}
Looking for a solution of Eq.~\eqref{eq:constant_roll_beta} with negative $\lambda$ and $\beta_{,\phi}\ne 0$, we find\footnote{Note that also $\beta(\phi)=\sqrt{6|\lambda|}\cot\left(-\sqrt{\frac{3|\lambda|}{2}}\phi\right)$ is a solution in this case. However, this reduces to Eq.~\eqref{eq:constant_roll_tanbeta} through the redefinition $\phi\to\frac{\pi}{\sqrt{6|\lambda|}}-\phi $.}:
\begin{align}
\label{eq:constant_roll_tanbeta}
\beta(\phi)&=\sqrt{6|\lambda|}\tan\left(\sqrt{\frac{3|\lambda|}{2}} \phi \right)\;.
\end{align}
This form of $\beta$-function corresponds to the interpolating class introduced in~\cite{Binetruy:2014zya} with the choice $f=1/\sqrt{6|\lambda|}$. It interpolates between the linear \textbf{Ia(1)} and the chaotic class \textbf{Ib(1)}, respectively in the limits $\phi\rightarrow0$ and $\phi\rightarrow\frac{\pi}{\sqrt{6|\lambda|}}$.\\
The number of e-foldings associated with Eq.~\eqref{eq:constant_roll_tanbeta} is given by:
\begin{align}
\label{eq:N_expression_tangent}
N&=-\frac{1}{6|\lambda|}\ln\left[\frac{\sin^2\left(\sqrt{\frac{3|\lambda|}{2}} \phi \right)}{\sin^2\left(\sqrt{\frac{3|\lambda|}{2}} \phi_\textrm{f} \right)}\right]\;.
\end{align}
We obtain $\phi_\textrm{f}$ from $\beta(\phi_\textrm{f})=1$ (in this case the field is positive, $\phi_\textrm{f}>\phi\geq0$):
\begin{align}
\label{eq:constant_roll_sinh2phif}
\sqrt{6|\lambda|}\tan\left(\sqrt{\frac{3|\lambda|}{2}} \phi_\textrm{f} \right)&=1\;,&&\mbox{hence}&&
\sin^2\left(\sqrt{\frac{3|\lambda|}{2}} \phi_\textrm{f} \right)=\frac{1}{1+6|\lambda|}\;.
\end{align}
Then substituting into Eq.~\eqref{eq:N_expression_tangent}:
\begin{align}
\label{eq:constant_roll_Ntan}
N&=-\frac{1}{6|\lambda|}\ln\left[(1+6|\lambda|)\sin^2\left(\sqrt{\frac{3|\lambda|}{2}} \phi \right)\right]\;,
\end{align}
and thus we get:
\begin{align}
\beta(N)&=\sqrt{\frac{6|\lambda|}{(1+6|\lambda|)e^{6|\lambda| N}-1}}\;,
\end{align}
where we used that $\sin^2\left(\sqrt{\frac{3|\lambda|}{2}} \phi \right)=e^{-6|\lambda| N}/(1+6|\lambda|)$. We then find for $n_s$ and $r$:
\begin{align}
n_s-1& \simeq -6|\lambda|-\frac{12|\lambda|}{(1+6|\lambda|)e^{6|\lambda| N}-1}\;, \qquad r =\frac{48|\lambda|}{(1+6|\lambda|)e^{6|\lambda| N}-1}\;.
\end{align}
The scalar potential associated with this $\beta$-function reads:
\begin{align}
V(\phi)&=V_\textrm{f}\left[-|\lambda|+(1+|\lambda|)\cos^2\left(\sqrt{\frac{3|\lambda|}{2}} \phi \right)\right]\;,
\end{align}
where again $V_\textrm{f}$ is the normalization of the inflationary potential.\\
We can appreciate the interpolating behavior of this form of $\beta(N)$ by first taking the limit $6|\lambda|\gg1$:
\begin{align}
\beta(N)&\simeq e^{-3|\lambda| N}\;,
\end{align}
which is the linear class \textbf{Ia(q)} with $\beta_1=3|\lambda|$, $q=1$. In this limit (\emph{i.e.} the small field limit of~\eqref{eq:constant_roll_tanbeta}) we get:
\begin{align}
n_s-1&\simeq -6|\lambda|\;,&& r\simeq 8e^{-6|\lambda|N}\;.
\end{align}
On the other hand for $6|\lambda| N\ll 1$, we have:
\begin{align}
\beta(N)&\simeq\sqrt{\frac{6|\lambda|}{(1+6|\lambda|)(1+6|\lambda|N)-1}}=\sqrt{\frac{1}{N+1+6|\lambda| N}}\simeq\sqrt{\frac{1}{N+1}}\;,
\end{align}
corresponding to the chaotic class \textbf{Ib(1)} with $\beta_1=2$, for which $n_s$ and $r$ are those given in Eq.~\eqref{eq:ns_and_r_chaotic}. Recall that this is the large field limit, with $\sqrt{6|\lambda|} \phi \rightarrow \pi$.
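As for the previous solution, both limits can be verified numerically against the exact $\beta(N)$ (the values of $|\lambda|$ below are assumptions chosen for illustration):

```python
import math

def beta_N(lam_abs, N):
    # Exact beta(N) for the tangent solution, with lam_abs = |lambda|.
    return math.sqrt(6 * lam_abs
                     / ((1 + 6 * lam_abs) * math.exp(6 * lam_abs * N) - 1))

N = 1.0
# 6|lambda| >> 1: linear class behaviour, beta(N) ~ exp(-3|lambda| N).
lam = 5.0
assert abs(beta_N(lam, N) / math.exp(-3 * lam * N) - 1) < 0.05
# 6|lambda| N << 1: chaotic class behaviour, beta(N) ~ 1/sqrt(N+1).
lam = 1.0e-4
assert abs(beta_N(lam, N) - 1 / math.sqrt(N + 1)) < 0.01
```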
\subsubsection{Exponential}
\label{sec:exponential}
Let us consider a $\beta$-function of the form:
\begin{equation}
\label{eq:exponential_class}
\beta(\phi)=-\sqrt{6\lambda}-\alpha e^{-\gamma\phi}\;,
\end{equation}
with $\alpha > 0$, $\gamma> 0$ and $\phi >0$ so that inflation is realized for large positive values of $\phi$. In this case the $\beta$-function goes from $-\sqrt{6\lambda}$ to $-1$ as $\phi$ decreases from $\phi > \phi_{\mathrm{f}}$ to $\phi_{\mathrm{f}}$. Notice that by redefining the inflaton field as $\phi\to\phi-\ln|\alpha|/\gamma$ the $\beta$-function can be simplified to:
\begin{equation}
\label{eq:exponential_class_simplified}
\beta(\phi)=-\sqrt{6\lambda}-e^{-\gamma\phi}\;,
\end{equation}
and we can easily show that $\phi_\textrm{f} = - \ln\left[1 -\sqrt{6\lambda}\right]/\gamma$.\\
This class of models corresponds to potentials that at the lowest order can be expressed as:
\begin{align}
\label{eq:pot_exp_app}
V(\phi)& \simeq V_\textrm{f} \left(1 - C e^{-\gamma\phi} \right) \exp\left\{\sqrt{6\lambda}\phi\right\}\;,
\end{align}
where $V_\textrm{f}$ is the normalization of the inflationary potential (that as usual can be set using the COBE normalization~\cite{Ade:2015xua,Ade:2015lrj}) and $C$ is a constant which can be expressed in terms of $\gamma$ and $\lambda$. Let us proceed by computing the expression for the number of e-foldings:
\begin{equation}
\label{eq:efold_exp}
N(\phi) =\frac{1}{\gamma\sqrt{6\lambda}}\ln\left(\frac{1+\sqrt{6\lambda}e^{\gamma\phi}}{1+\sqrt{6\lambda}e^{\gamma\phi_{\mathrm{f}}}}\right) =\frac{1}{\gamma\sqrt{6\lambda}}\ln\left[\left(1-\sqrt{6\lambda}\right)\left(1+\sqrt{6\lambda}e^{\gamma\phi}\right)\right] \; ,
\end{equation}
which is positive since $\phi$ decreases during inflation. We can now invert Eq.~\eqref{eq:efold_exp} in order to get $\phi(N)$:
\begin{equation}
\label{eq:phitoN_exp}
\phi(N)=\frac{1}{\gamma}\ln\left(\frac{e^{\gamma\sqrt{6\lambda}N}+\sqrt{6\lambda}-1}{\sqrt{6\lambda}-6\lambda}\right)\;,
\end{equation}
and similarly we can express the $\beta$-function in terms of $N$:
\begin{equation}
\label{eq:beta_N}
\beta(N)=-\frac{\sqrt{6\lambda}e^{\gamma\sqrt{6\lambda}N}}{e^{\gamma\sqrt{6\lambda}N}+\sqrt{6\lambda}-1} \; .
\end{equation}
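Since all the relations above are in closed form, they can be checked directly. The following minimal Python sketch (our own illustration; parameter values are arbitrary) verifies that $\phi(N)$ inverts $N(\phi)$ and that Eq.~\eqref{eq:beta_N} agrees with $\beta(\phi(N))$:

```python
import math

def beta(phi, lam, gamma):
    # Simplified beta-function, Eq. (exponential_class_simplified)
    return -math.sqrt(6.0 * lam) - math.exp(-gamma * phi)

def phi_end(lam, gamma):
    # End of inflation, |beta(phi_f)| = 1
    return -math.log(1.0 - math.sqrt(6.0 * lam)) / gamma

def N_of_phi(phi, lam, gamma):
    # Eq. (efold_exp)
    s = math.sqrt(6.0 * lam)
    return math.log((1.0 - s) * (1.0 + s * math.exp(gamma * phi))) / (gamma * s)

def phi_of_N(N, lam, gamma):
    # Eq. (phitoN_exp)
    s = math.sqrt(6.0 * lam)
    return math.log((math.exp(gamma * s * N) + s - 1.0) / (s - 6.0 * lam)) / gamma

def beta_of_N(N, lam, gamma):
    # Eq. (beta_N)
    s = math.sqrt(6.0 * lam)
    e = math.exp(gamma * s * N)
    return -s * e / (e + s - 1.0)
```

All four functions are mutually consistent, e.g. $N(\phi(N))=N$ and $|\beta(\phi_{\mathrm{f}})|=1$ to machine precision.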
It is important to stress that despite the rather complicated form of the potential shown in Eq.~\eqref{eq:pot_exp_app}, the corresponding $\beta$-function is simple and convenient to manipulate. In particular, we have shown that the model can be solved completely analytically. This is a clear difference with respect to the usual approach, where it is quite difficult to specify potentials leading to an evolution that admits analytical solutions. It illustrates the strength of this approach, which allows constructing new interesting cases that can be solved analytically (and that eventually lead to interesting predictions for cosmological observables) even if they correspond to non-trivial potentials. Finally, the predictions for $n_s$ and $r$ in terms of $N$ can be directly obtained from Eq.~\eqref{eq:general_nsandr}. The results for a set of models in this class are shown in Figs.~\ref{fig:exponential} and~\ref{fig:exponentiallog} (in linear and semi-logarithmic scale respectively). Before discussing these plots in detail, let us describe the two asymptotic limits $\gamma\sqrt{6\lambda}N\ll 1$ and $\gamma\sqrt{6\lambda}N\gg 1$, which respectively reproduce the Exponential and Power Law classes. \\
\begin{figure}[htb]
\centering
\includegraphics[width=1.\textwidth]{Exponential_interpol_lin}
\caption{Predictions for models described by the $\beta$-function of Eq.~\eqref{eq:exponential_class} (in purple) compared with Planck 2015 constraints~\cite{Ade:2015lrj} in the $(n_s,r)$ plane in linear scale. For comparison we show the predictions for the Exponential class (in orange) and for the Power law class (in green). In this plot we choose values for $\lambda$ in the interval $\lambda \in [10^{-6},0.8] $ and values of $\gamma$ in the interval $\gamma \in [0.3,10]$. Both the values of $\lambda$ and $\gamma$ are chosen with an even logarithmic spacing. Each horizontal segment corresponds to values of $N$ ranging from $50$ to $60$ (from left to right). More details on this plot are given in the main text.\label{fig:exponential}}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=1.\textwidth]{Exponential_interpol_log}
\caption{Predictions for models described by the $\beta$-function of Eq.~\eqref{eq:exponential_class} (in purple) in the $(n_s,r)$ plane in semi-logarithmic scale. The values of $\lambda$ and $\gamma$ used for this plot are the same as in Fig.~\ref{fig:exponential}. \label{fig:exponentiallog}}
\end{figure}
In the limit $\gamma\sqrt{6\lambda}N\ll 1$ we can easily get:
\begin{equation}
\phi(N)\simeq \frac{1}{\gamma}\ln\left(\frac{1+\gamma N}{1-\sqrt{6\lambda}}\right) \; , \qquad \text{and} \qquad \beta(N)\simeq -\frac{1}{1+\gamma N} \; ,
\end{equation}
which is precisely the $\beta$-function of the Exponential class. Conversely, in the limit $\gamma\sqrt{6\lambda}N\gg 1$ we find:
\begin{equation}
\phi(N)\simeq \sqrt{6\lambda}N-\frac{1}{\gamma}\ln(\sqrt{6\lambda}-6\lambda) \; , \qquad \text{and} \qquad \beta(N) \simeq -\sqrt{6\lambda} \; ,
\end{equation}
since $e^{-\gamma\sqrt{6\lambda}N}\simeq 0$ in this limit. We thus recover the Power Law class.\\
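These two limits can again be checked numerically. In the short sketch below (values chosen purely for illustration), $\beta(N)$ from Eq.~\eqref{eq:beta_N} is compared with $-1/(1+\gamma N)$ in the regime $\gamma\sqrt{6\lambda}N\ll 1$ and with $-\sqrt{6\lambda}$ in the regime $\gamma\sqrt{6\lambda}N\gg 1$:

```python
import math

def beta_of_N(N, lam, gamma):
    # Eq. (beta_N) for the exponential class
    s = math.sqrt(6.0 * lam)
    e = math.exp(gamma * s * N)
    return -s * e / (e + s - 1.0)

# gamma * sqrt(6 lam) * N << 1: Exponential class, beta ~ -1/(1 + gamma N)
b_small = beta_of_N(50.0, 1e-8, 1.0)
# gamma * sqrt(6 lam) * N >> 1: Power Law class, beta ~ -sqrt(6 lam)
b_large = beta_of_N(50.0, 0.05, 5.0)
```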
In the plots of Fig.~\ref{fig:exponential} and Fig.~\ref{fig:exponentiallog} we show the predictions for $n_s$ and $r$ (respectively in linear and semi-logarithmic scale) for the models described in this section. Notice that in this case the $\beta$-function is completely specified by the value of the two critical exponents $\gamma$ and $\lambda$. Once these two parameters are fixed, we can compute numerical predictions as a function of $N$ only. In order to produce the plots of Fig.~\ref{fig:exponential} and Fig.~\ref{fig:exponentiallog}, we first fix some values of $\gamma$ (with even logarithmic spacing from $\gamma =0.3$ to $\gamma = 10$) and then vary the value of $\lambda$ (again with even logarithmic spacing from $\lambda = 0.8$ down to $\lambda = 10^{-6}$) to show the interpolation between the two asymptotic limits, \emph{i.e.} the Exponential class (in orange) and the Power law class (in green). The solid black lines follow the variation of $\lambda$ while keeping the values of $\gamma$ and $N$ fixed: the thick black line corresponds to $N = 60$ and the thin black line to $N = 50$. Since for larger values of $\gamma$ the exponential term approaches zero more rapidly (going deeper into the inflationary phase), smaller values of $\lambda$ are required in order to approach the exponential limit at $N\simeq 50$--$60$. This feature is manifest in the plot of Fig.~\ref{fig:exponentiallog}. Notice that between the two asymptotic limits there is an intermediate region with a whole new set of inflationary models.
\section{Setting up the model}
\label{sec:general}
In this section we start by giving a brief review of the $\beta$-function formalism for models of inflation introduced in~\cite{Binetruy:2014zya} and then we discuss the case of constant-roll inflation within this framework. Although in this paper we consider a single scalar field minimally coupled to gravity and with standard kinetic term, the results can be generalized to more complex scenarios (following the analysis of~\cite{Pieroni:2015cma,Binetruy:2016hna}).
\subsection{$\beta$-function formalism - A short review}
\label{subsec:General_beta_Function_Formalism}
The action for a homogeneous classical scalar field $\phi(t)$ minimally coupled to gravity in a FLRW universe with line element $\mathrm{d}s^2=-\mathrm{d}t^2+a^2(t)\mathrm{d}\vec{x}^2$ reads (in natural units $M_p^2\equiv (8\pi G)^{-1}=1$):
\begin{align}
\label{eq:general_action}
S&=\int \mathrm{d}^4 x \sqrt{-g}\left(\frac{ R }{2}-\frac{1}{2}\partial^\mu\phi\partial_\mu\phi-V(\phi)\right)\;.
\end{align}
Using the definition of the stress-energy tensor $T_{\mu\nu}\equiv -2(\delta S_m/\delta g^{\mu\nu})/\sqrt{-g}$, we can easily compute the pressure and the energy density of the scalar field:
\begin{align}
p&=\frac{\dot{\phi}^2}{2}-V(\phi)\;,&& \rho=\frac{\dot{\phi}^2}{2}+V(\phi)\;.
\end{align}
The dynamical evolution of the system is then set by the Einstein equations which read:
\begin{align}
\label{eq:general_EinsteinEquations}
H^2&\equiv\left(\frac{\dot{a}}{a}\right)^2=\frac{\rho}{3}\;,&& -2\dot{H}=p+\rho=\dot{\phi}^2\;.
\end{align}
Combining these equations we obtain the equation of state for the scalar field:
\begin{align}
\label{eq:general_equationofstate}
\frac{p+\rho}{\rho} = -\frac{2}{3}\frac{\dot{H}}{H^2}\;.
\end{align}
At this point, we can study the system in terms of the Hamilton-Jacobi approach of Salopek and Bond~\cite{Salopek:1990jq}. Assuming the dependence $\phi\equiv\phi(t)$ to be at least piecewise monotonic, we can invert it to get $t(\phi)$ and use the field as a clock to describe the evolution. By introducing the superpotential $W(\phi)\equiv-2H$, we obtain:
\begin{align}
\dot{W}&=W_{,\phi}\dot{\phi}=-2\dot{H}=\dot{\phi}^2\;,&&\mbox{hence}&&\dot{\phi}=W_{,\phi}\;.
\end{align}
In analogy with the definition of the $\beta$-function in the context of QFT we define:
\begin{align}
\label{eq:general_betafunction}
\beta&=\frac{\mathrm{d}\phi}{\mathrm{d} \ln a}=\frac{\dot{\phi}}{H}=-2\frac{\dot{\phi}}{W}=-2\frac{W_{,\phi}}{W}\;,
\end{align}
which exhibits the standard form of an RG equation, with $\phi$ playing the role of the coupling constant and $a$ playing the role of the renormalization scale. To be precise, the analogy with the QFT $\beta$-function is not only formal. Indeed, it is possible to relate the cosmological $\beta$-function of Eq.~\eqref{eq:general_betafunction} to the $\beta$-function describing the RG flow induced by some scalar operator in the dual QFT. However, in order to properly set this correspondence, we have to specify a mapping between the bulk inflaton and the coupling in the dual QFT. In particular, this typically requires the specification of some renormalization condition, which in principle may require a modification of the simple expression of $\beta$ in terms of $W$. While a detailed discussion of the holographic interpretation is beyond the scope of this work, an accurate discussion of this procedure can be found in~\cite{McFadden:2013ria}. \\
Eq.~\eqref{eq:general_equationofstate} implies that a phase of accelerated expansion is realized\footnote{More precisely, $\ddot{a}/a>0$ requires $|\beta(\phi)|<\sqrt{2}$. For simplicity, in the following we assume inflation to end at $|\beta(\phi)|\sim 1$.} for $| \beta (\phi) | \ll 1$. In analogy with the RG approach, we identify the zeros of the $\beta$-function as fixed points, which in the cosmological case correspond to exact dS solutions. Depending on the sign of the $\beta$-function, inflationary periods are then represented by the flow of the field away from (or towards) these fixed points\footnote{\label{flow_towards}When the flow is towards the fixed point there is no natural end to the period of accelerated expansion. Clearly, this configuration is not suitable to describe inflation. However, as discussed in~\cite{Cicciarella:2016dnv}, this scenario can be relevant in the description of the late time acceleration of the Universe in terms of quintessence.}. As a consequence, it is possible to classify the various models of inflation according to the behavior of the $\beta$-function in the neighborhood of the fixed point, rather than according to the potential. The advantage of this approach is that specifying a form for $\beta(\phi)$ actually defines a universality class encompassing several models which may have very different potentials but nevertheless lead to a similar cosmological evolution (and thus to similar predictions for cosmological observables such as the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$). Notice that in order to realize a phase of accelerated expansion we only need $| \beta (\phi) | \ll 1$, which in general does not require $\beta \rightarrow 0$. In particular, inflation can be realized even if $\beta(\phi)$ approaches a small constant value. As discussed in~\cite{Binetruy:2014zya}, this is the case of power law inflation~\cite{Lucchin:1984yf}. \\
In the $\beta$-function formalism the number of e-foldings $N$ can be expressed as:
\begin{align}
\label{eq:general_numberofoefoldings}
N=-\ln(a/a_{\mathrm{f}})=-\int_{\phi_\textrm{f}}^\phi\frac{\mathrm{d}\phi'}{\beta(\phi')}\;,
\end{align}
where $\phi_{\mathrm{f}}$ is the value of the field at the end of inflation. Similarly we can compute the superpotential:
\begin{equation}
\label{eq:general_superpot}
W(\phi) = W_{\textrm{f}} \exp \left( -\int_{\phi_\textrm{f}}^\phi \frac{\beta(\phi')}{2} \mathrm{d}\phi' \right)\;,
\end{equation}
and the inflationary potential:
\begin{align}
\label{eq:general_potential}
V(\phi)&=\frac{3}{4}W^2-\frac{1}{2}W_{,\phi}^2=\frac{3}{4}W^2\left(1-\frac{\beta^2(\phi)}{6}\right)\;,
\end{align}
whose parameterization is similar to the one used in the context of supersymmetric quantum mechanics (see for example~\cite{Binetruy:2006ad}). It is important to stress that so far all the computations are exact, \emph{i.e.} we have not performed any approximation and the analysis holds even without assuming slow-roll.\\
Assuming now to be in a neighborhood of the fixed point, we have $|\beta(\phi)|\ll 1$ and $n_s$ and $r$ at the lowest order in terms of $\beta$ and its derivative simply read:
\begin{align}
\label{eq:general_nsandr}
n_s-1&\simeq-\left(\beta^2+2\beta_{,\phi}\right)\;,&&r=8\beta^2\;.
\end{align}
In order to obtain the standard expressions of $n_s$ and $r$ in terms of $N$, we should first determine the value of $\phi$ at the end of inflation (using the condition $|\beta(\phi_{\mathrm{f}})|\sim 1$). We should then proceed by computing $N(\phi)$ (using Eq.~\eqref{eq:general_numberofoefoldings}) and invert it into $\phi(N)$ to express $n_s$ and $r$ in terms of the number of e-foldings.
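The recipe above is straightforward to implement numerically for an arbitrary $\beta(\phi)$. The following sketch (our own illustration, not part of the formalism itself) integrates Eq.~\eqref{eq:general_numberofoefoldings} starting from $\phi_{\mathrm{f}}$ and then evaluates Eq.~\eqref{eq:general_nsandr}; it is validated on the chaotic class $\beta(\phi)=-2/\phi$, for which one can also obtain analytically $\phi_{\mathrm{f}}=2$, $n_s-1=-2/(N+1)$ and $r=8/(N+1)$:

```python
import math

def predictions(beta, phi_f, N_target, h=1e-4):
    """Accumulate N = -int dphi / beta starting from phi_f (beta < 0 and
    phi increasing deeper into inflation) until N_target e-foldings,
    then evaluate n_s and r from the lowest-order expressions."""
    N, phi = 0.0, phi_f
    while N < N_target:
        N += -h / beta(phi + 0.5 * h)   # midpoint rule for the integral
        phi += h
    b = beta(phi)
    b_phi = (beta(phi + 1e-6) - beta(phi - 1e-6)) / 2e-6
    return 1.0 - (b * b + 2.0 * b_phi), 8.0 * b * b

# chaotic class Ib(1): beta = -2/phi, with |beta(phi_f)| = 1 at phi_f = 2
ns, r = predictions(lambda p: -2.0 / p, 2.0, 60.0)
```

For the exact classes discussed in this paper the same quantities are of course available in closed form; a numerical route of this kind is useful for the interpolating models of Sec.~\ref{sec:models}.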
\subsection{Constant-roll inflation}
\label{subsec:General_Constant_Roll_Inflation}
Models of constant-roll inflation have been studied in recent years as an alternative that goes beyond the simple slow-roll realization of inflation. Unlike in the usual slow-roll approach, in constant-roll inflation the second derivative $\ddot{\phi}$ in the Klein-Gordon equation is not neglected; instead, a constant rate of rolling for the inflaton field, \emph{i.e.} a constant $\ddot{\phi}/(H\dot{\phi})$, is assumed. The first case to be studied was the so called \emph{ultra slow-roll inflation}~\cite{Martin:2012pe}, in which $\ddot{\phi}/(H\dot{\phi})=-3$, corresponding to an exactly flat potential $\partial V/\partial\phi=0$. Later, deviations from this regime have been considered in~\cite{Motohashi:2014ppa,Cai:2016ngx,Motohashi:2017aob,Odintsov:2017yud,Gao:2017uja,Gao:2017owg,Motohashi:2017vdc,Nojiri:2017qvx,Odintsov:2017qpp,Motohashi:2017gqb,Oikonomou:2017bjx}, with $\ddot{\phi}/(H\dot{\phi})=\mbox{const}\ne -3$.\\
In general the constant-roll condition\footnote{Note that $\lambda$ here is related to $\alpha$ of~\cite{Motohashi:2014ppa} as $\lambda=1+\alpha/3$ and to $\beta$ of~\cite{Motohashi:2017aob} as $\lambda=-\beta/3$.} is:
\begin{equation}
\label{eq:constant_roll_definition}
\ddot{\phi} = -3 \lambda H \dot{\phi}\;.
\end{equation}
In order to express it in terms of the $\beta$-function formalism we should start by expressing $\ddot{\phi}$ in terms of the superpotential ($\dot{\phi}=W_{,\phi}$):
\begin{align}
\ddot{\phi}&=\dot{W_{,\phi}}=W_{,\phi\phi}\dot{\phi}\;,
\end{align}
so that the constant-roll equation~\eqref{eq:constant_roll_definition} becomes:
\begin{equation}
\label{eq:w_phiphi}
W_{,\phi\phi}=\frac{3\lambda}{2}W \;.
\end{equation}
At this point we compute the first derivative of the $\beta$-function:
\begin{align}
\beta_{,\phi}&=-\frac{2W_{,\phi\phi}}{W}+2\left(\frac{W_{,\phi}}{W}\right)^2\;,&&\mbox{yielding}&&W_{,\phi\phi}=\frac{W}{2}\left(\frac{1}{2}\beta^2-\beta_{,\phi}\right)\,,
\end{align}
to express Eq.~\eqref{eq:w_phiphi} as:
\begin{align}
\label{eq:constant_roll_beta}
\frac{1}{2}\beta^2-\beta_{,\phi}&=3\lambda\;,
\end{align}
which is a nonlinear Riccati equation (the corresponding linear second-order ODE is Eq.~\eqref{eq:w_phiphi}). This equation shows a first advantage of this formalism: instead of dealing with a second order differential equation for $\phi$ in which in addition we need to specify $H(t)$, here we have a first order differential equation for $\beta(\phi)$ that can be easily integrated once the value of the constant $\lambda$ is chosen. \\
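For instance, one can check that $\beta(\phi)=-\sqrt{6\lambda}\tanh\left(\sqrt{3\lambda/2}\,\phi\right)$ (an exact solution of the type discussed in Sec.~\ref{exact}) and the constant solution $\beta=-\sqrt{6\lambda}$ both satisfy Eq.~\eqref{eq:constant_roll_beta}. A quick numerical cross-check (our own sketch, with an illustrative value of $\lambda$) evaluates the residual $\beta^2/2-\beta_{,\phi}-3\lambda$ using a finite-difference derivative:

```python
import math

lam = 0.1
k = math.sqrt(1.5 * lam)  # sqrt(3 lam / 2)

def residual(beta, phi, h=1e-6):
    # Residual of the constant-roll condition beta^2/2 - beta_phi = 3 lam
    b_phi = (beta(phi + h) - beta(phi - h)) / (2.0 * h)
    return 0.5 * beta(phi) ** 2 - b_phi - 3.0 * lam

tanh_sol = lambda phi: -math.sqrt(6.0 * lam) * math.tanh(k * phi)
const_sol = lambda phi: -math.sqrt(6.0 * lam)
```

The residual vanishes (up to discretization error) at every point of the flow for both solutions.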
The scalar spectral index and the tensor-to-scalar ratio from Eq.~\eqref{eq:general_nsandr} are given in this particular case by:
\begin{align}
\label{eq:ns_r_constant_roll}
n_s-1&=-\left(\beta^2+2\beta_{,\phi}\right)=-2\beta^2+6\lambda=6\lambda-\frac{r}{4}\;.
\end{align}
We see that $n_s$ and $r$ are not independent. This should not come as a surprise: at the lowest order the two parameters depend only on $\beta$ and $\beta_{,\phi}$, and the constant-roll condition actually establishes a relationship between them.
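This relation is easy to verify numerically: for any exact solution of Eq.~\eqref{eq:constant_roll_beta}, $n_s$ and $r$ computed from Eq.~\eqref{eq:general_nsandr} must satisfy $n_s-1=6\lambda-r/4$ at every point of the flow. A minimal check (our own sketch, illustrative $\lambda$) using the hyperbolic-tangent solution of the Riccati equation:

```python
import math

lam = 0.02
k = math.sqrt(1.5 * lam)
# one exact solution of the constant-roll Riccati equation
beta = lambda phi: -math.sqrt(6.0 * lam) * math.tanh(k * phi)

def ns_minus_one_and_r(phi, h=1e-6):
    # lowest-order n_s - 1 and r from beta and its derivative
    b = beta(phi)
    b_phi = (beta(phi + h) - beta(phi - h)) / (2.0 * h)
    return -(b * b + 2.0 * b_phi), 8.0 * b * b
```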
\section{Introduction}
\label{sec:introduction}
Nowadays inflation is widely accepted as a cornerstone of early time cosmology. However, while the main mechanism that drove this early phase of accelerated expansion starts to be understood, a concrete model that is completely satisfying from a theoretical point of view is still lacking. Since the original proposals~\cite{Guth:1980zm,Linde:1981mu,Albrecht:1982wi,Linde:1983gd,Starobinsky:1980te}, a huge number of models to realize inflation have been proposed (for a fairly complete review see for example~\cite{Martin:2013tda}) and in some cases theoretical predictions are so close that they are nearly indistinguishable. A well known example of this degeneracy\footnote{However, it is fair to point out that some slight differences in the predictions for these models can be found by performing a more accurate analysis that takes into account the physics of reheating~\cite{Bezrukov:2011gp}.} is the case of $R^2$ inflation~\cite{Starobinsky:1980te} and Higgs inflation~\cite{Bezrukov:2007ep,Bezrukov:2009db}.\\
In order to have a better connection between theory and experiments, several methods to classify inflationary models have been introduced over the last years~\cite{Mukhanov:2013tua,Roest:2013fha,Garcia-Bellido:2014gna}. In this context, we have proposed the $\beta$-function formalism for inflation~\cite{Binetruy:2014zya}. While at a formal level this characterization is equivalent to the standard procedure based on the specification of the inflationary potential, it typically makes it possible to perform exact computations without assuming slow-roll. This formalism is inspired by a formal analogy between the equations describing the evolution of a scalar field in a Friedmann-Lema\^itre-Robertson-Walker (FLRW) background and a renormalization group equation (RGE). Because of this similarity, the evolution of the Universe during inflation is expressed in terms of a $\beta$-function similar to the one well known in the context of quantum field theory (QFT). When the magnitude of the cosmological $\beta$-function is smaller than one, the Universe experiences a phase of accelerated expansion and in particular it is easy to show that an exact de Sitter (dS) configuration is realized when the $\beta$-function is exactly equal to zero. As a consequence, in this framework inflation can be described using the Wilsonian picture of renormalization group (RG) flows between fixed points and it is thus natural to classify inflationary models in terms of a minimal set of parameters (\emph{i.e.} critical exponents) that specifies these flows. \\
The $\beta$-function formalism has a strong connection with the idea of applying holography to describe the inflationary Universe~\cite{Skenderis:2006jq,McFadden:2009fg,McFadden:2010na} which is nowadays a rapidly developing field of research (see for example~\cite{McFadden:2010vh,Bzowski:2012ih,Garriga:2014ema,Garriga:2014fda,Afshordi:2016dvb,Afshordi:2017ihr,Hawking:2017wrd,Conti:2017pqc}). In this framework, which is based on the (A)dS-CFT correspondence of Maldacena~\cite{Maldacena:1997re}, the deformation of an asymptotic (A)dS space-time (corresponding to the period of inflation) is dual to the deformation of a (pseudo)-QFT (corresponding to the RG flow which is typically described in terms of a $\beta$-function). The cosmological $\beta$-function is thus interpreted as the usual $\beta$-function for the dual QFT. At fixed points with $\beta = 0 $ an exact dS configuration is realized and the dual QFT attains scale invariance becoming a CFT. A departure from these fixed points thus corresponds to scaling regions and on the cosmological side to inflationary epochs. This indicates that the appearance of an RG equation in the cosmological context is not fortuitous but rather supported by deeper theoretical motivations. \\
In this work, we study the case of constant-roll inflation in terms of the $\beta$-function formalism. Constant-roll inflation was originally introduced in~\cite{Martin:2012pe} and recently it attracted a lot of interest~\cite{Motohashi:2014ppa,Cai:2016ngx,Motohashi:2017aob,Odintsov:2017yud,Gao:2017uja,Gao:2017owg,Motohashi:2017vdc,Nojiri:2017qvx,Odintsov:2017qpp,Motohashi:2017gqb,Oikonomou:2017bjx} because of the possibility of predicting cosmological parameters that are in agreement with the most recent cosmological observational constraints~\cite{Ade:2015xua,Ade:2015lrj,Array:2015xqh}. In these models the scalar field $\phi$ is assumed to satisfy the constant-roll condition $\ddot{\phi} = - 3 \lambda H \dot{\phi}$, where $H$ is the Hubble parameter and $\lambda$ is some constant. Notice that for $\lambda \simeq 0 $ we recover the usual case of slow-roll inflation and for $\lambda = 1$ we have ultra slow-roll inflation~\cite{Tsamis:2003px,Kinney:2005vj,Namjoo:2012aa,Martin:2012pe,Dimopoulos:2017ged} (where the potential is exactly flat)\footnote{Interestingly, some recent works have shown that in this class of models the curvature perturbation on comoving slices and the curvature perturbation on uniform density slices do not coincide and are not conserved~\cite{Romano:2015vxz,Romano:2016gop}. Moreover, it has also been shown that it is possible to violate the non-Gaussianity consistency relations for single field inflationary models~\cite{Martin:2012pe,Namjoo:2012aa,Cai:2016ngx,Motohashi:2017aob,Odintsov:2017yud,Romano:2016gop,Mooij:2015yka}.}. We show that the class of models satisfying the constant-roll condition can be easily described in terms of the $\beta$-function formalism. 
Since the analysis carried out in this framework does not require the slow-roll approximation, it allows for a very simple characterization of constant-roll inflation\footnote{In fact some of the models that satisfy the constant-roll condition were already discussed in the general classification carried out in~\cite{Binetruy:2014zya}.}. Moreover, we show that with this approach we can define a further generalization of these models, leading to a new set of inflationary landscapes. In particular we construct a set of consistent inflationary models that asymptote to power-law inflation and have interesting cosmological predictions. \\
The paper is organized as follows. In Sec.~\ref{sec:general} we briefly review the $\beta$-function formalism and describe constant-roll inflation using this method. In Sec.~\ref{exact} we show that this approach is well suited to derive models for constant-roll inflation and discuss them in terms of the universality classes introduced in~\cite{Binetruy:2014zya}. In Sec.~\ref{sec:models} we go beyond the exact cases and construct quasi solutions of the constant-roll equation. We present their interpolating behavior and their phenomenological predictions. Our concluding remarks are given in Sec.~\ref{sec:conclusions}.
\subsubsection{Inverse}
\label{sec:inverse}
In this section we consider the inverse of a monomial as a correction to the constant $\beta$-function\footnote{Note that as already stated in Sec.~\ref{sec:positive}, in order to be consistent with the convention used in~\cite{Binetruy:2014zya,Pieroni:2015cma}, the $\beta$-function is taken to be negative and the field $\phi$ is positive. An analogous derivation can be carried out with a positive $\beta$-function and a negative valued field by adjusting the sign of the parameter $\alpha$.} ($\alpha >0$):
\begin{equation}
\label{eq:inverse_class}
\beta(\phi) = -\sqrt{6 \lambda} - \frac{\alpha}{\phi^n} \;.
\end{equation}
This case is expected to interpolate between a power law and either the chaotic class (for $n=1$) or the inverse class ($n >1$). This behavior will be shown explicitly for the cases $n =1$ and $n = 2 $ where analytical expressions for $N(\phi)$ can be obtained. Before considering the two cases separately, it is useful to compute the general expression for $\phi_{\textrm{f}}$ using $|\beta(\phi_{\textrm{f}})| =1 $:
\begin{equation}
\phi_{\textrm{f}} = \left( \frac{\alpha}{ 1 -\sqrt{6 \lambda}} \right)^{1/n} \; .
\end{equation}
Notice that analogously to the Exponential case discussed in the previous section, this is a large field model where inflation occurs for $\phi_{\textrm{f}} \lesssim \phi $. At lowest order the potential associated with this model is:
\begin{align}
V(\phi)& \simeq V_\textrm{f} \left( 1 - \frac{\alpha}{n-1} \phi^{1-n} \right) e^{\sqrt{6\lambda}\phi} \;,&&n > 1\;,\\
V(\phi)&\simeq V_\textrm{f} \, \phi^\alpha e^{\sqrt{6\lambda}\phi}\;,&&n=1\;.
\end{align}
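These expressions follow from $V\simeq\frac{3}{4}W^2$ with $W\propto\exp\left(-\int\beta/2\,\mathrm{d}\phi'\right)$. As a cross-check (our own sketch, with illustrative parameter values), the code below integrates the $n=1$ $\beta$-function numerically and verifies that $\ln W$ differs from $\frac{\alpha}{2}\ln\phi+\frac{\sqrt{6\lambda}}{2}\phi$ only by a constant, so that $W^2\propto\phi^\alpha e^{\sqrt{6\lambda}\phi}$:

```python
import math

lam, alpha = 0.01, 2.0
s = math.sqrt(6.0 * lam)
beta = lambda phi: -s - alpha / phi      # n = 1 case of the inverse class

def logW(phi, phi0=1.0, steps=20000):
    # log W(phi) up to a constant: -int_{phi0}^{phi} beta/2 dphi' (midpoint rule)
    h = (phi - phi0) / steps
    return -0.5 * h * sum(beta(phi0 + (i + 0.5) * h) for i in range(steps))

# expected shape of log W for n = 1, up to an additive constant
analytic = lambda phi: 0.5 * alpha * math.log(phi) + 0.5 * s * phi
```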
Let us now focus on the two cases $n=1$ and $n=2$.
\begin{itemize}
\item \textbf{Chaotic, $n = 1$}\\
In this case we easily get:
\begin{equation}
\label{eq:inverse_class_N_n1}
N(\phi) = \frac{\phi - \phi_\textrm{f}}{\sqrt{6 \lambda}} - \frac{\alpha}{6 \lambda } \ln \left[ \frac{1 + \frac{\sqrt{6 \lambda}}{\alpha} \phi}{1 + \frac{\sqrt{6 \lambda}}{\alpha} \phi_{\textrm{f}}} \right] \; .
\end{equation}
In general this equation cannot be inverted to get $\phi(N)$. However, we can show that there are two asymptotic behaviors. As a first step, we multiply both sides by $6\lambda / \alpha$ and define $\sqrt{6 \lambda} \phi / \alpha \equiv z$, so that Eq.~\eqref{eq:inverse_class_N_n1} becomes:
\begin{equation}
\label{eq:equation_for_N_inverse_n1}
\frac{6\lambda N}{\alpha} = z - z_{\textrm{f}} - \ln \left[ \frac{1 + z }{1 + z_{\textrm{f}}} \right] \; .
\end{equation}
Notice that as $z > z_{\textrm{f}}$, the second term on the r.h.s. of this equation is negative. At this point we can consider the two limiting cases:
\begin{enumerate}
\item In the limit $ \frac{6\lambda N}{\alpha} \ll 1$ a solution for Eq.~\eqref{eq:equation_for_N_inverse_n1} exists for $z \ll 1$. Expanding the logarithm on the right-hand side to second order in $z$ we get:
\begin{equation}
\frac{6\lambda N}{\alpha} \simeq z^2 /2 - z_{\textrm{f}}^2 /2 \;.
\end{equation}
We can thus express the field as function of $N$ as:
\begin{equation}
\phi(N) \simeq \sqrt{2 \alpha N + \phi_{\textrm{f}}^2}\;,
\end{equation}
where $\phi_{\textrm{f}}=\alpha(1-\sqrt{6\lambda})^{-1}$. The $\beta$-function becomes:
\begin{equation}
\beta(N) \simeq -\sqrt{6\lambda} - \alpha /\sqrt{2 \alpha N + \phi_{\textrm{f}}^2} \simeq - \sqrt{ \frac{\alpha}{2 N} }\;,
\end{equation}
and we recognize the $\beta$-function of the Chaotic class \textbf{Ib(1)} (see Sec.~\ref{subsubsec:General_Exact_solution_exact_chaotic}). For completeness we report the predictions for $n_s$ and $r$:
\begin{equation}
n_s - 1 \simeq - \frac{1 + \alpha/2}{N} \;, \qquad r \simeq \frac{4 \alpha}{N} \; .
\end{equation}
\item In the limit $\frac{6\lambda N}{\alpha} \gg 1$ the solution for Eq.~\eqref{eq:equation_for_N_inverse_n1} exists for $z \gg 1$. The linear term dominates on the right-hand side and we get:
\begin{equation}
\phi(N) = \sqrt{6 \lambda} N + \phi_\textrm{f} \; .
\end{equation}
At lowest order the $\beta$-function as a function of $N$ is simply a constant and thus corresponds to the power law class \textbf{Ib(0)} (see Sec.~\ref{subsubsec:General_Exact_solution_cst}).
\end{enumerate}
\begin{figure}[htb]
\centering
\includegraphics[width=1.\textwidth]{Inverse}
\caption{Predictions for models described by the $\beta$-function of Eq.~\eqref{eq:inverse_class} compared with Planck 2015 constraints~\cite{Ade:2015lrj} in the $(n_s,r)$ plane in linear scale. We show the results both for $n=1$ (in blue) and for $n=2$ (in teal). For comparison we also show the predictions of the power law class (in green), the chaotic class (in black) and the inverse class with $p=2$ (in magenta). For $n=1$ we use $\alpha = 0.33,0.5,1,2,3$ and $\lambda \in [10^{-6},10^{-1}]$ with even logarithmic spacing, for $n=2$ we use values for $\alpha \in [10^{-2},10^2]$ and $\lambda \in [10^{-6},10^{-1}]$ both with even logarithmic spacing. Each horizontal segment corresponds to $N \in [50,60]$ (from left to right).}
\label{fig:inverse}
\end{figure}
\item \textbf{Quadratic inverse, n = 2}\\
In this case the number of e-foldings reads:
\begin{equation}
\label{eq:inverse_class_N_n2}
N(\phi) = \frac{\phi - \phi_\textrm{f}}{\sqrt{6 \lambda}} - \sqrt{\frac{\alpha}{6 \lambda\sqrt{6 \lambda} }}\left[ \arctan \left(\sqrt{\frac{\sqrt{6 \lambda}}{\alpha}} \phi \right) - \arctan \left(\sqrt{\frac{\sqrt{6 \lambda}}{\alpha}}\phi_{\textrm{f}} \right) \right] \; .
\end{equation}
In general this equation cannot be inverted to get $\phi(N)$ but again we can consider the two asymptotic behaviors. We start by multiplying both sides by $(6\lambda)^{3/4} / \sqrt{\alpha}$ and define $(6 \lambda)^{1/4}\phi / \sqrt{\alpha} \equiv z$ so that Eq.~\eqref{eq:inverse_class_N_n2} reads:
\begin{equation}
\label{eq:equation_for_N_inverse_n2}
\frac{(6 \lambda)^{3/4} N}{\sqrt{\alpha}} = z - z_{\textrm{f}} - \left[\arctan \left(z \right) - \arctan \left( z_{\textrm{f}}\right) \right] \; .
\end{equation}
We then notice that as $z > z_{\textrm{f}}$ the second term on the r.h.s. is always negative and we proceed by distinguishing the two limiting cases:
\begin{enumerate}
\item In the limit $ \frac{(6 \lambda)^{3/4} N}{\sqrt{\alpha}} \ll 1$ there is a solution for Eq.~\eqref{eq:equation_for_N_inverse_n2} when $z \ll 1$. In this limit we can Taylor expand the $\arctan$ to get:
\begin{equation}
\frac{(6\lambda)^{3/4} N}{\sqrt{\alpha} } \simeq \frac{z^3}{3} - \frac{z_{\textrm{f}}^3}{3}\;,
\end{equation}
and therefore:
\begin{equation}
\phi(N) \simeq\left( 3 \alpha N + \phi_\textrm{f}^3 \right)^{\frac{1}{3}}\;.
\end{equation}
Substituting into Eq.~\eqref{eq:inverse_class} we get:
\begin{equation}
\beta(N) = -\sqrt{6 \lambda} - \alpha \left( 3 \alpha N + \phi_\textrm{f}^3 \right)^{-\frac{2}{3}} \simeq - \left( \frac{\sqrt{\alpha}}{3 N}\right)^{2/3} \;,
\end{equation}
that corresponds to the Inverse class \textbf{Ib(p)} with $p=2$. For completeness the predictions for $n_s$ and $r$ are:
\begin{equation}
n_s - 1 \simeq -\frac{4}{3 N} \; , \qquad r \simeq \frac{8 \alpha^{2/3}}{ (3 N)^{4/3}} \;.
\end{equation}
\item In the limit $\frac{(6 \lambda)^{3/4} N}{\sqrt{\alpha}} \gg 1$, we must look for a solution of Eq.~\eqref{eq:equation_for_N_inverse_n2} with $z \gg 1$. In this case the linear term dominates the r.h.s. and we simply get:
\begin{equation}
\phi(N) = \sqrt{6\lambda}N+ \phi_\textrm{f} \; .
\end{equation}
This again corresponds to the power law class \textbf{Ib(0)} (see Sec.~\ref{subsubsec:General_Exact_solution_cst}).
\end{enumerate}
\end{itemize}
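The asymptotic behaviors derived above can be confirmed numerically. The sketch below (our own cross-check, with purely illustrative parameter values) implements Eqs.~\eqref{eq:inverse_class_N_n1} and~\eqref{eq:inverse_class_N_n2} and checks that the approximate $\phi(N)$ obtained in each limit indeed returns the requested number of e-foldings to within a few percent:

```python
import math

def N_n1(phi, lam, alpha):
    # Eq. (inverse_class_N_n1), n = 1
    s = math.sqrt(6.0 * lam)
    pf = alpha / (1.0 - s)
    return (phi - pf) / s - (alpha / (6.0 * lam)) * math.log(
        (1.0 + s * phi / alpha) / (1.0 + s * pf / alpha))

def N_n2(phi, lam, alpha):
    # Eq. (inverse_class_N_n2), n = 2
    s = math.sqrt(6.0 * lam)
    pf = math.sqrt(alpha / (1.0 - s))
    c = math.sqrt(alpha / (6.0 * lam * s))
    z = math.sqrt(s / alpha)
    return (phi - pf) / s - c * (math.atan(z * phi) - math.atan(z * pf))
```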
In the plot of Fig.~\ref{fig:inverse} we show the predictions for $n_s$ and $r$ (in linear scale) for the class of models described in this section. The procedure used to produce this plot is similar to the one defined in Sec.~\ref{sec:exponential}. We can see that (as expected) for small values of $\lambda$ the predictions approach those of the chaotic and the inverse class respectively. On the other hand, for larger values of $\lambda$ we recover the usual power law behavior. Notice that in this plot (and more generally in this section) we only consider models for which we can perform analytical computations. However, according to the discussion of~\cite{Binetruy:2014zya}, for larger values of $n$ we can define models of the inverse class which lead to predictions for $n_s$ and $r$ in even better agreement with the experimental constraints. Clearly, such models could be used to implement the mechanism discussed in this paper, leading to models whose predictions for $n_s$ and $r$ agree even better with the data.
\section{Beyond exact solutions}
\label{sec:models}
Having established that the $\beta$-function formalism is convenient to describe constant-roll inflation, in this section we show that this approach is also well suited for going beyond the exact cases discussed so far. We consider quasi-solutions, \emph{i.e.} models that satisfy the constant-roll equation asymptotically (deep in the inflationary phase) but not at later times (towards the end of inflation). These approximate solutions are interesting for several reasons:
\begin{itemize}
\item From a purely phenomenological point of view it is interesting to explore the possibility of defining a new set of models that may lead to interesting predictions for $n_s$ and $r$.
\item From a theoretical point of view, these models are (as we will explain in the following sections) intrinsically different from the usual models.
\item As already mentioned in Sec.~\ref{subsubsec:General_Exact_solution_cst}, in the case of the power-law class \textbf{Ib(0)} (\emph{i.e.} power-law inflation) the model is incomplete as it lacks a method to put an end to the inflationary stage. The introduction of corrections is thus necessary to define a consistent model for inflation (\emph{i.e.} a graceful exit).
\end{itemize}
Before considering some explicit models, let us first discuss the general procedure to implement this mechanism.\\
Let $F(\phi)$ be an exact solution of Eq.~\eqref{eq:constant_roll_beta} (\emph{i.e.} one of the functions discussed in the previous section) and let us express the $\beta$-function as:
\begin{equation}
\beta(\phi) = F(\phi) + f(\phi) \;,
\end{equation}
where $f(\phi)$ is a generic function of $\phi$. In order for $\beta(\phi)$ to asymptotically satisfy the constant-roll condition~\eqref{eq:constant_roll_beta}, the function $f(\phi)$ must satisfy:
\begin{equation}
-f_{,\phi}(\phi) + f^2(\phi)/2 + f(\phi) F(\phi) \simeq 0 \; ,
\end{equation}
as we go deeper into the inflationary stage. In particular this means that in this regime both $f(\phi)$ and its first derivative go to zero. Notice that since $\beta$ is not an exact solution of~\eqref{eq:constant_roll_beta}, the predictions for $n_s$ and $r$ are not given by the special case~\eqref{eq:ns_r_constant_roll} but by the general expression~\eqref{eq:general_nsandr}.\\
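To make the origin of this condition explicit, note that if the constant-roll equation~\eqref{eq:constant_roll_beta} is read as $-\beta_{,\phi}+\beta^2/2 = 3\lambda$ (this reading is our assumption here; it is consistent with both the constant solutions $\beta = \pm\sqrt{6\lambda}$ and the tangent solution used below), then substituting $\beta = F + f$ and using that $F$ is an exact solution gives:
\begin{equation}
-\beta_{,\phi} + \frac{\beta^2}{2} - 3\lambda = -f_{,\phi}(\phi) + \frac{f^2(\phi)}{2} + f(\phi) F(\phi) \;,
\end{equation}
so the combination that must vanish asymptotically is precisely the residual of the constant-roll equation.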
For example, assuming $F(\phi)$ to have the form discussed in Sec.~\ref{subsubsec:General_Exact_solution_tan} (\emph{i.e.} the tangent case with $\lambda < 0$), we can express the $\beta$-function as:
\begin{equation}
\label{eq:deformed_tanget}
\beta(\phi) = \sqrt{6|\lambda|}\tan\left(\sqrt{\frac{3|\lambda|}{2}} \phi \right) + f(\phi) \; .
\end{equation}
At this point we can use the equations introduced in Sec.~\ref{subsec:General_beta_Function_Formalism} to study the deformation induced by the function $f(\phi)$ around the exact solution discussed in Sec.~\ref{subsubsec:General_Exact_solution_tan}. While in the case defined by Eq.~\eqref{eq:deformed_tanget} the equations are not easy to handle, an analytical treatment can be carried out for $F(\phi) = \pm \sqrt{6 \lambda}$, allowing for a systematic study of the deformations over the asymptotic power law solution. In the remainder of this section we focus on this case: we explicitly construct some models that arise as deformations of the constant solution studied in Sec.~\ref{subsubsec:General_Exact_solution_cst} and discuss their predictions.
\subsubsection{Monomial}
\label{sec:monomial}
In this section we consider a $\beta$-function of the form:
\begin{equation}
\label{eq:monomial_class}
\beta(\phi) = \sqrt{6\lambda} + \alpha \phi^n\;,
\end{equation}
with $\alpha>0$. In contrast to the cases discussed in Sec.~\ref{sec:exponential} and in Sec.~\ref{sec:inverse}, for this class of models inflation is realized for $\phi \rightarrow 0$. During this epoch the field $\phi$ is positive and monotonically increasing from $\phi=0$ to $\phi_\textrm{f} = [ ( 1 - \sqrt{6 \lambda})/\alpha ]^{1/n}$. Notice that for $\alpha > 1$ the field excursion during inflation is smaller than $1$. \\
As a first step we show that the case $n=1$ can be safely ignored. Since in this case the $\beta$-function is simply $\beta(\phi) = \sqrt{6\lambda} + \alpha \phi$, the field redefinition $\phi = \phi' - \sqrt{6\lambda}/\alpha $ leads back to the usual monomial class with $\beta(\phi') = \alpha \phi'$. Analogously, for any odd $n$ we can perform the field redefinition $\phi = \phi' - (\sqrt{6\lambda}/\alpha)^{1/n}$ to get:
\begin{equation}
\beta(\phi) = \sum_{i = 1}^n c_i \phi^i\;,
\end{equation}
where the $c_i$ denote constant factors. As the constant term disappears, these cases have already been discussed in~\cite{Binetruy:2014zya} and are not relevant for the analysis presented in this paper. For this reason we restrict to even values of $n$. \\
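The cancellation of the constant term for odd $n$ can be verified numerically; the values of $\lambda$ and $\alpha$ below are arbitrary.

```python
import math

# beta(phi) = sqrt(6*lambda) + alpha*phi^n; for odd n the shift
# phi = phi' - (sqrt(6*lambda)/alpha)^(1/n) removes the constant term,
# so the shifted beta must vanish at phi' = 0.
lam, alpha, n = 1e-4, 0.5, 3   # arbitrary illustrative values
shift = (math.sqrt(6.0 * lam) / alpha) ** (1.0 / n)
residual = math.sqrt(6.0 * lam) + alpha * (0.0 - shift) ** n
```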
The potentials for these models (with $n>1$) are given by:
\begin{align}
V(\phi)&=V_\textrm{f} \, \exp\left\{-\sqrt{6\lambda}\phi-\frac{\alpha}{n+1}\phi^{n+1}\right\}\left[1-\lambda-\frac{1}{6}\left(2\sqrt{6\lambda}\alpha\phi^n+\alpha^2\phi^{2n}\right)\right] \, .
\end{align}
As we explain in the following, in order to provide a consistent model of inflation the field excursion is non-trivially related to the values of $\alpha$ and $\lambda$. Therefore we cannot provide a general first order expression for the potential. \\
\begin{figure}[htb]
\begin{center}
\centering
\includegraphics[width=1.\textwidth]{Monomial}
\end{center}
\caption{Predictions for models described by the $\beta$-function of Eq.~\eqref{eq:monomial_class} compared with Planck 2015 constraints~\cite{Ade:2015lrj} in the $(n_s,r)$ plane in linear scale. We show the results both for $n=2$ (in red) and for $n=4$ (in pink) as well as for the power law class (in green). For $n=2$ we used values of $\lambda$ in the interval $\lambda \in [10^{-10},10^{-3.5}] $ and values of $\alpha$ in the interval $\alpha \in [10^{-3},10^{-2}]$. For $n=4$ we used values of $\lambda$ in the interval $\lambda \in [10^{-10},10^{-3.5}] $ and values of $\alpha$ in the interval $\alpha \in [10^{-4},10^{-3}]$. Both the values of $\lambda$ and $\alpha$ are chosen with an even logarithmic spacing. Horizontal lines correspond to values of $N$ ranging from $50$ to $60$ (from left to right).}
\label{fig:monomial}
\end{figure}
In order to discuss an explicit example which can be solved fully analytically let us focus on the case with $n=2$. The first step is to compute the number of e-foldings:
\begin{equation}
N =-\frac{1}{\sqrt{\alpha\sqrt{6\lambda}}}\left(\arctan\left[\sqrt{\frac{\alpha}{\sqrt{6\lambda}}}\phi\right]-\arctan\left[\sqrt{\frac{1-\sqrt{6\lambda}}{\sqrt{6\lambda}}}\right]\right)\;,
\end{equation}
where we have substituted $\phi_{\textrm{f}}^2=(1-\sqrt{6\lambda})/\alpha$. Before proceeding further, it is interesting to stress that these models can support inflation only for a limited number of e-foldings. This can be easily checked by computing the value of $N$ for $\phi = 0$:
\begin{align}
N_{\text{tot}}&=N(\phi=0)=\frac{1}{\sqrt{\alpha\sqrt{6\lambda}}}\arctan\left[\sqrt{\frac{1-\sqrt{6\lambda}}{\sqrt{6\lambda}}}\right]\;.
\end{align}
For the model to be a realistic description of inflation, we then need $N_{\text{tot}}$ to be larger than $50 \div 60$. This turns into a constraint on the maximal value of $\alpha$ at a given value of $\lambda$. For example, if we require $N_{\text{tot}} \gtrsim 60$ we get:
\begin{equation}
\label{eq:monomial_alpha_max}
\alpha \lesssim \frac{1}{3600\sqrt{6\lambda}}\arctan^2\left[\sqrt{\frac{1-\sqrt{6\lambda}}{\sqrt{6\lambda}}}\right]\;.
\end{equation}
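A short numerical sketch of this constraint, for an arbitrary illustrative value of $\lambda$:

```python
import math

def n_tot(lam, alpha):
    # Total e-foldings N_tot = arctan(sqrt((1-s)/s)) / sqrt(alpha*s),
    # with s = sqrt(6*lambda), for the n = 2 monomial deformation.
    s = math.sqrt(6.0 * lam)
    return math.atan(math.sqrt((1.0 - s) / s)) / math.sqrt(alpha * s)

def alpha_max(lam, n_req=60.0):
    # Largest alpha compatible with at least n_req e-foldings.
    s = math.sqrt(6.0 * lam)
    return math.atan(math.sqrt((1.0 - s) / s)) ** 2 / (n_req ** 2 * s)

lam = 1e-6                      # arbitrary illustrative value
a_max = alpha_max(lam)          # saturates N_tot = 60 exactly
```

By construction, choosing $\alpha$ at this bound gives exactly $60$ e-foldings, while any larger $\alpha$ shortens the inflationary period.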
We can proceed by computing the expression for $\phi$ as a function of $N$:
\begin{align}
\label{eq:monomial_efold}
\phi(N)&=\sqrt{\frac{\sqrt{6\lambda}}{\alpha}}\tan\left[-\sqrt{\alpha\sqrt{6\lambda}}N+\arctan\left[\sqrt{\frac{1-\sqrt{6\lambda}}{\sqrt{6\lambda}}}\right]\right]\;.
\end{align}
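Eq.~\eqref{eq:monomial_efold} can be checked against the expression for $N(\phi)$ directly; the parameter values below are illustrative only.

```python
import math

def efolds(phi, lam, alpha):
    # N(phi) for beta = sqrt(6*lam) + alpha*phi^2, with
    # phi_f^2 = (1 - sqrt(6*lam))/alpha already substituted.
    s = math.sqrt(6.0 * lam)
    return -(math.atan(math.sqrt(alpha / s) * phi)
             - math.atan(math.sqrt((1.0 - s) / s))) / math.sqrt(alpha * s)

def phi_of_N(N, lam, alpha):
    # Inversion of N(phi), as in Eq. (monomial_efold).
    s = math.sqrt(6.0 * lam)
    return math.sqrt(s / alpha) * math.tan(
        -math.sqrt(alpha * s) * N + math.atan(math.sqrt((1.0 - s) / s)))

lam, alpha = 1e-6, 1e-4        # arbitrary illustrative values
phi55 = phi_of_N(55.0, lam, alpha)
```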
Notice that the r.h.s. of this equation is positive \emph{i.e.} the ``fixed point'' $\phi = 0$ is consistently approached for $N = N_{\text{tot}}$. Moreover, it is interesting to notice that if we choose the parameter $\alpha$ to be at its maximal value given by~\eqref{eq:monomial_alpha_max}, we have $\phi = 0$ for $N_{\text{tot}}=60$. In this case the model would have an overall inflationary period of exactly $60$ e-foldings and thus the predictions at $N=60$ are given by the constant $\sqrt{6\lambda}$ \emph{i.e.} by the power law fixed point\footnote{Notice however that in order to impose the Bunch-Davies vacuum for the perturbations that leave the horizon at $N = 60$ we need $N_{\text{tot}} > 60$.}. \\
Using Eq.~\eqref{eq:monomial_efold} the $\beta$-function can be expressed as:
\begin{equation}
\beta(N) = \sqrt{6\lambda}\left(1+\tan^2\left[-\sqrt{\alpha\sqrt{6\lambda}}N+\arctan\left[\sqrt{\frac{1-\sqrt{6\lambda}}{\sqrt{6\lambda}}}\right]\right]\right)\;,
\end{equation}
again $n_s$ and $r$ are respectively given by:
\begin{align}
n_s-1&=-\left[\beta^2(N)+2\beta_{,\phi}(N)\right] \; , \qquad r=8\beta^2(N)\;,
\end{align}
where $\beta_{,\phi}(N)$ is obtained by taking first the $\phi$ derivative of Eq.~\eqref{eq:monomial_class} and then substituting Eq.~\eqref{eq:monomial_efold}.\\
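Putting the pieces together, the computation of $n_s$ and $r$ at a given $N$ can be sketched as follows; the parameter values are again arbitrary, chosen only for illustration.

```python
import math

def predictions(N, lam, alpha):
    # n_s - 1 = -(beta^2 + 2*beta_phi) and r = 8*beta^2, evaluated along
    # the trajectory phi(N) for beta = sqrt(6*lam) + alpha*phi^2
    # (so beta_phi = 2*alpha*phi).
    s = math.sqrt(6.0 * lam)
    phi = math.sqrt(s / alpha) * math.tan(
        -math.sqrt(alpha * s) * N + math.atan(math.sqrt((1.0 - s) / s)))
    beta = s + alpha * phi ** 2
    beta_phi = 2.0 * alpha * phi
    return 1.0 - (beta ** 2 + 2.0 * beta_phi), 8.0 * beta ** 2

ns, r = predictions(60.0, 1e-8, 1e-3)   # arbitrary illustrative values
```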
The plot of Fig.~\ref{fig:monomial} shows the predictions for $n_s$ and $r$ (in linear scale) for the class of models described in this section\footnote{Again the procedure used to obtain these plots is similar to the one defined in Sec.~\ref{sec:exponential}.}. Notice that in this plot we are also showing the predictions for a model $(n=4)$ for which we are not able to perform a fully analytical treatment. Interestingly, both the cases with $n=2$ and $n=4$ predict values for $n_s$ and $r$ that are in agreement with the constraints set by the latest cosmological data~\cite{Ade:2015xua,Ade:2015lrj,Array:2015xqh}. Even in this case we can appreciate the interpolating behavior of the $\beta$-function. If $\sqrt{6\lambda}\ll1$ we simply recover the monomial case, and no difference can be appreciated in the last $60$ e-foldings. Conversely, for larger values of $\lambda$ (but still in agreement with the constraint of at least $50 \div 60$ e-foldings), we approach the power law predictions. In summary, we have defined a set of inflationary models that are in agreement with the latest cosmological data and that lead to a finite period of inflation.
\section{Introduction}
There are many topics of great importance and interest in the areas of modeling and inverse problems which are properly viewed as essential in the use of mathematics and statistics in scientific inquiries. A brief, noninclusive list of topics include the use of traditional sensitivity functions (TSF) and generalized sensitivity functions (GSF) in experimental design (what type and how much data is needed, where/when to take observations) \cite{BDE,BDEK,BEG,Kappel,TC}, choice of mathematical models and their parameterizations (verification, validation, model selection and model comparison techniques) \cite{BDSS,BF1,BF2,bed,boz1,boz2,BA1,BA2,hur}, choice of statistical models (observation process and sampling errors, residual plots for statistical model verification, use of asymptotic theory and bootstrapping for computation of standard errors, confidence intervals) \cite{BDSS,BHR,DG,ET,SeWi,ST}, choice of cost functionals (MLE, OLS, WLS, GLS, etc.,) \cite{BDSS,DG}, as well as parameter identifiability and selectivity. There is extensive literature on each of these topics and many have been treated in surveys in one form or another (\cite{DG} is an excellent monograph with many references on the statistically related topics) or in earlier lecture notes \cite{BDSS}.
We discuss here an enduring major problem: selection of which model parameters can be readily and reliably (with quantifiable uncertainty bounds) estimated in an inverse problem formulation. This is especially important in many areas of biological modeling where often one has large dynamical systems (many state variables), an even larger number of unknown parameters to be estimated and a paucity of longitudinal time observations or data points. As biological and physiological models (at the cellular, biochemical pathway or whole organism level) become more sophisticated (motivated by increasingly detailed understanding - or lack thereof - of mechanisms), it is becoming quite common to have large systems (10-20 or more differential equations), with a plethora of parameters (25-100) but only a limited number (50-100 or fewer) of data points per individual organism. For example, we find models for the cardiovascular system \cite[Chapter 1]{Kappel} (where the model has 16 state variables and 22 parameters) and \cite[Chapter 6]{Ottesen} (where the model has 22 states and 55 parameters), immunology \cite{nelson} (8 states, 24 parameters), metabolic pathways \cite{Engl} (8 states, 35 parameters) and HIV progression \cite{BDHKR,jones}
(8 \& 6 states, 11 \& 8 parameters, respectively). Fortunately, there is a growing recent effort among scientists to develop quantitative methods based on sensitivity, information matrices and other statistical constructs (see for example \cite{BDE,BDEK,BEG,burth,ac09,fink,fink2,wu}) to aid in identification or parameter estimation formulations. We discuss here one approach using sensitivity matrices and asymptotic standard errors as a basis for our developments. To illustrate our discussions, we will use a recently developed in-host model for HIV dynamics which has been successfully validated with clinical data and used for prediction \cite{adams07,BDHKR}.
The topic of system and parameter identifiability is actually an old one. In the
context of parameter determination from system observations or
output it is at least forty years old and has received much attention in
the peak years of linear system and control theory in the
investigation of observability, controllability and detectability
\cite{AE,bellams,BeKa,Eykhoff,GW,Kalman,MehraLain,reid77,SageMelsa}.
These early investigations and results were focused primarily on
engineering applications, although much interest in other areas (e.g.,
oceanography, biology) has prompted more recent inquiries for both
linear and nonlinear dynamical systems
\cite{anh06,BS,cob80,evans05,holm82,navon,white01,wu,xia03,yue08}.
\subsection{A Mathematical Model for HIV Progression with Treatment Interruption}\label{modelsection}
We summarize and use as an illustrative example one of the many dynamic models for HIV progression found in an extensive literature (e.g., see \cite{adamsthesis,brian1,brian2,adams07,BDHKR,Bon,callaway,NowakBangham,PerelsonReview,WodarzNowak} and the many references therein). For our example model, the dynamics of in-host HIV are described by the interactions between uninfected and infected type 1 target cells ($T_1$ and $T_1^*$) (CD4$^{+}$ T-cells), uninfected and infected type 2 target cells ($T_2$ and $T_2^*$) (such as macrophages or memory cells, etc.), infectious free virus $V_I$, and immune response $E$ (cytotoxic T-lymphocytes CD8$^+$) to the infection. This model, which was developed and studied in \cite{adamsthesis,adams07} and later extended in subsequent efforts (e.g., see \cite{BDHKR}), is essentially one suggested in
\cite{callaway}, but includes an immune response compartment and dynamics as in
\cite{Bon}. The model equations are given by
\begin{equation}
\begin{array}{l}
\dot{T}_1 = \lambda_1 - d_1 T_1 - \left(1 - \bar{\epsilon}_1 (t)\right) k_1 {V_I} T_1 \\
\dot{T}_2 = \lambda_2 - d_2 T_2 - (1- f \bar{\epsilon}_1 (t)) k_2 {V_I} T_2 \\
\dot{T}_1^* = (1 - \bar{\epsilon}_1 (t) )k_1 {V_I} T_1 - \delta {T_1^*} - m_1 {E} {T_1^*} \\
\dot{T}_2^* = (1 - f \bar{\epsilon}_1 (t))k_2 {V_I} T_2 - \delta {T_2^*} - m_2 {E} {T_2^*} \\
\dot{V}_I = (1 - \bar{\epsilon}_2 (t)) 10^3 N_T \delta ({T_1^*} + {T_2^*}) - c {V_I} \\
\hspace{0.5in} - (1 - \bar{\epsilon}_1 (t)) \rho_1 10^3 k_1 T_1 V_I - (1 - f\bar{\epsilon}_1 (t)) \rho_2 10^3 k_2 T_2 V_I \\
\dot{E} = \lambda_E + \frac{b_E ({T_1^*} + {T_2^*})}{({T_1^*} + {T_2^*}) + K_b}{E} - \frac{d_E ({T_1^*} + {T_2^*})}{({T_1^*} + {T_2^*}) + K_d}{E} - \delta_E {E}, %
\label{EQN_E_dynamics}
\end{array}
\end{equation}
together with an initial condition vector $\left( T_1(0), T_1^*(0), T_2(0), T_2^*(0), V_I(0), E(0) \right)^T.$
The differences in infection rates and treatment
efficacy help create a low, but non-zero, infected cell steady
state for $T_2^*$, which is compatible with the idea that
macrophages or memory cells may be an important source of virus after T-cell
depletion. The populations of uninfected target cells $T_1$ and
$T_2$ may have different source rates $\lambda_i$ and natural
death rates $d_i$. The time-dependent treatment factors
$\bar{\epsilon}_1(t) = \epsilon_1 u(t)$ and $\bar{\epsilon}_2(t) =
\epsilon_2 u(t)$ represent the effective treatment impact of a reverse transcriptase
inhibitor (RTI) (that blocks new infections) and a protease inhibitor (PI) (which causes
infected cells to produce non-infectious virus), respectively. The RTI is potentially more
effective in population 1 ($T_1, T_1^*$) than in population 2
($T_2, T_2^*$), where the efficacy is $f \bar{\epsilon}_1$, with
$f \in [0,1]$. The relative effectiveness of RTIs is
modeled by $\epsilon_1$ and that of PIs by $\epsilon_2$,
while the time-dependent treatment function $0 \leq u(t) \leq 1$
represents the drug therapy level, with $u(t) = 0$ for fully off and
$u(t) = 1$, for fully on. Although HIV treatment is nearly always administered as combination
therapy, the model allows the possibility of monotherapy, even
for a limited period of time, implemented by
considering separate treatment functions $u_1(t), u_2(t)$ in the treatment factors.
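As an illustration, the right-hand side of the model can be coded directly; we include the factors $\rho_1$, $\rho_2$ that appear in the log-transformed system below, and all numerical values here are placeholders (orders of magnitude only), not fitted values.

```python
def hiv_rhs(t, x, p, u1=0.0, u2=0.0):
    # Right-hand side of the HIV model; u1, u2 in [0,1] are the RTI and
    # PI treatment functions.  State: (T1, T2, T1s, T2s, VI, E).
    T1, T2, T1s, T2s, VI, E = x
    e1 = p["eps1"] * u1                 # effective RTI efficacy
    e2 = p["eps2"] * u2                 # effective PI efficacy
    dT1 = p["lam1"] - p["d1"] * T1 - (1 - e1) * p["k1"] * VI * T1
    dT2 = p["lam2"] - p["d2"] * T2 - (1 - p["f"] * e1) * p["k2"] * VI * T2
    dT1s = (1 - e1) * p["k1"] * VI * T1 - p["delta"] * T1s - p["m1"] * E * T1s
    dT2s = (1 - p["f"] * e1) * p["k2"] * VI * T2 - p["delta"] * T2s - p["m2"] * E * T2s
    dVI = ((1 - e2) * 1e3 * p["NT"] * p["delta"] * (T1s + T2s) - p["c"] * VI
           - (1 - e1) * p["rho1"] * 1e3 * p["k1"] * T1 * VI
           - (1 - p["f"] * e1) * p["rho2"] * 1e3 * p["k2"] * T2 * VI)
    dE = (p["lamE"]
          + p["bE"] * (T1s + T2s) / ((T1s + T2s) + p["Kb"]) * E
          - p["dE"] * (T1s + T2s) / ((T1s + T2s) + p["Kd"]) * E
          - p["deltaE"] * E)
    return [dT1, dT2, dT1s, dT2s, dVI, dE]

# Placeholder parameter values -- orders of magnitude only, NOT fitted.
pars = dict(lam1=10.0, d1=0.01, eps1=0.7, k1=8e-7, lam2=0.03, d2=0.01,
            f=0.34, k2=1e-4, delta=0.7, m1=1e-2, m2=1e-2, eps2=0.3,
            NT=100.0, c=13.0, rho1=1.0, rho2=1.0, lamE=1.0, bE=0.3,
            Kb=100.0, dE=0.25, Kd=500.0, deltaE=0.1)
rhs0 = hiv_rhs(0.0, [1e3, 3.0, 1e-4, 1e-4, 1.0, 10.0], pars)
```

Such a function can be passed to any standard ODE integrator to simulate treatment scenarios.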
As in \cite{adamsthesis,adams07}, for our numerical investigations we consider a log-transformed and reduced version of the model. This transformation is frequently used in the HIV modeling literature because of the large differences in orders of magnitude in state values in the model and the data and to guarantee non-negative state values as well as because of certain probabilistic considerations (for further discussions see \cite{adams07}). This results in the nonlinear system of differential equations
\begin{eqnarray}
\frac{d x_1}{dt}&=& \frac{10^{-x_1}}{\ln(10)}\left( \lambda_1-d_1 10^{x_1}-(1-\bar\varepsilon_1(t))k_1 10^{x_5} 10^{x_1}\right) \label{x1eqn}\\
\frac{d x_2}{dt}&=&\frac{10^{-x_2}}{\ln(10)}\left( (1-\bar\varepsilon_1(t))k_110^{x_5}10^{x_1}-\delta 10^{x_2}-m_1 10^{x_6}10^{x_2}\right)\\
\frac{d x_3}{dt}&=& \frac{10^{-x_3}}{\ln(10)}\left(\lambda_2 -d_210^{x_3}-(1-f\bar\varepsilon_1(t))k_2 10^{x_5}10^{x_3}\right)\\
\frac{d x_4}{dt}&=&\frac{10^{-x_4}}{\ln(10)}\left((1-f\bar\varepsilon_1(t))k_210^{x_5}10^{x_3} -\delta 10^{x_4} -m_2 10^{x_6}10^{x_4}\right)\\
\nonumber \\
\frac{d x_5}{dt}&=&\frac{10^{-x_5}}{\ln(10)} ( (1-\bar\varepsilon_2(t))10^3N_T\delta(10^{x_2}+10^{x_4})-c10^{x_5}- \nonumber\\
&& \quad\quad\quad \quad(1-\bar\varepsilon_1(t))\rho_1 10^3 k_1 10^{x_1}10^{x_5} -(1-f\bar\varepsilon_1(t))\rho_2 10^3k_210^{x_3}10^{x_5})\\
\frac{d x_6}{dt}&=&\frac{10^{-x_6}}{\ln(10)}\left(\lambda_E+\frac{b_E(10^{x_2}+10^{x_4})}{(10^{x_2}+10^{x_4})+K_b}10^{x_6}
-\frac{d_E(10^{x_2}+10^{x_4})}{(10^{x_2}+10^{x_4})+K_d}10^{x_6}-\delta_E 10^{x_6}\right) \label{x6eqn},
\end{eqnarray}
where the changes of variables are defined by
\begin{equation}
T_1=10^{x_1},\quad T_1^*=10^{x_2},\quad T_2=10^{x_3},\quad T_2^*=10^{x_4},\quad V_I=10^{x_5},\quad E=10^{x_6}.
\end{equation}
We note that this model contains six state variables and twenty-two (in general, unknown) system parameters given by
\[ \theta_2= (\lambda_1,d_1,\epsilon_1,k_1,\lambda_2,d_2,f,k_2,\delta,m_1,m_2,
\epsilon_2,N_T,c,\rho_1,\rho_2,\lambda_E,b_E,K_b,d_E,K_d,\delta_E).
\]
A list of the model parameters along with units of these model parameters are given below in Table \ref{tabpars}.
The initial conditions for equations (\ref{x1eqn})--(\ref{x6eqn}) are denoted by $x_i(t_0)=x_i^0$, for $i=1,\dots,6$. We will also consider the initial conditions as unknowns and we use the following notation for the vector of parameters and initial conditions:
\[
\theta=(\theta_1,\theta_2)\]
where
\[
\theta_1=(x_1^0,x_2^0,x_3^0,x_4^0,x_5^0,x_6^0)^T.
\]
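The chain rule behind the log transformation, $\dot{x}_i = 10^{-x_i}\,\dot{T}/\ln(10)$ for $T = 10^{x_i}$, can be verified numerically on the $T_1$ equation (untreated case, with placeholder values):

```python
import math

# Chain-rule check for the log transform T = 10^x:
# dx/dt = (10^(-x)/ln 10) * dT/dt, so 10^x * ln(10) * dx/dt = dT/dt.
lam1, d1, k1 = 10.0, 0.01, 8e-7       # placeholder values
T1, VI = 500.0, 2.0e4
dT1 = lam1 - d1 * T1 - k1 * VI * T1   # untransformed T1 equation
x1 = math.log10(T1)
dx1 = 10.0 ** (-x1) / math.log(10.0) * (
    lam1 - d1 * 10.0 ** x1 - k1 * VI * 10.0 ** x1)
```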
\begin{table}[h]
\caption{Parameters for the HIV model.}
\begin{center}
\begin{tabular}{ccl} \hline\hline
Parameter & Units & Description \\ \hline\hline
$\lambda_1$&$\frac{\mbox{cells}}{\mbox{ml} \ \mbox{day}}$& Target cell type 1 production rate\\
$d_1$&$\frac{1}{\mbox{day}}$& Target cell type 1 death rate\\
$\epsilon_1$&---& Target cell type 1 treatment efficacy\\
$k_1$& $\frac{\mbox{ml}}{\mbox{virions} \ \mbox{day}} $& Target cell type 1 infection rate\\
$\lambda_2$&$\frac{\mbox{cells}}{\mbox{ml} \ \mbox{day}}$& Target cell type 2 production rate\\
$d_2$&$\frac{1}{\mbox{day}}$& Target cell type 2 death rate\\
$f$&---& Treatment efficacy reduction in target cell type 2\\
$k_2$&$\frac{\mbox{ml}}{\mbox{virions} \ \mbox{day}}$& Target cell type 2 infection rate\\
$\delta$&$\frac{1}{\mbox{day}}$& Infected cell death rate\\
$m_1$&$\frac{\mbox{ml}}{\mbox{cells} \ \mbox{day}}$& Type 1 immune-induced clearance rate\\
$m_2$&$\frac{\mbox{ml}}{\mbox{cells} \ \mbox{day}}$& Type 2 immune-induced clearance rate\\
$\epsilon_2$&---& Target cell type 2 treatment efficacy\\
$N_T$&$\frac{\mbox{virions}}{\mbox{cell}}$& Virions produced per infected cell\\
$c$&$\frac{1}{\mbox{day}}$& Virus natural death rate\\
$\rho_1$&$\frac{\mbox{virions}}{\mbox{cell}}$& Average number of virions infecting a type 1 cell\\
$\rho_2$&$\frac{\mbox{virions}}{\mbox{cell}}$& Average number of virions infecting a type 2 cell\\
$\lambda_E$&$\frac{\mbox{cells}}{\mbox{ml} \ \mbox{day}}$& Immune effector production rate\\
$b_E$&$\frac{1}{\mbox{day}}$& Maximum birth rate for immune effectors\\
$K_b$&$\frac{\mbox{cells}}{\mbox{ml}}$& Saturation constant for immune effector birth\\
$d_E$&$\frac{1}{\mbox{day}}$& Maximum death rate for immune effectors\\
$K_d$&$\frac{\mbox{cells}}{\mbox{ml}}$& Saturation constant for immune effector death\\
$\delta_E$&$\frac{1}{\mbox{day}}$& Natural death rate for immune effectors\\
\end{tabular}
\end{center}
\label{tabpars}
\end{table}
\clearpage
As reported in \cite{adamsthesis,adams07}, data to be used with this model in inverse or parameter estimation problems typically consisted of monthly observations over a 3 year period (so approximately 36 longitudinal data points per patient) for the states $T_1+T_1^*$ and $V$. While this inverse problem is relatively ``small'' compared to many of those found in the literature, it still represents a nontrivial estimation challenge and is more than sufficient to illustrate the ideas and methodology we discuss in this presentation. Other difficult aspects (censored data requiring use of the Expectation Maximization algorithm as well as use of residual plots in attempts to validate the correctness of choice of corresponding statistical models introduced and discussed in the next section) of such inverse problems are discussed in the review chapter \cite{BDSS} and will not be pursued here.
\section{Statistical Models for the Observation Process}
One has errors in any data collection process and the presence of this error is reflected in any parameter estimation results one might obtain. To understand and treat this, one usually specifies a {\em statistical model} for the observation process in addition to the {\em mathematical model} representing the dynamics. To illustrate ideas here we use ordinary least squares (OLS) consistent with an error model for absolute error in the observations. For a discussion of other frameworks (maximum likelihood in the case of known error distributions, generalized least squares appropriate for relative error models) see \cite{BDSS}.
Here the OLS estimation is based on the mathematical model for in-host HIV dynamics described above. The observation process is formulated assuming there exists a vector $\theta_0\in\mathbb{R}^p$,
referred to as the {\em true parameter vector}, for which the model describes the log-scaled total number of
CD4$^{+}$ T-cells (uninfected and infected) exactly. It is also reasonably assumed that each of $n$ longitudinal observations $\{Y_i\}_{i=1}^{n}$
is affected by random deviations from the true underlying process. That is, if the mathematical model output is denoted by
\begin{equation}
z(t_i;\theta_0)=\log_{10}\left(10^{x_1(t_i;\theta_0)}+10^{x_2(t_i;\theta_0)}\right),
\end{equation}
then the statistical model for the scalar observation process is
\begin{equation}
\begin{array}{lr}
Y_i=z(t_i;\theta_0)+\mathcal{E}_i& \mbox{for } i=1,\dots,n.
\end{array}
\end{equation}
The errors $\mathcal{E}_i$ are assumed to be random variables satisfying the following assumptions:
\begin{itemize}
\item[(i)] the errors $\mathcal{E}_i$ have mean zero, $E[\mathcal{E}_i]=0$;
\item[(ii)] the errors $\mathcal{E}_i$ have finite common variance, $\mbox{var}(\mathcal{E}_i)=\sigma_0^2<\infty$;
\item[(iii)] the errors $\mathcal{E}_i$ are independent (i.e., $\mbox{cov}(\mathcal{E}_i,\mathcal{E}_j)=0$ whenever $i\neq j$) and identically distributed.
\end{itemize}
Assumptions (i)--(iii) imply that the mean of the observation is equal to the model output, $E[Y_i]=z(t_i;\theta_0)$,
and the variance in the observations is constant in time, $\mbox{var}(Y_i)=\sigma_0^2$.
The estimator $\theta_{OLS}=\theta_{OLS}^n$ minimizes
\begin{equation}\label{olscotfun}
\sum_{i=1}^{n}[Y_i-z(t_i;\theta)]^2.
\end{equation}
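In code, the OLS criterion is simply a sum of squared residuals; the toy model below (a two-parameter exponential decay) is ours, chosen only to illustrate the interface.

```python
import math

def ols_cost(theta, times, data, model):
    # Ordinary least squares cost: sum_i [y_i - z(t_i; theta)]^2.
    return sum((y - model(t, theta)) ** 2 for t, y in zip(times, data))

# Toy model z(t; theta) = theta_1 * exp(-theta_2 * t), for illustration.
def toy(t, th):
    return th[0] * math.exp(-th[1] * t)

times = [0.0, 1.0, 2.0, 3.0]
data = [toy(t, (2.0, 0.5)) for t in times]   # noise-free synthetic data
```

Minimizing this cost over $\theta$ (with any standard optimizer) yields the estimate $\hat\theta_{OLS}$.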
From \cite{SeWi} we find that under a number of regularity and sampling conditions, as $n\rightarrow\infty$,
$\theta_{OLS}$ is approximately distributed according to a multivariate normal distribution, i.e.,
\begin{equation}
\theta_{OLS}^n\sim\mathcal{N}_p\left(\theta_0,\Sigma_0^n\right),
\end{equation}
where $\Sigma_0^n=\sigma^2_0[n\Omega_0]^{-1}
\in\mathbb{R}^{p\times p}$ and
\begin{equation}
\Omega_0=\lim_{n\rightarrow\infty}\frac{1}{n} \chi^{n}(\theta_0)^{T}\chi^n(\theta_0).
\end{equation}
Asymptotic theory requires existence of this limit and non-singularity of $\Omega_0$. The $p\times p$ matrix $\Sigma_0^n$
is the covariance matrix, and the $n\times p$ matrix $\chi^n(\theta_0)$ is known as the {\em sensitivity matrix} of the system, and is defined as
\begin{eqnarray} \label{defchimtrx}
\chi_{ij}^n(\theta_0)=\left.\frac{\partial z(t_i;\theta)}{\partial \theta_j} \right|_{\theta=\theta_0} && 1\leq i\leq n, \ 1\leq j\leq p.
\end{eqnarray}
If $g\in\mathbb{R}^6$ denotes the right-side of Equations (\ref{x1eqn})--(\ref{x6eqn}), then
numerical values of $\chi^n(\theta)$ are readily calculated, for a particular $\theta$, by solving
\begin{eqnarray}\label{seneqns}
\frac{dx}{dt}&=&g(t,x(t;\theta);\theta)\\
\frac{d}{dt}\frac{\partial x}{\partial \theta}&=&\frac{\partial g}{\partial x}\frac{\partial x}{\partial \theta}+\frac{\partial g}{\partial \theta},
\end{eqnarray}
from $t=t_0$ to $t=t_n$. One could alternatively solve for the sensitivity matrix using difference quotients (usually less accurately) or by using automatic differentiation software (for additional details on sensitivity matrix calculations see \cite{BDSS,BDE,ac08,ac09,eslami,finkAD}).
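A minimal sketch of the difference-quotient alternative mentioned above, applied to a toy scalar model with known derivatives (forward differences; as noted, usually less accurate than solving the sensitivity equations):

```python
import math

def sensitivity_fd(model, times, theta, h=1e-6):
    # Forward-difference approximation of the n x p sensitivity matrix
    # chi[i][j] = d z(t_i; theta) / d theta_j.
    n, p = len(times), len(theta)
    chi = [[0.0] * p for _ in range(n)]
    for j in range(p):
        th = list(theta)
        th[j] += h
        for i, t in enumerate(times):
            chi[i][j] = (model(t, th) - model(t, theta)) / h
    return chi

# Toy model with known derivatives: z = a * exp(-b * t), so
# dz/da = exp(-b*t) and dz/db = -a * t * exp(-b*t).
def toy(t, th):
    return th[0] * math.exp(-th[1] * t)

chi = sensitivity_fd(toy, [0.0, 1.0, 2.0], [2.0, 0.5])
```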
The estimate $\hat\theta_{OLS}=\hat\theta_{OLS}^n$ is a realization of the estimator $\theta_{OLS}$, and is calculated
using a realization $\{y_i\}_{i=1}^n$ of the observation process $\{Y_i\}_{i=1}^n$, while minimizing (\ref{olscotfun}) over $\theta$.
Moreover, the estimate $\hat\theta_{OLS}$ is used in the calculation
of the sampling distribution for the parameters. The error variance $\sigma_0^2$ is approximated by
\begin{equation}
\hat\sigma^2_{OLS}=\frac{1}{n-p} \sum_{i=1}^{n}[y_i-z(t_i;\hat\theta_{OLS})]^2,
\end{equation}
while the covariance matrix $\Sigma_0^n$ is approximated by
\begin{equation}\label{covmtrx}
\hat\Sigma_{OLS}^n= \hat\sigma^2_{OLS} \left[\chi(\hat\theta_{OLS}^n)^T\chi(\hat\theta_{OLS}^n)\right]^{-1}.
\end{equation}
As discussed in \cite{BDSS,DG,SeWi} an approximation to the sampling distribution of the estimator is given by
\begin{equation}
\theta_{OLS}=\theta_{OLS}^n\sim \mathcal{N}_{p}(\theta_{0},\Sigma_{0}^n)\approx
\mathcal{N}_{p}(\hat\theta_{OLS}^n,\hat\Sigma_{OLS}^n).
\end{equation}
Asymptotic standard errors can be used to quantify uncertainty in the estimation, and they are calculated by taking the square roots of the diagonal
elements of the covariance matrix $\hat\Sigma^n_{OLS}$, i.e.,
\begin{equation}\label{seeqn}
SE_k(\hat\theta_{OLS}^n)=\sqrt{(\hat\Sigma_{OLS}^n)_{kk}}, \quad
k=1,\dots,p.
\end{equation}
\section{Subset Selection Algorithm}
The focus of our presentation here is how one chooses {\em a priori} (i.e., {\em before} any inverse problem calculations are carried out) which parameters and initial conditions can be readily estimated with a typical longitudinal data set. That is, from the parameters $\theta_2$ and initial conditions $\theta_1$, which components of $\theta=(\theta_1,\theta_2)$ yield a subset of readily identifiable parameters and initial conditions?
We illustrate an algorithm, developed recently in \cite{ac09}, to select parameter vectors that can be estimated from a given data set
using an ordinary least squares inverse problem formulation (similar ideas apply if one is using a relative error statistical model and generalized least squares formulations). The algorithm searches all possible
parameter vectors and selects some of them based on two main criteria: (i) full rank of the sensitivity matrix, and (ii) uncertainty quantification
by means of asymptotic standard errors. Prior knowledge of a nominal set of values for all parameters along with the observation times for data (but not the values of the observations) will be required for our algorithm. Before describing the algorithm in detail and illustrating its use, we provide some motivation underlying the steps which involve the sensitivity matrix $\chi$ of \eqref{defchimtrx} and the Fisher Information Matrix $\mathcal{F}=\chi^T\chi$.
Ordinary least squares problems involve choosing $\Theta=\theta_{OLS}$ to minimize the difference between observations $Y$ and model output $z(\theta)$, i.e., minimize $|Y-z(\theta)|$ (here we use $|\cdot|$ for the Euclidean norm in $\mathbb{R}^n$). Replacing the model with a first-order linearization about $\theta_0$, we then wish to minimize
\[
|Y-z(\theta_0)-\nabla_{\theta}z(\theta_0)[\theta-\theta_0]|.
\]
If we use the statistical model $Y=z(\theta_0)+\mathcal{E}$ and let $\delta \theta =\theta-\theta_0$, we thus wish to minimize
\[
|\mathcal{E}-\chi(\theta_0)\delta\theta|,
\]
where $\chi=\nabla_{\theta}z$ is the $n\times p$ sensitivity matrix defined in \eqref{defchimtrx}. This is a standard optimization problem \cite[Section 6.11]{Lu} whose solution can be given using the pseudo inverse $\chi^{\dag}$ defined in terms of minimal norm solutions of the optimization problem and satisfying $\chi^{\dag}=(\chi^T\chi)^{\dag}\chi^T=\mathcal{F}^{\dag}\chi^T$. The solution is
\[
\delta \Theta=\chi^{\dag}\mathcal{E}
\]
or
\[
\Theta= \theta_0 + \chi^{\dag}\mathcal{E} = \theta_0 + \mathcal{F}^{\dag}\chi^T\mathcal{E}.
\]
If $\mathcal{F}$ is invertible, then the solution (to first order) of the OLS problem is
\begin{equation} \label{sol}
\Theta=\theta_0+ \mathcal{F}^{-1}\chi^T\mathcal{E}.
\end{equation}
From these calculations, we see that the rank of $\chi$ and the conditioning (or ill-conditioning) of $\mathcal{F}$ play a significant role in solving OLS inverse problems. Observe that the error (or noise) $\mathcal{E}$ in the data will in general be amplified as the ill-conditioning of $\mathcal{F}$ increases. We further note that the $n\times p$ sensitivity matrix $\chi$ is of full rank $p$ if and only if the $p\times p$ Fisher matrix $\mathcal{F}$ has rank $p$, or equivalently, is nonsingular. These underlying considerations have motivated a number of efforts (e.g., see \cite{BDE,BDEK,BEG}) on understanding the conditioning of the Fisher matrix as a function of the number $n$ and longitudinal locations $\{t_i\}^n_{i=1}$ of data points as a key indicator for well-formulated inverse problems and as a tool in optimal design, especially with respect to computation of uncertainty (standard errors, confidence intervals) in parameter estimates.
Thus, we use an algorithm which first seeks sub-vectors of the parameter vector $\theta$ for which the corresponding sensitivity matrix has full rank and then use the normalized diagonals of the covariance matrix (the coefficients of variation) to rank the parameters among the resulting sub-vectors according to their potential for reliability in estimation.
In view of the comments above (which are very {\em local} in nature--both the sensitivity matrix and the Fisher Information Matrix are local quantities), one should be pessimistic about using these quantities to obtain any {\em nonlocal} selection methods or criteria for estimation. Indeed, for nonlinear complex systems, it is easy to argue that questions related to some type of global parameter identifiability are not fruitful questions to be pursuing.
As we have stated above, to apply the parameter subset selection algorithm we require prior knowledge of nominal variance and nominal
parameter values. These nominal values of $\sigma_0$ and $\theta_0$ are needed to calculate the sensitivity matrix, the Fisher matrix and the corresponding covariance matrix defined in \eqref{covmtrx}. For our illustration here, we use the variance and parameter estimates obtained in \cite{adamsthesis,adams07} for Patient \# 4 as nominal values. In problems for which no prior estimation has been carried out, one must use knowledge of the observation process error and some knowledge of viable parameter values that might be reasonable with the model under investigation.
More precisely, here we assume the error variance is $\sigma_0^2= 1.100\times10^{-1}$, and assume the following nominal parameter values (for description and units see Table \ref{tabpars}):
\begin{align*}
x_1^0&=\log_{10}(1.202\times 10^{3}), & x_2^0&=\log_{10}(6.165\times 10^{1}), & x_3^0&=\log_{10}(1.755\times 10^{1}),\\
x_4^0&=\log_{10}(6.096\times10^{-1}), & x_5^0&=\log_{10}(9.964\times 10^{5}), & x_6^0&=\log_{10}(1.883\times 10^{-1}),\\
\lambda_1&=4.633, & d_1&=4.533\times10^{-3}, & \epsilon_1&= 6.017\times 10^{-1},\\
k_1&=1.976\times10^{-6}, & \lambda_2&=1.001\times10^{-1}, & d_2&=2.211\times10^{-2},\\
f&=5.3915\times10^{-1}, & k_2&=5.529\times10^{-4}, & \delta&=1.865\times10^{-1},\\
m_1&=2.439\times10^{-2}, & m_2&=1.3099\times10^{-2}, & \epsilon_2&=5.043\times10^{-1},\\
N_T&=1.904\times10^{1}, & c&= 1.936\times10^{1}, & \rho_1&=1.000,\\
\rho_2&=1.000, & \lambda_E&= 9.909\times10^{-3}, & b_E&=9.785\times10^{-2},\\
K_b&=3.909\times10^{-1}, & d_E&=1.021\times10^{-1}, & K_d&= 8.379\times10^{-1},\\
\delta_E&=7.030\times10^{-2}.
\end{align*}
In Figure \ref{pat4logdata} we depict the log-scaled longitudinal observations (data) on the number of CD4$^{+}$ T-cells, $\{y_i\}$, and the model
output evaluated at the estimate (the nominal parameter values described above), $z(t_i;\hat\theta_{OLS})$, for Patient \#4 in \cite{adamsthesis,adams07}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=4.55in,height=2.5in]{patient4_obs_log}
\end{center}
\caption{Log-scaled data $\{y_i\}$ of Patient 4 CD4$^{+}$ T-cells (represented as `x'), and model output $z(t;\hat\theta_{OLS})$
(represented by the solid curve) evaluated at
parameter estimates obtained in \cite{adamsthesis,adams07}.}
\label{pat4logdata}
\end{figure}
Given the vector
\[
\theta=(\theta_1,\theta_2)\in\mathbb{R}^{28},
\]
for initial conditions plus system parameters,
we will consider sub-vectors, by partitioning into fixed and active (those to possibly be estimated) parameters. It is assumed
the following entries are always fixed at known values provided in \cite{adamsthesis,adams07}:
$x_3^0$, $x_4^0$, $x_6^0$, $\rho_1$, and $\rho_2$. In other words, we will calculate sub-vectors
from the $\mathbb{R}^{23}$ vector
\begin{equation}\label{q23}
q=(x_1^0,x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,k_1,\lambda_2,d_2,f,k_2,\delta,m_1,m_2,
\epsilon_2,N_T,c,\lambda_E,b_E,K_b,d_E,K_d,\delta_E ).
\end{equation}
For every fixed value of $p$, with $p=2,3,\dots,22$, we partition the parameters into $p$ active parameters and $23-p$ fixed parameters. For example, when $p=22$ there are twenty-three possible partitions; one of them is the following:
fix $x_1^0$ and consider
\[
(x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,k_1,\lambda_2,d_2,f,k_2,\delta,m_1,m_2,
\epsilon_2,N_T,c,\lambda_E,b_E,K_b,d_E,K_d,\delta_E )^T\in\mathbb{R}^{22},
\]
as a vector with active parameters. In the implementation of this subset selection algorithm, we enumerate all possible
vectors by using binary matrices with twenty-eight columns,
in which every row has zeros for entries that are fixed and ones for those that are active. In the example above, the binary row is (recall that $x_3^0$, $x_4^0$, $x_6^0$, $\rho_1$, and $\rho_2$ are fixed throughout)
\[
(0,1,0,0,1,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,1,1,1,1,1,1).
\]
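The enumeration of active/fixed partitions can be sketched as follows. The masks here range only over the 23 candidate parameters of \eqref{q23}; the paper's 28-column binary rows additionally carry zeros in the five always-fixed positions. The Python identifiers are illustrative stand-ins for the parameter names, not part of any original code:

```python
from itertools import combinations
from math import comb

# The 23 candidate parameters of equation (q23); x3^0, x4^0, x6^0,
# rho_1, and rho_2 are fixed throughout, so they are omitted here.
Q = ["x1_0", "x2_0", "x5_0", "lam1", "d1", "eps1", "k1", "lam2", "d2",
     "f", "k2", "delta", "m1", "m2", "eps2", "N_T", "c", "lamE", "b_E",
     "K_b", "d_E", "K_d", "deltaE"]

def active_masks(p):
    """Yield one 0/1 mask over the 23 candidates per choice of p active parameters."""
    for idx in combinations(range(len(Q)), p):
        mask = [0] * len(Q)
        for i in idx:
            mask[i] = 1
        yield mask

# For p = 22, fixing a single parameter gives exactly 23 partitions.
masks_p22 = list(active_masks(22))

# For p = 11 there are C(23, 11) = 1,352,078 sub-vectors -- roughly the
# "one million parameter vectors" examined in the next section.
count_p11 = comb(23, 11)
```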
For a fixed value of $p$ the set
\begin{equation}
\mathcal{S}_p=\{\theta\in\mathbb{R}^p|\ \theta \mbox{ is a sub-vector of } q\in\mathbb{R}^{23} \mbox{ defined in equation (\ref{q23})}\}
\end{equation}
collects all the possible active parameter vectors in $\mathbb{R}^p$.
We define the set
\begin{equation}\label{viableq}
\Theta_p=\{\theta|\ \theta\in \mathcal{S}_p \subset \mathbb{R}^{p},\ \mbox{rank}(\chi(\theta))=p\},
\end{equation}
where $\chi(\theta)$ denotes the $n\times p$ sensitivity matrix. By
construction, the elements of $\Theta_p$ are parameter vectors
that give sensitivity matrices with independent columns.
The next step in the selection procedure involves the
calculation of standard errors (uncertainty quantification) using
the asymptotic theory (see \eqref{seeqn}). For every
$\theta\in \Theta_p$, we define a vector of {\em coefficients of
variation} $\nu(\theta)\in \mathbb{R}^{p}$ such that for each
$i=1,\dots,p$,
\[
\nu_i(\theta)=\frac{\sqrt{(\Sigma(\theta))_{ii}}}{\theta_i},
\]
and
\[
\Sigma(\theta)=\sigma_0^2\left[\chi(\theta)^T\chi(\theta)\right]^{-1}\in\mathbb{R}^{p\times p}.
\]
The components of the vector $\nu(\theta)$ are the
ratios of each standard error for a parameter to the corresponding
nominal parameter value. These ratios are dimensionless numbers
warranting comparison even when parameters have considerably
different scales and units (e.g., $N_T$ is on the order of $10^1$, while $k_1$ is on the order of $10^{-6}$). We then define
the {\em selection score} as
\[
\alpha(\theta)=\left| \nu(\theta) \right|,
\]
where $|\cdot|$ is the norm in $\mathbb{R}^{p}$.
A selection score $\alpha(\theta)$ near zero indicates lower uncertainty
possibilities in the estimation, while large values of
$\alpha(\theta)$ suggest that one could expect to find substantial
uncertainty in at least some of the components of the estimates in any parameter estimation attempt.
We summarize the steps of the algorithm as follows:
\begin{enumerate}
\item{\bf All possible active vectors.} For a fixed value of $p=2,\dots,22$,
fix $23-p$ parameters to nominal values, and then
calculate the set $\mathcal{S}_p$, which collects all the possible active parameter vectors in $\mathbb{R}^p$:
\[
\mathcal{S}_p=\{\theta\in\mathbb{R}^p|\ \theta \mbox{ is a sub-vector of } q\in\mathbb{R}^{23} \mbox{ defined in equation (\ref{q23})}\}.
\]
\item {\bf Full rank test}.
Calculate the set $\Theta_p$ as follows
\[
\Theta_p=\{\theta|\ \theta\in \mathcal{S}_p \subset \mathbb{R}^{p},\ \mbox{rank}(\chi(\theta))=p\}.
\]
\item {\bf Standard error test.} For every $\theta\in \Theta_p$
calculate a vector of coefficients of variation $\nu(\theta)\in
\mathbb{R}^{p}$ by
\[
\nu_i(\theta)=\frac{\sqrt{(\Sigma(\theta))_{ii}}}{\theta_i},
\]
for $i=1,\dots,p$, and
\(
\Sigma(\theta)=\sigma_0^2\left[\chi(\theta)^T\chi(\theta)\right]^{-1}\in\mathbb{R}^{p\times p}.
\)
Calculate the selection score as
\(
\alpha(\theta)=\left|\nu(\theta) \right|.
\)
\end{enumerate}
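The three steps can be combined into a single routine. The following sketch uses synthetic data in place of the HIV-model sensitivities and is illustrative only; all names are hypothetical:

```python
import numpy as np
from itertools import combinations

def selection_scores(chi_full, theta_nom, sigma0, p):
    """Steps 1-3 of the algorithm for a generic (n x q) sensitivity matrix.

    Returns (alpha, index-tuple) pairs, sorted by selection score, for every
    p-parameter sub-vector whose sensitivity matrix has full rank p.
    """
    n, q = chi_full.shape
    ranked = []
    for idx in combinations(range(q), p):        # step 1: all active sub-vectors
        chi = chi_full[:, idx]
        if np.linalg.matrix_rank(chi) < p:       # step 2: full-rank test
            continue
        Sigma = sigma0 ** 2 * np.linalg.inv(chi.T @ chi)          # covariance
        nu = np.sqrt(np.diag(Sigma)) / theta_nom[list(idx)]       # coeff. of variation
        ranked.append((float(np.linalg.norm(nu)), idx))           # step 3: score
    ranked.sort()
    return ranked

# Toy data standing in for the HIV-model sensitivities (illustrative only).
rng = np.random.default_rng(1)
chi_full = rng.normal(size=(30, 5))
theta_nom = np.array([1.0, 2.0, 0.5, 10.0, 0.1])
ranked = selection_scores(chi_full, theta_nom, sigma0=0.3, p=3)
best_alpha, best_idx = ranked[0]
```

Sub-vectors are then compared by their score $\alpha$, with the smallest values indicating the most promising candidates for estimation.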
\section{Results and Discussion}
Results of the subset selection algorithm with the HIV model of Section \ref{modelsection} are given in Table \ref{tab5top}. Parameter vectors, condition numbers (ratio of largest to smallest singular value \cite{golvan}),
and values of the selection score are displayed for $p=11$. The third column of Table \ref{tab5top} displays selection score values from smallest (top) to largest (bottom). For
the sake of clarity we only display five out of one million parameter vectors chosen by the selection algorithm. The selection score values
range from $2.881\times10^{1}$ to $2.488\times10^{5}$ for the one million parameter vectors selected when $p=11$.
\begin{table}[h]
\caption{Parameter vectors obtained with subset selection algorithm for $p=11$. For each
parameter vector $\theta\in\Theta_p$ the sensitivity matrix condition number $\kappa(\chi(\theta))$,
and the selection score $\alpha(\theta)$ are displayed.}
\begin{center}
\begin{tabular}{|c|c|c|} \hline\hline
Parameter vector, $\theta$ & Condition number, $\kappa(\chi(\theta))$ & Selection score, $\alpha(\theta)$ \\ \hline
$(x_1^0,x_5^0,\lambda_1,d_1,\epsilon_1,\lambda_2,d_2,k_2,\delta,\epsilon_2,N_T)$&3.083$\times 10^{5}$&2.881$\times 10^{1}$\\ \hline
$(x_1^0,x_5^0,\lambda_1,d_1,\epsilon_1,\lambda_2,d_2,k_2,\delta,\epsilon_2,c)$&3.083$\times 10^{5}$&2.884$\times 10^{1}$\\ \hline
$(x_1^0,x_5^0,\lambda_1,d_1,\epsilon_1,k_1,\lambda_2,d_2,k_2,\delta,\epsilon_2)$&2.084$\times 10^{8}$&2.897$\times 10^{1}$\\ \hline
$(x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,\lambda_2,d_2,k_2,\delta,\epsilon_2,N_T)$&2.986$\times 10^{5}$&2.905$\times 10^{1}$\\ \hline
$(x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,\lambda_2,d_2,k_2,\delta,\epsilon_2,c)$&2.986$\times 10^{5}$&2.907$\times 10^{1}$\\ \hline
\end{tabular}
\end{center}
\label{tab5top}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=5in,height=3in]{sel_score_vs_p_a}
\includegraphics[width=5in,height=3in]{sel_score_vs_p_b}
\end{center}
\caption{(a) Selection score versus the number of parameters $p$. (b) Natural logarithm of selection score (circles) and regression line
versus number of parameters $p$. For each fixed value of $p$, the smallest 100 values of the selection score
are displayed.}
\label{selvsp}
\end{figure}
In \cite{adamsthesis,adams07}, the authors estimate the parameter vector
\[
\theta=(x_1^0,x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,k_1,\epsilon_2,N_T,c,b_E) \in\mathbb{R}^{11}.
\]
The selection algorithm chooses most of these parameters. For instance, the sub-vector $(x_5^0,\lambda_1,d_1,\epsilon_1,\epsilon_2)$
appears in every one of the top five parameter vectors displayed in Table \ref{tab5top}. However, the sub-vector $(x_1^0,x_2^0,x_5^0)$ along with $b_E$
are never chosen among the top five parameter vectors. Even so, use of the subset selection algorithm discussed here (had it been available) might have proved valuable in the efforts reported in \cite{adamsthesis,adams07}.
In Figure \ref{selvsp}(a) we depict the selection score as a function of the number of parameters. For each fixed value of $p$, one hundred values
are displayed, corresponding to the parameter vectors with the smallest one hundred selection score values. Figure \ref{selvsp}(a) suggests that
parameter vectors with thirteen or more parameters ($13\leq p\leq 18$) might be expected to have large uncertainty when estimated
from observations, because the selection score ranges from $2.263\times 10^{2}$ to $1.090\times 10^{4}$. Figure \ref{selvsp}(b) is a semilog
plot of Figure \ref{selvsp}(a), i.e., it displays the natural logarithm of the selection score as a function of the number of parameters. Figure \ref{selvsp}(b)
also depicts the regression line, which fits the natural logarithm of the selection score. From this linear regression we conclude the selection score $\alpha$
grows exponentially with the number of parameters to be estimated. More precisely, for $3\leq p\leq18$, we find
\begin{equation}\label{alexp}
\alpha\equiv\alpha(p)=Ce^{0.75p},
\end{equation}
where $C=8.52\times10^{-4}$.
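The fit behind \eqref{alexp} amounts to linear regression on the logarithm of the selection score. A sketch with synthetic data generated from the reported law (the actual scores come from the subset-selection runs, not from this snippet) is:

```python
import numpy as np

# Synthetic (p, alpha) pairs following the reported law alpha = C e^{0.75 p},
# with small multiplicative noise; purely illustrative.
rng = np.random.default_rng(2)
p_vals = np.arange(3, 19)
alpha_vals = 8.52e-4 * np.exp(0.75 * p_vals
                              + rng.normal(scale=0.05, size=p_vals.size))

# A linear fit to ln(alpha) versus p recovers the exponential growth rate
# (slope) and the prefactor C (exponential of the intercept).
slope, intercept = np.polyfit(p_vals, np.log(alpha_vals), 1)
C_fit = float(np.exp(intercept))
```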
\begin{figure}
\begin{center}
\includegraphics[width=5in,height=2.5in]{con_sco_p5p18}
\end{center}
\caption{Selection score $\alpha(\theta)$ versus condition number $\kappa(\chi(\theta))$, where $\theta\in\mathbb{R}^p$,
for $p=5$ (circles) and $p=18$ (triangles). Both axes are in logarithmic scale. The smallest hundred values of the selection score
are depicted for each value of $p$.}
\label{selvscondp5p18}
\end{figure}
In Figure \ref{selvscondp5p18} we graph (in logarithmic scales) the smallest one hundred selection score values $\alpha(\theta)$ versus
the sensitivity matrix condition number $\kappa(\chi(\theta))$, with $\theta\in\mathbb{R}^p$, for $p=5$ (circles) and
$p=18$ (triangles). The condition number $\kappa(\chi(\theta))$ is defined as the ratio of the largest to smallest singular value \cite{golvan} of the sensitivity matrix
$\chi(\theta)$. It is clear from Figure \ref{selvscondp5p18} that the selection score drops dramatically from $p=18$ to $p=5$, which
is suggestive of a reduction in uncertainty quantification for these scenarios. However, the conditioning of the sensitivity matrix does not exhibit
this decaying feature. Some values of $\kappa(\chi(\theta))$ are within the same ballpark, $10^{7}\leq\kappa(\chi(\theta))\leq10^{8}$,
for both $p=5$ and $p=18$, while other $\kappa(\chi(\theta))$ values for $p=5$ range considerably, from $7.768\times10^{1}$ to $5.486\times10^{6}$.
In Table \ref{cvp5p18} we examine the effect that removing parameters from an estimation has on uncertainty quantification. The coefficient of
variation (CV) is defined as the ratio of the standard error to the estimate for each parameter. In Table \ref{cvp5p18} three cases are considered:
$p=18$, where $\theta=(x_1^0,x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,d_2,f,k_2,\delta,m_1,m_2,\epsilon_2,N_T,b_{E},K_{b},d_{E},K_{d})$;
$p=5$, where $\theta=(x_1^0,\lambda_1,\delta,\epsilon_2,N_{T})$; and $p=5$, where $\theta=(x_2^0,b_{E},K_{b},d_{E},K_{d})$.
There are consistent improvements in uncertainty quantification,
with CV dropping as much as four orders of magnitude. For instance, by comparing the second and third columns of
Table \ref{cvp5p18}, one sees that the reduction of CV for $\lambda_1$, from $8.430\times 10^{-1}$ to
$1.150\times 10^{-1}$, means the standard error is 84\% of the estimate for $p=18$, while it reduces to 11\% of the estimate when $p=5$.
For the parameter $N_{T}$, it is observed that the standard error reduces from being 40000\% to 10\% of the estimate. A similar
remarkable improvement is also seen for $x_1^0$, with a standard error equal to 50000\% of the estimate for $p=18$, dropping to
4\% of the estimate for $p=5$. The improvement in uncertainty quantification is related to going from the
upper right corner of Figure \ref{selvscondp5p18} into the lower left corner. On one hand, the condition number and selection score for
$\theta=(x_1^0,x_2^0,x_5^0,\lambda_1,d_1,\epsilon_1,d_2,f,k_2,\delta,m_1,m_2,\epsilon_2,N_T,b_{E},K_{b},d_{E},K_{d})$,
are $7.518\times 10^{8}$ and $1.025\times 10^{5}$, respectively. On the other hand, the condition number and selection score for
$\theta=(x_1^0,\lambda_1,\delta,\epsilon_2,N_{T})$ are $8.383\times 10^{1}$ and $3.990\times 10^{-1}$, respectively.
The fourth column of Table \ref{cvp5p18} is a reminder that reducing the number of parameters (e.g., from $p=18$ to $p=5$) is not enough
to guarantee reasonable improvements in uncertainty quantification, even though equation (\ref{alexp}) establishes an exponential relationship between
the norm of the vector of coefficients of variation and the number of parameters. The best improvement in uncertainty quantification, when comparing the second
and fourth columns of Table \ref{cvp5p18}, is observed for $x_2^0$, with a standard error equal to 2,000,000\% of the estimate when $p=18$, which
drops to 200\% when $p=5$. However, the latter is still an estimate with large uncertainty, which should be avoided.
\begin{table}
\caption{Coefficient of variation (CV), defined as the ratio of standard error divided by estimate, for three parameter vectors.}
\begin{center}
\begin{tabular}{|c|c|c|c|}\hline
Parameter & CV ($p=18$) & CV ($p=5$) & CV($p=5$) \\ \hline
$x_1^0$&4.82$\times 10^{2}$ &4.10$\times 10^{-2}$ & ---\\ \hline
$x_2^0$&1.62$\times 10^{4}$ &---& 1.72$\times 10^{0}$\\ \hline
$x_5^0$&6.42$\times 10^{3}$ &---& ---\\ \hline
$\lambda_1$&8.43$\times 10^{-1}$ &1.15$\times 10^{-1}$ &--- \\ \hline
$d_1$&9.93$\times 10^{-1}$ &---& ---\\ \hline
$\epsilon_1$&1.24$\times 10^{2}$ &---& --- \\ \hline
$d_2$&3.79$\times 10^{1}$ &---&--- \\ \hline
$f$&4.94$\times 10^{2}$ &---& --- \\ \hline
$k_2$&4.70$\times 10^{1}$ &---&--- \\ \hline
$\delta$&3.98$\times 10^{2}$ &3.39$\times 10^{-1}$ & --- \\ \hline
$m_1$&2.24$\times 10^{4}$ &---&--- \\ \hline
$m_2$&3.82$\times 10^{4}$ &---&--- \\ \hline
$\epsilon_2$&2.06$\times 10^{2}$ &1.39$\times 10^{-1}$ & ---\\ \hline
$N_T$&4.04$\times 10^{2}$ &9.99$\times 10^{-2}$ & --- \\ \hline
$b_E$&6.10$\times 10^{4}$ &---&1.12$\times 10^{4}$ \\ \hline
$K_b$&2.51$\times 10^{4}$ &---&4.29$\times 10^{3}$ \\ \hline
$d_E$&5.79$\times 10^{4}$ &---&1.07$\times 10^{4}$ \\ \hline
$K_d$&2.30$\times 10^{4}$ &---&4.04$\times 10^{3}$ \\ \hline
\end{tabular}
\end{center}
\label{cvp5p18}
\end{table}
\clearpage
\section{Concluding Remarks}
As we have noted, inverse problems for complex system models containing a large number of parameters are difficult. There is great need for quantitative methods to assist in posing inverse problems that will be well formulated in the sense of the ability to provide parameter estimates with quantifiable small uncertainty estimates. We have introduced and illustrated use of such an algorithm that requires prior local information about ranges of admissible parameter values and initial values of interest along with information on the error in the observation process to be used with the inverse problem. These are needed in order to implement the sensitivity/Fisher matrix based algorithm.
Because sensitivity of a model with respect to a parameter is fundamentally related to the ability to estimate the parameter, and because sensitivity is a local concept, we observe that the pursuit of a global algorithm to use in formulating parameter estimation or inverse problems is most likely a quest that will go unfulfilled.
\section*{Acknowledgements} This research was
supported in part by Grant Number R01AI071915-07 from the National
Institute of Allergy and Infectious Diseases and in part by the
Air Force Office of Scientific Research under grant number
FA9550-09-1-0226. A. C.-A. carried out portions of this work while visiting
the Statistical and Applied Mathematical Sciences Institute, which is funded
by the National Science Foundation under Grant DMS-0635449. The content is solely the responsibility of the
authors and does not necessarily represent the official views of
the NIAID, the NIH, the AFOSR, or the NSF.
\subsection*{Organization of the paper}
In Section~\ref{lattices} we study bimodular lattices and their short characteristic covectors.
In Section~\ref{obstruction} we describe an obstruction for a rational homology sphere $Y$ with $H_1(Y)\cong \mathbb{Z}/2\mathbb{Z}$ to bound a definite 4-manifold.
In Section~\ref{example}, we show that the manifold $N$ described above satisfies the conditions of the obstruction from Section \ref{obstruction}.
\subsection*{Acknowledgements}
We thank Paolo Aceto for bringing this problem to our attention, and Adam Levine, Brendan Owens, Kim Fr{\o}yshov, Matthew Hedden, and Andr\'as Stipsicz for useful conversations.
MG thanks the R\'enyi Institute for their hospitality at the beginning of this project.
\end{section}
\begin{section}{Characteristic covectors of bimodular lattices}\label{lattices}
In this section, a lattice $\Lambda$ will be a subset $\Lambda \subset \mathbb{R}^m$ isomorphic to $\mathbb{Z}^m$, and such that, with respect to the Euclidean scalar product on $\mathbb{R}^m$ one has $v\cdot w \in \mathbb{Z}$ for each $v,w \in \Lambda$. A lattice is said to be \emph{minimal} if it contains no vector of square $1$.
We denote by $\Lambda'$ the \emph{dual lattice} of $\Lambda$, i.e. the set of vectors in $\mathbb{R}^m$ that pair integrally with each element of $\Lambda$. Note that $\Lambda \subset \Lambda'$, and that $\Lambda'$ is \emph{not} a lattice according to the definition above, unless $\Lambda = \Lambda'$. (There are always vectors in $\Lambda'$ whose square is a non-integral rational if the containment is strict.) If $\Lambda = \Lambda'$ we say that $\Lambda$ is \emph{unimodular} (or \emph{self-dual}). Even though $\Lambda$ and $\Lambda'$ both live in $\mathbb{R}^m$ and we use the scalar product on $\mathbb{R}^m$ for the pairing, we use the notation $\langle \xi, v\rangle$ to denote the pairing of a covector $\xi \in \Lambda'$ and a vector $v \in \Lambda$.
We denote by $\Char(\Lambda)$ the subset of $\Lambda'$ comprising all $\xi$ such that $\langle \xi, v\rangle \equiv v^2 \pmod 2$ for every $v \in \Lambda$. Each such $\xi$ is a \emph{characteristic covector}. (Again, when $\Lambda$ is unimodular $\xi$ is actually an element in $\Lambda$, but we prefer to talk about covectors to emphasize that we are thinking about the dual.)
Throughout the section, the letter $L$ will always denote a positive definite integral lattice of rank $n$ and determinant $2$ (a \emph{bimodular lattice}), $A$ will be an auxiliary lattice of determinant $2$, and $M_A$ will be the lattice $L \oplus A$.
Note that $M'_A/M_A \cong L'/L \oplus A'/A \cong \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}$ has a unique metabolizer (i.e. a subgroup isomorphic to $\mathbb{Z}/2\mathbb{Z}$ that is isotropic with respect to the $\mathbb{Q}/\mathbb{Z}$-valued bilinear form induced by the product on $M_A$), so that $M_A$ is an index-$2$ sublattice of a unimodular lattice $U_A$.
(See, for example,~\cite{Larson} for a topologically-minded treatment.)
In fact, $U_A$ is simply $(L\oplus A) \cup \big((L'\setminus L) \times (A'\setminus A)\big) \subset L' \oplus A'$.
With a slight abuse of notation, we view $L$ and $A$ as subsets of $U_A$. It is easy to see that $L = A^\perp$ and $A = L^\perp$.
Since neither $L$ nor $A$ has a metabolizer, note that $L\otimes \mathbb{Q} \cap U_A = L$ and $A \otimes \mathbb{Q} \cap U_A = A$.
Note that $U_A$ is uniquely determined by both $L$ and $A$, but we do not make the dependency on $L$ explicit in the notation.
Dualizing, $U_A' \cong U_A$ is an index-2 subset of $M_A' = L' \oplus A'$. Moreover, the restriction maps $U_A' \to L'$ and $U_A' \to A'$ are both onto. In the same way, the restriction maps $\Char(U_A) \to \Char(L)$ and $\Char(U_A) \to \Char(A)$ are onto.
Two choices stand out. We can choose $A = L$, or $A = A_1$, where $A_1$ is the rank-$1$ lattice generated by a vector of square $2$ (i.e. a \emph{root}). We write $U$ and $M$ instead of $U_{A_1}$ and $M_{A_1}$.
Call $r$ one of the two generators of the auxiliary lattice $A = A_1$.
By construction, $U$ contains $r$, a vector of norm $2$.
\begin{lemma}\label{l:charcongruence}
Let $\xi \in \Char(L)$ be a characteristic covector. Then $\xi^2$ is an integer, and $\xi^2 \equiv n\pm 1 \pmod 8$.
\end{lemma}
\begin{proof}
Choose a characteristic covector $\xi_U \in \Char(U)$ that extends $\xi$, and call $\xi_A \in \Char(A_1)$ its restriction to $A_1$. Then $\xi_U^2 \equiv \rk U = n+1 \pmod 8$, and $\xi_U^2 = \xi^2 + \xi_A^2$. Since $\xi_A^2 \equiv 0,2 \pmod 8$ by direct verification, the lemma follows.
\end{proof}
We call $\Char_\pm(L)$ the set of characteristic covectors $\xi$ of $L$ for which $\xi^2 \equiv n\pm 1 \pmod 8$.
Let us go back to the case of $A$ arbitrary. Since $A$ is a determinant-$2$ lattice, $\Char_\pm(A)$ are defined, too.
\begin{lemma}\label{l:extend}
$\xi_u \in U_A'$ is a characteristic covector of $U_A$ if and only if there exists a sign $s$ such that $\xi_{u}|_L \in \Char_s(L)$ and $\xi_u|_{A} \in \Char_{-s}(A)$.
\end{lemma}
Recall that if $\Lambda$ is a lattice, then $\Char(\Lambda)$ is affine over $2\Lambda'$ (i.e. $\Char(\Lambda) = \xi + 2\Lambda'$ for any $\xi \in \Char(\Lambda)$). An easy extension of this fact is the observation that $\Char_{\pm}(L)$ is affine over $2L$ (and not over $2L'$), and that translations by elements in $2L'\setminus 2L$ swap $\Char_+(L)$ and $\Char_-(L)$.
\begin{proof}
Call $\xi_\ell = \xi_u|_L$ and $\xi_a = \xi_u|_{A}$.
The `only if' direction is clear, since $\xi_\ell^2 + \xi_a^2 = \xi_u^2 \equiv n+\rk A \pmod 8$.
Let us look at the `if' direction. Now, let $C := \big(\Char_+(L) \times \Char_-(A)\big) \cup \big(\Char_-(L) \times \Char_+(A)\big) \subset M_A'$.
By the `only if' direction above, $C$ contains $\Char(U_A)$, which is an affine space over $2U_A'$.
On the other hand, $\Char_\pm(L) \times \Char_\mp(A)$ is also affine over $2L+2A = 2M_A$, and, by the remark below the statement, if $v \in U_A \setminus (L\oplus A) = (L'\setminus L) \times (A'\setminus A)$ and $\xi \in \Char_\pm(L) \times \Char_\mp(A)$ then $\xi + 2v \in \Char_\mp(L) \times \Char_\pm(A)$, so that $C$ is affine over $2U_A'$, too.
In particular, $C$ and $\Char(U_A)$ are both affine subspaces over $2U_A'$, and since $\Char(U_A) \subset C$ then they are equal.
\end{proof}
We denote by $I^m$ the lattice $\mathbb{Z}^m$ with the diagonal intersection form (i.e. with an orthonormal basis). The lattice $\Delta_n := I^{n-1} \oplus A_1$ is the unique bimodular lattice of rank $n$ whose intersection form is diagonal.
\begin{lemma}\label{l:diagonal}
The lattice $U$ is diagonal if and only if $L$ is.
\end{lemma}
\begin{proof}
Suppose that $U$ is diagonal, with base $e_0,\dots,e_n$. Then the generator $r$ of $A$ is a root in $U$. In particular, up to re-indexing the generators of $U$ (and possibly flipping signs), $r = e_0 - e_1$. Now, $L \cong \langle r \rangle^\perp$, but $\langle r \rangle^\perp$ is spanned by $e_0 + e_1, e_2, \dots, e_n$, and in particular it is isomorphic to $\Delta_n$.
If $L = \Delta_n$, then $L$ embeds in $I^{n+1}$ as we have just seen; since $U$ is uniquely determined by $L$, it follows that $U \cong I^{n+1}$.
\end{proof}
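The first half of the proof can be checked numerically for small $n$. The following sketch verifies that the complement of the root $r = e_0 - e_1$ inside $I^{n+1}$ is spanned by $e_0+e_1, e_2, \dots, e_n$ and has Gram matrix $\mathrm{diag}(2,1,\dots,1)$, i.e. is $\Delta_n$ (here for $n = 4$):

```python
import numpy as np

# Ambient lattice I^{n+1} with orthonormal basis e0, ..., en.
n = 4
e = np.eye(n + 1, dtype=int)
r = e[0] - e[1]                    # the root cutting out Delta_n

# Spanning set of <r>^perp: e0 + e1, e2, ..., en.
basis = np.vstack([e[0] + e[1]] + [e[i] for i in range(2, n + 1)])

gram = basis @ basis.T             # should be diag(2, 1, ..., 1)
pairings_with_r = basis @ r        # should vanish identically
```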
We recall a result of Elkies on characteristic covectors in unimodular lattices.
\begin{theorem}[\cite{Elkies}]\label{t:elkies1}
Let $\Lambda$ be unimodular lattice of rank $m$. Then
\[
\min_{\xi \in \Char(\Lambda)} \xi^2 \le m,
\]
and the equality is attained if and only if $\Lambda$ is the diagonal lattice $I^m$.
\end{theorem}
We find it convenient to introduce the notation for the defect of a lattice.
The \emph{defect} $d(\Lambda)$ of $\Lambda$ is:
\[
d(\Lambda) = \min_{\xi \in \Char(\Lambda)} \frac{\xi^2- \rk\Lambda}4
\]
Note that Elkies' theorem can be rephrased as saying that a unimodular lattice has non-positive defect, and that the defect is $0$ if and only if the lattice is diagonal.
When $L$ is bimodular, using Lemma~\ref{l:charcongruence} we can identify \emph{two} defects, denoted $d_\pm(L)$:
\[
d_\pm(L) = \min_{\xi \in \Char_\pm(L)} \frac{\xi^2-\rk L}4
\]
Note that $d_\pm(L) \equiv \pm\frac14 \pmod 2$.
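For small lattices the two defects can be computed by brute force, encoding a covector by its pairings with a fixed basis. The following sketch (the finite search box is an assumption that happens to suffice for this example) recovers $d_\pm(\Delta_2) = \pm\frac14$:

```python
import numpy as np
from itertools import product

def defects(G, box=6):
    """Brute-force d_+ and d_- for a positive definite Gram matrix G of det 2.

    A covector xi is encoded by its pairings c_i = <xi, e_i> with a basis;
    xi is characteristic iff c_i = G_ii (mod 2) for all i, and its square is
    c^T G^{-1} c. The box must be large enough to contain the minimizers.
    """
    n = G.shape[0]
    Ginv = np.linalg.inv(G)
    diag = np.diagonal(G).astype(int)
    best = {+1: np.inf, -1: np.inf}
    for c in product(range(-box, box + 1), repeat=n):
        c = np.array(c)
        if ((c - diag) % 2).any():
            continue                     # not a characteristic covector
        sq = c @ Ginv @ c
        # By Lemma l:charcongruence, sq = n+1 or n-1 (mod 8).
        s = +1 if round(sq - (n + 1)) % 8 == 0 else -1
        best[s] = min(best[s], sq)
    return (best[+1] - n) / 4, (best[-1] - n) / 4

# Delta_2 = I^1 + A_1 has Gram matrix diag(1, 2): the computation recovers
# d_+ = 1/4 and d_- = -1/4, the equality case of the theorem below.
d_plus, d_minus = defects(np.diag([1.0, 2.0]))
```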
The main result of this section is a bimodular version of Elkies' theorem and gives another characterization of $\Delta_n$ (see~\cite{OwensStrle-char} for a different characterization).
\begin{theorem}\label{p:2-elkies}
For every bimodular lattice $L$, $d_+(L) + d_-(L) \le 0$. Moreover, if equality is attained, $L$ is diagonal, and in particular $d_\pm(L) = \pm\frac14$.
\end{theorem}
As mentioned above, a root in a lattice $\Lambda$ is a vector of square $2$; we say that $\Lambda$ is a \emph{root lattice} if it is rationally spanned by its set of roots. To a collection $R \subset \Lambda$ of $\mathbb{Q}$-linearly independent roots we associate an edge-weighted graph $G(R)$ as follows: the set of vertices of $G(R)$ is $R$, and there is an edge joining $r, s \in R$ with weight $r\cdot s$ if $r \cdot s \neq 0$. Note that if $r, s \in R$ are distinct, they are linearly independent, so by Cauchy--Schwarz $r\cdot s \in \{-1,0,1\}$.
We will make the choice $A = L$ in the proof. To make the notation lighter, we denote $U_L$ by $D$, and we call it the \emph{double} of $L$. To distinguish between the two summands in $M_A \cong L \oplus L$, we still call $A$ its second summand; however, we drop the dependency on $A$ from the notation. In summary, we have $M = L\oplus A$ as an index-$2$ sublattice of $D$, and $L$ and $A$ are viewed as a pair of orthogonal sublattices in $M$ and in $D$.
We call $\pi_\ell \colon\thinspace M \to L$ and $\pi_a \colon\thinspace M \to A$ the two orthogonal projections, and $\rho_\ell \colon\thinspace D' \to L'$ and $\rho_a \colon\thinspace D' \to A'$ the two restriction maps. In fact, $\pi_\ell$ and $\rho_\ell$ are both restrictions of a linear map $D\otimes \mathbb{Q} \to L \otimes \mathbb{Q}$ (and similarly for $A$).
\begin{proof}
We can assume, without loss of generality, that $L$ is minimal (i.e. it contains no vectors of norm $1$): indeed, it is easy to verify that $d_\pm(L) = d_\pm(L \oplus I^m)$, and $L$ is diagonal if and only if $L\oplus I^m$ is.
Call $n$ the rank of $L$. Consider now $D$, the double of $L$. $D$ is a unimodular lattice, so $d(D) \le 0$. By Lemma~\ref{l:extend}, $d(D) = \min\{d_+(L) + d_-(A), d_-(L) + d_+(A)\} = d_+(L) + d_-(L)$, which proves the first assertion.
Let us now suppose that $d_+(L) + d_-(L) = 0$; again by Elkies' theorem, this implies that $D$ is the diagonal lattice $I^{2n}$. Call $e_1, \dots, e_{2n}$ an orthonormal basis of $D$.
We claim that $L$ is a root lattice.
Since $L$ is minimal, $e_i \in D\setminus M$. However, since $M$ has index $2$ in $D$, $2e_i \in M$. We also know that $2e_i \not\in L \cup A$: indeed, as mentioned at the beginning of the section, $L \otimes \mathbb{Q} \cap D = L$, so if $2e_i \in L$, then also $e_i \in L$. (By symmetry, this proves the statement for $A$ as well.)
This implies that $r_i := \pi_\ell(2e_i)$ and $\pi_a(2e_i)$ are two non-zero vectors whose squares sum to $(2e_i)^2 = 4$; since $L$ is minimal, they both have square $2$. Since the collection $\{2e_i\}$ is a rational basis of $D$, the collection $\{r_i\}$ is a set of roots that rationally spans $L$, which proves the claim.
If $d_\pm(L) = \pm\frac14$, then, since $d_\pm(A_1) = \pm\frac14$, $d(U) = 0$ and by Elkies' theorem $U$ is diagonal
(recall that $U$ is the unimodular overlattice of $L \oplus A_1$). By Lemma~\ref{l:diagonal} $L$ is diagonal.
Suppose now $d_\pm(L) \neq \pm \frac14$, so that in particular $|d_+(L)| = |d_-(L)| =: d \ge \frac74$. Consider the characteristic covector $\xi_0 = e_1 + \dots + e_{2n} \in \Char(D)$, and, for each $i$, the characteristic covector $\xi_i = \xi_0 - 2e_i \in \Char(D)$. Note that $\xi_i$ is norm-minimizing among all characteristic covectors in $D$ for $i = 0,\dots, 2n$, so its restrictions $\lambda_i = \rho_\ell(\xi_i) \in L'$ and $\alpha_i = \rho_a(\xi_i)$ are characteristic and minimize the norm \emph{in their congruence class}. That is, if $\lambda_i \in \Char_{+}(L)$, then $\lambda_i$ minimizes the norm among all elements in $\Char_{+}(L)$; in this case, $\alpha_i \in \Char_-(A)$ and $\alpha_i$ minimizes the norm in $\Char_-(A)$.
Without loss of generality, let us suppose that $\lambda_0 \in \Char_+(L)$. The key observation is that $\lambda_i \in \Char_-(L)$ for each $i = 1,\dots,2n$.
This follows from the fact, observed above, that $e_i \not\in L$, so that $\pi_\ell(2e_i) \in 2L'\setminus 2L$, and in particular $\pi_\ell(2e_i)$ swaps $\Char_+(L)$ and $\Char_-(L)$.
Now, since $\lambda_i \in \Char_-(L)$ for each $i >0$ is a norm-minimizer in its class:
\[
|\lambda_0^2 - \lambda_i^2| = 8d \ge 14.
\]
However,
\[
\lambda_0^2 - \lambda_i^2 = 2\langle \lambda_0, r_i\rangle - r_i^2,
\]
so that for each $i > 0$:
\[
|\langle \lambda_0, r_i\rangle| \ge 4d-1 \ge 3.
\]
Pick a subset $J \subset \{1, \dots, 2n\}$ of indices such that $R = \{r_j \mid j \in J\}$ is a rational basis for $L$; this in particular means that $|J| = n$. Up to relabelling, let us assume $J = \{1,\dots,n\}$. We claim that $G(R)$ is bicolorable.
To see this, we will prove that all cycles in $G(R)$ have even length, and in fact they all have length $4$. Assume by contradiction that there is a cycle $C \subset G(R)$ of length $k \ge 3, k \neq 4$. Up to another relabelling, let us assume that $C$ comprises $r_1,\dots,r_k$ in this order. Up to replacing $r_i$ with $-r_i$ for some values of $i$, we can assume that all edges $(r_1,r_2), \dots, (r_{k-1},r_k)$ are labelled with $-1$. Under this assumption, $(r_k, r_1)$ has to be labelled by $+1$, for otherwise $(r_1+\dots+r_k)^2 = 0$, which would contradict the fact that $R$ is a linearly independent set. Now recall that $R \subset L \subset D \cong I^{2n}$ comprises elements of square $2$. So there is a basis $f_1,\dots,f_{2n}$ of $D$ such that $r_1 = f_1 - f_2, \dots, r_{k-1} = f_{k-1} - f_k$, and $r_k = f_k + f_1$; but then $r_1 + \dots + r_k = 2f_1 \in L$, which implies $f_1 \in L$ since $L\otimes \mathbb{Q} \cap D = L$, and this contradicts the minimality of $L$.
Since $G(R)$ is bicolorable there is a subset $R' \subset R$, indexed by $J' \subset J$, containing $\lceil \frac n2 \rceil$ roots that are pairwise orthogonal.
Now, by Bessel's inequality:
\[
\lambda_0^2 \ge \sum_{j \in J'} \frac{\langle \lambda_0, r_j \rangle^2}{r_j^2} \ge \left\lceil \frac n2 \right\rceil \cdot \frac92 > 2n,
\]
where the last inequality is strict since $\lceil \frac n2 \rceil \cdot \frac92 \ge \frac94 n > 2n$; this contradicts the fact that $\lambda_0^2 \le \lambda_0^2 + \alpha_0^2 = \xi_0^2 = 2n$.
\end{proof}
\begin{remark}
Note that the assumption that the length of the cycle is not $4$ is in fact used: for the $4$-cycle as above, the embedding given by $(f_1-f_2, f_2-f_3, -f_1-f_2, -f_1+f_4)$ has components that sum to $-f_1-f_2-f_3+f_4$.
\end{remark}
\end{section}
\begin{section}{The obstruction}\label{obstruction}
In this section we discuss a topological application of Theorem~\ref{p:2-elkies}.
We start with an algebraic topology lemma.
\begin{lemma}\label{extension}
Let $X$ be a compact, oriented $4$-manifold with boundary $Y$, a closed $3$-manifold with $H_1(Y)$ finite of square-free order.
Then $|{\det Q_X}| = |H_1(Y)|$ and all spin$^c$ structures on $Y$ extend to $X$.
\end{lemma}
\begin{proof}
Let us look at the long exact sequence for the pair $(X, Y)$:
\[
0 \to H^2(X,Y) \to H^2(X) \to H^2(Y) \to H^3(X,Y) \to H^3(X) \to 0.
\]
All spin$^c$ structures on $Y$ extend if and only if the restriction map $H^2(X) \to H^2(Y)$ is onto.
Since $H^2(Y)$ is finite, $H^3(X,Y)$ and $H^3(X)$ have the same rank, $b_3$;
call $B$ and $A$ their torsion subgroups, respectively.
For the same reason, $H^2(X,Y)$ and $H^2(X)$ have the same rank, $b_2$;
by the universal coefficient theorem and Poincar\'e--Lefschetz duality, their torsion subgroups are isomorphic to $A$ and $B$, respectively.
Since torsion can only map to torsion, call $\tau_i$ the map obtained by restricting $\pi_i^*: H^i(X,Y) \to H^i(X)$ to the torsion subgroup, and then projecting the target to the torsion subgroup;
we regard $\tau_2$ as a map $\tau_2: A\to B$, and $\tau_3$ as a map $\tau_3: B\to A$.
Note that, since $\pi_3^*$ is onto and $H^2(Y)$ is torsion, the induced map on the quotient $H^3(X,Y)/B \to H^3(X)/A$ is injective (in fact, an isomorphism).
Finally, the map $H^2(X,Y)/A\to H^2(X)/B$ is represented by the intersection form $Q_X$ of $X$.
With this in place, we can then apply the nine lemma to
\[
\xymatrix{
0\ar[r] & B\ar[r] \ar[d] & H^3(X,Y)\ar[r] \ar[d] & H^3(X,Y)/B \ar[d] \ar[r] & 0\\
0\ar[r] & A\ar[r] & H^3(X)\ar[r] & H^3(X)/A\ar[r] & 0\\
}
\]
and
\[
\xymatrix{
0\ar[r] & A\ar[r] \ar[d] & H^2(X,Y) \ar[r] \ar[d] & H^2(X,Y)/A \ar[d]\ar[r] & 0\\
0\ar[r] & B\ar[r] & H^2(X) \ar[r] & H^2(X)/B \ar[r] & 0\\
}
\]
to obtain that $\kerr \pi_3^*$ is a group $G$ of order $\left|\kerr\tau_3\right| = |B|/|A|$, and that $\cokerr \pi_2^*$ is a group $H$ of order $\left|\cokerr \tau_2\right|\cdot \left|\cokerr Q_X\right| = (|B|/|A|)\cdot |{\det Q_X}|$.
It follows that we can extract a short exact sequence of finite groups:
\[
0 \to H \to H_1(Y) \to G \to 0,
\]
from which
\[
|H_1(Y)| = |H|\cdot|G| = \frac{|B|^2}{|A|^2}|{\det Q_X}|.
\]
Since we assumed that $H_1(Y)$ has square-free order, we conclude that $|A| = |B|$ and $|H_1(Y)| = |{\det Q_X}|$;
each of these conclusions implies that $H^2(X)\to H^2(Y)$ is onto.
\end{proof}
To a closed, oriented, spin$^c$ rational homology 3-sphere $(Y,\t)$, Ozsv\'ath and Szab\'o~\cite{OSz-annals} associate a family of invariants, collectively called \emph{Heegaard Floer homology};
we will work with two of these, namely $\HF^+(Y,\t)$ and $\HF^\infty(Y,\t)$, and we will recall a few results from~\cite{OSz}.
For convenience, we will work over the field of two elements.
Recall that from Heegaard Floer homology we can extract a rational number $d(Y,\t)$, called the \emph{correction term} of $(Y,\t)$, that is an invariant under spin$^c$ rational homology cobordism~\cite{OSz}, and reduces modulo 2 to the rho-invariant $\rho(Y,\t)$.
The correction term $d(Y,\t)$ is defined to be the minimal grading of any non-zero element in the image of the map $\HF^\infty(Y,\t) \xrightarrow{\pi} \HF^+(Y,\t)$.
In the remainder of the section $Y$ will always denote a closed, oriented 3-manifold with $H_1(Y)\cong \mathbb{Z}/2\mathbb{Z}$; we say that $Y$ is a \emph{homology} $\mathbb{RP}^3$.
Hence $Y$ has exactly two spin$^c$ structures.
Our argument will depend on the value of these correction terms.
First we pin down their value modulo 2.
\begin{proposition}\label{labelling}
We can label the two spin$^c$ structures on $Y$ as $\t_+$ and $\t_-$, so that $d(Y,\t_\pm) \equiv \pm \frac{1}{4} \pmod 2$.
\end{proposition}
We start with a preliminary lemma.
\begin{lemma}\label{extension-cob}
Let $W$ be a cobordism from an integral homology sphere $Z$ to a homology $\mathbb{RP}^3$, $Y$.
Then both spin$^c$ structures on $Y$ extend to $W$.
\end{lemma}
\begin{proof}
Carve an open neighborhood of a path from $Z$ to $Y$ into $W$, to obtain a 4-manifold $X$ with boundary $Y\#(-Z)$.
The statement is equivalent to the fact that both spin$^c$ structures on $\partial X = Y\#(-Z)$ extend to $X$, which follows from Lemma~\ref{extension}.
\end{proof}
Fix a spin$^c$ structure $\t$ on $Y$ and a simply-connected 4-manifold $X$ with spin$^c$ structure $\mathfrak{s}$, such that $\partial X= Y$ and $\mathfrak{s} |_Y = \t$.
Recall now that the d-invariant $d(Y,\t)$ reduces modulo 2 to the rho-invariant $\rho(Y,\t) \in \mathbb{Q}/2\mathbb{Z}$. The rho-invariant is defined as $\rho(Y,\t) \equiv \frac{c_1(\mathfrak{s})^2-\sigma(X)}{4} \pmod 2$; it follows from the definition that if $(W,\mathfrak{s})$ is a spin$^c$ cobordism from $(Y,\t)$ to $(Y', \t')$, then $\rho(Y',\t') - \rho(Y,\t) \equiv \frac{c_1(\mathfrak{s})^2-\sigma(W)}{4} \pmod 2$.
In particular, integral homology spheres $Z$ have $\rho(Z,\t) = 0$, since they bound spin manifolds (which have signature divisible by 8, by the van der Blij Lemma~\cite[Section~II.5]{HusemollerMilnor} and which have a spin$^c$ structure with trivial first Chern class).
\begin{proof}[Proof of Proposition~\ref{labelling}]
Since $\rho(Y,\t)$ lifts to $d(Y,\t)$, the statement clearly reduces to showing that $\rho(Y,\t_\pm) \equiv \pm\frac14 \pmod 2$.
We prove this by finding a suitable cobordism from $Y$ to an integral homology sphere.
Pick a knot $K$ in $Y$ such that $[K] \neq 0 \in H_1(Y)$.
Then there exists a slope $\gamma$ such that the result of Dehn surgery along $K$ with slope $\gamma$ is an integral homology sphere $Z_0 := Y_\gamma(K)$.
Let $K_0 \subset Z_0$ denote the dual knot.
Since $H_1(Y)\cong \mathbb{Z}/2\mathbb{Z}$ and $Z_0$ is an integer homology sphere, the surgery on $K_0$ that returns $Y$ must have slope $2/q$ for some odd integer $q$.
We can write $2/q$ as a (negative) continued fraction $2/q = [0,-\frac{q+1}2,-2]$, so that $Y$ can be represented by the surgery diagram in Figure~\ref{f:Z0toZ}.
It is easy to see that the 3-manifold obtained by doing surgery on the $0$- and $-\frac{q+1}2$-framed components is again an integral homology sphere, which we denote by $Z$, and that the cobordism $W$ from $Z$ to $Y$ given by the $-2$-framed 2-handle is negative definite.
Moreover, since $W$ is obtained by attaching a single 2-handle to an integral homology sphere, $H_1(W) = H_3(W) = 0$;
in particular, both spin$^c$ structures on $Y$ extend to $W$.
Since $W$ is the trace of a 2-handle attachment along a knot in an integral homology sphere with framing $-2$, the two spin$^c$ structures $\mathfrak{s}_+$ and $\mathfrak{s}_-$, with Chern classes $0$ and $2\gamma \in H^2(W;\mathbb{Z}) \equiv \mathbb{Z}\cdot \gamma$ respectively, have $c_1(\mathfrak{s}_+)^2 = 0$, $c_1(\mathfrak{s}_-)^2 = -2$.
By the cobordism formula mentioned above, letting $\t_\pm$ be the restriction of $\mathfrak{s}_\pm$ to $Y$, we get:
\begin{align*}
\rho(Y,\t_+) = \frac{c_1(\mathfrak{s}_+)^2 - \sigma(W)}4 + \rho(Z,\t) \equiv \frac14 \pmod 2,\\
\rho(Y,\t_-) = \frac{c_1(\mathfrak{s}_-)^2 - \sigma(W)}4 + \rho(Z,\t) \equiv -\frac14 \pmod 2,
\end{align*}
thus concluding the proof.
\end{proof}
\begin{figure}
\labellist
\pinlabel $K_0$ at 36 133
\pinlabel $=$ at 108 133
\pinlabel $=$ at 252 133
\pinlabel $\frac2q$ at 45 30
\pinlabel $K_0$ at 179 133
\pinlabel $0$ at 190 25
\pinlabel $-\frac{q}2$ at 131 75
\pinlabel $K_0$ at 324 133
\pinlabel $\langle 0\rangle$ at 338 25
\pinlabel $\langle -\frac{q+1}2\rangle$ at 263 75
\pinlabel $-2$ at 418 75
\endlabellist
\centering
\includegraphics[scale=.66]{figures/Z0toZ}
\caption{Going from $Z_0$ to $Z$.
Recall that the knot $K_0$ lives in $Z_0$.
The right-most picture represents the cobordism from $Z$ to $Y$.
Here we used the braced framing notation: namely, the surgery diagram comprising the components with braced framings describes $Z$, i.e. the lower boundary component of the cobordism, and the non-braced ones represent actual handle attachments for the cobordism.}\label{f:Z0toZ}
\end{figure}
Proposition~\ref{labelling} justifies the following definition.
\begin{definition}
For $Y$ a homology $\mathbb{RP}^3$, we set $d_{\pm 1/4}(Y) = d(Y,\t_\pm)$.
\end{definition}
Note that the labelling is chosen so that $d_{\pm 1/4}(Y) \equiv \pm \frac14 \pmod 2$;
observe also that since $d(Y,\t) = -d(-Y,\t)$, we have that $d_{\pm1/4}(-Y) = -d_{\mp 1/4}(Y)$.
Now suppose that $Y$ bounds a \emph{positive definite} 4-manifold $W$.
In this context we have the following inequality.
\begin{theorem}[\cite{OSz}]\label{t:negativedefiniteinequality}
For each spin$^c$ structure $\mathfrak{s}$ on $W$ with $\mathfrak{s}|_Y = \t$, we have
\[
\frac{c_1(\mathfrak{s})^2 -b_2(W)}4 \geq d(Y,\t).
\]
Moreover, the two sides of the inequality are congruent modulo $2$.
\end{theorem}
We are now ready to give a topological translation of Theorem~\ref{p:2-elkies}.
\begin{proposition}\label{ob}
Let $Y$ be a homology $\mathbb{RP}^3$.
If $Y$ bounds a positive definite $4$-manifold, then $d_{1/4}(Y) + d_{-1/4}(Y) \le 0$.
Moreover, if equality is attained, then $d_{\pm 1/4}(Y) = \pm \frac{1}{4}$.
\end{proposition}
\begin{proof}
Suppose that $Y$ bounds a positive definite $4$-manifold $W$, and let $L$ be the lattice $(H_2(W;\mathbb{Z})/{\rm Tor}, Q_W)$.
By Lemma~\ref{extension}, $L$ is a positive definite lattice of determinant $2$, and the first Chern class gives a surjection $c_1 \colon \text{Spin}^c(W) \to \Char(L)$. Call $n = b_2(W) = \rk L$.
By the last statement in Theorem~\ref{t:negativedefiniteinequality}, using the labelling of Proposition~\ref{labelling}, we see that $\mathfrak{s} \in \text{Spin}^c(W)$ restricts to $\t_\pm$ if and only if $c_1(\mathfrak{s}) \in \Char_\pm(L)$.
Let $\xi_+ \in \Char_+(L)$ and $\xi_- \in \Char_-(L)$ be characteristic covectors with minimal square; note that there exist spin$^c$ structures $\mathfrak{s}_\pm$ on $W$ such that $c_1(\mathfrak{s}_\pm) = \xi_\pm$, and that $\mathfrak{s}_\pm$ restricts to $\t_\pm$.
Then using Theorem~\ref{t:negativedefiniteinequality} and Theorem~\ref{p:2-elkies} we get
\[
d_{1/4}(Y) + d_{-1/4}(Y) \le \frac{\xi_+^2 -n}4 + \frac{\xi_-^2 -n}4 = d_+(L) + d_-(L) \le 0,
\]
proving the first part of the theorem.
Furthermore, if $d_{1/4}(Y) + d_{-1/4}(Y) = 0$, then the above inequality forces $d_+(L) + d_-(L) = 0$, and so by Theorem~\ref{p:2-elkies} we get that $d_\pm(L) = \pm \frac14$.
This, in turn, together with Theorem~\ref{t:negativedefiniteinequality}, forces $d_{\pm1/4}(Y) = \pm \frac14$.
\end{proof}
\end{section}
\begin{section}{The example}\label{example}
Recall that we defined $\overline Y$ as the Seifert fibered space $Y(2;\frac{15}{13},\frac{17}{3},\frac{23}{22})$ and $N = 3P \# \overline Y$, where $P$ is the Poincar\'e homology sphere, oriented as the boundary of the negative $E_8$-plumbing;
equivalently, $P$ is the Brieskorn sphere $\Sigma(2,3,5)$.
We start by computing the correction terms of $\overline Y$.
\begin{proposition}
$d_{-1/4}(\overline Y) = -\frac{17}{4}$ and $d_{1/4}(\overline{Y}) = -\frac{31}{4}$.
\end{proposition}
\begin{proof}
Since $-\overline{Y}$ is the boundary of a negative definite plumbing with a single bad vertex, we can compute these correction terms using \c{C}a\u{g}r{\i} Karakurt's implementation~\cite{Cagricode} of N\'emethi's formula~\cite[Section~11.13]{Nemethi}, which, in turn, is a generalization of Ozsv\'ath and Szab\'o's algorithm from~\cite{OSz3}.
\end{proof}
\begin{remark}
N\'emethi computes the $d$-invariant of a Seifert fibered space as a sum of two terms;
the first summand is expressed in terms of Dedekind--Rademacher sums associated to the Seifert parameters~\cite[Section 11.9]{Nemethi}, while the second depends on the minimum of a certain function $\tau: \mathbb{Z}_{\ge 0} \to \mathbb{Z}$.
The function is eventually increasing, and the minimum is attained in a bounded interval $[0,N_0]$, where $N_0$ can be chosen to be the product of the multiplicities of the fibers.
Furthermore, in principle the computation of the correction terms of $\overline Y$ could be done in other ways: either by computing the minimal squares in the lattice associated to the canonical negative plumbing of $-\overline Y$~\cite[Corollary~1.5]{OSz3} or by following the entire algorithm in~\cite{OSz3}.
\end{remark}
We are now ready to prove our main result; more precisely, we will prove that $N$ does not bound a definite 4-manifold.
\begin{proof}[Proof of Theorem~\ref{main}]
By additivity of correction terms, and since $d(P,\t) = 2$ for the unique spin$^c$ structure $\t$ on $P$, we know that $d_{\pm 1/4}(N) = 3d(P,\t) + d_{\pm 1/4}(\overline{Y})$. By the previous proposition, we get $d_{\pm 1/4}(N) = \mp \frac74$.
Indeed, $d_{1/4}(N) + d_{-1/4}(N) = 0$ while $d_{1/4}(N) = -\frac74 \neq \frac14$, so Proposition~\ref{ob} implies that $N$ cannot bound a positive definite 4-manifold.
Reversing orientation and again applying Proposition~\ref{ob} shows that $N$ cannot bound a negative definite 4-manifold either.
\end{proof}
We conclude with two observations about $\overline Y$ and spineless 4-manifolds.
\begin{proposition}
Let $\Sigma$ be an integral homology sphere. The $3$-manifold $\overline Y \# \Sigma$ is not integral homology cobordant to a $3$-manifold obtained as Dehn surgery along a knot in $S^3$.
\end{proposition}
\begin{proof}
Let $Y = S^3_{2/q}(K)$ with $q > 0$; then, by~\cite[Proposition~1.4 and Lemma~2.4]{NW}:
\begin{align*}
d_{1/4}(Y) \in \left\{-2V_0(K) + \frac14, -2V_1(K) + \frac14\right\} \Longrightarrow d_{1/4}(Y) &\ge -2V_0(K) + \frac14,\\
d_{-1/4}(Y) &= -2V_0(K) - \frac14,
\end{align*}
so that in particular $d_{1/4}(Y) - d_{-1/4}(Y) \ge \frac12$. However
\[ d_{1/4}(\overline Y \# \Sigma) - d_{-1/4}(\overline Y \# \Sigma) = d_{1/4}(-(\overline Y \# \Sigma)) - d_{-1/4}(-(\overline Y \# \Sigma)) = -\frac72,\]
so $\pm(\overline Y\#\Sigma)$ cannot be integrally homology cobordant to a positive surgery along a knot in $S^3$.
\end{proof}
The following remark was suggested to the authors by Adam Levine.
\begin{remark}
Note that the previous proposition implies that, for any integral homology sphere $\Sigma$, $\overline Y \# \Sigma$ cannot bound a simply-connected spineless 4-manifold, i.e. a 4-manifold $W$ that is homotopy equivalent to $S^2$ but such that the generator of $H_2(W)$ is not represented by a PL-sphere.
Closely following Levine and Lidman's approach~\cite{LevineLidman}, we produce a homotopy $S^2$ whose boundary is $\overline Y \# \Sigma$ for some homology sphere $\Sigma$, which is necessarily going to be spineless. We sketch the construction, which is very similar to~\cite{LevineLidman}.
The key observation is that there is an integral homology sphere $-\Sigma$ such that $\overline Y$ is obtained as integral surgery along a knot in $-\Sigma$. For example, we can choose $\Sigma$ to be the Brieskorn sphere $\Sigma(15,17,181)$. Indeed, the negative plumbing graph of $\Sigma(15,17,181)$ is obtained by adding a single vertex to the negative plumbing graph of $-\overline Y$, which exhibits $\overline Y$ as surgery along a singular fiber of $-\Sigma(15,17,181)$.
By~\cite[Lemma~3.2 and Proposition~3.1]{LevineLidman}, the 4-manifold obtained from the trace of this surgery and carving a path in $\Sigma \times I$ is a homotopy $S^2$ whose boundary is $\overline Y \# \Sigma$.
\end{remark}
\end{section}
\bibliographystyle{abbr}
We consider the numerical solution of the following non-linear programming (NLP) problem for local minimizers:
\begin{subequations}
\begin{align}
&\min_{\textbf{x}\xspace \in \mathbb{R}\xspace^n} & f(\textbf{x}\xspace)& \\
&\text{s.t.} & c(\textbf{x}\xspace)&=\textbf{0}\xspace\,, \\
& & -\textbf{1}\xspace\leq\phantom{g(}\textbf{x}\xspace\phantom{)}&\leq \textbf{1}\xspace
\end{align}\label{eqn:NLP}
\end{subequations}
where $\textbf{1}\xspace \in \mathbb{R}\xspace^n$ is the vector of ones and
\begin{align*}
f\,&:\, \mathbb{R}\xspace^n \rightarrow \mathbb{R}\xspace^1\,, & c\,&:\, \mathbb{R}\xspace^n \rightarrow \mathbb{R}\xspace^m
\end{align*}
are bounded twice Lipschitz-continuously differentiable functions. We write $\textbf{x}\xspace^\star$ for an arbitrary fixed local minimizer. The Lagrange-function is defined as
\begin{align*}
\mathcal{L}\xspace\,:\,\mathbb{R}\xspace^n \times \mathbb{R}\xspace^m \rightarrow \mathbb{R}\xspace^1, (\textbf{x}\xspace,\boldsymbol{\lambda}\xspace) \mapsto f(\textbf{x}\xspace) - \boldsymbol{\lambda}\xspace\t\cdot c(\textbf{x}\xspace)\,.
\end{align*}
Every non-linear programming problem can be brought into the form \eqref{eqn:NLP} by using bounds on $\|\textbf{x}\xspace^\star\|_\infty$ and slacks for inequality constraints. The dimensions are $m,n \in \mathbb{N}\xspace$, where $m$ can be smaller than, equal to, or larger than $n$.
\paragraph{Penalty barrier program} In this paper we treat \eqref{eqn:NLP} by solving a related minimization problem. The Karush-Kuhn-Tucker (KKT) conditions for \eqref{eqn:NLP} are
\begin{subequations}
\begin{align}
\nabla f(\textbf{x}\xspace) + \rho \cdot \textbf{S}\xspace \cdot \textbf{x}\xspace - \nabla c(\textbf{x}\xspace)\cdot \boldsymbol{\lambda}\xspace - \boldsymbol{\mu}\xspace_L + \boldsymbol{\mu}\xspace_R = &\textbf{0}\xspace \label{eqn:KKT:NLP:1}\\
c(\textbf{x}\xspace) + \omega \cdot \boldsymbol{\lambda}\xspace = &\textbf{0}\xspace \label{eqn:KKT:NLP:2}\\
\textsl{diag}\xspace(\boldsymbol{\mu}\xspace_L) \cdot (\textbf{1}\xspace+\textbf{x}\xspace) - \tau \cdot\textbf{1}\xspace = &\textbf{0}\xspace \label{eqn:KKT:NLP:3}\\
\textsl{diag}\xspace(\boldsymbol{\mu}\xspace_R) \cdot (\textbf{1}\xspace-\textbf{x}\xspace) - \tau \cdot\textbf{1}\xspace = &\textbf{0}\xspace \label{eqn:KKT:NLP:4}\\
\boldsymbol{\mu}\xspace_L \geq& \textbf{0}\xspace \label{eqn:KKT:NLP:5}\\
\boldsymbol{\mu}\xspace_R \geq& \textbf{0}\xspace \label{eqn:KKT:NLP:6}\\
\textbf{1}\xspace+\textbf{x}\xspace \geq& \textbf{0}\xspace \label{eqn:KKT:NLP:7}\\
\textbf{1}\xspace-\textbf{x}\xspace \geq& \textbf{0}\xspace\,,\label{eqn:KKT:NLP:8}
\end{align}\label{eqn:KKT:NLP}
\end{subequations}
where $\rho=\omega=\tau=0$, $\boldsymbol{\lambda}\xspace \in \mathbb{R}\xspace^m$, $\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R \in \mathbb{R}\xspace^n_+$, and $\textbf{S}\xspace \in \mathbb{R}\xspace^{n \times n}$ symmetric positive definite. These equations can be numerically unsuitable. For example, when $\nabla f(\textbf{x}\xspace)$ and $c(\textbf{x}\xspace)$ are constant in a neighborhood of some $\textbf{x}\xspace$, the system can be locally non-unique in $\textbf{x}\xspace$. When the columns of $\nabla c(\textbf{x}\xspace)$ are linearly dependent, there are multiple solutions for the Lagrange multiplier $\boldsymbol{\lambda}\xspace$ at fixed $\textbf{x}\xspace$. Also, $\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R$ can be non-unique when inequality constraints are active whose gradients are linearly dependent on the columns of $\nabla c(\textbf{x}\xspace)$.
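As an illustration, the regularized system \eqref{eqn:KKT:NLP:1}--\eqref{eqn:KKT:NLP:4} can be evaluated as one stacked residual that a Newton-type iteration drives to zero. The following NumPy sketch is illustrative only and not part of the paper's implementation; the callables grad_f, c and jac_c are hypothetical placeholders, with jac_c returning the $n \times m$ matrix $\nabla c(\textbf{x}\xspace)$.

```python
import numpy as np

def kkt_residual(x, lam, muL, muR, grad_f, c, jac_c, rho, omega, tau, S):
    # Stacked residual of the four regularized KKT equations above.
    # grad_f(x): (n,), c(x): (m,), jac_c(x): (n, m) with columns being
    # the constraint gradients (hypothetical interfaces, for illustration).
    r1 = grad_f(x) + rho * (S @ x) - jac_c(x) @ lam - muL + muR  # stationarity
    r2 = c(x) + omega * lam                                      # penalized equality constraints
    r3 = muL * (1.0 + x) - tau                                   # complementarity, left bound
    r4 = muR * (1.0 - x) - tau                                   # complementarity, right bound
    return np.concatenate([r1, r2, r3, r4])
```

For $\tau = 0$ and an unconstrained stationary point with vanishing multipliers, all four blocks vanish simultaneously.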
For sufficiently small regularization parameters $\rho,\omega,\tau>0$ the system's solution is locally unique, which is desirable when using equations \eqref{eqn:KKT:NLP:1}--\eqref{eqn:KKT:NLP:4} within Newton's iteration to compute local solutions $\textbf{x}\xspace,\boldsymbol{\lambda}\xspace,\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R$. This is because the uniqueness gives regularity of the Jacobian that appears in the linear equation system of Newton's method. Thus, there is second order local convergence of the iterates. Substituting \eqref{eqn:KKT:NLP:2}--\eqref{eqn:KKT:NLP:4} into \eqref{eqn:KKT:NLP:1}, we find that solutions of \eqref{eqn:KKT:NLP} are critical points for the following problem.
\begin{align}
\min_{\textbf{x}\xspace \in \overline{\Omega}} \quad \phi(\textbf{x}\xspace) \label{eqn:MinPhi}
\end{align}
where
\begin{align*}
\phi(\textbf{x}\xspace)&:=f(\textbf{x}\xspace) + \frac{\rho}{2} \cdot \|\textbf{x}\xspace\|_\textbf{S}\xspace^2 + \frac{1}{2 \cdot \omega} \cdot \|c(\textbf{x}\xspace)\|_2^2 - \tau \cdot \textbf{1}\xspace\t\cdot\Big(\, \log(\textbf{1}\xspace+\textbf{x}\xspace) + \log(\textbf{1}\xspace-\textbf{x}\xspace) \,\Big) \\[3pt]
\Omega &:= \big\lbrace \boldsymbol{\xi}\xspace \in \mathbb{R}\xspace^n \ : \ \|\boldsymbol{\xi}\xspace\|_\infty < 1 \big\rbrace\,,
\end{align*}
$\|\textbf{x}\xspace\|_\textbf{S}\xspace := \sqrt{\textbf{x}\xspace\t\cdot\textbf{S}\xspace\cdot\textbf{x}\xspace}$ is the induced norm and $\log(\cdot)$ is the natural logarithm of each component of the argument. The equations \eqref{eqn:KKT:NLP:5}--\eqref{eqn:KKT:NLP:8} are strictly enforced because $\phi$ goes to infinity as $\textbf{x}\xspace$ approaches $\partial\Omega$. Numerically suitable values for the regularization parameters $\rho,\omega,\tau$ lie between $10^{-8}$ and $10^{-5}$.
We call program \eqref{eqn:MinPhi} the \textit{penalty-barrier program}. It is badly scaled for small values of $\rho,\omega,\tau>0$. This is why iterative schemes based on \mbox{(Quasi-)}Newton-type descent directions yield poor progress for it and would result in an impractically large number of iterations \cite[p. 569ff, p. 621]{Boyd}, \cite{SUMT}.
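To make the bad scaling concrete, the penalty-barrier function $\phi$ of \eqref{eqn:MinPhi} can be evaluated directly. The following NumPy sketch is ours, for illustration only; f and c are hypothetical callables for the cost function and the equality constraints.

```python
import numpy as np

def phi(x, f, c, rho, omega, tau, S):
    # Penalty-barrier objective; returns +inf outside the open box Omega.
    if np.max(np.abs(x)) >= 1.0:
        return np.inf
    barrier = np.sum(np.log(1.0 + x) + np.log(1.0 - x))
    return (f(x) + 0.5 * rho * (x @ (S @ x))
            + 0.5 / omega * np.dot(c(x), c(x))
            - tau * barrier)
```

For $\omega,\tau$ around $10^{-7}$, the penalty term dominates at infeasible points while the barrier term blows up near $\partial\Omega$, which is exactly the bad scaling referred to above.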
\paragraph{Outer primal inner primal-dual method}
In this paper we present a novel approach to solving \eqref{eqn:MinPhi}. We still perform a direct minimization of \eqref{eqn:MinPhi} because we believe that this is the most robust approach. Since search directions from Newton steps would yield bad progress, we instead use search directions that are obtained from the solution of subproblems of the following form:
\begin{align}
\min_{\textbf{x}\xspace \in \overline{\Omega}} \quad q(\textbf{x}\xspace) \label{eqn:MinQ}
\end{align}
where
\begin{align*}
q(\textbf{x}\xspace):=&\frac{1}{2}\cdot\textbf{x}\xspace\t\cdot\textbf{Q}\xspace\cdot\textbf{x}\xspace+\textbf{c}\xspace\t\cdot\textbf{x}\xspace + \frac{1}{2 \cdot \omega} \cdot \|\textbf{A}\xspace\cdot\textbf{x}\xspace-\textbf{b}\xspace\|_2^2 \\
&\quad - \tau \cdot \textbf{1}\xspace\t\cdot\Big(\, \log(\textbf{1}\xspace+\textbf{x}\xspace) + \log(\textbf{1}\xspace-\textbf{x}\xspace) \,\Big)\tageq\label{eqn:def:q}
\end{align*}
and $\textbf{Q}\xspace \in \mathbb{R}\xspace^{n \times n}$ is symmetric positive definite, $\textbf{c}\xspace \in \mathbb{R}\xspace^n$, $\textbf{A}\xspace \in \mathbb{R}\xspace^{m \times n}$, $\textbf{b}\xspace \in \mathbb{R}\xspace^m$. In our algorithm $\textbf{Q}\xspace$ will be (approximately)
\begin{align*}
\nabla^2_{\textbf{x}\xspace\bx} \mathcal{L}\xspace(\textbf{x}\xspace,\boldsymbol{\lambda}\xspace) + \rho \cdot \textbf{S}\xspace
\end{align*}
so that at a given iterate $\textbf{x}\xspace,\boldsymbol{\lambda}\xspace$ it holds $\nabla q = \nabla \phi$ and (approximately) $\nabla^2 q = \nabla^2 \phi$.
Problems \eqref{eqn:MinQ} can be solved efficiently using a particular primal-dual path-following method described in \cite{StableIPM}. Avoiding a quadratic approximation of the logarithmic terms yields a better fit of the search direction for minimizing \eqref{eqn:MinPhi}. Since we minimize \eqref{eqn:MinPhi} directly with the search directions obtained from \eqref{eqn:MinQ}, no extra attention needs to be paid to the convergence of feasibility: The equality constraints $c(\textbf{x}\xspace)=\textbf{0}\xspace$ are treated with a quadratic penalty that is well-represented in $q$. We employ a watch-dog technique \cite{watchdog} to achieve large steps along the directions obtained from \eqref{eqn:MinQ} even though the penalty parameter $\omega>0$ is very small. The inequalities $-\textbf{1}\xspace\leq\textbf{x}\xspace\leq\textbf{1}\xspace$ are enforced through barriers in \eqref{eqn:MinPhi}. These are considered in an unmodified way in \eqref{eqn:MinQ}, always keeping $\textbf{x}\xspace \in \Omega$. Altogether this results in a simple and robust algorithm that is easy to implement and analyse.
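The step computation just described can be sketched as follows: given a search direction, first shrink the trial step so that the iterate stays strictly inside $\Omega$, then backtrack on $\phi$. This is an illustrative sketch under assumed Armijo constants and a hypothetical fraction-to-boundary factor $0.999$; the watch-dog globalization used in the actual method is omitted.

```python
import numpy as np

def armijo_step(x, d, phi, grad_phi, sigma=1e-4, beta=0.5):
    # Largest step t with |x + t*d|_inf < 1 (fraction-to-boundary rule),
    # followed by Armijo backtracking on the merit function phi.
    t = 1.0
    pos, neg = d > 0, d < 0
    if np.any(pos):
        t = min(t, 0.999 * np.min((1.0 - x[pos]) / d[pos]))
    if np.any(neg):
        t = min(t, 0.999 * np.min((-1.0 - x[neg]) / d[neg]))
    phi0, slope = phi(x), float(grad_phi(x) @ d)
    while phi(x + t * d) > phi0 + sigma * t * slope and t > 1e-16:
        t *= beta
    return x + t * d
```

Because the initial trial step already respects the box, all function evaluations of $\phi$ occur at interior points, where the logarithms are finite.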
\subsection{Literature review}
Algorithms for NLP can be divided into three distinct classes, confer to \cite{NumOpt}: active set methods (ASM), successive quadratic programming (SQP), and interior-point methods (IPM).
\largeparbreak
ASM are based on iteratively improving a guess of the active inequality constraints in \eqref{eqn:NLP}. The guess is stored as a set $\mathcal{A}\xspace$ of indices, called \textit{active set}. Using a guess for $\mathcal{A}\xspace$, an equality-constrained non-linear programming problem is formed and solved for a local minimizer $\textbf{x}\xspace^\star_\mathcal{A}\xspace$. At $\textbf{x}\xspace^\star_\mathcal{A}\xspace$ the Lagrange multipliers provide information on the optimality of $\mathcal{A}\xspace$. If $\mathcal{A}\xspace$ is non-optimal, then a new estimate $\mathcal{A}\xspace$ for the active set is formed and again $\textbf{x}\xspace^\star_\mathcal{A}\xspace$ is computed. This procedure is repeated until $\mathcal{A}\xspace$ is correct, which implies $\textbf{x}\xspace^\star_\mathcal{A}\xspace \equiv \textbf{x}\xspace^\star$ is a local minimizer of \eqref{eqn:NLP}. An introduction to active set methods can be found in \cite{Flet87}.
ASM are numerically robust because no penalty or barrier must be introduced to treat the constraints. As a further advantage, ASM provide additional information on the set $\mathcal{A}\xspace$ of active constraints at the local minimizer. The problem with ASM, however, is that there is no polynomially efficient method for determining the optimal active set $\mathcal{A}\xspace$. Problems are known for which ASM would try all possible active sets \cite{klee1970good} until in the very last attempt they find the correct one. This results in a worst-case time complexity that grows exponentially with $n$ \cite{klee1970good}.
\largeparbreak
SQP methods improve the current iterate by moving in a direction obtained from solving a convex quadratic sub-program. The step-size along this direction is determined by minimizing a merit-function or using a filter. For a general overview on SQP methods consult \cite{Gill:2005:SSA:1055334.1055404}.
Special care must be taken to modify the sub-program accordingly such that it always admits a feasible solution. Typically this is achieved through $\ell_1$-penalties. This is sometimes referred to as \textit{elastic mode} \cite{SQPinconsistent}. The $\ell_1$-penalties in the subproblem must be sufficiently large to ensure progress towards feasibility. On the other hand, too large values for the penalties lead to a bad scaling of the quadratic subproblem, confer to \cite{Han1977}.
Special care must be further taken to make sure that ---despite the modification with the elastic mode--- the search direction obtained from the subproblem is still a descent direction for the line-search. As one possible way to achieve this, the penalty parameters in the $\ell_1$-merit-function must be chosen with respect to those in the subproblem, confer to \cite{Schittkowski1982}. If the penalty terms in the merit-function are too large then it is likely that the line-search admits small steps only, confer to \cite{watchdog}. This can be resolved, e.g., by using second-order corrections \cite{TrustRegionMethods}, which however may require the computationally prohibitive task of solving a convex quadratic program at several trial points.
The sub-program that must be solved in each iteration is a convex quadratic program (CQP). CQP can be solved using either active set methods or interior-point methods. Active set methods can have exponential time complexity in the worst case but can be fast in practice. In contrast to that, there are interior-point methods for CQP that are proven to be polynomially efficient in theory \cite{Kar84}. In practice they converge very fast. The field is strongly influenced by Mehrotra's predictor-corrector method \cite{Mehrotra}, which is a primal-dual interior-point method that can be used for solving CQP in a very efficient way.
\largeparbreak
IPM solve \eqref{eqn:NLP} by considering a barrier function as in \eqref{eqn:MinPhi}. The inequality constraints are removed and instead the cost-function is augmented with so-called barrier terms. These are terms that go to infinity when $\textbf{x}\xspace$ approaches the border of $\Omega$. The barrier-augmented cost-function we call $f_\tau$. For example, $f_\tau$ could be
\begin{align*}
f_\tau(\textbf{x}\xspace) = f(\textbf{x}\xspace)- \tau \cdot \textbf{1}\xspace\t\cdot\Big(\, \log(\textbf{1}\xspace+\textbf{x}\xspace) + \log(\textbf{1}\xspace-\textbf{x}\xspace) \,\Big)\,.
\end{align*}
For small values $\tau>0$, e.g. $\tau=10^{-10}$, the barrier-term mildly influences the level-sets of $f_\tau$ in the interior of $\Omega$. All minimizers of $f_\tau$ are interior and thus satisfy the inequality constraints in a strict way.
To make sure that the unconstrained minimizers of $f_\tau$ are accurate approximations to the constrained minimizers of $f$ it is necessary to choose $\tau>0$ very small. However, for small $\tau$ the barrier term leads to a bad scaling of the barrier-augmented cost function. This results in bad progress when using descent directions obtained from \mbox{Quasi-}Newton-type methods, which however are used in almost every IPM, compare e.g. to \cite{IPOPT,LOQO,HOPDM}. This is why practical algorithms decrease the size of $\tau$ iteratively within the iterative computation of $\textbf{x}\xspace$. Thus, initially $\tau$ is large and yields good progress for the iterates of $\textbf{x}\xspace$. As $\textbf{x}\xspace$ approaches the minimizer, $\tau$ is slowly reduced and $\textbf{x}\xspace$ needs only be mildly refined. For an introduction to interior-point methods we refer to \cite{Wright,IPM25ylater}.
For many classes within the domain of convex programming there is strong evidence on the computational efficiency of IPM. Prominent examples are primal methods for self-concordant functions \cite{nesterov1994interior} and primal-dual methods for linear programming \cite{Wright}. However, for general NLP there is no result available on the complexity of the iteration count of IPM; compare to \cite{Forsgren02interiormethods}.
A serious disadvantage of IPM is their difficulty in making good use of initial guesses \cite{YildirimW02}. This phenomenon can be explained by the fact that for the initially large values of $\tau$ the function $f_\tau$ has little in common with $f$. Thus, a potentially good initial guess $\textbf{x}\xspace_0$ of the local minimizer is driven away in early iterations of IPM towards a minimizer of $f_\tau$ for this initially large value of $\tau$. Eventually, $\tau$ decreases and the iterates $\textbf{x}\xspace$ move back towards the local minimizer (to which the initial guess may have been close, or to another one).
\largeparbreak
In contrast to interior-point methods we use a fixed value of $\tau$ within the minimization of \eqref{eqn:MinPhi}. Thus, our method does not move away from good initial guesses if they are close to local minimizers of $\phi$. A strategy with decreasing values for $\tau$ is not required in our method because even for small values like $\tau=10^{-8}$ the search directions obtained from solving \eqref{eqn:MinQ} allow fast progress within the line-search on $\phi$. This holds because the value of $\tau$ does not influence the accuracy in which $q$ approximates $\phi$.
\subsection{Structure}
In Section~2 we present the numerical method. In Section~3 we provide proofs for the convergence: We prove that the local minimizers of \eqref{eqn:MinPhi} converge to the constrained local minimizers of \eqref{eqn:NLP}. We prove global convergence of our numerical method and we prove second order local convergence. In Section~4 we discuss details of our implementation and practical enhancements. Section~5 presents numerical experiments against \textsc{Ipopt}\xspace \cite{IPOPT} for large sparse non-linear programs that arise from the direct discretization of optimal control problems. Finally, we draw a summarizing conclusion.
\section{Primal-primal-dual interior-point method}
Our method is an iterative method that computes a sequence $\lbrace \textbf{x}\xspace_k \rbrace \subset \Omega$ of interior-points that converge to stationary points of \eqref{eqn:MinPhi}. $\rho,\omega,\tau>0$ and $\textbf{S}\xspace\in\mathbb{R}\xspace^{n \times n}$ symmetric positive definite are considered to be provided by the user. The values $\rho=\omega=\tau=10^{-7}$ and $\textbf{S}\xspace = \textbf{I}\xspace_{n \times n}$ are often suitable.
The method goes as follows. Given $\textbf{x}\xspace_k$, either from a former iteration or an initial guess when $k=0$, we compute
\begin{align*}
\boldsymbol{\lambda}\xspace_k := \frac{-1}{\omega}\cdot c(\textbf{x}\xspace_k)\,.
\end{align*}
We notice
\begin{align*}
\nabla \phi(\textbf{x}\xspace_k)&= \rho \cdot \textbf{S}\xspace \cdot \textbf{x}\xspace_k + \nabla_\textbf{x}\xspace \mathcal{L}\xspace(\textbf{x}\xspace_k,\boldsymbol{\lambda}\xspace_k) - \frac{\tau}{\textbf{1}\xspace+\textbf{x}\xspace_k} + \frac{\tau}{\textbf{1}\xspace-\textbf{x}\xspace_k}\\
\nabla^2 \phi(\textbf{x}\xspace_k)&= \rho \cdot \textbf{S}\xspace + \textbf{M}\xspace_k + {\tau}\cdot\Big( \textsl{diag}\xspace(\textbf{1}\xspace+\textbf{x}\xspace_k)^{-2} + \textsl{diag}\xspace(\textbf{1}\xspace-\textbf{x}\xspace_k)^{-2}\Big)
\end{align*}
where
\begin{subequations}
\begin{align}
\textbf{M}\xspace_k &:= \textbf{H}\xspace_k + \frac{1}{\omega}\cdot\nabla c(\textbf{x}\xspace_k) \cdot \nabla c(\textbf{x}\xspace_k)\t\\
\textbf{H}\xspace_k &:= \nabla^2_{\textbf{x}\xspace\bx} \mathcal{L}\xspace(\textbf{x}\xspace_k,\boldsymbol{\lambda}\xspace_k)\,.
\end{align}\label{eqn:def:MH}
\end{subequations}
If
\begin{align}
\nabla_{\textbf{x}\xspace\bx}^2\mathcal{L}\xspace(\textbf{x}\xspace_k,\boldsymbol{\lambda}\xspace_k)
\end{align}
is symmetric positive semi-definite then $\textbf{H}\xspace_k$ and thus $\textbf{M}\xspace_k$ are positive semi-definite. If
\begin{align}
\big(\nabla c(\textbf{x}\xspace_k)^\perp\big)\t \cdot \nabla_{\textbf{x}\xspace\bx}^2\mathcal{L}\xspace(\textbf{x}\xspace_k,\boldsymbol{\lambda}\xspace_k) \cdot \big(\nabla c(\textbf{x}\xspace_k)^\perp\big)\label{eqn:projectedhessian}
\end{align}
is positive semi-definite ---\,which is a necessary condition at least for interior minimizers $\textbf{x}\xspace^\star$ of \eqref{eqn:NLP}\,--- then there exist suitably small values $\omega>0$ such that $\textbf{M}\xspace_k$ is positive semi-definite. Whenever $\textbf{M}\xspace_k$ is positive semi-definite it follows in turn that $\nabla^2 \phi(\textbf{x}\xspace_k)$ is positive definite.
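This effect of $\omega$ can be checked numerically. The following sketch (with hypothetical $2\times 2$ data of our own choosing, not taken from the text) verifies that negative curvature of $\textbf{H}\xspace_k$ along $\nabla c(\textbf{x}\xspace_k)$ is compensated by the rank-one term $\frac{1}{\omega}\cdot\nabla c\cdot\nabla c\t$ once $\omega$ is small enough:

```python
import numpy as np

# Hypothetical data (ours, for illustration): H has negative curvature
# along grad_c, while the Hessian projected onto the orthogonal
# complement of grad_c is positive semi-definite.
H = np.diag([-1.0, 1.0])
grad_c = np.array([[1.0], [0.0]])            # n x m with m = 1

def M(omega):
    # M = H + (1/omega) * grad_c * grad_c^T, cf. the definition of M_k
    return H + (1.0/omega)*(grad_c @ grad_c.T)

perp = np.array([[0.0], [1.0]])              # basis of range(grad_c)^perp
projected = perp.T @ H @ perp                # projected Hessian, here [[1.0]]

eig_small_omega = np.linalg.eigvalsh(M(1e-3)).min()   # PSD for small omega
eig_large_omega = np.linalg.eigvalsh(M(10.0)).min()   # indefinite otherwise
```

For $\omega=10^{-3}$ the smallest eigenvalue of $\textbf{M}\xspace$ is non-negative, while for $\omega=10$ the negative curvature of $\textbf{H}\xspace$ survives.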
We form a \textit{penalty-barrier convex quadratic program} \eqref{eqn:MinQ} with the following values
\begin{subequations}
\begin{align}
\textbf{Q}\xspace_k &:= \widetilde{\textbf{H}}\xspace_k+\rho\cdot\textbf{S}\xspace\\
\textbf{A}\xspace_k &:= \nabla c(\textbf{x}\xspace_k)\t\\
\textbf{c}\xspace_k &:= \nabla_\textbf{x}\xspace f(\textbf{x}\xspace_k) \ - \widetilde{\textbf{H}}\xspace_k \cdot \textbf{x}\xspace_k\\
\textbf{b}\xspace_k &:= - c(\textbf{x}\xspace_k) \ + \textbf{A}\xspace_k \cdot \textbf{x}\xspace_k
\end{align}\label{eqn:def:cqp}
\end{subequations}
where $\widetilde{\textbf{H}}\xspace_k$ is an approximation to $\textbf{H}\xspace_k$ such that
$$\widetilde{\textbf{M}}\xspace_k := \widetilde{\textbf{H}}\xspace_k + \frac{1}{\omega}\cdot\nabla c(\textbf{x}\xspace_k) \cdot \nabla c(\textbf{x}\xspace_k)\t$$
is symmetric positive semi-definite. By construction, the resulting function $q$ in \eqref{eqn:def:q} satisfies
\begin{align*}
\nabla q(\textbf{x}\xspace_k) &= \nabla \phi(\textbf{x}\xspace_k)
\end{align*}
and $q$ is strictly convex due to positive semi-definiteness of $\widetilde{\textbf{M}}\xspace_k$ together with positive definiteness of $\rho\cdot\textbf{S}\xspace$. If in addition one of the above-mentioned conditions holds then the choice $\widetilde{\textbf{H}}\xspace_k = \textbf{H}\xspace_k$ is suitable such that $\widetilde{\textbf{M}}\xspace_k=\textbf{M}\xspace_k$ is symmetric positive semi-definite. It then follows
\begin{align*}
\nabla^2 q(\textbf{x}\xspace_k) &= \nabla^2 \phi(\textbf{x}\xspace_k)\,.
\end{align*}
Now that $q$ is fully defined, we solve the penalty-barrier convex quadratic program \eqref{eqn:MinQ}. In \cite{StableIPM} we describe a short-step primal-dual path-following method that can solve \eqref{eqn:MinQ} in weakly polynomial time complexity. The method described in the reference is furthermore numerically stable if $\varepsilon_{\textsf{mach}}\xspace$ is chosen sufficiently small with respect to a weak constant that depends on the logarithms of the norms of $\textbf{Q}\xspace_k,\textbf{A}\xspace_k,\textbf{c}\xspace_k,\textbf{b}\xspace_k$ and the logarithm of $\omega$. In Section~4 of this paper we provide a long-step variant of the referred method that is fast and reliable in practice, is suitable also for large sparse problems, and can solve \eqref{eqn:MinQ} to high numerical accuracy.
Once the solution of \eqref{eqn:MinQ} is obtained, we write it into a vector $\hat{\textbf{x}}\xspace_k$. We define the step-direction $\textbf{v}\xspace_k := \hat{\textbf{x}}\xspace_k - \textbf{x}\xspace_k$. Finally, we compute a new iterate
\begin{align*}
\textbf{x}\xspace_{k+1} := \textbf{x}\xspace_k + \alpha_k \cdot \textbf{v}\xspace_k
\end{align*}
where $\alpha_k \in\mathbb{R}\xspace^+$ is chosen to minimize $\phi$ along the line $\textbf{x}\xspace(\alpha) := \textbf{x}\xspace_k + \alpha \cdot \textbf{v}\xspace_k \in \Omega$. $\alpha$ is chosen, e.g., by using a back-tracking line-search with Armijo-rule, cf. \cite{NumOpt}. Algorithm~\ref{algo:Solver} encapsulates the algorithmic steps.
\begin{algorithm}
\caption{PPD-IPM, pure version}
\label{algo:Solver}
\begin{algorithmic}[1]
\Procedure{PPDIPM}{\,$\textbf{x}\xspace_0,\rho,\omega,\tau,\textbf{S}\xspace,{\textsf{tol}}\xspace$\,}
\State $k:=0$
\While{$ \|\nabla \phi(\textbf{x}\xspace_k)\|_2> {\textsf{tol}}\xspace $}
\State $\boldsymbol{\lambda}\xspace_k := \frac{-1}{\omega}\cdot c(\textbf{x}\xspace_k)$
\State Choose $\widetilde{\textbf{H}}\xspace_k \approx \textbf{H}\xspace_k$ such that $\widetilde{\textbf{M}}\xspace_k$ is positive semi-definite.
\State Compute $\textbf{Q}\xspace_k,\textbf{c}\xspace_k,\textbf{A}\xspace_k,\textbf{b}\xspace_k$ from \eqref{eqn:def:cqp}, defining $q$.
\State Compute $\hat{\textbf{x}}\xspace_k$, the minimizer of \eqref{eqn:MinQ}.
\State $\textbf{v}\xspace_k := \hat{\textbf{x}}\xspace_k - \textbf{x}\xspace_k$
\State $\alpha_k := \operatornamewithlimits{argmin}_{\alpha \in \mathbb{R}\xspace^+}\big\lbrace \phi(\textbf{x}\xspace_k + \alpha \cdot \textbf{v}\xspace_k) \big\rbrace$ \label{algo:Solver:lineAlpha}
\State $\textbf{x}\xspace_{k+1} := \textbf{x}\xspace_k + \alpha_k \cdot \textbf{v}\xspace_k$
\State $k := k+1$\label{algo:Solver:line:End}
\EndWhile
\State \Return $\textbf{x}\xspace_k$
\EndProcedure
\end{algorithmic}
\end{algorithm}
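To fix ideas, the pure method can be sketched in a few lines of Python. The toy instance, all parameter values, and the use of \texttt{L-BFGS-B} as a stand-in for the dedicated subproblem solver of Section~4.4 are our own illustrative choices and not part of the algorithm specification:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative toy instance (our own choice):
#   min  f(x) = (x1-0.3)^2 + (x2+0.2)^2
#   s.t. c(x) = x1 - x2 - 0.1 = 0,   x in the open box (-1,1)^2.
# The exact constrained minimizer is (0.1, 0.0).
def f(x):  return (x[0] - 0.3)**2 + (x[1] + 0.2)**2
def gf(x): return np.array([2.0*(x[0] - 0.3), 2.0*(x[1] + 0.2)])
def c(x):  return np.array([x[0] - x[1] - 0.1])
def Jc(x): return np.array([[1.0, -1.0]])
Hf = 2.0*np.eye(2)                      # exact Hessian of f, so H_tilde = H

rho, omega, tau = 1e-7, 1e-7, 1e-7
S = np.eye(2)

def barrier(x):
    return -tau*np.sum(np.log(1.0 + x) + np.log(1.0 - x))

def phi(x):                             # penalty-barrier merit function
    return 0.5*rho*(x @ S @ x) + f(x) + (c(x) @ c(x))/(2.0*omega) + barrier(x)

def grad_phi(x):                        # lambda = -c(x)/omega substituted in
    return (rho*(S @ x) + gf(x) + (Jc(x).T @ c(x))/omega
            - tau/(1.0 + x) + tau/(1.0 - x))

x = np.zeros(2)
for k in range(10):
    if np.linalg.norm(grad_phi(x)) <= 1e-6:
        break
    # Data (Q_k, A_k, c_k, b_k) of the convex subproblem q
    Q, A = Hf + rho*S, Jc(x)
    cvec, b = gf(x) - Hf @ x, -c(x) + Jc(x) @ x
    def q(xi):
        return (0.5*(xi @ Q @ xi) + cvec @ xi
                + np.sum((A @ xi - b)**2)/(2.0*omega) + barrier(xi))
    def gq(xi):
        return (Q @ xi + cvec + (A.T @ (A @ xi - b))/omega
                - tau/(1.0 + xi) + tau/(1.0 - xi))
    # Stand-in for the interior-point subproblem solver of Section 4.4
    xhat = minimize(q, x, jac=gq, method='L-BFGS-B',
                    bounds=[(-1 + 1e-9, 1 - 1e-9)]*2,
                    options={'ftol': 1e-15, 'gtol': 1e-10}).x
    v = xhat - x
    if np.linalg.norm(v) < 1e-12:
        break
    alpha = 1.0                         # safeguarded Armijo back-tracking
    for _ in range(60):
        if phi(x + alpha*v) <= phi(x) + 0.1*alpha*(grad_phi(x) @ v):
            break
        alpha *= 0.8
    x = x + alpha*v
```

After a few iterations $x$ is close to $(0.1,\,0)$; since $f$ is quadratic and $c$ is linear here, the first subproblem already matches $\phi$ up to a constant and the full step $\alpha=1$ is accepted.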
\section{Proof of convergence}
We show that the iterates $\textbf{x}\xspace_k$ of Algorithm~\ref{algo:Solver} converge to stationary points of \eqref{eqn:MinPhi}. In the first subsection we show that there is convergence to a stationary point of $\phi$ from every initial guess $\textbf{x}\xspace_0$. In the second subsection we show that under suitable conditions there is second-order convergence of $\|\nabla\phi(\textbf{x}\xspace_k)\|_2$ to zero for $k \in \mathbb{N}\xspace$ greater than some finite number.
\subsection{Global convergence}
We start with some technical results.
\begin{lem}[Boundedness]\label{lem:BoundednessOmega0}
Let $\textbf{x}\xspace \in \Omega$. We define
\begin{align*}
\overline{\Omega}_0(\textbf{x}\xspace) := \lbrace \boldsymbol{\xi}\xspace \in \Omega \ : \ \phi(\boldsymbol{\xi}\xspace) \leq \phi(\textbf{x}\xspace)\, \rbrace\,.
\end{align*}
The set $\overline{\Omega}_0(\textbf{x}\xspace)$ is always bounded and closed. Further, for each $\overline{\Omega}_0(\textbf{x}\xspace)$ there is a constant $C_\phi\in \mathbb{R}\xspace$ such that
\begin{align*}
|\phi(\tilde{\textbf{x}}\xspace)|,\ \|\nabla\phi(\tilde{\textbf{x}}\xspace)\|_2,\ \|\nabla^2 \phi(\tilde{\textbf{x}}\xspace)\|_2 \leq C_\phi \quad \forall \tilde{\textbf{x}}\xspace \in \overline{\Omega}_0(\textbf{x}\xspace)\,.
\end{align*}
\end{lem}
\begin{itshape}
\noindent
\underline{Proof:} left to the reader. q.e.d.
\end{itshape}
\begin{lem}[Sufficient descent]\label{lem:SufficientDescent}
Let $\textbf{x}\xspace \in \Omega$, $\textbf{v}\xspace \in \mathbb{R}\xspace^n \setminus \lbrace \textbf{0}\xspace \rbrace$, $0 < \vartheta < \pi/2$. If the angular condition
\begin{align*}
\angle(\,\textbf{v}\xspace\,,\,-\nabla \phi(\textbf{x}\xspace)\,)\leq \frac{\pi}{2} - \vartheta
\end{align*}
is satisfied then $\exists \ \theta > 0$, only depending on $\vartheta$ and $\|\nabla\phi(\textbf{x}\xspace)\|_2$, such that the following holds:
\begin{align*}
\min_{\alpha \in \mathbb{R}\xspace^+} \Big\lbrace\, \phi(\textbf{x}\xspace + \alpha \cdot \textbf{v}\xspace) \,\Big\rbrace \leq \phi(\textbf{x}\xspace)-\theta
\end{align*}
\end{lem}
\begin{itshape}
\noindent
\underline{Proof:} Confer to \cite[Sections~3.1--3.2]{NumOpt}.
\end{itshape}
\begin{lem}[Sufficient descent direction]\label{lem:SufficientDescentDirection}
Let $\rho>0$, $\lambda_\text{min}(\textbf{S}\xspace)>0$, $\textbf{x}\xspace \in \Omega$. Then the following holds:
\begin{align*}
\forall \eta > 0 \quad \exists \vartheta > 0 \quad : \quad \|\nabla \phi(\textbf{x}\xspace)\|_2 \geq \eta \ \Rightarrow \ \angle(\,\hat{\textbf{x}}\xspace-\textbf{x}\xspace\,,\,-\nabla\phi(\textbf{x}\xspace)\,) \leq \frac{\pi}{2}-\vartheta
\end{align*}
\end{lem}
\begin{itshape}
\noindent
\underline{Proof:} Consider $\hat{\textbf{x}}\xspace$, computed as local minimizer of the local approximation function $q(\cdot)$ of $\phi(\cdot)$ around $\textbf{x}\xspace$, defined in \eqref{eqn:MinQ}. Our proof works by showing that $\hat{\textbf{x}}\xspace \in \mathcal{B}\xspace_{\text{large}} \setminus \mathcal{B}\xspace_{\text{small}}$ holds, where $\mathcal{B}\xspace_{\text{large}}\,,\ \mathcal{B}\xspace_{\text{small}}$ are two balls. The geometric relation of the two balls then enforces the claimed angular condition.
We start with $\mathcal{B}\xspace_{\text{small}}$. Moving from $\textbf{x}\xspace$ along the steepest-descent direction we find
\begin{align*}
\min_{\tilde{\textbf{x}}\xspace \in \Omega}\big\lbrace\,q(\tilde{\textbf{x}}\xspace)\,\big\rbrace \leq q\big(\textbf{x}\xspace - \alpha \cdot \overbrace{\nabla q(\textbf{x}\xspace)}^{\equiv \nabla \phi(\textbf{x}\xspace)}\big) \leq q(\textbf{x}\xspace) - \alpha \cdot \|\nabla \phi(\textbf{x}\xspace)\|_2^2 + \frac{\alpha^2}{2} \cdot \|\nabla \phi(\textbf{x}\xspace)\|_2^2 \cdot C_{\phi}\,.
\end{align*}
Inserting $\alpha = \frac{1}{C_{\phi}}$, we get
\begin{align*}
\min_{\tilde{\textbf{x}}\xspace \in \Omega}\big\lbrace\,q(\tilde{\textbf{x}}\xspace)\,\big\rbrace \leq q(\textbf{x}\xspace) - \underbrace{\frac{\|\nabla\phi(\textbf{x}\xspace)\|_2^2}{2 \cdot C_{\phi}}}_{=:\textsf{gap}}\,.
\end{align*}
Since $\hat{\textbf{x}}\xspace = \operatornamewithlimits{argmin}_{\tilde{\textbf{x}}\xspace \in \Omega}\lbrace q(\tilde{\textbf{x}}\xspace)\rbrace$, and since the negative slope of $q$ below $q(\textbf{x}\xspace)$ is bounded by $\|\nabla\phi(\textbf{x}\xspace)\|_2$, cf. Figure~\ref{fig:proofdescentbsmall} left, we find
\begin{align*}
\|\hat{\textbf{x}}\xspace-\textbf{x}\xspace\|_2 \geq \sigma := \frac{\textsf{gap}}{\|\nabla\phi(\textbf{x}\xspace)\|_2} = \frac{\|\nabla\phi(\textbf{x}\xspace)\|_2}{2 \cdot C_{\phi}}\,.
\end{align*}
We define $\mathcal{B}\xspace_{\text{small}}:= \lbrace \,\boldsymbol{\xi}\xspace \in \mathbb{R}\xspace^n\ : \ \|\boldsymbol{\xi}\xspace-\textbf{x}\xspace\|_2<\sigma\,\rbrace$.
Consider
\begin{align*}
\psi(\tilde{\textbf{x}}\xspace) = \phi(\textbf{x}\xspace) + \nabla\phi(\textbf{x}\xspace)\t \cdot (\tilde{\textbf{x}}\xspace-\textbf{x}\xspace) + \frac{\rho\cdot\lambda_\text{min}(\textbf{S}\xspace)}{2}\cdot \|\tilde{\textbf{x}}\xspace-\textbf{x}\xspace\|_2^2\,
\end{align*}
cf. Figure~\ref{fig:proofdescentbsmall} right. We define the minimizer
\begin{align*}
\textbf{x}\xspace_c := \textbf{x}\xspace - \frac{1}{\rho \cdot \lambda_\text{min}(\textbf{S}\xspace)} \cdot \nabla\phi(\textbf{x}\xspace)
\end{align*}
of $\psi$ and $\mathcal{B}\xspace_{\text{large}} := \lbrace \boldsymbol{\xi}\xspace \in \mathbb{R}\xspace^n \ : \ \|\boldsymbol{\xi}\xspace - \textbf{x}\xspace_c\|_2 \leq \|\textbf{x}\xspace-\textbf{x}\xspace_c\|_2 \rbrace$. Since $\psi(\cdot)$ is a lower bound on $q(\cdot)$ it must hold
$\hat{\textbf{x}}\xspace \in \mathcal{B}\xspace_{\text{large}} \setminus \mathcal{B}\xspace_{\text{small}}$. Now consider Figure~\ref{fig:blargebsmall}, from which we find that the claimed angular condition must hold. q.e.d.
\end{itshape}
\begin{figure}
\centering
\includegraphics[width=0.999\linewidth]{Images_PDF/ProofDescentBsmall}
\caption{Left: Plot of $q$ along the line $\textbf{x}\xspace \rightarrow \hat{\textbf{x}}\xspace$. $q$ is convex and the slope from $\textbf{x}\xspace$ to $\hat{\textbf{x}}\xspace$ is bounded below by $-\|\nabla q(\textbf{x}\xspace)\|_2$, thus $\sigma$ bounds the distance $\|\hat{\textbf{x}}\xspace-\textbf{x}\xspace\|_2$ from below. Right: The red curve is the quadratic function $\psi$ with isotropic second derivative $\rho \cdot \lambda_\text{min}(\textbf{S}\xspace)$, so $q$ is bounded below by $\psi$. We can bound the distance of $\hat{\textbf{x}}\xspace$ to $\textbf{x}\xspace_c$.}
\label{fig:proofdescentbsmall}
\end{figure}
\begin{lem}[Limit point]\label{lem:LimitPoint}
Choose the initial guess $\textbf{x}\xspace_0 \in \Omega$ and consider the iterates $\lbrace \textbf{x}\xspace_k \rbrace_{k \in \mathbb{N}\xspace_0} \subset \overline{\Omega}_0(\textbf{x}\xspace_0)$ of Algorithm~\ref{algo:Solver}. Define $\textbf{x}\xspace_\infty := \lim\limits_{k \rightarrow \infty} \textbf{x}\xspace_k$. Then $\forall \varepsilon>0$ $\exists N \in \mathbb{N}\xspace$ such that the following holds $\forall k \geq N$:
\begin{align*}
|\phantom\nabla\phi(\textbf{x}\xspace_\infty) - \phantom\nabla\phi(\textbf{x}\xspace_k) |\phantom{\|_2} \leq & \varepsilon\\
\|\nabla\phi(\textbf{x}\xspace_\infty) - \nabla\phi(\textbf{x}\xspace_k) \phantom|\|_2 \leq & \varepsilon
\end{align*}
\end{lem}
\begin{itshape}
\noindent\underline{Proof:} follows from Lipschitz continuity and Lemma~\ref{lem:BoundednessOmega0}. q.e.d.
\end{itshape}
Now we have everything in hand for the final result. The following theorem proves the global convergence of Algorithm~\ref{algo:Solver} to a stationary point of \eqref{eqn:MinPhi}.
\begin{thm}[Stationary limit]\label{lem:StationaryLimit}
Consider the properties from Lemma~\ref{lem:LimitPoint}. Then:
\begin{align*}
\|\nabla\phi(\textbf{x}\xspace_\infty)\|_2 = 0
\end{align*}
\end{thm}
\begin{itshape}
\noindent\underline{Proof:} (by contradiction). Let $\textbf{d}\xspace := \nabla\phi(\textbf{x}\xspace_\infty)$, $\eta := 0.5 \cdot \|\textbf{d}\xspace\|_2$.
\begin{align}
\text{We assume }\eta > 0\,.\label{eqn:lem:StationaryLimit:WrongAssumption}
\end{align}
We choose a strictly positive value $\varepsilon < \eta$ for Lemma~\ref{lem:LimitPoint} and get $N \in \mathbb{N}\xspace$. Consider an arbitrary integer $k \geq N$. Notice that from Lemma~\ref{lem:LimitPoint} follows
\begin{align*}
\phi(\textbf{x}\xspace_\infty)-\varepsilon \leq \phi(\textbf{x}\xspace_{k+1})\,.\tageq\label{eqn:StationaryLimitContra1}
\end{align*}
Since $\|\nabla\phi(\textbf{x}\xspace_k)-\textbf{d}\xspace\|_2\leq \eta$ holds according to Lemma~\ref{lem:LimitPoint}, it follows $\|\nabla\phi(\textbf{x}\xspace_k)\|_2\geq \eta$. We apply Lemma~\ref{lem:SufficientDescentDirection} to obtain $\vartheta>0$. Notice that $\vartheta,\eta$ are independent of $k$.
Define $\textbf{v}\xspace_k := \hat{\textbf{x}}\xspace_k - \textbf{x}\xspace_k$. Due to Lemma~\ref{lem:SufficientDescentDirection}, $\textbf{x}\xspace_k$, $\textbf{v}\xspace_k$ and $\vartheta$ together satisfy the angular condition of Lemma~\ref{lem:SufficientDescent}, which then says that $\textbf{x}\xspace_{k+1}=\textbf{x}\xspace_k + \alpha_k \cdot \textbf{v}\xspace_k$ satisfies
\begin{align*}
\phi(\textbf{x}\xspace_{k+1}) \leq \phi(\textbf{x}\xspace_k) - \theta\tageq\label{eqn:StationaryLimitContra2}
\end{align*}
where $\theta$ depends only on $\vartheta,\eta$, which in turn do not depend on $k,\varepsilon$. Thus, we can choose $\varepsilon>0$ sufficiently small such that \eqref{eqn:StationaryLimitContra1} and \eqref{eqn:StationaryLimitContra2} contradict each other. In consequence, assumption \eqref{eqn:lem:StationaryLimit:WrongAssumption} must be wrong. q.e.d.
\end{itshape}
We admit that due to the small value of $\omega$ it can happen that $C_\phi$ becomes very large. This is why in Section~4.2 we include a practical enhancement that yields global convergence in a satisfactory amount of iterations regardless of the value that is chosen for $\omega$.
\begin{figure}
\centering
\includegraphics[width=0.99999\linewidth]{Images_PDF/BlargeBsmall}
\caption{Plot of $\mathcal{B}\xspace_{\text{large}}$ and $\mathcal{B}\xspace_{\text{small}}$. They only depend on $\nabla\phi(\textbf{x}\xspace)$ and $C_{\phi}$, where the latter only depends on $\overline{\Omega}_0(\textbf{x}\xspace_0)$ for some former initial guess from which $\textbf{x}\xspace$ may have propagated. $\hat{\textbf{x}}\xspace$ lives in the gray region, implying that $\hat{\textbf{x}}\xspace-\textbf{x}\xspace$ and $-\nabla\phi(\textbf{x}\xspace)$ enclose an angle of strictly less than $90$ degrees.}
\label{fig:blargebsmall}
\end{figure}
\subsection{Locally second order convergence}
From the Taylor series of the functions $\phi$ and $q$ at $\textbf{x}\xspace_k$ we find
\begin{align*}
\nabla \phi(\textbf{x}\xspace_k + \alpha_k \cdot \textbf{v}\xspace_k) -\nabla q(\textbf{x}\xspace_k + \alpha_k \cdot \textbf{v}\xspace_k) = \Big(\,\textbf{H}\xspace_k - \widetilde{\textbf{H}}\xspace_k\,\Big) \cdot \alpha_k \cdot \textbf{v}\xspace_k + \mathcal{O}\xspace(\|\alpha_k \cdot \textbf{v}\xspace_k\|_2^2)\,.
\end{align*}
In the beginning of Section~2 we discussed sufficient conditions under which the choice $\widetilde{\textbf{H}}\xspace_k = \textbf{H}\xspace_k$ is suitable. The values for $\hat{\textbf{x}}\xspace_k = \textbf{x}\xspace_k + \textbf{v}\xspace_k$ then satisfy
\begin{align*}
\nabla q(\hat{\textbf{x}}\xspace_k) = \textbf{0}\xspace
\end{align*}
because $\hat{\textbf{x}}\xspace_k$ is the unique minimizer of $q$. Thus, if $\alpha_k=1$ and $\widetilde{\textbf{H}}\xspace_k=\textbf{H}\xspace_k$ hold then
\begin{align*}
\|\nabla \phi(\textbf{x}\xspace_k + \alpha_k \cdot \textbf{v}\xspace_k)\|_2 = \mathcal{O}\xspace(\|\textbf{v}\xspace_k\|_2^2)\,.
\end{align*}
In an open neighborhood of a local minimizer $\textbf{x}\xspace^\star$ of $\phi$ it holds $\nabla \phi(\textbf{x}\xspace^\star)=\textbf{0}\xspace$ and $\nabla^2 \phi(\textbf{x}\xspace^\star)>0$. Thus $\phi$ can be approximated to second order by a parabola on the line $\textbf{x}\xspace_k(\alpha)=\textbf{x}\xspace_k + \alpha \cdot \textbf{v}\xspace_k$. In consequence, the line-search will return $\alpha_k$ sufficiently close to $1$ for all $k$ sufficiently large.
From the global convergence of $\lbrace\textbf{x}\xspace_k\rbrace$, discussed in Section~3.1, and the Cauchy criterion we find that there is an iteration index from which on $\|\textbf{v}\xspace_k\|_2 \ll 1$ holds. We thus showed that for $k \in \mathbb{N}\xspace$ sufficiently large there will be second order convergence of
\begin{align}
\|\nabla \phi(\textbf{x}\xspace_k)\|_2 \rightarrow 0\,.
\end{align}
We admit that the requirement on $\widetilde{\textbf{H}}\xspace$ is quite strong and may not hold for the problem instance at hand. This is why in Section~4.3 we provide an enhancement that guarantees second order local convergence under any circumstances.
\section{Practical enhancements}
Our algorithm uses four practical enhancements. These are:
\begin{itemize}
\item a practical line-search;
\item a watchdog technique \cite{watchdog} to accelerate global convergence;
\item an additional Newton step per iteration to yield second-order convergence without further requirements;
\item a primal-dual long-step interior-point method for solving the subproblems defined in \eqref{eqn:MinQ}.
\end{itemize}
\subsection{Line-search}
It is not worth the effort to use optimal values for $\alpha$. In practice it is important that the line-search makes the choice $\alpha=1$ under mild conditions, so that second-order local convergence can be easily achieved. On the other hand, it is important that also $\alpha>1$ can be chosen, because crude Hessian approximations for $\widetilde{\textbf{H}}\xspace$ can be overly convex, which results in a very small length of the step direction $\|\textbf{v}\xspace\|_2$. In the following we propose a line-search that is cheap and accomplishes both goals.
The following line-search code shall replace line~\ref{algo:Solver:lineAlpha} in Algorithm~\ref{algo:Solver}. For ease of notation, we dropped the iteration index $k$ for $\textbf{x}\xspace_k,\textbf{v}\xspace_k,\alpha_k$.
\begin{algorithmic}[1]
\State $\alpha_{\max} := 1$\,,\quad $\alpha:=0$\,,\ $\check{\bx}\xspace:=\textbf{x}\xspace$
\While{( \textbf{true} )}
\State $\check{\alpha}\xspace:=\alpha_{\max}$
\While{\textbf{not}$\Big($ \texttt{StepCriterion}($\check{\bx}\xspace,\check{\alpha}\xspace,\textbf{v}\xspace$) \ \textbf{and} \ $\check{\bx}\xspace+\check{\alpha}\xspace\cdot\textbf{v}\xspace \in \Omega$ $\Big)$}
\State $\check{\alpha}\xspace:=\beta \cdot \check{\alpha}\xspace$
\EndWhile
\State $\check{\bx}\xspace := \check{\bx}\xspace + \check{\alpha}\xspace \cdot \textbf{v}\xspace$\,,\quad $\alpha := \alpha + \check{\alpha}\xspace$
\If{( $\alpha<\alpha_{\max}$ )}
\State \textbf{break}
\EndIf
\State $\alpha_{\max} := 2 \cdot \alpha_{\max}$
\EndWhile
\end{algorithmic}
The line-search comprises back-tracking with an iterative increase of the maximum trial step-size. But note, when $\alpha$ reaches $\alpha_{\max}$ then we require that the step-criterion holds from an updated point along the line. Thus, the step-criterion becomes more restrictive the more often we increase $\alpha_{\max}$. Typically, the step-criterion is Armijo's rule, i.e.
\begin{align*}
\texttt{StepCriterion}(\textbf{x}\xspace,\alpha,\textbf{v}\xspace) := \phi(\textbf{x}\xspace + \alpha \cdot \textbf{v}\xspace) \leq \phi(\textbf{x}\xspace) + \gamma \cdot \alpha \cdot \nabla\phi(\textbf{x}\xspace)\t\cdot\textbf{v}\xspace\,.
\end{align*}
For the line-search parameters we choose $\gamma = 0.1$, $\beta = 0.8$, as suggested in \cite{Boyd}. However, we occasionally allow different step criteria, cf. Section~\ref{sec:Watchdog}\,.
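The expanding line-search above can be sketched in a few lines of Python. The termination safeguard on very small trial steps is our own addition and not part of the pseudocode:

```python
import numpy as np

def expanding_linesearch(phi, grad_phi, x, v, in_domain,
                         gamma=0.1, beta=0.8):
    """Back-tracking with iteratively doubled maximum trial step-size.

    step_ok re-anchors Armijo's rule at the most recent accepted point,
    so the criterion becomes more restrictive with every doubling."""
    def step_ok(xc, a):
        return phi(xc + a*v) <= phi(xc) + gamma*a*(grad_phi(xc) @ v)
    alpha_max, alpha = 1.0, 0.0
    xc = np.asarray(x, dtype=float).copy()
    while True:
        a = alpha_max
        while not (step_ok(xc, a) and in_domain(xc + a*v)):
            a *= beta
            if a < 1e-14:     # safeguard (our addition): stop expanding
                return alpha
        xc = xc + a*v
        alpha += a
        if alpha < alpha_max:
            return alpha
        alpha_max *= 2.0
```

On the quadratic $\phi(\textbf{x}\xspace)=\|\textbf{x}\xspace\|_2^2$ with $\textbf{x}\xspace=(1,0)$ and the (over-)shortened direction $\textbf{v}\xspace=(-0.5,0)$, the search expands beyond $\alpha=1$ and accepts a total step-size larger than the initial maximum, illustrating why $\alpha>1$ matters for overly convex models.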
\subsection{Watchdog}\label{sec:Watchdog}
The watchdog-technique is a particular line-search technique that is introduced in \cite{watchdog}. The motivation of watchdog is that for small values of $\omega$ and non-linear constraint functions $c$ the step-criterion due to Armijo will only allow very small steps because $\phi$ grows rapidly when $\|c(\textbf{x}\xspace)\|_2$ increases. Small step-sizes for $\alpha$ however mean that the algorithm would make little progress per iteration, resulting in large amounts of iterations and long computation times.
A way out of this dilemma is the use of a \textit{relaxed} step-criterion. The relaxed criterion admits larger values for $\alpha$ in the line-search and thus offers the convergence in a smaller amount of iterations, compared to the \textit{standard}, i.e. Armijo, step-criterion.
Implementing the algorithm with only a relaxed step-criterion is insufficient, as the relaxed condition is not restrictive enough to guarantee global convergence. The watchdog is an algorithmic safeguard that keeps track of the iterates. It tells our optimization method which step-criterion to use. The watchdog is aggressive, meaning that it would always allow our method to use the relaxed criterion, hoping it yields rapid convergence. But if the watchdog notices that the iterates won't make progress, it switches over to the standard criterion in order to force global convergence.
Strong theoretical results are available that prove that the watchdog-technique maintains the original global and local convergence properties of the algorithm. For all details on the implementation of the watchdog technique we refer the reader to \cite{watchdog}. We use a watchdog parameter $\ell=5$.
In our algorithm we use the following relaxed step-criterion. Call $\textbf{x}\xspace_\alpha := \textbf{x}\xspace + \alpha \cdot \textbf{v}\xspace$. For the relaxed step acceptance we require that at least either of these two conditions is satisfied.
\begin{align*}
&\text{Condition 1} & &\phi(\textbf{x}\xspace_\alpha)<\phi(\textbf{x}\xspace)\\
&\text{Condition 2} & &f(\textbf{x}\xspace_\alpha) < f(\textbf{x}\xspace)\quad \wedge \quad \|c(\textbf{x}\xspace_\alpha)\|_\infty \leq 10 \cdot \max\big\lbrace\,\|c(\textbf{x}\xspace)\|_\infty\,,\,0.01\,\big\rbrace
\end{align*}
The motivation for the above conditions is that we want a relaxed step-acceptance criterion while also avoiding totally unreasonable steps. The first condition says that there is progress after all. The second condition says that the objective improves while the constraint violation does not grow too much. When the constraint violation is moderately small, the subsequent descent steps are usually able to rapidly return to very small values for the norm of $c$; thus it is fine to use the maximum expression to further relax the criterion.
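The two relaxed acceptance conditions translate directly into code. The sketch below uses the notation of this section; the toy functions in the usage example are hypothetical:

```python
import numpy as np

def relaxed_step_ok(f, c, phi, x, x_alpha):
    """Relaxed acceptance: Condition 1 or Condition 2 from above."""
    cond1 = phi(x_alpha) < phi(x)
    cond2 = (f(x_alpha) < f(x)
             and np.linalg.norm(c(x_alpha), np.inf)
                 <= 10.0*max(np.linalg.norm(c(x), np.inf), 0.01))
    return cond1 or cond2
```

Note that Condition 2 can accept a trial point even when the merit function $\phi$ temporarily increases due to a large penalty weight $\frac{1}{\omega}$, which is exactly the situation the watchdog is designed for.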
\subsection{An additional Newton step}
In Section~3.2 we proved second order local convergence of Algorithm~\ref{algo:Solver} to a stationary point under certain requirements. But in general, second-order convergence is impossible to achieve when only using step-directions that are obtained from solving \eqref{eqn:MinQ}. This is because $q$ is a convex approximation only, which can be insufficient. Letting $q$ be a nonconvex approximation could result in prohibitive cost for solving the subproblem \cite{Murty1987} and is thus not considered a practical option. But second order convergence can be achieved by using an additional search direction in Algorithm~\ref{algo:Solver} that arises from performing a simple Newton step.
This subsection is organized as follows. We first give a simple example problem that shows that steps obtained from the solution of a convex approximation $q$ do not permit second order convergence in general. We then present the additional Newton step. Finally we discuss why this step is indeed sufficient to yield second-order convergence under any circumstances.
\paragraph{Example problem with no second-order convergence}
Consider the minimization of $f(x) = -0.5 \cdot x^2$ for $-1\leq x \leq 1$. The function $\phi$ in our algorithm becomes
\begin{align*}
\phi(x) = \Big(\frac{\rho}{2}-0.5\Big) \cdot x^2 - \tau \cdot \Big( \log(1+x) + \log(1-x) \Big)
\end{align*}
$\omega$ does not appear since there are no equality constraints. Using the initial guess $x_0 = 0.5$, we hope to converge to a value close to $x=1$. For simplicity, we omit the convexization with $\rho$ and the left barrier term, yielding
\begin{align*}
\phi(x) = -0.5 \cdot x^2 - \tau \cdot \log(1-x)\,.
\end{align*}
If for $q$ we use the positive semi-definite best-approximation $\widetilde{\textbf{H}}\xspace=0$ to $\textbf{H}\xspace=-1$, then $q$ has the following form at an iterate $x_k$:
\begin{align*}
q_k(x) = -x_k \cdot x - \tau \cdot \log(1-x)
\end{align*}
Using step-sizes of $\alpha=1$, as usually required for second-order convergence in higher dimensions, we get $x_{k+1} = \operatornamewithlimits{argmin}_{x < 1} q_k(x)$. Since $q_k$ is convex, we can use the necessary optimality condition to obtain the explicit formula
\begin{align*}
x_{k+1} = 1-\frac{\tau}{x_k}\,.
\end{align*}
This sequence converges to $x^\star = 0.5 + \sqrt{0.25 - \tau}$ at a linear rate only, namely
\begin{align*}
|x_{k}-x^\star| \in \Theta(\tau^k)\,.
\end{align*}
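The linear rate can be observed numerically. The sketch below iterates a fixed-point recursion of the above form, here the variant $x_{k+1} = 1-\tau/x_k$ obtained from the log-barrier form of $q_k$; the contraction factor near the fixed point is of order $\tau$:

```python
import math

tau = 1e-3
x_star = 0.5 + math.sqrt(0.25 - tau)     # fixed point of x -> 1 - tau/x

x = 0.5                                  # initial guess as in the example
errors = []
for _ in range(6):
    x = 1.0 - tau/x
    errors.append(abs(x - x_star))

# Successive error ratios approach g'(x_star) = tau/x_star**2,
# i.e. plain linear convergence with a contraction factor of order tau.
ratios = [errors[i+1]/errors[i] for i in range(4)]
```

Each iteration gains roughly three digits for $\tau=10^{-3}$, consistent with linear convergence at rate $\approx\tau$ rather than the quadratic rate of a Newton-type step.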
\paragraph{Second-order convergence of the Newton step}
We propose to add the following lines after line~\ref{algo:Solver:line:End} in Algorithm~\ref{algo:Solver}.
\begin{algorithmic}[1]
\State Attempt computing $\tilde{\textbf{x}}\xspace := \textbf{x}\xspace_k - \nabla^2 \phi(\textbf{x}\xspace_k)^{-1} \cdot \nabla\phi(\textbf{x}\xspace_k)$\,.
\If{( $\tilde{\textbf{x}}\xspace \in \mathbb{R}\xspace^n$ \ \textbf{and} \ $\phi(\tilde{\textbf{x}}\xspace)<\phi(\textbf{x}\xspace_k)$)}
\State $\textbf{x}\xspace_k := \tilde{\textbf{x}}\xspace$
\EndIf
\end{algorithmic}
I.e., we attempt performing one Newton step for solving $\nabla\phi(\textbf{x}\xspace)=\textbf{0}\xspace$ from the initial guess $\textbf{x}\xspace_k$. This can fail since $\nabla^2 \phi(\textbf{x}\xspace_k)$ may be singular away from a local minimizer. But from above we know that in the local neighborhood of a minimizer it will be regular.
Since we seek a local minimizer, we only accept the step if it yields a reduction of the objective. Since \eqref{eqn:MinPhi} is unconstrained and strictly convex in the local minimizer (thanks to $\rho \cdot \lambda_\text{min}(\textbf{S}\xspace)>0$), the above Newton step is second-order convergent whenever $\textbf{x}\xspace_k$ is sufficiently accurate. Since the sequence $\lbrace \textbf{x}\xspace_k \rbrace$ is globally convergent, eventually $\textbf{x}\xspace_k$ is sufficiently accurate.
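A sketch of the guarded step in Python follows; the domain check and the toy $\phi$ in the usage example are our own illustrative choices:

```python
import numpy as np

def guarded_newton_step(phi, grad_phi, hess_phi, x, in_domain):
    """Attempt one Newton step for grad(phi)=0; keep x on failure or ascent."""
    try:
        step = np.linalg.solve(hess_phi(x), grad_phi(x))
    except np.linalg.LinAlgError:        # singular Hessian: reject the step
        return x
    x_new = x - step
    if in_domain(x_new) and np.all(np.isfinite(x_new)) \
            and phi(x_new) < phi(x):
        return x_new
    return x
```

On a strictly convex quadratic the step lands on the minimizer in one shot; with a singular Hessian the iterate is simply left unchanged, so the step can never harm the globally convergent outer iteration.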
\subsection{Primal-dual long-step interior-point method for solving the subproblems}
For sake of a self-contained presentation and for commenting on practical adaptations of this method, we discuss the algorithm introduced in \cite{StableIPM} that is used within our implementation of Algorithm~\ref{algo:Solver} for the solution of the subproblems \eqref{eqn:MinQ}.
We state the problem:
\begin{align*}
\min_{\textbf{x}\xspace \in \Omega}\quad q(\textbf{x}\xspace) := &\frac{1}{2} \cdot \textbf{x}\xspace\t\cdot\textbf{Q}\xspace \cdot \textbf{x}\xspace + \textbf{c}\xspace\t\cdot\textbf{x}\xspace + \frac{1}{2 \cdot \omega} \cdot \|\textbf{A}\xspace \cdot \textbf{x}\xspace - \textbf{b}\xspace\|_2^2 \\
&\quad - \tau \cdot \textbf{1}\xspace\t \cdot \Big(\,\log(\textbf{1}\xspace+\textbf{x}\xspace)+\log(\textbf{1}\xspace-\textbf{x}\xspace)\,\Big)
\end{align*}
This is essentially a convex quadratic function augmented with barrier terms for box constraints. We define an auxiliary variable
\begin{align}
\boldsymbol{\lambda}\xspace = \frac{-1}{\omega} \cdot (\textbf{A}\xspace \cdot \textbf{x}\xspace - \textbf{b}\xspace)\,.\label{eqn:def:lambda}
\end{align}
Our algorithm makes use of the following two functions, that are both parametric in $\nu>0$:
\begin{align*}
\psi_\nu(\textbf{x}\xspace) := &\frac{1}{\nu} \cdot \Bigg(\, \frac{1}{2} \cdot \textbf{x}\xspace\t\cdot\textbf{Q}\xspace \cdot \textbf{x}\xspace + \textbf{c}\xspace\t\cdot\textbf{x}\xspace + \frac{1}{2 \cdot \omega} \cdot \|\textbf{A}\xspace \cdot \textbf{x}\xspace - \textbf{b}\xspace\|_2^2 \,\Bigg)\\
& \quad -\Big(\, \log(\textbf{1}\xspace+\textbf{x}\xspace)+\log(\textbf{1}\xspace-\textbf{x}\xspace) \,\Big)\tageq\\[10pt]
F_\nu(\textbf{z}\xspace) := & \begin{pmatrix}
\textbf{Q}\xspace \cdot \textbf{x}\xspace + \textbf{c}\xspace - \textbf{A}\xspace\t \cdot \boldsymbol{\lambda}\xspace - \boldsymbol{\mu}\xspace_L + \boldsymbol{\mu}\xspace_R\\
\textbf{A}\xspace \cdot \textbf{x}\xspace - \textbf{b}\xspace + \omega \cdot \boldsymbol{\lambda}\xspace\\
\textsl{diag}\xspace(\boldsymbol{\mu}\xspace_L) \cdot (\textbf{1}\xspace+\textbf{x}\xspace)- \nu \cdot \textbf{1}\xspace\\
\textsl{diag}\xspace(\boldsymbol{\mu}\xspace_R) \cdot (\textbf{1}\xspace-\textbf{x}\xspace)- \nu \cdot \textbf{1}\xspace
\end{pmatrix}\,,\tageq
\end{align*}
where we use the short-hand $\textbf{z}\xspace = (\textbf{x}\xspace,\boldsymbol{\lambda}\xspace,\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R) \in \mathbb{R}\xspace^{n+m+n+n}$.
\largeparbreak
\paragraph{Idea of the algorithm}
For all details on the algorithm we refer to \cite{StableIPM}; in the following we only sketch the main ideas. Algorithm~\ref{algo:PrimalDual} states the method.
$\psi_\nu$ is self-concordant and strictly convex, and its unique minimizer converges to $\textbf{x}\xspace=\textbf{0}\xspace$ as $\nu\rightarrow +\infty$. One can prove that there is a value of $\nu$, scaling weakly with the logarithm of the norms of $\textbf{Q}\xspace,\textbf{c}\xspace,\textbf{A}\xspace,\textbf{b}\xspace$ and of $\omega$, such that the Newton iteration for the minimization of $\psi_\nu$ with initial guess $\textbf{x}\xspace=\textbf{0}\xspace$ converges rapidly to a sufficiently good minimizer of $\psi_\nu$. This minimizer $\textbf{x}\xspace$ satisfies $2 \cdot \textbf{x}\xspace \in \Omega$. The suitable value of $\nu$ is found iteratively by evaluating an upper bound on the Newton decrement at $\textbf{x}\xspace=\textbf{0}\xspace$; cf. \cite{Boyd,StableIPM} for details.
In Algorithm~\ref{algo:PrimalDual}, the suitable value for $\nu$ is determined iteratively in line 4. The sufficiently accurate minimizer of $\psi_\nu$ is then computed with $10$ Newton iterations in line 7. Damping is not needed: by the choice of $\nu$, the iterate $\textbf{x}\xspace$ always remains sufficiently close to the exact minimizer, so the full step-length $1$ in the Newton iteration is always acceptable.
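For illustration, the $\nu$-selection of lines 3--5 and the undamped Newton phase of lines 6--8 can be prototyped in a few lines of dense linear algebra. The instance below is purely hypothetical (illustrative data, a moderate value of $\omega$) and serves only to check the stated claims numerically; a practical implementation would of course exploit sparsity:

```python
import numpy as np

# hypothetical tiny instance: n = 2 box variables, m = 3 penalty rows
n, m = 2, 3
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -0.5])
A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])
b = np.array([0.5, -0.2, 0.1])
omega = 1e-4

def grad_psi(x, nu):
    # scaled gradient of the quadratic part plus barrier derivatives
    return (Q @ x + c + A.T @ (A @ x - b) / omega) / nu \
        - 1.0 / (1.0 + x) + 1.0 / (1.0 - x)

def hess_psi(x, nu):
    return (Q + A.T @ A / omega) / nu \
        + np.diag(1.0 / (1.0 + x)**2 + 1.0 / (1.0 - x)**2)

# lines 3-5: increase nu until the gradient norm at x = 0 is small
nu, x = 1.0, np.zeros(n)
while np.linalg.norm(grad_psi(x, nu)) >= 0.25:
    nu *= 10.0

# lines 6-8: ten undamped Newton steps
for _ in range(10):
    x -= np.linalg.solve(hess_psi(x, nu), grad_psi(x, nu))

assert np.linalg.norm(grad_psi(x, nu)) < 1e-10  # accurate minimizer of psi_nu
assert np.all(np.abs(2.0 * x) < 1.0)            # 2x lies in Omega, as claimed
```

Because $\psi_\nu$ is self-concordant and $\nu$ is chosen so that the initial Newton decrement is small, the undamped iteration is safe here; with an ill-chosen $\nu$ the first step could leave the box.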
Once the minimizer $\textbf{x}\xspace$ of $\psi_\nu$ has been computed, we augment the primal vector $\textbf{x}\xspace$ to a primal-dual vector $\textbf{z}\xspace$ by computing $\boldsymbol{\lambda}\xspace$ as given above and
\begin{align}
\boldsymbol{\mu}\xspace_L := \nu/(\textbf{1}\xspace+\textbf{x}\xspace)\,,\quad \boldsymbol{\mu}\xspace_R := \nu/(\textbf{1}\xspace-\textbf{x}\xspace)\,.\label{eqn:def:mu}
\end{align}
Since $\textbf{x}\xspace$ was an accurate root of $\nabla \psi_\nu$, it follows that $\textbf{z}\xspace$ is an accurate root of $F_\nu$ for the same value of $\nu$. Exact measures for the accuracy are given in \cite{StableIPM}. Given $\textbf{z}\xspace$ and $\nu$, we employ Mehrotra's predictor-corrector method to follow the path of roots of $F_\nu$ for iteratively reduced values of $\nu$ that converge to $\tau$. Eventually we arrive at a vector $\hat{\textbf{z}}\xspace$ satisfying
\begin{align*}
F_\tau(\hat{\textbf{z}}\xspace)=\textbf{0}\xspace\,.
\end{align*}
The first component $\hat{\textbf{x}}\xspace$ of $\hat{\textbf{z}}\xspace$ solves our minimization problem \eqref{eqn:MinQ}. Strong theory is available at least for the original short-step path-following version: all iterates $\textbf{x}\xspace$ are bounded away from $\partial\Omega$; all entries of $\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R$ are bounded from below by strictly positive values; and the condition number of the Jacobian $DF_\nu$ of $F_\nu$ is bounded by a reasonable value at all iterates, even when the path-following iterates are perturbed by round-off errors \cite{StableIPM}.
For the long-step variant such guarantees do not exist. In practice, however, we find that it converges in $15$ iterations on average. The long-step path-following iteration uses the Mehrotra heuristic and is implemented in lines 11--23 of Algorithm~\ref{algo:PrimalDual}. Mehrotra's method uses an affine step to estimate a value $\sigma$ for the geometric reduction of $\nu$, cf. line 17. A corrector step is then employed in order to restore centrality. Details on Mehrotra's method can be found in \cite{Mehrotra}.
Within the algorithm the set
\begin{align*}
\mathcal{F}\xspace := \Omega \times \mathbb{R}\xspace^m \times \mathbb{R}\xspace_+^n \times \mathbb{R}\xspace^n_+
\end{align*}
is used. The relation $\textbf{z}\xspace \in \mathcal{F}\xspace$ means that the component $\textbf{x}\xspace$ is strictly in the interior of the box and that $\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R$ are strictly positive.
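The maximal step lengths required in lines 14 and 22 of Algorithm~\ref{algo:PrimalDual} amount to a standard fraction-to-boundary ratio test over the four nonnegativity conditions $\textbf{1}\xspace+\textbf{x}\xspace\geq\textbf{0}\xspace$, $\textbf{1}\xspace-\textbf{x}\xspace\geq\textbf{0}\xspace$, $\boldsymbol{\mu}\xspace_L\geq\textbf{0}\xspace$, $\boldsymbol{\mu}\xspace_R\geq\textbf{0}\xspace$. A minimal sketch with a hypothetical helper and made-up data:

```python
import numpy as np

def max_alpha(x, muL, muR, dx, dmuL, dmuR):
    """Largest alpha in (0,1] keeping x + alpha*dx in [-1,1] and the
    multipliers nonnegative (i.e. membership in the closure of F)."""
    alpha = 1.0
    # each pair (v, dv) must keep v + alpha*dv >= 0
    for v, dv in [(1.0 + x, dx), (1.0 - x, -dx), (muL, dmuL), (muR, dmuR)]:
        shrinking = dv < 0.0
        if np.any(shrinking):
            alpha = min(alpha, float(np.min(-v[shrinking] / dv[shrinking])))
    return alpha

# toy check: the binding condition is muL[0] reaching zero
x   = np.array([0.5, -0.2]);  dx   = np.array([1.0, 0.1])
muL = np.array([1.0,  1.0]);  dmuL = np.array([-4.0, 0.0])
muR = np.array([2.0,  1.0]);  dmuR = np.array([1.0, -0.5])
print(max_alpha(x, muL, muR, dx, dmuL, dmuR))  # -> 0.25, set by muL[0]
```

In the toy data the box constraint on the first component of $\textbf{x}\xspace$ would allow $\alpha=0.5$, but the first entry of $\boldsymbol{\mu}\xspace_L$ hits zero already at $\alpha=0.25$.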
\begin{algorithm}
\caption{Primal-dual method}
\label{algo:PrimalDual}
\begin{algorithmic}[1]
\Procedure{CcpPbSolver}{\,$\textbf{Q}\xspace,\textbf{c}\xspace,\textbf{A}\xspace,\textbf{b}\xspace,\omega,\tau,{\textsf{tol}}\xspace$\,}
\State $\nu:=1$,\quad $\textbf{x}\xspace:=\textbf{0}\xspace$
\While{$\|\nabla \psi_\nu(\textbf{x}\xspace)\|_2 \geq 0.25$}
\State $\nu := 10 \cdot \nu$
\EndWhile
\For{$k=1,...,10$}
\State $\textbf{x}\xspace := \textbf{x}\xspace - \nabla^2 \psi_\nu(\textbf{x}\xspace)^{-1} \cdot \nabla \psi_\nu(\textbf{x}\xspace)$
\EndFor
\State Compute $\boldsymbol{\lambda}\xspace,\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R$ according to \eqref{eqn:def:lambda}, \eqref{eqn:def:mu} and assemble $\textbf{z}\xspace$.
\While{$ \|F_\tau(\textbf{z}\xspace)\|_\infty >{\textsf{tol}}\xspace $}
\State $\nu := 0.5 \cdot \big(\,\boldsymbol{\mu}\xspace_L\t\cdot(\textbf{1}\xspace+\textbf{x}\xspace) + \boldsymbol{\mu}\xspace_R\t\cdot(\textbf{1}\xspace-\textbf{x}\xspace) \,\big)$
\State \textit{// predictor (affine step to target $\nu=\tau$)}
\State $\Delta\textbf{z}\xspace^{\text{aff}} := -DF_\nu(\textbf{z}\xspace)^{-1} \cdot F_\tau(\textbf{z}\xspace)$
\State Choose $\alpha^{\text{aff}} \in (0,1]$ maximal subject to $\textbf{z}\xspace + \alpha^{\text{aff}} \cdot \Delta\textbf{z}\xspace^{\text{aff}} \in \overline{\mathcal{F}\xspace}$
\State $\textbf{z}\xspace^{\text{aff}} := \textbf{z}\xspace + \alpha^{\text{aff}} \cdot \Delta\textbf{z}\xspace^{\text{aff}}$
\State $\nu^{\text{aff}} := 0.5 \cdot \big(\,(\boldsymbol{\mu}\xspace^{\text{aff}}_L)\t\cdot(\textbf{1}\xspace+\textbf{x}\xspace^{\text{aff}}) + (\boldsymbol{\mu}\xspace^{\text{aff}}_R)\t\cdot(\textbf{1}\xspace-\textbf{x}\xspace^{\text{aff}}) \,\big)$
\State $\sigma := (\nu^{\text{aff}} / \nu)^3$
\State ${\hat{\nu}}\xspace := \max\lbrace\,\tau\,,\,\sigma\cdot\nu\,\rbrace$
\State \textit{// corrector (step to target $\nu={\hat{\nu}}\xspace$)}
\State $\Delta\textbf{z}\xspace^\text{cor} := -DF_\nu(\textbf{z}\xspace)^{-1} \cdot F_{\hat{\nu}}\xspace(\textbf{z}\xspace^{\text{aff}})$
\State $\Delta\textbf{z}\xspace := \Delta\textbf{z}\xspace^{\text{aff}} + \Delta\textbf{z}\xspace^\text{cor}$
\State Choose $\alpha \in (0,1]$ maximal subject to $\textbf{z}\xspace + \alpha \cdot \Delta\textbf{z}\xspace \in \overline{\mathcal{F}\xspace}$
\State $\textbf{z}\xspace := \textbf{z}\xspace + 0.99 \cdot \alpha \cdot \Delta\textbf{z}\xspace$
\EndWhile
\State \textit{// $\textbf{z}\xspace = (\textbf{x}\xspace,\boldsymbol{\lambda}\xspace,\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R)$}
\State \Return $\textbf{x}\xspace$
\EndProcedure
\end{algorithmic}
\end{algorithm}
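For concreteness, the complete procedure can be prototyped in a few dozen lines of dense linear algebra. The instance below is hypothetical, the values of $\omega$, $\tau$ and ${\textsf{tol}}\xspace$ are moderate (cf. the round-off discussion in the experiments), and the multipliers in line 9 are initialized to $\nu/(\textbf{1}\xspace+\textbf{x}\xspace)$ and $\nu/(\textbf{1}\xspace-\textbf{x}\xspace)$, the values that annihilate the complementarity rows of $F_\nu$. A production implementation would use sparse symmetric solvers instead of the dense calls shown here:

```python
import numpy as np

# hypothetical instance: n = 2 box variables, m = 3 penalty rows (m > n allowed)
n, m = 2, 3
Q = np.array([[2.0, 0.5], [0.5, 1.0]])
c = np.array([1.0, -0.5])
A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, -1.0]])
b = np.array([0.5, -0.2, 0.1])
omega, tau, tol = 1e-4, 1e-6, 1e-6

def split(z):
    return np.split(z, [n, n + m, n + m + n])

def grad_psi(x, nu):
    return (Q @ x + c + A.T @ (A @ x - b) / omega) / nu \
        - 1.0 / (1.0 + x) + 1.0 / (1.0 - x)

def hess_psi(x, nu):
    return (Q + A.T @ A / omega) / nu \
        + np.diag(1.0 / (1.0 + x)**2 + 1.0 / (1.0 - x)**2)

def F(z, nu):
    x, lam, muL, muR = split(z)
    return np.concatenate([
        Q @ x + c - A.T @ lam - muL + muR,
        A @ x - b + omega * lam,
        muL * (1.0 + x) - nu,
        muR * (1.0 - x) - nu])

def DF(z):  # Jacobian of F (independent of nu)
    x, lam, muL, muR = split(z)
    return np.block([
        [Q, -A.T, -np.eye(n), np.eye(n)],
        [A, omega * np.eye(m), np.zeros((m, n)), np.zeros((m, n))],
        [np.diag(muL), np.zeros((n, m)), np.diag(1.0 + x), np.zeros((n, n))],
        [-np.diag(muR), np.zeros((n, m)), np.zeros((n, n)), np.diag(1.0 - x)]])

def max_alpha(z, dz):  # fraction-to-boundary ratio test
    x, _, muL, muR = split(z)
    dx, _, dmuL, dmuR = split(dz)
    alpha = 1.0
    for v, dv in [(1.0 + x, dx), (1.0 - x, -dx), (muL, dmuL), (muR, dmuR)]:
        neg = dv < 0.0
        if np.any(neg):
            alpha = min(alpha, float(np.min(-v[neg] / dv[neg])))
    return alpha

# lines 2-8: nu selection and Newton iteration for psi_nu
nu, x = 1.0, np.zeros(n)
while np.linalg.norm(grad_psi(x, nu)) >= 0.25:
    nu *= 10.0
for _ in range(10):
    x -= np.linalg.solve(hess_psi(x, nu), grad_psi(x, nu))

# line 9: augment to a primal-dual point; mu = nu/(1 +/- x) makes z a root of F_nu
lam = -(A @ x - b) / omega
z = np.concatenate([x, lam, nu / (1.0 + x), nu / (1.0 - x)])

# lines 10-24: long-step path following with Mehrotra's heuristic
iters = 0
while np.max(np.abs(F(z, tau))) > tol and iters < 100:
    iters += 1
    x, lam, muL, muR = split(z)
    nu = 0.5 * (muL @ (1.0 + x) + muR @ (1.0 - x))
    J = DF(z)
    dz_aff = np.linalg.solve(J, -F(z, tau))                # predictor
    z_aff = z + max_alpha(z, dz_aff) * dz_aff
    xa, _, muLa, muRa = split(z_aff)
    nu_aff = 0.5 * (muLa @ (1.0 + xa) + muRa @ (1.0 - xa))
    nu_hat = max(tau, (nu_aff / nu)**3 * nu)               # Mehrotra's sigma
    dz = dz_aff + np.linalg.solve(J, -F(z_aff, nu_hat))    # corrector
    z = z + 0.99 * max_alpha(z, dz) * dz

x_opt = split(z)[0]
assert np.max(np.abs(F(z, tau))) <= tol and np.all(np.abs(x_opt) < 1.0)
```

On this tiny, strictly interior instance the loop terminates after a handful of path-following iterations; the final $\textbf{x}\xspace$ is close to the least-squares point of the penalty term, as the small $\omega$ suggests.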
\paragraph{Linear systems}
The linear systems to be solved in line 7 are well-posed, since their matrices are strongly dominated by the positive diagonal entries contributed by the logarithmic terms in $\psi_\nu$. The linear systems in lines 13 and 20 can be made symmetric, with the system matrix
\begin{align*}
\textbf{K}\xspace = \begin{bmatrix}
\textbf{Q}\xspace & \textbf{A}\xspace\t & \textbf{I}\xspace & \textbf{I}\xspace \\
\textbf{A}\xspace & -\omega \cdot \textbf{I}\xspace & \textbf{0}\xspace & \textbf{0}\xspace\\
\textbf{I}\xspace & \textbf{0}\xspace & - \textsl{diag}\xspace\Big(\frac{\boldsymbol{\mu}\xspace_L}{\textbf{1}\xspace+\textbf{x}\xspace}\Big) & \textbf{0}\xspace\\
\textbf{I}\xspace & \textbf{0}\xspace & \textbf{0}\xspace & - \textsl{diag}\xspace\Big(\frac{\boldsymbol{\mu}\xspace_R}{\textbf{1}\xspace-\textbf{x}\xspace}\Big)
\end{bmatrix}\,.
\end{align*}
It is symmetric indefinite, of the form
\begin{align*}
\textbf{K}\xspace= \begin{bmatrix}
\textbf{Q}\xspace & \textbf{G}\xspace\t\\
\textbf{G}\xspace & -\textbf{D}\xspace
\end{bmatrix}\,,
\end{align*}
where $\textbf{Q}\xspace$ and $\textbf{D}\xspace$ are both positive definite. Thus,
\begin{align}
\textsl{cond}\xspace_2(\textbf{K}\xspace) \leq ( \|\textbf{Q}\xspace^{-1}\|_2 + \|\textbf{D}\xspace^{-1}\|_2 ) \cdot \|\textbf{K}\xspace\|_2 \,.
\end{align}
The norm of $\textbf{D}\xspace^{-1}$ can in turn be bounded using the lower and upper bounds that hold for all iterates $\textbf{x}\xspace,\boldsymbol{\mu}\xspace_L,\boldsymbol{\mu}\xspace_R$. For details we refer to \cite{StableIPM}. The system can easily be reduced via the Schur complement
\begin{align}
\Sigma := \textbf{Q}\xspace + \textbf{G}\xspace\t \cdot \textbf{D}\xspace^{-1} \cdot \textbf{G}\xspace\,,
\end{align}
where $\textbf{D}\xspace$ is a diagonal matrix.
\section{Numerical experiments}
For the numerical experiments we are particularly interested in large sparse NLP problems that arise from the discretization of one-dimensional path-constrained optimal-control problems. For the direct transcription of the control problem into a large sparse NLP we use the method introduced in \cite{StableTranscription}. This method yields an optimization problem where $\rho,\omega,\tau,\textbf{S}\xspace$ and functions for $f,c,\nabla f, \nabla c, \nabla^2_{\textbf{x}\xspace\bx} \mathcal{L}\xspace$, as well as a sparsified positive semi-definite projection of $\nabla^2_{\textbf{x}\xspace\bx} \mathcal{L}\xspace$, are provided. The objective to be minimized is $\phi$ itself, so our method can be applied directly to solve these problems.
We compare our method against \textsc{Ipopt}\xspace \cite{Ipopt}. Unfortunately, the interface of \textsc{Ipopt}\xspace does not allow passing problems where $m$, the output dimension of $c$, is larger than $n$. This is why we introduce auxiliary variables $\textbf{s}\xspace$ that we force by equality constraints to satisfy $\omega \cdot \textbf{s}\xspace + c(\textbf{x}\xspace) = \textbf{0}\xspace$. Since $\|c(\textbf{x}\xspace)\|_2$ will be very small at the minimizer (in $\mathcal{O}\xspace(\omega)$), it holds that $\|\textbf{s}\xspace\|_2 \in \mathcal{O}\xspace(1)$, i.e., the problem is reasonably scaled and no large numbers are introduced in the interface to \textsc{Ipopt}\xspace. For \textsc{Ipopt}\xspace, we choose the objective $f(\textbf{x}\xspace) + \rho/2 \cdot \|\textbf{x}\xspace\|_\textbf{S}\xspace^2 + 0.5 \cdot \omega \cdot \|\textbf{s}\xspace\|^2$. Since \textsc{Ipopt}\xspace uses primal barrier functions, the logarithmic terms with $\tau$ in $\phi$ also appear in the objective effectively minimized by \textsc{Ipopt}\xspace, so both compared algorithms effectively solve the same optimization problem. For further details on how the problem is formulated for \textsc{Ipopt}\xspace we refer to \cite[Section 5]{StableTranscription}.
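The equivalence of this reformulation is easy to verify: eliminating $\textbf{s}\xspace$ from $\omega \cdot \textbf{s}\xspace + c(\textbf{x}\xspace) = \textbf{0}\xspace$ turns the added objective term $0.5 \cdot \omega \cdot \|\textbf{s}\xspace\|^2$ back into the quadratic penalty $\|c(\textbf{x}\xspace)\|_2^2/(2\omega)$. A minimal numeric check with a hypothetical constraint function:

```python
import numpy as np

omega = 1e-6
x = np.array([0.3, -0.1])

def c(x):  # hypothetical constraint function with m = 3 > n = 2
    return np.array([x[0]**2 + x[1] - 0.05, x[0] - x[1], x[0] + 2.0 * x[1]])

s = -c(x) / omega                             # enforces omega*s + c(x) = 0
penalty_via_s  = 0.5 * omega * np.dot(s, s)   # term passed to the solver
penalty_direct = np.dot(c(x), c(x)) / (2.0 * omega)
assert np.isclose(penalty_via_s, penalty_direct)
```

Note that $\|\textbf{s}\xspace\|$ is $\mathcal{O}\xspace(1)$ only near the minimizer, where $\|c(\textbf{x}\xspace)\|_2 = \mathcal{O}\xspace(\omega)$; at the arbitrary point used above $\textbf{s}\xspace$ is large, which is why the scaling argument applies at the solution.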
The implementations were run in \textsc{Matlab} R2016b on Windows 8.1 with an Intel(R) Core(TM) i7-4600U processor and 8\,GB RAM. We used the MEX-compiled \textsc{Ipopt}\xspace version 3.11 from COIN-OR (www.coin-or.org).
\paragraph{Test problems}
Our test problems are listed in Table~\ref{table:Problems}. Problems 1 and 10 are given in \cite[eqns. 3 and 26]{Kameswaren}. Problems 2 and 3 can be found on the web-page of GPOPS-II (www.gpops2.com). The other problems are, in order, from \cite[pp. 79, 163, 85, 149, 113, 39]{BettsCollection}. Since we only have access to a computer with $\varepsilon_{\textsf{mach}}\xspace=10^{-16}$, we cannot choose $\rho,\omega,\tau$ very small. We solve all problems with $\rho=10^{-6}, \omega=10^{-6}, \tau=10^{-8}$. In all except two cases we use ${\textsf{tol}}\xspace= 10^{-8}$. The exceptions are problems 5 and 7, since these are badly scaled.
\vspace{2mm}
The Aly-Chan problem features a singular arc where the sensitivity of the optimal value with respect to a variation in the control is below $10^{-10}$. Thus, it is difficult for the optimizer to find the unique smooth solution for this control, potentially resulting in many iterations.
The brachistochrone problem and those due to Bryson and Denham, Goddard, Hager and Rao form a biased set of well-scaled trial problems, involving convex quadratic programs and non-linear programs of small to large size.
Problems 5 and 7 involve biological models. As is typical for biological problems, the states and controls differ widely in scale, leading to badly scaled NLPs. In our experiments we were unable to solve these NLPs to small tolerances.
Regarding size, problems 8 and 11 are the most demanding. Problem 8 involves twelve species over a time interval of 12 seconds. Problem 11 is originally stated on a time interval of $10^4$ seconds. For the purpose of our experiments we reduced the time interval to $100$ seconds, which still yields the same problem characteristics in terms of the shape of the solution.
For the initial guesses we used constant values for problems 1, 2, 3, 4, 8, 10 and 11. For problems 5, 6, 7 and 9 we used constant values for the controls and integrated the states numerically for the given initial conditions and constant control values. However, since most of the listed test problems involve end conditions, our initial guesses are usually infeasible.
The mesh-size has been chosen sufficiently small to yield curves for the discrete solutions that qualitatively represent the shapes of the reference solutions. Yet, the mesh-size is rather moderate, so that the NLPs are small enough to be solvable in a reasonable amount of time on our computing system. In this regard, the computation times of \textsc{Ipopt}\xspace in particular were a limiting factor.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\# & Name & type & initial guess & $h$ & FEM degree \\ \hline
1 & Aly-Chan & non-convex QCQP & $\vec{0}$ & $\pi/20$ & 10 \\
2 & Brachistochrone & NLP & $0.5 \cdot \vec{1}$ & $1/40$ & 4 \\
3 & Bryson-Denham & convex QP & $\vec{0}$ & $1/20$ & 8 \\
4 & Chemical reactor & NLP & $0.5 \cdot \vec{1}$ & $1/20$ & 4 \\
5 & Chemotherapy & badly scaled NLP & $\int$, $\vec{u}=\vec{0}$ & $5$ & 4 \\
6 & Container crane & NLP & $\int$, $\vec{u}=\vec{0}$ & $9/40$ & 4 \\
7 & Drug treatment & badly scaled NLP & $\int$, $\vec{u}=\vec{0}$ & $5/2$ & 4 \\
8 & Free flying robot & mildly nonlinear NLP & $\vec{1}$ & $3/10$ & 8 \\
9 & Goddard problem & NLP & $\int$, $T=T_m$ & $1/40$ & 4 \\
10 & Hager problem & convex QP & $\vec{0}$ & $1/4$ & 2 \\
11 & Rao Problem & NLP & $\vec{0}$ & $1/10$ & 4 \\
\hline
\end{tabular}
\caption[]{List of test problems: problem number, name, type of resulting finite-dimensional optimization problem, construction of initial guess, mesh-size, polynomial degree of finite element shape functions.}\label{table:Problems}
\end{table}
\paragraph{Experimental results} For the experiments we used a computation time limit of 10 hours and an iteration limit of 10000 iterations. \textsc{Ipopt}\xspace caused \textsc{Matlab} to crash twice, for problems 2 and 11. For problem 11 it broke shortly before the time limit was reached. For problems 2, 4, 5, 7 and 9, \textsc{Ipopt}\xspace reached the iteration limit without a sufficiently accurate solution. For problem 1, \textsc{Ipopt}\xspace terminated prematurely with a solution that it reported as of ``acceptable accuracy'', although ${\textsf{tol}}\xspace=10^{-8}$ was specified. For problems 6 and 11, \textsc{Ipopt}\xspace reported algorithmic errors in its own subroutines and terminated. In all other cases both algorithms worked as intended.
The results are given in Table~\ref{tab:ExperimentResults}. \textsc{Ipopt}\xspace solved 2 out of 11 problems successfully, whereas PPD-IPM solved all 11. All successfully solved problems of both solvers yielded feasible solutions that were good approximations to the reference optimal-control solutions.
The table shows that, in principle, the iterations of \textsc{Ipopt}\xspace are cheaper than those of PPD-IPM. This is because \textsc{Ipopt}\xspace solves only one linear system per iteration, while PPD-IPM utilizes Algorithm~\ref{algo:PrimalDual}, which solves $\approx 15$ linear systems per iteration. PPD-IPM compensates for the larger cost per iteration by requiring fewer iterations in total. We further accelerated the solution of the linear systems in PPD-IPM by exploiting the particular sparsity pattern that results from the discretization. This explains why our method is only ten times slower per iteration, although it is implemented in \textsc{Matlab} (instead of MEX-compiled C++) and solves more systems per iteration.
We observe that for the convex quadratic programs in problems 3 and 10, PPD-IPM converged in one iteration. This is because the minimizer of the subproblem $q$ is itself an accurate interior-point solution of a convex quadratic program; cf. \cite{StableIPM}. Problem 9 converged in 4 iterations. During our investigation we found that our integrated initial guess with $T(t)=T_m$ is in fact the optimal solution, so the initial guess is -- apart from errors of numerical integration and discretization -- identical to the minimizer. PPD-IPM benefits from this accurate initial guess by converging in very few iterations, while \textsc{Ipopt}\xspace cannot exploit it.
Notably, in 7 out of 11 cases PPD-IPM terminates with a solution tolerance below $10^{-9}$, although only ${\textsf{tol}}\xspace=10^{-8}$ was required. This is because for each of these problems the method achieves second-order local convergence. Thus, choosing smaller values for ${\textsf{tol}}\xspace$ would have only negligible impact on the iteration count. Unfortunately, we cannot demonstrate this because $\varepsilon_{\textsf{mach}}\xspace=10^{-16}$ is not small enough.
\largeparbreak
In general, the results for \textsc{Ipopt}\xspace are rather poor compared to those of PPD-IPM. We suspect the reason is the following: we use exact Hessian matrices and highly accurate positive semi-definite approximations to the Hessian (instead of BFGS updates \cite[Chapter~6]{NumOpt}). PPD-IPM makes extensive use of this Hessian information, whereas \textsc{Ipopt}\xspace does not. For example, when the Hessian is not positive definite, \textsc{Ipopt}\xspace applies a crude shift with the identity, potentially resulting in poor progress because of the different scales in each solution component.
If, instead of the exact Hessian, we had used a crude Hessian approximation, this would certainly increase the iteration count of PPD-IPM. The performance of \textsc{Ipopt}\xspace might then look better in comparison, since \textsc{Ipopt}\xspace's iteration count would probably change only mildly while that of PPD-IPM would grow severely. But we have accurate Hessians and want to take advantage of them, so there would be no point in running experiments without using them.
\begin{table}
\begin{tabular}{|l|l|l||l|l|l||l|l|l|}
\hline
\multicolumn{3}{c|}{Problem} & \multicolumn{3}{c|}{\textsc{Ipopt}\xspace} & \multicolumn{3}{c|}{PPD-IPM} \\
\# & n & m & time/iter & \# iters & NLP error & time/iter & \# iters & $\|\nabla \phi\|_\infty$ \\ \hline\hline
1 & 1505 & 1804 & 0.33 & 2188 & 9.8e-7$^{***,\dagger}$ & 1.3 & 146 & 6.2e-11 \\
2 & 2405 & 2885 & 0.30 & 10000 & 3.4e-2$^\dagger$ & 2.0 & 360 & 1.4e-9 \\
3 & 1443 & 1604 & 0.22 & 20 & 3.1e-9 & 2.2 & 1 & 1.2e-11 \\
4 & 1924 & 2243 & 0.23 & 10000 & 3.3e-2$^\dagger$ & 1.5 & 138 & 3.5e-10 \\
5 & 6005 & 7204 & 0.65 & 10000 & 1.7e0$^\dagger$ & 2.6 & 275 & 7.4e-6 \\
6 & 3848 & 4492 & 0.63 & 1687 & 1.5e-1$^{*,\dagger}$ & 2.5 & 48 & 1.8e-9 \\
7 & 964 & 962 & 0.19 & 10000 & 1.0e+2$^{\dagger}$ & 1.4 & 203 & 2.6e-6 \\
8 & 11532 & 12812 & 20 & 543 & 9.5e-9 & 12 & 71 & 5.1e-10 \\
9 & 2886 & 3525 & 0.30 & 10000 & 4.9e+1$^{\dagger}$ & 1.9 & 4 & 5.4e-10 \\
10 & 50 & 49 & 0.20 & 6 & 9.1e-10 & 1.4 & 1 & 1.7e-10 \\
11 & 24002 & 24002 & $\approx$8.5 & 4243$^{**}$ & --- & 5.9 & 13 & 1.2e-10 \\
\hline
\end{tabular}
\caption[]{Experimental results. Problem number, $n$ number of unknowns, each with box constraints, $m$ number of penalty-equality constraints; time per iteration, number of iterations for each solver. Regarding solution accuracy, \textsc{Ipopt}\xspace monitors the NLP error while we measure $\|\nabla\phi(\textbf{x}\xspace)\|_\infty$. Legend: $*$ Restoration failed; $**$ Restoration phase converged to a feasible point that is unacceptable to the filter; $***$ \textsc{Ipopt}\xspace reports an ``acceptable solution''; $\dagger$ solution is not sufficiently accurate.}\label{tab:ExperimentResults}
\end{table}
\section{Final remarks}
We presented a novel optimization method that merges the ideas of primal interior-point methods, primal-dual interior-point methods, and successive quadratic programming. The method directly minimizes a penalty-barrier function, which is similar in approach to primal interior-point methods. In each iteration a step direction is computed by minimizing a convex subproblem that is the sum of a convex quadratic function, quadratic penalties, and logarithmic barriers. The approach of solving a convex (approximately quadratic) subproblem is related to successive quadratic programming. For the solution of the subproblem a primal-dual path-following method is used.
The method has several appealing theoretical properties. It is very simple. There are no complications arising from infeasible subproblems because, by construction, all subproblems are feasible. The method converges globally, and locally at second order (when an additional Newton step is used), under essentially no further requirements.
The method has three big practical advantages that, to our knowledge, no other solver currently matches. First, it can solve overdetermined problems, i.e., problems where $m > n$, in a meaningful way and in a comparably small number of iterations. This matters because the only convergent numerical scheme available for the direct transcription of optimal control problems \cite{StableTranscription} results in such overdetermined problems. Moreover, since the cost of solving the linear systems is lower than the time for evaluating the problem functions, a small number of iterations directly translates into good execution times. Second, our method does not require regularity of the Hessian of the Lagrange function. Most other methods require it and therefore enforce it with a shift (significantly larger than $\rho$ and not aligned with the objective), resulting in large iteration counts. Third, as the experiments showed, our method can find highly accurate solutions to problems with indefinite or singular Hessian matrices and linearly dependent or overdetermined constraints. We believe this is only possible because highly accurate sparse approximations of the positive semi-definite projections of the exact Hessian matrices are employed for fast global convergence, while exact Hessians are used for second-order local convergence. As far as we know, most other optimization methods do not even have an interface that would allow passing both a positive semi-definite Hessian projection and the exact Hessian.
Future work will be dedicated to implementing this method on a highly parallel system that computes with at least 32 significant digits. The algorithm itself is highly suitable for parallel computation because, for our optimal control problems, weakly scalable algorithms exist for the solution of the linear systems. From the use of 32 significant digits we expect the removal of all round-off related issues that can currently arise when choosing very small values for $\rho,\omega,\tau,{\textsf{tol}}\xspace$.
\FloatBarrier
\section{Acknowledgements}
This work was supported in part by Polish Science Council
grant KBN 2 P301 050 07.
\section{Introduction}
To understand the nature and constrain the Equation of State (EOS) of super-dense neutron-rich nuclear matter has been a major science goal shared by many astrophysical observations and terrestrial nuclear experiments, see, e.g., refs. \citep{Danielewicz02,LCK08,Lattimer16,Watts16,Oertel17,Ozel16,Li17,Herman17,Blaschke2018,Bom18,BUR18,ISAAC18,Pro19,Baiotti} for topical reviews.
The most basic quantity for calculating the EOS of nuclear matter at nucleon density $\rho=\rho_n+\rho_p$ and isospin asymmetry $\delta\equiv (\rho_n-\rho_p)/\rho$
is the average nucleon energy $E(\rho ,\delta )$
\begin{equation}\label{eos}
E(\rho ,\delta )=E_0(\rho)+E_{\rm{sym}}(\rho )\cdot \delta ^{2} +\mathcal{O}(\delta^4)
\end{equation}
according to essentially all existing nuclear many-body theories \citep{Bom91}. The first term $E_0(\rho)$ is the nucleon energy in symmetric nuclear matter (SNM) having equal numbers of neutrons and protons, while the symmetry energy $E_{\rm{sym}}(\rho )$ quantifies the energy needed to make nuclear matter more neutron rich. While much progress has been made over the last few decades in constraining the SNM EOS in a broad density range, the symmetry energy $E_{\rm{sym}}(\rho )$ is relatively well constrained only around and below the saturation density of nuclear matter $\rho_0\approx 2.8\times 10^{14}$ g/cm$^{3}$ (0.16 fm$^{-3}$) \citep{Li98,ibook01,Bar05,Steiner05,LCK08,Lattimer2012,Tsang12,Dutra12,Dutra14,Chuck14,Tesym,Bal16,Li18}, while very little is known about it at supra-saturation densities. In fact, the $E_{\rm{sym}}(\rho )$ has been broadly recognized as the most uncertain part of the EOS of super-dense neutron-rich nucleonic matter, see, e.g., refs. \citep{Kut93,Ditoro2,BALI19}.
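The roles of the two terms in the expansion above can be made concrete with a small numerical sketch. The truncated density expansions and all coefficient values below (in particular $K_{\rm sym}$) are illustrative placeholders rather than fits; the check recovers $E_{\rm{sym}}(\rho)$ as half the curvature of $E(\rho,\delta)$ in $\delta$:

```python
import numpy as np

rho0 = 0.16                       # fm^-3, saturation density
E0_sat, K0 = -16.0, 230.0         # MeV, empirical SNM values
S0, L, Ksym = 31.7, 58.7, -100.0  # MeV; Ksym is an illustrative guess

def E0(rho):                      # SNM energy per nucleon (truncated expansion)
    chi = (rho - rho0) / (3.0 * rho0)
    return E0_sat + 0.5 * K0 * chi**2

def Esym(rho):                    # symmetry energy (truncated expansion)
    chi = (rho - rho0) / (3.0 * rho0)
    return S0 + L * chi + 0.5 * Ksym * chi**2

def E(rho, delta):                # parabolic approximation of the EOS
    return E0(rho) + Esym(rho) * delta**2

# recover Esym as half the second delta-derivative (finite difference)
rho, h = 2.0 * rho0, 1e-3
curv = (E(rho, h) - 2.0 * E(rho, 0.0) + E(rho, -h)) / h**2
assert abs(0.5 * curv - Esym(rho)) < 1e-6

print(round(Esym(2.0 * rho0), 1))  # -> 45.7 with these placeholder coefficients
```

Because the quartic term is dropped here, $E_{\rm{sym}}(\rho)$ also equals $E(\rho,1)-E(\rho,0)$, the energy cost of going from symmetric matter to pure neutron matter.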
The nuclear symmetry energy has broad ramifications for many properties of neutron stars and gravitational waves from their mergers. For example, the density profile of isospin asymmetry in NSs at $\beta$ equilibrium, i.e., $\delta(\rho)$ or the corresponding proton fraction $x_p(\rho)$, is uniquely determined by the $E_{\rm{sym}}(\rho )$ through the $\beta$-equilibrium and charge neutrality conditions. Once the $\delta(\rho)$ is determined by the $E_{\rm{sym}}(\rho )$, both the pressure $P(\rho, \delta)$ and energy density $\epsilon(\rho,\delta)$ reduce to functions of nucleon density only. Their relation $P(\epsilon)$ can then be used to study NS structures. Moreover, both the critical nucleon density $\rho_c$ (where $x_p(\rho_c)\approx 1/9$ in the neutron+proton+electron ($npe$) matter in NSs) above which the fast cooling of protoneutron stars by neutrino emissions through the direct URCA process can occur, and the crust-core transition density in NSs depend sensitively on the $E_{\rm{sym}}(\rho )$ \citep{Lattimer00}. Furthermore, the frequencies and damping times of various oscillations, especially the g-mode of the core and the torsional mode of the crust, quadrupole deformations of isolated NSs and the tidal deformability of NSs in inspiraling binaries also depend on the $E_{\rm{sym}}(\rho )$ \citep{DongLai,Plamen1,Newton,Wen19}. There is also a degeneracy between the EOS of super-dense neutron-rich matter and the strong-field gravity in understanding both properties of super-massive NSs and the minimum mass to form black holes. Thus, a precise determination of the $E_{\rm{sym}}(\rho )$ has broad impacts in many areas of astrophysics, cosmology and nuclear physics \citep{Wen09,XTHe}.
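As an illustration of how $E_{\rm{sym}}(\rho)$ fixes the composition, the sketch below solves the $\beta$-equilibrium condition of $npe$ matter, $\mu_n-\mu_p \approx 4(1-2x_p)E_{\rm{sym}}(\rho)=\mu_e$, together with charge neutrality and ultra-relativistic electrons, $\mu_e=\hbar c\,(3\pi^2\rho x_p)^{1/3}$. The $E_{\rm{sym}}$ parametrization and its coefficients are hypothetical placeholders:

```python
import numpy as np

hbarc = 197.327                 # MeV*fm
rho0 = 0.16                     # fm^-3

def Esym(rho):                  # illustrative parametrization; placeholder values
    chi = (rho - rho0) / (3.0 * rho0)
    return 31.7 + 58.7 * chi - 50.0 * chi**2

def beta_eq_xp(rho):
    """Proton fraction x_p of npe matter in beta equilibrium:
    4*(1 - 2*x_p)*Esym(rho) = mu_e = hbarc*(3*pi^2*rho*x_p)**(1/3)."""
    def f(x):
        return 4.0 * (1.0 - 2.0 * x) * Esym(rho) \
            - hbarc * (3.0 * np.pi**2 * rho * x) ** (1.0 / 3.0)
    lo, hi = 1e-12, 0.5         # f(lo) > 0 > f(hi); f is monotone decreasing
    for _ in range(200):        # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

xp = beta_eq_xp(2.0 * rho0)
print(round(xp, 3))             # proton fraction at 2*rho0 with these inputs
```

With these placeholder coefficients the resulting $x_p$ at $2\rho_0$ stays below the $x_p\approx 1/9$ direct-URCA threshold, i.e., fast cooling would not open; a stiffer $E_{\rm{sym}}$ raises $x_p$ and can open the channel.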
While it is very challenging to extract the density dependence of nuclear symmetry energy $E_{\rm{sym}}(\rho )$ from terrestrial experiments and/or astrophysical observations for many scientific and technical reasons, much progress has been made over the last two decades. For example, by 2013 there were at least 28 analyses of terrestrial nuclear laboratory experiments and astrophysical observations to extract the magnitude $E_{\rm{sym}}(\rho_0)$ and slope $L(\rho_0)$ of symmetry energy at saturation density $\rho_0$. Assuming all studies are equally reliable/respectable, these analyses together indicate that the fiducial values of $E_{\rm{sym}}(\rho_0)$ and $L(\rho_0)$ are, respectively, $E_{\rm{sym}}(\rho_0)=31.6\pm 2.66$ MeV and $L=59\pm 16$ MeV\,\citep{Li13}. In 2016, a survey of 53 analyses \citep{Oertel17} found the new fiducial values are $E_{\rm{sym}}(\rho_0)=31.7 \pm 3.2$~MeV and $L = 58.7 \pm 28.1$~MeV, respectively.
These results are consistent with the earlier ones albeit with a larger uncertainty for $L$ as more diverse analyses were included. Most of the post-GW170817 analyses of NS radii and/or tidal deformability
found $L$ values generally consistent with the above values; see, e.g., refs. \citep{BALI19,Baiotti} for reviews. Interestingly, predictions of some of the latest state-of-the-art microscopic nuclear many-body theories are in very good agreement with the above fiducial values. For example, using a novel Bayesian approach to quantify the truncation errors in chiral effective field theory (EFT) predictions for pure neutron matter and a many-body perturbation theory with consistent nucleon-nucleon and three-nucleon interactions up to fourth order in the EFT expansion, the $E_{\rm{sym}}(\rho_0)$ and $L(\rho_0)$ are found to be $E_{\rm{sym}}(\rho_0)=31.7 \pm 1.1$~MeV and $L = 59.8 \pm 4.1$~MeV, respectively \citep{Ohio20}. It thus seems that the values of $E_{\rm{sym}}(\rho_0)$ and slope $L(\rho_0)$ at saturation density $\rho_0$ are converging nicely, while there is certainly a need to better understand and reduce both the statistical and systematic errors. It is worth noting that at sub-saturation densities, various studies on nuclear structures and reactions are also making significant progress \citep{Jorge14,Colo14,X18}. In particular, many studies of neutron-skins of heavy nuclei using various approaches including the parity violating electron-nucleus scatterings provide constraints on the symmetry energy around $2/3\rho_0$, see, e.g., refs. \citep{ZZ13,Vin14,India}. The latter has its own importance in both nuclear physics and astrophysics and can be extrapolated to somewhat higher densities near $\rho_0$.
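As a simple illustration of how such independent extractions are combined into fiducial values, the sketch below forms an inverse-variance weighted average of the three $L$ values quoted above. Treating each quoted uncertainty as an independent $1\sigma$ Gaussian error is itself a strong assumption, since the underlying analyses partly share data and methods:

```python
import numpy as np

# (value, 1-sigma) for L from the three extractions quoted in the text
L_vals  = np.array([59.0, 58.7, 59.8])     # MeV
L_sigma = np.array([16.0, 28.1, 4.1])      # MeV

w = 1.0 / L_sigma**2                       # inverse-variance weights
L_mean = np.sum(w * L_vals) / np.sum(w)
L_err  = 1.0 / np.sqrt(np.sum(w))
print(f"L = {L_mean:.1f} +/- {L_err:.1f} MeV")  # -> L = 59.7 +/- 3.9 MeV
```

As expected, the most precise input (the chiral EFT prediction) dominates the weighted result, and the combined uncertainty is slightly smaller than the smallest individual one.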
\begin{figure}[!hpbt]
\centering
\vspace{1.cm}
\includegraphics[scale=0.5]{Fig1-Esym2.eps}
\vspace{0.5cm}
\caption{Nuclear symmetry energy at twice the saturation density of nuclear matter deduced from energetic heavy-ion reactions in terrestrial laboratories and observations of neutron stars, see, text for details.}
\label{Esym-survey}
\end{figure}
It is also very encouraging to note that recent analyses of heavy-ion reaction experiments in terrestrial laboratories and properties of neutron stars from multiple messengers have led to some new progress in constraining the $E_{\rm{sym}}(\rho )$ up to about twice the saturation density. For example, shown in Fig. \ref{Esym-survey} are the values of symmetry energy at $2\rho_0$, i.e., $E_{\rm{sym}}(2\rho_0)$, from (1) the FOPI-LAND \citep{Rus11} and (2) the ASY-EOS \citep{Rus16} Collaborations by analyzing the relative flows and yields of light mirror nuclei as well as neutrons and protons in heavy-ion collisions at beam energies of 400 MeV/nucleon, (3) (Chen) an extrapolation of the systematics of low-density symmetry energy \citep{ChenLW15}, (4) (Zhang \& Li) direct inversions of observed NS radii, tidal deformability and maximum mass in the high-density EOS space \citep{Zhang18,Zhang19epj,Zhang19apj}, (5) (Xie \& Li) a Bayesian inference from the radii of canonical NSs observed by using X-rays and gravitational waves from GW170817 \citep{Xie19}, (6) (Zhou, Chen \& Zhang) analyses of NS radii, tidal deformability and maximum mass within an extended Skyrme Hartree-Fock approach (eSHF) \citep{LWChen19,YZhou19}, (7) (Nakazato \& Suzuki) an analysis of the cooling timescales of protoneutron stars as well as the radius and tidal deformability of GW170817 \citep{Nakazato19}, and (8) a Bayesian inference directly from the X-ray data of 7 quiescent low mass X-ray binaries in globular clusters \citep{Baillot19}. Despite the rather different assumptions and methods used in analyzing the different types of laboratory and observational data, it is very interesting to see that they are all consistent with a fiducial value of $E_{\rm{sym}}(2\rho_0)=47$ MeV within the still relatively large error bars of the individual analyses.
Moreover, several recent theoretical studies have also predicted values of $E_{\rm{sym}}(2\rho_0)$ consistent with its fiducial value of 47 MeV. For example, an upper bound of $E_{\rm{sym}}(2\rho_0) \leq 53.2$ MeV was derived recently in Ref. \citep{PKU-Meng} by studying the radii of neutron drops using state-of-the-art nuclear energy density functional theories. Quantum Monte Carlo calculations using local interactions derived from chiral EFT up to next-to-next-to-leading order predicted a value of $E_{\rm{sym}}(2\rho_0) \approx 46 \pm 4$ MeV \citep{Diego}, while the latest many-body perturbation theory calculations with consistent nucleon-nucleon and three-nucleon interactions up to fourth order in the EFT expansion predicted a value of $E_{\rm{sym}}(2\rho_0) \approx 45 \pm 3$ MeV \citep{Ohio20}.
They are both consistent with the fiducial value of $E_{\rm{sym}}(2\rho_0)=47$ MeV and have much smaller error bars. It is worth noting that the chiral EFT is currently applicable to a maximum density of about $2\rho_0$.
So, what is the main remaining problem with the symmetry energy of super-dense neutron-rich matter? Besides the large error bars of $E_{\rm{sym}}(2\rho_0)$ shown in Fig. \ref{Esym-survey}, detailed analyses
both by directly inverting the radii and/or tidal deformability of canonical NSs in the high-density EOS parameter space \citep{Zhang19epj,Zhang19jpg,Zhang19apj} and by Bayesian statistical inference of these NS observables \citep{Xie19} have shown clearly that the macroscopic properties of canonical NSs do not constrain the symmetry energy at densities above about $2\rho_0$. In particular, as we shall demonstrate, the skewness of the symmetry energy characterizing its behavior above $2\rho_0$ is not constrained by the radii and/or tidal deformability of canonical NSs with masses around 1.4 M$_{\odot}$. This is mainly because
both the radii and tidal deformability of these NSs are mostly sensitive to the pressure at densities around $(1-2)\rho_0$ \citep{Lattimer00}. It was demonstrated clearly within both relativistic mean-field and Skyrme Hartree-Fock energy density functional theories that both the tidal deformability \citep{Fattoyev13} and radii \citep{Fattoyev14} of NSs heavier than 1.4M$_{\odot}$ have much stronger sensitivity to the high-density behavior of nuclear symmetry energy.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Fig2-MRD.eps}
}
\caption{Left: representative mass-radius correlations considered for massive NSs. Right: the corresponding average density in NSs of mass M scaled by that of canonical NSs of mass M$_{1.4}\equiv$1.4M$_{\odot}$ as a function of the mass ratio $M/M_{1.4}$.}\label{MRD}
\end{center}
\end{figure*}
\begin{deluxetable}{lllll}
\tablecolumns{5}
\tablecaption{Imagined massive NS radii at 90\% confidence level}
\tablewidth{0pt}
\tablehead{&\colhead{$R_{1.4}$} & \colhead{$R_{1.6}$} & \colhead{$R_{1.8}$}& \colhead{$R_{2.0}$ (km)}
}
\startdata
\label{tab-data}
Reference & 11.9$\pm1.4$ & & & \\
case-1 & 11.9$\pm1.4$ &11.3$\pm1.4$ &10.7$\pm1.4$ &10.2$\pm1.4$ \\
case-2 & 11.9$\pm1.4$ &11.9$\pm1.4$ &11.9$\pm1.4$ &11.9$\pm1.4$ \\
case-3 & 11.9$\pm1.4$ &12.5$\pm1.4$ &13.1$\pm1.4$ &13.8$\pm1.4$ \\
\enddata
\end{deluxetable}
So, what is new in this work? It was speculated earlier that to constrain the symmetry energy significantly above $2\rho_0$, one may have to study the radii of more massive NSs and/or additional messengers especially those directly from NS cores or emitted during collisions between either two NSs in space or two heavy nuclei in the laboratory \citep{BALI19,Xie19}. Using as references the posterior probability distribution functions (PDFs) of EOS parameters as well as the corresponding $E_{0}(\rho)$ and $E_{\rm{sym}}(\rho )$ determined by the GW170817 and recent NICER data for PSR J0030+0451 within a Bayesian statistical approach, here we examine how future radius measurements for massive NSs in the region of 1.4M$_{\odot}$ to 2.0M$_{\odot}$ may provide useful new information about the EOS especially its symmetry energy term at densities above $2\rho_0$. More specifically, we use as imagined data in our Bayesian analyses the radii of three massive NSs of mass 1.6 M$_{\odot}$, 1.8 M$_{\odot}$ and 2.0 M$_{\odot}$ together with the reference radius $R_{1.4}$ for canonical NSs from GW170817 \citep{LIGO18} as listed in Table \ref{tab-data} along the three representative lines shown in the left window of Fig. \ref{MRD}. The radius as a function of mass along the three lines can be described approximately by
\begin{equation}\label{MRD-e}
R(M)~ ({\rm km})=\left
\{\begin{array}{ll}
R_{1.4}-4.2(M/M_{1.4}-1),~~&{\rm case-1,}\\
R_{1.4}=11.9\pm 1.4,~~&{\rm case-2,}\\
R_{1.4}+4.2(M/M_{1.4}-1),~~&{\rm case-3.}
\end{array}\right.
\end{equation}
The corresponding average densities scaled by that of a canonical NS of mass 1.4 M$_{\odot}$, i.e.,
\begin{equation}
\rho_{M}/\rho_{1.4}\equiv (M/M_{1.4})\cdot (R_{1.4}/R_M)^3,
\end{equation}
are shown in the right window for the three cases considered. In case-1, where the radius decreases with increasing mass as predicted by many models, the average density increases by a factor of more than 2 going from canonical to 2 M$_{\odot}$ NSs, providing the best chance of probing super-dense NS matter. In case-2, the radius is independent of mass and the average density increases with increasing mass relatively slowly, solely due to the increase in mass. This case is also predicted by many theories, and this assumption was actually used in a number of analyses of X-ray data. Case-3 is often predicted by models considering strangeness and/or hadron-quark phase transitions in NSs. In this case, the average density decreases slightly with increasing NS mass. All together, the three cases represent diverse model predictions. Moreover, the corresponding average density in NSs changes from case to case over a broad range. Of course, not all available theoretical predictions go through the reference point for canonical NSs as we require here.
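For concreteness, the three assumed mass-radius relations of Eq. (\ref{MRD-e}) and the corresponding average-density ratios can be evaluated with a few lines of Python; a minimal sketch using the central value $R_{1.4}=11.9$ km (illustrative, not part of the analysis code):

```python
# Average-density ratio rho_M/rho_1.4 for the three assumed
# mass-radius relations R(M) of Eq. (MRD-e).

R14 = 11.9  # central value of R_1.4 in km

def radius(m_ratio, case):
    """R(M) in km as a function of M/M_1.4 for case-1, -2 or -3."""
    slope = {1: -4.2, 2: 0.0, 3: +4.2}[case]
    return R14 + slope * (m_ratio - 1.0)

def density_ratio(m_ratio, case):
    """Average density scaled by that of a canonical 1.4 Msun NS."""
    return m_ratio * (R14 / radius(m_ratio, case)) ** 3

for case in (1, 2, 3):
    m_ratio = 2.0 / 1.4  # a 2.0 Msun NS
    print(f"case-{case}: R = {radius(m_ratio, case):.1f} km, "
          f"rho/rho_1.4 = {density_ratio(m_ratio, case):.2f}")
```

For case-1 this reproduces the more-than-twofold increase of the average density quoted above.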
While the latest NS maximum mass $M=2.14^{+0.10}_{-0.09}$~M$_\odot$ from observations of PSR~J0740+6620 \citep{M217} is rather precise, the available radius data of some massive NSs studied so far suffer from some systematic uncertainties, see, e.g., discussions in ref. \citep{Steiner18}. Fortunately, NICER and several more advanced X-ray observatories proposed are expected to measure much more precisely the radii of NSs in a broad mass range \citep{wp1,Strobe,wp2,Watts19}. It is thus useful to know what new physics can be extracted from future radius data of massive NSs compared to what we have already learned from studying the radii of canonical NSs. Moreover, if one considers the case-2 as the mean, the case-1 and case-3 as the lower and upper $1\sigma$ systematic error bounds of radius measurements, our imaginary data in Table \ref{tab-data} represent an approximately $\pm 15\%$ systematic error on top of the $\pm 4\%$ statistical error for NSs with mass 2.0 M$_{\odot}$. Comparing results of Bayesian inferences using the three typical cases will help us understand how the systematic errors in measuring the radii of massive NSs may affect what EOS information one can infer reliably.
So, what are the most important and interesting findings in this work? We find that the 68\% confidence boundaries of the SNM EOS $E_{0}(\rho)$ become only slightly stiffer going from case-1 to case-2 and then to case-3, indicating that the $\pm 15\%$ systematic error in measuring the radii of massive NSs will not affect much the accuracy of extracting the SNM EOS at supra-saturation densities. In contrast, the corresponding symmetry energy $E_{\rm{sym}}(\rho )$ gradually becomes significantly stiffer. In particular, the PDFs of parameters characterizing the high-density $E_{\rm{sym}}(\rho )$ are significantly different, indicating that the radii of massive NSs have strong potential for tightly constraining the $E_{\rm{sym}}(\rho )$ above $2\rho_0$, with little influence from the remaining uncertainties of the SNM EOS at supra-saturation densities.
In the following, we shall first summarize the main ingredients of our Bayesian approach using an explicitly isospin-dependent parametric EOS for NSs containing neutrons, protons, electrons and muons (i.e., the $npe\mu$ model). We then establish the reference PDFs of EOS model parameters for canonical NSs using the radius data from LIGO/VIRGO and NICER Collaborations. We then compare the posterior PDFs and correlations of EOS parameters as well as the resulting 68\% confidence boundaries of the $E_{0}(\rho)$ and $E_{\rm{sym}}(\rho )$ for the three cases with respect to the reference. We also examine effects of the prior ranges of the poorly known high-density EOS parameters on inferring their posterior PDFs from the radii of massive NSs. A summary will be given at the end.
\section{Theoretical framework}
For completeness and ease of discussions, here we briefly recall the Bayesian inference approach using an isospin-dependent parametric EOS for the core of NSs consisting of nucleons, electrons and muons. More details can be found in our earlier publication \citep{Xie19}. The $npe\mu$ model is the minimum model for the core of NSs. Certainly, in super-dense matter new particles and/or phases may appear. Results of our study within the minimum model thus have to be understood within the model limitations. Nevertheless, we feel that our results establish a useful baseline for future studies including more degrees of freedom and new phases.
\subsection{Isospin-dependent parameterizations for the core EOS of NSs}
Within the $npe\mu$ model, the pressure is written in terms of the nucleon number density $\rho$ and isospin asymmetry $\delta$ as
\begin{equation}\label{pressure}
P(\rho, \delta)=\rho^2\frac{d\epsilon(\rho,\delta)/\rho}{d\rho},
\end{equation}
where $\epsilon(\rho, \delta)=\epsilon_n(\rho, \delta)+\epsilon_l(\rho, \delta)$ denotes the energy density with $\epsilon_n(\rho, \delta)$ and $\epsilon_l(\rho, \delta)$ being respectively the energy densities of nucleons and leptons. While the $\epsilon_l(\rho, \delta)$ is calculated using the noninteracting Fermi gas model \citep{Oppenheimer39}, the $\epsilon_n(\rho, \delta)$ is related to the energy per nucleon $E(\rho, \delta)$ and the average mass of nucleons $M_N$ via
\begin{equation}\label{lepton-density}
\epsilon_n(\rho, \delta)=\rho [E(\rho,\delta)+M_N].
\end{equation}
We parameterize the two parts of $E(\rho, \delta)$ according to
\begin{eqnarray}\label{E0para}
E_{0}(\rho)&=&E_0(\rho_0)+\frac{K_0}{2}(\frac{\rho-\rho_0}{3\rho_0})^2+\frac{J_0}{6}(\frac{\rho-\rho_0}{3\rho_0})^3,\\
E_{\rm{sym}}(\rho)&=&E_{\rm{sym}}(\rho_0)+L(\frac{\rho-\rho_0}{3\rho_0})+\frac{K_{\rm{sym}}}{2}(\frac{\rho-\rho_0}{3\rho_0})^2
+\frac{J_{\rm{sym}}}{6}(\frac{\rho-\rho_0}{3\rho_0})^3\label{Esympara}
\end{eqnarray}
where $E_0(\rho_0)=-15.9 \pm 0.4$ MeV \citep{Brown14} is the nuclear binding energy at $\rho_0$.
As discussed in detail in refs. \citep{Zhang18,Zhang19epj}, these parameterizations are purposely chosen to have the same forms as if we are Taylor expanding known energy functionals. But they are just parameterizations of unknown functions. The parameters will be inferred (backward modeling) from Bayesian analyses of observational data, while in Taylor expansions they are calculated from known functions.
Compared to some other parameterizations widely used in the literature, such as the piece-wise polytropes for the pressure as a function of density that is composition-blind, by first parameterizing separately the $E_{0}(\rho)$ and $E_{\rm{sym}}(\rho )$ then reconstructing the pressure as a function of density at $\beta$ equilibrium, although being more complicated we can explore self-consistently the composition of super-dense neutron-rich matter. Actually, this is absolutely necessary to extract information about the symmetry energy at high densities. Moreover, parameterizing the $E_{0}(\rho)$ and $E_{\rm{sym}}(\rho )$ in Taylor forms has the advantage that we can directly use existing predictions of nuclear many-body theories and/or indications of nuclear experiments in setting the prior ranges of the EOS parameters to be inferred from astrophysical observations. Mathematically, the two parameterizations naturally become the Taylor expansions of the unknown functions $E_{0}(\rho)$ and $E_{\rm{sym}}(\rho )$ when the density approaches $\rho_0$. Therefore, one may consider the above two expressions as having the dual meanings of being Taylor expansions around $\rho_0$ on one hand, and on the other hand being purely parameterizations far from $\rho_0$. Near $\rho_0$, the EOS parameters then obtain their asymptotic meaning one normally gives to the coefficients of Taylor expansions of known functions. Namely, the $K_0$ parameter represents the incompressibility of SNM $K_0=9\rho_0^2[\partial^2 E_0(\rho)/\partial\rho^2]|_{\rho=\rho_0}$ and the $J_0$ parameter represents the skewness of SNM $J_0=27\rho_0^3[\partial^3 E_0(\rho)/\partial\rho^3]|_{\rho=\rho_0}$ at saturation density. 
The four parameters involved in the $E_{\rm{sym}}(\rho)$ denote the magnitude $E_{\rm{sym}}(\rho_0)$, slope $L=3\rho_0[\partial E_{\rm{sym}}(\rho)/\partial\rho]|_{\rho=\rho_0}$, curvature $K_{\rm{sym}}=9\rho_0^2[\partial^2 E_{\rm{sym}}(\rho)/\partial\rho^2]|_{\rho=\rho_0}$ and skewness $J_{\rm{sym}}=27\rho_0^3[\partial^3 E_{\rm{sym}}(\rho)/\partial\rho^3]|_{\rho=\rho_0}$ of nuclear symmetry energy at saturation density, respectively. Although one usually uses the above asymptotic meanings to describe the EOS parameters in the literature, we emphasize again that here they are parameters to be extracted from data through the Bayesian analyses. As such, the two parameterizations can be used far above $\rho_0$ since they are not simply Taylor expansions near $\rho_0$. Besides the obvious limitations on the flexibility and computing costs of using different numbers of parameters, Bayesian analyses also depend on the amount of relevant data available. We shall thus also investigate how using different numbers of parameters may affect what we extract from the three data sets by turning on and off the $J_{\rm{sym}}$ term in our Bayesian analyses.
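As an illustration of how Eqs. (\ref{E0para}) and (\ref{Esympara}) are evaluated in practice, the following sketch computes $E_0(\rho)$ and $E_{\rm{sym}}(\rho)$ for one illustrative parameter set. The central prior values $E_{\rm{sym}}(\rho_0)=31.7$ MeV, $L=58.7$ MeV and $K_0=240$ MeV are taken from the ranges quoted later; the high-density parameters $K_{\rm{sym}}=-100$ MeV, $J_{\rm{sym}}=300$ MeV and $J_0=-180$ MeV are arbitrary choices within the prior ranges, not inferred results:

```python
RHO0 = 0.16  # saturation density in fm^-3

def e0(rho, E00=-15.9, K0=240.0, J0=-180.0):
    """SNM energy per nucleon in MeV, Eq. (E0para). Parameter values
    are illustrative choices within the prior ranges."""
    x = (rho - RHO0) / (3.0 * RHO0)
    return E00 + K0 / 2.0 * x**2 + J0 / 6.0 * x**3

def esym(rho, Esym0=31.7, L=58.7, Ksym=-100.0, Jsym=300.0):
    """Symmetry energy in MeV, Eq. (Esympara). Parameter values
    are illustrative choices within the prior ranges."""
    x = (rho - RHO0) / (3.0 * RHO0)
    return Esym0 + L * x + Ksym / 2.0 * x**2 + Jsym / 6.0 * x**3

print(esym(2 * RHO0))  # about 47.6 MeV for this illustrative set
```

Note that at $2\rho_0$ the expansion variable is $(\rho-\rho_0)/3\rho_0=1/3$, so this particular (arbitrary) parameter set happens to land near the fiducial $E_{\rm{sym}}(2\rho_0)\approx 47$ MeV discussed in the introduction.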
We also note here that the density profile of isospin asymmetry $\delta(\rho)$ (or the corresponding proton fraction $x_p(\rho)$) at density $\rho$ within broad ranges of the symmetry energy parameters have been studied in detail in refs. \citep{Zhang18,Zhang19epj}. The relative particle fractions in NSs at $\beta$ equilibrium are obtained through the condition
$
\mu_n-\mu_p=\mu_e=\mu_\mu
$
and the charge neutrality condition
$ \rho_p=\rho_e+\rho_\mu
$
for the proton density $\rho_p$, electron density $\rho_e$, and muon density $\rho_{\mu}$, respectively.
The chemical potential of particle $i$ is given by
$
\mu_i=\frac{\partial\epsilon(\rho,\delta)}{\partial\rho_i}.
$
The most important information for this study is that a soft/stiff symmetry energy at a given density will make the matter there more/less neutron-rich due to the $E_{\rm{sym}}(\rho )\cdot \delta ^{2}$ term in the EOS of isospin asymmetric matter in Eq. (\ref{eos}).
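A minimal numerical sketch of the $\beta$-equilibrium condition illustrates this point. It keeps only electrons (muons neglected), uses the parabolic approximation $\mu_n-\mu_p=4\delta E_{\rm{sym}}(\rho)$ together with ultra-relativistic degenerate electrons, and solves for $\delta(\rho)$ by bisection; the symmetry-energy parameters are illustrative, not inferred values:

```python
import math

HBARC = 197.327  # MeV fm
RHO0 = 0.16      # fm^-3

def esym(rho, Esym0=31.7, L=58.7, Ksym=-100.0, Jsym=300.0):
    """Illustrative symmetry energy in MeV, Eq. (Esympara)."""
    x = (rho - RHO0) / (3.0 * RHO0)
    return Esym0 + L * x + Ksym / 2.0 * x**2 + Jsym / 6.0 * x**3

def beta_eq_delta(rho):
    """Isospin asymmetry delta(rho) from mu_n - mu_p = 4*delta*Esym = mu_e
    (npe matter only); mu_e is the ultra-relativistic electron chemical
    potential with rho_e = rho_p = rho*(1-delta)/2. Solved by bisection."""
    def f(d):
        mu_e = HBARC * (3.0 * math.pi**2 * rho * (1.0 - d) / 2.0) ** (1.0 / 3.0)
        return 4.0 * d * esym(rho) - mu_e
    lo, hi = 0.0, 1.0  # f(0) < 0 and f(1) > 0, so a root is bracketed
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these illustrative parameters one finds $\delta\approx 0.9$ (a proton fraction of a few percent) around $\rho_0$, and a smaller $\delta$ at $2\rho_0$ where this particular $E_{\rm{sym}}(\rho)$ is stiffer, in line with the statement above.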
It is also worth noting that the parameterized $E_{0}(\rho)$ and $E_{\rm{sym}}(\rho )$ may not always go to zero mathematically as $\rho\rightarrow 0$ when the parameters are randomly selected in Bayesian analyses. Nevertheless, this does not create a physical problem as the parameterizations are only used in constructing the EOS for the core of NSs. In fact, we use the NV EOS \citep{Negele73} for the inner crust and the BPS EOS \citep{Baym71b} for the outer crust. The crust-core transition density and pressure are determined consistently from the same parametric EOS for the core. This is achieved by investigating the thermodynamical instability of the uniform matter in the NS core as detailed in ref. \citep{Zhang18}. Namely, when the incompressibility of the $npe\mu$ matter in the core becomes negative at low densities, the uniform matter becomes unstable against the formation of clusters, indicating the transition from core to crust \citep{Kubis04,Kubis07,Lattimer07,Xu09}.
\subsection{Bayesian inference}
As discussed above, the six parameters in Eqs. (\ref{E0para}) and (\ref{Esympara}) will be inferred from NS properties through Bayesian analyses. For completeness, we recall here the Bayesian theorem
\begin{equation}\label{Bay1}
P({\cal M}|D) = \frac{P(D|{\cal M}) P({\cal M})}{\int P(D|{\cal M}) P({\cal M})d\cal M},
\end{equation}
where the denominator is the normalization constant. The $P({\cal M}|D)$ represents the posterior PDF of the model $\cal M$ given the data set $D$. The $P(D|{\cal M})$ is the likelihood function obtained by comparing predictions of the model $\cal M$ with the data $D$, while the $P({\cal M})$ is the prior PDF of the model $\cal M$.
\begin{deluxetable}{lcc}
\tablecolumns{3}
\tablecaption{Prior ranges of the six EOS parameters used}\label{tab-pri}
\tablewidth{0pt}
\tablehead{
\colhead{Parameters (MeV)} & \colhead{Lower limit} & \colhead{Upper limit}
}
\startdata
$K_0$ & 220 & 260 \\
$J_0$ & -800 & 400 \\
$K_{\mathrm{sym}}$ & -400 & 100 \\
$J_{\mathrm{sym}}$ & -200 & 800 \\
$L$ & 30 & 90 \\
$E_{\mathrm{sym}}(\rho_0)$ & 28.5 & 34.9 \\
\enddata
\end{deluxetable}
The six EOS parameters are randomly sampled using flat prior PDFs between their minimum and maximum values listed in Table \ref{tab-pri}. The listed ranges are based on available indications from nuclear laboratory experiments and theoretical predictions. In particular, the values of $K_0$, $E_{\rm sym}(\rho_0)$ and $L$ are known to be around $K_0\approx 240 \pm 20$ MeV \citep{Shlomo06,Piekarewicz10,Garg18}, $E_{\rm sym}(\rho_0)=31.7\pm 3.2$ MeV and $L\approx 58.7\pm 28.1 $ MeV \citep{Li13,Oertel17}, respectively. The three high-density EOS parameters $K_{\rm{sym}}$, $J_{\rm{sym}}$ and $J_0$ are still poorly known, roughly within $-400 \leq K_{\rm{sym}} \leq 100$ MeV, $-200 \leq J_{\rm{sym}}\leq 800$ MeV, and $-800 \leq J_{0}\leq 400$ MeV \citep{Tews17,Zhang17}, respectively.
After generating the EOS parameters, $p_{i=1,2\cdots 6}$, one can construct the corresponding NS EOS model $\cal M$ as described earlier. Each NS EOS in the form of $P(\epsilon)$ is then used as an input to solve the Tolman-Oppenheimer-Volkov (TOV) NS structure equations \citep{Tolman34,Oppenheimer39}. The resulting mass-radius relation is then used in evaluating the likelihood of this set of EOS parameters. The radius data $D$ we shall use are summarized in Table \ref{tab-data}. The likelihood function measures the ability of the model $\cal M$ to reproduce the observational data. In the present work, we use
\begin{equation}\label{Likelihood}
P[D|{\cal M}(p_{1,2,\cdots 6})]=P_{\rm{filter}} \times P_{\rm{mass,max}} \times P_{\rm{radius}},
\end{equation}
where the $P_{\rm{filter}}$ is a filter selecting EOS parameter sets satisfying the following conditions: (i) the crust-core transition pressure stays positive; (ii) at all densities, the thermodynamical stability condition (i.e., $dP/d\varepsilon\geq0$) and the causality condition (i.e., the speed of sound is always less than that of light) are satisfied. The $P_{\rm{mass,max}}$ stands for the requirement that each accepted EOS has to be stiff enough to support the observed NS maximum mass $M_{\rm{max}}$. In our previous work \citep{Xie19}, we studied effects of using 1.97 M$_{\odot}$, 2.01 M$_{\odot}$ and 2.17 M$_{\odot}$ for $M_{\rm{max}}$ on extracting the EOS parameters; to be consistent with the reference data point $R_{1.4}=11.9\pm 1.4$ km extracted by the LIGO/VIRGO Collaborations from GW170817 by assuming $M_{\rm{max}}$=1.97 M$_{\odot}$ \citep{LIGO18}, we adopt $M_{\rm{max}}$=1.97 M$_{\odot}$ in the present analysis. The $P_{\rm{radius}}$ is the probability for the chosen EOS model to reproduce the NS radius data. Depending on the number of data points in each data set along a mass-radius sequence (i.e., each case listed in Table \ref{tab-data}), the $P_{\rm{radius}}$ may be a product of several Gaussian functions. It can generally be written as
\begin{equation}\label{Likelihood-radius}
P_{\rm{radius}}=\prod_{j=1}^{n}\frac{1}{\sqrt{2\pi}\sigma_{\mathrm{obs},j}}\exp\left[-\frac{(R_{\mathrm{th},j}-R_{\mathrm{obs},j})^{2}}{2\sigma_{\mathrm{obs},j}^{2}}\right],
\end{equation}
where $\sigma_{\mathrm{obs},j}$ represents the $1\sigma$ error bar of observation $j$, and $n$ is the total number of data points used. For example, $n$ is 1 for our reference, where there is only one data point, and 4 for each of the three cases listed in Table \ref{tab-data}.
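The likelihood of Eq. (\ref{Likelihood-radius}) is straightforward to evaluate; a minimal sketch:

```python
import math

def radius_likelihood(r_th, r_obs, sigma_obs):
    """Product of Gaussian likelihoods over the n radius data points,
    Eq. (Likelihood-radius). All arguments are sequences of equal length."""
    p = 1.0
    for rt, ro, s in zip(r_th, r_obs, sigma_obs):
        p *= math.exp(-(rt - ro) ** 2 / (2.0 * s ** 2)) / (math.sqrt(2.0 * math.pi) * s)
    return p
```

For the reference case the sequences contain the single entry $R_{1.4}=11.9$ km with its error bar, while for the three imagined cases they contain the four entries of Table \ref{tab-data}.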
A Markov-Chain Monte Carlo (MCMC) approach with the Metropolis-Hastings algorithm is used to simulate the posterior PDF of the model parameters. The PDFs of all individual EOS parameters and the two-parameter correlations are calculated by integrating over all other parameters. For example, the PDF for the $i$th parameter $p_i$ is given by
\begin{equation}\label{Bay3}
P(p_i|D) = \frac{\int P(D|{\cal M}) dp_1dp_2\cdots dp_{i-1}dp_{i+1}\cdots dp_6}{\int P(D|{\cal M}) P({\cal M})dp_1dp_2\cdots dp_6}.
\end{equation}
Numerically, we have to discard the initial samples in the so-called burn-in period because the MCMC does not sample from the equilibrium distribution in the beginning \citep{Trotta17}. It was found that 40,000 burn-in steps are enough, as in our recent work \citep{Xie19}. We thus throw away the first 40,000 steps and use the remaining one million steps for calculating the posterior PDFs of the six EOS parameters in the present analysis.
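The sampling procedure can be sketched with a toy random-walk Metropolis-Hastings implementation with flat priors and a burn-in period. This is illustrative only; the step sizes, chain lengths and other details of the production sampler differ:

```python
import math
import random

def metropolis_hastings(log_like, lo, hi, n_steps=100_000, burn_in=40_000, step=0.1):
    """Minimal random-walk Metropolis-Hastings sampler with flat priors on
    the boxes [lo[i], hi[i]]. Samples generated during the burn-in period
    are discarded, as discussed in the text. Toy sketch only."""
    theta = [(l + h) / 2.0 for l, h in zip(lo, hi)]
    ll = log_like(theta)
    chain = []
    for i in range(n_steps):
        # Gaussian proposal scaled to each parameter's prior width
        prop = [t + random.gauss(0.0, step * (h - l))
                for t, l, h in zip(theta, lo, hi)]
        # flat prior: reject any proposal outside the prior box
        if all(l <= p <= h for p, l, h in zip(prop, lo, hi)):
            ll_prop = log_like(prop)
            if ll_prop >= ll or random.random() < math.exp(ll_prop - ll):
                theta, ll = prop, ll_prop
        if i >= burn_in:
            chain.append(list(theta))
    return chain
```

Marginal PDFs such as Eq. (\ref{Bay3}) are then obtained simply by histogramming the retained chain samples in the parameter of interest.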
\section{Results and discussions}
In this section, we present and discuss results of inferring the EOS parameters from the three sets of imagined radii of massive NSs. First we shall establish a reference using the radius data of canonical NSs measured by LIGO/VIRGO and NICER.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.85\textwidth}{!}{
\includegraphics{Fig3.eps}
}
\caption{The posterior PDFs of NS EOS parameters from the two data sets indicated in comparison with their prior PDFs.}\label{nicer-fig1}
\end{center}
\end{figure*}
\subsection{Establish the reference: PDFs of EOS parameters from radii of canonical neutron stars}
After the historic observation of the GW170817 binary NS merger event, an exciting flood of interesting papers appeared. Many studies using various approaches have extracted from the reported tidal deformability the radius $R_{1.4}$ of canonical NSs in the range of about 8.5 to 13.8 km; see, e.g., Fig. 41 in Ref. \citep{BALI19} for comparisons of the radii from 18 analyses carried out by mid-2019. The latest study combining multimessenger observations of GW170817 with many-body theory predictions using nuclear forces based on the chiral EFT found a more precise value of $R_{1.4}=11.0^{+0.9}_{-0.6}$ km at 90\% confidence level \citep{Capano20}. Most of the analyses indicate that the radius extracted is independent of the masses of the two NSs involved in GW170817. For example, the principal NS in GW170817 has a mass between 1.36 and 1.58 M$_{\odot}$, while the mass of the secondary NS is between 1.18 and 1.36 M$_{\odot}$ \citep{LIGO18}. In one of the first analyses \citep{LIGO18}, which initially assumed mass-dependent radii within two models, it was found that the radii of the two NSs are basically the same, independent of their masses. After enforcing the requirement that all EOSs have to support NSs at least as massive as 1.97 M$_{\odot}$, both models lead to the same radius $R=11.9\pm 1.4$ km independent of the masses of the two NSs involved. In the independent analysis of GW170817 in ref. \citep{De18}, which started by explicitly assuming the two NSs have the same radius, the inferred radius was found to be independent of the prior Gaussian mass distributions centered around 1.33, 1.49 or 1.54 M$_{\odot}$. These studies clearly indicate that it is reasonable to assume that the radii of canonical NSs with masses around 1.4 M$_{\odot}$ are the same. Thus, as a reference for our study, we use $R_{1.4}=11.9\pm 1.4$ km from LIGO/VIRGO as a common and first data point as shown in Table \ref{tab-data}.
It is exciting that the mass and radius of PSR J0030+0451 have recently been measured simultaneously by the NICER Collaboration. Two analyses of their data found the mass and radius to be, respectively,
$M=1.44^{+0.15}_{-0.14}$ M$_{\odot}$ and $R=13.02^{+1.24}_{-1.06}$ km \citep{Miller19}, and $M=1.34^{+0.16}_{-0.15}$ M$_{\odot}$ and $R=12.71^{+1.19}_{-1.14}$ km \citep{Riley19}.
A recent study in ref. \citep{Zhang20}, by directly inverting the radius in the high-density EOS parameter space of $J_0-K_{\rm{sym}}-J_{\rm{sym}}$, indicates that the NICER data provide constraints on the high-density EOS parameters similar to those from the NS tidal deformability of GW170817. As a comparison, we shall also calculate the PDFs of the six EOS parameters by combining the LIGO/VIRGO and NICER radius data. More specifically, for the NICER data we use the radius $R_{\mathrm{obs},j}=12.71$ km with $\sigma_{\mathrm{obs},j}=1.16$ km. In our model calculation with each generated EOS, we take the average radius for NSs with masses from 1.19 M$_{\odot}$ to 1.5 M$_{\odot}$ and regard it as the theoretical $R_{\mathrm{th},j}$ in evaluating the likelihood function using the NICER data.
Shown in Fig. \ref {nicer-fig1} are the posterior PDFs of the six EOS parameters using the GW170817 only and the combined GW170817+NICER data, respectively.
To see clearly the relative contributions of the likelihood and prior to the posterior PDFs, the uniform prior PDFs used in this work for the EOS parameters are also shown.
First of all, the two data sets lead to almost the same PDFs, indicating a strong consistency of the two observations. In the following discussions, we will use the results from the GW170817 data alone as our reference. Secondly, the PDFs of $J_0$, $K_{\rm{sym}}$ and $L$ all have reasonably strong peaks. Compared to their flat prior PDFs in the original ranges, the radius data of canonical NSs have obviously already constrained these parameters significantly with respect to their prior ranges. However, the PDFs of the saturation-density parameters $K_0$ and $E_{\rm sym}(\rho_0)$ remain roughly the same as their prior PDFs. This is not surprising, since the radii of canonical NSs are known to be most sensitive to the variation of pressure around $(1-2)\rho_0$ \citep{Lattimer00}. Perhaps the most interesting result is the PDF of the $J_{\rm{sym}}$ parameter, which controls the behavior of nuclear symmetry energy above about $2\rho_0$. Overall, it favors a large positive value, mostly because of its correlation with the $K_{\rm{sym}}$, which favors a large negative value. The shoulder in the PDF of $J_{\rm{sym}}$ in its negative region is due to its correlation with $J_0$, as we shall discuss in more detail later. Moreover, it is seen that the PDF of $J_{\rm{sym}}$ peaks at the upper end of its prior range, i.e.,
$800$ MeV. We found that if we artificially enlarge its upper boundary, say to 1000 MeV, its most probable value will increase correspondingly to 1000 MeV. It indicates clearly that the radius data of canonical NSs do not constrain the $J_{\rm{sym}}$ parameter and the corresponding behavior of $E_{\rm sym}(\rho)$ above $2\rho_0$. This finding is consistent with the results from directly inverting the radius and/or the tidal deformability of canonical NSs in the $J_0-K_{\rm{sym}}-J_{\rm{sym}}$ high-density EOS space \citep{Zhang18,Zhang19epj,Zhang19apj}. This further illustrates the importance of investigating whether the radii of more massive NSs can do better.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Fig4.eps}
}
\caption{Posterior probability distribution functions of EOS parameters from the three sets of mass-dependent NS radius data shown in Fig. \ref{MRD} in comparison with their prior PDFs and the reference PDFs from GW170817 shown in Fig. \ref{nicer-fig1}.}\label{default-l1458}
\end{center}
\end{figure*}
\begin{figure*}[htb]
\begin{center}
\resizebox{0.497\textwidth}{!}{
\includegraphics{Fig5case1-Correlation.eps}
}
\resizebox{0.497\textwidth}{!}{
\includegraphics{Fig5case3-Correlation.eps}
}
\vspace{0.4cm}
\caption{Correlation functions of the high-density EOS parameters for the case-1 (left) and case-3 (right) described in the text.}\label{cor1}
\end{center}
\end{figure*}
\subsection{Posterior PDFs and correlations of EOS parameters from radii of massive neutron stars}
We now turn to inferring the PDFs of EOS parameters and their correlations from the three imagined radius data sets for massive NSs listed in Table \ref{tab-data}.
Shown in Fig. \ref{default-l1458} are the posterior PDFs of the six EOS parameters derived from the three cases in comparison with the reference and their prior PDFs discussed in the previous subsection.
Firstly, the posterior PDFs of $E_{\rm sym}(\rho_0)$ in all cases remain approximately the same as the flat prior. This simply indicates that the radii of massive NSs are not sensitive to the value of the symmetry energy at $\rho_0$, as one expects. For the case-2 where the radius is the same for all NSs considered, the PDFs of $J_0$, $J_{\mathrm{sym}}$ and $K_{\mathrm{sym}}$ are almost the same as for the reference from GW170817, while both $K_0$ and $L$ become smaller, indicating that both the SNM EOS $E_0(\rho)$ and the symmetry energy $E_{\rm sym}(\rho)$ become slightly softer compared to the reference, as we shall discuss in more detail. This observation is understandable. It was shown before that fixing the incompressibility $K_0$ of SNM but varying the slope $L$ of $E_{\rm sym}(\rho)$ at $\rho_0$ only changes the radii without changing the maximum mass of NSs, while fixing the symmetry energy but varying the $K_0$ only changes the NS maximum mass with little effect on the radii \citep{LiSteiner}. Under the common constraint that all EOSs have to be stiff enough to support NSs at least as massive as 1.97 M$_{\odot}$, and since in this case, as shown in Fig. \ref{MRD}, the average density increases by at most 50\% going from NSs of 1.4 M$_{\odot}$ to 2.0 M$_{\odot}$, both $K_0$ and $L$ only need to be reduced slightly to support all the NSs with masses up to 2.0 M$_{\odot}$ while having the same radius. We emphasize that the pressure is required to always increase with increasing density. When the density is known to increase from a canonical NS to a heavier one, the required EOS can become softer, as the pressure in a denser NS is naturally higher. All of these physical effects were incorporated consistently in the likelihood function discussed earlier.
Therefore, it is understandable that the PDFs of $K_0$ and $L$ inferred from the mass-radius correlation of case-2 shift slightly towards lower values, while the others remain approximately the same compared to the reference PDFs.
It is interesting to compare the results for case-1 and case-3. The posterior PDFs of both the SNM EOS parameters $K_0$ and $J_0$ and the symmetry energy parameters $J_{\mathrm{sym}}$, $K_{\mathrm{sym}}$ and $L$ shift to the left (right) for case-1 (case-3), indicating that the SNM EOS $E_0(\rho)$ and symmetry energy $E_{\rm sym}(\rho)$ are softer (stiffer) for case-1 (case-3).
This can be understood from the relative densities reached in the two cases shown in the right window of Fig. \ref{MRD}. With respect to canonical NSs of mass 1.4 M$_{\odot}$, the average density in case-1 increases significantly with the increasing mass, while in the case-3, it slightly decreases with increasing mass. Again, because the pressure increases with density, to support NSs with the same masses,
in the case where the density is higher the EOS can be softer.
We notice that there are secondary peaks or shoulders in the PDFs of $J_0$ and $J_{\mathrm{sym}}$. This is because of the strong anti-correlation between these two parameters. As discussed in detail in ref. \citep{Xie19}, not only may (anti)correlations exist between parameters of two adjacent terms in either $E_0(\rho)$ or $E_{\rm sym}(\rho)$, but cross-(anti)correlations may also exist among parameters used in the two functions. Mathematically, one expects to see (anti)correlations between two adjacent terms used to parameterize the same function, e.g., between $K_0$ and $J_0$, or between $L$ and $K_{\rm{sym}}$, when physical conditions are enforced. Physically, for very neutron-rich matter where the isospin asymmetry $\delta$ approaches 1, the $E_0(\rho)$ and $E_{\rm sym}(\rho)\cdot \delta^2$ terms in Eq. (\ref{eos}) may become equally important in contributing to the total pressure of NS matter. Then, there will be cross-correlations among parameters of $E_0(\rho)$ and $E_{\rm sym}(\rho)$. This most likely happens in dense matter when the symmetry energy becomes very soft (such that the matter is close to pure neutron matter with $\delta=1$).
Shown in Fig. \ref{cor1} are the correlation functions of the high-density EOS parameters for case-1 (left) and case-3 (right), respectively. In both cases, the anti-correlation between $J_0$ and $J_{\mathrm{sym}}$ is strongest at negative $J_0$ but positive $J_{\mathrm{sym}}$ values. Interestingly, corresponding to the secondary peak/shoulder of their PDFs, there is an appreciable anti-correlation between them at positive $J_0$ but negative $J_{\mathrm{sym}}$ values (where the symmetry energy is super-soft). This is because the required pressure in the high-density region to balance the gravity of NSs in the data set can come from either the $J_0$ or the $J_{\mathrm{sym}}$ term. The secondary peak/shoulder in case-1 is stronger than that in case-3 because only in the former case is the density high enough for the $J_{\mathrm{sym}}$ to play an important role and possibly lead to a super-soft symmetry energy. It is also seen that generally two adjacent $E_{\rm sym}(\rho)$ parameters, e.g., $L$ and $K_{\rm{sym}}$, or $K_{\rm{sym}}$ and $J_{\mathrm{sym}}$, are anti-correlated as one expects, while the relationships between two distant parameters, e.g., $L$ and $J_{\mathrm{sym}}$ or $L$ and $J_{0}$, are more complicated and normally weakly correlated.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.7\textwidth}{!}{
\includegraphics{Fig6-Jsym13.eps}
}
\caption{Posterior PDFs of $J_{\mathrm{sym}}$ for the case-1 and case-3 by varying the prior limits of $J_0$ and $J_{\mathrm{sym}}$ as indicated. The magenta lines are the results of using the default prior ranges.}\label{jsym13}
\end{center}
\end{figure*}
\subsection{Effects of prior ranges of high-density EOS parameters}
Comparing the PDFs of all EOS parameters in both case-1 and case-3, the most dramatic difference is in the shapes of the PDFs for $J_{\mathrm{sym}}$. For case-1, it has a major peak indicating that the most probable value of $J_{\mathrm{sym}}$ is around 340 MeV. For case-3, however, it peaks at the upper boundary of 800 MeV. As we discussed in the introduction, the $J_{\mathrm{sym}}$ parameter characterizing the high-density behavior of nuclear symmetry energy is so far not constrained by any experiment or observation. The prior range we used for it, $-200 \leq J_{\rm{sym}}\leq 800$ MeV, is based entirely on surveys of some theoretical predictions as mentioned before. While the situation for high-density SNM is better due to the progress in analyzing heavy-ion reaction experiments \citep{Xie20}, the $J_0$ also suffers from large uncertainties. It is thus necessary to examine how these two high-density EOS parameters affect the PDF of the $J_{\mathrm{sym}}$, especially for case-3 where the peak at the upper boundary of $J_{\mathrm{sym}}=800$ MeV looks suspicious.
Shown in Fig. \ref{jsym13} are the posterior PDFs of $J_{\mathrm{sym}}$ for case-1 and case-3 obtained by varying the prior limits of $J_0$ and $J_{\mathrm{sym}}$ as indicated. The magenta lines are the results of using the default prior ranges. By comparing the three calculations in each case, we can clearly see the effects of the upper bounds of both
$J_0$ and $J_{\mathrm{sym}}$. Since their PDFs vanish or are very small at their lower boundaries, as shown in Fig. \ref{default-l1458} already, it is not necessary to modify the lower boundaries of $J_0$ and $J_{\mathrm{sym}}$. In case-1, the PDF peak of $J_{\mathrm{sym}}$ remains around $J_{\mathrm{sym}}=340$ MeV, indicating a reliable extraction of the most probable value of $J_{\mathrm{sym}}$, although the PDF values vary a little at the two ends.
In case-3, however, the peak or the most probable value of $J_{\mathrm{sym}}$ keeps changing as its upper limit increases, indicating that the data in this case do not constrain the $J_{\mathrm{sym}}$.
This is consistent with our earlier finding that the radius data of canonical NSs do not constrain the high-density symmetry energy above $2\rho_0$ and the corresponding $J_{\mathrm{sym}}$ parameter.
As shown in Fig. \ref{MRD}, the average density reached in massive NSs in case-3 is slightly lower than that reached in canonical NSs, while in case-1 the average density in NSs of 2.0 M$_{\odot}$ is about 2.3 times that in canonical NSs. Therefore, the data set in case-1 can constrain the $J_{\mathrm{sym}}$ while those in case-3 cannot. It is also interesting to see that the second peak near $J_{\mathrm{sym}}=-200$ MeV disappears while the most probable value of $J_{\mathrm{sym}}$ stays around 340 MeV when the $J_0$ is restricted to less than $-100$ MeV. This is due to the anti-correlation between $J_0$ and $J_{\mathrm{sym}}$, as we discussed in the previous subsection.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Fig7-normal-EOS.eps}
}
\caption{Comparisons of the 68\% confidence boundaries of $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ from the four data sets. }\label{EoS-nor}
\end{center}
\end{figure*}
\subsection{EOS confidence boundaries at supra-saturation densities constrained by radii of massive neutron stars}
Applying the obtained posterior PDFs of the EOS parameters in Eqs. \ref{E0para} and \ref{Esympara}, we can easily obtain constraining bands of $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ at any specific confidence level. For example, shown in Fig. \ref{EoS-nor} are the constraining bands on the $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ at 68\% confidence level for the three cases in comparison with the reference from GW170817. The $E_{\mathrm{sym}}(\rho)$ bands for case-2 and the reference largely overlap, while the $E_0(\rho)$ band for case-2 is only slightly lower than that of the reference, as we expected earlier from examining the PDFs of the EOS parameters. Both the $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ bands for case-1 are significantly softer than those for case-3, also as we expected. In particular, due to the large uncertainty of $J_{\mathrm{sym}}$ and the high densities reached in case-1, the 68\% confidence band for the $E_{\mathrm{sym}}(\rho)$ is very wide in this case. We notice that the lower boundary of the high-density $E_{\mathrm{sym}}(\rho)$ has some dependence on the NS maximum mass used. If 2.14 M$_{\odot}$ instead of 1.97 M$_{\odot}$ is used for the maximum observed NS mass, the lower boundary of $E_{\mathrm{sym}}(\rho)$ around $3\rho_0$ increases slightly \citep{Zhang19epj,Zhang19apj,YZhou19,Xie19}. In case-3, however, the symmetry energy becomes significantly stiffer and is distinctly different from that in case-1.
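In practice a band at a given confidence level can be read off as pointwise percentiles of $E_{\mathrm{sym}}(\rho)$ over the posterior parameter samples. The following is a minimal sketch; the functional form, the helper names, and the three sample tuples are our own illustrative assumptions, not output of the actual analysis:

```python
# Pointwise 68% band of E_sym(rho) over posterior samples (s0, L, Ksym, Jsym).
# The Taylor-like form and the sample values are illustrative assumptions.

def esym(rho, s0, L, ksym, jsym, rho0=0.16):
    x = (rho - rho0) / (3.0 * rho0)
    return s0 + L * x + 0.5 * ksym * x**2 + jsym * x**3 / 6.0

def band68(samples, rho):
    """16th/84th percentile values of E_sym(rho) over the samples."""
    vals = sorted(esym(rho, *s) for s in samples)
    n = len(vals)
    return vals[int(0.16 * (n - 1))], vals[int(0.84 * (n - 1))]

samples = [(31.7, 40.0, -120.0, 200.0),   # three made-up posterior draws
           (31.7, 60.0, -100.0, 300.0),
           (31.7, 80.0,  -80.0, 400.0)]
print(band68(samples, 2 * 0.16))          # band at 2*rho0
```

Repeating this at each density on a grid traces out the confidence boundaries plotted in the figures.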
Comparing the three cases, it is seen that the constraining bands on their SNM EOSs $E_0(\rho)$ are not much different. This is again because the three cases cover the same range of NS masses, determined mainly by the $E_0(\rho)$ with little influence from the $E_{\mathrm{sym}}(\rho)$. Going back to the imagined data sets shown in Fig. \ref{MRD}, this means that a $\pm 15\%$ error bar in measuring the NS mass-radius correlation will not affect the accurate extraction of the SNM EOS. On the other hand, the $E_{\mathrm{sym}}(\rho)$ bands for the three cases are rather different, indicating that the main cause for the different mass-radius relations shown in Fig.\ \ref{MRD} is the underlying high-density behavior of nuclear symmetry energy. Thus, a precise measurement of the mass-radius correlation for massive NSs, hopefully in the near future, will help further constrain the nuclear symmetry energy above $2\rho_0$ with little influence from the remaining uncertainties of the SNM EOS $E_0(\rho)$.
\subsection{Effects of the cubic term in parameterizing the high-density symmetry energy}
In Bayesian kinds of analyses, there is a general question as to how the extracted physical quantities may depend qualitatively and/or quantitatively on the parameterizations used. Ideally, at least the qualitative conclusions should be independent of the parameterizations used. Moreover, correlations among the model parameters may also depend on the total number of parameters used and how well we know the high-order terms. Of course, considering the limitations of the available data and the computing costs, and since high-order parameters normally involve more uncertainties, some compromise may have to be made. For example, it was found that some of the existing correlations among different empirical parameters of the nuclear EOS, e.g., between $L$ and $K_{\rm{sym}}$, can be understood from basic physical constraints imposed on the Taylor expansions of $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ at $\rho_0$ \citep{MM3}. However, large dispersions of the correlations among low-order empirical parameters can be induced by the unknown higher-order empirical parameters. For example, the correlation between $E_{\rm sym}(\rho_0)$ and $L$ depends strongly on the poorly known $K_{\rm sym}$, while the correlation between $L$ and $K_{\rm sym}$ is strongly blurred by the even more poorly known $J_0$ and $J_{\rm sym}$ \citep{MM3}. Our results discussed above are in general agreement with these earlier findings. Because the $J_{\rm sym}$ is so poorly known, it is often simply set to zero in many studies in the literature; see, e.g., discussions in several recent works \citep{Margueron18,Baillot19,Perot19,Zimmerman20,Wei20}. To see how our results may depend on the $J_{\mathrm{sym}}$ term, in the following we
compare the PDFs of EOS parameters and the corresponding confidence boundary of high-density $E_{\mathrm{sym}}(\rho)$ calculated with the default $J_{\mathrm{sym}}$ randomly generated within $-200$ MeV$\leq J_{\mathrm{sym}} \leq$ 800 MeV and those calculated by setting $J_{\mathrm{sym}}$= 0 MeV.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Fig8-default-Jsym0.eps}
}
\caption{Posterior probability distribution functions of EOS parameters from the four sets of mass-dependent radius data of neutron stars by setting $J_{\mathrm{sym}}$ = 0.}\label{default-jsym0}
\end{center}
\end{figure*}
Shown in Fig. \ref{default-jsym0} are the posterior PDFs of the EOS parameters by setting $J_{\mathrm{sym}}$ to zero. Compared to the default results shown in Fig. \ref{default-l1458},
several interesting observations can be made:
\begin{itemize}
\item As with the default results, the PDFs of $K_0$ and $E_{\mathrm{sym}}(\rho_0)$ are less affected by the data sets used, except for the posterior PDF of $K_0$ in case-3.
\item The PDFs of $J_0$, $K_{\mathrm{sym}}$ and $L$ are narrowed down to smaller ranges than in the default results, in which the $J_{\mathrm{sym}}$ has a wide uncertainty range.
\item The differences between results from case-1 and case-3 become larger.
\item The most probable value of $J_0$ shifts significantly to higher values compared to the default results to keep the total pressure the same when the contribution from the high-density symmetry energy is turned off by setting $J_{\mathrm{sym}}$ to zero.
\end{itemize}
\begin{table*}[htbp]
\centering
\caption{Most probable values and their 68\% credible intervals of $J_0$, $K_0$, $K_{\mathrm{sym}}$ and $L$ with $-200$ MeV$\leq J_{\mathrm{sym}} \leq$ 800 MeV and $J_{\mathrm{sym}}$= 0 MeV, respectively.}\label{MP1}
\begin{tabular}{lccccccc}
\hline\hline
Parameters (MeV) &$-200$ MeV$\leq J_{\mathrm{sym}} \leq$ 800 MeV &$J_{\mathrm{sym}}$= 0 MeV \\
&Reference, case-1, case-2, case-3 &Reference, case-1, case-2, case-3 \\
\hline\hline\\
$J_0:$ &$-165_{-45}^{+55}, -180_{-50}^{+50}, -170_{-40}^{+60}, -100_{-70}^{+20}$ &$-80_{-60}^{+40}, -40_{-30}^{+30}, -80_{-50}^{+40}, -85_{-55}^{+10}$\\
$K_0:$ &$258_{-24}^{+2}, 222_{-0}^{+24}, 222_{-0}^{+26}, 260_{-22}^{+0}$ &$258_{-24}^{+2}, 222_{-0}^{+26}, 258_{-24}^{+2}, 260_{-20}^{+0}$ \\
$K_{\mathrm{sym}}:$ &$-120_{-100}^{+80}, -110_{-120}^{+30}, -100_{-90}^{+70}, -30_{-70}^{+80}$ &$-50_{-40}^{+70}, -80_{-40}^{+20}, -40_{-40}^{+50}, 40_{-50}^{+50}$ \\
$L:$ &$66_{-20}^{+12}, 38_{-6}^{+18}, 50_{-14}^{+14}, 70_{-16}^{+12}$ &$66_{-15}^{+15}, 40_{-9}^{+10}, 60_{-12}^{+12}, 80_{-12}^{+8}$\\
\hline
\end{tabular}
\end{table*}
To be more quantitative in comparing the results, the most probable values and 68\% credible intervals of the EOS parameters are listed in Table \ref{MP1}. Interestingly, the most probable values of $J_0$ and $K_{\mathrm{sym}}$ are the most significantly shifted. This is what one expects: as we discussed earlier, the $J_{\mathrm{sym}}$ is most strongly anti-correlated with these two parameters, while the low-density parameter $L$ is much less directly correlated with the high-density $J_{\mathrm{sym}}$ parameter. As a result, the most probable values of $L$ in all cases studied remain approximately the same in the two calculations.
These findings remind us again that caution has to be taken in interpreting the EOS parameters inferred from Bayesian analyses using different parameterizations, even from the same data set.
\begin{figure*}[htb]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Fig9-Jsym0-EOS.eps}
}
\caption{The 68\% confidence boundaries of $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ when the parameter $J_{\mathrm{sym}}$ is set to zero. }\label{EoS-jsym0}
\end{center}
\end{figure*}
While the PDFs of some individual EOS parameters depend strongly on how and to what order the EOS is parameterized, the reconstructed posterior EOS from these parameters has less dependence on the parameterization due to the auto-adjustments of the EOS parameters in Bayesian analyses through the likelihood function. Shown in Fig.\ \ref{EoS-jsym0} are the 68\% confidence boundaries of $E_0(\rho)$ and $E_{\mathrm{sym}}(\rho)$ when $J_{\mathrm{sym}}$ is set to zero. Compared to the default results shown in Fig. \ref{EoS-nor}, the SNM EOS in all cases becomes stiffer, as already indicated by the increased $J_0$ values discussed above, while the $E_{\mathrm{sym}}(\rho)$ in all cases becomes softer at high densities by setting $J_{\mathrm{sym}}$= 0 MeV. Nevertheless, the relative effects of the different mass-radius correlations in the four data sets remain approximately the same. Namely, the data in case-1 lead to a significantly softer symmetry energy at high densities than those in case-3. Thus, our qualitative conclusion is independent of the EOS parameterizations we used.
\begin{table*}[htbp]
\centering
\caption{Most probable values and the 68\% credible intervals of $E_{\mathrm{sym}}(2\rho_0)$ and $E_{\mathrm{sym}}(3\rho_0)$ with $-200$ MeV$\leq J_{\mathrm{sym}} \leq$ 800 MeV and $J_{\mathrm{sym}}$= 0 MeV, respectively.}\label{MP2}
\begin{tabular}{lccccccc}
\hline\hline
$E_{\mathrm{sym}}(\rho)$ (MeV) &$-200$ MeV$\leq J_{\mathrm{sym}} \leq$ 800 MeV &$J_{\mathrm{sym}}$= 0 MeV \\
&Reference, case-1, case-2, case-3 &Reference, case-1, case-2, case-3 \\
\hline\hline\\
$E_{\mathrm{sym}}(2\rho_0):$ &$54.8_{-19}^{+8.4}$, $43.2_{-15.9}^{+8.5}$, $50.5_{-16.3}^{+9.4}$, $61.1_{-15.4}^{+8.4}$ &$53.4_{-17.9}^{+6.7}$, $43.4_{-8.9}^{+4.4}$, $52.3_{-12.9}^{+6.1}$, $60.1_{-9}^{+7.2}$\\
$E_{\mathrm{sym}}(3\rho_0):$ &$91.3_{-61.2}^{+25.8}$, $52.2_{-60.4}^{+25.6}$, $85.1_{-55}^{+24.9}$, $114_{-48}^{+25.8}$ &$66.7_{-36.2}^{+16.7}$, $43.4_{-18.2}^{+11.1}$, $65.6_{-26.2}^{+17.8}$, $92.1_{-20}^{+19.6}$\\\\
\hline
\end{tabular}
\end{table*}
To make a more quantitative comparison, listed in Table \ref{MP2} are the most probable values and 68\% credible intervals of $E_{\mathrm{sym}}(2\rho_0)$ and $E_{\mathrm{sym}}(3\rho_0)$ from the two calculations. As we found already in Ref. \citep{Xie19}, the value of $E_{\mathrm{sym}}(2\rho_0)$ is approximately independent of the EOS parameterizations used. This is because around $2\rho_0$ the symmetry energy is mostly controlled by the $L$ and $K_{\rm sym}$ parameters, with little influence from the $J_{\mathrm{sym}}$ term, mostly through its anti-correlation with $K_{\rm sym}$. Moreover, the most probable values of $E_{\mathrm{sym}}(2\rho_0)$ from the four different data sets differ from each other by about 20\% but overlap significantly within their $1\sigma$ error bars. Interestingly, they are all consistent with the results shown in Fig. \ref{Esym-survey} within their error bars. The $E_{\mathrm{sym}}(3\rho_0)$, on the other hand, is seen to decrease by about 15\% to 27\% when the $J_{\mathrm{sym}}$ is set to zero. Nevertheless, this is still much smaller than the approximately 53\% difference in $E_{\mathrm{sym}}(3\rho_0)$ between case-1 and case-3 in both calculations. Therefore, one can still draw qualitatively clear conclusions about the $E_{\mathrm{sym}}(3\rho_0)$ from observed mass-radius correlations of massive NSs regardless of the EOS parameterizations one uses in the Bayesian analyses.
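The different sensitivities at $2\rho_0$ and $3\rho_0$ follow from simple arithmetic on the cubic term. Assuming the $J_{\mathrm{sym}}$ term enters as $(J_{\mathrm{sym}}/6)x^3$ with $x=(\rho-\rho_0)/(3\rho_0)$ (an assumed form standing in for Eq. (\ref{Esympara}), which is not reproduced in this section), its weight is $1/162$ at $2\rho_0$ but $8/162$ at $3\rho_0$:

```python
# Weight multiplying J_sym in E_sym(rho), assuming the cubic term
# (J_sym/6) * x^3 with x = (rho - rho0)/(3*rho0) and rho0 = 0.16 fm^-3.
def jsym_weight(rho, rho0=0.16):
    x = (rho - rho0) / (3.0 * rho0)
    return x**3 / 6.0

w2 = jsym_weight(2 * 0.16)  # 1/162: J_sym = 400 MeV shifts E_sym(2*rho0) by ~2.5 MeV
w3 = jsym_weight(3 * 0.16)  # 8/162: the same J_sym shifts E_sym(3*rho0) by ~19.8 MeV
print(w2, w3, w3 / w2)
```

This factor-of-eight difference is consistent with $E_{\mathrm{sym}}(2\rho_0)$ being nearly unchanged while $E_{\mathrm{sym}}(3\rho_0)$ drops appreciably when $J_{\mathrm{sym}}$ is set to zero.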
Here it is necessary to emphasize that there is no physical reason to ignore the $J_{\mathrm{sym}}$ term besides simplifying calculations. Moreover, as we discussed already, in neutron-rich matter where the symmetry energy is very soft the isospin asymmetry at $\beta$ equilibrium is close to 1. Then the $J_0$ and $J_{\mathrm{sym}}$ terms are of the same order and are equally important in contributing to the total pressure in NSs. While our comparisons presented in this subsection are interesting, in our opinion the default results are more physical and reliable. As for even higher-order terms, such as the quartic terms some authors included in Taylor-expanding nuclear energy density functionals, to the best of our knowledge there are so far no meaningful constraints from any experiment or observation, and model predictions are even more diverse than for $J_{\mathrm{sym}}$. In our opinion, as long as we stay below about $4\rho_0$, above which the quark-hadron phase transition will definitely happen according to many predictions, parametrizing the EOS up to the cubic terms with both $J_0$ and $J_{\mathrm{sym}}$ is sufficient and necessary.
\subsection{Verifying the importance of mass-radius curves in the Bayesian inference of EOS parameters in another way}
To this end, some readers may still be wondering to what extent the results presented above may have nothing to do with the radius constraints but come only from the choice of nuclear matter EOS parameterizations. What would have been the result if we had removed the constraint on the slope of the mass-radius curve? We attempt to address this by carrying out a new calculation using a constant radius $R_{\mathrm{M}}=R_{1.4}=11.9 \pm 3.2$ km at 90\% confidence level for all NSs considered. The mean radius is the same as in case-2 but with an error bar so large that one can no longer distinguish the three cases shown in Fig.\ \ref{MRD}. We thus essentially removed the constraint on the slope of the mass-radius curve. Namely, at 90\% confidence level the most massive NS of mass 2.0 M$_{\odot}$ considered has the same probability to have a radius of $R_{2.0}=8.7$ km as in case-1 and $R_{2.0}=15.1$ km as in case-3, while the most probable radius is kept at $R_{2.0}=11.9$ km as in case-2.
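If the quoted 90\% intervals are read as central intervals of a Gaussian radius likelihood (an assumption made here only for illustration; the text does not state the likelihood shape), the corresponding standard deviations follow directly:

```python
# Convert 90%-confidence half-widths to Gaussian standard deviations,
# assuming the radius likelihood is Gaussian (illustrative assumption).
from statistics import NormalDist

z90 = NormalDist().inv_cdf(0.95)   # two-sided 90% interval -> z ~ 1.645
sigma_wide  = 3.2 / z90            # relaxed case, R = 11.9 +/- 3.2 km
sigma_case2 = 1.4 / z90            # original case-2, R = 11.9 +/- 1.4 km
print(round(sigma_wide, 2), round(sigma_case2, 2))
```

Under this reading, the relaxed calculation uses a radius standard deviation more than twice that of case-2.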
\begin{figure*}[htb]
\begin{center}
\resizebox{0.8\textwidth}{!}{
\includegraphics{Fig10-newradius.eps}
}
\vspace{-0.5cm}
\caption{Prior and posterior probability distribution functions of EOS parameters assuming all neutron stars have the same radius of $R_{\mathrm{M}}=R_{1.4}=11.9 \pm 3.2$ km at 90\% confidence level, in comparison with the results from case-2 (with $R_{\mathrm{M}}=R_{1.4}=11.9 \pm 1.4$ km at 90\% confidence level) shown originally in Fig.\ \ref{default-l1458}.}\label{newradius}
\end{center}
\end{figure*}
Shown in Fig.\ \ref{newradius} are the prior and posterior PDFs of EOS parameters in the calculation assuming all neutron stars have the same radius of $R_{\mathrm{M}}=R_{1.4}=11.9 \pm 3.2$ km at 90\% confidence level, in comparison with the results from case-2 shown originally in Fig.\ \ref{default-l1458}. First of all, comparing the inferred posterior PDFs with the uniform prior PDFs used as inputs, one can already see very generally the importance of mass-radius measurements in constraining the EOS parameters relative to our prior knowledge. Since this calculation used the same most probable value of $R_{2.0}=11.9$ km as in case-2 but with a significantly larger error bar (3.2 vs 1.4 km at 90\% confidence level), the differences in the results of the two calculations are all due to the difference in the error bars of the radius measurements. The posterior PDFs of all three parameters of the symmetry energy, especially its slope $L$ and curvature $K_{\mathrm{sym}}$, become significantly wider in the new calculation, as one expects. On the other hand, as we discussed earlier, the $J_0$ is constrained mostly by the NS maximum mass, so its PDF hardly changes in the new calculation as the masses of all NSs considered remain the same. The PDF of $K_0$ shows some slight changes, mostly due to its correlations with other parameters.
Since the radii of all NSs in the two calculations shown in Fig.\ \ref{newradius} are the same but have two different constant error bars, it is the PDF of the parameter $L$ that shows the most significant change. This is what one might expect from a neutron star radius measurement. However, to answer the questions posed at the beginning of this subsection, one has to compare the new results obtained using $R_{\mathrm{M}}=R_{1.4}=11.9 \pm 3.2$ km with those shown earlier from calculations using different mass-radius curves for case-1 and case-3.
Most interestingly, when we compare the new PDFs of EOS parameters with those in the case-1 and case-3 shown in Fig. 4, it is seen that the PDFs of the symmetry energy parameters in the new calculation are significantly different from those in both the case-1 and case-3, reflecting clearly the importance of the different NS mass-radius curves used in the Bayesian analyses. In particular, the outstanding peak in the PDF of the $J_{\mathrm{sym}}$ parameter in the case-1 where the highest density is reached is completely gone. The new PDF of the $J_{\mathrm{sym}}$ parameter is also significantly different from that in the case-3. We notice that both the $J_0$ and $K_0$ in the new calculation also have appreciably different PDFs compared to those in the case-1 and case-3, again verifying the importance of the mass-radius curves of massive NSs in the Bayesian inference of the dense neutron-rich matter EOS. We are thus very confident that the posterior PDFs of EOS parameters presented above reflect faithfully the NS radius constraints and they are not simply from our choice of nuclear matter EOS parameterizations.
\section{Summary and outlook}
In summary, using an explicitly isospin-dependent parametric EOS of nucleonic matter within the $npe\mu$ model for the core of NSs, we performed Bayesian analyses using three different sets of imagined mass-radius correlation data of massive NSs. Using the PDFs of EOS parameters as well as the corresponding symmetry energy $E_{\mathrm{sym}}(\rho)$ and SNM EOS $E_0(\rho)$ inferred from GW170817 and NICER radius data for canonical NSs as references, we investigated how future measurements of massive NS radii will improve our current knowledge about the EOS of super-dense neutron-rich nuclear matter. The three imagined radius data sets represent typical predictions using EOSs from various nuclear many-body theories, i.e., the radius stays the same, decreases or increases with increasing NS mass within $\pm 15\%$ between 1.4 M$_{\odot}$ and 2.0 M$_{\odot}$. These three cases model three possible scenarios in the core of NSs, assuming no hadron-quark phase transition and/or new particle production: the average density increases quickly, slowly, or slightly decreases as the mass increases from 1.4 M$_{\odot}$ to 2.0 M$_{\odot}$. In these three cases the high-density symmetry energy plays different roles. Consequently, the PDFs of EOS parameters and the corresponding EOS confidence boundaries inferred from the three radius data sets are rather different. In particular, while the SNM EOSs $E_0(\rho)$ inferred from the three data sets are approximately the same, the corresponding high-density symmetry energies $E_{\mathrm{sym}}(\rho)$ at densities above about $2\rho_0$ are very different, indicating that the radii of massive NSs carry important information about the high-density behavior of nuclear symmetry energy with little influence from the remaining uncertainties of the SNM EOS $E_0(\rho)$.
We have also investigated correlations among the EOS parameters and effects of turning on/off the high-order term in parameterizing the symmetry energy. We found that it is important to keep the cubic term to extract more accurately the symmetry energy below about $4\rho_0$.
The major shortcoming of this work is that the NS model used is the minimum $npe\mu$ model, without considering phase transitions or the production of hyperons and/or baryon resonances that are expected to appear above certain high densities. Nevertheless, as evidenced by many earlier and recent publications in the literature, research within the minimum NS model provides useful guidance for possible advanced studies. Extending the present work by incorporating the hadron-quark phase transition and more particle species is part of our work plan.
Besides the ongoing NICER mission measuring simultaneously the radii and masses of several NSs, as well as various gravitational wave searches which can potentially reveal both the masses and radii of super/hyper-massive remnants of NS mergers from the multimessengers released, new ideas have been put forward in the Astro 2020 Decadal Survey to measure more accurately the radii of massive NSs using the next-generation X-ray observatories \citep{wp1,Strobe,wp2,Watts19}. We are thus hopeful that precise mass-radius data for more massive NSs will become available in the near future. On the other hand, new radioactive beam facilities being built around the world \citep{NAP2012,LRP2015,NuPECC} provide great opportunities to probe the EOS of super-dense neutron-rich nuclear matter under controlled laboratory conditions.
Ongoing efforts in nuclear physics, see, e.g., refs. \citep{FRIB,Wolfgang}, are providing complementary information about the EOS of super-dense neutron-rich nuclear matter. Eventually, a truly multimessenger approach involving astrophysical observations, nuclear physics experiments and related theories will enable us to finally pin down the EOS, especially the symmetry energy, of super-dense neutron-rich nuclear matter.\\
\noindent{\bf Acknowledgments:} We thank Bao-Jun Cai, Lie-Wen Chen and Nai-Bo Zhang for very helpful discussions. This work is supported in part by the U.S. Department of Energy, Office of Science, under Award Number DE-SC0013702, the CUSTIPEN (China-U.S. Theory Institute for Physics with Exotic Nuclei) under the US Department of Energy Grant No. DE-SC0009971.
The Kardar-Parisi-Zhang (KPZ) universality class consists of a large variety of models, all of which are believed to exhibit certain universal behaviors; for example, common scaling limits. Most progress in this area has been in the setting of certain models that are known as \emph{exactly solvable} or \emph{integrable}, which possess certain algebraic structure that makes their analysis within reach, in comparison to non-integrable models.
In such models, it is often important for applications to have control on the upper and lower tails of the KPZ observable on the fluctuation scale. Of the two, it is more challenging to obtain this control on the lower tail, though in what are known as zero-temperature models, such as TASEP and last passage percolation, a variety of techniques have been developed over the last two decades to do this. In contrast, for \emph{positive} temperature models such as the KPZ equation, stochastic six vertex model, ASEP, and polymer models, only a few techniques have recently been developed to approach this problem. Further, each technique only seems to be applicable in particular cases; due to fundamental limitations, there is no broad coverage.
In this paper we study the exactly solvable, interacting particle system model of $q$-pushTASEP, a positive temperature model, but one for which the few methods available to obtain lower tails in positive temperature do not seem applicable. We develop a new technique for lower tail estimates on the position of the right-most particle, harnessing recently discovered connections between it and last passage percolation.
We start by introducing the model of study and our main results.
\subsection{Principal objects and models of study}
\subsubsection{Some notation and distributions}
The \emph{$q$-Pochhammer symbol} $(z;q)_n$ is given by
\begin{align*}
(z;q)_n = \prod_{i=0}^{n-1}(1-zq^i) \quad \text{ for } n=0,1, \ldots,
\end{align*}
with $(z;q)_{\infty}$ defined by replacing $n-1$ by $\infty$.
The \emph{$q$-binomial coefficient} is given by
\begin{align}\label{e.q-binomial coefficient}
\binom{n}{k}_{\!\!q} = \frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}}.
\end{align}
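For concreteness, the $q$-binomial coefficient can be evaluated directly from $q$-Pochhammer symbols; a standard consistency check is the $q$-Pascal recurrence $\binom{n}{k}_q = \binom{n-1}{k-1}_q + q^k\binom{n-1}{k}_q$. (A sketch with our own helper names.)

```python
# q-Pochhammer (z; q)_n and the q-binomial coefficient defined above.
def qpoch(z, q, n):
    out = 1.0
    for i in range(n):
        out *= 1.0 - z * q**i
    return out

def qbinom(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

q = 0.3
print(qbinom(4, 2, q))   # = 1 + q + 2q^2 + q^3 + q^4 = 1.5151
# q-Pascal recurrence as a sanity check:
assert abs(qbinom(4, 2, q) - (qbinom(3, 1, q) + q**2 * qbinom(3, 2, q))) < 1e-12
```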
The \emph{$q$-deformed beta binomial distribution} is a distribution with parameters $q$, $\xi$, $\eta$, and $m$. Here $m\in\mathbb{Z}_{\geq 0}$ and the distribution is supported on $\{0,1, \ldots, m\}$; the other parameters are non-negative real numbers, and are restricted to more specific domains in certain cases that we will describe. For $s\in\{0, \ldots, m\}$, the probability mass function at $s$ is given by
\begin{align*}
\varphi_{q,\xi, \eta}(s\mid m) = \xi^s \frac{(\eta/\xi; q)_s(\xi; q)_{m-s}}{(\eta; q)_m}\cdot \binom{m}{s}_{\!\!q}.
\end{align*}
We refer the reader to \cite[Section~6.1]{matveev2016q} for more information regarding this distribution, including a discussion on why the above expression sums (over $s=0, \ldots, m$) to $1$ when the expression is well-defined and non-negative.
A special case is the $q$-Geometric distribution of parameter $\xi$ (denoted $q$-Geo($\xi$)), obtained by taking $m=\infty$, $\eta=0$, and $q,\xi\in(0,1]$, so that the probability mass function at $s\in\mathbb{Z}_{\geq 0}$ is given by
\begin{align*}
\varphi_{q,\xi, \eta}(s\mid \infty) = \xi^s \frac{(\xi; q)_{\infty}}{(q;q)_s}.
\end{align*}
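By Euler's $q$-exponential identity $\sum_{s\geq 0}\xi^s/(q;q)_s = 1/(\xi;q)_\infty$, this mass function indeed sums to $1$; a quick numerical check (truncating the infinite product and the sum; helper names are ours):

```python
# q-Geo(xi) mass function and a numerical normalization check.
def qpoch(z, q, n):
    out = 1.0
    for i in range(n):
        out *= 1.0 - z * q**i
    return out

def qgeo_pmf(s, xi, q, trunc=200):
    # (xi; q)_infinity approximated by the finite product (xi; q)_trunc
    return xi**s * qpoch(xi, q, trunc) / qpoch(q, q, s)

q, xi = 0.5, 0.7
total = sum(qgeo_pmf(s, xi, q) for s in range(200))
print(total)   # ~ 1
```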
\subsubsection{The model of $q$-pushTASEP}\label{s.q-pushTASEPmodel} The $q$-pushTASEP is a discrete time interacting particle system on $\mathbb{Z}$ first introduced in \cite{matveev2016q}.
We have $N\in\mathbb{N}$ many particles which occupy distinct sites in $\mathbb{Z}$, and we label their position at time $T\in\mathbb{Z}_{\geq 0}$ in increasing order as $x_1(T) < x_2(T) < \ldots < x_N(T)$; we denote the collection of these random variables by $x(T)$. We also specify a collection of parameters $a_1, \ldots, a_N$ and $b_1, b_2, \ldots$, all lying in $(0,1)$.
The evolution from time $T$ to $T+1$ is as follows. The particle positions are updated from left to right: for $k\in\{1, \ldots, N\}$,
\begin{align*}
x_k(T+1) = x_k(T) + J_{k,T} + P_{k,T},
\end{align*}
where $J_{k,T}$ and $P_{k,T}$ are independent random variables with $J_{k,T}\sim q\text{-Geo}(a_kb_{T+1})$ (encoding a \emph{jump} contribution) and
$$P_{k,T} \sim \varphi_{q^{-1}, \xi= q^{\mathrm{gap}_k(T)}, \eta=0}\bigl(\cdot \mid x_{k-1}(T+1) - x_{k-1}(T)\bigr),$$
(encoding a \emph{push} contribution) where $\mathrm{gap}_k(T) = x_k(T) - x_{k-1}(T)$, $x_0(T) = -\infty$ by convention and, by a slight abuse of notation, $\sim$ means the LHS is distributed according to the measure which has probability mass function given by the RHS. In other words, $P_{k,T}$ is a $q$-deformed beta binomial random variable with parameters $q^{-1}, \xi= q^{\mathrm{gap}_k(T)}, \eta = 0$, and $m=x_{k-1}(T+1) - x_{k-1}(T)$. Note in particular that $x_1$'s motion does not depend on that of any other particle, i.e., marginally it follows a random walk.
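Despite the $q^{-1}$ parameter, the push distribution is straightforward to evaluate: with $\eta=0$ the $\eta$-dependent Pochhammer factors equal $1$, so the mass function reduces to $q^{gs}\,(q^{g};q^{-1})_{m-s}\binom{m}{s}_{q^{-1}}$, where $g=\mathrm{gap}_k(T)$. A quick numerical check that this indeed defines a probability distribution (a sketch; helper names are ours):

```python
# Mass function of the push variable P_{k,T}: the q-deformed beta binomial
# with parameters (q^{-1}, xi = q^gap, eta = 0, m); with eta = 0 the
# eta-dependent Pochhammer factors are identically 1.
def qpoch(z, q, n):
    out = 1.0
    for i in range(n):
        out *= 1.0 - z * q**i
    return out

def qbinom(n, k, q):
    return qpoch(q, q, n) / (qpoch(q, q, k) * qpoch(q, q, n - k))

def push_pmf(s, m, gap, q):
    qi = 1.0 / q              # the distribution uses parameter q^{-1}
    xi = q**gap
    return xi**s * qpoch(xi, qi, m - s) * qbinom(m, s, qi)

q = 0.5
for m, gap in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    total = sum(push_pmf(s, m, gap, q) for s in range(m + 1))
    print(m, gap, round(total, 12))   # each total equals 1
```

One can also check from the Pochhammer factor $(q^{g};q^{-1})_{m-s}$ that the mass function vanishes whenever $m-s>g$, so the support is restricted to $s\geq m-g$.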
\begin{figure}
\begin{tikzpicture}
\draw[<->, thick] (-5,0) -- (5,0);
\foreach \x in {-4,..., 4}
\draw[thick] (\x, 0.15) -- ++(0,-0.3);
\fill[red!80!black] (0,0) circle (0.2cm);
\fill[red!80!black] (2,0) circle (0.2cm);
\draw[black, dashed, thick] (-3,0) circle (0.2cm);
\fill[black] (-2,0) circle (0.2cm);
\draw[->, dashed, semithick] (-2.9, 0.3) to[out=60, in=120] (-2.1,0.3);
\draw[->, semithick] (0.1, 0.3) to[out=45, in=135] (1.9,0.3);
\node[scale=0.7] at (3.1, 0.7) {$x_{k+1}(T+1) = x_{k+1}(T) + 2$};
\node[scale=0.7] at (-2, 0.7) {$x_{k}(T) = x_{k}(T-1) + 1$};
\end{tikzpicture}
\caption{A depiction of one step in the evolution of $q$-pushTASEP. The dotted circle is the position of the $k$\textsuperscript{th} particle at time $T-1$, and the solid black circle is its position at time $T$. The left red circle is the position of the $(k+1)$\textsuperscript{th} particle at time $T$, and the right one its position at time $T+1$. The movement of the $k$\textsuperscript{th} particle in the previous step affects the $P_{k,T}$ contribution to the total jump of size $2$ made by the $(k+1)$\textsuperscript{th} particle at time $T$.}
\end{figure}
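The update rule for the left-most particle is simple enough to simulate directly: since $x_0 = -\infty$, the particle $x_1$ receives no push and performs a random walk with independent $q$-Geo($a_1b_{T+1}$) increments. The Python sketch below (our illustration; the inverse-CDF sampler and all parameter values are arbitrary choices, not from the literature) simulates this walk in the homogeneous case $a_i = b_j = u$.

```python
import random

def q_pochhammer(a, q, n=200):
    """Finite truncation of the q-Pochhammer symbol (a; q)_n."""
    prod = 1.0
    for k in range(n):
        prod *= 1.0 - a * q**k
    return prod

def q_geo_pmf(s, xi, q):
    """P(X = s) for X ~ q-Geo(xi)."""
    return xi**s * q_pochhammer(xi, q) / q_pochhammer(q, q, n=s)

def sample_q_geo(xi, q, rng, s_max=500):
    """Inverse-CDF sampling of a q-Geo(xi) random variable."""
    u, s, cdf = rng.random(), 0, 0.0
    while s < s_max:
        cdf += q_geo_pmf(s, xi, q)
        if u < cdf:
            return s
        s += 1
    return s_max  # guard against float roundoff; essentially never reached

# First particle of q-pushTASEP with equal parameters a_i = b_j = u:
# x_1 performs a random walk with i.i.d. q-Geo(u^2) increments.
rng = random.Random(0)
q, u, T = 0.5, 0.6, 1000
x1 = 1  # step initial condition: x_1(0) = 1
for _ in range(T):
    x1 += sample_q_geo(u * u, q, rng)
print(x1 / T)  # empirical speed of the left-most particle
```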
This model is integrable. More precisely, the distribution of $x_N(T)$, started from a special initial condition known as \emph{step initial condition} where $x_k(0) = k$, can be related to a marginal of the $q$-Whittaker measure, a measure on partitions (equivalently, Young diagrams) defined in terms of $q$-Whittaker polynomials. This connection will be important for our arguments, and we will specify it more precisely, along with the definition of $q$-Whittaker measures, in Section~\ref{s.integrable connections}.
Apart from integrability, another reason $q$-pushTASEP is of interest is because it degenerates to other well-known models. Indeed, in the $q\to 1$ limit, when appropriately renormalized, $x_N(T)$ converges to the free energy of the log-gamma polymer model. While our results will not carry over to this limit, we will make some further remarks about this relationship between the models in Section~\ref{s.log gamma remark}.
\subsubsection{Law of large numbers and asymptotic Tracy-Widom fluctuations of $q$-pushTASEP}
In this work we will focus on $q$-pushTASEP when the parameters are equal, i.e., $u = a_i = b_j$ for all $i=1, \ldots, N$ and $j=1, 2, \ldots$ for some $u\in(0,1)$, and when the initial condition is $x_k(0) = k$ for $k=1, \ldots, N$. In this setting, and under some additional restrictions on the parameters, \cite{vetHo2022asymptotic} proved a law of large numbers for $x_N(T)$, which in the $T=N$ case states (with the convergence being in probability) that
\begin{align}\label{e.fq definition}
\lim_{N\to\infty} \frac{x_N(N)}{N} = 2\times\frac{\psi_q(\log_q u) + \log(1-q)}{\log q} + 1 =: f_q;
\end{align}
here $\log_q u = \log u/\log q$ is the logarithm to the base $q$ and $\psi_q$ is the $q$-digamma function, given by
\begin{equation}\label{e.q-digamma}
\psi_q(x) = \frac{1}{\Gamma_q(x)}\frac{\partial \Gamma_q(x)}{\partial x},
\end{equation}
where $\Gamma_q(x) = \frac{(q;q)_\infty}{(q^x;q)_{\infty}}(1-q)^{1-x}$ is the $q$-gamma function.
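As an aside, one can check numerically that the derivative definition \eqref{e.q-digamma} agrees with the series representation $\psi_q(x) = -\log(1-q) + \log q\cdot\sum_{i=0}^{\infty} q^{i+x}/(1-q^{i+x})$, which we use again in Section~\ref{s.proof ideas}. The Python sketch below (ours; the test values and truncation depths are arbitrary) compares a central-difference derivative of $\log\Gamma_q$ with the truncated series.

```python
import math

def log_q_gamma(x, q, terms=400):
    """log Gamma_q(x), with Gamma_q(x) = (q;q)_inf / (q^x;q)_inf * (1-q)^(1-x)."""
    s = (1.0 - x) * math.log(1.0 - q)
    for k in range(terms):
        s += math.log(1.0 - q ** (k + 1))   # log of (q; q)_inf
        s -= math.log(1.0 - q ** (x + k))   # minus log of (q^x; q)_inf
    return s

def q_digamma(x, q, terms=400):
    """psi_q(x) via the series -log(1-q) + log(q) * sum_i q^(x+i)/(1-q^(x+i))."""
    s = sum(q ** (x + i) / (1.0 - q ** (x + i)) for i in range(terms))
    return -math.log(1.0 - q) + math.log(q) * s

q, x, h = 0.4, 1.7, 1e-6
numeric = (log_q_gamma(x + h, q) - log_q_gamma(x - h, q)) / (2 * h)
print(abs(numeric - q_digamma(x, q)))  # ~0: the two definitions agree
```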
Note that our definition of $q$-pushTASEP differs from that of \cite{vetHo2022asymptotic} and \cite{matveev2016q}, in that particles move to the right for us rather than the left, thus introducing an extra negative sign in the law of large numbers. Our definition agrees with the one given in \cite{imamura2022solvable}.
\cite{vetHo2022asymptotic} also proves that the asymptotic fluctuation of $x_N$ converges to the GUE Tracy-Widom distribution. To state this, let us consider the rescaled observable
\begin{align}\label{e.definition of X^sc}
X^{\mrm{sc}}_N = \frac{x_N(N) -f_qN}{(-\psi_q''(\log_q u))^{1/3}(\log q^{-1})^{-1}N^{1/3}};
\end{align}
note that the denominator is a positive quantity, since $\psi_q''(x)<0$ for all $x>0$ (see e.g. \cite{mansour2009some}).
Now for $q,u\in(0,1)$, and under certain restrictions that are used to simplify the analysis there, \cite[Theorem~2.2]{vetHo2022asymptotic} asserts that $X^{\mrm{sc}}_N \Rightarrow F_{\mathrm{GUE}}$, where $F_{\mathrm{GUE}}$ is the GUE Tracy-Widom distribution.
The proof given in \cite{vetHo2022asymptotic} relies on certain formulas for $q$-Laplace transforms of particle positions proved in \cite{borodin2015height}. The recent work \cite{imamura2022solvable} gives different Fredholm determinant formulas for randomly shifted versions of $x_N(T)$ (see Corollary 5.1 there) from which it should also be possible to extract the above distributional convergence, with a perhaps simpler analysis; indeed, the analogous convergence is demonstrated for a half-space version of the model in \cite[Theorem~6.11]{imamura2022solvable}.
It remains an open question whether the conditions assumed in \cite{vetHo2022asymptotic} are necessary for this convergence to hold; our techniques suggest it should hold for any $q, u\in(0,1)$. In particular, our results will hold for all $q,u\in(0,1)$.
\subsection{Main results}
Our main theorem bounds the lower tail of the fluctuations of the centred and scaled position $X^{\mrm{sc}}_N$ (as defined in \eqref{e.definition of X^sc}) of the $N$\textsuperscript{th} particle of $q$-pushTASEP, as introduced in Section~\ref{s.q-pushTASEPmodel}.
\begin{theorem}\label{mt.q-pushtasep bound}
Let $q, u\in(0,1)$ and let $a_i=b_j = u$ for all $i,j$. There exist positive absolute constants $c$, $C$, and $N_0$ (independent of $q$ and $u$) such that, with $\theta_0 = C|\log(\log q^{-1})|$ and for $N\geq N_0$ and $\theta>\theta_0$,
\begin{align*}
\P\left(X^{\mrm{sc}}_N < -\theta\right) \leq \exp\bigl(-c\theta^{3/2}\bigr).
\end{align*}
\end{theorem}
We believe the true lower tail behavior to be $\exp(-c\theta^3)$, at least for $\theta \ll N^{2/3}$, i.e., below the large deviation regime, as for other models in the KPZ class. We discuss in Remark~\ref{r.geometric lpp tail exponent} why our arguments do not achieve this, and also how, with additional arguments of a different nature, it should be possible to attain the full exponent of $3$.
\subsection{Lower tails of KPZ observables}
In the past decade, integrable tools have been combined with other perspectives to slowly push out of strictly integrable settings. Examples include studies of geometric properties in models of last passage percolation (e.g. \cite{slow-bond,watermelon,schmid2022mixing}), process-level regularity properties (e.g. \cite{corwin2014brownian,corwin2016kpz,calvert2019brownian,hammond2016brownian,hammond2017patchwork}) of processes whose finite-dimensional distributions are accessible via exactly solvable tools, recent progress on constructing the ASEP speed process \cite{aggarwal2022asep}, edge scaling behavior of tiling or dimer models \cite{huang2021edge,aggarwal2021edge}, as well as the construction of the directed landscape \cite{dauvergne2018basic,dauvergne2018directed}. In these works, a crucial input from the integrable side has repeatedly been bounds on the tails of the relevant statistic.
In the case of zero temperature models, these inputs were established two decades ago, with arguments relying in an essential way on the determinantal structure possessed by these models. In positive temperature, where such structure is not directly available, progress on obtaining these important tail inputs has only been made in the last few years, but their availability promises to create the opportunity to bring the zero temperature successes to the positive temperature setting.
\subsubsection{The relative difficulty of upper and lower tail bounds} From a physical perspective, it is easy to see that the upper and lower tails should have different rates of decay, with the lower tail decaying faster. This is because the upper tail concerns making a single, ``largest'' object even larger---in $q$-pushTASEP, making the right-most particle lie even further to the right, which can be accomplished by demanding a single large jump of the right-most particle. So, in particular, the other particles are not a barrier. In contrast, in the lower tail exactly the opposite happens: for the right-most particle to lie atypically to the left, \emph{all} the other particles must also do so---in particular, many jumps, including those of other particles, must be suppressed. However, this intuition does not reveal the fact that, typically, it is technically much more challenging to obtain lower tail bounds than upper ones. Further, while this intuition turns out to be well-suited for arguments to understand large deviations behavior (e.g. \cite{basu2017upper} for LPP), i.e., deviations on scale $N$, in applications one needs bounds on the fluctuation scale (i.e., deviations on scale $N^{1/3}$).
For solvable zero temperature models, which have determinantal descriptions, the difference in the difficulty of upper and lower tails can be seen from the fact that the upper tail bounds follow directly from bounds on the kernel of the associated determinantal point process, while this is not the case for lower tail bounds.
Nevertheless, as mentioned above, in these models, a number of approaches have been developed over the last two decades. These include the Riemann-Hilbert approach (e.g. \cite{baik2001optimal}), methods based on explicit formulas for moments (e.g. \cite{ledoux2005deviation}), and connections to random matrix theory (e.g. \cite{ledoux2010,basu2019lower}).
The toolbox for the upper tail is already fairly well developed in positive temperature. Here too one often has determinantal formulas for the distribution of the observable, and one can extract the upper tail by establishing decay of the kernel in these determinantal formulas. An instance where this is done is \cite[Theorem~1.4]{barraquand2021fluctuations}, in the context of the log-gamma polymer. Besides this approach, one can also try to extract upper tail estimates from the moments of the exponential of the random variable of interest (e.g. for the KPZ equation, this corresponds to moments of the stochastic heat equation, or, for our model, the analogue is $q$-moments). An example of this method is captured in \cite[Proposition~4.3 and Lemma~4.5]{corwin2018kpz}, where tail estimates for the narrow-wedge KPZ equation are obtained through estimates on the $k$\textsuperscript{th} moment of the stochastic heat equation. We also mention recent works \cite{ganguly2022sharp} and \cite{landon2022tail} which respectively make use of Gibbs properties and special structure of stationary versions of the relevant models (the KPZ equation and O'Connell-Yor polymer respectively) to prove upper tail estimates, but these methods by their nature are specific to models which have such probabilistic structure.
For our model of $q$-pushTASEP, a Fredholm determinant formula of the type the first approach relies on can be found in \cite[Theorem 3.3]{borodin2015height} (via our model's connection to the $q$-Whittaker measure, see Section~\ref{s.integrable connections} ahead or the discussion in \cite[Section~7.4]{matveev2016q}), and a different one in \cite[Corollary 5.1]{imamura2022solvable}. A formula for the $q$-moments in our model is also available, though there is a subtlety in that not all $q$-moments are finite; see Section 7.4 in \cite{matveev2016q} for the formula and a brief discussion of this point.
Having said this, while there are well-established approaches to obtain such upper tail estimates, it is certainly not a triviality to actually do so in any model, and we do not pursue them for $q$-pushTASEP in this work. However, we plan to revisit this question as part of subsequent work in which we will need both tail bounds to study further aspects of this model.
The toolbox for the lower tail in positive temperature is smaller but is being actively developed, and we briefly review some of the tools now. However, these do not seem applicable to our model, and so we are ultimately led to develop a new technique.
\subsubsection{Work using determinantal representations of Laplace transforms} The first class of techniques for lower tails in positive temperature models gives a determinantal representation for the Laplace transform (or $q$-Laplace transform) of the observable. This approach was initiated in \cite{corwin2020lower}, which obtained fluctuation-scale lower tail bounds for the narrow wedge solution to the KPZ equation. \cite{corwin2020lower} used a formula from \cite{borodin2016moments,amir2011probability} which equates the Laplace transform of the fundamental solution of the stochastic heat equation (which is related to the KPZ equation via the Cole-Hopf transform) to an expectation of a multiplicative functional of the Airy point process, which is determinantal. That it is a multiplicative functional (as well as its precise form) is very useful as it allows the lower tail of the KPZ equation to be bounded in terms of the lower tail behavior of the particles at the edge of the Airy point process, which in turn can be controlled via determinantal techniques as outlined above.
The Laplace transform identity that this argument relies on can be seen as a special case of a general matching proved in \cite{borodin2018stochastic} between the stochastic six vertex model's height function and a multiplicative functional of the row lengths of a partition sampled according to the Schur measure. The stochastic six vertex model and the Schur measure are each known to specialize to a number of models also of interest; for example, one degeneration of the former is the asymmetric simple exclusion process (ASEP), and the analogous one for the latter is the discrete Laguerre ensemble, a determinantal process. This yields an identity between the $q$-Laplace transform of ASEP and a multiplicative functional of the discrete Laguerre ensemble \cite{borodin2017asep}, which was used to obtain a lower tail bound for the former in \cite{aggarwal2022asep}, using the latter's connection to TASEP.
Unfortunately, not all degenerations to models of interest play nicely on both sides. For instance, for the O'Connell-Yor polymer, the Schur measure side of the stochastic six vertex model identity degenerates to an average of a multiplicative functional with respect to a point process whose measure is a \emph{signed} measure instead of a probability measure \cite{imamura2016determinantal}. Typical modes of analysis break down in the context of signed measures. For other models too, including ours, this issue of signed measures seems to arise.
A related recent approach brings in the machinery of Riemann-Hilbert problems and has been developed in \cite{cafasso2022riemann}. There, in the setting of the KPZ equation, the mentioned expectation of the multiplicative functional of the Airy point process is expressed as a Fredholm determinant, and then the latter is written as a Riemann-Hilbert problem. So far this approach has only been developed at the level of the KPZ equation, and so it remains to be seen how broadly it can be applied.
\subsubsection{Coupling and geometric methods in polymer models}\label{s.recent work in polymer}
For the semi-discrete O'Connell-Yor and log-gamma polymer models, recent work \cite{landon2022upper,landon2022tail} has obtained lower tail estimates via a mixture of exact formulas, coupling arguments, and geometric considerations. This builds on methods developed in the zero temperature model of exponential last passage percolation \cite{emrah2020right,emrah2021optimal,emrah2022coupling}. The program has so far been implemented in full for the semi-discrete O'Connell-Yor model and in part for the log-gamma polymer. First, \cite{landon2022upper} obtains the bounds for a stationary version of the model (where one can prove an explicit formula for the Laplace transform of the free energy), and then, for the O'Connell-Yor case, these are translated to the original model using geometric considerations of the polymer measure in \cite{landon2022tail}. In fact, the Laplace transform bound obtains a lower tail exponent of $3/2$, which is then upgraded to the sharp exponent of $3$ by adapting geometric arguments from \cite{ganguly2020optimal}. Since this method relies heavily on the polymer geometry, it is unclear how it could be extended to address the model of $q$-pushTASEP, which only has a particle interpretation.
In summary, while there are a variety of methods in zero-temperature models to obtain lower tail bounds, so far only a handful of tools are available for positive temperature models. The ones available do not seem immediately applicable to our model. For this reason, we introduce a new method which does not rely on polymer structure or identities between $q$-Laplace transforms and multiplicative functionals of determinantal point processes, which are not directly available in $q$-pushTASEP. We rely instead on the recent work \cite{imamura2021skew} which relates the $q$-Whittaker measure on partitions to a model of periodic geometric last passage percolation. In this way, we are able to use both the polymer techniques and determinantal structure which \emph{are} available in geometric last passage percolation to analyze $q$-pushTASEP. To explain this, we next describe this model of periodic last passage percolation.
\subsection{Last passage percolation}\label{s.lpp}
We first describe the environment in which our last passage percolation (LPP) problem will exist. We consider a sequence of $N\times N$ ``big squares'' indexed by $k\in\mathbb{N}\cup\{0\}$, each of which contains $N^2$ ``small squares'' inside. These are arranged in a periodic strip as shown in Figure~\ref{f.infinite LPP}. To each small square with coordinates $(i,j)$ (with $i,j\in\{1, \ldots, N\}$) in the $k$\textsuperscript{th} big square is associated an independent non-negative random variable $\smash{\xi_{(i,j);k}}$ which we call a \emph{site weight}; in our model, the site weights in the same big square, i.e., with the same value of $k$, will additionally be identically distributed. For $s$ a site in the strip (say, with coordinates $(i,j)$ in the $k$\textsuperscript{th} big square), we may also write $\xi_s$ for $\smash{\xi_{(i,j); k}}$.
We consider downward paths which are allowed to wrap around the strip; again see Figure~\ref{f.infinite LPP}. Each such path $\gamma$ is assigned a weight $w(\gamma)$ given by $\sum_{v\in\gamma} \xi_{v}$. Note that while a priori the weight of $\gamma$ could be infinite if $\gamma$ is an infinite path, in our setting the environment will only have finitely many non-zero site weights almost surely (see Remark~\ref{r.ultimately zero}), and so this possibility will not arise.
Now, the last passage value $L_{v,w}$ between small squares $v$ and $w$ in the strip is defined as
$$L_{v,w} := \max_{\gamma: v\to w} w(\gamma),$$
where the maximum is over all downward paths from $v$ to $w$, assuming at least one such path exists. If not, we define $L_{v,w}$ to be $-\infty$; we say this only to give a logically complete definition, but such cases will not actually arise in this paper.
We now specify the distribution of the randomness of the site weights. We will say $X\sim\mathrm{Geo}(z)$ if $X$ is a random variable such that $\P(X\geq k) = z^k$ for $k=0,1,2,\ldots$; in other words, $z$ is the \emph{failure} probability in repeated independent trials and $X$ is the number of failures before the first success. Then the site weights are specified as follows: $\smash{\xi_{(i,j);k}}$ are independent across all $i,j,k$, and distributed as $\mathrm{Geo}(u^2 q^k)$ for $k=0,1,2, \ldots$ and $i,j\in\{1, \ldots, N\}$.
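In this parametrization, $\mathrm{Geo}(z)$ is straightforward to sample by inverting the tail: if $U$ is uniform on $(0,1)$, then $\lfloor \log U/\log z\rfloor$ satisfies $\P(X\geq k) = z^k$. A short Python sketch (ours; the parameter values and sample sizes are arbitrary):

```python
import math
import random

def sample_geo(z, rng):
    """Geo(z) with P(X >= k) = z^k, via inversion of the tail CDF:
    X = floor(log U / log z) for U uniform on (0, 1)."""
    u = rng.random()
    if u == 0.0:
        u = 0.5  # guard against log(0); a probability-zero event
    return math.floor(math.log(u) / math.log(z))

# Empirical check of the tail: the fraction of samples with X >= 3
# should be close to z^3.
rng = random.Random(1)
z = 0.6
samples = [sample_geo(z, rng) for _ in range(200_000)]
frac = sum(1 for s in samples if s >= 3) / len(samples)
print(frac, z**3)  # both ≈ 0.216
```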
\begin{remark}\label{r.ultimately zero}
Observe that $\P(\xi_{(i,j); k} \neq 0) = u^2q^k$ for all $i$, $j$, $k$, which is summable over $i,j\in\{1, \ldots, N\}$ and $k = 0,1, \ldots$. So by the Borel-Cantelli lemma, almost surely, for all large enough $k$ and all $i,j\in\{1, \ldots, N\}$, $\smash{\xi_{(i,j);k}}$ will be zero.
\end{remark}
This model of LPP is similar to other models of periodic LPP considered in the literature \cite{baik2021periodic,betea2021peaks,schmid2022mixing} (though perhaps with slightly different selections of parameters or distributions of the random variables), and also has connections to the periodic Schur measure \cite{borodin2007periodic,betea2021peaks}.
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.56]
\newcommand{2}{2}
\begin{scope}[shift={(10,1.75)}]
\begin{scope}
\clip (-2-1.7, 0.1) rectangle (2+2, -7);
\fill[yellow, opacity=0.25] (-2, -2) -- (0,0) -- (2, -2) -- (0, -2*2) -- cycle;
\fill[orange, opacity=0.25] (2, -2) -- ++(0,-4) -- ++(-2, 2) -- cycle;
\fill[orange, opacity=0.25] (-2, -2) -- ++(0,-4) -- ++(2, 2) -- cycle;
\fill[purple!50!blue, opacity=0.25] (-2, -3*2) -- ++(2,-2) -- ++(2, 2) -- ++(-2, 2) --cycle;
\fill[green!40, opacity=0.25] (-2, -3*2) -- ++(0,-1) -- ++(1,0) -- cycle;
\fill[green!40, opacity=0.25] (2, -3*2) -- ++(0,-1) -- ++(-1,0) -- cycle;
\draw[thick] (-2,-7) -- (-2,-2) -- (0,0) -- (2,-2) -- (2,-7);
\draw[thick] (-2, -2) --coordinate[at end](R1) ++(2*2, -2*2);
\draw[thick] (2, -2) --coordinate[at end](L1) ++(-2*2, -2*2);
\draw[thick,dashed] (R1) -- ++(-2*2, -2*2);
\draw[thick,dashed] (L1) -- ++(2*2, -2*2);
\foreach \i [evaluate=\i as \x using \i-0.5] in {1, 1.5,...,2}
{
\draw[opacity=0.6] (-\x, -\x) -- ++(2+\x, -2-\x) -- ++(-2*2,-2*2);
\draw[opacity=0.6] (\x, -\x) -- ++(-2-\x, -2-\x) -- ++(2*2,-2*2);
}
\end{scope}
\draw[very thick, green!60!black] (0,-0.5) -- ++(1,-1) -- ++(-1, -1) -- ++(2, -2);
\draw[very thick, green!60!black] (-2, -4.5) -- ++(1.5,-1.5) -- ++(-0.5, -0.5) -- ++(0.5, -0.5);
\node[scale=0.8] (first-label) at (2+0.3, -1) {$\mathrm{Geo}(u^2)$};
\node[scale=0.8] (second-label) at (2+1.4, -4) {$\mathrm{Geo}(u^2q)$};
\node[scale=0.8] (third-label) at (-2-1.4, -6) {$\mathrm{Geo}(u^2q^2)$};
\draw[->, semithick] (first-label) to[out=180, in=90] (0, -1.5);
\draw[->, semithick] ($(second-label)+(-0.1,0.2)$) to[out=120, in=80] (2-1, -4.5);
\draw[->, semithick] (third-label) to[out=60, in=90] (0, -5.5);
\draw[thick, dashed, <-] (40:2cm and 0.4cm) arc (40:-220:2cm and 0.4cm);
\end{scope}
\end{tikzpicture}
\end{center}
\caption{The environment in which the infinite last passage percolation occurs. The dashed arrow on top indicates the direction in which the squares wrap around, and the solid green line is a downward path which wraps around the strip.}\label{f.infinite LPP}
\end{figure}
\subsection{Relation between $q$-pushTASEP and LPP}
We can now explain the exactly solvable connection, recently discovered by Imamura-Mucciconi-Sasamoto, between $q$-pushTASEP and the model of LPP on an infinite periodic strip just introduced; our arguments crucially rely on this connection.
The connection runs through a measure on partitions known as the $q$-Whittaker measure, as mentioned above. More precisely, it was shown in \cite{matveev2016q} (and stated ahead as Theorem~\ref{t.q-pushTASEP and whittaker}) that $x_N(T)$, for any $N,T\in\mathbb{N}$ and with step initial condition (i.e., $x_k(0)= k$ for $k=1, \ldots, N$), is distributed as the length of the top row in a partition (encoded as a Young diagram) distributed according to the $q$-Whittaker measure (after a deterministic shift by $N$). See Theorem~\ref{t.q-pushTASEP to lpp}.
\cite{imamura2021skew} proves a relation between the $q$-Whittaker measure and the periodic LPP model. From the perspective of the needs of this paper, the main consequence of \cite{imamura2021skew} is the development of a bijection which generalizes the Robinson-Schensted-Knuth correspondence. In traditional LPP on $\mathbb{Z}^2$, the RSK correspondence associates to the LPP environment (with non-negative integer site weights) a pair of Young tableaux of the same shape, with the property that LPP statistics are encoded in the row lengths of the tableaux; for instance, the LPP value is exactly the length of the top row of the tableaux. It is well-known that, under this correspondence, the measure on the environment given by i.i.d.\ geometric random variables gets pushed forward to give the Schur measure on partitions, i.e., Young diagrams.
The generalization of RSK established in \cite{imamura2021skew}, called skew RSK there, relates pairs of \emph{skew} Young tableaux to pairs of \emph{vertically strict tableaux} (tableaux where the ordering condition on the entries is imposed only along columns and not rows) along with some additional data.
So far, LPP has played no role in this bijection. To involve LPP, we recall an earlier generalization of RSK known as the Sagan-Stanley correspondence \cite{sagan1990robinson}, which can be interpreted as giving a bijection between the LPP environment in an infinite strip (again with non-negative integer site weights) and pairs of skew Young tableaux. \cite{imamura2021skew} also shows that, if one composes this bijection with the skew RSK bijection, then the LPP value is exactly the length of the top row of the vertically strict tableaux coming from the skew RSK.
It turns out that the generating function of vertically strict tableaux can be written in terms of the $q$-Whittaker polynomials. Using this fact, certain weight preservation properties of the bijection, and an argument similar to the well-known one that establishes the above mentioned relationship between geometric LPP and the Schur measure, one can show that the LPP value when the infinite strip has site weights given by independent geometric variables with parameter specified above has the same distribution as the top row of a random partition from the $q$-Whittaker measure. Since this statement is not recorded explicitly in \cite{imamura2021skew}, we will give a proof using results from that paper in Appendix~\ref{app.q-whittaker and lpp}.
\subsection{Proof ideas}\label{s.proof ideas}
To summarize, $x_N(N)$ is, up to a deterministic shift by $N$, the LPP value in an infinite periodic environment of inhomogeneous geometric random variables. Now, the weight of \emph{any} path in this environment is a lower bound on the last passage percolation value. We consider a specific path which allows us to utilize the homogeneity of the geometric variable parameters inside a big square (as well as tail information of geometric LPP in such homogeneous squares) along with the independence across big squares.
More specifically, we consider the path formed by concatenating paths from the top to bottom of the big squares along the center, i.e., the squares in which the geometric parameter is $u^2q^{2i}$ for some $i\in\mathbb{N}\cup\{0\}$. Strictly speaking, we do not exactly concatenate the paths, as they do not share a common site; we simply consider the sum of the weights of the paths, ignoring the positive weight of the extra site needed to actually join the paths. See Figure~\ref{f.concatenation}.
\begin{figure}
\begin{tikzpicture}
\fill[yellow, opacity=0.25] (0,0) -- ++(-1,1) -- ++(1,1) -- ++(1,-1) --cycle;
\fill[orange, opacity=0.25] (0,0) -- ++(1,1) -- ++(1,-1) -- ++(-1,-1) --cycle;
\fill[orange, opacity=0.25] (0,0) -- ++(-1,1) -- ++(-1,-1) -- ++(1,-1) --cycle;
\fill[purple!50!blue, opacity=0.25] (0,0) -- ++(-1,-1) -- ++(1,-1) -- ++(1,1) --cycle;
\draw[thick] (-1,1) -- (1,-1);
\draw[thick] (1,1) -- (-1,-1);
\draw[opacity=0.6] (-1.5, -0.5) -- ++(2,2);
\draw[opacity=0.6] (-0.5, -1.5) -- ++(2,2);
\draw[opacity=0.6] (-1.5, 0.5) -- ++(2,-2);
\draw[opacity=0.6] (-0.5, 1.5) -- ++(2,-2);
\draw[very thick, green!60!black] (0, 1.5) -- ++(-0.25,0.25);
\draw[very thick, green!60!black] (0, 1.5) -- ++(0.5,-0.5) -- ++(-0.5, -0.5);
\draw[very thick, green!60!black] (0, -0.5) -- ++(-0.5,-0.5) -- ++(0.75, -0.75);
\draw[very thick, green!60!black, dashed] (0, 0.5) -- ++(0.5,-0.5) -- ++(-0.5, -0.5);
\end{tikzpicture}
\caption{A depiction of the paths we consider near the boundary between different big squares. The two solid green paths go from the topmost site to the bottommost site in their respective big squares, where the environment is homogeneous. We do not include the dotted green path needed to connect them; this omission is valid for proving an upper bound on the lower tail, since including its weight would only increase the overall weight.}
\label{f.concatenation}
\end{figure}
Let us calculate the law of large numbers of this path, i.e., its weight up to first order, using the knowledge of the LLN for geometric LPP. Indeed, in an $N\times N$ square with geometric parameter $u^2q^{2i}$, to first order in $N$, the LPP value is $2N\times\frac{uq^{i}}{1-uq^i}$ (see for example \cite{johansson2000shape} or Theorem~\ref{t.uniform lower tail} ahead), so that the overall LPP value of the path we have described is, again to first order,
\begin{align}\label{e.first form of LLN}
2N\times\sum_{i=0}^{\infty}\frac{uq^{i}}{1-uq^i} = 2N\times\sum_{i=0}^{\infty}\frac{q^{i+\log_q u}}{1-q^{i+\log_q u}}.
\end{align}
To evaluate this sum we need the $q$-digamma function $\psi_q$, defined in \eqref{e.q-digamma}.
Now, the $q$-digamma function $\psi_q$ is related to the sum in \eqref{e.first form of LLN} by the formula
$$\psi_q(x) = -\log(1-q)+\log q\cdot\sum_{i=0}^\infty \frac{q^{i+x}}{1-q^{i+x}}.$$
From this we see that \eqref{e.first form of LLN} equals
\begin{align}\label{e.lln expression}
2N\times\frac{\psi_q(\log_q(u)) + \log(1-q)}{\log q} = N\times(f_q-1),
\end{align}
which is the first order term in the probability in Theorem~\ref{mt.q-pushtasep bound} (recall that $L$ and $x_N(N)$ differ by a deterministic shift of $N$) and matches the LLN proved in \cite{vetHo2022asymptotic}.
\subsubsection{Uniform LPP control}
We have identified a concatenation of LPP problems which obtains the correct first order behaviour. Now, the order of fluctuations of geometric LPP of parameter $u^2q^{2i}$ in an $N\times N$ square is $u^{1/3}q^{i/3}(1-u^{2}q^{2i})^{-1}N^{1/3}$ (again see for example \cite{johansson2000shape} or Theorem~\ref{t.uniform lower tail} ahead). One can think of these fluctuations, once rescaled by this expression, as being approximately distributed according to the GUE Tracy-Widom distribution. The latter has a negative mean, and a calculation as in the previous subsection shows that the accumulated loss on the fluctuation scale across all the big squares is finite, in particular of order $(\log q^{-1})^{-1}|\log \log q^{-1}| N^{1/3}$ (ignoring the dependence on $u$). This means that if we can control the geometric LPP values across all the squares and use appropriate tools on concentration of sums of independent random variables, we will obtain a lower tail inequality for $x_N(N)$.
(In fact, the true behavior of $x_N(N)$ should be $f_q N - \Theta((\log q^{-1})^{-1}N^{1/3})$, i.e., the fluctuation term should not have the $\log\log$ factor. Our approach does not seem able to achieve this, and we discuss this more ahead in Section~\ref{s.log gamma remark}, along with some of the consequences of the appearance of the extra $\log\log$ factor.)
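The $(\log q^{-1})^{-1}$ order of the accumulated fluctuation scales can be corroborated numerically: the Python sketch below (ours; the value of $u$ and the cutoffs are arbitrary) sums the per-square coefficients $u^{1/3}q^{i/3}(1-u^{2}q^{2i})^{-1}$ over the big squares and observes that the sum, multiplied by $\log q^{-1}$, stabilizes as $q\to 1$.

```python
import math

def fluct_scale_sum(q, u, tol=1e-15):
    """Sum over i of u^(1/3) q^(i/3) / (1 - u^2 q^(2i)): the total
    fluctuation-scale coefficient accumulated over the big squares."""
    total, i = 0.0, 0
    while True:
        term = u ** (1 / 3) * q ** (i / 3) / (1.0 - u * u * q ** (2 * i))
        total += term
        if term < tol:
            return total
        i += 1

u = 0.5
vals = []
for q in (0.9, 0.99, 0.999):
    v = fluct_scale_sum(q, u) * math.log(1.0 / q)
    vals.append(v)
    print(q, v)  # roughly stable as q -> 1
```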
To apply concentration inequalities, we will need control over all the constituent geometric LPP problems. Observe that as the big squares get farther into the environment, the parameter $u^2q^{2i}$ of the geometric random variables goes to zero. So, in fact, we need a tail bound on the geometric LPP problems which is \emph{uniform} in the parameter $q$ essentially in the entire range $(0,1)$.
Now, the literature contains extremely sharp estimates on the upper and lower tails of geometric LPP for any fixed parameter $q$ \cite{baik2001optimal}. Unfortunately, these estimates are not stated uniformly in $q$ in the required range, and the method of proof does not seem like it would yield such an estimate. Indeed, the arguments rely on steepest descent analysis of contour integrals, and the resulting contours implicitly depend on $q$, thus making it difficult to extract uniform-in-$q$ estimates. Thus we need to prove new results. The following is our second main result and obtains a uniform lower tail in the entire parameter range of $q$. Here,
\begin{equation}\label{e.mu_q definition}
\mu_q = \frac{(1+q^{1/2})^2}{1-q}.
\end{equation}
\begin{theorem}\label{t.uniform lower tail}
Let $T_N$ be the LPP value from top to bottom of an $N\times N$ square in an environment given by i.i.d.\ Geo($q$) random variables. There exist positive constants $c$ and $N_0$ such that, for $q\in(0, 1)$, $N\geq N_0$, and $x > 0$,
\begin{align*}
\P\left(T_N \leq (\mu_q-1) N - x\cdot \frac{q^{1/6}}{1-q}N^{1/3}\right) \leq \exp(-cx^{3/2}).
\end{align*}
\end{theorem}
We next make some remarks on aspects of this result before outlining how to use Theorem~\ref{t.uniform lower tail} to complete the proof of Theorem~\ref{mt.q-pushtasep bound}.
\begin{remark}[Effective range of $x$]
Observe that $\mu_q-1 = \smash{\frac{2q^{1/2}(1+q^{1/2})}{1-q}}$, so the first order term $(\mu_q-1)N = O(q^{1/2}N/(1-q))$. Thus, for $x>C(q^{1/2}N)^{2/3}$ for some fixed constant $C$, the probability is actually zero (since $T_N\geq 0$ always). For this reason it will be enough to prove the theorem for $x< \delta(q^{1/2}N)^{2/3}$ for some small $\delta>0$; then for $\delta(q^{1/2}N)^{2/3}\leq x \leq C(q^{1/2}N)^{2/3}$ one can obtain the claimed bound by modifying the constant $c$, and beyond that the bound holds trivially.
\end{remark}
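The centering $(\mu_q-1)N$ in Theorem~\ref{t.uniform lower tail} can be checked with a quick simulation. The following is a minimal Monte Carlo sketch (numpy assumed; the parameters $N=300$, $q=1/2$, and the sample count are illustrative choices), computing the LPP value by the standard dynamic programming recursion $T(i,j) = w(i,j) + \max\{T(i-1,j), T(i,j-1)\}$:

```python
import numpy as np

def geometric_lpp(N, q, rng):
    """LPP value over up-right paths in an N x N grid of i.i.d. Geo(q)
    weights, supported on {0,1,2,...} with P(w = k) = (1-q) q^k."""
    w = rng.geometric(1 - q, size=(N, N)) - 1  # numpy's geometric lives on {1,2,...}
    T = np.zeros((N, N))
    T[0] = np.cumsum(w[0])
    for i in range(1, N):
        T[i, 0] = T[i - 1, 0] + w[i, 0]
        for j in range(1, N):
            T[i, j] = w[i, j] + max(T[i - 1, j], T[i, j - 1])
    return T[-1, -1]

rng = np.random.default_rng(0)
N, q = 300, 0.5
mu_q = (1 + q**0.5) ** 2 / (1 - q)
mean = np.mean([geometric_lpp(N, q, rng) for _ in range(5)])
print(mean / N, mu_q - 1)  # mean/N sits a little below mu_q - 1
```

The simulated mean of $T_N/N$ sits slightly below $\mu_q-1$, consistent with the negative mean of the Tracy-Widom fluctuations discussed above.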
\begin{remark}[Effective range of $q$]
Though $q$ is allowed to be arbitrarily close to zero, the statement is really only meaningful when $q$ is lower bounded by a constant times $N^{-2}$. This is simply because when $q=o(N^{-2})$, then $(\mu_q-1)N = O(q^{1/2}N)= o(1)$; similarly, the fluctuation scale is also $o(1)$. As a result the upper bound on $x$ of $\delta(q^{1/2}N)^{2/3}$ under which we need to prove the theorem also becomes $o(1)$. This effective lower bound on $q$ reflects the fact that $q=\Theta(N^{-2})$ is the regime in which the number of points in $[1,N]^2\cap\mathbb{Z}^2$ where the geometric random variable is non-zero is $O(1)$, and, more precisely, converges to a Poisson random variable; thus the geometric LPP problem converges to Poissonian LPP (see \cite{johansson-toprows}).
\end{remark}
\begin{remark}[Tail exponent of $3/2$]\label{r.geometric lpp tail exponent}
While the tail bound we obtain is $\exp(-cx^{3/2})$, the true lower tail behavior is $\exp(-cx^3)$ as proven in \cite{baik2001optimal} for fixed $q$. As we said earlier, the usual method of obtaining lower tail bounds via steepest descent analysis of Riemann-Hilbert problems does not appear to be suited to obtain uniform estimates. Instead, we utilize a method, often referred to in the literature as ``Widom's trick'',
which was first introduced by Widom \cite{widom2002convergence} to reduce the task to understanding the trace of the kernel operator of the Meixner ensemble, a determinantal point process associated to geometric LPP via the RSK correspondence. Widom's trick essentially treats the points of the Meixner ensemble as being independent, ignoring the repulsive behavior determinantal point processes exhibit. This simplifies the task of obtaining a lower tail bound, but at the cost of only yielding a tail bound with exponent $3/2$. This can likely be upgraded to the full cubic tail exponent uniformly in $q$ using bootstrapping arguments developed in \cite{ganguly2020optimal}, but one would first have to obtain similar uniform lower tail estimates for points displaced (on the $N^{2/3}$ scale) from $(N,N)$. This should be doable, but we have not pursued it in this work as it is not necessary for our bounds on $q$-pushTASEP.
\end{remark}
Let us finally say a few words about what we need to know about the Meixner operator's trace. It is well-known that the trace can be expressed in terms of the upper tail of the expected empirical distribution $\nu_{q,N}$ of the Meixner ensemble. We then need to obtain a lower bound on the upper tail of $\nu_{q,N}$. An argument of Ledoux given in \cite{ledoux2005deviation} shows that this can be accomplished by obtaining sharp asymptotics for the moments of $\nu_{q,N}$. In our context, this means that the estimates need to be sharp in both their $q$ and $N$ dependencies. We obtain these estimates by doing a careful analysis of formulas available for the factorial or Pochhammer moments of $\nu_{q,N}$ (i.e., $\mathbb{E}[X(X-1)\cdots(X-k+1)]$ for $X\sim \nu_{q,N}$) from \cite{ledoux2005distributions} and then converting these into ones for the polynomial moments.
\subsubsection{Tying it together}
With Theorem~\ref{t.uniform lower tail}, the last ingredient is a concentration inequality. The inequality must take into account the fact that the scale of the random variables is decreasing. Typical concentration inequalities are for sub-Gaussian tail decay (while here we only have tail exponent $3/2$) and are for deviations from the mean (while our estimates are from the law of large numbers centering). While there are results in the literature for other tail decays, e.g., \cite{stretched-exp-concentration}, adapting these to our setting to address the second point directly results in a constant order loss for each term being summed, independent of the scale of the summand. This is too lossy, as we have an infinite number of terms. Instead we redo the arguments establishing these bounds, which ultimately rely on estimates on the moment generating function, in such a way as to fit our application.
With this final step, we will obtain Theorem~\ref{mt.q-pushtasep bound}.
\begin{remark}[An argument for the lower tail of $x_N(T)$]
As we saw, the conceptual heart of the argument consisted of finding a concatenation of paths which, to first order, has the same weight as the law of large numbers \eqref{e.first form of LLN} for the model. Now, if we were interested in $x_N(T)$ for general $T$, there is also a representation of it in terms of a periodic LPP problem, where the environment consists of periodic rectangles of dimension $N\times T$ instead of $N\times N$ squares as here. However, in such an environment, it is not clear what concatenation of paths would achieve the correct first order weight, and this is why we restrict to $T=N$ in this paper. We leave the general $T$ case for future work.
\end{remark}
\subsection{A remark on convergence to the log-gamma free energy}\label{s.log gamma remark}
Though not needed for the results in this paper, we also note that, as proven in \cite{matveev2016q}, the $q\to 1$ limit of $X^{\mrm{sc}}_N$, when renormalized correctly, is the free energy of the log-gamma polymer introduced in \cite{seppalainen2012scaling} (we refer the reader to that paper for the precise definition of the model).
Indeed, for example, setting $q=\exp(-\varepsilon)$ and $u=\exp(-A\varepsilon)$ for a fixed $A>0$, \cite[Theorem~8.7]{matveev2016q} tells us that $\varepsilon(x_N(N)-(2N-1)\varepsilon^{-1}\log \varepsilon^{-1})$ converges in distribution to the log-gamma free energy where the parameters of the inverse gamma random variables are all $2A$.
It can be checked that the appropriately normalized $q\to 1$ limit of $f_q$ (as defined in \eqref{e.fq definition}) is indeed the law of large numbers for the log-gamma polymer.
However, notice that the centering term for the convergence is $(2N-1)\varepsilon^{-1}\log\varepsilon^{-1}$, while the first order behavior (in $N$) we calculated in \eqref{e.lln expression} via the connection to LPP, when written in terms of $\varepsilon$, was $2N\varepsilon^{-1}\log\varepsilon^{-1}$. In other words, there is a discrepancy of $\varepsilon^{-1}\log\varepsilon^{-1}$. This comes from the earlier noted point that the fluctuation scale we are able to prove (when written in terms of $\varepsilon$) is $\varepsilon^{-1}\log\varepsilon^{-1}$, unlike the true fluctuation scale of $\varepsilon^{-1}$ suggested by \cite{matveev2016q}; equivalently, our lower tail bound (for $x_N(N)$ and not $X^{\mrm{sc}}_N$) only kicks in after $\varepsilon^{-1}\log \varepsilon^{-1}N^{1/3}$ into the tail.
For this reason, unfortunately, our tail bounds do not survive in the limit to provide a tail bound on the log-gamma free energy.
The ultimate source of the discrepancy in the fluctuation scale that we are able to prove is that we are approximating the true LPP value in the infinite cylinder by a sum of LPP values in $N\times N$ big squares. In more detail, the portion of our path in the $i$\textsuperscript{th} big square from the top suffers a loss of order $q^{i/6}(1-q^{2i})^{-1} N^{1/3}$ (ignoring the $u$-dependence), essentially because this is the scale of fluctuations on which the LPP value in this box converges to the Tracy-Widom distribution, and the latter has a negative mean. Observing that $1-q^{2i}$ is approximately $\varepsilon i$ up to constants when $q=\exp(-\varepsilon)$, we see that the sum of this loss from $i=1$ to $\infty$ yields an overall loss of order $N^{1/3}$ times $\varepsilon^{-1}\sum_{i=1}^\infty i^{-1}e^{-i\varepsilon/6} = -\varepsilon^{-1}\log(1-e^{-\varepsilon/6}) \approx \varepsilon^{-1}\log(\varepsilon^{-1})$. Thus to avoid the lossy factor of $\log \varepsilon^{-1}$ it seems one would need a different scheme of approximation.
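The series identity behind the last approximation can be checked numerically. The following minimal Python sketch (plain standard library; the truncation point is an illustrative choice) compares the truncated sum $\sum_{i\geq1}i^{-1}e^{-i\varepsilon/6}$ with the closed form $-\log(1-e^{-\varepsilon/6})$, which indeed grows like $\log\varepsilon^{-1}$ as $\varepsilon\to0$:

```python
import math

def loss_sum(eps):
    """sum_{i>=1} i^{-1} e^{-i*eps/6}, truncated once the tail is negligible."""
    x = math.exp(-eps / 6)
    n_terms = int(600 / eps) + 1  # beyond this, terms are below e^{-100}
    return sum(x**i / i for i in range(1, n_terms + 1))

for eps in (0.1, 0.01):
    s = loss_sum(eps)
    closed = -math.log(1 - math.exp(-eps / 6))  # sum_i x^i / i = -log(1 - x)
    print(eps, s, closed, math.log(1 / eps))    # s == closed, both of order log(1/eps)
```

For small $\varepsilon$ the truncated sum and the closed form agree to high precision, and both are within a constant factor of $\log\varepsilon^{-1}$.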
As mentioned earlier in Section~\ref{s.recent work in polymer}, very recent work \cite{landon2022upper} has established a bound (with tail exponent $3/2$) on the lower tail of the free energy of a \emph{stationary} version of the log-gamma model (as well as other polymer models such as the O'Connell-Yor model) using a Burke property enjoyed by the model, which gives access to formulas for the moment generating function of the free energy. For the O'Connell-Yor model, in \cite{landon2022tail}, these bounds were transferred to the non-stationary version of the model using geometric arguments involving the polymer measure introduced in \cite{flores2014fluctuation}, and the tail exponent was upgraded to the optimal $3$ by adapting geometric methods from \cite{ganguly2020optimal}. One expects that a similar program would deliver the corresponding bounds in the log-gamma case as well.
\subsection*{Acknowledgements}
The authors thank Matteo Mucciconi for explaining the proof of Theorem~\ref{t.q-pushTASEP to lpp}, as well as Philippe Sosoe and Benjamin Landon for sharing their preprint \cite{landon2022tail} with us in advance. I.C. was partially supported by the NSF through grants DMS:1937254, DMS:1811143, DMS:1664650,
as well as through a Packard Fellowship in Science and Engineering, a Simons Fellowship, and
a W.M. Keck Foundation Science and Engineering Grant. I.C. also thanks the Pacific Institute for the Mathematical Sciences (PIMS) and Centre de Recherches Math\'ematiques (CRM), where some materials were developed in conjunction with the lectures he gave at the PIMS-CRM summer school in probability (which is partially supported by NSF grant DMS:1952466). M.H. was partially supported by NSF grant DMS:1937254.
\section{The $q$-Whittaker measure and last passage percolation}\label{s.integrable connections}
As mentioned earlier, our strategy will rely on first relating the observable $x_N(N)$ to an infinite last passage problem in a periodic and inhomogeneous environment. The first step in establishing this relation requires us to introduce the $q$-Whittaker measure, and we start there.
\subsection{$q$-Whittaker polynomials and measure}
\begin{definition}[$q$-Whittaker polynomial]
For a skew partition $\mu/\lambda$, the skew $q$-Whittaker
polynomial in $n$ variables $\mathscr P_{\mu/\lambda} (x_1, \ldots , x_n; q)$ is defined recursively by the branching rule
\begin{align*}
\mathscr P_{\mu/\lambda} (x_1, \ldots , x_n; q) = \sum_{\eta}\mathscr P_{\eta/\lambda} (x_1, \ldots , x_{n-1}; q)\mathscr P_{\mu/\eta} (x_n; q),
\end{align*}
where, for a single variable $z\in\mathbb{C}$ (recalling the $q$-binomial coefficient defined in \eqref{e.q-binomial coefficient}),
\begin{align*}
\mathscr P_{\mu/\eta}(z; q) = \mathbbm{1}_{\eta\prec \mu} \prod_{i\geq 1} z^{\mu_i-\eta_i}\binom{\mu_i-\mu_{i+1}}{\mu_i-\eta_i}_{\!\!q}.
\end{align*}
For a partition $\mu$, the $q$-Whittaker polynomial $\mathscr P_\mu$ is given by the skew $q$-Whittaker polynomial $\mathscr P_{\mu/\lambda}$ with $\lambda$ taken to be the empty partition. The $q$-Whittaker polynomial is a special case ($t=0$) of the Macdonald polynomials, for which a comprehensive reference is \cite[Section VI]{macdonald1998symmetric}.
\end{definition}
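For small partitions the branching rule above can be run directly. The following is a minimal Python sketch (function names are ours; the $q$-binomial coefficient is computed from its standard product form $\binom{n}{k}_q = \frac{(q;q)_n}{(q;q)_k(q;q)_{n-k}}$). At $q=0$ every $q$-binomial coefficient degenerates to $1$, so the recursion reduces to the Schur branching rule, which provides a convenient correctness check, as does symmetry in the $x$-variables:

```python
from itertools import product

def q_binomial(n, k, q):
    # standard product form (q;q)_n / ((q;q)_k (q;q)_{n-k}); equals 1 at q = 0
    if k < 0 or k > n:
        return 0.0
    def poch(m):
        out = 1.0
        for i in range(1, m + 1):
            out *= 1 - q**i
        return out
    return poch(n) / (poch(k) * poch(n - k))

def one_var(mu, eta, z, q):
    """Single-variable skew q-Whittaker polynomial P_{mu/eta}(z; q)."""
    eta = tuple(eta) + (0,) * (len(mu) - len(eta))
    val = 1.0
    for i in range(len(mu)):
        nxt = mu[i + 1] if i + 1 < len(mu) else 0
        if not nxt <= eta[i] <= mu[i]:  # interlacing condition eta < mu
            return 0.0
        val *= z ** (mu[i] - eta[i]) * q_binomial(mu[i] - nxt, mu[i] - eta[i], q)
    return val

def whittaker_P(mu, xs, q):
    """q-Whittaker polynomial P_mu(x_1, ..., x_n; q) via the branching rule."""
    mu = tuple(mu)
    if len(xs) == 1:
        return one_var(mu, (), xs[0], q)
    ranges = [range((mu[i + 1] if i + 1 < len(mu) else 0), mu[i] + 1)
              for i in range(len(mu))]
    return sum(whittaker_P(eta, xs[:-1], q) * one_var(mu, eta, xs[-1], q)
               for eta in product(*ranges))

# at q = 0 the branching rule collapses to that of the Schur polynomials:
print(whittaker_P((2, 1), (2.0, 3.0), 0.0))  # s_{(2,1)}(2,3) = 2^2*3 + 2*3^2 = 30
# symmetry in the x-variables for generic q:
print(whittaker_P((2, 1), (0.3, 0.7), 0.5), whittaker_P((2, 1), (0.7, 0.3), 0.5))
```

This brute-force enumeration is only feasible for small partitions, but it makes the interlacing structure of the branching rule concrete.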
For a partition $\mu$, we also define $\mathdutchcal b_{\mu}(q)$ by
\begin{align*}
\mathdutchcal{b}_{\mu}(q) = \prod_{i\geq 1} \frac{1}{(q;q)_{\mu_i-\mu_{i+1}}}.
\end{align*}
\begin{definition}[$q$-Whittaker measure]
The $q$-Whittaker measure $\mathbb{W}_{a;b}^{(q)}$, first introduced in \cite{borodin2014macdonald}, is the measure on the set of all partitions given by
\begin{align*}
\mathbb{W}_{a;b}^{(q)}(\mu) = \frac{1}{\Pi(a;b)} \mathdutchcal{b}_\mu(q) \mathscr P_{\mu}(a;q)\mathscr P_{\mu}(b;q),
\end{align*}
where $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_t)$ satisfy $a_i, b_j\in(0,1)$, and $\Pi(a;b)$ is a normalization constant given explicitly by
\begin{align*}
\Pi(a;b) = \prod_{i=1}^n\prod_{j=1}^t \frac{1}{(a_ib_j; q)_{\infty}}.
\end{align*}
\end{definition}
We may now record the important connection between $x_N(T)$ and the $q$-Whittaker measure which holds under general parameter choices and general times, when started from the narrow-wedge initial condition:
\begin{theorem}[Section~3.1 of \cite{matveev2016q}]\label{t.q-pushTASEP and whittaker}
Let $a$, $b$ be specializations of parameters respectively $(a_1, \ldots , a_N) \in (0, 1)^N$ and $(b_1, \ldots , b_T ) \in (0, 1)^T$. Let $\mu \sim \mathbb{W}^{(q)}_{a;b}$ and let $x(T)$ be a $q$-pushTASEP under initial conditions $x_k(0) = k$ for $k = 1, \ldots , N$. Then,
$$x_N(T) \stackrel{d}{=} \mu_1 + N.$$
\end{theorem}
Finally, we can give the equivalence between $q$-pushTASEP and the LPP value. As indicated, this is a straightforward consequence of the results in \cite{imamura2021skew}, and we will provide a proof, explained to us by Matteo Mucciconi, in Appendix~\ref{app.q-whittaker and lpp}.
While this paper only considers the case $T=N$, we will state the LPP equivalence for general $T$. For this, the LPP problem we consider is in an infinite periodic environment where the ``fundamental domain'' has dimension $N\times T$ instead of $N\times N$ as described in Section~\ref{s.lpp}. The distribution of the geometric random variables remains the same, i.e., it is $u^{2}q^k$ in the $k$\textsuperscript{th} copy of the fundamental domain.
\begin{theorem}\label{t.q-pushTASEP to lpp}
Let $L$ be the LPP value in the environment just described with $u,q\in(0,1)$. Let $x_N(T)$ be the position of the $N$\textsuperscript{th} particle at time $T$ in $q$-pushTASEP with $a_i=b_j=u$ for all $(i,j)\in\{1, \ldots, N\}\times\{1, \ldots, T\}$ and step initial condition. Then
\begin{align*}
x_N(T) \stackrel{d}{=} L + N .
\end{align*}
\end{theorem}
The proof actually combines Theorem~\ref{t.q-pushTASEP and whittaker} with a statement giving a distributional equality between $L$ and the first row of a partition sampled from the $q$-Whittaker measure. In fact, one can relate the lengths of all the rows of the partition to LPP values involving multiple disjoint paths, and we prove this stronger statement in Theorem~\ref{t.full q-whittaker to lpp}.
\section{Widom's trick applied to the lower tail in geometric LPP}
In the next two sections we will prove Theorem~\ref{t.uniform lower tail}, which provides an upper bound on the lower tail of the LPP value in an i.i.d.\ geometric environment, uniform in the parameter of the geometric random variables.
The argument relies on a trick introduced by Widom in \cite{widom2002convergence}, which we explain next.
\subsection{Widom's trick}
We first need to introduce the Meixner ensemble, the determinantal point process associated to geometric LPP via the RSK correspondence. The fact that it is determinantal is the crucial property for Widom's argument.
\begin{definition}[Meixner ensemble]
First let $\mu^q_{\mrm{Geo}}$ denote the Geo($q$) distribution on $\mathbb{N}_0 := \mathbb{N}\cup\{0\}$, i.e., the distribution with discrete weights given by
\begin{align*}
\mu^q_{\mrm{Geo}}(\{x\}) = (1-q)q^x.
\end{align*}
For $q\in(0,1)$ and $N\in\mathbb{N}$, the $N\times N$ \emph{Meixner ensemble} is a determinantal point process on $\mathbb{N}_0$ with kernel given, for $x,y\in\mathbb{N}_0$ with $x\neq y$ and with respect to $\mu^q_{\mrm{Geo}}$, by
\begin{align}\label{e.meixner kernel}
K_{N}(x,y) = \frac{\kappa_{N-1}}{\kappa_N}\cdot\frac{M_N(x)M_{N-1}(y) - M_{N-1}(x)M_N(y)}{x-y};
\end{align}
here $M_N(x)=\kappa_N x^N + (\text{lower order terms})$ are the orthonormal polynomials with respect to $\mu^q_{\mrm{Geo}}$, with $\kappa_N$ denoting the leading coefficient of $M_N$ (we call these the Meixner polynomials, though they differ from the classical Meixner polynomials by constant multiples due to the normalization).
The second factor on the right-hand side of \eqref{e.meixner kernel} makes sense for $x,y\in\mathbb{R}$, and so the $x=y$ case can be defined by taking the appropriate limit.
\end{definition}
Here is the relation between the Meixner ensemble and the geometric LPP value.
\begin{proposition}[Proposition~1.3 of \cite{johansson2000shape}]\label{p.meixner and geo}
Fix $q\in(0,1)$ and $N\in\mathbb{N}$. Let $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_N$ be distributed according to the $N\times N$ Meixner ensemble and let $T_N$ be the LPP value in the environment of i.i.d.\ Geo($q$) random variables. Then $\smash{T_N \stackrel{d}{=} \lambda_1-N+1}$.
\end{proposition}
With this background we may now explain Widom's trick. The fact that $(\lambda_1, \ldots, \lambda_N)$ is determinantal with kernel $K_{N}$ given by \eqref{e.meixner kernel} implies, using the Cauchy-Binet formula, that, for any $t\in\mathbb{R}$,
\begin{align*}
\P\left(\lambda_1 \leq t\right) = \det\left(I_N - K^t_{N}\right),
\end{align*}
where $K^t_{N}$ can be written as the Gram matrix of the Meixner polynomials, i.e.,
\begin{align}
K^t_{N} = \bigl(\langle M_{\ell -1}, M_{k-1}\rangle_{\ell^2(\{t, t+1, \ldots\},\, \mu^q_{\mrm{Geo}})}\bigr)_{1\leq k,\ell\leq N}. \label{e.K^t definition}
\end{align}
The fact that Gram matrices are positive semi-definite implies that the eigenvalues of $\smash{K^t_{N}}$ are non-negative; also, we may write, for any unit vector $\smash{u\in\mathbb{R}^N}$ and with $\smash{g(x) = \sum_{i=1}^N u_iM_{i-1}(x)}$,
\begin{align*}
1= \sum_{i=1}^N u_i^2 = \langle g,g\rangle_{\ell^2(\mathbb{N}_0, \mu^q_{\mrm{Geo}})} &= \langle g\mathbbm{1}_{\cdot<t},g\mathbbm{1}_{\cdot<t}\rangle_{\ell^2(\mathbb{N}_0, \mu^q_{\mrm{Geo}})} + \langle g\mathbbm{1}_{\cdot\geq t},g\mathbbm{1}_{\cdot\geq t}\rangle_{\ell^2(\mathbb{N}_0, \mu^q_{\mrm{Geo}})}\\
&\geq \langle g\mathbbm{1}_{\cdot\geq t},g\mathbbm{1}_{\cdot\geq t}\rangle_{\ell^2(\mathbb{N}_0, \mu^q_{\mrm{Geo}})}
= \langle g,g\rangle_{\ell^2(\{t, t+1,\ldots\}, \mu^q_{\mrm{Geo}})} = u^T K^t_{N} u,
\end{align*}
which in turn implies that the eigenvalues of $K^t_{N}$ are at most 1.
Let us label the eigenvalues of $K^t_{N}$ as $\rho^t_1, \ldots, \rho^t_N$. Since $1-x\leq e^{-x}$ for $x\in[0,1]$,
\begin{align*}
\P\left(\lambda_1 \leq t\right) = \det\left(I_N - K^t_{N}\right) = \prod_{i=1}^N (1-\rho^t_i) &\leq \exp\left(-\sum_{i=1}^N\rho^t_i\right) = \exp\left(-\mathrm{Tr}(K^t_{N})\right).
\end{align*}
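The chain of identities and inequalities above can be tested numerically for small $N$: construct $M_0,\ldots,M_{N-1}$ by Gram-Schmidt with respect to $\mu^q_{\mrm{Geo}}$, form the Gram matrix \eqref{e.K^t definition}, and compare $\det(I_N-K^t_{N})$, read as the probability of seeing no particle in $\{t,t+1,\ldots\}$, against a Monte Carlo estimate obtained through Proposition~\ref{p.meixner and geo}. The following sketch assumes numpy, with illustrative parameter choices and a truncation of the support:

```python
import numpy as np

def widom_check(N=3, q=0.5, t=11, xmax=400, trials=20000, seed=1):
    rng = np.random.default_rng(seed)
    x = np.arange(xmax)
    w = (1 - q) * q ** x                  # mu_Geo weights, truncated at xmax
    # orthonormal polynomials M_0, ..., M_{N-1} w.r.t. mu_Geo via Gram-Schmidt
    M = []
    for d in range(N):
        v = x.astype(float) ** d
        for m in M:
            v = v - np.sum(v * m * w) * m
        M.append(v / np.sqrt(np.sum(v * v * w)))
    M = np.array(M)
    Kt = (M * (w * (x >= t))) @ M.T       # Gram matrix on {t, t+1, ...}
    det = np.linalg.det(np.eye(N) - Kt)   # P(no particle in {t, t+1, ...})
    # Monte Carlo for the same event via T_N = lambda_1 - N + 1:
    # {lambda_1 <= t - 1} = {T_N <= t - N}
    hits = 0
    for _ in range(trials):
        g = rng.geometric(1 - q, size=(N, N)) - 1  # Geo(q) on {0,1,...}
        T = np.zeros((N, N))
        T[0] = np.cumsum(g[0])
        for i in range(1, N):
            T[i, 0] = T[i - 1, 0] + g[i, 0]
            for j in range(1, N):
                T[i, j] = g[i, j] + max(T[i - 1, j], T[i, j - 1])
        hits += T[-1, -1] <= t - N
    return det, np.trace(Kt), hits / trials

det, tr, mc = widom_check()
print(det, mc)              # the two probabilities agree up to sampling error
print(det <= np.exp(-tr))   # the bound det(I - K^t) <= exp(-Tr K^t)
```

The determinant and the Monte Carlo estimate agree up to sampling error, and the determinant is indeed dominated by $\exp(-\mathrm{Tr}(K^t_N))$.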
Thus Widom's trick reduces the problem of bounding the lower tail to understanding the trace of an associated operator. This in turn can be accomplished by lower bounding the upper tail of the expected empirical distribution $\nu_{q,N}$ of the Meixner ensemble, defined precisely by
\begin{equation}\label{e.nu_q,N definition}
\nu_{q,N} = \mathbb{E}\left[\frac{1}{N}\sum_{i=1}^N\delta_{\lambda_i}\right].
\end{equation}
We record the connection between the operator's trace and the tail of $\nu_{q,N}$ next.
\begin{lemma}\label{l.expression for trace}
For any $t\in \mathbb{N}$, $\mathrm{Tr}(K^t_{N}) = N \nu_{q,N}([t,\infty))$.
\end{lemma}
\begin{proof}
First we observe that, from \eqref{e.K^t definition},
\begin{align}\label{e.trace formula}
\mathrm{Tr}(K^t_{N}) = \sum_{\ell=1}^{N}\langle M_{\ell-1}, M_{\ell-1}\rangle_{\ell^2(\{t, t+1, \ldots\},\, \mu^q_{\mrm{Geo}})} = \int_{t}^\infty \sum_{\ell=0}^{N-1} M_{\ell}^2\,\mathrm d \mu^q_{\mrm{Geo}}.
\end{align}
Now since $(\lambda_1, \ldots, \lambda_N)$ is determinantal with kernel $K_N$ with respect to $\mu^q_{\mrm{Geo}}$, it is a standard fact of the theory of determinantal point processes (or see \cite[Proposition~1.2]{ledoux2005deviation}) that, for any bounded measurable $f:\mathbb{N}_0\to\mathbb{R}$,
\begin{align*}
\mathbb{E}\left[\prod_{i=1}^N[1+f(\lambda_i)]\right] = \sum_{r=0}^N \frac{1}{r!}\int_{\mathbb{N}_0^r}\prod_{i=1}^r f(x_i)\det(K_{N}(x_i, x_j))_{1\leq i,j\leq r}\,\mathrm d \mu^q_{\mrm{Geo}}(x_1)\cdots \mathrm d\mu^q_{\mrm{Geo}}(x_r).
\end{align*}
Replacing $f$ by $\varepsilon f$, taking the $\varepsilon\to 0$ limit, and thereby equating the order $\varepsilon$ terms on both sides (since the constant-in-$\varepsilon$ terms on both sides are easily seen to be $1$), we obtain that
\begin{align*}
\mathbb{E}\left[\sum_{i=1}^N f(\lambda_i)\right] = \int_{\mathbb{N}_0} f(x) K_N(x,x) \,\mathrm d \mu^q_{\mrm{Geo}}(x).
\end{align*}
By the Christoffel-Darboux formula and \eqref{e.meixner kernel}, $K_N(x,x) = \sum_{\ell=0}^{N-1} M_{\ell}(x)^2$. With this, taking $f(x) = \mathbbm{1}_{x\geq t}$ and using \eqref{e.trace formula} yields the claim.
\end{proof}
So the task is now to obtain a lower bound on the upper tail of $\nu_{q,N}$. The bound we prove is stated in the next theorem, and its proof will be the main goal of the remainder of this section as well as of the next.
\begin{theorem}\label{t.mean empirical law lower bound}
Let $X$ be distributed as $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition}. There exist positive absolute constants $c$ and $N_0$ such that, for $N\geq N_0$, $\varepsilon\in(0,\frac{1}{2})$, and $q\in[\varepsilon^3, 1)$,
\begin{align*}
\P\left(X\geq \mu_qN(1-q^{1/6}\varepsilon)\right) \geq c\varepsilon^{3/2}.
\end{align*}
\end{theorem}
With this statement and Widom's trick, Theorem~\ref{t.uniform lower tail}'s proof is straightforward.
\begin{proof}[Proof of Theorem~\ref{t.uniform lower tail}]
Recall from Proposition~\ref{p.meixner and geo} that if $\lambda_1$ is distributed as the largest particle of the Meixner ensemble, then $\smash{T_N \stackrel{d}{=} \lambda_1-N+1}$. Combining Widom's trick with Lemma~\ref{l.expression for trace}, for any $t\in \mathbb{N}$,
\begin{align*}
\P\left(\lambda_1\leq t\right) \leq \exp\Bigl(-N\nu_{q,N}([t,\infty))\Bigr).
\end{align*}
So we see that, for any $\varepsilon>0$,
\begin{align*}
\P\Bigl(T_N \leq (\mu_q-1)N - \mu_qNq^{1/6}\varepsilon\Bigr)
&\leq \P\Bigl(\lambda_1 \leq \mu_qN(1-q^{1/6}\varepsilon)\Bigr)\\
&\leq \exp\Bigl\{-N\nu_{q,N}\bigl([\mu_qN(1-q^{1/6}\varepsilon), \infty)\bigr)\Bigr\}.
\end{align*}
By Theorem~\ref{t.mean empirical law lower bound}, there exists $c>0$ such that, for all $q\in[\varepsilon^3, 1)$ and $0<\varepsilon<\frac{1}{2}$,
\begin{align*}
\nu_{q,N}\bigl([\mu_qN(1-q^{1/6}\varepsilon), \infty)\bigr) \geq c\varepsilon^{3/2}.
\end{align*}
Putting the above together with $\varepsilon = xN^{-2/3}$ and adjusting the constant in the exponent gives
\begin{align*}
\P\left(T_N \leq (\mu_q-1)N - x \frac{q^{1/6}}{1-q}N^{1/3}\right) \leq \exp\left(-cx^{3/2}\right)
\end{align*}
when $q\geq x^3N^{-2}$, which translates to $x\leq (q^{1/2}N)^{2/3}$. This completes the proof.
\end{proof}
To prove Theorem~\ref{t.mean empirical law lower bound}, we rely on a strategy of Ledoux explained in \cite[Section 5]{ledoux2005deviation}, which requires strong estimates on the polynomial moments of the mean empirical distribution $\nu_{q,N}$. The bounds we prove are the following.
\begin{theorem}\label{t.meixner poly moment bounds}
Let $X$ be distributed according to $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition}. For any $W>0$, there exist positive $C_W$ and $N_0$ such that for any $N\geq N_0$, $k\leq WN^{2/3}$ and $q\in [k^{-2},1)$,
\begin{align*}
C_W^{-1}(q^{1/6}k)^{-3/2}(\mu_q N)^k \leq \mathbb{E}[X^k] \leq C_W(q^{1/6}k)^{-3/2}(\mu_q N)^k.
\end{align*}
\end{theorem}
Observe that the moments grow to first order like $(\mu_q N)^k$, which reflects that we expect $\nu_{q,N}$ to be supported on $[0,\mu_q N]$ (though more precisely there is a decaying-in-$N$ amount of mass beyond this point). The polynomial dependence on $k$ is what captures the behavior of the tail of $\nu_{q,N}$ near this right edge, and this is the basic observation of Ledoux's argument. Indeed, the exponent of $-3/2$ for $k$ is what gives the $3/2$ exponent of $\varepsilon$ in Theorem~\ref{t.mean empirical law lower bound}.
We will prove Theorem~\ref{t.meixner poly moment bounds} in Sections~\ref{s.factorial bounds} and \ref{s.moment bounds}; the upper and lower bounds are separated into Propositions~\ref{p.stronger poly moment upper bound meixner} and \ref{p.polynomial lower bound meixner} respectively.
We conclude this section by using Theorem~\ref{t.meixner poly moment bounds} to implement Ledoux's argument to establish Theorem~\ref{t.mean empirical law lower bound}.
\begin{proof}[Proof of Theorem~\ref{t.mean empirical law lower bound}]
First, by the Cauchy-Schwarz inequality, we have
\begin{align*}
\mathbb{E}[X^{2k}\mathbbm{1}_{X\geq \mu_qN(1-\varepsilon)}] \leq \mathbb{E}[X^{4k}]^{1/2}\P\left(X\geq \mu_qN(1-\varepsilon)\right)^{1/2}.
\end{align*}
By Theorem~\ref{t.meixner poly moment bounds}, when $q\geq k^{-2}$ and $N\geq N_0$ (conditions we assume in the rest of the proof),
\begin{align*}
\mathbb{E}[X^{4k}] \leq C_1(q^{1/6}k)^{-3/2}(\mu_q N)^{4k}.
\end{align*}
It is also easy to see that
\begin{align*}
\mathbb{E}[X^{2k}\mathbbm{1}_{X\geq \mu_qN(1-\varepsilon)}]%
&= \mathbb{E}[X^{2k}] - \mathbb{E}[X^{2k}\mathbbm{1}_{X< \mu_{q}N(1-\varepsilon)}]\\
&\geq \mathbb{E}[X^{2k}] - \mathbb{E}[X^{k}]\left(\mu_{q}N(1-\varepsilon)\right)^k.
\end{align*}
Now, from Theorem~\ref{t.meixner poly moment bounds},
\begin{align*}
\mathbb{E}[X^{2k}] \geq C_2(q^{1/6}k)^{-3/2}(\mu_qN)^{2k},
\end{align*}
while, again from Theorem~\ref{t.meixner poly moment bounds},
\begin{align*}
\mathbb{E}[X^{k}] \leq C_3(q^{1/6}k)^{-3/2}(\mu_q N)^{k}.
\end{align*}
Since overall it holds that
\begin{align*}
\P\bigl(X\geq \mu_qN(1-\varepsilon)\bigr) \geq \mathbb{E}[X^{4k}]^{-1}\left(\mathbb{E}[X^{2k}] - \mathbb{E}[X^k](\mu_qN)^k(1-\varepsilon)^k\right)^2,
\end{align*}
substituting the above yields (cancelling out all the common factors of $(\mu_qN)^{4k}$ on the right-hand side)
\begin{align*}
\P\bigl(X\geq \mu_qN(1-\varepsilon)\bigr)
&\geq C_1^{-1}(q^{1/6}k)^{3/2}\left[C_2(q^{1/6}k)^{-3/2} - C_3(q^{1/6}k)^{-3/2}(1-\varepsilon)^k\right]^2\\
&= C_1^{-1}(q^{1/6}k)^{-3/2}\left[C_2 - C_3(1-\varepsilon)^k\right]^2.
\end{align*}
We pick the absolute constant $W$ such that, with $k=W\varepsilon^{-1}$, $C_3(1-\varepsilon)^k \leq \frac{1}{2}C_2$. This yields that for some absolute constant $C>0$,
\begin{align*}
\P\bigl(X\geq \mu_qN(1-\varepsilon)\bigr) \geq C(q^{-1/6}\varepsilon)^{3/2}.
\end{align*}
The earlier assumed condition that $q\geq k^{-2}$ translates into $q\geq \varepsilon^2$. Replacing $\varepsilon$ by $q^{1/6}\varepsilon$ completes the proof, after noting that the resulting condition on $q$ is that $q\geq \varepsilon^3$.
\end{proof}
\section{Sharp factorial moment bounds}\label{s.factorial bounds}
Here we prove Theorem~\ref{t.meixner poly moment bounds} on sharp bounds for the moments of the expected empirical distribution $\nu_{q,N}$ (as defined in \eqref{e.nu_q,N definition}) of the Meixner ensemble. While there is no explicit formula available for these moments, there \emph{is} one for the factorial moments, i.e., moments of the form $\mathbb{E}[X(X-1)\cdots(X-k+1)]$. Indeed letting $X$ be distributed according to $\nu_{q,N}$, \cite[Lemma~5.2]{ledoux2005distributions} states that
$$M^q(k,N) := \mathbb{E}[X(X-1)\cdots(X-k+1)] = \frac{q^k}{(1-q)^k}\sum_{i=0}^kq^{-i}\binom{k}{i}^2\cdot\sum_{\ell=i}^{N-1} \frac{(\ell+k-i)!}{(\ell-i)!}.$$
In fact, by \cite[eq.~(39)]{cohen2020moments}, this simplifies to
\begin{align}\label{e.factorial moment formula}
M^q(k,N) = \frac{q^k}{(1-q)^k}\frac{1}{N}\cdot \frac{1}{k+1}\sum_{i=0}^kq^{-i}\binom{k}{i}^2\cdot \frac{(N+k-i)!}{(N-i-1)!}.
\end{align}
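For small $N$ and $k\leq N-1$ (so that the factorials above are well-defined), the formula can be verified against the determinantal representation: as in the proof of Lemma~\ref{l.expression for trace}, $\mathbb{E}[\sum_i f(\lambda_i)] = \int f(x)\, K_N(x,x)\,\mathrm d\mu^q_{\mrm{Geo}}(x)$ with $K_N(x,x)=\sum_{\ell=0}^{N-1}M_\ell(x)^2$, so the factorial moments of $\nu_{q,N}$ can be computed directly from the diagonal of the Meixner kernel. A numerical sketch (numpy assumed; the support truncation is an illustrative choice):

```python
import math
import numpy as np

def factorial_moment_check(N=3, q=0.5, k=2, xmax=600):
    x = np.arange(xmax)
    w = (1 - q) * q ** x                   # mu_Geo weights, truncated
    # orthonormal polynomials w.r.t. mu_Geo, as in the Meixner kernel
    M = []
    for d in range(N):
        v = x.astype(float) ** d
        for m in M:
            v = v - np.sum(v * m * w) * m
        M.append(v / np.sqrt(np.sum(v * v * w)))
    Kdiag = sum(m * m for m in M)          # K_N(x, x) by Christoffel-Darboux
    falling = np.ones(xmax)
    for r in range(k):
        falling = falling * (x - r)        # x(x-1)...(x-k+1)
    lhs = np.sum(falling * Kdiag * w) / N  # E[X(X-1)...(X-k+1)] for X ~ nu_{q,N}
    rhs = (q / (1 - q)) ** k / (N * (k + 1)) * sum(
        q ** (-i) * math.comb(k, i) ** 2
        * (math.factorial(N + k - i) // math.factorial(N - i - 1))
        for i in range(k + 1)
    )
    return lhs, rhs

for k in (1, 2):
    lhs, rhs = factorial_moment_check(k=k)
    print(k, lhs, rhs)  # the two computations agree
```

For $k=1$ both sides also match the RSK identity $\mathbb{E}[\sum_i \lambda_i] = N^2\frac{q}{1-q} + \binom{N}{2}$, since the total size of the RSK shape equals the sum of all the grid weights.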
Our approach is to use this formula to obtain asymptotics on the factorial moments, and then later convert them into polynomial moments.
\subsection{The factorial moment asymptotics}
\begin{theorem}\label{t.factorial moment asymptotics}
Let $X$ be distributed according to $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition}, and let $\mu_q = (1+q^{1/2})^2/(1-q)$ be as in \eqref{e.mu_q definition}. For any $W>0$, there exist positive constants $C_W$, $N_0$, and $k_0$ such that for all $N\geq N_0$, $k_0\leq k\leq WN^{2/3}$, and $q\in [k^{-2},1)$,
\begin{align*}
C_W^{-1}(q^{1/6}k)^{-3/2}(\mu_qN)^k\exp\left(-\frac{k^2}{2\mu_qN}\right)\leq M^q(k,N) \leq C_W(q^{1/6}k)^{-3/2}(\mu_qN)^k\exp\left(-\frac{k^2}{2\mu_qN}\right).
\end{align*}
\end{theorem}
As we said, $\nu_{q,N}$ has right edge of support roughly $\mu_qN=\smash{\frac{(1+q^{1/2})^2}{1-q}}N$, and this is what gives that, to first order, $M^q(k,N)N^{-k}$ grows like $\smash{\mu_q^k}$. The main task is to obtain the correct polynomial dependence on $k$ and $q$, namely $q^{-1/4}k^{-3/2}$.
Before turning to giving the proof in full, let us outline the strategy. First, we will use Stirling's approximation and a precise form of the fact that $\binom{k}{i} \approx \exp(k H(i/k))$ (with $H(x) = -x\log x - (1-x)\log(1-x)$ the entropy function) to write the sum in \eqref{e.factorial moment formula} as, approximately and up to absolute constants,
$$\frac{N^kq^k}{(1-q)^k}\cdot k^{-1}\cdot\frac{1}{k+1}\sum_{i=0}^k\frac{k^2}{i(k-i)}\exp\left[i\log q^{-1} + 2kH(i/k) +\frac{k(k-i)}{N}- \frac{k^2}{2N}\right].$$
Then, the idea is to obtain asymptotics for the sum using Laplace's method. Indeed, if we were to regard the sum (along with the $(k+1)^{-1}$ factor and ignoring the $-k^2/(2N)$ term in the exponent) as being approximately the integral
\begin{align*}
\int_0^1\frac{1}{x(1-x)}\exp\left[k\left(x\log q^{-1} + 2H(x) + \frac{k}{N}(1-x)\right)\right]\,\mathrm d x
\end{align*}
then, writing the integrand as $g(x)\exp(kf(x))$, where
\begin{align*}
f(x) = x\log q^{-1} + 2H(x) + \frac{k(1-x)}{N} \qquad\text{and}\qquad
g(x) = (x(1-x))^{-1}
\end{align*}
one would expect from the Laplace method intuition that
\begin{align*}
\MoveEqLeft[14]
\frac{1}{k+1}\sum_{i=0}^k\frac{1}{\tfrac{i}{k}(1-\tfrac{i}{k})}\exp\left[k\left(\tfrac{i}{k}\log q^{-1} + 2H(i/k) +\frac{k(1-\frac{i}{k})}{N}\right)\right]
\approx (k|f''(x_0)|)^{-1/2}\exp(kf(x_0))g(x_0),
\end{align*}
up to constants, where $x_0$ is the maximizer of $f$ on $[0,1]$. Evaluating $f(x_0)$ and $g(x_0)$ will yield the claimed $q$ and $k$ dependencies in Theorem~\ref{t.factorial moment asymptotics}.
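As a sanity check on this heuristic, one can numerically maximize $f$ with the $k/N$ term dropped: the maximizer is $x_0 = (1+q^{1/2})^{-1}$, and $\frac{q}{1-q}e^{f(x_0)} = \mu_q$, which recovers the first-order growth $(\mu_qN)^k$ of Theorem~\ref{t.factorial moment asymptotics}. A short numpy sketch (the grid resolution is an illustrative choice):

```python
import numpy as np

def maximize_f(q, kN=0.0, grid=1_000_000):
    """Numerically maximize f(x) = x log(1/q) + 2H(x) + (k/N)(1-x) on (0,1)."""
    x = np.linspace(1e-9, 1 - 1e-9, grid)
    H = -x * np.log(x) - (1 - x) * np.log(1 - x)   # entropy function
    f = x * np.log(1 / q) + 2 * H + kN * (1 - x)
    i = int(np.argmax(f))
    return x[i], f[i]

q = 0.3
x0, f0 = maximize_f(q)                        # k/N term dropped for this check
print(x0, 1 / (1 + q ** 0.5))                 # maximizer x0 = 1/(1 + sqrt(q))
print(q * np.exp(f0) / (1 - q), (1 + q ** 0.5) ** 2 / (1 - q))  # both equal mu_q
```

The same grid search with a nonzero $k/N$ term shifts the maximizer slightly, consistent with the Gaussian correction $\exp(-k^2/(2\mu_qN))$ in the theorem.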
There are existing results in the literature, for example \cite{masoero2014laplace}, on Laplace's method for sums which also obtain the correct constant. However these are not directly useful to us: our function $f$ depends on $k$, which is not typical in Laplace's method, and we need all our estimates to be uniform in the parameter $q$, which can be difficult to verify after applying black-box theorems. For these reasons we perform the analysis explicitly ourselves; but we will not be concerned with obtaining the correct constant dependencies as these are not necessary for our ultimate applications. However, since there is not much novel or probabilistic content in these computations, we defer to Appendix~\ref{app.sum asymptotics} the proofs of these bounds (which are stated as Propositions~\ref{p.ledoux sum upper bound} and \ref{p.ledoux sum lower bound}).
\begin{proof}[Proof of Theorem~\ref{t.factorial moment asymptotics}]
In the following, an equality with a factor of $\Theta(f(k,N))$ on the right-hand side means that the left-hand side is bounded above and below by the right-hand side with $\Theta(f(k,N))$ replaced by $Cf(k,N)$ and $C^{-1}f(k,N)$ respectively, for a constant $C\geq 1$ which may depend on $W$ but not on $q$, $i$, $k$, or $N$ (at least in the range $1\leq i\leq k\leq WN^{2/3}$).
We will first obtain a bound on the summands in \eqref{e.factorial moment formula}. We start with bounding
$(N+k-i)!/(N-i-1)!$
using Stirling's approximation (non-asymptotic form) to obtain
\begin{align*}
\frac{(N+k-i)!}{(N-i-1)!} &= \frac{\sqrt{2\pi (N+k-i)}}{\sqrt{2\pi(N-i-1)}} \exp\Bigl[(N+k-i)\log(N+k-i)\\
&\qquad - (N+k-i) - (N-i-1)\log(N-i-1) + (N-i-1) + \Theta(N^{-1})\Bigr]\\
&= \Theta(1)\cdot\exp\left[(N-i-1) \log\frac{N+k-i}{N-i-1} + (k+1)\bigl(\log(N+k-i)-1\bigr)\right]\\
&= \Theta(1)\cdot\exp\left[(N-i-1) \log\left(1+\frac{k+1}{N-i-1}\right) + (k+1)\bigl(\log(N+k-i)-1\bigr)\right]\\
&= \Theta(1)\cdot\exp\left[k+1 - \frac{(k+1)^2}{2(N-i-1)} + (k+1)\bigl(\log(N+k-i)-1\bigr) + \Theta\left(\frac{k^3}{N^2}\right)\right]\\
&= \Theta(1)\cdot\exp\left[(k+1)\log (N+k-i) - \frac{k^2}{2N} + \Theta\left(\frac{k^3}{N^2}+\frac{k}{N}\right)\right].
\end{align*}
Now since $k\leq WN^{2/3}$, the $k^3/N^2$ and $k/N$ terms are both bounded by a power of $W$, and so may be absorbed into the $\Theta(1)$ factor outside. So we see from \eqref{e.factorial moment formula} that
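As an illustrative numerical sanity check of this Stirling expansion (not part of the proof; the values $N=10^4$ and $W=1$ are arbitrary choices), one can compare the exact log-ratio of factorials, computed via `lgamma`, against the leading approximation $(k+1)\log(N+k-i)-k^2/(2N)$:

```python
import math

# Sanity check of the Stirling expansion above: the exact value of
# log[(N+k-i)! / (N-i-1)!] should match (k+1)log(N+k-i) - k^2/(2N)
# up to an O(k^3/N^2 + k/N) error, which is O(1) when k <= W*N^(2/3).
N = 10_000
k = int(N ** (2 / 3))  # W = 1, so k = 464 here

for i in (0, k // 2, k):
    exact = math.lgamma(N + k - i + 1) - math.lgamma(N - i)  # exact log of the ratio
    approx = (k + 1) * math.log(N + k - i) - k * k / (2 * N)
    # exact is of order 4e3 here, while the discrepancy stays O(1)
    assert abs(exact - approx) < 2.0
print("Stirling expansion check passed")
```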
\begin{align*}
M^q(k,N)N^{-k}
&= \frac{\Theta(1)q^k}{(1-q)^k(k+1)}\sum_{i=0}^kq^{-i}\binom{k}{i}^2 \exp\left[(k+1)\log (N+k-i) -(k+1)\log N- \frac{k^2}{2N}\right]\\
&= \frac{\Theta(1)q^k}{(1-q)^k(k+1)}\sum_{i=0}^kq^{-i}\binom{k}{i}^2 \exp\left[(k+1)\log \left(1+\frac{k-i}{N}\right)- \frac{k^2}{2N}\right]\\
&= \frac{\Theta(1)q^k}{(1-q)^k(k+1)}\sum_{i=0}^kq^{-i}\binom{k}{i}^2 \exp\left[\frac{(k+1)(k-i)}{N}- \frac{k^2}{2N} + \Theta\left(\frac{(k+1)(k-i)^2}{N^2}\right)\right].
\end{align*}
Now since
\begin{align*}
\binom{k}{i} = \Theta(1) \sqrt{\frac{k}{i(k-i)}} \exp(kH(i/k)),
\end{align*}
where $H(p) = -p\log p -(1-p)\log(1-p)$ is the entropy function, we obtain that $M^q(k,N)N^{-k}$ is equal to
\begin{align*}
\frac{\Theta(1)q^k}{(1-q)^k}\cdot k^{-1}\cdot\frac{1}{k+1}\sum_{i=0}^k\frac{k^2}{i(k-i)}\exp\left[i\log q^{-1} + 2kH(i/k) +\frac{k(k-i)}{N}- \frac{k^2}{2N} + \Theta\left(\frac{k-i}{N}\right)\right].
\end{align*}
Rewriting the previous display a little, we obtain
\begin{align}
\frac{\Theta(1)q^k}{(1-q)^k}\cdot k^{-1}\cdot\frac{1}{k+1}\sum_{i=0}^k\frac{1}{\tfrac{i}{k}(1-\tfrac{i}{k})}\exp\left[k\left(\tfrac{i}{k}\log q^{-1} + 2H(i/k) +\frac{k(1-\frac{i}{k})}{N}\right) - \frac{k^2}{2N}\right].\label{e.factorial moment simplified upper bound}
\end{align}
By Propositions~\ref{p.ledoux sum upper bound} and \ref{p.ledoux sum lower bound} in Appendix~\ref{app.sum asymptotics}, the sum is bounded above and below, up to a constant factor and for $N\geq N_0$, $k_0\leq k \leq N$, and $q\geq k^{-2}$, by
\begin{align*}
q^{-1/4} k^{1/2}\left[\frac{\left(1+q^{1/2}\exp\left(\frac{1}{2}kN^{-1}\right)\right)^2}{q}\right]^k.
\end{align*}
Substituting this into \eqref{e.factorial moment simplified upper bound} yields that $M^q(k,N)N^{-k}$ is equal to
\begin{align}\label{e.M(k,N) near final bound}
\Theta(1)\cdot k^{-3/2} q^{-1/4}\exp\left[k\log\left(\frac{(1+q^{1/2}\exp(\tfrac{1}{2}kN^{-1}))^2}{1-q}\right) - \frac{k^2}{2N}\right].
\end{align}
Now since $\exp\left(\frac{1}{2}kN^{-1}\right) = 1+\frac{1}{2}kN^{-1}+\Theta(k^2/N^2)$, we can expand the logarithm of the numerator term in the previous display to obtain
\begin{align*}
\log\left(1+q^{1/2}e^{k/2N}\right)
&= \log\left(1+q^{1/2}+\frac{1}{2}q^{1/2}kN^{-1}+\Theta(1)qk^2N^{-2}\right)\\
&= \log(1+q^{1/2}) + \frac{k}{2N}\cdot\frac{q^{1/2}}{1+q^{1/2}} + \Theta(1)\frac{qk^2}{N^2}
\end{align*}
Substituting this into \eqref{e.M(k,N) near final bound} yields that $M^q(k,N)N^{-k}$ is equal to
\begin{align*}
\Theta(1)k^{-3/2}q^{-1/4}\exp\left[k\log\left(\frac{(1+q^{1/2})^2}{1-q}\right)+\frac{k^2}{2N}\left(\frac{2q^{1/2}}{1+q^{1/2}} - 1\right) + \Theta(1)\frac{qk^3}{N^2}\right],
\end{align*}
which, after simplifying and using that $1\leq k\leq WN^{2/3}$, equals
\begin{align*}
\Theta(1) (q^{1/6}k)^{-3/2}\mu_q^k\exp\left[-\frac{k^2}{2N}\cdot\frac{1-q^{1/2}}{1+q^{1/2}}\right] = \Theta(1) (q^{1/6}k)^{-3/2}\mu_q^k\exp\left[-\frac{k^2}{2\mu_qN}\right],
\end{align*}
which completes the proof.
\end{proof}
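The last simplification rests on the algebraic identity $\frac{1-q^{1/2}}{1+q^{1/2}} = \mu_q^{-1}$, which follows from $1-q = (1-q^{1/2})(1+q^{1/2})$. A quick numerical confirmation (purely illustrative, with arbitrary test values of $q$):

```python
import math

# Check that (1 - q^(1/2)) / (1 + q^(1/2)) = 1 / mu_q, where
# mu_q = (1 + q^(1/2))^2 / (1 - q), using 1 - q = (1-sqrt(q))(1+sqrt(q)).
for q in (1e-4, 0.1, 0.5, 0.99):
    mu_q = (1 + math.sqrt(q)) ** 2 / (1 - q)
    lhs = (1 - math.sqrt(q)) / (1 + math.sqrt(q))
    assert abs(lhs - 1 / mu_q) < 1e-12
print("identity verified")
```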
\section{Translating from factorial to polynomial moments}\label{s.moment bounds}
Now we convert the factorial moment bounds of Theorem~\ref{t.factorial moment asymptotics} to polynomial bounds and thus prove Theorem~\ref{t.meixner poly moment bounds}. The basic idea is the following. Notice that the difference between the bounds in the two theorems is essentially the factor of $\exp(-k^2/(2\mu_q N))$ in the bounds for the factorial moments. The presence of this factor is a problem because, when $k=\smash{\Theta(N^{2/3})}$ (a value of $k$ we will need to take), this factor equals $\exp(-\mu_q^{-1}N^{1/3})$, i.e., much smaller than $O(1)$. So our goal is to show that the factor $\exp(\smash{-k^2/(2\mu_qN)})$ goes away when we move to the $k$\textsuperscript{th} polynomial moment.
Now, we can write the ratio $[X(X-1)\cdots (X-k+1)]/X^k$ as
$$\prod_{i=1}^{k-1}(1-iX^{-1}) \approx \exp\left(-X^{-1}\sum_{i=1}^ki\right) \approx \exp\left(-\frac{k^2}{2X}\right).$$
If we believe that the support of $X$ essentially has upper boundary $\mu_q N$, then, when considering high powers of $X$, heuristically one should get the correct behavior when replacing $X$ by $\mu_q N$. Doing so, the above then suggests that $\mathbb{E}[X^k] \approx \mathbb{E}[X(X-1)\cdots (X-k+1)]\exp(k^2/(2\mu_q N))$, which exactly cancels the extra factor we noted above for the latter.
The proof essentially comes down to making this heuristic precise. In many of the statements we will have terms of the form $k^3/N^2$ in the exponent, which comes from the next-order term after $k^2/N$; note that this term is $O(1)$ when $k=O(\smash{N^{2/3}})$, the regime we care about, and so is controlled.
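This heuristic can be illustrated numerically with a Poisson random variable (chosen purely for convenience, since $\mathbb{E}[(X)_k]=\lambda^k$ exactly for $X\sim\mathrm{Poisson}(\lambda)$); here $\lambda$ plays the role of $\mu_qN$, and the values $\lambda=400$, $k=20$ are arbitrary illustrative choices:

```python
import math

def poisson_pmf(lam, x):
    # computed in log space to avoid underflow of exp(-lam)
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))

# For X ~ Poisson(lam): E[(X)_k] = lam^k exactly, and the heuristic above
# predicts E[X^k] ~ E[(X)_k] * exp(k^2 / (2 lam)) when k^3 = O(lam^2).
lam, k = 400.0, 20
mk = fk = 0.0
for x in range(2000):
    p = poisson_pmf(lam, x)
    mk += p * x ** k
    fall = 1.0
    for i in range(k):
        fall *= x - i  # falling factorial (x)_k
    fk += p * fall

assert abs(fk / lam ** k - 1) < 1e-6          # E[(X)_k] = lam^k
ratio = mk / (fk * math.exp(k ** 2 / (2 * lam)))
assert abs(ratio - 1) < 0.1                    # heuristic conversion factor
```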
\subsection{General lemmas connecting factorial and polynomial moments} We start with two statements capturing, in each direction, the correct (to first order) conversion factor between factorial and polynomial moments for general $\mathbb{N}$-valued random variables.
We adopt the notation
$$(x)_k := x(x-1)\cdots(x-k+1),$$
so that, for example, $M^q(k,N) = \mathbb{E}[(X)_k]$ when $X\sim \nu_{q,N}$ (as defined in \eqref{e.nu_q,N definition}).
\begin{lemma}\label{l.polynomial moment upper bound}
Let $X$ be a non-negative integer-valued random variable. Then for any $k\in\mathbb{N}$ and $R\geq 2k$,
\begin{align*}
\mathbb{E}[X^k \mathbbm{1}_{X\geq R}] \leq \mathbb{E}[(X)_k]\exp\left(\frac{k^2}{2R} + \frac{k^3}{3R^2}\right).
\end{align*}
\end{lemma}
\begin{proof}
As above, for any non-negative integer $x$, we see that
\begin{align*}
(x)_k = x(x-1)\cdots(x-k+1)
= x^k \times \prod_{i=1}^{k-1}\left(1-\frac{i}{x}\right)
&=x^k\cdot\exp\left(\sum_{i=1}^{k-1}\log\left(1-\frac{i}{x}\right)\right).
\end{align*}
Next we use that $\log(1-y)\geq -y-y^2$ for $y\in(0,\frac{1}{2})$, so that, for $k-1\leq x/2$, we obtain
\begin{align*}
(x)_k \geq x^k \exp\left(-\sum_{i=1}^{k-1}\left(\frac{i}{x} + \frac{i^2}{x^2}\right)\right)
&= x^k \exp\left(-\frac{k(k-1)}{2x}-\frac{k(k-1)(2k-1)}{6x^2}\right)\\
&\geq x^k \exp\left(-\frac{k^2}{2x}-\frac{k^3}{3x^2}\right).
\end{align*}
Substituting $X$ in place of $x$ and taking expectations yields (since we have assumed $R\geq 2k$)
\begin{align*}
\mathbb{E}[(X)_k] \geq \mathbb{E}[(X)_k\mathbbm{1}_{X\geq R}]
&\geq \mathbb{E}\left[X^k\exp\left(-\frac{k^2}{2X}-\frac{k^3}{3X^2}\right)\mathbbm{1}_{X\geq R}\right]
\end{align*}
Now since $x\mapsto \exp(-k^2/(2x) - k^3/(3x^2))$ is increasing, we see that
\begin{align*}
\mathbb{E}\left[X^k\exp\left(-\frac{k^2}{2X}-\frac{k^3}{3X^2}\right)\mathbbm{1}_{X\geq R}\right] \geq \mathbb{E}\left[X^k\mathbbm{1}_{X\geq R}\right]\exp\left(-\frac{k^2}{2R}-\frac{k^3}{3R^2}\right),
\end{align*}
which, after rearranging, completes the proof.
\end{proof}
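A numerical spot check of the lemma (not a proof; the choice $X\sim\mathrm{Poisson}(100)$, $k=8$, $R=40$ is arbitrary, subject to $R\geq 2k$):

```python
import math

def poisson_pmf(lam, x):
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))

# Check: E[X^k 1_{X>=R}] <= E[(X)_k] * exp(k^2/(2R) + k^3/(3R^2)).
lam, k, R = 100.0, 8, 40  # R >= 2k as the lemma requires
lhs = fk = 0.0
for x in range(1000):
    p = poisson_pmf(lam, x)
    if x >= R:
        lhs += p * x ** k
    fall = 1.0
    for i in range(k):
        fall *= x - i
    fk += p * fall

bound = fk * math.exp(k * k / (2 * R) + k ** 3 / (3 * R * R))
assert lhs <= bound
```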
\begin{lemma}\label{l.general polynomial moment lower bound}
Let $X$ be a non-negative integer-valued random variable. Then for any $k\in\mathbb{N}$ and $R> 0$,
$$\mathbb{E}[X^{k}] \geq \left(\mathbb{E}[(X)_k] - \mathbb{E}[X^{2k}]^{1/2}\P(X> R)^{1/2}\right)\exp\left(\frac{k(k-1)}{2R}\right).$$
\end{lemma}
\begin{proof}
As in the previous proof, we see that (since $\log(1-x)\leq -x$)
\begin{align*}
(x)_k = x^k\times \prod_{i=1}^{k-1}\left(1-\frac{i}{x}\right)
\leq x^k\times \exp\left(-\sum_{i=1}^{k-1}\frac{i}{x}\right)
&= x^k\times \exp\left(-\frac{k(k-1)}{2x}\right).
\end{align*}
So with this, and since $X$ is non-negative, for any $R>0$,
\begin{align*}
\mathbb{E}[X^k] \geq \mathbb{E}[X^k\mathbbm{1}_{X\leq R}] \geq \mathbb{E}\left[(X)_k\exp\left(\frac{k(k-1)}{2X}\right)\mathbbm{1}_{X\leq R}\right] \geq \mathbb{E}\left[(X)_k\mathbbm{1}_{X\leq R}\right]\cdot\exp\left(\frac{k(k-1)}{2R}\right),
\end{align*}
the last inequality since $\exp(k(k-1)/(2x))$ is decreasing in $x$. To lower bound $\mathbb{E}[(X)_k\mathbbm{1}_{X\leq R}]$ we use that $(X)_k\leq X^k$ and the Cauchy-Schwarz inequality to obtain
\begin{align*}
\mathbb{E}[(X)_k\mathbbm{1}_{X\leq R}] &= \mathbb{E}[(X)_k] - \mathbb{E}[(X)_k\mathbbm{1}_{X> R}]\\
&\geq \mathbb{E}[(X)_k] - \mathbb{E}[X^{2k}]^{1/2}\P(X> R)^{1/2}.\qedhere
\end{align*}
\end{proof}
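As with the previous lemma, a numerical spot check (illustrative only; the values $X\sim\mathrm{Poisson}(100)$, $k=5$, $R=150$ are arbitrary):

```python
import math

def poisson_pmf(lam, x):
    return math.exp(-lam + x * math.log(lam) - math.lgamma(x + 1))

# Check: E[X^k] >= (E[(X)_k] - E[X^{2k}]^{1/2} P(X > R)^{1/2}) exp(k(k-1)/(2R)).
lam, k, R = 100.0, 5, 150
mk = m2k = fk = tail = 0.0
for x in range(1000):
    p = poisson_pmf(lam, x)
    mk += p * x ** k
    m2k += p * x ** (2 * k)
    fall = 1.0
    for i in range(k):
        fall *= x - i
    fk += p * fall
    if x > R:
        tail += p

lower = (fk - math.sqrt(m2k) * math.sqrt(tail)) * math.exp(k * (k - 1) / (2 * R))
assert mk >= lower
```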
\subsection{Applying the general lemmas to $\nu_{q,N}$: the upper bound}
In this section we apply Lemma~\ref{l.polynomial moment upper bound} to obtain the upper bound on $\mathbb{E}[X^k]$, and we will turn to the lower bound in Section~\ref{s.general to meixner, lower}. The precise upper bound is the following. (Recall from \eqref{e.mu_q definition} that $\mu_q = (1+q^{1/2})^2/(1-q)$.)
\begin{proposition}\label{p.stronger poly moment upper bound meixner}
Let $X$ be distributed according to $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition} and $\mu_q$ be as in \eqref{e.mu_q definition}. For any $W>0$, there exist positive constants $C_W$, $k_0$, and $N_0$ such that, for any $N\geq N_0$, $k_0 \leq k\leq WN^{2/3}$, and $q\in [k^{-2},1)$,
\begin{align*}
\mathbb{E}[X^k] \leq C_W(q^{1/6}k)^{-3/2}(\mu_q N)^k.
\end{align*}
\end{proposition}
To prove this, in the next Lemma~\ref{l.polynomial moment upper bound for meixner}, we first obtain a bound on $\smash{\mathbb{E}[X^k\mathbbm{1}_{X\geq \mu_q N(1-\varepsilon)}]}$ by directly applying Lemma~\ref{l.polynomial moment upper bound}. We will then convert this into a bound on $\mathbb{E}[X^k]$ to prove Proposition~\ref{p.stronger poly moment upper bound meixner}, thus proving the upper bound half of Theorem~\ref{t.meixner poly moment bounds}.
\begin{lemma}\label{l.polynomial moment upper bound for meixner}
Let $X$ be distributed according to $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition} and $\mu_q$ be as in \eqref{e.mu_q definition}. Then for any $W>0$ there exist positive constants $C_W$, $k_0$, and $N_0$ such that, for any $N\geq N_0$, $k_0\leq k \leq WN^{2/3}$, $q\in[k^{-2}, 1)$, and $0<\varepsilon<\frac{1}{2}$,
\begin{align*}
\mathbb{E}[X^k\mathbbm{1}_{X\geq \mu_qN(1-\varepsilon)}] \leq C_W(q^{1/6}k)^{-3/2}(\mu_q N)^k \exp\left(\frac{k^2\varepsilon}{N}\right).
\end{align*}
\end{lemma}
\begin{proof}
By Lemma~\ref{l.polynomial moment upper bound}, and since $(1-\varepsilon)^{-1} \leq 1+2\varepsilon$ for $0<\varepsilon<\frac{1}{2}$,
\begin{align*}
\mathbb{E}[X^k\mathbbm{1}_{X\geq \mu_qN(1-\varepsilon)}]
&\leq \mathbb{E}[(X)_k] \exp\left(\frac{k^2}{2\mu_qN(1-\varepsilon)} + \frac{k^3}{3\mu_q^2N^2(1-\varepsilon)^2}\right)\\
&\leq \mathbb{E}[(X)_k] \exp\left(\frac{k^2}{2\mu_qN}(1+2\varepsilon) + \frac{Ck^3}{\mu_q^2N^2}\right).
\end{align*}
By Theorem~\ref{t.factorial moment asymptotics}, for fixed $W$ and $N\geq N_0$, $k_0\leq k\leq WN^{2/3}$, and $q\geq k^{-2}$,
\begin{align*}
\mathbb{E}[(X)_k] &\leq C_W(q^{1/6}k)^{-3/2}(\mu_qN)^k \exp\left(-\frac{k^2}{2\mu_qN}\right).
\end{align*}
Thus we see that
\begin{align*}
\mathbb{E}[X^k\mathbbm{1}_{X\geq \mu_qN(1-\varepsilon)}]
&\leq C_W(q^{1/6}k)^{-3/2} (\mu_qN)^{k} \exp\left(\frac{k^2\varepsilon}{\mu_qN} + \frac{Ck^3}{\mu_q^2N^2}\right),
\end{align*}
which yields the inequality in the statement (using also that $\mu_q^{-1}\leq 1$ and relabeling $C_W$).
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p.stronger poly moment upper bound meixner}]
First we write, for any $\varepsilon>0$,
\begin{align*}
\mathbb{E}[X^k] = \mathbb{E}[X^k\mathbbm{1}_{X>\mu_qN(1-\varepsilon^{1/2})}] + \mathbb{E}[X^k\mathbbm{1}_{X\leq\mu_qN(1-\varepsilon^{1/2})}].
\end{align*}
By Lemma~\ref{l.polynomial moment upper bound for meixner}, the first term is upper bounded, for $N\geq N_0$, $k_0\leq k\leq WN^{2/3}$, and $q\in[k^{-2}, 1)$, by
\begin{align*}
C_W(q^{1/6}k)^{-3/2}(\mu_q N)^k \exp\left(\frac{k^2\varepsilon^{1/2}}{N}\right),
\end{align*}
while, trivially, the second term is upper bounded by
\begin{align*}
(\mu_qN)^k (1-\varepsilon^{1/2})^k \leq (\mu_qN)^k \exp(-\varepsilon^{1/2}k).
\end{align*}
Setting $\varepsilon = k^{-1}$, overall, we obtain that
\begin{align*}
\mathbb{E}[X^k]
&\leq C_W(q^{1/6}k)^{-3/2}(\mu_q N)^k \exp\left(\frac{k^{3/2}}{N}\right) + (\mu_qN)^k\exp(-k^{1/2})\\
&\leq C_W(q^{1/6}k)^{-3/2}(\mu_q N)^k + (\mu_qN)^k\exp(-k^{1/2}),
\end{align*}
the last inequality since $k\leq WN^{2/3}$. Since $q<1$ and $k^{-3/2} > \exp(-k^{1/2})$ for all $k$ larger than an absolute constant (which is ensured by $k\geq k_0$, increasing $k_0$ if necessary), the second term is bounded by (a constant times) the first, which completes the proof.
\end{proof}
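The final comparison above uses $k^{-3/2} > \exp(-k^{1/2})$, which in fact fails for a few small values of $k$ (e.g.\ $k=4$) but holds past a modest threshold, so it is covered by the hypothesis $k\geq k_0$. A quick scan (illustrative, not part of the proof) locates the threshold:

```python
import math

# k^(-3/2) > exp(-sqrt(k)) fails for some small k but holds from a small
# threshold onward; sqrt(k) - 1.5*log(k) is increasing for k >= 10, so
# scanning a finite range suffices to identify it.
holds_from = None
for k in range(1, 200):
    if math.exp(-math.sqrt(k)) <= k ** -1.5:
        if holds_from is None:
            holds_from = k
    else:
        holds_from = None
print(holds_from)
```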
\subsection{Applying the general lemmas to $\nu_{q,N}$: the lower bound} \label{s.general to meixner, lower}
Here is the lower bound statement we prove:
\begin{proposition}\label{p.polynomial lower bound meixner}
Let $X$ be distributed according to $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition} and $\mu_q$ be as in \eqref{e.mu_q definition}. Then for any $W>0$, there exist positive constants $c_W>0$, $N_0$, and $k_0$ such that, for any $N\geq N_0$, $k_0 \leq k\leq WN^{2/3}$, and $q\in[k^{-2},1)$,
\begin{align*}
\mathbb{E}[X^k] \geq c_W(q^{1/6}k)^{-3/2}(\mu_q N)^k.
\end{align*}
\end{proposition}
The idea is to combine Lemma~\ref{l.general polynomial moment lower bound} with Theorem~\ref{t.factorial moment asymptotics}. Recall that the lower bound from Lemma~\ref{l.general polynomial moment lower bound} has the term $\mathbb{E}[(X)_k] - \mathbb{E}[X^{2k}]^{1/2}\P(X> R)^{1/2}$; in our argument, we will show that this is lower bounded by $\frac{1}{2}\mathbb{E}[(X)_k]$.
Now, we can lower bound the first term by Theorem~\ref{t.factorial moment asymptotics} and upper bound $\mathbb{E}[X^{2k}]^{1/2}$ by the just established Proposition~\ref{p.stronger poly moment upper bound meixner}. However the former is smaller by a factor of $\exp(-k^2/(2\mu_qN))$ which must be compensated for by $\smash{\P(X> R)^{1/2}}$ to achieve the overall lower bound of $\frac{1}{2}\mathbb{E}[(X)_k]$; here we will take $R = \mu_q N(1+N^{-1/3})$, though the choice of $N^{-1/3}$ is not significant. The next statement gives a strong enough bound on this probability, using Markov's inequality and the just established upper bounds on $\mathbb{E}[X^k]$.
\begin{lemma}\label{l.crude upper tail bound}
Let $L> 0$ and $X$ be distributed according to $\nu_{q,N}$ as defined in \eqref{e.nu_q,N definition}. There exists $N_0 = N_0(L)$ such that, for $q\in[N^{-2}, 1)$ and $N\geq N_0$,
\begin{align*}
\P\left(X\geq \mu_qN(1+N^{-1/3})\right) \leq \exp\left(-LN^{1/3}\right).
\end{align*}
\end{lemma}
In fact, one expects that this probability should decay like $\exp(-(N^{1/3})^{3/2}) = \exp(-N^{1/2})$, since $N\times N^{-1/3} = N^{2/3} = N^{1/3}\times N^{1/3}$, and one expects the upper tail of the expected empirical distribution to be similar to that of the top particle, which indeed fluctuates on scale $N^{1/3}$ with an upper tail exponent of $3/2$. Proving this would require our estimates on the polynomial moment bounds from Proposition~\ref{p.stronger poly moment upper bound meixner} to go up to $O(N)$ instead of $O(N^{2/3})$, which we omit doing since we have no other need for it.
We also observe that, when $k\leq WN^{2/3}$ and for a correspondingly large choice of $L$, $\exp(-LN^{1/3}) \ll \exp(-k^2/(2\mu_qN))$ as the latter is at least $\exp(-\frac{1}{2}W^2N^{1/3})$. Thus the requirement for this probability estimate outlined above is met.
\begin{proof}[Proof of Lemma~\ref{l.crude upper tail bound}]
By Markov's inequality and Proposition~\ref{p.stronger poly moment upper bound meixner}, for any $N\geq N_0$, $k_0\leq k\leq 2LN^{2/3}$, and $q\in[k^{-2}, 1)$,
\begin{align*}
\P\left(X\geq \mu_qN(1+\varepsilon^{1/2})\right) \leq \frac{\mathbb{E}[X^k]}{(\mu_qN)^k(1+\varepsilon^{1/2})^k} \leq C_L(q^{1/6}k)^{-3/2}(1+\varepsilon^{1/2})^{-k}.
\end{align*}
We will ultimately pick $\varepsilon<\frac{1}{2}$. Since $(1+x)^{-1} \leq 1-\frac{x}{2}\leq e^{-x/2}$ for $x\in[0,1]$, we obtain
\begin{align*}
\P\left(X\geq \mu_qN(1+\varepsilon^{1/2})\right) \leq C(q^{1/6}k)^{-3/2}\exp\left(-\tfrac{1}{2}k\varepsilon^{1/2}\right).
\end{align*}
Using that $q\geq N^{-2}$ and setting $k=2LN^{2/3}$, $\varepsilon=N^{-2/3}$, we obtain
\begin{align*}
\P\left(X\geq \mu_qN(1+\varepsilon^{1/2})\right) \leq C_LN^{-1/2}\exp(-LN^{1/3}),
\end{align*}
which is at most $\exp(-LN^{1/3})$ for all large $N$ (depending on $L$), since the prefactor $C_LN^{-1/2}$ is eventually less than $1$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p.polynomial lower bound meixner}]
We will take $R = \mu_qN(1+\varepsilon^{1/2})$ for $\varepsilon = N^{-2/3}$ and apply Lemma~\ref{l.general polynomial moment lower bound}. So we need to lower bound $\mathbb{E}[(X)_k] - \mathbb{E}[X^{2k}]^{1/2}\P(X>\mu_qN(1+N^{-1/3}))^{1/2}$, in particular, to show that the second term is much smaller than the first. We start with upper bounding the second term using Proposition~\ref{p.stronger poly moment upper bound meixner} and Lemma~\ref{l.crude upper tail bound}, for an $L$ to be chosen soon and for $N$ large enough (depending on $L$):
\begin{align*}
\mathbb{E}[X^{2k}]^{1/2}\P\left(X>\mu_qN(1+N^{-1/3})\right)^{1/2} \leq C_W(q^{1/6}k)^{-3/4}(\mu_q N)^k\times \exp\left(-\tfrac{1}{4}LN^{1/3}\right).
\end{align*}
On the other hand, by Theorem~\ref{t.factorial moment asymptotics}, for $N\geq N_0$, $k_0\leq k\leq WN^{2/3}$, and $q\in[k^{-2},1)$ and using that $\mu_q\geq 1$ in the second inequality,
\begin{align}
\mathbb{E}[(X)_k]
&\geq C'_W(q^{1/6}k)^{-3/2}(\mu_qN)^k\exp\left(- \frac{k^2}{2\mu_qN}\right) \nonumber\\
&\geq C'_W(q^{1/6}k)^{-3/2}(\mu_qN)^k\exp\left(-W^2N^{1/3}\right). \label{e.poly moment lower bound step}
\end{align}
So we see that
\begin{align*}
\frac{\mathbb{E}[X^{2k}]^{1/2}\P\left(X>\mu_qN(1+N^{-1/3})\right)^{1/2}}{\mathbb{E}[(X)_k]} \leq \frac{C_W}{C_W'}(q^{1/6}k)^{3/4}\exp\left(-\left[\tfrac{1}{4}L-W^2\right]N^{1/3}\right).
\end{align*}
Next we use that $k\leq WN^{2/3}$ and $q<1$, so that $(q^{1/6}k)^{3/4}\leq W^{3/4}N^{1/2}$, and set $L = 4W^2+4$. Then the previous right-hand side is upper bounded by
\begin{align*}
\frac{C_WW^{3/4}}{C_W'}N^{1/2}\exp\left(-N^{1/3}\right),
\end{align*}
which is less than $\frac{1}{2}$ for all large enough $N$ depending only on $W$; also note that $L$ was chosen as a function of $W$, so we may take the overall lower bound on $N$ to be purely a function of $W$.
Now applying Lemma~\ref{l.general polynomial moment lower bound} with $R=\mu_qN(1+\varepsilon^{1/2})$ with $\varepsilon = N^{-2/3}$, Theorem~\ref{t.factorial moment asymptotics}, and using the above, we get
\begin{align*}
\mathbb{E}[X^k]
&\geq \frac{1}{2}\mathbb{E}[(X)_k]\exp\left(\frac{k(k-1)}{2\mu_qN(1+\varepsilon^{1/2})}\right)\\
&\geq c_W(q^{1/6}k)^{-3/2}(\mu_qN)^k \exp\left(-\frac{k^2}{2\mu_qN}+\frac{k(k-1)}{2\mu_qN(1+\varepsilon^{1/2})}\right).
\end{align*}
Since $(1+x)^{-1}\geq 1-x$,
\begin{align*}
\exp\left(-\frac{k^2}{2\mu_qN}+\frac{k(k-1)}{2\mu_qN(1+\varepsilon^{1/2})}\right)
&\geq \exp\left(-\frac{k^2}{2\mu_qN}+\frac{k(k-1)}{2\mu_qN}(1-\varepsilon^{1/2})\right)\\
&\geq \exp\left(-\frac{k^2\varepsilon^{1/2}}{2\mu_qN} -\frac{k}{2\mu_qN}\right)\\
&\geq \exp(-cW^2),
\end{align*}
since $\varepsilon^{1/2} = N^{-1/3}$, $\mu_q\geq 1$ and $k\leq WN^{2/3}$. So overall we see that
\begin{equation*}
\mathbb{E}[X^k] \geq c_W(q^{1/6}k)^{-3/2}(\mu_qN)^k. \qedhere
\end{equation*}
\end{proof}
\section{Concentration inequalities and the proof of Theorem~\ref*{mt.q-pushtasep bound}}
In this section we combine Theorem~\ref{t.uniform lower tail} (on the uniform tail for geometric LPP) with the representation of the position $x_N(N)$ of the first particle in $q$-pushTASEP in terms of the LPP value in an infinite periodic strip of inhomogeneous geometric random variables, and so obtain an upper bound on the lower tail of $x_N(N)$. Recall from the proof outline given in Section~\ref{s.proof ideas} that the main idea is to lower bound the LPP value by a sum of independent LPP values, each one in an $N\times N$ square; the parameter of the geometric random variables is the same within each single such square, but varies across different ones.
For this argument we need one final ingredient: a concentration inequality for a sum of independent random variables that takes into account the possibly varying scales of the summands. Indeed, we will be considering a sum of geometric LPP values where the parameter of the geometric is $q^i$ for varying $i$; the scale of fluctuation for fixed $i$ is $q^{i/6}/(1-q^{i}) \approx i^{-1}(|\log q|)^{-1} q^{i/6}$. Such a concentration inequality is recorded next, and will be proven in Section~\ref{s.concentration}.
\begin{theorem}\label{t.concentration}
Let $I\in\mathbb{N}\cup\{\infty\}$ and suppose $X_1, \ldots, X_I$ are independent, and assume that there exists $C_1<\infty$ and $\rho_1, \ldots, \rho_I >0$ such that each $X_i$ satisfies
\begin{align*}
\P\left(X_i\geq t\right) \leq C_1\exp(-\rho_i t^{3/2})
\end{align*}
for all $t>0$. Let $\sigma_2 = \sum_{i=1}^I \rho_i^{-2}$ and $\sigma_{2/3} = \sum_{i=1}^I \rho_i^{-2/3}$. Then there exist positive absolute constants $C$ and $c$ such that, for $t>0$,
\begin{align*}
\P\left(\sum_{i=1}^I X_i \geq t + C\cdot C_1\sigma_{2/3}\right) \leq \exp\left(-c\sigma_2^{-1/2}t^{3/2}\right).
\end{align*}
\end{theorem}
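A variable with exactly the tail in the theorem's hypothesis (with $C_1=1$) is easy to simulate: if $U\sim\mathrm{Uniform}(0,1)$ then $X = (\rho^{-1}(-\log U))^{2/3}$ satisfies $\P(X\geq t) = \exp(-\rho t^{3/2})$. The following illustrative check (with arbitrary values $\rho=t=1$ and a fixed seed) confirms the sampler:

```python
import math
import random

# X = ((-log U)/rho)^(2/3) has P(X >= t) = P(-log U >= rho t^(3/2))
#   = exp(-rho t^(3/2)), i.e. exactly the hypothesis of the theorem (C_1 = 1).
random.seed(0)
rho, t, n = 1.0, 1.0, 200_000
count = sum(
    1 for _ in range(n)
    if ((-math.log(random.random())) / rho) ** (2 / 3) >= t
)
empirical = count / n
# empirical tail should be close to exp(-1) = 0.3679 here
assert abs(empirical - math.exp(-rho * t ** 1.5)) < 0.01
```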
Using Theorem~\ref{t.concentration} we may give the proof of the main result, Theorem~\ref{mt.q-pushtasep bound}.
\begin{proof}[Proof of Theorem~\ref{mt.q-pushtasep bound}]
We have to upper bound, for $\theta>\theta_0=\theta_0(q)$,
\begin{align}\label{e.main prob to upper bound}
\P\left(x_N(N) \leq f_qN - (\log q^{-1})^{-1}\theta N^{1/3} \right),
\end{align}
where we recall that $f_q$ is defined in \eqref{e.fq definition} as
\begin{align*}
f_q = 2\times\frac{\psi_q(\log_q u) + \log(1-q)}{\log q} + 1.
\end{align*}
(In going from the statement of Theorem~\ref{mt.q-pushtasep bound} to \eqref{e.main prob to upper bound} we implicitly use that $\psi_q''(\log_q u)$ is bounded over $q\in(0,1)$ for fixed $u\in(0,1)$, see for example \cite[eq. (1.6)]{mansour2009some}, so that its effect can be absorbed into the constant $c$ which will show up in the exponent of the final probability bound.)
We let $\varepsilon$ be such that $q = e^{-\varepsilon}$, and we will sometimes, when convenient, write things in terms of $\varepsilon$ in the proof, so that, for example, $(\log q^{-1})^{-1}$ becomes $\varepsilon^{-1}$. We also define $\tilde f_q = f_q - 1$. By Theorem~\ref{t.q-pushTASEP to lpp}, we know that $\smash{L + N \stackrel{d}{=} x_N(N)}$, where $L$ is the LPP value from the topmost site to $\infty$ in the infinite periodic environment defined in Section~\ref{s.lpp}. Let $\smash{L_N^{(i)}}$ be the last passage time from the top to the bottom of the $i$\textsuperscript{th} large square on the vertical line from the top (which has i.i.d.\ Geo($u^2q^{2i}$) random variables associated to each small square). Then clearly $\sum_{i=0}^\infty \smash{L_N^{(i)}} \leq L$, so
\begin{align*}
\eqref{e.main prob to upper bound} = \P\left(L \leq \tilde f_qN - \varepsilon^{-1} \theta N^{1/3} \right)
\leq \P\left(\sum_{i=0}^\infty L_N^{(i)} \leq \tilde f_qN - \varepsilon^{-1} \theta N^{1/3}\right).
\end{align*}
We next add and subtract the law of large numbers term of $L_{N}^{(i)}$, which as we see from Theorem~\ref{t.uniform lower tail} is $2N uq^i(1-uq^i)^{-1}$, as this is the term by which the random variables are centered to yield the tail bounds in the same theorem. So we can write the right-hand side of the previous display as
\begin{align}\label{e.prob with mean removed}
\P\left(\sum_{i=0}^\infty \left(L_N^{(i)}-2N\frac{uq^{i}}{1-uq^{i}}\right) \leq \tilde f_qN - \sum_{i=0}^\infty 2N\frac{uq^{i}}{1-uq^{i}}- \varepsilon^{-1}\theta N^{1/3}\right).
\end{align}
We have already evaluated the LLN sum in the proof ideas section. So we recall from \eqref{e.first form of LLN} and \eqref{e.lln expression} that
\begin{align*}
\sum_{i=0}^\infty 2N\times\left[\frac{uq^{i}}{1-uq^{i}} \right]
&= 2N\times \frac{\psi_q(\log_q(u)) + \log(1-q)}{\log q} = \tilde f_q N.
\end{align*}
Putting this back into \eqref{e.prob with mean removed}, we see that
\begin{align}\label{e.concentration prob to bound}
\eqref{e.prob with mean removed} = \P\left(\sum_{i=0}^\infty \left(L_N^{(i)}-2N\frac{uq^{i}}{1-uq^{i}}\right) \leq - \varepsilon^{-1} \theta N^{1/3}\right).
\end{align}
In the remainder of the proof we will invoke the concentration bound from Theorem~\ref{t.concentration} to upper bound the previous display.
Now, $L_N^{(i)}$ is the LPP value in an $N\times N$ square with i.i.d.\ geometric random variables of parameter $u^2q^{2i}$. We know from Theorem~\ref{t.uniform lower tail} that there exist positive constants $c$ and $N_0$ such that, when $u^2q^{2i} \in (0, 1)$, $t>0$, and $N>N_0$,
\begin{align*}
\P\left(L_N^{(i)} - 2N\frac{uq^{i}}{1-uq^{i}} \leq - t\cdot\frac{u^{1/3}q^{i/3}}{1-u^2q^{2i}}N^{1/3}\right) \leq \exp(-ct^{3/2}).
\end{align*}
Rewriting $u^{1/3}q^{i/3}(1-u^2q^{2i})^{-1}$ as $\Theta(q^{i/3} (\log q^{-1})^{-1}i^{-1}) = \Theta(q^{i/3}\varepsilon^{-1} i^{-1})$ (where we absorb $u$ dependencies into the $\Theta$ notation), we see that, for $t>0$ and possibly a different $c$,
\begin{align*}
\P\left(L_N^{(i)} - 2N\frac{uq^{i}}{1-uq^{i}} \leq -t (q^{i/3} \varepsilon^{-1}i^{-1}) N^{1/3}\right) \leq \exp\left(-c t^{3/2}\right),
\end{align*}
which implies that, again for $t>0$,
\begin{align*}
\P\left(L_N^{(i)} - 2N\frac{uq^{i}}{1-uq^{i}} \leq -t N^{1/3}\right) \leq \exp\left(-c\varepsilon^{3/2} i^{3/2}q^{-i/2} t^{3/2}\right).
\end{align*}
We next want to invoke Theorem~\ref{t.concentration} with $I=\infty$ and (from the previous display) $\rho_i = \varepsilon^{3/2} i^{3/2}q^{-i/2}$. Now,
\begin{align*}
\sigma_2 = \sum_{i=1}^\infty \rho_i^{-2} = \sum_{i=1}^\infty \varepsilon^{-3} i^{-3}q^{i} = \Theta(\varepsilon^{-3})
\quad\text{and}\quad
\sigma_{2/3} = \sum_{i=1}^\infty \rho_i^{-2/3} = \sum_{i=1}^\infty \varepsilon^{-1}i^{-1}q^{i/3} = \Theta(\varepsilon^{-1}|\log\varepsilon^{-1}|);
\end{align*}
where the bounds on $\sigma_{2/3}$ hold because, for some $c'>0$, $c'\sum_{i=1}^{\varepsilon^{-1}} i^{-1} \leq \sigma_{2/3}\leq \sum_{i=1}^\infty i^{-1}e^{-\varepsilon i/3}$, and both lower and upper bounds are of order $\log(\varepsilon^{-1})$ for small $\varepsilon$, and bounded for $\varepsilon$ bounded away from zero.
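These two series estimates can be confirmed numerically (illustrative only; the values of $\varepsilon$ are arbitrary): $\sigma_2\varepsilon^3$ should stay bounded (it tends to $\sum_i i^{-3}=\zeta(3)$ as $\varepsilon\to 0$), while $\sigma_{2/3}$ grows like $\varepsilon^{-1}\log(\varepsilon^{-1})$.

```python
import math

# sigma_2 = sum eps^-3 i^-3 q^i should be Theta(eps^-3), and
# sigma_{2/3} = sum eps^-1 i^-1 q^(i/3) should be Theta(eps^-1 log(1/eps)).
for eps in (0.2, 0.05, 0.01):
    q = math.exp(-eps)
    sigma2 = sum(eps ** -3 * i ** -3 * q ** i for i in range(1, 200_000))
    sigma23 = sum(eps ** -1 * i ** -1 * q ** (i / 3) for i in range(1, 200_000))
    assert 0.5 < sigma2 * eps ** 3 < 1.3
    assert 1.0 < sigma23 * eps / math.log(1 / eps) < 3.0
```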
With these estimates, we obtain from Theorem~\ref{t.concentration} that, for all $t>0$,
\begin{align*}
\P\left(\sum_{i=0}^\infty \left(L_N^{(i)}-2N \frac{uq^{i}}{1-uq^{i}}\right) \leq -t N^{1/3} - C\varepsilon^{-1}|\log\varepsilon^{-1}|\right) \leq \exp\left(-c\varepsilon^{3/2}t^{3/2}\right).
\end{align*}
So, putting in $t=\theta \varepsilon^{-1}$, we obtain that, for $\theta>2C|\log\varepsilon^{-1}|$ with $C$ as in the last display, \eqref{e.concentration prob to bound} is bounded as
\begin{equation*}
\P\left(\sum_{i=0}^\infty \left(L_N^{(i)}-2N\frac{uq^{i}}{1-uq^{i}}\right) \leq -\theta\varepsilon^{-1} N^{1/3}\right) \leq \exp\left(-c\theta^{3/2}\right).
\end{equation*}
Thus we obtain the desired bound \eqref{e.main prob to upper bound} with $\theta_0(q) = C|\log \varepsilon^{-1}| = 2C|\log(\log q^{-1})|$.
\end{proof}
\subsection{Proving the concentration inequality}\label{s.concentration}
The only remaining task is to prove Theorem~\ref{t.concentration}. As is typical for such inequalities, the main step is to obtain a bound on the moment generating function.
\begin{proposition}\label{p.exponential moment bound}
Suppose $X$ is such that
\begin{align}\label{e.cubic tail}
\P(X\geq t) \leq C_1\exp(-\rho t^{3/2})
\end{align}
for some $\rho>0$, and all $t\geq 0$. Then, there exist positive absolute constants $C$ and $c >0$ such that, for $\lambda>0$,
$$\mathbb{E}\left[e^{\lambda X}\right] \leq \exp\left\{C\cdot C_1 \left[\lambda\rho^{-2/3} + \lambda^{3}\rho^{-2}\right]\right\}.$$
\end{proposition}
\begin{proof}
First, we see that
\begin{align*}
\mathbb{E}[e^{\lambda X}] = \mathbb{E}\left[\lambda \int_{-\infty}^X e^{\lambda x}\, \mathrm d x\right]
&= \mathbb{E}\left[\lambda \int_{-\infty}^\infty e^{\lambda x} \mathbbm{1}_{X\geq x}\, \mathrm d x\right]
= \lambda \int_{-\infty}^\infty e^{\lambda x} \cdot \P(X\geq x)\, \mathrm d x.
\end{align*}
Next, we break up the integral into two at $0$ and use the hypothesis \eqref{e.cubic tail} for the second term:
\begin{align*}
\lambda\int_{-\infty}^\infty e^{\lambda x} \cdot \P(X\geq x)\, \mathrm d x
&= \lambda\int_{-\infty}^{0} e^{\lambda x} \cdot \P(X\geq x)\, \mathrm d x + \int_{0}^\infty e^{\lambda x} \cdot \P(X\geq x)\, \mathrm d x\\
&\leq \lambda\int_{-\infty}^{0} e^{\lambda x}\,\mathrm d x + C_1\lambda\int_{0}^\infty e^{\lambda x - \rho x^{3/2}}\, \mathrm d x\\
&= 1 + C_1\lambda\int_{0}^\infty e^{\lambda x - \rho x^{3/2}}\, \mathrm d x.
\end{align*}
We focus on the second integral now, and make the change of variables $y = \lambda^{-2}\rho^{2} x \iff x=\lambda^{2}\rho^{-2} y$, to obtain
\begin{align*}
C_1\lambda\int_{0}^\infty e^{\lambda x - \rho x^{3/2}}\, \mathrm d x
= C_1\lambda^{3}\rho^{-2}\int_{0}^\infty e^{\lambda^{3}\rho^{-2}(y - y^{3/2})}\, \mathrm d y.
\end{align*}
We upper bound this essentially using Laplace's method. We first note that, for all $y>0$, it holds that $y-y^{3/2}\leq 1-\frac{1}{2}y^{3/2}$. So
\begin{align*}
C_1\lambda^{3}\rho^{-2}\int_{0}^\infty e^{\lambda^{3}\rho^{-2}(y - y^{3/2})}\, \mathrm d y
&\leq C_1\lambda^{3}\rho^{-2}\cdot \int_{0}^\infty e^{\lambda^{3}\rho^{-2}(1 - \frac{1}{2}y^{3/2})}\, \mathrm d y\\
&= C_1\lambda^{3}\rho^{-2} \cdot e^{\lambda^3\rho^{-2}}\cdot c(\lambda^3\rho^{-2})^{-2/3}\Gamma(5/3)\\
&= C\cdot C_1 (\lambda^3\rho^{-2})^{1/3}e^{\lambda^3\rho^{-2}},
\end{align*}
for positive absolute constants $C$ and $c$. Putting all the above together yields that
\begin{align*}
\mathbb{E}[e^{\lambda X}]
&\leq 1 + C\cdot C_1 (\lambda^3\rho^{-2})^{1/3}e^{c\lambda^{3}\rho^{-2}}.
\end{align*}
Now we break into two cases depending on whether $\lambda^3\rho^{-2}$ is less than or greater than 1: if $\lambda^3\rho^{-2} \leq 1$, then, since $1+x\leq \exp(x)$ and $C\cdot C_1 \exp(c\lambda^3\rho^{-2}) \leq C\cdot C_1\exp(c)$, we obtain, for an absolute constant $\tilde C$,
\begin{align*}
\mathbb{E}[e^{\lambda X}] \leq 1 + C\cdot C_1\exp(c)(\lambda^3\rho^{-2})^{1/3} \leq \exp\left(\tilde C\cdot C_1\lambda\rho^{-2/3}\right).
\end{align*}
On the other hand if $\lambda^3\rho^{-2}\geq 1$, we observe that, since $x\leq \exp(x)$, and by increasing the coefficient in the exponent,
\begin{align*}
1 + C\cdot C_1 \lambda^3\rho^{-2}e^{c\lambda^{3}\rho^{-2}} \leq e^{c'\lambda^{3}\rho^{-2}}.
\end{align*}
It is easy to see that we may take $c'$ to depend linearly on $C_1$. Thus overall, since $\max(a,b)\leq a+b$ when $a,b\geq 0$, we obtain, for some universal constant $C$,
%
\begin{equation*}
\mathbb{E}\left[e^{\lambda X}\right] \leq \exp\left(C\cdot C_1(\lambda\rho^{-2/3} + \lambda^3\rho^{-2})\right).\qedhere
\end{equation*}
\end{proof}
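The proposition can be spot-checked numerically for a variable with the exact tail $\P(X\geq x)=\exp(-\rho x^{3/2})$ on $x\geq 0$ (so $C_1=1$), using $\mathbb{E}[e^{\lambda X}] = 1 + \lambda\int_0^\infty e^{\lambda x}\P(X\geq x)\,\mathrm{d}x$. The constant $3$ below is an illustrative choice for the test values shown; the proposition only asserts that some absolute constant works.

```python
import math

def log_mgf(lam, rho, xmax=100.0, n=200_000):
    # log E[e^{lam X}] = log(1 + lam * int_0^inf e^{lam x - rho x^(3/2)} dx),
    # approximated by the midpoint rule on [0, xmax]
    h = xmax / n
    total = 0.0
    for j in range(n):
        x = (j + 0.5) * h
        total += math.exp(lam * x - rho * x ** 1.5) * h
    return math.log(1 + lam * total)

# Check log E[e^{lam X}] <= 3 * (lam rho^(-2/3) + lam^3 rho^(-2))
# for a few arbitrary (lam, rho) pairs.
for lam, rho in [(1.0, 0.5), (1.0, 1.0), (2.0, 1.0), (0.5, 2.0)]:
    assert log_mgf(lam, rho) <= 3 * (lam * rho ** (-2 / 3) + lam ** 3 * rho ** -2)
```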
\begin{proof}[Proof of Theorem~\ref{t.concentration}]
Following the proof of the Chernoff bound, we exponentiate inside the probability (with $\lambda>0$ to be chosen shortly) and apply Markov's inequality:
\begin{align*}
\P\left(\sum_{i=1}^I X_i \geq t + C\cdot C_1\sigma_{2/3}\right)
&= \P\left(\exp\left(\lambda\sum_{i=1}^I X_i\right) \geq \exp\Bigl(\lambda t + \lambda C\cdot C_1\cdot\sigma_{2/3}\Bigr)\right)\\
&\leq e^{-\lambda t - \lambda C\cdot C_1\cdot\sigma_{2/3}}\prod_{i=1}^I \mathbb{E}\left[e^{\lambda X_i}\right]\\
&\leq \exp\left(-\lambda t - \lambda C\cdot C_1\sigma_{2/3} + C\cdot C_1\left[\lambda\sum_{i=1}^I\rho_i^{-2/3} + \lambda^{3}\sum_{i=1}^I\rho_i^{-2}\right]\right)\\
&= \exp\left(-\lambda t + C\cdot C_1\cdot \lambda^{3}\sigma_2\right),
\end{align*}
the penultimate line using Proposition~\ref{p.exponential moment bound}. Optimizing over $\lambda$, i.e., setting $\lambda = c't^{1/2}\sigma_2^{-1/2}$ for some small constant $c'>0$, now yields that
\begin{equation*}
\P\left(\sum_{i=1}^I X_i \geq t + C\cdot C_1\cdot \sigma_{2/3}\right) \leq \exp\left(-ct^{3/2}\sigma_2^{-1/2}\right). \qedhere
\end{equation*}
\end{proof}
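The choice of $\lambda$ in the last step can be verified directly: the exponent $\varphi(\lambda) = -\lambda t + C\cdot C_1\lambda^{3}\sigma_2$ satisfies $\varphi'(\lambda) = -t + 3C\cdot C_1\lambda^{2}\sigma_2$, so it is minimized at
\begin{equation*}
\lambda_* = \Bigl(\frac{t}{3C\cdot C_1\sigma_2}\Bigr)^{1/2}, \qquad \varphi(\lambda_*) = -\tfrac{2}{3}\lambda_* t = -\tfrac{2}{3}(3C\cdot C_1)^{-1/2}\,t^{3/2}\sigma_2^{-1/2},
\end{equation*}
which is of the claimed form $-ct^{3/2}\sigma_2^{-1/2}$ with $c>0$ depending only on $C_1$ and the universal constant $C$.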
One of the most striking consequences of Nevanlinna's theory
was his ``five values'' theorem, which says that if $f$ and
$g$ are non-constant meromorphic functions on ${\Bbb C}$ such that
$f^{-1}(a_i)=g^{-1}(a_i)$ for five distinct points $a_i$ in the
extended complex plane, then $f=g$. This theorem is an example of
what is now known as ``uniqueness theorem". In 1975, Fujimoto
generalized this result of Nevanlinna to the case of
meromorphic maps of ${\Bbb C}^m$ into ${\Bbb C} P^n.$ In recent years, many uniqueness theorems for meromorphic maps with
hyperplanes (both for fixed and for moving ones) have
been established.
\hbox to 5cm {\hrulefill }
\noindent {\small Mathematics Subject
Classification 2000: Primary 32H30; Secondary 32H04, 32H25,
14J70.}
\noindent {\small Key words: Nevanlinna theory,
Second Main Theorem, Uniqueness Theorem.}
{\small The first named author was partially supported by the Fields Institute Toronto. The second named author was partially supported by the
post-doctoral research program of the Abdus Salam International Centre for Theoretical Physics.}
For the case of hypersurfaces, however, there are
so far only the uniqueness theorem of Thai and Tan \cite{ThT} for the case
of Fermat moving hypersurfaces and the one of Dulock and Ru
\cite{DR2} for the case of (general) fixed hypersurfaces. More precisely, in \cite{DR2},
Dulock and Ru prove that one has a uniqueness theorem for algebraically non-degenerate
holomorphic maps $f,g:{\Bbb C} \rightarrow {\Bbb C} P^n$
satisfying $f=g$ on $\cup_{i=1}^q(f^{-1}(Q_i)\cup g^{-1}(Q_i)),$
with respect to
$q > (n+1)+\frac{2Mn}{\tilde d} + \frac{1}{2}$ fixed hypersurfaces $Q_i \subset {\Bbb C} P^n$ in general position, where $\tilde d$ is the minimum of the degrees of these hypersurfaces and $M$ is the truncation level
in the Second Main Theorem for fixed hypersurface targets obtained by An-Phuong \cite{AP}
with $\epsilon = \frac{1}{2}$. Their method of proof comes from their paper \cite{DR1},
where they prove a uniqueness theorem for holomorphic curves into abelian varieties.
In this paper, by a method different to the one used by Dulock and Ru, we prove a uniqueness theorem for the
case of slowly moving hypersurfaces (Corollary \ref{3.2} below).
More precisely, we prove that one has a uniqueness theorem for algebraically non-degenerate
meromorphic maps $f,g:{\Bbb C}^m \rightarrow {\Bbb C} P^n$
satisfying $f=g$ on $\cup_{i=1}^q(f^{-1}(Q_i)\cup g^{-1}(Q_i))$
with respect to
$q > (n+1) + \frac{2nL}{\tilde d} + \frac{1}{2}$ moving hypersurfaces $Q_i \subset {\Bbb C} P^n$
in (weakly) general position, where $\tilde d$ is the minimum of the degrees of these hypersurfaces and $L$ is the truncation level
in the Second Main Theorem for moving hypersurface targets obtained by the authors in \cite{DT1} with $\epsilon = \frac{1}{2}$.
Moreover, under the additional assumption that the $f^{-1}(Q_i)$, $i=1,...,q$ intersect properly, $q > (n+1) + \frac{2L}{\tilde d} + \frac{1}{2}$ moving hypersurfaces are sufficient.
We remark that in the special case of
fixed hypersurfaces, our result gives back the uniqueness theorem
of Dulock and Ru (remark that $L \leqslant M$ in this case).
Moreover, we give our uniqueness
theorem in a slightly more general form (Theorem \ref{3.1} below), requiring assumptions on the derivatives of the maps up to order $p-1$, which in return gives better bounds
on the number of moving hypersurfaces in ${\Bbb C} P^n$, namely
$q > (n+1) + \frac{2nL}{p\tilde d} + \frac{1}{2}$ respectively $q > (n+1) + \frac{2L}{p\tilde d} + \frac{1}{2}$.
\section{Preliminaries}
For $z = (z_1,\dots,z_m) \in {\Bbb C}^m$, we set
$\Vert z \Vert = \Big(\sum\limits_{j=1}^m |z_j|^2\Big)^{1/2}$ and
define
\begin{align*}
B(r) &= \{ z \in {\Bbb C}^m : \Vert z \Vert < r\},\quad
S(r) = \{ z \in {\Bbb C}^m : \Vert z \Vert = r\},\\
d^c &= \dfrac{\sqrt{-1}}{4\pi}(\overline \partial - \partial),\quad
\Cal V = \big(dd^c \Vert z\Vert^2\big)^{m-1},\; \sigma = d^c
\text{log}\Vert z\Vert^2 \land \big(dd^c\text{log}\Vert z
\Vert\big)^{m-1}.
\end{align*}
Let $L$ be a positive integer or $+\infty$ and $\nu$ be a divisor on
${\Bbb C}^m.$ Set $ |\nu| = \overline {\{z : \nu(z) \neq 0\}}.$
We define the counting function of $\nu$ by
\begin{align*}
N^{(L)}_\nu(r) := \int\limits_1^r \frac{n^{(L)}(t)}{t^{2m-1}}dt\quad
(1 < r < +\infty),
\end{align*}
where
\begin{align*}
n^{(L)}(t) &= \int\limits_{|\nu | \cap B(t)} \text{min}\{\nu
,L\}\cdot \Cal V\ \quad
\text{for}\quad m \geq 2 \ \text{and}\\
n^{(L)}(t) &= \sum_{|z| \leqslant t}\text{min}\{ \nu(z),L\} \qquad\quad
\text{for}\quad m = 1.
\end{align*}
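For example, in the case $m=1$, if $\nu = \nu_F$ for $F(z)=z^k$, then $n^{(L)}(t)=\text{min}\{k,L\}$ for every $t>0$, and hence
\begin{align*}
N^{(L)}_\nu(r) = \text{min}\{k,L\}\int\limits_1^r\frac{dt}{t} = \text{min}\{k,L\}\,\text{log}\,r,
\end{align*}
so the truncated counting function counts each zero with multiplicity cut off at $L$.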
Let $F$ be a nonzero holomorphic function on ${\Bbb C}^m$. For a set
$\alpha = (\alpha_1,\dots,\alpha_m)$ of nonnegative integers, we set
$|\alpha| := \alpha_1 + \dots + \alpha_m$ and $\mathcal D^\alpha F
:= \dfrac{\partial^{|\alpha|}F} {\partial^{\alpha_1}z_1 \cdots
\partial^{\alpha_m}z_m}\,\cdotp$
We define the zero divisor $\nu_F$ of $F$ by
\begin{align*}
\nu_F(z) = \max \big\{ p : \mathcal D^\alpha F(z) = 0 \ \text{for
all $\alpha$ with}\ |\alpha| < p \big\}.
\end{align*}
Let $\varphi$ be a nonzero meromorphic function on ${\Bbb C}^m$. The zero
divisor $\nu_\varphi$ of $\varphi$ is defined as follows: For each
$a \in {\Bbb C}^m$, we choose nonzero holomorphic functions $F$ and $G$ on
a neighborhood $U$ of $a$ such that $\varphi = \dfrac{F}{G}$ on $U$
and $\text{dim}\big(F^{-1}(0) \cap G^{-1}(0)\big) \leqslant m-2$, then we
put $\nu_\varphi(a) := \nu_F(a)$.
Set $N_\varphi^{(L)}(r):=N_{\nu_\varphi}^{(L)}(r).$ For brevity we
will omit the character ${}^{(L)}$ in the counting function if
$L=+\infty.$
Let $f$ be a meromorphic map of ${\Bbb C}^m$ into ${\Bbb C} P^n$. For arbitrary
fixed homogeneous coordinates $(w_0: \cdots : w_n)$ of ${\Bbb C} P^n$, we
take a reduced representation $f = (f_0 : \cdots : f_n)$, which
means that each $f_i$ is a holomorphic function on ${\Bbb C}^m$ and $f(z)
= (f_0(z) : \cdots : f_n(z))$ outside the analytic set $\{ z :
f_0(z) = \cdots = f_n(z) = 0\}$ of codimension $\geq 2$. Set $\Vert
f \Vert = \max \{ |f_0|, \dots , |f_n| \}$.
The characteristic function of $f$ is defined by
\begin{align*}
T_f(r) := \int\limits_{S(r)}\text{log}\Vert f \Vert \sigma -
\int\limits_{S(1)} \text{log}\Vert f \Vert \sigma ,\quad 1 < r < +\infty.
\end{align*}
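For instance, when $m=1$ and $f=(P:Q)$ is given by coprime polynomials $P,Q$ with $\text{max}\{\deg P,\deg Q\}=k\geq 1$, we have $\text{log}\Vert f(z)\Vert = k\,\text{log}|z|+O(1)$ as $|z|\to\infty$, and hence $T_f(r)=k\,\text{log}\,r+O(1)$; the characteristic function thus measures the growth of the map $f$.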
For a meromorphic function $\varphi$ on ${\Bbb C}^m$, the characteristic function
$T_\varphi(r)$ of $\varphi$ is defined by considering $\varphi$ as
a meromorphic map of ${\Bbb C}^m$ into ${\Bbb C} P^1$.
Let $f$ be a nonconstant meromorphic map of ${\Bbb C}^m$ into ${\Bbb C} P^n$.
We say that a meromorphic function $\varphi$ on ${\Bbb C}^m$ is ``small"
with respect to $f$ if $T_\varphi(r) = o(T_f(r))$ as $r \to \infty$
(outside a set of finite Lebesgue measure).
Denote by $\mathcal M$ the field of all meromorphic functions on
${\Bbb C}^m$ and by $\Cal K_f$ the subfield of $\mathcal M$ which consists of
all ``small" (with respect to $f$) meromorphic functions on ${\Bbb C}^m$.
For a homogeneous polynomial $Q \in \Cal M[x_0,\dots,x_n]$ of degree
$d \geq 1$
we write $Q = \sum\limits_{I \in \Cal T_{d}} a_{I}x^I,$ where $ \Cal
T_d := \big\{ (i_0,\dots,i_n) \in {\Bbb N}_0^{n+1} : i_0 + \dots + i_n = d
\big\}$ and $x^I = x_0^{i_0} \cdots x_n^{i_n}$ for $x =
(x_0,\dots,x_n)$ and $I = (i_0, \dots,i_n) \in \Cal
T_d.$
Denote by $Q(z)= Q(z)(x_0, \dots, x_n)=\sum\limits_{I \in \Cal T_{d}} a_{I}(z)x^I$ the homogeneous polynomial over ${\Bbb C}$ obtained by
evaluating the coefficients of $Q$ at a specific point $z \in {\Bbb C}^m$
in which all coefficient functions of $Q$ are holomorphic.
Let $Q \in \Cal M[x_0,\dots,x_n]$ of degree
$d \geq 1$
with $Q(f):=Q(f_0, \dots , f_n) \not\equiv 0$. We define
$$
N^{(L)}_f(r,Q) := N^{(L)}_{Q(f)}(r)\;\;\text{and}\;\;
f^{-1}(Q):=\{z: \nu_{Q(f)}>0\}.$$
The \textit{First Main Theorem} of Nevanlinna theory gives, for
$Q = \sum\limits_{I \in \Cal T_{d}} a_{I}x^I$ with
$Q(f):=Q(f_0, \dots , f_n) \not\equiv 0$ :
$$ N_f(r, Q)\leqslant d\cdot T_f(r)+ O\big(\sum_{I \in \Cal
T_d}T_{{a_{I}}}(r)\big).$$
Let
\begin{align*}
Q_j = \sum\limits_{I \in \Cal T_{d_j}} a_{jI}x^I \quad (j = 1,\dots,q)
\end{align*}
be homogeneous polynomials in $\Cal K_f[x_0,\dots,x_n]$ with
$\text{deg}\,Q_j = d_j \geq 1.$ Denote by $\Cal
K_{\{Q_j\}_{j=1}^q}$ the field over ${\Bbb C}$ of all meromorphic
functions on ${\Bbb C}^m$ generated by all quotients
$\big\{\frac{a_{jI_{1}}}{a_{jI_{2}}} :a_{jI_{2}}\not\equiv 0,
I_{1}, I_{2}
\in \Cal T_{d_j}; j \in \{1,\dots,q\} \big\}$.
We say that $f$ is algebraically nondegenerate over $\Cal
K_{\{Q_j\}_{j=1}^q}$ if there is no nonzero homogeneous polynomial
$Q \in \Cal K_{\{Q_j\}_{j=1}^q}[x_0,\dots,x_n]$ such that
$Q(f_0,\dots,$ $f_n) \equiv 0$.
We say that a set $\{Q_j\}_{j=1}^q$ $(q \geq n+1)$ of homogeneous
polynomials in $\Cal K_f [x_0,\dots,$ $x_n]$ is admissible (or in (weakly) general position)
if there exists $z \in {\Bbb C}^m$
in which all coefficient functions of all $Q_j$, $j=1,...,q$ are holomorphic and such that for any
$1 \leqslant j_0 < \dots < j_n \leqslant q$ the system of equations
\begin{align} \label{zz}
\left\{ \begin{matrix}
Q_{j_i}(z)(x_0,\dots,x_n) = 0\cr
0 \leqslant i \leqslant n\end{matrix}\right.
\end{align}
has only the trivial solution $(x_0, \dots , x_n) = (0,\dots,0)$ in
${\Bbb C}^{n+1}$. We remark that in this case this is true for the generic
$z \in {\Bbb C}^m$.
In order to prove our result for (weakly) general position (under the stronger assumption of pointwise general position this can be avoided), we finally will need some classical results on resultants,
see Lang \cite{b10}, section IX.3, for the precise definition, the existence and for the principal properties of resultants, as well as Eremenko-Sodin \cite{b4}, page 127:
Let $\big\{Q_j\big\}_{j=0}^n$ be a set of homogeneous
polynomials of common degree $d \geq 1$ in
$\Cal K_f[x_0,\dots,x_n]$
\begin{align*}
Q_j = \sum_{I \in \Cal T_d} a_{jI}x^I,\quad a_{jI} \in \Cal K_f \quad
(j = 0,\dots,n).
\end{align*}
Let $T = (\dots,t_{kI},\dots)$ \ ($k \in \{0,\dots,n\}$, $I \in \Cal T_d$)
be a family of variables. Set
\begin{align*}
\widetilde Q_j = \sum_{I \in \Cal T_d} t_{jI}x^I \in {\Bbb Z}[T,x],\quad
j = 0,\dots, n.
\end{align*}
Let $\widetilde R \in {\Bbb Z}[T]$ be the resultant of $\widetilde Q_0, \dots,
\widetilde Q_n$. This is a polynomial in the variables
$T = (\dots,t_{kI},\dots)$ \ ($k \in \{0,\dots,n\}$, $I \in \Cal T_d$)
with integer coefficients, such that the condition
$\widetilde R (T) =0$ is necessary and sufficient for the
existence of a nontrivial solution
$(x_0, \dots , x_n) \not= (0,\dots,0)$ in ${\Bbb C}^{n+1}$
of the system of equations
\begin{align} \label{z}
\left\{ \begin{matrix}
\widetilde Q_{j}(T)(x_0,\dots,x_n) = 0\cr
0 \leqslant j \leqslant n\end{matrix}\right. \:.
\end{align}
From equations (\ref{z}) and (\ref{zz}) it follows immediately
that if $$\big\{Q_j= \widetilde Q_j(a_{jI})(x_0, \dots, x_n)\,, \:j=0, \dots , n\big\}$$ is an admissible set, then
\begin{equation}R := \widetilde R(\dots, a_{kI}, \dots) \not\equiv 0\,.\label{zzz}
\end{equation}
Furthermore, since $a_{kI} \in \Cal K_f$, we have
$R \in \Cal K_f$.
We finally will use the following result on resultants,
which is contained in Theorem 3.4 in \cite{b10} (see also Eremenko-Sodin \cite{b4}, page 127, for a similar result):
\begin{proposition}\label{lang} There exists a positive integer $s$
and polynomials $\big\{\widetilde
b_{ij}\big\}_{0 \leqslant i, j \leqslant n}$ in ${\Bbb Z}[T,x]$, which are (without loss of generality) zero or
homogeneous in $x$ of degree $s-d$,
such that
\begin{align*}
x_i^s \cdot \widetilde R = \sum_{j=0}^n \widetilde b_{ij} \widetilde Q_j\quad
\text{for all}\ i \in \{0,\dots,n\}.
\end{align*}
\end{proposition}
\noindent If we now set
\begin{align*}
b_{ij} = \widetilde b_{ij}\big((\dots,a_{kI},\dots), (f_0,\dots,f_n)\big),\quad
0 \leqslant i, j \leqslant n,
\end{align*}
we get
\begin{align}\label{res}
f_i^s \cdot R = \sum_{j=0}^n b_{ij} \cdot Q_j(f_0,\dots,f_n)\quad
\text{for all}\ i \in \{0,\dots,n\}.
\end{align}
In particular, if $D \subset {\Bbb C}^m$ is a divisor contained in all divisors $f^{-1}(Q_j)$, $j=0,...,n$, then $R$ vanishes on $D$: This follows from (\ref{res}) since $f=(f_0:...:f_n)$ is a reduced representation (and it follows in principle already directly from the definition of
the resultant).
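To illustrate Proposition~\ref{lang} and (\ref{res}) in the simplest case $n=1$, $d=1$, take $Q_0=a_{00}x_0+a_{01}x_1$ and $Q_1=a_{10}x_0+a_{11}x_1$. Their resultant is the determinant $R=a_{00}a_{11}-a_{01}a_{10}$, and with $s=1$ one checks directly that
\begin{align*}
x_0\cdot R = a_{11}Q_0 - a_{01}Q_1, \qquad x_1\cdot R = a_{00}Q_1 - a_{10}Q_0,
\end{align*}
so that $f_i\cdot R$ vanishes wherever both $Q_0(f_0,f_1)$ and $Q_1(f_0,f_1)$ do.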
\section{Main result}
Let $f,g$ be nonconstant meromorphic maps of ${\Bbb C}^m$ into ${\Bbb C} P^n$.
Let $\big\{Q_j\big\}_{j=1}^q$ be an admissible set of homogeneous
polynomials in $\Cal K_f [x_0,\dots,x_n]$ with $\deg Q_j = d_j \geq
1$. Denote by $d, d^*,\tilde{d}$ respectively the
least common multiple, the maximum and the minimum of the $d_j$'s. Put $N=d\cdot(4(n+1)(2^n-1)(nd+1)+n+1)$. Set
$t_{\{Q_j\}_{j=1}^q}=1$ if the field $\Cal K_{\{Q_j\}_{j=1}^q}$
coincides with the complex number field ${\Bbb C}$ (i.e.\ all $Q_j$ are fixed
hypersurface targets) and
$$
t_{\{Q_j\}_{j=1}^q}
=\Bigg(\binom{n+N}{n}^2\cdot\binom{q}{n}+\big[\frac{\big(
\binom{n+N}{n}^2\cdot\binom{q}{n}-1 \big)\cdot\log\big(
\binom{n+N}{n}^2\cdot\binom{q}{n}\big)}{\log(1+\frac{1}{4\binom{n+N}{n}N})}+1\big]^2\Bigg)^{
\binom{n+N}{n}^2\cdot\binom{q}{n}-1}$$ if $\Cal
K_{\{Q_j\}_{j=1}^q}\ne{\Bbb C},$
where we
denote $[x]:=\max\{k\in {\Bbb Z}: k\leqslant x\}$ for a real number $x.$
Let
$L=[\frac{d^*\cdot\binom{n+N}{n}t_{\{Q_j\}_{j=1}^q}-d^*}{d}+1].$
With these notations, we state our main result:
\begin{theorem} \label {3.1} a) Assume that $f,g$ are algebraically nondegenerate
over $\Cal K_{\{Q_j\}_{j=1}^q}$ and satisfy \\
\indent i) $\mathcal D^\alpha\big(\frac{f_k}{f_s}\big)=\mathcal
D^\alpha\big(\frac{g_k}{g_s}\big)$ on
$\big(\cup_{i=1}^q(f^{-1}(Q_i)\cup g^{-1}(Q_i))\big)\backslash \big(Zero
(f_s\cdot g_s)\big),$
for all $|\alpha | < p,\ 0\leqslant k\ne s
\leqslant n,$ where $p$ is a positive integer and $(f_0:\cdots:f_n),$ $(g_0:\cdots:g_n)$ are
reduced representations of $f,g$ respectively.\\
Then for $q>n+\frac{2nL}{p\tilde{d}}+\frac{3}{2}$ , we have $f\equiv g.$\\
b) Assume that $f,g$ as in a) satisfy i) and\\
\indent ii) $\dim \big(f^{-1}(Q_i)\cap f^{-1}(Q_j)\big)\leqslant m-2$ for all
$1\leqslant i<j\leqslant q$.\\
Then for $q>n+\frac{2L}{p\tilde{d}}+\frac{3}{2}$ , we have $f\equiv g.$
\end{theorem}
We note that if $p=1$ the condition $i)$ becomes the following
usual condition: $f=g$ on $\cup_{i=1}^q(f^{-1}(Q_i)\cup g^{-1}(Q_i)),$ and we state this case again explicitly because of its importance:
\begin{corollary} \label {3.2} a) Assume that $f,g$ are algebraically nondegenerate
over $\Cal K_{\{Q_j\}_{j=1}^q}$ and satisfy\\
\indent i) $f=g$ on $\cup_{i=1}^q(f^{-1}(Q_i)\cup g^{-1}(Q_i))$.\\
Then for $q>n+\frac{2nL}{\tilde{d}}+\frac{3}{2}$ , we have $f\equiv g.$\\
b) Assume that $f,g$ as in a) satisfy i) and\\
\indent ii) $\dim \big(f^{-1}(Q_i)\cap f^{-1}(Q_j)\big)\leqslant m-2$ for all
$1\leqslant i<j\leqslant q$.\\
Then for $q>n+\frac{2L}{\tilde{d}}+\frac{3}{2}$ , we have $f\equiv g.$
\end{corollary}
In order to prove Theorem~\ref{3.1}, we need the following two results.
The first one is similar to Lemma 5.1 in Ji \cite{J}, the second one
is a special case of our main result in \cite{DT1}.
\begin{proposition} \label {3.3}Let $A_1,\dots,A_k$ be pure $(m-1)$-dimensional
analytic subsets of ${\Bbb C}^m.$
Let $f_1, f_2$ be meromorphic maps
of ${\Bbb C}^m$ into ${\Bbb C} P^n$. Then there exists a dense subset $\mathcal
C\subset {\Bbb C}^{n+1}\backslash \{0\}$ such that for any
$c=(c_0,\dots,c_n)\in \mathcal C$ the hyperplane $H_c$ defined by
$c_0w_0+\cdots+c_nw_n=0$ satisfies: $\dim\big(\cup_{j=1}^kA_j\cap
f_i^{-1}(H_c)\big)\leqslant m-2, i\in\{1,2\}.$
\end{proposition}
\noindent {\bf Proof of Proposition~\ref{3.3}:}
For any irreducible pure $(m-1)-$dimensional component $\sigma$ of
$\cup_{j=1}^k A_j$ we set
\begin{equation*}
K_\sigma^i =\big\{(t_0,\dots,t_n) \in {\Bbb C}^{n+1} :
\sum\limits_{s=0}^nt_sf_{is} =0 \text{\ \ on\ }\sigma \big\}\ ,\quad
i\in\{1, 2\},
\end{equation*}
where $(f_{i0}:\cdots:f_{in})$ are reduced representations of
$f_i$. Then $K_\sigma^i$ is a complex vector subspace of ${\Bbb C}^{n+1}$.
Since $\text{dim}\{ f_{i0} = \cdots = f_{in} = 0\}\leqslant m-2,$ we get
that $\sigma\backslash\bigcup\limits_{i\in\{1, 2\}}\{ f_{i0} =
\cdots = f_{in} = 0\}\ne\varnothing.$ This implies that
$\dim K_\sigma^i \leqslant n$. Let $K = \bigcup\limits_{i\in\{1, 2\}}\bigcup\limits_\sigma K_\sigma^i,$
then $K$ is a union of at most a countable number of at
most $n-$dimensional complex vector subspaces in ${\Bbb C}^{n+1}$.
Let $\mathcal{C} = {\Bbb C}^{n+1}\backslash K$.
Then $\mathcal{C}$ meets the requirement of the Proposition.
\qed
\begin{theorem} \label {3.4} Under the same assumption as in Theorem~\ref{3.1},
we have
\begin{align*}
(q-n-\frac{3}{2}) T_f(r) \leqslant \sum_{j=1}^q \frac{1}{d_j}
N^{(L)}_f(r,Q_j),
\end{align*}
for all $r \in [1, +\infty)$ excluding a Borel subset $E$ of $[1,
+\infty)$ with $\displaystyle{\int\limits_E} dr < + \infty$.
\end{theorem}
\noindent {\bf Proof of Theorem~\ref{3.4}:} This is the special case of the Main Theorem and Proposition 1.2 in \cite{DT1}
for $\epsilon = \frac{1}{2}$ and where we estimate the different
$d_j$'s in the numerators of the expressions entering into the
truncation level $L$ by $d^*$.
\qed
\\
\noindent {\bf Proof of Theorem~\ref{3.1}:} Assume that $f\not\equiv g.$ We first prove the following\\
{\bf Claim:} There
exist (fixed) hyperplanes $H_i: a_{i0}w_0+\dots + a_{in}w_n=0\;(i=1,2)$ in ${\Bbb C}
P^n$ such that $S=S_{H_1, H_2}(f,g):=
\frac{H_1(f)}{H_2(f)}-\frac{H_1(g)}{H_2(g)}\not\equiv 0$ and
\begin{align}\dim(f^{-1}(Q_j)\cap f^{-1}\big(H_i)\big)\leqslant m-2,\; \dim(g^{-1}(Q_j)\cap g^{-1}\big(H_i)\big)\leqslant
m-2\label{1}
\end{align}
for all $j\in\{1,\dots,q\}$, $i\in\{1,2\}.$\\
{\bf Proof of the Claim}: By assumption i) of Theorem~\ref{3.1} we have pure $(m-1)-$dimensional analytic sets
\begin{align}A_j:=f^{-1}(Q_j)=g^{-1}(Q_j) \subset
{\Bbb C}^m, \:
j=1,\dots, q\,.\label{2}\end{align}
By Proposition~\ref{3.3}
there exists a dense subset $\mathcal
C\subset {\Bbb C}^{n+1}\backslash \{0\}$ such that for any
$c=(c_0,\dots,c_n)\in \mathcal C$ the hyperplane $H_c$ defined by
$c_0w_0+\cdots+c_nw_n=0$ satisfies (\ref{1}), that is
$$\dim(A_j\cap f^{-1}\big(H_c)\big)\leqslant m-2,\; \dim(A_j\cap g^{-1}\big(H_c)\big)\leqslant
m-2$$
for all $j\in\{1,\dots,q\}$. Since $f,g$ are algebraically nondegenerate
over $\Cal K_{\{Q_j\}_{j=1}^q}$, so in particular algebraically nondegenerate over ${\Bbb C}$, we have that $L_c(f)\not\equiv 0$
and $L_c(g)\not\equiv 0$ are holomorphic functions for all
$c=(c_0,\dots,c_n)\in \mathcal C$, where
$L_c(f):= \sum_{i=0}^nc_if_i$ with a reduced representation
$f=(f_0:\dots :f_n)$ and
$L_c(g):= \sum_{i=0}^nc_ig_i$ with a reduced representation
$g=(g_0:\dots :g_n)$. Finally for
$c^{(1)}, c^{(2)}\in \mathcal C$, we put
$S_{c^{(1)},c^{(2)}}(f,g):=
\frac{L_{c^{(1)}}(f)}{L_{c^{(2)}}(f)}-\frac{L_{c^{(1)}}(g)}{L_{c^{(2)}}(g)}$. In order to prove the Claim it suffices to show that for some $c^{(1)}, c^{(2)}\in \mathcal C$, $S_{c^{(1)},c^{(2)}}(f,g) \not\equiv 0$. Assume the contrary. Then for all $0 \leqslant i < j \leqslant n$
there exist sequences $(c^{(1)})_{\nu}$, $(c^{(2)})_{\nu}$,
$\nu \in {\Bbb N}$, of elements in $\mathcal C$ such that
$L_{(c^{(1)})_{\nu}}(f) \rightarrow f_i$ and
$L_{(c^{(2)})_{\nu}}(f) \rightarrow f_j$.
From this we get $$0\equiv
S_{(c^{(1)})_{\nu},(c^{(2)})_{\nu}}(f,g) \rightarrow \frac{f_i}{f_j} -
\frac{g_i}{g_j}\,,$$
which implies $0 \equiv \frac{f_i}{f_j} -
\frac{g_i}{g_j}$ for all $0\leqslant i<j\leqslant n$, contradicting the assumption $f\not\equiv g$. This proves the claim.\qed
Since $f=g$ on $\cup_{j=1}^q f^{-1}(Q_j),$ for any generic point
$$z_0\in\cup_{j=1}^q
f^{-1}(Q_j)\backslash \big( f^{-1}(H_2)\cup g^{-1}(H_2)\big)$$
(outside an analytic subset of codimension at least 2), there exists
$s\in\{0,\dots,n\}$ such that both of $f_s(z_0)$ and $g_s(z_0)$ are
different from zero. Then by assumption i) we have
\begin {align*}
\mathcal D^\alpha S(z_0)&=\mathcal D^\alpha\bigl (
\frac{H_1(f)}{H_2(f)}-\frac{H_1(g)}{H_2(g)}\bigl )(z_0)\\&
=\mathcal D^\alpha\bigl (
\frac{\sum_{v=0}^{n}\frac{f_v}{f_s}a_{1v}}{\sum_{v=0}^{n}\frac{f_v}{f_s}a_{2v}}-\frac{\sum_{v=0}^{n}\frac{g_v}{g_s}a_{1v}}{\sum_{v=0}^{n}\frac{g_v}{g_s}a_{2v}}\bigl
)(z_0)=0
\end{align*}
for all $ |\alpha |< p.$
\noindent This implies that
\begin{align}
\nu_S\ge p \;\text{on}\;\cup_{j=1}^q f^{-1}(Q_j)\backslash
\big(A\cup f^{-1}(H_2)\cup g^{-1}(H_2)\big),\label{3}
\end{align}
where $A$ is an analytic subset of codimension at least 2.
Now we will estimate the divisors $\nu_{Q_j \circ f}$ by making use of the resultants:
In fact, for any $J=\{j_0,...,j_n\} \subset \{1,2,...,q\}$, let $R_J$ be the resultant
of $Q_{j_0},...,Q_{j_n}$. If $D \subset {\Bbb C}^m$ is a divisor contained in all divisors $f^{-1}(Q_{j_k})$, $k=0,...,n$, then $R_J$ vanishes on $D$. Thus, we get
\begin{align}\label{n}
\sum_{j=1}^q\text{min}\{1, \nu_{Q_j \circ f}\} \leqslant n \cdot \text{min}\{1, \sum_{j=1}^q \nu_{Q_j \circ f}\} + (q-n) \cdot \text{min}\{1, \sum_{|J|=n+1}\nu_{R_J}\}
\end{align}
By (\ref{1}), (\ref{2}),(\ref{3}), (\ref{n}), by the First Main Theorem and since $R_J \in {\cal K}_f$, we have
\begin{align}
\sum_{j=1}^qN_g^{(1)}(r,Q_j)=\sum_{j=1}^qN_f^{(1)}(r,Q_j) &\leqslant
\frac{n}{p}N_S(r) + o(T_f(r))\label{4a}
\end{align}
Furthermore, by the First Main Theorem
\begin{align}
N_S(r)
&\leqslant
T_{\frac{H_1(f)}{H_2(f)}-\frac{H_1(g)}{H_2(g)}}(r)+O(1)\notag\\
&\leqslant
T_{\frac{H_1(f)}{H_2(f)}}(r)+T_{\frac{H_1(g)}{H_2(g)}}(r)+O(1)\notag\\
&\leqslant T_f(r)+T_g(r)+O(1).\label{4}
\end{align}
Thus,
\begin{align}
\sum_{j=1}^q\big(N_f^{(1)}(r,Q_j)+N_g^{(1)}(r,Q_j)\big)\leqslant
\frac{2n}{p}\big(T_f(r)+T_g(r)\big)+o(T_f(r)).\label{5}
\end{align}
By Theorem~\ref{3.4} and by the First Main Theorem, we have
\begin{align}
(q-n-\frac{3}{2})T_f(r)\leqslant\sum_{j=1}^q\frac{1}{d_j}N_f^{(L)}(r,Q_j)\notag\\
\leqslant\sum_{j=1}^q\frac{L}{d_j}N_f^{(1)}(r,Q_j)=\sum_{j=1}^q\frac{L}{d_j}N_g^{(1)}(r,Q_j)\leqslant
qLT_g(r)+o(T_f(r))\label{6}
\end{align}
for all $r \in [1, +\infty)$ excluding a Borel subset $E$ of $(1,
+\infty)$ with $\displaystyle{\int\limits_E} dr < + \infty$
(note that $Q_j\in\mathcal K_f[x_0,\dots,x_n]).$
\noindent This implies that $\mathcal K_f\subset\mathcal K_g.$ Then
$\{Q_j\}_{j=1}^q\subset\mathcal K_g[x_0,\dots,x_n].$
So we can
apply Theorem~\ref{3.4} for both meromorphic maps $f$ and $g$ with moving
hypersurfaces $\{Q_j\}_{j=1}^q.$ By Theorem~\ref{3.4} and by the First Main
Theorem, we have
\begin{align}
(q-n-\frac{3}{2})\big(T_f(r)+T_g(r)\big)\leqslant\sum_{j=1}^q\frac{1}{d_j}\big(N_f^{(L)}(r,Q_j)+N_g^{(L)}(r,Q_j)\big)\notag\\
\leqslant\frac{L}{\tilde{d}}\sum_{j=1}^q\big(N_f^{(1)}(r,Q_j)+N_g^{(1)}(r,Q_j)\big)\label{7}
\end{align}
for all $r \in [1, +\infty)$ excluding a Borel subset $E$ of $(1,
+\infty)$ with $\displaystyle{\int\limits_E} dr < + \infty$.
Combining with (\ref{5}), we get
\begin{align}
(q-n-\frac{3}{2})\big(T_f(r)+T_g(r)\big)\leqslant\frac{2nL}{p\tilde{d}}\big(T_f(r)+T_g(r)\big)+o(T_f(r))
\label{8}\end{align}
for all $r \in [1, +\infty)$ excluding a Borel subset $E$ of $(1,
+\infty)$ with $\displaystyle{\int\limits_E} dr < + \infty$.
This is a contradiction, since
$q>n+\frac{2nL}{p\tilde{d}}+\frac{3}{2}$, thus finishing the proof of part a).\\
In order to prove b), we observe that under the additional assumption ii) we can
improve (\ref{4a}): using (\ref{1}), (\ref{2}), (\ref{3}) and assumption ii), we get
\begin{align}
\sum_{j=1}^qN_g^{(1)}(r,Q_j)=\sum_{j=1}^qN_f^{(1)}(r,Q_j) &\leqslant
\frac{1}{p}N_S(r)\label{4ab}
\end{align}
This improves (\ref{5}), namely we get from (\ref{4}) and (\ref{4ab}):
\begin{align}
\sum_{j=1}^q\big(N_f^{(1)}(r,Q_j)+N_g^{(1)}(r,Q_j)\big)\leqslant
\frac{2}{p}\big(T_f(r)+T_g(r)\big)+O(1).\label{5b}
\end{align}
Consequently, using (\ref{7}) and (\ref{5b}), the estimate (\ref{8}) becomes:
\begin{align}
(q-n-\frac{3}{2})\big(T_f(r)+T_g(r)\big)\leqslant\frac{2L}{p\tilde{d}}\big(T_f(r)+T_g(r)\big)+O(1)
\label{8b}\end{align}
for all $r \in [1, +\infty)$ excluding a Borel subset $E$ of $(1,
+\infty)$ with $\displaystyle{\int\limits_E} dr < + \infty$.
This is a contradiction, since
$q>n+\frac{2L}{p\tilde{d}}+\frac{3}{2}$, thus finishing the proof of part b).
\qed
\section{Introduction}
\label{sec:introduction}
\emph{Type theory} and \emph{higher category theory} are closely
related: dependent type theories with intensional identity types
provide a syntactic way of reasoning about
\((\infty,1)\)-categories. This is known as the family of
\emph{internal language conjectures} and has led for example to syntactic developments of classical material in homotopy theory such as the homotopy groups of spheres \parencite{brunerie2016thesis, brunerie2019spheres, licata2013spheres} and the Blakers-Massey Theorem \parencite{hou2016blakersmassey}, just to name a few. These proofs often lead to new perspectives on classical material and their nature makes them applicable to a wider class of \((\infty,1)\)-categories, importing ideas from the homotopy theory of spaces to other \((\infty,1)\)-categories, see for example \parencite{anel2018goodwillie} and \parencite{anel2020blakersmassey}. One of the main appeals of type theory for higher category theory and homotopy theory is thus the usage of this type theoretic language to reason in a synthetic way.
On the other hand, higher categories will be useful for the
study of type theories. For example, one can expect a conceptual proof
of Voevodsky's homotopy canonicity conjecture that any closed term of the type of natural
numbers is homotopic to a numeral using a higher dimensional analogue of
the Freyd cover \parencite{lambek1986higher}.
However, internal language conjectures are still open problems in \emph{homotopy type theory}
\parencite{hottbook}.
The advantage of type-theoretic languages, namely that many equations hold \emph{strictly} in type theories so that
many trivial homotopies in \((\infty,1)\)-categories can be
eliminated, is at the same time the main difficulty of
internal language conjectures. One has to justify interpreting strict
equality in type theories as homotopies in
\((\infty,1)\)-categories. This is an \(\infty\)-dimensional version
of the \emph{coherence problem} in the categorical semantics of type
theories.
An internal language conjecture should be formulated as an equivalence
between an \((\infty,1)\)-category of theories and an
\((\infty,1)\)-category of structured
\((\infty,1)\)-categories. Currently, only a few internal language
conjectures have been made precise. \Textcite{kapulkin2018homotopy}
made precise formulations of the simplest cases and conjectured that
the \((\infty,1)\)-category of theories over Martin-L{\"o}f type
theory with intensional identity types (and dependent function types
with function extensionality) is equivalent to the
\((\infty,1)\)-category of small \((\infty,1)\)-categories with finite
limits (and pushforwards). In this paper, we prove
\citeauthor{kapulkin2018homotopy}'s conjecture by introducing a novel
\(\infty\)-dimensional generalization of type theories which we call
\emph{\(\infty\)-type theories}.
The basic strategy for proving \citeauthor{kapulkin2018homotopy}'s
conjecture is to decompose the equivalence to be proved into smaller
pieces. An existing approach is to introduce \(1\)-categorical
presentations of \((\infty,1)\)-categories with finite limits. It had
already been shown by \textcite{szumilo2014two} that
\((\infty,1)\)-categories with finite limits are equivalent to
categories of fibrant objects in the sense of
\textcite{brown1973abstract}. \Textcite{kapulkin2019internal} then
proved that categories of fibrant objects are equivalent to
\citeauthor{joyal2017clans}'s tribes
\parencite{joyal2017clans}. Tribes are considered as \(1\)-categorical
models of the type theory, but a full proof of the equivalence between
tribes and theories has not yet been achieved.
Although this approach is natural for those who know the homotopical
interpretation of intensional identity types
\parencite{awodey2009homotopy,arndt2011homotopy,shulman2015inverse},
\(1\)-categorical models of intensional identity types are not
convenient to work with. A problem is that \(1\)-categories of
\(1\)-categorical models of type theories need not be rich enough to
calculate the \((\infty,1)\)-categories they present. It is also
unclear if this approach can be generalized to internal language
conjectures for richer type theories.
In this paper we seek another path. The key idea is to introduce a
notion of \emph{\(\infty\)-type theories}, an \(\infty\)-dimensional
generalization of type theories. Intuitively, an \(\infty\)-type
theory is a kind of type theory but equality is like homotopies rather
than strict one. Ordinary type theories are considered as truncated
\(\infty\)-type theories in the sense that all homotopies are
trivial.
Our proof strategy is as follows. Let \(\itth\) denote the type theory
with intensional identity types. We introduce an \(\infty\)-type
theory \(\itth_{\infty}\) which is analogous to \(\itth\) but without
truncation. Because \(\itth_{\infty}\) is already a higher dimensional
object, it is straightforward to interpret \(\itth_{\infty}\) in
\((\infty,1)\)-categories with finite limits. The internal language
conjecture is then reduced to a coherence problem between \(\itth\)
and \(\itth_{\infty}\): how to interpret \(\itth\) in models of
\(\itth_{\infty}\). Although this coherence problem is as difficult as
the original internal language conjecture, this reduction is an
important step. Since the problem is now formulated in the language of
\(\infty\)-type theories and related concepts, our proof strategy is
easily generalized to internal language conjectures for richer type
theories. When we extend \(\itth\) by some type constructors, we just
extend \(\itth_{\infty}\) in the same way.
A solution to coherence problems in the \(1\)-categorical semantics of
type theories given by \textcite{hofmann1995interpretation} is to
replace a ``non-split'' model, in which equality between types is up
to isomorphism, by an equivalent ``split'' model, in which equality
between types is strict. In our approach, models of \(\itth_{\infty}\)
are like non-split models of \(\itth\), so we consider replacing a
model of \(\itth_{\infty}\) by an equivalent model of
\(\itth\). Splitting techniques for \((\infty,1)\)-categorical
structures have not yet been fully developed except for some
presentable \((\infty,1)\)-categories considered by
\textcite{gepner2017univalence,shulman2019toposes}. Since we have to
split small \((\infty,1)\)-categories which are usually
non-presentable, their results cannot directly apply. However, as he
already mentioned in \parencite[Remark 1.4]{shulman2019toposes},
\citeauthor{shulman2019toposes}'s result on splitting presentable
\((\infty,1)\)-toposes can be used for splitting small
\((\infty,1)\)-categories by embedding them into presheaf
\((\infty,1)\)-toposes.
\paragraph{Organization}
In \cref{sec:preliminaries} we fix notations and remember some
concepts in \((\infty,1)\)-category theory. Relevant concepts to this
paper are \emph{\(\infty\)-cosmoi} \parencite{riehl2018elements},
\emph{compactly generated} \((\infty,1)\)-categories,
\emph{exponentiable} arrows, and \emph{representable maps} between
right fibrations.
We introduce the notion of an \(\infty\)-type theory in
\cref{sec:infty-type-theories}. It is defined as an
\((\infty,1)\)-category with a certain structure, generalizing the
categorical definition of type theories introduced by the second named
author \parencite{uemura2019framework}. There are two important
notions around \(\infty\)-type theories: \emph{models} and
\emph{theories}. The notion of models we have in mind is a
generalization of categories with families
\parencite{dybjer1996internal} and, equivalently, natural models
\parencite{awodey2018natural,fiore2012discrete}. The notion of
theories is close to the essentially algebraic definitions of theories
given by
\textcite{garner2015combinatorial,isaev2017algebraic,voevodsky2014b-system}.
In \cref{sec:theory-model-corr} we prove \(\infty\)-analogues of the
main results of the previous work
\parencite{uemura2019framework}. Given an \(\infty\)-type theory
\(\tth\), we construct a functor that assigns to each model of
\(\tth\) a \(\tth\)-theory called the \emph{internal language} of the
model. The internal language functor has a fully faithful left adjoint
which constructs a \emph{syntactic model} from a \(\tth\)-theory. We
further characterize the image of the left adjoint.
We study some concrete \(\infty\)-type theories in
\cref{sec:corr-betw-type}. The most basic example is
\(\etth_{\infty}\), the \(\infty\)-analogue of Martin-L{\"o}f type
theory with \emph{extensional} identity types. We show that
\(\etth_{\infty}\)-theories are equivalent to
\((\infty,1)\)-categories with finite limits (\cref{etth-dem-lex}),
which is an \(\infty\)-analogue of the result of
\textcite{clairambault2014biequivalence}. This is intended as an
intermediate step toward \citeauthor{kapulkin2018homotopy}'s
conjecture, but it also has an interesting corollary. One can derive a
new universal property of the \((\infty,1)\)-category of small
\((\infty,1)\)-categories with finite limits from a universal property
of \(\etth_{\infty}\) (\cref{Lex-ump}). We also study a couple of
examples of \(\infty\)-type theories with dependent function
types. Finally in \cref{sec:intern-lang-left} we prove
\citeauthor{kapulkin2018homotopy}'s conjecture.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{\(\infty\)-categories}
For concreteness, we will work with \emph{\(\infty\)-categories}, also called \emph{quasicategories} in the literature \parencite{joyal2008notes, lurie2009higher, cisinski2019higher}, as models for \((\infty,1)\)-categories. An \(\infty\)-category is a simplicial set satisfying certain horn filling conditions. We recall some standard definitions and notation.
\begin{definition}
\begin{enumerate}
\item Given an \(\infty\)-category \(\cat\) and a simplicial set \(\sh\), we denote by \(\Fun(\sh, \cat)\) the internal hom of simplicial sets, which is itself an \(\infty\)-category and models the \(\infty\)-category of functors and natural transformations.
\item For an \(\infty\)-category \(\cat\), we denote by \(\cat^\simeq\) the largest \(\infty\)-groupoid (Kan complex) contained in \(\cat\). Furthermore we write \(\lgpd (\cat,\catI):= \Fun(\cat,\catI)^\simeq\).
\item For an \(\infty\)-category \(\cat\), we denote by \(\cat^{\rcone}\) the join \(\cat \star \Delta^0\).
\item We denote by \(\Cat_\infty\) the \(\infty\)-category of small \(\infty\)-categories. This is obtained as the homotopy coherent nerve of the simplicial category with objects given by small \(\infty\)-categories and hom simplicial sets given by \(\lgpd(\cat, \catI)\).
\item We denote by \(\CAT_\infty\) the \(\infty\)-category of (possibly large) \(\infty\)-categories obtained in a similar way.
\item We denote by \(\Space\) the \(\infty\)-category of small \(\infty\)-groupoids obtained as the homotopy coherent nerve of the simplicial category with objects small Kan complexes and hom simplicial sets given by the internal hom of simplicial sets.
\end{enumerate}
\end{definition}
Although we chose to work with \(\infty\)-categories, we will primarily use the language of the formal category theory of \(\infty\)-categories as expressed by \(\infty\)-cosmoi. Therefore, most of our constructions, statements and proofs are independent of the model.
\subsection{\(\infty\)-cosmoi}
An \emph{\(\infty\)-cosmos} \parencite{riehl2018elements} is, roughly,
a complete \((\infty,2)\)-category with enough structure to do formal
category theory. More concretely, an \(\infty\)-cosmos \(\cosmos\) is
a simplicially enriched category such that for any pair of objects
\(\cat,\catI \in \cosmos\), the hom simplicial set
\(\cosmos(\cat,\catI)\) is an \(\infty\)-category. \(\cosmos\) is also
equipped with a class of morphisms called isofibrations, and all small
\((\infty,1)\)-categorical limits are constructible from products and
pullbacks of isofibrations. Moreover, \(\cosmos\) has cotensors with
small simplicial sets \(\sh\cotensor \cat\) characterized by the
equivalence (isomorphism, in fact) of \(\infty\)-categories
\[
\cosmos( \catI, \sh \cotensor \cat) \simeq \Fun ( \sh, \cosmos(\catI,\cat)).
\]
Given an \(\infty\)-category, we may take its \emph{homotopy category}, which is just an ordinary category. Applying this to the hom \(\infty\)-categories of an \(\infty\)-cosmos gives rise to a 2-category. Adjunctions and equivalences in \(\infty\)-cosmoi are then defined in the usual way using this 2-category.
\begin{example}
We denote by \(\kCAT_\infty\) the \(\infty\)-cosmos of (possibly large) \(\infty\)-categories. That is, \(\kCAT_\infty\) is the simplicial category with objects (possibly large) \(\infty\)-categories and hom simplicial sets \(\Fun(\cat,\catI)\). The cotensor \(\sh \cotensor \cat\) in \(\kCAT_\infty\) is given by the functor \(\infty\)-category \(\Fun(\sh ,\cat)\) and adjunctions and equivalences agree with the standard notions of \(\infty\)-categories.
\end{example}
Cartesian fibrations and right fibrations in \(\infty\)-cosmoi are
defined by analogy with those in complete 2-categories. Here we
prefer to work with versions of these concepts that are invariant
under equivalence. The following definition coincides with
\citeauthor{riehl2018elements}'s when \(\fun\) is an isofibration.
\begin{definition}
A functor \(\fun\colon \cat \to \catI\) in an \(\infty\)-cosmos \(\cosmos\) is said to be a \emph{cartesian fibration} if the functor
\[
(\ev_1, \fun_\ast)\colon \Delta^1 \cotensor \cat \to \cat \times_{\catI} (\Delta^1 \cotensor \catI)
\]
has a right adjoint with invertible counit. A \emph{fibred functor}
between cartesian fibrations is a morphism in \(\cosmos^{\to}\)
that commutes with the right adjoint of \((\ev_{1}, \fun_{\ast})\). A
cartesian fibration is a \emph{right fibration} if
\((\ev_1, \fun_\ast)\) is an equivalence.
For a small \(\infty\)-category \(\cat\), we denote by
\(\CartFib_{\cat}\subset \Cat_\infty/\cat\) the \(\infty\)-category of
cartesian fibrations over \(\cat\) and fibred functors over
\(\cat\). We denote by \(\RFib_{\cat}\subset \CartFib_{\cat}\) the
full subcategory spanned by the right fibrations over \(\cat\). Note
that any functor between right fibrations over \(\cat\) is
automatically a fibred functor, so \(\RFib_{\cat}\) is a full
subcategory of \(\Cat_{\infty}/\cat\). We
write \(\RFib \subset \Cat_{\infty}^{\to}\) for the full subcategory
spanned by the right fibrations.
\end{definition}
\subsection{Compactly generated \(\infty\)-categories}
\begin{definition}[{\textcite[Definition 5.5.7.1 and Theorem
5.5.1.1]{lurie2009higher}}]
An \(\infty\)-category \(\cat\) is said to be
\emph{compactly generated} if it is an
\(\omega\)-accessible localization of \(\Fun(\catI^{\op}, \Space)\),
that is, a reflective full subcategory of \(\Fun(\catI^{\op},
\Space)\) closed under filtered colimits,
for some small \(\infty\)-category \(\catI\). The subcategory of \(\CAT_{\infty}\) spanned by the compactly generated \(\infty\)-categories and \(\omega\)-accessible right adjoints is denoted by \(\PrR_{\omega}\). We will moreover denote by \(\kPrR_{\omega} \subset \kCAT_{\infty}\) the locally full subcategory spanned by the compactly
generated \(\infty\)-categories and \(\omega\)-accessible right
adjoints.
\end{definition}
Recall \parencite[Proposition 5.5.7.6]{lurie2009higher} that \(\PrR_{\omega}\subset \CAT_{\infty}\) is closed under small limits. By definition, compactly generated \(\infty\)-categories are closed in \(\kCAT_{\infty}\) under cotensors with small simplicial sets. Hence, the subcategory
\(\kPrR_{\omega} \subset \kCAT_{\infty}\) is an \(\infty\)-cosmos, and the inclusion
\(\kPrR_{\omega} \to \kCAT_{\infty}\) preserves the structures of
\(\infty\)-cosmoi and reflects equivalences.
\begin{example}
The \(\infty\)-category \(\Space\) of small spaces is compactly
generated. The \(\infty\)-category \(\Cat_{\infty}\) of small
\(\infty\)-categories is compactly generated, and the functor
\(\lgpd(\Delta^{n}, {-}) : \Cat_{\infty} \to \Space\) sending an
\(\infty\)-category \(\cat\) to the space of \(n\)-cells of \(\cat\)
is an \(\omega\)-accessible right adjoint. This is because
\(\Cat_{\infty}\) is regarded as an \(\omega\)-accessible
localization of \(\Fun(\Delta^{\op}, \Space)\) using the equivalence
of quasicategories and complete Segal spaces
\parencite{joyal2007quasi}.
\end{example}
\begin{example}
For a small \(\infty\)-category \(\cat\), the \(\infty\)-category
\(\CartFib_{\cat}\) of cartesian fibrations over \(\cat\) and fibred
functors over \(\cat\) is compactly generated as
\(\CartFib_{\cat} \simeq \Fun(\cat^{\op}, \Cat_{\infty})\). The
forgetful functor \(\CartFib_{\cat} \to \Cat_{\infty}/\cat\) is an
\(\omega\)-accessible right adjoint. To see this, observe that this
forgetful functor is the right derived functor of the forgetful
functor
\[
\SSet^{+}/\cat^{\sharp} \to \SSet/\cat
\]
which is a right Quillen functor with respect to the cartesian model
structure and the slice model structure of the Joyal model structure
on \(\SSet\) \parencite[Proposition 3.1.5.2]{lurie2009higher} or \parencite[Proposition
3.1.18]{nguyen2019thesis}. The functor
\(\SSet^{+}/\cat^{\sharp} \to \SSet/\cat\) preserves filtered
colimits, and filtered colimits are homotopy colimits in both model
structures, from which it follows that the right derived functor
preserves filtered colimits.
\end{example}
\begin{example}
\label{LAdj-cg}
We define a subcategory \(\LAdj \subset
\Cat_{\infty}^{\to}\) to be the pullback
\[
\begin{tikzcd}
\LAdj
\arrow[rr]
\arrow[d] & &
\CartFib_{\Delta^{1}}
\arrow[d] \\
\Cat_{\infty}^{\to}
\arrow[r,"\simeq"'] &
\coCartFib_{\Delta^{1}}
\arrow[r] &
\Cat_{\infty}/\Delta^{1}.
\end{tikzcd}
\]
By construction, \(\LAdj\) is compactly generated, and the forgetful
functor \(\LAdj \to \Cat_{\infty}^{\to}\) is a conservative,
\(\omega\)-accessible right adjoint. Since a functor
\(\fun : \catII \to \Delta^{1}\) that is both a cocartesian
fibration and a cartesian fibration can be identified with an
adjunction between the fibers over \(0\) and \(1\), the
\(\infty\)-category \(\LAdj\) can be described as follows:
\begin{itemize}
\item the objects are the functors \(\fun : \cat \to \catI\) that
have a right adjoint \(\fun^{*}\);
\item the morphisms
\((\fun_{1} : \cat_{1} \to \catI_{1}) \to (\fun_{2} : \cat_{2} \to
\catI_{2})\) are the squares
\[
\begin{tikzcd}
\cat_{1}
\arrow[r,"\funI"]
\arrow[d,"\fun_{1}"'] &
\cat_{2}
\arrow[d,"\fun_{2}"] \\
\catI_{1}
\arrow[r,"\funII"'] &
\catI_{2}
\end{tikzcd}
\]
satisfying the Beck-Chevalley condition: the canonical natural
transformation
\[
\begin{tikzcd}
\cat_{1}
\arrow[r,"\funI"]
\arrow[dr,Rightarrow,start
anchor={[xshift=1ex,yshift=-1ex]},end
anchor={[xshift=-1ex,yshift=1ex]}] &
\cat_{2} \\
\catI_{1}
\arrow[u,"\fun_{1}^{*}"]
\arrow[r,"\funII"'] &
\catI_{2}
\arrow[u,"\fun_{2}^{*}"']
\end{tikzcd}
\]
is invertible.
\end{itemize}
\end{example}
We use \cref{LAdj-cg} to verify that an \(\infty\)-category whose
objects are small \(\infty\)-categories equipped with a structure
defined by adjunctions is compactly generated.
\begin{example}
For a finitely presentable simplicial set \(\sh\), we define
\(\Lex_{\infty}^{(\sh)}\) to be the pullback
\[
\begin{tikzcd}
\Lex_{\infty}^{(\sh)}
\arrow[r]
\arrow[d] &
[10ex]
\LAdj
\arrow[d] \\
\Cat_{\infty}
\arrow[r,"\cat \mapsto (\diagonal : \cat \to \sh \cotensor
\cat)"'] &
\Cat_{\infty}^{\to}.
\end{tikzcd}
\]
Then \(\Lex_{\infty}^{(\sh)}\) is the \(\infty\)-category of small
\(\infty\)-categories with limits of shape \(\sh\). We define the
\(\infty\)-category \(\Lex_{\infty}\) of small left exact
\(\infty\)-categories to be the wide pullback of
\(\Lex_{\infty}^{(\sh)}\) over \(\Cat_{\infty}\) for all finitely
presentable simplicial sets \(\sh\). By construction,
\(\Lex_{\infty}^{(\sh)}\) and \(\Lex_{\infty}\) are compactly
generated, and the forgetful functors to \(\Cat_{\infty}\) are
conservative, \(\omega\)-accessible right adjoints.
\end{example}
We remark that codomain functors are always cartesian fibrations in
\(\kPrR_{\omega}\).
\begin{proposition}
\label{PrR-codomain-fibration}
For a compactly generated \(\infty\)-category \(\cat\),
the functor \(\cod : \cat^{\to} \to \cat\) is a cartesian fibration
in \(\kPrR_{\omega}\).
\end{proposition}
\begin{proof}
Recall that finite limits commute with filtered colimits
in any compactly generated \(\infty\)-category
\(\cat\). This implies that \(\cat\) is finitely complete in the
\(\infty\)-cosmos \(\kPrR_{\omega}\) (that is, the diagonal functor
\(\cat \to \sh \cotensor \cat\) has a right adjoint for every
finitely presentable simplicial set \(\sh\)). Hence, the codomain
functor \(\cat^{\to} \to \cat\) is a cartesian fibration.
\end{proof}
\subsection{Exponentiable arrows}
\label{sec:exponentiable-arrows}
\begin{definition}
An arrow \(\arr : \obj \to \objI\) in a left exact
\(\infty\)-category \(\cat\) is said to be \emph{exponentiable} if
the pullback functor \(\arr^{*} : \cat/\objI \to \cat/\obj\) has a
right adjoint. If this is the case, we refer to the right adjoint of
\(\arr^{*}\) as the \emph{pushforward along \(\arr\)} and denote it by
\(\arr_{*} : \cat/\obj \to \cat/\objI\).
\end{definition}
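\begin{example}
  If \(\cat\) is locally cartesian closed, that is, every pullback
  functor \(\arr^{*}\) has a right adjoint, then every arrow in
  \(\cat\) is exponentiable. This is the case, for instance, when
  \(\cat\) is an \(\infty\)-topos. In a general left exact
  \(\infty\)-category, exponentiability is a nontrivial condition on
  the arrow \(\arr\).
\end{example}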
\begin{definition}
For an exponentiable arrow \(\arr : \obj \to \objI\) in a left exact
\(\infty\)-category \(\cat\), the \emph{associated polynomial
functor} \(\poly_{\arr} : \cat \to \cat\) is the composite
\[
\begin{tikzcd}
\cat
\arrow[r,"\obj^{*}"] &
\cat/\obj
\arrow[r,"\arr_{*}"] &
\cat/\objI
\arrow[r,"\objI_{!}"] &
\cat
\end{tikzcd}
\]
where \(\obj^{*}\) is the pullback along \(\obj \to \terminal\) and
\(\objI_{!}\) is the forgetful functor.
\end{definition}
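\begin{example}
  The terminology is explained by the prototypical case
  \(\cat = \Set\): for a map of sets \(f : A \to B\), the associated
  polynomial functor is computed by
  \[
    \poly_{f}(X) \cong \coprod_{b \in B} X^{f^{-1}(b)},
  \]
  a ``polynomial'' in the variable \(X\) whose exponents are the
  fibers of \(f\) \parencite{gambino2013polynomial}.
\end{example}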
Recall that polynomials can be \emph{composed}
\parencite{gambino2013polynomial,weber2015polynomials}: given two
exponentiable arrows \(\arr_{1} : \obj_{1} \to \objI_{1}\) and
\(\arr_{2} : \obj_{2} \to \objI_{2}\), we have an exponentiable arrow
\(\arr_{1} \otimes \arr_{2}\) such that
\(\poly_{\arr_{1} \otimes \arr_{2}} \simeq \poly_{\arr_{1}} \circ
\poly_{\arr_{2}}\). We may also concretely define \(\arr_{1} \otimes
\arr_{2}\) as follows: \(\cod (\arr_{1} \otimes \arr_{2}) =
\poly_{\arr_{1}} \objI_{2}\); \(\dom (\arr_{1} \otimes \arr_{2})\) is
the pullback
\[
\begin{tikzcd}
\dom (\arr_{1} \otimes \arr_{2})
\arrow[r]
\arrow[d] &
\obj_{2}
\arrow[d, "\arr_{2}"] \\
\poly_{\arr_{1}} \objI_{2} \times_{\objI_{1}} \obj_{1}
\arrow[r, "\ev"'] &
\objI_{2};
\end{tikzcd}
\]
\(\arr_{1} \otimes \arr_{2}\) is the composite
\(\dom (\arr_{1} \otimes \arr_{2}) \to \poly_{\arr_{1}} \objI_{2}
\times_{\objI_{1}} \obj_{1} \to \poly_{\arr_{1}} \objI_{2} = \cod
(\arr_{1} \otimes \arr_{2})\).
\subsection{Representable maps of right fibrations}
\label{sec:repr-maps-right}
We review the notion of a \emph{representable map} of right
fibrations, which is a generalization of a representable map of
discrete fibrations over a \(1\)-category. We think of a representable
map of right fibrations as an \(\infty\)-categorical analogue of a
natural model of type theory \parencite{awodey2018natural} and a
category with families \parencite{dybjer1996internal}.
\begin{definition}
We say a map \(\map : \sh \to \shI\) of right fibrations over an
\(\infty\)-category \(\cat\) is \emph{representable} if it has a
right adjoint.
\end{definition}
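\begin{example}
  For orientation, we spell out the classical case. When \(\cat\) is a
  \(1\)-category, right fibrations with discrete fibers over \(\cat\)
  correspond to presheaves of sets on \(\cat\), and a map of right
  fibrations is representable in the above sense exactly when the
  corresponding map of presheaves \(F \to G\) is representable in the
  usual sense: for every object \(\objI \in \cat\), the pullback of
  \(F\) along any map \(\cat({-}, \objI) \to G\) is a representable
  presheaf \parencite{awodey2018natural}.
\end{example}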
\begin{proposition}
\label{rfib-slice}
Let \(\proj : \sh \to \cat\) be a right fibration between
\(\infty\)-categories. A functor \(\map : \shI \to \sh\) is a right
fibration if and only if the composite \(\proj\map : \shI \to \cat\)
is. Consequently, we have a canonical equivalence of
\(\infty\)-categories
\[
\RFib_{\cat}/\sh \simeq \RFib_{\sh}.
\]
\end{proposition}
\begin{proof}
By definition.
\end{proof}
\begin{corollary}
\label{rep-right-adjoint}
A representable map \(\map : \sh \to \shI\) of right fibrations over
an \(\infty\)-category \(\cat\) is exponentiable, and the
pushforward along \(\map\) is given by the pullback along the right
adjoint \(\rarep : \shI \to \sh\) of \(\map\).
\[
\begin{tikzcd}
\RFib_{\cat}/\sh \simeq \RFib_{\sh}
\arrow[rr,bend right,"\rarep^{*}"',start anchor=south east,end
anchor=south west] &
\rotatebox[origin=c]{270}{\(\adj\)} &
\RFib_{\shI} \simeq \RFib_{\cat}/\shI
\arrow[ll,bend right,"\map^{*}"',start anchor=north west,end
anchor=north east]
\end{tikzcd}
\]
\end{corollary}
\begin{corollary}
\label{rep-pullback}
Representable maps of right fibrations over an \(\infty\)-category
\(\cat\) are stable under pullbacks: if
\[
\begin{tikzcd}
\sh_{1}
\arrow[r,"\mapI"]
\arrow[d,"\map_{1}"'] &
\sh_{2}
\arrow[d,"\map_{2}"] \\
\shI_{1}
\arrow[r,"\mapII"'] &
\shI_{2}
\end{tikzcd}
\]
is a pullback in \(\RFib_{\cat}\) and \(\map_{2}\) is representable,
then \(\map_{1}\) is representable. Moreover, if this is the case,
the square satisfies the Beck-Chevalley condition.
\end{corollary}
\begin{proof}
By \cref{rfib-slice}, the functor \(\mapII\) is a right
fibration. Thus, the right adjoint of \(\map_{2}\) lifts to a fibred
right adjoint of \(\map_{1}\).
\[
\begin{tikzcd}
\sh_{1}
\arrow[rr,bend left,"\map_{1}"]
\arrow[d,"\mapI"'] &
\rotatebox[origin=c]{270}{\(\adj\)} &
\shI_{1}
\arrow[ll,dotted,bend left]
\arrow[d,"\mapII"] \\
[6ex]
\sh_{2}
\arrow[rr,bend left,"\map_{2}"] &
\rotatebox[origin=c]{270}{\(\adj\)} &
\shI_{2}
\arrow[ll,bend left]
\end{tikzcd}
\]
\end{proof}
\begin{proposition}
\label{representable-map-1}
A map \(\map : \sh \to \shI\) of right fibrations over an
\(\infty\)-category \(\cat\) is representable if and only if, for
any section \(\elI : \cat/\objI \to \shI\), the pullback
\(\elI^{*}\sh\) is a representable right fibration over \(\cat\).
\end{proposition}
\begin{proof}
For a section \(\elI : \cat/\objI \to \shI\), an arrow
\(\arr : \map\el \to \elI\) in \(\shI\) for some \(\el \in \sh\)
corresponds to a square
\begin{equation}
\begin{tikzcd}
\cat/\obj
\arrow[r,"\el"]
\arrow[d,"\arr"'] &
\sh
\arrow[d,"\map"] \\
\cat/\objI
\arrow[r,"\elI"'] &
\shI.
\end{tikzcd}
\label{eq:4}
\end{equation}
The pair \((\el, \arr)\) is a universal arrow from \(\map\) to
\(\elI\) if and only if \cref{eq:4} is a pullback.
\end{proof}
\section{\(\infty\)-type theories}
\label{sec:infty-type-theories}
We introduce notions of an \emph{\(\infty\)-type theory}, a
\emph{theory} over an \(\infty\)-type theory and a \emph{model} of an
\(\infty\)-type theory, translating the previous work of the second
author \parencite{uemura2019framework} into the language of
\(\infty\)-categories. The idea is to extend the functorial semantics
of algebraic theories \parencite{lawvere2004functorial}. Algebraic
theories are identified with categories with finite products, and
models of an algebraic theory are identified with functors into the
category of sets preserving finite products. For type theories, it is
natural to identify models of a type theory with functors into
presheaf categories, because (extensions of) natural models
\parencite{awodey2018natural} and categories with families
\parencite{dybjer1996internal} are diagrams in presheaf
categories. Since representable maps of presheaves play a special role
in the natural model semantics, some arrows in the source category
should be specified to be sent to representable maps. This motivates
the following definitions.
\begin{definition}
An \emph{\(\infty\)-category with representable maps} is a pair
\((\cat,\reps)\) where \(\cat\) is an \(\infty\)-category and
\(\reps \subseteq \lgpd(\Delta^{1}, \cat)\) is a subspace of the space of arrows
of \(\cat\) satisfying the conditions below. Arrows in \(\reps\) are
called \emph{representable arrows}.
\begin{enumerate}
\item \(\cat\) has finite limits.
\item All the identities are representable and representable arrows
are closed under composition.
\item Representable arrows are stable under pullbacks.
\item Representable arrows are exponentiable.
\end{enumerate}
A \emph{morphism of \(\infty\)-categories with representable maps}
is a functor preserving representable arrows, finite limits and
pushforwards along representable arrows.
\end{definition}
\begin{example}
For a small \(\infty\)-category \(\cat\), the \(\infty\)-category
\(\RFib_{\cat}\) of small right fibrations over \(\cat\) is an
\(\infty\)-category with representable maps in which a map is
representable if it has a right adjoint.
\end{example}
\begin{definition}
An \emph{\(\infty\)-type theory} is an \(\infty\)-category with
representable maps whose underlying \(\infty\)-category is small. A
\emph{morphism of \(\infty\)-type theories} is a morphism of
\(\infty\)-categories with representable maps.
By an \emph{\(\nat\)-type theory} for \(1 \le \nat < \infty\), we mean an
\(\infty\)-type theory whose underlying \(\infty\)-category is an
\(\nat\)-category.
\end{definition}
\begin{example}
The type theories in the sense of the previous work
\parencite{uemura2019framework} are the \(1\)-type theories.
\end{example}
\begin{definition}
Let \(\tth\) be an \(\infty\)-type theory.
\begin{itemize}
\item A \emph{model of \(\tth\)} consists of an \(\infty\)-category
\(\model(\bas)\) with a terminal object and a morphism of
\(\infty\)-categories with representable maps
\(\model : \tth \to \RFib_{\model(\bas)}\).
\item A \emph{theory over \(\tth\)} or a \emph{\(\tth\)-theory} is a
left exact functor \(\theory : \tth \to \Space\).
\end{itemize}
\end{definition}
\begin{example}
\label{natural-model}
We will construct in \cref{sec:infty-category-infty} a presentable
\(\infty\)-category \(\TTh_{\infty}\) of \(\infty\)-type theories
and their morphisms, so we have various \emph{free constructions} of
\(\infty\)-type theories. For example, there is an \(\infty\)-type
theory \(\tthG_{\infty}\) freely generated by one representable
arrow \(\typeof : \El \to \Ty\). Indeed, the functor
\(\TTh_{\infty} \to \Space\) that sends an \(\infty\)-type theory
\(\tth\) to the space of representable arrows in \(\tth\) preserves
limits and filtered colimits, and thus it is representable by
presentability. The universal property of \(\tthG_{\infty}\) asserts
that a morphism \(\tthG_{\infty} \to \cat\) of \(\infty\)-categories
with representable maps is completely determined by the image of the
representable arrow \(\typeof \in \tthG_{\infty}\). Thus, a model of
\(\tthG_{\infty}\) consists of the following data:
\begin{itemize}
\item an \(\infty\)-category \(\model(\bas)\) with a terminal
  object;
\item a representable map
\(\model(\typeof) : \model(\El) \to \model(\Ty)\) of right
fibrations over \(\model(\bas)\).
\end{itemize}
In other words, a model of \(\tthG_{\infty}\) is an
\(\infty\)-categorical analogue of a natural model
\parencite{awodey2018natural,fiore2012discrete}. One may think of an object
\(\ctx \in \model(\bas)\) as a context, a section
\(\sh : \model(\bas)/\ctx \to \model(\Ty)\) as a type over \(\ctx\),
and a section \(\el : \model(\bas)/\ctx \to \model(\El)\) as a term
over \(\ctx\). The representability of \(\model(\typeof)\) is used
for modeling context comprehension: for a section
\(\sh : \model(\bas)/\ctx \to \model(\Ty)\), the representing object
for \(\sh^{*}\model(\El)\) is thought of as the context
\((\ctx, x : \sh)\) with \(x\) a fresh variable.
It is not easy to describe \(\tthG_{\infty}\)-theories explicitly, but
we can say that the \(\infty\)-category of
\(\tthG_{\infty}\)-theories is an \(\infty\)-analogue of the
category of generalized algebraic theories
\parencite{cartmell1978generalised}. Indeed, the second named author
showed in \parencite{uemura2022universal} that the category of
generalized algebraic theories is equivalent to the category of left
exact functors \(\tthG \to \Set\) where \(\tthG\) is the left exact
category freely generated by an exponentiable arrow.
\end{example}
In
\cref{sec:infty-category-infty,sec:infty-categ-theor,sec:infty-categ-models}
below, we will construct an \(\infty\)-category \(\TTh_{\infty}\) of
\(\infty\)-type theories, an \(\infty\)-category \(\Th(\tth)\) of
\(\tth\)-theories and an \(\infty\)-category \(\Mod(\tth)\) of models
of \(\tth\). These \(\infty\)-categories are constructed inside the
\(\infty\)-cosmos \(\kPrR_{\omega}\) of compactly generated
\(\infty\)-categories and \(\omega\)-accessible right adjoints. In \cref{sec:univ-prop-modtth}
we give a universal property of \(\Mod(\tth)\) as an object of
\(\CAT_{\infty}/\Lex_{\infty}^{(\emptyset)}\), from which it follows,
for example, that the assignment \(\tth \mapsto \Mod(\tth)\) takes
colimits to limits. In \cref{sec:slice-infty-type} we see that a slice
of the underlying \(\infty\)-category of an \(\infty\)-type theory is
naturally equipped with a structure of \(\infty\)-type theory and has
a useful universal property.
\subsection{The \(\infty\)-category of \(\infty\)-type theories}
\label{sec:infty-category-infty}
We construct an \(\infty\)-category \(\TTh_{\infty}\) of \(\infty\)-type
theories and their morphisms.
\begin{definition}
Let \(\Cat^{+}_{\infty}\) be the pullback
\[
\begin{tikzcd}
\Cat^{+}_{\infty}
\arrow[r]
\arrow[d] &
(\Space^\to)_{\le -1}
\arrow[d,"\cod"] \\
\Cat_{\infty}
\arrow[r,"{\lgpd(\Delta^{1}, {-})}"'] &
\Space
\end{tikzcd}
\]
where \((\Space^\to)_{\le -1}\) denotes the full subcategory of
\(\Space^{\to}\) spanned by the \((-1)\)-truncated maps of spaces,
which is an \(\omega\)-accessible localization of
\(\Space^{\to}\). Thus \(\Cat^{+}_{\infty}\) is the \(\infty\)-category
of small \(\infty\)-categories equipped with a subspace of arrows. We
define \(\Lex^{+}_{\infty}\) to be the full subcategory of
\(\Lex_{\infty} \times_{\Cat_{\infty}} \Cat^{+}_{\infty}\) spanned
by the left exact \(\infty\)-categories with a class of arrows
closed under composition and stable under pullbacks.
\end{definition}
The inclusion
\(\Lex^{+}_{\infty} \to \Lex_{\infty} \times_{\Cat_{\infty}}
\Cat^{+}_{\infty}\) has a left adjoint, given by taking the closure of the specified subspace of arrows under
composition and pullbacks, and \(\Lex^{+}_{\infty}\) is closed in
\(\Lex_{\infty} \times_{\Cat_{\infty}} \Cat^{+}_{\infty}\) under
filtered colimits. Hence, \(\Lex^{+}_{\infty}\) is compactly
generated, and the inclusion
\(\Lex^{+}_{\infty} \to \Lex_{\infty} \times_{\Cat_{\infty}}
\Cat_{\infty}^{+}\) is an \(\omega\)-accessible right adjoint.
Let \((\cat, \reps)\) be an object of \(\Lex_{\infty}^{+}\). Since
\(\cat\) has finite limits, we have a functor \(\theta(\cat, \reps)\)
between isofibrations over \(\reps\) whose fiber over
\((\arr : \obj \to \objI) \in \reps\) is the pullback functor
\(\arr^{*} : \cat/\objI \to \cat/\obj\). An \(\infty\)-type theory is
nothing but an object \((\cat, \reps)\) of \(\Lex_{\infty}^{+}\) such
that \(\theta(\cat, \reps)\) has a fiberwise right adjoint. We show
that this condition is equivalent to the condition that the functor
has a right adjoint.
\begin{proposition}
\label{adjoint-over-groupoid}
Let
\[
\begin{tikzcd}
\cat
\arrow[rr,"\fun"]
\arrow[dr] & &
\catI
\arrow[dl] \\
& \sh
\end{tikzcd}
\]
be a functor between isofibrations in \(\kCAT_{\infty}\) such that
\(\sh\) is an \(\infty\)-groupoid. The following are equivalent:
\begin{enumerate}
\item \label{item:8} the functor \(\fun : \cat \to \catI\) has a
right adjoint;
\item \label{item:9} for every point \(\el \in \sh\), the functor
between fibers \(\fun_{\el} : \cat_{\el} \to \catI_{\el}\) has a
right adjoint.
\end{enumerate}
\end{proposition}
\begin{proof}
Suppose that each \(\fun_{\el} : \cat_{\el} \to \catI_{\el}\) has a
right adjoint \(\funI_{\el}\) with counit
\(\counit_{\el, \objI} : \fun_{\el}(\funI_{\el}(\objI)) \to
\objI\). It suffices to see that \(\counit_{\el, \objI}\) is
universal in \(\catI\). Let \(\obj \in \cat_{\el'}\) be an object in
another fiber and consider the induced map
\begin{equation*}
\cat(\obj, \funI_{\el}(\objI)) \to \catI(\fun_{\el'}(\obj), \objI).
\end{equation*}
This is a map over \(\sh(\el', \el)\), and thus it suffices to show
that this is fiberwise an equivalence. Since \(\sh\) is an
\(\infty\)-groupoid and since \(\cat \to \sh\) and \(\catI \to \sh\)
are isofibrations, the fibers over \(p \in \sh(\el', \el)\) are
equivalent to the fibers over \(\id \in \sh(\el, \el)\), but the map
between the fibers over \(\id\) is the equivalence
\(\cat_{\el}(\obj, \funI_{\el}(\objI)) \simeq
\catI_{\el}(\fun_{\el}(\obj), \objI)\) given by the adjunction
\(\fun_{\el} \dashv \funI_{\el}\).
Suppose that \(\fun\) has a right adjoint \(\funI : \catI \to \cat\)
with counit \(\counit : \fun\funI \To \id\). Since \(\sh\) is an
\(\infty\)-groupoid, the natural transformation
\[
\begin{tikzcd}
\catI
\arrow[r,"\funI"]
\arrow[dr,equal,""{name=a0}] &
\cat
\arrow[dr]
\arrow[d,"\fun"]
\arrow[to=a0,Rightarrow,"\counit"]\\
& \catI
\arrow[r] &
\sh
\end{tikzcd}
\]
is invertible. Then, since \(\cat \to \sh\) and \(\catI \to \sh\)
are isofibrations, one can replace \(\funI\) and \(\counit\) by a
functor \(\funI' : \catI \to \cat\) and a natural transformation
\(\counit' : \fun \funI' \To \id\), respectively, over \(\sh\). Then
\(\funI'\) and \(\counit'\) give a fiberwise right adjoint of
\(\fun\).
\end{proof}
\begin{remark}
The proposition also holds more generally when \(\sh\) is an \(\infty\)-category. See \parencite[Proposition 7.3.2.1]{lurie2017algebra}.
\end{remark}
The functor \(\theta(\cat, \reps)\) is constructed as follows. Since
\(\cat\) has finite limits, the functor
\((\Delta^{1} \times \Delta^{1}) \cotensor \cat \to \Lambda^{2}_{2}
\cotensor \cat\) sending a square to its bottom and right edges has a
right adjoint. Composing the right adjoint and the functor
\((\Delta^{1} \times \Delta^{1}) \cotensor \cat \to \Lambda^{2}_{1}
\cotensor \cat\) sending a square to its bottom and left edges, we
have a functor
\begin{equation*}
\theta' : \Lambda^{2}_{2} \cotensor \cat \to \Lambda^{2}_{1} \cotensor \cat
\end{equation*}
over \(\Delta^{\{1, 2\}} \cotensor \cat\). The functor
\(\theta(\cat, \reps)\) is then the pullback of \(\theta'\) along the
inclusion \(\reps \to \Delta^{\{1, 2\}} \cotensor \cat\). This
construction is functorial and preserves limits and filtered colimits,
yielding a functor
\(\theta : \Lex_{\infty}^{+} \to \Delta^{2} \cotensor \Cat_{\infty}\)
in \(\kPrR_{\omega}\).
\begin{definition}
We define \(\TTh_{\infty}\) to be the pullback
\[
\begin{tikzcd}
\TTh_{\infty}
\arrow[rr]
\arrow[d] & &
\LAdj
\arrow[d] \\
\Lex_{\infty}^{+}
\arrow[r,"\theta"'] &
\Delta^{2} \cotensor \Cat_{\infty}
\arrow[r] &
\Delta^{\{0, 1\}} \cotensor \Cat_{\infty}.
\end{tikzcd}
\]
By \cref{adjoint-over-groupoid}, the objects of \(\TTh_{\infty}\)
are precisely the \(\infty\)-type theories. It is also
straightforward to see that the morphisms of \(\TTh_{\infty}\)
are precisely the morphisms of \(\infty\)-type theories.
\end{definition}
\subsection{The \(\infty\)-category of theories over an
\(\infty\)-type theory}
\label{sec:infty-categ-theor}
\begin{definition}
For an \(\infty\)-type theory \(\tth\), we define \(\Th(\tth)\) to
be the full subcategory of \(\Fun(\tth, \Space)\) spanned by the
functors preserving finite limits.
\end{definition}
By definition, \(\Th(\tth)\) is
compactly generated, and the inclusion
\(\Th(\tth) \to \Fun(\tth, \Space)\) is an \(\omega\)-accessible
right adjoint. The \(\infty\)-category \(\Th(\tth)\) admits the following alternative descriptions:
\begin{itemize}
\item \(\Th(\tth)\) is the cocompletion of \(\tth^{\op}\) under
filtered colimits;
\item \(\Th(\tth)\) is the \(\omega\)-free cocompletion of
\(\tth^{\op}\), that is, the initial cocomplete \(\infty\)-category
equipped with a functor from \(\tth^{\op}\) preserving finite
colimits.
\end{itemize}
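In particular, \(\Th(\tth)\) is equivalent to the \(\infty\)-category
\(\mathrm{Ind}(\tth^{\op})\) of ind-objects of \(\tth^{\op}\); since
\(\tth^{\op}\) has finite colimits, this agrees with the full
subcategory of left exact functors in \(\Fun(\tth, \Space)\)
\parencite{lurie2009higher}.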
\subsection{The \(\infty\)-category of models of an \(\infty\)-type
theory}
\label{sec:infty-categ-models}
We construct an \(\infty\)-category \(\Mod(\tth)\) of models of an
\(\infty\)-type theory \(\tth\). The following description of
\(\Mod(\tth)\) is based on unpublished work by John Bourke and the second named author on the
\(2\)-category of \(1\)-models of a \(1\)-type theory.
Let \(\tth\) be an \(\infty\)-type theory. Recall that a functor to a
slice \(\infty\)-category \(\fun' : \cat \to \catI/\objI\) corresponds
to a functor \(\fun : \cat^{\rcone} \to \catI\) that sends
\(\bas \in \cat^{\rcone}\) to \(\objI\). Then a model \(\model\) of
\(\tth\) can be regarded as a functor
\(\model : \tth^{\rcone} \to \Cat_{\infty}\) satisfying the following
conditions:
\begin{enumerate}
\item \label{item:3} \(\model(\bas)\) has a terminal object;
\item \label{item:4} for every object \(\obj \in \tth\), the functor
\(\model(\obj) \to \model(\bas)\) is a right fibration;
\item \label{item:5} for every finite diagram \(\obj : \sh \to \tth\),
the canonical functor
\(\model(\lim_{\sh}\obj) \to
\lim_{\sh^{\rcone}}\model\obj^{\rcone}\) is an equivalence;
\item \label{item:6} for every representable arrow
\(\arr : \obj \to \objI\) in \(\tth\), the functor
\(\model(\arr) : \model(\obj) \to \model(\objI)\) has a right
adjoint \(\rarep_{\arr} : \model(\objI) \to \model(\obj)\);
\item \label{item:7} for every pair of arrows
\(\arr : \obj \to \objI\) and \(\arrI : \objI \to \objII\) with
\(\arrI\) representable, the canonical functor
\(\model(\arrI_{*}\obj) \to \rarep_{\arrI}^{*}\model(\obj)\) is an
equivalence (recall that the pushforward along \(\model(\arrI)\) in
\(\RFib_{\model(\bas)}\) is given by the pullback along
\(\rarep_{\arrI}\)).
\end{enumerate}
From this description, we will define \(\Mod(\tth)\) as a
subcategory of \(\Fun(\tth^{\rcone}, \Cat_{\infty})\).
\begin{definition}
We define \(\Mod_{\labelcref{item:3}}(\tth)\) to be the pullback
\[
\begin{tikzcd}
\Mod_{\labelcref{item:3}}(\tth)
\arrow[r]
\arrow[d] &
\Lex_{\infty}^{(\emptyset)}
\arrow[d] \\
\Fun(\tth^{\rcone}, \Cat_{\infty})
\arrow[r,"\ev_{\bas}"'] &
\Cat_{\infty}.
\end{tikzcd}
\]
\end{definition}
\begin{definition}
For an object \(\obj \in \tth\), we define
\(\Mod_{\labelcref{item:4}}^{\obj}(\tth)\) to be the pullback
\[
\begin{tikzcd}
\Mod_{\labelcref{item:4}}^{\obj}(\tth)
\arrow[r]
\arrow[d,hook] &
[4ex]
\RFib
\arrow[d,hook] \\
\Fun(\tth^{\rcone}, \Cat_{\infty})
\arrow[r,"\ev_{(\obj \to {*})}"'] &
\Cat_{\infty}^{\to}
\end{tikzcd}
\]
and \(\Mod_{\labelcref{item:4}}(\tth)\) to be the wide pullback of
\(\Mod_{\labelcref{item:4}}^{\obj}(\tth)\) over
\(\Fun(\tth^{\rcone}, \Cat_{\infty})\) for all objects
\(\obj \in \tth\).
\end{definition}
\begin{definition}
For a finite diagram \(\obj : \sh \to \tth\), we define
\(\Mod_{\labelcref{item:5}}^{(\sh, \obj)}(\tth)\) to be the pullback
\[
\begin{tikzcd}
\Mod_{\labelcref{item:5}}^{(\sh, \obj)}(\tth)
\arrow[r]
\arrow[d,hook] &
[12ex]
\Cat_{\infty}^{\simeq}
\arrow[d,hook] \\
\Fun(\tth^{\rcone}, \Cat_{\infty})
\arrow[r,"(\ev_{\lim_{\sh}\obj} \To
\lim_{\sh^{\rcone}}\ev_{\obj^{\rcone}})"'] &
\Cat_{\infty}^{\to}
\end{tikzcd}
\]
and \(\Mod_{\labelcref{item:5}}(\tth)\) to be the wide pullback of
\(\Mod_{\labelcref{item:5}}^{(\sh, \obj)}(\tth)\) over
\(\Fun(\tth^{\rcone}, \Cat_{\infty})\) for all finite diagrams
\((\sh, \obj : \sh \to \tth)\).
\end{definition}
\begin{definition}
For a representable arrow \(\arr : \obj \to \objI\) in \(\tth\), we
define \(\Mod_{\labelcref{item:6}}^{\arr}(\tth)\) to be the pullback
\[
\begin{tikzcd}
\Mod_{\labelcref{item:6}}^{\arr}(\tth)
\arrow[r]
\arrow[d] &
\LAdj
\arrow[d] \\
\Fun(\tth^{\rcone}, \Cat_{\infty})
\arrow[r,"\ev_{\arr}"'] &
\Cat_{\infty}^{\to}
\end{tikzcd}
\]
and \(\Mod_{\labelcref{item:6}}(\tth)\) to be the wide pullback of
\(\Mod_{\labelcref{item:6}}^{\arr}(\tth)\) over
\(\Fun(\tth^{\rcone}, \Cat_{\infty})\) for all representable arrows
\(\arr\) in \(\tth\).
\end{definition}
\begin{definition}
We denote by \(\Mod_{-\labelcref{item:7}}(\tth)\) the wide pullback
of \(\Mod_{\labelcref{item:3}}(\tth)\),
\(\Mod_{\labelcref{item:4}}(\tth)\),
\(\Mod_{\labelcref{item:5}}(\tth)\) and
\(\Mod_{\labelcref{item:6}}(\tth)\) over
\(\Fun(\tth^{\rcone}, \Cat_{\infty})\). By construction,
\(\Mod_{-\labelcref{item:7}}(\tth)\) is the \(\infty\)-category of
functors \(\model : \tth^{\rcone} \to \Cat_{\infty}\) satisfying
\cref{item:3,item:4,item:5,item:6}.
\end{definition}
\begin{definition}
For a pair of composable arrows \(\arr : \obj \to \objI\) and
\(\arrI : \objI \to \objII\) in \(\tth\) with \(\arrI\)
representable, we define
\(\Mod_{\labelcref{item:7}}^{(\arr, \arrI)}(\tth)\) to be the
pullback
\[
\begin{tikzcd}
\Mod_{\labelcref{item:7}}^{(\arr, \arrI)}(\tth)
\arrow[r]
\arrow[d,hook] &
[8ex]
\Cat_{\infty}^{\simeq}
\arrow[d,hook] \\
\Mod_{-\labelcref{item:7}}(\tth)
\arrow[r,"(\ev_{\arrI_{*}\obj} \To
\rarep_{\arrI}^{*}\ev_{\obj})"'] &
\Cat_{\infty}^{\to}
\end{tikzcd}
\]
and \(\Mod(\tth)\) to be the wide pullback of
\(\Mod_{\labelcref{item:7}}^{(\arr, \arrI)}(\tth)\) over
\(\Mod_{-\labelcref{item:7}}(\tth)\) for all pairs \((\arr, \arrI)\)
of composable arrows in \(\tth\) with \(\arrI\) representable.
\end{definition}
By
construction, the \(\infty\)-category \(\Mod(\tth)\) is compactly
generated, and the forgetful functor
\(\Mod(\tth) \to \Fun(\tth^{\rcone}, \Cat_{\infty})\) is a
conservative, \(\omega\)-accessible right adjoint. Moreover,
the objects of \(\Mod(\tth)\) are the models of \(\tth\) and the
morphisms in \(\Mod(\tth)\) are described as follows. Let
\(\model\) and \(\modelI\) be models of \(\tth\) and
\(\fun : \model \To \modelI : \tth^{\rcone} \to \Cat_{\infty}\) be a
natural transformation. Then \(\fun\) is in \(\Mod(\tth)\) if and
only if the following conditions hold:
\begin{itemize}
\item the component \(\fun(\bas) : \model(\bas) \to \modelI(\bas)\)
preserves terminal objects;
\item for any representable arrow \(\arr : \obj \to \objI\) in
\(\tth\), the square
\[
\begin{tikzcd}
\model(\obj)
\arrow[r,"\fun(\obj)"]
\arrow[d,"\model(\arr)"'] &
\modelI(\obj)
\arrow[d,"\modelI(\arr)"] \\
\model(\objI)
\arrow[r,"\fun(\objI)"'] &
\modelI(\objI)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition.
\end{itemize}
\subsection{Universal property of \(\Mod(\tth)\)}
\label{sec:univ-prop-modtth}
We give a universal property of \(\Mod(\tth)\) seen as an object of
\(\CAT_{\infty} / \Lex_{\infty}^{(\emptyset)}\). A consequence is that
the assignment \(\tth \mapsto \Mod(\tth)\) takes colimits of
\(\infty\)-type theories to limits of \(\infty\)-categories over
\(\Lex_{\infty}^{(\emptyset)}\) (\cref{presentation-of-Mod}).
\begin{definition}
For a functor \(\cat : \idxcat \to \Cat_{\infty}\), we denote by
\(\Fun(\idxcat, \RFib)_{\cat}\) the full subcategory of
\(\Fun(\idxcat, \Cat_{\infty})/\cat\) spanned by the natural
transformations \(\proj : \sh \To \cat : \idxcat \to \Cat_{\infty}\)
whose components are right fibrations. In other words,
\(\Fun(\idxcat, \RFib)_{\cat}\) is the fiber of the functor
\(\Fun(\idxcat, \cod) : \Fun(\idxcat, \RFib) \to \Fun(\idxcat,
\Cat_{\infty})\) over the object
\(\cat \in \Fun(\idxcat, \Cat_{\infty})\). We say a map
\(\map : \sh \to \shI\) in \(\Fun(\idxcat, \RFib)_{\cat}\) is
\emph{representable} if every component
\(\map(\idx) : \sh(\idx) \to \shI(\idx)\) is a representable map of
right fibrations over \(\cat(\idx)\) and if every naturality square
\[
\begin{tikzcd}
\sh(\idx)
\arrow[r,"\sh(\idxarr)"]
\arrow[d,"\map(\idx)"'] &
\sh(\idxI)
\arrow[d,"\map(\idxI)"] \\
\shI(\idx)
\arrow[r,"\shI(\idxarr)"'] &
\shI(\idxI)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition.
\end{definition}
\begin{proposition}
\label{diagram-rep-map-pullback}
Representable maps in \(\Fun(\idxcat, \RFib)_{\cat}\) are closed under
composition and stable under pullbacks.
\end{proposition}
\begin{proof}
Closure under composition is verified componentwise, by pasting
Beck-Chevalley squares. For stability under pullbacks, let
\[
\begin{tikzcd}
\sh_{1}
\arrow[r,"\mapI"]
\arrow[d,"\map_{1}"'] &
\sh_{2}
\arrow[d,"\map_{2}"] \\
\shI_{1}
\arrow[r,"\mapII"'] &
\shI_{2}
\end{tikzcd}
\]
be a pullback in \(\Fun(\idxcat, \RFib)_{\cat}\) and suppose that
\(\map_{2}\) is representable. By \cref{rep-pullback}, every
\(\map_{1}(\idx) : \sh_{1}(\idx) \to \shI_{1}(\idx)\) is a
representable map of right fibrations over \(\cat(\idx)\), and the
square
\[
\begin{tikzcd}
\sh_{1}(\idx)
\arrow[r,"\mapI(\idx)"]
\arrow[d,"\map_{1}(\idx)"'] &
\sh_{2}(\idx)
\arrow[d,"\map_{2}(\idx)"] \\
\shI_{1}(\idx)
\arrow[r,"\mapII(\idx)"'] &
\shI_{2}(\idx)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition. It remains to show that, for
any arrow \(\idxarr : \idx \to \idxI\) in \(\idxcat\), the square
\[
\begin{tikzcd}
\sh_{1}(\idx)
\arrow[r,"\sh_{1}(\idxarr)"]
\arrow[d,"\map_{1}(\idx)"'] &
\sh_{1}(\idxI)
\arrow[d,"\map_{1}(\idxI)"] \\
\shI_{1}(\idx)
\arrow[r,"\shI_{1}(\idxarr)"'] &
\shI_{1}(\idxI)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition. Since a map of right
fibrations over a fixed base is conservative, it suffices to show
that the composite of squares
\[
\begin{tikzcd}
\sh_{1}(\idx)
\arrow[r,"\sh_{1}(\idxarr)"]
\arrow[d,"\map_{1}(\idx)"'] &
\sh_{1}(\idxI)
\arrow[r,"\mapI(\idxI)"]
\arrow[d,"\map_{1}(\idxI)"] &
\sh_{2}(\idxI)
\arrow[d,"\map_{2}(\idxI)"] \\
\shI_{1}(\idx)
\arrow[r,"\shI_{1}(\idxarr)"'] &
\shI_{1}(\idxI)
\arrow[r,"\mapII(\idxI)"'] &
\shI_{2}(\idxI)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition, but this is true by the
Beck-Chevalley condition for \(\map_{2}\).
\end{proof}
\begin{proposition}
\label{diagram-rep-map-pushforward}
A representable map in \(\Fun(\idxcat, \RFib)_{\cat}\) is
exponentiable, and the pushforward is given by the pullback along
the right adjoint.
\end{proposition}
\begin{proof}
The same as \cref{rep-right-adjoint}.
\end{proof}
\begin{proposition}
\label{rmcat-of-diagrams}
For a functor \(\cat : \idxcat \to \Cat_{\infty}\), the
\(\infty\)-category \(\Fun(\idxcat, \RFib)_{\cat}\) together with
the class of representable maps is an \(\infty\)-category with
representable maps. For a functor \(\fun : \idxcat' \to \idxcat\),
the functor
\(\fun^{*} : \Fun(\idxcat, \RFib)_{\cat} \to \Fun(\idxcat',
\RFib)_{\cat\fun}\) defined by the precomposition of \(\fun\) is a
morphism of \(\infty\)-categories with representable maps.
\end{proposition}
\begin{proof}
By definition.
\end{proof}
Let \(\tth\) be an \(\infty\)-type theory and
\(\cat : \idxcat \to \Lex_{\infty}^{(\emptyset)}\) a functor. We have
equivalences
\begin{align*}
& \quad \CAT_{\infty}/\Lex_{\infty}^{(\emptyset)}((\idxcat, \cat),
(\Fun(\tth^{\rcone}, \Cat_{\infty}), \ev_{\bas})) \\
\simeq & \qquad \text{\{transposition\}} \\
& \quad \{\bas\}/\CAT_{\infty}((\tth^{\rcone}, \{\bas\}),
(\Fun(\idxcat, \Cat_{\infty}), \cat)) \\
\simeq & \qquad \text{\{adjunction of join and slice\}} \\
& \quad \CAT_{\infty}(\tth, \Fun(\idxcat, \Cat_{\infty})/\cat).
\end{align*}
\begin{proposition}
\label{models-ump}
Let \(\tth\) be an \(\infty\)-type theory and
\(\cat : \idxcat \to \Lex_{\infty}^{(\emptyset)}\) a functor. A
functor \(\fun : \idxcat \to \Fun(\tth^{\rcone}, \Cat_{\infty})\)
over \(\Lex_{\infty}^{(\emptyset)}\) factors through \(\Mod(\tth)\)
if and only if its transpose
\(\widetilde{\fun} : \tth \to \Fun(\idxcat, \Cat_{\infty})/\cat \)
factors through \(\Fun(\idxcat, \RFib)_{\cat}\) and is a morphism of
\(\infty\)-categories with representable maps. Consequently, we have
an equivalence
\[
\CAT_{\infty}/\Lex_{\infty}^{(\emptyset)}((\idxcat, \cat),
(\Mod(\tth), \ev_{\bas})) \simeq \RMCAT_{\infty}(\tth,
\Fun(\idxcat, \RFib)_{\cat}).
\]
\end{proposition}
\begin{proof}
Immediate from the definition of models of \(\tth\).
\end{proof}
\begin{corollary}
\label{presentation-of-Mod}
\(\Mod : \TTh_{\infty}^{\op} \to
\CAT_{\infty}/\Lex_{\infty}^{(\emptyset)}\) preserves limits.
\end{corollary}
\subsection{Slice \(\infty\)-type theories}
\label{sec:slice-infty-type}
\begin{definition}
For an \(\infty\)-category with representable maps \(\cat\) and an
object \(\obj \in \cat\), we regard the slice \(\cat/\obj\) as an
\(\infty\)-category with representable maps in which an arrow is
representable if it is representable in \(\cat\).
\end{definition}
The goal of this subsection is to show that \(\cat/\obj\) is the
\(\infty\)-category with representable maps obtained from \(\cat\) by
freely adjoining a global section of \(\obj\) (\cref{slice-rep}). We
first recall a universal property of a slice of a left exact
\(\infty\)-category.
\begin{proposition}
\label{lex-slice}
Let \(\cat\) be a left exact \(\infty\)-category and
\(\obj \in \cat\) an object. We denote by
\(\obj^{*} : \cat \to \cat/\obj\) the pullback functor along
\(\obj \to \terminal\) and
\(\diagonal_{\obj} : \terminal \to \obj^{*}\obj\) the arrow in
\(\cat/\obj\) which is the diagonal arrow
\(\obj \to \obj \times \obj\) in \(\cat\). For a left exact
\(\infty\)-category \(\catI\) and a left exact functor
\(\fun : \cat \to \catI\), the map
\begin{equation}
\label{eq:5}
\cat/\LEX_{\infty}(\cat/\obj, \catI) \ni \funI \mapsto
\funI(\diagonal_{\obj}) \in \catI(\terminal, \fun\obj)
\end{equation}
is an equivalence of spaces.
\end{proposition}
\begin{proof}
An object of \(\cat/\LEX_{\infty}(\cat/\obj, \catI)\) is a left
exact functor \(\funI : \cat/\obj \to \catI\) equipped with an
invertible natural transformation
\(\trans : \funI \circ \obj^{*} \To \fun\). By the adjunction
\(\obj_{!} \adj \obj^{*}\), such a natural transformation \(\trans\)
corresponds to a natural transformation
\(\widetilde{\trans} : \funI \To \fun \circ \obj_{!}\). One can
check that \(\trans : \funI \circ \obj^{*} \To \fun\) is invertible
if and only if
\(\widetilde{\trans} : \funI \To \fun \circ \obj_{!}\) is a
cartesian natural transformation, that is, any naturality square is
a pullback. Therefore, the statement is equivalent to the assertion that, given a
global section \(\arr : \terminal \to \fun\obj\), the space of pairs
\((\funI, \trans)\) consisting of a left exact functor
\(\funI : \cat/\obj \to \catI\) and a cartesian natural
transformation \(\trans : \funI \To \fun \circ \obj_{!}\) extending
\(\arr\) is contractible.
Since \(\catI\) has finite limits, the evaluation at the terminal
object of \(\cat/\obj\) defines a cartesian fibration
\(\Fun(\cat/\obj, \catI) \to \catI\) in which a natural
transformation \(\trans : \funI_{1} \To \funI_{2}\) is cartesian if
and only if the naturality square
\[
\begin{tikzcd}
\funI_{1}\objI
\arrow[r, "\trans"]
\arrow[d] &
\funI_{2}\objI
\arrow[d] \\
\funI_{1}\terminal
\arrow[r, "\trans"'] &
\funI_{2}\terminal
\end{tikzcd}
\]
is a pullback for any object \(\objI \in \cat/\obj\); this
is equivalent to \(\trans\) being a cartesian natural
transformation. Therefore, given a functor
\(\funI_{2} : \cat/\obj \to \catI\) and an arrow
\(\arr : \obj' \to \funI_{2}\terminal\), the space of pairs
\((\funI_{1}, \trans)\) consisting of a functor
\(\funI_{1} : \cat/\obj \to \catI\) and a cartesian natural
transformation \(\trans\) extending \(\arr\) is contractible. When
\(\funI_{2} = \fun \circ \obj_{!}\), the functor \(\funI_{1}\) must
preserve pullbacks since \(\funI_{2}\) does. If, in addition,
\(\obj' \simeq \terminal\), then \(\funI_{1}\) preserves terminal
objects, and thus it is left exact. We conclude that, given an arrow
\(\arr : \terminal \to \fun\obj \simeq \fun(\obj_{!}\terminal)\),
the space of pairs \((\funI, \trans)\) consisting of a left exact
functor \(\funI : \cat/\obj \to \catI\) and a cartesian natural
transformation \(\trans : \funI \To \fun \circ \obj_{!}\) extending
\(\arr\) is contractible, as we have a unique cartesian lift
\(\funI \To \fun \circ \obj_{!}\) and \(\funI\) must be left exact.
\end{proof}
From this proof, we can extract the inverse of the map
\labelcref{eq:5}: it is given by
\[
\catI(\terminal, \fun\obj) \ni \arr \mapsto \arr^{*} \circ
\fun/\obj \in \cat/\LEX_{\infty}(\cat/\obj, \catI)
\]
where \(\fun/\obj : \cat/\obj \to \catI/\fun\obj\) is the functor
induced by \(\fun\).
\begin{definition}
We denote by \(\RMCAT_{\infty}\) the \(\infty\)-category of large
\(\infty\)-categories with representable maps and their morphisms.
\end{definition}
\begin{proposition}
\label{slice-rep}
Let \(\cat\) be an \(\infty\)-category with representable maps and
\(\obj \in \cat\) an object.
\begin{enumerate}
\item \label{item:-1} The functor \(\obj^{*} : \cat \to \cat/\obj\)
is a morphism of \(\infty\)-categories with representable maps.
\item \label{item:-2} For an \(\infty\)-category with representable
maps \(\catI\) and a morphism \(\fun : \cat \to \catI\), the map
\[
\cat/\RMCAT_{\infty}(\cat/\obj, \catI) \ni \funI \mapsto
\funI(\diagonal_{\obj}) \in \catI(\terminal, \fun\obj)
\]
is an equivalence of spaces.
\end{enumerate}
\end{proposition}
\begin{proof}
\Cref{item:-1} holds because pullback functors preserve all limits and all pushforwards. \Cref{item:-2} follows from
\cref{lex-slice}, because
\(\arr^{*} \circ \fun/\obj : \cat/\obj \to \catI\) is a morphism of
\(\infty\)-categories with representable maps for every global
section \(\arr : \terminal \to \fun\obj\).
\end{proof}
\Cref{slice-rep} can be reformulated as follows. Let \(\free{\Box}\)
be the free \(\infty\)-type theory generated by one object \(\Box\)
and \(\free{\widetilde{\Box} : \terminal \to \Box}\) the free
\(\infty\)-type theory generated by one object \(\Box\) and one global
section \(\widetilde{\Box}\) of \(\Box\). By definition, a morphism
\(\free{\Box} \to \cat\) corresponds to an object of \(\cat\), and a
morphism \(\free{\widetilde{\Box} : \terminal \to \Box} \to \cat\)
corresponds to a pair \((\obj, \arr)\) consisting of an object
\(\obj\) of \(\cat\) and a global section
\(\arr : \terminal \to \obj\). Then, for an \(\infty\)-category with
representable maps \(\cat\) and an object \(\obj \in \cat\), we can
form a square
\begin{equation}
\label{eq:-1}
\begin{tikzcd}
\free{\Box}
\arrow[r,"\obj"]
\arrow[d] &
\cat
\arrow[d,"\obj^{*}"] \\
\free{\widetilde{\Box} : \terminal \to \Box}
\arrow[r,"\diagonal_{\obj}"'] &
\cat/\obj.
\end{tikzcd}
\end{equation}
\Cref{slice-rep} is equivalent to the statement that, for any
\(\infty\)-category with representable maps \(\catI\), the diagram
\[
\begin{tikzcd}
\RMCAT_{\infty}(\cat/\obj, \catI)
\arrow[r]
\arrow[d] &
\RMCAT_{\infty}(\free{\widetilde{\Box} : \terminal \to \Box},
\catI)
\arrow[d] \\
\RMCAT_{\infty}(\cat, \catI)
\arrow[r] &
\RMCAT_{\infty}(\free{\Box}, \catI)
\end{tikzcd}
\]
induced by \cref{eq:-1} is a pullback of spaces. In other words:
\begin{proposition}
\label{slice-rep-2}
\label{tth-slice}
For an \(\infty\)-category with representable maps \(\cat\) and an
object \(\obj \in \cat\), \cref{eq:-1} is a pushout in
\(\RMCAT_{\infty}\).
\end{proposition}
Using \cref{models-ump} and its corollary, we have the following
description of \(\Mod(\tth / \obj)\) for an \(\infty\)-type theory
\(\tth\) and an object \(\obj \in \tth\). We first observe:
\begin{enumerate}
\item \label{item:14} \(\Mod(\free{\Box}) \simeq \RFib'\) where
\(\RFib'\) is the base change of \(\RFib\) along the forgetful
functor \(\Lex_{\infty}^{(\emptyset)} \to \Cat_{\infty}\);
\item \label{item:19}
\(\Mod(\free{\widetilde{\Box} : \terminal \to \Box}) \simeq
\RFib'_{\bullet}\) where \(\RFib'_{\bullet}\) is the
\(\infty\)-category of right fibrations \(\sh \to \cat\) with a
terminal object in \(\cat\) and a global section
\(\el : \cat \to \sh\).
\end{enumerate}
These follow from \cref{models-ump}; for example,
\begin{align*}
&\CAT_{\infty} / \Lex_{\infty}^{(\emptyset)}((\idxcat, \cat),
(\Mod(\free{\Box}), \ev_{\bas})) \\
\simeq{}& \RMCAT_{\infty}(\free{\Box}, \Fun(\idxcat, \RFib)_{\cat}) \\
\simeq{}& (\Fun(\idxcat, \RFib)_{\cat})^{\simeq} \\
\simeq{}& \CAT_{\infty} / \Lex_{\infty}^{(\emptyset)}((\idxcat, \cat), (\RFib', \cod))
\end{align*}
for any \(\cat : \idxcat \to \Lex_{\infty}^{(\emptyset)}\). Then, by
\cref{presentation-of-Mod,tth-slice}, we have:
\begin{proposition}
\label{slice-model}
For any \(\infty\)-type theory \(\tth\) and any object \(\obj \in
\tth\), we have a pullback
\[
\begin{tikzcd}
\Mod(\tth/\obj)
\arrow[rr]
\arrow[d] & &
\RFib'_{\bullet}
\arrow[d] \\
\Mod(\tth)
\arrow[r,"\obj^{*}"'] &
\Mod(\free{\Box})
\arrow[r, "\simeq"'] &
\RFib'.
\end{tikzcd}
\]
\end{proposition}
\section{The theory-model correspondence}
\label{sec:theory-model-corr}
Given an \(\infty\)-type theory \(\tth\), we establish an adjunction
between the \(\infty\)-category of \(\tth\)-theories and the
\(\infty\)-category of models of \(\tth\). The right adjoint assigns
an \emph{internal language} to each model of \(\tth\), and the left
adjoint assigns a \emph{syntactic model} to each \(\tth\)-theory. Not
all models of \(\tth\) are syntactic ones. We give a characterization
of syntactic models.
All the results in this section are \(\infty\)-categorical analogues
of results from the previous work of the second author
\parencite{uemura2019framework}, but proofs are simplified and
improved.
\begin{itemize}
\item In the previous work \((2,1)\)-categorical (co)limits are
distinguished from \(1\)-categorical (co)limits, but there is no
such difference in the \(\infty\)-categorical setting.
\item In the previous work the left adjoint of the internal language
functor is made by hand, but in this work we construct the internal
language functor inside the \(\infty\)-cosmos \(\kPrR_{\omega}\), so
it has a left adjoint by definition. Therefore, all we have to do is
to analyze the unit and counit of the adjunction.
\end{itemize}
Let \(\tth\) be an \(\infty\)-type theory. Since the base
\(\infty\)-category \(\model(\bas)\) of a model \(\model\) of
\(\tth\) has a terminal object
\(\terminal : \Delta^{0} \to \model(\bas)\), we have a natural
transformation
\[
\begin{tikzcd}
\Mod(\tth)
\arrow[r]
\arrow[d] &
\Fun(\tth^{\rcone}, \Cat_{\infty})
\arrow[d,"\ev_{\bas}"] \\
\Delta^{0}
\arrow[r,"\Delta^{0}"']
\arrow[ur,Rightarrow,"\terminal",start
anchor={[xshift=1ex,yshift=1ex]},end
anchor={[xshift=-1ex,yshift=-1ex]}] &
\Cat_{\infty}.
\end{tikzcd}
\]
Since \(\Fun(\tth^{\rcone}, \Cat_{\infty})\) is the pullback
\[
\begin{tikzcd}
\Fun(\tth^{\rcone}, \Cat_{\infty})
\arrow[r]
\arrow[d,"\ev_{\bas}"'] &
\Fun(\tth, \Cat_{\infty})^{\to}
\arrow[d,"\cod"] \\
\Cat_{\infty}
\arrow[r,"\diagonal"'] &
\Fun(\tth, \Cat_{\infty}),
\end{tikzcd}
\]
the functor
\(\ev_{\bas} : \Fun(\tth^{\rcone}, \Cat_{\infty}) \to
\Cat_{\infty}\) is a cartesian fibration in
\(\kPrR_{\omega}\). Thus, the natural transformation \(\terminal\)
induces an \(\omega\)-accessible right adjoint
\(\terminal^{*} : \Mod(\tth) \to (\Delta^{0})^{*}\Fun(\tth^{\rcone},
\Cat_{\infty}) \simeq \Fun(\tth, \Cat_{\infty})\). By the definition
of a model of \(\tth\), the functor
\(\terminal^{*} : \Mod(\tth) \to \Fun(\tth, \Cat_{\infty})\) factors
through
\(\Th(\tth) = \Lex(\tth, \Space) \subset \Fun(\tth,
\Cat_{\infty})\). We denote this functor
\(\Mod(\tth) \to \Th(\tth)\) by \(\IL\). By definition,
\(\IL(\model)\) is the composite
\[
\begin{tikzcd}
\tth
\arrow[r,"\model"] &
\RFib_{\model(\bas)}
\arrow[r,"\terminal^{*}"] &
\Space
\end{tikzcd}
\]
where \(\terminal^{*}\sh\) is the fiber over
\(\terminal \in \model(\bas)\) for a right fibration \(\sh\) over
\(\model(\bas)\). As the functor \(\IL : \Mod(\tth) \to \Th(\tth)\)
lies in \(\kPrR_{\omega}\), it has a left adjoint
\(\SM : \Th(\tth) \to \Mod(\tth)\).
\begin{definition}
For a model \(\model\) of
\(\tth\), the \(\tth\)-theory \(\IL(\model)\) is called the
\emph{internal language of \(\model\)}.
For a \(\tth\)-theory \(\theory\), we call \(\SM(\theory)\) the
\emph{syntactic model} generated by \(\theory\).
\end{definition}
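Unwinding the adjunction \(\SM \adj \IL\), these two constructions
are related by a natural equivalence of spaces
\[
  \Mod(\tth)(\SM(\theory), \model) \simeq \Th(\tth)(\theory,
  \IL(\model))
\]
for every \(\tth\)-theory \(\theory\) and every model \(\model\) of
\(\tth\).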
In this section, we prove the following:
\begin{enumerate}
\item \label{item:1} the unit of the adjunction \(\SM \adj \IL\) is
invertible, so the functor \(\SM : \Th(\tth) \to \Mod(\tth)\) is
fully faithful;
\item \label{item:2} the essential image of
\(\SM : \Th(\tth) \to \Mod(\tth)\) is the class of democratic models
of \(\tth\) defined below.
\end{enumerate}
Consequently, the adjunction \(\SM \adj \IL\) induces an equivalence
between \(\Th(\tth)\) and the full subcategory of \(\Mod(\tth)\)
spanned by the democratic models of \(\tth\). We define the notion of
a democratic model in \cref{sec:democratic-models}. The components of
the unit \(\unit : \id \To \IL\SM\) are completely determined by the
components at the representable \(\tth\)-theories
\({\tth}(\obj, {-})\), because \(\Th(\tth)\) is the cocompletion of
\(\tth^{\op}\) under filtered colimits and the right adjoint
\(\IL : \Mod(\tth) \to \Th(\tth)\) preserves filtered colimits. We
thus study in detail the syntactic model generated by a representable
\(\tth\)-theory. In \cref{sec:initial-model} we concretely describe
the initial model of \(\tth\) which is the syntactic model generated
by the initial \(\tth\)-theory \({\tth}(\terminal, {-})\). We then
generalize it in \cref{sec:synt-models-repr} to a description of the
syntactic model generated by an arbitrary representable
\(\tth\)-theory. Finally we prove the main results in
\cref{sec:equiv-theor-democr}.
\subsection{Democratic models}
\label{sec:democratic-models}
For a model \(\model\) of an \(\infty\)-type theory, we think of an
object \(\ctx \in \model(\bas)\) as a context (see
\cref{natural-model}), but contexts from the syntax of type theory
satisfy an additional property: every context is obtained from the
empty context by context comprehension. A model of an \(\infty\)-type
theory satisfying this property is said to be \emph{democratic},
generalizing the notion of a democratic category with families
\parencite{clairambault2014biequivalence}.
\begin{definition}
Let \(\model\) be a model of \(\tth\), \(\arr : \obj \to \objI\) a
representable arrow in \(\tth\), \(\ctx \in \model(\bas)\) an object
and \(\elI : \model(\bas)/\ctx \to \model(\objI)\) a section. Let
\(\rarep_{\arr} : \model(\objI) \to \model(\obj)\) be the right
adjoint of \(\model(\arr)\). Then the counit
\(\ctxproj_{\arr}(\elI) : \model(\arr)(\rarep_{\arr}(\elI)) \to
\elI\) exhibits a pullback square
\[
\begin{tikzcd}
\model(\bas)/\{\elI\}_{\arr}
\arrow[r,"\rarep_{\arr}(\elI)"]
\arrow[d,"\ctxproj_{\arr}(\elI)"'] &
\model(\obj)
\arrow[d,"\model(\arr)"] \\
\model(\bas)/\ctx
\arrow[r,"\elI"'] &
\model(\objI).
\end{tikzcd}
\]
We refer to the object \(\{\elI\}_{\arr}\) as the \emph{context
comprehension of \(\elI\) with respect to \(\arr\)}.
\end{definition}
\begin{definition}
Let \(\model\) be a model of \(\tth\). The class of \emph{contextual
objects of \(\model\)} is the smallest replete class of objects of
\(\model(\bas)\) containing the terminal object and closed under
context comprehension.
\end{definition}
In other words, the contextual objects of
\(\model\) are inductively defined as follows:
\begin{itemize}
\item the terminal object \(\terminal \in \model(\bas)\) is
contextual;
\item if \(\ctx \in \model(\bas)\) is a contextual object,
\(\arr : \obj \to \objI\) is a representable arrow in \(\tth\) and
\(\elI : \model(\bas)/\ctx \to \model(\objI)\) is a section, then
the context comprehension \(\{\elI\}_{\arr}\) is contextual;
\item if \(\ctx \in \model(\bas)\) is a contextual object and
\(\ctx \simeq \ctxI\), then \(\ctxI\) is contextual.
\end{itemize}
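Unfolding this inductive definition, an object
\(\ctx \in \model(\bas)\) is contextual precisely when it sits at the
top of a finite tower of context comprehensions
\[
  \ctx_{0} \simeq \terminal, \qquad
  \ctx_{i + 1} = \{\elI_{i}\}_{\arr_{i}} \quad (0 \le i < n), \qquad
  \ctx_{n} \simeq \ctx,
\]
where each \(\arr_{i} : \obj_{i} \to \objI_{i}\) is a representable
arrow in \(\tth\) and each
\(\elI_{i} : \model(\bas)/\ctx_{i} \to \model(\objI_{i})\) is a
section.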
\begin{definition}
We call a model \(\model\) \emph{democratic} if all the objects of
\(\model(\bas)\) are contextual. We denote by \(\Mod^{\dem}(\tth)\) the
full subcategory of \(\Mod(\tth)\) spanned by the democratic models.
\end{definition}
One can always find a largest democratic model contained in an
arbitrary model of \(\tth\).
\begin{definition}
For a model \(\model\) of \(\tth\), we define a model
\(\model^{\heartsuit}\) of \(\tth\) called the \emph{heart of
\(\model\)} as follows:
\begin{itemize}
\item the base \(\infty\)-category \(\model^{\heartsuit}(\bas)\) is
the full subcategory of \(\model(\bas)\) spanned by the contextual
objects;
\item the functor
\(\model^{\heartsuit} : \tth \to \RFib_{\model^{\heartsuit}(\bas)}\)
is the composite with the pullback along the inclusion
\(\model^{\heartsuit}(\bas) \to \model(\bas)\)
\[
\begin{tikzcd}
\tth
\arrow[r,"\model"] &
\RFib_{\model(\bas)}
\arrow[r] &
\RFib_{\model^{\heartsuit}(\bas)}.
\end{tikzcd}
\]
\end{itemize}
\(\model^{\heartsuit}\) is indeed a model of \(\tth\), and the
inclusion \(\model^{\heartsuit} \hookrightarrow \model\) is a
morphism of models of \(\tth\). By definition, the functor
\(\model^{\heartsuit} : \tth \to \RFib_{\model^{\heartsuit}(\bas)}\)
preserves finite limits. Since \(\model^{\heartsuit}(\bas)\) is
closed under context comprehension, for a representable arrow
\(\arr : \obj \to \objI\) in \(\tth\), the composite
\(\model^{\heartsuit}(\objI) \hookrightarrow \model(\objI)
\overset{\rarep_{\arr}}{\longrightarrow} \model(\obj)\) factors
through \(\model^{\heartsuit}(\obj) \hookrightarrow \model(\obj)\)
\[
\begin{tikzcd}
\model^{\heartsuit}(\objI)
\arrow[r,dotted]
\arrow[d,hook] &
\model^{\heartsuit}(\obj)
\arrow[d,hook] \\
\model(\objI)
\arrow[r,"\rarep_{\arr}"'] &
\model(\obj).
\end{tikzcd}
\]
This means that \(\model^{\heartsuit}(\arr) :
\model^{\heartsuit}(\obj) \to \model^{\heartsuit}(\objI)\) has a
right adjoint, and the square
\[
\begin{tikzcd}
\model^{\heartsuit}(\obj)
\arrow[r,hook]
\arrow[d,"\model^{\heartsuit}(\arr)"'] &
\model(\obj)
\arrow[d,"\model(\arr)"] \\
\model^{\heartsuit}(\objI)
\arrow[r,hook] &
\model(\objI)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition. Since the pushforward along
\(\model^{\heartsuit}(\arr)\) is given by the pullback along its right
adjoint \(\rarep_{\arr}\), we see that
\(\model^{\heartsuit} : \tth \to \RFib_{\model^{\heartsuit}(\bas)}\)
preserves pushforwards along representable maps.
\end{definition}
\begin{proposition}
\label{heart-of-model}
For a democratic model \(\model\) of \(\tth\) and an arbitrary model
\(\modelI\) of \(\tth\), the inclusion
\(\modelI^{\heartsuit} \hookrightarrow \modelI\) induces an
equivalence of spaces
\[
\Mod^{\dem}(\tth)(\model, \modelI^{\heartsuit}) \simeq
\Mod(\tth)(\model, \modelI).
\]
In other words, \(({-})^{\heartsuit}\) is a right adjoint of the
inclusion \(\Mod^{\dem}(\tth) \hookrightarrow \Mod(\tth)\).
\end{proposition}
\begin{proof}
Because any morphism of models of \(\tth\) preserves contextual
objects, any morphism \(\model \to \modelI\) from a democratic
model \(\model\) factors through \(\modelI^{\heartsuit}\).
\end{proof}
\subsection{The initial model}
\label{sec:initial-model}
\begin{definition}
Recall that the Yoneda embedding
\(\yoneda_{\tth} : \tth \to \RFib_{\tth}\) preserves all existing
limits and pushforwards. Therefore, the pair
\((\tth, \yoneda_{\tth})\) is regarded as a model of \(\tth\). We
define the \emph{initial model} \(\IM(\tth)\) to be the heart of the
model \((\tth, \yoneda_{\tth})\).
\end{definition}
The goal of this subsection is to show that \(\IM(\tth)\) is indeed an
initial object of \(\Mod(\tth)\).
By definition, the model \(\IM(\tth)\) is described as follows:
\begin{itemize}
\item the base \(\infty\)-category is \(\tth_{r}\), the full
subcategory of \(\tth\) spanned by the objects \(\obj\) such that
the arrow \(\obj \to \terminal\) is representable;
\item \(\IM(\tth)(\objI) = \tth_{r}/\objI\) defined by the pullback
\[
\begin{tikzcd}
\tth_{r}/\objI
\arrow[r,hook]
\arrow[d] &
\tth/\objI
\arrow[d] \\
\tth_{r}
\arrow[r,hook] &
\tth
\end{tikzcd}
\]
for \(\objI \in \tth\).
\end{itemize}
Alternatively, the functor \(\IM(\tth) : \tth \to \RFib_{\tth_{r}}\)
is defined as the left Kan extension of the Yoneda embedding
\(\yoneda_{\tth_{r}} : \tth_{r} \to \RFib_{\tth_{r}}\) along the
inclusion \(\tth_{r} \hookrightarrow \tth\).
\[
\begin{tikzcd}
\tth_{r}
\arrow[r,"\yoneda_{\tth_{r}}",""'{name=a0}]
\arrow[d,hook] &
\RFib_{\tth_{r}} \\
\tth
\arrow[ur,dotted,bend right,"\IM(\tth)"',""{name=a1}]
\arrow[from=a0,to=a1,dotted,Rightarrow,"\simeq"'{near start}]
\end{tikzcd}
\]
\begin{theorem}
For an \(\infty\)-type theory \(\tth\), the model \(\IM(\tth)\) is
an initial object in \(\Mod(\tth)\).
\end{theorem}
\begin{proof}
We first note that, since \(\Mod(\tth)\) has finite limits, it
suffices to show that \(\IM(\tth)\) is an initial object in the
homotopy category of \(\Mod(\tth)\) \parencite[Proposition
2.2.2]{nrs2019adjointfunctors}, that is, for any model \(\model\) of
\(\tth\), there exists a morphism \(\IM(\tth) \to \model\) and any
two morphisms \(\IM(\tth) \to \model\) are equivalent.
Let \(\model\) be a model of \(\tth\). Suppose that we have a
morphism \(\funI : \IM(\tth) \to \model\). It is regarded as a pair
\((\funI(\bas), \funI)\) consisting of a functor
\(\funI(\bas) : \tth_{r} \to \model(\bas)\) and a natural
transformation
\(\funI : \IM(\tth) \To \funI(\bas)^{*}\model : \tth \to
\RFib_{\tth_{r}}\).
The Beck-Chevalley condition for a representable arrow
\(\arrI : \objI \to \objII\) in \(\tth\) means that, for any object
\((\arr : \obj \to \objII) \in \tth_{r}/\objII\), the square
\[
\begin{tikzcd}
\model(\bas)/\funI(\bas)(\arr^{*}\objI)
\arrow[r,"\funI(\arr^{*}\objI)(\arrI^{*}\arr)"]
\arrow[d,"\arr^{*}\arrI"'] &
[4ex]
\model(\objI)
\arrow[d,"\model(\arrI)"] \\
\model(\bas)/\funI(\bas)(\obj)
\arrow[r,"\funI(\bas)(\arr)"'] &
\model(\objII)
\end{tikzcd}
\]
is a pullback. From the special case when \(\objII\) is the terminal
object, we see that the canonical map
\(\funI(\objI)(\id_{\objI}) : \model(\bas)/\funI(\bas)(\objI) \to
\model(\objI)\) is an equivalence for every object
\(\objI \in \tth_{r}\). In other words, the diagram
\[
\begin{tikzcd}
\tth_{r}
\arrow[r,"\funI(\bas)"]
\arrow[d,hook] &
\model(\bas)
\arrow[d,"\yoneda_{\model(\bas)}"] \\
\tth
\arrow[r,"\model"'] &
\RFib_{\model(\bas)}
\end{tikzcd}
\]
commutes up to equivalence, and we have an equivalence
\[
\begin{tikzcd}
\tth_{r}
\arrow[r,"\funI(\bas)"]
\arrow[d,hook] &
\model(\bas)
\arrow[d,"\yoneda_{\model(\bas)}"] \\
\tth
\arrow[r,"\model"]
\arrow[dr,bend right,"\IM(\tth)"',""{name=a0}] &
\RFib_{\model(\bas)}
\arrow[d,"\funI(\bas)^{*}"]
\arrow[from=a0,Rightarrow,"\funI"] \\
& \RFib_{\tth_{r}}
\end{tikzcd}
\simeq
\begin{tikzcd}
\tth_{r}
\arrow[r,"\funI(\bas)"]
\arrow[ddr,bend right,"\yoneda_{\tth_{r}}"',""{name=a0}] &
\model(\bas)
\arrow[d,"\yoneda_{\model(\bas)}"]
\arrow[from=a0,Rightarrow,"\funI(\bas)_{1}"] \\
& \RFib_{\model(\bas)}
\arrow[d,"\funI(\bas)^{*}"] \\
& \RFib_{\tth_{r}},
\end{tikzcd}
\]
where \(\funI(\bas)_{1}\) is the natural transformation with
components \(\funI(\bas)_{\obj, \objI} : \tth_{r}(\obj, \objI) \to
\model(\bas)(\funI(\bas)(\obj), \funI(\bas)(\objI))\). Hence,
\(\funI(\bas) : \tth_{r} \to \model(\bas)\) is uniquely determined,
and then
\(\funI : \IM(\tth) \To \funI(\bas)^{*}\model : \tth \to
\RFib_{\tth_{r}}\) is uniquely determined because
\(\IM(\tth) : \tth \to \RFib_{\tth_{r}}\) is the left Kan extension
of the Yoneda embedding \(\yoneda_{\tth_{r}} : \tth_{r} \to \RFib_{\tth_{r}}\)
along the inclusion \(\tth_{r} \hookrightarrow \tth\). This shows
that morphisms \(\IM(\tth) \to \model\) are unique up to
equivalence.
It suffices now to construct a morphism
\(\fun : \IM(\tth) \to \model\) of models of \(\tth\). We first
construct a functor \(\fun(\bas) : \tth_{r} \to \model(\bas)\). For
an object \(\obj \in \tth_{r}\), the map
\(\model(\obj) \to \model(\terminal) \simeq \model(\bas)\) of right
fibrations over \(\model(\bas)\) is representable. Thus, since
\(\model(\bas)\) has a terminal object, the right fibration
\(\model(\obj)\) is representable. Hence, the restriction of
\(\model : \tth \to \RFib_{\model(\bas)}\) along the inclusion
\(\tth_{r} \to \tth\) factors as a functor
\(\fun(\bas) : \tth_{r} \to \model(\bas)\) followed by the Yoneda
embedding.
\[
\begin{tikzcd}
\tth_{r}
\arrow[r,dotted,"\fun(\bas)"]
\arrow[d,hook] &
\model(\bas)
\arrow[d,"\yoneda_{\model(\bas)}"] \\
\tth
\arrow[r,"\model"'] &
\RFib_{\model(\bas)}
\end{tikzcd}
\]
We then define a natural transformation
\(\fun : \IM(\tth) \To \fun(\bas)^{*}\model : \tth \to
\RFib_{\tth_{r}}\) to be the one whose restriction to \(\tth_{r}\)
is the natural transformation
\(\fun(\bas)_{1} : \yoneda_{\tth_{r}} \To
\fun(\bas)^{*}\yoneda_{\model(\bas)}\fun(\bas) : \tth_{r} \to
\RFib_{\tth_{r}}\).
\[
\begin{tikzcd}
\tth_{r}
\arrow[r,"\fun(\bas)"]
\arrow[d,hook] &
\model(\bas)
\arrow[d,"\yoneda_{\model(\bas)}"] \\
\tth
\arrow[r,"\model"]
\arrow[dr,bend right,"\IM(\tth)"',""{name=a0}] &
\RFib_{\model(\bas)}
\arrow[d,"\fun(\bas)^{*}"]
\arrow[from=a0,Rightarrow,"\fun"] \\
& \RFib_{\tth_{r}}
\end{tikzcd}
\simeq
\begin{tikzcd}
\tth_{r}
\arrow[r,"\fun(\bas)"]
\arrow[ddr,bend right,"\yoneda_{\tth_{r}}"',""{name=a0}] &
\model(\bas)
\arrow[d,"\yoneda_{\model(\bas)}"]
\arrow[from=a0,Rightarrow,"\fun(\bas)_{1}"] \\
& \RFib_{\model(\bas)}
\arrow[d,"\fun(\bas)^{*}"] \\
& \RFib_{\tth_{r}}
\end{tikzcd}
\]
In order to show that \(\fun\) is a morphism of models of \(\tth\),
it remains to prove that \(\fun(\bas) : \tth_{r} \to \model(\bas)\)
preserves terminal objects and that \(\fun\) satisfies the
Beck-Chevalley condition for representable arrows. The first claim
is clear by definition. For the second, we have to show that, for
any representable arrow \(\arrI : \objI \to \objII\) in \(\tth\),
the square
\[
\begin{tikzcd}
\tth_{r}/\objI
\arrow[r,"\fun(\objI)"]
\arrow[d,"\arrI"'] &
\model(\objI)
\arrow[d,"\model(\arrI)"] \\
\tth_{r}/\objII
\arrow[r,"\fun(\objII)"'] &
\model(\objII)
\end{tikzcd}
\]
satisfies the Beck-Chevalley condition. It suffices to show that,
for any arrow \(\arr : \obj \to \objII\) with \(\obj \in \tth_{r}\),
the composite of squares
\begin{equation}
\label{eq:8}
\begin{tikzcd}
\tth_{r}/\arr^{*}\objI
\arrow[r,"\arrI^{*}\arr"]
\arrow[d,"\arr^{*}\arrI"'] &
\tth_{r}/\objI
\arrow[r,"\fun(\objI)"]
\arrow[d,"\arrI"] &
\model(\objI)
\arrow[d,"\model(\arrI)"] \\
\tth_{r}/\obj
\arrow[r,"\arr"'] &
\tth_{r}/\objII
\arrow[r,"\fun(\objII)"'] &
\model(\objII)
\end{tikzcd}
\end{equation}
satisfies the Beck-Chevalley condition. By the definition of
\(\fun\), \cref{eq:8} is equivalent to
\begin{equation}
\label{eq:9}
\begin{tikzcd}
\tth_{r}/\arr^{*}\objI
\arrow[r,"\fun(\bas)"]
\arrow[d,"\arr^{*}\arrI"'] &
\model(\bas)/\fun(\bas)(\arr^{*}\objI)
\arrow[r,"\simeq"]
\arrow[d,"\fun(\bas)(\arr^{*}\arrI)"'] &
\model(\arr^{*}\objI)
\arrow[r,"\model(\arrI^{*}\arr)"]
\arrow[d,"\model(\arr^{*}\arrI)"] &
\model(\objI)
\arrow[d,"\model(\arrI)"] \\
\tth_{r}/\obj
\arrow[r,"\fun(\bas)"'] &
\model(\bas)/\fun(\bas)(\obj)
\arrow[r,"\simeq"'] &
\model(\obj)
\arrow[r,"\model(\arr)"'] &
\model(\objII).
\end{tikzcd}
\end{equation}
The right square of \cref{eq:9} satisfies the Beck-Chevalley
condition by \cref{rep-pullback}. The middle square satisfies the
Beck-Chevalley condition as the horizontal maps are
equivalences. The Beck-Chevalley condition for the left square
asserts that \(\fun(\bas)\) preserves pullbacks of representable
arrows in \(\tth_{r}\), which is true by the definition of
\(\fun(\bas)\).
\end{proof}
\subsection{Syntactic models generated by representable theories}
\label{sec:synt-models-repr}
We describe the model \(\SM(\yoneda(\obj))\) for \(\obj \in \tth\),
where
\(\yoneda : \tth^{\op} \to \Th(\tth) \subset \Fun(\tth,
\Space)\) is the Yoneda embedding.
\begin{proposition}
\label{models-of-slice}
For an object \(\obj\) of \(\tth\), we have a pullback
\[
\begin{tikzcd}
\Mod(\tth/\obj)
\arrow[r]
\arrow[d] &
\yoneda(\obj)/\Th(\tth)
\arrow[d] \\
\Mod(\tth)
\arrow[r,"\IL"'] &
\Th(\tth).
\end{tikzcd}
\]
\end{proposition}
\begin{proof}
Recall (\cref{slice-model}) that we have a pullback
\[
\begin{tikzcd}
\Mod(\tth/\obj)
\arrow[r]
\arrow[d] &
\RFib'_{\bullet}
\arrow[d] \\
\Mod(\tth)
\arrow[r,"\obj^{*}"'] &
\RFib'.
\end{tikzcd}
\]
For an object \((\sh \to \cat) \in \RFib'\), the fiber of
\(\RFib'_{\bullet}\) over \((\sh \to \cat)\) is the space of global
sections of \(\sh\). Since the base \(\infty\)-category \(\cat\) has
a terminal object \(\terminal\), that space is equivalent to the
fiber of \(\sh\) over \(\terminal\). In other words, we have a
pullback
\[
\begin{tikzcd}
\RFib'_{\bullet}
\arrow[r]
\arrow[d] &
\terminal / \Space
\arrow[d] \\
\RFib'
\arrow[r,"\terminal^{*}"'] &
\Space.
\end{tikzcd}
\]
By the definition of \(\IL\), the composite \(\terminal^{*} \circ
\obj^{*}\) is equivalent to the composite
\[
\begin{tikzcd}
\Mod(\tth)
\arrow[r,"\IL"] &
\Th(\tth)
\arrow[r,"\ev_{\obj}"] &
\Space.
\end{tikzcd}
\]
By Yoneda, we have a pullback
\[
\begin{tikzcd}
\yoneda(\obj)/\Th(\tth)
\arrow[r]
\arrow[d] &
\terminal / \Space
\arrow[d] \\
\Th(\tth)
\arrow[r,"\ev_{\obj}"'] &
\Space,
\end{tikzcd}
\]
and then we get a pullback as in the statement.
\end{proof}
By \cref{models-of-slice}, we get an equivalence
\[
\Mod(\tth/\obj) \simeq (\yoneda(\obj) \downarrow \IL).
\]
Since \(\SM(\yoneda(\obj))\) is the initial object of
\((\yoneda(\obj) \downarrow \IL)\), it is obtained from the initial
model \(\IM(\tth/\obj)\) of \(\tth/\obj\) by restricting the morphism
of \(\infty\)-categories with representable maps
\(\IM(\tth/\obj) : \tth/\obj \to \RFib_{\IM(\tth/\obj)(\bas)}\) along
\(\obj^{*} : \tth \to \tth/\obj\). We thus have a concrete description
of \(\SM(\yoneda(\obj))\) as follows:
\begin{itemize}
\item the base \(\infty\)-category \(\SM(\yoneda(\obj))(\bas)\) is the
full subcategory of \(\tth/\obj\) spanned by the representable
arrows over \(\obj\);
\item for objects \(\objI \in \tth\) and
\((\arr : \obj' \to \obj) \in \SM(\yoneda(\obj))(\bas)\), the fiber
of \(\SM(\yoneda(\obj))(\objI)\) over \(\arr\) is
\(\tth/\obj(\arr, \obj^{*}\objI) \simeq \tth(\obj', \objI)\).
\end{itemize}
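In particular, \(\id_{\obj}\) is a terminal object of
\(\SM(\yoneda(\obj))(\bas)\), so the internal language of this model
is computed by taking fibers over \(\id_{\obj}\):
\[
  \IL(\SM(\yoneda(\obj)))(\objI)
  \simeq \SM(\yoneda(\obj))(\objI)_{\id_{\obj}}
  \simeq \tth(\obj, \objI);
\]
this computation is used in the proof of \cref{unit-invertible}
below.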
\subsection{The equivalence of theories and democratic models}
\label{sec:equiv-theor-democr}
\begin{proposition}
\label{unit-invertible}
The unit of the adjunction
\(\SM \adj \IL : \Th(\tth) \to \Mod(\tth)\) is
invertible. Consequently, the left adjoint
\(\SM : \Th(\tth) \to \Mod(\tth)\) is fully faithful.
\end{proposition}
\begin{proof}
Since both functors \(\SM\) and \(\IL\) preserve filtered colimits,
it suffices to show that the unit
\(\unit_{\theory} : \theory \to \IL(\SM(\theory))\) is invertible
for every representable functor \(\theory : \tth \to \Space\). From
the description of \(\SM(\yoneda(\obj))\) in
\cref{sec:synt-models-repr}, we have that
\(\IL(\SM(\yoneda(\obj)))(\objI) \simeq \tth(\obj, \objI) =
\yoneda(\obj)(\objI)\) and \(\unit_{\yoneda(\obj)}\) is just the
identity.
\end{proof}
\begin{proposition}
\label{syntactic-model-democratic}
The functor \(\SM : \Th(\tth) \to \Mod(\tth)\) factors through
\(\Mod^{\dem}(\tth) \subset \Mod(\tth)\).
\end{proposition}
\begin{proof}
Since \(\Mod^{\dem}(\tth) \subset \Mod(\tth)\) is a coreflective
subcategory by \cref{heart-of-model}, it is closed under
colimits. Thus, it suffices to show that \(\SM(\yoneda(\obj))\) is
democratic for every \(\obj \in \tth\), but this follows from the
description of \(\SM(\yoneda(\obj))\) in
\cref{sec:synt-models-repr}.
\end{proof}
\begin{proposition}
\label{IL-dem-conservative}
The restriction of \(\IL : \Mod(\tth) \to \Th(\tth)\) to
\(\Mod^{\dem}(\tth) \subset \Mod(\tth)\) is conservative.
\end{proposition}
We first show the following lemma.
\begin{lemma}
\label{IL-dem-conservative-1}
Let \(\fun : \model \to \modelI\) be a morphism of models of
\(\tth\) such that \(\IL(\fun) : \IL(\model) \to \IL(\modelI)\) is
an equivalence of \(\tth\)-theories. Then, the map
\[
\fun(\obj)_{\ctx} : \model(\obj)_{\ctx} \to
\modelI(\obj)_{\fun(\bas)(\ctx)}
\]
is an equivalence of spaces for any contextual object
\(\ctx \in \model(\bas)\) and any object \(\obj \in \tth\).
\end{lemma}
\begin{proof}
We proceed by induction on the contextual object \(\ctx \in \model(\bas)\). When
\(\ctx = \terminal\), the map \(\fun(\obj)_{\terminal}\) is an
equivalence by assumption. Suppose that \(\ctx = \{\el\}_{\arr}\)
for some contextual object \(\ctx' \in \model(\bas)\), representable
arrow \(\arr : \objI \to \objII\) in \(\tth\) and section
\(\el : \model(\bas)/\ctx' \to \model(\objII)\). Since
\(\model : \tth \to \RFib_{\model(\bas)}\) commutes with the polynomial
functor \(\poly_{\arr}\), the sections
\(\model(\bas)/\{\el\}_{\arr} \to \model(\obj)\) correspond to the
sections of \(\model(\poly_{\arr}\obj) \to \model(\objII)\) over
\(\el : \model(\bas)/\ctx' \to \model(\objII)\). Thus,
\(\model(\obj)_{\{\el\}_{\arr}}\) is the pullback
\[
\begin{tikzcd}
\model(\obj)_{\{\el\}_{\arr}}
\arrow[r]
\arrow[d] &
\model(\poly_{\arr}\obj)_{\ctx'}
\arrow[d] \\
\Delta^{0}
\arrow[r,"\el"'] &
\model(\objII)_{\ctx'}.
\end{tikzcd}
\]
By the induction hypothesis, \(\fun(\poly_{\arr}\obj)_{\ctx'}\) and
\(\fun(\objII)_{\ctx'}\) are equivalences, and thus
\(\fun(\obj)_{\{\el\}_{\arr}}\) is an equivalence.
\end{proof}
\begin{proof}[Proof of \cref{IL-dem-conservative}]
Let \(\fun : \model \to \modelI\) be a morphism between democratic
models of \(\tth\) and suppose that
\(\IL(\fun) : \IL(\model) \to \IL(\modelI)\) is an equivalence of
\(\tth\)-theories. We show that \(\fun\) is an equivalence of models
of \(\tth\). Since the forgetful functor
\(\Mod(\tth) \to \Fun(\tth^{\rcone}, \Cat_{\infty})\) is
conservative, it suffices to show that
\(\fun(\obj) : \model(\obj) \to \modelI(\obj)\) is an equivalence of
\(\infty\)-categories for every object \(\obj \in
\tth^{\rcone}\). \Cref{IL-dem-conservative-1} implies that the
square
\[
\begin{tikzcd}
\model(\obj)
\arrow[r,"\fun(\obj)"]
\arrow[d] &
\modelI(\obj)
\arrow[d] \\
\model(\bas)
\arrow[r,"\fun(\bas)"'] &
\modelI(\bas)
\end{tikzcd}
\]
is a pullback for every \(\obj \in \tth\). It remains to show that
the functor \(\fun(\bas) : \model(\bas) \to \modelI(\bas)\) is fully
faithful and essentially surjective.
We show by induction on \(\ctxI\) that
\(\fun(\bas) : \model(\bas)(\ctx, \ctxI) \to \modelI(\bas)(\fun(\bas)(\ctx),
\fun(\bas)(\ctxI))\) is an equivalence of spaces for any objects
\(\ctx, \ctxI \in \model(\bas)\). The case when \(\ctxI = \terminal\)
is trivial. Suppose that \(\ctxI = \{\el\}_{\arr}\) for some object
\(\ctxI' \in \model(\bas)\), representable arrow
\(\arr : \obj \to \objI\) in \(\tth\) and section
\(\el : \model(\bas)/\ctxI' \to \model(\objI)\). By definition, we have
a pullback
\[
\begin{tikzcd}
\model(\bas)(\ctx, \{\el\}_{\arr})
\arrow[r]
\arrow[d] &
\model(\obj)_{\ctx}
\arrow[d,"\model(\arr)_{\ctx}"] \\
\model(\bas)(\ctx, \ctxI')
\arrow[r,"\ctxmor \mapsto \ctxmor^{*}\el"'] &
\model(\objI)_{\ctx}.
\end{tikzcd}
\]
Then, by the induction hypothesis and \cref{IL-dem-conservative-1},
the map
\(\fun(\bas) : \model(\bas)(\ctx, \{\el\}_{\arr}) \to
\modelI(\bas)(\fun(\bas)(\ctx), \fun(\bas)(\{\el\}_{\arr}))\) is an
equivalence.
Finally, we show by induction on \(\ctxI\) that, for any object
\(\ctxI \in \modelI(\bas)\), there exists an object
\(\ctx \in \model(\bas)\) such that
\(\fun(\bas)(\ctx) \simeq \ctxI\). The case when
\(\ctxI = \terminal\) is trivial. Suppose that
\(\ctxI = \{\elI\}_{\arr}\) for some object
\(\ctxI' \in \modelI(\bas)\), representable arrow
\(\arr : \obj \to \objI\) in \(\tth\) and section
\(\elI : \modelI(\bas)/\ctxI' \to \modelI(\objI)\). By the induction
hypothesis, we have an object \(\ctx' \in \model(\bas)\) such that
\(\fun(\bas)(\ctx') \simeq \ctxI'\). By
\cref{IL-dem-conservative-1}, we have a section
\(\el : \model(\bas)/\ctx' \to \model(\objI)\) such that
\(\fun(\objI)_{\ctx'}(\el) \simeq \elI\). Then
\(\fun(\bas)(\{\el\}_{\arr}) \simeq \{\elI\}_{\arr}\).
\end{proof}
\begin{theorem}
\label{theory-model-correspondence}
For an \(\infty\)-type theory, the restriction of
\(\IL : \Mod(\tth) \to \Th(\tth)\) to
\(\Mod^{\dem}(\tth) \subset \Mod(\tth)\) is an equivalence
\[
\Mod^{\dem}(\tth) \simeq \Th(\tth).
\]
\end{theorem}
\begin{proof}
By \cref{syntactic-model-democratic}, the functor
\(\IL : \Mod^{\dem}(\tth) \to \Th(\tth)\) has the left adjoint
\(\SM\). By \cref{unit-invertible}, the unit of this adjunction is
invertible. By \cref{IL-dem-conservative} and the triangle
identities, the counit is also invertible.
\end{proof}
\section{Correspondence between type-theoretic structures and categorical structures}
\label{sec:corr-betw-type}
We discuss a correspondence between type-theoretic structures and
categorical structures. Given an \(\infty\)-category \(\cat\) whose
objects are small \(\infty\)-categories equipped with a certain
structure and morphisms are structure-preserving functors, we try to
find an \(\infty\)-type theory \(\tth\) such that
\(\Th(\tth) \simeq \cat\). Such an \(\infty\)-type theory \(\tth\) can
be understood in a couple of ways. Type-theoretically, \(\tth\)
provides \emph{internal languages} for \(\infty\)-categories in
\(\cat\). We will find type-theoretic structures corresponding to
categorical structures like finite limits and
pushforwards. Categorically, \(\tth\) gives a \emph{presentation} of
the \(\infty\)-category \(\cat\) as a localization of a presheaf
\(\infty\)-category. Such a presentation has the advantage that the
\(\infty\)-type theory \(\tth\) often has a simple universal property
from which one can derive a universal property of \(\cat\) (see
\cref{Lex-ump} for example).
The fundamental example of such an \(\infty\)-category \(\cat\) is
\(\cat = \Lex_{\infty}\), the \(\infty\)-category of small left exact
\(\infty\)-categories. In \cref{sec:left-exact-infty}, we introduce an
\(\infty\)-type theory \(\etth_{\infty}\) which is an
\(\infty\)-analogue of Martin-L{\"o}f type theory with extensional identity
types. The main result of this section is to establish an equivalence
\(\Th(\etth_{\infty}) \simeq \Lex_{\infty}\), and this is a higher analogue of the
result of \textcite{clairambault2014biequivalence}. To do this, we
need two preliminaries: one is the \emph{representable map classifier}
of right fibrations over a left exact \(\infty\)-category
(\cref{sec:repr-map-class}) which is used for constructing a
democratic model of \(\etth_{\infty}\) out of a left exact
\(\infty\)-category; the other is the notion of \emph{univalence} in
\(\infty\)-categories with representable maps (\cref{sec:univ-repr-arrows})
which for example makes a type constructor unique up to contractible
choice. We also give two other examples \(\cat = \LCCC_{\infty}\), the
\(\infty\)-category of small locally cartesian closed
\(\infty\)-categories (\cref{sec:locally-cart-clos}), and
\(\cat = \TTh_{\infty}\), the \(\infty\)-category of \(\infty\)-type theories
(\cref{sec:infty-type-theories-1}). The latter example shows that the
notion of \(\infty\)-type theories itself can be written in the
\(\infty\)-type-theoretic language.
\subsection{The representable map classifier}
\label{sec:repr-map-class}
In this preliminary subsection, we review a \emph{representable map
classifier} over a left exact \(\infty\)-category \(\cat\), that is,
a classifying object for the class of representable maps of right
fibrations over \(\cat\).
\begin{definition}
Let \(\sect\) denote the category
\[
\begin{tikzcd}
& 0
\arrow[d] \\
1
\arrow[ur]
\arrow[r,equal] &
1.
\end{tikzcd}
\]
The inclusion \(\Delta^{1} = \{0 \to 1\} \to \sect\)
induces a functor
\(\sect \cotensor \cat \to \Delta^{1} \cotensor \cat = \cat^{\to}\)
for an \(\infty\)-category \(\cat\). Note that
\(\sect \cotensor \cat\) is the \(\infty\)-category of sections in
\(\cat\).
\end{definition}
\begin{definition}
\label{def-rmcls}
Let \(\cat\) be a left exact \(\infty\)-category. We define
\(\rmcls_{\cat}\) to be the largest right fibration over \(\cat\)
contained in the cartesian fibration \(\cod : \cat^{\to} \to \cat\)
and \(\genrm_{\cat} : \ptrmcls_{\cat} \to \rmcls_{\cat}\) to be the
pullback
\[
\begin{tikzcd}
\ptrmcls_{\cat}
\arrow[r,hook]
\arrow[d,"\genrm_{\cat}"'] &
\sect \cotensor \cat
\arrow[d] \\
\rmcls_{\cat}
\arrow[r,hook] &
\cat^{\to}.
\end{tikzcd}
\]
That is, \(\rmcls_{\cat}\) is the wide subcategory of \(\cat^{\to}\)
whose morphisms are the pullback squares. We refer to
\(\rmcls_{\cat}\) as the \emph{representable map classifier} over
\(\cat\) and \(\genrm_{\cat}\) as the \emph{generic representable
map} of right fibrations over \(\cat\) because of the following
proposition.
\end{definition}
\begin{proposition}
\label{rep-map-classifier}
Let \(\cat\) be a left exact \(\infty\)-category.
\begin{enumerate}
\item \(\genrm_{\cat} : \ptrmcls_{\cat} \to \rmcls_{\cat}\) is a
representable map of right fibrations over \(\cat\).
\item For any right fibration \(\sh\) over \(\cat\), the map
\begin{equation*}
\RFib_{\cat}(\sh, \rmcls_{\cat}) \to (\RFib_{\cat} / \sh)_{r}
\end{equation*}
defined by the pullback of \(\genrm_{\cat}\) is an equivalence,
where \((\RFib_{\cat} / \sh)_{r}\) denotes the subspace of
\((\RFib_{\cat} / \sh)^{\simeq}\) spanned by the representable maps
over \(\sh\).
\end{enumerate}
\end{proposition}
\begin{proof}
We first observe that
\(\genrm_{\cat} : \ptrmcls_{\cat} \to \rmcls_{\cat}\) is a right
fibration, that is, the functor
\begin{equation*}
(\ev_{1}, (\genrm_{\cat})_{*}) : \Delta^{1} \cotensor \ptrmcls_{\cat} \to
\ptrmcls_{\cat} \times_{\rmcls_{\cat}} \Delta^{1} \cotensor \rmcls_{\cat}
\end{equation*}
is an equivalence. Since \(\Delta^{1} \cotensor \rmcls_{\cat}\) is a
subcategory of \((\Delta^{1} \times \Delta^{1}) \cotensor \cat\) whose objects are
the pullback squares, this follows from the universal property of
pullbacks.
For the representability of \(\genrm_{\cat}\), we use
\cref{representable-map-1}. Let
\(\kappa_{\objI} : \cat / \obj \to \rmcls_{\cat}\) be a section which
corresponds to an arrow \(\objI \to \obj\) in \(\cat\). We show that
\(\kappa_{\objI}^{*} \ptrmcls_{\cat}\) is representable by
\(\objI\). Since the diagonal map
\(\objI \to \objI \times_{\obj} \objI\) is a section of the first
projection, it determines a section
\(\diagonal : \cat / \objI \to \ptrmcls_{\cat}\) such that the diagram
\begin{equation*}
\begin{tikzcd}
\cat / \objI
\arrow[r, "\diagonal"]
\arrow[d] &
\ptrmcls_{\cat}
\arrow[d, "\genrm_{\cat}"] \\
\cat / \obj
\arrow[r, "\kappa_{\objI}"'] &
\rmcls_{\cat}
\end{tikzcd}
\end{equation*}
commutes. This square is a pullback. Indeed, for an object
\((\arr : \objII \to \obj) \in \cat / \obj\), the fiber of
\(\genrm_{\cat}\) over \(\kappa_{\objI}(\arr)\) is the space of sections
of \(\objII \times_{\obj} \objI \to \objII\) which is equivalent to the
space of sections of \(\objI \to \obj\) over \(\arr\).
For the second claim, observe that
\((\RFib_{\cat} / \colim_{\idx \in \idxcat}\sh_{\idx})_{r} \simeq
\lim_{\idx \in \idxcat}(\RFib_{\cat} / \sh_{\idx})_{r}\) for any
diagram \((\sh_{\idx})_{\idx \in \idxcat}\) in
\(\RFib_{\cat}\). Indeed, since \(\RFib_{\cat}\) is an
\(\infty\)-topos, we have
\((\RFib_{\cat} / \colim_{\idx \in \idxcat} \sh_{\idx})^{\simeq} \simeq
\lim_{\idx \in \idxcat}(\RFib_{\cat} / \sh_{\idx})^{\simeq}\), and this
equivalence restricts to representable maps by
\cref{representable-map-1}. Then it is enough to show that the map
in the statement is an equivalence in the case when \(\sh\) is
representable by some \(\obj \in \cat\). By definition
\(\RFib_{\cat}(\cat / \obj, \rmcls_{\cat}) \simeq (\cat / \obj)^{\simeq}\). By
\cref{representable-map-1}, \((\RFib_{\cat} / (\cat / \obj))_{r}\)
is the space of arrows \(\objI \to \obj\) whose pullback along an
arbitrary arrow \(\objII \to \obj\) exists; since \(\cat\) has
pullbacks, this is all of \((\cat / \obj)^{\simeq}\).
\end{proof}
\begin{remark}
The representable map classifier in \(\RFib_{\cat}\) exists even
when \(\cat\) is not left exact. In the above proof, we have seen
that
\((\RFib_{\cat} / \colim_{\idx \in \idxcat} \sh_{\idx})_{r} \simeq
\lim_{\idx \in \idxcat}(\RFib_{\cat} / \sh_{\idx})_{r}\) and that
\((\RFib_{\cat} / \sh)_{r}\) is essentially small. Then, by
\parencite[Proposition 5.5.2.2]{lurie2009higher}, the functor
\(\RFib_{\cat}^{\op} \ni \sh \mapsto (\RFib_{\cat} / \sh)_{r} \in \Space\) is
representable, and the representing object is the representable map
classifier. Moreover, the concrete construction given in
\cref{def-rmcls} shows that, when \(\cat\) is left exact, the
representable map classifier is functorial: any left exact functor
\(\fun : \cat \to \catI\) induces a
map of right fibrations \(\rmcls_{\cat} \to \rmcls_{\catI}\) over
\(\fun\).
\end{remark}
\subsection{Univalent representable arrows}
\label{sec:univ-repr-arrows}
In this preliminary subsection, we extend the notion of a univalent
map in a (presentable) locally cartesian closed \(\infty\)-category
\parencite{gepner2017univalence,rasekh2018objects,rasekh2021univalence} to a notion of a
univalent representable arrow in an \(\infty\)-category with
representable maps.
\begin{definition}
For objects \(\obj\) and \(\objI\) of an \(\infty\)-category
\(\cat\) with finite products, let \(\Map(\obj, \objI) \to \cat\)
denote the right fibration whose fiber over \(\objII\) is
\(\cat/\objII(\obj \times \objII, \objI \times \objII) \simeq
\cat(\obj \times \objII, \objI)\). It is defined by the pullback
\[
\begin{tikzcd}
\Map(\obj, \objI)
\arrow[r]
\arrow[d] &
\cat/\objI
\arrow[d] \\
\cat
\arrow[r,"({-} \times \obj)"'] &
\cat.
\end{tikzcd}
\]
If \(\Map(\obj, \objI)\) is representable, we denote by
\(\intern{\Map}(\obj, \objI)\) the representing object. We define
\(\Equiv(\obj, \objI)\) to be the subfibration of
\(\Map(\obj, \objI)\) spanned by the equivalences
\(\obj \times \objII \simeq \objI \times \objII\). If
\(\Equiv(\obj, \objI)\) is representable, we denote by
\(\intern{\Equiv}(\obj, \objI)\) the representing object.
\end{definition}
\begin{definition}
Let \(\arr : \obj \to \objI\) be an arrow in a left exact
\(\infty\)-category \(\cat\). We regard
\(\arr \times \objI : \obj \times \objI \to \objI \times \objI\) and
\(\objI \times \arr : \objI \times \obj \to \objI \times \objI\) as
objects of \(\cat/\objI \times \objI\) and denote by
\(\Equiv(\arr)\) the right fibration
\(\Equiv(\arr \times \objI, \objI \times \arr) \to \cat/\objI \times
\objI\). If \(\Equiv(\arr)\) is representable, we denote by
\(\intern{\Equiv}(\arr)\) the representing object.
\end{definition}
By definition, an
arrow \(\objII \to \intern{\Equiv}(\arr)\) corresponds to a triple
\((\arrI_{1}, \arrI_{2}, \arrII)\) consisting of arrows
\(\arrI_{1}, \arrI_{2} : \objII \to \objI\) and an equivalence
\(\arrII : \arrI_{1}^{*}\obj \simeq \arrI_{2}^{*}\obj\) over
\(\objII\).
\begin{definition}
Let \(\arr : \obj \to \objI\) be an arrow in a left exact
\(\infty\)-category \(\cat\) such that \(\Equiv(\arr)\) is
representable. We have a section
\(|\id| : \objI \to \intern{\Equiv}(\arr)\) over the diagonal
\(\diagonal : \objI \to \objI \times \objI\) corresponding to the
identity \(\id : \obj \to \obj\). We say \(\arr\) is
\emph{univalent} if the arrow
\(|\id| : \objI \to \intern{\Equiv}(\arr)\) is an equivalence.
\end{definition}
\begin{proposition}
\label{univalent-arrow-1}
Let \(\arr : \obj \to \objI\) be an arrow in a left exact
\(\infty\)-category \(\cat\) such that \(\Equiv(\arr)\) is
representable. Let \(\kappa_{\arr} : \cat/\objI \to \rmcls_{\cat}\)
be the section corresponding to \(\arr\) by Yoneda. The following
are equivalent:
\begin{enumerate}
\item \(\arr\) is univalent;
\item the square
\[
\begin{tikzcd}
\cat/\objI
\arrow[r,"\kappa_{\arr}"]
\arrow[d,"\diagonal"'] &
\rmcls_{\cat}
\arrow[d,"\diagonal"] \\
\cat/\objI \times \objI
\arrow[r,"\kappa_{\arr} \times \kappa_{\arr}"'] &
\rmcls_{\cat} \times_{\cat} \rmcls_{\cat}
\end{tikzcd}
\]
is a pullback;
\item \label{item:10} \(\kappa_{\arr} : \cat/\objI \to \rmcls_{\cat}\) is a
\((-1)\)-truncated map of right fibrations over
\(\cat\). Equivalently, for any object \(\objII \in \cat\), the map
\(\cat(\objII, \objI) \to (\cat / \objII)^{\simeq}\) defined by the
pullback of \(\arr\) is \((-1)\)-truncated.
\end{enumerate}
\end{proposition}
\begin{proof}
The same proof as for \parencite[Proposition 3.8
(1)--(3)]{gepner2017univalence} works, assuming only the
representability of \(\Equiv(\arr)\).
\end{proof}
\begin{example}
\label{univalent-gen-rep-map}
For any left exact \(\infty\)-category \(\cat\), the generic
representable map
\(\genrm_{\cat} : \ptrmcls_{\cat} \to \rmcls_{\cat}\) is a univalent
representable map in \(\RFib_{\cat}\) by \cref{rep-map-classifier}.
\end{example}
\begin{proposition}
\label{equiv-repr}
Let \(\obj\) and \(\objI\) be objects in a left exact
\(\infty\)-category \(\cat\) and suppose that \(\obj \times \objII\)
and \(\objI \times \objII\) are exponentiable in \(\cat/\objII\) for
any object \(\objII \in \cat\).
\begin{enumerate}
\item The right fibration \(\Equiv(\obj, \objI) \to \cat\) is
representable.
\item Let \(\catI\) be a left exact \(\infty\)-category and
\(\fun : \cat \to \catI\) a left exact functor. If \(\fun\) sends
\(\obj \times \objII\) and \(\objI \times \objII\) to
exponentiable objects over \(\fun\objII\) and commutes with
exponentiation by \(\obj \times \objII\) and
\(\objI \times \objII\) for any \(\objII \in \cat\), then the
canonical arrow
\(\fun(\intern{\Equiv}(\obj, \objI)) \to
\intern{\Equiv}(\fun(\obj), \fun(\objI))\) is an equivalence.
\end{enumerate}
\end{proposition}
\begin{proof}
The right fibration \(\Equiv(\obj, \objI)\) is equivalent to the
right fibration \(\BiInv(\obj, \objI)\) of bi-invertible arrows
whose fiber over \(\objII \in \cat\) is the space of tuples
\((\arr, \arrI, \unit, \arrII, \counit)\) consisting of arrows
\(\arr : \obj \times \objII \to \objI \times \objII\) and
\(\arrI, \arrII : \objI \times \objII \to \obj \times \objII\) over
\(\objII\) and homotopies \(\unit : \arrI\arr \simeq \id\) and
\(\counit : \arr\arrII \simeq \id\) over \(\objII\). The right
fibration \(\BiInv(\obj, \objI)\) is representable by the
exponentiability of \(\obj \times \objII\) and
\(\objI \times \objII\). The second assertion is clear from the
construction of the representing object for \(\BiInv(\obj, \objI)\).
\end{proof}
\begin{corollary}
Let \(\arr : \obj \to \objI\) be a representable arrow in an
\(\infty\)-category with representable maps \(\cat\).
\begin{enumerate}
\item The right fibration
\(\Equiv(\arr) \to \cat/\objI \times \objI\) is representable.
\item If \(\arr\) is univalent, so is \(\fun\arr\) for any morphism
of \(\infty\)-categories with representable maps
\(\fun : \cat \to \catI\).
\end{enumerate}
\end{corollary}
\subsection{Left exact \(\infty\)-categories}
\label{sec:left-exact-infty}
We define an \(\infty\)-type theory \(\etth_{\infty}\) whose theories
are equivalent to small left exact \(\infty\)-categories.
\begin{definition}
Let \(\cat\) be an \(\infty\)-category with representable maps and
\(\typeof : \El \to \Ty\) a representable arrow in \(\cat\).
\begin{itemize}
\item A \emph{\(\Unit\)-type structure on \(\typeof\)} is a
pullback square of the form
\begin{equation}
\label{eq:1}
\begin{tikzcd}
\terminal
\arrow[r, dotted, "\elUnit"]
\arrow[d,equal] &
\El
\arrow[d,"\typeof"] \\
\terminal
\arrow[r, dotted, "\Unit"'] &
\Ty.
\end{tikzcd}
\end{equation}
\item A \emph{\(\dSum\)-type structure on \(\typeof\)} is a
pullback square of the form
\begin{equation}
\label{eq:2}
\begin{tikzcd}
\dom(\typeof \otimes \typeof)
\arrow[r, dotted, "\pair"]
\arrow[d,"\typeof \otimes \typeof"'] &
\El
\arrow[d,"\typeof"] \\
\cod(\typeof \otimes \typeof)
\arrow[r, dotted, "\dSum"'] &
\Ty
\end{tikzcd}
\end{equation}
where \(\otimes\) is the composition of polynomials
(\cref{sec:exponentiable-arrows}).
\item An \emph{\(\Id\)-type structure on \(\typeof\)} is a
pullback square of the form
\begin{equation}
\label{eq:3}
\begin{tikzcd}
\El
\arrow[r, dotted, "\refl"]
\arrow[d,"\diagonal"'] &
\El
\arrow[d,"\typeof"] \\
\El \times_{\Ty} \El
\arrow[r, dotted, "\Id"'] &
\Ty.
\end{tikzcd}
\end{equation}
\end{itemize}
\end{definition}
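Read type-theoretically, a map into \(\Ty\) is a type and a lift
against \(\typeof\) is a term; the three pullback squares then
internalize the usual formation and introduction rules of unit,
dependent sum and identity types (a heuristic reading only; the
identity types arising here are extensional):
\[
  \frac{}{\Unit \ \mathrm{type}}
  \quad
  \frac{}{\elUnit : \Unit}
  \qquad
  \frac{x : A \vdash B(x) \ \mathrm{type}}{\dSum_{x : A} B(x) \ \mathrm{type}}
  \quad
  \frac{a : A \quad b : B(a)}{\pair(a, b) : \dSum_{x : A} B(x)}
  \qquad
  \frac{a_{1}, a_{2} : A}{\Id_{A}(a_{1}, a_{2}) \ \mathrm{type}}
  \quad
  \frac{a : A}{\refl(a) : \Id_{A}(a, a)}
\]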
\begin{proposition}
\label{univalent-type-constructors}
Let \(\typeof : \El \to \Ty\) be a univalent representable arrow in
an \(\infty\)-category with representable maps \(\cat\). Then
\(\Unit\)-type structures, \(\dSum\)-type structures and
\(\Id\)-type structures are unique up to contractible
choice. Moreover, we have the following:
\begin{enumerate}
\item \label{item:-3} \(\typeof\) has a \(\Unit\)-type structure if
and only if all the identity arrows are pullbacks of \(\typeof\);
\item \label{item:-4} \(\typeof\) has a \(\dSum\)-type structure if
and only if pullbacks of \(\typeof\) are closed under composition;
\item \label{item:-5} \(\typeof\) has an \(\Id\)-type structure if
and only if pullbacks of \(\typeof\) are closed under equalizers:
if \(\arr : \obj \to \objI\) is a pullback of \(\typeof\) and
\(\arrI_{1}, \arrI_{2} : \obj' \to \obj\) are arrows such that
\(\arr\arrI_{1} \simeq \arr\arrI_{2}\), then the equalizer
\(\obj'' \to \obj'\) of \(\arrI_{1}\) and \(\arrI_{2}\) in
\(\cat/\objI\) is a pullback of \(\typeof\).
\end{enumerate}
\end{proposition}
\begin{proof}
The uniqueness follows from \cref{item:10} of
  \cref{univalent-arrow-1}. The rest is straightforward.
\end{proof}
\begin{definition}
By a \emph{left exact universe} in an \(\infty\)-category with
representable maps \(\cat\) we mean a univalent representable arrow
\(\typeof : \El \to \Ty\) equipped with a \(\Unit\)-type structure,
a \(\dSum\)-type structure and an \(\Id\)-type structure. We denote
by \(\etth_{\infty}\) the initial \(\infty\)-type theory containing
a left exact universe \(\typeof : \El \to \Ty\).
\end{definition}
\begin{theorem}
\label{etth-dem-lex}
The functor
\(\ev_{\bas} : \Mod^{\dem}(\etth_{\infty}) \to \Cat_{\infty}\)
factors through \(\Lex_{\infty}\) and induces an equivalence
\[
\Mod^{\dem}(\etth_{\infty}) \simeq \Lex_{\infty}.
\]
\end{theorem}
\begin{lemma}
\label{etth-representable-arrows}
An arrow in \(\etth_{\infty}\) is representable if and only if it is
a pullback of \(\typeof : \El \to \Ty\).
\end{lemma}
\begin{proof}
Let \(\etth_{\infty}'\) denote the \(\infty\)-category with
representable maps whose underlying \(\infty\)-category is the same
as \(\etth_{\infty}\) and representable arrows are the pullbacks of
\(\typeof\). As \(\typeof\) is equipped with a \(\Unit\)-type
structure and a \(\dSum\)-type structure, the representable arrows in
\(\etth_{\infty}'\) include all the identities and are closed under
composition by \cref{univalent-type-constructors}, so \(\etth_{\infty}'\) is
indeed an \(\infty\)-category with representable maps. By the
initiality of \(\etth_{\infty}\), the inclusion
\(\etth_{\infty}' \to \etth_{\infty}\) has a section, and thus
\(\etth_{\infty}' \simeq \etth_{\infty}\).
\end{proof}
\begin{definition}
Let \(\model\) be a model of an \(\infty\)-type theory \(\tth\). By
a \emph{display map} we mean an arrow \(\ctxmor : \ctxI \to \ctx\)
in \(\model(\bas)\) that is equivalent over \(\ctx\) to
\(\ctxproj_{\arr} : \{\elI\}_{\arr} \to \ctx\) for some
representable arrow \(\arr : \obj \to \objI\) in \(\tth\) and
section \(\elI : \model(\bas)/\ctx \to \model(\objI)\). By
definition, display maps are stable under pullbacks.
\end{definition}
\begin{lemma}
\label{etth-model-display-maps-1}
Let \(\model\) be a model of \(\etth_{\infty}\). An arrow \(\ctxmor
: \ctx_{1} \to \ctx_{2}\) in \(\model(\bas)\) is a display map if and
only if there exists a pullback of the form
\[
\begin{tikzcd}
\model(\bas)/\ctx_{1}
\arrow[r,dotted]
\arrow[d,"\ctxmor"'] &
\model(\El)
\arrow[d,"\model(\typeof)"] \\
\model(\bas)/\ctx_{2}
\arrow[r,dotted] &
\model(\Ty).
\end{tikzcd}
\]
\end{lemma}
\begin{proof}
By \cref{etth-representable-arrows}.
\end{proof}
\begin{lemma}
\label{disp-map-id-cancel}
Let \(\disp\) be a pullback-stable class of arrows in an
\(\infty\)-category \(\cat\). Suppose that \(\disp\) contains all
  the identities and is closed under composition and equalizers. Then,
for arrows \(\arr : \obj \to \objI\) and
\(\arrI : \objI \to \objII\), if \(\arrI\) and \(\arrI\arr\) are in
\(\disp\), so is \(\arr\).
\end{lemma}
\begin{proof}
The assumption implies that the full subcategory of \(\cat/\objII\)
spanned by the arrows \(\obj \to \objII\) in \(\disp\) is an
\(\infty\)-category of fibrant objects in which the weak
equivalences are the equivalences and the fibrations are the arrows
  in \(\disp\). Therefore, by Brown's factorization lemma, any arrow
  between fibrations is equivalent to a fibration; in particular,
  \(\arr\) is equivalent to an arrow in \(\disp\) and hence, by
  pullback-stability, lies in \(\disp\).
\end{proof}
\begin{lemma}
\label{etth-dem-model-disp}
Let \(\model\) be a democratic model of \(\etth_{\infty}\). Then
every arrow in \(\model(\bas)\) is a display map.
\end{lemma}
\begin{proof}
By
\cref{univalent-type-constructors,etth-model-display-maps-1,disp-map-id-cancel}.
\end{proof}
\begin{proof}[Proof of \cref{etth-dem-lex}]
Since display maps are stable under pullbacks and morphisms of
models commute with pullbacks of display maps,
\cref{etth-dem-model-disp} implies that the base \(\infty\)-category
of a democratic model of \(\etth_{\infty}\) has all finite limits
and that any morphism between democratic models of
\(\etth_{\infty}\) commutes with finite limits in the base
\(\infty\)-categories. In other words, the functor
\(\ev_{\bas} : \Mod^{\dem}(\etth_{\infty}) \to \Cat_{\infty}\)
factors through \(\Lex_{\infty}\).
Let \(\cat\) be a left exact \(\infty\)-category. We define a model
\(\rmcls_{\cat}\) of \(\etth_{\infty}\) by setting
\(\rmcls_{\cat}(\bas)\) to be \(\cat\) and
\(\rmcls_{\cat}(\typeof) : \rmcls_{\cat}(\El) \to \rmcls_{\cat}(\Ty)\)
to be the generic representable map of right fibrations over
\(\cat\). We have seen in \cref{univalent-gen-rep-map} that the
generic representable map is univalent. Since \(\cat\) has finite
limits, representable maps of right fibrations over \(\cat\) are
closed under equalizers. Thus, by
\cref{univalent-type-constructors}, \(\rmcls_{\cat}\) is indeed a
model of \(\etth_{\infty}\). Since the construction of the generic
representable map for a left exact \(\infty\)-category is functorial, the
assignment \(\cat \mapsto \rmcls_{\cat}\) is part of a functor
\[
\rmcls : \Lex_{\infty} \to \Mod(\etth_{\infty}).
\]
The model \(\rmcls_{\cat}\) is democratic as the map
\(\cat/\obj \to \cat/\terminal\) is representable for every object
\(\obj \in \cat\).
We show that the functor
\(\rmcls : \Lex_{\infty} \to \Mod^{\dem}(\etth_{\infty})\) is an
inverse of
\(\ev_{\bas} : \Mod^{\dem}(\etth_{\infty}) \to \Lex_{\infty}\). By
definition, \(\ev_{\bas} \circ \rmcls \simeq \id\). To show the other
equivalence \(\rmcls \circ \ev_{\bas} \simeq \id\), let \(\model\) be
a democratic model of \(\etth_{\infty}\). Since
\(\rmcls_{\model(\bas)}(\typeof)\) is the generic representable map,
we have a unique pullback
\[
\begin{tikzcd}
\model(\El)
\arrow[r, dotted]
\arrow[d,"\model(\typeof)"'] &
\rmcls_{\model(\bas)}(\El)
\arrow[d,"\rmcls_{\model(\bas)}(\typeof)"] \\
\model(\Ty)
\arrow[r, dotted, "\map"'] &
\rmcls_{\model(\bas)}(\Ty).
\end{tikzcd}
\]
It suffices to show that \(\map\) is an equivalence of right
fibrations over \(\model(\bas)\). Since \(\model(\typeof)\) is
univalent, the map \(\map\) is \((-1)\)-truncated
\parencite[Corollary 3.10]{gepner2017univalence}. Recall that the
objects of \(\rmcls_{\model(\bas)}(\Ty)\) are the arrows of
\(\model(\bas)\). \Cref{etth-model-display-maps-1} implies that the
essential image of \(\map\) is the class of display maps in
\(\model(\bas)\). Then, by \cref{etth-dem-model-disp}, the map
\(\map\) is essentially surjective and thus an equivalence.
\end{proof}
Consider the image of the arrow \(\typeof : \El \to \Ty\) under the
inclusion
\(\etth_{\infty} \to \Th(\etth_{\infty})^{\op} \simeq
\Mod^{\dem}(\etth_{\infty})^{\op} \simeq \Lex_{\infty}^{\op}\). For a
left exact \(\infty\)-category \(\cat\), we have
\begin{gather*}
\Th(\etth_{\infty})(\yoneda(\Ty), \IL(\rmcls_{\cat})) \simeq
\rmcls_{\cat}(\Ty)_{\terminal} \simeq \cat^{\simeq} \\
\Th(\etth_{\infty})(\yoneda(\El), \IL(\rmcls_{\cat})) \simeq
\rmcls_{\cat}(\El)_{\terminal} \simeq (\terminal/\cat)^{\simeq}.
\end{gather*}
Hence, the object \(\Ty\) corresponds to the free left exact
\(\infty\)-category \(\free{\Box}\) generated by an object \(\Box\),
the object \(\El\) corresponds to the free left exact
\(\infty\)-category \(\free{\widetilde{\Box} : \terminal \to \Box}\)
generated by an object \(\Box\) and a global section
\(\widetilde{\Box} : \terminal \to \Box\), and the arrow
\(\typeof : \El \to \Ty\) corresponds to the inclusion
\(\inc : \free{\Box} \to \free{\widetilde{\Box} : \terminal \to
\Box}\). Since
\(\yoneda(\obj)/\Th(\etth_{\infty}) \simeq \Th(\etth_{\infty}/\obj)\),
we see that the inclusion \(\inc\) becomes an exponentiable arrow in
\(\Lex_{\infty}^{\op}\). This makes \(\Lex_{\infty}^{\op}\) an
\(\infty\)-category with representable maps in which the representable
arrows are the pullbacks of \(\inc\), and \(\inc\) is a left exact
universe in \(\Lex_{\infty}^{\op}\). Since \(\Th(\etth_{\infty})\) is
the \(\omega\)-free cocompletion of \(\etth_{\infty}^{\op}\), the
universal property of \(\etth_{\infty}\) gives the following universal
property of \(\Lex_{\infty}\).
\begin{corollary}
\label{Lex-ump}
Let \(\cat\) be an \(\infty\)-category with representable
maps that has all small limits and \(\arr : \obj \to \objI\) a left exact universe in
\(\cat\). Then there exists a unique morphism of
\(\infty\)-categories with representable maps
\(\fun : \Lex_{\infty}^{\op} \to \cat\) that sends \(\inc\) to
\(\arr\) and preserves small limits.
\end{corollary}
\begin{proof}
By the definition of \(\etth_{\infty}\), we have a
unique morphism of \(\infty\)-categories with representable maps
\(\overline{\fun} : \etth_{\infty} \to \cat\) that sends \(\typeof\)
to \(\arr\), which uniquely extends to a limit-preserving functor
\(\fun : \Lex_{\infty}^{\op} \simeq \Th(\etth_{\infty})^{\op} \to
\cat\). The functor \(\fun\) sends pushforwards along \(\inc\) to
pushforwards along \(\arr\) because the pushforward functors
preserve limits and every object of \(\Th(\etth_{\infty})^{\op}\)
is a limit of objects from \(\etth_{\infty}\).
\end{proof}
\subsection{Locally cartesian closed \(\infty\)-categories}
\label{sec:locally-cart-clos}
We define an \(\infty\)-type theory \(\etth_{\infty}^{\dProd}\) whose
theories are equivalent to small locally cartesian closed
\(\infty\)-categories.
\begin{definition}
Let \(\cat\) be an \(\infty\)-category with representable maps and
\(\typeof : \El \to \Ty\) a representable arrow in \(\cat\). A
\emph{\(\dProd\)-type structure on \(\typeof\)} is a pullback square
of the form
\begin{equation}
\label{eq:10}
\begin{tikzcd}
\poly_{\typeof}\El
\arrow[r, dotted, "\lam"]
\arrow[d,"\poly_{\typeof}\typeof"'] &
\El
\arrow[d,"\typeof"] \\
\poly_{\typeof}\Ty
\arrow[r, dotted, "\dProd"'] &
\Ty.
\end{tikzcd}
\end{equation}
\end{definition}
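Informally, the square \cref{eq:10} encodes the formation and
introduction rules for dependent products, sketched in inference-rule
notation (again only an informal gloss):
\[
  \frac{A : \Ty \qquad B : A \to \Ty}{\dProd(A, B) : \Ty}
  \qquad
  \frac{b : \textstyle\prod_{a : A} B(a)}{\lam(b) : \dProd(A, B)}
\]
and the pullback property makes \(\lam\) invertible over each pair
\((A, B)\), which accounts for the \(\eta\)-rule.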
The following is straightforward.
\begin{proposition}
\label{univalent-pi-types}
Let \(\typeof : \El \to \Ty\) be a univalent representable arrow in
an \(\infty\)-category with representable maps. Then \(\dProd\)-type
structures on \(\typeof\) are unique up to contractible
choice. Moreover, there exists a \(\dProd\)-type structure on
\(\typeof\) if and only if pullbacks of \(\typeof\) are closed under
pushforwards along pullbacks of \(\typeof\).
\end{proposition}
\begin{definition}
Let \(\etth_{\infty}^{\dProd}\) denote the initial \(\infty\)-type
theory containing a left exact universe \(\typeof : \El \to \Ty\)
with a \(\dProd\)-type structure.
\end{definition}
\begin{theorem}
\label{EPi-dem-lccc}
The functor
\(\ev_{\bas} : \Mod^{\dem}(\etth_{\infty}^{\dProd}) \to \Cat_{\infty}\)
factors through the \(\infty\)-category \(\LCCC_{\infty}\) of small locally
cartesian closed \(\infty\)-categories and induces an equivalence
\[
\Mod^{\dem}(\etth_{\infty}^{\dProd}) \simeq \LCCC_{\infty}.
\]
\end{theorem}
\begin{proof}
\Cref{etth-representable-arrows} holds for
\(\etth_{\infty}^{\dProd}\): an arrow in \(\etth_{\infty}^{\dProd}\)
is representable if and only if it is a pullback of \(\typeof\). It
follows from this that the restriction of a democratic model of
\(\etth_{\infty}^{\dProd}\) along the inclusion
\(\etth_{\infty} \to \etth_{\infty}^{\dProd}\) is a democratic model
of \(\etth_{\infty}\). Thus, by \cref{etth-dem-lex}, it suffices to
show that the composite
\(\Mod^{\dem}(\etth_{\infty}^{\dProd}) \to
\Mod^{\dem}(\etth_{\infty}) \overset{\ev_{\bas}}{\longrightarrow}
  \Lex_{\infty}\) factors through \(\LCCC_{\infty}\) and gives rise to a
pullback square
\[
\begin{tikzcd}
\Mod^{\dem}(\etth_{\infty}^{\dProd})
\arrow[r,dotted]
\arrow[d] &
\LCCC_{\infty}
\arrow[d] \\
\Mod^{\dem}(\etth_{\infty})
\arrow[r,"\ev_{\bas}"',"\simeq"] &
\Lex_{\infty}.
\end{tikzcd}
\]
It suffices to show the following:
\begin{enumerate}
\item \label{item:-6} an object \(\model\) in
\(\Mod^{\dem}(\etth_{\infty})\) is in
\(\Mod^{\dem}(\etth_{\infty}^{\dProd})\) if and only if
\(\model(\bas)\) is in \(\LCCC_{\infty}\);
\item \label{item:-7} for objects
\(\model, \modelI \in \Mod^{\dem}(\etth_{\infty}^{\dProd})\), a
morphism \(\fun : \model \to \modelI\) in
\(\Mod^{\dem}(\etth_{\infty})\) is in
\(\Mod^{\dem}(\etth_{\infty}^{\dProd})\) if and only if
\(\fun(\bas) : \model(\bas) \to \modelI(\bas)\) is in
\(\LCCC_{\infty}\).
\end{enumerate}
\Cref{item:-6}. By \cref{univalent-pi-types}, an object
\(\model \in \Mod^{\dem}(\etth_{\infty})\) is in
\(\Mod^{\dem}(\etth_{\infty}^{\dProd})\) if and only if
representable maps of right fibrations over \(\model(\bas)\) are
  closed under pushforwards along representable maps. This is
  equivalent to the \(\infty\)-category \(\model(\bas)\) being
  locally cartesian closed.
\Cref{item:-7}. A morphism \(\fun : \model \to \modelI\) in
\(\Mod^{\dem}(\etth_{\infty})\) between objects from
\(\Mod^{\dem}(\etth_{\infty}^{\dProd})\) is in
\(\Mod^{\dem}(\etth_{\infty}^{\dProd})\) if and only if it commutes
with \(\dProd\)-type structures. Observe that
\(\model(\dProd) : \model(\poly_{\typeof}\Ty) \to \model(\Ty)\)
sends a pair of composable arrows \(\arr : \obj \to \objI\) and
\(\arrI : \objI \to \objII\) in \(\model(\bas)\) to the pushforward
\(\arrI_{*}\arr : \arrI_{*}\obj \to \objII\). Thus, \(\fun\)
commutes with \(\dProd\)-type structures if and only if
\(\fun(\bas) : \model(\bas) \to \modelI(\bas)\) commutes with
pushforwards.
\end{proof}
\subsection{\(\infty\)-type theories}
\label{sec:infty-type-theories-1}
We define an \(\infty\)-type theory \(\tthR_{\infty}\) whose theories
are equivalent to \(\infty\)-type theories.
\begin{definition}
Let \(\typeof_{1} : \El_{1} \to \Ty_{1}\),
\(\typeof_{2} : \El_{2} \to \Ty_{2}\) and
\(\typeof_{3} : \El_{3} \to \Ty_{3}\) be representable arrows in an
\(\infty\)-category with representable maps. A
\emph{\((\typeof_{1}, \typeof_{2}, \typeof_{3})\)-\(\dProd\)-type
structure} is a pullback of the form
\begin{equation}
\label{eq:11}
\begin{tikzcd}
\poly_{\typeof_{1}}\El_{2}
\arrow[r, dotted, "\lam"]
\arrow[d,"\poly_{\typeof_{1}}\typeof_{2}"'] &
\El_{3}
\arrow[d,"\typeof_{3}"] \\
\poly_{\typeof_{1}}\Ty_{2}
\arrow[r, dotted, "\dProd"'] &
\Ty_{3}.
\end{tikzcd}
\end{equation}
Note that if \(\typeof_{3}\) is univalent, then
\((\typeof_{1}, \typeof_{2}, \typeof_{3})\)-\(\dProd\)-type
  structures are unique up to contractible choice, and there exists a
\((\typeof_{1}, \typeof_{2}, \typeof_{3})\)-\(\dProd\)-type
structure if and only if the pushforward of a pullback of
\(\typeof_{2}\) along a pullback of \(\typeof_{1}\) is a pullback of
\(\typeof_{3}\).
\end{definition}
\begin{definition}
We denote by \(\tthR_{\infty}\) the initial \(\infty\)-type theory
containing the following data:
\begin{itemize}
\item a left exact universe \(\typeof : \El \to \Ty\);
\item a \((-1)\)-truncated arrow \(\Rep \hookrightarrow \Ty\). We
denote by \(\typeof_{\Rep}\) the pullback of \(\typeof\) along the
inclusion \(\Rep \hookrightarrow \Ty\);
\item a \(\Unit\)-type structure and a \(\dSum\)-type
structure on \(\typeof_{\Rep}\);
\item a \((\typeof_{\Rep}, \typeof, \typeof)\)-\(\dProd\)-type
structure.
\end{itemize}
Note that the inclusion \(\Rep \hookrightarrow \Ty\) automatically
commutes with \(\Unit\)-type structures and \(\dSum\)-type
structures because of univalence.
\end{definition}
\begin{definition}
Let \(\model\) be a model of \(\tthR_{\infty}\). We say an arrow in
\(\model(\bas)\) is \emph{representable} if it is a context
comprehension with respect to \(\typeof_{\Rep}\). Using the
\((\typeof_{\Rep}, \typeof, \typeof)\)-\(\dProd\)-type structure, we
see that the pushforward of a display map along a representable map
exists and is a display map. In particular, if \(\model\) is
democratic, then \(\model(\bas)\) is an \(\infty\)-type theory and,
for any morphism \(\fun : \model \to \modelI\) between democratic
models, \(\fun(\bas) : \model(\bas) \to \modelI(\bas)\) is a
morphism of \(\infty\)-type theories. Hence, we have a functor
\[
    \ev_{\bas} : \Mod^{\dem}(\tthR_{\infty}) \to \TTh_{\infty}.
\]
\end{definition}
\begin{theorem}
\label{tthR-dem-TTh}
The functor
\(\ev_{\bas} : \Mod^{\dem}(\tthR_{\infty}) \to \TTh_{\infty}\) is an
equivalence.
\end{theorem}
\begin{proof}
Similar to \cref{etth-dem-lex}. For an \(\infty\)-type theory
\(\cat\), the representable map classifier \(\rmcls_{\cat}(\Ty)\)
has the full subfibration
\(\rmcls_{\cat}(\Rep) \subset \rmcls_{\cat}(\Ty)\) spanned by the
representable arrows in \(\cat\), which defines a democratic model
of \(\tthR_{\infty}\).
\end{proof}
\section{Internal languages for left exact $\infty$-categories}
\label{sec:intern-lang-left}
In this section, we prove \citeauthor{kapulkin2018homotopy}'s
conjecture that the \(\infty\)-category of small left exact
\(\infty\)-categories is a localization of the category of theories
over Martin-L{\"o}f type theory with intensional identity types
\parencite{kapulkin2018homotopy}.
We first introduce a structure of \emph{intensional identity types} in
the context of \(\infty\)-type theory.
\begin{definition}
Let \(\cat\) be an
\(\infty\)-category with representable maps and \(\typeof : \El \to
\Ty\) a representable arrow in \(\cat\). An \emph{\(\Id^{+}\)-type
structure on \(\typeof\)} is a commutative square of the form
\begin{equation}
\label{eq:16}
\begin{tikzcd}
\El
\arrow[r, dotted ,"\refl"]
\arrow[d, "\diagonal"'] &
\El
\arrow[d, "\typeof"] \\
\El \times_{\Ty} \El
\arrow[r, dotted, "\Id"'] &
\Ty
\end{tikzcd}
\end{equation}
equipped with a section \(\elim_{\Id^{+}}\) of the induced arrow
\[
(\refl^{*}, \typeof_{*}) : (\Id^{*}\El \To_{\Ty} \Ty^{*} \El) \to
(\El \To_{\Ty} \Ty^{*} \El) \times_{(\El \To_{\Ty} \Ty^{*} \Ty)}
(\Id^{*}\El \To_{\Ty} \Ty^{*} \Ty),
\]
where \(\To_{\Ty}\) is the exponential in the slice \(\cat /
\Ty\).
\end{definition}
The codomain of the arrow \((\refl^{*}, \typeof_{*})\) classifies
\emph{lifting problems} for \(\refl\) against \(\typeof\), and the
section \(\elim_{\Id^{+}}\) can be regarded as a \emph{uniform
solution} to these lifting problems. See
\parencite{awodey2009homotopy,awodey2018natural,kapulkin2021simplicial}
for how this definition is related to syntactically presented
intensional identity types. We note that for an \(\Id\)-type structure
\((\Id, \refl)\), the arrow \((\refl^{*}, \typeof_{*})\) is
invertible, and thus any \(\Id\)-type structure is uniquely extended
to an \(\Id^{+}\)-type structure.
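Spelled out, the codomain of \((\refl^{*}, \typeof_{*})\) classifies
commutative squares of the form below, that is, lifting problems for
\(\refl : \El \to \Id^{*}\El\) against \(\typeof\), while the domain
classifies such squares together with a diagonal filler; the section
\(\elim_{\Id^{+}}\) thus provides a filler for every lifting problem,
uniformly in the problem (this is only an informal picture; the
exponentials \(\To_{\Ty}\) internalize these squares over \(\Ty\)):
\[
  \begin{tikzcd}
    \El
    \arrow[r]
    \arrow[d, "\refl"'] &
    \El
    \arrow[d, "\typeof"] \\
    \Id^{*}\El
    \arrow[r]
    \arrow[ru, dotted] &
    \Ty
  \end{tikzcd}
\]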
Let \(\itth\) denote the \(1\)-type theory freely generated by a
representable arrow \(\typeof : \El \to \Ty\) equipped with a
\(\Unit\)-type structure, a \(\dSum\)-type structure, and an
\(\Id^{+}\)-type structure. \Textcite{kapulkin2018homotopy}
conjectured that the \(\infty\)-category \(\Lex_{\infty}\) is a
localization of the \(1\)-category \(\Th(\itth)\). Strictly speaking, they work
with contextual categories with a unit type, \(\dSum\)-types, and
intensional identity types instead of theories over \(\itth\) in our
sense, but it is straightforward to see that those contextual
categories are equivalent to democratic models of \(\itth\). They also
gave a specific functor \(\Th(\itth) \to \Lex_{\infty}\) and
conjectured that it is a localization functor. We prove their
conjecture using the theory of \(\infty\)-type theories and the
equivalence \(\Th(\etth_{\infty}) \simeq \Lex_{\infty}\).
We construct the functor
\(\Th(\itth) \to \Lex_{\infty} \simeq \Th(\etth_{\infty})\)
differently from \citeauthor{kapulkin2018homotopy}. A first attempt is
to construct a morphism between \(\itth\) and \(\etth_{\infty}\), but
this fails: since the generating representable arrow \(\typeof\) is
not univalent in \(\itth\), we do not have a morphism
\(\etth_{\infty} \to \itth\); since \(\typeof\) is not \(0\)-truncated
in \(\etth_{\infty}\), we do not have a morphism
\(\itth \to \etth_{\infty}\). We thus introduce an intermediate
\(\infty\)-type theory \(\itth_{\infty}\) defined as the free
\(\infty\)-type theory generated by the same data as \(\itth\) but without
truncatedness. Then
\(\itth\) is the universal \(1\)-type theory under \(\itth_{\infty}\),
and \(\etth_{\infty}\) is the universal \(\infty\)-type theory under
\(\itth_{\infty}\) inverting the morphisms
\(\refl : \El \to \Id^{*}\El\) and
\(|\id| : \Ty \to \intern{\Equiv}(\typeof)\). We thus have a span of
\(\infty\)-type theories
\begin{equation}
\label{eq:6}
\begin{tikzcd}
\itth &
\itth_{\infty}
\arrow[l, "\truncmap"']
\arrow[r, "\locmap"] &
\etth_{\infty}.
\end{tikzcd}
\end{equation}
Since any morphism \(\fun : \tth \to \tth'\) between \(\infty\)-type
theories induces an adjunction \(\fun_{!} \adj \fun^{*} : \Th(\tth)
\to \Th(\tth')\) as \(\Th(\tth) = \Lex(\tth, \Space)\), we have a
functor
\begin{equation}
\label{eq:7}
\begin{tikzcd}
\Th(\itth)
\arrow[r, "\truncmap^{*}"] &
\Th(\itth_{\infty})
\arrow[r, "\locmap_{!}"] &
\Th(\etth_{\infty}).
\end{tikzcd}
\end{equation}
We define the \emph{weak equivalences} in \(\Th(\itth)\) to be the
morphisms inverted by the functor \(\locmap_{!} \truncmap^{*}\) and
write \(\Loc(\Th(\itth))\) for the localization by the weak
equivalences.
\begin{theorem}
\label{itth-etth-infty}
The functor \(\locmap_{!} \truncmap^{*} : \Th(\itth) \to
\Th(\etth_{\infty})\) induces an equivalence of
\(\infty\)-categories
\[
\Loc(\Th(\itth)) \simeq \Th(\etth_{\infty}).
\]
Moreover, the composite \(
\begin{tikzcd}
\Th(\itth)
\arrow[r, "\locmap_{!} \truncmap^{*}"] &
\Th(\etth_{\infty})
\arrow[r, "\simeq"] &
\Lex_{\infty}
\end{tikzcd}
\) coincides with the functor considered by \textcite[Conjecture
3.7]{kapulkin2018homotopy}.
\end{theorem}
\begin{remark}
The construction of the functor
\(\locmap_{!} \truncmap^{*} : \Th(\itth) \to \Th(\etth_{\infty})\)
is easily generalized to extensions with type-theoretic structures
such as \(\dProd\)-types, (higher) inductive types, and
universes. For example, if we extend \(\itth\) with
\(\dProd\)-types, then we have a span
\[
\begin{tikzcd}
\itth^{\dProd} &
\itth_{\infty}^{\dProd}
\arrow[l, "\truncmap"']
\arrow[r, "\locmap"] &
\etth_{\infty}^{\dProd}
\end{tikzcd}
\]
by extending \(\itth_{\infty}\) with \(\dProd\)-types. We expect
that similar results to \cref{itth-etth-infty} can be proved for a
wide range of extensions of \(\itth\), which is left as future
work. See \cref{sec:generalizations} for discussion.
\end{remark}
\subsection{Proof of the theorem}
\label{sec:proof-theorem}
This subsection is devoted to the proof of
\cref{itth-etth-infty}. We use \citeauthor{cisinski2019higher}'s
results on localizations of \(\infty\)-categories \parencite[Chapter
7]{cisinski2019higher}. We first give the category \(\Th(\itth)\)
the structure of a \emph{category with weak equivalences and
cofibrations} (we recall the definition below) and show that the
functor \(\locmap_{!} \truncmap^{*}\) is \emph{right exact}. We then
show that \(\locmap_{!} \truncmap^{*}\) satisfies the \emph{left
approximation property} (also recalled below), which implies that the
induced functor on localizations is an equivalence.
\begin{definition}
\label{cof-cat}
A \emph{category with weak equivalences and cofibrations} is a
category \(\cat\) equipped with two classes of arrows called
\emph{weak equivalences} and \emph{cofibrations} satisfying the
conditions below. An object \(\obj \in \cat\) is \emph{cofibrant} if
the arrow \(\initial \to \obj\) is a cofibration. An arrow is a
\emph{trivial cofibration} if it is both a weak equivalence and a
cofibration.
\begin{enumerate}
\item \label{item:cof-cat-1} \(\cat\) has an initial object.
\item \label{item:cof-cat-2} All the identities are trivial cofibrations, and weak
equivalences and cofibrations are closed under composition.
\item \label{item:cof-cat-3} The weak equivalences satisfy the \emph{2-out-of-3} property:
if \(\arr\) and \(\arrI\) are a composable pair of arrows and if
two of \(\arr\), \(\arrI\), and \(\arrI \arr\) are weak
    equivalences, then so is the third.
\item \label{item:cof-cat-4} (Trivial) cofibrations are stable under pushouts along
arbitrary arrows between cofibrant objects: if
\(\obj, \obj' \in \cat\) are cofibrant objects,
\(\cof : \obj \to \objI\) is a (trivial) cofibration, and
\(\arr : \obj \to \obj'\) is an arbitrary arrow, then the pushout
\(\arr_{!} \objI\) exists and the arrow
\(\obj' \to \arr_{!} \objI\) is a (trivial) cofibration.
\item \label{item:cof-cat-5} Any arrow \(\arr : \obj \to \objI\) with cofibrant domain
factors into a cofibration \(\obj \to \objI'\) followed by a weak
equivalence \(\objI' \to \objI\).
\end{enumerate}
\end{definition}
\begin{definition}
Let \(\cat\) be a category with weak equivalences and cofibrations
and \(\catI\) an \(\infty\)-category with finite colimits. A functor
\(\fun : \cat \to \catI\) is \emph{right exact} if it sends trivial
cofibrations between cofibrant objects to invertible arrows and
preserves initial objects and pushouts of cofibrations along arrows
between cofibrant objects. A right exact functor \(\fun : \cat \to
\catI\) has the \emph{left approximation property} if the following
conditions hold:
\begin{enumerate}
\item an arrow in \(\cat\) is a weak equivalence if and only if it
becomes invertible in \(\catI\);
\item for any cofibrant object \(\obj \in \cat\) and any arrow
\(\arr : \fun(\obj) \to \objI\) in \(\catI\), there exists an
arrow \(\arr' : \obj \to \objI'\) in \(\cat\) such that
\(\fun(\objI') \simeq \objI\) under \(\fun(\obj)\).
\end{enumerate}
\end{definition}
\begin{proposition}
\label{derived-equiv}
Any right exact functor \(\fun : \cat \to \catI\) with the left
approximation property induces an equivalence
\(\Loc(\cat) \simeq \catI\).
\end{proposition}
\begin{proof}
By \parencite[Proposition 7.6.15]{cisinski2019higher}.
\end{proof}
Our first task will be to show that \(\Th(\itth)\) admits the structure of a category with weak equivalences and cofibrations. We have already defined the weak equivalences in \(\Th(\itth)\) as those
morphisms inverted by \(\locmap_{!} \truncmap^{*}\). We define the
cofibrations in \(\Th(\itth)\) as follows. Recall that
\(\yoneda : \itth^{\op} \to \Th(\itth)\) is the Yoneda embedding and
\(\poly_{\typeof} : \itth \to \itth\) is the polynomial functor
associated with \(\typeof : \El \to \Ty\).
\begin{definition}
The \emph{generating cofibrations in \(\Th(\itth)\)} are the
following morphisms:
\begin{itemize}
\item \(\yoneda (\poly_{\typeof}^{\nat}(\terminal)) \to \yoneda
(\poly_{\typeof}^{\nat}(\Ty))\) for \(\nat \ge 0\);
\item \(\yoneda (\poly_{\typeof}^{\nat}(\typeof)) : \yoneda
(\poly_{\typeof}^{\nat}(\Ty)) \to \yoneda
(\poly_{\typeof}^{\nat}(\El))\) for \(\nat \ge 0\).
\end{itemize}
The class of \emph{cofibrations in \(\Th(\itth)\)} is the closure of
the generating cofibrations under retracts, pushouts along arbitrary
morphisms, and transfinite composition. Cofibrations in
\(\Th(\itth_{\infty})\) and \(\Th(\etth_{\infty})\) are defined in
the same way. Note that the functors \(\truncmap_{!}\) and
\(\locmap_{!}\) preserve generating cofibrations.
\end{definition}
\begin{remark}
\label{remark-itth-cof}
Our choice of generating cofibrations in \(\Th(\itth)\) coincides
with the choice by \textcite{kapulkin2018homotopy}. That is,
\(\yoneda (\poly_{\typeof}^{\nat}(\terminal))\) is the theory freely
generated by a context of length \(\nat\),
\(\yoneda (\poly_{\typeof}^{\nat}(\Ty))\) is the theory freely
generated by a type over a context of length \(\nat\), and
\(\yoneda (\poly_{\typeof}^{\nat}(\El))\) is the theory freely
generated by a term over a context of length \(\nat\). This is
verified as follows. Let \(\theory\) be an \(\itth\)-theory and let
\(\model\) be the democratic model of \(\itth\) corresponding to
\(\theory\) via the equivalence
\(\Th(\itth) \simeq \Mod^{\dem}(\itth)\). By construction, a
  morphism \(\yoneda \obj \to \theory\) corresponds to a global section
\(\model(\bas) \to \model(\obj)\) for any object \(\obj \in
\itth\). Then, by the universal property of \(\poly_{\typeof}\), a
morphism
\(\ctx : \yoneda (\poly_{\typeof}^{\nat}(\terminal)) \to \theory\)
corresponds to a list of sections
\begin{equation}
\label{eq:13}
\begin{split}
\ctx_{1}
&: \model(\bas) / \{\ctx_{0}\} \to \model(\Ty) \\
\ctx_{2}
&: \model(\bas) / \{\ctx_{1}\} \to \model(\Ty) \\
\vdots \\
\ctx_{\nat}
&: \model(\bas) / \{\ctx_{\nat - 1}\} \to \model(\Ty)
\end{split}
\end{equation}
where \(\{\ctx_{0}\} = \terminal\) and
\(\model(\bas) / \{\ctx_{\idx + 1}\} \simeq \ctx_{\idx}^{*}
\model(\El)\). Since we think of sections of \(\model(\Ty)\) as
types, such a list of sections can be regarded as a context of
length \(\nat\). Under this identification, an extension
\(\yoneda (\poly_{\typeof}^{\nat}(\Ty)) \to \theory\) of \(\ctx\)
corresponds to a section
\(\model(\bas) / \{\ctx_{\nat}\} \to \model(\Ty)\), that is, a type
over \(\ctx\). Similarly, an extension
\(\yoneda (\poly_{\typeof}^{\nat}(\El)) \to \theory\) of \(\ctx\)
corresponds to a term
\(\model(\bas) / \{\ctx_{\nat}\} \to \model(\El)\), that is, a term
over \(\ctx\). Hence, morphisms from
\(\yoneda (\poly_{\typeof}^{\nat}(\terminal))\),
\(\yoneda (\poly_{\typeof}^{\nat}(\Ty))\), and
\(\yoneda (\poly_{\typeof}^{\nat}(\El))\) correspond to contexts,
types, and terms, respectively. In this view, a cofibration in
\(\Th(\itth)\) is an extension by types and terms, but without any
equation. In particular, cofibrant \(\itth\)-theories are those
freely generated by types and terms.
\end{remark}
\begin{theorem}
\label{itth-cof-cat}
The classes of cofibrations and weak equivalences endow \(\Th(\itth)\) with the structure of a category with weak equivalences and cofibrations.
\end{theorem}
By definition, \(\Th(\itth)\) satisfies
\cref{item:cof-cat-1,item:cof-cat-2,item:cof-cat-3} of \cref{cof-cat},
and cofibrations are stable under arbitrary pushouts. To make
\(\Th(\itth)\) a category with weak equivalences and cofibrations, it
remains to verify the stability of trivial cofibrations under pushouts
and the factorization axiom. The former is true if the functor
\(\locmap_{!} \truncmap^{*} : \Th(\itth) \to \Th(\etth_{\infty})\)
preserves initial objects and pushouts of cofibrations along morphisms
between cofibrant objects. Note that this also implies that the
functor \(\locmap_{!} \truncmap^{*}\) must be right exact. Since
\(\locmap_{!}\) preserves all colimits, it suffices to show that
\(\truncmap^{*}\) has this property. For the latter, we introduce the notion of \emph{trivial fibration}.
\begin{definition}
A morphism \(\map : \theory \to \theoryI\) in \(\Th(\itth)\) is a
\emph{trivial fibration} if it has the right lifting property
against cofibrations: for any commutative square
\[
\begin{tikzcd}
\sh
\arrow[r, "\mapI"]
\arrow[d, "\cof"'] &
\theory
\arrow[d, "\map"] \\
\shI
\arrow[r, "\mapII"'] &
\theoryI
\end{tikzcd}
\]
in which \(\cof\) is a cofibration, there exists a morphism
\(\mapIII : \shI \to \theory\) such that \(\map \mapIII = \mapII\)
and \(\mapIII \cof = \mapI\). Trivial fibrations in
\(\Th(\itth_{\infty})\) and \(\Th(\etth_{\infty})\) are defined in
the same way. By a standard argument in model category theory,
\(\map\) is a trivial fibration if and only if it has the right
lifting property against generating cofibrations.
\end{definition}
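In view of \cref{remark-itth-cof}, the lifting property against the
generating cofibrations admits a concrete, if informal, reading. A
commutative square
\[
  \begin{tikzcd}
    \yoneda (\poly_{\typeof}^{\nat}(\terminal))
    \arrow[r]
    \arrow[d] &
    \theory
    \arrow[d, "\map"] \\
    \yoneda (\poly_{\typeof}^{\nat}(\Ty))
    \arrow[r]
    \arrow[ru, dotted] &
    \theoryI
  \end{tikzcd}
\]
exhibits a context of \(\theory\) together with a type in \(\theoryI\)
over its image, and a lift is a type in \(\theory\) over the given
context whose image is the given type; the other generating
cofibrations say the same with terms in place of types. Thus \(\map\)
is a trivial fibration precisely when types and terms can be lifted
along \(\map\) in this sense.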
By the \emph{small object argument}, we know that any morphism in
\(\Th(\itth)\) factors into a cofibration followed by a trivial
fibration. Thus, to show that \(\Th(\itth)\) satisfies the
factorization axiom, it is enough to show that trivial fibrations are
inverted by \(\locmap_{!} \truncmap^{*}\). In conclusion,
\cref{itth-cof-cat} will follow from the following two propositions.
\begin{proposition}
\label{itth-cof-obj}
The functor \(\truncmap^{*} : \Th(\itth) \to \Th(\itth_{\infty})\)
preserves initial objects and pushouts of cofibrations along
morphisms between cofibrant objects.
\end{proposition}
\begin{proposition}
\label{itth-triv-fib}
Trivial fibrations in \(\Th(\itth)\) are inverted by the functor
\(\locmap_{!} \truncmap^{*} : \Th(\itth) \to
\Th(\etth_{\infty})\).
\end{proposition}
We begin by proving \cref{itth-triv-fib}. It can be broken into the
following two lemmas.
\begin{lemma}
\label{etth-triv-fib}
In \(\Th(\etth_{\infty})\), the trivial fibrations are precisely the
invertible morphisms. Equivalently, all the morphisms are
cofibrations.
\end{lemma}
\begin{lemma}
\label{itth-unit-triv-fib}
For any \(\itth\)-theory \(\theory\), the unit
\(\truncmap^{*} \theory \to \locmap^{*} \locmap_{!} \truncmap^{*}
\theory\) is a trivial fibration.
\end{lemma}
\begin{proof}[Proof of \cref{itth-triv-fib}]
Let \(\map : \theory \to \theoryI\) be a trivial fibration in
\(\Th(\itth)\). Consider the naturality square
\[
\begin{tikzcd}
\truncmap^{*} \theory
\arrow[r, "\unit_{\truncmap^{*} \theory}"]
\arrow[d, "\truncmap^{*} \map"'] &
\locmap^{*} \locmap_{!} \truncmap^{*} \theory
\arrow[d, "\locmap^{*} \locmap_{!} \truncmap^{*} \map"] \\
\truncmap^{*} \theoryI
\arrow[r, "\unit_{\truncmap^{*} \theoryI}"'] &
\locmap^{*} \locmap_{!} \truncmap^{*} \theoryI
\end{tikzcd}
\]
where \(\unit\) is the unit of the adjunction
\(\locmap_{!} \adj \locmap^{*}\). By an adjoint argument,
\(\truncmap^{*} \map\) is a trivial fibration. By
\cref{itth-unit-triv-fib}, \(\unit_{\truncmap^{*} \theory}\) and
\(\unit_{\truncmap^{*} \theoryI}\) are trivial fibrations. Since the
domains of the generating cofibrations are cofibrant, it follows
that \(\locmap^{*} \locmap_{!} \truncmap^{*} \map\) is a trivial
fibration. Then, again by an adjoint argument,
\(\locmap_{!} \truncmap^{*} \map\) is a trivial fibration and thus
invertible by \cref{etth-triv-fib}.
\end{proof}
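\begin{remark}
Let us spell out the adjoint arguments used twice in the preceding
proof; both are instances of the transposition of lifting problems
across an adjunction. By the adjunction
\(\truncmap_{!} \adj \truncmap^{*}\), a lifting problem as on the left
below admits a solution if and only if its transpose on the right
does:
\[
\begin{tikzcd}
\truncmap_{!} \sh
\arrow[r]
\arrow[d, "\truncmap_{!} \cof"'] &
\theory
\arrow[d, "\map"] \\
\truncmap_{!} \shI
\arrow[r] &
\theoryI
\end{tikzcd}
\qquad
\begin{tikzcd}
\sh
\arrow[r]
\arrow[d, "\cof"'] &
\truncmap^{*} \theory
\arrow[d, "\truncmap^{*} \map"] \\
\shI
\arrow[r] &
\truncmap^{*} \theoryI
\end{tikzcd}
\]
Since the generating cofibrations of the categories of theories
involved are given by the same formulas (cf.\ \cref{remark-itth-cof}),
the left adjoints \(\truncmap_{!}\) and \(\locmap_{!}\) send
generating cofibrations to generating cofibrations. Consequently,
\(\truncmap^{*} \map\) is a trivial fibration whenever \(\map\) is,
and \(\locmap_{!} \truncmap^{*} \map\) is a trivial fibration as soon
as \(\locmap^{*} \locmap_{!} \truncmap^{*} \map\) is.
\end{remark}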
\Cref{etth-triv-fib} is straightforward.
\begin{proof}[Proof of \cref{etth-triv-fib}]
Since the representable arrow \(\typeof\) in \(\etth_{\infty}\) has
an \(\Id\)-type structure, the diagonal
\(\El \to \El \times_{\Ty} \El\) is a pullback of \(\typeof\). This
implies that the codiagonal
\(\yoneda (\poly_{\typeof}^{\nat}(\El \times_{\Ty} \El)) \to \yoneda
(\poly_{\typeof}^{\nat}(\El))\) in \(\Th(\etth_{\infty})\) is a
cofibration for \(\nat \ge 0\). Similarly, the univalence of
\(\typeof\) implies that the codiagonal
\(\yoneda (\poly_{\typeof}^{\nat}(\Ty \times \Ty)) \to \yoneda
(\poly_{\typeof}^{\nat}(\Ty))\) in \(\Th(\etth_{\infty})\) is a cofibration
for \(\nat \ge 0\). Hence, for any generating cofibration
\(\cof : \sh \to \shI\) in \(\Th(\etth_{\infty})\), the codiagonal
\(\shI +_{\sh} \shI \to \shI\) is a cofibration, and thus
cofibrations in \(\Th(\etth_{\infty})\) are closed under
codiagonal. It then follows that cofibrations in
\(\Th(\etth_{\infty})\) have the right cancellation property: for a
composable pair of morphisms \(\map\) and \(\mapI\), if \(\map\) and
\(\mapI \map\) are cofibrations, then so is \(\mapI\). Therefore, it
suffices to show that all the objects of \(\Th(\etth_{\infty})\) are
cofibrant. One can show that the objects
\(\yoneda (\poly_{\typeof}^{\nat}(\Ty))\) and
\(\yoneda (\poly_{\typeof}^{\nat}(\El))\) for \(\nat \ge 0\) generate
\(\Th(\etth_{\infty})\) under colimits. Since they are cofibrant,
all the objects are cofibrant.
\end{proof}
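\begin{remark}
The right cancellation property used in the preceding proof is an
instance of a standard closure argument, which we sketch in our
notation. Suppose \(\map : \sh \to \shI\) and
\(\mapI : \shI \to \shX\) are composable morphisms such that \(\map\)
and \(\mapI \map\) are cofibrations. Then \(\mapI\) factors as
\[
\shI \longrightarrow \shI +_{\sh} \shX \longrightarrow \shX,
\]
where the first map is the pushout of the cofibration \(\mapI \map\)
along \(\map\), and the second map is the pushout of the codiagonal
\(\shI +_{\sh} \shI \to \shI\) along the morphism
\(\shI +_{\sh} \shI \to \shI +_{\sh} \shX\) induced by
\(\mapI\). Since cofibrations are closed under pushout and, as shown
above, under codiagonal, both maps are cofibrations, and hence so is
\(\mapI\).
\end{remark}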
For \cref{itth-cof-obj,itth-unit-triv-fib}, we need an analysis of the
functors \(\truncmap^{*}\) and \(\locmap_{!}\). We work with democratic
models instead of theories via the equivalence
\(\Mod^{\dem}(\tth) \simeq \Th(\tth)\)
(\cref{theory-model-correspondence}). We first note that the functors
\(\truncmap^{*} : \Mod(\itth) \to \Mod(\itth_{\infty})\) and
\(\locmap^{*} : \Mod(\etth_{\infty}) \to \Mod(\itth_{\infty})\) are
fully faithful. More precisely, the models of \(\itth\) are the models
\(\model\) of \(\itth_{\infty}\) such that \(\model(\Ty)\) and
\(\model(\El)\) are \(0\)-truncated objects in
\(\RFib_{\model(\bas)}\), and the models of \(\etth_{\infty}\) are the
models \(\model\) of \(\itth_{\infty}\) such that the map
\(\model(\refl) : \model(\El) \to \model(\Id^{*} \El)\) is invertible
and the representable map \(\model(\typeof)\) is univalent. It is
also clear from this description that the functors \(\truncmap^{*}\)
and \(\locmap^{*}\) preserve democratic models. Hence, we may identify
the functors \(\truncmap^{*} : \Th(\itth) \to \Th(\itth_{\infty})\)
and \(\locmap^{*} : \Th(\etth_{\infty}) \to \Th(\itth_{\infty})\) with
the inclusions
\(\Mod^{\dem}(\itth) \subset \Mod^{\dem}(\itth_{\infty})\) and
\(\Mod^{\dem}(\etth_{\infty}) \subset \Mod^{\dem}(\itth_{\infty})\),
respectively.
To prove \cref{itth-unit-triv-fib}, we concretely describe
\(\locmap_{!} \model \in \Mod^{\dem}(\etth_{\infty})\) for a
democratic model \(\model\) of \(\itth\). By a standard argument in
the categorical semantics of homotopy type theory
\parencite[e.g.][Theorem 3.2.5]{avigad2015homotopy}, the base category
\(\model(\bas)\) is a category of fibrant objects whose fibrations are
the display maps and whose weak equivalences are homotopy equivalences
defined by the identity types. By the result of
\textcite{szumilo2014two}, the localization \(\Loc(\model(\bas))\) has
finite limits, and the localization functor
\(\model(\bas) \to \Loc(\model(\bas))\) is left exact. The
construction \(\model \mapsto \Loc(\model(\bas))\) is the one
considered by \textcite[Conjecture 3.7]{kapulkin2018homotopy}, and
thus the following lemma implies the second assertion of
\cref{itth-etth-infty}.
\begin{lemma}
\label{itth-model-localization}
The functor
\[
\locmap_{!} : \Mod^{\dem}(\itth) \subset
\Mod^{\dem}(\itth_{\infty}) \to \Mod^{\dem}(\etth_{\infty}) \simeq
\Lex_{\infty}
\]
is naturally equivalent to the functor
\(\model \mapsto \Loc(\model(\bas))\).
\end{lemma}
\begin{proof}
Let \(\cat\) be a left exact \(\infty\)-category and let
\(\rmcls_{\cat}\) be the corresponding democratic model of
\(\etth_{\infty}\). We construct an equivalence of spaces
\begin{equation}
\label{eq:12}
\Mod^{\dem}(\itth_{\infty})(\model, \rmcls_{\cat}) \simeq
\Lex_{\infty}(\model(\bas), \cat).
\end{equation}
Then we see that \(\locmap_{!} \model\) and \(\Loc(\model(\bas))\)
have the same universal property. Given a morphism
\(\fun : \model \to \rmcls_{\cat}\) of models of \(\itth_{\infty}\),
since the weak equivalences in \(\model(\bas)\) are defined by the
intensional identity types, and since the intensional identity types
become extensional ones in \(\cat\), the underlying functor
\(\fun_{\bas} : \model(\bas) \to \cat\) is left exact. This defines
one direction of \cref{eq:12}. For the other, let
\(\fun_{\bas} : \model(\bas) \to \cat\) be a left exact
functor. Recall that \(\rmcls_{\cat}(\Ty)\) is the right fibration
of arrows in \(\cat\) and that \(\rmcls_{\cat}(\El)\) is the right
fibration of sections in \(\cat\). Then we can construct maps
\(\fun_{\Ty} : \model(\Ty) \to \rmcls_{\cat}(\Ty)\) and
\(\fun_{\El} : \model(\El) \to \rmcls_{\cat}(\El)\) of right
fibrations over \(\fun_{\bas}\) by context comprehension followed by
\(\fun_{\bas}\). It is straightforward to see that these define a
morphism \(\model \to \rmcls_{\cat}\) of democratic models of
\(\itth_{\infty}\). The two constructions are mutually
inverse.
\end{proof}
We characterize trivial fibrations of democratic models of
\(\itth_{\infty}\) in the same way as \textcite{kapulkin2018homotopy}.
\begin{lemma}
A morphism \(\fun : \model \to \modelI\) in
\(\Mod^{\dem}(\itth_{\infty})\) is a trivial fibration if and only
if the following conditions are satisfied:
\begin{description}
\item[Type lifting] for any object \(\ctx \in \model(\bas)\)
and any section
\(\sh : \modelI(\bas) / \fun(\ctx) \to \modelI(\Ty)\), there
exists a section \(\sh' : \model(\bas) / \ctx \to \model(\Ty)\)
such that \(\fun(\sh') \simeq \sh\);
\item[Term lifting] for any object \(\ctx \in \model(\bas)\), any
section \(\sh : \model(\bas) / \ctx \to \model(\Ty)\), and any
section \(\el : \modelI(\bas) / \fun(\ctx) \to \modelI(\El)\) over
\(\fun(\sh)\), there exists a section
\(\el' : \model(\bas) / \ctx \to \model(\El)\) over \(\sh\) such
that \(\fun(\el') \simeq \el\) over \(\fun(\sh)\).
\end{description}
\end{lemma}
\begin{proof}
Let \(\theory\) be the \(\itth_{\infty}\)-theory corresponding to
\(\model\), that is, \(\theory(\obj)\) is the space of global
sections of \(\model(\obj)\) for \(\obj \in \itth_{\infty}\). As we
saw in \cref{remark-itth-cof}, a morphism
\(\ctx : \yoneda (\poly_{\typeof}^{\nat}(\terminal)) \to \theory\)
corresponds to a list of sections \labelcref{eq:13}, and extensions
\(\yoneda (\poly_{\typeof}^{\nat}(\Ty)) \to \theory\) and
\(\yoneda (\poly_{\typeof}^{\nat}(\El)) \to \theory\) of \(\ctx\)
correspond to sections
\(\model(\bas) / \{\ctx_{\nat}\} \to \model(\Ty)\) and
\(\model(\bas) / \{\ctx_{\nat}\} \to \model(\El)\),
respectively. Then, type lifting and term lifting imply the right
lifting property against the generating cofibrations. The converse
is also true because, since \(\model\) is democratic, any object of
\(\model(\bas)\) is of the form \(\{\ctx_{\nat}\}\) for some list of
sections \labelcref{eq:13}.
\end{proof}
\begin{proof}[Proof of \cref{itth-unit-triv-fib}]
We check type lifting and term lifting along the unit
\(\unit : \model \to \locmap_{!} \model \simeq \Loc(\model(\bas))\)
for a democratic model \(\model\) of \(\itth\). Type lifting is
immediate because any object in \(\Loc(\model(\bas)) / \unit(\ctx)\)
is represented by a fibration \(\sh \to \ctx\) in
\(\model(\bas)\). For term lifting, we also need the fact that
\(\model(\bas)\) is not only a category of fibrant objects but also
a \emph{tribe} \parencite{joyal2017clans} and in particular a
\emph{path category} \parencite{vandenberg2018exact}. In this
special case, a section of \(\unit(\sh) \to \unit(\ctx)\) in
\(\Loc(\model(\bas))\) for a fibration \(\sh \to \ctx\) in
\(\model(\bas)\) is represented by a section in \(\model(\bas)\) by
\parencite[Corollary 2.19]{vandenberg2018exact}.
\end{proof}
This concludes the proof that trivial fibrations are weak equivalences in \(\Th(\itth)\). It remains to show \cref{itth-cof-obj}, which follows from the following theorem.
\begin{theorem}
\label{itth-infty-cof-obj}
Any cofibrant object of \(\Mod^{\dem}(\itth_{\infty})\) belongs to
\(\Mod^{\dem}(\itth)\).
\end{theorem}
\begin{proof}[Proof of \cref{itth-cof-obj}]
Initial objects are cofibrant, and the pushout of a cofibration
along a morphism between cofibrant objects is cofibrant. Thus, by
\cref{itth-infty-cof-obj},
\(\Mod^{\dem}(\itth) \subset \Mod^{\dem}(\itth_{\infty})\) is closed
under these colimits.
\end{proof}
\Cref{itth-infty-cof-obj} is the hardest
part. We may think of this theorem as a form of \emph{coherence
problem}. A general democratic model of \(\itth_{\infty}\) may
contain a lot of non-trivial homotopies, but \cref{itth-infty-cof-obj}
says that all the homotopies in a cofibrant democratic model of
\(\itth_{\infty}\) are trivial.
A successful approach to coherence problems in the categorical
semantics of type theory is to replace a non-split model by a split
model
\parencite{hofmann1995interpretation,lumsdaine2015local}. Following
them, we construct, given a democratic model \(\model\) of
\(\itth_{\infty}\), a democratic model \(\Sp \model\) of \(\itth\)
equipped with a trivial fibration \(\counit : \Sp \model \to
\model\). Then \cref{itth-infty-cof-obj} follows from a retract
argument.
The construction of \(\Sp \model\) crucially relies on
\citeauthor{shulman2019toposes}'s result of replacing any
(Grothendieck) \(\infty\)-topos by a well-behaved model category
called a \emph{type-theoretic model topos}
\parencite{shulman2019toposes}. Let \(\model\) be a democratic model
of \(\itth_{\infty}\). Recall that it consists of a base
\(\infty\)-category \(\model(\bas)\), a representable map
\(\model(\typeof) : \model(\El) \to \model(\Ty)\) of right fibrations
over \(\model(\bas)\), and some other structures. Since the
\(\infty\)-category \(\RFib_{\model(\bas)}\) is an \(\infty\)-topos,
it is a localization
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) of some
type-theoretic model topos \(\catX\) \parencite[Theorem
11.1]{shulman2019toposes}. Then there exists a fibration
\(\typeof_{\catX} : \El_{\catX} \to \Ty_{\catX}\) between fibrant
objects in \(\catX\) such that
\(\locmap_{\catX}(\typeof_{\catX}) \simeq \model(\typeof)\). We will
choose \(\typeof_{\catX}\) that has a \(\Unit\)-type structure, a
\(\dSum\)-type structure, and an \(\Id^{+}\)-type structure so that it
induces a model of \(\itth\).
We remind the reader that the type-theoretic model topos \(\catX\) has
nice properties by definition \parencite[Definition
6.1]{shulman2019toposes}: the underlying \(1\)-category is a
Grothendieck \(1\)-topos, the cofibrations are the monomorphisms, and
the model structure is right proper. The right properness in
particular implies that the localization functor
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) preserves
pullbacks of fibrations and pushforwards of fibrations along
fibrations used in the definitions of \(\Unit\)-type, \(\dSum\)-type,
and \(\Id^{+}\)-type structures.
\begin{lemma}
\label{lemma-1}
For any cofibration \(\cof : \sh \to \shI\) between fibrant objects
in \(\catX\) and for any fibration \(\fib : \shXI \to \shX\) in
\(\catX\), the induced map
\[
(\cof^{*}, \fib_{*}) : \shXI^{\shI} \to \shXI^{\sh}
\times_{\shX^{\sh}} \shX^{\shI}
\]
is a fibration.
\end{lemma}
\begin{proof}
By an adjoint argument, it suffices to show that for any trivial
cofibration \(\cof' : \sh' \to \shI'\), the induced map
\[
(\cof', \cof) : \sh' \times \shI \amalg_{\sh' \times \sh} \shI'
\times \sh \to \shI' \times \shI
\]
is a trivial cofibration. Since the class of cofibrations is the
class of monomorphisms in the Grothendieck \(1\)-topos \(\catX\),
the map \((\cof', \cof)\) is a cofibration. Since \(\sh\) and
\(\shI\) are fibrant and since the model structure is right proper,
the maps \(\cof' \times \sh : \sh' \times \sh \to \shI' \times \sh\)
and \(\cof' \times \shI : \sh' \times \shI \to \shI' \times \shI\)
are weak equivalences. By 2-out-of-3, the map \((\cof', \cof)\) is a
weak equivalence.
\end{proof}
\begin{lemma}
For any choice of \(\typeof_{\catX}\), there exists an
\(\Id^{+}\)-type structure on \(\typeof_{\catX}\) sent by
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) to the
\(\Id^{+}\)-type structure on \(\model(\typeof)\).
\end{lemma}
\begin{proof}
Since all the objects in \(\catX\) are cofibrant and
\(\typeof_{\catX}\) is a fibration between fibrant objects, the
commutative square \labelcref{eq:16} for \(\model(\typeof)\) can be
lifted to one for \(\typeof_{\catX}\). The map
\(\refl : \El \to \Id^{*} \El\) is a monomorphism in \(\catX\) and
thus a cofibration. Applying \cref{lemma-1} for the slice
\(\catX / \Ty\) instead of \(\catX\), we see that the induced map
\((\refl^{*}, \typeof_{*})\) is a fibration. The codomain of the map
\((\refl^{*}, \typeof_{*})\) is fibrant by the right
properness. Hence, the section of \((\refl^{*}, \typeof_{*})\) for
\(\model(\typeof)\) can be lifted to one for \(\typeof_{\catX}\).
\end{proof}
\begin{lemma}
\label{split-repl-sigma}
One can choose \(\typeof_{\catX}\) that has a \(\Unit\)-type
structure and a \(\dSum\)-type structure sent by
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) to those
structures on \(\model(\typeof)\).
\end{lemma}
\begin{proof}
A \(\Unit\)-type structure and a \(\dSum\)-type structure on
\(\typeof\) are given by pullbacks of the form
\[
\begin{tikzcd}
\dom (\typeof^{\otimes \nat})
\arrow[r, dotted, "\pair_{\nat}"]
\arrow[d, "\typeof^{\otimes \nat}"'] &
\El
\arrow[d, "\typeof"] \\
\cod (\typeof^{\otimes \nat})
\arrow[r, dotted, "\dSum_{\nat}"'] &
\Ty
\end{tikzcd}
\]
for \(\nat = 0\) and \(\nat = 2\), respectively, where
\(\typeof^{\otimes \nat}\) is the \(\nat\)-fold composition of the
polynomial \(\typeof\). Since \(\model\) is a model of
\(\itth_{\infty}\), the map \(\model(\typeof)\) is equipped with
such pullbacks in \(\RFib_{\model(\bas)}\). However, they give rise
only to \emph{homotopy} pullbacks
\begin{equation}
\label{eq:18}
\begin{tikzcd}
\dom (\typeof_{\catX}^{\otimes \nat})
\arrow[r, dotted, "\pair_{\nat}"]
\arrow[d, "\typeof_{\catX}^{\otimes \nat}"'] &
\El_{\catX}
\arrow[d, "\typeof_{\catX}"] \\
\cod (\typeof_{\catX}^{\otimes \nat})
\arrow[r, dotted, "\dSum_{\nat}"'] &
\Ty_{\catX}
\end{tikzcd}
\end{equation}
in \(\catX\), and thus
\(\typeof_{\catX}\) need not have \(\Unit\)-type and \(\dSum\)-type
structures.
The idea of fixing this issue is to replace \(\typeof_{\catX}\) by
another fibration
\(\typeof_{\catX}' : \El_{\catX}' \to \Ty_{\catX}'\) between fibrant
objects such that the pullbacks of \(\typeof_{\catX}'\) are the
homotopy pullbacks of \(\typeof_{\catX}\). Let \(\card\) be a
regular cardinal such that \(\typeof_{\catX}\) is
\(\card\)-small. By \parencite[Theorem 5.22]{shulman2019toposes},
there exists a fibration
\(\typeof_{\catX}^{\card} : \El_{\catX}^{\card} \to
\Ty_{\catX}^{\card}\) between fibrant objects that classifies
\(\card\)-small fibrations. Moreover, \(\typeof_{\catX}^{\card}\)
satisfies the univalence axiom with respect to the model
structure. Since \(\typeof_{\catX}\) is \(\card\)-small, we have a
pullback
\[
\begin{tikzcd}
\El_{\catX}
\arrow[r, dotted]
\arrow[d, "\typeof_{\catX}"'] &
\El_{\catX}^{\card}
\arrow[d, "\typeof_{\catX}^{\card}"] \\
\Ty_{\catX}
\arrow[r, dotted, "\inc"'] &
\Ty_{\catX}^{\card}.
\end{tikzcd}
\]
Factor \(\inc\) into a weak equivalence
\(\inc' : \Ty_{\catX} \to \Ty_{\catX}'\) followed by a fibration
\(\proj : \Ty_{\catX}' \to \Ty_{\catX}^{\card}\), and define
\(\typeof_{\catX}' : \El_{\catX}' \to \Ty_{\catX}'\) to be the
pullback of \(\typeof_{\catX}^{\card}\) along \(\proj\). Since
\(\typeof_{\catX}^{\card}\) satisfies the univalence axiom, we can
choose \(\Ty_{\catX}'\) such that the maps \(\sh \to \Ty_{\catX}'\)
correspond to the triples \((\shI_{1}, \shI_{2}, \map)\) consisting
of maps \(\shI_{1} : \sh \to \Ty_{\catX}\) and
\(\shI_{2} : \sh \to \Ty_{\catX}^{\card}\) and a weak equivalence
\(\map : \shI_{1}^{*} \El_{\catX} \to \shI_{2}^{*}
\El_{\catX}^{\card}\) over \(\sh\). In particular, we have a
\emph{generic} homotopy pullback from a \(\card\)-small fibration
\begin{equation}
\label{eq:17}
\begin{tikzcd}
\El_{\catX}'
\arrow[r]
\arrow[d, "\typeof_{\catX}'"'] &
\El_{\catX}
\arrow[d, "\typeof_{\catX}"] \\
\Ty_{\catX}'
\arrow[r, "\fib"'] &
\Ty_{\catX}
\end{tikzcd}
\end{equation}
in the sense that any homotopy pullback from a \(\card\)-small
fibration to \(\typeof_{\catX}\) factors into a strict pullback
followed by the homotopy pullback \labelcref{eq:17}.
We now construct \(\Unit\)-type and \(\dSum\)-type structures on
\(\typeof_{\catX}'\). There are homotopy pullbacks as in
\cref{eq:18} for \(\nat = 0\) and \(\nat = 2\) sent by
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) to the
\(\Unit\)-type and \(\dSum\)-type structures, respectively, on
\(\model(\typeof)\). Since \(\typeof_{\catX}\) is the pullback of
\(\typeof_{\catX}'\) along the weak equivalence
\(\inc' : \Ty_{\catX} \to \Ty_{\catX}'\), one can construct a
commutative square
\[
\begin{tikzcd}
\dom (\typeof_{\catX}^{\otimes \nat})
\arrow[r, dotted]
\arrow[d, "\typeof_{\catX}^{\otimes \nat}"'] &
\dom ((\typeof_{\catX}')^{\otimes \nat})
\arrow[d, "(\typeof_{\catX}')^{\otimes \nat}"] \\
\cod (\typeof_{\catX}^{\otimes \nat})
\arrow[r, dotted] &
\cod ((\typeof_{\catX}')^{\otimes \nat})
\end{tikzcd}
\]
in which the horizontal maps are weak equivalences. Then we have a
homotopy pullback from \((\typeof_{\catX}')^{\otimes \nat}\) to
\(\typeof_{\catX}\), which factors into a strict pullback followed
by the homotopy pullback \labelcref{eq:17} because the composition
of polynomials preserves \(\card\)-smallness. By construction, this
strict pullback is sent by
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) to the
\(\Unit\)-type structure on \(\model(\typeof)\) when \(\nat = 0\)
and to the \(\dSum\)-type structure on \(\model(\typeof)\) when
\(\nat = 2\).
\end{proof}
By the preceding lemmas, we can choose \(\typeof_{\catX}\) that has a
\(\Unit\)-type structure, a \(\dSum\)-type structure, and an
\(\Id^{+}\)-type structure. Then we have a morphism of
\(1\)-categories with representable maps \(\itth \to \catX\), and we
define \(\Sp \model\) to be the heart of the model of \(\itth\)
defined by the composite \(\itth \to \catX \to \RFib_{\catX}\) with
the Yoneda embedding. Concretely, the base category
\((\Sp \model)(\bas)\) is the full subcategory of \(\catX\) spanned by
the objects \(\ctx\) such that the map \(\ctx \to \terminal\) is a
composite of pullbacks of \(\typeof_{\catX}\), and the sections
\((\Sp \model)(\bas) / \ctx \to (\Sp \model)(\Ty)\) and
\((\Sp \model)(\bas) / \ctx \to (\Sp \model)(\El)\) are the maps
\(\ctx \to \Ty_{\catX}\) and \(\ctx \to \El_{\catX}\), respectively,
in \(\catX\).
Since the localization functor
\(\locmap_{\catX} : \catX \to \RFib_{\model(\bas)}\) sends
\(\typeof_{\catX}\) to the representable map \(\model(\typeof)\) and
preserves pullbacks of \(\typeof_{\catX}\) along maps between fibrant
objects, the restriction of \(\locmap_{\catX}\) to
\((\Sp \model)(\bas)\) factors through the Yoneda embedding
\(\model(\bas) \to \RFib_{\model(\bas)}\). Let
\(\counit_{\bas} : (\Sp \model)(\bas) \to \model(\bas)\) be the
induced functor.
\[
\begin{tikzcd}
(\Sp \model)(\bas)
\arrow[r, dotted, "\counit_{\bas}"]
\arrow[d, hook] &
\model(\bas)
\arrow[d, hook, "\yoneda"] \\
\catX
\arrow[r, "\locmap_{\catX}"'] &
\RFib_{\model(\bas)}
\end{tikzcd}
\]
The functor \(\locmap_{\catX}\) also induces maps
\(\counit_{\Ty} : (\Sp \model)(\Ty) \to \model(\Ty)\) and
\(\counit_{\El} : (\Sp \model)(\El) \to \model(\El)\) of right
fibrations over \(\counit_{\bas}\), and these define a morphism
\(\counit : \Sp \model \to \model\) of models of \(\itth_{\infty}\).
\begin{lemma}
The morphism \(\counit : \Sp \model \to \model\) is a trivial
fibration.
\end{lemma}
\begin{proof}
We verify type lifting and term lifting. For type lifting, let
\(\ctx \in (\Sp \model)(\bas)\) be an object and
\(\sh : \model(\bas) / \counit_{\bas}(\ctx) \to \model(\Ty)\) a
section. Since
\(\model(\bas) / \counit_{\bas}(\ctx) \simeq \locmap_{\catX}(\ctx)\)
and \(\model(\Ty) \simeq \locmap_{\catX}(\Ty_{\catX})\), the section
\(\sh\) is represented by some map \(\ctx \to \Ty_{\catX}\) in
\(\catX\), that is, a section
\((\Sp \model)(\bas) / \ctx \to (\Sp \model)(\Ty)\). Term lifting
can be checked in the same way.
\end{proof}
\begin{proof}[Proof of \cref{itth-infty-cof-obj}]
Let \(\model\) be a cofibrant democratic model of
\(\itth_{\infty}\). Then we have a section of the trivial fibration
\(\counit : \Sp \model \to \model\). Since
\(\Mod^{\dem}(\itth) \subset \Mod^{\dem}(\itth_{\infty})\) is closed
under retracts, \(\model\) belongs to \(\Mod^{\dem}(\itth)\).
\end{proof}
In conclusion, we have shown that \(\Th(\itth)\) is a category with weak equivalences and cofibrations. Moreover, \cref{itth-cof-obj} also implies that \(\locmap_{!}\truncmap^{*}\) is right exact. Thus, to show that this functor induces an equivalence after localization, it is enough to show the left approximation property. Since the first axiom is
satisfied by definition, we only have to show the second. But this is now an easy task using \cref{etth-triv-fib,itth-unit-triv-fib,itth-infty-cof-obj}.
\begin{lemma}
\label{itth-left-approximation}
For any cofibrant \(\itth\)-theory \(\theory\) and any morphism
\(\map : \locmap_{!} \truncmap^{*} \theory \to \theoryI\) in
\(\Th(\etth_{\infty})\), there exists a morphism
\(\map' : \theory \to \theoryI'\) in \(\Th(\itth)\) such that
\(\locmap_{!} \truncmap^{*} \theoryI' \simeq \theoryI\) under
\(\locmap_{!} \truncmap^{*} \theory\).
\end{lemma}
\begin{proof}
Let \(\map : \locmap_{!} \truncmap^{*} \theory \to \theoryI\) be a
morphism in \(\Th(\etth_{\infty})\) where \(\theory\) is a cofibrant
\(\itth\)-theory. By \cref{etth-triv-fib} and the small object
argument, \(\map\) can be written as a transfinite composite of pushouts
of generating cofibrations. Thus, it suffices to prove the case when
\(\map\) is a pushout of a generating cofibration. Let us assume
that \(\map\) is a pushout of the form
\begin{equation}
\label{eq:14}
\begin{tikzcd}
\locmap_{!} \sh
\arrow[r, "\mapI"]
\arrow[d, "\locmap_{!} \cof"'] &
\locmap_{!} \truncmap^{*} \theory
\arrow[d, "\map"] \\
\locmap_{!} \shI
\arrow[r, "\mapII"'] &
\theoryI
\end{tikzcd}
\end{equation}
where \(\cof : \sh \to \shI\) is one of the generating cofibrations
in \(\Th(\itth_{\infty})\). Since \(\sh\) is cofibrant, the
transpose \(\sh \to \locmap^{*} \locmap_{!} \truncmap^{*} \theory\)
of \(\mapI\) factors through the unit
\(\truncmap^{*} \theory \to \locmap^{*} \locmap_{!} \truncmap^{*}
\theory\) by \cref{itth-unit-triv-fib}. Let
\(\mapI' : \truncmap_{!} \sh \to \theory\) be the transpose of the
induced morphism \(\sh \to \truncmap^{*} \theory\) and take the
pushout
\begin{equation}
\label{eq:15}
\begin{tikzcd}
\truncmap_{!} \sh
\arrow[r, "\mapI'"]
\arrow[d, "\truncmap_{!} \cof"'] &
\theory
\arrow[d] \\
\truncmap_{!} \shI
\arrow[r] &
\theoryI'.
\end{tikzcd}
\end{equation}
By \cref{itth-infty-cof-obj}, the units
\(\sh \to \truncmap^{*} \truncmap_{!} \sh\) and
\(\shI \to \truncmap^{*} \truncmap_{!} \shI\) are invertible, and
\(\locmap_{!} \truncmap^{*}\) sends the pushout \labelcref{eq:15} to
the pushout \labelcref{eq:14} since \(\theory\) is cofibrant. Hence,
\(\locmap_{!} \truncmap^{*} \theoryI'\) is equivalent to
\(\theoryI\) under \(\locmap_{!} \truncmap^{*} \theory\).
\end{proof}
\begin{proof}[Proof of \cref{itth-etth-infty}]
By \cref{itth-cof-cat}, the category \(\Th(\itth)\) is
a category with weak equivalences and cofibrations, and \cref{itth-cof-obj} implies that the functor
\(\locmap_{!} \truncmap^{*} : \Th(\itth) \to \Th(\etth_{\infty})\)
is right exact. We checked the left approximation property in
\cref{itth-left-approximation}. Thus, by \cref{derived-equiv},
\(\locmap_{!} \truncmap^{*}\) induces an equivalence
\(\Loc(\Th(\itth)) \simeq \Th(\etth_{\infty})\).
\end{proof}
\subsection{Generalizations}
\label{sec:generalizations}
We end this section with a discussion of generalizations of
\cref{itth-etth-infty}. Let \(\widetilde{\itth}\) be an extension of
\(\itth\) with some type-theoretic structures such as \(\dProd\)-types
and (higher) inductive types, and we similarly define extensions
\(\widetilde{\itth}_{\infty}\) and \(\widetilde{\etth}_{\infty}\) of
\(\itth_{\infty}\) and \(\etth_{\infty}\), respectively. We have a
span
\[
\begin{tikzcd}
\widetilde{\itth} &
\widetilde{\itth}_{\infty}
\arrow[l, "\truncmap"']
\arrow[r, "\locmap"] &
\widetilde{\etth}_{\infty}
\end{tikzcd}
\]
and ask if the functor
\(\locmap_{!} \truncmap^{*} : \Th(\widetilde{\itth}) \to
\Th(\widetilde{\etth}_{\infty})\) induces an equivalence
\[
\Loc(\Th(\widetilde{\itth})) \simeq \Th(\widetilde{\etth}_{\infty}).
\]
Most of the proof of \cref{itth-etth-infty} works also for this
case, but we have to modify
\cref{itth-model-localization,split-repl-sigma}. For
\cref{itth-model-localization}, we need to find an
\(\infty\)-categorical structure corresponding to
\(\widetilde{\etth}_{\infty}\)-theories and show that the localization
\(\Loc(\model(\bas))\) for a democratic model \(\model\) of
\(\widetilde{\itth}\) has that structure. For example, in the case
when \(\widetilde{\itth}\) is the extension \(\itth^{\dProd}\) of
\(\itth\) with \(\dProd\)-types satisfying function extensionality
in the sense of \parencite[Section 2.9]{hottbook}, we have
\(\Th(\etth_{\infty}^{\dProd}) \simeq \LCCC_{\infty}\)
(\cref{EPi-dem-lccc}), and by the results of
\textcite{kapulkin2017locally}, \(\Loc(\model(\bas))\) is indeed
locally cartesian closed. When we extend \(\itth\) with
(higher) inductive types, the corresponding \(\infty\)-categorical
structure will be some form of pullback-stable initial algebras. For \cref{split-repl-sigma}, we have to choose
the fibration \(\typeof_{\catX}\) such that it also has the
type-theoretic structures that \(\widetilde{\itth}\) has. In the case
of \(\widetilde{\itth} = \itth^{\dProd}\), one might want to choose
the regular cardinal \(\card\) in the proof of \cref{split-repl-sigma} such that \(\card\)-small
fibrations are closed under pushforwards. However, there is no
guarantee of the existence of such a regular cardinal within the same
Grothendieck universe, unless the Grothendieck universe is
\emph{\(1\)-accessible}, that is, there are unboundedly many
inaccessible cardinals \parencite{monaco2019dependent}. Nevertheless,
the existence of \(\Sp \model\) in a larger universe is enough to
prove \cref{itth-infty-cof-obj}, and thus we have the second part of
Conjecture 3.7 of \textcite{kapulkin2018homotopy} under an extra
assumption on universes.
\begin{theorem}
Suppose that our ambient Grothendieck universe is \(1\)-accessible
or contained in a larger universe. Then the functor \(\locmap_{!}
\truncmap^{*} : \Th(\itth^{\dProd}) \to
\Th(\etth_{\infty}^{\dProd})\) induces an equivalence of
\(\infty\)-categories
\[
\Loc(\Th(\itth^{\dProd})) \simeq \Th(\etth_{\infty}^{\dProd})
\simeq \LCCC_{\infty}.
\]
\end{theorem}
The current proof of \cref{split-repl-sigma} has some issues when one
generalizes it. As we have seen, it could cause a rise in universe
levels. Furthermore, the same proof does not work when we extend
\(\itth\) with (higher) inductive types, because having (higher)
inductive types is not a closure property. One possible approach to
these issues is to refine the construction of \(\Sp \model\). The
current construction does not depend on the choice of a type-theoretic
model topos \(\catX\) that presents \(\RFib_{\model(\bas)}\), but
there should be a convenient choice to work with. Another approach is
to give a totally different proof of \cref{itth-infty-cof-obj} without
the use of \(\Sp \model\). There has been a syntactic approach to
coherence problems initiated by \textcite{curien1993substitution}. In
this approach, coherence problems are solved by rewriting techniques,
and we expect that it works for a wide range of type-theoretic
structures without a rise of universe levels. Of course, we first have
to develop nice syntax for \(\infty\)-type theories, and this is not
obvious.
\section{Conclusion and future work}
\label{sec:concl-future-work}
We introduced \(\infty\)-type theories as a higher dimensional
generalization of type theories and, as an application, proved
\citeauthor{kapulkin2018homotopy}'s conjecture that the
\(\infty\)-category of small left exact \(\infty\)-categories is a localization
of the category of theories over Martin-L{\"o}f type theory with
intensional identity types \parencite{kapulkin2018homotopy}. The
technique developed in this paper also works for the internal language
conjecture for locally cartesian closed \(\infty\)-categories, but further
generalization including (higher) inductive types is left as future
work.
\subsection{Syntax for \(\infty\)-type theories}
\label{sec:syntax-infty-type}
Coherence problems are often solved by syntactic arguments
\parencite{curien1993substitution}. Therefore, syntactic presentations
of \(\infty\)-type theories will be helpful for solving internal
language conjectures for structured \(\infty\)-categories. We have not
yet worked out a syntax for \(\infty\)-type theories. Here we consider one
possibility based on \emph{logical frameworks}.
In the previous work \parencite{uemura2019framework}, the author
introduced a logical framework to define type theories. For every
signature \(\sig\) in that logical framework, the syntactic category
\(\synrm(\sig)\) is naturally equipped with a structure of a category
with representable maps and satisfies a certain universal property. To
define \(\infty\)-type theories syntactically, we modify the logical
framework as follows:
\begin{itemize}
\item the new logical framework has intensional identity types instead
of extensional identity types;
\item dependent product types indexed over representable types satisfy
the function extensionality axiom.
\end{itemize}
\begin{remark}
A similar kind of framework is used by \textcite[Section
7]{bocquet2020coherence} to represent space-valued models of a type
theory.
\end{remark}
\begin{proposition}
Let \(\sig\) be a signature in this new logical framework.
\begin{enumerate}
\item \label{item:12} The syntactic category \(\synrm(\sig)\) is
equipped with a structure of a fibration category.
\item \label{item:13} The localization \(\Loc(\synrm(\sig))\) is
equipped with a structure of an \(\infty\)-category with
representable maps.
\end{enumerate}
\end{proposition}
\begin{proof}
It is known \parencite{avigad2015homotopy} that the syntactic
category of a type theory with intensional identity types is a
fibration category. The second claim is proved in the same way as
the fact that the localization of a locally cartesian closed
fibration category is a locally cartesian closed \(\infty\)-category
\parencite{kapulkin2017locally,cisinski2019higher}.
\end{proof}
We expect that the syntactic \(\infty\)-category with representable maps
\(\Loc(\synrm(\sig))\) satisfies a universal property analogous to
\parencite[Theorem 5.17]{uemura2019framework} so that the logical
framework with intensional identity types provides syntactic
presentations of \(\infty\)-type theories.
\section{Introduction}
The study of the interaction between light and matter has been extremely prolific, leading to the invention of novel techniques that have had a profound impact in many different fields. In particular, the use of light to exert forces and torques on microscopic and nanoscopic particles has blossomed: since the pioneering work of Arthur Ashkin \cite{AshkinPRL70,AshkinOL86}, optical tweezers (OT) have become an indispensable tool in, e.g., biophysics, experimental statistical physics and nanotechnology \cite{GrierN03,NeumanRSI04,JonasE08,MaragoNN13}. At the same time, starting from the relatively simple tightly focused laser beam employed by Ashkin and co-workers to confine a microscopic particle in three dimensions \cite{AshkinOL86}, experimental set-ups for optical trapping and manipulation have grown ever more complex, leading to OT capable, e.g., of trapping multiple particles \cite{GrierN03}, of rotating particles by transferring spin \cite{BishopPRL04,DonatoNC14} and orbital angular momentum \cite{GrierN03,PadgettNP11}, of measuring forces in the femtonewton range \cite{RohrbachPRL05,ImparatoPRE07}, and of sorting particles based on their physical properties \cite{MacDonaldN03,JonasE08,VolpeSR14}.
Many OT set-ups are built on commercial microscopes, especially for experiments on biological samples such as DNA, molecular motors, and cells \cite{BustamanteN03}. While commercial microscopes offer several advantages, e.g., high quality imaging of the sample, easy integration with powerful imaging techniques such as phase contrast, fluorescence excitation and differential interference contrast, they provide limited flexibility when dealing with more elaborate experiments. Thus, many research groups have started to develop home-made microscopes, which have the great advantages of being very versatile, much less expensive and, if implemented properly, mechanically even more stable.
In the literature, there are various articles that provide some guidance on how to implement an OT set-up. For example, Refs.~\cite{BechhoeferAJP02,AppleyardAJP07} explain how to build and calibrate a single-beam OT set-up for undergraduate laboratories and Refs.~\cite{LeeNP07,MathewRSI09} explain how to transform a commercial microscope into an OT. However, to the best of our knowledge, a detailed explanation of how to build a home-made OT set-up with advanced functionalities such as multiple holographic optical traps and speckle OT is still missing.
In this article, we provide a detailed step-by-step guide to the implementation of state-of-the-art research-grade OT. The only requirement to be able to follow the procedure we explain is a basic knowledge of experimental optics. We start by implementing a high-stability home-made optical microscope and, then, we transform it into a series of OT with increasingly more advanced capabilities. In particular, we explain in detail the implementation of three OT set-ups with cutting-edge performance: (1) a high-stability single-beam OT capable of measuring forces down to a few fN \cite{ImparatoPRE07}; (2) a holographic optical tweezers (HOT) \cite{PadgettLOC11} capable of creating multiple traps and advanced optical beams such as Laguerre-Gaussian beams; and (3) a speckle OT particularly suited to develop robust applications in highly scattering environments and in microfluidic devices \cite{VolpeOE14,VolpeSR14}. For each set-up, we also provide a time-lapse movie (see supplementary information) in order to make it easier to understand and follow each step of the implementation.
In Section~II, we provide a brief overview of the set-ups. In Section~III, we provide a detailed step-by-step procedure for the realization of each set-up: first, we build a home-made microscope capable of performing digital video microscopy (DVM) by analysing the images acquired with a CCD camera; then we show how to upgrade this microscope to a single-beam OT equipped with a position sensitive detection device based on a quadrant photodiode (QPD); finally, we describe how to build a HOT and a speckle OT. In Appendix~A, we discuss how to calibrate the single-beam OT. In Appendix~B, we explain the fundamentals of how spatial light modulators (SLM) work and we provide some examples of the most commonly used phase masks and algorithms employed with HOT.
\section{Optical Tweezers set-ups}
\subsection{Home-made microscope}
In the implementation of every OT experiment, it is crucial to start with a microscope that is as mechanically stable as possible. For this reason our home-made microscope (Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b) is built on a stabilized optical table, which permits us to minimize environmental vibrations, and its structure is realized with AISI 316 stainless steel, which offers one of the lowest thermal expansion coefficients and a great rigidity. This structure consists of three levels. The first level, which lies on the optical table, is where the optical components necessary to focus the sample image on the camera (and to prepare the optical beam for the OT, Subsection~II.B) are hosted. In particular, there is a mirror arranged at \ang{45} to reflect the illumination light toward the CCD camera (and the laser beam toward the objective lens for the OT, Subsection~II.B). Videos of the sample are recorded by a fast CCD camera (Mikrotron, MotionBLITZ EoSens Mini 1).
The second level is realized with a breadboard held on eight columns; here is where the objective mount as well as the stages to hold and manipulate the sample are hosted. The objective holder is mounted vertically and comprises a high stability translational stage (Physik Instrumente, M-105.10) with manual micrometer actuator (resolution $1\,{\rm \mu m}$); if needed, this stage can be easily equipped with a piezo actuator for computer-controllable and more precise movements. The sample stage is realized by a combination of a manual stage for coarse (micrometric) movement (Newport, M406 with HR-13 actuators) and a piezoelectric stage (Physik Instrumente, PI-517.3CL) with nanometer resolution.
Finally, the third level, which is built on the second-level breadboard using four columns, hosts the illumination lamp (and the position detector devices, for the photonic force microscope (PFM), Subsection~II.C). To illuminate the sample we opted for a cold-light white LED (Thorlabs, MCWHL5), which offers high intensity without heating the sample. The white LED is collimated by an aspheric lens. The whole system is easily realized using modular opto-mechanics components (Thorlabs, cage system). As condenser lens we use an objective with magnification $10\times$ or $20\times$ mounted on a five-axis stage (Thorlabs, K5X1), which assures fine positioning of the lens and good stability at the same time.
\subsection{Single-beam optical tweezers (OT)}
The single-beam OT (Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b) is realized by coupling a laser beam to the home-made microscope described in Subsection~II.A. We use a solid state laser featuring a monolithic Nd:YAG crystal NPRO (Non-Planar Ring Oscillator) (Innolight, Mephisto 500, $\lambda = 1064\,{\rm nm}$, $500\,{\rm mW}$ max power), which exhibits excellent optical properties for optical manipulation experiments that require high stability (e.g., for the measurement of femtonewton forces), i.e., extremely low intensity noise and very good pointing stability. The output of this kind of laser is slightly diverging and features elliptical polarization. Therefore, we linearize its polarization with a zero-order quarter-wave plate. To change the power without altering the injection current of the pump diode laser, which could affect the laser beam emission quality and, therefore, the laser performance, we use the combination of a zero-order half-wave plate to rotate the linear polarization and a linear polarizer to finely tune the laser power. The laser beam is directed through the objective making use of a series of mirrors. A $5\times$ telescope is added along the optical train in order to create a beam with an appropriate size to overfill the objective back aperture and to generate a strong optical trap, as typically the beam waist should be comparable to the objective back aperture in order to obtain an optimal trapping performance \cite{JonasE08}.
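The waveplate-polarizer power control follows Malus's law: a half-wave plate at angle $\theta$ with respect to the polarizer axis rotates the linear polarization by $2\theta$, so the transmitted power is $P_0\cos^2(2\theta)$. A minimal sketch (the $500\,{\rm mW}$ input power is just the laser's nominal maximum; function name ours):

```python
import numpy as np

def transmitted_power(p_in, theta_deg):
    """Power after a half-wave plate at angle theta (degrees from the
    polarizer axis) followed by a linear polarizer (Malus's law).
    The half-wave plate rotates the polarization by 2*theta."""
    return p_in * np.cos(np.radians(2 * theta_deg)) ** 2

# Rotating the plate from 0 to 45 degrees tunes the power from
# full transmission down to (ideally) zero.
for theta in (0, 15, 30, 45):
    print(f"theta = {theta:2d} deg -> P = {transmitted_power(500.0, theta):6.1f} mW")
```

In practice the extinction at $\theta=45$ degrees is limited by the polarizer's extinction ratio and by residual ellipticity of the beam.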
In order to focus the laser beam inside the sample cell and generate the optical trap, we use a water-immersion objective (Olympus, UPLSAPO60XW) because water-immersion objectives present less spherical aberrations than oil-immersion objectives, and, thus, a better trapping performance. Ref.~\cite{AlexeevEJP12} provides a detailed comparison of the performance of various objectives.
In order to achieve the highest mechanical stability, the OT set-up is realized keeping the optical train as short as possible. Nevertheless, the optical train can be modified according to the specific requirements of each experiment. For example, if a movable trap is needed, a beam steering system can be added according to the procedure described in Ref.~\cite{FallmanAO97}.
\subsection{Photonic force microscope (PFM)}
An optically trapped particle scatters the trapping light, so that the light field in the forward direction is the superposition of the incoming and scattered light. This basic observation has been harnessed in the development of interferometric position detection techniques \cite{GittesOL98}. In our set-up (Figs.~\ref{fig:qpd}a and \ref{fig:qpd}b), a condenser (Olympus PlanC N, $10\times$) collects the pattern arising from the interference between the incoming and scattered fields and a photodetector located on the condenser back-focal plane (BFP) records the resulting signals. As detector, we use a QPD based on an InGaAs junction (Hamamatsu, G6849), which, differently from silicon-based photodiodes \cite{SorensenJAP03}, has a good high-frequency response in the infrared region of the spectrum. The output signals are acquired with a home-made analog circuit \cite{FinerBJ96} and then analyzed in order to obtain the three-dimensional particle position. These signals can then be used to calibrate the optical trap as explained in Appendix~A.
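The conversion from the four quadrant photocurrents to (uncalibrated) particle coordinates is conceptually simple; a hedged sketch follows. The quadrant labeling and sign conventions below are assumptions for illustration (the actual conventions depend on the wiring of the analog circuit), and the output is in arbitrary units until the trap is calibrated as in Appendix~A.

```python
def qpd_to_position(q1, q2, q3, q4):
    """Convert four quadrant photocurrents into (uncalibrated)
    particle coordinates. Assumed quadrant layout:
        q1 | q2
        -------
        q3 | q4
    Lateral signals are half-plane differences normalized by the total
    power; the axial signal is commonly taken to be the total power
    itself, whose variation with axial position encodes z."""
    total = q1 + q2 + q3 + q4
    x = ((q2 + q4) - (q1 + q3)) / total   # left-right imbalance
    y = ((q1 + q2) - (q3 + q4)) / total   # top-bottom imbalance
    z = total                             # axial signal (to be calibrated)
    return x, y, z
```

A centered, symmetric pattern gives zero lateral signals, which is also a convenient check while aligning the condenser.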
\subsection{Holographic optical tweezers (HOT)}
The experimental set-up for HOT is slightly more complex than the one for a single-beam OT. HOT use a diffractive optical element (DOE) to split a single collimated laser beam into several separate beams, each of which can be focused inside the sample to generate an OT \cite{GrierN03}. These optical traps can be made dynamic and displaced in three dimensions by projecting a sequence of computer-generated holograms. Furthermore, non-Gaussian beam profiles can be straightforwardly encoded in the holographic mask.
In our implementation (Figs.~\ref{fig:hot}a and \ref{fig:hot}b), the DOE is provided by a spatial light modulator (SLM) (Hamamatsu, LCOS-SLM, X10468-03). The SLM permits us to modulate the phase of the incoming beam wavefront by up to $2\pi$. In practice, the phase mask that alters the beam phase profile is a $8\,{\rm bit}$ greyscale image generated and projected on the SLM using a computer (Appendix~B). We place the SLM after a $6\times$ telescope, which increases the beam size so that it completely fills the active area of the SLM. Then, we employ two lenses (with equal focal length, $f=750\,{\rm mm}$) to image the SLM plane onto the BFP of the objective (and the necessary mirror to direct the optical train). In practice, the first lens is placed after the SLM at a distance equal to $f$, the second lens is placed at a distance equal to $2f$ from the first lens, and the BFP is at a distance $f$ from the second lens. Thus, the total distance from the SLM to the BFP is $4f$ (for this reason this is often referred to as a $4f$-configuration). We note that it is also possible to use lenses with different focal lengths to change the total path length and magnify/de-magnify the beam, for which we refer the readers to Ref.~\cite{FallmanAO97}. Therefore, the SLM (just like the BFP) is placed at the Fourier plane of the front-focal plane (FFP) of the objective and the field distribution in the FFP is (in first approximation) equal to the Fourier transform of the field distribution at the SLM \cite{PadgettLOC11}.
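The Fourier relation between the SLM and the trapping plane can be checked numerically: a blazed-grating phase mask (a linear phase ramp wrapped to $[0,2\pi)$) displaces the focal spot away from the zero order. The sketch below is illustrative only (grid size, ramp period and the assumption of uniform illumination are ours, not the parameters of our set-up):

```python
import numpy as np

N = 256                                   # SLM pixels per side
x = np.arange(N)
X, Y = np.meshgrid(x, x)

# Blazed grating: linear phase ramp, wrapped to [0, 2*pi).
# A ramp of k cycles across the aperture shifts the focal spot
# by k pixels in the Fourier (trapping) plane.
k = 20
phase = np.mod(2 * np.pi * k * X / N, 2 * np.pi)

field_slm = np.exp(1j * phase)            # uniform illumination assumed
field_focal = np.fft.fftshift(np.fft.fft2(field_slm))
intensity = np.abs(field_focal) ** 2

# The brightest spot sits k pixels away from the (shifted) zero order.
iy, ix = np.unravel_index(np.argmax(intensity), intensity.shape)
print(ix - N // 2, iy - N // 2)
```

Adding several such ramps (with different periods and orientations) and taking the argument of the complex sum is the basis of the multi-trap masks discussed in Appendix~B.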
We built our HOT on the same home-made microscope used for the single-beam OT. Apart from the addition of the SLM and necessary optics in the optical train, the only modification concerns the illumination system: since the QPD is no longer needed, we preferred to increase the illumination intensity at the sample by moving the LED nearer to the condenser. High illumination intensity is needed in order to achieve high acquisition rates in digital video microscopy, as increasing the acquisition rate entails a decrease of the shutter time. For example, when we acquire images ($128\times 100$ pixels) at $18000$ frames per second, the maximum shutter time is only $55\,{\rm \mu s}$.
\subsection{Speckle optical tweezers}
The experimental set-up for speckle OT is a simplified version of the set-up for a single-beam OT. The main difference is that the condenser and illumination are substituted by a multimode optical fiber (core diameter $105\,{\rm \mu m}$, numerical aperture ${\rm NA}=0.22$, Thorlabs, M15L01) that delivers both the trapping laser light and the illumination light directly in close proximity to the sample upper surface. While the trapping beam must be spatially coherent and monochromatic to generate the speckle pattern through interference, the illumination should be generated with a spatially and/or temporally incoherent source in order to ensure a uniform illumination of the sample. The speckle light pattern is generated by directly coupling the laser beam into the multimode optical fiber \cite{MoskNP12}. The random appearance of the speckle light patterns (Fig.~\ref{fig:speckle}d) is the result of the interference of a large number of optical waves with random phases, corresponding to different eigenmodes of the fiber. More generally, speckle patterns can be generated by different processes: scattering of a laser on a rough surface, multiple scattering in an optically complex medium, or, like in this work, mode-mixing in a multimode fiber \cite{MoskNP12}. The choice of a multimode optical fiber provides some practical advantages over other methods, namely generation of homogeneous speckle fields over controllable areas, flexibility and portability in the implementation of the device, as well as higher transmission efficiency.
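The mode-superposition picture can be illustrated numerically by summing many plane waves with random directions (within the fiber NA) and random phases; a fully developed speckle then shows exponential intensity statistics, i.e., a contrast ${\rm std}(I)/\langle I\rangle$ close to one. All parameters in this sketch are illustrative (mode count, field of view and grid size are not those of our fiber):

```python
import numpy as np

rng = np.random.default_rng(1)

N = 200                   # grid points per side
M = 500                   # number of interfering plane waves (fiber modes)
L = 50.0                  # field of view in micrometers
k0 = 2 * np.pi / 1.064    # vacuum wavenumber at 1064 nm, in rad/um

x = np.linspace(0, L, N)
X, Y = np.meshgrid(x, x)

# Superpose M plane waves with random transverse directions
# (within the fiber NA = 0.22) and uniformly random phases.
field = np.zeros((N, N), dtype=complex)
for _ in range(M):
    theta = rng.uniform(0, 2 * np.pi)          # transverse direction
    kt = k0 * 0.22 * np.sqrt(rng.uniform())    # transverse |k| within NA
    phi = rng.uniform(0, 2 * np.pi)            # random phase
    field += np.exp(1j * (kt * (X * np.cos(theta) + Y * np.sin(theta)) + phi))

intensity = np.abs(field) ** 2 / M

# Fully developed speckle has exponential intensity statistics,
# so the contrast std(I)/mean(I) is close to 1.
print(round(intensity.std() / intensity.mean(), 2))
```

The typical speckle grain size in this picture is set by the largest transverse wavevector, of order $\lambda/(2\,{\rm NA})$, a few micrometers here.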
In our set-up (Figs.~\ref{fig:speckle}a and \ref{fig:speckle}b), the fiber output is brought into close proximity to the upper wall of the sample by a micrometric three-axis mechanical stage. Optical scattering forces push the particles in the direction of light propagation towards the bottom surface of the microfluidic channel, so that they effectively confine the particles in a quasi two-dimensional space. The particles are then tracked by digital video microscopy.
\section{Step-by-step procedure}
\subsection{Home-made inverted microscope}
\begin{enumerate}
\itemsep1em
\item Build the first two levels of the microscope structure on a vibration-insulated optical table. The first level can be either the optical table itself or, as in our case, a custom designed breadboard. The main advantage of using an independent breadboard as first level is that the microscope is completely stand-alone and can be easily moved, if necessary. The optical components necessary to align the optical beam and to image the sample on the camera will be hosted on the optical table. Particular care should be taken in positioning the microscope: it is preferable to mount it at the center of the optical table, since placing it on a side could decrease the damping performance of the table. The second level is composed of a breadboard held on four columns; here is where the stages to hold and manipulate the sample will be hosted. Verify that these planes are stable and horizontal using a bubble-level. (The third level will be mounted later and will host the illumination white LED, step 5, and the quadrant photodiode for particle tracking, Subsection~III.C.)
\item Fix a \ang{45} gold mirror (M1, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b) on the first level of the structure. This mirror will serve to deflect the white light from the microscope towards the camera (and to deflect the laser beam towards the back aperture of the objective, Subsection~III.B). The mirror center should be aligned with the central vertical axis of the structure (which will correspond to the axis of the objective). The height of the white light beam (and of the laser beam) above the optical table is determined by the height of the center of the \ang{45} mirror. In our particular case, it is fixed at $7\,{\rm cm}$.
\item Fix on the bottom side of the second level breadboard the vertical translation stage where the microscope objective will be mounted. The vertical stage has to be perfectly perpendicular to the breadboard. Use a right angle bracket to check the alignment (thanks to the length of the objective holder, angles as small as a few tenths of degree can be easily detected). Since the vertical stage should be firmly attached to the second level, it is preferable to avoid kinematic mounts even if of the best quality. Instead, to correct small misalignments we suggest putting thin micrometric spacers between the mechanical parts and then tightly fastening the screws. This technique will lead to a very precise vertical alignment of the objective with respect to the vertical axis of the microscope.
\item Mount the sample translation stage on the upper side of the second-level breadboard. In general, this translation stage can be either manual or automatized. Manual translation stages typically offer precision down to a fraction of a micrometer over a range that can easily reach several millimeters. Automatized translation stages, which are often driven by mechanical or piezoelectric actuators, permit one to achieve subnanometer position precision over a range that typically does not exceed a few tens or hundreds of micrometers and have the advantage that they can be controlled in remote. In the realization of our set-up, we combine a manual $xy$-translation (horizontal) stage for the rough positioning of the sample and an automatized piezoelectric $xyz$-translation stage for fine adjustment.
\item Proceed to build the third level of the structure, which is constituted of a breadboard held by four columns and will host the illumination and detection components.
\item Place a five-axis translation stage on the bottom side of the third level plane to mount the condenser. This stage permits one to align the condenser position along the lateral (horizontal) directions and, therefore, to center the position of the collected illumination beam. The alignment along the vertical direction will be performed at step 12 to optimize the illumination of the sample.
\item Mount the objective to collect the image of the sample (and later to focus a laser beam for the OT, Subsection~III.B). At the beginning, it is convenient to use a low-magnification (for example, $10\times$ or $20\times$) objective to simplify the alignment. Later, a high-numerical aperture water- or oil-immersion objective (in our case, a water-immersion objective, Olympus, UPLSAPO60XW) will be needed for optical trapping. Using objective lenses with the same parfocal length guarantees that switching between them will not modify the relative position of the image plane with respect to the objective stage (and, thus, it will not require any adjustment of the objective position).
\item Mount the condenser. This lens can be either a $10\times$ or $20\times$ infinity corrected objective or an aspheric condenser lens. The choice depends on the position detection system: the aspheric condenser lens is preferable for digital video microscopy because it permits higher and more uniform illumination intensity; the objective lens is preferable if the detection is based on a QPD because it permits one to collect the scattered light over a larger angle (Subsection~III.C). The condenser position along the vertical axis will be optimized later (step 12).
\item Place a sample on the sample stage. Start by using a calibration glass slide (best choice to set up the microscope), or a sample of microspheres stuck to a coverslip, or a glass coverslip to which some color has been applied on one side, e.g., by using an indelible marker.
\item Add an illumination source on the third level to illuminate and observe the sample. A white LED lamp is better than a halogen lamp because it will not heat the sample. The light from the LED can be collimated using an aspheric lens. The collimated light is then focused on the sample by the condenser. It is possible to see the resulting image with the naked eye by placing a white screen along the path of the light collected by the objective. The image of a calibration glass slide can be easily placed in the center of the circular spot of the light transmitted by the objective lens. With the illumination at the maximum intensity, project this image at the longest possible distance ($>5\,{\rm m}$). Adjust the focus position to have a clear image of the sample. This will set the objective in the correct position to image the sample to infinity, i.e., the objective is properly imaging objects that are in the image plane. Once this procedure is done, do not move the relative position between the objective and the sample until the CCD camera is placed in its final position (next step).
\item Add a CCD camera. When using infinite-corrected objectives, it is necessary to use a tube lens (L1) to produce the image of the sample at a specific position. The CCD sensor should be placed at a distance from the tube lens equal to its focal length. Place the tube lens (L1) and then the camera. If needed, it is possible to use some mirrors (M2 and M3) to place the lens and the camera in a more convenient position. The alignment of the camera and of the tube lens is crucial to have a clear image and to minimize distortions; in particular: (1) the white light spot must be in the center of the lens and the lens face must be perpendicular to the light path; (2) the heights of the tube lens and of the camera must be equal; (3) the mirrors along the image path should be adjusted so that the light beam makes \ang{90} angles (the screw holes of the optical table may be useful as guides in the alignment procedure). The focal length of the tube lens determines the magnification of the microscope with a fixed objective: the longer the focal length, the higher the magnification. Nevertheless, increasing the focal length reduces the amount of light that reaches the camera and, thus, limits the acquisition speed. Therefore, a compromise between magnification and acquisition speed should be reached. To avoid damaging the camera sensor, perform this whole procedure with low lamp power, well below the camera saturation threshold.
\item Optimize the illumination brightness by adjusting the condenser axial position. More complex illumination configurations, e.g., K\"ohler illumination, are also possible if the coherence of the illumination as well its homogeneity need to be controlled.
\item It is useful to calibrate the CCD by measuring the pixel-to-meter conversion factor. This can be done using a calibration glass slide or moving (by a known controllable displacement) a particle stuck to the bottom coverslip by means of a calibrated piezoelectric stage.
\end{enumerate}
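The pixel-to-meter calibration of step 13 amounts to a linear fit of the measured centroid displacement (in pixels) against the known stage displacement; a minimal sketch in Python (all displacement values below are made up for illustration):

```python
import numpy as np

# Known stage displacements (um) vs measured centroid displacements
# (pixels) for a bead stuck to the coverslip, moved by the calibrated
# piezoelectric stage. Values are illustrative only.
stage_um = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
pixels   = np.array([0.0, 11.8, 23.5, 35.6, 47.1, 58.9])

# Least-squares slope gives pixels per micrometer; its inverse is the
# conversion factor used by the digital video microscopy software.
slope = np.polyfit(stage_um, pixels, 1)[0]
print(f"{1.0 / slope * 1000:.1f} nm per pixel")
```

Repeating the measurement along both horizontal axes also reveals any difference in the effective pixel pitch or residual camera rotation.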
\subsection{Single-beam OT and alignment of the laser beam}
\begin{figure}[ht]
\centerline{\includegraphics[width=0.7\columnwidth ]{Fig1.pdf}}
\caption{Single-beam optical tweezers (OT). (a) Photograph and (b) schematic of the experimental set-up. See Movie 1 for the complete building sequence. (c) A $1\,{\rm \mu m}$ diameter polystyrene bead (circled particle) is trapped by the OT and then the stage is moved (d) vertically and (e) laterally; notice how, as the stage is moved, the trapped particle remains in focus in the image plane, while the background particles get defocused.}
\label{fig:singleot}
\end{figure}
\begin{enumerate}
\itemsep1em
\addtocounter{enumi}{13}
\item Firmly fix a laser on the optical table. In our case, we use a solid state laser with wavelength $1064\,{\rm nm}$ and maximum power $500\,{\rm mW}$, even though typically we will only need $1\,{\rm mW}$ for trapping a single particle. Place the laser so that the beam height coincides with the height of the center of the \ang{45} mirror (M1, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b) on the first level of the microscope (step 2). This is the height of the axis of the optical train ($7\,{\rm cm}$ in our case). Importantly, keep in mind that correct safety procedures when working with lasers are of paramount importance; thus, we urge all users to undergo suitable safety training before starting to work with lasers and to observe all local safety regulations, including, in particular, the use of appropriate eye safety equipment.
\item Since we use a diode-pumped solid state (DPSS) laser, whose output beam has elliptical polarization, we place a quarter-wave plate to linearize the polarization just after the laser (the first element in P, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b). In order to maintain the power stability of the laser beam, we prefer not to alter the laser settings (and in particular its injection current) in order to change the laser output power. Therefore, we opt for an alternative way of controlling the laser power: we place, along the beam path, a half-wave plate and a beam polarizer (second and third elements in P, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b), which permit us to tune the beam power by rotating the half-wave plate. The next step is to mount the optical train to direct the laser beam through the objective. In order to have a stable optical trap, keep the laser beam path as short as possible and, also, consider using black plastic pipes to enclose the laser beam and to build a thermal and acoustic isolation enclosure for the whole experimental set-up. For safety reasons and to avoid damaging the camera, perform this procedure with low laser power.
\item Use dielectric laser line mirrors to deflect the laser beam at a right angle (M4 and M5, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b). Check that the deflected beam travels at the same height as the incident beam. The last mirror of the optical train should be a dichroic mirror (M6, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b), which permits one to split the light of the laser beam, which is routed towards the objective, and the light of the lamp, which is routed towards the camera. We used a short-pass dichroic mirror that reflects (reflection coefficient $>98\%$) in the region from $750$ to $1100\,{\rm nm}$ and transmits shorter wavelengths.
\item Remove the objective and the condenser from their respective holders and, using only the last two mirrors (M5 and M6, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b), align the laser beam so that it goes straight through the objective holder. This is a crucial point as the laser beam must be perfectly aligned along the vertical axis of the objective holder. To achieve this, you can use two alignment tools like pinholes (Thorlabs, VRC4D1) or fluorescing disks (Thorlabs, VRC2RMS). It is not critical to perfectly center the condenser at this point, because it will be optimized later while aligning the detection components.
\item Add a telescope in order to create a beam with an appropriate size to overfill the objective back aperture and to generate a strong optical trap. The beam waist can be measured by fitting a Gaussian profile to an image of the beam acquired by a digital camera (some neutral density filters might be necessary to prevent the camera from saturating). The typical beam waist (radius) of a DPSS laser is about $1$ to $1.5\,{\rm mm}$, therefore a telescope with magnification of about $5\times$ guarantees the proper overfilling condition. The magnification can be easily obtained with lenses with focal lengths of $5\,{\rm cm}$ and $25\,{\rm cm}$. To realize the telescope, first place the lens with the longer focal length (L2, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b). It should be positioned while monitoring the position of the laser beam on the target placed on the objective lens mount: the position of the spot center must remain the same with and without the lens. Then place the lens with shorter focal length (L3, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b), between the first lens and mirror M4, checking, at the same time, that the position of the laser beam on the target does not change (as done for the first lens) and that the laser beam remains collimated. The latter can be done using a shear plate interferometer or checking that the size of the beam waist is constant over a long travel distance ($>5\,{\rm m}$).
\item Double-check the alignment of the laser beam along the vertical axis of the microscope using the alignment tools.
\item Add a short-pass filter (Thorlabs, FGS900) in the light path leading to the camera in order to remove any laser light that might saturate the camera (F, Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b). It is preferable to mount the filter on a repositionable mount (Thorlabs, FM90) to easily block or unblock the laser light.
\item Mount again the objective, the condenser and the sample cell. Allow the laser light to reach the CCD camera by removing the filter F (Fig.~\ref{fig:singleot}a and \ref{fig:singleot}b).
\item Place a droplet of the immersion medium on the objective (water in our set-up) and gently move the objective towards the sample cell until the focus is at the glass-air interface. This is best done by using a sample cell without any liquid in order to increase the amount of light reflected back to the camera, but can also be done with a sample filled with a solution, even though the intensity of the back-scattered light will be significantly (about two orders of magnitude) lower. In practice, one has to slowly approach the objective to the sample, until the back-scattered pattern size is minimized. The resulting pattern (a series of concentric Airy rings, possibly with a cross-shaped dark area) should be as symmetric as possible. Furthermore, the pattern should remain symmetric and with the same center as the relative distance between focus and interface is changed.
\item Build a sample cell with a microscope slide and a coverslip (thickness \#1, ca.\ $150\,{\rm \mu m}$): (1) clean the slide and coverslip; (2) place two stripes of Parafilm\textsuperscript{\textregistered} above the slide to work as spacers (Parafilm\textsuperscript{\textregistered} is a polymer resistant to many solvents and acids; it melts at about $80\,^{\circ}{\rm C}$ and it works like a glue when cooled); (3) place the coverslip above the Parafilm\textsuperscript{\textregistered} stripes; (4) heat with a hot air gun or by placing the cell on a hot plate; (5) fill the cell with the sample solution and seal it with silicon grease, epoxy resin or UV glue. The thickness of the cell is around $120\,{\rm \mu m}$, but it is possible to obtain thicker cells by overlapping two or three layers of Parafilm\textsuperscript{\textregistered} or thinner ones by stretching Parafilm\textsuperscript{\textregistered} before placing the coverslip.
\item Place the sample cell filled with a mixture of water and microspheres on the stage (with the coverslip side below towards the objective) and adjust the position of the objective and sample holder such that the plane just above the glass-solution interface is on focus. The image recorded by the camera should show the sedimented particles, as in Fig.~\ref{fig:singleot}c.
\item Move the sample stage a few micrometers down (so that the laser focus effectively moves a few micrometers above the interface) and approach some of the particles. If a particle is free to move and not stuck to the glass, you will see the particle jump into the laser beam and get optically trapped. To get the image in focus, it might be again necessary to slightly adjust the lens in front of the camera. It could happen that a bead is attracted by the laser beam but instead of being trapped it will be pushed in the direction of the laser beam propagation. This means that the laser beam is not well aligned or is not overfilling the aperture sufficiently. Go back to steps~17-19 to check the alignment and overfilling. In order to double-check that the particle is actually optically trapped, try and move it around. First move the sample stage further down so that the optically trapped particle gets moved vertically above the glass-solution interface, as in Fig.~\ref{fig:singleot}d: non-optically trapped particles on the sample cell bottom get out-of-focus, while the optically trapped particle remains in focus. Then, move the sample stage horizontally, as in Fig.~\ref{fig:singleot}e: the image of the optically trapped particle remains on the same spot within the image captured by the camera, while the background particles sedimented on the coverslip appear to be displaced.
\end{enumerate}
\subsection{Photonic force microscope (PFM) and alignment of the quadrant photodetector (QPD)}
\begin{figure}[ht]
\centerline{\includegraphics[width=0.7\columnwidth ]{Fig2.pdf}}
\caption{Photonic force microscope (PFM). (a) Photograph and (b) schematic of the detection apparatus that permits us to project the interference pattern at the back-focal plane (BFP) of the condenser lens onto a quadrant photodiode (QPD). The position of the particle can also be monitored using (c) digital video microscopy (DVM) or (d) the three interferometric signals received at the QPD. We obtain an excellent agreement between the positions measured with the two techniques, as shown in panels (e) and (f), where the horizontal ($x$ and $y$) coordinates of the particle obtained using DVM (green/gray solid lines) are overlaid on those obtained from interferometry (black solid lines). (g) The QPD also permits measuring the vertical ($z$) coordinate of the particle.}
\label{fig:qpd}
\end{figure}
\begin{enumerate}
\itemsep1em
\addtocounter{enumi}{24}
\item In order to use a QPD, the image plane of the condenser lens should be the same as the one of the objective, i.e., they should form a telescope. In order to achieve this, adjust the height of the condenser until the laser beam emerging from the condenser is perfectly collimated.
\item Position a dichroic mirror on the third level below the illumination LED (DM$_{\textrm{qpd}}$, Fig.~\ref{fig:qpd}a and \ref{fig:qpd}b).
\item Immediately after the dichroic mirror, place a $20\,{\rm cm}$ or $10\,{\rm cm}$ focal length lens
(L$_{\textrm{qpd}}$, Fig.~\ref{fig:qpd}a and \ref{fig:qpd}b) to project the image of the condenser BFP on the QPD.
\item Mount the QPD on a three-axis translation stage for precise alignment. Depending on the space available on the third level, use mirrors to deflect the laser light towards the QPD (two mirrors in our case, Fig.~\ref{fig:qpd}a and \ref{fig:qpd}b). The QPD sensitive area should be as perpendicular to the laser beam as possible.
\item Proceed to connect the output of the QPD to the computer or to a digital oscilloscope to observe the output signals.
\item Proceed to align the QPD. This involves first roughly centering the light collected by the condenser on the QPD and then maximizing the QPD sum signal while zeroing the differential QPD signals. This should be done without any trapped particle.
\item Pick a particle and observe the three signals. You should observe the thermal motion of the particle plus a small offset (Fig.~\ref{fig:qpd}d). Typically, the smaller the offsets the better the alignment.
\item Acquire the position signals of a particle as a function of time (Figs.~\ref{fig:qpd}e, \ref{fig:qpd}f and \ref{fig:qpd}g) and analyze them according to the procedures presented in Appendix~A in order to calibrate the optical trap. It is also possible to compare the trajectories obtained with the QPD to the ones obtained through digital video microscopy (Figs.~\ref{fig:qpd}e and \ref{fig:qpd}f). The agreement between the two results is an indication of the quality and stability of the set-up.
\end{enumerate}
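The trap calibration mentioned in the last step can be performed in several ways (see Appendix~A); as a minimal sketch, the equipartition method estimates the trap stiffness from the variance of the recorded position signal. The stiffness value and the synthetic trajectory below are illustrative assumptions, not data from this set-up.

```python
import numpy as np

def equipartition_stiffness(x, T=300.0):
    """Trap stiffness from the equipartition theorem: k = kB*T / var(x).
    x : positions in metres, T : temperature in kelvin."""
    kB = 1.380649e-23  # Boltzmann constant [J/K]
    return kB * T / np.var(x)

# Illustrative check with a synthetic trajectory: in a harmonic trap the
# positions are Gaussian-distributed with variance kB*T/k.
rng = np.random.default_rng(0)
k_true = 1e-6  # assumed stiffness [N/m], i.e. 1 pN/um, typical for OT
kB, T = 1.380649e-23, 300.0
x = rng.normal(0.0, np.sqrt(kB * T / k_true), size=200_000)
k_est = equipartition_stiffness(x, T)
print(f"k_true = {k_true:.2e} N/m, k_est = {k_est:.2e} N/m")
```

This method needs only the calibrated position variance, but it is sensitive to detector noise; power spectral density fitting (also in Appendix~A) is more robust in practice.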
\subsection{Holographic optical tweezers (HOT)}
\begin{figure}[ht]
\centerline{\includegraphics[width=0.7\columnwidth ]{Fig3.pdf}}
\caption{Holographic optical tweezers (HOT). (a) Photograph and (b) schematic of the experimental set-up. See Movie~2 for the complete building sequence. (c) In the absence of phase modulation on the spatial light modulator (SLM), the SLM works essentially as a mirror, reflecting almost all of the impinging light into a $0^{\rm th}$-order beam. When (d) a blazed grating phase profile is imposed on the SLM, (e) most of the beam light is deflected into a $1^{\rm st}$-order beam, while there is only some residual light in the $0^{\rm th}$-order beam (circled). The amount of light in the $0^{\rm th}$-order beam can be further reduced by the use of a pinhole (PH) placed at the beam waist between lenses L4 and L5.}
\label{fig:hot}
\end{figure}
\begin{figure}[ht]
\centerline{\includegraphics[width=0.7\columnwidth ]{Fig4.pdf}}
\caption{HOT at work. (a) Phase mask obtained by a random superposition of diffraction gratings and Fresnel lenses and (b) corresponding multiple optical traps placed at various lateral and vertical positions. (c) Phase mask to generate two Laguerre-Gaussian beams with topological charge of +10 (inner circle) and -40 (outer circle), and (d) corresponding optically trapped particles; in this case, due to the strong scattering forces, the beads cannot be trapped in three dimensions but are pushed towards the top glass slide, where the particles rotate along two circles in opposite directions due to the transfer of orbital angular momentum (the solid lines represent the trajectories of two particles).}
\label{fig:mask}
\end{figure}
\begin{enumerate}
\itemsep1em
\addtocounter{enumi}{32}
\item In order to realize a HOT, we need to change the optical beam input train. In particular, we need to place the SLM in a plane conjugate to the back aperture of the objective using two lenses arranged in a $4f$-configuration. In order to fit this optical train within the optical table, it is necessary to carefully plan the laser beam path. In our case (Fig.~\ref{fig:hot}a and \ref{fig:hot}b), the laser beam is first reflected by the SLM, then by a series of mirrors (M4, M5, M6, M7, M1), and finally it reaches the objective. The total distance between the SLM and the objective back aperture is $3000\,{\rm mm}$, so that we can realize a $4f$-configuration using two lenses with $f = 750\,{\rm mm}$ (L4 and L5, step~41).
\item Change the illumination system in order to increase the light intensity. In particular, place the LED light close to the sample and use a couple of aspherical lenses to collimate the LED light.
\item Repeat steps~14-15 to place the laser. Notice that to create multiple traps more laser power is required. Therefore we used a different DPSS laser with a maximum power of $3\,{\rm W}$ (Laser Quantum, Ventus 1064).
\item Expand the optical beam with a two-lens telescope (L2 and L3, Fig.~\ref{fig:hot}a and \ref{fig:hot}b) as explained in step~18, so that it fills the SLM active area. The laser spot should be perfectly centered on the SLM active area, otherwise the efficiency and the wavefront shape will be affected.
\item Rotate the SLM by a small (less than \ang{10}) angle with respect to the incidence direction of the laser beam. Larger angles would result in a significant decrease of the SLM efficiency.
\item Display a blazed diffraction grating on the SLM. The SLM will split the incoming beam into several diffraction orders. Usually the strongest are the $0^{\rm th}$-order beam, i.e., the beam that is simply reflected by the active area, and the $1^{\rm st}$-order diffracted beam, which is the one that will be used to generate the optical traps. The intensity of the $1^{\rm st}$-order beam varies according to the efficiency of the SLM.
\item Use large optics (at least $2\,{\rm inch}$ diameter) in the optical train in order to collect as much as possible of the light modulated into the $1^{\rm st}$-order beam.
\item Deflect the laser beam towards the objective back aperture. Align the laser beam so that it goes straight through the objective holder using mirrors M6 and M7 (Fig.~\ref{fig:hot}a and \ref{fig:hot}b), as done in step~17.
\item Add lenses L4 and L5 ($f = 750\,{\rm mm}$, Fig.~\ref{fig:hot}a and \ref{fig:hot}b) to realize the $4f$-configuration. Lens L5 must be placed exactly at a distance from the back aperture of the objective lens equal to $f$. While aligning L5 check that the position of the laser beam on the alignment tools does not change. Then, place L4 at a distance of $1500\,{\rm mm}$ from the first lens and of $750\,{\rm mm}$ from the SLM surface. Again, check that the position of the laser spot does not change after both lenses have been positioned.
\item The middle point between L4 and L5 is conjugate to the objective front focal plane (FFP), i.e., to the trapping plane, and, thus, it is possible to place a pinhole at this position to select only the $1^{\rm st}$-order beam and cut out the $0^{\rm th}$-order beam and the higher diffraction orders.
\item Repeat steps~20-24 to prepare the sample cell and observe the particles.
\item Observe how the optical trap can be moved in the lateral direction by using gratings with different orientations and pitches, and in the axial direction by using a Fresnel lens (Appendix~B). Note also how with a combination of gratings and Fresnel lenses it is possible to displace a trap in three dimensions. Generate multiple optical traps and trap various particles, as in Figs.~\ref{fig:mask}a and \ref{fig:mask}b. Generate Laguerre-Gaussian beams and observe the transfer of orbital angular momentum (OAM), as in Figs.~\ref{fig:mask}c and \ref{fig:mask}d.
\end{enumerate}
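The phase masks used in the last steps (blazed gratings for lateral trap displacement, Fresnel lenses for axial displacement, and combinations thereof; see Appendix~B) can be sketched numerically as follows. The pixel count, grating pitch and lens curvature below are illustrative assumptions, not the parameters of our SLM.

```python
import numpy as np

def hot_phase_mask(n=512, pitch_px=16, lens_curv=None):
    """Phase mask in radians, wrapped to [0, 2*pi): a horizontal blazed
    grating of period `pitch_px` pixels (lateral trap shift), optionally
    summed with a Fresnel lens term of curvature `lens_curv` (axial shift,
    in arbitrary units of rad/pixel^2)."""
    y, x = np.mgrid[0:n, 0:n].astype(float)
    phase = 2 * np.pi * x / pitch_px              # blazed grating
    if lens_curv is not None:                     # Fresnel lens
        r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
        phase += lens_curv * r2
    return np.mod(phase, 2 * np.pi)

mask = hot_phase_mask(n=256, pitch_px=16, lens_curv=1e-4)
print(mask.shape)  # one wrapped-phase image ready to display on the SLM
```

Multiple traps can then be obtained, e.g., by assigning each SLM pixel at random to one of several such single-trap masks, which produces a random superposition like the one shown in Fig.~\ref{fig:mask}a.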
\subsection{Speckle optical tweezers}
\begin{figure}[ht]
\centerline{\includegraphics[width=0.7\columnwidth ]{Fig5.pdf}}
\caption{Speckle optical tweezers. (a) Photograph and (b) schematic of the experimental set-up. See Movie~3 for the complete building sequence. (c) Illumination light and (d) speckle light field output from the optical fiber. (e) Mean square displacement (MSD) of particles trapped at various laser powers; a transition from trapping at high laser powers to subdiffusion at low laser powers can be observed.}
\label{fig:speckle}
\end{figure}
\begin{enumerate}
\itemsep1em
\addtocounter{enumi}{44}
\item Repeat steps~14-15 to place the laser.
\item Mount a telescope to increase the size of the laser beam waist (L2 and L3, Fig.~\ref{fig:speckle}a and \ref{fig:speckle}b).
\item It is necessary to overlap the illumination light on the laser beam to have them together at the fiber output. In order to do this use mirrors (M5 and M6, Fig.~\ref{fig:speckle}a and \ref{fig:speckle}b) to steer the laser beam towards the entrance port of the fiber. One mirror (in our case, M6) must be a dichroic mirror to allow the transmission of the white light of the illumination LED.
\item Align the laser beam into the fiber (Thorlabs M15L01, fiber core $105\,{\rm \mu m}$ in diameter) input using a lens (L4, Fig.~\ref{fig:speckle}a and \ref{fig:speckle}b). To maximize the coupling and to generate a fully developed speckle pattern, the focused beam should match the numerical aperture of the fiber. Since in our case the fiber has a numerical aperture of 0.20, we used a $20\,{\rm mm}$ focal length lens with a $12.7\,{\rm mm}$ diameter and, in order to match the numerical aperture of the fiber, the beam waist was set to $4\,{\rm mm}$. We suggest building a cage system to firmly hold the focusing lens and the fiber end. The fiber end, or equivalently the focusing lens, can be mounted on a translation stage (Thorlabs, SMZ1) to easily align the beam. Using a power-meter, maximize the power at the output end of the fiber.
\item Place the illumination LED behind the dichroic mirror (M6, Fig.~\ref{fig:speckle}a and \ref{fig:speckle}b) and observe the white light at the end of the fiber with the laser light off. Optimize the transmission of the illumination LED using the CCD camera. This can be easily done by moving the whole LED mount behind the dichroic mirror.
\item Using an IR card, you should now be able to observe a diverging speckle pattern from the output of the fiber, as shown in Fig.~\ref{fig:speckle}d.
\item Build a cage system to host the output fiber adapter. It should be mounted vertically upside down just above the sample cell. Mount it on a precision three-axis translation stage to align the fiber output face along the optical axis of the observation objective (Fig.~\ref{fig:speckle}a and \ref{fig:speckle}b) and to control the size of the average speckle grain by precisely translating the fiber vertically.
\item Place this system on the third level of the microscope (Fig.~\ref{fig:speckle}a).
\item In order to be able to observe the whole area corresponding to the fiber core ($105\,{\rm \mu m}$ in diameter), use an imaging objective with a relatively low magnification (we used a $40\times$ objective).
\item The sample cell should be realized in a slightly different way from the one followed in step~23: since the output fiber end must be very close to the sample ($< 300\,{\rm \mu m}$), the $1\,{\rm mm}$ glass slide needs to be replaced by a much thinner coverslip. Place the sample cell on the piezoelectric translation stage and move the objective until you observe the particles on the bottom of the sample cell.
\item Change the height of the output fiber end until you can see its clear image (Fig.~\ref{fig:speckle}c). Then move it back by a very small amount. You should observe, using the CCD camera, a blurred bright disk.
\item Repeat step~20 to filter out the laser light. Note that in this case the laser light collected by the objective is sent towards the CCD. It could be necessary to use multiple filters to eliminate all the laser light.
\item Set the laser output power at a moderate level. Remove the short-pass filter from in front of the CCD camera. The speckle pattern will be now visible inside the bright disk imaged by the camera, as shown in Fig.~\ref{fig:speckle}d.
\item At this point you should observe that some of the freely diffusing particles on the bottom of the sample cell (inset in Fig.~\ref{fig:speckle}e) are attracted towards the high intensity maxima of the speckle pattern. Increasing the laser power will increase the number of hot spots with enough power to actually trap particles.
\item Measure the trajectories of the particles using digital video microscopy.
\item Observe how, as the laser power is increased, the particles become more and more confined within the speckle grains. Use, e.g., the mean square displacement analysis (see Appendix B) to quantify how strongly the particles are trapped, as shown in Fig.~\ref{fig:speckle}e.
\item Moving the piezoelectric stage, observe how the speckle pattern captures particles as they approach the high intensity hot spots.
\item Prepare a binary or ternary mixture of particles of different sizes and/or materials. Observe how the speckle optical tweezers is able to selectively capture different particles as the piezoelectric stage is moved.
\end{enumerate}
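The mean square displacement analysis mentioned above (see Appendix~B) can be sketched as follows; the trajectory used to exercise the function is a synthetic random walk, not measured data.

```python
import numpy as np

def msd(traj, max_lag=None):
    """Time-averaged mean square displacement of a trajectory.
    traj : (N, d) array of positions; returns the MSD for lags 1..max_lag
    (in units of the sampling interval)."""
    traj = np.asarray(traj, dtype=float)
    max_lag = max_lag or len(traj) // 4
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        d = traj[lag:] - traj[:-lag]
        out[lag - 1] = np.mean(np.sum(d * d, axis=1))
    return out

# Sanity check on free 2D Brownian motion, where the MSD grows linearly
# with lag time (here with unit-variance steps: MSD = 2*lag).
rng = np.random.default_rng(1)
steps = rng.normal(0.0, 1.0, size=(100_000, 2))
m = msd(np.cumsum(steps, axis=0), max_lag=50)
```

For an optically trapped particle the MSD instead saturates at a plateau set by the trap stiffness, which is the signature of confinement visible in Fig.~\ref{fig:speckle}e; subdiffusion appears as a sublinear growth between these two limits.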
\section{Conclusions}
In this article we presented a detailed step-by-step guide to the construction of advanced optical manipulation set-ups, starting from a home-made inverted optical microscope. Readers with some basic experience in experimental optics should be able to follow the procedure described here and build a single-beam OT, a HOT and, finally, a speckle OT. Readers will also be able to easily adapt the designs we propose in this article to fit their experimental needs.
Giovanni Volpe was partially supported by Marie Curie Career Integration Grant (MC-CIG) under Grant PCIG11 GA-2012-321726 and a T\"UBA-GEBIP Young Researcher Award.
\section{Introduction \label{sec:introduction}}
\Glspl{rtil} are a fascinating class of amorphous materials with many practical applications\cite{2015-Hayes-CR-115-6357,MacFarlane2014Rev}, such as lubrication in space applications and other low-pressure environments\cite{Haskins2016}. As in high temperature molten salts, strong Coulomb forces yield a liquid with significant \emph{structure}. Pair distribution functions from scattering experiments reveal an ion arrangement of alternating charges\cite{Tosi1993Rev,Murphy2015Rev,Bowron2010}, resulting in a large and strongly temperature dependent viscosity $\eta$. In contrast to simple salts, \glspl{rtil} consist of large, low-symmetry molecular ions and they remain liquid at ambient temperature. Many \glspl{rtil} are notoriously difficult to crystallize. Rather, they are easily supercooled, eventually freezing into a glassy state at the glass transition temperature $T_\mathrm{g}$ far below the thermodynamic melting point, $T_\mathrm{m}$\cite{Mudring2010Rev}.
A key feature of supercooled liquids and glasses is \emph{dynamic heterogeneity}\cite{Ediger2012Rev,Castner2010Rev,Sillescu1999Rev}. Distinct from homogeneous liquid or crystalline phases, the local \gls{md} exhibit fluctuations which are transient in both time and space. These non-trivial fluctuations are characterized by a growing dynamic correlation length, and are found to be stronger closer to the glassy phase\cite{Berthier2011d}. An understanding of dynamic heterogeneity may be central to a fundamental theoretical description of glass formation.
With highly localized probes in the form of nuclear spins, \gls{nmr} is one of the few methods with the spatial and temporal resolution to quantify this heterogeneity and reveal its characteristics\cite{Kaplan1982,Khudozhitkov2018a,Sillescu1999Rev}. The degree of heterogeneity can be modelled by the ``stretching'' of an exponential nuclear \gls{slr}, ${\exp\left\{-[(\lambda t)^\beta]\right\}}$, where ${\lambda = 1/T_1}$ is the \gls{slr} rate and $\beta$ is the stretching exponent. Single exponential relaxation ($\beta=1$), corresponds to homogeneous dynamics, whereas $\beta<1$ describes a broad distribution of exponentials \cite{Lindsey1980}, the case where each probe nucleus relaxes at a different rate. The breadth of the distribution of rates is determined by $\beta$, with ${\beta=1}$ corresponding to a delta function.
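As a minimal numerical illustration of how strongly $\beta$ shapes the relaxation, the stretched exponential integrates to a mean relaxation time $\langle\tau\rangle = \Gamma(1/\beta)/(\beta\lambda)$, which grows rapidly as $\beta$ decreases below one; the sketch below verifies this for $\beta = 0.5$.

```python
import numpy as np
from math import gamma

beta, lam = 0.5, 1.0
t = np.linspace(0.0, 200.0, 2_000_001)  # fine grid; the tail beyond 200 s
p = np.exp(-(lam * t) ** beta)          # is negligible for beta = 0.5
# trapezoid-rule integral of the polarization = mean relaxation time
tau_num = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(t))
tau_ana = gamma(1.0 / beta) / (beta * lam)
print(tau_num, tau_ana)  # both ~2.0: the beta = 0.5 process relaxes, on
                         # average, twice as slowly as 1/lam suggests
```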
While stretched exponential relaxation is suggestive of dynamic heterogeneity, it is worth considering whether it instead results from a population which homogeneously relaxes in an intrinsically stretched manner. To this point, \gls{md} simulations of a supercooled model binary liquid have shown $\beta$ to be independent of scale, at least down to a few hundred atoms\cite{Shang2019}. This implies that the stretching is intrinsic and homogeneous; however, the \gls{nmr} nuclei are each coupled to far fewer atoms, and are capable of identifying dynamical heterogeneity \cite{Kaplan1982,Khudozhitkov2018a}. This sensitivity is clearly demonstrated by 4D exchange \gls{nmr}, where subsets of nuclei in supercooled polyvinyl acetate were tracked by their local relaxation rate, revealing a broad distribution of relaxation times\cite{SchmidtRohr1991}. Furthermore, dynamical heterogeneities have been theoretically shown to be a prerequisite for stretched exponential relaxation in dynamically frustrated systems, such as supercooled liquids\cite{Simdyankin2003}. A reduction of $\beta$ below one is a signature of dynamic heterogeneity.
Potential applications of the \gls{rtil} \gls{emim-ac}, with ions depicted in \Cref{fig:emim-ac}, have motivated detailed studies of its properties, including neutron scattering measurements of its liquid structure\cite{Bowron2010}, its bulk physical properties\cite{Bonhote1996,2011-Fendt-JCED-56-31,2011-Pinkert-PCCP-13-5136,2012-Pereiro-CC-48-3656,2016-Castro-JCED-61-2299,2009-Evlampieva-RJAC-82-666,2017-Zhang-JML-233-471,2015-Nazet-JCED-60-2400,2013-Araujo-JCT-57-1,2012-QuijadaMaldonado-JCT-51-51},
and its ability to dissolve cellulosic material\cite{2011-Freire-JCED-56-4813,2014-Castro-IECR-53-11850}. Here, we use implanted-ion \gls{bnmr} to study the development of dynamic heterogeneity and ionic mobility of implanted \ce{^8Li^+}\ in supercooled \gls{emim-ac}. The \gls{bnmr} signal is due to the anisotropic $\beta$-decay of a radioactive \gls{nmr} nucleus\cite{Heitjans2005,Ackermann1983,Heitjans1986}, similar to muon spin rotation. The probe in our case is the short-lived \ce{^8Li}, produced as a low-energy spin-polarized ion beam and implanted into the sample\cite{2015-MacFarlane-SSNMR-68-1}. At any time during the measurement, the \ce{^8Li^+}\ are present in the sample at ultratrace (\SI{e-13}{\molar}) concentration. Implanted-ion \gls{bnmr} has been developed primarily for studying solids, particularly thin films. It is not easily amenable to liquids, since the sample must be mounted in the beamline vacuum, yet the exceptionally low vapor pressure of \glspl{rtil} makes the present measurements feasible\cite{2018-Szunyogh-DT-47-14431}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/horizontal_labelled_longform.pdf}
\caption{Structure of the EMIM cation (\textit{left}) and Ac anion (\textit{right}).}
\label{fig:emim-ac}
\end{figure}
We have measured the strong temperature dependence of the \gls{slr} ($1/T_1$) and resonance of \ce{^8Li}\ in \gls{emim-ac}. The relaxation shows a characteristic \gls{bpp} peak at \SI{298}{\kelvin}, coinciding with the emergence of dynamical heterogeneity, indicated by stretched exponential relaxation. Resonance measurements clearly demonstrate motional narrowing as the \gls{rtil} is heated out of the supercooled regime. Our findings show that \gls{bnmr} could provide a new way to study depth-resolved dynamics in \textit{thin films} of \glspl{rtil}\cite{Nishida2018film}.
\section{Experiment \label{sec:experiment}}
\gls{bnmr} experiments were performed at TRIUMF's ISAC facility in Vancouver, Canada. A highly polarized beam of \ce{^8Li^+}\ was implanted into the sample in the high-field \gls{bnmr} spectrometer with static field ${B_0 = \SI{6.55}{\tesla}}$~\cite{2014-Morris-HI-225-173,2004-Morris-PRL-93-157601}. The incident beam had a typical flux of \SI{\sim e6}{ions\per\second} over a beam spot \SI{\sim 2}{\milli\metre} in diameter. With a beam energy of \SI{19}{\kilo\electronvolt}, the average implantation depth was calculated by SRIM\cite{Ziegler2013} to be \SI{\sim200}{nm}, but solvent diffusion (see discussion) modifies this initial implantation profile significantly during the \ce{^8Li}\ lifetime. Spin-polarization of the \ce{^8Li}\ nucleus was achieved in-flight by collinear optical pumping with circularly polarized light, yielding a polarization of \SI{\sim 70}{\percent}\cite{Levy2002}. The \ce{^8Li}\ probe has nuclear spin ${I=2}$, gyromagnetic ratio ${\gamma = \SI{6.3016}{\mega\hertz\per\tesla}}$, nuclear electric quadrupole moment ${Q=\SI[retain-explicit-plus]{+32.6}{\milli b}}$, and radioactive lifetime ${\tau_{\beta}=\SI{1.21}{\second}}$. The nuclear spin-polarization of \ce{^8Li}\ is monitored through its anisotropic $\beta$-decay, where the observed \emph{asymmetry} of the $\beta$-decay is proportional to the average longitudinal nuclear spin-polarization\cite{2015-MacFarlane-SSNMR-68-1}. The proportionality factor is fixed and is determined by the $\beta$-decay properties of \ce{^8Li}\ and the detector geometry. The asymmetry of the $\beta$-decay was combined for two opposite polarizations of the \ce{^8Li}\ by alternately flipping the helicity of the pumping laser. This corrects for differences in detector rates and baseline\cite{Widdra1995,2015-MacFarlane-SSNMR-68-1}.
Similar to other quadrupolar (${I>1/2}$) nuclei in nonmagnetic materials, the strongest interaction between the \ce{^8Li}\ nuclear spin and its surroundings is typically the electric quadrupolar interaction, even when the time average of this interaction is zero. In \gls{emim-ac}, it is very likely that the spin relaxation is due primarily to fluctuations in the local \gls{efg} at the position of the \ce{^8Li}\ nucleus. The relaxation of a single ${I=2}$ nucleus is fundamentally bi-exponential, regardless of the functional form of the \gls{efg} spectral density, although the bi-exponential is not very evident in high fields where the faster exponential has a small relative amplitude\cite{Becker1982,Korblein1985}.
\Gls{slr} measurements used a pulsed \ce{^8Li^+}\ beam. The transient decay of spin-polarization was monitored both during and following the \SI{4}{\second} pulse, where the polarization approaches a steady-state value during the pulse, and relaxes to \num{\sim 0} afterwards. The effect is a pronounced kink at the end of the beam pulse, characteristic of \gls{bnmr} \gls{slr} data (Figure~\ref{fig:slr-spectra}). No \gls{rf} magnetic field is required for the \gls{slr} measurements, as the probe nuclei were implanted in a spin state already far from thermal equilibrium. Thus, it is typically faster and easier to measure \gls{slr} than to measure the resonance spectrum; however, this type of relaxation measurement has no spectral resolution, unlike conventional \gls{nmr}, and reflects the spin relaxation of \emph{all} the \ce{^8Li}.
Resonances were acquired by stepping a \gls{cw} transverse \gls{rf} magnetic field slowly through the \ce{^8Li}\ Larmor frequency, with a continuous \ce{^8Li^+}\ beam. The spin of any on-resonance \ce{^8Li}\ is rapidly nutated by the \gls{rf} field, resulting in a loss in time-averaged asymmetry.
The sample consisted of a \glsdesc{emim-ac} solution (Sigma-Aldrich). To avoid the response being dominated by trace-level \ce{Li}-trapping impurities, we introduced a stable isotope ``carrier'' (\ce{LiCl}) at low, but macroscopic concentration to saturate impurity \ce{Li^+}\ binding sites. Additional characterization of a similar solution, prepared in the same manner, can be found in the supplementary information of Ref.~\citenum{2018-Szunyogh-DT-47-14431}. The solution was kept in a dry-pumped rough vacuum for approximately \SI{12}{\hour} prior to the measurement. A \SI{\sim3}{\micro\liter} droplet was placed in a \SI{3}{\mm} diameter blind hole set \SI{0.5}{\mm} into a \SI{1}{\mm} thick aluminum plate. The Al plate was then bolted vertically into an ultrahigh vacuum (\num{e-10}~Torr) coldfinger liquid He cryostat and the temperature was varied from \SIrange{220}{315}{\kelvin}. The viscosity was sufficient to prevent the liquid from flowing out of the holder during the experiment. Sample mounting involved a few minutes of exposure to air, followed by pumping for \SI{30}{\minute} in the spectrometer's load lock.
Separately, we determined the self-diffusion coefficients of the \ce{LiCl} \gls{emim-ac} solution using conventional bi-polar \gls{pfg} \gls{nmr} with an in-house probe\cite{Michan2012} and spectrometer\cite{Michal2002} at \SI{8.4}{\tesla} and room temperature. A gradient pulse of ${\delta=\SI{3.2}{\ms}}$ was applied in varying strength, $g$, from \SIrange{50}{1200}{\gauss\per\cm}. The probe frequency was set to either \ce{^1H} or \ce{^7Li}, and the diffusion time $\Delta$ was varied between \SIrange{100}{450}{\ms}, according to the species diffusion rate. A delay of \SI{30}{\ms} allowed eddy currents to decay before acquisition. Diffusion coefficients were extracted by fitting the resulting Gaussian decay to the Stejskal-Tanner diffusion equation \cite{Stejskal1965}.
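The Stejskal-Tanner analysis amounts to a linear fit: the echo amplitude decays as $S(g) = S_0 \exp[-D(\gamma g \delta)^2(\Delta - \delta/3)]$, so $D$ follows from the slope of $\ln S$ against the gradient factor. The sketch below uses the pulse timings quoted above, but the diffusion coefficient and echo amplitudes are illustrative assumptions, not our measured data.

```python
import numpy as np

def stejskal_tanner_D(g, S, gamma, delta, Delta):
    """Diffusion coefficient from PFG-NMR echo amplitudes S(g):
    ln S = ln S0 - D * (gamma*g*delta)**2 * (Delta - delta/3)."""
    b = (gamma * g * delta) ** 2 * (Delta - delta / 3.0)  # "b-values"
    bm = b - b.mean()
    lnS = np.log(S)
    slope = (bm @ (lnS - lnS.mean())) / (bm @ bm)  # least-squares slope
    return -slope

# Illustrative numbers: 1H gyromagnetic ratio, the pulse timings from the
# text, gradients 50-1200 G/cm (1 G/cm = 0.01 T/m); D_true is an assumed
# value, not a measured diffusion coefficient.
gamma = 2 * np.pi * 42.577e6               # rad s^-1 T^-1
delta, Delta = 3.2e-3, 100e-3              # s
g = np.linspace(50.0, 1200.0, 12) * 1e-2   # T/m
D_true = 3e-13                             # m^2/s
S = np.exp(-D_true * (gamma * g * delta) ** 2 * (Delta - delta / 3.0))
D_est = stejskal_tanner_D(g, S, gamma, delta, Delta)
print(f"D_est = {D_est:.3e} m^2/s")
```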
\section{Results and Analysis \label{sec:results}}
\subsection{Relaxation \label{sec:results:relaxation}}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/slr-spectra.pdf}
\caption{The $\beta$-decay asymmetry of \ce{^8Li}\ in \glstext{emim-ac}, with stretched exponential fits. The \Glstext{slr} is strongly temperature-dependent, and is well described by \Cref{eq:strexp} convoluted with the square beam pulse, as evidenced by $\tilde{\chi}^2_\mathrm{global} \approx 0.99$. The data have been binned by a factor of \num{20} for clarity. }
\label{fig:slr-spectra}
\end{figure}
Typical \ce{^8Li}\ \gls{bnmr} \gls{slr} measurements are shown in \Cref{fig:slr-spectra}. Clearly, the relaxation shows a strong temperature dependence. At low temperatures it is slow, but its rate increases rapidly with temperature, revealing a maximum near room temperature. Besides the rate, the {\it form} of the relaxation also evolves with temperature. At low temperature it is highly non-exponential, but gradually crosses-over to nearly exponential at room temperature. For a \ce{^8Li^+}\ ion implanted at time $t^{\prime}$, the spin polarization $P(t,t^\prime)$ at time ${t>t^{\prime}}$ is well-described by a stretched exponential:
\begin{equation} \label{eq:strexp}
P \left( t, t^\prime \right) = \exp \left\{ - \left[ \lambda \left( t-t^\prime \right) \right]^\beta \right\},
\end{equation}
where $t^\prime$ is integrated out as a result of convolution with the \SI{4}{\s} beam pulse\cite{2015-MacFarlane-PRB-92-064409}. A very small fraction, about \SI{2}{\percent}, of the \gls{slr} signal can be attributed to \ce{^8Li^+}\ stopping in the sample holder. While this background signal is nearly negligible, it is accounted for with an additive signal: ${P_b(t,t^\prime) = \exp\left\{-[\lambda_b(t-t^\prime)]^{0.5}\right\}}$, a phenomenological choice given the disorder in the \ce{Al} alloy. The relaxation rate, $\lambda_b$, was obtained from a separate calibration run at \SI{300}{\kelvin} and was assumed to follow the Korringa law: $\lambda_b \propto T$.
The \gls{slr} time series at all $T$ were fit simultaneously with a common initial asymmetry. To find the global least-squares fit, we used C++ code leveraging the MINUIT~\cite{1975-James-CPC-10-343} minimization routines implemented within ROOT\cite{1997-Brun-NIMA-389-81}, accounting for the strongly time-dependent statistical uncertainties in the data. The fitting quality was excellent, with ${\tilde{\chi}^2_\mathrm{global} \approx 0.99}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/slr-fits_T.pdf}
\caption{The stretched exponential parameters from fits to the \glstext{slr} in \gls{emim-ac} (refer to \Cref{fig:slr-spectra} for fit curves). For the rate ($1/T_1$), the line denotes a fit using \Cref{eq:model,eq:j,eq:tau}, as detailed in the text. For the stretching exponent ($\beta$), the line is a guide for the eye. Both are highly temperature dependent, showing a monotonic increase until the $1/T_1$ \glstext{bpp}\cite{1948-Bloembergen-PR-73-679} peak at \SI{298}{\kelvin}, above which $\beta\approx1$. The shaded region indicates the approximate temperature range of supercooling between $T_\mathrm{g}$ and $T_\mathrm{m}$. }
\label{fig:slr-fits}
\end{figure}
As shown in \Cref{fig:slr-fits}, the change in $1/T_1$ over the measured \SI{\sim 100}{\kelvin} range is remarkable, varying over 3 orders of magnitude. These changes coincide with the relaxation converging to monoexponentiality with increasing temperature, as evidenced by $\beta \rightarrow 1$ (upper panel). The temperature dependence of $1/T_1$ is, however, not monotonic; the rate is clearly maximized at room temperature, corresponding to a \gls{bpp} peak\cite{1948-Bloembergen-PR-73-679}. At this temperature, the characteristic fluctuation rate of the dynamics responsible for the \gls{slr} ($\tau_c^{-1}$) matches the probe's Larmor frequency (${\omega_0=\gamma B_0}$), i.e., ${\tau_c\omega_0 \approx 1}$. The \gls{slr} due to a fluctuating \gls{efg} can be described by the following simple model\cite{Abragam1962}:
\begin{equation} \label{eq:model}
\frac{1}{T_1} = a \left[ J_1(\omega_0) + 4J_2(2\omega_0) \right] + b,
\end{equation}
where $a$ is a coupling constant related to the strength of the \gls{efg}, $b$ is a small phenomenological temperature-independent relaxation rate important at low $T$\cite{1991-Heitjans-JNCS-131-1053}, and the $J_{n}$ are the $n$-quantum \gls{nmr} fluctuation spectral density functions. If the local dynamics relax exponentially, the spectral density has the Debye (Lorentzian) form:
\begin{equation} \label{eq:j}
J_n(n\omega) = \frac{2 \tau_c}{1 + \left(n \omega \tau_c \right)^2},
\end{equation}
where $\tau_c$ is the (exponential) correlation time.
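As a numerical aside (dimensionless units, not part of the analysis), the location of the $1/T_1$ maximum implied by the two equations above can be found by direct evaluation; for the $J_1 + 4J_2$ combination the peak falls at ${\omega_0\tau_c \approx 0.62}$, i.e., at $\tau_c\omega_0$ of order unity as stated:

```python
import numpy as np

# Debye spectral density, Eq. (eq:j), in units where omega_0 = 1.
def J(n, tau):
    return 2.0 * tau / (1.0 + (n * tau) ** 2)

tau = np.geomspace(1e-2, 1e2, 100001)    # tau_c in units of 1/omega_0
combo = J(1, tau) + 4.0 * J(2, tau)      # proportional to 1/T1 - b, Eq. (eq:model)

tau_peak = tau[np.argmax(combo)]
print(f"1/T1 is maximized at omega_0 * tau_c = {tau_peak:.2f}")  # ~0.62
```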
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/emim-ac-dvisc_inset.pdf}
\caption{The dynamic viscosity ($\eta$) from the literature\cite{Bonhote1996, 2009-Evlampieva-RJAC-82-666, 2011-Fendt-JCED-56-31, 2011-Freire-JCED-56-4813, 2011-Pinkert-PCCP-13-5136, 2012-Pereiro-CC-48-3656, 2012-QuijadaMaldonado-JCT-51-51, 2013-Araujo-JCT-57-1, 2014-Castro-IECR-53-11850, 2015-Nazet-JCED-60-2400, 2016-Castro-JCED-61-2299, 2017-Zhang-JML-233-471}, fitted with a \glstext{vft} model. \glstext{emim-ac} is a fragile glass former, as evidenced by a super-Arrhenius $\eta$. \emph{Inset}: the resonance linewidth as a function of $\eta(T)/T$, with linear fit for $T\ge \SI{250}{\kelvin}$. This linear scaling is expected from the Stokes-Einstein relation.}
\label{fig:viscosity}
\end{figure}
Local fluctuations may be related to other macroscopic properties of the liquid such as the viscosity. Using values from the literature~\cite{Bonhote1996, 2009-Evlampieva-RJAC-82-666, 2011-Fendt-JCED-56-31, 2011-Freire-JCED-56-4813, 2011-Pinkert-PCCP-13-5136, 2012-Pereiro-CC-48-3656, 2012-QuijadaMaldonado-JCT-51-51, 2013-Araujo-JCT-57-1, 2014-Castro-IECR-53-11850, 2015-Nazet-JCED-60-2400, 2016-Castro-JCED-61-2299, 2017-Zhang-JML-233-471}, \Cref{fig:viscosity} shows that the dynamic viscosity ($\eta$) of \gls{emim-ac} is non-Arrhenius, characteristic of a fragile glass-former, and can be described with the phenomenological \gls{vft} model. The inset shows that the linewidth is proportional to $\eta/T$, consistent with the Stokes-Einstein relation (Equation 48 of \citet{1948-Bloembergen-PR-73-679}). We then assume that $\tau_c$ is proportional to $\eta/T$:
\begin{equation} \label{eq:tau}
\tau_c=\frac{c}{T}\exp\left[ \frac{E_A}{k_B \left(T-T_\mathrm{VFT}\right)} \right],
\end{equation}
where $c$ is a prefactor, $E_A$ is the activation energy, $k_B$ is the Boltzmann constant, $T$ is the absolute temperature, and $T_\mathrm{VFT}$ is a constant. Together, \Cref{eq:model,eq:j,eq:tau} encapsulate the temperature and frequency dependence of the \ce{^8Li}\ $1/T_1$ in the supercooled ionic liquid. A fit of this model to the data is shown in \Cref{fig:slr-fits}, and parameter values can be found in \Cref{tbl:fitpar}. The correlation times from \SIrange{220}{315}{\kelvin} are on the order of nanoseconds. The choice of \Cref{eq:j} assumes that the $\beta <1$ stretching arises from a population of exponential relaxing environments with a broad distribution of $\tau_c$. As mentioned, this assumption is likely good for the \ce{^8Li}\ \gls{bnmr} probe; especially since the basic local relaxation of \ce{^8Li}\ due to quadrupolar coupling is not intrinsically stretched, independent of the dynamical fluctuation spectrum\cite{Becker1982}. Under this construction, the departure from ${\beta=1}$ in the supercooled regime is consistent with the emergence of dynamical heterogeneity.
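A short sketch evaluating \Cref{eq:model,eq:j,eq:tau} with the fitted values from \Cref{tbl:fitpar} (the Larmor frequency is taken from the resonance section; this is an illustration, not the fitting code) reproduces the nanosecond-scale correlation time and the room-temperature rate maximum:

```python
import numpy as np

# Fitted parameters from Table (tbl:fitpar), in the units noted there.
k_B = 8.617e-5                   # Boltzmann constant (eV/K)
c = 1.02e-9                      # VFT prefactor (K s)
E_A = 74.8e-3                    # activation energy (eV)
T_VFT = 165.8                    # VFT temperature (K)
a = 1.550e9                      # quadrupolar coupling constant (s^-2)
b = 5.13e-2                      # background relaxation rate (1/s)
omega0 = 2.0 * np.pi * 41.27e6   # 8Li Larmor angular frequency (rad/s)

def tau_c(T):
    """VFT correlation time, Eq. (eq:tau)."""
    return (c / T) * np.exp(E_A / (k_B * (T - T_VFT)))

def slr_rate(T):
    """1/T1 from Eqs. (eq:model) and (eq:j)."""
    tau = tau_c(T)
    J1 = 2.0 * tau / (1.0 + (omega0 * tau) ** 2)
    J2 = 2.0 * tau / (1.0 + (2.0 * omega0 * tau) ** 2)
    return a * (J1 + 4.0 * J2) + b

T = np.linspace(220.0, 315.0, 1000)
T_peak = T[np.argmax(slr_rate(T))]
print(f"tau_c(298 K) = {tau_c(298.0) * 1e9:.1f} ns")  # nanosecond scale
print(f"1/T1 peaks near T = {T_peak:.0f} K")          # near room temperature
```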
\begin{table}
\bgroup
\arrayrulecolor{Gray!70}
\begin{tabular}{ll|ll}
&& $\bm{1/T_1(T)}$ & $\bm{\eta(T)}$ \\ \hline
$c$ &(\si{n\K\cdot\s}) & \num{1.02\pm0.09}&\\
$\eta_0$ &(\si{\micro\Pa\cdot \s}) & & \num{59\pm4}\\
$E_A$ &(\si{\meV}) & \num{74.8\pm1.5} & \num{81.7\pm1.3}\\
$T_\mathrm{VFT}$&(\si{\K}) & \num{165.8\pm0.9} & \num{175.8\pm0.9}\\
$a$ &(\si{\s^{-2}}) & \num{1.550\pm0.006e9}&\\
$b$ &(\si{\per\s}) & \num{5.13\pm0.07e-2}&
\end{tabular}
\egroup
\caption{Fit results for $1/T_1$ (\gls{bnmr}) and $\eta$ (literature). Parameters are defined in \Cref{eq:model,eq:tau}, substituting $c/T\leftrightarrow\eta_0$ in the latter. Corresponding curves are shown in \Cref{fig:slr-fits,fig:viscosity}.}
\label{tbl:fitpar}
\end{table}
\subsection{Resonance \label{sec:results:resonance}}
Typical \ce{^8Li}\ resonances are shown in \Cref{fig:resonance-spectra}. Similar to the \gls{slr}, they show a strong temperature dependence. At low $T$, the resonance is broad with a typical solid-state linewidth on the order of \SI{10}{\kilo\hertz}. The lack of resolved quadrupolar splitting reflects the absence of a single well-defined \gls{efg}; the width likely represents an inhomogeneous distribution of static, or partially averaged, \glspl{efg} giving a broad ``powder pattern'' lineshape convolved with the \gls{cw} \gls{nmr} excitation, a Lorentzian of width ${\gamma B_1}$, where $B_1 \approx\SI{0.1}{\gauss}$. This inhomogeneous quadrupolar broadening is qualitatively consistent with the heterogeneity in the dynamics implied by the stretched exponential relaxation.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/resonance-spectra.pdf}
\caption{The \ce{^8Li}\ resonance in \glstext{emim-ac}, shifted by the Larmor frequency ($\nu_0\approx\SI{41.27}{\MHz}$), with Lorentzian fit. The line narrows and increases in height as the temperature is raised, with a peak in the latter near \SI{260}{\kelvin} (see \Cref{fig:resonance-fits}). The vertical scale is the same for all spectra, which have been offset for clarity. Spectra are inverted for consistency with the presentation in conventional \gls{nmr}.}
\label{fig:resonance-spectra}
\end{figure}
The resonances are well described by a simple Lorentzian. The baseline (time-integrated) asymmetry is also strongly temperature dependent due to the temperature dependence of $1/T_1$. The shift of the resonance relative to a single crystal of \ce{MgO} (our conventional frequency standard) is about \SI{-9}{\ppm}, but a slow drift of the magnetic field prevents a more accurate determination or a reliable measurement of any slight $T$ dependence. The other fit parameters extracted from this analysis (the linewidth, peak height, and intensity, i.e., the area of the normalized spectra) are shown in \Cref{fig:resonance-fits}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/emim-ac-licl-1f-fits.pdf}
\caption{The Lorentzian fit parameters of the \ce{^8Li}\ resonance in \gls{emim-ac}, illustrated in \Cref{fig:resonance-spectra}, with lines to guide the eye. Narrowing of the line suggests an onset of solvent molecular motion above the melting point. The corresponding drop in intensity (area of the normalized spectra), and the non-monotonic peak height suggests inhomogeneous broadening at low temperature, and slow spectral dynamics occurring on the scale of \SI{1}{\s}, the integration time at each frequency. Shading indicates the supercooled region between $T_\mathrm{m}$ and $T_\mathrm{g}$ (off scale).}
\label{fig:resonance-fits}
\end{figure}
As anticipated from the most striking features in \Cref{fig:resonance-spectra}, the linewidth and peak height evolve considerably with temperature. Note that the peak height in \Cref{fig:resonance-fits} is measured from the baseline, and is normalized to be in units of the baseline, accounting for changes in the \gls{slr}. Reduction in the linewidth by several orders of magnitude is compatible with motional narrowing, where rapid molecular motion averages out static inhomogeneous broadening. Saturation of the narrowing by room temperature
\footnote{The high temperature linewidth (\SI{\sim1.6}{\ppm}) is compatible with the limit imposed by the homogeneity of the magnet at its center (\SI{\sim10}{\ppm} over a cubic centimeter).}
with an onset far below the $1/T_1$ maximum is consistent with the \gls{bpp} interpretation of the \gls{slr} peak~\cite{1948-Bloembergen-PR-73-679}.
\section{Discussion \label{sec:discussion}}
Owing to their strong Coulomb interactions, \glspl{rtil} are known to contain a significant amount of structure. One might expect pairing of anions and cations, but calculations based on a simplified ion interaction model suggest that such pairs are short-lived\cite{Lee2015dilute}. Dielectric relaxation experiments confirm this, placing a \SI{100}{\pico\second} upper bound on their lifetime, rendering them a poor description of the average ionic structure\cite{Daguenet2006}. Rather, the arrangement can be described as two interpenetrating ionic networks. As revealed by neutron scattering \cite{Tosi1993Rev,Murphy2015Rev,Bowron2010}, each network forms cages about the other that are highly anisotropic due to the tendency for \ce{EMIM} rings to stack \cite{Bowron2010}. In fragile glass formers, such as \gls{emim-ac}, \gls{md} simulations indicate that the motion of the caged ion and the center of mass motion of the cage are correlated\cite{Habasaki2015}.
Presumably, in our case, the small \ce{^8Li^+}\ cation is coordinated by several acetates and a similar correlation will exist for the \ce{^8Li^+}\ in the absence of independent long-range diffusion.
Naturally, the motion of the surrounding ionic solvent cage will cause the local \gls{efg} to fluctuate, and a strong temperature dependence is reasonable since these same fluctuations have a role in determining the strongly temperature dependent viscosity $\eta(T)$ shown in \Cref{fig:viscosity}. While a direct relation between the specific motions sensed by \ce{^8Li}\ and the bulk $\eta$ is complex and unclear\cite{1987-Akitt-MNNMR-7-189}, one may anticipate a consistency between their kinetics should a single mechanism govern both. The similarity of both $E_A$ and $T_\mathrm{VFT}$ with those found from the viscosity of the pure \gls{emim-ac} suggests that this is the case and provides further justification for the choice of \Cref{eq:tau}.
The inset of \Cref{fig:viscosity} shows that motional narrowing causes the resonance linewidths to scale as ${\eta/T}$ in the liquid state above $T_\mathrm{g}$, a situation also observed in \glstext{deme}-\glstext{tfsa} [\glstext{deme} = \glsdesc{deme}; and \glstext{tfsa} = \glsdesc{tfsa}] with solute \ce{^7Li}\ \gls{nmr}\cite{Shirai2011LiNMR}. That this relationship holds for \ce{^8Li}\ is surprising; our \gls{bnmr} signal is due to the dynamics of a population of implanted local probes, for which solvent self-diffusion and probe tracer-diffusion are not differentiated, whereas the viscosity is a bulk property. If \ce{^8Li^+}\ is diffusing, it implies that the diffusion is controlled by the solvent dynamics. In the limiting case of a solid, interstitial diffusion can be fast, yet the viscosity infinite, and the decoupling of diffusion and the host viscosity is self-evident. Many \glspl{rtil} violate the Stokes-Einstein relation that linearly relates self-diffusivity $D$ to ${T/\eta}$, and its violation at low $T$ in the inset of \Cref{fig:viscosity} shows that ionic diffusion in supercooled \glspl{rtil} may contain some of the character expected from a solid. At \SI{295}{\K}, however, our \ce{^7Li}\ \cgls{pfg} \gls{nmr} in \gls{emim-ac} with \SI{30}{\m\molar} \ce{LiCl} shows that the diffusion is not significantly larger than that of the solvent (${D_{\ce{Li}}=\SI{3.46\pm0.11 e-10}{\m^2\s^{-1}}}$ vs ${D_{\ce{H}}=\SI{3.61\pm0.07 e-10}{\m^2\s^{-1}}}$), demonstrating that the \ce{^8Li}\ is primarily sensing the mobility of its surrounding solvent cage.
Relatively little is known about \ce{Li^+}\ as a solute in \gls{emim-ac}, compared to other imidazolium-based \glspl{rtil}, which have been explored as electrolytes for lithium-ion batteries\cite{Garcia2004batt,Shirai2011LiNMR}. Their properties should be qualitatively comparable, but the details certainly differ as both anion size and shape play a role in the diffusivity\cite{Tokuda2005PFGNMR}. Shown to compare favorably with implanted-ion \gls{bnmr}\cite{2018-Szunyogh-DT-47-14431}, conventional \gls{nmr} can provide a comparison to some closely related \glspl{rtil}: \ce{EMIM-TFSA} and \ce{EMIM-FSA} [\glstext{fsa} = \glsdesc{fsa}]. In both cases, the diffusion of \ce{^7Li}\ was similar to that of the solvent ions\cite{Hayamizu2011}. Differences in the tracer diffusion are reflected in the activation barrier for \ce{^7Li}\ hopping: \SI{222\pm6}{\milli\electronvolt} and \SI{187\pm2}{\milli\electronvolt}, respectively\cite{Hayamizu2011}. This correlates well with anion molecular weight, \SI{280}{\g\per\mol} and \SI{180}{\g\per\mol}, and with the barrier we report for \ce{^8Li}: \SI{74.8\pm1.5}{\milli\electronvolt} for acetate of \SI{59}{\g\per\mol}. This further emphasizes the probe sensitivity to the solvent dynamics.
The motional narrowing immediately apparent in \Cref{fig:resonance-fits} is analogous to conventional pulsed \gls{rf} \gls{nmr}, but the use of \gls{cw} \gls{rf} modifies the detailed description significantly. While the details are beyond the scope of this work, and will be clarified at a later date, we now give a qualitative description. In the slow fluctuation regime, the line is \emph{broadened} relative to the static limit at $T=0$ due to slow spectral dynamics occurring over the second-long integration time at each RF frequency. Both the peak height and the intensity (area of the normalized curve) are increased through the resulting double counting of spins at multiple \gls{rf} frequencies. In the fast fluctuation limit, the time spent with a given local environment is small and the RF is relatively ineffective at nutating the spins. Unlike the slow fluctuation limit, transverse coherence is now needed to destroy polarization. Coherence is maintained only in a small range about the Larmor frequency, narrowing as the fluctuation rate increases. The intensity (area) is also reduced from the preservation of off-resonance polarization, but the peak height is unaffected.
The local maximum in the peak height can be explained by the small, slowly relaxing background. When the \gls{rf} is applied on resonance, the signal from the sample is eliminated and the asymmetry is independent of the \gls{slr}. Increasing the \gls{slr} will, however, reduce the off-resonance asymmetry, resulting in a reduction in the fraction of destroyed polarization. This competes with the increase in peak height from motional narrowing and produces the local maximum in \Cref{fig:resonance-fits}.
The development of dynamic heterogeneity at the nanosecond timescale ($\omega_0^{-1}$) is demonstrated by the stretched exponential \gls{slr}, as shown in \Cref{fig:slr-fits}. Concurrently, the line broadening shows that this heterogeneity reaches down to the static timescale. There are no definitive measurements of the melting point of \gls{emim-ac}, since it has not yet been crystallized, but $T_\mathrm{m}$ is no larger than \SI{250}{K}\cite{Sun2009}. In contrast, a calorimetric glass transition has been observed at \SI{\sim198}{K}\cite{Guan2011}. Thus, the dynamic inhomogeneity develops in a range of $T$ that corresponds well to the region of supercooling, indicated by the shading in \Cref{fig:slr-fits}. Stretched exponential relaxation, reflecting dynamic heterogeneity, is a well-known feature of \gls{nmr} in disordered solids\cite{Bohmer2000Rev,Schnauss1990}. In some cases, diffusive spin dynamics, driven by mutual spin flips of identical near-neighbor nuclei, can act to wash out such heterogeneity. Such spin diffusion may be quenched by static inhomogeneities that render the nuclei non-resonant with their neighbors\cite{Bernier1973}. However, a unique feature of \gls{bnmr} is that spin diffusion is absent: even in homogeneous systems, the probe isotope is always distinct (as an \gls{nmr} species) from the stable host isotopes, and the \gls{bnmr} nuclei are, themselves, always isolated from one another. In the absence of spin diffusion, on quite general grounds, it has been shown\cite{1991-Heitjans-JNCS-131-1053,Stockmann1984} that the stretching exponent $\beta$ should be 0.5. Our data in \Cref{fig:slr-fits} appear to be approaching this value at the lowest temperatures. While stretched exponential relaxation is very likely a consequence of microscopic inhomogeneity, unequivocal confirmation requires more sophisticated measurements such as spectral resolution of the \gls{slr} and reduced 4D-\gls{nmr}\cite{SchmidtRohr1991}.
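A small Monte Carlo sketch (arbitrary units, not part of the analysis) illustrates the origin of this ${\beta = 1/2}$ result: dilute, isolated probes relaxed by the summed $r^{-6}$ couplings to randomly placed fluctuators, with no spin diffusion to average the environments, yield a stretched exponential with exponent one half when the survival probability is averaged over probe sites.

```python
import numpy as np

# Each probe sees fluctuators placed uniformly at random in a sphere of
# radius R; its relaxation rate is the sum of r^-6 couplings. Averaging
# exp(-rate * t) over many such environments gives exp[-(t/tau)^(1/2)].
rng = np.random.default_rng(42)

density, R = 0.135, 6.0                  # fluctuator density, cutoff radius
n_env = 30000                            # independent probe environments
N = int(round(density * 4.0 / 3.0 * np.pi * R**3))  # fluctuators per sphere

r = R * rng.random((n_env, N)) ** (1.0 / 3.0)  # uniform within the sphere
rate = (r ** -6.0).sum(axis=1)                 # relaxation rate per probe

t = np.geomspace(0.25, 4.0, 8)
P = np.exp(-rate[:, None] * t[None, :]).mean(axis=0)

# The slope of log(-log P) vs log t estimates the stretching exponent.
beta = np.polyfit(np.log(t), np.log(-np.log(P)), 1)[0]
print(f"beta = {beta:.2f}")  # close to 0.5
```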
Based on the non-Arrhenius behaviour of $\eta(T)$, \gls{emim-ac} is a reasonably fragile glass former, comparable to toluene, which has been studied in some detail using \ce{^2H} \gls{nmr}\cite{Hinze1996Toluene,Bohmer2000Rev}, providing us with a useful point of comparison to a nonionic liquid. Like \ce{^8Li}, $^2$H should exhibit primarily quadrupolar relaxation. Toluene is supercooled between its melting point of \SI{178}{\kelvin} and glass transition at \SI{\sim 117}{\kelvin}, though it shows stretched exponential relaxation only below about $1.1~T_\mathrm{g}$, considerably deeper into the supercooled regime than in our case, where the onset is near $1.25~T_\mathrm{g}$, likely due to the stronger tendency to order in the ionic liquid.
The closest analogue to our experiment is, perhaps, an early (neutron activated) \ce{^8Li}\ \gls{bnmr} study in \ce{LiCl.7D2O}\cite{1991-Heitjans-JNCS-131-1053,Faber1989}. There, the observed temperature dependence of the \gls{slr} is qualitatively similar (see Figure~9 of Ref.~\citenum{1991-Heitjans-JNCS-131-1053}): at low temperatures, the relaxation is nearly temperature independent, followed by a rapid increase above the glass transition, leading eventually to the \gls{bpp} peak at higher temperatures. This behaviour was interpreted as the onset of molecular motion above \SI{\sim 80}{\kelvin}, whose characteristic correlation times reflect the diffusion and orientational fluctuations in \ce{D2O}. This is consistent with the picture outlined here, although in our more limited temperature range the relaxation can be ascribed to a single dynamical process.
At present, there are few examples of \ce{^8Li}\ \gls{bnmr} in organic materials, as this application is in its infancy. Nevertheless, several trends from these early investigations have emerged, which serve as an important point of comparison. From an initial survey of organic polymers~\cite{2014-McGee-JPCS-551-012039}, it was remarked that resonances were generally broad and unshifted, with little or no temperature dependence. In contrast, the SLR was typically fast and independent of the proton density, implying a quadrupolar mechanism caused by the \gls{md} of the host atoms. These dynamics turned out to be strongly depth dependent, increasing on approach to a free surface\cite{2015-McKenzie-SM-11-1755} or buried interface\cite{2018-McKenzie-SM-14-7324}. In addition to dynamics of the polymer backbone, certain structures admitted \ce{Li^+}\ diffusion\cite{2014-McKenzie-JACS-136-7833}, whose mobility was found to depend on the ionicity of the anion of the dissolved \ce{Li} salt\cite{2017-McKenzie-JCP-146-244903}. A few small molecular glasses have also been investigated, where the relaxation is similarly fast\cite{2018-Karner-JPSCP-21-011022}.
Common to all of these studies is the non-exponential decay of the \ce{^8Li}\ spin-polarization, which is well described by a stretched exponential. In these disordered materials, the ``stretched'' behavior is compatible with the interpretation of a distribution of local environments, leading to an inhomogeneous \gls{slr}. Due to their high $T_\mathrm{g}$, the dynamics did not homogenize below the spectrometer's maximum temperature of \SI{\sim315}{\K}, unlike \gls{emim-ac}. This work is an important first example where the liquid state is attainable to a degree where we recover simple exponential \gls{slr}, accompanied by motional narrowing and a \gls{bpp} peak.
\section{Conclusion \label{sec:conclusion}}
We report the first measurements of \ce{^8Li}\ \gls{bnmr} in the ionic liquid \glsdesc{emim-ac}. Our results demonstrate that the quadrupolar interaction does not hinder our ability to follow the \gls{bnmr} signal through both the liquid and glassy state. We observed clear motional narrowing as the temperature is raised, accompanied by enhanced spin-lattice relaxation, whose rate is maximized at room temperature. From an analysis of the temperature dependent \gls{slr} rate, we extract an activation energy and \gls{vft} constant for the solvation dynamics, which are in relatively good agreement with the dynamic viscosity of (bulk) \gls{emim-ac}. At low temperatures near $T_\mathrm{m}$, the resonance is broad and intense, reflective of our sensitivity to slow heterogeneous dynamics near the glass transition. In this temperature range, the form of the relaxation is well-described by a stretched exponential, again indicative of dynamic heterogeneity. These findings suggest that \ce{^8Li}\ \gls{bnmr} is a good probe of both solvation dynamics and their heterogeneity. The depth resolution of ion-implanted \gls{bnmr} may provide a unique means of studying nanoscale phenomena in ionic liquids, such as ion behaviour at the liquid-vacuum interface or the dependence of diffusivity on film thickness~\cite{2018-Maruyama-ACSN-12-10509}.
\begin{acknowledgments}
The authors thank R.~Abasalti, D.~J.~Arseneau, S.~Daviel, B.~Hitti, and D.~Vyas for their excellent technical support.
This work was supported by NSERC Discovery grants to RFK, CAM, and WAM;
AC and RMLM acknowledge the support of their NSERC CREATE IsoSiM Fellowships;
MHD, DF, VLK, and JOT acknowledge the support of their SBQMI QuEST Fellowships;
LH thanks the Danish Council for Independent Research | Natural Sciences for financial support.
\end{acknowledgments}
\section{Introduction} \label{sec:intro}
Large-scale surveys of molecular clouds in the Milky Way revealed that most of the molecular gas is concentrated in cloud complexes called giant molecular clouds (GMCs) \citep{1985ApJ...295..422C,1987ApJ...322..706D,1998ApJS..115..241H,2001ApJ...547..792D,2006ApJS..163..145J,2013PASA...30...44B}. GMCs have a hierarchical and complex structure that can be divided into substructures such as clouds, clumps, and cores \citep{1999ASIC..540....3B}. Previous studies found that GMCs are gravitationally bound, while the constituent clumps of GMCs and isolated molecular clouds with M $<$ $10^3$ M$_\odot$ are not in self-gravitational equilibrium \citep{1987ApJ...319..730S, 2001ApJ...551..852H}. The mass function of molecular clouds follows a power law, $dN/dM \propto M^{\gamma}$, with an index of around $-$1.7 \citep{1997ApJ...476..166W, 2005PASP..117.1403R}, and there exists a scaling relation between the line width and the size of molecular clouds \citep{1981MNRAS.194..809L}.
Massive stars influence Galactic evolution by ionizing and dynamically stirring the interstellar medium and by chemically enriching it with heavy elements. Stellar feedback, in particular that from young OB clusters, strongly affects the evolution of molecular clouds through the expansion of \ion{H}{2} regions \citep{2008ApJ...681.1341W,2010ApJ...716.1478W}, stellar winds \citep{2006ApJ...649..759C}, and supernova events \citep{2015Sci...347..526M}. Such feedback on the surrounding interstellar medium may trigger the formation of a new generation of stars. Recent studies show that the formation of 14$\%$ to 22$\%$ of massive young stellar objects (YSOs) in the Milky Way may be triggered by expanding \ion{H}{2} regions \citep{2012ApJ...755...71K}. By searching for the characteristic mid-infrared (MIR) ring-like morphology, \citet{2014ApJS..212....1A} identified 8399 Galactic \ion{H}{2} regions and \ion{H}{2} region candidates, the most complete catalog of massive star-forming regions in the Milky Way. However, the physical properties of molecular clouds in \ion{H}{2} regions are still unclear. For example, \cite{2010ApJ...709..791B} found that the molecular gas around the infrared bubbles created by young massive stars lies in a ring, rather than a sphere, whereas \cite{2011ApJS..194...32A} showed that the majority of the bubbles in their sample are three-dimensional structures. Meanwhile, the dynamical effect of \ion{H}{2} regions on the surrounding molecular gas is an important factor in understanding the origin of turbulence in molecular clouds and the role of triggered star formation \citep{2011ApJ...742..105A,2017ApJ...849..140X}. Observations of molecular line emission are essential to address these questions.
To investigate the spatial distribution of molecular gas in \ion{H}{2} regions and the dynamical interaction between \ion{H}{2} regions and molecular clouds, we have conducted a large-scale survey of $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=1-0$ emission toward the region of Galactic longitude of 207.7$^{\circ}$ $< l <$ 211.7$^{\circ}$ and Galactic latitude of $-$4.5$^{\circ}$ $< b <$ 0$^{\circ}$ (4.0\arcdeg $\times$ 4.5\arcdeg).
The region surveyed in this work received little attention in previous surveys of molecular clouds. \citet{1953ApJ...118..362S,1959ApJS} and \citet{1982ApJS...49..183B} identified four \ion{H}{2} regions (Sh2-280, Sh2-282, Sh2-283, and BFS54) and \citet{2014ApJS..212....1A} identified six \ion{H}{2} region candidates in this field. Five of the ten \ion{H}{2} regions/candidates mentioned above are spatially coincident with radio continuum emission. Sh2-282, also called LBN 978, is a curved nebula located near the OB association Mon OB2. The O9.7Ib star HD 47432 \citep{2001KFNT...17..409K,2011ApJS..193...24S} has been proposed as the ionizing source of Sh2-282 \citep{1959ApJS,1981A&A...100...28F}. Several bright rims facing the exciting star have been identified in this region \citep{2006A&A...445L..43C}. No remarkable molecular clouds were found in this region in previous large-scale, low-sensitivity surveys \citep{2001ApJ...547..792D,2004PASJ...56..313K}. BFS54 is located in an isolated molecular cloud of a few thousand solar masses \citep{1986ApJ...303..375M}. Although BFS54 is about 3$^{\circ}$ away from Mon OB2, it is probably associated with this OB association. BFS54 is catalogued in surveys of outer Galactic \ion{H}{2} regions \citep{1982ApJS...49..183B,1984NInfo..56...59A,1993ApJS...86..475F,1995AZh....72..168K} and is also listed as a reflection nebula (NGC 2282) \citep{1966AJ.....71..990V,1968AJ.....73..233R,1980ApJ...237..734K,1984A&A...135L..14C}. BFS54 hosts a star cluster that was first studied by \citet{1997AJ....113.1788H} with near-infrared (NIR) data. Based on the optical and near-infrared color-magnitude diagrams and the disc fraction ($\sim$58 percent) of its stars, the age of the BFS54 cluster was determined to be $2-5$ Myr \citep{2018MNRAS.476.2813D}, and the masses of the YSOs are $0.1-2.0$ M$_{\odot}$ \citep{2015MNRAS.454.3597D}.
This paper is organized as follows. The survey is described in Section \ref{observation} and the results are presented in Section \ref{results}. We discuss our results in Section \ref{discussion} and present a summary in Section \ref{summary}.
\section{Observations and Data Reduction}\label{observation}
\subsection{PMO-13.7 m CO Data}
The sky region has been observed as part of the Milky Way Imaging Scroll Painting (MWISP \footnote{\url{http://www.radioast.nsdc.cn/mwisp.php}}) project which aims to survey molecular gas along the northern Galactic plane. The simultaneous observations of $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=1-0$ emission presented in this work were carried out from 2012 January to 2016 June using the PMO-13.7 m millimeter telescope at Delingha in China. A superconducting spectroscopic array receiver (SSAR) containing 3 $\times$ 3 beams was used as the front-end. The receiver is a two-sideband Superconductor-Insulator-Superconductor (SIS) mixer. A specific local oscillator (LO) frequency was carefully selected so that the upper sideband is centered at the $^{12}$CO $J=1-0$ line while the lower sideband covers the $^{13}$CO and C$^{18}$O $J=1-0$ lines \citep{2012ITTST...2..593S}. For each sideband, a Fast Fourier Transform Spectrometer (FFTS) containing 16384 channels with a bandwidth of 1 GHz was used as the back-end. The effective spectral resolution of each FFTS is 61.0 kHz, corresponding to a velocity resolution of 0.16 km s$^{-1}$ at the 115 GHz frequency of the $^{12}$CO $J=1-0$ line. The observations were conducted in the position-switched on-the-fly (OTF) mode with a scanning rate of 50$\arcsec$ per second and a dump time of 0.3 seconds. The survey area is split into cells of size 30$\arcmin$ $\times$ 30$\arcmin$. Each cell was scanned at least twice, once along the Galactic longitude and once along the Galactic latitude, to reduce scanning effects. The pointing of the telescope has an RMS accuracy of about 5$\arcsec$ and the beam widths are about 55\arcsec and 52\arcsec at 110 GHz and 115 GHz, respectively.
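The two instrument-parameter conversions used here can be checked directly (the channel width and beam efficiencies are quoted in the text; the $^{12}$CO rest frequency is the standard 115.271 GHz):

```python
# Channel width in velocity units: dv = c * dnu / nu.
c_kms = 299792.458            # speed of light (km/s)
dnu = 61.0e3                  # FFTS channel width (Hz)
nu = 115.271204e9             # 12CO J=1-0 rest frequency (Hz)
dv = c_kms * dnu / nu
print(f"velocity resolution at 115 GHz: {dv:.2f} km/s")   # 0.16 km/s

# Main-beam temperature from antenna temperature: T_mb = T_A / B_eff.
B_eff_115, B_eff_110 = 0.51, 0.56
T_A = 1.0                     # example antenna temperature (K)
print(f"T_mb at 115 GHz: {T_A / B_eff_115:.2f} K")        # 1.96 K
print(f"T_mb at 110 GHz: {T_A / B_eff_110:.2f} K")        # 1.79 K
```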
The data are processed using the CLASS package of the GILDAS \footnote{\url{http://www.iram.fr/IRAMFR/GILDAS}} software. The raw data are regridded and converted to FITS files. All FITS files related to the same survey cell are then combined to produce the final FITS data cubes. The spatial pixel of the FITS data cube has a size of 30$\arcsec$ $\times$ 30$\arcsec$. The antenna temperature ($T_A$) is converted to the main-beam temperature with the relation $T_{mb}$ = $T_A$/$B_{eff}$, where the beam efficiency $B_{eff}$ is 0.51 at 115 GHz and 0.56 at 110 GHz according to the status report of the PMO 13.7 m telescope. The calibration accuracy is estimated to be within 10$\%$. The typical system temperature during the observations is about 350 K for the upper sideband and 250 K for the lower sideband. The sensitivity of our observations is estimated to be around 0.5 K for the $^{12}$CO $J=1-0$ emission and around 0.3 K for the $^{13}$CO and C$^{18}$O $J=1-0$ emission. Throughout this paper, all velocities are given with respect to the local standard of rest (LSR).
\subsection{Archival Data}
The complementary infrared data used in this work were obtained from the Wide-field Infrared Survey Explorer ($WISE$) \citep{2010AJ....140.1868W}. $WISE$ covers the entire sky in four photometric bands: W1 (3.4 $\mu$m), W2 (4.6 $\mu$m), W3 (12 $\mu$m), and W4 (22 $\mu$m), at angular resolutions of 6.1$\arcsec$, 6.4$\arcsec$, 6.5$\arcsec$, and 12$\arcsec$, respectively. The 5$\sigma$ point-source sensitivities at the four bands are 0.08, 0.11, 1, and 6 mJy, respectively. The $WISE$ data were retrieved from the NASA/IPAC Infrared Science Archive (IRSA)\footnote{\url{http://irsa.ipac.caltech.edu/frontpage/}}. Data for the ionized emission used in this work are taken from the NVSS 1.4 GHz map \citep{1998AJ....115.1693C}. We also make use of data products from the Southern H$\alpha$ Sky Survey Atlas (SHASSA) \citep{2001PASP..113.1326G}, which consists of 2168 images covering 542 fields south of declination 15$^{\circ}$.
\section{Results}\label{results}
\subsection{Overall Distribution of Molecular Clouds in the Region}
Figure \ref{fig:avespec} shows the average spectra of the $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=1-0$ emission toward the sky region of Galactic longitude of 207.7$^{\circ}$ $< l <$ 211.7$^{\circ}$ and Galactic latitude of $-$4.5$^{\circ}$ $< b <$ 0$^{\circ}$. Among the three lines, the $^{12}$CO emission shows the highest brightness temperature while C$^{18}$O shows the lowest. As shown in Figure \ref{fig:avespec}, the $^{12}$CO average spectrum can be divided into three velocity components, i.e., $-$3 km s$^{-1}$ to 16.5 km s$^{-1}$ (first velocity component), 16.5 km s$^{-1}$ to 30 km s$^{-1}$ (second velocity component), and 30 km s$^{-1}$ to 55 km s$^{-1}$ (third velocity component). Most of the molecular clouds have velocities ranging from $-$3 km s$^{-1}$ to 30 km s$^{-1}$ (first and second velocity components). Most of the $^{13}$CO emission has velocities ranging from 5 km s$^{-1}$ to 16.5 km s$^{-1}$. Comparatively, the average C$^{18}$O spectrum of this region is weak. The inset in Figure \ref{fig:avespec} shows the average of the C$^{18}$O spectra. The C$^{18}$O emission is detected mainly in the eastern part of the Rosette Molecular Cloud in the velocity range from 10 to 16 km s$^{-1}$, but rarely in other parts of the surveyed area.
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth,angle=270]{spec_L2077_2117_B-045_0_class-eps-converted-to.pdf}
\caption{Average spectra of the 4.0$^{\circ}$ $\times$ 4.5$^{\circ}$ region with the blue, green, and red indicating the $^{12}$CO, $^{13}$CO and C$^{18}$O emission, respectively. The velocity range of the $^{12}$CO emission is roughly divided into three components ($-$3 to 16.5, 16.5 to 30, and 30 to 55 km s$^{-1}$), which are indicated with vertical dashed lines. The inset shows the average of the C$^{18}$O spectra in the 4.0$^{\circ}$ $\times$ 4.5$^{\circ}$ region that have at least three contiguous channels with C$^{18}$O emission above 3$\sigma$. The green line in the inset shows the 3$\sigma$ noise level of the average spectrum. From this average spectrum, it can be seen that the C$^{18}$O emission is detected in the velocity range from 10 to 16 km s$^{-1}$.}
\label{fig:avespec}
\end{figure}
The position-velocity map of $^{12}$CO emission along the Galactic longitude is displayed in Figure \ref{fig:lv_arm}. According to the Galactic rotation model A5 of \citet{2014ApJ...783..130R}, we calculated the relationship between distance and velocity for the spiral arms between $l=207.7\arcdeg$ and $l=211.7\arcdeg$. The corresponding spiral arms are displayed with dashed lines. As shown in Figure \ref{fig:lv_arm}, the molecular clouds in this region exhibit multiple velocity components, with corresponding kinematic distances ranging from 0.5 kpc to $\sim$7.0 kpc. Most of the molecular clouds with velocities from $-$3 km s$^{-1}$ to 16.5 km s$^{-1}$ (first velocity component) are located between the Local and the Perseus Arms. We can see that the molecular clouds of the first velocity component are distributed to the east of the Rosette molecular cloud (RMC) \citep{2018ApJS..238...10L}. The distance to the RMC has been estimated to be 1.4$-$1.7 kpc with stellar photometry \citep{1981PASJ...33..149O,2002AJ....123..892P} and 1.39 $\pm$ 0.1 kpc with optical spectroscopy \citep{2000A&A...358..553H}. As in \cite{2018ApJS..238...10L}, we adopt a distance of 1.4 kpc for the first velocity component in this work.
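The kinematic distances quoted in this work follow the full A5 fit of \citet{2014ApJ...783..130R}. As a rough cross-check, a flat rotation curve with the A5 values $R_0 = 8.34$ kpc and $\Theta_0 = 240$ km s$^{-1}$ already reproduces the outer-Galaxy distances to within several percent; the sketch below is this simplified calculation, not the full A5 model (which includes a small rotation-curve slope):

```python
import math

R0 = 8.34       # kpc, Galactocentric radius of the Sun (Reid et al. 2014, model A5)
THETA0 = 240.0  # km/s, circular rotation speed at R0 (model A5)

def kinematic_distance(v_lsr, l_deg, b_deg):
    """Outer-Galaxy kinematic distance for a flat rotation curve.

    Returns (R, d): Galactocentric radius and heliocentric distance, in kpc.
    """
    l = math.radians(l_deg)
    b = math.radians(b_deg)
    # For a flat curve: v_lsr = THETA0 * (R0/R - 1) * sin(l) * cos(b)
    R = R0 / (1.0 + v_lsr / (THETA0 * math.sin(l) * math.cos(b)))
    # In-plane projection x = d*cos(b) from R^2 = R0^2 + x^2 - 2*R0*x*cos(l);
    # the '+' root is the relevant one in the outer Galaxy (R > R0).
    x = R0 * math.cos(l) + math.sqrt(R**2 - (R0 * math.sin(l))**2)
    return R, x / math.cos(b)

# Example: a cloud at (l, b) = (210, -4.4) deg with v_lsr = 50 km/s
R, d = kinematic_distance(50.0, 210.0, -4.4)
```

For this example the flat-curve approximation gives $R \approx 14.3$ kpc and $d \approx 6.5$ kpc, comparable to the $\sim$7 kpc adopted for the high-velocity clumps discussed below.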
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{lv_arm-eps-converted-to.pdf}
\caption{The longitude-velocity map of $^{12}$CO ($J=1-0$) emission from 207.7$^{\circ}$ to 211.7$^{\circ}$. The dashed lines indicate the spiral arms derived from the rotation model A5 of \citet{2014ApJ...783..130R}. The overlaid $^{12}$CO contours start at 0.18 K and increase in steps of 0.18 K.}
\label{fig:lv_arm}
\end{figure}
Molecular clouds of the second velocity component are located on the Perseus Arm. Three known \ion{H}{2} regions (Sh2-280, Sh2-282, and BFS54) are associated with molecular clouds of the second velocity component (Figure \ref{fig:inter}). From spectrophotometric observations, the distance of the cluster in BFS54 is 1.65 kpc \citep{2018MNRAS.476.2813D}. \citet{2006A&A...445L..43C} studied the Sh2-282 \ion{H}{2} region and suggested that this \ion{H}{2} region is photo-ionized by the O-type star HD 47432. From its $V$ magnitude, $(B-V)$ color, and spectral type, the star HD 47432 is estimated to be at a distance of $\sim$1.25 kpc \citep{2006A&A...445L..43C}, similar to the results of \citet{1981A&A...100...28F} and \citet{1982ApJS...49..183B}. However, the parallax of HD 47432 from Gaia data release 2 (DR2) \citep{2018AJ....156...58B,2018A&A...616A...1G} is 0.38 milliarcseconds, which corresponds to 2.6 kpc. The \ion{H}{2} region Sh2-280 is associated with the O-type star HD 46573 (see Figures \ref{fig:S280}, \ref{fig:Kinematics_S280} and Section \ref{Kinematics}), which has a parallax of 0.64 milliarcseconds (1.6 kpc). Taking all these available distance measurements together, we adopt the parallax distance of HD 46573 (1.6 kpc) for the molecular clouds of the second velocity component. For molecular clouds with velocities higher than 30 km s$^{-1}$ (third velocity component), we adopt kinematic distances from the rotation model A5 of \citet{2014ApJ...783..130R}. These kinematic distances range from 3.0 to 7.0 kpc, indicating that the clouds of the third velocity component are located between the Perseus and the Outer (Cygnus) Arms.
\begin{figure}[h]
\centering
\includegraphics[width=0.32\textwidth]{guilemap_-3_16-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_16_30-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_30_55-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_-3_16_L-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_16_30_L-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_30_55_L-eps-converted-to.pdf}
\caption{$^{12}$CO and $^{13}$CO $J = 1-0$ integrated intensity maps of molecular clouds in the region. The panels in the upper row show the $^{12}$CO emission and those in the lower row show the $^{13}$CO emission. Left: molecular clouds of the first velocity component. Middle: molecular clouds of the second velocity component. Right: molecular clouds of the third velocity component. The red, yellow, and cyan circles indicate the locations of the known, candidate, and radio quiet \ion{H}{2} regions, respectively. The circle sizes approximate the radii of the \ion{H}{2} regions from \citet{2014ApJS..212....1A}. The blue pentagrams and crosses indicate the O and B0 stars from the SIMBAD database, respectively.}
\label{fig:inter}
\end{figure}
The $^{12}$CO and $^{13}$CO $J = 1-0$ integrated intensity maps of the molecular clouds of the three velocity components are displayed in Figure \ref{fig:inter} and the velocity channel map of $^{12}$CO emission of the region is displayed in Figure \ref{channel} in the Appendix. The majority of the molecular clouds of the first velocity component belong to the eastern part of the RMC. The other molecular clouds of the first velocity component exhibit diffuse morphologies. Weak $^{12}$CO emission is associated with the known \ion{H}{2} regions Sh2-280 and Sh2-282 and the candidate \ion{H}{2} regions G208.506-02.304, G209.208-00.127, and G209.971-00.698. The molecular clouds of the first velocity component show little $^{13}$CO emission except in the eastern part of the RMC.
The molecular clouds of the second velocity component clearly exhibit filamentary structures. Most of the clouds are located in the Sh2-280, Sh2-282, BFS54, and the southeastern region. The $^{12}$CO emission shows the highest brightness in the BFS54 \ion{H}{2} region. Elephant trunk and cometary structures are seen in the molecular clouds within the Sh2-282 \ion{H}{2} region. These morphologies are frequently found in \ion{H}{2} regions \citep{2006A&A...454..201G,2017A&A...605A..82M}, and are predicted by the radiation driven implosion (RDI) simulations \citep{1989ApJ...346..735B,2011ApJ...736..142B}. As shown in the $^{12}$CO and $^{13}$CO emission maps (Figure \ref{fig:inter}), the stellar winds and radiation from the massive stars in the BFS54 and Sh2-280 \ion{H}{2} regions have apparently destroyed molecular clouds within the central parts of these \ion{H}{2} regions and have excavated a circle-like (BFS54) or semicircle-like (Sh2-280) cavity, similar to the S287 \citep{2016A&A...588A.104G} and the N4 \citep{2017ApJ...838...80C} regions. As shown in Figure \ref{fig:inter}, the morphology of the molecular clouds of the third velocity component is more fragmentary and clumpy as compared to those of the first and second velocity components, which may be caused by the farther distances to the clouds of the third velocity component (3.0$-$7.0 kpc).
\begin{figure}[h]
\centering
\includegraphics[width=0.32\textwidth]{guilemap_-3_16_m1_U-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_16_30_m1_U-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{guilemap_30_55_m1_U-eps-converted-to.pdf}
\caption{First moment maps of $^{12}$CO emission for molecular clouds of each velocity component. The dashed lines in the middle panel indicate the four main filaments in the second velocity component. The blue squares near the bottom in the right panel indicate the locations of the four clumps (shown in Figure \ref{fig:clump}) that are far below the Galactic mid-plane. The red, yellow, and cyan circles indicate the locations of the known, candidate, and radio quiet \ion{H}{2} regions. The blue pentagrams and crosses mark the O and B0 stars in this region from the SIMBAD database, respectively.}
\label{fig:m1}
\end{figure}
Figure \ref{fig:m1} shows the $^{12}$CO emission intensity weighted centroid velocity distributions of the molecular clouds. As shown in Figure \ref{fig:m1}, most of the molecular clouds of the first velocity component have velocities in the range from 5 km s$^{-1}$ to 13 km s$^{-1}$. Molecular clouds in the region of Galactic longitude of $208.5^{\circ} < l < 209.7^{\circ}$ and Galactic latitude of $-2.0^{\circ} < b < -1.0^{\circ}$ have significantly lower velocities compared to other molecular clouds (see the left panel of Figure \ref{fig:m1}). The molecular cloud within the G208.506-02.304 \ion{H}{2} region is located near the eastern part of the Rosette molecular cloud, but with a velocity of about 6 km s$^{-1}$, which is 5 km s$^{-1}$ lower than the velocity of the RMC. For molecular clouds of the second velocity component, a velocity gradient can be seen in the direction along the Galactic longitude, with the western molecular clouds possessing larger velocities ($20-27$ km s$^{-1}$) than the eastern clouds ($16.5-20$ km s$^{-1}$). Moreover, many filaments exhibit coherent and narrow velocity distributions; these filaments are thus coherent structures in both the spatial and velocity dimensions, rather than chance projections of individual clumps at different velocities. Four main filaments (filament 1$-$4) are marked with dashed lines in the middle panel of Figure \ref{fig:m1}. The molecular clouds of the third velocity component have velocities higher than 30 km s$^{-1}$ and their kinematic distances are larger than 3.0 kpc. As shown in Figure \ref{fig:m1}, these molecular clouds as a whole exhibit a velocity gradient in the direction from the north (35$-$40 km s$^{-1}$) to the south (40$-$50 km s$^{-1}$). According to the rotation model A5 of \citet{2014ApJ...783..130R}, the distances of the northern molecular clouds are around 4 kpc, while the southern molecular clouds are located farther away ($\sim$5.5 kpc).
In the datacube, we also identified four molecular clumps with both low Galactic latitudes and high velocities (Figure \ref{fig:clump}), which implies that these clumps are located far away from the Galactic mid-plane. The locations of the four clumps are displayed with blue squares in the right panel of Figure \ref{fig:m1}. According to the rotation model A5 of \citet{2014ApJ...783..130R}, the kinematic distances of the four clumps are calculated to be from 6.8 to 7.3 kpc. Combining the Galactic longitudes/latitudes of the clumps with the distance of the Sun from the Galactic center (8.34 kpc), the distances of the four clumps from the Galactic center are calculated to be $\sim$14 kpc. Taking the offset angle and the distance of the Sun above the Galactic physical mid-plane to be 0.072$^{\circ}$ and 17.1 pc \citep{Su_2016}, respectively, the distances of these clumps from the Galactic mid-plane are calculated to be $-$525 to $-$508 pc. The FWHM thickness of the Galactic molecular gas disk at a Galactocentric radius of $\sim$14 kpc is about 180$-$440 pc \citep{1990A&A...230...21W,Digel_1991,2015ARA&A..53..583H}, i.e., the $\sigma$ thickness of the Galactic molecular gas disk at the Galactocentric distance of 14 kpc is about 80$-$190 pc. Therefore, the four clumps lie well below the Galactic mid-plane. Owing to its high abundance and low excitation energy and critical density, $^{12}$CO is frequently employed to measure molecular gas masses via the relationship between the observed CO integrated intensity and the column density of molecular hydrogen. Taking the ratio between the column density of molecular hydrogen and the CO integrated intensity ($X$ factor) to be 2.0 $\times$ 10$^{20}$ H$_{2}$ cm$^{-2}$ (K km s$^{-1}$)$^{-1}$ \citep{doi:10.1146/annurev-astro-082812-140944} and applying the fluxes of the clumps (Table \ref{tab_clump}), the masses of the clumps are calculated to be $37-120$ M$_{\odot}$.
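The Galactocentric distances, mid-plane offsets, and $X$-factor masses in Table \ref{tab_clump} can be approximately reproduced with the short calculation below. The 30$\arcsec$ pixel size, the interpretation of the tabulated flux as a sum over pixels, and the neglect of the 0.072$^{\circ}$ tilt of the physical plane are our simplifying assumptions, so the results agree with the table only to within a few percent:

```python
import math

R0 = 8.34          # kpc, Sun-Galactic-center distance
Z_SUN = 17.1       # pc, height of the Sun above the physical mid-plane
X_CO = 2.0e20      # cm^-2 (K km/s)^-1, CO-to-H2 conversion factor
PC_CM = 3.0857e18  # cm per parsec
M_H2 = 3.347e-24   # g, mass of an H2 molecule
MU = 1.36          # helium correction to the mean molecular mass
M_SUN = 1.989e33   # g

def galactocentric_radius(l_deg, b_deg, d_kpc):
    """Distance from the Galactic center (kpc) via the law of cosines."""
    x = d_kpc * math.cos(math.radians(b_deg))   # in-plane projection
    return math.sqrt(R0**2 + x**2 - 2.0 * R0 * x * math.cos(math.radians(l_deg)))

def height_from_plane(b_deg, d_kpc):
    """Approximate height (pc) relative to the physical mid-plane.

    The 0.072 deg tilt of the physical plane is neglected here, so the
    result differs from Table 1 by ~10 pc.
    """
    return d_kpc * 1e3 * math.sin(math.radians(b_deg)) + Z_SUN

def xfactor_mass(flux_K_kms, d_kpc, pixel_arcsec=30.0):
    """Cloud mass (Msun), assuming the flux is summed over square pixels."""
    pix_cm = d_kpc * 1e3 * PC_CM * math.radians(pixel_arcsec / 3600.0)
    n_h2 = X_CO * flux_K_kms * pix_cm**2        # total number of H2 molecules
    return n_h2 * MU * M_H2 / M_SUN

# Clump 1 (MWISP G208.445-4.380): l, b, d, and flux from Table 1
R_gc = galactocentric_radius(208.445, -4.380, 7.0)
z = height_from_plane(-4.380, 7.0)
mass = xfactor_mass(17.0, 7.0)
```

For Clump 1 this gives $R_{\rm GC} \approx 14.9$ kpc, $z \approx -518$ pc, and $M \approx 77$ M$_{\odot}$, against the tabulated 14.5 kpc, $-$509 pc, and 78 M$_{\odot}$.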
\begin{figure}[h]
\centering%
\includegraphics[width=0.23\textwidth]{clump1-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{clump2-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{clump3-eps-converted-to.pdf}
\includegraphics[width=0.23\textwidth]{clump4-eps-converted-to.pdf}\\
\includegraphics[width=0.21\textwidth]{clump1_spec-eps-converted-to.pdf}\hspace{1.3em}%
\includegraphics[width=0.21\textwidth]{clump2_spec-eps-converted-to.pdf}\hspace{1.3em}%
\includegraphics[width=0.21\textwidth]{clump3_spec-eps-converted-to.pdf}\hspace{1.3em}%
\includegraphics[width=0.21\textwidth]{clump4_spec-eps-converted-to.pdf}\hspace{1.3em}%
\caption{Top: $^{12}$CO $J = 1-0$ integrated intensity maps of the four clumps located far below the Galactic mid-plane. The black circles indicate the positions of the four clumps. Bottom: average spectra of the 1.5$\arcmin$ $\times$ 1.5$\arcmin$ region of the four clumps with blue and green indicating the $^{12}$CO and $^{13}$CO emission, respectively.}
\label{fig:clump}
\end{figure}
\begin{deluxetable}{rccccccccccc}
\decimals
\tabletypesize{\footnotesize}
\tablewidth{0pt}
\tablenum{1}
\tablecaption{Clumps far away from the Galactic physical mid-plane\label{tab_clump}}
\tablehead{
\colhead{ID} & \colhead{Name} & \colhead{l} & \colhead{b} & \colhead{$\rm{V_{lsr}}$} &\colhead{D} & \colhead{T$_{ex}$} & \colhead{Flux}& \colhead{Mass} & \colhead{D$_{GC}$}& \colhead{D$_z$}\\[1pt]
\colhead{ } & \colhead{ } & \colhead{($\arcdeg$)} & \colhead{($\arcdeg$)} & \colhead{(km s$^{-1}$)} & \colhead{(kpc)} & \colhead{(K)}& \colhead{(K km s$^{-1}$)} & \colhead{(M$_{\odot}$)} &\colhead{(kpc)}&\colhead{(pc)} }
\startdata
Clump1 & MWISP G208.445-4.380 & 208.445 & -4.380 & 48.8 & 7.0 & 4.5 & 17.0 & 78 & 14.5& -509 \\
Clump2 & MWISP G208.489-4.330 & 208.489 & -4.330 & 50.3 & 7.3 & 4.4 & 13.6 & 68 & 14.8& -525 \\
Clump3 & MWISP G208.587-4.287 & 208.587 & -4.287 & 50.5 & 7.3 & 4.2 & 7.4 & 37 & 14.7& -519 \\
Clump4 & MWISP G209.533-4.505 & 209.533 & -4.505 & 49.8 & 6.8 & 4.9 & 27.6 & 120 & 14.3& -508 \\
\enddata
\tablecomments{Columns 3-5 give the position centroids of the clumps in the PPV space. Column 6 gives the kinematic distance derived from model A5 in \cite{2014ApJ...783..130R}. The excitation temperature, integrated intensity, and mass derived from $^{12}$CO are listed in columns 7-9. Column 10 gives the distances of the clumps from the Galactic center. Column 11 gives the vertical distances of the clumps from the Galactic physical mid-plane.}
\end{deluxetable}
\subsection{Physical Properties of Molecular Clouds in the Region}\label{Physical properties}
Assuming that the molecular clouds are in local thermodynamic equilibrium (LTE), the masses of the molecular clouds can be calculated from the measured brightness of the $^{12}$CO, $^{13}$CO, and C$^{18}$O $J=1-0$ emission. Taking the filling factor of the $^{12}$CO $J=1-0$ emission to be unity and assuming that the emission is optically thick, the excitation temperature in units of Kelvin can be calculated according to the following formula \citep{1998AJ....116..336N}
\begin{equation}
T_{ex} = \frac{5.53}{\ln\left(1+\frac{5.53}{T_{mb}(^{12}{\rm CO})+0.819}\right)},
\end{equation}
where $T_{mb}$ is the peak main-beam brightness temperature of the $^{12}$CO emission. The excitation temperatures derived from the above equation for the four clumps far away from the Galactic physical mid-plane are listed in Table \ref{tab_clump}.
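Equation (1) is straightforward to evaluate; for example, a peak $T_{mb}$ of about 1.5 K, typical of the faint clumps in Table \ref{tab_clump}, corresponds to $T_{ex} \approx 4.5$ K:

```python
import math

def t_ex(t_mb_peak):
    """Excitation temperature (K) from the peak 12CO main-beam temperature,
    assuming optically thick emission and a unity beam filling factor
    (Equation 1; Nagahama et al. 1998)."""
    return 5.53 / math.log(1.0 + 5.53 / (t_mb_peak + 0.819))

print(t_ex(1.47))  # peak T_mb of Clump 1, yielding T_ex of about 4.5 K
```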
\begin{figure}[h]
\centering%
\includegraphics[width=0.32\textwidth]{tex_-3_16_white-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{tex_16_30_white-eps-converted-to.pdf}
\includegraphics[width=0.32\textwidth]{tex_30_55_white-eps-converted-to.pdf}
\caption{Distributions of excitation temperature of molecular clouds of the three velocity components in the region. The red, yellow, and cyan circles indicate the locations of the known, candidate, and radio quiet \ion{H}{2} regions. The blue pentagrams and crosses indicate the O and B0 stars in this region from the SIMBAD database, respectively.}
\label{fig:tex_s282}
\end{figure}
The spatial distributions of the excitation temperature of the molecular clouds for each velocity component are displayed in Figure \ref{fig:tex_s282} and the histograms are presented in Figure \ref{fig:tex_123}. For molecular clouds of the first velocity component, the eastern part of the Rosette molecular cloud possesses a relatively high temperature of around 12 K. According to the map of the 21 cm radio continuum emission from \citet{1997A&AS..126..413R}, the ionization front of the Rosette \ion{H}{2} region is far away from the eastern part of the RMC. The circle-like cavity of the Rosette molecular cloud shows that the shock front just reaches the Extended Ridge and Monoceros Ridge of the RMC \citep{2018ApJS..238...10L}. The relatively high temperature of the eastern part of the RMC may be due to the local evolution of the molecular clouds rather than feedback from the OB cluster of the RMC \ion{H}{2} region. The molecular cloud within G208.506-02.304 possesses the highest temperature (15 K), indicating that this cloud has been influenced by the G208.506-02.304 \ion{H}{2} region. Except for the eastern part of the RMC and the molecular cloud associated with the G208.506-02.304 \ion{H}{2} region, most of the other molecular clouds of the first velocity component have temperatures around 6 K.
Most of the molecular clouds of the second velocity component possess excitation temperatures ranging from 8 to 15 K. The highest temperature (33 K) occurs in the BFS54 region, indicating that the molecular clouds within the BFS54 \ion{H}{2} region have been heated by it.
Most of the molecular clouds of the third velocity component possess temperatures ranging from 7 to 15 K, with a typical value of about 9 K. Considering that these clouds lie at distances of 3.0$-$7.0 kpc, the derived temperatures are heavily affected by the beam filling factor. Therefore, the excitation temperatures of the molecular clouds of the third velocity component are likely to be strongly underestimated.
\begin{figure}[h]
\centering%
\includegraphics[width=0.6\textwidth,angle=270]{tex_123-eps-converted-to.pdf}
\caption{Probability distributions of the excitation temperatures of molecular clouds of the three velocity components. Only the spatial pixels with the integrated intensity of $^{12}$CO emission above 5$\sigma$ are used in this statistics.}
\label{fig:tex_123}
\end{figure}
As shown in Figure \ref{fig:tex_123}, the excitation temperatures of molecular clouds in the region are low as a whole. The peak probability excitation temperatures are at $5-6$ K. Above an excitation temperature of 9 K, the first velocity component contains the largest number of molecular clouds, most of which belong to the eastern part of the RMC. The second velocity component has the fewest clouds with high temperatures. However, the molecular cloud within the BFS54 \ion{H}{2} region, which belongs to the second velocity component, possesses the highest temperature (33 K) among the three velocity components.
Using the method in \citet{2018ApJS..238...10L} and assuming that the molecular clouds of the first velocity component are located at the same distance as the RMC (1.4 kpc), the total mass of molecular clouds of the first velocity component is calculated to be 3.8 $\times$ 10$^{4}$ M$_{\odot}$ with the $X$ factor method, whereas it is 8.5 $\times$ 10$^{3}$ M$_{\odot}$ from $^{13}$CO and 4.2 $\times$ 10$^{2}$ M$_{\odot}$ from C$^{18}$O with the LTE approach. Most of the mass of the first velocity component is contained in the eastern part of the RMC, which amounts to 3.1 $\times$ 10$^{4}$, 7.7 $\times$ 10$^{3}$, and 2.1 $\times$ 10$^{2}$ M$_{\odot}$ from $^{12}$CO, $^{13}$CO, and C$^{18}$O, respectively. Taking the distance of molecular clouds of the second velocity component to be 1.6 kpc, the total mass of the molecular clouds is calculated to be 2.1 $\times$ 10$^{4}$ M$_{\odot}$ ($^{12}$CO), 2.2 $\times$ 10$^{3}$ M$_{\odot}$ ($^{13}$CO), and 2.5 $\times$ 10$^{2}$ M$_{\odot}$ (C$^{18}$O), respectively. Considering that the distances of the molecular clouds of the third velocity component vary greatly (3.0$-$7.0 kpc), the total mass of clouds of the third velocity component is not calculated. Owing to its higher abundance, $^{12}$CO can trace diffuse clouds, whereas the $^{13}$CO and C$^{18}$O emission is detectable only in relatively dense clouds. Therefore, the masses from $^{12}$CO are larger than those from $^{13}$CO and C$^{18}$O. We note that although the surveyed region contains less molecular mass than neighboring massive star forming regions such as Monoceros OB1 (1.3 $\times$ 10$^{5}$ M$_{\odot}$ from $^{12}$CO; \citealt{1996A&A...315..578O}) and the RMC (2.0 $\times$ 10$^{5}$ M$_{\odot}$ from $^{12}$CO; \citealt{2018ApJS..238...10L}), its number of \ion{H}{2} regions/candidates (10) is similar to those of Monoceros OB1 (11) and the RMC (10) \citep{2014ApJS..212....1A}.
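The LTE masses above are computed with the method of \citet{2018ApJS..238...10L}. For illustration, the sketch below evaluates the $^{13}$CO optical depth and LTE column density with a standard $J=1-0$ prescription ($h\nu/k = 5.29$ K and $hB/3k = 0.88$ K for $^{13}$CO); the line parameters and the assumed abundance ratio [H$_2$/$^{13}$CO] $= 7\times10^{5}$ are illustrative only and need not match the exact values adopted in this work:

```python
import math

T0 = 5.29    # K, h*nu/k for 13CO (J=1-0)
T_BG = 2.7   # K, cosmic microwave background temperature

def j_nu(temp):
    """Planck-corrected radiation temperature (K) at the 13CO frequency."""
    return T0 / (math.exp(T0 / temp) - 1.0)

def tau_13co(t_peak, t_ex):
    """Peak optical depth of the 13CO (1-0) line from radiative transfer."""
    return -math.log(1.0 - t_peak / (j_nu(t_ex) - j_nu(T_BG)))

def n_13co(t_peak, t_ex, fwhm):
    """13CO column density (cm^-2) under LTE for a Gaussian line profile."""
    tau = tau_13co(t_peak, t_ex)
    int_tau = 1.0645 * tau * fwhm    # integral of tau over velocity (km/s)
    return 2.42e14 * (t_ex + 0.88) * int_tau / (1.0 - math.exp(-T0 / t_ex))

# Illustrative values: T_peak(13CO) = 2 K, T_ex = 10 K, FWHM = 2 km/s
N13 = n_13co(2.0, 10.0, 2.0)
NH2 = N13 * 7e5   # assumed [H2/13CO] abundance ratio (illustrative)
```

With these inputs, $N(^{13}{\rm CO}) \approx 5\times10^{15}$ cm$^{-2}$ and $N({\rm H_2}) \approx 3\times10^{21}$ cm$^{-2}$, of the same order as the column densities implied by the $X$-factor method for comparable $^{12}$CO intensities.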
\subsection{Molecular Clouds Associated with the \ion{H}{2} Regions/Candidates}\label{sec:HII}
Ten \ion{H}{2} regions/candidates are located in the region, of which four \ion{H}{2} regions (Sh2-280, Sh2-282, Sh2-283, and BFS54) were identified by \citet{1953ApJ...118..362S,1959ApJS} and \citet{1982ApJS...49..183B}. By visual and automatic searches for bubble morphologies in the WISE 12 $\micron$ and 22 $\micron$ images, \citet{2014ApJS..212....1A} identified six \ion{H}{2} regions or \ion{H}{2} region candidates (G208.506-02.304, G209.208-00.127, G209.971-00.698, G210.009-00.292, G210.229-01.525, and G211.149-01.004). \citet{2014ApJS..212....1A} also reported the angular radii of the ten \ion{H}{2} regions/candidates on the basis of their visual check of the morphologies of the \ion{H}{2} regions/candidates in the MIR images. In this work we investigate whether there are any molecular clouds associated with these known or candidate \ion{H}{2} regions. For this purpose, we extract the mean spectra of $^{12}$CO, $^{13}$CO, and C$^{18}$O within the infrared bubble radii of the ten \ion{H}{2} regions/candidates. These spectra, ordered by the Galactic longitude of the \ion{H}{2} regions/candidates, are presented in Figure \ref{fig:spec_HII}. A single velocity component is present in the spectra of the \ion{H}{2} regions/candidates BFS54, G208.506-02.304, G210.009-00.292, and G211.149-01.004, while multiple velocity components exist in the other six known or candidate \ion{H}{2} regions. Radio recombination line (RRL) measurements are available for the four \ion{H}{2} regions Sh2-280, Sh2-282, Sh2-283, and BFS54. The velocities of the ionized gas from the RRLs \citep{2014ApJS..212....1A} are different from the velocities of the molecular clouds. The velocity offsets are 13 km s$^{-1}$ (Sh2-280), 3 km s$^{-1}$ (Sh2-282), 13 or 70 km s$^{-1}$ (Sh2-283), and 14 km s$^{-1}$ (BFS54), respectively. \citet{2009ApJS..181..255A} calculated the velocity offsets between the RRL and the $^{13}$CO emission line for 301 \ion{H}{2} regions.
They found that the velocity offsets are smaller than 5 km s$^{-1}$ for most of the \ion{H}{2} regions. In our case, the velocity offsets are significantly larger than this value in three \ion{H}{2} regions (Sh2-280, Sh2-283, and BFS54).
\begin{figure}[h]
\centering
{\includegraphics[width=0.43\textwidth]{channel_spec2-eps-converted-to.pdf}}
\caption{Average spectra of the \ion{H}{2} regions/candidates with the blue, green, and red indicating the $^{12}$CO, $^{13}$CO and C$^{18}$O emission, respectively. The purple dashed line represents the velocity of the radio recombination line (RRL).}
\label{fig:spec_HII}
\end{figure}
The distributions of the molecular clouds in the \ion{H}{2} regions/candidates are presented in Figures \ref{fig:G2085-023}$-$\ref{fig:BFS54} in the Appendix. It can be seen that the 12 $\micron$ emission in G208.506-02.304 exhibits a typical bubble structure (Figure \ref{fig:G2085-023}). The molecular cloud associated with G208.506-02.304 has a similar morphology to the dust emission at 12 and 22 $\micron$. From the center of the \ion{H}{2} region to the outside, the velocity of the molecular cloud gradually increases, which may imply the influence of the \ion{H}{2} region on the molecular cloud. The molecular cloud associated with Sh2-280 consists of a giant filamentary structure surrounding the \ion{H}{2} region (Figure \ref{fig:S280}), which is similar to the case of N131 studied by \citet{2016A&A...585A.117Z}. These authors suggest that the massive star in the N131 bubble was born from the disruption of the gas filament. Two compact sources of 1.4 GHz continuum emission are detected by the NRAO VLA Sky Survey (NVSS) within the infrared bubble. However, the 1.4 GHz continuum emission does not cover the central O-type star HD 46573. This may be caused by the poor (u, v) coverage in the snapshots of the NVSS observations, which results in the NVSS images being insensitive to smooth radio emission structures larger than several arcminutes \citep{1998AJ....115.1693C}. For G209.208-00.127, two molecular clouds with distinct velocities are located within the radius of the \ion{H}{2} region/candidate (Figure \ref{fig:G2092-001}). We designate the lower velocity component as the 'near' cloud and the higher velocity component as the 'far' cloud. The 'near' and 'far' clouds have a velocity difference of about 20 km s$^{-1}$. The G209.208-00.127\_far cloud has a similar morphology to the mid-infrared emission and shows a higher excitation temperature than the G209.208-00.127\_near cloud.
Therefore, G209.208-00.127\_far is more likely to be associated with the \ion{H}{2} region than the 'near' cloud. However, both the 'far' and the 'near' clouds are offset from the center of G209.208-00.127. It is also possible that both of the clouds are interacting with G209.208-00.127, with the 'far' cloud being pushed away from the Sun and the 'near' cloud being pushed toward the Sun. Indeed, the 'far' cloud exhibits an increase of velocity from the center of G209.208-00.127 to the outside. Similar to G209.208-00.127, G209.971-00.698 is spatially associated with two clouds of distinct velocities (Figure \ref{fig:G2099-006}). The 'far' cloud has a velocity about 20 km s$^{-1}$ larger than that of the 'near' cloud. Both the 'near' and 'far' clouds are filamentary. The 'near' cloud is oriented in the northeast-southwest direction, while the 'far' cloud is oriented roughly east-west within the radius of G209.971-00.698 and northwest-southeast outside of it. Within the radius of G209.971-00.698, the 'far' cloud is similar to the MIR emission in morphology and also shows a higher excitation temperature than the 'near' cloud. These facts suggest that the 'far' cloud is more likely to be associated with G209.971-00.698 than the 'near' cloud. As both clouds have sizes much larger than the radius of G209.971-00.698 and each cloud has its own coherent velocity, it is unlikely that the clouds are both accelerated by G209.971-00.698. The molecular cloud in the G210.009-00.292 region is mainly confined within the radius of G210.009-00.292 and exhibits a clumpy morphology (Figure \ref{fig:G2100-002}). The molecular cloud matches well the distribution of heated dust grains as traced by the MIR 22 $\micron$ emission. The molecular clouds in the Sh2-282 region exhibit elephant trunk and cometary structures facing the center (Figure \ref{fig:S282}).
\citet{2006A&A...445L..43C} found several bright-rimmed structures in the Sh2-282\_far cloud facing HD 47432, which supports that the Sh2-282\_far cloud is interacting with the Sh2-282 \ion{H}{2} region. For G210.229-01.525, as in the case of G209.208-00.127, both the 'far' and the 'near' clouds could be associated with the \ion{H}{2} region, considering that the 'near' cloud is located to the northeast and the 'far' cloud roughly to the southwest of G210.229-01.525 (Figure \ref{fig:G2102-015}). The molecular cloud in the Sh2-283 region exhibits a filamentary morphology and coincides well with the distribution of heated dust as traced by the 12 and 22 $\micron$ emission (Figure \ref{fig:S283}). Therefore, the molecular cloud should be associated with the Sh2-283 \ion{H}{2} region. From Figure \ref{fig:G2111-010}, it can be seen that the molecular cloud in the G211.149-01.004 region exhibits a filamentary morphology. The velocity of this molecular filament increases from the center of G211.149-01.004 to the outside. Weak radio continuum and H$\alpha$ emission are present within the radius of the MIR bubble. According to \citet{2014ApJS..212....1A}, bubbles that have measured RRL or H$\alpha$ emission are classified as true \ion{H}{2} regions. Thus we adjust G211.149-01.004 from a candidate to a known \ion{H}{2} region in Table \ref{tab_HII}. In the BFS54 \ion{H}{2} region (Figure \ref{fig:BFS54}), the molecular cloud shows a central cavity that coincides not only with the MIR 12 and 22 $\micron$ emission but also with the radio continuum and H$\alpha$ emission, suggesting that the molecular cloud is being excavated by the BFS54 \ion{H}{2} region. The part of the molecular cloud that surrounds the cavity also possesses an unusually high temperature of around 30 K, which further supports that the BFS54 \ion{H}{2} region has a strong influence on its surrounding molecular cloud.
\startlongtable
\begin{deluxetable}{ccccccccccccccc}
\tabletypesize{\footnotesize}
\tablewidth{10pt}
\tablenum{2}
\tablecaption{Properties of molecular clouds in the regions of \ion{H}{2} regions/candidates}
\label{tab_HII}
\tablehead{
\colhead{Name} & \colhead{Category}& \colhead{Glon} & \colhead{Glat} & \colhead{V$_{lsr}$}
&\colhead{Distance} & \colhead{Linear Radius} & \colhead{T$_{ex}$} & \colhead{Mass} & \colhead{$\Delta$v} \\[1pt]
\colhead{ }&\colhead{ }& \colhead{(deg)} &
\colhead{(deg)} & \colhead{(km s$^{-1}$)}& \colhead{
(kpc)}& \colhead{(pc)} & \colhead{(K)}& \colhead{(M$_{\odot}$)} & \colhead{(km s$^{-1}$)}}
\startdata
\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ }&\colhead{1st velocity component} \\[1pt]
\hline
G208.506-02.304 &Q & 208.506 & -2.304 & 5.9 & 0.6 & 0.7 & 9.2 & 2.6 $\times$ 10$^1$ & 1.0 \\
\hline
\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ }&\colhead{2nd velocity component} \\[1pt]
\hline
Sh2-280 &K & 208.741 & -2.633 & 24.2 & 2.6 & 7.7 & 7.0 & 1.0 $\times$ 10$^4$ & 1.5 \\
G209.208-00.127 &Q & 209.208 & -0.127 & 29.7 & 3.3 & 1.2 & 6.8 & 1.3 $\times$ 10$^2$ & 1.1 \\
Sh2-282 &K & 210.187 & -2.168 & 22.1 & 2.2 & 19.1 & 6.4 & 2.0 $\times$ 10$^4$ & 1.7 \\
BFS54 &K & 211.245 & -0.424 & 21.4 & 2.1 & 3.9 & 12.4 & 3.0 $\times$ 10$^3$ & 1.2 \\
\hline
\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ }&\colhead{3rd velocity component} \\[1pt]
\hline
G209.971-00.698 &Q & 209.971 & -0.698 & 35.0 & 4.0 & 2.5 & 8.5 & 1.1 $\times$ 10$^3$ & 1.1 \\
G210.009-00.292 &Q & 210.009 & -0.292 & 32.2 & 3.6 & 2.0 & 8.7 & 4.5 $\times$ 10$^2$ & 1.3 \\
G210.229-01.525 &Q & 210.229 & -1.525 & 40.5 & 4.8 & 6.1 & 8.3 & 3.6 $\times$ 10$^3$ & 1.4 \\
Sh2-283 &K & 210.788 & -2.545 & 49.6 & 6.5 & 3.1 & 7.8 & 2.9 $\times$ 10$^3$ & 2.3 \\
G211.149-01.004 &K & 211.149 & -1.004 & 37.1 & 4.2 & 3.9 & 7.6 & 4.9 $\times$ 10$^2$ & 1.1 \\\hline
\enddata
\tablecomments{Column 2 gives the category of the \ion{H}{2} regions/candidates, with K and Q indicating the known and radio quiet category, respectively. Column 6 gives the kinematic distance derived from model A5 in \cite{2014ApJ...783..130R}. Columns 9$-$10 give the mass and line width derived from $^{12}$CO. The physical properties are extracted within regions of two times the infrared bubble radii from \citet{2014ApJS..212....1A} because molecular clouds are generally larger than the infrared bubbles.}
\end{deluxetable}
By combining the data of dust, ionized gas, and molecular gas, we identified the molecular clouds associated with the ten \ion{H}{2} regions/candidates. The three velocity components contain one (G208.506-02.304), four (Sh2-280, G209.208-00.127, Sh2-282, and BFS54), and five (G209.971-00.698, G210.009-00.292, G210.229-01.525, Sh2-283, and G211.149-01.004) \ion{H}{2} regions/candidates, respectively. We derived the physical properties for each \ion{H}{2} region/candidate and list the results in Table \ref{tab_HII}. The masses of the \ion{H}{2} regions/candidates derived from the $X$ factor range from 26 to 2$\times 10^4$ M$_{\odot}$. Combining the angular radii from the MIR emission \citep{2014ApJS..212....1A} with the kinematic distances from the $^{12}$CO molecular line, we derived linear radii for the ten \ion{H}{2} regions/candidates ranging from 0.7 pc to 19.1 pc. The mean temperatures of the molecular clouds associated with the ten \ion{H}{2} regions/candidates are 6$-$12 K, higher than those of the other molecular clouds in the surveyed area.
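The angular-to-linear radius conversion used above follows the small-angle approximation. A minimal sketch (Python; the angular radius value below is illustrative and back-computed, since the per-source angular radii are not listed in this table):

```python
import math

def linear_radius_pc(theta_arcmin, distance_kpc):
    """Small-angle approximation: R [pc] = D [pc] * theta [rad]."""
    theta_rad = theta_arcmin * math.pi / (180.0 * 60.0)
    return distance_kpc * 1000.0 * theta_rad

# Illustrative: an angular radius of ~10.2' at the 2.6 kpc kinematic
# distance of Sh2-280 corresponds to its ~7.7 pc linear radius.
r = linear_radius_pc(10.2, 2.6)
```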
Massive stars (B0 or earlier) are suggested to be the excitation sources of \ion{H}{2} regions \citep{2014ApJS..212....1A}. From the SIMBAD database, we searched for the O and B0 stars located within the \ion{H}{2} regions/candidates. Massive stars are found within Sh2-280 (Figure \ref{fig:S280}), Sh2-282 (Figure \ref{fig:S282}), Sh2-283 (Figure \ref{fig:S283}), G211.149-01.004 (Figure \ref{fig:G2111-010}), and BFS54 (Figure \ref{fig:BFS54}), and we list the properties of these massive stars in Table \ref{tab_HII_star}. Among the five massive stars listed in Table \ref{tab_HII_star}, HD 46573, HD 47432, ALS 18674, and 2MASS J06465642+0116405 are located approximately within the MIR, H$\alpha$, and radio continuum emission of Sh2-280, Sh2-282, Sh2-283, and BFS54, respectively, and thus we suggest them to be the candidate excitation sources of these \ion{H}{2} regions. However, the locations of the stars alone are insufficient to identify the excitation sources of \ion{H}{2} regions, and further investigation is needed to confirm the ionizing sources. In Sh2-282, the radial velocity of HD 47432 is about 40 km s$^{-1}$ larger than that of the molecular clouds. However, the velocities of the RRL and the molecular clouds in this region are similar (Figure \ref{fig:spec_HII}). The large radial velocity of HD 47432 may come from its peculiar motion. No massive stars are found in the other \ion{H}{2} region candidates. As the OB-type star EM* RJHA 48 is located outside the well-defined MIR bubble in the 12 and 22 $\micron$ images (Figure \ref{fig:G2111-010}), we do not consider it to be the excitation source of G211.149-01.004.
\clearpage
\startlongtable
\begin{deluxetable}{ccccccccccccccc}
\tabletypesize{\scriptsize}
\tablewidth{10pt}
\tablenum{3}
\tablecaption{Properties of the massive stars within the \ion{H}{2} regions/candidates}
\label{tab_HII_star}
\tablehead{
\colhead{Region} & \colhead{Category} & \colhead{Massive star}& \colhead{Glon} & \colhead{Glat} & \colhead{Radial velocity}
&\colhead{Parallax} & \colhead{Spectral type}& \colhead{Association} \\[1pt]
\colhead{ }&\colhead{ }&\colhead{ }& \colhead{(deg)} &
\colhead{(deg)} &\colhead{(km s$^{-1}$)}& \colhead{
(mas)}& \colhead{ } & \colhead{ }}
\startdata
\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ }&\colhead{1st velocity component} \\[1pt]
\hline
G208.506-02.304 &Q&\nodata \\%& 208.506 & -2.304 & 5.9 & 0.6 & 0.7 & 9.2 & 2.6 $\times$ 10$^1$ & 1.0 \\
\hline
\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ }&\colhead{2nd velocity component} \\[1pt]
\hline
Sh2-280 &K &HD 46573 & 208.730 & -02.631 & 29.3 & 0.6471 & O7V((f))z & Yes\\%& 7.0 & 1.0 $\times$ 10$^4$ & 1.5 \\
G209.208-00.127 &Q&\nodata \\% & 209.208 & -0.127 & 29.7 & 3.3 & 1.2 & 6.8 & 1.3 $\times$ 10$^2$ & 1.1 \\
Sh2-282 &K &HD 47432 & 210.035 & -02.111 & 60.1 & 0.3873 & O9.7Ib & Yes\\%& 6.4 & 2.0 $\times$ 10$^4$ & 1.7 \\
BFS54 &K &2MASS J06465642+0116405 & 211.282 & -00.418 & \nodata & 0.4240 & B0.5Ve & Yes\\%& 12.4 & 3.0 $\times$ 10$^3$ & 1.2 \\
\hline
\colhead{ }&\colhead{ }&\colhead{ }&\colhead{ }&\colhead{3rd velocity component} \\[1pt]
\hline
G209.971-00.698 &Q&\nodata \\%& 209.971 & -0.698 & 35.0 & 4.0 & 2.5 & 8.5. & 1.1 $\times$ 10$^3$ & 1.1 \\
G210.009-00.292 &Q&\nodata \\% & 210.009 & -0.292 & 32.2 & 3.6 & 2.0 & 8.7 & 4.5 $\times$ 10$^2$ & 1.3 \\
G210.229-01.525 &Q&\nodata \\% & 210.229 & -1.525 & 40.5 & 4.8 & 6.1 & 8.3 & 3.6 $\times$ 10$^3$ & 1.4 \\
Sh2-283 &K&ALS 18674 & 210.788 & -02.550 & 70.6 & \nodata& B0V & Yes\\%& 3.1 & 7.8 & 2.9 $\times$ 10$^3$ & 2.3 \\
G211.149-01.004 &K&EM* RJHA 48 & 211.168 & -01.043 & \nodata & \nodata & OB & No\\\hline
\enddata
\tablecomments{Column 3 gives the massive stars within the \ion{H}{2} regions/candidates. Columns $4-5$ give the Galactic longitude and latitude of the massive stars. Columns 6$-$8 give the radial velocity \citep{2006AstL...32..759G}, parallax \citep{2018yCat.1345....0G}, and spectral type \citep{2011ApJS..193...24S,2015MNRAS.454.3597D} of the massive stars, respectively.}
\end{deluxetable}
\section{Discussion}\label{discussion}
\subsection{Kinematics of the Molecular Clouds Associated with the \ion{H}{2} Regions/Candidates}\label{Kinematics}
Classical theories on \ion{H}{2} regions assume the interstellar medium around the \ion{H}{2} regions to be three-dimensional structures. However, \citet{2010ApJ...709..791B} did not find the associated foreground or background molecular clouds at the center for 43 infrared bubbles. Many recent studies argue that the molecular clouds associated with the infrared bubbles are two-dimensional ring-like structures \citep{2012ApJ...755...87S,2017ApJ...849..140X}.
The kinematics of molecular clouds provides unique information on the interaction between \ion{H}{2} regions and the surrounding medium and can be used to reveal the spatial relation between the molecular clouds and the \ion{H}{2} regions. We present the position-velocity maps of the ten \ion{H}{2} regions/candidates in Figures \ref{fig:Kinematics_G2085-023}$-$\ref{fig:Kinematics_BFS54} in the Appendix. In order to examine the possible influence of the \ion{H}{2} regions/candidates on the surrounding gas, the position paths of these p-v diagrams are set to pass through the centers of the \ion{H}{2} regions/candidates and along the major axes of the molecular clouds. All the paths have the same width of 5 pixels.
In the G208.506-02.304 region, the velocity of the molecular clouds along the position path is roughly constant at first and then increases toward the edge of the infrared bubble (Fig.~\ref{fig:Kinematics_G2085-023}). The position-velocity map of Sh2-280 exhibits a partial cavity structure (Figure \ref{fig:Kinematics_S280}). Fitting the cavity structure in the position-velocity map with an expanding gas-shell model and taking the velocity offset of the most red-shifted gas at the middle of the position path as the expansion velocity, we estimate the expansion velocity of the Sh2-280 \ion{H}{2} region to be 3 km s$^{-1}$. This expansion velocity gives a dynamical time of 2.5 Myr for the Sh2-280 \ion{H}{2} region, assuming a uniform expansion. The velocity width of the molecular cloud at the position offset of HD 46573 is broadened, showing the feedback of HD 46573 on the cloud and supporting the identification of HD 46573 as the excitation source of the Sh2-280 \ion{H}{2} region. The p-v diagram of G209.208-00.127\_far (Fig.~\ref{fig:Kinematics_G2092-001}) shows two clumps with a velocity difference of about 1.5 km s$^{-1}$, supporting the idea that the molecular cloud is interacting with the \ion{H}{2} region candidate G209.208-00.127. Along the position path, the velocity of G209.971-00.698\_far is nearly constant (Fig.~\ref{fig:Kinematics_G2099-006}), implying that G209.971-00.698\_far is either the background cloud or the foreground cloud to the \ion{H}{2} region candidate G209.971-00.698, but unlikely to contain both. The velocity of the molecular cloud in G210.009-00.292 increases gradually from the center to the outside (Fig.~\ref{fig:Kinematics_G2100-002}). The cloud associated with Sh2-282, Sh2-282\_far, is clumpy in the p-v diagram and the clumps show a broadened velocity range (Figure \ref{fig:Kinematics_S282}), indicating the influence of the O-type star HD 47432 on the molecular cloud.
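The dynamical-time estimate for Sh2-280 follows from $t = R/v_{\rm exp}$ under the uniform-expansion assumption; a minimal numerical check (Python; the constants are standard unit conversions):

```python
PC_KM = 3.0857e13   # kilometres per parsec
MYR_S = 3.156e13    # seconds per megayear

def dynamical_time_myr(radius_pc, v_exp_kms):
    """Dynamical age t = R / v_exp, assuming uniform expansion."""
    return radius_pc * PC_KM / (v_exp_kms * MYR_S)

# Sh2-280: linear radius 7.7 pc (Table 2), expansion velocity 3 km/s
t = dynamical_time_myr(7.7, 3.0)  # ~2.5 Myr
```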
As in G210.009-00.292, the `far' molecular cloud in G210.229-01.525 shows a velocity increase from the center of the region to the outside (Fig.~\ref{fig:Kinematics_G2102-015}). In Sh2-283, the velocity at first decreases from the outside to the center of the region and then increases from the center to the outside (Figure \ref{fig:Kinematics_S283}). In G211.149-01.004 (Figure \ref{fig:Kinematics_G2111-010}), the velocity increases obviously from the center to the outside, whereas in BFS54 (Fig.~\ref{fig:Kinematics_BFS54}), the velocity gradient is relatively small.
\subsection{Radial Temperature and Line Width Profiles of the \ion{H}{2} Regions/Candidates}
Expanding \ion{H}{2} regions feed momentum and energy to the surrounding interstellar medium, which affects the temperature and turbulence of the surrounding molecular clouds. Assuming a three-dimensional structure and uniform density for the surrounding gas, \citet{2006ApJ...646..240H} calculated the dynamical expansion of \ion{H}{2} regions, including the evolution of velocity, density, temperature, and pressure of the gas. Their results show that the peak of the expansion velocity occurs at the position of the shock front and that the temperature of the gas decreases with the distance from the exciting star (see their Figs. 3 and 4). If the \ion{H}{2} regions and the surrounding gas are two-dimensional structures, it is reasonable to assume that the results for the molecular gas are qualitatively similar to those in the three-dimensional case. In the two-dimensional case, the projected distance to the exciting star is closely related to the physical distance. Therefore, we could expect that a line width maximum would occur around the radius of the \ion{H}{2} region and that the gas temperature would decrease with the projected distance from the center of the \ion{H}{2} region. In the three-dimensional case, however, the projected distance does not directly represent the physical distance, and therefore the trends expected in the two-dimensional case may not appear. To examine whether these two trends appear in the observed molecular line emission images, we calculated the azimuthally averaged radial profiles of excitation temperature and line width for the ten \ion{H}{2} regions/candidates and show the results in Figure \ref{fig:intensity_radius_tex_m2}. The temperatures in three regions, BFS54, G208.506-02.304, and G210.009-00.292, gradually decrease with the distance to the center of the \ion{H}{2} regions, whereas no such trend is found for the other seven \ion{H}{2} regions/candidates.
Except for Sh2-283, the \ion{H}{2} regions/candidates show smooth distributions of line width with the projected distance. Sh2-283 shows a decreasing line width with the projected distance and has a higher line width across the whole range of distances than the other \ion{H}{2} regions/candidates. For all ten \ion{H}{2} regions/candidates, no obvious maximum of line width occurs around the nominal radius. Therefore, our data favor the scenario that \ion{H}{2} regions and their surrounding molecular gas are three-dimensional structures.
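A hedged sketch of how such azimuthally averaged radial profiles can be computed from a 2D map (Python/NumPy; the annular binning scheme here is an assumption for illustration, not necessarily the exact procedure used for Figure \ref{fig:intensity_radius_tex_m2}):

```python
import numpy as np

def radial_profile(img, center, nbins=10):
    """Azimuthally averaged profile: mean of the map values in annular
    bins of projected radius around `center` (NaN pixels are ignored)."""
    y, x = np.indices(img.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.linspace(0.0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, nbins - 1)
    vals = img.ravel()
    prof = np.array([np.nanmean(vals[idx == k]) for k in range(nbins)])
    return 0.5 * (edges[:-1] + edges[1:]), prof  # bin centres, profile
```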
\begin{figure}[h]
\centering
\subfigure[]{
\label{intensity_radius_tex}
\includegraphics[width=0.45\textwidth]{intensity_radius_tex-eps-converted-to.pdf}}
\subfigure[]{
\label{intensity_radius_m2}
\includegraphics[width=0.443\textwidth]{intensity_radius_m2-eps-converted-to.pdf}}
\caption{Azimuthally averaged radial profiles of excitation temperature (a) and line width (b) of the molecular clouds in the ten \ion{H}{2} regions/candidates.}
\label{fig:intensity_radius_tex_m2}
\end{figure}
\section{Summary}\label{summary}
Based on the MWISP project, we performed $^{12}$CO, $^{13}$CO, and C$^{18}$O observations covering the region of Galactic longitude of 207.7$^{\circ}$ $< l <$ 211.7$^{\circ}$ and Galactic latitude of $-$4.5$^{\circ}$ $< b <$ 0$^{\circ}$ (4.0$^{\circ}$ $\times$ 4.5$^{\circ}$). The velocity resolution is 0.16 km s$^{-1}$ and the corresponding sensitivities are 0.5 K for $^{12}$CO emission and 0.3 K for $^{13}$CO and C$^{18}$O emission. For the first time, we find abundant molecular clouds in this region. We summarize our results as follows:
1. According to the velocity distribution, the molecular clouds are divided into three groups of different velocity components, i.e., $-$3 km s$^{-1}$ to 16.5 km s$^{-1}$ (first velocity component), 16.5 km s$^{-1}$ to 30 km s$^{-1}$ (second velocity component), and 30 km s$^{-1}$ to 55 km s$^{-1}$ (third velocity component), with kinematic distances from 0.5 kpc to 7.0 kpc. The molecular clouds of the second velocity component typically have filamentary morphology. Four molecular clumps with velocities of $48-51$ km s$^{-1}$ at Galactic latitudes of around $-$4.4$^{\circ}$ are estimated to be $-$525 to $-$508 pc from the Galactic mid-plane, which is significantly greater in absolute value than the $\sigma$ thickness of the Galactic molecular gas disk (80$-$190 pc) at a Galactic radius of $\sim$14 kpc.
2. By combining the data of dust, ionized gas, and molecular gas, the molecular clouds associated with the ten infrared bubbles in this region are identified and the physical properties of these molecular clouds are obtained. The linear radii of the ten \ion{H}{2} regions/candidates range from 0.7 pc to 19.1 pc and the masses derived from the $X$ factor range from 26 to 2$\times 10^4$ M$_{\odot}$. We suggest G211.149-01.004 to be a true \ion{H}{2} region based on the detection of radio continuum and H$\alpha$ emission. Massive stars are found within Sh2-280, Sh2-282, Sh2-283, and BFS54, and we suggest them to be the candidate excitation sources of these \ion{H}{2} regions. No massive stars with spectral type earlier than B0 have been found in the other six \ion{H}{2} regions/candidates.
3. The distributions of excitation temperature and line width with the projected distance from the center of \ion{H}{2} region/candidate suggest that the majority of the ten \ion{H}{2} regions/candidates and their associated molecular gas are three-dimensional structures, rather than two-dimensional structures.
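The mid-plane heights and Galactocentric radius quoted in point 1 can be checked with simple geometry; a sketch (Python; the 6.7 kpc heliocentric distance chosen here is an illustrative value consistent with the third velocity component, and $R_0 = 8.34$ kpc is the Sun's Galactocentric radius in the Reid et al. 2014 A5 model):

```python
import math

R0_KPC = 8.34   # Galactocentric radius of the Sun (Reid et al. 2014, A5)

def z_height_pc(distance_kpc, b_deg):
    """Height above/below the Galactic mid-plane: z = D * sin(b)."""
    return distance_kpc * 1000.0 * math.sin(math.radians(b_deg))

def galactocentric_radius_kpc(distance_kpc, l_deg):
    """Law of cosines in the Galactic plane."""
    d, l = distance_kpc, math.radians(l_deg)
    return math.sqrt(R0_KPC**2 + d**2 - 2.0 * R0_KPC * d * math.cos(l))

# Illustrative clump: l ~ 210 deg, b ~ -4.4 deg, D ~ 6.7 kpc
z = z_height_pc(6.7, -4.4)               # ~ -514 pc below the mid-plane
R = galactocentric_radius_kpc(6.7, 210)  # ~ 14.5 kpc
```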
\section{Acknowledgements}
We would like to thank the PMO-13.7 m telescope staff for their support during the observations and thank Min Fang, Yang Su, Zhiwei Chen, and Shaobo Zhang for helpful discussions. We thank the anonymous referee for valuable comments and suggestions that helped to improve this paper. This work is supported by the National Key R\&D Program of China (No. 2017YFA0402701) and the Key Research Program of Frontier Sciences of CAS under grant QYZDJ-SSW-SLH047. C. Li acknowledges support from NSFC grants 11973091, 11503086, and 11503087. H. Wang acknowledges support from NSFC grant 11973091. This work makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. This work makes use of data products from the Southern H-Alpha Sky Survey Atlas (SHASSA), which is supported by the National Science Foundation. This work has also made use of the SIMBAD database, operated at CDS, Strasbourg, France.
\label{intro}
The vast availability of integer-valued data, emerging from several real-world applications, has
motivated the growth of a large literature on modelling and inference for count time series processes.
For comprehensive surveys, see \cite{fok2002}, \cite{Davisetal(2016)} and \cite{weiss2018}. Early
contributions to the development of count time series models were the Integer Autoregressive (INAR) models \cite{al1987, al1990}
and observation-driven \citep{liang1986} or parameter-driven models \citep{Zeger(1988)}. The latter classification, due to \cite{cox1981},
will be particularly useful, as we will be developing theory for Poisson observation-driven models.
In this contribution, we appeal to the generalized linear model (GLM) framework, see \citet{McCullaghandNelder(1989)}, as it provides a natural extension of continuous-valued time series
to integer-valued processes. The GLM framework accommodates likelihood inference and supplies a toolbox whereby
testing and diagnostics can also be advanced. Some examples of observation-driven models
for count time series include the works by \cite{davis2003}, \cite{Heinen(2003)}, \cite{FokianosandKedem(2004)}
and \cite{fer2006}, among others. More recent work includes \cite{fok2009} and \cite{fok2011} who develop properties
and estimation for a class of linear and log-linear count time series models. Further related contributions
have appeared over the last years; see \cite{chri2014} (for quasi-likelihood inference of negative binomial processes),
\cite{fra2016} (for quasi-likelihood inference based on suitable moment assumptions) and \cite{douc2013}, \cite{davis2016}, \cite{cui2017} and \cite{fok2017}, among others, for further generalizations of observation-driven models.
Theoretical properties of such models have been fully investigated using various techniques: \cite{fok2009} initially developed
a perturbation approach, \cite{Neumann(2010)} employed the notion of $\beta$-mixing, \citet{Doukhanetal(2011)} used a weak dependence approach, \cite{Woodardetall(2010)} and
\cite{douc2013} applied Markov chain theory without irreducibility assumptions, and \cite{Wangetal(2014)}
used the theory of $e$-chains (see \cite{MeynandTweedie(1993)}).
Univariate count time series models have been developed and studied in detail, as the previous list of references shows. However, multivariate models, which are essential to the analysis of network data, have received far less attention. Studies of multivariate
INAR models include those of \cite{lat1997}, \cite{pedeli2011, pedeli2013, pedeli22013}. Theory and inference for multivariate count time series models is a research topic which is receiving increasing attention. In particular, observation-driven models and their properties are discussed by \cite{HeinenandRegifo(2007)},
\cite{Liu(2012)}, \cite{Andreassen(2013)}, \cite{Ahmad(2016)} and \cite{Leeetal(2017)}. More recently, \cite{fok2020} introduced a multivariate extension of the linear and log-linear Poisson autoregression model, by employing a copula-based construction for the joint distribution of the counts. The authors employ properties of Poisson processes to introduce joint dependence of counts over time.
In doing so, they avoid technical difficulties associated with the non-uniqueness of copulas for discrete distributions
\citep[pp.~507-508]{GenestandNeslehova(2007)}. They propose a plausible data generating process which marginally preserves Poisson process properties. Further details are given in the recent review of \cite{fokianos_2021}.
The aim of this contribution is to link multivariate observation-driven count time series models with time-varying network data. Such data is increasingly available in many scientific areas (social networks, epidemics, etc.).
Measuring the impact of a network structure on a multivariate time series process has attracted considerable attention over the last years; see \cite{zhu2017} for the development of Network Autoregressive models (NAR). The authors introduced autoregressive models for continuous network data and established associated least squares inference under two asymptotic regimes: (a) with increasing time sample size $T\to\infty$ and fixed network dimension $N$, and (b) with both $N,T$ increasing, i.e. $\min\left\lbrace N,T\right\rbrace \to\infty$. Regime (a) corresponds to standard asymptotic inference in time series analysis. However, in network analysis it is important to understand the behavior of the process when the network's dimension grows. This is a relevant problem in fields where the network is typically large; see, for example, social networks in \cite{wass1994}. From the empirical point of view, it is crucial to have conditions for large network structures under which time series inference is still valid; these problems motivate the study of asymptotics (b). A significant extension of this work to network quantile autoregressive models has been recently reported by \cite{zhu2019}. Other extensions of the NAR model include grouped least squares estimation \citep{zhu2020} and a network version of the GARCH model, see \cite{zhou2020}, for the case of $T\to\infty$ and fixed network dimension $N$. Under the standard asymptotic regime (a), related work was also developed by \citet{Knightetal(2020)}, who specified a Generalized Network Autoregressive model (GNAR) for continuous random variables, which takes into account different layers of relationships within neighbours of the network. Moreover, the same authors provide \textsf{R} software (package \textsf{GNAR}) for fitting such models.
Following the discussion in the concluding remarks of \citet[p.~1116]{zhu2017}, an extension of the NAR model to accommodate discrete responses would be of crucial importance. Indeed, integer-valued responses are commonly encountered in real applications and are strongly connected to network data. For example, several data of interest in social network analysis correspond to integer-valued responses (number of posts, number of likes, counts of digits employed in comments, etc.). Another typical field of application for count variables is the number of cases in epidemic models for studying the spread of infectious diseases in a population; this is even more important in the current COVID-19 pandemic outbreak. Recently, an application of this type, which employs a model similar to the NAR with count data, has been presented in \cite{bracher_2020endemic}. Therefore, the extension of the NAR model to multivariate count time series is an important theoretical and methodological contribution which is not covered by the existing literature, to the best of our knowledge. The main goal of this work is to fill this gap by specifying linear and log-linear Poisson network autoregressions (PNAR) for count processes and by studying in detail the two related types of asymptotic inference discussed above. Moreover, the development of all the network time series models discussed so far relies strongly on the assumption of independent and identically distributed (IID) innovations. Such a condition might not be realistic in many applications. We overcome this limitation by employing the notion of $L^p$-near epoch dependence (NED), see \cite{and1988}, \cite{PoetscherandPrucha(1997)}, and the related concept of $\alpha$-mixing \citep{rosen1956,douk1994}. These notions allow relaxation of the independence assumption, as they provide some guarantee of \emph{asymptotic independence} over time.
An elaborate and flexible dependence structure among variables, over time and over the nodes composing the network, is available for all models we consider due to the definition of a full covariance matrix, where the dependence among variables is captured by the copula construction introduced in \cite{fok2020}.
For the continuous-valued case, \cite{zhu2017} employed ordinary least squares (OLS) estimation combined with specific properties imposed on the adjacency matrix for the estimation of the unknown parameters. However, this method is not applicable to general time series models. In the case we study, estimation is carried out by using quasi-likelihood methods; see \cite{hey1997}, for example. When the network dimension $N$ is fixed and $T\to\infty$, standard results already available for Quasi Maximum Likelihood Estimation (QMLE) of Poisson stationary time series, as developed by \cite{fok2020}, carry over to the case of PNAR models. However, the asymptotic properties of the estimators rely on the convergence of sample means to the associated expectations due to the ergodicity of a stationary random process $\left\lbrace \mathbf{Y}_t : t\in\mathbb{Z} \right\rbrace$ (or a perturbed version of it). The stationarity of an $N$-dimensional time series, with $N\to\infty$, is still an open problem, and it is not clear how it can be introduced.
As a consequence, it is not clear whether all the results implied by the ergodicity of the time series are available in the increasing dimension case. In the present contribution, this problem is overcome by providing an alternative proof, based on the laws of large numbers for $L^p$-NED processes of \cite{and1988}. Our method employs the working definition of stationarity of \citet[Def.~1]{zhu2017} for processes of increasing dimension.
The paper is organized as follows: Section \ref{SEC:Linear Models} discusses the PNAR($p$) model specification for the linear and the log-linear case, with lag order $p$, and the related stability properties. In Section \ref{SEC: inference}, the quasi-likelihood inference is established, showing consistency and asymptotic normality of the quasi maximum likelihood estimator (QMLE) for the two types of asymptotics $T\to\infty$ and $\min\left\lbrace N,T\right\rbrace \to\infty$. Section \ref{SEC: application} discusses the results of a simulation study and an application on real data. The paper concludes with an Appendix containing all the proofs of the main results and the specification of the first two moments for the linear PNAR model. Further results are included in the Supplementary Material.
\paragraph{Notation:} We denote by $|\mathbf{x}|_r=(\sum_{j=1}^{p}\norm{x_j}^r)^{1/r}$ the $l^r$-norm of a $p$-dimensional vector $\mathbf{x}$. If $r=\infty$, $|\mathbf{x}|_\infty=\max_{1\leq j\leq p}|x_{j}|$. Let $\lVert \mathbf{X}\rVert_r=(\sum_{j=1}^{p}\mathrm{E}(|X_j|^r))^{1/r}$ be the $L^r$-norm for a random vector $\mathbf{X}$. For a $q \times p$ matrix $\mathbf{A}=(a_{ij})$, ${i=1,\ldots,q}$, ${j=1,\ldots,p}$, denote the generalized matrix norm $\vertiii{\mathbf{A}}_{r}= \max_{\norm{\bf x}_{r}=1} \norm{\mathbf{A}\textbf{x}}_{r}$. If $r=1$, then $\vertiii{\mathbf{A}}_1=\max_{1\leq j\leq p}\sum_{i=1}^{q}|a_{ij}|$. If $r=2$, $\vertiii{\mathbf{A}}_2=\rho^{1/2}(\mathbf{A}^T\mathbf{A})$, where $\rho(\cdot)$ is the spectral radius. If $r=\infty$, $\vertiii{\mathbf{A}}_\infty=\max_{1\leq i\leq q}\sum_{j=1}^{p}|a_{ij}|$. If $q=p$, these norms are matrix norms. Define $\norm{\mathbf{x}}^r_v=(\norm{x_1}^r,\dots,\norm{x_p}^r)^\prime$, $\norm{\mathbf{A}}_v=(\norm{a_{i,j}})_{(i,j)}$, $\lnorm{\mathbf{X}}_{r,v}=(\mathrm{E}^{1/r}\norm{X_1}^r,\dots,\mathrm{E}^{1/r}\norm{X_p}^r)^\prime$ and let $\preceq$ be the partial order relation on $\mathbf{x},\mathbf{y}\in\mathbb{R}^p$ such that $\mathbf{x}\preceq \mathbf{y}$ means $x_i\leq y_i$ for $i=1,\dots,p$. For a $d$-dimensional vector $\mathbf{z}$, with $d\to\infty$, set the compact notation $\max_{1\leq i< \infty}z_i=\max_{i\geq 1}z_i$.
The notations $C_r$ and $D_r$ denote constants which depend on $r$, where $r\in\mathbb{N}$. In particular, $C$ denotes a generic constant.
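The matrix-norm special cases listed above can be verified numerically; a small sketch (Python/NumPy, with an arbitrary illustrative matrix):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

norm_1   = np.abs(A).sum(axis=0).max()  # max absolute column sum
norm_inf = np.abs(A).sum(axis=1).max()  # max absolute row sum
norm_2   = np.sqrt(np.linalg.eigvalsh(A.T @ A).max())  # rho^{1/2}(A^T A)

# These agree with NumPy's built-in operator norms
assert norm_1 == np.linalg.norm(A, 1)
assert norm_inf == np.linalg.norm(A, np.inf)
assert np.isclose(norm_2, np.linalg.norm(A, 2))
```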
\section{Models and stability results}
\label{SEC:Linear Models}
We consider a network with $N$ nodes (network size) and index $i=1,\dots,N$. The structure of the network is completely described by the adjacency matrix $\mathbf{A}=(a_{ij})\in\mathbb{R}^{N\times N}$, i.e. $a_{ij}=1$ provided that there exists a directed edge from $i$ to $j$, $i\to j$ (e.g. user $i$ follows $j$ on Twitter), and $a_{ij}=0$ otherwise. However, undirected graphs are allowed ($i\leftrightarrow j$). The structure of the network is assumed nonrandom. Self-relationships are not allowed, i.e. $a_{ii}=0$ for any $i=1,\dots,N$; this is a typical assumption, and it is reasonable for various real situations, e.g. social networks, where a user cannot follow himself. For details about the definition of social networks see \cite{wass1994}, \cite{kola2014}. Define a count variable $Y_{i,t}\in\mathbb{N}$ for the node $i$ at time $t$. We want to assess the effect of the network structure on the count variable $\left\lbrace Y_{i,t} \right\rbrace $ for $i=1,\dots,N$ over time $t=1,\dots,T$.
In this section, we study the properties of linear and log-linear models. We initiate this study by considering a simple, yet illuminating, case of a linear model of order one and then we consider the
more general case of $p$'th order model. Finally, we discuss log-linear models.
In what follows, we denote by $\left\lbrace\mathbf{Y}_t=(Y_{i,t},\,i=1,2\dots N,\,t=0,1,2\dots,T)\right\rbrace $ an $N$-dimensional vector of count time series with
$\left\lbrace \boldsymbol{\lambda}_t=(\lambda_{i,t},\,i=1,2\dots N,\,t=1,2\dots,T)\right\rbrace $ being the corresponding $N$-dimensional intensity process vector.
Define the $\sigma$-field $\mathcal{F}_t=\sigma(\mathbf{Y}_s: s\leq t)$. Based on the specification of the model, we assume that $\boldsymbol{\lambda}_t=\mathrm{E}(\mathbf{Y}_t|\mathcal{F}_{t-1})$.
\subsection{Linear PNAR(1) model}
\label{Sec:Properties of order 1 model}
A linear count network model of order 1 is given by
\begin{equation}
Y_{i,t}|\mathcal{F}_{t-1}\sim Poisson(\lambda_{i,t}), ~~~ \lambda_{i,t}=\beta_0+\beta_1n_i^{-1}\sum_{j=1}^{N}a_{ij}Y_{j,t-1}+\beta_2Y_{i,t-1}\,,
\label{lin1}
\end{equation}
where $n_i=\sum_{j\neq i}a_{ij}$ is the out-degree, i.e. the total number of nodes to which $i$ has a directed edge. From the left-hand side equation of \eqref{lin1}, we observe that the process $Y_{i,t}$ is assumed to be marginally Poisson, conditionally on the past. We call \eqref{lin1} the linear Poisson network autoregression of order 1, abbreviated by PNAR(1).
Model \eqref{lin1} postulates that, for every single node $i$, the marginal conditional mean of the process is regressed on the past count of the variable itself for $i$
and the average count of the other nodes $j\neq i$ which have a connection with $i$.
This model assumes that only the nodes which are directly followed by the focal node $i$ possibly have an impact on the mean process of counts.
It is a reasonable assumption in many applications; for example, in a social network the activity of node $k$, which satisfies $a_{ik}=0$, does not affect node $i$.
The parameter $\beta_1$ is called the network effect, as it measures the average impact of node $i$'s connections $n_i^{-1}\sum_{j=1}^{N}a_{ij}Y_{j,t-1}$. The coefficient $\beta_2$ is called the momentum effect because it provides a weight for the impact of the past count $Y_{i,t-1}$. This interpretation is in line with the Gaussian network vector autoregression (NAR) discussed by \cite{zhu2017} for the case of continuous variables.
Equation \eqref{lin1} does not include information about the joint dependence structure of the PNAR(1) model. It is then convenient to rewrite \eqref{lin1} in vectorial form, following \cite{fok2020},
\begin{equation}
\mathbf{Y}_t=\mathbf{N}_t(\boldsymbol{\lambda}_t), ~~~ \boldsymbol{\lambda}_t=\boldsymbol{\beta}_0+\mathbf{G}\mathbf{Y}_{t-1}\,,
\label{lin2}
\end{equation}
where $\boldsymbol{\beta}_0=\beta_0\mathbf{1}_N\in\mathbb{R}^N$, with $\mathbf{1}_N=(1,1,\dots,1)^T\in\mathbb{R}^N$, and the matrix $\mathbf{G}=\beta_1\mathbf{W}+\beta_2\mathbf{I}_N$, where $\mathbf{W}=\textrm{diag}\left\lbrace n_1^{-1},\dots, n_N^{-1}\right\rbrace \mathbf{A}$ is the row-normalized adjacency matrix, with
$\mathbf{A}=(a_{ij})$, so $\mathbf{w}_i=(a_{ij}/n_i,\,j=1,\dots,N)^T\in\mathbb{R}^N$ is the $i$-th row vector of the matrix $\mathbf{W}$, and $\mathbf{I}_N$ is the $N \times N$ identity matrix. Note that the matrix $\mathbf{W}$ is a stochastic matrix, as $\vertiii{\mathbf{W}}_\infty=1$ \citep[Def.~9.16]{seber2008}.
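As a small illustration of this notation, the sketch below (Python/NumPy; the adjacency matrix and coefficient values are illustrative assumptions) constructs $\mathbf{W}$ and $\mathbf{G}$ for a toy directed network and checks that $\mathbf{W}$ is row-stochastic:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)  # toy adjacency matrix, a_ii = 0
n = A.sum(axis=1)                       # out-degrees n_i
W = A / n[:, None]                      # row-normalised adjacency matrix
beta1, beta2 = 0.2, 0.3                 # illustrative network/momentum effects
G = beta1 * W + beta2 * np.eye(3)

# W is row-stochastic, so its infinity matrix norm equals 1
assert np.allclose(W.sum(axis=1), 1.0)
```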
Finally, $\left\lbrace \mathbf{N}_t \right\rbrace $ is a sequence of independent $N$-variate copula-Poisson processes. By this we mean that $\mathbf{N}_t(\boldsymbol{\lambda}_t)$ is a sequence of $N$-dimensional IID Poisson count processes, with intensity 1, counting the number of events in the intervals of time $[0,\lambda_{1,t}]\times\dots\times[0,\lambda_{N,t}]$, and whose dependence structure is modelled through a copula construction $C(\dots)$ on their associated exponential waiting-time random variables. More precisely, consider a set of values $(\beta_0,\beta_1, \beta_2)^T$ and a starting vector $\boldsymbol{\lambda}_0=(\lambda_{1,0},\dots,\lambda_{N,0})^T$,
\begin{enumerate}
\item Let $\mathbf{U}_{l}=(U_{1,l},\dots,U_{N,l})$, for $l=1,\dots,K$, be a sample from an $N$-dimensional copula $C(u_1,\dots, u_N)$, where $U_{i,l}$ follows a Uniform(0,1) distribution, for $i=1,\dots,N$.
\item The transformed variable $X_{i,l}=-\log{U_{i,l}}/\lambda_{i,0}$ is exponentially distributed with parameter $\lambda_{i,0}$, for $i=1,\dots,N$.
\item If $X_{i,1}>1$, then $Y_{i,0}=0$, otherwise
$Y_{i,0}=\max\left\lbrace k\in[1,K]: \sum_{l=1}^{k}X_{i,l}\leq 1\right\rbrace$, by taking $K$ large enough.
Then, $Y_{i,0}\sim Poisson(\lambda_{i,0})$, for $i=1,\dots,N$. So, $\mathbf{Y}_{0}=(Y_{1,0},\dots, Y_{N,0})$ is a set of marginal Poisson processes with mean $\boldsymbol{\lambda}_0$.
\item By using the model \eqref{lin2}, $\boldsymbol{\lambda}_1$ is obtained.
\item Return back to step 1 to obtain $\mathbf{Y}_1$, and so on.
\end{enumerate}
In practical applications the sample size $K$ should be a large value, e.g. $K=1000$; its value clearly depends, in general, on the magnitude of the counting phenomenon. Moreover, the copula construction $C(\dots)$ will depend on one or more unknown parameters, say $\rho$, which capture the contemporaneous correlation among the variables.
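The data generating algorithm above can be sketched in a few lines of code. In the sketch below, the copula $C$ is taken, for concreteness, to be a Gaussian copula with equicorrelation parameter $\rho$; this choice, as well as the function names, are illustrative assumptions rather than code from the paper.

```python
import numpy as np
from math import erf, sqrt

def copula_poisson_draw(lam, rho, K=1000, rng=None):
    """One draw Y_t = N_t(lambda_t): an N-vector of counts with Poisson(lam_i)
    marginals, whose dependence comes from a Gaussian copula with
    equicorrelation rho (an assumption of this sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(lam)
    # Step 1: a copula sample U_l, l = 1,...,K, via correlated Gaussians
    S = rho * np.ones((N, N)) + (1.0 - rho) * np.eye(N)
    Z = rng.multivariate_normal(np.zeros(N), S, size=K)            # K x N
    Phi = np.vectorize(lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0))))
    U = Phi(Z)                                  # Uniform(0,1) marginals
    # Step 2: exponential waiting times with rates lam_i
    X = -np.log(U) / np.asarray(lam)                               # K x N
    # Step 3: count arrivals in [0, 1]; K must be large enough
    return (np.cumsum(X, axis=0) <= 1.0).sum(axis=0)

def simulate_pnar1(A, beta0, beta1, beta2, T, rho, rng=None):
    """Simulate the linear PNAR(1): lambda_t = beta0*1 + G Y_{t-1},
    with G = beta1*W + beta2*I and W the row-normalised adjacency matrix.
    Every node is assumed to have at least one neighbour (n_i >= 1)."""
    rng = np.random.default_rng() if rng is None else rng
    N = A.shape[0]
    W = A / A.sum(axis=1, keepdims=True)       # row-normalised adjacency
    G = beta1 * W + beta2 * np.eye(N)
    lam = np.full(N, beta0 / (1.0 - beta1 - beta2))  # stationary mean start
    Y = np.empty((T, N), dtype=int)
    for t in range(T):
        Y[t] = copula_poisson_draw(lam, rho, rng=rng)
        lam = beta0 + G @ Y[t]
    return Y
```

By construction each $Y_{i,t}$ is marginally Poisson with the prescribed intensity, while the copula controls only the contemporaneous dependence.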
The development of a multivariate count time series
model would be based on the specification of a joint distribution, so that standard likelihood inference and testing procedures can be developed. Although several alternatives have been proposed in the literature, see the review in \citet[Sec. 2]{fokianos_2021}, the choice of a suitable multivariate version of the Poisson probability mass function (p.m.f.) is a challenging problem. In fact, multivariate Poisson-type p.m.f.'s usually have complicated closed forms and the associated likelihood inference is theoretically and computationally cumbersome. Furthermore, in many cases, the available multivariate Poisson-type p.m.f.'s implicitly impose restricted models, which are of limited use in applications (e.g. covariances always positive, constant pairwise correlations). For these reasons, in this work the joint distribution of the vector $\left\lbrace \mathbf{Y}_t \right\rbrace $ is constructed by following the copula approach described above. The proposed data generating process ensures that all marginal distributions of $Y_{i,t}$ are univariate Poisson, conditionally on the past, as described in \eqref{lin1}, while it introduces an arbitrary dependence among them in a flexible and general way through the copula construction.
For a comprehensive discussion on the choice of a multivariate count distribution and the comparison between the alternatives proposed in the literature, the interested reader can refer to \cite{inouye_2017} and \cite{fokianos_2021}. Further results regarding the empirical properties of model \eqref{lin2} are discussed in Section~\ref{empirical_lin} of the Supplementary Material (henceforth SM).
\subsection{Linear PNAR($p$) model}
\label{Sec:Linear model of order p}
More generally, we introduce and study an extension of model \eqref{lin1} by allowing $Y_{i,t}$ to depend on the last $p$ lagged values. We call this the
linear Poisson NAR($p$) model; it is defined analogously to \eqref{lin1} but with
\begin{equation}
\lambda_{i,t}=\beta_0+\sum_{h=1}^{p}\beta_{1h}\left( n_i^{-1}\sum_{j=1}^{N}a_{ij}Y_{j,t-h}\right) +\sum_{h=1}^{p}\beta_{2h}Y_{i,t-h}\,,
\label{lin1_p}
\end{equation}
where $\beta_0, \beta_{1h}, \beta_{2h} \geq 0$ for all $h=1,\dots,p$. If $p=1$, set $\beta_{11}=\beta_1$ and $\beta_{21}=\beta_2$ to obtain \eqref{lin1}. The joint distribution of the vector $\mathbf{Y}_{t}$ is defined by means of the copula construction
discussed in Sec. \ref{Sec:Properties of order 1 model}. Without loss
of generality, coefficients can be set equal to zero when the lag orders of the two terms of \eqref{lin1_p} differ. It is easy to see that \eqref{lin1_p} can be rewritten as
\begin{equation}
\mathbf{Y}_t=\mathbf{N}_t(\boldsymbol{\lambda}_t), ~~~ \boldsymbol{\lambda}_t=\boldsymbol{\beta}_0+ \sum_{h=1}^{p} \mathbf{G}_ h\mathbf{Y}_{t-h}\,,
\label{lin2_p}
\end{equation}
where $\mathbf{G}_h=\beta_{1h}\mathbf{W}+\beta_{2h}\mathbf{I}_N$ for $h=1,\dots,p$ by recalling that $\mathbf{W}=\textrm{diag}\left\lbrace n_1^{-1},\dots, n_N^{-1}\right\rbrace \mathbf{A}$. We have the following result
which gives sharp verifiable conditions.
\begin{proposition} \rm
\label{Prop. Ergodicity of linear model}
Consider model \eqref{lin2_p}, with fixed $N$. Suppose that $\rho(\sum_{h=1}^{p} \mathbf{G}_{h})< 1$. Then,
the process $\{ \mathbf{Y}_{t},~ t \in \mathbb{Z} \}$ is stationary and ergodic with $\mbox{E}\norm{\mathbf{Y}_{t}}_1^{r} < \infty$ for any $r\geq1$.
\end{proposition}
The result follows from \citet[Thm.~2]{tru2021}.
Similar results have recently been proved by \cite{fok2020} when there exists a feedback process in the model. Following these authors, we would obtain the same results as in Proposition \ref{Prop. Ergodicity of linear model} but under stronger conditions. For example, when $p=1$, we would need to assume either $\vertiii{\mathbf{G}}_1< 1$ or $\vertiii{\mathbf{G}}_2<1$ to obtain identical conclusions. Results about the first and second order properties of model \eqref{lin1_p} are given in Appendix \ref{moment_lin};
see also \citet[Prop.~3.2]{fok2020}.
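The spectral radius condition of Proposition \ref{Prop. Ergodicity of linear model} is easy to verify numerically for a given network and coefficient set. The helper below is an illustrative sketch (the function name is ours, and every node is assumed to have at least one neighbour so that $\mathbf{W}$ is well defined).

```python
import numpy as np

def pnar_stationarity_check(A, beta1, beta2):
    """Check rho(sum_h G_h) < 1 for the linear PNAR(p) model, where
    G_h = beta1[h]*W + beta2[h]*I and W is the row-normalised adjacency
    matrix (illustrative helper; assumes every node has n_i >= 1)."""
    N = A.shape[0]
    W = A / A.sum(axis=1, keepdims=True)         # row-normalised adjacency
    G_sum = sum(b1 * W + b2 * np.eye(N) for b1, b2 in zip(beta1, beta2))
    rho = np.abs(np.linalg.eigvals(G_sum)).max() # spectral radius
    return rho, bool(rho < 1.0)
```

Since $\mathbf{W}$ is row-stochastic ($\vertiii{\mathbf{W}}_\infty=1$), one always has $\rho(\sum_{h}\mathbf{G}_h)\leq\sum_{h}(\beta_{1h}+\beta_{2h})$, so the stronger coefficient condition used below for increasing $N$ implies the condition checked here.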
Prop.~\ref{Prop. Ergodicity of linear model} establishes the existence of the moments of the count process for fixed $N$, but this property is not guaranteed when $N\to\infty$. The following results state that, even when $N$ tends to infinity, all moments exist and are uniformly bounded by positive constants that do not depend on the sample sizes $N$ and $T$.
\begin{proposition} \label{finite_moment} \rm
Consider model \eqref{lin2_p} and $\sum_{h=1}^{p}(\beta_{1h} + \beta_{2h})<1$.
Then, $\max_{i\geq 1}\mathrm{E}\norm{Y_{i,t}}^r\leq C_r<\infty$, for any $r\in\mathbb{N}$.
\end{proposition}
In order to investigate the stability properties of the process $\left\lbrace \mathbf{Y}_t \in\mathbb{N}^N \right\rbrace$ when the network size diverges ($N\to\infty$), we employ the working definition of stationarity for increasing-dimensional processes described in \citet[Def.~1]{zhu2017}. The following result holds.
\begin{theorem} \rm
\label{Thm. Ergodicity of linear model N}
Consider model \eqref{lin2_p}. Assume $\sum_{h=1}^{p}(\beta_{1h} + \beta_{2h})<1$ and $N\to\infty$.
Then, there exists a unique strictly stationary solution $\{ \mathbf{Y}_{t}\in\mathbb{N}^N,~ t \in \mathbb{Z}\}$ to the linear PNAR model, with $\max_{i\geq 1}\mathrm{E}\norm{Y_{i,t}}^r\leq C_r <\infty$, for all $r\geq 1$.
\end{theorem}
Thm.~\ref{Thm. Ergodicity of linear model N} is a counterpart of \citet[Thm.1]{zhu2017} for the continuous-valued NAR model, but it also ensures that all moments of the integer-valued process are uniformly bounded. Although stronger than the requirements of Prop.~\ref{Prop. Ergodicity of linear model}, the condition $\sum_{h=1}^{p}(\beta_{1h}+\beta_{2h})<1$ allows us to prove stationarity for increasing network size $N$ together with the existence of moments of the process; moreover, it is more natural than $\rho(\sum_{h=1}^{p} \mathbf{G}_{h})< 1$ and complements the existing work for continuous-valued models; \cite{zhu2017}. In addition, the former condition will also be required for the inferential results of Section~\ref{SEC: inference} below.
It is worth pointing out that the copula construction is not used in the proof of Thm.~\ref{Thm. Ergodicity of linear model N} (see also Thm.~\ref{Thm. Ergodicity of log-linear model N} for the log-linear model). It is used, however, in Section \ref{simulations}, where we report a simulation study. It is interesting that, even under this setup, the stability conditions are independent of the correlation in the innovations, similarly to multivariate ARMA models.
\subsection{Log-linear PNAR models}
Recall model \eqref{lin1}. The network effect $\beta_1$ of model \eqref{lin1} is typically expected to be positive, see \cite{chen2013}, and the impact of $Y_{i,t-1}$ is positive as well. Hence, positivity constraints on the parameters are theoretically justifiable as well as practically sound. However, in order to allow a natural link to GLM theory, \cite{McCullaghandNelder(1989)}, and to permit the inclusion of covariates as well as coefficients taking values on the entire real line, we propose the following log-linear model, see \cite{fok2011}:
\begin{equation}
Y_{i,t}|\mathcal{F}_{t-1}\sim Poisson(\exp(\nu_{i,t})), ~~~ \nu_{i,t}=\beta_0+\beta_1n_i^{-1}\sum_{j=1}^{N}a_{ij}\log(1+Y_{j,t-1})+\beta_2\log(1+Y_{i,t-1})\,,
\label{log_lin1}
\end{equation}
where $\nu_{i,t}=\log(\lambda_{i,t})$ for every $i=1,\dots,N$. No parameter constraints are required for model \eqref{log_lin1} since $\nu_{i,t}\in\mathbb{R}$. The interpretation of all parameters remains the same as in the case of \eqref{lin1}. Again, the model can be rewritten in vector form, as in the case of model \eqref{lin2}:
\begin{equation}
\mathbf{Y}_t=\mathbf{N}_t(\boldsymbol{\nu}_t), ~~~ \boldsymbol{\nu}_t=\boldsymbol{\beta}_0+\mathbf{G}\log(\mathbf{1}_N+\mathbf{Y}_{t-1})\,,
\label{log_lin2}
\end{equation}
with $\boldsymbol{\nu}_t \equiv\log(\boldsymbol{\lambda}_t)$, componentwise. Furthermore, it is useful to rewrite the model as follows:
\begin{equation}
\log(\mathbf{1}_N+\mathbf{Y}_{t})=\boldsymbol{\beta}_0+\mathbf{G}\log(\mathbf{1}_N+\mathbf{Y}_{t-1})+\boldsymbol{\psi}_{t}\,,\nonumber
\end{equation}
where $\boldsymbol{\psi}_{t}=\log(\mathbf{1}_N+\mathbf{Y}_{t})-\boldsymbol{\nu}_{t}$. By Lemma A.1 in \cite{fok2011}, $\mathrm{E}(\boldsymbol{\psi}_{t}|\mathcal{F}_{t-1})\to0$ as $\boldsymbol{\nu}_{t}\to\infty$, so $\boldsymbol{\psi}_{t}$ is approximately a martingale difference sequence (MDS). This means that the formulas for the first two moments established for the linear model in Appendix~\ref{moment_lin} hold, approximately, for $\log(\mathbf{1}_N+\mathbf{Y}_{t})$. We discuss empirical properties of the count process $\mathbf{Y}_t$ of model \eqref{log_lin1} in Section~\ref{empirical_log} of the SM. Moreover, $\boldsymbol{\xi}_t=\mathbf{Y}_t-\exp(\boldsymbol{\nu}_t)$ is an MDS.
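The intensity recursion $\boldsymbol{\nu}_t=\boldsymbol{\beta}_0+\mathbf{G}\log(\mathbf{1}_N+\mathbf{Y}_{t-1})$ can be sketched as follows. For brevity, the copula step is here replaced by conditionally independent Poisson draws; this shortcut, as well as the function name, are assumptions of the sketch, not part of the model definition.

```python
import numpy as np

def simulate_loglinear_pnar1(A, beta0, beta1, beta2, T, rng=None):
    """Sketch of the log-linear PNAR(1): nu_t = beta0*1 + G log(1 + Y_{t-1}),
    Y_t | F_{t-1} ~ Poisson(exp(nu_t)).  Coefficients may be negative here,
    since no positivity constraint is needed on nu_t."""
    rng = np.random.default_rng() if rng is None else rng
    N = A.shape[0]
    W = A / A.sum(axis=1, keepdims=True)     # row-normalised adjacency
    G = beta1 * W + beta2 * np.eye(N)
    nu = np.full(N, beta0)                   # arbitrary starting log-intensity
    Y = np.empty((T, N), dtype=int)
    for t in range(T):
        Y[t] = rng.poisson(np.exp(nu))       # copula step omitted for brevity
        nu = beta0 + G @ np.log1p(Y[t])      # log1p(y) = log(1 + y)
    return Y
```

Note that, unlike the linear model, a negative network coefficient (e.g.\ $\beta_1<0$) is perfectly admissible here.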
We define the log-linear PNAR($p$) by
\begin{equation}
\nu_{i,t}=\beta_0+\sum_{h=1}^{p}\beta_{1h}\left(n_i^{-1}\sum_{j=1}^{N}a_{ij}\log(1+Y_{j,t-h})\right) +\sum_{h=1}^{p}\beta_{2h}\log(1+Y_{i,t-h})\,,
\label{log_lin1_p}
\end{equation}
using the same notation as before. The interpretation of this model is the same as that of the linear model. Furthermore,
\begin{equation}
\mathbf{Y}_t=\mathbf{N}_t(\boldsymbol{\nu}_t), ~~~ \boldsymbol{\nu}_t=\boldsymbol{\beta}_0+\sum_{h=1}^{p}\mathbf{G}_h\log(\mathbf{1}_N+\mathbf{Y}_{t-h})\,,
\label{log_lin2_p}
\end{equation}
where $\mathbf{G}_h=\beta_{1h}\mathbf{W}+\beta_{2h}\mathbf{I}_N$ for $h=1,\dots,p$. The following results complement Prop.~\ref{Prop. Ergodicity of linear model}-\ref{finite_moment} and Thm.~\ref{Thm. Ergodicity of linear model N} for the case of the log-linear model.
\begin{proposition} \rm
\label{Prop. Ergodicity of log-linear model}
Consider model \eqref{log_lin2_p}, with fixed $N$. Suppose that $\rho(\sum_{h=1}^{p} \norm{\mathbf{G}_{h}}_v)< 1$. Then
the process $\{ \mathbf{Y}_{t},~ t \in \mathbb{Z} \}$ is stationary and ergodic with $\mbox{E}\norm{\mathbf{Y}_{t}}_1 < \infty$. Moreover, if $\vertiii{\norm{\mathbf{G}_{h}}_v}_\infty< 1$, there exists some $\delta>0$ such that $\mbox{E}[\exp(\delta\norm{\mathbf{Y}_{t}}_1)] < \infty$ and $\mbox{E}[\exp(\delta\norm{\boldsymbol{\nu}_{t}}_1)] < \infty$.
\end{proposition}
The result follows from \citet[Thm. 5]{tru2019}. Analogously to the linear model, we need to show the uniform boundedness of the moments of the process and the stationarity of the model with increasing dimension. Since the noise $\boldsymbol{\psi}_t$ is approximately an MDS, the following result is proved by employing approximation arguments.
\begin{proposition} \label{finite_moment_log} \rm
Consider model \eqref{log_lin2_p} and $\norm{\sum_{h=1}^{p}(\beta_{1h}+\beta_{2h})}<1$.
Then, $\max_{i\geq 1}\mathrm{E}\norm{Y_{i,t}}^r \leq C_r<\infty$, and $\max_{i\geq 1}\mathrm{E}[\exp(r\norm{\nu_{i,t}})] \leq D_r<\infty$, for any $r\in\mathbb{N}$.
\end{proposition}
Analogously to Thm.~\ref{Thm. Ergodicity of linear model N}, a strict stationarity result for networks of increasing size is given for the log-linear PNAR model \eqref{log_lin2_p}.
\begin{theorem} \rm
\label{Thm. Ergodicity of log-linear model N}
Consider model \eqref{log_lin2_p}. Assume $\sum_{h=1}^{p}(\norm{\beta_{1h}}+\norm{\beta_{2h}})<1$ and $N\to\infty$.
Then, there exists a unique strictly stationary solution $\{ \mathbf{Y}_{t}\in\mathbb{N}^N,~ t \in \mathbb{Z} \}$ to the log-linear PNAR model, with $\max_{i\geq 1}\mathrm{E}\norm{Y_{i,t}}^r \leq C_r<\infty$, and $\max_{i\geq 1}\mathrm{E}[\exp(r\norm{\nu_{i,t}})] \leq D_r<\infty$, for all $r\geq 1$.
\end{theorem}
\begin{rem} \rm \label{rem_covariates}
Although, for simplicity, model \eqref{lin2_p} has been specified without covariates, time-invariant positive covariates $\mathbf{Z}\in\mathbb{R}^d_+$ can be included, under suitable moment existence assumptions, without affecting the results of the present contribution.
These may be useful, for example, to account for available node-specific characteristics. Moreover, the log-linear version \eqref{log_lin2_p} permits the inclusion of covariates lying in the whole real line $\mathbb{R}^d$.
\end{rem}
\begin{rem} \rm \label{rem_gnar}
A count GNAR($p$) extension, similar to the model introduced by \citet[eq.~1]{Knightetal(2020)} for continuous-valued random variables under the standard asymptotic regime ($T\to\infty$), is included in the framework we consider. Such a model adds an average neighbour impact for several stages of connections between the nodes of a given network. That is, $\mathcal{N}^{(r)}(i)=\mathcal{N}\left\lbrace \mathcal{N}^{(r-1)}(i) \right\rbrace / \left[ \left\lbrace \cup_{q=1}^{r-1}\mathcal{N}^{(q)}(i)\right\rbrace\cup\left\lbrace i\right\rbrace \right] $, for $r=2,3,\dots$ and $\mathcal{N}^{(1)}(i)=\mathcal{N}(\left\lbrace i\right\rbrace )$, with $\mathcal{N}(\left\lbrace i\right\rbrace )=\left\lbrace j\in\left\lbrace 1,\dots,N\right\rbrace : i\to j\right\rbrace $ the set of neighbours of node $i$. (So, for example, $\mathcal{N}^{(2)}(i)$ describes the neighbours of the neighbours of node $i$, and so on.)
In this case, the row-normalized adjacency matrix has elements $\left( \mathbf{W}^{(r)}\right)_{i,j}=w_{i,j}\times I(j\in\mathcal{N}^{(r)}(i))$, where $w_{i,j}=1/\mathrm{card}(\mathcal{N}^{(r)}(i))$, $\mathrm{card}(\cdot)$ denotes the cardinality of a set and $I(\cdot)$ is the indicator function. Moreover, $M$ different types of edges are allowed in the network.
The Poisson GNAR($p$) model has the following formulation:
\begin{equation}
\lambda_{i,t}=\beta_0+\sum_{h=1}^{p}\left(\sum_{m=1}^{M}\sum_{r=1}^{s_h}\beta_{1,h,r,m} \sum_{j\in \mathcal{N}^{(r)}_t(i)}w_{i,j,m}Y_{j,t-h} +\beta_{2,h}Y_{i,t-h}\right)\,,
\label{gnar}
\end{equation}
where $s_h$ is the maximum stage of neighbour dependence for the time lag $h$.
Model \eqref{gnar} can be included in the formulation \eqref{lin2_p} by setting $\mathbf{G}_{h}=\sum_{m=1}^{M}\sum_{r=1}^{s_h}\beta_{1,h,r,m}\mathbf{W}^{(r,m)}+\beta_{2,h}\mathbf{I}_N$. Since it holds that $\sum_{j\in \mathcal{N}^{(r)}(i)}\sum_{m=1}^{M}w_{i,j,m}=1$, we have $\vertiii{\sum_{m=1}^{M}\mathbf{W}^{(r,m)}}_\infty=1$.
Hence, the results of the present contribution, i.e.\ the existence of the moments of the model, the related stability properties and the associated inferential results under the standard asymptotic regime, apply to \eqref{gnar}. Analogous arguments hold true for the log-linear model \eqref{log_lin1_p}.
\end{rem}
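The stage-$r$ neighbourhood sets $\mathcal{N}^{(r)}(i)$ of Remark~\ref{rem_gnar} can be computed by a breadth-first search. The sketch below is illustrative: the function name and the dict-of-sets representation of the network are our assumptions.

```python
def stage_neighbours(adj, i, r_max):
    """Compute N^{(r)}(i) for r = 1,...,r_max: the nodes first reached from
    node i in exactly r steps, so that earlier stages and i itself are
    excluded.  `adj` maps each node to the set of nodes it points to."""
    seen = {i}
    frontier = {i}
    stages = []
    for _ in range(r_max):
        nxt = set()
        for u in frontier:
            nxt |= set(adj.get(u, ()))
        nxt -= seen            # drop i and all earlier-stage neighbours
        stages.append(nxt)
        seen |= nxt
        frontier = nxt
    return stages
```

With these sets in hand, the matrices $\mathbf{W}^{(r)}$ of the GNAR construction follow directly from the indicator $I(j\in\mathcal{N}^{(r)}(i))$ and the cardinality weights.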
\section{Quasi-likelihood inference for increasing network size} \label{SEC: inference}
The aim of this section is to establish inference for the unknown vector of parameters of models \eqref{lin2_p},\eqref{log_lin2_p}, denoted by $\boldsymbol{\theta}=(\beta_0, \beta_{11},\dots, \beta_{1p}, \beta_{21},\dots, \beta_{2p})^T\in\mathbb{R}^m$, where $m=2p+1$. We approach the estimation problem through the theory of estimating functions; see \cite{basa1980}, \cite{liang1986} and \cite{hey1997}, among others. In particular, the main goal of this section is to prove consistency and asymptotic normality of the quasi maximum likelihood estimator (QMLE). Since, in this framework, $\mathbf{W}$ is a nonrandom sequence of matrices indexed by $N$, the asymptotic properties of the estimator involve two diverging indexes, $N\to\infty$ and $T\to\infty$, allowing us to establish a double-dimensional type of convergence, where both the temporal size and the network dimension grow together.
Define the conditional quasi-log-likelihood function for the vector of unknown parameters $\boldsymbol{\theta}$:
\begin{equation}
l_{NT}(\boldsymbol{\theta})=\sum_{t=1}^{T}\sum_{i=1}^{N} \left( Y_{i,t}\log\lambda_{i,t}(\boldsymbol{\theta})-\lambda_{i,t}(\boldsymbol{\theta})\right) \,,
\label{log-lik}
\end{equation}
which is the log-likelihood one would obtain if the time series modelled in \eqref{lin2_p}, or \eqref{log_lin2_p}, were contemporaneously independent. This simplifies computations but still guarantees consistency and asymptotic normality of the resulting estimator. Although the joint copula structure $C(\dots, \rho)$ and the set of parameters $\rho$ are not included in the maximization of the ``working" log-likelihood \eqref{log-lik}, this
does not mean that the QMLE is computed under the assumption of independence; this is easily seen from the shape of the information matrix \eqref{B} below, which depends on the true conditional covariance matrix of the process $\mathbf{Y}_t$.
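For concreteness, the ``working" log-likelihood \eqref{log-lik} for the linear PNAR(1) can be evaluated as below; this is an illustrative sketch (the function name and the array layout are our conventions), in which the first observation is used only as the initial condition.

```python
import numpy as np

def quasi_loglik_linear(theta, Y, W):
    """Quasi-log-likelihood l_NT(theta) of the linear PNAR(1) model.
    theta = (beta0, beta1, beta2); Y is a T x N array of counts and W the
    row-normalised adjacency matrix.  The sum runs over t = 2,...,T."""
    beta0, beta1, beta2 = theta
    N = Y.shape[1]
    G = beta1 * W + beta2 * np.eye(N)
    # lambda_t = beta0*1 + G Y_{t-1}, stacked row-wise: (T-1) x N
    lam = beta0 + Y[:-1] @ G.T
    return float(np.sum(Y[1:] * np.log(lam) - lam))
```

Maximizing this function over the admissible parameter region yields the QMLE $\hat{\boldsymbol{\theta}}$ discussed below.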
\cite{fok2017}, among others, established inference theory for the QMLE of observation-driven models under the standard asymptotic regime ($T\to\infty$). Assuming that there exists a true parameter vector, say $\boldsymbol{\theta}_0$, such that the mean model specification \eqref{lin2_p} (or, equivalently, \eqref{log_lin2_p}) is correct, regardless of the true data generating process, a consistent and asymptotically normal estimator is obtained by maximizing the quasi-log-likelihood \eqref{log-lik}. Denote by $\hat{\boldsymbol{\theta}}\coloneqq\argmax_{\boldsymbol{\theta}} l_{NT}(\boldsymbol{\theta})$ the QMLE of $\boldsymbol{\theta}$. The score function for the linear model is given by
\begin{eqnarray}
&\textbf{S}_{NT}(\boldsymbol{\theta})&=\sum_{t=1}^{T}\sum_{i=1}^{N}\left( \frac{Y_{i,t}}{\lambda_{i,t}(\boldsymbol{\theta})}-1\right) \frac{\partial\lambda_{i,t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\nonumber\\
&&=\sum_{t=1}^{T}\frac{\partial\boldsymbol{\lambda}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{D}_t^{-1}(\boldsymbol{\theta})\Big(\mathbf{Y}_t-\boldsymbol{\lambda}_{t}(\boldsymbol{\theta})\Big)=\sum_{t=1}^{T}\textbf{s}_{Nt}(\boldsymbol{\theta})\,,
\label{score}
\end{eqnarray}
where
\begin{equation}
\frac{\partial\boldsymbol{\lambda}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}=(\mathbf{1}_N, \mathbf{W}\mathbf{Y}_{t-1},\dots, \mathbf{W}\mathbf{Y}_{t-p}, \mathbf{Y}_{t-1}, \dots, \mathbf{Y}_{t-p})
\nonumber
\end{equation}
is an $N\times m$ matrix and $\mathbf{D}_t(\boldsymbol{\theta})$ is the $N\times N$ diagonal matrix with diagonal elements equal to $\lambda_{i,t}(\boldsymbol{\theta})$ for $i=1,\dots,N$. The Hessian matrix is given by
\begin{equation}
\mathbf{H}_{NT}(\boldsymbol{\theta})=\sum_{t=1}^{T}\frac{\partial\boldsymbol{\lambda}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{C}_t(\boldsymbol{\theta})\frac{\partial\boldsymbol{\lambda}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}=\sum_{t=1}^{T}{\textbf{H}}_{Nt}(\boldsymbol{\theta})\,,
\label{H_T}
\end{equation}
with $\mathbf{C}_t(\boldsymbol{\theta})=\textrm{diag}\left\lbrace Y_{1,t}/\lambda^2_{1,t}(\boldsymbol{\theta}),\dots, Y_{N,t}/\lambda^2_{N,t}(\boldsymbol{\theta})\right\rbrace $ and the conditional information matrix is
\begin{equation}
\mathbf{B}_{NT}(\boldsymbol{\theta})=\sum_{t=1}^{T}\frac{\partial\boldsymbol{\lambda}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{D}^{-1}_t(\boldsymbol{\theta})\mathbf{\Sigma}_t(\boldsymbol{\theta})\mathbf{D}^{-1}_t(\boldsymbol{\theta})\frac{\partial\boldsymbol{\lambda}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}=\sum_{t=1}^{T}{\textbf{B}}_{Nt}(\boldsymbol{\theta})\,,
\label{B_T}
\end{equation}
where $\boldsymbol{\Sigma}_t(\boldsymbol{\theta})=\mathrm{E}(\boldsymbol{\xi}_{t}\boldsymbol{\xi}_{t}^T|\mathcal{F}_{t-1})$ denotes the \emph{true} conditional covariance matrix of the vector $\mathbf{Y}_t$, recalling that $\boldsymbol{\xi}_{t} \equiv \mathbf{Y}_t-\boldsymbol{\lambda}_t$. Expectation is taken with respect to the stationary distribution of $\left\lbrace \mathbf{Y}_t\right\rbrace $. Moreover, the theoretical counterparts of the Hessian and information matrices are, respectively, the following:
\begin{equation}
\mathbf{H}_N(\boldsymbol{\theta})=\mathrm{E}\Bigg[\frac{\partial\boldsymbol{\lambda}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{D}_t^{-1}(\boldsymbol{\theta})\frac{\partial\boldsymbol{\lambda}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\Bigg]\,,
\label{H}
\end{equation}
\begin{equation}
\mathbf{B}_N(\boldsymbol{\theta})=\mathrm{E}\Bigg[\frac{\partial\boldsymbol{\lambda}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{D}_t^{-1}(\boldsymbol{\theta})\mathbf{\Sigma}_t(\boldsymbol{\theta}) \mathbf{D}_t^{-1}(\boldsymbol{\theta})\frac{\partial\boldsymbol{\lambda}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\Bigg]\,.
\label{B}
\end{equation}
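The score \eqref{score} and Hessian \eqref{H_T} of the linear PNAR(1) admit a direct implementation, and the QMLE can then be approximated by Newton-Raphson updates. The sketch below is illustrative (function names are ours; no step-size control or positivity projection is included, so convergence from a reasonable starting value is an assumption).

```python
import numpy as np

def score_hessian_linear(theta, Y, W):
    """S_NT(theta) and H_NT(theta) for the linear PNAR(1):
    the columns of d(lambda_t)/d(theta^T) are (1_N, W Y_{t-1}, Y_{t-1})
    and C_t = diag(Y_{i,t} / lambda_{i,t}^2)."""
    beta0, beta1, beta2 = theta
    T, N = Y.shape
    S = np.zeros(3)
    H = np.zeros((3, 3))
    for t in range(1, T):
        y_prev = Y[t - 1]
        D = np.column_stack([np.ones(N), W @ y_prev, y_prev])   # N x 3
        lam = beta0 + beta1 * (W @ y_prev) + beta2 * y_prev
        S += D.T @ ((Y[t] - lam) / lam)                 # score contribution
        H += D.T @ ((Y[t] / lam**2)[:, None] * D)       # Hessian contribution
    return S, H

def qmle_newton(theta0, Y, W, steps=10):
    """A few Newton-Raphson updates theta <- theta + H^{-1} S towards the
    QMLE (illustrative sketch only)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        S, H = score_hessian_linear(theta, Y, W)
        theta = theta + np.linalg.solve(H, S)
    return theta
```

Since $\lambda_t$ is linear in $\boldsymbol{\theta}$, $l_{NT}$ is concave on the region where all $\lambda_{i,t}>0$ and $\mathbf{H}_{NT}$ is exactly minus its Hessian, so the iteration typically converges in a handful of steps.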
Similarly, for the log-linear PNAR model, the score function is given by:
\begin{eqnarray}
&\textbf{S}_{NT}(\boldsymbol{\theta})&=\sum_{t=1}^{T}\sum_{i=1}^{N}\Big( Y_{i,t}-\exp(\nu_{i,t}(\boldsymbol{\theta}))\Big)\frac{\partial\nu_{i,t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}
=\sum_{t=1}^{T}\frac{\partial\boldsymbol{\nu}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\Big(\mathbf{Y}_t-\exp(\boldsymbol{\nu}_{t}(\boldsymbol{\theta}))\Big),
\label{score_log}
\end{eqnarray}
where
\begin{equation}
\frac{\partial\boldsymbol{\nu}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}=(\mathbf{1}_N, \mathbf{W}\log(\mathbf{1}_N+\mathbf{Y}_{t-1}),\dots,\mathbf{W}\log(\mathbf{1}_N+\mathbf{Y}_{t-p}), \log(\mathbf{1}_N+\mathbf{Y}_{t-1}),\dots, \log(\mathbf{1}_N+\mathbf{Y}_{t-p}))
\nonumber
\end{equation}
is an $N\times m$ matrix, and
\begin{equation} \label{H_T_log}
\mathbf{H}_{NT}(\boldsymbol{\theta})=\sum_{t=1}^{T}\frac{\partial\boldsymbol{\nu}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{D}_t(\boldsymbol{\theta})\frac{\partial\boldsymbol{\nu}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\,,
\end{equation}
\begin{equation}
\mathbf{B}_{NT}(\boldsymbol{\theta})=\sum_{t=1}^{T}\frac{\partial\boldsymbol{\nu}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{\Sigma}_t(\boldsymbol{\theta})\frac{\partial\boldsymbol{\nu}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\,,\nonumber
\end{equation}
where $\mathbf{D}_t(\boldsymbol{\theta})$ is the $N\times N$ diagonal matrix with diagonal elements equal to $\exp(\nu_{i,t}(\boldsymbol{\theta}))$ for $i=1,\dots,N$ and $\boldsymbol{\Sigma}_t(\boldsymbol{\theta})=\mathrm{E}(\boldsymbol{\xi}_{t}\boldsymbol{\xi}_{t}^T|\mathcal{F}_{t-1})$ with $\boldsymbol{\xi}_{t}=\mathbf{Y}_t-\exp(\boldsymbol{\nu}_t(\boldsymbol{\theta}))$. Moreover,
\begin{equation} \label{H_log}
\mathbf{H}_N(\boldsymbol{\theta})=\mathrm{E}\Bigg[\frac{\partial\boldsymbol{\nu}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\mathbf{D}_t(\boldsymbol{\theta})\frac{\partial\boldsymbol{\nu}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\Bigg]\,,
\end{equation}
\begin{equation}
\mathbf{B}_N(\boldsymbol{\theta})=\mathrm{E}\Bigg[\frac{\partial\boldsymbol{\nu}^T_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}}\boldsymbol{\Sigma}_t(\boldsymbol{\theta})\frac{\partial\boldsymbol{\nu}_{t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}^T}\Bigg]
\label{B_log}
\end{equation}
are respectively (minus) the Hessian matrix and the information matrix.
Define $l_{NT}(\boldsymbol{\theta})=\sum_{t=1}^{T}\sum_{i=1}^{N}l_{i,t}(\boldsymbol{\theta})$, where $l_{i,t}(\boldsymbol{\theta})=Y_{i,t}\log\lambda_{i,t}(\boldsymbol{\theta})-\lambda_{i,t}(\boldsymbol{\theta})$. Let $M$ be a finite constant.
Consider model \eqref{lin2}. Let $\mu=\beta_0/(1-\beta_1-\beta_2)$ and $\boldsymbol{\mu}=\mu\mathbf{1}_N=\mathrm{E}(\mathbf{Y}_t)$, $\boldsymbol{\Sigma_\xi}=\mathrm{E}\norm{\boldsymbol{\xi}_t\boldsymbol{\xi}_t^T}_v$ and $\mathcal{F}^{N}_{t}=\sigma\left(Y_{i,s}:\,1\leq i\leq N, s\leq t\right)$. We drop the dependence on $\boldsymbol{\theta}$ when a quantity is evaluated at $\boldsymbol{\theta}_0$. Moreover, denote by $\lambda_{\max}(\mathbf{X})$ the largest absolute eigenvalue of an arbitrary symmetric matrix $\mathbf{X}$, and define $\Pi_{222}=N^{-1}\sum_{i=1}^{N}\mathrm{E}[(\mathbf{w}_i^T(\mathbf{Y}_{t-1}-\boldsymbol{\mu}))^3/\lambda_{i,t}]$, $\Pi_{223}=N^{-1}\sum_{i=1}^{N}\mathrm{E}[(\mathbf{w}_i^T(\mathbf{Y}_{t-1}-\boldsymbol{\mu}))^2Y_{i,t-1}/\lambda_{i,t}]$, $\Pi_{233}=N^{-1}\sum_{i=1}^{N}\mathrm{E}[\mathbf{w}_i^T(\mathbf{Y}_{t-1}-\boldsymbol{\mu})Y_{i,t-1}^2/\lambda_{i,t}]$, $\Pi_{333}=N^{-1}\sum_{i=1}^{N}\mathrm{E}[Y_{i,t-1}^3/\lambda_{i,t}]$. Consider the set $\Omega_d=\left\lbrace (2,2,2), (2,2,3), (2,3,3), (3,3,3)\right\rbrace $ and $(j^*,l^*,k^*)=\argmax_{j,l,k}\norm{{N}^{-1}\sum_{i=1}^{N}\partial^3l_{i,t}(\boldsymbol{\theta})/\partial\boldsymbol{\theta}_j\partial\boldsymbol{\theta}_l\partial\boldsymbol{\theta}_k}$. To establish consistency of the QMLE, the following conditions are required.
\begin{enumerate}[label=B\arabic*]
\item The process $\left\lbrace \boldsymbol{\mathbf{Y}}_t,\,\mathcal{F}^{N}_{t}:\,N\in\mathbb{N}, t\in\mathbb{Z}\right\rbrace $
is $\alpha$-mixing.
\item Let $\mathbf{W}$ be a sequence of nonstochastic matrices indexed by $N$.
\begin{enumerate}[label*=.\arabic*]
\item Consider $\mathbf{W}$ as a transition probability matrix of a Markov chain, whose state space is defined as the set of all the nodes in the network (i.e., $\left\lbrace 1,\dots, N\right\rbrace$). The Markov chain is assumed to be irreducible and aperiodic. Further, define $\boldsymbol{\pi} = (\pi_1,\dots, \pi_N)^T\in\mathbb{R}^N$ as the stationary distribution of the Markov chain, such that $\boldsymbol{\pi}\geq 0$, $\sum_{i=1}^{N}\pi_i=1$ and $\boldsymbol{\pi}=\mathbf{W}^T\boldsymbol{\pi}$. Furthermore, $\lambda_{\max}(\boldsymbol{\Sigma_\xi})\sum_{i=1}^{N}\pi_i^2\to0$ as $N\to\infty$.
\item Define $\mathbf{W}^*=\mathbf{W}+\mathbf{W}^T$ as a symmetric matrix. Assume $\lambda_{\max}(\mathbf{W}^*)=\mathcal{O}(\log N)$ and $\lambda_{\max}(\boldsymbol{\Sigma_\xi})=\mathcal{O}((\log N)^\delta)$, for some $\delta\geq 1$.
\end{enumerate}
\item Set $\boldsymbol{\Lambda}=\mathrm{E}(\mathbf{D}^{-1}_t)$, $\bar{\boldsymbol{\Gamma}}(0)=\mathrm{E}[\mathbf{D}^{-1/2}_t(\mathbf{Y}_{t-1}-\boldsymbol{\mu})(\mathbf{Y}_{t-1}-\boldsymbol{\mu})^T\mathbf{D}^{-1/2}_t]$ and $\boldsymbol{\Delta}(0)=\mathrm{E}[\mathbf{D}^{-1/2}_t\mathbf{W}(\mathbf{Y}_{t-1}-\boldsymbol{\mu})(\mathbf{Y}_{t-1}-\boldsymbol{\mu})^T\mathbf{W}^T\mathbf{D}^{-1/2}_t]$. Assume the following limits exist: $d_1=\lim_{N\to\infty}N^{-1}\textrm{tr}\left( \boldsymbol{\Lambda}\right)$, $d_2=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \bar{\boldsymbol{\Gamma}}(0)\right] $, $d_3=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \mathbf{W}\bar{\boldsymbol{\Gamma}}(0)\right] $, $d_4=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \boldsymbol{\Delta}(0)\right] $ and, if $(j^*,l^*,k^*)\in\Omega_d$, $d_*=\lim_{N\to\infty}\Pi_{j^*,l^*,k^*}$.
\end{enumerate}
Assumption B1 is crucial when processes with dependent errors are studied (see \cite{douk1994}), as the $\alpha$-mixing condition is a measure of \emph{asymptotic independence} of the process. Condition B1 implies that the errors $\boldsymbol{\xi}_t=\mathbf{Y}_t-\boldsymbol{\lambda}_t$ are $\alpha$-mixing with respect to the same sigma-algebra $\mathcal{F}^N_t$, and it is weaker than the IID assumption made on the errors in the previous literature \citep{zhu2017, zhu2019}. In particular, the process $\boldsymbol{\xi}_t$ is an $\alpha$-mixing array, namely,
\begin{equation}
\alpha(J)=\sup_{t\in\mathbb{Z}, N\geq 1}\sup_{A\in\mathcal{F}^{N}_{-\infty, t},B\in\mathcal{F}^{N}_{t+J,\infty}}\left|\mathrm{P}(A\cap B)-\mathrm{P}(A)\mathrm{P}(B) \right|\xrightarrow{J\to\infty}0
\nonumber
\end{equation}
where $\mathcal{F}^{N}_{t}\equiv\mathcal{F}^{N}_{-\infty, t}=\sigma\left(\xi_{i,s}: 1\leq i\leq N, s\leq t\right)$, $\mathcal{F}^{N}_{t+J,\infty}=\sigma\left(\xi_{i,s}: 1\leq i\leq N, s\geq t+J\right)$ and it is clear that the dependence between two events $A$ and $B$ tends to vanish as they are separated in time, uniformly in $N$.
Moreover, note that no rate of decay along time is specified for the dependence measured by $\alpha(J)$; as a consequence, the $\alpha$-mixing process can depend on several lags of its past before becoming ``asymptotically" independent. When $N$ is fixed, a combination of Theorems~1-2 in \cite{doukhan_2012} and Remark~2.1 in \cite{Doukhanetal(2011)} shows that the process $\left\lbrace \boldsymbol{\xi}_t: t \in \mathbb{Z}\right\rbrace$ is $\beta$-mixing (hence $\alpha$-mixing), with exponentially decaying coefficients, provided that $\vertiii{\mathbf{G}}_1<1$. A similar conclusion holds, when $p=1$, by \citet[Prop.~3.1-3.4]{fok2020}.
Assumption B2 imposes some uniformity conditions on the network structure; it is an equivalent set of conditions to \citet[C2]{zhu2017} and \citet[C2.1-C2.2]{zhu2019}. Moreover, B2.2 requires that the network structure admits a certain uniformity property ($\lambda_{\max}(\mathbf{W}^*)$ diverges slowly). \citet[Supp. Mat., Sec.~7.1-7.3]{zhu2017} found empirically that this is the case for several network models.
In our case, regularity assumptions on the dependence structure of the errors, as the network grows, are required: the diverging rate of $\lambda_{\max}(\boldsymbol{\Sigma_\xi})$ must be slower than $\mathcal{O}(N)$, in B2.2, and its product with the squared sum of the stationary distribution of the chain, $\boldsymbol{\pi}$, must tend to 0, in B2.1. We give an empirical verification of such conditions in Section~\ref{SUPP network} of the SM. In the continuous-valued case introduced in \cite{zhu2017} such assumptions are, in fact, not needed, as the errors are IID with common variance $\sigma^2$. Moreover, in that case the absolute value is not required in $\boldsymbol{\Sigma_\xi}=\mathrm{E}(\boldsymbol{\xi}_t\boldsymbol{\xi}_t^T)=\sigma^2\mathbf{I}_N$; this can be understood since $\mathbf{D}_t=\mathbf{I}_N$, so the second inequality in \eqref{Dt abs} is no longer needed; consequently, $\lambda_{\max}(\mathrm{E}(\boldsymbol{\xi}_t\boldsymbol{\xi}_t^T))=\sigma^2$, which reduces condition B2 to C2 in \cite{zhu2017}, obtained as a special case.
The conditions outlined in B3 are law-of-large-numbers-type assumptions, which are quite standard in the existing literature, since little is known about the behaviour of the process as $N\to\infty$. They are required to guarantee that the Hessian matrix \eqref{H_T} converges to a well-defined limiting matrix (see \eqref{H div N} below). SM~\ref{SUPP network} includes numerical examples showing the validity of these limits. If OLS estimation with IID errors were performed, conditions B3 would correspond exactly to those of \citet[C3]{zhu2017}. In fact, recall that, in such a framework, $\mathbf{D}_t=\mathbf{I}_N$ in \eqref{H}, so that $N^{-1}\textrm{tr}\left( \boldsymbol{\Lambda}\right)=1$, $d_2=\kappa_1$, $d_3=\kappa_2$ and $d_4=\kappa_6$, where $\kappa_1,\kappa_2, \kappa_6$ are defined in C3 of \cite{zhu2017}. A condition on the third derivative is assumed to guarantee valid large-sample quasi-likelihood convergence.
\begin{lemma}\rm
For the linear model \eqref{lin2}, suppose $\beta_1+\beta_2 < 1$ and B1-B3 hold. Consider $\mathbf{S}_{NT}$ and $\mathbf{H}_{NT}$ defined as in \eqref{score} and \eqref{H_T}, respectively. Then, as $\min\left\lbrace N,T\right\rbrace \to\infty$
\begin{enumerate}
\item $(NT)^{-1}\mathbf{H}_{NT}\xrightarrow{p}\mathbf{H}\,,$
\item $(NT)^{-1}\mathbf{S}_{NT}\xrightarrow{p}\textbf{0}_m\,,$
\item $\max_{j,l,k}\sup_{\boldsymbol{\theta}\in\mathcal{O}(\boldsymbol{\theta}_0)}\left|\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}\frac{\partial^3l_{i,t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}_j\partial\boldsymbol{\theta}_l\partial\boldsymbol{\theta}_k}\right|\leq M_{NT}\xrightarrow{p}M\,,$
\end{enumerate}
where $M_{NT}\coloneqq\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}m_{i,t}$, $\mathbf{H}=\lim_{N\to\infty}N^{-1}\mathbf{H}_N$ is nonsingular and
\begin{equation}
\mathbf{H}=\begin{pmatrix}
d_1 & \mu d_1 & \mu d_1 \\
& \mu^2 d_1+d_4 & \mu^2 d_1+d_3 \\
& & \mu^2 d_1+d_2
\end{pmatrix}\,. \label{H div N}
\end{equation}
\label{limits}
\end{lemma}
Some preliminary lemmas employed to show the result are described in Appendix~\ref{proof preliminary lemmata}. The proof of Lemma~\ref{limits} follows in Appendix~\ref{proof}.
\begin{theorem} \rm\label{can2}
Consider model \eqref{lin2}. Let $\boldsymbol{\theta}\in\boldsymbol{\Theta}\subset\mathbb{R}^m_{+}$. Suppose that $\boldsymbol{\Theta}$ is compact and assume that the true value $\boldsymbol{\theta}_0$ belongs to the interior of $\boldsymbol{\Theta}$. Suppose that the conditions of Lemma \ref{limits} hold. Then, there exists a fixed open neighbourhood $\mathcal{O}(\boldsymbol{\theta}_0)=\left\lbrace \boldsymbol{\theta}:|\boldsymbol{\theta}-\boldsymbol{\theta}_0|<\delta\right\rbrace$ of $\boldsymbol{\theta}_0$ such that with probability tending to 1
as $\min\left\lbrace N,T\right\rbrace \to\infty$, for the score function \eqref{score}, the equation $\mathbf{S}_{NT}(\boldsymbol{\theta})=\mathbf{0}_m$ has a unique solution, called $\hat{\boldsymbol{\theta}}$, such that $\hat{\boldsymbol{\theta}}\xrightarrow{p}\boldsymbol{\theta}_0$.
\end{theorem}
An application of \citet[Thm. 3.2.23]{tani2000}, for example,
establishes the result.
In order to derive the asymptotic normality of the QMLE we need to assume the following conditions:
\begin{enumerate}
\item[B3$^\prime$] Set $\boldsymbol{\Lambda}_t=\mathbf{\Sigma}_t^{1/2}\mathbf{D}^{-1}_t$, $\boldsymbol{\Lambda}=\mathrm{E}(\boldsymbol{\Lambda}^T_t\boldsymbol{\Lambda}_t)$, $\bar{\boldsymbol{\Gamma}}(0)=\mathrm{E}[\boldsymbol{\Lambda}_t(\mathbf{Y}_{t-1}-\boldsymbol{\mu})(\mathbf{Y}_{t-1}-\boldsymbol{\mu})^T\boldsymbol{\Lambda}^T_t]$ and $\boldsymbol{\Delta}(0)=\mathrm{E}[\boldsymbol{\Lambda}_t\mathbf{W}(\mathbf{Y}_{t-1}-\boldsymbol{\mu})(\mathbf{Y}_{t-1}-\boldsymbol{\mu})^T\mathbf{W}^T\boldsymbol{\Lambda}^T_t]$. Assume that the following limits exist:\\ $f_1=\lim_{N\to\infty}N^{-1}\left( \mathbf{1}_N^T\boldsymbol{\Lambda} \mathbf{1}_N\right)$, $f_2=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \bar{\boldsymbol{\Gamma}}(0)\right] $, $f_3=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \mathbf{W}\bar{\boldsymbol{\Gamma}}(0)\right] $,\\ $f_4=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \boldsymbol{\Delta}(0)\right] $ and, if $(j^*,l^*,k^*)\in\Omega_d$, $d_*=\lim_{N\to\infty}\Pi_{j^*,l^*,k^*}$.
\item[B4] There exists a nonnegative, nonincreasing sequence $\left\lbrace \varphi_h \right\rbrace_{h=1,\dots,\infty}$ such that $\sum_{h=1}^{\infty} \varphi_h = \Phi<\infty$ and, for $i<j$,
\begin{equation}
\norm{\textrm{Corr}(Y_{i,t}, Y_{j,t}\left| \right. \mathcal{F}_{t-1} )}\leq \varphi_{j-i} \label{weak dependence}
\end{equation}
\end{enumerate}
Condition B3$^\prime$ is simply an extension of assumption B3, required for the convergence of the conditional information matrix \eqref{B_T} to a valid limiting information matrix; see $\eqref{B div N}$ below. More precisely, the reader can verify that B3 is just a special case of B3$^\prime$, when $\mathbf{\Sigma}_t=\mathbf{D}_t$. The main reason for this set of assumptions is that, for the QMLE, the conditional information matrix and the Hessian matrix are, in general, different. This does not occur in the case studied by \citet{zhu2017}. Analogously to B3, when $\mathbf{Y}_t$ is a continuous-valued random vector and we deal with IID errors $\boldsymbol{\xi}_t$, B3$^\prime$ reduces again to the conditions in \citet[C3]{zhu2017}.
Assumption B4 can be considered a contemporaneous weak dependence condition. Indeed, even in the very simple case of the independence model, i.e. $\lambda_{i,t}=\beta_0$, for all $i=1,\dots,N$, the reader can easily verify that, without any further constraints, $N^{-1}\mathbf{B}_N=\mathcal{O}(N)$, so the limiting variance of the estimator would eventually diverge, since it depends on the limit of the conditional information matrix (see Thm.~\ref{can3} below). Instead, under B4, $N^{-1}\mathbf{B}_N=\mathcal{O}(1)$, and the existence of the limiting covariance matrix can be shown, as in Lemma~\ref{limits 2} and Thm.~\ref{can3} below.
Insights about weak dependence conditions have been stated in \citet[p.~1102]{zhu2017}. When the errors are independent over different nodes and the past \citep[C1]{zhu2017}, B4 is trivially satisfied, since $\norm{\textrm{Cov}(Y_{i,t}, Y_{j,t}\left| \right. \mathcal{F}_{t-1} )}=\norm{\mathrm{E}(\xi_{i,t} \xi_{j,t})}=0$, for $i\neq j$. See Section~\ref{SUPP weak dependence} of the SM, for an example where B4 is empirically verified. Define $\boldsymbol{\eta}\in\mathbb{R}^m$, a non-null real-valued vector.
\begin{lemma}\rm
For the linear model \eqref{lin2}, suppose $\beta_1+\beta_2 < 1$ and B1-B2, B3$^\prime$-B4 hold. Consider $\mathbf{S}_{NT}$ and $\mathbf{B}_{NT}$ defined as in \eqref{score} and \eqref{B_T}, respectively. Assume $N^{-2}\mathrm{E}(\boldsymbol{\eta}^T \mathbf{s}_{Nt})^4<\infty$. Then, as $\min\left\lbrace N,T\right\rbrace \to\infty$
\begin{enumerate}
\item $(NT)^{-1}\mathbf{B}_{NT}\xrightarrow{p}\mathbf{B}\,,$
\item $(NT)^{-\frac{1}{2}}\mathbf{S}_{NT}\xrightarrow{d}N(\mathbf{0}_m,\mathbf{B})\,,$
\end{enumerate}
where $\mathbf{B}=\lim_{N\to\infty}N^{-1}\mathbf{B}_N$ and
\begin{equation}
\mathbf{B}=\begin{pmatrix}
f_1 & \mu f_1 & \mu f_1 \\
& \mu^2 f_1+f_4 & \mu^2 f_1+f_3 \\
& & \mu^2 f_1+f_2
\end{pmatrix}\,. \label{B div N}
\end{equation}
\label{limits 2}
\end{lemma}
The condition $N^{-2}\mathrm{E}(\boldsymbol{\eta}^T \mathbf{s}_{Nt})^4<\infty$ is not implied by B4. This condition is satisfied provided that \eqref{weak dependence} holds true for higher-order moments of the vectors $\left\lbrace \mathbf{Y}_t \right\rbrace $; see SM~\ref{fourth moments} for a formal proof and a more extensive discussion.
\begin{theorem} \rm\label{can3}
Consider model \eqref{lin2}. Suppose the conditions of Theorem~\ref{can2} and Lemma~\ref{limits 2} hold. Then, as $\min\left\lbrace N,T\right\rbrace \to\infty$,
$ \sqrt{NT}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_0)\xrightarrow{d}N(\mathbf{0}_m,\mathbf{H}^{-1}\mathbf{B}\mathbf{H}^{-1})\,.$
\end{theorem}
The same application of \citet[Thm. 3.2.23]{tani2000}
establishes the result. The extension of Thm.~\ref{can2}-\ref{can3} to the general-order linear PNAR($p$) model is immediate,
by using the well-known VAR(1) companion form \eqref{var1_c}. Assumptions B1-B2 and B4 remain substantially unaffected, using \citet[Lemma~1.1]{tru2021}. B3-B3$^\prime$ can be suitably rearranged, similarly to \citet[C4]{zhu2017}, and the result follows by \citet[Supp. Mat., Sec.~4]{zhu2017}. We omit the details.
\begin{rem} \rm \label{Rem fixed N or T}
A standard asymptotic inference result, with $T\to\infty$, is obtained for the QMLE $\hat{\boldsymbol{\theta}}$ as a special case, when $N$ is fixed, where the ``sandwich'' covariance is $\mathbf{H}^{-1}_N\mathbf{B}_N\mathbf{H}^{-1}_N$, by Thm.~\ref{can2}-\ref{can3}. This result requires only the stationarity conditions of Prop.~\ref{Prop. Ergodicity of linear model}, the compactness of the parameter space, and the assumption that the true value of the parameters belongs to its interior. It is proved along the lines of Theorem 4.1 in \cite{fok2020}. Similar comments apply to the log-linear model below. The case where $T$ is fixed and $N$ diverges cannot be studied in the framework we consider, since the convergence of the quantities involved in Lemmas~\ref{limits}-\ref{limits 2} requires both indices to diverge together. For details see also the related proofs in Appendix~\ref{proof}. This is empirically confirmed by the numerical bias found in the simulations of Sec.~\ref{simulations} when $T$ is small compared to $N$, which is due to the fact that $\left\lbrace \boldsymbol{\xi}_t \right\rbrace $ is not an IID sequence.
\end{rem}
We now state the analogous result for the log-linear model \eqref{log_lin2}; the notation corresponds to eq. \eqref{score_log}--\eqref{B_log}. Set $\mathbf{Z}_t=\log(\textbf{1}_N+\mathbf{Y}_t)$, and note that $\mathrm{E}(\mathbf{Z}_t)\approx\boldsymbol{\mu}$. Define $\sigma_{ij}=\mathrm{E}(\xi_{i,t}\xi_{j,t})$, $\Pi_{222}^L=N^{-1}\sum_{i=1}^{N}\mathrm{E}[\exp(\nu_{i,t})(\mathbf{w}_i^T(\mathbf{Z}_{t-1}-\boldsymbol{\mu}))^3]$, $\Pi_{223}^L=N^{-1}\sum_{i=1}^{N}\mathrm{E}[\exp(\nu_{i,t})(\mathbf{w}_i^T(\mathbf{Z}_{t-1}-\boldsymbol{\mu}))^2Y_{i,t-1}]$, $\Pi_{233}^L=N^{-1}\sum_{i=1}^{N}\mathrm{E}[\exp(\nu_{i,t})\mathbf{w}_i^T(\mathbf{Z}_{t-1}-\boldsymbol{\mu})Y_{i,t-1}^2]$, $\Pi_{333}^L=N^{-1}\sum_{i=1}^{N}\mathrm{E}[\exp(\nu_{i,t})Y_{i,t-1}^3]$. Assumption B1$^L$ is equal to B1 in the linear model. This holds also for B2$^L$, by considering $\boldsymbol{\Sigma_\psi}=\mathrm{E}\norm{\boldsymbol{\psi}_t\boldsymbol{\psi}_t^T}_v$ instead of $\boldsymbol{\Sigma_\xi}$ in B2 above.
\begin{itemize}
\item[B3$^L$] Set $\bar{\boldsymbol{\Gamma}}^L(0)=\mathrm{E}[\boldsymbol{\Sigma}^{1/2}_t(\mathbf{Z}_{t-1}-\boldsymbol{\mu})(\mathbf{Z}_{t-1}-\boldsymbol{\mu})^T\boldsymbol{\Sigma}^{1/2}_t]$ and $\boldsymbol{\Delta}^L(0)=\mathrm{E}[\boldsymbol{\Sigma}^{1/2}_t\mathbf{W}(\mathbf{Z}_{t-1}-\boldsymbol{\mu})(\mathbf{Z}_{t-1}-\boldsymbol{\mu})^T\mathbf{W}^T\boldsymbol{\Sigma}^{1/2}_t]$. Assume the following limits exist: $l_1=\lim_{N\to\infty}N^{-1}\mathrm{E}[\mathbf{1}_N^T\mathbf{D}_t\mathbf{W}(\mathbf{Z}_{t-1}-\boldsymbol{\mu})]$, $l_2=\lim_{N\to\infty}N^{-1}\mathrm{E}[\mathbf{1}_N^T\mathbf{D}_t(\mathbf{Z}_{t-1}-\boldsymbol{\mu})]$, $\varsigma=\lim_{N\to\infty}N^{-1}\sum_{i\neq j}\sigma_{ij}$, $g_3=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \bar{\boldsymbol{\Gamma}}^L(0)\right] $, $g_4=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \mathbf{W}\bar{\boldsymbol{\Gamma}}^L(0)\right] $, $g_5=\lim_{N\to\infty}N^{-1}\textrm{tr}\left[ \boldsymbol{\Delta}^L(0)\right] $ and, if $(j^*,l^*,k^*)\in\Omega_d$, $d_*=\lim_{N\to\infty}\Pi^L_{j^*,l^*,k^*}$.
\item[B4$^L$] There exists a nonnegative, nonincreasing sequence $\left\lbrace \phi_h \right\rbrace_{h=1,\dots,\infty}$ such that $\sum_{h=1}^{\infty} \phi_h = \Phi<\infty$ and, for $i<j$,
\begin{equation}
\norm{\textrm{Cov}(Y_{i,t}, Y_{j,t}\left| \right. \mathcal{F}_{t-1} )}\leq \phi_{j-i} \label{weak dependence_log}
\end{equation}
\end{itemize}
The same discussion about the assumptions stated for the linear model applies in this case. In particular, for B3$^L$, if an OLS estimator is applied to $\mathbf{Z}_t$, with $\xi_{i,t}\sim IID(0,\sigma^2)$ for all $i,t$, then $\boldsymbol{\Sigma}_t=\mathbf{D}_t=\mathbf{I}_N$, so $l_1=l_2=\varsigma=0$, $g_3=\kappa_1$, $g_4=\kappa_2$, $g_5=\kappa_6$, recovering again the conditions of \cite{zhu2017}. Since the errors are independent, B4$^L$ holds in this case as well. Condition B4$^L$ has been restated in terms of conditional covariances instead of correlations. This is simply due to the different form of the information matrix \eqref{B_log}, which only includes the conditional covariance matrix $\boldsymbol{\Sigma}_t$. In contrast, the linear model information matrix corresponding to \eqref{B} is given by $\mathbf{B}_N=\mathrm{E}(\partial\boldsymbol{\lambda}^T_{t}/\partial\boldsymbol{\theta}\,\mathbf{D}_t^{-1/2}\mathbf{R}_t \mathbf{D}_t^{-1/2}\,\partial\boldsymbol{\lambda}_{t}/\partial\boldsymbol{\theta}^T)$, where $\mathbf{R}_t= \mathbf{D}_t^{-1/2}\mathbf{\Sigma}_t \mathbf{D}_t^{-1/2}$ is the conditional correlation matrix and $\mathbf{D}_t^{-1/2}\preceq \beta_0^{-1}\mathbf{I}_N$, so that working with correlations is more natural and convenient. A numerical analysis of assumptions B2$^L$-B3$^L$ has been performed in SM~\ref{SUPP network}, complementing the results for the linear model. Recall that $\boldsymbol{\eta}\in\mathbb{R}^m$ denotes a non-null real-valued vector.
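The relation between $\boldsymbol{\Sigma}_t$, $\mathbf{D}_t$ and the conditional correlation matrix $\mathbf{R}_t$ can be made concrete with a toy numerical example (the matrices below are arbitrary stand-ins, not from the paper):

```python
import numpy as np

# Toy illustration: R_t = D_t^{-1/2} Sigma_t D_t^{-1/2}, where D_t holds
# the conditional variances on its diagonal (here D_t = diag(Sigma_t),
# as for Poisson marginals where variance equals intensity).
Sigma = np.array([[2.0, 0.6, 0.0],
                  [0.6, 1.0, 0.3],
                  [0.0, 0.3, 4.0]])
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(Sigma)))
R = D_inv_sqrt @ Sigma @ D_inv_sqrt
print(np.round(R, 3))          # unit diagonal, off-diagonal correlations
```

The unit diagonal of $\mathbf{R}$ is what makes the correlation scale convenient for bounding the information matrix.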
\begin{lemma}\rm
For the log-linear model \eqref{log_lin2}, suppose $\norm{\beta_1}+\norm{\beta_2} < 1$ and B1$^L$-B4$^L$ hold. Consider $\mathbf{S}_{NT}$ and $\mathbf{H}_{NT}$ defined as in \eqref{score_log} and \eqref{H_T_log}, respectively. Assume
$N^{-2}\mathrm{E}(\boldsymbol{\eta}^T \mathbf{s}_{Nt})^4<\infty$. Then, as $\min\left\lbrace N,T\right\rbrace \to\infty$
\begin{enumerate}
\item $(NT)^{-1}\mathbf{H}_{NT}\xrightarrow{p}\mathbf{H}\,,$
\item $(NT)^{-\frac{1}{2}}\mathbf{S}_{NT}\xrightarrow{d}N(\mathbf{0}_m,\mathbf{B})\,,$
\item $\max_{j,l,k}\sup_{\boldsymbol{\theta}\in\mathcal{O}(\boldsymbol{\theta}_0)}\left|\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}\frac{\partial^3l_{i,t}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}_j\partial\boldsymbol{\theta}_l\partial\boldsymbol{\theta}_k}\right|\leq M_{NT}\xrightarrow{p}M\,,$
\end{enumerate}
where $M_{NT}\coloneqq\frac{1}{NT}\sum_{t=1}^{T}\sum_{i=1}^{N}m_{i,t}$, $\mathbf{B}=\lim_{N\to\infty}N^{-1}\mathbf{B}_N$, $\mathbf{H}=\lim_{N\to\infty}N^{-1}\mathbf{H}_N$ is nonsingular and
\begin{equation}
\mathbf{H}=\begin{pmatrix}
\mu_y & l^*_1 & l^*_2 \\
& \mu(l^*_1+l_1)+l_5 & \mu(l^*_1+l_2)+l_4 \\
& & \mu(l^*_2+l_2)+l_3
\end{pmatrix}
\,,\,\,\mathbf{B}=\begin{pmatrix}
\mu^*_y & g^*_1 & g^*_2 \\
& \mu(g^*_1+l_1)+g_5 & \mu(g^*_1+l_2)+g_4 \\
& & \mu(g^*_2+l_2)+g_3
\end{pmatrix} \,, \label{H,B div N log}
\end{equation}
where $\mu_y=\mathrm{E}(Y_{i,t})$, $l^*_1=\mu\mu_y+l_1$, $l^*_2=\mu\mu_y+l_2$, $(l_3, l_4, l_5)$ equal $(g_3,g_4,g_5)$, respectively, when $\boldsymbol{\Sigma}_t=\mathbf{D}_t$, $\mu_y^*=\mu_y+\varsigma$, $g^*_1=\mu\mu^*_y+l_1$ and $g^*_2=\mu\mu^*_y+l_2$.
\label{limits_log}
\end{lemma}
\begin{theorem}\rm \label{can2_log}
Consider model \eqref{log_lin2}. Let $\boldsymbol{\theta}\in\boldsymbol{\Theta}\subset\mathbb{R}^m$. Suppose that $\boldsymbol{\Theta}$ is compact and assume that the true value $\boldsymbol{\theta}_0$ belongs to the interior of $\boldsymbol{\Theta}$. Suppose that the conditions of Lemma~\ref{limits_log} hold. Then, there exists a fixed open neighbourhood $\mathcal{O}(\boldsymbol{\theta}_0)=\left\lbrace \boldsymbol{\theta}:|\boldsymbol{\theta}-\boldsymbol{\theta}_0|<\delta\right\rbrace$ of $\boldsymbol{\theta}_0$ such that, with probability tending to 1 as $\min\left\lbrace N,T\right\rbrace \to\infty$, for the score function \eqref{score_log}, the equation $\mathbf{S}_{NT}(\boldsymbol{\theta})=\mathbf{0}_m$ has a unique solution, called $\hat{\boldsymbol{\theta}}$, which is consistent and asymptotically normal:
\begin{equation}
\sqrt{NT}(\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}_0)\xrightarrow{d}N(\mathbf{0}_m,\mathbf{H}^{-1}\mathbf{B}\mathbf{H}^{-1})\,.
\nonumber
\end{equation}
\end{theorem}
The conclusion follows as above. An analogous result can be established for $p>1$, since also $\log(\mathbf{1}_N+\mathbf{Y}_t)$ in \eqref{log_lin2} can be approximately rewritten as a VAR(1) model, similarly to \eqref{var1_c}.
In practical applications, one needs to specify a suitable estimator of the limiting covariance matrix of the quasi-maximum likelihood estimator. To this aim, define the following matrix
\begin{equation}
\hat{\mathbf{B}}_{NT}(\hat{\boldsymbol{\theta}})=\sum_{t=1}^{T}\textbf{s}_{Nt}(\hat{\boldsymbol{\theta}})\textbf{s}_{Nt}(\hat{\boldsymbol{\theta}})^T\,. \label{B_hat}
\end{equation}
Let $\mathbf{V}\coloneqq \mathbf{H}^{-1}\mathbf{B}\mathbf{H}^{-1}$ and $\mathbf{V}(\hat{\boldsymbol{\theta}})\coloneqq (NT)\mathbf{H}^{-1}_{NT}(\hat{\boldsymbol{\theta}})\hat{\mathbf{B}}_{NT}(\hat{\boldsymbol{\theta}})\mathbf{H}^{-1}_{NT}(\hat{\boldsymbol{\theta}})$. The following result establishes consistent estimation of the limiting covariance matrices obtained in Theorems \ref{can3} and \ref{can2_log}.
\begin{theorem} \rm
Consider model \eqref{lin2} (respectively, model \eqref{log_lin2}). Suppose the conditions of Theorems~\ref{can2}-\ref{can3} (respectively, Theorem~\ref{can2_log}) hold true.
Then, as $\min\left\lbrace N,T\right\rbrace \to\infty$, $\mathbf{V}(\hat{\boldsymbol{\theta}})\xrightarrow{p}\mathbf{V}$.
\label{covariance}
\end{theorem}
This result allows consistent estimation of the covariance matrix of Thm.~\ref{can3}-\ref{can2_log} using the usual sandwich estimator $\mathbf{H}^{-1}_{NT}(\hat{\boldsymbol{\theta}})\hat{\mathbf{B}}_{NT}(\hat{\boldsymbol{\theta}})\mathbf{H}^{-1}_{NT}(\hat{\boldsymbol{\theta}})$.
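The sandwich computation itself is a few lines of linear algebra. The sketch below is a minimal illustration, assuming the fitting routine returns the stacked per-time scores $\mathbf{s}_{Nt}(\hat{\boldsymbol{\theta}})$ and the Hessian $\mathbf{H}_{NT}(\hat{\boldsymbol{\theta}})$; the inputs here are toy placeholders, not output of the paper's estimator:

```python
import numpy as np

# Hedged sketch of V(theta_hat) = (NT) H_NT^{-1} B_hat H_NT^{-1}, with
# B_hat = sum_t s_Nt s_Nt^T as in the displayed equation. `scores` (T x m)
# and `H` (m x m) are assumed inputs from the fitting routine.
def sandwich_cov(scores, H, N):
    T, _ = scores.shape
    B_hat = scores.T @ scores                  # sum over t of outer products
    H_inv = np.linalg.inv(H)
    return (N * T) * H_inv @ B_hat @ H_inv

rng = np.random.default_rng(1)
scores = rng.standard_normal((50, 3))          # placeholder scores, m = 3
H = 100 * 50 * np.eye(3)                       # placeholder Hessian on the N*T scale
V = sandwich_cov(scores, H, N=100)
print(V.shape)                                 # (3, 3)
```

Since $\mathbf{H}_{NT}$ scales like $NT$, the factor $(NT)$ cancels and $\mathbf{V}(\hat{\boldsymbol{\theta}})$ stays on the scale of the limit $\mathbf{V}$.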
As for the estimation of the copula parameter $\rho$, although a complete study of the problem requires a separate treatment, we report in Section~\ref{SUPP copula estimation} of the SM the results of a simulation study employing a heuristic parametric bootstrap estimation algorithm. Such a method can potentially be useful to select an adequate copula structure and to estimate the associated copula parameter.\\
Finally, in Sec.~\ref{SUPP 2 step GEE} of the SM, a novel regression estimator is proposed by considering two-step Generalized Estimating Equations (GEE); the latter proves more efficient than the QMLE in numerical studies, especially when there is considerable correlation among the counts.
\section{Applications} \label{SEC: application}
\subsection{Simulations}\label{simulations}
We study the finite-sample behaviour of the QMLE for models \eqref{lin2_p} and \eqref{log_lin2_p}. To this end, we ran a simulation study with $S=1000$ repetitions and different time series lengths and network dimensions. We
consider the cases $p=(1,2)$. The adjacency matrix is generated by
using one of the most popular network structures, the stochastic block model (SBM):
\begin{ex}\rm \label{sbm}(Stochastic Block Model). A block label $(k = 1,\dots, K=5)$ is assigned to each node with equal probability, where $K$ is the total number of blocks. Then, set $\mathrm{P}(a_{ij}=1) = N^{-0.3}$ if $i$ and $j$ belong to the same block, and $\mathrm{P}(a_{ij}=1)= N^{-1}$ otherwise. In practice, the model assumes that nodes within the same block are more likely to be connected than nodes from different blocks.
\end{ex}
For details on SBM see \cite{wang1987}, \cite{nowicki_2001}, and \cite{zhao_2012}, among others.
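Example~\ref{sbm} translates directly into code. The paper generates the network with the \texttt{igraph} \texttt{R} package; the numpy sketch below is only an illustrative stand-in, in which edges are sampled independently for each ordered pair (an assumption):

```python
import numpy as np

# Illustrative SBM sampler: K equally likely block labels; edge
# probability N^{-0.3} within a block and N^{-1} between blocks.
def sbm_adjacency(N, K=5, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, K, size=N)             # equal-probability labels
    same = labels[:, None] == labels[None, :]
    p = np.where(same, N ** -0.3, N ** -1.0)        # within vs between blocks
    A = (rng.random((N, N)) < p).astype(int)
    np.fill_diagonal(A, 0)                          # no self-loops
    return A

A = sbm_adjacency(100)
print(A.sum() / (100 * 99))                         # empirical network density
```

Nodes sharing a block connect with the larger probability $N^{-0.3}$, which produces the community structure described in the example.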
The SBM model with $K=5$ blocks is generated by using the \texttt{igraph} \texttt{R} package \citep{csardi_2006}. The network density is set equal to 1\%. We performed simulations with a network density equal to 0.3\% and 0.5\%, as well, but we obtained similar results, hence we do not report them here. The parameters are set to
$(\beta_0, \beta_1, \beta_2)^T=(0.2, 0.3, 0.2)^T$.
The observed time series are generated using the copula-based data generating process of Section~\ref{Sec:Properties of order 1 model}. The specified copula structure is Gaussian, $C^{Ga}_\mathbf{R}(\dots)$, with correlation matrix $\mathbf{R}=(R_{ij})$, where $R_{ij}=\rho^{|i-j|}$, the so-called first-order autoregressive correlation matrix, henceforth AR-1. Then $C^{Ga}_\mathbf{R}(\dots)=C^{Ga}(\dots,\rho)$.
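A minimal sketch of this copula-based generation for the linear PNAR(1) case follows; the ring network used for $\mathbf{W}$ is an illustrative stand-in for the SBM adjacency, and all settings are assumptions of the sketch:

```python
import numpy as np
from scipy.stats import norm, poisson

# Hedged sketch of the copula-based DGP: linear PNAR(1) intensity
# lambda_t = beta0 + beta1 * W Y_{t-1} + beta2 * Y_{t-1}; at each t the
# counts have Poisson(lambda_{i,t}) marginals coupled by the Gaussian
# AR-1 copula with R_ij = rho^|i-j|.
def simulate_pnar1(W, T, beta=(0.2, 0.3, 0.2), rho=0.5, seed=0):
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    b0, b1, b2 = beta
    R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    L = np.linalg.cholesky(R)
    Y = np.zeros((T + 1, N))
    for t in range(1, T + 1):
        lam = b0 + b1 * (W @ Y[t - 1]) + b2 * Y[t - 1]
        U = norm.cdf(L @ rng.standard_normal(N))    # copula uniforms
        Y[t] = poisson.ppf(U, mu=lam)               # Poisson(lam_i) marginals
    return Y[1:]

N = 10
A = np.roll(np.eye(N), 1, axis=1) + np.roll(np.eye(N), -1, axis=1)
W = A / A.sum(axis=1, keepdims=True)                # row-normalized adjacency
Y = simulate_pnar1(W, T=200)
print(Y.shape)                                      # (200, 10)
```

With $\beta_1+\beta_2=0.5<1$ the stationarity condition of the linear model is satisfied, so the simulated paths do not explode.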
Tables~\ref{sim_gauss_lin_05} and \ref{sim_gauss_log_05} summarize the simulation results. Additional findings are given
in Section~\ref{simulations_app} of the SM, Tables \ref{sim_gauss_lin_02}--\ref{sim_gauss_log_00}.
The estimates for parameters and their standard errors (in brackets) are obtained by averaging out the results from all simulations; the third row below each coefficient shows the percentage frequency of $t$-tests which reject $H_0: \beta=0$ at the level $5\%$ over the $S$ simulations. We also report the percentage of cases where various information criteria select the correct generating model. In this study, we employ the Akaike (AIC), the Bayesian (BIC) and the Quasi (QIC) information criteria. The latter is a special case of the AIC which takes into account that estimation is done by quasi-likelihood methods. See \cite{pan2001} for more details.
We observe that the estimates are close to the true values and the standard errors are small for all the cases considered. When there is strong correlation between the count variables $Y_{i,t}$ (see Table \ref{sim_gauss_lin_05}) and $T$ is small compared to the network size $N$, the estimates of the network effect ($\hat{\beta}_1$) may suffer from a slight bias. The same conclusion is drawn from Table \ref{sim_gauss_lin_02}. Instead, when both $T$ and $N$ are reasonably large (or at least $T$ is large), the approximation to the true values of the parameters is adequate.
Standard errors shrink as $T$ increases. Regarding the estimators of the log-linear model (see Tables \ref{sim_gauss_log_05} and \ref{sim_gauss_log_02}), we obtain similar results, with the same exceptions regarding the intercept.
The $t$-tests and percentage of right selections due to various information criteria provide empirical confirmation for the model selection procedure.
Based on these results, the BIC provides the best selection procedure for the case of the linear model; its success rate is about 99\%, since it tends to select models with fewer parameters. For the opposite reason, the AIC does not perform as well as the BIC but still selects the right model around 92\% of the time. The QIC provides a good balance between the other two criteria, around 95\%. Moreover, it has the advantage of being more robust, especially when used for misspecified models. This fact is further confirmed by the results
concerning the log-linear model, even though the rate of correct selections for the QIC does not exceed 88\%.
To validate these results, we consider the case where all series are independent (Gaussian copula with $\rho=0$). Then
the QMLE provides satisfactory results if $N$ is large enough, even if $T$ is small (see Tables \ref{sim_gauss_lin_00} and \ref{sim_gauss_log_00}). Moreover, the slight bias reported for some coefficients when $\rho>0$ is not observed in this case.
From the QQ-plots shown in Figures \ref{qq_lin}-\ref{qq_log} we conclude that, with $N$ and $T$ large enough,
the asymptotic normality approximation is quite adequate.
A more extensive discussion and further simulation results can be found in Sec.~\ref{simulations_app} of the SM.
\begin{table}[H]
\centering
\caption{Estimators obtained from $S=1000$ simulations of model \eqref{lin2}, for various values of $N$ and $T$. Network generated by Ex.~\ref{sbm}. Data are generated by using the Gaussian AR-1 copula, with $\rho=0.5$ and $p=1$. Model \eqref{lin2_p} is also fitted using $p=2$ to check the performance of various information criteria.}
\hspace*{-1cm}
\scalebox{0.75}{
\begin{tabular}{c|c|ccc|ccccc|ccc}\hline\hline
\multicolumn{2}{c|}{Dim.} & \multicolumn{3}{c|}{$p=1$} & \multicolumn{5}{c|}{$p=2$} & \multicolumn{3}{c}{IC $(\%)$}\\\hline
$N$ & $T$ & $\hat{\beta}_0$& $\hat{\beta}_1$ & $\hat{\beta}_2$ & $\hat{\beta}_0$ & $\hat{\beta}_{11}$ & $\hat{\beta}_{21}$ & $\hat{\beta}_{12}$ & $\hat{\beta}_{22}$ & $AIC$ & $BIC$ &$QIC$\\\hline
\multirow{6}{*}{20} & \multirow{3}{*}{100} & 0.201 & 0.296 & 0.199 & 0.197 & 0.292 & 0.196 & 0.009 & 0.007 & \multirow{3}{*}{94.1} & \multirow{3}{*}{99.5} & \multirow{3}{*}{95.1}\\
& & (0.019) & (0.036) & (0.028) & (0.021) & (0.037) & (0.029) & (0.031) & (0.023) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 1.4 & 1.5 & & & \\\cline{2-13}
& \multirow{3}{*}{200} & 0.200 & 0.297 & 0.199 & 0.197 & 0.294 & 0.197 & 0.008 & 0.005 & \multirow{3}{*}{93.9} & \multirow{3}{*}{99.9} & \multirow{3}{*}{95.2} \\%\addlinespace[-0.5ex]
& & (0.013) & (0.027) & (0.020) & (0.014) & (0.028) & (0.021) & (0.023) & (0.016) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 1.5 & 1.6 & & & \\
\hline
\multirow{12}{*}{100} & \multirow{3}{*}{20} & 0.203 & 0.292 & 0.198 & 0.196 & 0.286 & 0.195 & 0.015 & 0.008 & \multirow{3}{*}{93.1} & \multirow{3}{*}{97.1} & \multirow{3}{*}{93.5}\\%\addlinespace[-0.5ex]
& & (0.024) & (0.048) & (0.028) & (0.029) & (0.050) & (0.029) & (0.046) & (0.024) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 2.9 & 2.2 & & & \\\cline{2-13}
& \multirow{3}{*}{50} & 0.202 & 0.294 & 0.199 & 0.197 & 0.290 & 0.197 & 0.011 & 0.005 & \multirow{3}{*}{91.4} & \multirow{3}{*}{98.8} & \multirow{3}{*}{94.1} \\%\addlinespace[-0.5ex]
& & (0.015) & (0.032) & (0.018) & (0.018) & (0.033) & (0.019) & (0.031) & (0.015) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 3.3 & 2.0 & & & \\
\cline{2-13}
& \multirow{3}{*}{100} & 0.201 & 0.299 & 0.200 & 0.198 & 0.296 & 0.198 & 0.008 & 0.004 & \multirow{3}{*}{91.9} & \multirow{3}{*}{99.2} & \multirow{3}{*}{94.9}\\%\addlinespace[-0.5ex]
& & (0.011) & (0.023) & (0.013) & (0.013) & (0.023) & (0.013) & (0.022) & (0.011) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 2.0 & 1.8 & & & \\\cline{2-13}
&\multirow{3}{*}{200} & 0.200 & 0.299 & 0.200 & 0.198 & 0.298 & 0.199 & 0.005 & 0.003 & \multirow{3}{*}{92.3} & \multirow{3}{*}{99.7} & \multirow{3}{*}{95.2} \\%\addlinespace[-0.5ex]
& & (0.008) & (0.016) & (0.009) & (0.009) & (0.017) & (0.009) & (0.015) & (0.008) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 2.0 & 1.6 & & & \\
\hline
\hline
\end{tabular}
}
\hspace*{-1cm}
\label{sim_gauss_lin_05}
\end{table}
\begin{table}[H]
\centering
\caption{Estimators obtained from $S=1000$ simulations of model \eqref{log_lin2}, for various values of $N$ and $T$. Network generated by Ex.~\ref{sbm}. Data are generated by using the Gaussian AR-1 copula, with $\rho=0.5$ and $p=1$. Model \eqref{log_lin2_p} is also fitted using $p=2$ to check the performance of various information criteria.}
\hspace*{-1cm}
\scalebox{0.75}{
\begin{tabular}{c|c|ccc|ccccc|ccc}\hline\hline
\multicolumn{2}{c|}{Dim.} & \multicolumn{3}{c|}{$p=1$} & \multicolumn{5}{c|}{$p=2$} & \multicolumn{3}{c}{IC $(\%)$}\\\hline
$N$ & $T$ & $\hat{\beta}_0$ & $\hat{\beta}_1$ & $\hat{\beta}_2$ & $\hat{\beta}_0$ & $\hat{\beta}_{11}$ & $\hat{\beta}_{21}$ & $\hat{\beta}_{12}$ & $\hat{\beta}_{22}$ & $AIC$ & $BIC$ &$QIC$\\\hline
\multirow{6}{*}{20} & \multirow{3}{*}{100} & 0.206 & 0.298 & 0.196 & 0.208 & 0.298 & 0.196 & -0.002 & -0.001 & \multirow{3}{*}{81.6} & \multirow{3}{*}{97.5} & \multirow{3}{*}{86.1}\\%\addlinespace[-0.5ex]
& & (0.061) & (0.040) & (0.034) & (0.072) & (0.041) & (0.035) & (0.040) & (0.034) & & & \\\addlinespace[-0.4ex]
& & 91.3 & 100 & 100 & 81.3 & 100 & 99.9 & 2.0 & 2.7 & & & \\\cline{2-13}
& \multirow{3}{*}{200} & 0.203 & 0.298 & 0.199 & 0.203 & 0.298 & 0.199 & 0.001 & -0.001 & \multirow{3}{*}{80.7} & \multirow{3}{*}{98.9} & \multirow{3}{*}{85.8} \\%\addlinespace[-0.5ex]
& & (0.043) & (0.030) & (0.025) & (0.049) & (0.032) & (0.025) & (0.032) & (0.024) & & & \\\addlinespace[-0.4ex]
& & 99.5 & 100 & 100 & 98.1 & 100 & 100 & 2.3 & 2.4 & & & \\
\hline
\multirow{12}{*}{100} & \multirow{3}{*}{20} & 0.209 & 0.292 & 0.196 & 0.215 & 0.293 & 0.197 & -0.006 & -0.002 & \multirow{3}{*}{74.6} & \multirow{3}{*}{88.2} & \multirow{3}{*}{80.7}\\%\addlinespace[-0.5ex]
& & (0.082) & (0.069) & (0.036) & (0.097) & (0.069) & (0.037) & (0.067) & (0.036) & & & \\\addlinespace[-0.4ex]
& & 70.0 & 97.5 & 99.9 & 59.7 & 97.4 & 99.9 & 3.8 & 3.3 & & & \\\cline{2-13}
& \multirow{3}{*}{50} & 0.204 & 0.296 & 0.200 & 0.207 & 0.296 & 0.200 & -0.004 & -0.001 & \multirow{3}{*}{78.4} & \multirow{3}{*}{94.6} & \multirow{3}{*}{86.6} \\%\addlinespace[-0.5ex]
& & (0.053) & (0.045) & (0.023) & (0.065) & (0.045) & (0.023) & (0.045) & (0.022) & & & \\\addlinespace[-0.4ex]
& & 96.3 & 100 & 100 & 86.9 & 100 & 100 & 2.9 & 2.3 & & & \\
\cline{2-13}
& \multirow{3}{*}{100} & 0.203 & 0.297 & 0.199 & 0.204 & 0.297 & 0.200 & 0.000 & -0.001 & \multirow{3}{*}{78.9} & \multirow{3}{*}{97.2} & \multirow{3}{*}{85.7}\\%\addlinespace[-0.5ex]
& & (0.037) & (0.031) & (0.016) & (0.046) & (0.032) & (0.016) & (0.031) & (0.016) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 99.4 & 100 & 100 & 3.1 & 2.0 & & & \\\cline{2-13}
&\multirow{3}{*}{200} & 0.201 & 0.300 & 0.199 & 0.203 & 0.300 & 0.199 & -0.002 & 0.000 & \multirow{3}{*}{80.5} & \multirow{3}{*}{97.5} & \multirow{3}{*}{88.1} \\%\addlinespace[-0.5ex]
& & (0.026) & (0.022) & (0.011) & (0.033) & (0.022) & (0.011) & (0.022) & (0.011) & & & \\\addlinespace[-0.4ex]
& & 100 & 100 & 100 & 100 & 100 & 100 & 2.9 & 2.7 & & & \\
\hline
\hline
\end{tabular}
}
\hspace*{-1cm}
\label{sim_gauss_log_05}
\end{table}
\subsection{Data analysis}
The application on real data concerns the monthly number of burglaries on the south side of Chicago from 2010 to 2015 ($T=72$). The counts are registered for the $N=552$ census block groups. The data are taken from \cite{clark_2021}, \href{https://github.com/nick3703/Chicago-Data}{\url{https://github.com/nick3703/Chicago-Data}}. The undirected network structure arises naturally, as an edge between blocks $i$ and $j$ is set if the locations share a border. The density of the network is 1.74\%. The maximum number of burglaries in a month in a census block is 17. The variance-to-mean ratio in the data is 1.82, suggesting some overdispersion in the data.
The median degree is 5. On this dataset we fit the linear and log-linear PNAR(1) and PNAR(2) models. The results are summarized in Tables \ref{chicago_est}-\ref{chicago_inc}. All the models have significant parameters. The magnitude of the network effects $\beta_{11}$ and $\beta_{12}$ seems reasonable, as an increasing number of burglaries in a block can lead to a growth of the same type of crime committed in a close area. Also, the lagged effects have a positive impact on the counts. Interestingly, the log-linear model is able to account for the general downward trend registered from 2010 to 2015 for this type of crime in the area analysed. All the information criteria select the PNAR(2) models, in accordance with the significance of the estimates.
\begin{table}[H]
\centering
\caption{Estimation results for Chicago crime data.}
\scalebox{0.8}{
\begin{tabular}{cccc|ccc}
\hline\hline
& \multicolumn{3}{c|}{Linear PNAR(1)} & \multicolumn{3}{c}{Log-linear PNAR(1)} \\
\hline
& Estimate & SE ($\times10^2$) & $p$-value & Estimate & SE ($\times10^2$) & $p$-value \\
\hline
$\beta_0$ & 0.4551 & 2.1607 & $<$0.01 & -0.5158 & 3.8461 & $<$0.01 \\
$\beta_1$& 0.3215 & 1.2544 & $<$0.01 & 0.4963 & 2.8952 & $<$0.01\\
$\beta_2$ & 0.2836 & 0.8224 & $<$0.01 & 0.5027 & 1.2105 & $<$0.01\\
\hline
& \multicolumn{3}{c|}{Linear PNAR(2)} & \multicolumn{3}{c}{Log-linear PNAR(2)} \\
\hline\hline
& Estimate & SE ($\times10^2$) & $p$-value & Estimate & SE ($\times10^2$) & $p$-value \\
\hline
$\beta_0$ & 0.3209 & 1.8931 & $<$0.01 & -0.5059 & 4.7605 & $<$0.01 \\
$\beta_{11}$& 0.2076 & 1.1742 & $<$0.01 & 0.2384 & 3.4711 & $<$0.01\\
$\beta_{21}$ & 0.2287 & 0.7408 & $<$0.01 & 0.3906 & 1.2892 & $<$0.01 \\
$\beta_{12}$ & 0.1191 & 1.4712 & $<$0.01 & 0.0969 & 3.3404 & $<$0.01 \\
$\beta_{22}$ & 0.1626 & 0.7654 & $<$0.01 & 0.2731 & 1.2465 & $<$0.01 \\
\hline\hline
\end{tabular}
}
\label{chicago_est}
\end{table}
\begin{table}[H]
\centering
\caption{Information criteria for Chicago crime data. Smaller values in bold.}
\scalebox{0.8}{
\begin{tabular}{ccc|cc|cc}
\hline\hline
& \multicolumn{2}{c|}{AIC$\times10^{-3}$} & \multicolumn{2}{c|}{BIC$\times10^{-3}$} & \multicolumn{2}{c}{QIC$\times10^{-3}$} \\
\hline
& linear & log-linear & linear & log-linear & linear & log-linear\\
PNAR(1) & 115.06 & 115.37 & 115.07 & 115.38 & 115.11 & 115.44\\
PNAR(2) & \textbf{111.70} & \textbf{112.58} & \textbf{111.72} & \textbf{112.60} & \textbf{111.76} & \textbf{112.68}\\
\hline\hline
\end{tabular}
}
\label{chicago_inc}
\end{table}
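The AIC entries above follow the standard definition based on the Poisson log-likelihood. As a minimal, self-contained sketch of how such an entry is computed (the counts and fitted means below are placeholders, not the Chicago data):

```python
import math

def poisson_aic(y, mu, n_params):
    """AIC = 2k - 2*loglik for a Poisson model with fitted means mu > 0."""
    loglik = sum(-m + yi * math.log(m) - math.lgamma(yi + 1)
                 for yi, m in zip(y, mu))
    return 2 * n_params - 2 * loglik

# A better-fitting model attains a smaller AIC on the same data:
y = [3, 1, 4, 2, 5]
aic_good = poisson_aic(y, [3.0, 1.0, 4.0, 2.0, 5.0], n_params=2)
aic_bad = poisson_aic(y, [9.0, 9.0, 9.0, 9.0, 9.0], n_params=2)
```

BIC replaces the $2k$ penalty with $k$ times the logarithm of the sample size, while the QIC penalty is built from a quasi-likelihood; the ranking logic applied in Table \ref{chicago_inc} is the same.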
Estimation of the copula is done according to the algorithm of Sec.~\ref{SUPP copula estimation} of the SM. As a preliminary step, we order the observations $Y_{i,t}$ for $i=1,\dots,N$ with respect to their sample variance, in decreasing order. The Gaussian AR-1 copula, described in Sec.~\ref{simulations}, is compared against the Clayton copula, over a grid of values for the associated copula parameter, with 100 bootstrap simulations. The former is selected 100\% and 97\% of the time, for the linear and the log-linear PNAR(1) model, respectively. The estimated copula parameter is $\hat{\rho}=0.656$ and $\hat{\rho}=0.546$, for the linear and log-linear model, respectively, with small standard errors of 0.046 and 0.058, correspondingly. A complete estimation of the DGP defined in Sec.~\ref{Sec:Properties of order 1 model} is now available.
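For concreteness, a sketch of the copula step that the bootstrap relies on: draw a Gaussian AR-1 vector, push it through the normal CDF, and invert the Poisson CDF to obtain dependent counts. The means, $\rho$ and dimension below are placeholders; the actual algorithm is the one detailed in the SM.

```python
import math
import numpy as np

def poisson_quantile(u, lam):
    """Smallest k with P(Y <= k) >= u for Y ~ Poisson(lam)."""
    k, pmf = 0, math.exp(-lam)
    cdf = pmf
    while cdf < u:
        k += 1
        pmf *= lam / k
        cdf += pmf
    return k

def gaussian_ar1_copula_counts(lam, rho, rng):
    """Dependent Poisson draws with marginal means `lam` and a Gaussian
    copula whose correlation matrix is AR(1): corr(Z_i, Z_j) = rho**|i-j|."""
    n = len(lam)
    z = np.empty(n)
    z[0] = rng.standard_normal()
    for i in range(1, n):
        z[i] = rho * z[i - 1] + math.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    u = np.array([0.5 * (1.0 + math.erf(zi / math.sqrt(2.0))) for zi in z])
    u = np.clip(u, 1e-12, 1.0 - 1e-12)   # keep the quantile loop finite
    return np.array([poisson_quantile(ui, li) for ui, li in zip(u, lam)])

counts = gaussian_ar1_copula_counts(np.full(552, 2.0), rho=0.656,
                                    rng=np.random.default_rng(0))
```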
A further estimation step for the PNAR(1) models is performed by applying the two-step GEE estimation method introduced in Sec.~\ref{SUPP 2 step GEE}. The previous QMLE estimates are used as starting values of the two-step procedure. An AR-1 working correlation matrix $\mathbf{P(\tau)}$ is selected, with $\hat{\tau}_1$ as the estimator of the correlation parameter. To compare the relative efficiency of the GEE ($\tilde{\boldsymbol{\theta}}$) versus the QMLE ($\hat{\boldsymbol{\theta}}$), bootstrap standard errors have been computed, with 100 simulations, for both estimation methods, using the estimated copula to generate bootstrap replicates; then we compute the ratio of the resulting standard errors, $q(\hat{\boldsymbol{\theta}}, \tilde{\boldsymbol{\theta}})=\sum_{h=1}^{m}SE(\hat{\beta}_h)/\sum_{h=1}^{m}SE(\tilde{\beta}_h)$. The results are $q(\hat{\boldsymbol{\theta}}, \tilde{\boldsymbol{\theta}})=1.022$ and $q(\hat{\boldsymbol{\theta}}, \tilde{\boldsymbol{\theta}})=1.003$, for the linear and log-linear model, respectively.
We note a marginal gain in efficiency from the GEE estimation; this is probably due to the small value of the estimated correlation parameter $\tau$, which is found to be around 0.005 and 0.003, on average, for the linear and log-linear model, respectively. Using a different kind of estimator for the correlation parameter might yield a significant efficiency improvement, but further study in this direction is needed.
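Schematically, the relative-efficiency computation amounts to the following (the bootstrap replicate matrices below are synthetic placeholders, not the actual crime-data bootstrap):

```python
import numpy as np

def bootstrap_se(estimates):
    """Column-wise standard errors from a (B x m) matrix of bootstrap estimates."""
    return np.std(np.asarray(estimates), axis=0, ddof=1)

def relative_efficiency(boot_qmle, boot_gee):
    """q = sum_h SE(QMLE_h) / sum_h SE(GEE_h); q > 1 favours the GEE."""
    return float(bootstrap_se(boot_qmle).sum() / bootstrap_se(boot_gee).sum())

rng = np.random.default_rng(3)
theta = np.array([0.32, 0.21, 0.23])              # hypothetical true parameters
boot_qmle = theta + 0.0215 * rng.standard_normal((100, 3))
boot_gee = theta + 0.0210 * rng.standard_normal((100, 3))
q = relative_efficiency(boot_qmle, boot_gee)
```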
\section*{Acknowledgements}
The authors would like to thank the Editor, Associate Editor and two reviewers for valuable comments and suggestions.
Both authors acknowledge the hospitality of the Department of Mathematics \& Statistics at Lancaster University, where this work was initiated. This work has been co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research and Innovation Foundation, under the project INFRASTRUCTURES/1216/0017 (IRIDA).
\section{Introduction}
In view of the development of autonomous underwater vehicles, the capability of such vehicles to interact with the environment through a robot manipulator has gained attention in the literature. Most underwater manipulation tasks, such as maintenance of ships, underwater pipeline or weld inspection, surveying, oil and gas searching, cable burial and mating of underwater connectors, require the manipulator mounted on the vehicle to be in contact with the underwater object or environment. The aforementioned systems are complex and are characterized by several strong constraints, namely the complexity of the mathematical model and the difficulty of controlling the vehicle. These constraints should be taken into consideration when designing a force control scheme. In order to increase the adaptability of UVMSs, force control must be included in the control system of the UVMS. Although many force control schemes have been developed for earth-fixed manipulators and space robots, these control schemes cannot be used directly on UVMSs because of the unstructured nature of the underwater environment.
From the control perspective, achieving these types of tasks requires specific approaches \citep{Siciliano_Sciavicco}. However, regarding underwater robotics, only few publications deal with interaction control using UVMSs. One of the first underwater robotic setups for interaction with the environment was presented in \citep{Casalino20013220}. Hybrid position/force control schemes for UVMSs were developed and tested in \citep{lane1,lane2}. However, the dynamic coupling between the manipulator and the underwater vehicle was not considered in the system model. In order to compensate for the contact force, the authors in \citep{kajita} proposed a method that utilizes the restoring force generated by the thrusters. In the same context, position/force \citep{lapierre}, impedance control \citep{cui1,cui2,cui3} and external force control schemes \citep{antonelli_tro, antonelli_cdc,Antonelli2002251} can be found in the literature.
Over the last years, the interaction control of UVMSs has been gaining significant attention again. Several control issues for a UVMS in view of intervention tasks have been presented in \citep{Marani2010175}. In \citep{Cataldi2015524}, based on the interaction schemes presented in \citep{antonelli_tro} and \citep{Antonelli2002251}, the authors proposed a control protocol for valve-turning scenarios. A recent study \citep{moosavian} proposed a multiple impedance control scheme for a dual manipulator mounted on an AUV. Moreover, the two recent European projects TRIDENT (see, e.g., \citep{Fernández2013121},\citep{Prats201219},\citep{Simetti2014364}) and PANDORA (see, e.g., \citep{Carrera2014}, \citep{Carrera2015}) have given a boost to underwater interaction, with relevant results.
In real applications, the UVMS needs to interact with the environment via its end-effector in order to achieve a desired task. During the manipulation process the following issues occur: the environment is potentially unknown, the system is subject to unknown (but bounded) external disturbances (sea currents and sea waves) and the sensor measurements are not always accurate (measurement noise). These issues can cause unpredicted instabilities in the system and need to be tackled during the control design. From the control design perspective, the UVMS dynamical model is highly nonlinear, complicated and has significant uncertainties. Owing to the aforementioned issues, underwater manipulation becomes a challenging task when low overshoot and prescribed transient and steady state performance are required.
Motivated by the above, in this work we propose a force-position control scheme which does not require any knowledge of the UVMS dynamic parameters, the environment model or the disturbances. More specifically, it tackles all the aforementioned issues and guarantees a predefined behavior of the system in terms of desired overshoot and prescribed transient/steady state performance. Moreover, measurement noise, UVMS model uncertainties (a challenging issue in underwater robotics) and external disturbances are considered during the control design. In addition, the complexity of the proposed control law is significantly low. It is actually a static scheme involving only a few calculations to output the control signal, which enables its implementation on most current UVMSs.
\begin{comment}
In this paper, we aim at enhancing the interaction
controllers above in order to tackle the problem of modeled
disturbances that may cause unpredicted instabilities. Specifically, we study the case of an UVMS in interaction with a compliant environment. The practical motivation of this case consists of various underwater applications such as sampling from sea organisms for research purposes or undersea constructions (e.g. underwater welding) where better steady state performance, smoother interaction impact and reduced overshoot of the interaction force error, are required. The methodology of Nonlinear Model Predictive Control (NMPC) owing to its ability to handle input and state constraints is a suitable approach to be used in interaction manipulation tasks. Concerning NMPC in interaction manipulation tasks, some previous works have appeared. In \citep{BaptistaJIRS01}, authors utilize an impedance controller integrated with a fuzzy predictive algorithm. The proposed scheme incorporates a nonlinear model of the contact and uncertainties in the model of the robot. By deploying this strategy, a
considerable reduction in the force error compared to classic approaches has been derived in \citep{BaptistaCAP01}. Compared to classic interaction control schemes for UVMSs, the scheme in this paper ensures better steady state performance, reduced overshoot of interaction force error and smoother interaction impact. Moreover, state and input constraints can be incorporated to the controller design process, which is a vital result, owing to the existence of multiple constraints in the case of UVMS. Specifically, in this work, constraints of position and velocity on manipulator joints, constraints on linear and angular velocity of the robotic vehicle, and an approximate model of compliant environment, are being considered during the control design.
\end{comment}
The rest of this paper is organized as follows: in Section 2 the mathematical model of the UVMS and preliminary background are given. Section 3 provides the problem statement that we aim to solve in this paper. The control methodology is presented in Section 4. Section 5 validates our approach via a simulation study. Finally, conclusions and future work directions are discussed in Section 6.
\section{Preliminaries}
\subsection{Mathematical model of the UVMS}
In this work, vectors are denoted by lower-case bold letters and matrices by capital bold letters. The end-effector coordinates with respect to (w.r.t.) the inertial frame $\{I\}$ are denoted by $\boldsymbol{x}_e\in \mathbb{R}^6$. Let $\boldsymbol{q}=[\boldsymbol{q}^\top_a,~\boldsymbol{q}^\top_m]^\top\in \mathbb{R}^n$ be the state variables of the UVMS, where $\boldsymbol{q}_a=[\boldsymbol{\eta}_1^\top,\boldsymbol{\eta}_2^\top]^\top\in \mathbb{R}^6$ is the vector that involves the position vector $\boldsymbol{\eta}_{1}=[x,y,z]^\top$ and the Euler-angle orientation $\boldsymbol{\eta}_{2}=[\phi,\theta,\psi]^\top$ of the vehicle w.r.t. the inertial frame $\{I\}$, and $\boldsymbol{q}_m\in \mathbb{R}^{n-6}$ is the vector of angular positions of the manipulator's joints. Thus, we have \citep{Fossen2,antonelli}:
\begin{gather}
\dot{\boldsymbol{q}}_a= \boldsymbol{J}^a(\boldsymbol{q}_a)\boldsymbol{v} \label{eq1}
\end{gather}
where
\begin{align*}
\boldsymbol{J}^a(\boldsymbol{q}_a)= \begin{bmatrix}
\boldsymbol{J}_t(\eta_2) & \boldsymbol{0}_{3 \times 3} \\
\boldsymbol{0}_{3 \times 3} & \boldsymbol{J}_{r}(\eta_2) \\
\end{bmatrix}\in \mathbb{R}^{6\times 6}
\end{align*}
is the Jacobian matrix transforming the velocities from the body-fixed to the inertial frame, $\boldsymbol{0}_{3\times 3}$ is the zero matrix of the respective dimensions, $\boldsymbol{v}$ is the vector of body velocities of the vehicle and $\boldsymbol{J}_t(\eta_2)$ and $\boldsymbol{J}_r(\eta_2)$ are the parts of the Jacobian related to position and orientation, respectively. Let also $\dot{\boldsymbol{\chi}}=[\dot{\boldsymbol{\eta}}_{1}^\top,\boldsymbol{\omega}^\top]^\top$ denote the velocity of the UVMS's end-effector, where $\dot{\boldsymbol{\eta}}_{1}$, $\boldsymbol{\omega}$ are the linear and angular velocity of the UVMS's end-effector, respectively. Without loss of generality, for the augmented UVMS system we have \cite{antonelli}:
\begin{equation}
\dot{\boldsymbol{\chi}}= \boldsymbol{J}^g({\boldsymbol{q}})\boldsymbol{\zeta}\label{eq222}
\end{equation}
where $\boldsymbol{\zeta}=[\boldsymbol{v}^\top,\dot{\boldsymbol{q}}_{m}^\top]^\top \in \mathbb{R}^{n}$ is the velocity vector including the body velocities of the vehicle as well as the joint velocities of the manipulator and $ \boldsymbol{J}^g(\boldsymbol{q})$ is the geometric Jacobian matrix \citep{antonelli}. In this way, the task-space velocity vector of the UVMS's end-effector can be given by:
\begin{equation}
\dot{\boldsymbol{x}}_e= \boldsymbol{J}({\boldsymbol{q}})\boldsymbol{\zeta}\label{eq122}
\end{equation}
where $\boldsymbol{J}({\boldsymbol{q}})$ is the analytical Jacobian matrix, given by:
\begin{gather*}
\boldsymbol{J}({\boldsymbol{q}})={\boldsymbol{J}'({\boldsymbol{q}})}^{-1}\boldsymbol{J}^g({\boldsymbol{q}})
\end{gather*}
with $\boldsymbol{J}'({\boldsymbol{q}})$ being the Jacobian matrix that maps the Euler-angle rates to the angular velocity $\boldsymbol{\omega}$, given by:
\begin{gather*}
\boldsymbol{J}'({\boldsymbol{q}})=\begin{bmatrix}
\boldsymbol{I}_{3\times 3} & \boldsymbol{0}_{3\times 3}\\
\boldsymbol{0}_{3\times 3} & \boldsymbol{J}''({\boldsymbol{q}})
\end{bmatrix},\\
\boldsymbol{J}''({\boldsymbol{q}})=\begin{bmatrix}
1 & 0&-\sin(\theta)\\
0&\cos(\phi)&\cos(\theta)\sin(\phi)\\
0&-\sin(\phi)&\cos(\theta)\cos(\phi)
\end{bmatrix}.
\end{gather*}
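A direct transcription of $\boldsymbol{J}''$ in code form (a sketch; note the well-known representation singularity at $\theta=\pm\pi/2$, where $\boldsymbol{J}''$ loses rank):

```python
import numpy as np

def euler_rate_jacobian(phi, theta):
    """J'' mapping roll-pitch-yaw Euler-angle rates to body angular velocity;
    its determinant is cos(theta), hence singular at theta = +-pi/2."""
    return np.array([
        [1.0, 0.0, -np.sin(theta)],
        [0.0, np.cos(phi), np.cos(theta) * np.sin(phi)],
        [0.0, -np.sin(phi), np.cos(theta) * np.cos(phi)],
    ])
```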
\subsection{Dynamics}
Without loss of generality, the dynamics of the UVMS can be given as \cite{antonelli}:
\begin{gather}
\boldsymbol{M}(\boldsymbol{q})\dot{\boldsymbol{\zeta}}\!+\!\boldsymbol{C}({\boldsymbol{q}},\boldsymbol{\zeta}){\boldsymbol{\zeta}}\!+\!\boldsymbol{D}({\boldsymbol{q}},\boldsymbol{\zeta}){\boldsymbol{\zeta}}\!+\boldsymbol{g}(
\boldsymbol{q})\!+\!{\boldsymbol{J}^g}^\top\boldsymbol{\lambda}+\boldsymbol{\delta}(t)=\!\boldsymbol{\tau}\!\label{eq4}
\end{gather}
where $\boldsymbol{\delta}(t)$ denotes bounded disturbances including the system's uncertainties as well as the external disturbances acting on the system from the environment (sea waves and currents), and $\boldsymbol{\lambda}=[\boldsymbol{f}^\top_e,\boldsymbol{\nu}^\top_e]^\top$ is the generalized vector including the force $\boldsymbol{f}_e$
and torque $\boldsymbol{\nu}_e$ that the UVMS exerts on the environment at its end-effector frame. Moreover, $\boldsymbol{\tau} \in \mathbb{R}^n$ denotes the control input at the joint level, $\boldsymbol{{M}}(\boldsymbol{q})$ is the positive definite inertia matrix, $\boldsymbol{{C}}({\boldsymbol{q}},\boldsymbol{\zeta})$ represents Coriolis and centrifugal terms, $\boldsymbol{{D}}({\boldsymbol{q}},\boldsymbol{\zeta})$ models dissipative effects, and $\boldsymbol{{g}}(\boldsymbol{q})$ encapsulates the gravity and buoyancy effects.
\subsection{Dynamical Systems}
Consider the initial value problem:
\begin{equation}
\dot{\xi} = H(t,\xi), \xi(0)=\xi^0\in\Omega_{\xi}, \label{eq:initial_value_problem}
\end{equation}
with $H:\mathbb{R}_{\geq 0}\times\Omega_{\xi} \to \mathbb{R}^n$, where $\Omega_{\xi}\subseteq\mathbb{R}^n$ is a non-empty open set.
\begin{definition} \citep{Sontag}
A solution $\xi(t)$ of the initial value problem \eqref{eq:initial_value_problem} is maximal if it has no proper right extension that is also a solution of \eqref{eq:initial_value_problem}.
\end{definition}
\begin{theorem} \citep{Sontag} \label{thm:dynamical systems}
Consider the initial value problem \eqref{eq:initial_value_problem}. Assume that $H(t,\xi)$ is: a) locally Lipschitz in $\xi$ for almost all $t\in\mathbb{R}_{\geq 0}$, b) piecewise continuous in $t$ for each fixed $\xi\in\Omega_{\xi}$ and c) locally integrable in $t$ for each fixed $\xi\in\Omega_{\xi}$. Then, there exists a maximal solution $\xi(t)$ of \eqref{eq:initial_value_problem} on the time interval $[0,\tau_{\max})$, with $\tau_{\max}\in\mathbb{R}_{> 0}\cup\{\infty\}$, such that $\xi(t)\in\Omega_{\xi},\forall t\in[0,\tau_{\max})$.
\end{theorem}
\begin{proposition} \citep{Sontag} \label{prop:dynamical systems}
Assume that the hypotheses of Theorem \ref{thm:dynamical systems} hold. For a maximal solution $\xi(t)$ on the time interval $[0,\tau_{\max})$ with $\tau_{\max}<\infty$ and for any compact set $\Omega'_{\xi}\subseteq\Omega_{\xi}$, there exists a time instant $t'\in[0,\tau_{\max})$ such that $\xi(t')\notin\Omega'_{\xi}$.
\end{proposition}
\section{Problem Statement}
We define here the problem that we aim to solve in this paper:
\begin{problem}
Given a UVMS as well as a desired force profile that should be applied by the UVMS on a compliant environment with an entirely unknown model, and assuming uncertainty in the UVMS dynamic parameters, design a feedback control law such that the following are guaranteed:
\begin{enumerate}
\item a predefined behavior of the system in terms of desired overshoot and prescribed transient and steady state performance.
\item robustness with respect to the external disturbances and noise on the measurement devices.
\end{enumerate}
\end{problem}
\section{Control Methodology}
In this work we assume that the UVMS is equipped with a force/torque sensor at its end-effector frame. However, its accuracy is not perfect and the system suffers from noise in the force/torque measurements. In order to combine the features of stiffness and force control, a parallel force/position regulator is designed. This is achieved by closing a force feedback loop around a position/velocity feedback loop, since the output of the force controller becomes the reference input to the dynamic controller of the UVMS.
\subsection{Control Design}
Let $\boldsymbol{f}_e^d(t)$ be the desired force profile which should be exerted on the environment by the UVMS. Hence, let us define the force error:
\begin{align}
\boldsymbol{e}_f(t)=\boldsymbol{f}_e(t)+\Delta\boldsymbol{f}_e(t)-\boldsymbol{f}_e^d(t)\in \mathbb{R}^3, \label{eq8}
\end{align}
where $\Delta\boldsymbol{f}_e(t)$ denotes the bounded noise on the force's measurement.
\begin{comment}
The term $f_{\text{des}} \in \mathbb{R}^3$ represents a feedforward force aimed at creating the presence of a force error defined as $\tilde{f}=f_{\text{des}}-f_e \in \mathbb{R}^3$ in the closed-loop equation of the system. The end-effector position should follow this frame during the interaction task. Accordingly,
the actual end-effector linear velocity $\dot{p}$ is taken to follow the linear velocity $\dot{p}_c$ of the compliant frame.
\end{comment}
Also we define the end-effector orientation error as:
\begin{align}
\boldsymbol{e}_o(t)= {^o\boldsymbol{x}}_e(t)- {^o\boldsymbol{x}}^d_e(t) \in \mathbb{R}^3, \label{eq_or}
\end{align}
where ${^o\boldsymbol{x}}^d_e(t)\in \mathbb{R}^3$ is the predefined desired orientation of the end-effector (e.g., ${^o\boldsymbol{x}}^d_e(t)=[0,~0,~0]^\top$). Now we can set the vector of the desired end-effector configuration as $\boldsymbol{x}^d_e(t)= [\boldsymbol{f}_e^d(t)^\top,({^o\boldsymbol{x}^d_e(t)})^\top]^\top$. In addition, the overall error vector is given by:
\begin{align}
\boldsymbol{e}_x(t)=[e_{x_1}(t),\ldots,e_{x_6}(t)]^\top=[\boldsymbol{e}^\top_f(t),\boldsymbol{e}^\top_o(t)]^\top\label{eq:ov:er}
\end{align}
A suitable methodology for the control design at hand is that of prescribed performance control, recently proposed in \citep{Bechlioulis20141217,C-2011}, which is adapted here in order to achieve predefined transient and steady state response bounds for the errors. Prescribed performance characterizes the behavior where the aforementioned errors evolve strictly within a predefined region that is bounded by absolutely decaying functions of time, called performance functions. The mathematical expressions of prescribed performance are given by the inequalities: $-\rho_{x_j}(t)<e_{x_j}(t)<\rho_{x_j}(t),~ j=1,\ldots,6$, where $\rho_{x_j}:[t_0,\infty)\rightarrow\mathbb{R}_{>0}$ with $\rho_{x_j}(t)=(\rho^0_{x_j}-\rho_{x_j}^\infty)e^{-l_{x_j}t}+\rho_{x_j}^\infty$ and $l_{x_j}>0,\rho^0_{x_j}>\rho^\infty_{x_j}>0$, are designer-specified, smooth, bounded and decreasing positive functions of time with positive parameters $l_{x_j},\rho^\infty_{x_j}$, incorporating the desired transient and steady state performance, respectively. In particular, the decreasing rate of $\rho_{x_j}$, which is affected by the constant $l_{x_j}$, introduces a lower bound on the speed of convergence of $e_{x_j}$. Furthermore, the constants $\rho^\infty_{x_j}$ can be set arbitrarily small, thus achieving practical convergence of the errors to zero.
Now, we propose a state feedback control protocol $\boldsymbol{\tau}(t)$ that does not incorporate any information regarding the UVMS dynamic model \eqref{eq4} or the model of the compliant environment, and achieves tracking of the smooth and bounded desired force trajectory $\boldsymbol{f}_e^d(t)\in \mathbb{R}^3$ as well as of ${^o\boldsymbol{x}}^d_e(t)$ with an a priori specified convergence rate and steady state error. Thus, given the errors \eqref{eq:ov:er}:
\textbf{Step I-a}: Select the corresponding functions $\rho_{x_j}(t)=(\rho^0_{x_j}-\rho_{x_j}^\infty)e^{-l_{x_j}t}+\rho_{x_j}^\infty$ with $\rho^0_{x_j}>|e_{x_j}(t_0)|$, $\rho^0_{x_j}>\rho^\infty_{x_j}>0$ and $l_{x_j}>0$, $\forall j\in\{1,\ldots,6\}$, in order to incorporate the desired transient and steady state performance specifications, and define the normalized errors:
\begin{align}
\xi_{x_j}(t)=\frac{e_{x_j}(t)}{\rho_{x_j}(t)},~j=\{1,\ldots,6\}\label{eq11}
\end{align}
\textbf{Step I-b}: Define the transformed errors $\varepsilon_{x_j}$ as:
\begin{align}
\varepsilon_{x_j}(\xi_{x_j})=\ln \Big(\frac{1+\xi_{x_j}}{1-\xi_{x_j}}\Big),~j=\{1,\ldots,6\}\label{eq12}
\end{align}
Now, the reference velocity $\dot{\boldsymbol{x}}^r_e=[\dot{x}^r_{e_1},\ldots,\dot{x}^r_{e_6}]^\top$ is designed as:
\begin{align}
\dot{x}^r_{e_j}(t)=-k_{x_j}\varepsilon_{x_j}(\xi_{x_j}),~k_{x_j}>0,~j\in\{1,\ldots,6\}\label{eq13}
\end{align}
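Steps I-a and I-b are purely scalar operations; one error channel can be sketched as follows (the envelope parameters and gain below are illustrative placeholders, not tuned values):

```python
import math

def perf(t, rho0, rho_inf, l):
    """Performance envelope rho(t) = (rho0 - rho_inf) * exp(-l t) + rho_inf."""
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

def reference_velocity(e, t, rho0=1.0, rho_inf=0.05, l=1.0, k=2.0):
    """One channel of the first-level law: normalize the error by rho(t),
    pass it through the logarithmic transformation, and scale by -k."""
    xi = e / perf(t, rho0, rho_inf, l)       # valid while |e(t)| < rho(t)
    eps = math.log((1.0 + xi) / (1.0 - xi))  # transformed error
    return -k * eps
```

The transformation blows up as $|\xi_{x_j}|\to 1$, which is precisely the mechanism that keeps the error strictly inside the performance envelope.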
The task-space desired motion profile $\dot{\boldsymbol{x}}^r_e$ can be extended to the joint level using the kinematic equation \eqref{eq122}:
\begin{equation}
{\boldsymbol{\zeta}}^r(t)=\boldsymbol{J}(\boldsymbol{q})^{+}\dot{\boldsymbol{x}}^r_e \label{eq9}
\end{equation}
where $\boldsymbol{J}(\boldsymbol{q})^{+}$ denotes the Moore-Penrose pseudo-inverse of Jacobian $\boldsymbol{J}(\boldsymbol{q})$.
\begin{remark}
It is worth mentioning that $\dot{\boldsymbol{x}}^r_e$ can also be extended to the joint level via:
\begin{align*}
{\boldsymbol{\zeta}}^r(t)=\boldsymbol{J}(\boldsymbol{q})^{\#}\dot{\boldsymbol{x}}^r_e+\big(\boldsymbol{I}_{n\times n}\!-\!\boldsymbol{J}(\boldsymbol{q})^{\#}\boldsymbol{J}\big(\boldsymbol{q}\big)\big)\dot{\boldsymbol{x}}^0
\end{align*}
where $\boldsymbol{J}(\boldsymbol{q})^{\#}$ denotes the generalized pseudo-inverse \citep{citeulike:6536020} of the Jacobian $\boldsymbol{J}(\boldsymbol{q})$ and $\dot{\boldsymbol{x}}^0$ denotes secondary tasks which can be regulated independently to achieve secondary goals (e.g., respecting the manipulator's joint limits, increasing manipulability) and do not contribute to the end-effector's velocity \citep{Simetti2016877}.
\end{remark}
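The claim in the remark, that the projected secondary task does not contribute to the end-effector velocity, is easy to verify numerically. The sketch below uses the Moore-Penrose pseudo-inverse in place of a weighted generalized inverse (a simplification), with a random full-row-rank matrix standing in for $\boldsymbol{J}(\boldsymbol{q})$:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((6, 8))   # hypothetical 6-dim task, n = 8 velocities
x_r = rng.standard_normal(6)      # task-space reference velocity
x_0 = rng.standard_normal(8)      # arbitrary secondary-task velocity

Jp = np.linalg.pinv(J)
zeta_r = Jp @ x_r + (np.eye(8) - Jp @ J) @ x_0
# J @ zeta_r recovers x_r: the null-space term is invisible at the end-effector.
```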
\textbf{Step II-a}: Define the velocity error vector as:
\begin{align}
\boldsymbol{e}_\zeta(t)=[{e}_{\zeta_1}(t),\ldots,{e}_{\zeta_n}(t)]^\top= {{\boldsymbol{\zeta}}}(t)- {{\boldsymbol{\zeta}}}^r(t) \label{eq14}
\end{align}
and select the corresponding functions $\rho_{\zeta_j}(t)=(\rho^0_{\zeta_j}-\rho_{\zeta_j}^\infty)e^{-l_{\zeta_j}t}+\rho_{\zeta_j}^\infty$ with $\rho^0_{\zeta_j}>|e_{\zeta_j}(t_0)|$, $\rho^0_{\zeta_j}>\rho^\infty_{\zeta_j}>0$ and $l_{\zeta_j}>0$, $\forall j\in\{1,\ldots,n\}$, and define the normalized velocity errors $\boldsymbol{\xi}_\zeta$ as:
\begin{align}
\boldsymbol{\xi}_{\zeta}(t)=[\xi_{\zeta_1},\ldots,\xi_{\zeta_n}]^\top=\boldsymbol{P}^{-1}_\zeta(t)\boldsymbol{e}_\zeta(t)\label{eq15}
\end{align}
where $\boldsymbol{P}_\zeta(t)=\text{diag}\{\rho_{\zeta_j}\},j\in\{1,\ldots,n\}$.\\
\textbf{Step II-b}: Define the transformed errors $\boldsymbol{\varepsilon}_{\zeta}(\boldsymbol{\xi}_{\zeta})=[\varepsilon_{\zeta_1}(\xi_{\zeta_1}),\ldots,\varepsilon_{\zeta_n}(\xi_{\zeta_n})]^\top$ and the signal $\boldsymbol{R}_{\zeta}(\boldsymbol{\xi}_{\zeta})=\text{diag}\{r_{\zeta_j}\}$, $~j\in\{1,\ldots,n\}$ as:
\begin{align}\label{eq16}
&\boldsymbol{\varepsilon}_{\zeta}(\boldsymbol{\xi}_{\zeta})=\Big[\ln \Big(\frac{1+\xi_{\zeta_1}}{1-\xi_{\zeta_1}}\Big),\ldots,\ln \Big(\frac{1+\xi_{\zeta_n}}{1-\xi_{\zeta_n}}\Big)\Big]^\top\\
& \boldsymbol{R}_{\zeta}(\boldsymbol{\xi}_{\zeta})\!=\!\text{diag}\{r_{\zeta_j}\!(\xi_{\zeta_j}\!)\}\!=\!\text{diag}\!\Big\{\!\frac{2}{1-\xi_{\zeta_j}^2\!}\Big\},j\!=\!\{1,\ldots,n\}\label{eq17}
\end{align}
and finally design the state feedback control law $\tau_j,~j\in\{1,\ldots,n\}$ as:
\begin{align}
\tau_j\!(\xi_{x_j}\!,\xi_{\zeta_j}\!,t)=-k_{\zeta_j}\frac{r_{\zeta_j}(\xi_{\zeta_j})\varepsilon_{\zeta_j}(\xi_{\zeta_j})}{\rho_{\zeta_j}(t)},~j=\{1,\ldots,n\}\label{eq18}
\end{align}
where $k_{\zeta_j}$ is a positive gain. The control law \eqref{eq18} can be written in vector form as:
\begin{align}
\boldsymbol{\tau}(\boldsymbol{e}_x(t),\boldsymbol{e}_\zeta(t),t)&=[ \tau_1(\xi_{x_1},\xi_{\zeta_1},t),\ldots, \tau_n(\xi_{x_n},\xi_{\zeta_n},t)]^\top\nonumber\\
&=-\boldsymbol{K}_\zeta\boldsymbol{P}^{-1}_\zeta(t)\boldsymbol{R}_{\zeta}(\boldsymbol{\xi}_{\zeta})\boldsymbol{\varepsilon}_{\zeta}(\boldsymbol{\xi}_{\zeta})\label{eq19}
\end{align}
where $\boldsymbol{K}_\zeta$ is the diagonal matrix containing the gains $k_{\zeta_j}$. Now we are ready to state the main theorem of the paper:
\begin{theorem}
Given the errors defined in \eqref{eq:ov:er} and the required transient and steady state performance specifications, select the exponentially decaying performance functions $\rho_{x_j}(t)$, $\rho_{\zeta_j}(t)$ such that the desired performance specifications are met. Then the state feedback control law \eqref{eq19} guarantees tracking of the trajectories $\boldsymbol{f}_e^d(t)\in \mathbb{R}^3$ as well as ${^o\boldsymbol{x}}^d_e(t)$:
\begin{align*}
\lim_{t\rightarrow\infty}\boldsymbol{f}_e(t)=\boldsymbol{f}^d_e(t)~\text{and}~\lim_{t\rightarrow\infty}{^o\boldsymbol{x}_e(t)}={^o\boldsymbol{x}^d_e(t)}
\end{align*}
with the desired transient and steady state performance specifications.
\end{theorem}
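Before turning to the proof, note how light the implementation is: each component of the control law is a static map of one normalized velocity error. A sketch of a single joint channel (envelope parameters and gain are illustrative placeholders):

```python
import math

def tau_channel(e_zeta, t, rho0=1.0, rho_inf=0.05, l=1.0, k=5.0):
    """One joint of the second-level law: tau = -k * r(xi) * eps(xi) / rho(t)."""
    rho = (rho0 - rho_inf) * math.exp(-l * t) + rho_inf
    xi = e_zeta / rho                        # valid while |e_zeta(t)| < rho(t)
    eps = math.log((1.0 + xi) / (1.0 - xi))  # transformed error
    r = 2.0 / (1.0 - xi ** 2)                # gradient of the transformation
    return -k * r * eps / rho
```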
\begin{pf}
For the proof we follow parts of the approach in \citep{Bechlioulis20141217}. We start by differentiating \eqref{eq11} and \eqref{eq15} with respect to time, substituting the system dynamics \eqref{eq4} as well as \eqref{eq13} and \eqref{eq18}, and employing \eqref{eq:ov:er} and \eqref{eq14}, obtaining:
\begin{align}
\dot{\xi}_{x_j}(\xi_{x_j},t)&=h_{x_j}(\xi_{x_j},t)\nonumber\\
&=\rho^{-1}_{x_j}(t)(\dot{e}_{x_j}(t)-\dot{\rho}_{x_j}\!(t)\xi_{x_j} )\nonumber\\
&=\rho^{-1}_{x_j}(t)(-k_{x_j}\varepsilon_{x_j}(\xi_{x_j})+\boldsymbol{J}_{(j,:)}\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta-\dot{x}^d_{e_j}(t))\nonumber\\
&-\rho^{-1}_{x_j}(t)(\dot{\rho}_{x_j}\!(t)\xi_{x_j}), \forall j\in\{1,\ldots,6\}\label{eq21}
\end{align}
\begin{align}
\dot{\boldsymbol{{\xi}}}_{\zeta}(\boldsymbol{\xi}_{\zeta},t)&=h_{\zeta}(\boldsymbol{\xi}_{\zeta},t)\nonumber\\
& = \boldsymbol{P}_\zeta^{-1}(\dot{\boldsymbol{\zeta}}-\dot{\boldsymbol{\zeta}}^r)-\boldsymbol{P}_\zeta^{-1}\dot{\boldsymbol{P}}_\zeta{\boldsymbol{\xi}}_\zeta \nonumber\\
& = -\boldsymbol{P}_\zeta^{-1}\boldsymbol{M}^{-1}\boldsymbol{K}_\zeta\boldsymbol{P}_\zeta^{-1}\boldsymbol{R}_\zeta\boldsymbol{\varepsilon}_{\zeta}\nonumber\\
&\quad-\boldsymbol{P}_\zeta^{-1}\Big[\boldsymbol{M}^{-1}\Big(\boldsymbol{C}\cdot(\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta+{\boldsymbol{\zeta}}^r)+\boldsymbol{D}\cdot(\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta+{\boldsymbol{\zeta}}^r)\nonumber\\
&\quad+\boldsymbol{g}+{\boldsymbol{J}^g}^\top\boldsymbol{\lambda}+\boldsymbol{\delta}(t)\Big)+\dot{\boldsymbol{P}}_\zeta\boldsymbol{\xi}_\zeta+\frac{\partial}{\partial t}{\boldsymbol{\zeta}}^r\Big]\label{eq22}
\end{align}
where $\boldsymbol{J}_{(j,:)}$ denotes the $j$-th row of the Jacobian $\boldsymbol{J}$. Now let us define the vector of normalized state errors and the generalized normalized error as $\boldsymbol{\xi}_{x}=[{\xi}_{x_1},\ldots,{\xi}_{x_6}]^\top\!$ and $\boldsymbol{\xi}=[\boldsymbol{\xi}_x^\top,\boldsymbol{\xi}_\zeta^\top]^\top$, respectively. Moreover, let us define $\dot{\boldsymbol{\xi}}_x=h_x(\boldsymbol{\xi}_x,t)=[h_{x_1}(\xi_{x_1},t),\ldots,h_{x_6}(\xi_{x_6},t)]^\top$. Equations \eqref{eq21} and \eqref{eq22} can now be written in compact form as:
\begin{align}
\dot{\boldsymbol{\xi}}=h(\boldsymbol{\xi},t)= [h_x^\top(\boldsymbol{\xi}_x,t),h_\zeta^\top(\boldsymbol{\xi}_\zeta,t)]^\top\label{eq23}
\end{align}
Let us define the open set $\Omega_\xi=\Omega_{\xi_x}\times\Omega_{\xi_\zeta}$ with $\Omega_{\xi_x}=(-1,1)^6$ and $\Omega_{\xi_\zeta}=(-1,1)^n$. In what follows, we proceed in two phases. First, we ensure the existence of a unique maximal solution $\boldsymbol{\xi}(t)$ of \eqref{eq23} over the set $\Omega_\xi$ for a time interval $[0,t_{\text{max}})$ (i.e., $\boldsymbol{\xi}(t)\in\Omega_\xi, \forall t\in[0,t_{\text{max}})$). Then, we prove that the proposed controller \eqref{eq19} guarantees, for all $t\in[0,t_{\text{max}})$, the boundedness of all closed-loop signals of \eqref{eq23} as well as that $\boldsymbol{\xi}(t)$ remains strictly within the set $\Omega_\xi$, which leads to $t_{\text{max}}=\infty$ and completes the proof.
\textbf{Phase A}: The set $\Omega_\xi$ is nonempty and open; thus, by selecting $\rho^0_{x_j}>|e_{x_j}(0)|$, $\forall j\in\{1,\ldots,6\}$ and $\rho^0_{\zeta_j}>|e_{\zeta_j}(0)|$, $\forall j\in\{1,\ldots,n\}$, we guarantee that $\boldsymbol{\xi}_x(0)\in\Omega_{\xi_x}$ and $\boldsymbol{\xi}_\zeta(0)\in\Omega_{\xi_\zeta}$. Additionally, $h$ is continuous in $t$ and locally Lipschitz in $\boldsymbol{\xi}$ over $\Omega_\xi$. Therefore, the hypotheses of Theorem \ref{thm:dynamical systems} hold and the existence of a maximal solution $\boldsymbol{\xi}(t)$ of \eqref{eq23} on a time interval $[0,t_{\text{max}})$ such that $\boldsymbol{\xi}(t) \in \Omega_\xi,~\forall t\in[0,t_{\text{max}})$ is ensured.
\begin{figure*}
\centering
\begin{tikzpicture}
\node at (-13.0, 2.87) {$\begin{bmatrix}f_e^d(t)\\ ^o{x}_e^d(t) \end{bmatrix}$};
\filldraw[fill=green!10, dashed, line width=.045cm] (-10.80, 0.5) rectangle +(7.2, 3.2);
\node at (-7.2, 0.9) {$\text{Proposed Control Algorithm}$};
\filldraw[fill=blue!10, line width=.045cm] (-12.0,2.20) circle (0.25cm);
\node at (-11.25,2.50) {$e_x(t)$};
\draw [color=black,thick,->,>=stealth', line width=.045cm](-11.75, 2.2) to (-10.5, 2.2);
\draw [color=black,thick,->,>=stealth', line width=.045cm](-13.1, 2.2) to (-12.25, 2.2);
\filldraw[fill=orange!12, line width=.045cm] (-10.5, 1.58) rectangle +(2.2, 1.3);
\node at (-9.4,2.60) {$\text{first level}$};
\node at (-9.4,1.95) {${\boldsymbol{\zeta}}^r\!(\xi_{x_j}\!,t)$};
\draw [black, line width = 0.030cm] (-10.5, 2.30) -- (-8.3,2.30);
\draw [color=black,thick,->,>=stealth', line width=.045cm](-8.3, 2.2) to (-6.0, 2.2);
\node at (-7.2,2.50) {${\boldsymbol{\zeta}}^r\!(\xi_{x_j}\!,t)$};
\filldraw[fill=orange!12, line width=.045cm] (-6.0, 1.58) rectangle +(2.2, 1.3);
\node at (-4.9,2.60) {$\text{second level}$};
\node at (-4.9,1.95) {$\boldsymbol{\tau}(\boldsymbol{e}_x,\boldsymbol{e}_\zeta,t)$};
\draw [black, line width = 0.030cm] (-6.00, 2.30) -- (-3.8,2.30);
\draw [color=black,thick,->,>=stealth', line width=.045cm](-3.8, 2.2) to (-1.8, 2.2);
\filldraw[fill=orange!12, line width=.045cm] (-1.8, 1.58) rectangle +(1.80, 1.3);
\node at (-0.9,2.25) {$\text{UVMS}$};
\draw [color=black,thick,->,>=stealth', line width=.045cm](-1.0, 3.7) to (-1.0, 2.90);
\node at (-1.0,4.38) {$\text{external disturbance}$};
\node at (-1.0,3.97) {$\delta(t)$};
\draw [color=black,thick,->,>=stealth', line width=.045cm](0.0, 2.2) to (2.0, 2.2);
\filldraw[fill=red!10, line width=.045cm] (2.0, 1.58) rectangle +(2.2, 1.3);
\node at (3.1,2.25) {$\text{Environment}$};
\filldraw[fill=orange!12, line width=.045cm] (-5.0, -2.00) rectangle +(2.20, 1.3);
\node at (-3.9,-1.33) {$\text{force sensor}$};
\draw [color=black,thick,->,>=stealth', line width=.045cm](-4.0, -2.8) to (-4.0, -2.0);
\node at (-4.0,-3.00) {$\text{noise}$};
\draw [black, line width = .045cm] (-12.03,-1.35) -- (-5.0,-1.350);
\draw [color=black,thick,->,>=stealth', line width=.045cm](-12.0, -1.35) to (-12.0, 1.95);
\draw [black, thick,->,>=stealth', line width = .045cm] (3.335,-1.350) to (-2.8,-1.35);
\draw [black, line width = .045cm] (3.3,-1.35) -- (3.3,1.6);
\node at (-7.8,-1.70) {$f_e(t)+\Delta f_e(t)$};
\end{tikzpicture}
\centering
\caption{The closed loop block diagram of the proposed control scheme.}
\label{fig:closed_loop_control_scheme}
\end{figure*}
\textbf{Phase B}: In Phase A we proved that $\boldsymbol{\xi}(t) \in \Omega_\xi,~\forall t\in[0,t_{\text{max}}]$; thus, it can be concluded that:
\begin{subequations}\label{eq24}
\begin{gather}
\xi_{x_j}(t)=\frac{e_{x_j}}{\rho_{x_j}}\in(-1,1),~ \forall j\in\{1,\ldots,6\} \\
\xi_{\zeta_j}(t)=\frac{e_{\zeta_j}}{\rho_{\zeta_j}}\in(-1,1),~ \forall j\in\{1,\ldots,n\}
\end{gather}
\end{subequations}
for all $t\in[0,t_{\text{max}}]$, from which we obtain that $e_{x_j}(t)$ and $e_{\zeta_j}(t)$ are absolutely bounded by $\rho_{x_j}$ and $\rho_{\zeta_j}$, respectively. Therefore, the error vectors $\varepsilon_{x_j}(\xi_{x_j}),~\forall j\in\{1,\ldots,6\}$ and $\varepsilon_{\zeta_j}(\xi_{\zeta_j}),~\forall j\in\{1,\ldots,n\}$, defined in \eqref{eq12} and \eqref{eq16}, respectively, are well defined for all $t\in [0,t_{\text{max}}]$. Hence, consider the positive definite and radially unbounded functions $V_{x_j}(\varepsilon_{x_j})=\varepsilon^2_{x_j},~\forall j\in\{1,\ldots,6\}$. Differentiating $V_{x_j}$ with respect to time and substituting \eqref{eq21} results in:
\begin{align}
\dot{V}_{x_j}\!=-\frac{4\varepsilon_{x_j}\rho^{-1}_{x_j}}{(1-\xi^2_{x_j})}\!\Big(k_{x_j}\varepsilon_{x_j}(\xi_{x_j})\!+\!\dot{x}^d_{e_j}\!+\!\dot{\rho}_{x_j}\!(t)\xi_{x_j}\!-\!\boldsymbol{J}_{(j,:)}\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta\! \Big)\label{eq25}
\end{align}
It is well known that the Jacobian $\boldsymbol{J}$ depends only on the vehicle's orientation and the angular positions of the manipulator's joints, which are bounded. Moreover, since $\dot{x}^d_{e_j}$, $\rho_{x_j}$ and $\rho_{v_j}$ are bounded by construction and $\xi_{x_j}$, $\xi_{v_j}$ are also bounded in $(-1,1)$ owing to \eqref{eq24}, $\dot{V}_{x_j}$ becomes:
\begin{align}
\dot{V}_{x_j} \leq\frac{4\rho^{-1}_{x_j}}{(1-\xi^2_{x_j})}\!\Big(B_x |\varepsilon_{x_j}| -k_{x_j}|\varepsilon_{x_j}|^2\Big)\label{eq26}
\end{align}
$\forall t\in [0,t_{\text{max}}]$, where $B_x$ is an unknown positive constant independent of $t_{\text{max}}$ satisfying $B_x>|\dot{x}^d_{e_j}\!+\!\dot{\rho}_{x_j}\!(t)\xi_{x_j}\!-\!\boldsymbol{J}_{(j,:)}\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta|$. Therefore, we conclude that $\dot{V}_{x_j}$ is negative when $|\varepsilon_{x_j}|>\frac{B_x}{k_{x_j}}$ and subsequently that
\begin{align}
|\varepsilon_{x_j}(\xi_{x_j}(t)) |\leq \bar{\varepsilon}_{x_j}=\max\{|\varepsilon_{x_j}(\xi_{x_j}(0))|,\frac{B_x}{k_{x_j}}\}\label{eq27}
\end{align}
$\forall t\in [0,t_{\text{max}}],~\forall j\in\{1,\ldots,6\}$. Furthermore, from \eqref{eq12}, inverting the logarithmic function, we obtain:
\begin{align}
-1<\frac{e^{-\bar{\varepsilon}_{x_j}}-1 }{e^{-\bar{\varepsilon}_{x_j}}+1}=\underline{\xi}_{x_j} \leq \xi_{x_j}(t)\leq\bar{\xi}_{x_j} =\frac{e^{\bar{\varepsilon}_{x_j}}-1 }{e^{\bar{\varepsilon}_{x_j}}+1}<1 \label{eq28}
\end{align}
$\forall t\in [0,t_{\text{max}}],~j\in\{1,\ldots,6\}$. Due to \eqref{eq28}, the reference velocity vector $\dot{\boldsymbol{x}}^r_e$ as defined in \eqref{eq13}, remains bounded for all $t\in[0,t_{\text{max}}]$. Moreover, invoking $\dot{\boldsymbol{x}}_e={\dot{\boldsymbol{x}}}^r_e(t)+\boldsymbol{P}_v(t)\boldsymbol{\xi}_v$ from \eqref{eq14}, \eqref{eq15} and \eqref{eq24}, we also conclude the boundedness of $\dot{\boldsymbol{x}}_e$ for all $t\in [0,t_{\text{max}}]$. Finally, differentiating ${\dot{\boldsymbol{x}}}^r_e(t)$ w.r.t time and employing \eqref{eq21}, \eqref{eq24} and \eqref{eq28}, we conclude the boundedness of $\frac{\partial}{\partial t}{\dot{\boldsymbol{x}}}^r_e(t)$, $\forall t\in [0,t_{\text{max}}]$.
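The bound \eqref{eq28} follows from inverting the logarithmic error transformation. A minimal numerical sketch (assuming the standard form $\varepsilon=\ln[(1+\xi)/(1-\xi)]$, which is consistent with the inverse map in \eqref{eq28}; the exact form of \eqref{eq12} lies outside this excerpt) confirms that any finite bound $\bar{\varepsilon}_{x_j}$ confines $\xi_{x_j}$ strictly inside $(-1,1)$:

```python
import math

def eps_from_xi(xi):
    # logarithmic error transformation (assumed form of eq. (12))
    return math.log((1.0 + xi) / (1.0 - xi))

def xi_bound(eps_bar):
    # inverse map of eq. (28): xi_bar = (e^eps_bar - 1)/(e^eps_bar + 1)
    return (math.exp(eps_bar) - 1.0) / (math.exp(eps_bar) + 1.0)

eps_bar = 2.5                       # any finite bound on |eps|
xi_bar = xi_bound(eps_bar)

# the bound lies strictly inside (-1, 1) ...
assert -1.0 < -xi_bar < xi_bar < 1.0
# ... and the two maps are mutual inverses
assert abs(eps_from_xi(xi_bar) - eps_bar) < 1e-12
# note (e^eps - 1)/(e^eps + 1) = tanh(eps/2)
assert abs(xi_bar - math.tanh(eps_bar / 2.0)) < 1e-12
```

This is why a finite $\bar{\varepsilon}_{x_j}$ in \eqref{eq27} immediately yields the strict bounds $\underline{\xi}_{x_j}$, $\bar{\xi}_{x_j}$ of \eqref{eq28}.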
Following the same line of proof, we consider the positive definite and radially unbounded function $V_\zeta(\boldsymbol{\varepsilon}_\zeta)=\frac{1}{2}||\boldsymbol{\varepsilon}_\zeta||^2$. By differentiating $V_\zeta$ with respect to time, substituting \eqref{eq22} and employing the continuity of $\boldsymbol{M}$, $\boldsymbol{C}$, $\boldsymbol{D}$, $\boldsymbol{g}$, $\boldsymbol{\lambda}$, $\boldsymbol{\delta}$, $\boldsymbol{\xi}_x$, $\boldsymbol{\xi}_\zeta$, $\dot{\boldsymbol{P}}_\zeta$, $\frac{\partial}{\partial t}{{\boldsymbol{\zeta}}}^r$, $\forall t\in [0,t_{\text{max}}]$, we obtain:
\begin{align*}
\dot{V}_\zeta\leq ||\boldsymbol{P}^{-1}_\zeta\boldsymbol{R}_\zeta(\boldsymbol{\xi}_\zeta)\boldsymbol{\varepsilon}_\zeta||\Big(B_\zeta-\boldsymbol{K}_\zeta\lambda_M||\boldsymbol{P}^{-1}_\zeta\boldsymbol{R}_\zeta(\boldsymbol{\xi}_\zeta) \boldsymbol{\varepsilon}_\zeta|| \Big)
\end{align*}
$\forall t\in [0,t_{\text{max}}]$, where $\lambda_M$ is the minimum singular value of the positive definite matrix $\boldsymbol{M}^{-1}$ and $B_\zeta$ is a positive constant independent of $t_{\text{max}}$, satisfying
\begin{align*}
B_\zeta\geq &|| \boldsymbol{M}^{-1}\Big( \boldsymbol{C}\cdot(\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta +{\boldsymbol{\zeta}}^r(t)) + \boldsymbol{D}\cdot(\boldsymbol{P}_\zeta\boldsymbol{\xi}_\zeta +{\boldsymbol{\zeta}}^r(t)) \\
&+\boldsymbol{g}+{\boldsymbol{J}^g}^\top\boldsymbol{\lambda}+\boldsymbol{\delta}(t)+ \dot{\boldsymbol{P}}_\zeta\boldsymbol{\xi}_\zeta+\frac{\partial}{\partial t}{{\boldsymbol{\zeta}}}^r \Big) ||
\end{align*}
Thus, $\dot{V}_\zeta$ is negative when $||\boldsymbol{P}^{-1}_\zeta\boldsymbol{R}_\zeta(\boldsymbol{\xi}_\zeta) \boldsymbol{\varepsilon}_\zeta|| >B_\zeta(\boldsymbol{K}_\zeta\lambda_M)^{-1}$, which by employing the definitions of $\boldsymbol{P}_\zeta$ and $\boldsymbol{R}_\zeta$, becomes $||\boldsymbol{\varepsilon}_\zeta ||> B_\zeta(\boldsymbol{K}_\zeta\lambda_M)^{-1}\max\{\rho^0_{\zeta_1},\ldots,\rho^0_{\zeta_n}\} $. Therefore, we conclude that:\begin{small}
\begin{align*}
||\boldsymbol{\varepsilon}_\zeta (\boldsymbol{\xi}_\zeta(t))||\leq\bar{\varepsilon}_\zeta=\max\Big\{||\boldsymbol{\varepsilon}_\zeta (\boldsymbol{\xi}_\zeta(0))||,\, B_\zeta(\boldsymbol{K}_\zeta\lambda_M)^{-1}\cdot\max\{\rho^0_{\zeta_1},\ldots,\rho^0_{\zeta_n}\} \Big\}
\end{align*}\end{small}
$\forall t\in [0,t_{\text{max}}]$. Furthermore, from \eqref{eq17}, invoking that $|\varepsilon_{\zeta_j}|\leq || \boldsymbol{\varepsilon}_\zeta||$, we obtain:
\begin{align}
-1<\frac{e^{-\bar{\varepsilon}_{\zeta_j}}-1 }{e^{-\bar{\varepsilon}_{\zeta_j}}+1}=\underline{\xi}_{\zeta_j} \leq \xi_{\zeta_j}(t)\leq\bar{\xi}_{\zeta_j} =\frac{e^{\bar{\varepsilon}_{\zeta_j}}-1 }{e^{\bar{\varepsilon}_{\zeta_j}}+1}<1 \label{eq29}
\end{align}
$\forall t\in [0,t_{\text{max}}],~j\in\{1,\ldots,n\}$, which also leads to the boundedness of the control law \eqref{eq19}. Now we will show that $t_{\text{max}}$ can be extended to $\infty$. Notice from \eqref{eq28} and \eqref{eq29} that $\boldsymbol{\xi}(t)\in\Omega^{'}_\xi=\Omega^{'}_{\xi_x}\times \Omega^{'}_{\xi_\zeta},~\forall t\in [0,t_{\text{max}}]$, where:
\begin{align*}
\Omega^{'}_{\xi_x}=[\underline{\xi}_{x_1},\bar{\xi}_{x_1}]\times\cdots\times[\underline{\xi}_{x_6},\bar{\xi}_{x_6} ],\\
\Omega^{'}_{\xi_\zeta}=[\underline{\xi}_{\zeta_1},\bar{\xi}_{\zeta_1}]\times\cdots\times[\underline{\xi}_{\zeta_n},\bar{\xi}_{\zeta_n} ],
\end{align*}
are nonempty and compact subsets of $\Omega_{\xi_x}$ and $\Omega_{\xi_\zeta}$, respectively. Hence, assuming $t_{\text{max}}<\infty$ and since $\Omega^{'}_\xi\subset \Omega_\xi$, Proposition 1 dictates the existence of a time instant $t^{'}\in [0,t_{\text{max}}]$ such that $\boldsymbol{\xi}(t^{'})\notin \Omega^{'}_\xi$, which is a clear contradiction. Therefore, $t_{\text{max}}=\infty$. Thus, all closed-loop signals remain bounded and, moreover, $\boldsymbol{\xi}(t)\in \Omega^{'}_\xi,~\forall t\geq0$.
Finally, from \eqref{eq11} and \eqref{eq28} we conclude that:
\begin{align}
-\rho_{x_j}<\frac{e^{-\bar{\varepsilon}_{x_j}}-1 }{e^{-\bar{\varepsilon}_{x_j}}+1}\rho_{x_j} \leq e_{x_j}(t)\leq \rho_{x_j}\frac{e^{\bar{\varepsilon}_{x_j}}-1 }{e^{\bar{\varepsilon}_{x_j}}+1}<\rho_{x_j} \label{eq30}
\end{align}
for $j\in\{1,\ldots,6\}$ and for all $t\geq 0$, which completes the proof.
\end{pf}
\begin{remark}
From the aforementioned proof, it is worth noticing that the proposed control scheme is model-free with respect to the matrices $\boldsymbol{M}$, $\boldsymbol{C}$, $\boldsymbol{D}$, $\boldsymbol{g}$ as well as the external disturbances $\boldsymbol{\delta}$, which affect only the size of $\bar{\varepsilon}_{x_j}$ and $\bar{\varepsilon}_{\zeta_j}$ but leave the achieved convergence properties unaltered, as \eqref{eq30} dictates. In fact, the actual transient and steady-state performance is determined by the selection of the performance functions $\rho_{c_j},~c\in \{x,\zeta\}$. Finally, the closed-loop block diagram of the proposed control scheme is depicted in Fig.~\ref{fig:closed_loop_control_scheme}.
\end{remark}
\section{Simulation Results}
Simulation studies were conducted employing a hydrodynamic simulator built in MATLAB. The dynamic equations of the UVMS used in this simulator are derived following \cite{Schjølberg94modellingand}. The UVMS model considered in the simulations is the SeaBotix LBV150 equipped with a small 4-DoF manipulator. We consider a scenario involving 3D motion in the workspace, where the end-effector of the UVMS is in interaction with a compliant environment with stiffness matrix $\boldsymbol{K}_f=\text{diag}\{2\}$, which is unknown to the controller.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.30]{workspace.png}
\caption{Workspace including the UVMS and the compliant environment. The UVMS is run under the influence of external disturbances.}
\label{fig:workspace}
\end{figure}
The workspace at the initial time, including the UVMS and the compliant environment, is depicted in Fig.~\ref{fig:workspace}. More specifically, we adopt $\boldsymbol{f}_e(0)=[0,0,0]^\top$ and $^o\boldsymbol{x}_e=[0.2,0.2,-0.2]^\top$, i.e., at the initial time of the simulation study the UVMS is attached to the compliant environment with a rotation at its end-effector frame. The control gains for the simulation studies were selected as follows: $k_{x_j}=-0.2$, $j\in\{1,\ldots,6\}$, $k_{v_j}=-5$, $j\in\{1,\ldots,n\}$. Moreover, the dynamic parameters of the UVMS as well as the stiffness matrix $\boldsymbol{K}_f$ were considered unknown to the controller. The parameters of the performance functions in the subsequent simulation studies were chosen as follows: $\rho^0_{x_j}=1$, $j\in\{1,2,3\}$, $\rho^0_{x_j}=0.9$, $j\in\{4,5,6\}$, $\rho^0_{v_j}=1$, $j\in\{1,\ldots,n\}$, $\rho^\infty_{x_j}=0.2$, $j\in\{1,\ldots,6\}$, $\rho^\infty_{v_j}=0.2$, $j\in\{1,\ldots,6\}$, $\rho^\infty_{v_j}=0.4$, $j\in\{7,\ldots,n\}$, $l_{x_j}=3$, $j\in\{1,2,3\}$, $l_{v_j}=2.2$, $j\in\{1,\ldots,n\}$. Finally, the whole system was running under the influence of external disturbances (e.g., sea current) acting along the $x$, $y$ and $z$ axes (on the vehicle body), given by $0.15\sin(\frac{2\pi}{7}t)$, in order to test the robustness of the proposed scheme. Moreover, bounded noise on the measurement devices was considered during the simulation study. In the presented simulation scenario, a desired force trajectory should be exerted on the environment while the predefined orientation $^o\boldsymbol{x}^d_e=[0.0,0.0,0.0]^\top$ must be kept. One should bear in mind that this is a challenging task owing to the dynamic nature of the underwater environment: the UVMS's model uncertainties, measurement noise as well as external disturbances can easily cause unpredicted instabilities of the system. 
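The explicit form of the performance functions is defined earlier in the paper and lies outside this excerpt; a brief sketch, assuming the commonly used exponential form $\rho(t)=(\rho^0-\rho^\infty)e^{-lt}+\rho^\infty$ with the parameters quoted above for the first translational channel, illustrates the prescribed decaying bound:

```python
import math

def rho(t, rho0=1.0, rho_inf=0.2, l=3.0):
    # exponential performance function (assumed form); defaults taken from
    # the quoted parameters rho^0_{x_1}, rho^infty_{x_1}, l_{x_1}
    return (rho0 - rho_inf) * math.exp(-l * t) + rho_inf

assert abs(rho(0.0) - 1.0) < 1e-12      # starts at rho^0
assert rho(1.0) > rho(2.0) > 0.2        # strictly decreasing towards rho^infty
assert abs(rho(20.0) - 0.2) < 1e-6      # converges to the steady-state bound
```

The rate $l$ thus fixes the minimum convergence speed and $\rho^\infty$ the maximum steady-state error, independently of the control gains.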
The desired force trajectory to be exerted by the UVMS is defined as $f^d_{e_1}=0.4\sin(\frac{2\pi}{2}t)+0.4$. The results are depicted in Figs.~3--5. Fig.~\ref{fig:forc} shows the evolution of the force trajectory. The actual force exerted by the UVMS (indicated by red color) converges to the desired one (indicated by green color) without overshooting and follows the desired force profile. The evolution of the errors at the first and second levels of the proposed controller is indicated in Fig.~\ref{fig:ppx_force} and Fig.~\ref{fig:ppv_force}, respectively. It can be concluded that, even under the influence of external disturbances and measurement noise, the errors in all directions converge close to zero and remain bounded by the performance functions.
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.35]{Forcetraj.pdf}
\caption{Trajectory scenario: The evolution of the force trajectory. The desired force trajectory and the actual force exerted by the UVMS are indicated by green and red color respectively.}
\label{fig:forc}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.56]{PP_X_force.pdf}
\caption{Trajectory scenario: The evolution of the errors at the first level of the proposed control scheme. The errors and performance bounds are indicated by blue and red color respectively.}
\label{fig:ppx_force}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale = 0.5]{PP_V_force.pdf}
\caption{Trajectory scenario: The evolution of the errors at the second level of the proposed control scheme. The errors and performance bounds are indicated by blue and red color respectively.}
\label{fig:ppv_force}
\end{figure}
\section{Conclusions and Future Work}
This work presents a robust force/position control scheme for a UVMS in interaction with a compliant environment, which has direct applications in underwater robotics (e.g., sampling of sea organisms, underwater welding, pushing a button). The proposed control scheme does not require any prior knowledge of the UVMS dynamic parameters or of the environment model. It guarantees a predefined behavior of the system in terms of desired overshoot as well as transient and steady-state performance. Moreover, the proposed control scheme is robust with respect to external disturbances and measurement noise. The proposed controller exhibits the following important characteristics: i) it is of low complexity and thus can be used effectively on most of today's UVMSs; ii) the performance of the proposed scheme (e.g., desired overshoot, steady-state performance) is a priori and explicitly imposed by certain designer-specified performance functions, and is fully decoupled from the control gain selection, thus simplifying the control design. The simulation results demonstrated the efficiency of the proposed control scheme. Finally, future research efforts will be devoted to addressing a torque controller as well as conducting experiments with a real UVMS.
\section{First principles methods}
\subsection{DFT+HI}
In order to evaluate the effective Hamiltonian from first principles, we start by calculating the electronic structure of paramagnetic Ba$_2M$OsO$_6$ with the DFT+dynamical mean-field theory (DMFT) method. Local correlations on the whole Os 5$d$ shell are treated within the quasi-atomic Hubbard-I (HI) approximation \cite{hubbard_1}; the method is abbreviated below as DFT+HI. We employ a self-consistent DFT+DMFT implementation\cite{Aichhorn2009,Aichhorn2011,Aichhorn2016} based on the full-potential LAPW code Wien2k\cite{Wien2k}, including spin-orbit coupling within the standard variational treatment. Wannier orbitals representing Os 5$d$ orbitals are constructed from the Kohn-Sham (KS) bands in the energy range [-1.2:6.1]~eV relative to the KS Fermi level; this energy window includes all $t_{2g}$ and $e_g$ states of Os but not the oxygen 2$p$ bands. The on-site Coulomb repulsion for the Os 5$d$ shell is parametrized by $U=F_0=3.2$~eV for BMOO and BZOO; for BCOO we employ a slightly larger value of $U=$3.5~eV to stabilize the $d^2$ ground state in DFT+HI. We use $J_H$=0.5~eV for all three compounds. Our values for $U$ and $J_H$ are consistent with previous DFT+HI calculations of d$^1$ Os perovskites\cite{FioreMosca2021}. The double-counting correction is evaluated using the fully-localized limit with the nominal 5$d$ shell occupancy of 2.
All calculations are carried out for the experimental cubic lattice structures of Ba$_2M$OsO$_6$, with lattice parameters $a=$8.346, 8.055, and 8.082~\AA\ for $M=$Ca, Mg, and Zn, respectively\cite{Thompson2014,Marjerrison2016}. We employ the local density approximation as the DFT exchange-correlation potential, 1000 ${\bf k}$-points in the full Brillouin zone, and the Wien2k basis cutoff $R_{\mathrm{mt}}K_{\mathrm{max}}=$8.
\subsection{Calculations of inter-site exchange interactions (IEI)}
In order to evaluate all IEI $V_{KK'}^{QQ'}(\Delta{\bf R})$ acting within $J_{eff}$=2 manifold, we employ the HI-based force-theorem approach of Ref.~\onlinecite{Pourovskii2016} (abbreviated below as FT-HI). Within this approach, the matrix elements of IEI $V(\Delta{\bf R})$ coupling $J_{eff}$=2 shells on two Os sites read
\begin{equation}\label{V}
\langle M_1 M_3| V(\Delta{\bf R})| M_2 M_4\rangle=\mathrm{Tr} \left[ G_{\Delta{\bf R}}\frac{\delta\Sigma^{at}_{{\bf R}+\Delta{\bf R}}}{\delta \rho^{M_3M_4}_{{\bf R}+\Delta{\bf R}}} G_{-\Delta{\bf R}}\frac{\delta\Sigma^{at}_{{\bf R}}}{\delta \rho^{M_1M_2}_{{\bf R}}}\right],
\end{equation}
where $\Delta{\bf R}$ is the lattice vector connecting the two sites, $M=-2,\ldots,2$ is the projection quantum number, $\rho^{M_iM_j}_{{\bf R}}$ is the corresponding element of the $J_{eff}$ density matrix on site ${\bf R}$, $\frac{\delta\Sigma^{at}_{{\bf R}}}{\delta \rho^{M_iM_j}_{{\bf R}}}$ is the derivative of the atomic (Hubbard-I) self-energy $\Sigma^{at}_{\bf R}$ with respect to a fluctuation of the $\rho^{M_iM_j}_{{\bf R}}$ element, and $G_{\Delta{\bf R}}$ is the inter-site Green's function. The self-energy derivatives are calculated with analytical formulas from atomic Green's functions.
The FT-HI method is applied as a post-processing on top of DFT+HI, hence, all quantities in the RHS of eq.~\ref{V} are evaluated from the fully converged DFT+HI electronic structure of a given system.
Once all matrix elements (\ref{V}) are calculated, they are directly mapped into the corresponding couplings $V_{KK'}^{QQ'}(\Delta{\bf R})$ between on-site moments (eq. 22 in Ref.~\onlinecite{Pourovskii2016}). To have a correct mapping into the $J_{eff}$ pseudo-spin basis the phases of $|J_{eff}M\rangle$ must be aligned, i.~e. $\langle J_{eff}M|J_{+}|J_{eff}M-1\rangle$ must be a positive real number.
The calculations of IEI within the $E_g$ space proceed in the same way starting from the same converged DFT+HI electronic structure. The density-matrix fluctuations $\rho^{M_iM_j}_{{\bf R}}$ are restricted to the $E_g$ doublet, with $M=\pm \frac{1}{2}$. The conversion to the spin-1/2 pseudospin IEI is carried out in accordance with eq.~24 of Ref.~\onlinecite{Pourovskii2016}.
In the converged DFT+HI electronic structure the chemical potential $\mu$ is sometimes found to be pinned at the very top of the valence (lower Hubbard) band instead of being strictly inside the Mott gap. Since the FT-HI method breaks down if any small metallic spectral weight is present, in those cases we calculated the IEI with $\mu$ shifted into the gap.
\subsection{Generalized dynamical susceptibility.}
We evaluated the generalized dynamical susceptibility in the FO $xyz$ ordered state using the random phase approximation (RPA), see, e.~g., Ref.~\cite{RareEarthMag_book}.
Within the RPA, the general susceptibility matrix in the $J_{eff}$=2 space reads
\begin{equation}\label{eq:chi}
\bar{\chi}({\bf q},E)=\left[I-\bar{\chi}_0(E)\bar{V}_{{\bf q}}\right]^{-1}\bar{\chi}_0(E),
\end{equation}
where $\bar{\chi}_0(E)$ is the on-site bare susceptibility, $\bar{V}_{{\bf q}}$ is the Fourier transform of the IEI matrices $\hat{V}(\Delta {\bf R})$, and the bar $\bar{...}$ designates a matrix in the combined $\mu=[K,Q]$ index labeling the $J_{eff}$ multipoles. Notice that $\hat{V}(\Delta {\bf R})$ and, correspondingly, $\bar{V}_{{\bf q}}$ do not couple time-odd and time-even multipoles. The on-site susceptibility $\bar{\chi}_{0}(E)$ is calculated in accordance with eq.~4 of the main text.
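As a minimal illustration of eq.~(\ref{eq:chi}), consider a single multipole with a hypothetical one-level bare susceptibility $\chi_0(E)=2\Delta m^2/(\Delta^2-E^2)$ (a stand-in for eq.~4 of the main text, not its actual form) coupled by a single nearest-neighbor IEI $V$ on the fcc Os sublattice. The RPA pole condition $1-\chi_0(E)V_{\bf q}=0$ then yields a dispersing multipolar exciton:

```python
import math

def V_q(V, qx, qy, qz, a=1.0):
    # Fourier transform of one NN coupling over the 12 fcc vectors
    # (+-a/2, +-a/2, 0), (0, +-a/2, +-a/2), (+-a/2, 0, +-a/2)
    cx, cy, cz = (math.cos(0.5 * a * q) for q in (qx, qy, qz))
    return 4.0 * V * (cx * cy + cy * cz + cz * cx)

def exciton_energy(Delta, m, V, qx, qy, qz):
    # pole of chi = chi0/(1 - chi0 V_q) with chi0 = 2*Delta*m^2/(Delta^2 - E^2):
    # E(q)^2 = Delta^2 - 2*Delta*m^2*V_q  (real for the parameters chosen here)
    return math.sqrt(Delta**2 - 2.0 * Delta * m**2 * V_q(V, qx, qy, qz))

Delta, m, V = 10.0, 1.0, 0.1   # toy gap, matrix element, and NN IEI

# at V = 0 the mode sits at the bare local excitation energy Delta
assert abs(exciton_energy(Delta, m, 0.0, 0.0, 0.0, 0.0) - Delta) < 1e-12
# a finite coupling softens the mode at the Gamma point (V_q(Gamma) = 12 V > 0)
assert exciton_energy(Delta, m, V, 0.0, 0.0, 0.0) < Delta
```

The actual calculation works with the full 24$\times$24 matrices, but the mechanism is the same: the inter-site IEI turn the local transitions into dispersing modes.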
\renewcommand\floatpagefraction{0.1}
\begin{table}[tp]
\caption{\label{Tab:SEI}
Calculated IEI $V_{KK'}^{QQ'}$ for the $J_{eff}$=2 multiplet. The first two columns list $Q$ and $Q'$, respectively. The third and fourth columns display the $KQ$ and $K'Q'$ tensors in the Cartesian representation. The last three columns display the values of the IEI for BCOO, BMOO, and BZOO in meV.
}
\begin{center}
\begin{ruledtabular}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{c c c c c c c }
& & & {Dipole-Dipole} & BCOO & BMOO & BZOO \\
\hline
-1 & -1 & y & y & 1.62 & 1.47 & 0.97 \\
0 & 0 & z & z & 4.17 & 4.12 & 2.96 \\
1 & -1 & x & y & 1.26 & 1.42 & 1.02 \\
1 & 1 & x & x & 1.62 & 1.47 & 0.97 \\
\hline
\hline
\multicolumn{7}{c}{Quadrupole-Quadrupole} \\
\hline
-2 & -2 & xy & xy & -0.41 & -0.52 & -0.31 \\
-1 & -1 & yz & yz & -0.79 & -0.75 & -0.60 \\
0 & -2 & z$^2$ & xy & 0.16 & 0.07 & 0.07 \\
0 & 0 & z$^2$ & z$^2$ & 1.32 & 1.49 & 0.99 \\
1 & -1 & xz & yz & -0.23 & -0.22 & -0.19 \\
1 & 1 & xz & xz & -0.79 & -0.75 & -0.60 \\
2 & 2 & x$^2$-y$^2$ & x$^2$-y$^2$ & -0.58 & -0.65 & -0.48 \\
\hline
\hline
\multicolumn{7}{c}{Octupole-Octupole} \\
\hline
-3 & -3 & y(x$^2$-3y$^2$) & y(x$^2$-3y$^2$) & 1.16 & 1.25 & 1.24 \\
-2 & -2 & xyz & xyz & -1.49 & -1.47 & -0.85 \\
-1 & -3 & yz$^2$ & y(x$^2$-3y$^2$) & -0.14 & -0.19 & -0.21 \\
-1 & -1 & yz$^2$ & yz$^2$ & 0.80 & 0.81 & 0.33 \\
0 & -2 & z$^3$ & xyz & -0.79 & -0.88 & -0.57 \\
0 & 0 & z$^3$ & z$^3$ & 2.35 & 2.42 & 1.33 \\
1 & -3 & xz$^2$ & y(x$^2$-3y$^2$) & -0.29 & -0.37 & -0.38 \\
1 & -1 & xz$^2$ & yz$^2$ & -0.98 & -1.12 & -0.82 \\
1 & 1 & xz$^2$ & xz$^2$ & 0.80 & 0.81 & 0.33 \\
2 & 2 & z(x$^2$-y$^2$) & z(x$^2$-y$^2$) & -1.89 & -2.00 & -1.42 \\
3 & -1 & x(3x$^2$-y$^2$) & yz$^2$ & 0.29 & 0.37 & 0.38 \\
3 & 1 & x(3x$^2$-y$^2$) & xz$^2$ & 0.14 & 0.19 & 0.21 \\
3 & 3 & x(3x$^2$-y$^2$) & x(3x$^2$-y$^2$) & 1.16 & 1.25 & 1.24 \\
\hline
\hline
\multicolumn{7}{c}{Dipole-Octupole} \\
\hline
-1 & -3 & y & y(x$^2$-3y$^2$) & & -0.07 & -0.06 \\
-1 & -1 & y & yz$^2$ & 1.97 & 1.92 & 0.93 \\
-1 & 1 & y & xz$^2$ & -0.20 & -0.23 & -0.27 \\
-1 & 3 & y & x(3x$^2$-y$^2$) & -0.89 & -1.01 & -0.99 \\
0 & -2 & z & xyz & -0.97 & -1.29 & -1.19 \\
0 & 0 & z & z$^3$ & 2.38 & 2.19 & 1.04 \\
1 & -3 & x & y(x$^2$-3y$^2$) & 0.89 & 1.01 & 0.99 \\
1 & -1 & x & yz$^2$ & -0.20 & -0.23 & -0.27 \\
1 & 1 & x & xz$^2$ & 1.97 & 1.92 & 0.93 \\
1 & 3 & x & x(3x$^2$-y$^2$) & & 0.07 & 0.06 \\
\end{tabular}
\end{ruledtabular}
\end{center}
\end{table}
\section{Intersite exchange interactions in the $J_{eff}$=2 space}\label{sec:IEI_table}
The IEI between $J_{eff}$=2 multipoles for a pair of Os sites form a 24$\times$24 matrix $\hat{V}(\Delta{\bf R})$, since $K_{max}(K_{max}+2)$=24 with $K_{max}=2J_{eff}$.
In Supplementary Table~\ref{Tab:SEI} we list all calculated IEI in the three systems with magnitude above 0.05~meV. The IEI are given for the [0.5,0.5,0.0] Os-Os nearest-neighbor (NN) lattice vector. The calculated next-nearest-neighbor IEI are all below 0.05~meV in absolute value, i.e., at least one order of magnitude smaller than the NN ones; these and longer-range IEI were neglected. The IEI between hexadecapoles, as well as between hexadecapoles and quadrupoles, are also below this cutoff and not shown.
\section{Projection of $J_{eff}$=2 multipolar operators into the $E_g$ space}
Only six $J_{eff}$=2 multipoles out of 24 have non-zero projection into the $E_g$ space; those projections expanded into the spin-1/2 operators are listed below. Namely, there are two quadrupoles
$$
O_2^0 \equiv O_{z^2} \to 2\sqrt{2/7} \tau_z, \; \; \;
O_2^2\equiv O_{x^2-y^2}\to 2\sqrt{2/7}\tau_x,
$$
the $xyz$ octupole
$$
O_3^{-2}\equiv O_{xyz}\to -\sqrt{2} \tau_y,
$$
as well as three hexadecapoles
$$
O_4^0\to \sqrt{7/40}I-\sqrt{5/14}\tau_z, \; \; \;
O_4^2\to \sqrt{6/7} \tau_x, \; \; \;
O_4^4\to (1/\sqrt{8})I+(1/\sqrt{2})\tau_z.
$$
The $O_4^0$ and $O_4^4$ hexadecapoles contribute to the remnant CF $H_{rcf}$; they have, correspondingly, non-zero traces in the $E_g$ space. Hence the presence of "monopole" (unit $2 \times 2$ matrix) $I$ in their projections to $E_g$.
To simplify subsequent expressions one may transform the hexadecapolar operators into a symmetry-adapted basis:
\begin{equation}\label{eq:trans_O4}
\begin{pmatrix}
O^I_4 \\ O^z_4
\end{pmatrix}
=
\begin{pmatrix}
\cos \theta & \sin \theta \\ -\sin \theta & \cos \theta
\end{pmatrix}
\begin{pmatrix}
O^0_4 \\ O^4_4
\end{pmatrix},
\end{equation}
where $\theta=\arccos(\sqrt{7/12})$. The transformed operators have the following projections into the $E_g$ space:
$$
O_4^I\to \sqrt{3/10}I, \; \; \;
O^z_4 \to \sqrt{6/7}\tau_z .
$$
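The algebra behind the transformation (\ref{eq:trans_O4}) can be checked numerically: rotating the $(O_4^0, O_4^4)$ projection coefficients by $\theta=\arccos\sqrt{7/12}$ must cancel the $\tau_z$ part of $O_4^I$ and the monopole part of $O_4^z$. A short sketch:

```python
import math

# E_g projections of the original hexadecapoles, stored as
# (coefficient of I, coefficient of tau_z), as listed above:
O40 = (math.sqrt(7 / 40), -math.sqrt(5 / 14))   # O_4^0
O44 = (1 / math.sqrt(8), 1 / math.sqrt(2))      # O_4^4

th = math.acos(math.sqrt(7 / 12))
c, s = math.cos(th), math.sin(th)

# symmetry-adapted combinations: rotation by theta as in the transformation above
O4I = tuple(c * a + s * b for a, b in zip(O40, O44))
O4z = tuple(-s * a + c * b for a, b in zip(O40, O44))

# O_4^I -> sqrt(3/10) I (pure monopole), O_4^z -> sqrt(6/7) tau_z (traceless)
assert abs(O4I[0] - math.sqrt(3 / 10)) < 1e-12 and abs(O4I[1]) < 1e-12
assert abs(O4z[0]) < 1e-12 and abs(O4z[1] - math.sqrt(6 / 7)) < 1e-12
```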
Substituting those expressions for the relevant multipoles into the effective Hamiltonian $H_{eff}$ (eq.~1 of the main text) one may derive explicit formulas for the $E_g$ IEI in terms of the $J_{eff}$=2 IEI:
\begin{gather}
J_{yy}=2 V_{33}^{\bar{2}\bar{2}}, \\
J_{zz}=2\left[\frac{4}{7}V_{22}^{00}+\frac{4\sqrt{3}}{7}V_{24}^{0z}+\frac{3}{7}V_{44}^{zz}\right], \\
J_{xx}=2\left[\frac{4}{7}V_{22}^{22}+\frac{4\sqrt{3}}{7}V_{24}^{22}+\frac{3}{7}V_{44}^{22}\right],
\end{gather}
where we drop the ${\bf R}$ argument in $V_{KK'}^{QQ'}({\bf R})$ for brevity. $V_{24}^{0z}$ and $V_{44}^{zz}$ denote the IEI transformed to the symmetry-adapted basis (\ref{eq:trans_O4}). The overall prefactor 2 is due to the different normalizations of the spin operators and the spherical tensors.
One sees that the $xyz$ octupole IEI directly maps into $J_{yy}$. In contrast, $J_{zz}$ and $J_{xx}$ are combinations of quadrupole and hexadecapole IEI. Since the IEI involving hexadecapoles are small (see Sec.~\ref{sec:IEI_table}), $J_{xx}$ and $J_{zz}$ are essentially given by the $J_{eff}$=2 IEI coupling the two quadrupoles.
However, the admixture of hexadecapoles into the $E_g$ doublet leads to a reduced prefactor for the quadrupole contributions to $J_{xx}$ and $J_{zz}$.
Hence, one sees that $J_{yy}$ is equal to 2$V_{33}^{\bar{2}\bar{2}}$, while $J_{xx}$ and $J_{zz}$ are essentially given by 8/7 of the corresponding quadrupolar couplings, $V_{22}^{22}$ and $V_{22}^{00}$, in $J_{eff}$=2.
By comparing the data in Table~I of the main text with Supp. Table~\ref{Tab:SEI} one sees that this result holds for the IEI evaluated numerically using the FT-HI approach.
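The quoted prefactors follow directly from squaring the $E_g$ projection coefficients listed above; a one-line numerical check:

```python
import math

# squared E_g projection coefficients give the pseudo-spin IEI prefactors
assert abs((-math.sqrt(2)) ** 2 - 2.0) < 1e-12        # octupole: J_yy = 2 V
assert abs((2 * math.sqrt(2 / 7)) ** 2 - 8 / 7) < 1e-12  # quadrupole part of J_xx, J_zz
assert abs((math.sqrt(6 / 7)) ** 2 - 6 / 7) < 1e-12   # hexadecapole part (= 2 * 3/7)
```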
\section{Formalism for the inelastic neutron-scattering (INS) cross-section beyond the dipole approximation}
\subsection{INS cross-section through generalized multipolar susceptibility}
We start with the general formula for the magnetic neutron-scattering cross-section\cite{Lovesey_book_full,RareEarthMag_book} from a lattice of atoms:
\begin{equation}\label{eq:Xsec_gen}
\frac{d^2 \sigma}{d \Omega dE'}=r_0^2\frac{k'}{k}\sum_{n,n'}P_n|\langle n'|\hat{{\bf Q}}_t^{\perp}({\bf q})|n\rangle|^2\delta(\hbar\omega+E_n-E_{n'}),
\end{equation}
where $r_0=$-5.39$\cdot$10$^{-13}$~cm is the characteristic magnetic scattering length, $k$ and $k'$ are the magnitudes of initial and final neutron momentum, $|n\rangle$ and $|n'\rangle$ are the initial and final electronic states of the lattice, $E_n$ and $E_{n'}$ are the corresponding energies, $P_n$ is the probability for the lattice to be in the initial state $|n\rangle$. We consider the case of INS with the energy transfer to the system $\hbar\omega \ne $0. Finally, $\hat{{\bf Q}}_t^{\perp}({\bf q})$ is the neutron scattering operator, which is a sum of single-site contributions:
\begin{equation}\label{eq:Qlat}
\hat{{\bf Q}}_t^{\perp}({\bf q})={\bf q}\times \left(\sum_i \hat{{\bf Q}}_i({\bf q}) e^{i{\bf q}{\bf R}_i}\right) \times {\bf q}.
\end{equation}
The single-site one-electron operator ${\bf Q}_i({\bf q})$ at the site $i$ reads
\begin{equation}\label{eq:Qi}
\hat{{\bf Q}}_i({\bf q})=\hat{{\bf Q}}_{is}({\bf q})+\hat{{\bf Q}}_{io}({\bf q})=\sum_j e^{i{\bf q} {\bf r}_j}\left[\hat{{\bf s}}_j-\frac{i}{q^2}({\bf q}\times \hat{{\bf p}}_j)\right],
\end{equation}
where the sum includes all electrons on a partially-filled shell, ${\bf r}_j$ is the position of electron $j$ on this shell with respect to the position ${\bf R}_i$ of this lattice site, $\hat{{\bf p}}_j$ is the momentum operator acting on this electron. The on-site operator $\hat{{\bf Q}}_i$ consists of the spin $\hat{{\bf Q}}_{is}({\bf q})$ and orbital $\hat{{\bf Q}}_{io}({\bf q})$ terms. We note that $\hat{{\bf Q}}$, as any one-electron operator acting within an atomic multiplet with the total momentum $J$, can be decomposed into many-electron multipole operators of that multiplet\cite{Shiina2007}:
\begin{equation}\label{eq:Q_in_multipoles}
\hat{Q}_i^{\alpha}({\bf q})=\sum_{\mu}F_{\alpha\mu}({\bf q})O_{\mu}({\bf R}_i),
\end{equation}
where $\alpha=x$, $y$, or $z$, $O_{\mu}({\bf R}_i)$ is the multipole operator $\mu\equiv \{K,Q\}$ for the total momentum $J$ acting on the site $i$, and $F_{\alpha\mu}({\bf q})$ is the corresponding form-factor. In contrast to the usual dipole form-factors, which depend only on the magnitude $q$ of the momentum transfer, for a general multipole $\mu$ the form-factor may also depend on the momentum transfer's direction. Since $\hat{{\bf Q}}$ is a time-odd operator, only multipoles with odd $K$ contribute to (\ref{eq:Q_in_multipoles}).
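The decomposition (\ref{eq:Q_in_multipoles}) amounts to projecting the matrix of $\hat{Q}_{\alpha}({\bf q})$ onto an orthogonal multipole basis via the trace inner product, $F_{\alpha\mu}=\mathrm{Tr}[\hat{Q}_{\alpha} O_{\mu}^{\dagger}]/\mathrm{Tr}[O_{\mu} O_{\mu}^{\dagger}]$. A minimal sketch for the simplest pseudo-spin-1/2 case, where the multipole basis reduces to the identity plus the Pauli matrices, with an arbitrary illustrative operator matrix:

```python
def mul(A, B):  # 2x2 complex matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr(A):      # trace
    return A[0][0] + A[1][1]

def dag(A):     # Hermitian conjugate
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

I  = [[1, 0], [0, 1]]
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]
basis = [I, sx, sy, sz]

# hypothetical Hermitian operator matrix to decompose
Q = [[0.3, 0.5 - 0.2j], [0.5 + 0.2j, -0.3]]

# trace-orthogonality projection: F_mu = Tr[Q O_mu^dag] / Tr[O_mu O_mu^dag]
coeffs = [tr(mul(Q, dag(O))) / tr(mul(O, dag(O))) for O in basis]

# the expansion reconstructs Q element by element
for i in range(2):
    for j in range(2):
        rec = sum(c * O[i][j] for c, O in zip(coeffs, basis))
        assert abs(rec - Q[i][j]) < 1e-12
```

In the actual calculation the same projection is performed in the 5-dimensional $J_{eff}$=2 space onto the odd-$K$ spherical tensors, separately for each ${\bf q}$ and each direction $\alpha$.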
By inserting (\ref{eq:Q_in_multipoles}) into (\ref{eq:Qlat}) and then the resulting expression for $\hat{{\bf Q}}_t^{\perp}({\bf q})$ into (\ref{eq:Xsec_gen}), one obtains an expression for the magnetic INS cross-section through the form-factors and matrix elements of the multipole operators:
\begin{equation}\label{eq:Xsec_multipoles_raw}
\frac{d^2 \sigma}{d \Omega dE'}=r_0^2\frac{k'}{k}\sum_{ii'}\sum_{n,n'}P_n
\langle n|{\bf q}\times\left(\sum_{\mu}{\bf F}_{\mu}O_{\mu}({\bf R}_i)\right)\times{\bf q}|n'\rangle
\langle n'|{\bf q}\times\left(\sum_{\mu'}{\bf F}_{\mu'}O_{\mu'}({\bf R}_{i'})\right)\times{\bf q}|n\rangle
\delta(\hbar\omega+E_n-E_{n'}),
\end{equation}
where ${\bf F}_{\mu}$ is the 3D vector of form-factors for the multipole $\mu$. Finally, using the same steps as in the standard derivation of the cross-section within the dipole approximation \cite{RareEarthMag_book} we obtain the following expression for the magnetic INS cross-section of non-polarized neutrons:
\begin{equation}\label{eq:Xsec_multipoles}
\frac{d^2 \sigma}{d \Omega dE'}=r_0^2\frac{k'}{k}\sum_{\alpha\beta}\left(\delta_{\alpha\beta}-q_{\alpha}q_{\beta}\right) \\ \left[\sum_{\mu\mu'}F_{\alpha\mu}({\bf q})F_{\beta\mu'}({\bf q}) \frac{1}{2\pi\hbar}S_{\mu\mu'}({\bf q},E)\right],
\end{equation}
where the dynamic correlation function $S_{\mu\mu'}({\bf q},E)$ for ${\bf q}$ and the energy transfer $E=\hbar\omega$ is related to the generalized susceptibility $\bar{\chi}({\bf q},E)$ (eq.~\ref{eq:chi} above) by the fluctuation-dissipation theorem:
\begin{equation}\label{eq:FDT}
S_{\mu\mu'}({\bf q},E)=\frac{2\hbar}{1-e^{-E/T}}\chi''_{\mu\mu'}({\bf q},E),
\end{equation}
where $T$ is the temperature, and the absorptive part of the susceptibility is $\chi''_{\mu\mu'}({\bf q},E)=\mathrm{Im}\chi_{\mu\mu'}({\bf q},E)$ in the relevant case of a cubic lattice structure with inversion symmetry. We then insert (\ref{eq:FDT}) into (\ref{eq:Xsec_multipoles}), omitting the detailed-balance prefactor $1/(1-e^{-E/T})\approx 1$ for the present case of a near-zero temperature and a large excitation gap. We also omit the constant prefactors and the ratio $k'/k$, which depends on the initial neutron energy in experiment, and thus obtain eq.~5 of the main text.
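The geometric factor $(\delta_{\alpha\beta}-q_{\alpha}q_{\beta})$ in eq.~(\ref{eq:Xsec_multipoles}) simply removes the component of the scattering amplitude along the unit vector $\hat{\bf q}$; a short sketch with illustrative (real-valued) amplitude vectors:

```python
def transverse_weight(qhat, F):
    # sum_{ab} (delta_ab - q_a q_b) F_a F_b = |F|^2 - (qhat . F)^2
    # for a real amplitude vector F and unit vector qhat
    dot = sum(q * f for q, f in zip(qhat, F))
    return sum(f * f for f in F) - dot ** 2

qhat = (1.0, 0.0, 0.0)   # momentum transfer along [100]

# an amplitude parallel to qhat is projected out entirely
assert abs(transverse_weight(qhat, (0.7, 0.0, 0.0))) < 1e-12
# a perpendicular amplitude contributes with full weight |F|^2
assert abs(transverse_weight(qhat, (0.0, 0.3, 0.4)) - 0.25) < 1e-12
```

Only the components of the multipolar amplitudes perpendicular to ${\bf q}$ are therefore visible to the neutrons, exactly as in the familiar dipole-only cross-section.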
\subsection{Calculations of the form-factors}
In order to evaluate the form-factors $F_{\alpha\mu}({\bf q})$ one needs the matrix elements
\begin{equation}\label{eq:1el_mel}
\langle lms|\hat{{\bf Q}}({\bf q})|lm's'\rangle
\end{equation}
of the one-electron neutron scattering operator (\ref{eq:Qi}) for the 5$d$ shell ($l$=2) of Os$^{6+}$ ($l$, $m$, and $s$ are the orbital, magnetic and spin quantum numbers of one-electron orbitals, respectively). Lengthy expressions for those matrix elements of the spin and orbital parts of $\hat{{\bf Q}}({\bf q})$ are derived in chap.~11 of the book by Lovesey \cite{Lovesey_book_full}; they are succinctly summarized by Shiina {\it et al.}~\cite{Shiina2007}. Notice that in eqs.~13 and 14 of Ref.~\onlinecite{Shiina2007} the matrix elements are given for the projected operator ${\bf q}\times\hat{{\bf Q}}({\bf q})\times{\bf q}$, but they are quite simply related to those of the unprojected $\hat{{\bf Q}}({\bf q})$ (see also eq.~11.48 in Ref.~\onlinecite{Lovesey_book_full}). The radial integrals $\langle j_L(q)\rangle$ for the Os$^{6+}$ 5$d$ shell, which enter the formulas for the one-electron matrix elements, were taken from Ref.~\onlinecite{Kobayashi2011}.
\begin{figure}[!tb]
\begin{centering}
\includegraphics[width=0.85\columnwidth]{FF_JJ_state-eps-converted-to.pdf}
\par\end{centering}
\caption{${\bf q}$-dependent prefactor (\ref{eq:q_pref}) for elastic neutron scattering along the [100] direction in ${\bf q}$ space, for the saturated $|22\rangle$ state of the Os$^{6+}$ $J_{eff}$=2 multiplet. The meaning of the various curves is explained in the text.}
\label{fig:FF_JJstate}
\end{figure}
In order to evaluate the matrix elements of $\hat{{\bf Q}}({\bf q})$ for many-electron shells from (\ref{eq:1el_mel}), Refs.~\onlinecite{Stassis1976,Lovesey_book_full,Shiina2007} generally assume a certain coupling scheme for a given ion ($LS$ or $jj$). Instead, we simply use the atomic two-electron states of the Os$^{6+}$ $J_{eff}$=2 shell as obtained by converged DFT+HI for a given Ba$_2M$OsO$_6$ system to calculate those matrix elements numerically for each point of the ${\bf q}$-grid. Since the two-electron atomic eigenstates are expanded in the Fock space of $(lms)$ orbitals, such a calculation is trivial. The resulting matrices in the $J_{eff}$ space, with matrix elements $Q_{\alpha}^{MM'}({\bf q})=\langle J_{eff}M| \hat{Q}_{\alpha}({\bf q}) |J_{eff}M'\rangle$, are then expanded in the odd $J_{eff}$ multipoles in accordance with (\ref{eq:Q_in_multipoles}) to obtain the form-factors $F_{\alpha\mu}({\bf q})$ for each direction $\alpha$.
\subsection{Form-factors for the saturated $M=J$ state of the $J_{eff}$=2 multiplet}
As an illustration of the approach described above, let us consider the neutron-scattering form-factors for the saturated $|J=2,M=J\rangle\equiv |JJ\rangle$ state of the Os$^{6+}$ 5$d^2$ $J_{eff}$=2 multiplet. We evaluate the corresponding ${\bf q}$-dependent prefactor for elastic scattering
\begin{equation}\label{eq:q_pref}
A({\bf q})=\sum_{\alpha\beta}\left(\delta_{\alpha\beta}-q_{\alpha}q_{\beta}\right) \langle \hat{Q}_{\alpha}({\bf q})\rangle_{JJ} \langle \hat{Q}_{\beta}({\bf q})\rangle_{JJ} ,
\end{equation}
for the case of the $|JJ\rangle$ ground state (which is, of course, not realized in the actual Ba$_2M$OsO$_6$ systems); $\hat{Q}_{\alpha}({\bf q})$ is the neutron-scattering operator (\ref{eq:Qi}) for the direction $\alpha$, and by $\langle \hat{X}\rangle_{JJ}$ we designate the expectation value of an operator $\hat{X}$ in the $|JJ\rangle$ state, $\langle \hat{X}\rangle_{JJ}\equiv\langle JJ|\hat{X}|JJ\rangle$. We consider ${\bf q}$ along the [100] direction; the corresponding $A({\bf q})$ vs. $q$, obtained by direct evaluation of the matrix elements using (\ref{eq:1el_mel}), is shown in Supplementary Fig.~\ref{fig:FF_JJstate} by dots. It can be compared with the same prefactor (shown in Supplementary Fig.~\ref{fig:FF_JJstate} in magenta) calculated within the dipole approximation for the matrix elements:
\begin{equation}\label{eq:dip_app}
\langle \hat{{\bf Q}}({\bf q})\rangle_{JJ} \simeq \frac{1}{2}\left[ \langle j_0(q) \rangle \langle \mathbf{L}+2\mathbf{S}\rangle_{JJ} + \langle j_2(q) \rangle \langle \mathbf{L} \rangle_{JJ}\right],
\end{equation}
where $\mathbf{L}$ and $\mathbf{S}$ are the orbital and spin moment operators, respectively, and $\langle j_l(q) \rangle$ are the radial integrals\cite{Kobayashi2011} of the spherical Bessel function of order $l$ for Os$^{6+}$. Of course, within the dipole approximation (\ref{eq:dip_app}) the matrix elements of $\hat{{\bf Q}}$ depend only on the absolute value $q$ of the momentum transfer. The total $M_{tot}=\langle \mathbf{L}+2\mathbf{S}\rangle_{JJ}$ and orbital $M_L=\langle \mathbf{L} \rangle_{JJ}$ magnetic moments are equal to 0.39 and $-1.49$, respectively. The oscillatory behavior of $A({\bf q})$ is thus due to $|M_{tot}| \ll |M_L|$, in conjunction with $\langle j_0 (q) \rangle$ being an ever-decreasing function while $\langle j_2(q) \rangle$ of Os$^{6+}$ is peaked at a non-zero $q\approx$4~\AA$^{-1}$. One may notice that $A({\bf q})$ calculated beyond the dipole approximation exhibits even stronger oscillations, reaching the overall maximum at large $q\approx$5~\AA$^{-1}$. Overall, the dipole approximation is reasonable for $q <$2~\AA$^{-1}$; it very significantly underestimates the magnitude of $A({\bf q})$ for larger $q$.
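The origin of the oscillation can be illustrated with a minimal numerical sketch. The radial-integral shapes below are hypothetical smooth stand-ins (the actual Os$^{6+}$ integrals are tabulated in Ref.~\onlinecite{Kobayashi2011}), chosen only to mimic their qualitative behavior; the moments $M_{tot}=0.39$ and $M_L=-1.49$ are the values quoted in the text.

```python
import numpy as np

# Hypothetical stand-ins for the radial integrals: <j0> decreasing
# monotonically from 1, <j2> vanishing at q = 0 and peaked near
# q ~ 4 inverse Angstrom.
def j0(q):
    return np.exp(-(q / 6.0) ** 2)

def j2(q):
    return (q / 4.0) ** 2 * np.exp(1.0 - (q / 4.0) ** 2)

M_tot, M_L = 0.39, -1.49  # moments quoted in the text

def A_dipole(q):
    """Elastic prefactor in the dipole approximation for a moment
    perpendicular to q, i.e. the square of eq. (dip_app)."""
    return (0.5 * (j0(q) * M_tot + j2(q) * M_L)) ** 2

q = np.linspace(0.0, 8.0, 400)
amp = 0.5 * (j0(q) * M_tot + j2(q) * M_L)
# Because |M_tot| << |M_L| and <j2> peaks at finite q, the amplitude
# inside the square changes sign: A(q) dips to zero and rises again.
print(amp[0] > 0, amp.min() < 0)
```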
Let us now evaluate the same quantity (\ref{eq:q_pref}) using the multipole form-factors (\ref{eq:Q_in_multipoles}). The $|JJ\rangle$ state has only two non-zero time-odd multipoles: the dipole $\langle O_z \rangle_{JJ}$=0.632 and the octupole $\langle O_{z^3} \rangle_{JJ}$=0.316. For those multipoles and ${\bf q}||$[100] only the form-factors for the direction $z$ are non-zero. Thus, by inserting (\ref{eq:Q_in_multipoles}) into (\ref{eq:q_pref}) one obtains:
\begin{equation}\label{eq:A_in_F}
A({\bf q})=F_{zz}^2({\bf q})\langle O_z \rangle^2_{JJ}+F_{zz^3}^2({\bf q})\langle O_{z^3} \rangle^2_{JJ}+2F_{zz}({\bf q})F_{zz^3}({\bf q})\langle O_z \rangle_{JJ}\langle O_{z^3} \rangle_{JJ}=A_{dd}({\bf q})+A_{oo}({\bf q})+A_{do}({\bf q}).
\end{equation}
One sees that the total value of $A({\bf q})$ thus calculated (red line in Supplementary Fig.~\ref{fig:FF_JJstate}) coincides, as expected, with that obtained by the direct evaluation of the $\hat{Q}$ matrix elements. The advantage of using the multipole form-factors is that one may separate the total $A({\bf q})$ into contributions due to different multipoles and their mixtures. In the present case one obtains (Supplementary Fig.~\ref{fig:FF_JJstate}) a large oscillatory dipole contribution $A_{dd}({\bf q})$, a small octupole contribution $A_{oo}({\bf q})$ exhibiting a shallow peak at $q\approx$3~\AA$^{-1}$, and a mixed dipole-octupole contribution $A_{do}({\bf q})$ with a magnitude comparable to that of $A_{dd}({\bf q})$.
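The decomposition in eq.~(\ref{eq:A_in_F}) is a simple algebraic identity, which the following sketch verifies numerically; the form-factor values are illustrative (the paper computes $F_{zz}$ and $F_{zz^3}$ on a ${\bf q}$-grid), while the multipole expectation values are those quoted above for the $|JJ\rangle$ state.

```python
import numpy as np

O_z, O_z3 = 0.632, 0.316   # <O_z>_JJ and <O_{z^3}>_JJ from the text
F_zz, F_zz3 = 0.7, -0.4    # hypothetical form-factor values at one q

A_dd = F_zz**2 * O_z**2                # pure dipole contribution
A_oo = F_zz3**2 * O_z3**2              # pure octupole contribution
A_do = 2 * F_zz * F_zz3 * O_z * O_z3   # dipole-octupole interference

A_total = (F_zz * O_z + F_zz3 * O_z3) ** 2
# The three terms sum exactly to the square of the total amplitude:
print(np.isclose(A_total, A_dd + A_oo + A_do))
```

The interference term $A_{do}$ carries the sign of $F_{zz}F_{zz^3}$ and may therefore either enhance or suppress the total intensity.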
\section{INS cross-section of BCOO and BMOO}
In Supplementary Fig.~\ref{fig:INS_Ca_Mg} we display the calculated powder-averaged INS cross-section for cubic BCOO and BMOO; the analogous data for BZOO are shown in Fig.~4a of the main text. As in the case of BZOO, only crystal-field excitations contribute to the INS, with no discernible scattering intensity present below 20~meV.
\begin{figure}[!tb]
\includegraphics[width=0.85\columnwidth,left]{INS_sigma_BCOO.pdf}
\includegraphics[width=0.77\columnwidth,left]{INS_sigma_BMOO.pdf}
\par
\caption{Color map (in arb. units) of the calculated powder-averaged INS differential cross-section in BCOO (top) and BMOO (bottom) as a function of the energy transfer $E$ and momentum transfer $q$.}
\label{fig:INS_Ca_Mg}
\end{figure}
\section{Tetragonal crystal field in distorted BZOO}
In order to evaluate the dependence of the tetragonal crystal field (CF) on the corresponding distortion in BZOO we have carried out self-consistent DFT+HI calculations for a set of tetragonally distorted unit cells. In these calculations we employed the tetragonal body-centered unit cell, whose lattice parameters are $a'=a/\sqrt{2}$ and $c=a$ for an undistorted cubic lattice with the lattice parameter $a$. The tetragonal distortion was thus specified by $\delta=1-c/a=1-c/(\sqrt{2}a')$. Other parameters of those calculations ($U$, $J_H$, the choice of projection window) are the same as for the cubic structure (Supplementary Sec.~I).
The local one-electron Hamiltonian for an Os 5$d$ shell in a tetragonal environment reads
\begin{equation}\label{eq:H1el}
H_{1el}=E_0+\lambda\sum_i \mathbf{l}_i\cdot\mathbf{s}_i+L_2^0\hat{T}_2^0+L_4^0\hat{T}_4^0+L_4^4\hat{T}_4^4,
\end{equation}
where the first two terms in the RHS are the uniform shift and spin-orbit coupling. The last three terms represent the CF through the one-electron Hermitian Wybourne's tensors $T_k^q$ (see, e.~g., Ref.~\cite{Delange2017} for details). The term $L_2^0\hat{T}_2^0$ arises due to the tetragonal distortion. By fitting the matrix elements of (\ref{eq:H1el}) to the converged Os 5$d$ one-electron level positions as obtained by DFT+HI for a given distortion $\delta$ we extracted \cite{Delange2017} the tetragonal CF parameter $L_2^0$ vs. $\delta$. The resulting almost perfect linear dependence for small $\delta$ is shown in Fig.~\ref{fig:L20_vs_ca}, giving $L_2^0=K'\delta$ with $K'=-13.3$~eV.
Within the Os $d^2$ $J_{eff}$=2 multiplet the one-electron tensor $\hat{T}_2^0$ can be substituted by the corresponding Stevens operator, $\hat{T}_2^0=-0.020\mathcal{O}_2^0$, where $\mathcal{O}_2^0=3J_z^2-J_{eff}(J_{eff}+1)$. As a result, for the tetragonal CF parameter in the Stevens normalization, $V_t=K\delta$, one obtains $K=-0.020K'=266$~meV.
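The conversion from the fitted slope to the Stevens-normalized parameter amounts to simple arithmetic, which the sketch below reproduces; the distortion values are made up solely to illustrate the linear regression of the figure, with $K'=-13.3$~eV taken from the text.

```python
import numpy as np

K_prime_eV = -13.3  # fitted slope L_2^0 = K' * delta, from the text

# Idealized noiseless "calculated points" for the linear fit.
delta = np.array([-0.004, -0.002, 0.0, 0.002, 0.004])
L20 = K_prime_eV * delta

slope = np.polyfit(delta, L20, 1)[0]  # linear regression, as in the figure
K_meV = -0.020 * slope * 1000.0       # Stevens normalization, converted to meV
print(round(K_meV))                   # -> 266
```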
\begin{figure}[!tb]
\begin{centering}
\includegraphics[width=0.75\columnwidth]{L20_vs_ca-eps-converted-to.pdf}
\par\end{centering}
\caption{Calculated crystal-field parameter $L_2^0$ vs. tetragonal distortion $\delta=1-c/a$ in BZOO. The circles are calculated points; the line is a linear regression fit.}
\label{fig:L20_vs_ca}
\end{figure}
\section{Introduction}
The appearance of a gap in the quasiparticle spectrum has been identified as a key feature of the superconducting state of matter since the early days of the field and the formulation of the foundational BCS theory of the superconducting phenomenon. \cite{schrieffer} It has also been long known that the gap may not extend everywhere on the Fermi surface, and that measure-zero sections of the Fermi surface in the form of gapless points and/or gapless lines are also possible, and in fact common \cite{volovik, sigrist}. It came as a surprise, however, when it was shown recently that in centrosymmetric multiband superconductors with broken time-reversal symmetry the outcome could be none of the above options, but a new, and typically much smaller, surface in momentum space, named the Bogoliubov-Fermi (BF) surface.\cite{agterberg, brydon, yang} In contrast to the previous examples of BF surfaces, \cite{wilczek, gubankova} here it is not a portion of the normal Fermi surface that is being left ungapped; rather, the BF surface is better thought of as a gapless point or line inflated into a surface by the presence of other bands. Of course, the presence of a BF surface in the quasiparticle spectrum of a superconducting state in principle leaves a distinct signature on the crucial low-temperature properties, such as the temperature dependence of the penetration depth, of the specific heat, and of the thermal conductivity, which would all reflect the finite density of states that remains. \cite{timm, setty} Signs of a finite density of states in the superconducting state have possibly been observed in $\text{U}_{1-x} \text{Th}_x \text{Be}_{13}$ \cite{stewart, zieve}, although the precise nature of the superconducting order there seems not yet entirely clear.
The presence of inversion symmetry in centrosymmetric superconductors had been assumed to be crucial for the appearance of the BF surface, as well as for its protection by the $Z_2$ topological invariant, which requires the inversion symmetry for its definition. \cite{agterberg, bzdusek} However, examples of time-reversal-broken multiband superconductors without inversion that nevertheless featured BF surfaces emerged,\cite{volovik1, schnyder, sim, link1} and it has been subsequently shown that this is a rather generic feature of noncentrosymmetric superconductors as well. \cite{link2} Furthermore, the stability of the inversion-symmetric BF surface has been questioned \cite{oh, tamura}; as will be discussed in this paper at length as well, the inversion symmetry makes the BF surface everywhere doubly degenerate, and this degeneracy can be removed by a manifest or a spontaneous breaking of inversion. It was shown, for example,\cite{oh} that in the presence of favorable effective electron-electron interactions inversion symmetry at zero temperature becomes spontaneously broken, and the BF surface is then reduced or eliminated. Another example is an inversion-reducing lattice distortion, which via electron-phonon coupling can also cause the reduction of the BF surface in the quasiparticle spectrum. \cite{tim2} The net effect of these examples of dynamical breaking of inversion symmetry is either a fully gapped quasiparticle spectrum, or a new non-degenerate BF surface, of the type that exists in the noncentrosymmetric case. \cite{link2}
In this paper we first revisit the formation of the BF surface in the inversion-symmetric case and examine it from the point of view of the effective low-energy quasiparticle Hamiltonian $H_{ef}$ in the superconductor \cite{brydon, link2, berg, venderbos}, previously derived for the noncentrosymmetric superconductors in ref. \cite{link2}. The effective Hamiltonian describes the two particle and two hole states that intersect the Fermi level in the normal phase, intraband-coupled by the presence of the superconducting order parameter, and then ``renormalized'' by the interband coupling to other states that lie farther from the Fermi level. We show that $H_{ef}$ is, in a certain preferred basis and at every momentum, a four-dimensional imaginary matrix, and as such it is a generator of the real representation of the group of four-dimensional rotations in Euclidean space, i. e. of the standard $SO(4)$. The emergence of $SO(4)$ suggests possible analogies to classical relativity, and indeed the time-dependent Schr{\"o}dinger equation governed by such an $H_{ef}$ is related to the covariant form of Newton's second law in the presence of an ``electromagnetic'' Lorentz force in the momentum space. \cite{rindler} Although the full analogy between the two time evolutions does not, and as we explain, cannot exist, the BF surface can be understood as an orthogonality condition between the fictitious momentum-dependent ``electric'' and ``magnetic'' fields, which can be read off as the coefficients of $H_{ef}$ when expanded in terms of the generators of the $SO(4)$ Lie algebra. The orthogonality condition allows the Lorentz force to vanish on the BF surface provided that the velocity of the fictitious classical particle with the right magnitude is orthogonal to both the ``electric'' and ``magnetic'' fields, which is tantamount to finding the eigenstates with zero energy in the original quantum problem.
Interestingly, since the quantum problem has two orthogonal zero modes at each momentum at the BF surface, whereas the analogous classical Lorentz equation of motion can have only one physical solution, the second quantum solution corresponds to the unphysical ``spacelike" tachyonic solution for the velocity four-vector. The latter has no physically acceptable classical analog, but is nevertheless formally a solution of the Lorentz equation, and as such it appears in the analogous quantum problem.
The relativistic analogy becomes particularly useful in studying the potential interaction-induced instability of the inversion-symmetric BF surface. To this purpose we formulate a single-particle model of spinless fermions hopping on the Lieb lattice designed to fall into the topological class D \cite{bzdusek}, i. e. to {\it anticommute} only with an antiunitary operator ``$\mathcal{A}$'' with a positive square, and to violate time-reversal symmetry. The operator $\mathcal{A}$ can be thought of as representing the combined effects of inversion and particle-hole transformations, and its anticommutation with $H_{ef}$ is tied to the inversion symmetry of the full original Bogoliubov-de Gennes (BdG) quasiparticle Hamiltonian. Since the Lieb lattice has a four-component unit cell, our lattice single-particle Hamiltonian is then an $SO(4)$ generator, with a doubly degenerate manifold of zero-energy states, fully equivalent to a BF surface in the superconducting problem. Having such a real-space lattice model allows easy addition of two-body interaction terms of one's choice: we show that the simplest nearest-neighbor repulsion between the fermions, for example, favors spontaneous breaking of inversion, that is, a dynamical generation of a single-particle term in the mean-field Hamiltonian which, in contrast to $H_{ef}$, {\it commutes} with the operator $\mathcal{A}$. At zero temperature the combined effects of the finite density of zero-energy states and the matrix structure of the dynamically generated term make the BF surface unstable at infinitesimal repulsion. The instability produces a smaller, deformed, and non-degenerate BF surface.
The paper is organized as follows. In sec.~\ref{sec:Sec2} we discuss the multiband BdG Hamiltonian as describing Cooper pairing between time-reversed states, for a general time-reversal operator. The advantage of this representation is that the existence of a nonunitary operator $\mathcal{A}$ that anticommutes with the BdG Hamiltonian can be seen to be a universal feature tied to the general commutativity of spatial symmetries such as inversion and the time reversal. A critical discussion of the standard construction of the all-important operator $\mathcal{A}$ is provided in Appendix~\ref{ap:Ap1}, and further support for the above mentioned commutativity on the example of the standard Dirac Hamiltonian is given in Appendix~\ref{ap:AP2}. In sec.~\ref{sec:Sec3} we derive the low-energy effective Hamiltonian by invoking the Schur complement, tantamount to integration over bands with finite energy, and discuss its energy eigenvalues and the $SO(4)$ structure. The effective Hamiltonian in the canonical representation of $SO(3)\times SO(3) \cong SO(4) $ and its relation to the inter- and intraband pairing, as well as the transformation between the $SO(4)$ representation to the canonical representation of the effective Hamiltonian, can be found in Appendix~\ref{ap:Ap3}. The zero-energy eigenstates are computed in sec.~\ref{sec:Sec4}, and the analogy with the classical Lorentz force equation is expounded in sec.~\ref{sec:Sec5}. How the preservation of time-reversal forbids the BF surface in this formulation is explained in sec.~\ref{sec:Sec6}. In sec.~\ref{sec:Sec7} we define a hopping Hamiltonian on the Lieb lattice that falls into the required topological class D and provides a realization of a BF surface, and introduce nearest-neighbor repulsive interactions. The mean-field theory of the BF surface instability is given in sections~\ref{sec:Sec8} and \ref{sec:Sec9}. Conclusions and discussion are presented in sec.~\ref{sec:Sec10}.
\section{BdG Hamiltonian with inversion}
\label{sec:Sec2}
The quantum-mechanical action for the Bogoliubov quasiparticles in the superconducting state is given by:
\begin{equation}
S= k_B T \sum_{\omega_n, \textbf{p}} \Psi^\dagger (\omega_n, \textbf{p}) [-i\omega_n + H_{\rm{BdG}} (\textbf{p}) ] \Psi(\omega_n, \textbf{p})
\:,
\label{eq:action_Eq1}
\end{equation}
where the Nambu spinor is here defined as ${ \Psi (\omega_n, \textbf{p}) = \big(\psi(\omega_n,\textbf{p}) , \mathcal{T} \psi(\omega_n, \textbf{p} ) \big) ^{\rm T} }$, $\textbf{p}$ is the momentum, $\omega_n = (2n+1) \pi k_B T$ is the Matsubara frequency, and $T$ is the temperature. $\psi=(\psi_1,\cdots,\psi_N)$ is an $N$-component Grassmann number describing $N$ eigenstates of the normal state Hamiltonian $H(\textbf{p})$, and its time-reversed counterpart is $\mathcal{T} \psi(\omega_n, \textbf{p} )= U \psi^* (-\omega_n, -\textbf{p})$, where $\mathcal{T}$ is the antiunitary time-reversal operator, with $U$ as its unitary part.
This way the BdG Hamiltonian becomes:
\begin{eqnarray}
H_{\rm BdG}(\textbf{p}) &=&
\begin{pmatrix} H(\textbf{p})-\mu & \Gamma (\textbf{p}) \\
\Gamma^\dagger (\textbf{p}) & - \big[ H(\textbf{p})- \mu \big]
\end{pmatrix}
\:.
\label{eq:BdG-Ham-Eq2}
\end{eqnarray}
For simplicity, we assume first that the $N$-dimensional Hermitian Hamiltonian $H(\textbf{p})$ is time-reversal-symmetric, so that
\begin{equation}
U^\dagger H(\textbf{p}) U= H^* (-\textbf{p}),
\label{eq:Eq3}
\end{equation}
or equivalently, in terms of the commutator, $[H(\textbf{p}), \mathcal{T}]=0$. The off-diagonal (pairing) matrix needs to satisfy
\begin{equation}
U^\dagger \Gamma (\textbf{p}) U= -s \Gamma^{\rm T} (-\textbf{p}),
\end{equation}
where $s= \mathcal{T}^2 = U U^* = \pm 1$. For real electrons the sign $s=-1$, of course, but we keep the general sign $s$ nevertheless, to include fermions with (effective) integer spin \cite{sim, nandkishore} as well. As any other matrix, the pairing matrix can also be written as $ \Gamma (\textbf{p}) = \Gamma_1 (\textbf{p}) - i \Gamma_2 (\textbf{p})$, where $\Gamma_{1,2}$ are Hermitian. Then
\begin{equation}
U^\dagger \Gamma_{1,2} (\textbf{p}) U= -s \Gamma^* _{1,2} (-\textbf{p}),
\end{equation}
and for $s=-1$ ($s=1$) $\Gamma_{1,2}$ are simply even (odd) under time reversal, and $[\Gamma_{1,2} (\textbf{p}), \mathcal{T}] =0$ ($ \{\Gamma_{1,2} (\textbf{p}), \mathcal{T}\} =0$, where $\{,\}$ is the anticommutator). \cite{boettcher1, boettcher2}
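The split of an arbitrary pairing matrix into two Hermitian parts can be illustrated with a short numerical sketch; a random matrix stands in for $\Gamma$, and no specific gap structure is assumed.

```python
import numpy as np

# Any matrix Gamma can be written as Gamma = Gamma1 - i*Gamma2 with both
# parts Hermitian, via Gamma1 = (Gamma + Gamma^dag)/2 and
# Gamma2 = i*(Gamma - Gamma^dag)/2.
rng = np.random.default_rng(0)
N = 4
Gamma = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))

Gamma1 = (Gamma + Gamma.conj().T) / 2
Gamma2 = 1j * (Gamma - Gamma.conj().T) / 2

print(np.allclose(Gamma1, Gamma1.conj().T))       # Gamma1 is Hermitian
print(np.allclose(Gamma2, Gamma2.conj().T))       # Gamma2 is Hermitian
print(np.allclose(Gamma, Gamma1 - 1j * Gamma2))   # decomposition recovers Gamma
```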
Let us now also assume the inversion symmetry, i. e. the existence of the inversion operator $P$ with the effect:
\begin{equation}
P^\dagger H(\textbf{p}) P = H(-\textbf{p}),
\end{equation}
\begin{equation}
P^\dagger \Gamma (\textbf{p}) P= \Gamma (-\textbf{p}).
\end{equation}
The inversion transformation $\mathcal{P} $ in momentum representation is then the combination of the operator $P$ and the momentum reversal $\textbf{p} \rightarrow -\textbf{p}$. The inversion symmetry of the BdG Hamiltonian means that $[O(\textbf{p}), \mathcal{P}]=0$, for $O=H$, and $O=\Gamma$.
In contrast to the time reversal, the inversion operator is unitary, and $P^\dagger P =1$. We also require that it is a physical observable, so that $P^\dagger =P$ as well. This enforces that
\begin{equation}
P^2 = +1 \:,
\label{eq:Eq8}
\end{equation}
so that the eigenvalues of the operator $P$ are $\pm 1$, i. e. the ``parity" of the eigenstates of $P$.
Finally, we postulate that, in general, inversion and time-reversal operations commute:
\begin{equation}
[\mathcal{P}, \mathcal{T}] =0
\:.
\label{eq:Eq9}
\end{equation}
The motivation is that inversion is an operation in real space, and as such should have its action completely independent of the notion of time. The same mutual commutation relation applies to any $SO(3)$ rotation and time reversal, which can also be understood as the underlying reason for the antiunitarity of the time-reversal operator. Additional arguments in support of this postulate are given in Appendix~\ref{ap:AP2}.
The BdG Hamiltonian can be rewritten as
\begin{equation}
H_{\rm BdG}(\textbf{p}) = \sigma_3 \otimes [H(\textbf{p}) -\mu] + \sigma_1 \otimes \Gamma_1 (\textbf{p}) + \sigma_2 \otimes \Gamma_2 (\textbf{p})
\:,
\label{eq:Eq10a}
\end{equation}
where $\sigma_i$, $i =1,2,3$, are the usual Pauli matrices. We observe that if $\Gamma_2 $ is finite and $s=-1$, then $[H_{\rm BdG}, 1\otimes \mathcal{T}] \neq 0$. Similarly, when $s=1$, $[H_{\rm BdG}, 1\otimes \mathcal{T}] \neq 0$ for finite $\Gamma_1$. When $s=1$ and $\Gamma_1 =0$ the overall phase factor of $i$ can be gauged away, and the matrix $\Gamma$ again chosen to be Hermitian. It is the non-Hermiticity of the pairing matrix $\Gamma$ in either case that signals the breaking of the time reversal in the superconducting state. On the other hand, $[H_{\rm BdG}(\textbf{p}), 1\otimes \mathcal{P} ]=0$, and the BdG Hamiltonian is even under inversion.
One can now construct a new antiunitary operator
\begin{equation}
\mathcal{A}= \sigma_k \otimes (\mathcal{P} \mathcal{T})
\end{equation}
with $k=2$ for $s=-1$, and $k=1$ for $s=1$. Evidently,
\begin{equation}
\{ H_{\rm BdG} (\textbf{p}) , \mathcal{A} \} =0,
\end{equation}
and the BdG Hamiltonian is odd under $\mathcal{A}$. By construction
\begin{equation}
\mathcal{A}^2 = (\sigma_k \sigma_k ^*) \otimes ( \mathcal{P}^2 \mathcal{T}^2 ) = + 1
\:,
\end{equation}
where we used the fact that $ \sigma_k \sigma_k ^* = \mathcal{T}^2 =s$, and Eqs.~\eqref{eq:Eq8} and \eqref{eq:Eq9}. An equivalent antiunitary operator was constructed before,\cite{agterberg} and it was responsible for the topological nontriviality of the ensuing BF surface. The alternative construction is presented and critically discussed in Appendix~\ref{ap:Ap1}. We see here that its existence is guaranteed even when the inversion operator matrix $P$ is not diagonal, or a real matrix in a given representation, and that it may be understood as a consequence of basic postulates on the discrete symmetries involved. The existence of an operator that anticommutes with the BdG Hamiltonian implies that at fixed momentum the eigenstates of $H_{\rm BdG}(\textbf{p})$ come in pairs of states with opposite signs of energy. Such an operator does not exist when the system has no inversion symmetry in the normal phase \cite{link1}. $H_{\rm BdG}(\textbf{p})$ with inversion and without time reversal therefore falls into the topological class D. \cite{bzdusek}
We have so far assumed that the time reversal symmetry may be violated only by the off-diagonal pairing terms in $H_{\rm{BdG}} (\textbf{p})$ in Eq.~\eqref{eq:Eq10a}. One can, however, imagine it being broken, additionally or exclusively, by diagonal terms in Eq.~\eqref{eq:Eq10a}. In addition to the time-reversal-invariant part of the normal state Hamiltonian $H(\textbf{p})$, this would require an addition of a time-reversal-odd term to it: $H(\textbf{p}) \rightarrow H(\textbf{p}) + H' (\textbf{p}) $, with
\begin{equation}
U^\dagger H' (\textbf{p}) U= - (H' (-\textbf{p}) )^*.
\end{equation}
It is easy to see that the extra minus sign in the above expression relative to Eq.~\eqref{eq:Eq3} yields then an additional term in Eq.~\eqref{eq:Eq10a}:
\begin{equation}
1 \otimes H' (\textbf{p}).
\end{equation}
Assuming that $H' (\textbf{p})$ is also even under inversion, it is odd under the combined operation of time reversal and inversion, and the extra term then evidently also anticommutes with the operator $\mathcal{A}$. With this term included $H_{\rm {BdG} } (\textbf{p})$ in fact adopts its most general form that exhibits this property.
An important observation can be made at this point: the fact that $\mathcal{A}^2 =+1$ implies that there exists a ``real'' basis in which the unitary part of $\mathcal{A}$ is trivial, and $\mathcal{A}= K$, i. e. it is just complex conjugation.\cite{herbutprb} In this basis, therefore, $H_{\rm BdG} (\textbf{p}) $ at every (real) momentum $\textbf{p} $ is a purely imaginary matrix. Of course, that also makes it antisymmetric, since it is Hermitian. Both of these facts play a role in the rest of our discussion.
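These two facts are easy to verify numerically; in the sketch below a random purely imaginary Hermitian matrix stands in for $H_{\rm BdG}(\textbf{p})$ at a single momentum, and its spectrum indeed comes in pairs of opposite energies.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # dimension 2M for M = 2
B = rng.normal(size=(n, n))
H = 1j * (B - B.T)  # purely imaginary and Hermitian

print(np.allclose(H, H.conj().T))  # Hermitian
print(np.allclose(H.T, -H))        # hence antisymmetric
ev = np.linalg.eigvalsh(H)         # sorted ascending
print(np.allclose(ev, -ev[::-1]))  # eigenvalues come in +/- pairs
```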
\section{ Effective Hamiltonian and emergence of $SO(4)$ }
\label{sec:Sec3}
Let us define the eigenvalues and the eigenstates of the normal state Hamiltonian $H(\textbf{p})$ as $E_i(\textbf{p})$ and $ \phi_i (\textbf{p})$, $i=1,...N$. We may call the eigenstates with their energy arbitrarily close to the Fermi surface, $\phi_i (\textbf{p})$ with $i=1,...M$, ``light'', and the remaining $N-M$ eigenstates ``heavy''. When $s=-1$, the Kramers theorem implies that $M$ is even, and when $s=1$, $M$ can be either even or odd. Obviously, $M=2$, corresponding to the usual spin-1/2 fermions such as electrons, would be of the greatest interest.
The spectrum of the Bogoliubov quasiparticles at a momentum $\textbf{p}$ is given by the solution of the equation for the real frequency $\omega$:
\begin{equation}
\det (H_{\rm BdG}(\textbf{p})-\omega ) =0
\:.
\label{eq:Especrum-BdG_Eq4}
\end{equation}
With the separation into light and heavy states at a given momentum near the normal Fermi surface one can write the BdG Hamiltonian in the
basis $\{ (\phi_i (\textbf{p}),0)^T, (0,\phi_i (\textbf{p}))^T \}$, $i=1,...N$ as
\begin{eqnarray}
H_{\rm BdG}(\textbf{p}) &=&
%
\begin{pmatrix} H_l (\textbf{p}) & H_{lh}(\textbf{p}) \\
H_{lh}^\dagger (\textbf{p}) & H_h (\textbf{p})
\end{pmatrix}
%
\:.
\label{eq:Ham_light-heavy-modes_Eq5}
\end{eqnarray}
The block for the light particle and hole states, $H_l(\textbf{p})$, is a $2M$-dimensional matrix and describes the dispersion of the light particle and hole states as well as the intraband pairing. The heavy modes are described by the $2(N-M)$-dimensional matrix $H_h (\textbf{p}) $, which contains the energies of the heavy particles and holes and the intra- and interband pairing only between the heavy modes. Finally, the coupling between the light and heavy states, $H_{lh}(\textbf{p})$, is a $2M\times 2(N-M)$ matrix. (Explicit expressions for $H_{l,h,lh}$ for $M=2$ can be found in Appendix~\ref{ap:Ap3.1}). The above determinant can now be rewritten as
%
\begin{equation}
\det (H_{\rm BdG}(\textbf{p}) -\omega )= \det (H_{h}(\textbf{p}) -\omega ) \det L_{ef}(\omega, \textbf{p})
\:,
\label{eq:Schurcomplement-Eq6}
\end{equation}
where the effective Lagrangian $L_{ef}$ is the {\it Schur complement} \cite{schur} of the block matrix for the heavy modes:
%
\begin{equation}
L_{ef} (\omega, \textbf{p}) = H_l (\textbf{p}) -\omega - H_{lh}(\textbf{p}) (H_h (\textbf{p}) -\omega )^{-1} H_{lh}^\dagger (\textbf{p}).
\label{eq:Schurcomplement-Eq7}
\end{equation}
The first factor in Eq.~\eqref{eq:Schurcomplement-Eq6} may also be understood as the fermionic partition function for the heavy modes, and the second factor is therefore the residual partition function for the light modes, renormalized by the integration over the heavy modes \cite{link2}. $L_{ef} (\omega, \textbf{p})$ is well defined whenever the heavy block is invertible, which is fulfilled for $|\omega|< |E_i (\textbf{p})-\mu| $ for $i>M$. Under this condition the eigenvalue equation in Eq.~\eqref{eq:Especrum-BdG_Eq4} reduces to
$
\det L_{ef}(\omega, \textbf{p}) =0.
$
In particular, $\omega=0$ is a solution only when
\begin{equation}
\det H_{ef}(\textbf{p}) =0,
\label{eq:determinant_eff_Ham_Eq9}
\end{equation}
with $H_{ef} (\textbf{p}) = L_{ef} ( 0, \textbf{p} )$. We call $H_{ef} (\textbf{p}) $ the effective Hamiltonian. \cite{brydon, link2, venderbos} The same notion has been used in the past in studies of the stability of point nodes in two-dimensional d-wave superconductors. \cite{berg} We emphasize that only the solutions for the zero modes of $H_{ef}(\textbf{p})$ are exactly the same as those for the original $H_{\rm BdG}(\textbf{p})$; the rest of their spectra differ. This is, however, all that is needed to understand the emergence of the BF surface, the dispersion of quasiparticles close to it, and even the instability of the BF surface, as we show below.
According to Eq.~\eqref{eq:Schurcomplement-Eq7} the effective Hamiltonian is thus
\begin{equation}
H_{ef} ( \textbf{p}) = H_l (\textbf{p}) - H_{lh}(\textbf{p}) H_h ^{-1} (\textbf{p}) H_{lh}^\dagger (\textbf{p})
\:.
\label{eq:Eq19}
\end{equation}
The effective Hamiltonian computed in the standard (``canonical") representation where the diagonal terms of the two matrices $H_{l,h}(\textbf{p})$ are the energy dispersions of the states and the off-diagonal terms of the three matrices $H_{l,h,lh}(\textbf{p})$ are the intra- and interband pairing between the different states can be found in Appendix~\ref{ap:Ap3}.
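The Schur-complement factorization underlying the effective Hamiltonian can be checked directly on a random Hermitian block matrix; the block sizes below are illustrative and do not correspond to any particular band structure.

```python
import numpy as np

# Random Hermitian matrix standing in for H_BdG at one momentum, split
# into light (nl = 2M = 4) and heavy (nh = 2(N-M) = 6) blocks.
rng = np.random.default_rng(2)
nl, nh = 4, 6
X = rng.normal(size=(nl + nh, nl + nh)) + 1j * rng.normal(size=(nl + nh, nl + nh))
H = (X + X.conj().T) / 2
Hl, Hlh, Hh = H[:nl, :nl], H[:nl, nl:], H[nl:, nl:]

# Any omega at which (Hh - omega) is invertible (generically true for a
# random draw) admits the Schur complement L_ef of the heavy block.
omega = 0.1
L_ef = (Hl - omega * np.eye(nl)
        - Hlh @ np.linalg.inv(Hh - omega * np.eye(nh)) @ Hlh.conj().T)

# det(H - omega) = det(Hh - omega) * det(L_ef), as in the factorization above.
lhs = np.linalg.det(H - omega * np.eye(nl + nh))
rhs = np.linalg.det(Hh - omega * np.eye(nh)) * np.linalg.det(L_ef)
print(np.isclose(lhs, rhs))
```

In particular, setting $\omega=0$ shows that the zero modes of the full matrix coincide with those of the effective Hamiltonian, as stated above.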
To understand its general structure, however, it is better to work in the real basis. In the real basis $\mathcal{A} = K$, and thus all of the matrices $H_l ( \textbf{p})$, $H_{lh} ( \textbf{p})$ and $H_{h} ( \textbf{p}) $ are imaginary. Clearly, $H_{ef} ( \textbf{p})$ is then imaginary as well. The effective low-energy Hamiltonian inherits the antiunitary (anticommuting) symmetry of the full BdG Hamiltonian, and therefore in general is a Hermitian imaginary $2M$-dimensional matrix, i. e. a generator of the real representation of the $SO(2M)$ group of rotations. In the physically most pertinent case of $M=2$, $H_{ef} ( \textbf{p})$ is a generator of $SO(4)$, and in the real basis can be written as
\begin{equation}
H_{ef} ( \textbf{p}) = \sum_{k=1} ^3 ( a_k ( \textbf{p} ) N_k + b_k ( \textbf{p}) J_k)
\label{eq:Eq20}
\end{equation}
where $[N_{k}]_{\mu \nu }= - [N_{k}]_{\nu \mu} = -i \delta_{\mu 0} \delta_{\nu k}$, and $[J_{k}]_{ij} = - i \epsilon_{ijk}$, $[J_{k}]_{0j} = [J_{k}]_{j0} =0$. Here the Greek indices run from 0 to 3, and Latin indices from 1 to 3. We observe that in the real basis the matrix elements of the
effective Hamiltonian may be written as
\begin{equation}
[ H_{ef} ( \textbf{p}) ] _{\mu \nu} = i F^{\mu \nu} ( \textbf{p} ),
\end{equation}
where $F^{\mu \nu} ( \textbf{p} )$ is the standard antisymmetric electromagnetic tensor, with the ``vector" coefficients $\textbf{a} ( \textbf{p} )= (a_1 ( \textbf{p} ), a_2 ( \textbf{p} ), a_3 ( \textbf{p} ))$ and $\textbf{b} ( \textbf{p} )= (b_1( \textbf{p} ),b_2 ( \textbf{p} ),b_3 ( \textbf{p} ))$ playing the role of momentum-dependent ``electric" and ``magnetic" fields. This analogy will be deepened and will come in handy shortly when we discuss the form of the
zero-energy eigenstates of the effective Hamiltonian.
The six four-dimensional imaginary matrices $N_k$ and $J_k$ are chosen to close the standard SO(4) Lie algebra in the following form:
\begin{equation}
[J_i, J_j ] = i\epsilon_{ijk} J_k,
\end{equation}
\begin{equation}
[N_i, J_j ] = i\epsilon_{ijk} N_k ,
\end{equation}
\begin{equation}
[N_i, N_j ] = i\epsilon_{ijk} J_k.
\end{equation}
Indeed, it is easily seen that the fully imaginary representation of the generators $N_k$ and $J_k$ defined above is equivalent to the more standard representation of real symmetric Lorentz boosts $K_k$, with $[K_k]_{\mu \nu} = \delta_{\mu 0} \delta_{\nu k} + \delta_{\mu k} \delta_{\nu 0}$, and the same imaginary generators of rotations $J_k$; explicitly, $N_k = S K_k S^\dagger$ and $J_k = S J_k S^\dagger$, where
\begin{equation}
S= e^{-i \frac{\pi}{4} G}
\end{equation}
and the matrix $G = \diag (1,-1,-1,-1)$.
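The algebra above is straightforward to verify numerically. The following minimal Python sketch (using numpy; the matrix conventions are exactly those defined in the text, and the check itself is not part of the derivation) confirms the $SO(4)$ commutation relations and the unitary equivalence $N_k = S K_k S^\dagger$:

```python
import numpy as np

# Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def N(k):
    """[N_k]_{0k} = -i, antisymmetric and imaginary (k = 1..3)."""
    M = np.zeros((4, 4), dtype=complex)
    M[0, k] = -1j
    M[k, 0] = 1j
    return M

def J(k):
    """[J_k]_{ij} = -i eps_{ijk} in the spatial block."""
    M = np.zeros((4, 4), dtype=complex)
    M[1:, 1:] = -1j * eps[:, :, k - 1]
    return M

def comm(A, B):
    return A @ B - B @ A

# SO(4) algebra: [J_i,J_j] = i eps_ijk J_k, [N_i,J_j] = i eps_ijk N_k,
# and [N_i,N_j] = i eps_ijk J_k
for i in range(1, 4):
    for j in range(1, 4):
        sumJ = sum(1j * eps[i-1, j-1, k-1] * J(k) for k in range(1, 4))
        sumN = sum(1j * eps[i-1, j-1, k-1] * N(k) for k in range(1, 4))
        assert np.allclose(comm(J(i), J(j)), sumJ)
        assert np.allclose(comm(N(i), J(j)), sumN)
        assert np.allclose(comm(N(i), N(j)), sumJ)

# Equivalence to real symmetric boosts: N_k = S K_k S^dagger, J_k invariant
G = np.diag([1.0, -1.0, -1.0, -1.0])
S = np.diag(np.exp(-1j * np.pi * np.diag(G) / 4))
for k in range(1, 4):
    K = np.zeros((4, 4))
    K[0, k] = K[k, 0] = 1.0
    assert np.allclose(N(k), S @ K @ S.conj().T)
    assert np.allclose(J(k), S @ J(k) @ S.conj().T)
print("SO(4) algebra and boost equivalence verified")
```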
By forming the symmetric and the antisymmetric linear combinations
\begin{equation}
R_{i, \pm} = \frac{1}{2} (J_i \pm N_i) \:,
\end{equation}
it readily follows that
\begin{equation}
[R_{i,r}, R_{j,r} ] = i\epsilon_{ijk} R_{k,r}\:,
\end{equation}
for $r=\pm$, whereas
\begin{equation}
[R_{i,+}, R_{j,-} ] = 0
\:.
\end{equation}
The Lie algebra of the generators of the $SO(4)$ is the same as the Lie algebra of the generators of the $SO(3) \times SO(3)$, as is well known.\cite{georgi} The effective Hamiltonian can therefore be rewritten as
\begin{equation}
H_{ef} ( \textbf{p}) = \sum_{k=1}^3 \sum_{r=\pm} ( a_k ( \textbf{p} ) + r b_k ( \textbf{p})) R_{k, r}.
\label{eq:Eq29}
\end{equation}
The four-dimensional matrices $N_i$ and $J_{i}$ form the irreducible $(1/2,1/2)$ representation of the Lie algebra $SO(3)\times SO(3)$, where $j=1/2$ refers to the spin-1/2 representation of $SO(3)$.\cite{georgi} The matrices $R_{k,r}$ can thus be brought by a unitary transformation into $ 1\otimes (\sigma_k /2)$ and $(\sigma_k /2) \otimes 1$. The explicit unitary transformation that does so is provided in Appendix~\ref{ap:Ap33}. The spectrum of $H_{ef}$ can then be readily discerned as
\begin{equation}
E (\textbf{p}) = \pm \frac{1}{2} (| \textbf{a} ( \textbf{p} ) + \textbf{b}( \textbf{p}) | \pm
| \textbf{a} ( \textbf{p} ) - \textbf{b} ( \textbf{p})|)
\:.
\label{eq:Eq30}
\end{equation}
In particular, it is evident that there are two zero eigenvalues at the momenta at which
\begin{equation}
\textbf{a} ( \textbf{p} ) \cdot \textbf{b} ( \textbf{p}) = 0
\:.
\label{eq:Eq31}
\end{equation}
Since this is a single equation for three components of the momentum, the solutions, when they exist, will form a surface in the momentum space.
Multiplying the four eigenvalues $E (\textbf{p})$ yields $\det[ H_{ef}(\textbf{p}) ] = (\textbf{a} ( \textbf{p} ) \cdot \textbf{b} ( \textbf{p}))^2$. The last equation is therefore precisely the condition for the vanishing of the Pfaffian \cite{agterberg} of the effective Hamiltonian. The relation between our electric and magnetic fields $\textbf{a}(\textbf{p})$ and $\textbf{b}(\textbf{p})$ and the coefficients of the canonical representation of the effective Hamiltonian, which describe the emergence of the BF surface in terms of the ``pseudomagnetic'' field of Refs. \cite{agterberg, brydon}, can be found in Appendix~\ref{ap:Ap3} (Eqs.~(C26)--(C27)).
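Both the spectrum in Eq.~\eqref{eq:Eq30} and the determinant identity can be confirmed directly; a small numerical check in Python (numpy, with the representation of $N_k$ and $J_k$ defined above, and with randomly chosen fields as an illustration) is:

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def H_ef(a, b):
    """H_ef = sum_k (a_k N_k + b_k J_k): Hermitian and purely imaginary."""
    H = np.zeros((4, 4), dtype=complex)
    H[0, 1:] = -1j * a
    H[1:, 0] = 1j * a
    H[1:, 1:] = -1j * np.einsum('ijk,k->ij', eps, b)
    return H

rng = np.random.default_rng(1)
a, b = rng.normal(size=3), rng.normal(size=3)
H = H_ef(a, b)

# spectrum: +-(|a+b| +- |a-b|)/2
m = 0.5 * (np.linalg.norm(a + b) + np.linalg.norm(a - b))
xi = 0.5 * (np.linalg.norm(a + b) - np.linalg.norm(a - b))
expected = np.sort([m, -m, xi, -xi])
assert np.allclose(np.sort(np.linalg.eigvalsh(H)), expected)

# det H_ef = (a.b)^2: the Pfaffian of -iH is (up to sign) a.b
assert np.isclose(np.linalg.det(H).real, (a @ b) ** 2)
print("spectrum and Pfaffian condition verified")
```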
\section{Zero modes at the BF surface}
\label{sec:Sec4}
We will also need the explicit form of the eigenstates of $H_{ef} ( \textbf{p} ) $ with zero energy, measured of course from the chemical potential. The eigenvalue equation is then
\begin{equation}
H_{ef} ( \textbf{p} ) \Psi ( \textbf{p} ) = 0
\:,
\end{equation}
where $\Psi = (v_0, v_1, v_2, v_3)^{\rm T}$, and the ubiquitous momentum dependence of all variables is suppressed for legibility.
The eigenvalue equation can then be compactly written in the vector notation
\begin{equation}
\textbf{a} \cdot \textbf{v} =0
\:,
\end{equation}
\begin{equation}
v_0 \textbf{a} + \textbf{b} \times \textbf{v} =0
\:,
\label{eq:Eq34}
\end{equation}
with $\textbf{v} = (v_1, v_2, v_3)$. Assume first that $v_0 \neq 0$. Taking the scalar product of Eq.~\eqref{eq:Eq34} with $\textbf{b}$ gives
\begin{equation}
v_0 \, \textbf{b} \cdot \textbf{a} + \textbf{b}\cdot (\textbf{b} \times \textbf{v}) =0,
\end{equation}
and since $\textbf{b}\cdot (\textbf{b} \times \textbf{v})$ vanishes identically, $\textbf{b}\cdot \textbf{a}=0$, as we already found. In this case then $\textbf{v} \sim \textbf{b} \times \textbf{a}$. When normalized, the first zero-energy solution may be taken to be
\begin{equation}
\Psi_t = \frac{1}{\sqrt{ 1+ \textbf{v}^2} } (1, \textbf{v})^{\rm T},
\end{equation}
where $\textbf{v} = (\textbf{b}\times \textbf{a}) / \textbf{b}^2$. The second, orthogonal solution has $v_0=0$: in this case $\textbf{v}$ needs to be orthogonal to $\textbf{a}$ and parallel to $\textbf{b}$, which again requires that the vectors $\textbf{a}$ and $\textbf{b}$ are mutually orthogonal. Therefore $\textbf{v} \sim \textbf{b}$, and the normalized zero-energy solution is
\begin{equation}
\Psi_s = (0, \textbf {b}/|\textbf{b}| )^{\rm T}.
\end{equation}
Both solutions are manifestly real, and $\Psi_t ^\dagger \Psi_s =0$. One can rotate them into a pair of complex conjugate zero-energy solutions
\begin{equation}
\Psi_{\pm} = \frac{1}{\sqrt{2}} ( \Psi_t \pm i \Psi_s)
\:,
\label{eq:Eq38}
\end{equation}
which satisfy $\Psi_+ = \mathcal{A} \Psi_-$, since $\mathcal{A} = K$ in the real basis we are assuming.
We explain the motivation behind the labels ``t'' and ``s'' in the two basic zero-energy solutions next.
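The explicit zero-mode solutions can be verified directly; a minimal numerical sketch in Python (numpy; illustrative vectors $\textbf{a}$ and $\textbf{b}$ are made mutually orthogonal by hand, as the BF condition requires):

```python
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -1.0, 2.0])
b -= (b @ a) * a / (a @ a)          # enforce a.b = 0, the BF-surface condition

H = np.zeros((4, 4), dtype=complex)  # H_ef = a.N + b.J
H[0, 1:] = -1j * a
H[1:, 0] = 1j * a
H[1:, 1:] = -1j * np.einsum('ijk,k->ij', eps, b)

v = np.cross(b, a) / (b @ b)
psi_t = np.concatenate(([1.0], v)) / np.sqrt(1 + v @ v)
psi_s = np.concatenate(([0.0], b / np.linalg.norm(b)))

assert np.allclose(H @ psi_t, 0)        # "timelike" zero mode
assert np.allclose(H @ psi_s, 0)        # "spacelike" zero mode
assert np.isclose(psi_t @ psi_s, 0)     # mutually orthogonal

psi_p = (psi_t + 1j * psi_s) / np.sqrt(2)
psi_m = (psi_t - 1j * psi_s) / np.sqrt(2)
assert np.allclose(psi_p, np.conj(psi_m))   # Psi_+ = A Psi_-, with A = K
print("zero-energy solutions verified")
```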
\section{Relativistic analogy to Lorentz force equation}
\label{sec:Sec5}
There exists an instructive analogy between our time-dependent Schr{\"o}dinger equation at low energies and the covariant form of Newton's second law with the Lorentz force for a charged particle in an electromagnetic field. The time-dependent Schr{\"o}dinger equation for the effective Hamiltonian is
\begin{equation}
\frac{d}{dt} \Psi = F \Psi,
\end{equation}
once one recalls that $H_{ef}= i F$, with $F=F^{\mu \nu}$ as the real antisymmetric electromagnetic tensor. Newton's second law in the electromagnetic field, on the other hand, in the covariant formulation takes the form
\begin{equation}
m \frac{d}{d\tau} V = F G V,
\end{equation}
where $G=G_{\mu \nu}= \diag(1,-1,-1,-1)$ is Minkowski's metric tensor, $V=V^{\mu} = \gamma(v) (c, \textbf{v})^{\rm T}$ is the velocity four-vector, $c$ the velocity of light, $\textbf{v}$ the velocity three-vector, and $\gamma(v) =1/\sqrt{ 1- (v/c)^2 }$. $\tau$ is the proper time, and $m$ the rest mass of the particle. The velocity four-vector has the fixed positive norm with respect to the Minkowski metric: \cite{rindler}
\begin{equation}
V^\mu V_\mu = V^{\rm T} G V = c^2.
\end{equation}
The presence of Minkowski's metric tensor $G$ in the Lorentz equation, of course, makes it decidedly not a Schr{\"o}dinger equation; the Lorentz group is not $SO(4)$ but $SO(1,3)$, which is not compact, and its finite-dimensional representations are consequently not unitary.\cite{georgi} Multiplying both sides of the Lorentz equation by the imaginary unit will fail to make the matrix $i F G$, which appears in place of a Hamiltonian, Hermitian, for example. Nevertheless, the solutions of the Schr{\"o}dinger equation for which $F \Psi =0$ do have a classical analog: they correspond to the four-velocity $V$ for which the forces from the electric and magnetic fields precisely cancel. Obviously this is possible only at the points in space where the electric and the magnetic fields are mutually orthogonal, and the unique three-velocity, of the right magnitude and right direction, is orthogonal to both. Apart from our normalization with respect to the Euclidean and not Minkowski's metric, the zero-energy solution $\Psi_t$ is precisely such a four-vector, with the velocity of light being simply unity. The index ``t'' in this solution was chosen to suggest a ``timelike'' four-vector, which has a positive Minkowski norm for velocities below the velocity of light, as the physical velocity four-vector by its definition must.
The second real solution we found, on the other hand, does not correspond to a physical velocity in our analogy, since the form of $\Psi_s$ is ``spacelike'', i.e. with a negative Minkowski norm. As a physical solution for the four-velocity in the classical Lorentz equation it is thus unacceptable. But as a solution of the Schr{\"o}dinger equation it is perfectly regular, and it can be used in a linear combination with the timelike solution to form a pair of complex-conjugate zero modes. It is the fact that the positive-norm quantum state $\Psi$ can be complex, whereas the positive-norm four-velocity $V$ can only be real, that leads to an additional zero mode in the quantum case, relative to the closely related but not entirely equivalent Newton equation with the Lorentz electromagnetic force.
\begin{figure*}
\includegraphics[width=1.5\columnwidth]{Figure1.png}
\caption{Hoppings on the Lieb lattice as defined by the Hamiltonian in Eq.~\eqref{eq:Lieb-Ham}. The fermions hop with amplitude $+i$ along the direction of the arrow, and with the amplitude $-i$ in the direction opposing the arrow on a link between two sites. Pink lines connect nearest neighbors and correspond to hopping of magnitude $t$, and yellow lines connect next-nearest neighbors with hopping of magnitude $\chi$. The two-body repulsion is between the fermions residing on the nearest-neighboring sites. Panel (a) shows the full three dimensional picture of the Lieb lattice, while panel (b) shows the projection of the Lieb lattice in the $xy$-plane.}
\label{Fig1}
\end{figure*}
\section{Time reversal preserved}
\label{sec:Sec6}
When the $H_{\rm BdG}$ preserves not only inversion but time-reversal symmetry as well, there cannot be a Bogoliubov-Fermi surface of zero modes. The elimination of the heavy modes will in this case produce an effective Hamiltonian which commutes with an operator that represents the combined operation of $\mathcal{P}\mathcal{T}$, i.e. with an antiunitary operator that squares to $-1$. At the level of $H_{ef}$ let us call this operator $\mathcal{B} = W K$, with a unitary representation-dependent matrix $W$, which is four-dimensional if we focus on the physically most pertinent case of $M=2$. The operation $\mathcal{B}$ leaves the momentum invariant. To recognize the matrix $W$ it is useful to write the explicit form of the matrices $R_{k,\pm}$ in our representation:
\begin{equation}
R_{1,+} = 1 \otimes \frac{\sigma_2}{2} \:,
\label{eq:Eq42}
\end{equation}
\begin{equation}
R_{2,+} = \sigma_2 \otimes \frac{\sigma_3}{2}\:,
\end{equation}
\begin{equation}
R_{3,+} = \sigma_2 \otimes \frac{\sigma_1}{2}\:,
\end{equation}
and similarly for $R_{k,-}$:
\begin{equation}
R_{1,-} = -\frac{\sigma_3}{2} \otimes \sigma_2 \:,
\end{equation}
\begin{equation}
R_{2,-} = -\frac{\sigma_2}{2} \otimes 1\:,
\end{equation}
\begin{equation}
R_{3,-} = -\frac{\sigma_1}{2} \otimes \sigma_2 \:.
\label{eq:Eq47}
\end{equation}
Consider now $W = \sigma_2 \otimes X $, with $X$ a Pauli matrix. The matrix $X$ only needs to be real, so that $\mathcal{B}^2=-1$, as required. Direct inspection then shows that for any such $X$ the operator $\mathcal{B}$ commutes with two, and anticommutes with the remaining four, of the six matrices $R_{i,\pm}$. Furthermore, three of the latter four matrices are either all $R_{i,+}$ or all $R_{i,-}$. For example, for $X=\sigma_1$, $\mathcal{B}$ commutes only with $R_{1,+}$ and $R_{2,+}$. This means that when time-reversal symmetry is present, first, it must be that
\begin{equation}
\textbf{a} ( \textbf{p} ) + r \textbf{b} ( \textbf{p}) \equiv 0
\:,
\label{eq:Eq48}
\end{equation}
for either $r=+1$ or $r=-1$. In the relativistic analogy this means that the electric and magnetic fields are either parallel or antiparallel everywhere, and therefore the Lorentz force can never vanish, unless both fields vanish. Second, since for one of the components we also have that
\begin{equation}
a_k ( \textbf{p} ) - r b_k( \textbf{p}) =0
\:,
\label{eq:Eq49}
\end{equation}
there are only two finite terms in the representation in Eq.~\eqref{eq:Eq29}. For the specific choice in the example above the spectrum would therefore be
\begin{equation}
E( \textbf{p} ) = \pm 2 ( a_1 ( \textbf{p} ) ^2 + a_2 ( \textbf{p} ) ^2 )^{1/2}.
\end{equation}
In general, therefore, $E( \textbf{p} )=0$ leads to two conditions to be satisfied by the three components of the momentum, i.e. a line in the momentum space. \cite{boettcher2}
The algebra involved in the above argument becomes particularly transparent in the canonical representation of the generators $R_{k, \pm}$ (Appendix~\ref{ap:Ap3}).
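The counting used in the argument above can also be checked by brute force. A Python sketch (numpy) for the representation in Eqs.~\eqref{eq:Eq42}--\eqref{eq:Eq47}, with the illustrative choice $X=\sigma_1$:

```python
import numpy as np

s = [np.eye(2), np.array([[0, 1], [1, 0]]),
     np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]

# R_{k,+} and R_{k,-} exactly as in the text
Rp = [np.kron(s[0], s[2]) / 2, np.kron(s[2], s[3]) / 2, np.kron(s[2], s[1]) / 2]
Rm = [-np.kron(s[3], s[2]) / 2, -np.kron(s[2], s[0]) / 2, -np.kron(s[1], s[2]) / 2]

def comm(A, B):
    return A @ B - B @ A

# sample of the SO(3) x SO(3) algebra
assert np.allclose(comm(Rp[0], Rp[1]), 1j * Rp[2])
assert np.allclose(comm(Rm[0], Rm[1]), 1j * Rm[2])
assert np.allclose(comm(Rp[0], Rm[1]), 0)

# B = W K with W = sigma_2 x sigma_1; B^2 = W conj(W) = -1
W = np.kron(s[2], s[1])
assert np.allclose(W @ W.conj(), -np.eye(4))

def B_commutes(M):
    # [B, M] = 0 iff W conj(M) W^{-1} = M (here W^{-1} = W)
    return np.allclose(W @ M.conj() @ W, M)

assert [B_commutes(M) for M in Rp] == [True, True, False]
assert [B_commutes(M) for M in Rm] == [False, False, False]
print("B commutes only with R_{1,+} and R_{2,+}")
```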
\section{Lattice Hamiltonian and interactions}
\label{sec:Sec7}
We now define a lattice single-particle Hamiltonian which provides a minimal realization of the above $H_{ef} (\textbf{p})$ in Eq.~\eqref{eq:Eq20} for spin-1/2 electrons. The only requirement is that it is a four-dimensional matrix Hamiltonian that admits an antiunitary operator with positive square that anticommutes with it.
With this in mind we consider the Lieb lattice in three dimensions: the unit cell consists of four sites, one at the sites of the primitive cubic lattice at positions $\textbf{R}= \sum_{i} n_i \textbf{e}_i$, with $n_i$ integers and $\textbf{e}_i \cdot \textbf{e}_j =\delta_{ij}$, and the other three at the centers of the three links in orthogonal directions that connect the sites of the cubic lattice, at positions $\textbf{R} + (\textbf{e}_i /2)$, with $i=1,2,3$. The Hamiltonian is then defined as:
\begin{widetext}
\begin{eqnarray}
\label{eq:Lieb-Ham}
H_0 = -i t \sum _{ \textbf{R}, k=1,2,3} c^\dagger ( \textbf{R} ) c ( \textbf{R} \pm \frac{\textbf{e}_k }{2} )
+ \big[ i \chi \sum _{\textbf{R}, s=\pm 1 } (s) c ^\dagger (\textbf{R} + \frac{ \textbf{e}_1}{2} ) [ c ( \textbf{R} + \textbf{e}_1 + s \frac{\textbf{e}_2}{2} ) \\ \nonumber
- c ( \textbf{R} + s \frac{\textbf{e}_2}{2} ) ]
+ (1\rightarrow 2, 2\rightarrow 3) + (2 \rightarrow 3, 3\rightarrow 1) \big] + {\rm h.c.}
\end{eqnarray}
\end{widetext}
with parameters $t$ and $\chi$ real, so that the hoppings are all purely imaginary. $c^\dagger (\textbf{R})$ is the usual fermionic creation operator on site $\textbf{R}$. (See Figure~\ref{Fig1}.) The phases of the hopping terms are chosen so that in momentum space the Hamiltonian becomes
\begin{equation}
H_0 = \sum_{\textbf{p} } \Psi^\dagger (\textbf{p}) H_{ef} (\textbf{p}) \Psi ( \textbf{p}),
\end{equation}
with
\begin{equation}
\Psi (\textbf{R})= \big[c(\textbf{R}), c( \textbf{R} + \frac{\textbf {e}_1}{2}), c( \textbf{R} + \frac{\textbf {e}_2 }{2}), c( \textbf{R} + \frac{\textbf {e}_3}{2}) \big]^{\rm T},
\end{equation}
and $H_{ef}(\textbf{p})$ precisely as in Eq.~\eqref{eq:Eq20}, with
\begin{equation}
a_i (\textbf{p}) = 2 t \cos( \frac{p_i }{2} ),
\end{equation}
and
\begin{equation}
b_i (\textbf{p}) = 4\chi \sin( \frac{p_j}{2}) \sin( \frac{p_k }{2} ),
\end{equation}
with $ i \neq j$, $i \neq k$, $j \neq k$ in the last equation. The BF surface is now determined by the equation
\begin{equation}
\big[ \prod_{i=1,2,3} \sin \big( \frac{p_i}{2} \big) \big] \sum_{i=1,2,3} \cot \big( \frac{p_i}{2} \big) =0,
\end{equation}
which is independent of the hopping parameters $t$ and $\chi$ as long as they are both finite. The BF surface is depicted in Fig.~\ref{Fig2}. Note that whereas the three axes belong to the BF surface, poles of the cotangents remove the planes $p_i =0$ from it.
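The BF-surface condition for the lattice model can be checked numerically; the Python sketch below (numpy; $t=1$ and $\chi=0.5$ are illustrative values, since the surface itself is parameter-independent) verifies that a momentum solving $\sum_i \cot(p_i/2) = 0$ carries two zero modes, while a generic momentum is gapped:

```python
import numpy as np

t, chi = 1.0, 0.5   # illustrative; any finite values work

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def fields(p):
    a = 2 * t * np.cos(p / 2)
    s = np.sin(p / 2)
    b = 4 * chi * np.array([s[1] * s[2], s[2] * s[0], s[0] * s[1]])
    return a, b

def H_ef(p):
    a, b = fields(p)
    H = np.zeros((4, 4), dtype=complex)
    H[0, 1:] = -1j * a
    H[1:, 0] = 1j * a
    H[1:, 1:] = -1j * np.einsum('ijk,k->ij', eps, b)
    return H

# point on the BF surface: p1 = p2 = pi/2, p3 solving sum_i cot(p_i/2) = 0,
# i.e. cot(p3/2) = -2, so p3 = 2 arctan(-1/2)
p_on = np.array([np.pi / 2, np.pi / 2, 2 * np.arctan(-0.5)])
a, b = fields(p_on)
assert abs(a @ b) < 1e-12                      # Pfaffian condition
E = np.sort(np.linalg.eigvalsh(H_ef(p_on)))
assert np.allclose(E[1:3], 0.0, atol=1e-12)    # two zero modes

# a generic momentum off the surface is gapped
E_off = np.linalg.eigvalsh(H_ef(np.array([1.0, 2.0, 2.5])))
assert np.min(np.abs(E_off)) > 1e-3
print("two zero modes on the BF surface; gapped away from it")
```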
\begin{figure}
\includegraphics[width=\columnwidth]{Figure2BF}
\caption{BF surface of zero-energy states of the Hamiltonian $H_0$ in Eq.~\eqref{eq:Lieb-Ham} in the first Brillouin zone.}
\label{Fig2}
\end{figure}
One may now also define the two-body interaction term as
\begin{equation}
H_{int} = V \sum_{\textbf{R}, i} n ( \textbf{R} ) n(\textbf{R} \pm \frac{\textbf{e}_i}{2} ),
\end{equation}
with $n(\textbf{R} ) = c^\dagger (\textbf{R}) c(\textbf{R})$ as the usual particle number operator, which describes repulsion between nearest neighbors on the Lieb lattice ($V > 0$). The full interacting lattice model is then
\begin{equation}
H= H_0 + H_{int}.
\end{equation}
We assume half filling, which corresponds to the spectral symmetry of the BdG Hamiltonian between positive and negative states. Besides possessing translational symmetry, the Hamiltonian remains invariant under $2\pi/3$ rotations around the $(1,1,1)$ diagonal and under inversion around any site $\textbf{R}$.
\section{Mean field theory}
\label{sec:Sec8}
To study the effects of two-body interactions we first rewrite the interaction Hamiltonian as
\begin{equation}
H_{int} = \frac{V}{4} \sum_{\textbf{R}, i} \big[\big(n ( \textbf{R} ) + n(\textbf{R} \pm \frac{\textbf{e}_i}{2} ) \big)^2- \big( n ( \textbf{R} )- n(\textbf{R} \pm \frac{\textbf{e}_i}{2} ) \big)^2\big].
\end{equation}
It may then be decoupled with two Hartree variables (in the sense of a Hubbard-Stratonovich transformation)
\begin{eqnarray}
H_{int} = \frac{1}{V} \sum_{\textbf{R}, i} \big[\zeta ^2 (\textbf{R}, \textbf{R} \pm \frac{\textbf{e}_i}{2}) - \mu^2 (\textbf{R}, \textbf{R} \pm \frac{\textbf{e}_i}{2})\big] \\ \nonumber + \sum_{\textbf{R}, i} \{ \zeta (\textbf{R}, \textbf{R} \pm \frac{\textbf{e}_i}{2} ) \big[ n ( \textbf{R} )- n(\textbf{R} \pm \frac{\textbf{e}_i}{2} ) \big] \\ \nonumber
+ \mu (\textbf{R}, \textbf{R} \pm \frac{\textbf{e}_i}{2} ) \big[ n (\textbf{R} ) + n (\textbf{R} \pm \frac{\textbf{e}_i}{2} ) \big] \}.
\end{eqnarray}
Anticipating the energetically preferable uniform mean-field configuration, we take
\begin{equation}
\zeta(\textbf{R}, \textbf{R} \pm \textbf{e}_i/2)=\langle n (\textbf{R} ) - n (\textbf{R} \pm \textbf{e}_i/2 )\rangle =\zeta,
\end{equation}
and
\begin{equation}
\mu (\textbf{R}, \textbf{R} \pm \textbf{e}_i/2)=\langle n (\textbf{R} ) + n (\textbf{R} \pm \textbf{e}_i/2 )\rangle =\mu,
\end{equation}
and both constant. The mean-field interaction term then becomes
\begin{eqnarray}
H_{int, mf} &=& \frac{6N}{V} (\zeta ^2 - \mu^2 ) \\ \nonumber
&+& 2 \sum_{\textbf{R}, i} \big[ (\zeta + \mu) n (\textbf{R} ) + ( \mu - \zeta) n (\textbf{R} + \frac{\textbf{e}_i}{2} ) \big]
\:,
\end{eqnarray}
with $N$ as the number of primitive lattice sites. In the momentum space the full mean-field Hamiltonian $H_{mf} = H_0 + H_{int, mf}$
can therefore be arranged into
\begin{eqnarray}
H_{mf} = \sum_{\textbf{p}} &&\big\{\Psi^\dagger (\textbf{p}) [ H_{ef} (\textbf{p}) + u \mathbb{1} + v G ] \Psi (\textbf{p})\\ \nonumber
& +& \frac{v^2 - u^2}{2V} \big\},
\end{eqnarray}
with the matrix $G$ as the previously encountered Minkowski metric matrix, and the two new Hubbard-Stratonovich variables being
$u/4 = \mu + (\zeta/2) $ and $v/4 =\zeta + (\mu/2)$.
Let us define the two ``critical'' eigenvalues of $H_{ef}$ which vanish at the BF surface as $\pm \xi(\textbf{p})$, with
\begin{equation}
\xi(\textbf{p}) = \frac{1}{2} (| \textbf{a} ( \textbf{p} ) + \textbf{b} ( \textbf{p}) | -
| \textbf{a} ( \textbf{p} ) - \textbf{b} ( \textbf{p})|)
\:,
\label{eq:Eq63}
\end{equation}
and the remaining two ``massive'' eigenvalues, which are finite everywhere, as $\pm m ( \textbf{p} )$, with
\begin{equation}
m (\textbf{p}) = \frac{1}{2} (| \textbf{a} ( \textbf{p} ) + \textbf{b} ( \textbf{p}) | +
| \textbf{a} ( \textbf{p} ) - \textbf{b} ( \textbf{p})|).
\end{equation}
There exists a unitary transformation $U_{ef} (\textbf{p})$ that diagonalizes $H_{ef} ( \textbf{p} ) $, so that
\begin{eqnarray}
U_{ef}(\textbf{p}) H_{ef} (\textbf{p}) U_{ef}^\dagger (\textbf{p}) &=&
\begin{pmatrix} m (\textbf{p}) \sigma_3 & 0 \\
0 & \xi (\textbf{p}) \sigma_3
\end{pmatrix}
\:.
\end{eqnarray}
The two-component fermions that correspond to the massive and critical states are then given by
$U_{ef} (\textbf{p}) \Psi (\textbf{p}) = (\Psi_m (\textbf{p}), \Psi_{\xi} (\textbf{p}))^T $. The mean-field Hamiltonian in terms of the critical
and massive fermions now becomes
\begin{eqnarray}
H_{mf}= \sum_{\textbf{p}} [ \Psi^\dagger _m (\textbf{p})( m (\textbf{p}) \sigma_3 + u \mathbb{1} + v X_m (\textbf{p}) ) \Psi_m (\textbf{p}) \\ \nonumber
+ \frac{ v^2 - u^2 }{2V} + \Psi^\dagger _\xi (\textbf{p}) (\xi (\textbf{p}) \sigma_3 + u \mathbb{1} + v X_\xi (\textbf{p}) ) \Psi_\xi (\textbf{p}) \\ \nonumber
+ v ( \Psi_m ^\dagger (\textbf{p}) X_{m \xi} (\textbf{p}) \Psi_\xi (\textbf{p}) + \Psi^\dagger _\xi (\textbf{p}) X_{m \xi}^\dagger (\textbf{p}) \Psi_m (\textbf{p}) )],
\end{eqnarray}
where the two-dimensional matrices $X$ are defined by
\begin{eqnarray}
U_{ef}(\textbf{p}) G U_{ef}^\dagger(\textbf{p}) &=&
%
\begin{pmatrix} X_m (\textbf{p}) & X_{m \xi} (\textbf{p}) \\
X_{m \xi}^\dagger (\textbf{p}) & X_\xi (\textbf{p})
\end{pmatrix}
%
\:.
\end{eqnarray}
The imaginary-time mean-field quantum mechanical action at finite temperatures is then
\begin{eqnarray}
S = \int_0 ^\beta d\tau [ \sum_{ \textbf{p}, r=m,\xi } \Psi_r ^\dagger (\textbf{p}, \tau) \partial_\tau \Psi_r (\textbf{p}, \tau) + H_{mf} ]
\end{eqnarray}
($\beta=1/k_B T$) in terms of the usual Grassmann variables for the massive and critical fermions. \cite{negele} Minimization of the free energy, which is proportional to the logarithm of the usual path integral over the Grassmann and Hubbard-Stratonovich variables, determines the saddle-point values of $u$ and $v$, which then equal their expectation values in the ground state: $u=\langle \sum \Psi^\dagger(\textbf{p}) \Psi(\textbf{p}) \rangle$ is the shift in the chemical potential, and $v=\langle \sum \Psi^\dagger(\textbf{p}) G \Psi(\textbf{p}) \rangle$ is the ``staggered'' chemical potential \cite{herbut2006, hjr}, i.e. the imbalance between the average occupations of the sites on the corners $\textbf{R}$ and the sites on the links $\textbf{R} \pm \textbf{e}_i / 2$. If either $u$ or $v$ is finite, the inversion symmetry is broken, since $H_{mf}$ would acquire real terms and so cease to anticommute with the operator $\mathcal{A}$.
We now integrate over fermions to get the remaining action $S$ in terms of the variables $u$ and $v$ only, and expand in powers of both variables to examine the stability of the inversion-symmetric BF surface.
The integration over the massive fermions, of course, can only produce infrared-finite terms in the expansion of such $S$ in powers of $u$ and $v$.\cite{herbutbook} In particular, the terms $\sim uv $ and $\sim u^2$ produced by this integration vanish exactly at $T=0$. The same absence of $\sim uv $ and $\sim u^2$ terms is also found in the integration over the more important critical modes, as we explain below.
The integration over the critical modes yields the following term in the action $S$, quadratic in $u$ and $v$:
\begin{equation}
\frac{k_B T}{2} \sum_{\omega_n, \textbf{p}} \Tr \big[ \frac{i \omega_n + \xi \sigma_3}{ \omega_n ^2 +\xi ^2} ( u + v X_\xi) \frac{i \omega_n + \xi \sigma_3}{ \omega_n ^2 +\xi ^2} ( u + v X_\xi) \big],
\end{equation}
where $\xi = \xi(\textbf{p} )$. This can be rearranged into
\begin{eqnarray}
\frac{k_B T}{2} \sum_{\omega_n, \textbf{p}} &\Tr& \big[ \frac{-\omega_n ^2 + \xi^2 }{ (\omega_n ^2 +\xi ^2)^2 } u ( u + 2 v X_\xi) \\ \nonumber
&+& v^2 \frac{i \omega_n + \xi \sigma_3}{ \omega_n ^2 +\xi ^2} X_\xi \frac{i \omega_n + \xi \sigma_3}{ \omega_n ^2 +\xi ^2} X_{\xi } \big]
\:.
\end{eqnarray}
The first term ($\sim u^2$ and $\sim uv$) vanishes at $T=0$ due to the exact property of the integral over frequencies
\begin{equation}
\int_{-\infty} ^{\infty} d\omega \frac{\xi^2 -\omega ^2}{ ( \omega ^2 +\xi ^2 )^2 } =0
\:,
\label{eq:Eq71}
\end{equation}
whereas it would be finite at $T\neq 0$. It cannot therefore produce a $T=0$ instability of the BF surface at infinitesimal coupling by itself. The remaining second term ($\sim v^2$), on the other hand, upon expanding
\begin{equation}
X_\xi (\textbf{p}) = \sum_{\mu = 0}^3 g_\mu (\textbf{p}) \sigma_\mu
\end{equation}
becomes
\begin{widetext}
\begin{equation}
v^2 k_B T \sum_{\omega_n, \textbf{p}} \frac{ \xi^2 ( g_0^2 (\textbf{p}) + g_3 ^2 (\textbf{p}) - g_1 ^2 (\textbf{p}) - g_2 ^2 (\textbf{p}) )
-\omega_n ^2 g_\mu ^2 (\textbf{p}) }{ (\omega_n ^2 +\xi ^2 (\textbf{p}) ) ^2 }.
\end{equation}
\end{widetext}
At $T=0$, using Eq.~\eqref{eq:Eq71}, the last expression can be written as
\begin{eqnarray}
\label{eq:Eq74}
- v^2 \int_{-\infty} ^{+ \infty} \frac{d\omega}{2\pi} \sum_{\textbf{p}} \frac{ g_1 ^2 (\textbf{p}) + g_2 ^2 (\textbf{p}) }{\omega^2 +\xi^2 (\textbf{p})}=
\\ \nonumber
- v^2 \int _0 ^\Omega \frac{d\xi}{|\xi|} \mathcal{N} (\xi)
\:,
\end{eqnarray}
where
\begin{equation}
\mathcal{N} (\xi) = \sum_{ \textbf{p} } \delta( \xi - \xi(\textbf{p} )) \big(g_1 ^2 (\textbf{p}) + g_2 ^2 (\textbf{p}) \big)
\end{equation}
and $\Omega$ is a UV cutoff. The integral is logarithmically divergent if $\mathcal{N} (0) $ is finite, i.e. if the expansion coefficients $g_{1,2} ( \textbf{p} )$ of $X_{\xi} (\textbf{p}) $ have finite support on the BF surface. The sign of the integral implies that the coefficient of the quadratic term $\sim v^2 $ is in that case always negative, which signals an instability of the inversion-symmetric BF surface at $T=0$. Computing the energy of the ground state with a finite uniform $v$ and minimizing it then yields the characteristic form in the limit $V \mathcal{N}(0) \rightarrow 0$:
\begin{equation}
v = \Omega e^{-1/(2 V\mathcal{N}(0)) }.
\end{equation}
The critical temperature below which $v\neq 0$ exhibits the same essential singularity in the interaction $V$ that is common to all weak-coupling instabilities.
Since the integration over the fermions at $T=0$ does not contribute to the coefficients of $\sim u^2$ and $\sim uv$ terms in the action, the saddle-point value of $u$ vanishes at $T=0$.
Finally, it is easy to show that although the integration over massive states modifies the propagator for the critical fermions to the order of $v^2$, this does not alter the log-divergent coefficient of the quadratic term above.
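The vanishing of the frequency integral in Eq.~\eqref{eq:Eq71}, which protects the BF surface from the $\sim u^2$ and $\sim uv$ terms at $T=0$, follows from the exact antiderivative $\omega/(\omega^2+\xi^2)$; a short symbolic check in Python (sympy):

```python
import sympy as sp

w = sp.symbols('omega', real=True)
xi = sp.symbols('xi', positive=True)

integrand = (xi**2 - w**2) / (w**2 + xi**2)**2
antideriv = w / (w**2 + xi**2)

# d/dw [w/(w^2 + xi^2)] = (xi^2 - w^2)/(w^2 + xi^2)^2
assert sp.simplify(sp.diff(antideriv, w) - integrand) == 0

# the antiderivative vanishes at both ends of the real line,
# so the integral in Eq. (71) is exactly zero
assert sp.limit(antideriv, w, sp.oo) == 0
assert sp.limit(antideriv, w, -sp.oo) == 0
print("frequency integral of Eq. (71) vanishes")
```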
\section{ Fate of BF surface}
\label{sec:Sec9}
The lesson of the previous section is that the stability of the BF surface depends only on whether the matrix $G$ that couples the light fermions to the order parameter $v$, once projected onto the critical states, has finite off-diagonal elements for the momenta at the BF surface. For momenta at the BF surface $\xi(\textbf{p})=0$, and the matrix $X_{\xi}$ is then by definition
\begin{eqnarray}
X_{\xi=0} (\textbf{p}) &=&
\begin{pmatrix} \Psi_+ ^\dagger (\textbf{p}) G \Psi_+ (\textbf{p}) & \Psi_+ ^\dagger (\textbf{p}) G \Psi_- (\textbf{p}) \\
\Psi_- ^\dagger (\textbf{p}) G \Psi_+ (\textbf{p}) & \Psi_- ^\dagger (\textbf{p}) G \Psi_- (\textbf{p})
\end{pmatrix}
\:,
\end{eqnarray}
with the states $\Psi_{\pm} (\textbf{p})$ given by Eq.~\eqref{eq:Eq38}. This readily yields $g_k (\textbf{p}) =0$ for $k=2,3$ and
\begin{equation}
g_0 (\textbf{p}) = - \frac{\textbf{a} ^2 (\textbf{p})}{\textbf{a} ^2 (\textbf{p}) + \textbf{b} ^2 (\textbf{p})},
\end{equation}
\begin{equation}
g_1 (\textbf{p}) = \frac{\textbf{b} ^2 (\textbf{p})}{\textbf{a} ^2 (\textbf{p}) + \textbf{b} ^2 (\textbf{p})} .
\end{equation}
$g_1 (\textbf{p}) $ is finite everywhere on the BF surface, except at the three coordinate axes. The integral in Eq.~\eqref{eq:Eq74} is then indeed logarithmically divergent, and the inversion-symmetric BF surface is unstable at $T=0$ for an infinitesimal repulsive nearest-neighbor interaction $V$.
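The projected matrix elements quoted above are easy to confirm; a numerical sketch in Python (numpy), using the zero modes of Sec.~\ref{sec:Sec4} with an illustrative pair of orthogonal fields:

```python
import numpy as np

a = np.array([1.0, -2.0, 0.5])
b = np.array([2.0, 1.0, 1.0])
b -= (b @ a) * a / (a @ a)        # place the momentum on the BF surface: a.b = 0

v = np.cross(b, a) / (b @ b)
psi_t = np.concatenate(([1.0], v)) / np.sqrt(1 + v @ v)
psi_s = np.concatenate(([0.0], b / np.linalg.norm(b)))
psi_p = (psi_t + 1j * psi_s) / np.sqrt(2)
psi_m = (psi_t - 1j * psi_s) / np.sqrt(2)

G = np.diag([1.0, -1.0, -1.0, -1.0])
X = np.array([[psi_p.conj() @ G @ psi_p, psi_p.conj() @ G @ psi_m],
              [psi_m.conj() @ G @ psi_p, psi_m.conj() @ G @ psi_m]])

a2, b2 = a @ a, b @ b
g0, g1 = -a2 / (a2 + b2), b2 / (a2 + b2)
sigma1 = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(X, g0 * np.eye(2) + g1 * sigma1)   # g2 = g3 = 0
print("g0 = -a^2/(a^2+b^2) and g1 = b^2/(a^2+b^2) confirmed")
```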
We may now examine the resulting low-energy spectrum of the quasiparticles in the inversion-symmetry-broken state with $v>0$ and $u=0$. It is given by the two-dimensional mean-field Hamiltonian for the critical fermions near the BF surface
\begin{equation}
H_{mf, \xi} = \xi (\textbf{p}) \sigma_3 + v g_1 (\textbf{p}) \sigma_1 + v g_0 (\textbf{p}),
\end{equation}
with $g_k(\textbf{p})$, $k=0,1$ given above, and $\xi(\textbf{p})$ as in Eq.~\eqref{eq:Eq63}. Near the BF surface one can approximate
\begin{equation}
\xi (\textbf{p}) = \frac{\textbf{a}(\textbf{p})\cdot \textbf{b}(\textbf{p}) } { [ \textbf{a}(\textbf{p})^2 + \textbf{b}(\textbf{p})^2]^{1/2} } \{ 1+
\frac{[\textbf{a}(\textbf{p})\cdot \textbf{b}(\textbf{p})]^2 } {2 [ \textbf{a}(\textbf{p})^2 + \textbf{b}(\textbf{p})^2]^2 }+...\}.
\end{equation}
The spectrum of $H_{mf,\xi}$ is therefore
\begin{equation}
E (\textbf{p}) = \pm [\xi^2 (\textbf{p}) + v^2 g_1 ^2 (\textbf{p})]^{1/2} + v g_0(\textbf{p})\:.
\end{equation}
In particular, the location in the momentum space of the zero modes of the new spectrum is in general given by the solution of
\begin{equation}
\xi^2 (\textbf{p})= v^2 [g_0 ^2 (\textbf{p}) - g_1 ^2 (\textbf{p}) ]\:,
\end{equation}
which in the present case and with the order parameter $v$ small reduces to the simple condition
\begin{equation}
( \textbf{a}(\textbf{p}) \cdot \textbf{b}(\textbf{p}))^2 = v^2 ( \textbf{a}(\textbf{p})^2 - \textbf{b}(\textbf{p})^2).
\end{equation}
The left-hand side of the last equation vanishes at the original BF surface. The parts of the original BF surface where the right-hand side ($RHS= v^2(\textbf{a}^2 -\textbf{b}^2) $) of the equation is positive will thus split into two wings of the new surfaces of zero modes, which merge at the intersection of the original BF surface and the surface given by the zero value of the right-hand side of the equation ($RHS=0$). So if such an intersection of the two surfaces exists, a part of the original BF surface will become gapped, and its complement will effectively remain gapless, i.e. transform into a new surface. If there is no such intersection of the two surfaces, on the other hand, the original BF surface is either completely gapped out (if $RHS<0$ everywhere on it), or split into two new separate nearby surfaces (if $RHS >0$ everywhere on it).
In our lattice model, since $\textbf{a} \sim t $ vanishes in the corners of the Brillouin zone, and $\textbf{b}\sim \chi $ vanishes at the three axes, the surface $RHS=0$ always intersects the original BF surface, and thus gaps out only a part of it. The size of the remaining surface when $v\neq 0$ depends on the ratio $\chi/t$: when $\chi/t\rightarrow 0$, the gapped part vanishes, whereas as $ \chi/t \rightarrow \infty$ only the parts of the BF surface around the axes survive, and the gap is finite almost everywhere. A typical result is depicted in Fig.~\ref{Fig3}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figure3BF}
\caption{New BF surface (yellow) after the inversion is spontaneously broken. Parts of the original BF surface (blue) outside of the new BF surface are gapped out, whereas those inside are split into the new surface, as described in the text. The values of the parameters are here chosen to be $t=1$, $\chi=0.5$, and $ v=0.6$.}
\label{Fig3}
\end{figure}
\section{Summary and discussion}
\label{sec:Sec10}
We have discussed the formation of the BF surface in the multiband superconductors with inversion symmetry by pointing out the analogy with classical relativity, furnished by the $SO(4)$-generator form of the low-energy Hamiltonian which ensues when the time reversal is broken, either in the superconducting or the normal phase, or in both. In this analogy the zero-energy solutions of the BdG Hamiltonian correspond to four-velocities for which the classical Lorentz force in the fictitious corresponding electric and magnetic fields vanishes, and the BF surface is linked to the orthogonality of the electric and magnetic fields. The latter condition is found to be tantamount to the vanishing of the Pfaffian of the low-energy Hamiltonian. The relativistic analogy suggested a simple single-particle lattice model which falls into class D, that is, whose hopping Hamiltonian anticommutes with an antiunitary operator of positive square, the latter encoding the joint particle-hole and inversion symmetries of the superconducting state. We then added a two-body repulsive term between nearest neighbors on the lattice, to find that the inversion symmetry becomes spontaneously broken at $T=0$ for an infinitesimal such interaction. The BF surface of the noninteracting lattice model deforms and reduces in size as a result, but does not completely disappear.
The relativistic analogy offers maybe the simplest way to understand why a BF surface arises when the time reversal is broken: since the effective Hamiltonian is a four-dimensional $SO(4)$ generator which belongs to the $(1/2,1/2)$ representation equivalent to standard boosts and rotations in the Minkowski space,
the quasiparticle spectrum is a linear combination of two familiar spectra of spin-1/2 particles (Eq.~\eqref{eq:Eq30}). As such it yields a single zero-energy condition on the three momentum components, which, when satisfied, defines a surface in the momentum space. The preservation of the time reversal prevents this condition from being fulfilled, and instead leads to two conditions on the momenta at zero energy, i.e. to a line.
Following the same mode-elimination procedure of Ref. \onlinecite{link2} for the present inversion-symmetric case, outlined also here in {Appendix~\ref{ap:Ap3}}, one finds that at weak Cooper pairing BF surfaces will inevitably form around those points in the momentum space where the intraband pairing between the light states happens to vanish. The size of the BF surface is then $\sim \Delta^2 / E_0$, where $\Delta$ is the overall norm of the multi-component pairing order parameter, and $E_0$ the energy gap to the first higher energy level in the normal state, and thus typically small in the weak-coupling limit. In precise analogy to the case without inversion,\cite{link2} increasing the pairing order parameter initially inflates the BF surfaces, but only up to a point, beyond which it begins to reduce them, until they disappear via an example of a Lifshitz transition. \cite{lifshitz}
It was pointed out \cite{oh, tamura} that the inversion symmetry is in danger of being spontaneously broken by residual interaction effects, and the concomitant BF surface further reduced or gapped out. This ensues, however, only if the effective residual interactions between the low-energy quasiparticles with momenta near the BF surface are attractive in the particular inversion-symmetry-breaking channel, which seems difficult to ascertain without a specific model in mind. To that purpose we proposed a lattice model which is motivated by the phenomenon of the BF surface in the inversion-symmetric and
time-reversal-broken multiband superconductor; the only requirement on it is that it falls into the class D \cite{bzdusek}, as dictated by the symmetries of the superconducting problem under consideration. The model then features spinless fermions hopping on the three-dimensional Lieb lattice and repelling each other when found on nearest-neighboring sites. We show that this model indeed exhibits a surface of Weyl points, which spans the entire Brillouin zone, and serves therefore as a magnified version of a BF surface. Infinitesimal nearest-neighbor interaction leads however to spontaneous dynamical breaking of the D-class condition in the mean-field Hamiltonian, which should be interpreted as breaking of inversion in the superconducting problem. The non-interacting BF surface is found to be deformed and reduced by this mechanism, with its final size dependent on the model parameters.
The dynamical inversion symmetry breaking in the present lattice model is interesting from the point of view of the theory of quantum phase transitions in fermionic systems. At the level of the model, it is not really, as usual, a symmetry (a commuting linear operator) that becomes broken, but an ``antisymmetry" (an anticommuting, and even antiunitary operator) that does so. Other modifications of our lattice model with different two-body interaction terms, or disorder, may lead to further insights into this new phenomenon.
\section{Acknowledgments}
We are grateful to Igor Boettcher and Carsten Timm for useful comments on the manuscript. JML is supported by DFG grant No. LI 3628/1-1, and IFH by the NSERC of Canada.
\begin{appendix}
\section{CP symmetry of the BdG Hamiltonian}
\label{ap:Ap1}
Let us redefine the quantum-mechanical action for the Bogoliubov quasiparticles in the superconducting state:
\begin{equation}
S= k_B T \sum_{\omega_n, \textbf{p}} \Phi^\dagger (\omega_n, \textbf{p}) [-i\omega_n + H_{\rm{BdG}} (\textbf{p}) ] \Phi(\omega_n, \textbf{p}),
\label{eq:action_Eq1_app}
\end{equation}
where the Nambu spinor is now simply ${ \Phi (\omega_n, \textbf{p}) = \big(\psi(\omega_n,\textbf{p}) , \psi^* (-\omega_n, -\textbf{p} ) \big) ^{\rm T} }$, without the unitary part of the time reversal in the lower, hole component. In this representation the BdG Hamiltonian assumes the standard form: \cite{agterberg}
\begin{eqnarray}
H_{\rm BdG}(\textbf{p}) &=&
\begin{pmatrix} H(\textbf{p})-\mu & \Delta (\textbf{p}) \\
\Delta ^\dagger (\textbf{p}) & - \big[ H^{\rm T} (-\textbf{p})- \mu \big]
\end{pmatrix}
\:,
\label{eq:BdG-Ham-Eq2_ap}
\end{eqnarray}
related to our form in an obvious way. The pairing matrix needs to satisfy
\begin{equation}
\Delta^{\rm T} (-\textbf{p}) = - \Delta (\textbf{p}).
\end{equation}
It is straightforward to check that the BdG Hamiltonian in this representation possesses the particle-hole symmetry (by construction) in the following form:
\begin{equation}
(\sigma_1 \otimes 1_{N\times N} ) H_{\rm BdG} ^{\rm T} (-\textbf{p}) (\sigma_1 \otimes 1_{N\times N} ) = - H_{\rm BdG} (\textbf{p}).
\end{equation}
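As a quick numerical sanity check of this particle-hole relation (not part of the original argument), one can use a hypothetical one-band toy model with an even dispersion and an odd, antisymmetric gap function; the specific choices of $\xi(p)$ and $\Delta(p)$ below are illustrative assumptions only:

```python
import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)

def h_bdg(p, mu=0.5, delta=0.3 * np.exp(0.4j)):
    """Toy one-band BdG Hamiltonian in the standard representation."""
    xi = p**2 - mu            # normal-state dispersion, xi(-p) = xi(p)
    gap = delta * np.sin(p)   # Delta^T(-p) = -Delta(p) for a single band
    return np.array([[xi, gap], [np.conj(gap), -xi]])

# particle-hole symmetry: sigma1 H_BdG^T(-p) sigma1 = -H_BdG(p)
for p in np.linspace(-2.0, 2.0, 41):
    assert np.allclose(sigma1 @ h_bdg(-p).T @ sigma1, -h_bdg(p))
```

The same check goes through for any number of bands, as long as the pairing matrix satisfies $\Delta^{\rm T}(-\textbf{p})=-\Delta(\textbf{p})$.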
We now additionally assume that there is an inversion symmetry, so:
\begin{equation}
P^\dagger H (-\textbf{p}) P = H (\textbf{p})
\end{equation}
\begin{equation}
P^\dagger \Delta (-\textbf{p}) P = \Delta (\textbf{p}).
\end{equation}
For the BdG Hamiltonian this implies that
\begin{equation}
(1\otimes P^\dagger) H_{\rm BdG} (-\textbf{p}) (1\otimes P) = H_{\rm BdG} (\textbf{p})
\end{equation}
Recognizing that transposing the (Hermitian) BdG Hamiltonian is the same as complex-conjugating it, one discerns the antiunitary operator $\mathcal{A}'$
\begin{equation}
\mathcal{A}' = (\sigma_1 \otimes P ) K
\end{equation}
which has the desired effect of anticommuting with the BdG Hamiltonian at fixed momentum, i.e.
\begin{equation}
(\mathcal{A}' )^{-1} H_{\rm BdG} (\textbf{p}) \mathcal{A}' = - H_{\rm BdG} (\textbf{p})
\:.
\end{equation}
The square of this operator is now
\begin{equation}
(\mathcal{A}' )^2 = (\sigma_1 )^2 \otimes (P P^*)= P P^*.
\end{equation}
When $P$ is simply a unit matrix this is $+1$, but when it is not, even if one assumes the usual Hermiticity of $P$, i.e. that $P=P^\dagger$, the square of $\mathcal{A}' $ depends on whether the matrix for $P$ in the given representation is real or imaginary. A simple example of the Dirac Hamiltonian with imaginary Hermitian $P$ is provided in the next appendix. Of course, one can always from the outset work in the eigenbasis of $P$ itself, in which it is a real diagonal matrix, and in which consequently $P P^* = P^2 =+1$. The antiunitary operator that anticommutes with the BdG Hamiltonian therefore always exists. Another way to see that is to construct the BdG Hamiltonian by defining the hole component of the Nambu spinor as a time-reversed particle component, as done in the body of the paper. Then the fact that $\mathcal{A}^2 = +1$ simply reflects the fundamental commutation relation between spatial transformations such as inversion and time reversal. More on this in the next appendix.
\section{Commutation between inversion and time reversal}
\label{ap:AP2}
Let us provide an argument as to why inversion and time reversal operations need to be assumed to be commuting in general, on the familiar example of the Dirac Hamiltonian. First, modulo an overall sign, there is a unique four-dimensional representation of the five-dimensional Clifford algebra, which can always be chosen so that three of the matrices are real ($\alpha_i$, $i=1,2,3$), and two imaginary ($\beta_i$, $i=1,2$).\cite{herbutprb} We may choose all five matrices to be Hermitian, and to be squaring to unity. These are simple generalizations of the known properties of the Pauli matrices. Consider then a massless inversion-symmetric Dirac Hamiltonian, which is the sum of two Weyl Hamiltonians of opposite chirality. It can be written, for example, as
\begin{equation}
H_W ( \textbf{p} ) = \sum_{i=1}^3 p_i \alpha_i.
\end{equation}
There are not one, but two options for the matrix part of the inversion operation $\mathcal{P}$ at this stage: $P_1 = \beta_1$, or $P_2 = \beta_2$. Both have the desired effect on the massless inversion-symmetric Dirac Hamiltonian:
\begin{equation}
P_i ^\dagger H_W ( -\textbf{p}) P_i = H_W ( \textbf{p}),
\end{equation}
and both are Hermitian and unitary matrices.
Likewise, there are two options for the time reversal operator: $T_1 = \beta_1 K$, and $T_2 = \beta_2 K $. The time reversal operation $\mathcal{T}$ in the momentum space is then given by the combined action of $T_k$ and the momentum reversal $\textbf{p} \rightarrow - \textbf{p}$. Since $\beta_{1,2}$ are imaginary, we have
\begin{equation}
[\mathcal{P}_i, \mathcal{T}_j]=0
\end{equation}
only if $i\neq j$; otherwise the two operations anticommute instead of commuting. Let us then choose one anticommuting pair, say $\mathcal{P}_1$ and $\mathcal{T}_1$. Is this a sensible choice? Add a relativistic mass term to the massless Dirac Hamiltonian, and consider
\begin{equation}
H_D ( \textbf{p}) = H_W ( \textbf{p}) + m \beta_k,
\end{equation}
with $k=1$ or $k=2$. The mass $m$ is real. These are the only two options for the mass term, since there are no further four-dimensional matrices that would anticommute with all three matrices $\alpha_i$. If we choose $k=1$, $H_D$ is symmetric under the inversion operation $\mathcal{P}_1$, but the mass term violates the time reversal $\mathcal{T}_1$. If we had chosen $k=2$, then the mass term would respect the time reversal $ \mathcal{T} _1$, but violate the inversion $\mathcal{P}_1$. Obviously, if we chose the second anticommuting pair $\mathcal{P}_2$ and $\mathcal{T}_2$ it would be the other way around. Still, either choice of the mass term would violate one of the two discrete symmetries, if we allowed them to anticommute with each other.
So the very existence of massive relativistic fermions in the world which is both inversion-symmetric and time-reversal-symmetric implies that these two symmetries must be assumed to be commuting. Then the mass term uniquely selects the corresponding operators: if $k=1$, the required pair is $\mathcal{P}_1$ and $\mathcal{T}_2$.
We may also note, in relation to the previous appendix, that in the above representation, in spite of $P_k ^2 =1$, $P_k P_k ^* =-1$, for both $k=1,2$.
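These statements are easy to confirm numerically in one concrete (and non-unique) choice of the representation; the particular matrices below are an assumption made for illustration, not necessarily the convention of Ref.~\onlinecite{herbutprb}:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# three real (alpha_i) and two imaginary (beta_i) Hermitian matrices
alphas = [np.kron(s1, s1), np.kron(s1, s3), np.kron(s3, s0)]
betas = [np.kron(s2, s0), np.kron(s1, s2)]
gammas = alphas + betas

for i, A in enumerate(gammas):
    assert np.allclose(A, A.conj().T)             # Hermitian
    assert np.allclose(A @ A, np.eye(4))          # squares to unity
    for B in gammas[i + 1:]:
        assert np.allclose(A @ B + B @ A, 0)      # mutual anticommutation
assert all(np.allclose(A.imag, 0) for A in alphas)  # alphas are real
assert all(np.allclose(B.real, 0) for B in betas)   # betas are imaginary

# P_k^2 = +1, and yet P_k P_k^* = -1, precisely because the betas are imaginary
for P in betas:
    assert np.allclose(P @ P, np.eye(4))
    assert np.allclose(P @ P.conj(), -np.eye(4))
```

Since the $\beta$'s are imaginary, the operator condition $[\mathcal{P}_i,\mathcal{T}_j]=0$ reduces to matrix anticommutation of $\beta_i$ and $\beta_j$, which indeed holds only for $i\neq j$.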
\section{The effective Hamiltonian in the canonical representation of $SO(3) \times SO(3)$}
\label{ap:Ap3}
\begin{figure*}
\centering
\includegraphics[width=2\columnwidth]{Figure4BF}
\caption{(a) The energy bands of the light particle and hole states (red) and the heavy particle and hole states. Each energy band is doubly degenerate. (b) The energy dispersion of the BdG quasiparticles with broken TR symmetry in the special direction where the intraband coupling between the light states vanishes and $f_{1,2} \propto \mathcal{O}(\Delta^3)$. The interband pairing induces a shift in momentum and energy with $f_3\propto \mathcal{O}(\Delta^2)$ and $f_0 \propto \mathcal{O}(\Delta^2)$, respectively. Meanwhile $f_{1,2}$ introduces a gap between the critical and the massive energy bands. The two critical energy bands intersect the Fermi level at $p_1$ and $p_2$ in that special direction. If one deviates from this special direction, the two points $p_1$ and $p_2$ approach each other until they merge. This way a closed BF surface nucleates. (c) The energy dispersion of the BdG quasiparticles with preserved TR symmetry, again in the special direction. We see that $f_0=0$, i.e. there is no shift in energy of the bands induced by the interband pairing, but only a shift in momentum. This leads to either point or line nodes.}
\label{fig:Fig4}
\end{figure*}
In this appendix, we consider systems in the normal state with $M=2$ and $\mathcal{T}^2=-1$, i.e. each energy band $E_i(\textbf{p})$ is doubly degenerate due to the inversion symmetry, with eigenstates $\phi_{+,i}(\textbf{p})$ and $\phi_{-,i}(\textbf{p})$. The emergence of the BF surface in such a system will be explained in terms of shifts in momentum and in energy of the critical and massive energy bands due to inter- and intraband pairing. To this end, the effective Hamiltonian is written in the canonical representation of $SO(3) \times SO(3)$ and has the form
\begin{eqnarray}
H_{ef}
&=&
f_0(\textbf{p}) \: \sigma_3 \otimes \mathbb{1}_{2\times 2}+
\sum_{j=1}^3 [f_j(\textbf{p}) \mathbb{1}_{2\times 2}\otimes \sigma_j]
\:,
\label{eq:Hef_general}
\end{eqnarray}
where the function $f_0(\textbf{p})$ is defined as
\begin{equation}
f_0(\textbf{p})=\sqrt{Z_1(\textbf{p})^2+Z_2(\textbf{p})^2+Z_3(\textbf{p})^2}
\end{equation}
with $Z_i(\textbf{p})$ being the coefficient of $\sigma_i \otimes \mathbb{1}_{2\times 2}$ in the effective Hamiltonian.
The function $f_0(\textbf{p})$ acts as a ``pseudomagnetic field'' responsible for the emergence of the BF surface \cite{agterberg,brydon}. How $f_0(\textbf{p})$ is related to the electric field $\textbf{a}(\textbf{p})$ and the magnetic field $\textbf{b}(\textbf{p})$ in the body of the paper is shown in the second part of this appendix.
In the normal state, where the pairing matrix $\Gamma(\textbf{p})=0$ and the superconducting gap vanishes, i.e. $\Delta=0$, the effective Hamiltonian is proportional only to $f_{3} (\textbf{p}) = E_1 (\textbf{p}) - \mu$ and describes the two particle and two hole states of the light mode, which arise due to the inversion symmetry, see Fig.~\ref{fig:Fig4}.
However, in the superconducting state with broken time-reversal (TR) symmetry, i.e. $\Gamma=\Gamma_1-{\rm i} \Gamma_2$ with $\Gamma_{1,2}$ finite, the term $f_{3}(\textbf{p})-\big[E_1(\textbf{p})-\mu\big]$ introduces a shift in the momentum of the energy band of order $\mathcal{O}(\Delta^2)$ due to interband pairing, $f_0(\textbf{p})$ introduces a shift in the energy of order $\mathcal{O}(\Delta^2)$, while $f_{1,2}(\textbf{p})\propto \mathcal{O}(\Delta)+\mathcal{O}(\Delta^3)$ introduce a gap between the light particle and hole states. Whenever the leading-order term of $f_{1,2}(\textbf{p})$, which describes the intraband pairing between the light particle and light hole states, vanishes along a certain direction, so that $f_{1,2}(\textbf{p})\propto \mathcal{O}(\Delta^3)$, the shift in momentum and energy of the energy bands leads to two points $p_1$ and $p_2$ along that special direction where the quasiparticle energy vanishes, see Fig.~\ref{fig:Fig4}. If one deviates from this special direction, the two points come closer to each other and, by continuity, eventually merge at a single point. This leads to a closed BF surface.
In the case of a superconducting state with preserved TR symmetry, $f_0(\textbf{p})\equiv 0$, which means that no shift in the energy occurs. There is only a shift in the momentum of the energy bands introduced by $f_3(\textbf{p})-\big[E_1(\textbf{p})- \mu \big]$. This leads in general to a line of gapless nodes.
\subsection{The relation between the effective Hamiltonan and inter- and intraband pairing}
\label{ap:Ap3.1}
Next, we want to relate the intra- and interband pairing of the different energy bands to the functions $f_\alpha(\textbf{p})$ with $\alpha\in\{0,1,2,3\}$ which shift the light states in energy and momentum ($f_{0,3}$) and open up a gap between the critical and massive energy bands ($f_{1,2}$).
To derive the effective Hamiltonian, we employ Eq.~\eqref{eq:Eq19} where the effective Hamiltonian is (again) given by
\begin{equation}
H_{ef}(\textbf{p})= H_l(\textbf{p})-H_{lh}(\textbf{p}) H_h^{-1}(\textbf{p}) H_{lh}^\dagger(\textbf{p})
\:.
\end{equation}
For the doubly degenerate energy bands, the matrix describing the light states is $H_l=H_{11}^{(0)}$, while $H_{ii}^{(0)}$ with $i \ge 2$ describe the heavy states and the intraband pairing between them.
The matrices $H_{ii}^{(0)}$ describing the energy dispersion and the intraband pairing within one energy band are thus given by
\begin{equation}
H_{ii}^{(0)}=\begin{pmatrix}
E_i(\textbf{p})-\mu & R_i^{(0)} & 0 & 0\\
\bar{R}_i^{(0)} & -E_i(\textbf{p})+\mu & 0 & 0\\
0 & 0 & E_i(\textbf{p})-\mu & R_i^{(0)}\\
0 & 0 & \bar{R}_i^{(0)} & -E_i(\textbf{p})+\mu
\end{pmatrix}
\:,
\end{equation}
with $R_i^{(0)} = \phi_{\pm,i}^\dagger(\textbf{p}) \Gamma \phi_{\pm,i}(\textbf{p})$. The form of the matrices is determined by the inversion and TR symmetry of the normal-state Hamiltonian, encoded in the operator $D=\mathcal{P}\cdot \mathcal{T}= P\cdot U K$ with $D^2=-1$, which constrains the pairing matrices $\Gamma_{1,2}$. The unitary part of $D$ is defined as $\tilde{\mathcal{U}}=P \cdot U$. A consequence of this property is the fact that the eigenstates transform as
\begin{eqnarray}
\phi_{-,i}(\textbf{p})&=& +D\phi_{+,i}(\textbf{p})\\
\phi_{+,i}(\textbf{p})&=& -D\phi_{-,i}(\textbf{p})
\:,
\end{eqnarray}
while the pairing term transforms as
\begin{equation}
\tilde{\mathcal{U}}^\dagger\: \Gamma(\textbf{p}) \: \tilde{\mathcal{U}}= \Gamma(\textbf{p})^{\rm T}
\:.
\end{equation}
The elements $(1,4),(2,3),(3,2)$, and $(4,1)$ of $H_{ii}^{(0)}$ are zero, since these matrix elements describe the coupling between the Kramers pairs and
\begin{eqnarray}
\phi_{+,i}^\dagger \Gamma \phi_{-,i}
=
\big(-D \phi_{-,i} \big)^\dagger \Gamma D \phi_{+,i}
=
-\phi_{+,i}^\dagger \Gamma \phi_{-,i}
=0
\:.
\end{eqnarray}
The matrices $H_{ij}^{(0)}$ with $i\neq j$ describe the coupling between the light state and the $j$th heavy state in the case of $i=1$, $j\neq 1$, and between the $i$th and $j$th heavy states otherwise. They are defined as
\begin{equation}
H_{ij}^{(0)}
=
\begin{pmatrix}
0 & C_{ij}^{(0)} & 0 & B_{ij}^{(0)}\\
\bar{A}_{ij}^{(0)} & 0 & \bar{D}_{ij}^{(0)} & 0\\
0 & -D_{ij}^{(0)} & 0 & A_{ij}^{(0)} \\
- \bar{B}_{ij}^{(0)} & 0 & \bar{C}_{ij}^{(0)} & 0
\end{pmatrix}
\:,
\end{equation}
where the coefficients are given by
\begin{eqnarray}
A_{ij}^{(0)}
&=&
\phi_{-,i}^\dagger(\textbf{p}) \Gamma \phi_{-,j}(\textbf{p})=\phi_{+,j}^\dagger(\textbf{p}) \Gamma \phi_{+,i}(\textbf{p})
\:,
\\
B_{ij}^{(0)}
&=&
\phi_{+,i}^\dagger(\textbf{p}) \Gamma \phi_{-,j}(\textbf{p})
\:,
\\
C_{ij}^{(0)}
&=&
\phi_{+,i}^\dagger(\textbf{p}) \Gamma \phi_{+,j}(\textbf{p})
\:,
\\
\bar{D}_{ij}^{(0)}
&=&
\phi_{+,i}^\dagger(\textbf{p}) \Gamma^\dagger \phi_{-,j}(\textbf{p})
\:.
\end{eqnarray}
Note that the diagonal blocks $H_{ii}^{(0)}$ are Hermitian matrices whereas the off-diagonal blocks $H_{ij}^{(0)}$ in general are not.
To obtain a physical intuition for how the functions $f_\alpha$ are related to the inter- and intraband pairing, we consider the result of second-order perturbation theory. Since the matrix blocks belonging to $H_{lh}$ are of first order in $\Delta$, we neglect all intra- and interband coupling between the heavy states, i.e. we set $\Gamma=0$ in all $H_{ij}^{(0)}$ with $i,j\ge 2$, which yields
\begin{equation}
H_{ef} = H_{11}^{(0)} - \sum_{k=2}^N H_{1k}^{(0)} \big(H_{kk,\Gamma=0}^{(0)}\big) ^{-1} H_{1k}^{(0) \dagger} + \mathcal{O}(\Delta^3)
\:.
\end{equation}
The BF surface emerges when the leading order of the intraband pairing between the light particle and light hole states vanishes along one special direction, which is described in the effective Hamiltonian by
\begin{equation}
f_1(\textbf{p}) -i f_{2}(\textbf{p})= \phi_{+,1}^\dagger(\textbf{p}) \Gamma \phi_{+,1}(\textbf{p}) + \mathcal{O}(\Delta^3)
\:.
\label{eq:f1+f2}
\end{equation}
This can also be rewritten in terms of $\Gamma=\Gamma_1 -{\rm i}\Gamma_2$ as
\begin{eqnarray}
f_1 &=& \phi_{+,1}^{\dagger}(\textbf{p}) \Gamma_1 \phi_{+,1}(\textbf{p}) + \mathcal{O}(\Delta^3) \:,\\
f_2 &=& \phi_{+,1}^{\dagger}(\textbf{p}) \Gamma_2 \phi_{+,1}(\textbf{p}) +\mathcal{O}(\Delta^3)
\:.
\end{eqnarray}
The interband pairing between the light state and the heavy states shifts the energy band crossing of the light particle and light hole state in momentum and is given by
\begin{widetext}
\begin{equation}
f_3(\textbf{p})=E_1(\textbf{p})-\mu+\sum_{k=2}^N \frac{1}{2 [E_k(\textbf{p})-\mu]}\big[
|\phi_{+,1}^\dagger(\textbf{p}) \Gamma \phi_{-,k}(\textbf{p})|^2 +
|\phi_{+,1}^\dagger (\textbf{p}) \Gamma \phi_{+,k}(\textbf{p})|^2 +
|\phi_{+,1}^\dagger(\textbf{p}) \Gamma^\dagger \phi_{-,k}(\textbf{p})|^2 +
|\phi_{+,1}^\dagger (\textbf{p}) \Gamma^\dagger \phi_{+,k}(\textbf{p})|^2 \big]\:.
\label{eq:f3}
\end{equation}
\end{widetext}
The function $f_0(\textbf{p})$, which introduces a shift in energy of the energy bands and is thus responsible for the nucleation of the BF surface, is defined as $f_0=\sqrt{Z_1^2+Z_2^2+Z_3^2}$. The functions $Z_1(\textbf{p})$ and $Z_{2}(\textbf{p})$ describe the interband pairing between the light states and the heavy states and are only finite when TR symmetry is broken (i.e. $\Gamma_2$ is finite), as can be seen in
\begin{widetext}
\begin{equation}
Z_{1}(\textbf{p}) -i Z_{2}(\textbf{p})
=
\sum_{k=2}^N \frac{1}{ E_k(\textbf{p})-\mu}\big[\big(\phi_{+,1}^\dagger (\textbf{p}) \Gamma^\dagger \phi_{+,k}(\textbf{p})\big) \big(\phi_{+,1}^\dagger(\textbf{p}) \Gamma \phi_{-,k}(\textbf{p}) \big) -\big( \phi_{+,1}^\dagger(\textbf{p}) \Gamma^\dagger \phi_{-,k}(\textbf{p}) \big) \big( \phi_{+,1}^\dagger(\textbf{p}) \Gamma \phi_{+,k}(\textbf{p}) \big) \big]
\:,
\label{eq:z1+z2}
\end{equation}
or also with $\Gamma=\Gamma_1 -{\rm i} \Gamma_2$
\begin{eqnarray}
Z_1 &=& 0 \:, \\
Z_2 &=& \sum_{k=2}^N \frac{2}{E_k(\textbf{p})-\mu}[(\phi_{+,1}^{\dagger}(\textbf{p}) \Gamma_2 \phi_{+,k}(\textbf{p})) (\phi_{+,1}^\dagger(\textbf{p}) \Gamma_1 \phi_{-,k}(\textbf{p}))
-
(\phi_{+,1}^{\dagger}(\textbf{p}) \Gamma_1 \phi_{+,k}(\textbf{p})) (\phi_{+,1}^\dagger(\textbf{p}) \Gamma_2 \phi_{-,k}(\textbf{p}))]
\:.
\end{eqnarray}
The same is true for $Z_3$ which is given by
\begin{equation}
Z_3(\textbf{p})=\sum_{k=2}^N \frac{1}{2 (E_k(\textbf{p})-\mu)}\big[
|\phi_{+,1}^\dagger(\textbf{p}) \Gamma \phi_{-,k}(\textbf{p})|^2 +
|\phi_{+,1}^\dagger (\textbf{p}) \Gamma \phi_{+,k}(\textbf{p})|^2 -
|\phi_{+,1}^\dagger(\textbf{p}) \Gamma^\dagger \phi_{-,k}(\textbf{p})|^2 -
|\phi_{+,1}^\dagger (\textbf{p}) \Gamma^\dagger \phi_{+,k}(\textbf{p})|^2 \big]
\:.
\label{eq:z3}
\end{equation}
\end{widetext}
In Eqs.~\eqref{eq:z1+z2}-\eqref{eq:z3}, we see explicitly that $f_0(\textbf{p}) \equiv 0$ for a TR-preserving superconducting state with $\Gamma_2=0$. This implies that the interband pairing induces no shift in energy of the critical and massive energy bands. The only shift induced by the interband pairing is in the momentum of the light particle and hole states, which leads only to line or point nodes, as can be seen in Fig.~\ref{fig:Fig4}.
\subsection{The effective Hamiltonian in the $SO(4)$ representation and in the canonical $SO(3)\times SO(3)$ representation}
\label{ap:Ap33}
In this section, we want to relate the $SO(4)$ representation of the effective Hamiltonian to the canonical $SO(3)\times SO(3)$ representation.
Although the $SO(4)$ commutation relations guarantee the existence of the unitary transformation that would bring the matrices in Eqs.~\eqref{eq:Eq42}-\eqref{eq:Eq47} into the standard form, we nevertheless provide it here, for completeness:
\begin{eqnarray}
\mathcal{U} &=&
%
\frac{1}{2} \begin{pmatrix} 1 & - i & i & 1 \\
-1 & -i & -i & 1 \\
-i & -1 & -1 & i \\
-i & 1 & -1 & -i
\end{pmatrix}
%
\:.
\end{eqnarray}
Then
\begin{equation}
\mathcal{U} R_{+} \mathcal{U}^\dagger = \frac{1}{2} ( 1\otimes \sigma_3 , 1\otimes\sigma_1, 1\otimes\sigma_2 ),
\end{equation}
\begin{equation}
\mathcal{U} R_{-} \mathcal{U}^\dagger =\frac{1}{2} ( \sigma_2 \otimes 1, \sigma_3 \otimes 1, \sigma_1 \otimes 1 ),
\end{equation}
which are cyclic permutations of the canonical form $1\otimes \sigma_k /2$, $k=1,2,3$, and $\sigma_k /2 \otimes 1$, respectively.
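Since the matrices $R_\pm$ of Eqs.~\eqref{eq:Eq42}-\eqref{eq:Eq47} are defined earlier in the paper and not reproduced here, the following numerical sketch only verifies what is stated above: that $\mathcal{U}$ is unitary, and that each cyclically permuted canonical triple closes an su(2) algebra, with the two triples mutually commuting, as befits $SO(4)\simeq SO(3)\times SO(3)$:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

U = 0.5 * np.array([[1, -1j, 1j, 1],
                    [-1, -1j, -1j, 1],
                    [-1j, -1, -1, 1j],
                    [-1j, 1, -1, -1j]])
assert np.allclose(U @ U.conj().T, np.eye(4))     # U is unitary

# the two canonical triples quoted in the text, including the cyclic shifts
Rp = [np.kron(s0, s3) / 2, np.kron(s0, s1) / 2, np.kron(s0, s2) / 2]
Rm = [np.kron(s2, s0) / 2, np.kron(s3, s0) / 2, np.kron(s1, s0) / 2]

def comm(a, b):
    return a @ b - b @ a

# each triple obeys [R_i, R_j] = i R_k for (i, j, k) cyclic ...
for R in (Rp, Rm):
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        assert np.allclose(comm(R[i], R[j]), 1j * R[k])
# ... and the two triples commute with each other
assert all(np.allclose(comm(a, b), 0) for a in Rp for b in Rm)
```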
%
Hence, we can relate the coefficients $a_i(\textbf{p})$ and $b_i(\textbf{p})$ of the $SO(4)$ representation to their canonical counterparts $f_i(\textbf{p})$ and $Z_i(\textbf{p})$ in the following way:
\begin{eqnarray}
a_1(\textbf{p})&=&\frac{1}{2}[f_3(\textbf{p})+ Z_2(\textbf{p})],
\quad
b_1(\textbf{p})=\frac{1}{2}[f_3(\textbf{p})- Z_2(\textbf{p})],
\\
a_2(\textbf{p})&=&\frac{1}{2}[f_1(\textbf{p})+ Z_3(\textbf{p})],
\quad
b_2(\textbf{p})=\frac{1}{2}[f_1(\textbf{p})- Z_3(\textbf{p})],
\\
a_3(\textbf{p})&=&\frac{1}{2}[f_2(\textbf{p})+ Z_1(\textbf{p})],
\quad
b_3(\textbf{p})=\frac{1}{2}[f_2(\textbf{p})- Z_1(\textbf{p})]
\:.
\end{eqnarray}
The condition for the emergence of the BF surface (see Eq.~\eqref{eq:Eq31}) can now be expressed by the coefficients $f_i$ and $Z_i$ as
\begin{equation}
\boldsymbol{a}(\textbf{p})\cdot \boldsymbol{b}(\textbf{p})=
\frac{1}{4}\sum_{i=1}^3[f_i^2(\textbf{p})-Z_i^2(\textbf{p})]= \frac{1}{4}\Big[\sum_{i=1}^3 f_i^2(\textbf{p})-f_0^2(\textbf{p})\Big]=0
\:,
\end{equation}
which is the same condition as Eq.~(10) of Ref.~\onlinecite{link2}, and corresponds to the condition that the Pfaffian of the effective Hamiltonian has to vanish. In other words: the condition for the emergence of the BF surface in the $SO(4)$ representation is the orthogonality of the electric and magnetic fields $\textbf{a}(\textbf{p})$ and $\textbf{b}(\textbf{p})$, whereas in the canonical representation it translates into the requirement that the interband coupling, which introduces the shift in momentum of the critical and massive energy bands as well as the gap between the critical and massive bands, has to be as large as the shift in energy of the bands induced by the ``pseudomagnetic'' field $f_0$.
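The zero-mode condition is straightforward to confirm numerically for the effective Hamiltonian in the canonical form of Eq.~\eqref{eq:Hef_general}: the four eigenvalues are $\pm f_0 \pm |\boldsymbol{f}|$, so $\det H_{ef}=(f_0^2-|\boldsymbol{f}|^2)^2$, and a zero mode appears exactly when $\sum_i f_i^2=f_0^2$. A minimal check with arbitrarily chosen numbers:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def h_ef(f0, f):
    """Canonical effective Hamiltonian f0 s3 x 1 + sum_j f_j 1 x s_j."""
    return f0 * np.kron(s3, s0) + sum(
        fj * np.kron(s0, s) for fj, s in zip(f, (s1, s2, s3)))

f = np.array([0.2, -0.4, 0.5])
fn = np.linalg.norm(f)

# away from the BF condition all four eigenvalues are nonzero ...
assert np.all(np.abs(np.linalg.eigvalsh(h_ef(fn + 0.3, f))) > 1e-8)
# ... and exactly at f0 = |f| (i.e. sum f_i^2 = f0^2) zero modes appear
assert np.any(np.abs(np.linalg.eigvalsh(h_ef(fn, f))) < 1e-8)

# det H_ef = (f0^2 - |f|^2)^2, consistent with the Pfaffian condition
f0 = 0.7
assert np.isclose(np.linalg.det(h_ef(f0, f)).real, (f0**2 - fn**2) ** 2)
```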
For a superconducting state with preserved TR symmetry, $Z_i(\textbf{p})=0$ (as proven above) which yields that the electric and magnetic fields are parallel with $\textbf{a}(\textbf{p})=\textbf{b}(\textbf{p})$ (as demanded in Eq.~\eqref{eq:Eq48}). Furthermore, we see that the component $a_3(\textbf{p})=b_3(\textbf{p})=0$ vanishes when TR symmetry is preserved, i.e. $\Gamma_2$ is zero, which is in agreement with Eq.~\eqref{eq:Eq49}.
\end{appendix}
\section{Introduction}
Global conservation laws give rise to superselection rules (SSRs) which forbid the observation of coherences between
particular subspaces of states \cite{WWW,AhaSus}. Such global laws do not apply in subsystems
\cite{AhaSus,KitMayPre}. For example, the angular momentum of an object can be changed provided the total angular
momentum of the object and another system, the ancilla, is conserved. The ancilla here acts as a {\em reference system}
which alleviates the effect of the SSR by locally breaking the associated symmetry \cite{AhaSus}. Conversely, the lack of
a reference system induces the SSR. For example, without a spatial orientation frame, the state of a spin-$\frac{1}{2}$
particle will be completely mixed.
The last few years have witnessed a resurgence of interest in SSRs and quantum reference systems, particularly within the
context of quantum information theory. The recent review by Bartlett, Rudolph and Spekkens \cite{BRS-RMP}
describes the current state of affairs. For example, Eisert {\it et al.} \cite{EisFelPapPle} and recently Jones {\it et al. }
\cite{JonWisBar} studied the decrease in distillable entanglement due to the loss of relative-ordering information for
sets of ebits. The optimal cost of aligning reference frames has been calculated in a number of different settings
\cite{ARF}. Communication in the presence or absence of shared reference frames has been extensively studied by
Bartlett {\em et al.} \cite{BarRudSpe}. The conservation of particle number was shown by two of us \cite{WisVac} to
limit shared particle entanglement. The repercussions for various systems including those in condensed matter physics
was explored by Dowling {\em et al.} \cite{DDW}. This constraint on shared entanglement of particles has been
generalized to arbitrary SSRs by Bartlett and Wiseman \cite{BarWis}. For the special case of a U(1)-SSR, a new
resource, the shared phase reference, has been studied by Vaccaro {\em et al.} \cite{VacAnsWis}, and quantified in the
asymptotic \cite{SchVerCir} and nonasymptotic \cite{Enk05} regimes.
In this paper we investigate the effect of a SSR on the resources represented by a quantum state. Following Oppenheim
{\em et al.} we quantify the resources in terms of {\em mechanical} work extractable from a heat bath and {\em logical}
work as performed in quantum information processing (QIP) \cite{OppHor}. We expose a fundamental tradeoff between
the extractable work under the SSR and the ability to act as a reference system for the SSR. We treat both the unipartite
and the bipartite case. The latter shows a triality between the accessible entanglement, locally extractable mechanical
work and the ability to act as a shared reference system. These results are crucial for fully understanding and quantifying
resources used in QIP.
We wish to emphasize from the outset that the resources are determined in the {\em non-asymptotic} regime in the
following sense. While the asymptotic limit $\rho^{\otimes n}$ for $n\to\infty$ is often taken when studying resources
such as entanglement, this limit is not appropriate for the problems addressed here. Indeed, in the asymptotic limit
reference systems such as those for spatial orientation and quantum phase reduce to their less-interesting classical
counterparts. Instead the situation we consider is when the resources such as accessible entanglement, local work and
reference ability are measured for just {\it one copy of the state $\rho$}. The same situation has been treated in previous
work \cite{WisVac, VacAnsWis} for the specific case of the accessible entanglement of indistinguishable particles. In
operational terms, we imagine that the state of the system is {\it transferred} by operations that are allowed by the SSR to
ancillary systems which are not themselves subject to the SSR. Once transferred to the SSR-free ancillas, the resources
are fungible in the sense that they can be used, processed, transferred etc. in a manner free of the SSR. Our results
quantify the amount of the resources that are transferable in this way from the single copy of the state $\rho$ and made
SSR free. This is what we mean by the terms {\it extractable work} and {\it accessible entanglement}. Thereafter one
could consider the asymptotic limit of the resources contained in the SSR-free ancillary systems and this would justify
the entropic measures for work and entanglement.
\section{Extractable work and asymmetry}
\subsection{Framework for the SSR }
An SSR is associated with a set $\tau=\{T(g)\}$ of unitary operations indexed by $g$ whose action on the system is both
forbidden and undetectable. There are two physically-motivated conditions on the set $\tau$. If an operator $T(g)$ is
forbidden under the SSR, then so is the time-reversed process which is given by the inverse $T^{-1}(g)$. This means if
$T(g)\in \tau$ then $T^{-1}(g)\in \tau$. If two operations $T(g_1)$ and $T(g_2)$ are forbidden then their product
$T(g_1)T(g_2)$ is necessarily forbidden and correspondingly the product is an element of $\tau$. The set $\tau$ is
therefore closed under multiplication. These conditions endow $\tau$ with a group structure, i.e. the set
\begin{equation}
\tau=\{T(g):g\in G\}
\end{equation}
is a unitary representation of the abstract group $G=\{g\}$. We shall label the SSR associated with group $G$ as
$G$-SSR \cite{LieGroupNote}.
Let $\rho$ be an arbitrary density operator representing the (possibly mixed) state of a system. A $G$-SSR restricts not
this state, but rather the allowed {\em operations} on it to those that are $G$-invariant \cite{BarWis}. That is, an
allowed operation ${\cal O}$ must satisfy
\begin{equation}
{\cal O}[T(g) \rho T^\dagger(g)] = T(g) ({\cal O}\rho) T^\dagger(g), \forall g\in G\ .
\label{G invariant op}
\end{equation}
Under this restriction, our effective knowledge of the system is represented not by $\rho$ but by the ``twirl'' of $\rho$
\cite{BarWis}
\begin{equation}
{\cal G}_G[\rho] \equiv \frac{1}{|G|}\sum_{g\in G}
T(g) \rho T^\dagger(g)\ ,
\label{inv_op}
\end{equation}
where $|G|$ is the order of the group $G$.
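A minimal illustration of the twirl, for the simplest nontrivial choice $G=\mathbb{Z}_2$ represented by $\{1,\sigma_z\}$ on a single qubit (this particular group and representation are illustrative assumptions, not tied to any specific SSR discussed later):

```python
import numpy as np

T = [np.eye(2), np.diag([1.0, -1.0])]   # Z_2 representation {1, sigma_z}

def twirl(rho, T):
    """The G-twirl defined in the text: group average of T(g) rho T(g)^dagger."""
    return sum(t @ rho @ t.conj().T for t in T) / len(T)

# a pure state with coherence between the two superselection sectors
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)

rho_G = twirl(rho, T)
assert np.allclose(rho_G, np.eye(2) / 2)     # coherences are erased
assert np.allclose(twirl(rho_G, T), rho_G)   # the twirled state is symmetric
```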
We will require that the representation factorizes as
\begin{equation}
T(g)=T_1(g)\otimes T_2(g)\otimes\cdots
\label{T_factorisation}
\end{equation}
for multipartite systems whose corresponding Hilbert space is given by ${\cal H}={\cal H}_1\otimes{\cal
H}_2\otimes\cdots$ where ${\cal H}_n$ is the Hilbert space for the system labeled by $n$.
\subsection{Extractable work}
The purpose of a reference system is to mask the effects of the $G$-SSR by yielding less mixing than given in
\erf{inv_op}. A {\it physically meaningful} definition of the ability of a system to act as a reference system should
therefore be based on a physical quantity that measures a state's mixedness. This measure is conveniently provided by the
amount of mechanical work that can be extracted from a thermal reservoir at temperature $T$ using quantum state
$\rho$. This is given by \cite{vonN,OppHor}
\begin{equation}
W(\rho) = k_{\rm B}T[\log D - S(\rho)],
\label{W}
\end{equation}
where $D$ is the dimension of the Hilbert space and
\begin{equation}
S(\rho) \equiv - \tr{\rho\log\rho}
\end{equation}
is the von Neumann entropy of $\rho$. This expression shows that the purer the state $\rho$, the more work
can be extracted from it. For convenience, in the following we set $k_{\rm B}T=1$ and use the binary logarithm. In the
presence of the $G$-SSR this resource reduces to the {\em extractable} work
\begin{equation}
W_G(\rho) \equiv W({\cal G}_G[\rho])\ .
\label{W_G}
\end{equation}
That \eqr{W_G} gives the work extractable under the SSR can be shown along the same lines as the proof of
Ref.~\cite{BarWis} for accessible entanglement. The crucial point here is
that once the work $W_G(\rho)$ has been extracted by a $G$-invariant operation, applying the twirl ${\cal G}_G$ to the system
does not change the amount of work that was extracted. According to \eqr{G invariant op}, the same result is obtained if
${\cal G}_G$ is applied to the system {\em before} the work is extracted, and so the extractable work is $W({\cal
G}_G[\rho])$. A {\em symmetric state}, i.e. one for which
\begin{equation}
{\cal G}_G[\rho] = \rho\ ,
\end{equation}
suffers no loss in its ability to do work. In contrast, the work extractable from {\em
asymmetric} states (${\cal G}_G[\rho] \neq \rho$) is reduced under the $G$-SSR.
As an example, consider a spin-$\smallfrac{1}{2} $ particle prepared in state $\rho$ by Alice and sent to Bob, and let Bob have
knowledge only of the direction of Alice's $z$ axis. Bob cannot distinguish rotations by Alice about the $z$ axis. Thus
his knowledge of the state is constrained by the SSR induced by the $U(1)$ symmetry group associated with the unitary
representation
\begin{equation}
\left\{T(\theta):\theta \in [0,2\pi), T(\theta) = \exp(i \theta J_z/\hbar) \right\}
\end{equation}
where $J_z = \frac{\hbar}{2}\sigma_z$ and $\sigma_{z}$ is the Pauli operator for the $z$ component of spin.
Accordingly Bob ascribes the state
\begin{equation}
{\cal G}_U[\rho] = \frac{1}{2\pi}\int_0^{2\pi}e^{i\theta J_{z}/\hbar}\rho e^{-i\theta J_{z}/\hbar}d\theta
\end{equation}
to the spin. Consider the spin-up state $\rho=\ket{1}\bra{1}$, where $\sigma_{z}\ket{\pm 1}=\pm
\ket{\pm 1}$.
The state $\ket{1}$ is symmetric with respect to $\{T(\theta)\}$ so for this state $W=1$ and the amount of extractable
work is also $W_{U}=1$. In contrast, the state $\ket{+}=(\ket{1}+\ket{-1})/\sqrt{2}$ is asymmetric with respect to
$\{T(\theta)\}$, with
\begin{equation}
{\cal G}_U[\rho] = \smallfrac{1}{2}(\ket{1}\bra{1}+\ket{-1}\bra{-1})\ .
\end{equation}
Even though the state $\ket{+}$ has $W=1$, under the SSR Bob can extract no work as $W_U=0$.
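This example is easy to check numerically. In the sketch below (our illustration; the helper names are ours), we use the fact that the $U(1)$ twirl of a qubit simply deletes the off-diagonal elements in the $J_z$ eigenbasis, so $W$ and $W_U$ follow from the eigenvalues of the original and twirled states, with $k_{\rm B}T=1$ and binary logarithms as above:

```python
import math

# Numerical check of the single-spin example (our illustration): the
# U(1) twirl of a qubit removes the off-diagonal elements of rho in the
# J_z eigenbasis {|1>, |-1>}, so entropies follow from the eigenvalues.

def entropy_bits(probs):
    """Shannon entropy (bits) of a set of eigenvalues."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

def work(probs, D=2):
    """W = log2(D) - S for a state with the given eigenvalues."""
    return math.log2(D) - entropy_bits(probs)

# Spin-up |1><1| is symmetric: twirling leaves it pure, so W_U = W = 1.
assert work([1.0, 0.0]) == 1.0

# |+><+| is pure (eigenvalues {1, 0}) but twirls to diag(1/2, 1/2):
W_plus   = work([1.0, 0.0])   # W   = 1 bit
W_U_plus = work([0.5, 0.5])   # W_U = 0 bits
print(W_plus, W_U_plus)       # → 1.0 0.0
```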
\subsection{Asymmetry}
A SSR thus introduces the need for a new resource: a system acting as a reference system to break the underlying
symmetry. We now show that
\begin{equation}
A_G (\rho)\equiv S({\cal G}_G[\rho]) - S(\rho)\ ,
\label{A_G}
\end{equation}
which is the natural entropic measure of the {\em asymmetry} of $\rho$ with respect to $G$, is a measure that {\it
quantifies} the ability of a system to act as a reference system. To do this we need to show that $A_G$ has the following
properties:
\begin{list}{({\em\roman{enumi}})}{\usecounter{enumi}}
\item $A_G(\rho) \ge 0$;
\item $A_G(\rho) = 0$ iff $\rho$ is symmetric;
\item $A_G(\rho)$ cannot increase under the restriction of the $G$-SSR; and
\item $A_G(\rho)$ quantifies the ability of $\rho$ to act as a reference system.
\end{list}
The first two follow directly from the properties of the entropy function \cite{SWW}. For the
third, we have
\begin{theo}
\label{thm_A_no_inc}
No $G$-invariant operation can increase (on average) the asymmetry
$A_G(\rho)$ of a state $\rho$.
\end{theo}
\begin{proof}
The most general $G$-invariant operation is a measurement that transforms an initial state $\rho$
into one of $M$ states
\begin{equation}
\rho_j = \frac{1}{P_j}{\cal O}_j[\rho]\ ,
\end{equation}
such that
\begin{equation}
{\cal O}_j[T(g) \rho T^\dagger(g)] = T(g) ({\cal O}_j[\rho]) T^\dagger(g), \forall g\in G\ ,
\end{equation}
with probability $P_j = \mbox{Tr}({\cal O}_j[\rho])$. This operation includes the possibility of adding ancillas in
prepared states and performing unitary operations and measurements on the combined system and ancillas. We wish to
show that
\begin{equation}
A_G(\rho)\ge \sum_j P_j A_G(\rho_j)\ ,
\end{equation}
i.e. from \eqr{A_G}
\begin{equation}
\label{ineq}
S({\cal G}_G[\rho]) - S(\rho) \geq \sum_j P_j [ S({\cal G}_G[\rho_j]) - S(\rho_j) ]\ ,
\end{equation}
which can be rearranged as
\begin{equation}
S({\cal G}_G[\rho]) - \sum_j P_j S({\cal G}_G[\rho_j])\ \geq\ S(\rho) - \sum_j P_j S(\rho_j) \ .
\label{ineq_rearranged}
\end{equation}
Note that, because the ${\cal O}_j$ are $G$-invariant, we can interchange the order of the twirl ${\cal G}_G$ and the
${\cal O}_j$ operations.
Denoting the average change in entropy under the measurement operation by
\begin{equation}
\langle \Delta S_{\cal O}(\rho) \rangle=S(\rho)- \sum_j P_j S(\rho_j)
\end{equation}
allows us to rewrite the inequality we wish to prove as
\begin{equation}
\langle \Delta S_{\cal O}({\cal G}_G[\rho]) \rangle \geq \langle \Delta S_{\cal O}(\rho) \rangle\ .
\label{ineq_final}
\end{equation}
We now use the following three facts:
\begin{list}{({\em\roman{enumi}})}{\usecounter{enumi}}
\item for all operations the average entropy reduction, $\langle \Delta S_{\cal O}(\rho) \rangle$,
is {\em concave} in $\rho$~\cite{KJ}, and so, e.g.
$\ip{\Delta S_{\cal O}(\sum_j p_j\rho_j)}\ge\sum_jp_j\ip{\Delta S_{\cal O}(\rho_j)}$;
\item the twirl operation produces the convex mixture
\begin{equation}
{\cal G}_G[\rho] = \frac{1}{|G|}\sum_g \sigma_g
\end{equation}
where $\sigma_g = T(g)\rho T^{\dagger}(g)$; and
\item $\langle \Delta S_{\cal O}(\sigma_g)
\rangle = \langle \Delta S_{\cal O}(\rho) \rangle$ for all $g$ due to the $G$-invariance of the ${\cal
O}_j$ and the unitarity of the $T(g)$.
\end{list}
Putting these together we have
\begin{eqnarray}
\langle \Delta S_{\cal O}({\cal G}_G[\rho]) \rangle
&=& \langle \Delta S_{\cal O}\Big(\frac{1}{|G|}\sum_g \sigma_g\Big) \rangle\nonumber\\
&\geq& \frac{1}{|G|}\sum_g \langle \Delta S_{\cal O}(\sigma_g) \rangle\nonumber\\
&=& \frac{1}{|G|}\sum_g \langle \Delta S_{\cal O}(\rho) \rangle
= \langle \Delta S_{\cal O}(\rho) \rangle\nonumber\\
\end{eqnarray}
which completes the proof of \eqr{ineq_final}.
\end{proof}
To show the fourth property let us first define $\Upsilon (X;\rho_1,\rho_2)$, the {\em synergy} of a quantity $X$, as
\begin{equation}
\Upsilon (X;\rho_1,\rho_2)
\equiv X(\rho_{1}\otimes\rho_{2}) - [X(\rho_{1}) +
X(\rho_{2})]
\label{DeltaX}
\end{equation}
for two systems in states $\rho_1$ and $\rho_2$. The extent to which system 1 acts as a reference system for system 2
(or vice versa) is the synergy of the extractable work, $\Upsilon(W_G; \rho_1,\rho_2)$; that is, the amount by which the
extractable work of the whole is larger than the sum of the extractable work of the parts. Then we have the following:
\begin{theo}
\label{thm_A_ref}
The synergy of the extractable work is bounded by asymmetry:
\begin{equation}
\Upsilon(W_G; \rho_1,\rho_2) \leq \mbox{\em min}\{ A_G(\rho_1), A_G(\rho_2)\}
\end{equation}
where $\rho_1$ and $\rho_2$ are arbitrary states of two systems sharing the same symmetry group $G$. Further, this
bound is achievable, in the sense that for every $\rho_1$ there exists a $\rho_2$ such that $\Upsilon(W_G; \rho_1,\rho_2)
= A_G(\rho_1)$.
\end{theo}
\begin{proof}
We first note from Eqs.~(\ref{W}), (\ref{W_G}) and (\ref{A_G}) that the extractable work can be written as
\begin{equation}
W_G(\rho)=W(\rho)-A_G(\rho)
\label{WG_is_W_minus_A}
\end{equation} and, because $W(\rho_1\otimes\rho_2)=W(\rho_1)+W(\rho_2)$, the synergy of the extractable work
may be written as
\begin{eqnarray}
&&\Upsilon (W_G; \rho_1,\rho_2)\nonumber\\
&&\ = W_G(\rho_1\otimes\rho_2)-[W_G(\rho_1)+W_G(\rho_2)]\nonumber\\
&&\ = [W(\rho_1\otimes\rho_2)-A_G(\rho_1\otimes\rho_2)]\nonumber\\
&&\hspace{1cm}-[W(\rho_1)-A_G(\rho_1)+W(\rho_2)-A_G(\rho_2)]\nonumber\\
&&\ = A_{G}(\rho_1) + A_{G}(\rho_2) - A_{G}(\rho_1\otimes\rho_2)\ .
\label{DeltaW_G}
\end{eqnarray}
We next note that $A_{G}(\rho_1\otimes\rho_2)$ is equal to the Holevo $\chi$ quantity \cite{SWW},
$\chi_{12}$, for the ensemble
\begin{equation}
\left\{ \left(P_g, \sigma_g\right)\ \forall g\in G\right\}
\end{equation}
where $P_g=|G|^{-1}$ is the probability associated with the state $\sigma_g$ and
\begin{equation}
\sigma_g=[T(g) \rho_1 T(g)^\dagger] \otimes[T(g)\rho_2 T(g)^\dagger]\ .
\end{equation}
Similarly, the Holevo $\chi$ for the ensemble traced over subsystem 2 or 1 is $\chi_1 =
A_{G}(\rho_1)$ or $\chi_2 = A_{G}(\rho_2)$, respectively. The Holevo $\chi$ is non-increasing under
partial trace~\cite{SWW}, so
\begin{equation}
A_{G}(\rho_1\otimes\rho_2)\geq A_{G}(\rho_j)\ \ {\rm for\ } j=1,2\ .
\end{equation}
Applying this to \eqr{DeltaW_G} gives the desired result.
To show achievability, choose $\rho_2=\ket{\psi}\bra{\psi}$ such that $\ip{\psi_{g'}|\psi_g} = \delta_{g,g'}$, where
$\ket{\psi_g}\equiv T_2(g)\ket{\psi}$. For finite groups this can be done with a normalisable state $\rho_2$, whereas
for Lie groups one can choose a normalisable state on a subspace of sufficiently large dimension~\cite{KitMayPre}.
Then using Eqs.~(\ref{inv_op}) and (\ref{T_factorisation}) we have
\begin{eqnarray}
&&{\cal G}_G[\rho_1\otimes\rho_2]\nonumber\\
&&\ \ = \frac{1}{|G|}\sum_{g\in G}
[T_1(g)\otimes T_2(g)]\rho_1\otimes\rho_2[T^\dagger_1(g)\otimes T^\dagger_2(g)]\nonumber\\
&&\ \ = \frac{1}{|G|}\sum_{g\in G}
[T_1(g)\rho_1T^\dagger_1(g)] \otimes \ket{\psi_g}\bra{\psi_g}\ .
\end{eqnarray}
The orthonormality of the set $\{\ket{\psi_g}:g\in G\}$ ensures that
\begin{eqnarray}
S({\cal G}_G[\rho_1\otimes\rho_2])
&=&\sum_{g\in G} \frac{1}{|G|}\left\{S[T_1(g)\rho_1 T^\dagger_1(g)]-\log(\frac{1}{|G|})\right\}\nonumber\\
&=& S(\rho_1) + S({\cal G}_G[\rho_2])
\end{eqnarray}
where we have used $S({\cal G}_G[\rho_2])=\log(|G|)$. Finally, using this result with Eqs.~(\ref{A_G}) and
(\ref{DeltaW_G}) and noting that $S(\rho_1\otimes\rho_2)=S(\rho_1)+S(\rho_2)$ shows
\begin{eqnarray}
&& \Upsilon (W_G; \rho_1,\rho_2) \nonumber\\
&&\quad = \{S({\cal G}_G[\rho_1])-S(\rho_1)\} +\{S({\cal G}_G[\rho_2])-S(\rho_2)\}\nonumber\\
&&\hspace{1cm} -\left\{S({\cal G}_G[\rho_1\otimes\rho_2])-S(\rho_1\otimes\rho_2)\right\}\nonumber\\
&&\quad= S({\cal G}_G[\rho_1]) + S({\cal G}_G[\rho_2]) - S({\cal G}_G[\rho_1\otimes\rho_2])\nonumber\\
&&\quad= S({\cal G}_G[\rho_1]) - S(\rho_1)\nonumber\\
&&\quad= A_{G}(\rho_1)
\end{eqnarray}
which completes the proof of achievability.
\end{proof}
To illustrate the phenomenon of synergy, consider the previous spin-$\smallfrac{1}{2}$ example but now with {\em two} spins in
the state $\ket{+}$. That is, Alice sends to Bob the state $\rho_1\otimes \rho_2$, with $\rho_i = \ket{+}\bra{+}$ for
$i=1,2$. Bob again assigns the state
\begin{equation}
{\cal G}_U[\rho_1\otimes \rho_2] =\frac{1}{2\pi}\int_0^{2\pi}e^{i\theta J_{z}/\hbar}(\rho_1\otimes \rho_2) e^{-i\theta J_{z}/\hbar}d\theta\ ,
\end{equation}
but now
\begin{equation}
J_z = J_z^{\rm (1)} \otimes I^{\rm (2)} + I^{\rm (1)} \otimes J_z^{\rm (2)}
\end{equation}
where $I^{(i)}$ is the identity operator for system $i$. We find
\begin{equation}
{\cal G}_U[\rho_1\otimes \rho_2] =\uplus\smallfrac{1}{2} \ket{1,1} \uplus \smallfrac{1}{2} (\ket{1,-1} + \ket{-1,1}) \uplus
\smallfrac{1}{2} \ket{-1,-1}\ .
\end{equation}
Here, for clarity, we have used the following notational convention which was introduced in Ref.
\cite{JonWisBar}: $\uplus
\ket{\psi}$ is to be read as $+ \ket{\psi}\bra{\psi}$. Thus, for example,
\begin{equation}
\uplus \alpha\ket{\psi} \uplus \beta\ket{\phi}
\equiv |\alpha|^2 \ket{\psi}\bra{\psi}
+ |\beta|^2 \ket{\phi}\bra{\phi}\ .
\end{equation}
As before, $W(\rho_i)=1$, $W_{U}(\rho_i)=0$, and $A_{U}(\rho_i)=1$. But for the two spins together,
$W(\rho_1\otimes \rho_2)=2$, $W_{U}(\rho_1\otimes \rho_2)=\smallfrac{1}{2}$, and $A_{U}(\rho_1\otimes
\rho_2)=\smallfrac{3}{2}$. Thus the synergy is
\begin{equation}
\Upsilon (W_{U}; \rho_1,\rho_2) = \smallfrac{1}{2} > 0\ .
\end{equation}
One spin acts as a reference for the other and partially breaks this $U(1)$-SSR. Notice that the work synergy is less than
the asymmetries of the individual systems, $\Upsilon (W_{U}; \rho_1,\rho_2) < A_{U}(\rho_i)=1$, in accord with
Theorem \ref{thm_A_ref}.
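These numbers are easily reproduced in a short sketch of our own (the function name is an assumption of the illustration): the twirled two-spin state has eigenvalues $\{\smallfrac{1}{4},\smallfrac{1}{2},\smallfrac{1}{4}\}$, the weights of the $J_z$ eigenspaces, so with $D=4$ one finds $W_U=\smallfrac{1}{2}$, while each twirled spin alone yields $W_U=0$:

```python
import math

# Sanity check of the two-spin synergy (our illustration).  The U(1)
# twirl of |+>|+> has eigenvalues {1/4, 1/2, 1/4}, the weights of the
# J_z = +2, 0, -2 eigenspaces, while each twirled spin alone is
# maximally mixed.

def entropy_bits(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

W_U_pair   = math.log2(4) - entropy_bits([0.25, 0.5, 0.25])
W_U_single = math.log2(2) - entropy_bits([0.5, 0.5])

synergy = W_U_pair - 2 * W_U_single
print(W_U_pair, synergy)      # → 0.5 0.5
```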
Having established the significance of $A_G(\rho)$ for indicating the ability of a system to act as a $G$-reference
system, we now observe that \eqr{WG_is_W_minus_A} represents a tradeoff or duality between this ability and the
amount of work that can be extracted under the $G$-SSR:
\begin{equation}
W(\rho) = W_G(\rho) + A_G(\rho)\ .
\label{comp_W}
\end{equation}
That is, under the $G$-SSR, the work $W(\rho)$ represented by a given state is split into two new resources:
the extractable work $W_{G}$ and the asymmetry $A_{G}$.
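The decomposition \eqr{comp_W} can be verified numerically for any state. The following sketch (ours; the particular state chosen is arbitrary) checks it for a generic qubit under the dephasing $U(1)$ twirl of the earlier example, computing entropies from the $2\times 2$ eigenvalues:

```python
import math

# Consistency check (our illustration) of W = W_G + A_G for a generic
# qubit under the U(1) (dephasing) twirl, with k_B T = 1 and log base 2.

def entropy_bits(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

def qubit_entropy(p00, p11, off):
    """Entropy of rho = [[p00, off], [off*, p11]] via its eigenvalues."""
    tr, det = p00 + p11, p00 * p11 - abs(off) ** 2
    gap = math.sqrt(max(tr * tr - 4 * det, 0.0))
    return entropy_bits([(tr + gap) / 2, (tr - gap) / 2])

p00, off = 0.7, 0.3                      # an arbitrary valid qubit state
S_rho   = qubit_entropy(p00, 1 - p00, off)
S_twirl = entropy_bits([p00, 1 - p00])   # twirl deletes off-diagonals

W, W_G, A_G = 1 - S_rho, 1 - S_twirl, S_twirl - S_rho
assert abs(W - (W_G + A_G)) < 1e-12
```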
\section{Extension to bipartite systems}
\subsection{Global and local SSRs}
Consider a system shared by two parties, Alice and Bob, such that the unitary representation of $G$
factorizes according to:
\begin{equation}
T(g)=T_{\rm A}(g)\otimes T_{\rm B}(g)\ \forall\ g\in G\ .
\label{T_is_TA_times_TB}
\end{equation}
There are two ways the $G$-SSR operates on the bipartite system, {\em globally} and {\em locally}. They can be
illustrated by considering their effect on the system state $\rho$. The global $G$-SSR applies when we have access to the
whole system, either through non-local operations or by transporting the whole system to one site. Thus, in direct accord with
\eqr{inv_op} for the unipartite case, our effective knowledge of the system under the global $G$-SSR is not $\rho$ but
\begin{eqnarray}
{\cal G}_G[\rho] &=& \frac{1}{|G|}\sum_{g\in G}T(g)\rho T^\dagger(g)\nonumber\\
&=& \frac{1}{|G|}\sum_{g\in G}T_{\rm A}(g)\otimes T_{\rm B}(g)
\,\rho\, T^\dagger_{\rm A}(g)\otimes T^\dagger_{\rm B}(g)\ .\nonumber\\
\end{eqnarray}
In contrast, each party A and B has access only to the part of the system at their respective site. Accordingly the $G$-SSR
restricts their knowledge of the system to
\begin{equation}
{\cal G}_{G\otimes G}[\rho] = \frac{1}{|G|^2}\sum_{g\in G}\sum_{g'\in G}
T_{\rm A}(g)\otimes T_{\rm B}(g')
\,\rho\, T^\dagger_{\rm A}(g)\otimes T^\dagger_{\rm B}(g')\ .
\label{twirledrho}
\end{equation}
We use the tensor product operator in the symbol ${\cal G}_{G\otimes G}$ to indicate that the twirl operation acts
locally on systems A and B; this is manifest in the sums over the independent indices $g$ and $g'$ in \eqr{twirledrho}.
We refer to the effect of ${\cal G}_{G\otimes G}$ as the {\em local} $G$-SSR.
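The difference between the two twirls can be made concrete with a small numerical sketch (ours; the choice of state and of discretization are assumptions of the illustration). For a pair of spin-$\smallfrac{1}{2}$ particles the $\theta$-average is exact once discretized over $N\ge 3$ equally spaced angles, since only harmonics $e^{ik\theta}$ with $|k|\le 2$ occur; we use $N=8$:

```python
import cmath, math

# Numerical illustration (ours): global vs local U(1) twirl of the
# globally symmetric two-spin state |Psi> = (|1,-1> + |-1,1>)/sqrt(2).
# Basis ordering: |1,1>, |1,-1>, |-1,1>, |-1,-1>.

N = 8                                   # discretized angles (exact here)
angles = [2 * math.pi * n / N for n in range(N)]
mA = [1, 1, -1, -1]                     # sigma_z eigenvalue, spin A
mB = [1, -1, 1, -1]                     # sigma_z eigenvalue, spin B
amp = [0.0, 1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]
rho = [[amp[a] * amp[b] for b in range(4)] for a in range(4)]

def twirl(rho, local):
    """Average rho over exp(i(theta J_zA + theta' J_zB)/hbar) rotations;
    theta' = theta for the global twirl, independent for the local one."""
    out = [[0j] * 4 for _ in range(4)]
    pairs = ([(t, u) for t in angles for u in angles] if local
             else [(t, t) for t in angles])
    for t, u in pairs:
        for a in range(4):
            for b in range(4):
                ph = cmath.exp(0.5j * (t * (mA[a] - mA[b])
                                       + u * (mB[a] - mB[b])))
                out[a][b] += ph * rho[a][b] / len(pairs)
    return out

g, l = twirl(rho, local=False), twirl(rho, local=True)
assert abs(g[1][2] - 0.5) < 1e-12   # coherence survives the global twirl
assert abs(l[1][2]) < 1e-12         # ...but not the local twirl
assert abs(l[1][1] - 0.5) < 1e-12   # local twirl leaves an equal mixture
```

The global twirl leaves the state pure, while the local twirl dephases it into an equal mixture of $\ket{1,-1}$ and $\ket{-1,1}$, costing one bit of entropy.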
The local $G$-SSR restricts the kinds of operations that the two parties can perform to local $G$-invariant operations
${\cal O}_{\rm AB}$ where
\begin{eqnarray}
&&{\cal O}_{\rm AB}
\left\{\left[T^{\phantom{\dagger}}_{\rm A}(g)\otimes T_{\rm B}(g')\right]
\rho\left[T^\dagger_{\rm A}(g)\otimes T^\dagger_{\rm B}(g')\right]\right\}\nonumber \\
&&\ \ = T_{\rm A}(g)\otimes T_{\rm B}(g')\left\{{\cal O}_{\rm AB}
[\rho]\right\} T^\dagger_{\rm A}(g)\otimes T^\dagger_{\rm B}(g')\ ,\ \ \
\label{local_G_O_AB}
\end{eqnarray}
for all $g, g'\in G$. This class includes (but is not limited to) products of local operations ${\cal O}_{\rm
A}\otimes{\cal O}_{\rm B}$, which could represent measurement outcomes. A wider class of allowed operations will
be defined below. Moreover, any operation ${\cal O}_{\rm AB}$ which is locally $G$-invariant is also globally
$G$-invariant, because \eqr{local_G_O_AB} implies
\begin{equation}
{\cal O}_{\rm AB}\left[T(g)\rho T^\dagger(g)\right]
= T(g)\left\{{\cal O}_{\rm AB}[\rho]\right\} T^\dagger(g)
\label{global_G_O_AB}
\end{equation}
for $T(g)=T_{\rm A}(g)\otimes T_{\rm B}(g)$.
\subsection{Globally-symmetric pure state $\rho^\beta$}
In this paper, we restrict our analysis to globally symmetric pure states:
\begin{equation}
\ket{\Psi}\bra{\Psi}={\cal G}_G\left[\ket{\Psi}\bra{\Psi}\right]\ .
\end{equation}
This requires $\ket{\Psi}$ to belong to a one-dimensional irrep of $G$. That is, using $\beta$ to
label the irrep,
\begin{equation}
T(g)\ket{\Psi}=\lambda^\beta(g)\ket{\Psi} \ \ \forall\ g\in G
\end{equation}
where $T(g)$ is given by \eqr{T_is_TA_times_TB} and $\lambda^\beta(g)$ is the unit-modulus eigenvalue.
Let $G$ have $N_G$ distinct irreps $T^\mu(g)$ for $\mu=1,2, \cdots N_G$ and let Alice's operator
$T_{\rm A}(g)$ in \eqr{T_is_TA_times_TB} decompose into $K_{\rm A}$ irreps as
\begin{equation}
T_{\rm A}(g) = \bigoplus_{n=1}^{K_{\rm A}}T^{f_{\rm A}(n)}(g) \ \ \forall\ g\in G
\label{T_reduced}
\end{equation}
where
\begin{equation}
f_{\rm A}(n) \in \{1,2, \cdots N_G\}
\end{equation}
labels an irrep for each $n$. The total number of irreps in $T_{\rm A}(g)$ can be written as $K_{\rm A} =
\sum_{\mu=1}^{N_G} M_{\rm A}^ \mu$, where $M_{\rm A}^\mu$ is the multiplicity (i.e. the number of copies) of
irreps of type $T^\mu$. The irrep $T^\mu$ operates on the $D_\mu$-dimensional subspaces spanned by
\cite{KitMayPre}
\begin{equation}
\{\ket{\mu,m_\mu,i}: i=1,2,\cdots,D_\mu\}\ .
\end{equation}
The ``charge'' $\mu=1,2,\ldots N_G$ indexes the irreps $T^\mu$, the ``flavor'' $m_\mu=1,2,\ldots
M_{\rm A}^\mu$ indexes the copy of the irrep $T^\mu$ in the above decomposition, the ``color''
$i=1,2,\ldots D_\mu$ indexes an orthogonal basis set in which $T^\mu$ operates, and
\begin{equation}
\ip{\mu,m_\mu,i|\nu,m_\nu,j} =\delta_{\mu,\nu}\delta_{m_\mu,m_\nu}\delta_{i,j}\ .
\end{equation}
Let Bob's operator $T_{\rm B}(g)$ have a similar decomposition. To find the form of the globally symmetric states
we need to consider pairs of {\em conjugate} irreps, that is, pairs of irreps, say $T^\mu$ and $T^\nu$, whose tensor product
$T^\mu\otimes T^\nu$ can be reduced to a direct sum involving a given one-dimensional irrep $T^\beta$ of $G$, i.e.
$T^\mu\otimes T^\nu\cong T^\beta\oplus\ldots$. To do this we define $R^\beta$ to be the set of conjugate pairs
\begin{equation}
R^\beta=\{(\mu,\bar{\mu}):T^\mu(g)=C^\mu T^\beta(g)
[T^{\bar{\mu}}(g)]^* (C^\mu)^{\dagger}\ \forall g\in G\}\label{R_beta}
\end{equation}
where $T^\beta$ is the given one-dimensional irrep and $C^\mu$ is a unitary operator. The entangled state
\begin{equation}
\label{psi}
\ket{\psi^{\mu,\beta}_{m_\mu,m_{\bar{\mu}}}}
=\frac{1}{\sqrt{D_\mu}}\sum_{i,j}{C^\mu_{i,j}}\ket{\mu,m_\mu,i}\otimes\ket{\bar{\mu},m_{\bar{\mu}},j}
\end{equation}
for $(\mu,\bar{\mu})\in R^\beta$, is an eigenstate of $T(g)$ with eigenvalue
$\lambda^\beta(g)=T^\beta(g)$, and so it is globally symmetric. The proof of this result is given
in Appendix \ref{app_global_symm}.
The most general, pure, globally symmetric state for a given value of $\beta$ is given by
\begin{equation}
\rho^\beta=\uplus \ket{\Psi^\beta}
\end{equation}
where
\begin{equation}
\ket{\Psi^\beta} =\sum_{\mu}\sum_{m_\mu,m_{\bar{\mu}}}
d^{\mu}_{m_\mu,m_{\bar{\mu}}}
\ket{\psi^{\mu,\beta}_{m_\mu,m_{\bar{\mu}}}}
\end{equation}
for arbitrary coefficients $d^{\mu}_{m_\mu,m_{\bar{\mu}}}$ satisfying
\begin{equation}
\sum_{m_\mu,m_{\bar{\mu}}}
|d^{\mu}_{m_\mu,m_{\bar{\mu}}}|^2=1\ .
\end{equation}
In the following we evaluate the effect of the $G$-SSR on the resources represented by this general state $\rho^\beta$.
\subsection{Unconstrained entanglement of $\rho^\beta$}
We begin by evaluating the total entanglement in $\rho^\beta$, measured in terms of the entropy of entanglement,
without the restriction of the $G$-SSR. It is convenient to factorize the representation into ``flavor'' (indexed by
$m_\mu$) and ``color'' (indexed by $i$) subsystems as
\begin{equation}
\ket{\mu,m_\mu,i}\equiv\ket{\mu,m_\mu}\ket{\mu,i}
\label{charge_flavour}
\end{equation}
and rewrite the state in terms of states of the flavor and color subsystems as
\begin{equation}
\ket{\Psi^\beta} =\sum_{\mu}\sqrt{P_\mu}\ket{\varphi_\mu}
\Big(\frac{1}{\sqrt{D_\mu}}\sum_{i,j}{C^\mu_{i,j}}\ket{\mu,i}\otimes\ket{\bar{\mu},j}\Big)
\end{equation}
where
\begin{eqnarray}
\ket{\varphi_\mu}
&=& \sum_{m_\mu,m_{\bar{\mu}}}
\frac{d^\mu_{m_\mu,m_{\bar{\mu}}}}{\sqrt{P_\mu}}\ket{\mu,m_\mu}\otimes\ket{\bar\mu,m_{\bar\mu}}
\label{flavor_subsystem}\\
P_\mu&=& \sum_{m_\mu,m_{\bar{\mu}}} |d^\mu_{m_\mu,m_{\bar{\mu}}}|^2\ .
\end{eqnarray}
Then taking the partial trace of $\rho^\beta$ over Bob's state space and making use of the unitarity of $C^\mu$ yields
\begin{equation}
{\rm Tr}_{\rm B}(\rho^\beta)
=\sum_\mu P_\mu\ro{ \biguplus_{k}
\Lambda^\mu_k\ket{{\rm A}^\mu_k}} \otimes \ro{
\biguplus_i \frac{1}{\sqrt{D_\mu}}\ket{\mu,i}}\ ,
\end{equation}
where
\begin{equation}
\biguplus_i \ket{\psi_i} \equiv \sum_i \uplus \ket{\psi_i} = \sum_i \ket{\psi_i}\bra{\psi_i}\ ,
\end{equation}
and we have used the Schmidt decomposition
\begin{equation}
\ket{\varphi_\mu}
=\sum_{k}\Lambda^\mu_k\ket{{\rm A}^\mu_k}\otimes\ket{{\rm
B}^\mu_k}\label{SchmidtDecomposition}
\end{equation}
of the state of the bipartite flavor subsystem with
\begin{equation}
\ip{{\rm A}^\mu_k|{\rm A}^\mu_l}=\ip{{\rm B}^\mu_k|{\rm B}^\mu_l}=\delta_{k,l}\ .
\end{equation}
Thus the entanglement is given by
\begin{equation}
E(\rho^\beta)=-\sum_{\mu,k}P_\mu|\Lambda^\mu_k|^2
\log(P_\mu|\Lambda^\mu_k|^2)+\sum_\mu P_\mu \log (D_\mu)\ .
\end{equation}
\subsection{Resources in $\rho^\beta$ under the local $G$-SSR}
\subsubsection{Entanglement accessible under local $G$-SSR} The accessible entanglement in the state $\ket{\Psi^\beta}$
constrained by the local $G$-SSR is, according to \cite{BarWis}, given by the total entanglement in the state
\cite{GlobalNote}
\begin{equation}
\label{twirledrhobeta}
{\cal G}_{G\otimes G} [\rho^\beta]
= \frac{1}{|G|^2}\biguplus_{g,g'\in G} \left([T_{\rm A}(g)
\otimes T_{\rm B}(g')] \ket{\Psi^\beta}\right)\ ,
\end{equation}
where ${\cal G}_{G\otimes G}$ is defined in \eqr{twirledrho}. Using the unitarity of the matrices $C^\mu$ and the
grand orthogonality theorem \cite{ortho}
\begin{equation}
\sum_{g\in G} T^\mu_{k,l}(g)[T^{\nu}_{n,m}(g)]^\ast =\frac{|G|}{D_\mu}\delta_{\mu,\nu}\delta_{k,n}\delta_{l,m}
\label{ortho_theorem}
\end{equation}
where $T^{\eta}_{i,j}=\ip{\eta,m_\eta,i|T^\eta|\eta,m_\eta,j}$ \cite{ortho}, yields
\begin{eqnarray}
&&{\cal G}_{G\otimes G} [\rho^\beta] =\sum_\mu P_\mu\big(\uplus \ket{\varphi_\mu}\big)\nonumber\\
&&\hspace{2.5cm} \otimes\biguplus_{i,j}
\ro{ \frac{1}{D_\mu}{\ket{\mu,i}\otimes\ket{\bar{\mu},j}}}\ .\label{local_G}
\end{eqnarray}
The entropy of this state is easily found from \eqr{local_G} to be
\begin{equation}
S({\cal G}_{G\otimes G} [\rho^\beta]) = H_{G\otimes G}^{\rm (ch)}(\rho^\beta)
+ 2H_{G\otimes G}^{\rm (co)}(\rho^\beta)
\label{S_GxG=Hch+Hco}
\end{equation}
where we have defined color and charge correlations
\begin{eqnarray}
H_{G\otimes G}^{\rm (co)}(\rho^\beta)&\equiv&\sum_\mu P_\mu \log D_\mu\ ,
\label{H_Gco}\\
H_{G\otimes G}^{\rm (ch)}(\rho^\beta)&\equiv&-\sum_\mu P_\mu \log P_\mu\ .
\label{H_Gch}
\end{eqnarray}
We note that Alice (or, equivalently, Bob) can make a measurement of the charge without changing the amount of
accessible entanglement, because the measurement commutes with all $G$-invariant operations \cite{WisVac,BarWis}.
The proof can be found in Appendix \ref{app_charge_meas}.
This local measurement of charge yields the value of $\mu$ with probability $P_\mu$, resulting in the pure entangled
state $\ket{\varphi_\mu}$ of the flavor subsystem. The entanglement in the flavor subsystem is then the entropy
$-\sum_k |\Lambda^\mu_k|^2\log (|\Lambda^\mu_k|^2)$ of Alice's reduced state, $\biguplus_k (\Lambda^\mu_k
\ket{{\rm A}^\mu_k})$. The corresponding state of the color subsystem in \erf{local_G} is
$\biguplus_{i,j}\ro{\ket{\mu,i}\otimes\ket{\bar{\mu},j}/D_\mu }$ which is clearly separable, and so the color
subsystem makes no contribution to the entanglement. Averaging over all $\mu$ values gives the {\em accessible
entanglement $E_{G\otimes G}$ under local $G$-SSR} as
\begin{equation}
E_{G\otimes G}(\rho^\beta)=E(\rho^\beta)-H_{G\otimes G}^{\rm (co)}(\rho^\beta)
-H_{G\otimes G}^{\rm (ch)}(\rho^\beta)\ .
\label{E_G}
\end{equation}
The quantity $E_{G\otimes G}(\rho^\beta)$ in (\ref{E_G}) represents the ability of the system under the local
$G$-SSR to do ``logical work'' in the form of bipartite quantum information processing \cite{OppHor}.
\subsubsection{Work extractable under local $G$-SSR and LOCC}
Just as in the unipartite case in the absence of the $G$-SSR, a bipartite state $\rho$ can be used to extract mechanical
work {\em locally} at each site from local thermal reservoirs \cite{OppHor,HorOpp03}. Only local operations and
classical communication (LOCC) are allowed for the extraction process which results in a maximum amount of work
$W_{\rm L}(\rho)$ being extracted in total. Oppenheim {\em et al.} \cite{OppHor} showed that the quantum
correlations in the state $\rho$ reduce the amount of work that can be extracted in this way. Alternatively one could
transmit the system at Alice's site to Bob's site through a dephasing channel and then extract the work locally at Bob's
site. They showed that for pure states $\rho$ an equivalent amount of work $W_{\rm L}(\rho)$ is obtained if the
dephasing channel produces a classically correlated state of minimum entropy. The allowed operations $\cal Q$ for this
method are those that can be realized using local unitaries, local ancillas (whose extractable work must be subtracted off
at the end) and transmission through the dephasing channel. That is \cite{OppHor}
\begin{equation}
W_{\rm L}(\rho)=W(\widetilde{\cal Q}[\rho])\label{W_Q_P}
\end{equation}
where $ \widetilde{\cal Q}$ is the optimum allowed operation that yields a classically-correlated
state with minimal entropy $S(\widetilde{\cal Q}[\rho])$. For pure states there is a duality
between abilities to do mechanical and ``logical'' work \cite{OppHor}:
\begin{equation}
W(\rho) = W_{\rm L}(\rho) + E(\rho)\ .\label{W_is_Wlo+E}
\end{equation}
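As a quick consistency check of \eqr{W_is_Wlo+E} (our own illustration, not part of the derivation), consider a Bell pair $(\ket{00}+\ket{11})/\sqrt{2}$: dephasing in the Schmidt basis yields the classically correlated state ${\rm diag}(\smallfrac{1}{2},0,0,\smallfrac{1}{2})$, so the duality reads $2 = 1 + 1$ in bits:

```python
import math

# Consistency check (ours) of W = W_L + E for a Bell pair
# (|00> + |11>)/sqrt(2) of two qubits (D = 4): dephasing in the
# Schmidt basis gives the classically correlated state
# diag(1/2, 0, 0, 1/2).

def entropy_bits(probs):
    return sum(-p * math.log2(p) for p in probs if p > 0)

D = 4
W   = math.log2(D) - entropy_bits([1.0])        # pure state: W = 2 bits
E   = entropy_bits([0.5, 0.5])                  # 1 ebit of entanglement
W_L = math.log2(D) - entropy_bits([0.5, 0.5])   # dephased state
assert W == W_L + E
print(W, W_L, E)    # → 2.0 1.0 1.0
```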
In particular, consider the locally extractable mechanical work from the pure state
\begin{equation}
\sigma=\uplus \big(\sum_{n}\sqrt{p_n}\ket{\phi_n}_{\rm AB} \otimes\ket{\chi_n}_{\rm AB}
\otimes\ket{\psi_n}_{\rm AB} \big)
\end{equation}
in the absence of the $G$-SSR. Here the $\ket{\phi_m}$, $\ket{\chi_m}$ and $\ket{\psi_m}$ represent states of three
bipartite systems satisfying
\begin{equation}
\ip{\phi_n|\phi_m}=\ip{\chi_n|\chi_m}=\ip{\psi_n|\psi_m}=\delta_{n,m}\ .
\end{equation}
From \cite{OppHor} the optimum operation $\widetilde{\cal Q}$ dephases $\sigma$ in its Schmidt basis; this can be
carried out by first dephasing in the Schmidt basis of $\{\ket{\chi_n}\otimes\ket{\psi_n}\}$ followed by dephasing in
the Schmidt basis of $\{\ket{\phi_n}\}$. Let the Schmidt bases be given by
\begin{eqnarray}
\ket{x_n} &=& \sum_i x_{n,i}\ket{x_{n,i}}
\end{eqnarray}
where $x_{n,i}$ are the Schmidt coefficients and $\ket{x_{n,i}}$ are a set of orthonormal states, for $x$ being $\phi$,
$\psi$ or $\chi$. The first step yields a state of the form
\begin{eqnarray}
\sigma^\prime &=&\sum_{n}p_n
\big(\uplus\ket{\phi_n}_{\rm AB}\big)\nonumber\\
&&\otimes\biguplus_{i,j} \big(
\chi_{n,i}\ket{\chi_{n,i}}_{\rm AB} \otimes\psi_{n,j}\ket{\psi_{n,j}}_{\rm AB} \big)
\label{sigma_prime}
\end{eqnarray}
and the second step yields a state of the form
\begin{eqnarray}
\sigma^{\prime\prime}&=&\sum_{n}p_n \biguplus_{k}\big(\phi_{n,k}\ket{\phi_{n,k}}_{\rm AB}\big)\nonumber\\
&&\otimes\biguplus_{i,j} \big(\chi_{n,i}\ket{\chi_{n,i}}_{\rm AB}
\otimes\psi_{n,j}\ket{\psi_{n,j}}_{\rm AB} \big)\ .
\label{sigma_prime_prime}
\end{eqnarray}
This can be {\em reversibly} transformed using the dephasing channel into
\begin{eqnarray}
&&\sum_{n}p_n \biguplus_{k}\big(\phi_{n,k}\ket{\phi_{n,k}}_{\rm BB}\big)\nonumber\\
&&\quad \otimes\biguplus_{i,j} \big( \chi_{n,i}\ket{\chi_{n,i}}_{\rm BB}
\otimes\psi_{n,j}\ket{\psi_{n,j}}_{\rm BB} \big)\ ,
\end{eqnarray}
where the whole system is located at site B. The maximum amount of mechanical work that can be extracted from
$\sigma$ locally at each site is equal to the maximum that can be extracted locally at site B from
$\sigma^{\prime\prime}=\widetilde{\cal Q}[\sigma]$, as given in \eqr{W_Q_P}. We use this result below.
We now consider the local mechanical work $W_{G\otimes G-{\rm L}}(\rho)$ that is extractable from state $\rho$
under both the local $G$-SSR and the LOCC restrictions. This is given by
\begin{eqnarray}
W_{G\otimes G-{\rm L}}(\rho)
=W_{\rm L}({\cal G}_{G\otimes G}[\rho])
= W(\widetilde{\cal Q}_{G\otimes G}
\{{\cal G}_{G\otimes G}[\rho]\})\ , \nonumber\\
\end{eqnarray}
where $\widetilde{\cal Q}_{G\otimes G}$ is locally $G$-invariant.
We first evaluate $W(\widetilde{\cal Q}_{G\otimes G}\{{\cal G}_{G\otimes G} [\rho^\beta]\})$. As ${\cal
G}_{G\otimes G} [\rho^\beta]$ in \erf{local_G} is equivalent in form to $\sigma^\prime$ in \eqr{sigma_prime}, the
optimum operation $\widetilde{\cal Q}_{G\otimes G}$ is dephasing in the Schmidt basis of the flavor subsystem
$\{\ket{\varphi_\mu}\}$ in \eqr{flavor_subsystem}. This can be done by making a {\em local} measurement in the
Schmidt basis given in Eqs.~(\ref{SchmidtDecomposition}) and (\ref{flavor_subsystem}). This operation can be shown
to be local $G$-invariant as follows. For example, a local measurement by Alice that projects onto the Schmidt basis is
described by the set of projection operators
\begin{equation}
\widetilde{\Pi}_k =\Big(\ket{{\rm A}^\mu_k}\bra{{\rm A}^\mu_k}\otimes\openone^{\rm (co)}_\mu\Big)_{\rm A}
\otimes \openone_{\rm B}
\end{equation}
for the same set of values of $k$ as in \eqr{SchmidtDecomposition} and where $\openone^{\rm (co)}_\mu =
\sum_{i=1}^{D_\mu}\ket{\mu,i}\bra{\mu,i}$ projects onto the color subsystem. Recalling the decomposition
\eqr{T_reduced} and noting that the irrep $T^\mu(g)$ acts on the corresponding color subsystem only, we find
\begin{eqnarray}
&&\hspace{-10mm}\Big[T_{\rm A}(g)\otimes T_{\rm B}(g')\Big]\widetilde{\Pi}_k\nonumber\\
&=& T_{\rm A}(g)\otimes T_{\rm B}(g')\Big[\Big(\ket{{\rm A}^\mu_k}\bra{{\rm A}^\mu_k}
\otimes \openone^{\rm (co)}_\mu\Big)_{\rm A} \otimes \openone_{\rm B}\Big]\nonumber\\
&=& \Big[\ket{{\rm A}^\mu_k}\bra{{\rm A}^\mu_k}\otimes T^\mu(g)\Big]_{\rm A}
\otimes T_{\rm B}(g')\nonumber\\
&=& \Big[\Big(\ket{{\rm A}^\mu_k}\bra{{\rm A}^\mu_k}\otimes \openone^{\rm (co)}_\mu\Big)_{\rm A}
\otimes \openone_{\rm B}\Big]\Big[T_{\rm A}(g)\otimes T_{\rm B}(g')\Big]\nonumber\\
&=& \widetilde{\Pi}_k \Big[T_{\rm A}(g)\otimes T_{\rm B}(g')\Big]\ .
\end{eqnarray}
That is, projection by $\widetilde{\Pi}_k$ is a locally $G$-invariant operation according to \eqr{local_G_O_AB}. The
average result of Alice's measurement gives the desired optimal dephasing, i.e.
\begin{eqnarray}
&&\hspace{-5mm} \widetilde{\cal Q}_{G\otimes G}\{{\cal G}_{G\otimes G} [\rho^\beta]\}\nonumber\\
&=& \sum_k \widetilde{\Pi}_k\, {\cal G}_{G\otimes G}[\rho^\beta]\,\widetilde{\Pi}_k\nonumber\\
&=& \biguplus_{\mu,k,i,j} \left( \frac{1}{D_\mu}\sqrt{P_\mu}\Lambda^\mu_k \ket{{\rm
A}^\mu_k}\otimes\ket{{\rm B}^\mu_k}\otimes \ket{\mu,i}\otimes\ket{\bar{\mu},j} \right)\ ,\nonumber\\
\end{eqnarray}
where outcome $k$ occurs with probability $p_k={\rm Tr}(\widetilde{\Pi}_k\, {\cal G}_{G\otimes G}[\rho^\beta])$.
Now using \eqr{W_Q_P} the locally extractable work is found to be
\begin{eqnarray}
W_{G\otimes G-{\rm L}}(\rho^\beta)
&=& W(\widetilde{\cal Q}_{G\otimes G}\{{\cal G}_{G\otimes G} [\rho^\beta]\})\nonumber\\
&=& \log D-S(\widetilde{\cal Q}_{G\otimes G}\{{\cal G}_{G\otimes G} [\rho^\beta]\})\nonumber\\
&=& \log D - [E(\rho^\beta) + H^{\rm (co)}_{G\otimes G}(\rho^\beta)]\nonumber\\
\label{W_GxG_is_lnD_E_Hco}
\end{eqnarray}
The global symmetry of the pure state $\rho^\beta$ ensures that
\begin{equation}
W_G(\rho^\beta) = W({\cal G}_G[\rho^\beta]) = W(\rho^\beta)=\log D
\end{equation}
and so the locally extractable work can be written as
\begin{equation} \label{WGlo}
W_{G\otimes G-{\rm L}}(\rho^\beta) = W(\rho^\beta) - [E(\rho^\beta)
+ H^{\rm (co)}_{G\otimes G}(\rho^\beta)]\ .
\end{equation}
Finally, using \eqr{W_is_Wlo+E} we can rearrange \eqr{WGlo} as
\begin{equation}
W_{G\otimes G-{\rm L}}(\rho^\beta) = W_{\rm L}(\rho^\beta) - H^{\rm (co)}_{G\otimes G}(\rho^\beta)
\end{equation}
which shows that the reduction in $W_{\rm L}$ due to the local $G$-SSR is manifest
as mixing in the color subsystems.\\
\subsubsection{Shared asymmetry with respect to local $G$-SSR}
Eqs.~(\ref{E_G}) and (\ref{WGlo}) show that under the local $G$-SSR the duality between logical and local
mechanical work in \eqr{W_is_Wlo+E} is broken, i.e. $W \ne W_{G\otimes G-{\rm L}} + E_{G\otimes G}$. Just as
in the unipartite case, the lack of a reference system results in the loss of the ability to do work. In this
globally-symmetric bipartite case what is lacking is a {\em shared} reference system. For a globally symmetric system to
act as a shared reference there must be correlations (quantum or classical) between the asymmetries for each party. These
correlations are unaffected by a global transformation ${\cal G}$, but are destroyed by local mixing ${\cal
G}_{G\otimes G}$. This suggests the natural entropic measure for the ability of such a system to act as a shared
reference is
\begin{equation}
A_{G\otimes G}^{\rm (sh)}(\rho)
\equiv S({\cal G}_{G\otimes G}\{{\cal G}_G[\rho]\})-S({\cal G}_G[\rho])\label{A_sh}
\end{equation}
which we call the {\em shared asymmetry}. Notice that here $\rho$ is an arbitrary state which is not necessarily pure or
globally symmetric. By shared we mean that both Alice and Bob have access to this type of asymmetry for unlocking the
resources represented by $\rho$ at their sites. The global asymmetry $A_G(\rho)$ of the state is not, in itself, useful for
this purpose. To eliminate the effects of {\em global} asymmetry we have defined $A_{G\otimes G}^{\rm (sh)}(\rho)$
in \eqr{A_sh} in terms of the globally-symmetric state ${\cal G}_G[\rho]$. The result is that $A_{G\otimes G}^{\rm
(sh)}(\rho)$ is equal to the increase in entropy due to the local $G$-SSR only. For the U(1) case the {\em refbit}
\cite{Enk05} has $A_{G\otimes G}^{\rm (sh)}=1$, as one would like.
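This normalization is easy to confirm numerically in a minimal toy model. The sketch below is an illustration only: it replaces U(1) by the two-element group $\mathbb{Z}_2$ represented by $\{I,Z\}$ on each qubit, and takes as the analog of the refbit the globally symmetric state $(\ket{01}+\ket{10})/\sqrt{2}$; for this state the shared asymmetry evaluates to exactly one bit.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def twirl(rho, Us):
    """Average rho over a list of unitaries (a 'twirl')."""
    return sum(U @ rho @ U.conj().T for U in Us) / len(Us)

def entropy_bits(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Z_2 analog of the refbit: |Psi> = (|01> + |10>)/sqrt(2)
psi = np.zeros(4)
psi[1] = psi[2] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

G_global = [np.kron(U, U) for U in (I2, Z)]                  # T(g) x T(g)
G_local = [np.kron(U, V) for U in (I2, Z) for V in (I2, Z)]  # independent g, g'

sigma = twirl(rho, G_global)   # rho is already globally symmetric: sigma == rho
A_sh = entropy_bits(twirl(sigma, G_local)) - entropy_bits(sigma)
print(A_sh)  # -> 1.0
```

The local twirl washes out the coherence between $\ket{01}$ and $\ket{10}$, leaving an equal mixture of one bit of entropy, while the globally twirled state remains pure.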
In analogy with $A_G$ for the unipartite case, we now show that $A_{G\otimes G}^{\rm (sh)}$ similarly quantifies the
resource of acting as a shared reference system for arbitrary states $\rho$. First we note that
\begin{widetext}
\begin{eqnarray}
{\cal G}_{G\otimes G}\{{\cal G}_G[\rho]\}
&=& \frac{1}{|G|^2}\sum_{g,g'\in G}\frac{1}{|G|}\sum_{g''\in G}[T(g')\otimes T(g)][T(g'')\otimes T(g'')]\rho
[T^{\dagger}(g'')\otimes T^{\dagger}(g'')][T^{\dagger}(g')\otimes T^{\dagger}(g)]\nonumber\\
&=& \frac{1}{|G|^2}\sum_{g,g'\in G}\frac{1}{|G|}\sum_{g''\in G}[T(g'\circ g'')\otimes T(g\circ g'')]\rho
[T^{\dagger}(g'\circ g'')\otimes T^{\dagger}(g\circ g'')]\nonumber\\
&=& \frac{1}{|G|^2}\sum_{g,g'\in G}\frac{1}{|G|}\sum_{h\in G}[T(g'\circ g^{-1}\circ h)\otimes T(h)]\rho
[T^{\dagger}(g'\circ g^{-1}\circ h)\otimes T^{\dagger}(h)]\nonumber\\
&=& \frac{1}{|G|}\sum_{h'\in G}\frac{1}{|G|}\sum_{h\in G}[T(h')T(h)\otimes T(h)]\rho
[T^{\dagger}(h)T^{\dagger}(h')\otimes T^{\dagger}(h)]\label{G_times_G_G}\nonumber\\
&=& \frac{1}{|G|}\sum_{h'\in G}\frac{1}{|G|}\sum_{h\in G}[T(h')\otimes \openone][T(h)\otimes T(h)]\rho
[T^{\dagger}(h)\otimes T^{\dagger}(h)][T^{\dagger}(h')\otimes \openone]\nonumber\\
&=& {\cal G}_{G\otimes \{e\}}\{{\cal G}_G[\rho]\}
\end{eqnarray}
\end{widetext}
where $h=g\circ g''$, $h'=g'\circ g^{-1}$ and $\{e\}$ is the group containing only the identity element so that, for
example,
\begin{eqnarray}
{\cal G}_{G\otimes \{e\}}[\rho]
&=&\frac{1}{|G|}\sum_{g\in G}[T_{\rm A}(g)\otimes \openone_{\rm B}]\rho
[T^{\dagger}_{\rm A}(g)\otimes \openone_{\rm B}]\label{G_times_e} \ .\nonumber\\
\end{eqnarray}
Similarly we can show
\begin{equation}
{\cal G}_{G\otimes G}\{{\cal G}_G[\rho]\}
= {\cal G}_{\{e\}\otimes G}\{{\cal G}_G[\rho]\}
\end{equation}
and, since the local twirl absorbs the global twirl, i.e. ${\cal G}_{G\otimes G}\{{\cal G}_G[\rho]\}={\cal G}_{G\otimes G}[\rho]$, the shared asymmetry may be written equivalently as
\begin{eqnarray}
A_{G\otimes G}^{\rm (sh)}(\rho)
&=& S({\cal G}_{G\otimes G} [\rho])- S({\cal G}_G[\rho])\ .
\end{eqnarray}
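These twirl identities can be checked numerically in the smallest nontrivial case. The sketch below assumes, purely for illustration, $G=\mathbb{Z}_2$ represented by $\{I,Z\}$ on each of two qubits, and verifies for a random state that the local twirl of the globally twirled state coincides with either one-sided twirl of it, and with the plain local twirl.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
T = [I2, Z]  # T(g) for g in Z_2

def twirl(rho, Us):
    """Average rho over a list of unitaries (a 'twirl')."""
    return sum(U @ rho @ U.conj().T for U in Us) / len(Us)

G_global = [np.kron(T[g], T[g]) for g in range(2)]                 # T(g) x T(g)
G_local = [np.kron(T[g], T[gp]) for g in range(2) for gp in range(2)]
G_A_only = [np.kron(T[g], I2) for g in range(2)]                   # G x {e}
G_B_only = [np.kron(I2, T[g]) for g in range(2)]                   # {e} x G

# Random two-qubit density matrix.
rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho = rho / np.trace(rho).real

sigma = twirl(rho, G_global)               # G_G[rho]
lhs = twirl(sigma, G_local)                # G_{GxG}{G_G[rho]}
assert np.allclose(lhs, twirl(sigma, G_A_only))   # = G_{Gx{e}}{G_G[rho]}
assert np.allclose(lhs, twirl(sigma, G_B_only))   # = G_{{e}xG}{G_G[rho]}
assert np.allclose(lhs, twirl(rho, G_local))      # local twirl absorbs global
```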
The properties of the entropy function show $A_{G\otimes G}^{\rm (sh)}$ has the following two properties:
\begin{list}{({\em\roman{enumi}})}{\usecounter{enumi}}
\item $A_{G\otimes G}^{\rm (sh)}(\rho) \geq 0$~;
\item $A_{G\otimes G}^{\rm (sh)}(\rho) = 0$ iff ${\cal G}_{G\otimes G}[\rho] = {\cal G}_{G}[\rho]$.
\end{list}
For global $G$-invariant states $\rho^\beta$ we have ${\cal G}_G[\rho^\beta]=\rho^\beta$ and so the second property
becomes
\begin{list}{({\em ii})$^\prime$}{}
\item $A_{G\otimes G}^{\rm (sh)}(\rho^\beta) = 0$ iff ${\cal G}_{G\otimes G}[\rho^\beta] =
\rho^\beta$, or, equivalently, iff ${\cal G}_{G\otimes \{e\}}[\rho^\beta]
= {\cal G}_{\{e\}\otimes G}[\rho^\beta]=\rho^\beta$\ .
\end{list}
A third property is
\begin{list}{({\em\roman{enumi}})}{\usecounter{enumi}\setcounter{enumi}{2}}
\item $A_{G\otimes G}^{\rm (sh)}(\rho)$ is non-increasing on average under {\em local} $G$-invariant local
operations and classical communication (LOCC),
\end{list}
which is analogous to the third property of $A_G$. We define local $G$-invariant LOCC as those LOCC that are
allowed by the local $G$-SSR or, equivalently, those that satisfy \eqr{local_G_O_AB}. These include products of local
operations of the form ${\cal O}_{\rm A}\otimes{\cal O}_{\rm B}$. For classical communication to be permitted
under the $G$-SSR, the information must be carried by physical processes that are permitted by the $G$-SSR. This can
be done, for example, by using a $G$-SSR--free system as the carrier, which we assume is the case. The class of LOCC
is a subset of the class of {\it separable} operations \cite{BennDiVi}. It is straightforward to show, using the same
reasoning as in Ref.~\cite{BennDiVi}, that every local $G$-invariant LOCC operation is a local $G$-invariant
separable operation. So to prove the third property it is sufficient to show that $A_{G\otimes G}^{\rm (sh)}(\rho)$ is
non-increasing under local $G$-invariant {\em separable} operations of the type
\begin{equation}
\{{\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm B},j}:i=1,2,\ldots,\ j=1,2,\ldots\}
\label{local_G_invariant_sep}
\end{equation}
where ${\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm B},j}$ satisfies \eqr{local_G_O_AB}. We therefore wish to show
that
\begin{equation}
A_{G\otimes G}^{\rm (sh)}(\rho)\ge \sum_{i,j} P_{i,j} A_{G\otimes G}^{\rm (sh)}(\rho_{i,j})
\end{equation}
where
\begin{eqnarray*}
\rho_{i,j}&=&\frac{1}{P_{i,j}}\left({\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm B},j}\right)[\rho]\\
P_{i,j}&=&{\rm Tr}\left[\left({\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm
B},j}\right)\rho\right]\ ,
\end{eqnarray*}
or equivalently, from \eqr{A_sh},
\begin{eqnarray}
&&S({\cal G}_{G\otimes G}\{{\cal G}_{G}[\rho]\}) - S({\cal G}_{G}[\rho])\nonumber\\
&&\ \geq \sum_{i,j} P_{i,j}\Big[S({\cal G}_{G\otimes G}\{{\cal G}_{G}[\rho_{i,j}]\})
- S({\cal G}_{G}[\rho_{i,j}]) \Big].\quad\quad
\label{A_sh_3rd_prop}
\end{eqnarray}
We note that according to Eqs.~(\ref{local_G_O_AB}) and (\ref{global_G_O_AB}) each element in the set in
\eqr{local_G_invariant_sep} is also global $G$-invariant, and so
\begin{equation}
{\cal G}_G\left\{({\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm B},j})[\rho]\right\}
= ({\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm B},j})
\{{\cal G}_G[\rho]\}\ .
\end{equation}
This means we can interchange the twirl and measurement operations in \eqr{A_sh_3rd_prop}. Let $ \varrho \equiv
{\cal G}_{G}[\rho]$ and
\begin{eqnarray}
\varrho_{i,j} \equiv \frac{1}{P_{i,j}}({\cal O}_{{\rm A},i}\otimes{\cal O}_{{\rm B},j})\left\{{\cal
G}_{G}[\rho]\right\} = {\cal G}_{G}[\rho_{i,j}]\ .
\end{eqnarray}
We can now rewrite \eqr{A_sh_3rd_prop} as
\begin{eqnarray}
S({\cal G}_{G\otimes G}[\varrho]) - S(\varrho)
\ \geq \sum_{i,j} P_{i,j}\Big[S({\cal G}_{G\otimes G}[\varrho_{i,j}])
- S(\varrho_{i,j}) \Big]\nonumber\\
\label{A_sh_3rd_prop_equiv}
\end{eqnarray}
which is in the same form as \erf{ineq}. The same arguments which follow \erf{ineq} can be used to
show that the right hand side of \erf{A_sh_3rd_prop_equiv}, and thus $A_{G\otimes G}^{\rm
(sh)}(\rho)$, is non-increasing under local $G$-invariant {\em separable} operations, and by
implication, that the third property is therefore valid.\\
The {\em total} amount of extractable work under the local $G$-SSR is the sum of the logical work and the locally
extracted mechanical work, i.e. $(W_{G\otimes G-{\rm L}} + E_{G\otimes G})$. Using
Eqs.~(\ref{S_GxG=Hch+Hco}), (\ref{E_G}) and (\ref{W_GxG_is_lnD_E_Hco}) we find
\begin{eqnarray}
&&W_{G\otimes G-{\rm L}}(\rho^\beta) + E_{G\otimes G}(\rho^\beta)\nonumber\\
&&\hspace{1cm}= \log D-H_{G\otimes G}^{\rm (ch)}(\rho^\beta)
- 2H_{G\otimes G}^{\rm (co)}(\rho^\beta) \nonumber\\
&&\hspace{1cm}= \log D - S({\cal G}_{G\otimes G}[\rho^\beta])\ .
\label{W_total_logical_mech}
\end{eqnarray}
This total is equivalent to just the mechanical work $W_{G\otimes G}(\rho^\beta)$, where we define
\begin{equation}
W_{G\otimes G}(\rho) \equiv \log D - S({\cal G}_{G\otimes G}[\rho])
\label{W_GxG}
\end{equation}
for arbitrary states $\rho$ (which are not necessarily globally symmetric). The physical interpretation of $W_{G\otimes
G}(\rho)$ is that under the local $G$-SSR, ${\cal G}_{G\otimes G}[\rho]$ is the effective state of the system which
can be transferred locally to SSR-free ancillas at each site. Once the transfer is done the amount of work that can be
extracted {\em globally} from a thermal reservoir (i.e. without the LOCC restriction) using the ancillas is
$W_{G\otimes G}(\rho)$. Eqs.~(\ref{W_total_logical_mech}) and (\ref{W_GxG}) show that an equivalent physical
interpretation of $W_{G\otimes G}(\rho)$ is that it is the total extractable work, both logical and mechanical, that can
be extracted under the local $G$-SSR and LOCC. This result leads to the fourth and final property that
\begin{list}{({\em\roman{enumi}})}{\usecounter{enumi}\setcounter{enumi}{3}}
\item the shared asymmetry is an achievable upper bound on the
synergy of the total extractable work $W_{G\otimes G}$.
\end{list}
This follows from the following theorem.
\begin{theo}
\label{thm_A_sh_ref}
The synergy of the
total work $W_{G\otimes G}$ under the local $G$-SSR is bounded by the shared asymmetry, i.e.
\begin{equation}
\Upsilon(W_{G\otimes G}; \rho_1,\rho_2) \leq
\mbox{\em min}\{A_{G\otimes G}^{\rm (sh)}(\rho_1),A_{G\otimes G}^{\rm (sh)}(\rho_2)\}\ .
\end{equation}
The upper bound is achievable in the sense of theorem~\ref{thm_A_ref}.
\end{theo}
We omit the proof, which has the same form as that of Theorem~\ref{thm_A_ref}. The achievability
follows from the existence of bipartite globally symmetric states $\ket{\Psi}$ such that
\begin{equation}
\bra{\Psi}T_{\rm A}(g)^{\dagger} T_{\rm A}(g') \ket{\Psi} = \delta_{g,g'}\ .
\end{equation}
Thus we have identified three resources that emerge in a bipartite setting under a $G$-SSR: the
locally extractable mechanical work $W_{G\otimes G-{\rm L}}$, the accessible entanglement or
logical work $E_{G\otimes G}$, and the shared asymmetry $A_{G\otimes G}^{\rm (sh)}$. Finally, we
show that, for globally $G$-invariant states, there is a triality relation between them,
generalizing the duality (\ref{comp_W}) from the unipartite setting. A straightforward calculation
gives
\begin{equation}
A_{G\otimes G}^{\rm (sh)}(\rho^\beta) = 2H_{G\otimes G}^{\rm (co)}(\rho^\beta)+H_{G\otimes G}^{\rm (ch)}(\rho^\beta)\ ,
\end{equation}
which, together with \erf{E_G} and \erf{WGlo}, then gives the main result of this paper
\begin{equation}
W_{G}(\rho^\beta)=W_{G\otimes G-{\rm L}}(\rho^\beta)
+E_{G\otimes G}(\rho^\beta) +A_{G\otimes G}^{\rm (sh)}(\rho^\beta)\ .
\label{comp_E}
\end{equation}
\subsubsection{Local asymmetry with respect to local $G$-SSR}
We defined the shared asymmetry of state $\rho$ in \eqr{A_sh} as the extra entropy generated by the local $G$-SSR
acting on the state ${\cal G}_G[\rho]$, i.e. $A^{\rm (sh)}_{G\otimes G}(\rho)=S({\cal G}_{G\otimes G}\{{\cal
G}_G[\rho]\})- S({\cal G}_G[\rho])$. It is interesting to consider the entropy generated by the local $G$-SSR acting on
the state $\rho$ itself. For this purpose we define
\begin{equation}
A^{\rm (lo)}_{G\otimes G}(\rho)\equiv S({\cal G}_{G\otimes G}[\rho])-S(\rho)\ ,
\label{A_lo}
\end{equation}
which we call the {\em local asymmetry} of $\rho$. $A^{\rm (lo)}_{G\otimes G}$ is related to the
shared $A^{\rm (sh)}_{G\otimes G}$ and global $A_{G}(\rho)$ asymmetries by
\begin{eqnarray}
A^{\rm (sh)}_{G\otimes G}(\rho) &=& A^{\rm (lo)}_{G\otimes G}({\cal G}_G[\rho])\\
A^{\rm (lo)}_{G\otimes G}(\rho) &=& A^{\rm (sh)}_{G\otimes G}(\rho)+A_{G}(\rho)\ .
\label{A_lo_is_A_sh_plus_A_G}
\end{eqnarray}
As $A^{\rm (sh)}_{G\otimes G}(\rho)\ge 0$ and $A_{G}(\rho)\ge 0$ then clearly
\begin{equation}
0 \le A^{\rm (sh)}_{G\otimes G}(\rho) \le A^{\rm (lo)}_{G\otimes G}(\rho)\ .
\end{equation}
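The decomposition $A^{\rm (lo)}_{G\otimes G}=A^{\rm (sh)}_{G\otimes G}+A_G$ and the inequality chain above follow directly from the definitions; they can be checked numerically for a random two-qubit state, again assuming, for illustration only, $G=\mathbb{Z}_2$ represented by $\{I,Z\}$ on each qubit.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])

def twirl(rho, Us):
    """Average rho over a list of unitaries (a 'twirl')."""
    return sum(U @ rho @ U.conj().T for U in Us) / len(Us)

def S(rho):
    """von Neumann entropy in nats."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log(w)).sum())

G_global = [np.kron(U, U) for U in (I2, Z)]
G_local = [np.kron(U, V) for U in (I2, Z) for V in (I2, Z)]

# Random (generally asymmetric) two-qubit density matrix.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = M @ M.conj().T
rho = rho / np.trace(rho).real

A_G = S(twirl(rho, G_global)) - S(rho)                       # global asymmetry
A_sh = S(twirl(rho, G_local)) - S(twirl(rho, G_global))      # shared (equiv. form)
A_lo = S(twirl(rho, G_local)) - S(rho)                       # local asymmetry

assert abs(A_lo - (A_sh + A_G)) < 1e-9   # A_lo = A_sh + A_G
assert -1e-9 <= A_sh <= A_lo + 1e-9      # 0 <= A_sh <= A_lo
```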
The local asymmetry $A_{G\otimes G}^{\rm (lo)}(\rho)$ is the asymmetry of $\rho$ with respect to the local $G$-SSR
which restricts our knowledge of the state to ${\cal G}_{G\otimes G}[\rho]$. It is clearly related to the total extractable
work $W_{G\otimes G}(\rho)$ that is represented by the state ${\cal G}_{G\otimes G}[\rho]$. Indeed from
Eqs.~(\ref{W}), (\ref{A_lo}) and (\ref{W_GxG}) we find
\begin{equation}
W(\rho)=W_{G\otimes G}(\rho)+A_{G\otimes G}^{\rm (lo)}(\rho)\ .
\label{W_is_W_GxG_A_GxG}
\end{equation}
We can now list the properties of the local asymmetry as
\begin{list}{({\em\roman{enumi}})}{\usecounter{enumi}}
\item $A_{G\otimes G}^{\rm (lo)}(\rho) \geq 0$~;
\item $A_{G\otimes G}^{\rm (lo)}(\rho) = 0$ iff
\begin{equation}
{\cal G}_{G\otimes \{e\}}[\rho] = \rho \label{A_lo_II_a}
\end{equation}
{\em and}
\begin{equation}
{\cal G}_{\{e\}\otimes G}[\rho] = \rho \label{A_lo_II_b}\ ;
\end{equation}
\item $A_{G\otimes G}^{\rm (lo)}(\rho)$ is non-increasing on average under {\em local} $G$-invariant LOCC; and
\item $A_{G\otimes G}^{\rm (lo)}(\rho)$ is an achievable upper bound on the
synergy of the extractable work $W_{G\otimes G}$ under the local $G$-SSR restriction.
\end{list}
Once again the first property follows from the properties of the entropy function. The proof of the
third property is a minor modification of the proof of the third property of the shared asymmetry
$A^{\rm (sh)}_{G\otimes G}$. Similarly, the proof of the fourth property is of the same form as
that of the fourth property of the asymmetry $A_G$. We leave these proofs for the interested
reader.
The second property can be proved as follows. We note that the conditions Eqs.~(\ref{A_lo_II_a})
and (\ref{A_lo_II_b}) taken together imply ${\cal G}_{G\otimes G}[\rho]=\rho$ for which $A^{\rm
(lo)}_{G\otimes G}(\rho)=0$ according to \eqr{A_lo} and so the conditions are sufficient. Also the
concavity of the entropy function yields $S({\cal G}_{G\otimes G}[\rho])\ge S({\cal G}_{G\otimes
\{e\}}[\rho])\ge S(\rho)$. Thus $A^{\rm (lo)}_{G\otimes G}(\rho)=0$ implies $S({\cal G}_{G\otimes
\{e\}}[\rho])= S(\rho)$ and hence ${\cal G}_{G\otimes \{e\}}[\rho]= \rho$. By a similar argument,
$A^{\rm (lo)}_{G\otimes G}(\rho)=0$ implies that ${\cal G}_{\{e\}\otimes G}[\rho]= \rho$. The
conditions are therefore necessary as well.
Either condition \erf{A_lo_II_a} or \erf{A_lo_II_b} is sufficient for ${\cal G}_{G\otimes
G}[\rho]={\cal G}_G[\rho]$ and thus {\em sufficient} for $A^{\rm (sh)}_{G\otimes G}(\rho)=0$. But
both these conditions are {\em not necessary} for $A^{\rm (sh)}_{G\otimes G}(\rho)=0$. This means
there is a wider class of states for which $A^{\rm (lo)}_{G\otimes G}(\rho)\ne 0$ than for $A^{\rm
(sh)}_{G\otimes G}(\rho)\ne 0$.
Finally, Eqs.~(\ref{W_GxG}) and (\ref{W_is_W_GxG_A_GxG}) together yield
\begin{equation}
W(\rho^\beta) = W_{G\otimes G-{\rm L}}(\rho^\beta)+E_{G\otimes G}(\rho^\beta)+A^{\rm (lo)}_{G\otimes G}(\rho^\beta)
\end{equation}
which is consistent with \eqr{comp_E} on recalling \eqr{A_lo_is_A_sh_plus_A_G} and \eqr{comp_W} and
the fact that $A_G(\rho^\beta)=0$ for the globally symmetric state $\rho^\beta$.
\section{Discussion}
In this paper we have quantified the ability of a system to act as a reference system and ameliorate the effect of the
superselection rule $G$-SSR induced by $G$. Our approach is to express the reference-frame ability of a system in
terms of a physical quantity, namely, in terms of how the system can increase the amount of work that can be extracted from
a thermal reservoir. To do this we introduced the quantity $\Upsilon$ in \erf{DeltaX}, which we call the synergy of two
systems. The work synergy is the extra amount of work that is extractable using the two systems collectively compared
to the total amount of work extractable using the systems separately. Theorem 2 shows that this quantity is bounded
above by the asymmetry $A_G$ with respect to symmetry group $G$ of each system, a result which elevates the
asymmetry of a system to a resource for overcoming the restrictions of the $G$-SSR. We used the same approach for
bipartite systems where we found (Theorem 3) that the synergy bounds the shared asymmetry $A^{\rm (sh)}_{G\otimes
G}$.
Our results can be arranged in terms of a hierarchy of increasing restrictions, from global $G$-SSR, global and local
$G$-SSR and finally global and local $G$-SSR and LOCC. At each level of restriction we find that the resources
reappear in new forms. For example, under the global $G$-SSR \eqr{comp_W} shows that the unconstrained extractable
work $W$ splits into two new resources of extractable work $W_G$ and asymmetry $A_G$, i.e.
\begin{equation}
W(\rho) = W_G(\rho)+A_G(\rho)
\label{compW_prime}
\end{equation}
for arbitrary states $\rho$. Next under global and local $G$-SSR we find from Eqs.~(\ref{compW_prime}),
(\ref{W_is_W_GxG_A_GxG}) and (\ref{A_lo_is_A_sh_plus_A_G}) that the extractable work $W_G$ further splits
into a more constrained extractable work $W_{G\otimes G}$ and a new asymmetry $A^{\rm (sh)}_{G\otimes G}$, i.e.
\begin{equation}
W_G(\rho) = W_{G\otimes G}(\rho)+A^{\rm (sh)}_{G\otimes G}(\rho)
\end{equation}
also for arbitrary states $\rho$. Finally under global and local $G$-SSR and LOCC we found
\begin{equation}
W_{G\otimes G}(\rho^\beta) = W_{G\otimes G-{\rm L}}(\rho^\beta)+E_{G\otimes G}(\rho^\beta)
\label{G_GxG_LOCC}
\end{equation}
for globally symmetric states $\rho^\beta$. These results are summarized in Table
\ref{table_G_GxG_L}. A different ordering of the constraints, where LOCC is applied first, followed by the global
$G$-SSR and then the local $G$-SSR, leads to the results in Table \ref{table_L_G_GxG}.
\begin{table}
\centering
\begin{tabular}{|c|c|c|}\hline
Constraints & Resources & State\\
\hline
-- & $W$ & $\rho$\\
$G$ & $W=W_{ G}+A_{ G}$ & $\rho$\\
$G$ \& $G\otimes G$ & $W_G=W_{G\otimes G}+A^{\rm (sh)}_{G\otimes G}$ & $\rho$\\
$\left.\begin{array}{c} G, G\otimes G\\ \&\ {\rm L}\end{array}\right\}$
& $W_{G\otimes G}=W_{G\otimes G-{\rm L}}+E_{G\otimes G}$ & $\rho^\beta$ \\
\hline
\end{tabular}
\caption{Hierarchy of constraints and resources for classes of states where $\rho$ represents an arbitrary state
and $\rho^\beta$ represents a pure $G$-invariant state.} \label{table_G_GxG_L}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}\hline
Constraints & Resources & State\\
\hline
-- & $W$ & $\rho$\\
${\rm L}$ & $W=W_{\rm L}+E$ & $\rho$\\
${\rm L}\ \&\ G$ & $W_{G}=W_{G-\rm L}+E_G$ & $\rho^\beta$\\
$\left.\begin{array}{c} {\rm L}, G\ \&\\ G\otimes G \end{array}\right\}$
& $\begin{array}{l} W_{G-\rm L}+E_G\\
\quad\quad=W_{G\otimes G-{\rm L}}+E_{G\otimes G}+A^{\rm (sh)}_{G\otimes G}\end{array}$
& $\rho^\beta$ \\
\hline
\end{tabular}
\caption{Hierarchy for a different ordering of the constraints. The equation in the third row
is obtained directly from the second row using the fact that $W_G(\rho^\beta)=W(\rho^\beta)$,
$W_{G-{\rm L}}(\rho^\beta)=W_{\rm L}(\rho^\beta)$ and $E_G(\rho^\beta)=E(\rho^\beta)$
for $G$-invariant states $\rho^\beta$.}\label{table_L_G_GxG}
\end{table}
Our relations Eqs.~(\ref{comp_W}) and (\ref{comp_E}) show the mutually competing nature of the mechanical,
logical and asymmetry resources represented by a state. They are analogous to the particle--wave duality in the following
sense. Asymmetry with respect to the group $G=\{g\}$ can be thought of as a generalized measure of {\it localization}
in that the most asymmetric pure state is transformed into an orthogonal state by the group elements $g$, which is
analogous to moving a particle from one distinct path to another in a which-way experiment. On the other hand,
extractable work under the $G$-SSR measures the invariance of a state to the group action and can be thought of as a
measure of the system's ability to display {\it interference}. Our relation Eq.~(\ref{comp_W}) between asymmetry and
extractable work can then be seen to express a tradeoff between generalized measures of localization and interference.
This connection has been explored elsewhere \cite{JV}.
SSRs are ubiquitous in quantum physics where, for example, spatial orientation is limited by a SU($n$)-SSR and optical
phase is limited by a U(1)-SSR. In the presence of SSRs, quantum states require sufficient asymmetry in their attendant
reference systems in order to be useful. Moreover, a comparison of the relative efficiencies of classical and quantum
algorithms needs to account for the total amount of resources needed in each case. Quantum reference systems are
clearly a resource that needs to be tallied and so, in this sense, our results pave the way for evaluating the full cost of
resources needed for quantum information processing. They also open up a new direction of research in the study of
SSRs and reference systems.
We thank M. Plenio and S.D. Bartlett for discussions. This work was supported by the Australian Research Council, the
Queensland State Government and the Leverhulme Trust of the UK.
\section{Introduction}
\label{section:introduction}
Galaxy formation presents some of the most important and challenging
problems in modern astrophysics. A basic paradigm for the
dissipational formation of galaxies from primordial fluctuations in
the density field has been developed
\citep[e.g.][]{white1978a,blumenthal1984a,white1991a}, but many of the
processes accompanying galaxy formation are still poorly understood. In
particular, star formation shapes the observable properties of
galaxies but involves a variety of complicated dynamical, thermal,
radiative, and chemical processes on a wide range of scales
\citep[see][for a review]{mckee2007a}.
Observed
galaxies exhibit large-scale correlations between their
global star formation rate (SFR) surface density $\Sigma_{\mathrm{SFR}}$
and average gas surface density $\Sigma_{\mathrm{gas}}$ \citep{kennicutt1989a,kennicutt1998a}, and
these global correlations serve as the basis for treatments of star formation
in many models of galaxy formation.
While such models have supplied important insights,
detailed observations of galaxies have recently
provided evidence that the molecular, rather than the total, gas surface density is
the primary driver of global star formation in galaxies
\citep[e.g.,][]{wong2002a,boissier2003a,heyer2004a,boissier2007a,calzetti2007a,kennicutt2007a}.
In this study, we adopt an approach in which empirical and
theoretical knowledge of the star formation efficiency (SFE) in dense,
molecular gas is used as the basis for a star formation model in
hydrodynamical simulations
of disk galaxy evolution. This approach requires modeling processes
that shape properties of the dense phase of the interstellar medium
(ISM) in galaxies. The purpose of this paper is to present such a
model.
Stellar populations in galaxies exhibit salient trends of colors and
metallicities with galaxy luminosity
\citep[e.g.,][]{kauffmann2003c,blanton2005a,cooper2007a}. In the
hierarchical structure formation scenario these trends should emerge
through the processes of star formation and/or stellar feedback in the
progenitors of present-day galaxies. Observationally, ample
evidence suggests that the efficiency of the conversion of
gas into stars depends strongly and
non-monotonically on the mass of the system. For example, the faint end of
the galaxy luminosity function has a shallow slope
\citep[$\alpha_{\mathrm{L}}\approx1.0-1.3$, e.g.,][]{blanton2001a,blanton2003a}
compared to the steeper mass function of dark matter halos
\citep[$\alpha_{\mathrm{DM}}\approx2$, e.g.,][]{press1974a,sheth1999a}, indicating
a decrease in SFE in low-mass galaxies. At the
same time, the neutral hydrogen (HI) and baryonic mass functions
may be steeper than the luminosity function
\citep[$\alpha_{\mathrm{HI}}\approx1.3-1.5$, e.g.,][]{rosenberg2002a,zwaan2003a}.
The baryonic \cite{tully1977a} relation is continuous down to
extremely low-mass dwarf galaxies
\citep[e.g.,][]{mcgaugh2005a,geha2006a}, indicating that the
fractional baryonic content of galaxies of different mass is similar.
Hence, low-mass galaxies that are
unaffected by environmental processes are gas-rich, yet
often form stars inefficiently.
While feedback processes from supernovae and AGN
\citep[e.g.,][]{brooks2007a,sijacki2007a}, or the efficiency of gas
cooling and accretion \citep{dekel2006a,dekel2008a}, may account for
part of these trends, the SFE as a function
of galaxy mass may also be due to intrinsic ISM processes
\citep[e.g.,][]{tassis2008a,kaufmann2007a}. To adequately explore the
latter possibility, a realistic model for the conversion of gas into
stars in galaxies is needed.
Traditionally, star formation in numerical simulations of galaxy
formation is based on the empirical Schmidt-Kennicutt (SK) relation
\citep{schmidt1959a,kennicutt1989a,kennicutt1998a}, in which star
formation rate is a {\it universal} power-law function of the total
disk-averaged or global gas surface density: $\Sigma_{\mathrm{SFR}}\propto
\Sigma_{\mathrm{gas}}^{n_{\mathrm{tot}}}$ with $n_{\mathrm{tot}}\approx 1.4$ describing the correlation
for the entire population of normal and starburst galaxies. However,
growing observational evidence indicates that this relation may not be
universal on smaller scales within galaxies, especially at low surface
densities.
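For orientation, the global relation can be evaluated numerically. The short sketch below uses the normalization quoted by \citet{kennicutt1998a}, $\Sigma_{\mathrm{SFR}}\approx2.5\times10^{-4}\,(\Sigma_{\mathrm{gas}}/{\rm M_{\sun}\,pc^{-2}})^{1.4}\,{\rm M_{\sun}\,yr^{-1}\,kpc^{-2}}$ (treat the coefficient as indicative only), and illustrates how a steeper local slope suppresses star formation at low surface density relative to a high-density pivot.

```python
# Disk-averaged Kennicutt (1998) star formation law; the normalization
# A is the published value but should be treated as indicative here.
def sigma_sfr(sigma_gas, A=2.5e-4, n=1.4):
    """Sigma_gas in Msun pc^-2 -> Sigma_SFR in Msun yr^-1 kpc^-2."""
    return A * sigma_gas ** n

# Relative to a pivot at Sigma_gas = 100 Msun/pc^2, a steeper slope
# (n ~ 2-3, as reported for low-mass disks) predicts far weaker star
# formation at low surface densities; the normalization cancels in the ratio.
for n in (1.4, 2.5):
    ratio = sigma_sfr(5.0, n=n) / sigma_sfr(100.0, n=n)
    print(f"n={n}: Sigma_SFR(5)/Sigma_SFR(100) = {ratio:.3g}")
```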
Estimates of the slope of the SK relation within individual galaxies exhibit
significant variations. For example, while \citet{schuster2007a} and
\citet{kennicutt2007a} find $n_{\mathrm{tot}}\approx 1.4$ for the molecular-rich
galaxy M51a, similar estimates in other large, nearby galaxies \citep[including the
Milky Way (MW);][]{misiriotis2006a} range from $n_{\mathrm{tot}}\approx1.2$ to $n_{\mathrm{tot}}\approx 3.5$ \citep{wong2002a,boissier2003a}
depending on dust-corrections and fitting methods.
The disk-averaged total gas SK relation for normal
(non-starburst) galaxies also has a comparably steep slope of $n_{\mathrm{tot}}\approx2.4$, with
significant scatter \citep{kennicutt1998a}.
While the variations in the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ correlation may
indicate systematic uncertainties in observational measurements,
intrinsic variations or trends in galaxy properties may also
induce differences between
the global relation determined by \cite{kennicutt1998a}
and the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ correlation in
individual galaxies.
Galaxies with low gas surface densities, like dwarfs or bulgeless spirals,
display an even wider
variation in their star formation relations.
\citet{heyer2004a} and
\citet{boissier2003a} show that in low-mass galaxies the
SFR dependence on the total gas surface density
exhibits a power-law slope $n_{\mathrm{tot}}\approx 2-3$ that is considerably steeper
than the global \citet{kennicutt1998a} relation slope of $n_{\mathrm{tot}}\approx 1.4$.
Further, star formation in the low-surface density outskirts of galaxies also
may not be universal.
Average
SFRs appear to drop rapidly at gas surface
densities of $\Sigma_{\mathrm{gas}}\lesssim 5-10{\rm\, M_{\sun}\,pc^{-2}}$
\citep{hunter1998a,martin2001a}, indicating that
star formation may be truncated
or exhibit a steep dependence on the gas surface density.
The existence of such threshold surface
densities has been proposed on theoretical grounds
\citep{kennicutt1998a,schaye2004a}, although recent GALEX results
using a UV indicator of star formation suggest that star formation
may continue at even lower surface densities \citep{boissier2007a}.
Star formation rates probed by
damped Lyman alpha absorption (DLA) systems also appear to lie
below the \cite{kennicutt1998a} relation, by an order of magnitude
\citep{wolfe2006a,wild2007a},
which may indicate that the relation between SFR
and gas surface density in DLA systems differs from the
local relation measured at high $\Sigma_{\mathrm{gas}}$.
In contrast, observations generally show that star formation in
galaxies correlates strongly with {\it molecular\/} gas, especially
with the highly dense gas traced by HCN emission
\citep{gao2004a,wu2005a}. The power-law index of the
SK relation connecting the SFR to the
surface density of molecular hydrogen consistently displays a value of
$n_{\mathrm{mol}}\approx1.4$ and exhibits considerably less galaxy-to-galaxy
variation
\citep{wong2002a,murgia2002a,boissier2003a,heyer2004a,matthews2005a,leroy2005,leroy2006,gardan2007a}.
Molecular gas, in turn, is expected to form in the high-pressure regions of the
ISM \citep{elmegreen1993a,elmegreen1994a}, as indicated by
observations \citep{blitz2004a,blitz2006a,gardan2007a}.
Analytical models and numerical simulations that tie star
formation to the fraction of gas in the dense ISM are successful in
reproducing many observational trends
\citep[e.g.,][]{elmegreen2002a,kravtsov2003a,krumholz2005a,li2005a,li2006a,tasker2006a,tasker2007a,krumholz2007a,wada2007a,tassis2007a}.
Recently, several studies have explored star formation recipes based
on molecular hydrogen. \citet{pelupessy2006a} and \citet{booth2007a}
implemented models for $\mathrm{H}_{2}$ formation in gaseous disks and used
them to study the molecular content and star formation in galaxies.
However, these studies focused on the evolution of galaxies of a
single mass and did not address
the origin of the
SK relation, its dependence on galaxy mass or
structure, or its connection to trends in the local molecular fraction.
Our study examines the SK relation
critically, including its dependence on the structural and ISM properties
of galaxies of different masses, to explain the observed deviations from the
global SK relation, to
investigate other connections between star formation and disk galaxy properties
such as rotation or gravitational instability, and to
explore how the temperature and density structure of the ISM
pertains to the star formation attributes of galaxies.
To these ends, we develop a model for the ISM and star formation whose
key premise is that star formation on the scales of molecular clouds
($\sim 10$~pc) is a function of molecular hydrogen density with a
universal SFE per free-fall time
\citep[e.g.,][]{krumholz2007b}. Molecular hydrogen, which
we assume to be a proxy for dense, star forming gas, is accounted for
by calculating the local $\mathrm{H}_{2}$ fraction of gas as a function of
density, temperature, and metallicity using the photoionization code
Cloudy \citep{ferland1998a} to incorporate $\mathrm{H}_{2}$-destruction
by the UV radiation
of local young stellar populations. We devise a
numerical implementation of the star formation and ISM model, and
perform hydrodynamical simulations to study the role of molecular gas
in shaping global star formation relations of self-consistent galaxy
models over a representative mass range.
The results of our study show that many of the observed global star formation
correlations and trends can be understood in terms of the dependence of molecular
hydrogen abundance on the local gas volume density. We show that
the physics controlling the abundance of molecular hydrogen and its destruction
by the interstellar radiation field (ISRF) play a key role in shaping
these correlations, in agreement with earlier calculations based on
more idealized models of the ISM \citep{elmegreen1993a,elmegreen1994a}.
While our simulations focus on the connection between the molecular ISM phase
and star formation on galactic scales, the formation of molecular hydrogen
has also been recently studied in simulations of the ISM on smaller,
subgalactic scales \citep{glover2007a,dobbs2007a}. These simulations
are complementary to the calculations presented in our study and could
be used as input to improve the molecular ISM model we
present.
The paper is organized as follows. The simulation methodology,
including our numerical models for the ISM, interstellar radiation
field, and simulated galaxies, is presented in \S
\ref{section:methodology}. The results of the simulations are
presented in \S \ref{section:results}, where the simulated star
formation relations in galactic disks and correlations of the
molecular fraction with the structure of the ISM are examined. We
discuss our results in \S \ref{section:discussion} and conclude with a
summary in \S \ref{section:summary}. Details of our tests of numerical
fragmentation in disk simulations and calculations of the model
scaling between star formation, gas density, and orbital frequency
are presented in the Appendices. Throughout, we work in the context of
a dark-energy dominated cold dark matter cosmology with a Hubble
constant $H_{0} \approx 70\,\mathrm{km}\,\,\mathrm{s}^{-1}{\mathrm{Mpc}}^{-1}$.
\begin{figure*}
\figurenum{1}
\epsscale{1}
\plotone{fig1.eps}
\caption{\label{fig:cooling}
Cooling ($\Lambda$) and heating ($\mathcal{H}$) rates for interstellar and intergalactic gas as a function of gas density ($n_{\mathrm{H}}$),
temperature ($T$), metallicity ($Z$), and interstellar radiation field (ISRF) strength ($U_{\mathrm{isrf}}$), in units of
the ISRF strength in the MW at the solar circle, as calculated
by the code Cloudy \citep{ferland1998a}. Shown are the cooling (dotted lines), heating (dashed lines) and net
cooling (solid) functions over the temperature range $T=10^{2}-10^{9}~\mathrm{K}$. Dense gas can efficiently cool via
atomic and molecular coolants below $T=10^{4}\mathrm{K}$, depending on the gas density and the strength of the ISRF. A strong
ISRF can enable the destruction of $\mathrm{H}_{2}$ gas and
thereby reduce the SFR.
}
\end{figure*}
\section{Methodology}
\label{section:methodology}
To examine molecular gas and star formation in
disk galaxies, we develop a model
that accounts for low temperature coolants ($T<10^{4}\,\mathrm{K}$), calculates
the equilibrium abundance of $\mathrm{H}_{2}$ in dense gas, and estimates the local
SFR based on the local $\mathrm{H}_{2}$ density with a universal
SFE per free fall time.
This ISM model is detailed below in \S \ref{section:methodology:ism},
and is referred to as the $\mathrm{H}_{2}$D-SF model (for ``$\mathrm{H}_{2}$ density-star formation'').
We extend the $\mathrm{H}_{2}$D-SF model to account for the destruction
of $\mathrm{H}_{2}$ by an
interstellar radiation field powered by local star formation, and refer
to the extended model as the $\mathrm{H}_{2}$D-SF+ISRF model
(for ``$\mathrm{H}_{2}$ density-star formation plus interstellar radiation field'').
The $\mathrm{H}_{2}$D-SF+ISRF model is also detailed below in
\S \ref{section:methodology:ism}.
The new molecular gas ISM and star formation models are contrasted against
a simple ISM model that includes only atomic cooling with a temperature
floor $T=10^{4}\mathrm{K}$ and a
SFR calculated from the total gas density above a density
threshold, which we refer to as
the GD-SF model (for ``gas density-star formation''). The GD-SF model
has been commonly used in previous
galaxy formation simulations.
Each star formation and ISM model is numerically implemented in the
smoothed particle hydrodynamics (SPH) /
N-body code GADGET2 \citep{springel2001a,springel2005c}, which is used to
perform the simulations presented in this work.
General issues of numerical fragmentation in the
disk simulations of the kind presented here are described in
\S \ref{section:methodology:jeans_resolution}, \S \ref{section:methodology:median_eos},
and in the Appendix.
The ISM
models are applied to simulations of the isolated evolution
of disk galaxies with a representative range of circular velocities and structural properties.
The galaxy models are designed as analogues of the well-studied nearby galaxies
DDO154, M33, and NGC 4501, and are detailed in \S \ref{section:methodology:galaxy_models}. The results of the simulations are presented in \S \ref{section:results}.
\subsection{ISM Properties}
\label{section:methodology:ism}
The thermal properties of the ISM are largely determined
by the radiative heating $\mathcal{H}$ and cooling $\Lambda$
rates of interstellar gas. Given $\mathcal{H}$ and $\Lambda$,
the internal energy $u$ of gas
with density $\rho_{g}$ evolves as
\begin{equation}
\label{equation:thermal_evolution}
\rho_{g}\frac{\mathrm{d} u}{\mathrm{d} t} = \mathcal{H}-\Lambda,
\end{equation}
\noindent
with additional terms to describe the energy input
from feedback
or hydrodynamical interactions.
The temperature dependence of $\mathcal{H}$ and $\Lambda$ itself depends on
the metallicity and density of the gas, and on the
nature of the external radiation field.
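As an illustration of how Equation \ref{equation:thermal_evolution} is integrated in practice, the following sketch uses explicit Euler steps with a toy constant heating rate and a cooling rate linear in $u$; the constants and functional forms are purely illustrative stand-ins, not the Cloudy rates used in this work:

```python
def integrate_thermal(u0, rho_g, heat, cool, dt, n_steps):
    """Explicit Euler integration of rho_g du/dt = H - Lambda.

    heat and cool are callables returning volumetric rates
    (erg cm^-3 s^-1) as a function of the specific energy u (erg g^-1).
    """
    u = u0
    for _ in range(n_steps):
        u += (heat(u) - cool(u)) / rho_g * dt
    return u

# Toy example: constant heating and cooling linear in u, so the gas
# relaxes toward the equilibrium u_eq = heat0 / cool0 = 100 erg g^-1.
heat0, cool0 = 1.0e-25, 1.0e-27     # illustrative rates only
rho = 1.0e-24                       # g cm^-3
u_eq = heat0 / cool0
u_final = integrate_thermal(0.5 * u_eq, rho,
                            lambda u: heat0, lambda u: cool0 * u,
                            dt=100.0, n_steps=2000)
```

In the simulations themselves, the right-hand side additionally carries the feedback and hydrodynamical source terms noted above.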
To calculate the above heating and cooling processes,
as well as additional cooling processes from
molecules and ionic species other than $\mathrm{H}_{2}$,
we use the photoionization code Cloudy version C06.02a,
last described by
\citet{ferland1998a}.
Cloudy calculates
heating and cooling processes for H, He, and
metals, as well as absorption
by dust grains, photoelectric absorption, and molecular
photoabsorption. The chemical network and cooling
includes molecules and atomic ions
that allow gas to cool to temperatures $T<10^4$~K. For details on the molecular
network and cooling, the reader is encouraged to
examine \cite{hollenbach1979a,hollenbach1989a},
\cite{ferland1998a}, and the Cloudy documentation
suite Hazy\footnote{http://www.nublado.org}.
For purposes of calculating the efficiency of
molecular line cooling
using Cloudy, only the thermal line width is considered.
At temperatures
below our minimum temperature ($T\sim100\,\mathrm{K}$), the
influence
of nonthermal motions on molecular line cooling will
become increasingly important and should be considered when
modeling lower temperature gas.
For the heating term $\mathcal{H}$,
our Cloudy calculations include heating from the cosmic
ultraviolet background at $z=0$
following \cite{haardt1996a}, extended to include
the contribution to the UV background from galaxies
(see the Cloudy documentation for details).
Cosmic ray heating and ionization are incorporated following
\cite{ferland1984a}, and are important
at densities where the ISM becomes
optically-thick to UV radiation.
The presence of a blackbody cosmic microwave background
radiation with a temperature of
$T\approx2.7\mathrm{K}$ is included.
Milky Way-like ($R_{V}=3.1$) dust, including both graphite and
silicate dust, is modeled with the local ISM abundance.
Grain heating and cooling mechanisms are included, following \cite{van_hoof2001a}
and \cite{weingartner2001a}.
The $\mathrm{H}_{2}$D-SF model of the ISM includes all the cooling and heating
processes described above.
In addition, the $\mathrm{H}_{2}$D-SF+ISRF model includes the following
treatment of the ISRF, and is used to examine the consequences
of soft UV radiation from star formation for
the molecular phase of the ISM.
\cite{mathis1983a}
modeled the ISRF needed to power the emission spectrum
of dust both in the diffuse ISM and in molecular clouds
in the Milky Way. The short-wavelength
($\lambda\lesssim0.3\micron$) ISRF spectral energy distribution (SED)
was inferred to scale roughly exponentially with a scale length
$R_{u}\sim4\, {\mathrm{kpc}}$ in the MW disk (for a solar galactocentric radius of
$R_{\sun}=10\,{\mathrm{kpc}}$),
tracing the stellar population, while the
strength of the long-wavelength ISRF SED varied non-trivially with
radius owing to the relative importance of emission from
dust.
The short-wavelength ISRF SED determined by
\cite{mathis1983a} (their Table A3) for
$R = 0.5-1.3R_{\sun}$, normalized to a fixed stellar surface
density,
is roughly
independent of radius, with a shape similar to the
radiation field considered by \cite{draine1978a} and
\cite{draine1996a}. For simplicity, we fix the ISRF spectrum
to have its inferred SED in the solar vicinity
and scale the intensity
with the local SFR density normalized by the
solar value as
\begin{equation}
\label{equation:uisrf}
U_{\mathrm{isrf}} \equiv \frac{u_{\nu}}{u_{\nu,\sun}} = \frac{\Sigma_{\mathrm{SFR}}}{\Sigma_{\mathrm{SFR},\sun}}
\end{equation}
\noindent
where
$\Sigma_{\mathrm{SFR},\sun}\approx(2-5)\times10^{-9}M_{\sun}\,{\mathrm{yr}}^{-1}\,{\mathrm{pc}}^{-2}$
\citep[e.g.,][]{smith1978a,miller1979a,talbot1980a,rana1987a,rana1991a,kroupa1995a}.
For our model, we adopt $\Sigma_{\mathrm{SFR},\sun} =
4\times10^{-9}M_{\sun}\,{\mathrm{yr}}^{-1}\,{\mathrm{pc}}^{-2}$. When included, the ISRF is
added to the input spectrum for the Cloudy calculations as an
additional source of radiation.
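As a concrete (and intentionally trivial) sketch, Equation \ref{equation:uisrf} amounts to a linear rescaling of the local SFR surface density by the adopted solar-neighborhood value; the function name is ours:

```python
SIGMA_SFR_SUN = 4.0e-9  # adopted solar value, Msun yr^-1 pc^-2

def u_isrf(sigma_sfr):
    """ISRF strength in solar-neighborhood units: the SED shape is
    held fixed and the intensity scales with the local SFR density."""
    return sigma_sfr / SIGMA_SFR_SUN
```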
Absorption of soft UV ($\lambda\sim0.1\micron$)
photons can enable the destruction of $\mathrm{H}_{2}$ through transitions to the
vibrational continuum or to excited states that can be photoionized or
photodissociated \citep{stecher1967a}.
The ISRF can supply the soft UV photons that lead to $\mathrm{H}_{2}$
dissociation and can subsequently regulate the $\mathrm{H}_{2}$ abundance at low
gas densities ($n_{\mathrm{H}}\sim1\mathrm{cm}^{-3}$). For values of $U_{\mathrm{isrf}}\gtrsim0.01$, the
ionizing flux of the ISRF dominates over the
\cite{haardt1996a} UV background
\citep[see, e.g.,][]{sternberg2002a}.
Figure \ref{fig:cooling} shows the cooling rate $\Lambda$ (dotted line), the
heating rate $\mathcal{H}$ (dashed line), and net cooling rate
$\Lambda_{\mathrm{net}}$ (solid lines) calculated
for a range of temperatures ($T$), densities ($n_{\mathrm{H}}$),
metallicities ($Z$), and ISRF strengths ($U_{\mathrm{isrf}}$).
For each value of $T$, $n_{\mathrm{H}}$, $Z$, and $U_{\mathrm{isrf}}$, a Cloudy
simulation is performed assuming a plane-parallel radiation
field illuminating a $10\,{\mathrm{pc}}$ thick slab chosen to be comparable
to our hydrodynamical spatial resolution.
Each simulation
is allowed to iterate until convergence, after which the heating rate,
cooling rate, ionization fraction, molecular fraction, electron
density, and molecular weight are recorded from the center of the
slab.
These quantities are tabulated on a grid over the range
$\log_{10} T=\{2,9\} \log_{10} \mathrm{K}$ with $\Delta \log_{10} T=0.25$,
$\log_{10} n_{\mathrm{H}}=\{-6,6\}\log_{10}\mathrm{cm}^{-3}$ with $\Delta \log_{10} n_{\mathrm{H}}=1.5$,
$\log_{10} Z=\{-2,1\}\log_{10}Z_{\sun}$ with $\Delta \log_{10} Z=1.5$,
and
$U_{\mathrm{isrf}} = \{0, 1, 10, 100, 1000\}\,U_{\mathrm{isrf}, \sun}$, and log-linearly
interpolated
according to the local gas properties.
We note that
since the slab thickness determines the typical particle
column density for a given volume density,
the tabulated quantities can depend on
the slab thickness.
For reference,
varying the slab thickness between $100{\mathrm{pc}}$ and $1{\mathrm{pc}}$ for a density of
$n_{\mathrm{H}}\sim30\mathrm{cm}^{-3}$ changes the molecular fraction by less than $30\%$ and the
heating and cooling rates by less than $50\%$.
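The lookup itself can be sketched as follows; for brevity this example interpolates log-linearly in temperature alone over a dummy table, whereas the simulations interpolate over the full $(T, n_{\mathrm{H}}, Z, U_{\mathrm{isrf}})$ grid. The table values below are placeholders, not Cloudy output:

```python
import numpy as np

# Placeholder table: log10 of the net cooling rate vs. log10 T at
# fixed n_H, Z, and U_isrf (NOT real Cloudy values).
log_T_grid = np.arange(2.0, 9.25, 0.25)      # the Delta log10 T = 0.25 grid
log_lam_grid = -22.0 + 0.1 * log_T_grid      # dummy values, linear in log10 T

def cooling_rate(T):
    """Log-linear interpolation of the tabulated rate in temperature."""
    log_lam = np.interp(np.log10(T), log_T_grid, log_lam_grid)
    return 10.0 ** log_lam
```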
The left panels of Figure \ref{fig:cooling} with $U_{\mathrm{isrf}}=0$ correspond
to net cooling functions used in the $\mathrm{H}_{2}$D-SF model, while the
$\mathrm{H}_{2}$D-SF+ISRF model includes all the net cooling functions presented
in the figure. The GD-SF model that includes only atomic cooling incorporates
the regions of the net cooling functions in the left ($U_{\mathrm{isrf}}=0$) panels of
Figure \ref{fig:cooling} at temperatures $T>10^{4}\mathrm{K}$.
In addition to well known cooling and heating processes operating at
high temperatures ($T>10^{4}\mathrm{K}$), the results of the Cloudy
simulations show that cooling of gas near $n_{\mathrm{H}}\sim1\mathrm{cm}^{-3}$ is
regulated by the presence of the ISRF (Figure \ref{fig:cooling}, middle
left and center panels) and an ISRF strength of $U_{\mathrm{isrf}}\sim1$ contributes a
low-temperature heating rate $\mathcal{H}$ more than an
order of magnitude larger than that supplied by the cosmic UV
background. At higher densities ($n_{\mathrm{H}}\sim10^{3}\mathrm{cm}^{-3}$), the gas remains
optically-thin to cosmic ray heating but becomes insensitive to the presence
of either the ISRF or the cosmic UV background.
The correlation
between the ISRF field strength and the local SFR limits the
applicability of the low-$n_{\mathrm{H}}$ / high-$U_{\mathrm{isrf}}$ region of the cooling
function but we include such regimes for completeness.
The simple description of the ISRF used in our modeling is designed
to replicate average conditions in systems like the Milky Way. However,
the ISRF in special locations in the ISM, such as near photoionized regions,
may have a different character than the average SED
we employ. Also, changes in the composition of dust or the initial mass function
of stars relative to Milky Way properties could alter the frequency-dependent ISRF.
We therefore caution that the detailed, frequency-dependent connection
between SFR and ISRF strength used in our modeling does not
capture every condition within the ISM of a single galaxy or between galaxies.
The model we present does realistically capture the physics that regulate the
abundance of molecular hydrogen and local SFR for a given
spectral form of the ISRF.
\subsection{Star Formation and Feedback}
\label{section:methodology:sf_and_feedback}
Modeling the SFR in the ISM as a power-law function
of the gas density $\rho_{g}$ dates to at least \cite{schmidt1959a}, who
modeled the past SFR of the Galaxy needed to
produce the observed luminosity function of main sequence stars.
We assume that star formation occurs in molecular clouds
in proportion to the local molecular gas density $f_{\mathrm{H2}}\rho_{g}$,
as suggested by a variety of observations
\citep[e.g.,][]{elmegreen1977a,blitz1980a,beichman1986a,lada1987a,young1991a}.
The model thus assumes that $\mathrm{H}_{2}$ is a good proxy for star forming gas.
On the scale of individual gas particles the SFR
is then determined by the density of molecular hydrogen, which
is converted into stars on a time scale $t_{\star}$: $\rho_{\star}\propto f_{\mathrm{H2}}\rho_{g}/t_{\star}$.
Observations indicate that at high densities $t_{\star}$ scales with the local free fall
time of the gas, $t_{\mathrm{ff}}\propto \rho_{g}^{-0.5}$, as
\begin{equation}
\label{equation:sfr_ff}
t_{\star}\approx t_{\mathrm{ff}}/\epsilon_{\mathrm{ff}},
\end{equation}
with the SFE per free fall time, $\epsilon_{\mathrm{ff}}\approx 0.02$,
approximately independent of density at $n\approx 10^2-10^4~\mathrm{cm}^{-3}$ \citep{krumholz2005a,krumholz2007b}.
We adopt $t_{\star}=1$~Gyr for gas at density $n_{\mathrm{H}} =10 h^{2}\mathrm{cm}^{-3}$
($t_{\mathrm{ff}}=2.33\times 10^7$~yrs), which corresponds to
$\epsilon_{\mathrm{ff}}=0.023$.
By calibrating the SFE
to observations of dense molecular clouds through
$\epsilon_{\mathrm{ff}}$, our model differs
from the usual approach of choosing a gas consumption time scale to fit
the global SFR and the SK relation in
galaxies. As we show below, the chosen efficiency results in global
SFRs for entire galaxies that are consistent with
observations (see \S \ref{section:results}).
Incorporating the assumption that a
mass fraction $\beta$ of young stars promptly explode as supernovae,
the SFR in our models is given by
\begin{equation}
\label{equation:star_formation_rate}
\dot{\rho}_{\star} = (1-\beta) f_{\mathrm{H2}}\frac{\rho_{g}}{t_{\star}}\left(\frac{n_{\mathrm{H}}}{10\,h^{2}\mathrm{cm}^{-3}}\right)^{0.5}.
\end{equation}
\noindent
We assume $\beta\approx0.1$, appropriate for the \cite{salpeter1955a} initial mass function.
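The calibration and the resulting rate can be checked numerically. The sketch below assumes pure-hydrogen gas when converting $n_{\mathrm{H}}$ to a mass density (which reproduces the quoted $t_{\mathrm{ff}}=2.33\times 10^{7}$~yr) and $h=0.7$; both are our assumptions for illustration:

```python
import numpy as np

G, M_P, YR = 6.674e-8, 1.6726e-24, 3.156e7   # cgs constants; yr in seconds
H = 0.7                                      # assumed Hubble parameter

def free_fall_time(n_H):
    """t_ff = sqrt(3 pi / (32 G rho)) for pure-hydrogen gas (assumption)."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * n_H * M_P))

# Calibration: t_star = 1 Gyr at n_H = 10 h^2 cm^-3 gives eps_ff ~ 0.023.
t_star = 1.0e9 * YR
eps_ff = free_fall_time(10.0 * H**2) / t_star

def sfr_density(rho_g, f_H2, n_H, beta=0.1):
    """SFR density of the equation above, in g cm^-3 s^-1."""
    return (1.0 - beta) * f_H2 * rho_g / t_star * np.sqrt(n_H / (10.0 * H**2))
```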
Implicit in Equation \ref{equation:star_formation_rate}
is that the molecular fraction $f_{\mathrm{H2}}$
may vary with a variety of local ISM properties as
\begin{equation}
\label{equation:fH2_function}
f_{\mathrm{H2}} = f_{\mathrm{H2}}\left(\rho_{g},T,Z_{g},U_{\mathrm{isrf}}\right).
\end{equation}
\noindent
The $\mathrm{H}_{2}$D-SF model includes the dependence on the gas density,
temperature $T$,
and metallicity $Z_{g}$. The $\mathrm{H}_{2}$D-SF+ISRF model additionally includes
the dependence of the molecular fraction on the strength of the
interstellar radiation
field $U_{\mathrm{isrf}}$ parameterized as a fraction of the local interstellar
field energy density.
In both the $\mathrm{H}_{2}$D-SF and $\mathrm{H}_{2}$D-SF+ISRF models, the
equilibrium molecular fraction is tabulated by using the Cloudy
calculations
discussed
in \S \ref{section:methodology:ism}.
Variations in the cosmic ray ionization
rate or the dust model could affect the form of
Equation \ref{equation:fH2_function} but we do not consider them here.
For the simple GD-SF model that does not track the molecular gas abundance,
we set $f_{\mathrm{H2}}=1$ in gas with densities above the star formation
threshold density $n_{\mathrm{H}}\gtrsim0.1\mathrm{cm}^{-3}$ \citep[see, e.g.,][]{governato2007a}.
Figure \ref{fig:cooling} shows that the feature in the cooling
rate at $T\sim 10^3$~K for low metallicity, intermediate
density gas (e.g., middle
left panel), which owes to molecular hydrogen, disappears
in the presence of the interstellar radiation
field.
Equations
\ref{equation:thermal_evolution} and
\ref{equation:star_formation_rate} then imply that
destruction of $\mathrm{H}_{2}$ by soft UV photons from young
stars regulates star formation in the $\mathrm{H}_{2}$D-SF+ISRF model.
The dissociation of $\mathrm{H}_{2}$
by the ISRF means that ISM gas may be cold and dense but its SFR may
be suppressed if the local ISRF is strong.
We discuss this effect in the context of the simulated
galaxy models in \S \ref{section:results}.
The energy deposition from supernovae into interstellar gas is treated
as a thermal feedback, given by
\begin{equation}
\label{equation:sn_feedback}
\rho_{g}\frac{\mathrm{d} u}{\mathrm{d} t} = \epsilon_{\mathrm{SN}} \dot{\rho}_{\star}
\end{equation}
\noindent
where $\epsilon_{\mathrm{SN}}$ is the energy per unit mass of formed stars
that is deposited into the nearby ISM. We choose
$\epsilon_{\mathrm{SN}}=1.4\times10^{49}\,\mathrm{ergs}\,\,M_{\sun}^{-1}$ or, in the language of
\cite{springel2003b}, an effective supernova temperature of
$T_{\mathrm{SN}}=3\times10^{8}\,\mathrm{K}$. This value for the energy deposition from
supernovae has been used repeatedly in simulations of galaxy mergers
\citep[e.g.,][]{robertson2006c,robertson2006b,robertson2006a}. Given
the extremely short cooling time of dense gas at low resolutions, prescribing
supernovae feedback as thermal input into the gas has long been known to
have a weak effect on the global evolution of simulated galaxies
\citep[e.g.,][]{katz1992b,steinmetz1995a}. For this reason, the results of this paper
also do
not depend strongly on the value of $\epsilon_{\mathrm{SN}}$ or the
supernovae mass fraction $\beta$. Metal enrichment of the ISM
is treated in the instantaneous approximation, with a mass fraction
$y=0.02$ of the supernovae ejecta being returned into the gas as metals
\citep[see][]{springel2003a}.
\subsection{Avoiding Numerical Jeans Fragmentation}
\label{section:methodology:jeans_resolution}
Perturbations in a self-gravitating medium of mean density
$\rho$ and sound speed $c_{s}$ can grow only
if their wavelength exceeds the Jeans length $\lambda_{\mathrm{Jeans}}$ \citep{jeans1928a}.
The corresponding Jeans mass contained within $\lambda_{\mathrm{Jeans}}$ is
\begin{equation}
\label{equation:jeans_mass}
m_{\mathrm{Jeans}} = \frac{\pi^{5/2}c_{s}^{3}}{6G^{3/2}\rho^{1/2}}
\end{equation}
\noindent
\citep[see, e.g., \S 5.1.3 of][]{binney1987a}.
\cite{bate1997a} identified a resolution requirement for
smoothed particle hydrodynamics simulations such that the
number of SPH neighbors $N_{\mathrm{neigh}}$ and gas particle mass
$m_{\mathrm{gas}}$ should satisfy
$2N_{\mathrm{neigh}}m_{\mathrm{gas}} < m_{\mathrm{Jeans}}$ to capture the pressure forces
on the Jeans scale and avoid numerical fragmentation.
\cite{klein2004a} suggest that the effective resolution of
SPH simulations is the number of SPH smoothing lengths per
Jeans scale
\begin{equation}
\label{equation:jeans_hsml}
h_{\mathrm{Jeans}} = \frac{\pi^{5/2}c_{s}^{3}}{6G^{3/2} N_{\mathrm{neigh}} m_{\mathrm{gas}} \rho^{1/2}},
\end{equation}
\noindent
which is very similar to the number of Jeans masses per
SPH kernel mass, if we define the kernel mass as
\begin{equation}
\label{equation:m_SPH}
m_{\mathrm{SPH}} \equiv \sum_{i=1}^{N_{\mathrm{neigh}}} m_{i}.
\end{equation}
\noindent
Our simulations include low temperature coolants that allow ISM
gas to become cold and dense, and for the typical number of
gas particles used in our models ($N_{\mathrm{gas}} \approx 400,000$) the
Jeans mass is not resolved at the lowest temperatures and
largest galaxy masses.
Motivated by techniques used to avoid numerical Jeans fragmentation in
grid codes \citep[e.g.,][]{machacek2001a}, a density-dependent
pressure floor is introduced into our SPH calculations. For every gas particle,
the kernel mass is monitored to ensure it resolves some number $N_{\mathrm{Jeans}}$
of Jeans masses. If not,
the particle internal energy is altered following
\begin{equation}
\label{equation:pressure_floor}
u = u \times \left\{ \begin{array} {c@{\quad:\quad}l}
\left(\frac{N_{\mathrm{Jeans}}}{h_{\mathrm{Jeans}}}\right)^{2/3} & h_{\mathrm{Jeans}}<N_{\mathrm{Jeans}} \\
\left(\frac{2N_{\mathrm{Jeans}}m_{\mathrm{SPH}}}{m_{\mathrm{Jeans}}}\right)^{2/3} & m_{\mathrm{Jeans}} < 2N_{\mathrm{Jeans}}m_{\mathrm{SPH}}
\end{array} \right.,
\end{equation}
\noindent
to provide the largest local pressure and to assure
the local Jeans mass is resolved.
We discuss the purpose and effect of this
pressurization in more detail in the first Appendix,
but we note that this requirement scales
with the resolution and naturally allows
for better-resolved SPH simulations to
follow increasingly lower temperature gas.
For the simulation results presented in this paper, we use $N_{\mathrm{Jeans}}=15$,
which is larger than the effective $N_{\mathrm{Jeans}}=1$ used by \cite{bate1997a}.
We have experimented with simulations with $N_{\mathrm{Jeans}}=1-100$ and find
$N_{\mathrm{Jeans}}\sim15$ to provide sufficient stability over the time evolution
of the simulations for the structure of galaxy models we use. Other
simulations may require different values of $N_{\mathrm{Jeans}}$ or a different
pressurization than the $u \propto m_{\mathrm{Jeans}}^{-2/3}$ scaling suggested
by Equation \ref{equation:jeans_mass}.
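A sketch of the Jeans-mass check and of the energy boost in the second branch of Equation \ref{equation:pressure_floor} follows; the mean molecular weight is an assumed illustrative value, not a quantity prescribed by our model:

```python
import numpy as np

G, K_B, M_P = 6.674e-8, 1.3807e-16, 1.6726e-24   # cgs constants

def jeans_mass(T, rho, mu=1.22, gamma=5.0 / 3.0):
    """m_Jeans = pi^(5/2) c_s^3 / (6 G^(3/2) rho^(1/2))."""
    c_s = np.sqrt(gamma * K_B * T / (mu * M_P))
    return np.pi**2.5 * c_s**3 / (6.0 * G**1.5 * np.sqrt(rho))

def pressure_floor(u, m_sph, m_jeans, n_jeans=15):
    """Boost u so that 2 N_Jeans kernel masses fit within one Jeans
    mass; since m_Jeans scales as c_s^3 (i.e., u^(3/2)), a factor
    f^(2/3) in u raises m_Jeans by exactly f."""
    if m_jeans < 2.0 * n_jeans * m_sph:
        u *= (2.0 * n_jeans * m_sph / m_jeans) ** (2.0 / 3.0)
    return u
```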
\begin{figure}
\figurenum{2}
\epsscale{1.25}
\plotone{fig2.eps}
\caption{\label{fig:median_eos}
Median equation-of-state (EOS) of gas in a Milky Way-sized galactic
disk in our ISM model. Shown is the histogram of particles in the hydrogen
number density $n_{\mathrm{H}}$ -- temperature $T$ plane and the median temperature
measured in bins of density (white line). The median EOS is used to
determine the properties of gas pressurized to avoid numerical
Jeans fragmentation (see \S \ref{section:methodology:jeans_resolution}
and \S \ref{section:methodology:median_eos}).
At high densities, the particles follow the loci where heating
balances cooling for their
densities, temperatures, metallicities,
and local interstellar radiation field strengths.
}
\end{figure}
\subsection{Median Equation-of-State}
\label{section:methodology:median_eos}
In the absence of a numerical pressure floor, the equation-of-state (EOS) of
gas will follow the loci where heating balances cooling in the temperature-density
phase space.
However, the gas pressurization requirements to avoid numerical Jeans fragmentation
change the EOS of the gas.
For purposes of calculating the molecular or
ionization fraction of dense ISM gas that has been numerically pressurized,
an estimate of the EOS in the absence of resolution restraints is required.
Figure \ref{fig:median_eos} shows the temperature-density phase diagram of
ISM gas in a simulation of a Milky Way-sized galaxy without using the SPH
pressure floor described in \S \ref{section:methodology:jeans_resolution}
and detailed in the Appendix.
The median
EOS of this gas, determined by the heating and cooling processes calculated
by the Cloudy code for a given density, temperature, metallicity, and
interstellar radiation field strength, as well as heating from supernovae
feedback, is used in our simulations to
assign an effective temperature to dense gas
pressurized at the Jeans resolution limit.
The behavior of gas above a density of $n_{\mathrm{H}}\sim 100\mathrm{cm}^{-3}$ is especially
influenced by the balance of available cooling mechanisms and supernovae
heating, and in the absence of supernovae feedback such dense gas would
cool to the minimum temperature treated in our simulations ($T\sim 100\mathrm{K}$).
\subsection{Molecular Fraction vs. Gas Density}
\label{section:methodology:fH2_vs_density}
The molecular content of the ISM in models that account
for the abundance of $\mathrm{H}_{2}$ and other molecular
gas depends on the physical properties of interstellar
gas through Equation \ref{equation:fH2_function}. Since
the temperature and density phase structure of the ISM is
dictated by the local cooling time, which is nonlinearly
dependent on the temperature, density, metallicity, and
ISRF strength, the equilibrium molecular
fraction as a function of the gas properties is
most efficiently determined using hydrodynamical simulations
of galaxy evolution.
We have measured the
molecular gas fraction $f_{\mathrm{H2}}$ as a function of
gas density $n_{\mathrm{H}}$ after evolving each of the
$v_{\mathrm{circ}}=50-300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ disk galaxy simulations
for $t=0.3\,\mathrm{Gyr}$ in ISM models with
$\mathrm{H}_{2}$-destruction by an ISRF (the $\mathrm{H}_{2}$D-SF+ISRF model)
and without an ISRF (the $\mathrm{H}_{2}$D-SF model).
The galaxy-to-galaxy
and point-to-point variations in the metallicity, temperature, and ISRF
strength contribute to the spread in molecular fractions
at a given density $n_{\mathrm{H}}$.
Without $\mathrm{H}_{2}$-destruction, the molecular hydrogen content
of low-density gas calculated using Cloudy is surprisingly
large, with $f_{\mathrm{H2}}=0.1$ at $n_{\mathrm{H}}=0.1\mathrm{cm}^{-3}$ and $f_{\mathrm{H2}}\sim0.35$ at
$n_{\mathrm{H}}=1\mathrm{cm}^{-3}$.
For the $\mathrm{H}_{2}$D-SF model, the atomic-to-molecular
transition represents
a physical calculation of the star formation threshold density prescription
often included in coarse models of star formation in simulations of
galaxy formation and evolution.
For the $\mathrm{H}_{2}$D-SF+ISRF model, the destruction of $\mathrm{H}_{2}$ by the
ISRF reduces the molecular fraction in the low-density ($n_{\mathrm{H}}\lesssim1\mathrm{cm}^{-3}$)
ISM to $f_{\mathrm{H2}}<0.05$.
The abundance of $\mathrm{H}_{2}$ in this model is therefore greatly reduced
relative to the $\mathrm{H}_{2}$D-SF model that does not include an ISRF.
The decline in molecular abundance, in turn, produces much lower SFRs
in low density gas ($n_{\mathrm{H}}\lesssim1\mathrm{cm}^{-3}$) compared with either the
$\mathrm{H}_{2}$D-SF model (which has a larger molecular fraction at low
densities) or the GD-SF model (which allows for star formation in all
gas above a density threshold, $n_{\mathrm{H}}>0.1\mathrm{cm}^{-3}$).
For denser gas in the $\mathrm{H}_{2}$D-SF+ISRF model (e.g., $n_{\mathrm{H}}\gtrsim1\mathrm{cm}^{-3}$),
the numerical pressurization of
the ISM required to mitigate artificial Jeans fragmentation prevents
much of the gas from reaching high densities where the molecular
fraction becomes large ($f_{\mathrm{H2}}\sim1$ at $n_{\mathrm{H}}\gtrsim100\mathrm{cm}^{-3}$).
The molecular fraction of the numerically-pressurized gas must
be assigned at densities that exceed the resolution limit of the simulations.
As Figure \ref{fig:median_eos} indicates, the transition between
the warm phase at densities $n_{\mathrm{H}}\lesssim1\mathrm{cm}^{-3}$ and the cold phase at densities
$n_{\mathrm{H}}\gtrsim30\mathrm{cm}^{-3}$ is rapid \citep[see also][]{wolfire2003a}.
The required pressure floor can artificially limit the density of gas
to the range $n_{\mathrm{H}}\lesssim30\mathrm{cm}^{-3}$ where, in the presence of an ISRF,
the molecular fraction of the gas would be artificially suppressed (without an ISRF,
even gas at $n_{\mathrm{H}}\approx1\mathrm{cm}^{-3}$ is already mostly molecular and would be relatively
unaffected).
To account for this numerical limitation,
gas at densities $n_{\mathrm{H}}>1\mathrm{cm}^{-3}$ that is numerically-pressurized to resolve the Jeans
scale is assigned a minimum molecular fraction according to the median
EOS and the high-density ($n_{\mathrm{H}}>30\mathrm{cm}^{-3}$) $f_{\mathrm{H2}}-n_{\mathrm{H}}$ trend of gas in the absence of
the pressure floor. The $f_{\mathrm{H2}}-n_{\mathrm{H}}$ relation calculated by Cloudy can
be modeled for pressurized gas in this regime as
$f_{\mathrm{H2}} = \min\left\{0.67\left(\log_{10}n_{\mathrm{H}}+1.5\right)-1,\, 1\right\}$.
The utilization of the median EOS to assign detailed properties to the
dense gas in this manner has the further benefit of adaptively scaling with the local
resolution, allowing an approximation of the ISM properties that improves
with an increase in the particle number used in a simulation. Throughout
the paper, when simulation results include gas particles that are
pressurized to avoid numerical
Jeans fragmentation, their reported temperatures and molecular fractions are
determined from the median EOS presented in Figure \ref{fig:median_eos}.
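For reference, the assigned floor can be written as a short function; we clip the linear-in-$\log_{10} n_{\mathrm{H}}$ fit to the physical range $[0,1]$, where the lower clip is our addition for robustness at low densities:

```python
from math import log10

def f_h2_floor(n_H):
    """Minimum molecular fraction assigned to numerically pressurized
    gas, from the fit to the high-density Cloudy f_H2-n_H trend,
    clipped to the physical range [0, 1]."""
    f = 0.67 * (log10(n_H) + 1.5) - 1.0
    return min(max(f, 0.0), 1.0)
```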
While the density dependence of molecular fraction utilized here may not
be correct in every detail (i.e., the transition to $f_{\mathrm{H2}}=1$ may occur
at an inaccurately low density), for our purposes
the primary requirement is that the model captures
the fraction of star forming gas as a function of density in a realistic
fashion. The gas mass in disks in the $\mathrm{H}_{2}$D-SF+ISRF model
at $\Sigma_{\mathrm{gas}}\gtrsim 10\,\rm M_{\odot}\,pc^{-2}$ is mostly molecular
and the molecular fraction increases realistically with the external ISM pressure
(see \S \ref{section:results:fH2_pressure}),
in good agreement with observations \citep[e.g.,][]{wong2002a,blitz2006a}.
\subsection{Isolated Galaxy Models}
\label{section:methodology:galaxy_models}
We use three cosmologically-motivated galaxy models to study global
star formation relations. Below we describe the models and list the
observational data used to motivate their adopted structure.
The galaxy initial conditions
follow the methodology described by \cite{springel2005b}, with
a few exceptions. The properties of the disks
are chosen to be representative of disk galaxies that form
within the CDM structure formation paradigm \citep[e.g.,][]{mo1998a}.
The model galaxies consist of a \cite{hernquist1990a} dark matter
halo, exponential stellar and gaseous disks with scale lengths $R_{\mathrm{d},\star}$
and $R_{\mathrm{d},g}$, and an optional \cite{hernquist1990a} stellar bulge
with scale radius $a_{\mathrm{b},\star}$. For a given virial mass, the
\cite{hernquist1990a}
dark matter
halo parameters are scaled to an effective \cite{navarro1996a} halo
concentration. The velocity distributions of the dark matter halos
are initialized from the isotropic distribution function provided in
\cite{hernquist1990a}, using the rejection technique described by
\cite{press1992a}. The vertical distribution of the stellar disk is
modeled
with a $\mathrm{sech}^{2}(z/2z_{\mathrm{d}})$ function with
$z_{\mathrm{d}} = 0.2-0.3 R_{\mathrm{d},\star}$. The stellar disk velocity
field is initialized by using a Gaussian with the local
velocity dispersion $\sigma_{\mathrm{z}}^{2}$ determined
from the potential and
density via the Jeans equations. The stellar disk rotational velocity
dispersion and streaming velocity are set using the epicyclic approximation
\citep[see \S 3.2.3 and \S 4.2.1(c) of ][]{binney1987a}.
The bulge velocity field is also modeled by a Gaussian
with a velocity dispersion determined from the Jeans equations
but with no net rotation. The gas disk is initialized as an
isothermal medium at a temperature of $T=10^{4}\mathrm{K}$ in
hydrostatic equilibrium with the total potential, including the
disk self-gravity, which determines the vertical distribution of the gas
self-consistently \citep[see][]{springel2005b}.
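The vertical initialization described above can be sketched with inverse-transform sampling of the $\mathrm{sech}^{2}$ profile plus Gaussian velocities at the Jeans-equation dispersion; the function names below are illustrative, and $\sigma_{z}$ is taken as a precomputed input rather than derived from the potential.

```python
import numpy as np

def sample_disk_heights(n, z_d, seed=0):
    """Inverse-transform sample of the sech^2(z / 2 z_d) vertical profile.
    The CDF is F(z) = (1 + tanh(z / 2 z_d)) / 2, so for u uniform on (0, 1)
    the height is z = 2 z_d * atanh(2u - 1)."""
    u = np.random.default_rng(seed).uniform(size=n)
    return 2.0 * z_d * np.arctanh(2.0 * u - 1.0)

def sample_vz(sigma_z, n, seed=1):
    """Gaussian vertical velocities with the local dispersion sigma_z
    (assumed here to come from a separate Jeans-equation calculation)."""
    return np.random.default_rng(seed).normal(0.0, sigma_z, size=n)
```

The sampled heights have median zero and mean absolute height $2 z_{\mathrm{d}} \ln 2$, a quick consistency check on the inverse CDF.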
The gravitational softening lengths for the dark matter and baryons
are set to $\epsilon_{\mathrm{DM}}=100h^{-1}{\mathrm{pc}}$ and $\epsilon_{\mathrm{baryon}}=50h^{-1}{\mathrm{pc}}$,
respectively.
The galaxy models are designed to roughly match the observed rotation curves,
and HI, $\mathrm{H}_{2}$, and stellar surface mass distributions of DDO154
($v_{\mathrm{circ}}\approx50\,\mathrm{km}\,\,\mathrm{s}^{-1}$),
M33 ($v_{\mathrm{circ}}\approx125\,\mathrm{km}\,\,\mathrm{s}^{-1}$), and NGC4501 ($v_{\mathrm{circ}}\approx300\,\mathrm{km}\,\,\mathrm{s}^{-1}$). These
systems cover a wide range in circular velocity, gas fractions, disk scale
lengths, molecular gas fractions, gas surface densities, and gas volume densities.
The parameters of the galaxy models are provided in Table \ref{table:models}.
\begin{deluxetable*}{lccccccccc}
\tiny
\tablecolumns{10}
\tablewidth{0pc}
\tablecaption{Galaxy Models}
\tablehead{
\multicolumn{10}{c}{Structural Parameters} \\
\cline{1-10}
\colhead{Galaxy} & \colhead{$V_{200}$} & \colhead{$v_{\mathrm{circ}}$} & \colhead{c} & \colhead{$M_{\mathrm{disk}}$} &
\colhead{$f_{\mathrm{gas}}$} & \colhead{$R_{\mathrm{d},\star}$} & \colhead{$R_{\mathrm{d},g}$} & \colhead{$M_{\mathrm{bulge}}$} & \colhead{$a_{\mathrm{b},\star}$}\\
\colhead{Analogue} & \colhead{$\,\mathrm{km}\,\,\mathrm{s}^{-1}$} & \colhead{$\,\mathrm{km}\,\,\mathrm{s}^{-1}$} & & \colhead{$h^{-1}M_{\sun}$} & & \colhead{$h^{-1}{\mathrm{kpc}}$} & \colhead{$h^{-1}{\mathrm{kpc}}$}
& \colhead{$h^{-1}M_{\sun}$}& \colhead{$h^{-1}{\mathrm{kpc}}$}
\label{table:models}}
\startdata
DDO 154 & 50 & 50 & 6 & $2.91\times10^{8}$ & 0.99 & 0.38 & 1.52 & \nodata & \nodata \\
M33 & 100 & 125 & 10 & $4.28\times10^{9}$ & 0.40 & 0.98 & 2.94 & \nodata & \nodata \\
NGC 4501 & 180 & 300 & 14 & $1.02\times10^{11}$ & 0.04 & 3.09 & 2.16 & $1.14\times10^{10}$ & 0.62 \\
\cline{1-10}
\multicolumn{10}{c}{Numerical Parameters}\\
\cline{1-10}
\colhead{Galaxy} & \colhead{$N_{\mathrm{DM}}$} & \colhead{$N_{\mathrm{d},\star}$} & \colhead{$N_{\mathrm{d},g}$} &
\colhead{$N_{\mathrm{bulge}}$} & \colhead{$\epsilon_{\mathrm{DM}}$} & \colhead{$\epsilon_{\mathrm{baryon}}$} & \colhead{$N_{\mathrm{neigh}}$} & \colhead{$N_{\mathrm{neigh}}m_{\mathrm{gas}}$} & \colhead{Data} \\
\colhead{Analogue} & & & &
& \colhead{$h^{-1}{\mathrm{kpc}}$} & \colhead{$h^{-1}{\mathrm{kpc}}$} & & \colhead{$h^{-1}M_{\sun}$} & \colhead{Refs.}\\
\cline{1-10}
DDO 154 & 120000 & 20000 & 400000 & \nodata & 0.1 & 0.05 & 64 & $4.60\times10^{4}$ &1,2,3,4,5\\
M33 & 120000 & 120000 & 400000 & \nodata & 0.1 & 0.05 & 64 & $2.74\times10^{5}$ &5,6,7\\
NGC 4501 & 120000 & 177600 & 400000 & 22400 & 0.1 & 0.05 & 64 & $6.51\times10^{5}$ &8,9,10,11,12\\
\enddata
\tablerefs{\small
(1) \cite{carignan1989a}; (2) \cite{hunter2004a}; (3) \cite{hunter2006a}; (4) \cite{lee2006a};
(5) \cite{mcgaugh2005a}; (6) \cite{corbelli2003a}; (7) \cite{heyer2004a}; (8) \cite{wong2002a};
(9) \cite{mollenhoff2001a}; (10) \cite{guhathakurta1988a}; (11) \cite{rubin1999a}; (12) \cite{boissier2003a}
}
\end{deluxetable*}
\subsubsection{DDO 154 Analogue}
\label{section:methodology:galaxy_models:ddo154}
DDO 154 is one of the most gas rich systems known and
therefore provides unique challenges to models of the ISM
in galaxies.
For the DDO 154 galaxy analogue, we used the \cite{carignan1989a}
HI map and rotation curves as the primary constraint on the mass
distribution. The total gas mass is
$M_{\mathrm{d},g} \approx 2.7\times10^{8}h^{-1}M_{\sun}$ \citep{carignan1989a,hunter2004a}.
The stellar disk mass has been estimated at
$M_{\mathrm{d},\star} \approx3.4\times10^{6}h^{-1}M_{\sun}$ \citep{lee2006a} with a
scale length of about $R_{\mathrm{d},\star} \approx 0.38h^{-1}{\mathrm{kpc}}$ at $\lambda=3.6\micron$.
The gaseous disk scale length was determined by approximating the HI surface
density profile, with $R_{\mathrm{d},g} \approx 1.52h^{-1}{\mathrm{kpc}}$ providing a reasonable
match to the HI data. These numbers are similar to those compiled by
\cite{mcgaugh2005a} for DDO 154 and, combined with the dark matter virial
velocity $V_{200}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ and concentration $c=6$, produce a
rotational velocity of
$v_{\mathrm{circ}}\approx50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (where $v_{\mathrm{circ}}$ is defined as the maximum of the rotation
velocity profile) similar to the value of $v_{\mathrm{circ}}\approx54\,\mathrm{km}\,\,\mathrm{s}^{-1}$
reported by \cite{karachentsev2004a}.
\subsubsection{M33 Analogue}
\label{section:methodology:galaxy_models:M33}
The nearby spiral M33 has well measured stellar, HI, $\mathrm{H}_{2}$, and
SFR distributions, and serves nicely as an example disk galaxy with
an intermediate rotational velocity.
For the M33 galaxy analogue, we used the \cite{heyer2004a}
HI and $\mathrm{H}_{2}$ maps as guidance for determining the
gas distribution, with a total gas mass of
$M_{\mathrm{d},g}\approx1.68\times10^{9}h^{-1}M_{\sun}$
\citep{corbelli2003a,heyer2004a}. According to these
data, the gas disk scale length is roughly $R_{\mathrm{d},g}\approx2.7h^{-1}{\mathrm{kpc}}$.
We set the total stellar disk mass to $M_{\mathrm{d},\star}\approx2.6\times10^{9}h^{-1}M_{\sun}$
to match the total baryonic mass of $M_{\mathrm{baryon}}\approx4.3\times10^{9}h^{-1}M_{\sun}$
reported by \cite{mcgaugh2005a}, and set the disk scale length to
$R_{\mathrm{d},\star}\approx0.9h^{-1}{\mathrm{kpc}}$.
These numbers are similar to those compiled by
\cite{mcgaugh2005a} for M33 and, combined with the dark matter virial
velocity $V_{200}=100\,\mathrm{km}\,\,\mathrm{s}^{-1}$ and concentration $c=10$, produce a
rotational velocity of
$v_{\mathrm{circ}}\approx125\,\mathrm{km}\,\,\mathrm{s}^{-1}$.
\begin{figure*}
\figurenum{3}
\epsscale{1.0}
\plotone{fig3.eps}
\caption{\label{fig:disks}
\small
Gas distribution in galaxies with circular velocities of $v_{\mathrm{circ}} = 300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (left column), $v_{\mathrm{circ}} = 125 \,\mathrm{km}\,\,\mathrm{s}^{-1}$ (middle column),
and $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (right column) for differing models of the ISM and star formation. Shown are an
atomic cooling model with a $T=10^{4}\mathrm{K}$ temperature floor and star formation based on the gas density above a
threshold of $n_{\mathrm{H}}=0.1\mathrm{cm}^{-3}$ (``GD-SF'', bottom row), an atomic+molecular cooling model with a $T=10^{2}\mathrm{K}$ temperature floor
and star formation based on the molecular gas density (``$\mathrm{H}_{2}$D-SF'', middle row), and
an atomic+molecular cooling model with a $T=10^{2}\mathrm{K}$
temperature floor, star formation based on the molecular gas density, and $\mathrm{H}_{2}$-destruction by an interstellar
radiation field (``$\mathrm{H}_{2}$D-SF+ISRF'', top row). The intensity of the images reflects the gas surface density and
is color-coded by the local effective gas temperature. The diffuse morphologies of low-mass galaxies in the GD-SF
and $\mathrm{H}_{2}$D-SF+ISRF models owe to the abundance of warm atomic gas with temperatures comparable to the velocity
scale of the local potential.
}
\end{figure*}
\subsubsection{NGC 4501 Analogue}
\label{section:methodology:galaxy_models:NGC4501}
The massive spiral NGC 4501 is a very luminous
disk galaxy \citep[$M_{K}=-25.38$,][]{mollenhoff2001a}
that is gas poor by a factor of $\sim3$ for
its stellar mass \citep[][]{giovanelli1983a,kenney1989a,blitz2006a},
and serves as an extreme contrast to the dwarf galaxy DDO 154.
To construct an analogue to NGC 4501, we used the
rotation curve data compiled by \cite{boissier2003a}
from the observations of \cite{guhathakurta1988a} and
\cite{rubin1999a} as a guide for designing the potential.
The stellar disk scale length $R_{\mathrm{d},\star}\approx3.1h^{-1}{\mathrm{kpc}}$
and bulge-to-disk ratio $M_{\mathrm{bulge}}/M_{\mathrm{disk}}= 0.112$ were
taken from the $K-$band observations of \cite{mollenhoff2001a},
and the total stellar mass was chosen to be consistent with
the \cite{bell2001a} $K-$band Tully-Fisher relation.
The gas distribution was modeled after the surface mass
density profile presented by \cite{wong2002a}.
These structural properties, combined with a
virial velocity $V_{200}=180\,\mathrm{km}\,\,\mathrm{s}^{-1}$ and dark matter concentration
$c=14$, provide a circular velocity of $v_{\mathrm{circ}}\approx300\,\mathrm{km}\,\,\mathrm{s}^{-1}$
appropriate for approximating the observed rotation curve.
\begin{figure*}
\figurenum{4}
\epsscale{1.1}
\plotone{fig4.eps}
\caption{\label{fig:fH2_vs_radius}
Molecular fraction as a function of radius in three simulated disk galaxies with ISM models with (light gray
lines) and without (dark gray lines) an interstellar radiation field. The general
trend is for molecular fraction to decrease and its dependence on radius
to steepen with decreasing mass of the system. This trend is more
pronounced in the model with an interstellar radiation field.
}
\end{figure*}
\begin{figure*}
\figurenum{5}
\epsscale{1.15}
\plotone{fig5.eps}
\caption{\label{fig:sfr}
Star formation rates (SFRs) as a function of time in isolated disks. Shown are the
SFRs for galaxy models with $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (right panel), $v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$
(middle panel), and $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (left panel) calculated from interstellar
medium models that allow for cooling at temperatures $T>10^{4}\mathrm{K}$ (black lines)
or
cooling at temperatures $T>10^{2}\mathrm{K}$ with (light gray lines) and without (dark gray lines) destruction of
$\mathrm{H}_{2}$ by a local interstellar radiation field (ISRF). The star formation efficiency of
the low-mass systems is strongly influenced by $\mathrm{H}_{2}$-destruction from the ISRF.
}
\end{figure*}
\section{Results}
\label{section:results}
The isolated evolution of each of the three
disk galaxy models is simulated for
$t=1.0\,\mathrm{Gyr}$ using the three separate ISM
and star formation models described in
\S \ref{section:methodology}: the model
$\mathrm{H}_{2}$D-SF with $\mathrm{H}_{2}$-based star formation,
the model $\mathrm{H}_{2}$D-SF+ISRF which in addition includes the
gas heating by local interstellar radiation,
and the GD-SF model with a $T=10^{4}\mathrm{K}$ temperature floor
and constant threshold density for star formation. In all
cases, the ISM of galaxies is assumed to have initial
metallicity of $10^{-2}\,\rm Z_{\odot}$ and is self-consistently
enriched by supernovae during the course of simulation.
\subsection{Galactic Structure and Evolution}
\label{section:results:structure}
The gas distributions in the simulated disk galaxies after $t=0.3\mathrm{Gyr}$ of
evolution in isolation are shown in Figure \ref{fig:disks}. The image
intensity reflects the disk surface density while the color indicates
the local mass-weighted temperature of the ISM. A variety of
structural properties of the galaxies are evident from the images,
including ISM morphologies qualitatively similar to those observed in
real galaxies. The large-scale structure of each disk model changes
slowly over
gigayear time-scales, excepting only the $\mathrm{H}_{2}$D-SF model for the
$v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ galaxy. In this case, the ISM
fragments owing to the large cold gas
reservoir that becomes almost fully molecular in the absence of
$\mathrm{H}_{2}$-destruction (see \S
\ref{section:results:total_gas_schmidt_law} and \S
\ref{section:results:molecular_gas_schmidt_law} for further discussion).
The simulated disks can develop spiral structure and
bar-like instabilities during their evolution,
especially when modeled with $T<10^4$~K coolants.
The detailed
structures of the spiral patterns and bar instabilities are likely
influenced by numerical fluctuations in the potential, but they
illustrate the role of spiral structure in the ISM of real galaxies.
As the ISM density is larger in these structures,
the equilibrium gas temperature is lower compared with the
average disk properties. These structures also contain higher
molecular fractions (see \S \ref{section:results:fH2_pressure})
and will be sites of enhanced star formation
in our ISM model, similar to real galaxies.
The low-mass systems evolved with the GD-SF model are more diffuse and have
less pronounced structure than those evolved with low-temperature coolants, owing
to the higher ($T>10^4$~K) ISM temperature (and hence pressure) in
high-density regions. The diffuse morphologies of low-mass galaxies
in the GD-SF and $\mathrm{H}_{2}$D-SF+ISRF models owe to the abundance of warm
atomic gas with thermal energy comparable to the binding energy of the
gas. The tendency of low-mass galaxy disks to be more
diffuse and extended than more massive systems owing to inefficient
cooling was shown by \cite{kravtsov2004a} and \cite{tassis2008a} to
successfully reproduce the faint end of the luminosity function of
galactic satellites and scaling relations of dwarf galaxies.
\cite{kaufmann2007a} used hydrodynamical simulations of the
dissipative formation of disks to demonstrate that a temperature
floor of $T=10^4$~K results in an equilibrium disk scale
height-to-scale length ratio in systems
with $v_{\mathrm{circ}}\sim40\,\mathrm{km}\,\,\mathrm{s}^{-1}$ that is roughly three times larger than in
galaxies with $v_{\mathrm{circ}}\sim80\,\mathrm{km}\,\,\mathrm{s}^{-1}$. The disk morphologies in our GD-SF runs are
qualitatively consistent with their results.
\subsubsection{Molecular Fraction vs. Disk Radius}
\label{section:results:fH2_vs_radius}
The variations in the molecular fraction $f_{\mathrm{H2}}$ as a function of the
gas volume density $n_{\mathrm{H}}$ discussed in \S
\ref{section:methodology:fH2_vs_density} map into $f_{\mathrm{H2}}$ gradients with
surface density or disk radius on the global scale of a galaxy model.
Figure \ref{fig:fH2_vs_radius} shows azimuthally-averaged molecular
fraction $f_{\mathrm{H2}}$ as a function of radius normalized to the disk scale length
in the simulated disk galaxies with
$v_{\mathrm{circ}}=50-300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ for molecular ISM models $\mathrm{H}_{2}$D-SF+ISRF and
$\mathrm{H}_{2}$D-SF, after $t=0.3\,\mathrm{Gyr}$ of evolution. The average molecular
fraction decreases with decreasing rotational velocity, from $f_{\mathrm{H2}} >
0.6$ within a disk scale length for the $v_{\mathrm{circ}} = 300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ system (left
panel) to $f_{\mathrm{H2}}<0.4$ within a disk scale length for the $v_{\mathrm{circ}} = 50
\,\mathrm{km}\,\,\mathrm{s}^{-1}$ system (right panel) in the $\mathrm{H}_{2}$D-SF+ISRF model. The radial
dependence of the molecular fraction steepens in smaller mass galaxies
relative to the $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ model since a larger fraction
of gas has densities below the
atomic-to-molecular transition.
The impact of this trend on the global star
formation relation is discussed below in \S
\ref{section:results:total_gas_schmidt_law}.
We note that details of the model initial conditions can
influence the radial distribution of the molecular gas by changing
the typical gas density in the disk. In particular, the $v_{\mathrm{circ}} = 125 \,\mathrm{km}\,\,\mathrm{s}^{-1}$
model galaxy has a more centrally concentrated molecular gas distribution than does
M33.
Figure
\ref{fig:fH2_vs_radius} clearly shows that the ISRF in the
$\mathrm{H}_{2}$D-SF+ISRF model steepens the dependence of molecular fraction
$f_{\mathrm{H2}}$ on the radius $R$ for each galaxy compared with the
$\mathrm{H}_{2}$D-SF model.
\subsubsection{Star Formation Histories}
\label{section:results:sfr}
The star formation histories of the simulated galaxies
provide a summary of the efficiency of
gas consumption for systems with different disk structures
and models for ISM physics. Figure \ref{fig:sfr} shows the
SFR over the time interval $t=0-1\,\mathrm{Gyr}$
for the isolated galaxy models evolved with the GD-SF,
$\mathrm{H}_{2}$D-SF, and $\mathrm{H}_{2}$D-SF+ISRF models. In most simulations, the SFR
experiences a general decline as the gas is slowly converted
into stars. Substantial increases in the SFR with time are
driven either by the formation of bar-like structures
(e.g., after $t\approx 0.7\mathrm{Gyr}$ for the $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$
galaxy) or, in one case,
fragmentation of the cold ISM (after $t\approx 0.4\mathrm{Gyr}$ for
the $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ galaxy evolved with the $\mathrm{H}_{2}$D-SF
model).
For the most massive galaxy,
differences in the
relative star formation efficiencies of models
that tie the SFR to the total gas density (the GD-SF model)
or to the molecular gas density (the $\mathrm{H}_{2}$D-SF and
$\mathrm{H}_{2}$D-SF+ISRF models) are not large. The ISM in the
massive galaxy is dense, and when $\mathrm{H}_{2}$ abundance is modeled
much of the gas becomes molecular
(see Figure \ref{fig:fH2_vs_radius}). When
$\mathrm{H}_{2}$-destruction is included the molecular content
of the ISM in this galaxy is reduced at large radii and
small surface densities, but much of the mass of the ISM (e.g., interior
to a disk scale length) retains a molecular fraction comparable
to the ISM model without $\mathrm{H}_{2}$-destruction. The
effect of the ISRF on the SFR of this galaxy is fairly small.
The SFR history of the intermediate-mass ($v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$) galaxy model
is more strongly influenced by the $\mathrm{H}_{2}$-destroying ISRF.
While the GD-SF and $\mathrm{H}_{2}$D-SF ISM models produce similar
star formation efficiencies, reflecting the substantial molecular fraction
in the ISM for the $\mathrm{H}_{2}$D-SF model, the effects of
$\mathrm{H}_{2}$-destruction in the $\mathrm{H}_{2}$D-SF+ISRF model
reduce the SFR by $\sim30-40\%$. The decline of the SFR with time in the
$\mathrm{H}_{2}$D-SF+ISRF model is also shallower than for the GD-SF or
$\mathrm{H}_{2}$D-SF models, reflecting a reservoir
of non-star-forming gas.
In each model, the SFR of the whole system
is similar to that estimated for M33
\citep[SFR$\sim0.25-0.7\,M_{\sun}{\mathrm{yr}}^{-1}$,][]{kennicutt1998a,hippelein2003a,blitz2006a,gardan2007a}.
The effects of $\mathrm{H}_{2}$-destruction play
a crucial role in determining the star
formation history of the $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ dwarf system.
The single phase ISM (GD-SF) model produces a
SFR history qualitatively similar to the
more massive galaxies, but in the
$\mathrm{H}_{2}$D-SF model the star formation
history changes dramatically. The colder ISM
temperature allows the gaseous disk to become
substantially
thinner in this case compared to the GD-SF model, leading
to higher densities and SFRs.
After $t\approx0.4\mathrm{Gyr}$ of evolution, the
cold, dense gas in the $\mathrm{H}_{2}$D-SF model
allows the ISM in the dwarf to undergo large-scale
instabilities and, quickly thereafter, fragmentation.
The dense knots of gas rapidly increase their
SFE compared with the
more diffuse ISM of the GD-SF model and
the SFR increases to a rate comparable to
that calculated for the $v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ model.
In the $\mathrm{H}_{2}$D-SF+ISRF model,
the equilibrium temperature of most of the diffuse (and metal-poor)
gas in the dwarf is increased and the
vertical structure of the disk becomes
thicker than in the $\mathrm{H}_{2}$D-SF model. The molecular content
of the disk is thus substantially reduced
(see the right panel of
Figure \ref{fig:fH2_vs_radius}) and the
disk remains diffuse. As a result, the
SFR drops to a low and roughly constant
level. The constancy of the SFR in the
$\mathrm{H}_{2}$D-SF+ISRF model is a result of
the large gas reservoir that remains
atomic and is therefore unavailable
for star formation. As the molecular gas,
which comprises a small fraction of the
total ISM in the dwarf ($f_{\mathrm{H2}}\lesssim0.1$
by mass),
is gradually consumed it is continually
refueled by the neutral gas
reservoir.
The star formation history of the dwarf galaxy system in the
$\mathrm{H}_{2}$D-SF+ISRF model demonstrates that physics other than energetic
supernova feedback can regulate star formation and produce long gas
consumption time scales in dwarf galaxies.
Star formation rates in
real galaxies similar to the $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ model fall in the wide
range $\mathrm{SFR}\sim10^{-4}-0.1M_{\sun}{\mathrm{yr}}^{-1}$ \citep[e.g.,][]{hunter2004a}.
While the $\mathrm{H}_{2}$D-SF+ISRF simulation of the dwarf galaxy produces a
steady $\mathrm{SFR}\sim0.025M_{\sun}{\mathrm{yr}}^{-1}$, we note that the galaxy DDO 154
that served as a model for the $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ system has a star
formation rate of only $\mathrm{SFR}\sim0.004M_{\sun}{\mathrm{yr}}^{-1}$
\citep{hunter2004a}.
\subsection{Star Formation Relations in Galactic Disks}
\label{section:results:sfr_laws}
The disk-averaged SK relation in galaxies, determined by
\cite{kennicutt1998a}, is well described by the power-law
\begin{eqnarray}
\Sigma_{\mathrm{SFR}} &=& (2.5\pm0.7)\times 10^{-4}\nonumber\\
&&\times \left(\frac{\Sigma_{\mathrm{gas}}}{1M_{\sun}{\mathrm{pc}}^{-2}}\right)^{1.4\pm0.15} M_{\sun} {\mathrm{yr}}^{-1}{\mathrm{kpc}}^{-2}.
\end{eqnarray}
\noindent
However, as discussed in the Introduction, spatially-resolved determinations of the
total gas SK relation slope vary in the range
$n_{\mathrm{tot}}=1.7-3.55$. In contrast,
the slope of the spatially-resolved molecular gas
SK relation is consistently measured to be
$n_{\mathrm{mol}}\approx1.4$ in the same systems, with a two-sigma
variation of $n_{\mathrm{mol}}\sim1.2-1.7$
\citep{wong2002a,boissier2003a,heyer2004a,boissier2007a,kennicutt2007a}.
Below, we examine SK relations in the simulated disks and compare
our results directly with these observations.
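For reference, the disk-averaged \cite{kennicutt1998a} relation above can be evaluated directly. The helper names in the following Python sketch are illustrative, and the consumption-time conversion ($1\,{\mathrm{kpc}}^{2}=10^{6}\,{\mathrm{pc}}^{2}$) is unit bookkeeping rather than part of the fit itself.

```python
def kennicutt98_sfr_density(sigma_gas, A=2.5e-4, n=1.4):
    """Disk-averaged Schmidt-Kennicutt relation of Kennicutt (1998):
    Sigma_SFR [Msun yr^-1 kpc^-2] = A * (Sigma_gas / 1 Msun pc^-2)^n."""
    return A * sigma_gas ** n

def gas_consumption_time(sigma_gas):
    """Implied gas consumption time in yr: Sigma_gas / Sigma_SFR.
    Sigma_gas is per pc^2 but Sigma_SFR is per kpc^2, so a factor
    of 1e6 pc^2 per kpc^2 is needed."""
    return sigma_gas * 1.0e6 / kennicutt98_sfr_density(sigma_gas)
```

For example, at $\Sigma_{\mathrm{gas}}=10\,M_{\sun}\,{\mathrm{pc}}^{-2}$ the implied consumption time is of order a Gyr, consistent with the long gas depletion times of normal disks.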
\begin{figure*}
\figurenum{6}
\epsscale{1.0}
\plotone{fig6.eps}
\caption{\label{fig:kennicutt.gas}
\small
Schmidt-Kennicutt relation for total gas surface densities. Shown is the SFR surface density
$\Sigma_{\mathrm{SFR}}$ as a function of the total gas surface density $\Sigma_{\mathrm{gas}}$ for galaxy models with circular
velocities of $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (right column), $v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (middle column), and $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$
(left column) for
ISM models that include atomic coolants (upper row), atomic+molecular coolants with (lower row)
and without (middle row)
$\mathrm{H}_{2}$-destruction from a local interstellar radiation field. The surface densities $\Sigma_{\mathrm{SFR}}$ and
$\Sigma_{\mathrm{gas}}$ are measured in annuli from the simulated disks after evolving for $t=0.3\,\mathrm{Gyr}$ (solid circles)
and $t=1\,\mathrm{Gyr}$ (open circles) in
isolation.
The disk-averaged data from \cite{kennicutt1998a} are shown for comparison (small dots).
The best power-law fits to the individual galaxies have indices in the range $\alpha\approx1.7-4.3$ (solid lines).
Deviations from the $\Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{1.5}$ relation are caused by the radial variation in
the molecular gas fraction $f_{\mathrm{H2}}$, the scale height of star-forming gas $h_{\mathrm{SFR}}$, and the scale height
of the total gas distribution $h_{\mathrm{gas}}$.
The total disk averaged surface densities (grey diamonds) are consistent with
the \cite{kennicutt1998a} normalization for normal, massive star-forming galaxies (dashed lines).
}
\end{figure*}
\begin{figure*}
\figurenum{7}
\epsscale{1.1}
\plotone{fig7.eps}
\caption{\label{fig:kennicutt.H2}
Schmidt-Kennicutt relation for molecular gas surface densities. Shown
is the star formation rate surface density $\Sigma_{\mathrm{SFR}}$ as a function
of the molecular gas surface density $\Sigma_{\mathrm{H2}}$ for galaxy models
with circular velocities of $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (right panels),
$v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (middle panels), and $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (left panels)
and ISM models that include atomic+molecular coolants with (lower
panels) and without (upper panels) $\mathrm{H}_{2}$-destruction from a local
interstellar radiation field. The surface densities $\Sigma_{\mathrm{SFR}}$ and
$\Sigma_{\mathrm{H2}}$ are measured in annuli (dark circles) from the simulated
disks after evolving for $t=0.3\mathrm{Gyr}$ (solid circles) and $t=1\mathrm{Gyr}$
(open circles) in isolation. The disk-averaged data from
\cite{kennicutt1998a} are shown for comparison (small dots). The best
power-law fits to the individual galaxies have indices in the range
$\alpha\approx1.2-1.5$ (solid lines). The total disk averaged surface
densities (grey diamonds) are consistent with the
\cite{kennicutt1998a} relation for normal, massive star-forming
galaxies (dashed lines). }
\end{figure*}
\subsubsection{The Total Gas Schmidt-Kennicutt Relation}
\label{section:results:total_gas_schmidt_law}
A dichotomy between the total gas and molecular gas SK
relations is currently suggested by the data and should be a feature
of models of the ISM and star formation. Figure
\ref{fig:kennicutt.gas} shows the simulated total gas
SK relation that compares the SFR
surface density $\Sigma_{\mathrm{SFR}}$ to the total gas surface density $\Sigma_{\mathrm{gas}}$
for different galaxy models. The quantities $\Sigma_{\mathrm{SFR}}$ and $\Sigma_{\mathrm{gas}}$
are azimuthally-averaged in annuli of width $\Delta r =150h^{-1}\,{\mathrm{pc}}$
for the $v_{\mathrm{circ}}=300\,\,\mathrm{km}\,\,\mathrm{s}^{-1}$ and $v_{\mathrm{circ}}=125\,\,\mathrm{km}\,\,\mathrm{s}^{-1}$ systems and $\Delta r
=75h^{-1}\,{\mathrm{pc}}$ for the smaller $v_{\mathrm{circ}}=50\,\,\mathrm{km}\,\,\mathrm{s}^{-1}$ disk. Regions within
the radius enclosing $95\%$ of the gas disk mass are shown. The
SK relation is measured in each disk at times
$t=0.3\,\mathrm{Gyr}$ and $t=1.0\,\mathrm{Gyr}$ to monitor its long-term evolution.
Power-law fits to the SFR density as a function of the total gas
surface mass densities in the range $\Sigma_{\mathrm{gas}}=5-100M_{\sun}{\mathrm{pc}}^{-2}$
measured at $t=0.3\,\mathrm{Gyr}$ are indicated in each panel, with the range
chosen to approximate regimes that are currently observationally
accessible. The simulated relations between $\Sigma_{\mathrm{SFR}}$ and $\Sigma_{\mathrm{gas}}$
are not strict power-laws, and different choices for the range of
surface densities or simulation time of the fit may slightly change
the inferred power-law index, with lower surface densities typically
leading to larger indices and, similarly, later simulation times
probing lower surface densities and correspondingly steeper indices.
Exceptions are noted where appropriate, but the conclusions of this
paper are not strongly sensitive to the fitting method of the
presented power-law fits. Disk-averaged SFR and surface mass
densities are also measured and plotted for comparison with the
non-starburst galaxies of \cite{kennicutt1998a}.
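A minimal sketch of this measurement procedure, binning particle quantities into annuli and fitting a log-log slope over a chosen surface-density range, might look as follows. The array names and the use of a simple least-squares fit are illustrative assumptions and not the exact analysis pipeline used here.

```python
import numpy as np

def annular_sk_relation(r, sfr, mgas, dr, fit_range=(5.0, 100.0)):
    """Azimuthally average SFR and gas mass in annuli of width dr, then
    fit Sigma_SFR = A * Sigma_gas^n over the given surface-density range.
    Hypothetical helper: r in pc, mgas in Msun, sfr in Msun/yr, so the
    surface densities come out per pc^2."""
    edges = np.arange(0.0, r.max() + dr, dr)
    area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)  # annulus areas
    idx = np.digitize(r, edges) - 1                    # annulus index per particle
    sigma_sfr = np.bincount(idx, weights=sfr, minlength=len(area)) / area
    sigma_gas = np.bincount(idx, weights=mgas, minlength=len(area)) / area
    # fit log10 Sigma_SFR = log10 A + n * log10 Sigma_gas in the fit range
    m = (sigma_gas > fit_range[0]) & (sigma_gas < fit_range[1]) & (sigma_sfr > 0)
    n, logA = np.polyfit(np.log10(sigma_gas[m]), np.log10(sigma_sfr[m]), 1)
    return sigma_gas, sigma_sfr, n, 10.0 ** logA
```

On synthetic data constructed to obey an exact $\Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{1.4}$ law, such a fit recovers the input slope and normalization, which serves as a sanity check on the annular binning.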
Each row of panels in Figure \ref{fig:kennicutt.gas} shows results
for the three galaxy models
evolved with different models for the
ISM and star formation.
For the traditional GD-SF model, the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ correlation
in massive, thin-disk systems
tracks the input relation for this model,
$\dot{\rho}_{\star}\propto\rho_{g}^{1.5}$, at intermediate surface
densities. At low surface densities, massive disks deviate from the
$\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}^{1.5}$ relation as the imposed density threshold
is increasingly probed in the disk interior and disk flaring begins to
steepen the relation between the volume density $\rho_{g}$
and the gas surface density $\Sigma_{\mathrm{gas}}$. In lower mass galaxies,
the disks are thicker at a given gas surface density, as dictated
by hydrostatic equilibrium \citep[e.g.,][]{kaufmann2007a}, with a
correspondingly lower average three-dimensional density. The SK
relation therefore steepens somewhat between galaxies with $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ and
$v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ in this star formation model, but does not fully
capture the mass-dependence of the SK relation
apparent in the observations.
The middle row of Figure \ref{fig:kennicutt.gas} shows the total gas
SK relation measured in disks evolved with the $\mathrm{H}_{2}$D-SF model,
which are generally similar to the more prescriptive GD-SF model in
terms of the relation slope. The arbitrary density threshold for star
formation included in the GD-SF model roughly mimics the physics of
the atomic-to-molecular gas transition calculated by Cloudy for the
$\mathrm{H}_{2}$D-SF model, as the molecular transition begins at densities of
$n_{\mathrm{H}}\sim0.1\mathrm{cm}^{-3}$ when the effects of $\mathrm{H}_{2}$-destruction by an ISRF are
\emph{not} included.
Note that the $\mathrm{H}_{2}$D-SF model allows gas to cool to considerably
lower temperatures than in the atomic cooling-only GD-SF model. This
allows the ISM gas in the $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ galaxy to become cold enough
to settle into a thin, dense disk and reach high molecular
fractions. Indeed, the increase in the molecular content of the disk
contributes to the increase in the SFE of the dwarf galaxy by $t=1$~Gyr,
seen in the middle right panel of
Figure~\ref{fig:kennicutt.gas}. However, observed
$v_{\mathrm{circ}}\approx50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ galaxies have \emph{very low} molecular
fractions, and the high molecular content of the dwarf at $t=1\,\mathrm{Gyr}$
is a potential shortcoming of the $\mathrm{H}_{2}$D-SF model.
The bottom row of Figure \ref{fig:kennicutt.gas} shows the
SK relation for the most complete ``$\mathrm{H}_{2}$D-SF+ISRF'' model.
In this model, the
SK relation power-law index increases systematically from
$n_{\mathrm{tot}}=2.2$ for the $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ galaxy to $n_{\mathrm{tot}}=4.3$ for the
$v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ dwarf. The decreased abundance of $\mathrm{H}_{2}$ at low
densities has steepened the SK relation at low gas
surface densities compared with either the GD-SF or $\mathrm{H}_{2}$D-SF
models. The ISM in the dwarf galaxy is prevented from becoming fully
molecular and maintains an extremely steep SK relation
over its evolution from $t=0.3\,\mathrm{Gyr}$ to $t=1\,\mathrm{Gyr}$. Consequently,
the global SFR of the model dwarf galaxy in this case
stays low and approximately constant. The global gas consumption time
scale is also much longer than in the other two models.
While the disk-averaged SK relation determinations for the galaxies
show the lowest SFE in the $\mathrm{H}_{2}$D-SF+ISRF model, the
values are still consistent with the observational determinations by
\cite{kennicutt1998a}, \cite{wong2002a}, \cite{boissier2003a}, and
\cite{boissier2007a}.
\subsubsection{The Molecular Gas Schmidt-Kennicutt Relation}
\label{section:results:molecular_gas_schmidt_law}
As discussed in \S \ref{section:results:sfr_laws}, the two-sigma
range in observational estimates of the molecular gas SK relation
power-law index is $n_{\mathrm{mol}}\approx1.2-1.7$, and is smaller than the
variation in the slope of the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ relation. Figure
\ref{fig:kennicutt.H2} shows the molecular gas SK relation measured in
galaxies evolved with the two molecular ISM models considered in this
paper. The SFR and molecular gas surface densities are measured from
the simulations as described in \S
\ref{section:results:total_gas_schmidt_law} and the reported power-law
indices were fit to molecular gas surface densities in the range
$\Sigma_{\mathrm{H2}}=0.1-100\,M_{\sun}\,{\mathrm{pc}}^{-2}$. Note that the correlation of
the SFR
surface density with the $\mathrm{H}_{2}$ surface
density is not an entirely trivial consequence of
our H$_2$-based implementation
of star formation, as we implement our star formation model
on the scale
of individual gas particles in three dimensions
and the observational correlations we consider are
azimuthal surface density averages in annuli. The two are not equivalent, as we
discuss in the next subsection \citep[see also][]{schaye2007a}.
The disk-averaged SFR and molecular
gas surface densities in the $\mathrm{H}_{2}$D-SF model are consistent with observations
of the molecular gas SK relation by
\cite{kennicutt1998a}. The molecular gas SK relation
power-law index of this model is similar for galaxies of different
masses, $n_{\mathrm{mol}}=1.4-1.5$. Unlike the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ relation, the
molecular gas SK relation is well-described by a
power-law over a wide range in surface densities in the $\mathrm{H}_{2}$D-SF
model. As discussed previously, the low-mass dwarf galaxy
experiences an increase in the SFE at a fixed
surface density during the evolutionary period between $t=0.3\,\mathrm{Gyr}$ and
$t=1.0\,\mathrm{Gyr}$ owing to a nearly complete conversion of its ISM
to molecular gas in the $\mathrm{H}_{2}$D-SF model.
Panels in the bottom row of Figure \ref{fig:kennicutt.H2} show
that the addition of a local interstellar radiation field
in the $\mathrm{H}_{2}$D-SF+ISRF model controls the amplitude of the
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{H2}}$ relation, but does not strongly affect its slope.
The normalization and power-law index of the molecular gas
SK relation in this model are quite stable with time,
reflecting how the equilibrium temperature and molecular fractions of
the ISM in the $\mathrm{H}_{2}$D-SF+ISRF model are maintained between simulation
times $t=0.3\,\mathrm{Gyr}$ and $t=1.0\,\mathrm{Gyr}$. Only the most massive galaxy
experiences noticeable steepening of the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{H2}}$
correlation at high surface densities. The disk-averaged observations
of the molecular gas SK relation by
\cite{kennicutt1998a} are well-matched by the simulations and remain
approximately constant over their duration.
While the scale height of the cold, dense gas in the disks is
typically comparable to our gravitational softening, the slope of the
molecular gas SK relation appears to be insensitive to modest changes
in the simulation resolution. Additional resimulations of the
calculations presented here with an increased resolution of
$N_{\mathrm{d,g}}=1.2 \times 10^{6}$ gas particles show that the slope
of the molecular gas SK relation changes by less than $5\%$.
\begin{figure*}
\figurenum{8}
\epsscale{1.15}
\plotone{fig8.eps}
\caption{\label{fig:new_sfr_law}
A predicted star formation correlation for disk galaxies. The star formation rate (SFR) density in the simulations
is calculated as $\dot{\rho}_{\star} = f_{\mathrm{H2}} \rho_{g} / t_{\star} \propto f_{\mathrm{H2}}(\rho_{g}) \rho_{g}^{1.5}$,
where $f_{\mathrm{H2}}$ is the molecular fraction, $\rho_{g}$ is the gas density, and $t_{\star}$ is a star formation
time-scale that varies with the local gas dynamical time. The local SFR surface density
should scale as $\Sigma_{\mathrm{SFR}}\propto\dot{\rho}_{\star}h_{\mathrm{SFR}}$, where $h_{\mathrm{SFR}}$ is the scale height of
star-forming gas. Similarly, $\Sigma_{\mathrm{gas}}\propto\rho_{g} h_{\mathrm{gas}}$, where $h_{\mathrm{gas}}$ is the total gas scale height.
The simulations predict that the SFR density in disks should correlate with the
gas surface density as $\Sigma_{\mathrm{SFR}} \propto f_{\mathrm{H2}} h_{\mathrm{SFR}} h_{\mathrm{gas}}^{-1.5} \Sigma_{\mathrm{gas}}^{1.5}$ (dashed lines).
}
\end{figure*}
\subsubsection{Structural Schmidt-Kennicutt Relation}
\label{section:results:structural_schmidt_law}
While the observational total and molecular gas SK
relations are well-reproduced by the simulated
galaxy models, local deviations from a power-law
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ or $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{H2}}$
correlation are abundant. Although these deviations
are physical, exhibited by real galaxies, and
are well-understood within the context of the
simulations, a question remains as to whether a
relation that trivially connects the
SFR density $\Sigma_{\mathrm{SFR}}$ and the properties of
the disks exists.
The primary relation between the local star formation
rate and the gas properties in 3D in the simulations
are encapsulated by Equation \ref{equation:star_formation_rate},
which relates the three-dimensional SFR
density to the
molecular gas density. We can relate the three-dimensional properties in the
ISM to
surface densities
through the characteristic scale heights of the disks.
The SFR surface density is related to an average
local SFR volume density
$\left<\dot{\rho}_{\star}\right>$ through the scale height
of star-forming gas $h_{\mathrm{SFR}}$ as
\begin{equation}
\Sigma_{\mathrm{SFR}} \propto \left< \dot{\rho}_{\star} \right> h_{\mathrm{SFR}}.
\end{equation}
\noindent
Here, the averaging of the three-dimensional density is understood
to operate over the region of the disk where the two-dimensional
surface density is measured.
Similarly, the gas surface and volume densities are related by
the proportionality
\begin{equation}
\Sigma_{\mathrm{gas}} \propto \left<\rho_{g}\right> h_{g}.
\end{equation}
\noindent
If Equation \ref{equation:star_formation_rate} is averaged over the scale on which
$\Sigma_{\mathrm{SFR}}$ is measured as
\begin{equation}
\left<\dot{\rho}_{\star} \right>\propto \left<f_{\mathrm{H2}}\right> \left<\rho_{g}\right>^{1.5},
\end{equation}
\noindent
then the proportionalities can be combined to give
\begin{equation}
\label{equation:structural_schmidt_law}
\Sigma_{\mathrm{SFR}} \propto \left<f_{\mathrm{H2}}\right> \frac{h_{\mathrm{SFR}}}{h_{g}^{1.5}} \Sigma_{\mathrm{gas}}^{1.5}.
\end{equation}
\noindent
Equation \ref{equation:structural_schmidt_law} can be interpreted
as a SK-like correlation, which depends {\it both\/}
on the molecular gas abundance and the structure of the disk.
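The exponent bookkeeping in Equation \ref{equation:structural_schmidt_law} is easy to verify numerically. The sketch below (Python, with purely illustrative constant values for $f_{\mathrm{H2}}$ and the scale heights, not quantities measured from the simulations) confirms that the predicted log-log slope reduces to 1.5 when $f_{\mathrm{H2}}$ and the scale heights are independent of $\Sigma_{\mathrm{gas}}$, and steepens when the molecular fraction varies with surface density.

```python
import numpy as np

# Check of the structural SK relation:
#   Sigma_SFR ∝ <f_H2> * h_SFR * h_g^-1.5 * Sigma_gas^1.5
# With f_H2 and scale heights constant, the log-log slope must be 1.5.
sigma_gas = np.logspace(0, 2, 50)          # M_sun / pc^2 (arbitrary grid)
f_H2, h_sfr, h_g = 0.5, 100.0, 300.0       # illustrative constants

sigma_sfr = f_H2 * h_sfr * h_g**-1.5 * sigma_gas**1.5
slope = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr), 1)[0]
print(round(slope, 3))  # 1.5

# If f_H2 instead rises as a power of Sigma_gas (f_H2 ∝ Sigma_gas^2.5 here,
# an assumed form for illustration), the effective slope steepens to
# 1.5 + 2.5 = 4.0, comparable to the steep dwarf-galaxy SK relations.
sigma_sfr_dwarf = (sigma_gas / sigma_gas.max())**2.5 * sigma_gas**1.5
slope_dwarf = np.polyfit(np.log10(sigma_gas), np.log10(sigma_sfr_dwarf), 1)[0]
print(round(slope_dwarf, 3))  # 4.0
```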
Figure \ref{fig:new_sfr_law} shows this expected structural SK
relation for the simulated disk galaxies. The star formation,
and surface densities are azimuthally-averaged in annuli, as before, and
limited to regions of the disk where star formation
is active.
The characteristic gas scale height
$h_{\mathrm{gas}}$ is determined as a
mass-weighted average of the absolute value of
vertical displacement for the annular gas distributions.
The characteristic star-forming gas scale height
$h_{\mathrm{SFR}}$ is determined in a similar
manner, but for gas that satisfies the condition
for star formation: namely, $f_{\mathrm{H2}}>0$ for the $\mathrm{H}_{2}$D-SF and
$\mathrm{H}_{2}$D-SF+ISRF models or $n_{\mathrm{H}}>0.1\,\mathrm{cm}^{-3}$ for the GD-SF model.
Measurements
from the simulations are presented after evolving the
systems for $t=0.3\,\mathrm{Gyr}$ with the GD-SF,
$\mathrm{H}_{2}$D-SF,
and $\mathrm{H}_{2}$D-SF+ISRF ISM models.
In each case, as expected, $\Sigma_{\mathrm{SFR}}$ in the disks linearly
correlates with
the quantity
$f_{\mathrm{H2}} h_{\mathrm{SFR}} h_{\mathrm{gas}}^{-1.5} \Sigma_{\mathrm{gas}}^{1.5}\,\,$\footnote{Small
deviations in the GD-SF model at low surface densities
owe to the sharp density threshold for star formation adopted
in this model \citep[see][for a related discussion]{schaye2007a}.}.
This relation shows explicitly that the behavior of the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$
relation can be understood in terms of the dependence of $f_{\mathrm{H2}}$ and
$h_{\mathrm{gas}}$ on $\Sigma_{\mathrm{gas}}$. For example, the SK
relation can have a
fixed power law $\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}^{1.5}$ when $f_{\mathrm{H2}}$ and
$h_{\mathrm{gas}}$ are roughly independent of $\Sigma_{\mathrm{gas}}$ \citep[e.g., M51a,][]{kennicutt2007a},
but then
steepen when the mass fraction of star forming gas becomes a strong function of gas
surface density in a manner characteristic of dwarf galaxies, large low
surface brightness galaxies, or the outskirts of normal
galaxies. Deviations from the $\Sigma_{\mathrm{gas}}^{1.5}$ relation can also
occur when $h_{\mathrm{gas}}$ changes quickly with decreasing $\Sigma_{\mathrm{gas}}$ in the
flaring outer regions of the disks. Such flaring, for example, is observed
for the Milky Way at $R\gtrsim 10$~kpc
\citep[e.g.,][and references therein]{wolfire2003a}.
\begin{figure}
\figurenum{9}
\epsscale{1.1}
\plotone{fig9.eps}
\caption{\label{fig:fH2_pressure}
\small
Molecular fractions as a function of external pressure in isolated disks. Shown
are the molecular-to-atomic gas surface density ratios $\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}}$
measured in radial bins as a function of the local external pressure $P_{\mathrm{ext}}$
defined by \cite{blitz2004a} for galaxies with $v_{\mathrm{circ}}\approx300\,\,\mathrm{km}\,\,\mathrm{s}^{-1}$
(circles), $v_{\mathrm{circ}}\approx125\,\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (triangles), and
$v_{\mathrm{circ}}\approx50\,\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (squares) circular velocities. The simulations that include $\mathrm{H}_{2}$-destruction
by an ISRF (lower panel) follow the observed $\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}}\propto P_{\mathrm{ext}}^{0.9}$ power-law scaling (dashed line),
while the simulations that do not include an ISRF (upper panel) have a weaker scaling than
that found observationally ($\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}}\sim P_{\mathrm{ext}}^{0.4}$, dotted line).
\\
}
\end{figure}
\subsection{Local Pressure, Stellar Radiation, and $\mathrm{H}_{2}$-Fraction}
\label{section:results:fH2_pressure}
\cite{elmegreen1993b} calculated the dependence of the
ISM molecular fraction $f_{\mathrm{H2}}$ on the local pressure
and radiation field strength, including the effects of $\mathrm{H}_{2}$
self-shielding and dust extinction. The calculations
of \cite{elmegreen1993b} suggested that
\begin{equation}
f_{\mathrm{H2}} \propto P_{\mathrm{ext}}^{2.2}j^{-1},
\end{equation}
\noindent
where $P_{\mathrm{ext}}$ is the external pressure and $j$ is the volume
emissivity of the radiation field.
\cite{wong2002a} used observations to
determine the correlation between molecular fraction $f_{\mathrm{H2}}$
and the hydrostatic equilibrium mid-plane pressure in galaxies, given by
\begin{equation}
P_{\mathrm{ext}}\approx\frac{\pi}{2} G \Sigma_{\mathrm{gas}}\left(\Sigma_{\mathrm{gas}} + \frac{\sigma}{c_{\star}}\Sigma_{\star}\right),
\end{equation}
where $c_{\star}$ is the stellar velocity dispersion and $\sigma$ is the gas
velocity dispersion
\citep[see][]{elmegreen1989a}.
From observations of seven disk galaxies and the Milky Way, \cite{wong2002a}
determined that the observed molecular fraction scaled with pressure as
\begin{equation}
\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}} \propto P_{\mathrm{ext}}^{0.8}.
\end{equation}
Subsequently,
\cite{blitz2004a} and \cite{blitz2006a}
studied these correlations in a larger
sample of 28 galaxies. \cite{blitz2006a}
found that if the mid-plane pressure was
defined as
\begin{eqnarray}
\label{equation:pext}
\frac{P_{\mathrm{ext}}}{k} &=& 272\,\,\mathrm{cm}^{-3}\,\mathrm{K}\,\left(\frac{\Sigma_{g}}{M_{\sun} {\mathrm{pc}}^{-2}}\right)\left(\frac{\Sigma_{\star}}{M_{\sun} {\mathrm{pc}}^{-2}}\right)^{0.5}\nonumber\\
&\times&\left(\frac{\sigma}{\,\mathrm{km}\,\,\mathrm{s}^{-1}}\right)\left(\frac{h_{\star}}{{\mathrm{pc}}}\right)^{-0.5},
\end{eqnarray}
\noindent
where $h_{\star}$ is the stellar scale height,
and if the local mass density was dominated by
stars,
then
the molecular fraction scaled with $P_{\mathrm{ext}}$ as
\begin{equation}
\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}} \propto P_{\mathrm{ext}}^{0.92}.
\end{equation}
\noindent
\cite{wong2002a} noted that if the
ISRF volume emissivity
$j\propto\Sigma_{\star}\propto\Sigma_{g}$
and the stellar velocity dispersion were constant,
then the \cite{elmegreen1993b} calculations
would imply a relation $f_{\mathrm{H2}}\propto P_{\mathrm{ext}}^{1.7}$,
much steeper than observed. \cite{blitz2006a}
suggested that if the stellar velocity dispersion
scales as $c_{\star}\propto\Sigma_{\star}^{0.5}$
and $j\propto\Sigma_{\star}$, then the predicted
scaling is closer to the observed scaling.
These assumptions provide $f_{\mathrm{H2}}\propto P_{\mathrm{ext}}^{1.2}$.
The relation between $\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}}$ and $P_{\mathrm{ext}}$
can be measured for the simulated disks.
Figure~\ref{fig:fH2_pressure} shows $\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}}$
as a function of $P_{\mathrm{ext}}$, measured from the simulations
according to Equation \ref{equation:pext} in annuli,
for simulated disks with (bottom panel) and without (top
panel) $\mathrm{H}_{2}$-destruction by an ISRF. The simulations
with an ISRF scale in a similar way to the systems
observed by \cite{blitz2006a}, while the simulated disks
without an ISRF have a weaker scaling. The simulated
galaxies have $c_{\star}\propto\Sigma_{\star}^{0.5}$ and
roughly constant gas velocity dispersions.
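For orientation, Equation \ref{equation:pext} is straightforward to evaluate directly. The sketch below uses roughly solar-neighborhood-like input values that are assumptions chosen for illustration, not quantities quoted in this paper, and shows the order of magnitude of $P_{\mathrm{ext}}/k$ involved.

```python
def pext_over_k(sigma_gas, sigma_star, sigma_vel, h_star):
    """Mid-plane pressure P_ext/k in cm^-3 K (the form of Equation pext):
    surface densities in M_sun/pc^2, gas velocity dispersion in km/s,
    stellar scale height in pc."""
    return 272.0 * sigma_gas * sigma_star**0.5 * sigma_vel * h_star**-0.5

# Illustrative, roughly solar-neighborhood values (assumed for this sketch):
p = pext_over_k(sigma_gas=10.0, sigma_star=35.0, sigma_vel=8.0, h_star=300.0)
print(f"{p:.0f} cm^-3 K")  # a few thousand cm^-3 K
```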
Given the scaling of the ISRF strength in the simulations, for which
$U_{\mathrm{isrf}}\propto\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}^{n_{\mathrm{tot}}}$ with $n_{\mathrm{tot}}\sim2-4$ according
to the total gas SK relations measured in
\S \ref{section:results:total_gas_schmidt_law},
and the scale-dependent SFE,
which will instill the proportionality $\Sigma_{\star}\propto\Sigma_{\mathrm{gas}}^{\beta}$
with $\beta\ne1$ in general,
the scaling between the molecular fraction
and pressure
$f_{\mathrm{H2}}\propto P_{\mathrm{ext}}^{0.9}$ is not guaranteed to
be satisfied in galaxies with greatly varying SK
relations.
However, given the relation $f_{\mathrm{H2}}\propto P_{\mathrm{ext}}^{2.2}j^{-1}$
calculated by \cite{elmegreen1993b}, a
general relation between the molecular fraction
and interstellar pressure can be calculated in
terms of the power-law index $n$ of the total
gas SK relation and the index $\beta$ that describes the
relative distribution of gas and stars in the disk.
Assuming a steady state, the
volume emissivity in the simulated ISM then
scales with the gas surface density
as
\begin{equation}
j \propto U_{\mathrm{isrf}} \propto \Sigma_{\mathrm{gas}}^{n_{\mathrm{tot}}}.
\end{equation}
\noindent
Given the ISRF strength in the simulations
and a proportionality of the stellar surface
density with the total gas surface density
of the form $\Sigma_{\star}\propto\Sigma_{\mathrm{gas}}^{\beta}$,
we find that, with
the scaling between the molecular
fraction, pressure, and ISRF volume emissivity calculated
by \cite{elmegreen1993b}, the $f_{\mathrm{H2}}-\Sigma_{\mathrm{gas}}$ relation
should follow
\begin{equation}
f_{\mathrm{H2}} \propto \Sigma_{\mathrm{gas}}^{2.2(1+\beta/2) - n}.
\end{equation}
\noindent
If the molecular fraction scales with the mid-plane
pressure as $f_{\mathrm{H2}}\propto P^{\alpha}$, then
the index $\alpha$ can be expressed in terms of
the power-law dependences of the SK relation
and the $\Sigma_{\star}-\Sigma_{\mathrm{gas}}$ correspondence as
\begin{equation}
\label{equation:general_fH2_pressure}
\alpha = \frac{2.2(1+\beta/2) - n}{1+\beta/2}.
\end{equation}
\noindent
The largest galaxy simulated, with a circular velocity
of $v_{\mathrm{circ}}\approx 300 \,\mathrm{km}\,\,\mathrm{s}^{-1}$, has a total gas SK
relation with a scaling $\Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{2}$ (see Figure
\ref{fig:kennicutt.gas}) and a nearly linear scaling
of the stellar and gas surface densities ($\beta\approx0.97$).
According to Equation \ref{equation:general_fH2_pressure},
the power-law index for this system is then expected to be
$\alpha=0.85$. At the opposite end of the
mass spectrum, the $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ dwarf galaxy, which
obeys a very different SK relation ($n_{\mathrm{tot}}\approx4$) and
$\Sigma_{\star}-\Sigma_{\mathrm{gas}}$ scaling ($\beta\sim4$), has an expected
$\alpha\approx0.87$.
These $\alpha$ values are remarkably close to the relation observed
by \cite{blitz2006a}, who find $\alpha\approx0.92$.
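The arithmetic behind these quoted indices follows directly from Equation \ref{equation:general_fH2_pressure} and is simple to verify; a minimal sketch, using the $(n,\beta)$ pairs quoted above:

```python
def alpha_fH2_pressure(n, beta):
    """Power-law index of f_H2 ∝ P_ext^alpha, per Equation
    (general_fH2_pressure): alpha = (2.2*(1 + beta/2) - n) / (1 + beta/2)."""
    return (2.2 * (1.0 + beta / 2.0) - n) / (1.0 + beta / 2.0)

print(round(alpha_fH2_pressure(n=2.0, beta=0.97), 2))  # 0.85 (v_circ ~ 300 km/s)
print(round(alpha_fH2_pressure(n=4.0, beta=4.0), 2))   # 0.87 (v_circ ~ 50 km/s)
```

Despite very different $(n,\beta)$, both systems land near the same $\alpha$, which is the compensation discussed below.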
That the ISRF model
reproduces the observed correlation in galaxies of different
masses then owes to the mutually compensating effects of the scaling
of $j$ with $\Sigma_{\mathrm{gas}}$ and the $\Sigma_{\star}-\Sigma_{\mathrm{gas}}$ scaling.
For the simulations that do not include an ISRF, the
scaling between molecular fraction and pressure must
be independent of the ISRF volume emissivity. We
find that $f_{\mathrm{H2}} \sim \Sigma_{\mathrm{gas}}^{0.6}$ in these galaxies,
with significant scatter, which provides the much weaker
relation $\Sigma_{\mathrm{H2}}/\Sigma_{\mathrm{HI}}\sim P_{\mathrm{ext}}^{0.4}$. The
simulations without an ISRF have $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ scalings
similar to this estimate or intermediate between this
estimate and the observed relation.
The good agreement between the $f_{\mathrm{H2}}-P_{\mathrm{ext}}$
scaling in the simulations with an ISRF and the
observations provides an {\it a posteriori\/} justification
for the chosen scaling of the ISRF strength in the
simulations (Equation \ref{equation:uisrf}), even as
the ISRF strength was physically motivated by the
generation of soft UV photons by young stellar populations.
The robustness
of the $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ relation may be a consequence of the
regulatory effects of $\mathrm{H}_{2}$-destruction
by the ISRF, which motivated in part the original
calculations by \cite{elmegreen1993b}, in concert
with the influence of the external pressure \citep{elmegreen1994a}.
However, we note that systems with a steep total gas SK relation,
such as the $v_{\mathrm{circ}}\sim50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ and $v_{\mathrm{circ}}\sim125\,\mathrm{km}\,\,\mathrm{s}^{-1}$
systems, do break from the $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ relation in
the very exterior of the disk where the molecular fraction
quickly declines.
Observations of M33, the real galaxy analogue to the $v_{\mathrm{circ}}\sim125\,\mathrm{km}\,\,\mathrm{s}^{-1}$
simulated galaxy,
indicate that the system may deviate from the
observed $f_{\mathrm{H2}}-P_{\mathrm{ext}}$
relation determined at higher surface densities in the very exterior
of the disk
\citep{gardan2007a}.
The surface density of M33 in these exterior regions is dominated by
gas rather than stars, and the \cite{blitz2006a} relation may not
be expected to hold under such conditions.
The high-surface density regions of M33
do satisfy the $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ relation \citep{blitz2006a}, as
does the simulated M33 analogue.
We delay a more thorough examination of the exterior disk regions
until further work.
Recently, \citet{booth2007a} presented a model of star formation based
on a subgrid model for the formation of molecular clouds from
thermally unstable gas in a multiphase ISM. In their model, molecular
clouds are represented as ballistic particles which can coagulate
through collisions. This is quite different from the treatment of ISM
in our model, as we do not attempt to model molecular clouds on small
scales via a subgrid model, but calculate the equilibrium abundance of
molecular hydrogen using the local gas properties. \citet{booth2007a}
show that their simulations also reproduce the molecular
fraction-pressure relation of \citet{blitz2004a,blitz2006a}. However,
the interpretation of this relation in their model must be different
from our interpretation involving the relation between the gas surface
density, the stellar surface density, and the ISRF strength, because
their model does not include the dissociating effects of the interstellar
radiation field.
In addition, in contrast to our model, the global SK relation in
simulations of \citet{booth2007a} does not show a break or steepening
down to very small surface densities ($\Sigma\approx 10^{-2}{\rm\
M_{\odot}\,pc^{-2}}$; see their Fig. 15). Given the differences, it
will be interesting to compare results of different models of
$\mathrm{H}_{2}$-based star formation in more detail in future studies.
\begin{figure*}
\figurenum{10}
\epsscale{1}
\plotone{fig10.eps}
\caption{\label{fig:sfr_rotation}
Star formation rate surface density $\Sigma_{\mathrm{SFR}}$ as a function of the total gas
surface density consumed per orbit $C_{\Sigma\Omega}\Sigma_{\mathrm{gas}}\Omega$ (left panel) and molecular gas
surface density consumed per orbit $C_{\Sigma\Omega}\Sigma_{\mathrm{H2}}\Omega$ (right panel), where $\Omega$ is
the orbital frequency and the efficiency $C_{\Sigma\Omega}=0.017$ was determined from
observations by \cite{kennicutt1998a} (represented by the {\it dashed\/} line which extends
over the range probed by observations). Shown are the local values of $\Sigma_{\mathrm{gas}}$,
$\Sigma_{\mathrm{H2}}$ and $\Omega$ in annuli for galaxy models with circular velocities of
$v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (squares), $v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (triangles), and $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$
(circles) evolved with atomic+molecular cooling and destruction of $\mathrm{H}_{2}$
by a local interstellar radiation field for $t=0.3\,\mathrm{Gyr}$. The star formation rate
in the simulations display the correlations $\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{H2}}\Omega$ and, in the
high surface density region where the molecular gas and total gas densities are comparable,
$\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}\Omega$.
}
\end{figure*}
\subsection{Star Formation and Angular Velocity}
\label{section:results:sfr_rotation}
The SFR surface density $\Sigma_{\mathrm{SFR}}$ has been shown to
correlate with the product of the gas surface density $\Sigma_{\mathrm{gas}}$ and
the disk angular frequency $\Omega$ as
\begin{equation}
\Sigma_{\mathrm{SFR}} \simeq C_{\Sigma\Omega} \Sigma_{\mathrm{gas}}\Omega
\end{equation}
\noindent
with the constant $C_{\Sigma\Omega}=0.017$
\citep{kennicutt1998a} and where the angular
frequency is defined as
\begin{equation}
\Omega^{2} = \frac{1}{R}\frac{\mathrm{d} \Phi}{\mathrm{d} R}
\end{equation}
\citep[e.g., \S 3.2.3 of ][]{binney1987a}. Star formation relations
of this form have been forwarded to connect the SFR to
the cloud-cloud collision time-scale
\citep[e.g.,][]{wyse1986a,wyse1989a,tan2000a}, to tie the gas
consumption time-scale to the orbital time-scale
\citep[e.g.,][]{silk1997a,elmegreen1997a}, or to reflect the
correlation between the ISM density and the tidal density
\citep{elmegreen2002a}. These ideas posit that an underlying
$\Omega\propto\sqrt{\rho}$ scaling provides a reasonable physical
interpretation of the $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$ correlation.
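The normalization of this relation implies quite low SFR surface densities at typical disk parameters. A minimal sketch (with assumed, Milky-Way-like input values, not numbers taken from the simulations) evaluates $\Sigma_{\mathrm{SFR}}\approx C_{\Sigma\Omega}\Sigma_{\mathrm{gas}}\Omega$ for a flat rotation curve, where $\Omega=v_{\mathrm{circ}}/R$:

```python
KM_PER_KPC = 3.0857e16   # kilometers per kiloparsec
SEC_PER_YR = 3.156e7     # seconds per year

def sfr_density(sigma_gas, v_circ, radius_kpc, c=0.017):
    """Sigma_SFR ~ C * Sigma_gas * Omega in M_sun yr^-1 kpc^-2, assuming a
    flat rotation curve (Omega = v_circ / R).  sigma_gas in M_sun/pc^2,
    v_circ in km/s, radius in kpc."""
    omega = v_circ / (radius_kpc * KM_PER_KPC) * SEC_PER_YR  # yr^-1
    return c * (sigma_gas * 1.0e6) * omega  # 1e6 pc^2 per kpc^2

# Illustrative solar-circle-like values (assumed for this sketch):
print(f"{sfr_density(sigma_gas=10.0, v_circ=220.0, radius_kpc=8.0):.1e}")
# 4.8e-03  (M_sun yr^-1 kpc^-2)
```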
Figure \ref{fig:sfr_rotation} shows the SFR surface
density $\Sigma_{\mathrm{SFR}}$ as a function of the total gas surface density
consumed per orbit $C_{\Sigma\Omega}\Sigma_{\mathrm{gas}}\Omega$ and molecular gas surface density
consumed per orbit $C_{\Sigma\Omega}\Sigma_{\mathrm{H2}}\Omega$, compared to the observed
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$ relation of \cite{kennicutt1998a}. The
local values of $\Sigma_{\mathrm{gas}}$ and $\Sigma_{\mathrm{H2}}$ are measured in radial
annuli from the gas properties, while $\Omega$ is measured from the
rotation curve at the same location.
The SFR surface density scales as $\Sigma_{\mathrm{gas}}\Omega$ at
$\Sigma_{\mathrm{SFR}}\gtrsim 5\times 10^{-3}\,\mathrm{M}_{\odot}\,\mathrm{yr}^{-1}\,\mathrm{kpc}^{-2}$.
The offset between the disk-averaged $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$
relation found by \cite{kennicutt1998a} and the simulated relation may
owe in part to the typically higher values of $\Omega$ in the disk
interior that can affect the normalization of the azimuthally-averaged
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$ relation. This offset is
comparable to the observed offset between the disk-averaged relation
of \cite{kennicutt1998a} and the spatially-resolved
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$ relation found in M51a by
\cite{kennicutt2007a}. As in the previous sections, the
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{H2}}\Omega$ scaling holds throughout the disk, while the
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$ scaling steepens significantly
at lower surface densities where the
molecular fraction is declining.
That the SFR in the simulated disks
correlates well with the quantity $\Sigma_{\mathrm{gas}}\Omega$, even though the
SFR in the simulations is determined
directly from the {\it local} molecular gas density and
dynamical time without direct knowledge of $\Omega$, may
be surprising.
The correlation suggests that the local gas density
should scale with $\Omega$,
although the physical reason for such a correlation in our
simulations is not immediately obvious.
While previous examinations of the
$\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}\Omega$ correlation have developed
explanations of why $\Omega \propto \sqrt{\rho_{d}}$, these
simulations have a coarser treatment of the ISM than is
invoked in such models;
the molecular fraction and SFR
are primarily functions of the local density.
The $\Omega\sim\sqrt{\rho_{d}}$ correlation should then
reflect the structural properties of the
galaxy models rather than the detailed properties of the
ISM that are somehow influenced by global processes in the
disk.
The galaxy models used in the simulations remain roughly
axisymmetric after their evolution, modulo spiral structure
and disk instabilities. Hence, the fundamental equation
that describes the gravitational
potential $\Phi$ of the galaxies is the Poisson
equation
\begin{equation}
\label{equation:poisson}
\frac{\mathrm{d}^{2} \Phi}{\mathrm{d} z^{2}} + \frac{\mathrm{d}^{2} \Phi}{\mathrm{d} R^2} + \Omega^{2} = 4\pi G \rho_{\mathrm{total}},
\end{equation}
\noindent
where the density $\rho_{\mathrm{total}}$ reflects the total density
of the multicomponent system. If the form of the potential generated
by $\rho_{\mathrm{total}}$ has the property that $\Omega^2 =
(1/R)\mathrm{d}\Phi/\mathrm{d} R \propto \rho_{d}$, then the quantity
$\Sigma_{\mathrm{gas}}\Omega$ will mimic the scaling of $\Sigma_{\mathrm{SFR}}$. In the
second Appendix, we examine the solutions for $\Omega$ determined from
Equation \ref{equation:poisson} for the limiting cases of locally
disk-dominated ($\rho_{\mathrm{total}}\approx\rho_{\mathrm{disk}}$)
and halo-dominated ($\rho_{\mathrm{total}}\approx\rho_{\mathrm{DM}}$)
potentials, and show that typically $\Omega \propto \sqrt{\rho_{d}}
B(\rho_{d})$ where $B(\rho_{d})\sim\mathcal{O}(1)$ is a weak function
of density if the disk density is exponential with radius. This
behavior suggests that whether the disk dominates the local potential,
as is the case for massive galaxies, or the dark matter halo dominates
the potential as it does in low-mass dwarfs, the angular frequency
scales as $\Omega\propto\sqrt{\rho_{d}}$ throughout most of the disk.
Based on these calculations, we conclude that the correlation
$\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}\Omega$ in the simulations is likely a
consequence of the density-dependent star formation relation (Equation
\ref{equation:star_formation_rate}) and the galaxy structure inducing
the correlation $\Omega\propto\sqrt{\rho_{d}}$.
This
correlation may be established during the formation of the exponential
disk in the extended dark matter halo, and the correlation observed in
real galaxies therefore may have a similar origin. Star formation may
then correlate with $\Sigma_{\mathrm{gas}}\Omega$, but not be
directly controlled or
influenced by the global kinematics of the disk.
\begin{figure}
\figurenum{11}
\epsscale{1.15}
\plotone{fig11.eps}
\caption{\label{fig:crit_mu}
\small
Stability parameters for
two-fluid Toomre instability
($Q_{\mathrm{sg}}^{-1}$, bottom panel) or shear
instability ($\Sigma_{\mathrm{gas}}/\Sigma_{A}$, top panel)
as a function of
radius normalized to the radius $R_{\mathrm{SFR}}$ containing
$99.9\%$ of the total active disk star formation. Shown are the
parameters $Q_{\mathrm{sg}}^{-1}$ and $\Sigma_{\mathrm{gas}}/\Sigma_{A}$ for galaxy models with
circular velocities
of $v_{\mathrm{circ}} = 50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (squares, light gray lines),
$v_{\mathrm{circ}} = 125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (triangles, dark gray lines),
and $v_{\mathrm{circ}} = 300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (circles, black lines). The proportionality constants
$\alpha_{Q}=0.69$ and $\alpha_{A}=2$ are taken to match
the values used by \cite{martin2001a}. For each
simulated galaxy, the parameter $Q_{\mathrm{sg}}^{-1}$
approaches the transition to stability at the radius $R_{\mathrm{SFR}}$
beyond which only very little star formation operates.
An increase in the stability against growing shearing modes also
occurs near $R_{\mathrm{SFR}}$ for the simulated disks, but
the transition is less uniform than for Toomre instability.
\\
}
\end{figure}
\subsection{Star Formation and Critical Surface Density Thresholds}
\label{section:results:critical_surface_densities}
Disk surface density, dynamical
instabilities, and the formation of molecular clouds and
stars may be connected.
A single-component thin disk will experience growing axisymmetric
instabilities if its surface mass density exceeds the critical
value $\Sigma_{Q}$, given by
\begin{equation}
\label{equation:sigma_crit_Q}
\Sigma_{Q}(\sigma) \approx \frac{\alpha_{Q} \kappa \sigma}{\pi G},
\end{equation}
\noindent
where $\kappa$ is the epicyclical frequency
\begin{equation}
\kappa^2 = 2\left(\frac{v^2}{R^2} + \frac{v}{R}\frac{\mathrm{d} v}{\mathrm{d} R}\right),
\end{equation}
\noindent
$\sigma$ is the characteristic velocity dispersion
of the fluid,
and the parameter
$\alpha_{Q}\sim\mathcal{O}(1)$
\citep{safronov1960a,toomre1964a}.
For a two component system, consisting of stars with surface
mass density $\Sigma_{\star}$ and velocity dispersion
$c_{\star}$, and gas with
surface mass density $\Sigma_{\mathrm{gas}}$ and velocity dispersion
$\sigma$, the instability
criterion for axisymmetric modes with a wavenumber $k$ can
be expressed in terms of the normalized wavenumber
$q\equiv k c_{\star}/\kappa$ and the ratio of
velocity dispersions $R\equiv\sigma/c_{\star}$ as
\begin{equation}
\label{equation:general_toomre_criterion}
Q_{\mathrm{sg}}^{-1} \equiv \frac{2}{Q_{\mathrm{s}}}\frac{q}{1+q^{2}} + \frac{2}{Q_{\mathrm{g}}} R\frac{q}{1+q^{2}R^{2}} > 1
\end{equation}
\noindent
\citep{jog1984a,rafikov2001a}, where $Q_{\mathrm{s}} = \Sigma_{Q}(c_{\star})/\Sigma_{\star}$
and $Q_{\mathrm{g}} =\Sigma_{Q}(\sigma)/ \Sigma_{\mathrm{gas}}$.
The most unstable wavenumber $q$ can be determined by maximizing
this relation.
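One simple way to locate the most unstable wavenumber is a brute-force maximization of Equation \ref{equation:general_toomre_criterion} over $q$; this sketch is an illustration, not necessarily the procedure used for the figures. A useful sanity check: two identical fluids ($R=1$), each individually stable with $Q_{\mathrm{s}}=Q_{\mathrm{g}}=2$, are jointly marginally unstable, since their combined surface density gives an effective $Q=1$.

```python
import numpy as np

def q_sg_inv(q, Q_s, Q_g, R):
    """Two-fluid instability parameter of Equation
    (general_toomre_criterion), evaluated at normalized wavenumber q."""
    return (2.0 / Q_s) * q / (1.0 + q**2) \
         + (2.0 / Q_g) * R * q / (1.0 + (q * R)**2)

def q_sg_inv_max(Q_s, Q_g, R, qgrid=np.linspace(1e-3, 20.0, 200000)):
    """Maximize over q on a dense grid (brute force; adequate for a sketch)."""
    return q_sg_inv(qgrid, Q_s, Q_g, R).max()

# Two identical fluids, each with Q = 2, are together exactly marginal:
print(round(q_sg_inv_max(2.0, 2.0, 1.0), 3))  # 1.0
```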
\cite{elmegreen1993a}
considered another
critical surface density $\Sigma_{A}$, given by
\begin{equation}
\label{equation:sigma_crit_A}
\Sigma_{A} \approx \frac{\alpha_{A} A \sigma}{\pi G}
\end{equation}
\noindent
where the parameter $\alpha_{A}\sim\mathcal{O}(1)$,
and
the Oort constant $A$ is
\begin{equation}
A = -0.5R\frac{\mathrm{d} \Omega}{\mathrm{d} R}
\end{equation}
\noindent
\citep[e.g.,][]{binney1987a}, above which density perturbations
can grow through self-gravity before being disrupted by shear.
This instability criterion has been shown to work well in
Milky Way spiral arms \citep[e.g.,][]{luna2006a} and
ring galaxies \citep{vorobyov2003a}. In regions of low shear, the
magneto-Jeans instability may induce the growth of perturbations
\citep[e.g.,][]{kim2002a}.
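Both critical surface densities follow directly from the rotation curve; the sketch below (the unit system and the default $\alpha$ values are our assumptions, chosen to match the text) encodes the definitions above:

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc (km/s)^2 / Msun

def kappa(R, v, dv_dR):
    """Epicyclic frequency, kappa^2 = 2(v^2/R^2 + (v/R) dv/dR)."""
    return np.sqrt(2.0 * (v**2 / R**2 + (v / R) * dv_dR))

def oort_A(R, v, dv_dR):
    """Oort constant A = -0.5 R dOmega/dR, with Omega = v/R."""
    return -0.5 * R * (dv_dR / R - v / R**2)

def sigma_crit_Q(R, v, dv_dR, sigma, alpha_Q=0.69):
    """Toomre critical surface density alpha_Q kappa sigma / (pi G)."""
    return alpha_Q * kappa(R, v, dv_dR) * sigma / (np.pi * G)

def sigma_crit_A(R, v, dv_dR, sigma, alpha_A=2.0):
    """Shear critical surface density alpha_A A sigma / (pi G)."""
    return alpha_A * oort_A(R, v, dv_dR) * sigma / (np.pi * G)

# For a flat rotation curve (dv/dR = 0): kappa = sqrt(2) v/R, A = 0.5 v/R.
print(kappa(8.0, 220.0, 0.0) / (220.0 / 8.0))   # ~1.414
print(oort_A(8.0, 220.0, 0.0) / (220.0 / 8.0))  # 0.5
```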
Observational
evidence for connection between global disk instabilities
and star formation has been a matter
of considerable debate \citep[e.g.,][]{kennicutt1989a,martin2001a,hunter1998a,boissier2003a,boissier2007a,kennicutt2007a,yang2007a}. In particular, the existence of global star formation thresholds related
to Toomre instability \citep[e.g.,][]{kennicutt1989a,martin2001a} has been
challenged \citep{boissier2007a}. It is
thus interesting to use
model galaxies to explore the role of critical surface densities
in star formation. While proper radiative transfer
calculations are needed to assign either H$\alpha$ or UV emissivities
to the simulated disk galaxies, an indirect comparison between
simulations and observations can be made by measuring surface mass
densities, velocity dispersions, dynamical properties, and a
characteristic extent of star formation in the model disks. These
comparisons are presented in Figure \ref{fig:crit_mu}, where the
instability parameters $Q_{\mathrm{sg}}^{-1}$ and $\Sigma_{\mathrm{gas}}/\Sigma_{A}$ are
plotted as a function of radius in the disk galaxies simulated with
the $\mathrm{H}_{2}$D-SF+ISRF molecular ISM and star formation model. For
comparison with the two-fluid Toomre axisymmetric instability
criterion, the parameter $Q_{\mathrm{sg}}$ is measured by determining the locally
most unstable wavenumber $q$ from $Q_{\mathrm{s}}$ and $Q_{\mathrm{g}}$ with $\alpha_{Q}=0.69$
to approximate the observational results of \cite{kennicutt1989a} and
\cite{martin2001a}. When examining the shearing instability
criterion, the parameter $\alpha_{A}=2$ is chosen for comparison with
the observational estimation of critical surface densities by
\cite{martin2001a} and only the gas surface density is utilized.
A ``critical'' radius
$R_{\mathrm{SFR}}$ containing $99.9\%$ of the total star formation
in each simulated disk is measured and used as a proxy
for the star formation threshold radius observationally
determined from, e.g., the H$\alpha$ emission.
The shearing instability criterion (upper panel of Figure \ref{fig:crit_mu})
is supercritical ($\Sigma_{\mathrm{gas}}/\Sigma_{A}>1$) over the extent of star formation
in the low-mass galaxy, but the larger galaxies are either marginally supercritical
or subcritical. Given
the large gas content in this dwarf system and that this comparison only uses
the gas surface mass density $\Sigma_{\mathrm{gas}}$, the result is unsurprising.
While the $v_{\mathrm{circ}}\sim125\,\mathrm{km}\,\,\mathrm{s}^{-1}$
and $v_{\mathrm{circ}}\sim300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ galaxies are subcritical at
$R=R_{\mathrm{SFR}}$, we note that
there is still a noticeable drop in $\Sigma_{\mathrm{gas}}/\Sigma_{A}$
near $R/R_{\mathrm{SFR}}\sim1$ for these systems.
Interestingly, with the chosen definition of $R_{\mathrm{SFR}}$, the bottom panel of
Figure \ref{fig:crit_mu} shows that the two-component Toomre
instability parameter $Q_{\mathrm{sg}}$ is a remarkably accurate indicator of
where the majority of star formation operates in the simulated disk galaxies.
Each galaxy approaches the $Q_{\mathrm{sg}}\sim1$ transition near
$R_{\mathrm{SFR}}$, demonstrating that much of the star formation is operating
within Toomre-unstable regions of the disks. These results appear most consistent
with the observations by \cite{kennicutt1989a}, \cite{martin2001a}, and \cite{yang2007a},
who find that star formation operates most efficiently in regions that are gravitationally
unstable.
Our simulations are also qualitatively
consistent with the theoretical findings of \cite{li2005b,li2005a,li2006a}, who used
hydrodynamical simulations of isolated galaxies with sink particles
to represent dense molecular gas and star clusters. They found that if
the accretion onto the sink particles resulted in star formation with
some efficiency, the
two-component Toomre parameter was $Q_{\mathrm{sg}}<1.6$ in regions of active star formation.
Although a direct comparison is difficult given their use of sink particles,
we similarly find that the majority of star formation in our simulations
occurs in regions of the disk where $Q_{\mathrm{sg}}\lesssim1$.
In related theoretical work, a gravithermal instability criterion for star formation was explored by
\cite{schaye2004a},
who suggested that the decrease in thermal velocity dispersion associated with
the atomic-to-molecular transition in the dense ISM may induce gravitational instability,
producing a corresponding connection between critical surface densities and star formation.
The simulated galaxies do experience a decline in thermal sound speed near the
$Q_{\mathrm{sg}}\sim1$ transition, consistent with the mechanism advocated by \cite{schaye2004a}.
However, of our $\mathrm{H}_{2}$D-SF+ISRF simulations (that include physics necessary for
such instability),
only the dwarf galaxy model displays
a sharp transition in $Q_{\mathrm{sg}}$ near the critical radius, while the stability of the other,
more massive galaxies
is more strongly influenced by the turbulent gas velocity dispersion, and those systems show more
modest increases in $Q_{\mathrm{sg}}$. Further, we note that the gravithermal mechanism suggested by
\cite{schaye2004a} should not operate in isothermal treatments of the ISM such as those
presented by \cite{li2005b,li2005a,li2006a}. Similarly our GD-SF simulations, which
are effectively isothermal ($T\approx10^{4}\mathrm{K}$) at ISM densities, display similar transitions
to $Q_{\mathrm{sg}}\lesssim1$ at the critical radius even as such simulations do not include a cold
phase to drive gravitational instability.
We stress that the definition of $R_{\mathrm{SFR}}$ may affect
conclusions about the importance of threshold surface densities.
For instance, if a $90\%$ integrated
star formation threshold were used, $R_{\mathrm{SFR}}$ would be
lowered to $0.3-0.6$ of the chosen value, and the
subsequent interpretations about the applicability of
the instability parameter $Q_{\mathrm{sg}}$ could be quite different.
Detailed radiative transfer calculations may therefore be necessary to
determine the integrated star formation fraction that brings
$R_{\mathrm{SFR}}$ in the best agreement with
observationally-determined critical radii.
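Given an annulus-binned SFR profile, the critical radius defined here reduces to a cumulative sum; a minimal sketch (our own helper, not the analysis code used for the figures):

```python
import numpy as np

def r_sfr(r, sfr_surface_density, frac=0.999):
    """Radius enclosing a fraction `frac` of the total star formation.

    r: annulus bin centers; sfr_surface_density: SFR per unit area in
    each annulus. Annulus areas are built from mid-point bin edges.
    """
    edges = np.concatenate(([0.0],
                            0.5 * (r[1:] + r[:-1]),
                            [r[-1] + 0.5 * (r[-1] - r[-2])]))
    area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    cum = np.cumsum(sfr_surface_density * area)
    cum /= cum[-1]
    return r[np.searchsorted(cum, frac)]

# For an exponential SFR profile with unit scale length, 99.9% of the
# star formation is enclosed within roughly 9 scale lengths.
r = np.linspace(0.05, 15.0, 300)
print(r_sfr(r, np.exp(-r)))
```

As noted in the text, the result is sensitive to the chosen enclosed fraction: `frac=0.9` returns a substantially smaller radius.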
\begin{figure*}
\figurenum{12}
\epsscale{1}
\plotone{fig12.eps}
\caption{\label{fig:temperature_pdf}
Temperature structure of the interstellar medium in galaxies
of different mass. Shown are the probability distribution
functions (PDF) of the fractional volume of the ISM with a temperature
$T$ (black histograms), for systems with circular velocities of
$v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (left panel),
$v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (middle panel), and $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (right panel).
The simulations use the $\mathrm{H}_{2}$D-SF+ISRF model to evolve the systems in isolation for $t=0.3\mathrm{Gyr}$.
The peaks in the temperature PDF correspond to regions of
density-temperature phase space where the cooling time $\tcool$ has local
maxima (i.e., where $\tcool$ is locally a weak function of density).
These peaks in the temperature PDF correspond to features in the density
PDF, and
the colored arrows highlight regions that can be modeled approximately as
separate isothermal phases (i.e., lognormals) in the density PDF
(see Figure \ref{fig:density_pdf} and
\S \ref{section:results:pdfs} of the text for details).
}
\end{figure*}
\begin{figure*}
\figurenum{13}
\epsscale{1}
\plotone{fig13.eps}
\caption{\label{fig:density_pdf}
Density structure of the interstellar medium in galaxies
of different mass. Shown are the probability distribution
functions (PDF) of the fractional volume of the ISM with a density
$n_{\mathrm{H}}$ (black histograms), for systems with circular velocities of
$v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (left panel),
$v_{\mathrm{circ}}=125\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (middle panel), and $v_{\mathrm{circ}}=50\,\mathrm{km}\,\,\mathrm{s}^{-1}$ (right panel)
for the $\mathrm{H}_{2}$D-SF+ISRF model at $t=0.3\mathrm{Gyr}$.
The features in the density PDF correspond to regions of
density-temperature phase space where the cooling time $\tcool$ has local
maxima (i.e., where $\tcool$ is locally a weak function of density).
The density PDF of the ISM in these simulated galaxies
is not well-modeled by a single log-normal, except at the high-density tail.
The figure shows that the density PDF may be modeled approximately as a sum of
isothermal phases (i.e., lognormal PDFs in density, colored dashed lines),
where the
characteristic density of each phase is determined by using
the ISM equation-of-state to find the typical density
of peaks in the temperature PDF
(see Figure \ref{fig:temperature_pdf} and
\S \ref{section:results:pdfs} of the text for details).
}
\end{figure*}
\subsection{Temperature and Density Structure of the ISM}
\label{section:results:pdfs}
The temperature and density structure of interstellar gas is
determined by a variety of processes including radiative
heating, cooling, and, importantly,
turbulence
\citep[see, e.g.,][]{elmegreen2002a}.
The calculation of the density
probability distribution function (PDF) of an isothermal gas, defined
as a volume fraction of gas with a given density, suggests that the
PDF will resemble a lognormal distribution with a
dispersion $\sigma$ that scales linearly with the mean gas Mach number
$\sigma \propto \beta \mathcal{M}$. The origin of
the lognormal form is discussed in detail in
\citet{padoan1997a} and \citet{passot1998a}.
For a
non-isothermal gas the density PDF may more generally follow
a power-law distribution \citep{passot1998a,scalo1998a}.
Although the lognormal PDF was initially discussed in the context
of molecular clouds, \cite{wada2001a}
found that two-dimensional numerical simulations of a multiphase ISM on galactic scales
produced a lognormal density PDF for dense gas, while at low densities gas
followed a flatter distribution in $\log \rho$ \citep[see][for more
recent investigations of this issue]{wada2007a,wada2007b}. A similar
PDF was found in a cosmological simulation of galaxy formation at
$z\approx 4$ by \citet{kravtsov2003a}. Motivated in part by the result of
\cite{wada2001a}, \cite{elmegreen2002a}
suggested that the SK relation could be explained in terms of the density
PDF of ISM gas if a fraction $f_{c}$ of gas that resides above some
critical density threshold $\rho_{c}$ converts
into stars with a constant efficiency and $\rho_{c}$ scales linearly
with the average ISM density $\left<\rho\right>$.
Similarly, \cite{kravtsov2003a} found
that the fraction of dense gas in cosmological simulations of
disk galaxy formation at redshifts $z\gtrsim4$ varied with the
surface density in a manner to produce the
$\Sigma_{\mathrm{SFR}} \propto \Sigma_{\mathrm{gas}}^{1.4}$ relation at high surface
densities. The gas density distribution in these simulations
also followed a log-normal PDF at large densities and
flattened at lower densities \citep[see also][]{kravtsov2005a}.
\cite{joung2006a} used three-dimensional models of the
stratified ISM to demonstrate that supernova explosions can
act to regulate star formation by inputting energy on small
scales. Their scheme produced temperature and density structures
consistent with those observed in the real ISM, with
density and temperature PDFs that contain multiple peaks.
\cite{tasker2007a} found the density PDF in their
multiphase simulations could be approximated by a single lognormal
at high densities, but exhibited deviations from the lognormal
in the form of peaks and dips at smaller densities.
Figure \ref{fig:temperature_pdf} shows the probability distribution
function (PDF) of the fractional volume in the ISM at a temperature
$T$ in the $\mathrm{H}_{2}$D-SF+ISRF simulations at $t=0.3\,\mathrm{Gyr}$. The
temperature PDF is measured by assigning an effective volume to each
particle based on its mass and density, counting the particles in bins
of fixed width in $\log_{10} T$, and then normalizing by the total
effective volume of the ISM. The temperature PDF displays structure
corresponding to gas at several phases. These temperature peaks
correspond to features in the density-temperature phase diagram of the
ISM, set by the cooling and heating processes. The abundance of gas
in different regions of this diagram depends on the local cooling time,
with regions corresponding to long local cooling times becoming more
populated. The overall gravitational potential of the system also
influences the density of gas in the disks and thus indirectly
influences cooling and abundance of cold, dense gas, as denser ISM gas
can reach lower equilibrium temperatures.
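The volume-weighted PDF measurement described above can be sketched as follows (a simplified stand-in for the actual pipeline; particle arrays are assumed to share a consistent unit system):

```python
import numpy as np

def volume_pdf(x, mass, density, bins):
    """Volume-weighted PDF of log10(x) for a set of SPH particles.

    Each particle is assigned an effective volume m/rho; the histogram
    of log10(x), weighted by volume, is normalized so the PDF integrates
    to unity over the binned range.
    """
    vol = mass / density
    hist, edges = np.histogram(np.log10(x), bins=bins, weights=vol)
    pdf = hist / (vol.sum() * np.diff(edges))
    return pdf, edges
```

The same helper yields the temperature PDF ($x=T$) and the density PDF ($x=n_{\mathrm{H}}$) shown in Figures \ref{fig:temperature_pdf} and \ref{fig:density_pdf}.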
Figure~\ref{fig:temperature_pdf} shows that the temperature PDFs of
the galaxies do differ, with more massive galaxies containing more
volume at low ISM temperatures. In smaller mass galaxies, the ISM
densities are lower, gas distributions are more extended, and the
abundance of cold gas is greatly decreased.
Figure \ref{fig:density_pdf}
shows the PDF of gas densities for each model galaxy in the
$\mathrm{H}_{2}$D-SF+ISRF model at $t=0.3\mathrm{Gyr}$. The density PDFs are measured
in the same manner as the temperature PDFs, but the particles are
binned according to $\log_{10} n_{\mathrm{H}}$. Each galaxy has a PDF with
noticeable features that change depending on the galaxy mass scale.
None of the density PDFs are well represented by a single lognormal
distribution, reflecting the multiple peaks apparent in the
temperature PDFs (Figure \ref{fig:temperature_pdf}).
Motivated by the multiphase structure of the ISM displayed
in Figure \ref{fig:temperature_pdf}, a model of the density PDFs
can be constructed by identifying characteristic temperature peaks in
the temperature PDF and assigning separate lognormal distributions to
represent each approximately isothermal gas phase.
The colored arrows in Figure
\ref{fig:temperature_pdf} identify characteristic temperatures in the
ISM of each galaxy. The characteristic density corresponding to each
selected temperature, determined by the median equation-of-state
calculated using Cloudy, is employed as the mean of a lognormal
distribution used to model a feature in the density PDF (colored
dashed lines in Figure \ref{fig:density_pdf}). The heights and widths
of the lognormals are constrained to provide the same relative volume
fraction in the density PDFs as the phase occupies in the temperature
PDF, resulting in a model shown by the solid gray lines in Figure
\ref{fig:density_pdf}.
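A sketch of this construction (the phase means, widths, and volume-fraction weights are free inputs that would be read off the temperature PDF and the equation of state; they are not derived here):

```python
import numpy as np

def lognormal_mixture(log_n, means, sigmas, weights):
    """Sum of lognormal phases: Gaussians in log10(n_H), one per
    approximately isothermal phase.

    means: characteristic log10 density of each phase; sigmas: widths;
    weights: volume fraction of each phase (should sum to ~1).
    """
    pdf = np.zeros_like(log_n, dtype=float)
    for mu, s, w in zip(means, sigmas, weights):
        pdf += w * np.exp(-0.5 * ((log_n - mu) / s)**2) \
            / (s * np.sqrt(2.0 * np.pi))
    return pdf
```

By construction, each component integrates to its weight, so the mixture preserves the relative volume fractions of the phases.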
Clearly the ISM is best understood in terms of a continuous
distribution of temperatures, represented in the simulations by Figure
\ref{fig:temperature_pdf}, but the multiphase model of the density
PDFs provides some conceptual insight into the ISM phase structure.
First, the model of separate isothermal ISM phases can capture
the prominent features of the density PDFs. Second, the galaxies
share common features in the density (and temperature) PDFs, such as a
warm neutral phase ($\log_{10} T\sim3.8$) and the mix of cold neutral
and molecular gas at low temperatures ($\log_{10} T\lesssim3.5$).
Third, the relative widths of the lognormal distributions representing
the dense ISM in each galaxy increase with increasing galaxy mass
(i.e., the orange lognormals centered near $\log_{10} n_{\mathrm{H}} \sim 0.5$),
as do their normalizations. The increasing widths of the lognormals
mimic the increasing velocity dispersions (and turbulent Mach
numbers) of the dense gas in the more massive galaxy models.
Note that for the ISM of the $v_{\mathrm{circ}}=300\,\mathrm{km}\,\,\mathrm{s}^{-1}$ system the cold dense
phase contributes the majority of the ISM by mass, while shock-heated
gas at $T\sim10^{5}\mathrm{K}$ contributes the largest ISM fraction by volume.
We note that this result qualitatively resembles the effect of the
volume-averaged equation-of-state utilized in subgrid models of the
multiphase ISM \citep[e.g.][]{springel2003a}.
For purposes of calculating the overall SFE
of galaxies, our results suggest that approximating the density PDF by
a single lognormal distribution is overly simplistic. The detailed
shape of the PDF is set by global properties of galaxies and heating
and cooling processes in their ISM and can therefore be expected to
vary from system to system. Nevertheless, a single PDF modeling may be
applicable for the cases when most of the ISM mass is in one dominant
gas phase (e.g., dense cold gas), as is the case for the massive
galaxy in our simulations. We leave further exploration of this
subject for future work.
\section{Discussion}
\label{section:discussion}
Results presented in the previous sections indicate that
star formation prescriptions based on the local abundance
of molecular hydrogen lead to interesting features of
the global star formation relations.
We show that the inclusion of an interstellar radiation field is
critical to control the amount of diffuse $\mathrm{H}_{2}$ at low gas
densities. For instance, without a dissociating ISRF the low mass
dwarf galaxy eventually becomes almost fully molecular, in stark
contrast with observations. We also show that without the dissociating
effect of the ISRF our model galaxies produce a much flatter relation
between molecular fraction $f_{\mathrm{H2}}$ and pressure $P_{\mathrm{ext}}$, as can be
expected from the results of \cite{elmegreen1993a}. Including
$\mathrm{H}_{2}$-destruction by an ISRF results in a $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ relation in
excellent agreement with the observations of \citet{wong2002a} and
\citet{blitz2004a,blitz2006a}.
Our model also predicts that the relation between $\Sigma_{\rm
SFR}$ and $\Sigma_{\rm gas}$ should not be universal and can be
considerably steeper than the canonical value of $n_{\rm tot}=1.4$,
even if the three-dimensional Schmidt relation in molecular clouds is
universal. The slope of the relation
is controlled by the dependence of molecular fraction (i.e., fraction
of star forming gas) on the total local gas
surface density. This relation is non-trivial because the
molecular fraction is controlled by pressure and ISRF strength,
and can thus vary
between different regions with the same total gas surface density.
The relation can also be different in the regions
where the disk scale-height changes rapidly (e.g., in flaring outer
regions of disks), as can be seen from equation~\ref{equation:structural_schmidt_law}
\citep[see also][]{schaye2007a}. We show that the effect
of radial variations in
the molecular fraction $f_{\mathrm{H2}}$ and gas scale heights ($h_{\mathrm{SFR}}$ and $h_{\mathrm{gas}}$)
on the SFR can be accounted for
in terms of a structural, SK-like correlation,
$\Sigma_{\mathrm{SFR}}\propto f_{\mathrm{H2}} h_{\mathrm{SFR}} h_{\mathrm{gas}}^{-1.5} \Sigma_{\mathrm{gas}}^{1.5}$, that trivially relates
the local SFR to the consumption of molecular gas with an
efficiency that scales with the local dynamical time.
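The scaling can be checked numerically. In the sketch below (arbitrary normalization and scale heights), a molecular fraction varying as $f_{\mathrm{H2}}\propto\Sigma_{\mathrm{gas}}^{\beta}$ at fixed scale heights steepens the total gas relation to $n_{\mathrm{tot}}=1.5+\beta$:

```python
import numpy as np

def structural_sfr(Sigma_gas, f_H2, h_sfr, h_gas, eps=1.0):
    """Structural SK-like relation (normalization eps is arbitrary):
    Sigma_SFR ~ eps * f_H2 * h_sfr * h_gas**-1.5 * Sigma_gas**1.5."""
    return eps * f_H2 * h_sfr * h_gas**-1.5 * Sigma_gas**1.5

def local_slope(Sigma_gas, Sigma_sfr):
    """Local power-law index n_tot = dln(Sigma_SFR)/dln(Sigma_gas)."""
    return np.gradient(np.log(Sigma_sfr), np.log(Sigma_gas))

Sg = np.logspace(0.0, 2.0, 100)
n_tot = local_slope(Sg, structural_sfr(Sg, Sg**1.0, 0.1, 0.1))
print(n_tot[50])  # ~2.5: f_H2 ~ Sigma_gas steepens the slope from 1.5
```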
A generic testable prediction of our model is that deviations from the
SK $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ relation are expected in
galaxies or regions of galaxies where the molecular fraction is declining or
much
below unity. As we discussed above, star formation in the molecule-poor
($f_{\mathrm{H2}}\sim0.1$) galaxy M33 supports
this view as its total gas SK power-law index is
$n_{\mathrm{tot}}\approx 3.3$ \citep{heyer2004a}. While our simulations of an M33
analogue produce a steep SK power-law, the overall SFE
in our model galaxies lies
below that observed for M33. However, as recently emphasized by
\cite{gardan2007a}, the highest surface density regions of M33 have
unusually efficient star formation compared with the normalization of
the \cite{kennicutt1998a} relation and so the discrepancy is not
surprising.
Given that dwarf galaxies generally
have low surface densities and are poor in molecular gas, it will be
interesting to examine SK relation in other small-mass
galaxies.
Another example of a low-molecular fraction galaxy close to home is
M31, which has only a fraction $f_{\mathrm{H2}}\approx0.07$
of its gas in molecular form within
18 kpc of the galactic center \citep{nieten2006a}.
Our model would predict that
this galaxy should deviate from the total gas SK
relation found for molecular-rich galaxies. Observationally, the
SK relation of M31 measured by \citet{boissier2003a} is
rather complicated and even has an increasing SFR with decreasing gas
density over parts of the galaxy. Low molecular fractions can also be
expected in the outskirts of normal galaxies and in the disks of low
surface brightness galaxies. The latter have molecular fractions of only
$f_{\mathrm{H2}}\lesssim 0.10$ \citep{matthews2005a,das2006a}, and we therefore predict
that they will not follow the total gas SK relation obeyed by
molecule-rich, higher surface density galaxies. At the same time, LSBs
do lie on the same relation between $\mathrm{H}_{2}$ mass and far infrared
luminosity
as higher surface brightness (HSB) galaxies \citep{matthews2005a},
which suggests that the dependence of star formation on
molecular gas may be the same in both types of galaxies.
An alternative formulation of the global star formation relation is
based on the angular frequency of disk rotation: $\Sigma_{\rm
SFR}\propto \Sigma_{\mathrm{gas}}\Omega$. That this relation works in real
galaxies is not trivial, because star formation and dynamical
time-scale depend on the local gas density, while $\Omega$ depends on
the total mass distribution {\it within} a given radius. Although
several models were proposed to explain such a correlation
\citep[see, e.g.,][for reviews]{kennicutt1998a,elmegreen2002a}, we show in
\S \ref{section:results:sfr_rotation} and the second Appendix that the star
formation correlation with $\Omega$ can be understood as a fortuitous
correlation of $\Omega$ with gas density of $\Omega\propto \rho_{\rm
gas}^{\alpha}$, where $\alpha\approx 0.5$, for self-gravitating
exponential disks or exponential disks
embedded in realistic halo potentials. Moreover,
we find that $\Sigma_{\mathrm{SFR}}\propto \Sigma_{\mathrm{gas}}\Omega$ breaks down
at low values of $\Sigma_{\mathrm{gas}}\Omega$ where the molecular fraction declines,
similarly to the
steepening of the SK relation.
Our models therefore predict that the $\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{H2}}\Omega$
relation is more robust than the $\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}\Omega$ relation.
An important issue related to the global star formation in galaxies is
the possible existence of star formation thresholds
\citep{kennicutt1998a,martin2001a}. Such thresholds are expected to
exist on theoretical grounds, because the formation of dense,
star-forming gas is thought to be facilitated by either dynamical
instabilities \citep[see, e.g.,][for comprehensive
reviews]{elmegreen2002a,mckee2007a} or gravithermal instabilities
\citep{schaye2004a}. We find that the two-component Toomre instability
threshold that accounts for both stars and gas, $Q_{\rm sg}<1$, works
well in predicting the transition from atomic gas, inert to
star formation, to the regions where molecular gas and star formation
occur in our simulations. Our results are in general agreement with
\citet{li2005a,li2005b,li2006a}, who used sink-particle simulations of
the dense ISM in isolated galaxies to study the relation between star
formation and the development of gravitational instabilities, and with
observations of star formation in the LMC \citep{yang2007a}. Our
simulations further demonstrate the importance of accounting for
all mass components in the disk to predict correctly which regions
of galactic disks are gravitationally unstable.
Given that our star formation prescription is based
on molecular hydrogen, the fact that $Q_{\rm sg}$ is a good
threshold indicator may imply that gravitational instabilities
strongly influence the abundance of dense, molecular gas in the disk.
Conversely, the gas
at radii where the disk is stable remains at low density and
has a low molecular fraction. We find that in our
model galaxies, the shear instability criterion of
\cite{elmegreen1993a} does not work as well as the Toomre $Q_{\rm sg}$-based
criterion. Almost all of the star formation in our model galaxies
occurs at surface densities $\Sigma_{\mathrm{gas}}\gtrsim 3{\rm\ M_{\odot}\,pc^{-2}}$,
which is formally consistent with the \citet{schaye2004a}
constant surface density criterion for gravithermal instability.
However, as Figure~\ref{fig:kennicutt.gas} shows, we do not see a clear indication
of a threshold at a particular surface density, and
our GD-SF models that have an effectively isothermal ISM with $T\approx10^{4}\mathrm{K}$
(and hence do not have a gravithermal instability) still show a good correlation
between regions where $Q_{\mathrm{sg}}<1$ and regions where star formation operates.
Our results have several interesting implications for interpretation
of galaxy observations at different epochs. First,
low molecular fractions in dwarf galaxies mean that
only a small fraction of gas is participating in star formation
at any given time. This connection between SFE
and molecular hydrogen abundance may explain why dwarf galaxies are
still gas rich today compared to larger mass galaxies \citep{geha2006a},
without relying on
mediation of star formation or gas blowout by supernovae.
Note that similar
reasoning may also explain why large LSBs at low redshifts
are
gas rich but anemic in their star formation.
Understanding the star formation
and evolution of dwarf galaxies is critical because they serve
as the building blocks of larger galaxies at high redshifts. Such
small-mass galaxies are also expected to be the first objects
to form large masses of stars and should therefore play an important
role in enrichment of primordial gas and the
cosmic star formation
rate at high redshifts \citep{hopkins2004a,hopkins2006am}.
The star-forming disks at $z\sim2$ that may be
progenitors of low-redshift spiral galaxies are observed to lack
centrally-concentrated bulge components \citep{elmegreen2007a}. Given
that galaxies are expected to undergo frequent mergers at $z>2$,
bulges should have formed if a significant fraction of baryons are
converted into stars during such mergers
\citep[e.g.,][]{gnedin2000a}. The absence of the bulge may indicate
that star formation in the gas rich progenitors of these $z\sim 2$
systems was too slow to convert a significant fraction of gas into
stars. This low SFE can be understood if the
high cosmic UV background, low-metallicities, and low dust content of
high-$z$ gas disks keep their molecular fractions low
\citep{pelupessy2006a}, thereby inhibiting star formation over most of
gas mass and keeping the progenitors of the star-forming $z\sim 2$
disks mostly gaseous. Gas-rich progenitors may also help explain the
prevalence of extended disks in low-redshift galaxies despite the
violent early merger histories characteristic of $\Lambda$CDM
universes, as gas-rich mergers can help build
high-angular momentum disk galaxies \citep[][]{robertson2006c}.
Mergers of mostly stellar disks, on the other hand, would form spheroidal systems.
Our results may also provide insight into the interpretation of the results of
\cite{wolfe2006a}, who find that the SFR associated with neutral atomic gas
in damped Lyman alpha (DLA) systems is an order of magnitude lower than
predicted by the local \citet{kennicutt1998a} relation. The DLAs
in their study sample regions with column densities $N_{\mathrm{H}}\approx 2\times 10^{21}\rm\
cm^{-2}$, or surface gas densities of $\Sigma_{\rm DLA}\approx 20\rm\
M_{\odot}\,pc^{-2}$, assuming a gas disk with a thickness of
$h\approx100$~pc. Suppose the local SK relation steepens from the local
relation with the slope $n_0=1.4$ to a steeper slope $n_1$ below some
surface density $\Sigma_{\rm b}>\Sigma_{\rm DLA}$. Then for
$\Sigma_{\mathrm{gas}}<\Sigma_{\rm b}$, the SFR density will be
lower than predicted by the local relation by a factor of
$(\Sigma_{\mathrm{gas}}/\Sigma_{\rm b})^{n_1-n_0}$. For $n_0=1.4$ and $n_1=3$ the
SFR will be suppressed by a factor of $\sim10$ for
$\Sigma_{\mathrm{gas}}/\Sigma_{\rm b}\lesssim0.25$. Thus, the results of \cite{wolfe2006a} can
be explained if the total gas SK relation at $z\sim3$
steepens below $\Sigma_{\mathrm{gas}}\lesssim 100\rm\ M_{\odot}\, pc^{-2}$.
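The suppression estimate is a one-line computation (values from the text; $n_0$ and $n_1$ are the shallow and steep slopes):

```python
def sk_suppression(ratio, n0=1.4, n1=3.0):
    """Factor (Sigma_gas/Sigma_b)**(n1 - n0) by which the SFR falls
    below the extrapolated n0 = 1.4 relation at ratio < 1, if the SK
    relation steepens to slope n1 below the break surface density."""
    return ratio**(n1 - n0)

# At Sigma_gas/Sigma_b = 0.25 the SFR is suppressed by roughly an
# order of magnitude: 0.25**1.6 ~ 0.11.
print(1.0 / sk_suppression(0.25))  # ~9.2
```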
We suggest that if the majority of the molecular hydrogen at these redshifts
resides in rare, compact, and dense systems \citep[e.g.,][]{zwaan2006a}, then
both the lack of star formation and the rarity of molecular hydrogen
in damped Ly$\alpha$ absorbers may be explained simultaneously.
Our results also indicate that the thermodynamics of the ISM can leave
an important imprint on its density probability distribution. Each
thermal phase in our model galaxies has its own log-normal density
distribution. Our results thus imply that using
a single lognormal PDF to build a model of global star formation in
galaxies \citep[e.g.,][]{elmegreen2002a,wada2007a} is likely an
oversimplification. Instead, the global star formation relation may
vary depending on the dynamical and thermodynamical properties of the
ISM. We can thus expect differences in the SFE
between the low-density and low-metallicity environments of dwarf and
high-redshift galaxies and the higher-metallicity, denser gas of many
large nearby spirals. Note that many of the results and effects we
discuss above may not be reproduced with a simple 3D density threshold
for star formation, as commonly implemented in galaxy formation
simulations. Such a threshold can reproduce the atomic-to-molecular
transition only crudely and would not include effects of the local
interstellar radiation field, metallicity and dust content, etc.
A clear caveat for our work is that the simulation resolution
limits the densities we can model correctly. At high densities, the gas
in our simulations is over-pressurized to avoid numerical Jeans instability.
The equilibrium density and temperature structure of the ISM and
the molecular fraction are therefore not correct in detail.
Note, however, that our pressurization prescription
is designed to scale with the resolution, and should converge to
the ``correct'' result as the resolution improves.
In any event, the simulations likely do not include all the relevant
physics shaping density and temperature PDFs of the ISM in real
galaxies. The results may of course depend on other microphysics
of the ISM as they influence both the
temperature PDF and the fraction of gas
in a high-density, molecular form.
Future simulations of the molecular ISM
may need to account for new microphysics as
they resolve scales where such processes become important.
\section{Summary}
\label{section:summary}
Using hydrodynamical simulations of the ISM and star formation in
cosmologically motivated disk galaxies over a range of
representative masses,
we examine the connection between molecular hydrogen abundance
and destruction, observed
star formation relations, and the thermodynamical structure of
the interstellar medium. Our simulations provide a
variety of new insights into the mass dependence of star formation
efficiency in galaxies.
A summary of our methodology and
results follows.
\begin{itemize}
\item[1.] A model of heating and cooling processes
in the interstellar medium (ISM), including low-temperature
coolants, dust heating and cooling processes, and heating
by the cosmic UV background, cosmic rays, and the local
interstellar radiation field (ISRF), is calculated using the
photoionization code Cloudy \citep{ferland1998a}.
Calculating the molecular fraction of the ISM
enables us to implement a
prescription for the star formation rate (SFR) that ties
the SFR directly to the molecular gas density. The
ISM and star formation model is implemented in the SPH/N-body code GADGET2
\citep{springel2005c} and used to simulate the evolution
of isolated disk galaxies.
\item[2.] We study the correlations between gas surface density
($\Sigma_{\mathrm{gas}}$), molecular gas surface density ($\Sigma_{\mathrm{H2}}$), and SFR
surface density ($\Sigma_{\mathrm{SFR}}$). We find that in our most realistic
model that includes heating and destruction of $\mathrm{H}_{2}$ by the
interstellar radiation field, the power law index
of the SK relation, $\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}^{n_{\mathrm{tot}}}$,
(measured in annuli)
varies from $n_{\mathrm{tot}}\sim2$ in massive galaxies to $n_{\mathrm{tot}}\gtrsim4$ in
small mass dwarfs. The corresponding slope of the
$\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{H2}}^{n_{\mathrm{mol}}}$ molecular-gas Schmidt-Kennicutt
relation is approximately the same for all
galaxies, with $n_{\mathrm{mol}}\approx 1.3$. These results are consistent
with observations of star formation in different galaxies
\citep[e.g.,][]{kennicutt1998a,wong2002a,boissier2003a,heyer2004a,boissier2007a,kennicutt2007a}.
\item[3.] In our models, the SFR density scales as
$\Sigma_{\mathrm{SFR}} \propto f_{\mathrm{H2}} h_{\mathrm{SFR}} h_{\mathrm{gas}}^{-1.5} \Sigma_{\mathrm{gas}}^{1.5}$,
where $h_{\mathrm{gas}}$ is the scale-height of the ISM and $h_{\mathrm{SFR}}$ is the scale-height
of star-forming gas. The different $\Sigma_{\mathrm{SFR}}-\Sigma_{\mathrm{gas}}$ relations in galaxies
of different mass and in regions of different surface density
in our models therefore owe to the dependence of molecular fraction $f_{\mathrm{H2}}$ and
scale height of gas on the gas surface density.
\item[4.]
We show that the $\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}\Omega$ and
$\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{H2}}\Omega$ correlations describe the
simulation results well where the molecular gas and total gas
densities are comparable,
while the simulations deviate from
$\Sigma_{\mathrm{SFR}}\propto\Sigma_{\mathrm{gas}}\Omega$ \citep[e.g.,][]{kennicutt1998a} at
low $\Sigma_{\mathrm{gas}}$ owing to a declining molecular fraction.
We demonstrate that these relations may
owe to the fact that the angular frequency
and the disk-plane gas density are generally related as $\Omega\propto\sqrt{\rho}$
for exponential disks if the potential is dominated by either the
disk, a \cite{navarro1996a} halo, a \cite{hernquist1990a} halo, or an
isothermal sphere. The correlation of $\Sigma_{\mathrm{SFR}}$ with $\Omega$ is thus
a secondary correlation in the sense that $\Omega\propto\sqrt{\rho}$ is set
during galaxy formation and $\Omega$ does not directly influence
star formation.
\item[5.] The role of critical surface densities for shear
instabilities ($\Sigma_{A}$) and \cite{toomre1964a} instabilities
($\Sigma_{Q}$) in star formation \citep[e.g.,][]{martin2001a} is examined
in the context of the presented simulations. We find that the
two-component Toomre instability criterion $Q_{\mathrm{sg}}<1$ is an accurate
indicator of the star-forming regions of disks, and that gravitational
instability and star formation are closely related in our simulations.
Further, the $Q_{\mathrm{sg}}$ criterion works even in
simulations in which cooling is restricted to $T>10^4$~K where
gravithermal instability cannot operate.
\item[6.] Our simulations that include $\mathrm{H}_{2}$-destruction by an ISRF
naturally reproduce the observed scaling $f_{\mathrm{H2}}\propto P_{\mathrm{ext}}^{0.9}$
between molecular fraction and external pressure
\citep[e.g.,][]{wong2002a,blitz2004a,blitz2006a}, but we find that simulations
without an ISRF have a weaker scaling $f_{\mathrm{H2}}\propto P_{\mathrm{ext}}^{0.4}$. We
calculate how the connection between the scalings of the gas surface
density, the stellar surface density, and the ISRF strength influences
the $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ relation in the ISM, and show how the simulated
scalings reproduce the $f_{\mathrm{H2}}-P_{\mathrm{ext}}$ relation even as the power-law
index of the total gas Schmidt-Kennicutt relation varies dramatically
from galaxy to galaxy.
\item[7.] We present a method for mitigating numerical Jeans
fragmentation in Smoothed Particle Hydrodynamics simulations
that uses a density-dependent pressurization of
gas on small scales to ensure that the Jeans mass is properly
resolved, similar to techniques
used in grid-based simulations
\citep[e.g.,][]{truelove1997a,machacek2001a}. The gas
internal energy $u$ at the Jeans scale is
scaled as $u\propto m_{\mathrm{Jeans}}^{-2/3}$,
where $m_{\mathrm{Jeans}}$ is the local Jeans mass,
to ensure the Jeans mass is resolved by some $N_{\mathrm{Jeans}}$ number
of SPH kernel masses $2N_{\mathrm{neigh}} m_{\mathrm{SPH}}$, where $N_{\mathrm{neigh}}$
is the number of SPH neighbor particles and $m_{\mathrm{SPH}}$ is
the gas particle mass.
For the simulations presented here, we find the
\cite{bate1997a} criterion of $N_{\mathrm{Jeans}}=1$
to be insufficient to avoid numerical
fragmentation and that $N_{\mathrm{Jeans}}\sim15$
provides sufficient stability against numerical fragmentation
over the time evolution of our simulations.
Other simulations may have more stringent resolution requirements \citep[e.g.,][]{commercon2008a}.
We also
demonstrate that isothermal galactic disks with temperatures
of $T=10^{4}\mathrm{K}$ may be susceptible to numerical Jeans instabilities
at resolutions common in cosmological simulations of disk galaxy
formation, and connect this numerical effect to possible
angular momentum deficiencies in cosmologically simulated disk galaxies.
\end{itemize}
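The pressurization prescription summarized in item~7 amounts to a density-dependent floor on the gas internal energy. The sketch below illustrates the idea; it is a minimal illustration rather than the actual GADGET2 implementation, and the ideal-gas Jeans-mass expression, the adiabatic index $\gamma = 5/3$, the cgs units, and the function names are all assumptions made here for concreteness.

```python
import numpy as np

G = 6.674e-8  # Newton's constant in cgs units (assumption: cgs throughout)

def jeans_mass(u, rho, gamma=5.0 / 3.0):
    # Jeans mass for an ideal gas with specific internal energy u:
    # c_s^2 = gamma*(gamma - 1)*u and m_J = (pi^{5/2}/6) c_s^3 / (G^{3/2} rho^{1/2})
    cs2 = gamma * (gamma - 1.0) * u
    return (np.pi ** 2.5 / 6.0) * cs2 ** 1.5 / (G ** 1.5 * np.sqrt(rho))

def u_floor(rho, m_sph, n_neigh=32, n_jeans=15, gamma=5.0 / 3.0):
    # Smallest internal energy for which the local Jeans mass is resolved by
    # n_jeans SPH kernel masses (2 * n_neigh * m_sph).  Inverting jeans_mass
    # gives c_s^2 = (6 m_target / pi^{5/2})^{2/3} G rho^{1/3}.
    m_target = n_jeans * 2.0 * n_neigh * m_sph
    cs2 = (6.0 * m_target / np.pi ** 2.5) ** (2.0 / 3.0) * G * rho ** (1.0 / 3.0)
    return cs2 / (gamma * (gamma - 1.0))
```

The floor scales as $\rho^{1/3}$, so it rises with density precisely where numerical fragmentation would otherwise set in, while leaving well-resolved low-density gas untouched.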
The results of our study indicate that star formation may deviate
significantly from the relations commonly assumed in models of galaxy
formation in some regimes and that these deviations can be important for the
overall galaxy evolution. Our findings provide strong motivation for
exploring the consequences of such deviations and for developing
further improvements in the
treatment of star formation in galaxy formation simulations.\\[2mm]
\acknowledgments
BER gratefully acknowledges support from a Spitzer Fellowship through
a NASA grant administered by the Spitzer Science Center.
AVK is supported by the NSF grants
AST-0239759, AST-0507596, AST-0708154 and by the Kavli Institute for
Cosmological Physics at the University of Chicago. AVK thanks
the Miller Institute and Astronomy department of UC Berkeley
for hospitality during completion of this paper.
We thank
Andrew Baker,
Leo Blitz,
Bruce Elmegreen,
Nick Gnedin,
Dan Marrone,
Chris McKee,
Eve Ostriker,
Erik Rosolowsky,
and
Konstantinos Tassis
for helpful ideas, comments, and discussions.
We also thank
Gary Ferland and collaborators for developing and maintaining the
Cloudy code used to tabulate cooling and heating rates and molecular
fractions in our simulations, and Volker Springel for making his
hydrodynamical simulation code GADGET2 available.
Simulations presented here were
performed at the {\tt cobalt} system at the National Center for
Supercomputing Applications (NCSA) under project TG-MCA05S017. We
made extensive use of the NASA Astrophysics Data System and {\tt
arXiv.org} preprint server in this study.
\section{Introduction}
In many statistical questions related to multivariate dependence, a crucial role is played by the copula function.
A basic nonparametric copula estimator is the empirical copula,
dating back to \cite{Rus76, Deh79} and defined as the empirical distribution function of the vectors of component-wise ranks.
The asymptotic behavior of the empirical copula has been established under various assumptions on the true copula and the serial dependence of the observed random vectors \citep[see, e.g.,][]{GanStu87,FerRadWeg04,Seg12,BucVol13}. The upshot is that the empirical copula process converges weakly to a centered Gaussian field with covariance function depending on the true copula and the serial dependence of the observations.
Recently, \citet{BerBucVol17} investigated the weak convergence of the weighted empirical copula process. They showed that the empirical copula process divided by a weight function, which may vanish on parts of the boundary of the unit cube, still converges weakly to a Gaussian field. As illustrated in the latter reference, this stronger result allows for additional applications of the continuous mapping theorem or the functional delta method. However, this result is only valid for a clipped version of the process. Since the empirical copula itself is not a copula, weak convergence fails on the upper boundaries of the unit cube \citep[Remark~2.3]{BerBucVol17}.
The empirical beta copula \citep{SegSibTsu17} arises as a particular case of the empirical Bernstein copula \citep[see, e.g.,][]{SanSat04,JanSwaVer12} if the degrees of the Bernstein polynomials are set to the sample size. In the numerical experiments in \citep{SegSibTsu17}, the empirical beta copula exhibited a better performance than the empirical copula, both in terms of bias and variance.
In contrast to the empirical copula, the empirical beta copula is a genuine copula, a property that it shares with the checkerboard copula, whose limit is derived in \cite{GenNesRem2017} and which is very close to the empirical copula if the margins are continuous. Since the empirical beta copula is itself a copula, it is possible to prove weighted weak convergence for the empirical beta copula process on the whole unit cube. This is the main result of the paper. Weak convergence on the whole unit cube rather than on a subset thereof is quite handy since it allows for a direct application of, e.g., the continuous mapping theorem. In particular, there is no longer any need to treat the boundary regions separately.
We consider two applications. First, we modify the Cram\'er--von Mises test statistic for independence in \cite{genest+r:2004} by using the empirical beta copula and, more importantly, adding a weight function in the integral, emphasizing the tails. The asymptotic distribution of the statistic under the null hypothesis is an easy corollary of our main result. More interestingly, the inclusion of a weight function leads to markedly better power against difficult alternatives such as the \emph{t} copula with zero correlation parameter, with favorable comparisons even to the novel statistics introduced recently by \citet{belalia+b+l+t:2017}. As a second application we consider the Cap\'era\`a--Foug\`eres--Genest estimator \citep{CapFouGen97} of the Pickands dependence function of a multivariate extreme-value copula. Under weak dependence, replacing the empirical copula by the empirical beta copula yields a more accurate estimator. Its asymptotic distribution is again an immediate consequence of our main result.
The paper is organized as follows. In Section~\ref{sec:main} we introduce the various empirical copula processes and we state the main result of the paper, the weighted convergence of the empirical beta copula process on the whole unit cube. We illustrate the ease of application of the main result to the analysis of weighted Cram\'er--von Mises tests of independence (Section~\ref{sec:indep}) and nonparametric estimation of multivariate extreme-value copulas (Section~\ref{sec:pick}). The proofs are deferred to Section~\ref{sec:proof}, whereas a number of technical arguments are worked out in detail in Section~\ref{sec:aux}.
\section{Notation and main result}
\label{sec:main}
Let $(\vect X_n)_n$ be a strictly stationary time series whose $d$-variate stationary distribution function $F$ has continuous marginal distribution functions $F_1,\dots, F_d$ and copula $C$. Writing $\vect X_i = (X_{i,1}, \ldots, X_{i,d})$, we have, for $\vect x \in \mathbb{R}^d$,
\begin{align*}
\Prob( X_{i,j} \le x_j ) &= F_j(x_j), &
\Prob( \vect X_i \le \vect x ) &= F(\vect x) = C\{ F_1(x_1), \ldots, F_d(x_d) \}.
\end{align*}
For vectors $\vect x, \vect y \in \mathbb{R}^d$, the inequality $ \vect x \leq \vect y$ means that $x_j \leq y_j$ for $j = 1, \dots, d$. Similar conventions apply for other inequalities and for minima and maxima, denoted by the operators $\wedge$ and $\vee$, respectively.
Given the sample $\vect X_1, \ldots, \vect X_n$, the aim is to estimate $C$ and functionals thereof.
Although the copula $C$ captures the instantaneous (cross-sectional) dependence, the setting is still general enough to include questions about serial dependence. For instance, if $(Y_n)_{n}$ is a univariate, strictly stationary time series, then the $d$-variate time series of lagged values $\vect X_n = (Y_n, Y_{n-1}, \ldots, Y_{n-d+1})$ is strictly stationary too and the instantaneous dependence within the series $(\vect X_n)_n$ corresponds to serial dependence within the original series $(Y_n)_n$ up to lag $d-1$.
For $i = 1, \ldots, n$ and $j = 1, \ldots, d$, let $R_{i,j}$ denote the rank of $X_{i,j}$ among $X_{1,j}, \ldots, X_{n,j}$. For convenience, we omit the sample size $n$ in the notation for ranks. The random vectors $\hat{\vect U}_i = (\hat{U}_{i,1}, \ldots, \hat{U}_{i,d})$, with $\hat{U}_{i,j} = n^{-1} R_{i,j}$ and $i = 1, \ldots, n$, are called pseudo-observations from $C$. Letting $\operatorname{\mathds{1}}}%{\operatorname{\bf{1}}}%{\operatorname{1}_A$ denote the indicator of the event $A$, the empirical copula is
\[
\hat C_n(\vect u) = \frac{1}{n} \sum_{i=1}^n \operatorname{\mathds{1}}}%{\operatorname{\bf{1}}}%{\operatorname{1}_{ \{ \hat{\vect U}_i \leq \vect u\} }, \qquad \vect u \in [0,1]^d.
\]
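For concreteness, $\hat C_n$ can be computed directly from the component-wise ranks. The Python sketch below mirrors the definition above; the function name and array layout are conventions chosen here, and the sample is assumed to be tie-free.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula(x, u):
    # x: (n, d) sample; u: (m, d) points in [0, 1]^d at which to evaluate
    n, d = x.shape
    # pseudo-observations hat(U)_{i,j} = R_{i,j} / n, computed column-wise
    uhat = np.apply_along_axis(rankdata, 0, x) / n
    # hat(C)_n(u) = (1/n) * #{i : hat(U)_i <= u componentwise}
    return (uhat[None, :, :] <= u[:, None, :]).all(axis=2).mean(axis=1)
```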
Under mixing conditions on the sequence $( \vect X_n )_n$ and smoothness conditions on $C$, \citet{BucVol13} showed that
\begin{equation}
\label{eq:hCn:weak}
\hat{\mathbb{C}}_n = \sqrt n(\hat C_n - C) \rightsquigarrow \mathbb{C}_C, \qquad n \to \infty
\end{equation}
in the metric space $\ell^\infty([0,1]^d) = \{ f : [0, 1]^d \to \mathbb{R} \mid \sup_{\vect u \in [0, 1]^d} \lvert f(\vect u) \rvert < \infty \}$ equipped with the supremum distance. The arrow $\rightsquigarrow$ in \eqref{eq:hCn:weak} denotes weak convergence in metric spaces as exposed in \cite{VanWel96}. The limit process in \eqref{eq:hCn:weak} is
\[
\mathbb{C}_C(\vect u)
= \alpha_C(\vect u) - \sum_{j=1}^d \dot C_j(\vect u) \, \alpha_C(1, \dots, 1, u_j, 1, \dots, 1),
\qquad \vect u \in [0, 1]^d,
\]
where $\dot{C}_j(\vect u) = \partial C(\vect u) / \partial u_j$ and where $\alpha_C$ is a tight, centered Gaussian process on $[0, 1]^d$ with covariance function
\begin{equation}
\label{eq:cov}
\operatorname{Cov}\bigl( \alpha_C(\vect u), \alpha_C(\vect v) \bigr)
= \sum_{i \in \mathbb{Z}} \operatorname{Cov}\bigl( \operatorname{\mathds{1}}}%{\operatorname{\bf{1}}}%{\operatorname{1}_{ \{ \vect U_0 \leq \vect u \} }, \operatorname{\mathds{1}}}%{\operatorname{\bf{1}}}%{\operatorname{1}_{ \{ \vect U_i \leq \vect v \} } \bigr),
\qquad \vect u, \vect v \in [0, 1]^d,
\end{equation}
where $\bm{U}_i = (U_{i,1}, \ldots, U_{i,d})$ and $U_{i,j} = F_j(X_{i,j})$. Since $F_j$ is continuous, the random variables $U_{i,j}$ are uniformly distributed on $[0, 1]$. The joint distribution function of $\bm{U}_i$ is $C$. The margins $F_1, \ldots, F_d$ being unknown, we cannot observe the $\vect U_i$, and this is why we use the $\hat{\vect U}_i$ instead. In the case of serial independence, weak convergence of $\hat{\mathbb{C}}_n$ has been investigated by many authors, see the survey by \citet{BucVol13}; the series in \eqref{eq:cov} simplifies to $\operatorname{Cov}( \operatorname{\mathds{1}}}%{\operatorname{\bf{1}}}%{\operatorname{1}_{ \{\vect U_0 \le \vect u\} }, \operatorname{\mathds{1}}}%{\operatorname{\bf{1}}}%{\operatorname{1}_{ \{ \vect U_0 \le \vect v \} } ) = C( \vect u \wedge \vect v ) - C( \vect u) C( \vect v)$ so that $\alpha_C$ is a $C$-Brownian bridge. In the stationary case, convergence of the series in \eqref{eq:cov} is a consequence of the mixing conditions imposed on $(\vect X_n)_n$.
Weak convergence in \eqref{eq:hCn:weak} is helpful for deriving asymptotic properties of estimators and test statistics based upon the empirical copula, such as estimators of Kendall's tau or Spearman's rho or such as Kolmogorov--Smirnov and Cram\'er--von Mises statistics for testing independence. However, as argued by \citet{BerBucVol17}, sometimes weak convergence with respect to a stronger metric is required, i.e., a weighted supremum norm. Examples mentioned in the cited article include nonparametric estimators of the Pickands dependence function of an extreme-value copula and bivariate rank statistics with unbounded score functions such as the van der Waerden rank (auto-)correlation. This motivates the study of the weighted empirical copula process $\hat {\mathbb{C}}_n/g^\omega$, with $\omega \in(0,1/2)$ and a suitable weight function $g$ on $[0, 1]^d$. The limit of the empirical copula process is zero almost surely as soon as one of its arguments is zero or if all arguments but at most one are equal to one. We can thus hope to obtain weak convergence with respect to a weight function that vanishes at such points. A possible function with this property is
\begin{equation}
\label{eq:g}
g(\vect u ) =
\bigwedge_{j=1}^d \biggl\{ u_j \wedge \bigvee_{k \ne j} (1 - u_k) \biggr\},
\qquad \vect u \in [0, 1]^d.
\end{equation}
Note that $g(\vect u)$ is small as soon as there exists $j$ such that either $u_j$ is small or else all other $u_k$ are close to $1$.
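A direct evaluation of the weight function in \eqref{eq:g} can be sketched as follows (Python/NumPy; the function name and the row-wise vectorization are choices made here):

```python
import numpy as np

def g_weight(u):
    # g(u) = min_j { u_j /\ max_{k != j} (1 - u_k) }, evaluated row-wise
    u = np.atleast_2d(u)
    one_minus = 1.0 - u
    per_j = [np.minimum(u[:, j], np.delete(one_minus, j, axis=1).max(axis=1))
             for j in range(u.shape[1])]
    return np.min(per_j, axis=0)
```

For instance, in dimension two $g$ vanishes on the entire boundary of the unit square, matching the zero-set description above.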
The trajectories of the processes $\hat{\mathbb{C}}_n/g^\omega$ are not bounded on the unit cube, hence the processes cannot converge weakly in $\ell^\infty([0,1]^d)$. A solution is to restrict the domain from $[0, 1]^d$ to sets of the form $[c/n, 1 - c/n]^d$ for $c \in (0, 1)$, or, more generally, to $\{ \vect v \in [0, 1]^d : g(\vect v) \ge c/n \}$. Relying on such a workaround, Theorem~2.2 in \cite{BerBucVol17} states weak convergence of the weighted empirical copula process $\hat{\mathbb{C}}_n/g^\omega$ to $\mathbb{C}_C/g^\omega$.
Note that $g(\vect v) = 0$ if and only if $v_j = 0$ for some $j$ or if there exists $j$ such that $v_k = 1$ for all $k \ne j$, and that $\mathbb{C}_C(\vect v) = 0$ almost surely for such $\vect v$ too.
The empirical copula is a piecewise constant function whereas the estimation target is continuous. It is natural to consider smoothed versions of the empirical copula. \citet{SegSibTsu17} defined the empirical beta copula as
\begin{equation}
\label{eq:empBetaCop}
\Cnb (\vect u )= \frac{1}{n} \sum_{i=1}^n\prod_{j=1}^d F_{n,R_{i,j}}(u_j),
\qquad \vect u =(u_1,\dots , u_d) \in [0,1]^d,
\end{equation}
where $F_{n,r}$ is the distribution function of the beta distribution $\mathcal{B}(r,n+1-r)$, i.e., $F_{n,r}(u) = \sum_{s=r}^n \binom{n}{s} u^s (1-u)^{n-s}$, for $u \in [0, 1]$ and $r \in \{1, \ldots, n\}$. Note that
\begin{equation}
\label{eq:empCop2empBetaCop}
\Cnb (\vect u ) = \int_{[0,1]^d} \hat C_n (\vect w) {\,\mathrm{d}} \mu_{n, \vect u}(\vect w),
\end{equation}
where $\mu_{n,\vect u}$ is the law of the random vector $(S_1/n, \dots , S_d/n)$, with $S_1, \dots, S_d$ being independent binomial random variables, $S_j \sim \operatorname{Bin}(n, u_j)$. In the absence of ties, the rank vector $(R_{1,j}, \ldots, R_{n,j})$ of the $j$-th coordinate sample is a permutation of $(1, \ldots, n)$. As a consequence, the empirical beta copula can be shown to be a genuine copula, unlike the empirical copula.
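Since $F_{n,r}(u) = \Prob\{\operatorname{Bin}(n,u) \ge r\}$ equals the distribution function of $\mathcal{B}(r, n+1-r)$, the empirical beta copula can be evaluated without expanding the Bernstein sums. The sketch below relies on SciPy's beta distribution; the function name and array layout are conventions chosen here, and ties are assumed absent.

```python
import numpy as np
from scipy.stats import beta as beta_dist, rankdata

def empirical_beta_copula(x, u):
    # x: (n, d) sample; u: (m, d) evaluation points in [0, 1]^d
    n, d = x.shape
    r = np.apply_along_axis(rankdata, 0, x)  # ranks R_{i,j}, column-wise
    # F_{n,r}(u) = P(Bin(n, u) >= r) = CDF of Beta(r, n + 1 - r) at u
    f = beta_dist.cdf(u[:, None, :], r[None, :, :], n + 1 - r[None, :, :])
    return f.prod(axis=2).mean(axis=1)
```

Unlike the empirical copula, the result has exactly uniform margins, e.g.\ $C_n^\beta(u_1, 1, \ldots, 1) = u_1$, since $\sum_{r=1}^n F_{n,r}(u) = \operatorname{E}\{\operatorname{Bin}(n,u)\} = nu$.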
Under a smoothness condition on $C$, it follows from Theorem~3.6(ii) in \cite{SegSibTsu17} that weak convergence in $\ell^\infty([0, 1]^d)$ of the empirical copula process $\hat{\mathbb{C}}_n$ in \eqref{eq:hCn:weak} to a limit process $\mathbb{C}$ with continuous trajectories is sufficient to conclude the weak convergence of the empirical beta copula process: in the space $\ell^\infty([0, 1]^d)$, we have
\begin{equation}
\label{eq:betacop:weak}
\mathbb{C}_n^{\beta} = \sqrt n (\Cnb - C) = \hat{\mathbb{C}}_n + \mathrm{o}_{\Prob}(1) \rightsquigarrow \mathbb{C}, \qquad n \to \infty.
\end{equation}
The asymptotic distribution of the empirical beta copula is thus the same as the one of the empirical copula. Still, for finite samples, numerical experiments in \cite{SegSibTsu17} revealed the empirical beta copula to be more accurate.
Our aim is to extend the convergence statement in \eqref{eq:betacop:weak} for weighted versions $\mathbb{C}_n^{\beta} / g^\omega$, with $g$ as in \eqref{eq:g} and for suitable exponents $\omega > 0$. As the empirical beta copula is a genuine copula, the zero-set of $\mathbb{C}_n^{\beta}$ includes the zero-set of $g$, and on this set we implicitly define $\mathbb{C}_n^{\beta} / g^\omega$ to be zero. With this convention, the sample paths of $\mathbb{C}_n^{\beta} / g^\omega$ are bounded on $[0, 1]^d$; see Lemma~\ref{lem:boundary} below. We can therefore hope to prove weak convergence of $\mathbb{C}_n^{\beta} / g^\omega \rightsquigarrow \mathbb{C}_C/g^\omega$ in $\ell^\infty([0, 1]^d)$ without having to exclude those border regions of $[0, 1]^d$ where $g$ is small, as was necessary in \cite{BerBucVol17}.
The analysis of $\mathbb{C}_n^{\beta}/g^\omega$ will be based on the one of $\hat{\mathbb{C}}_n/g^\omega$ via \eqref{eq:empCop2empBetaCop}. We will therefore need the same smoothness condition on $C$ as imposed in \citet[Condition~2.1]{BerBucVol17}, combining Conditions~2.1 and~4.1 in~\cite{Seg12}. Condition~\ref{cond:second} below is satisfied by many copula families: in \citep[Section~5]{Seg12}, part~(i) of the condition is verified for strict Archimedean copulas with continuously differentiable generators, whereas both parts of the condition are verified for the non-singular bivariate Gaussian copula and for bivariate extreme-value copulas with twice continuously differentiable Pickands dependence function and a growth condition on the latter's second derivative near the boundary points of its domain.
\begin{condition}
\label{cond:second}
(i) For every $j \in \{ 1, \dots, d \}$, the first-order partial derivative $\dot C_j(\vect u) := \partial C(\vect u)/\partial u_j$ exists and is continuous on $V_j=\{ \vect u \in [0,1]^d: u_j \in (0,1) \}$.
(ii) For every $j_1, j_2 \in \{1, \dots, d\}$, the second-order partial derivative $\ddot C_{j_1 j_2}(\vect u) := \partial^2 C(\vect u)/\partial u_{j_1}\partial u_{j_2}$ exists and is continuous on $V_{j_1} \cap V_{j_2}$. Moreover, there exists a constant $K>0$ such that, for all $j_1, j_2 \in \{1, \ldots, d\}$, we have
\begin{equation}
\label{eq:second}
\bigl\lvert \ddot C_{j_1j_2}(\vect u) \bigr\rvert
\le K \min \left\{ \frac{1}{u_{j_1}(1-u_{j_1})}, \frac{1}{u_{j_2}(1-u_{j_2})} \right\}, \qquad \forall\, \vect u \in V_{j_1} \cap V_{j_2}.
\end{equation}
\end{condition}
The alpha-mixing coefficients of the sequence $(\vect X_n)_n$ are defined as
\[
\alpha(k) =
\sup \left\{
\lvert \Prob(A \cap B) - \Prob(A) \, \Prob(B) \rvert :
A \in \sigma(\vect X_j, j \le i), B \in \sigma(\vect X_{j+k}, j \ge i), i \in \mathbb{Z}
\right\},
\]
for $k = 1, 2, \ldots$. The sequence $(\vect X_n)_n$ is said to be strongly mixing or alpha-mixing if $\alpha(k) \to 0$ as $k \to \infty$. Now we can state the main result.
\begin{theorem}
\label{thm:main}
Suppose that $\vect X_1, \vect X_2, \dots$ is a strictly stationary, alpha-mixing sequence with $\alpha(k) = \mathrm{O}(a^k)$, as $k \to \infty$, for some $a \in (0,1)$. Assume that, with probability one, no ties occur within each variable. If the copula $C$ satisfies Condition~\ref{cond:second}, then, for any $\omega\in[0,1/2)$, we have, in $\ell^\infty([0, 1]^d)$,
\[
\mathbb{C}_n^\beta/g^\omega \rightsquigarrow \mathbb{C}_C/g^\omega, \qquad n \to \infty.
\]
\end{theorem}
\begin{remark}
The tie-excluding assumption is needed to ensure that the empirical beta copula is a genuine copula almost surely. The assumption implies that the $d$ stationary marginal distributions are continuous. For iid sequences, continuity of the margins is also sufficient. In the strictly stationary case, ties may occur with positive probability even if the margins are continuous; for instance, take a Markov chain where the current state is repeated with positive probability.
\end{remark}
\begin{remark}
The result also holds under weaker assumptions on the serial dependence. In \cite{BerBucVol17} it is shown that weak convergence of the weighted empirical copula process is still valid under more general assumptions on the marginal empirical processes and quantile processes and an assumption on the multivariate empirical process. In this case, however, the range of $\omega$ is smaller \citep[Theorem~4.5]{BerBucVol17}.
\end{remark}
\section{Application: weighted Cram\'er--von Mises tests for independence}
\label{sec:indep}
Testing for independence is a classical subject which still attracts interest today. One approach consists of comparing the multivariate empirical cumulative distribution function to the product of empirical cumulative distribution functions. Integrating out the difference with respect to the sample distribution yields a Cram\'er--von Mises style test statistic going back to \citet{hoeffding:1948} and \citet{blum+k+r:1961}. To achieve better power, one may, in the spirit of the Anderson--Darling goodness-of-fit test statistic, introduce a weight function in the integral that tends to infinity near (parts of) the boundary of the domain; see \citet{dewet:1980}.
\citet{deheuvels:1981, deheuvels:1981:jmva} was perhaps the first to reformulate the question in the copula framework: for continuous variables, the problem consists in testing whether the true copula, $C$, is equal to the independence copula, $\Pi(\vect u) = \prod_{j=1}^d u_j$. The empirical copula process $\sqrt{n} (\Cn - \Pi)$, for which he proposed an ingenious combinatorial transformation, can thus be taken as a basis for the construction of test statistics. \citet{genest+r:2004} relied on his ideas to test the white noise hypothesis and considered Cram\'er--von Mises statistics based on the empirical copula process. \citet{genest+q+r:2006} studied the power of such statistics against local alternatives, while \citet{kojadinovic+h:2009} developed an extension to the case of testing for independence between random vectors. For the latter problem, \citet{fan:2017} proposed an alternative approach based on empirical characteristic functions.
Recently, \citet{belalia+b+l+t:2017} proposed to use the Bernstein empirical copula \cite{SanSat04,JanSwaVer12} rather than the empirical copula in the Cram\'er--von Mises test statistic. Moreover, they constructed new test statistics based on the Bernstein copula density estimator by \citet{bouezmarni+r+t:2010}. Recall that the empirical beta copula arises from the Bernstein empirical copula by a specific choice of the degree of the Bernstein polynomials.
A situation of particular interest is when the true copula differs from the independence copula mainly in the tails. For instance, the bivariate \emph{t} copula with zero correlation parameter has both Spearman's rho and Kendall's tau equal to zero. Still, the common value of its coefficients of upper and lower tail dependence is positive and depends on the degrees-of-freedom parameter. In their numerical experiments, \citet{belalia+b+l+t:2017} found that for such alternatives, the power of the Cram\'er--von Mises test based on both the empirical copula and the Bernstein empirical copula is particularly weak. Their test statistics based on the Bernstein copula density estimator performed much better.
To increase the power of the Cram\'er--von Mises statistic against such difficult alternatives, a natural approach is to follow \citet{dewet:1980} and introduce a weight function emphasizing the tails. For $\gamma \in [0, 2)$, we propose the weighted Cram\'er--von Mises statistic
\begin{equation}
\label{eq:CvM}
T_{n,\gamma}
=
n \int_{[0,1]^d} \frac{\{ C_n^\beta(\vect u) - C(\vect u) \}^2}{\{g(\vect u)\}^\gamma} \, \mathrm{d} \vect u.
\end{equation}
We are mostly interested in the case where $C(\vect u) = \Pi(\vect u) = \prod_{j=1}^d u_j$, the independence copula. If $\gamma = 0$, the weight function disappears and we are back to the original Cram\'er--von Mises statistic, but with the empirical beta copula replacing the empirical copula.
\begin{corollary}
Under the assumptions of Theorem~\ref{thm:main}, we have, for every $\gamma \in [0, 2)$, the weak convergence
\[
T_{n,\gamma} \rightsquigarrow T_\gamma = \int_{[0, 1]^d} \frac{\{\mathbb{C}_C(\vect u)\}^2}{\{g(\vect u)\}^\gamma} \, \mathrm{d} \vect{u},
\qquad n \to \infty.
\]
In particular, this holds for independent random sampling from a $d$-variate distribution with continuous margins and independent components ($C = \Pi$).
\end{corollary}
\begin{proof}
We have
\[
T_{n, \gamma}
=
\int_{[0,1]^d}
\left( \frac{\mathbb{C}_n^\beta(\vect u)}{\{g(\vect u)\}^{\gamma/4}}\right)^2 \,
\frac{1}{\{g(\vect u)\}^{\gamma/2}} \,
\mathrm{d} \vect u.
\]
By Theorem~\ref{thm:main} applied to $\omega = \gamma / 4 \in [0, 1/2)$, the first part of the integrand converges weakly, in $\ell^\infty([0, 1]^d)$, to the stochastic process $(\mathbb{C}_C/g^{\gamma/4})^2$. Further, since $\gamma/2 \in [0, 1)$, the integral $\int_{[0, 1]^d} \{ g(\vect u) \}^{-\gamma/2} \, \mathrm{d} \vect u$ is finite. The linear functional that sends a measurable function $f \in \ell^\infty([0, 1]^d)$ to the scalar $\int_{[0, 1]^d} f( \vect u ) \, \{ g(\vect u) \}^{-\gamma/2} \, \mathrm{d} \vect u$ is therefore bounded. The conclusion follows from the continuous mapping theorem.
\end{proof}
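In practice the integral in \eqref{eq:CvM} has no closed form and must be approximated numerically. The self-contained sketch below uses plain Monte Carlo integration over uniform points on $[0,1]^d$ under $H_0: C = \Pi$; the experiments reported next were run in R with the \textsf{copula} package, so this Python version, with its hypothetical function name and arbitrary number of integration points, is only meant to convey the computation.

```python
import numpy as np
from scipy.stats import beta as beta_dist, rankdata

def t_stat(x, gamma, n_mc=4000, seed=1):
    # Monte Carlo approximation of T_{n,gamma} under H0: C = Pi, i.e.
    # T ~ n * mean over U ~ Unif([0,1]^d) of (C_n^beta(U) - Pi(U))^2 / g(U)^gamma
    n, d = x.shape
    r = np.apply_along_axis(rankdata, 0, x)          # component-wise ranks
    u = np.random.default_rng(seed).uniform(size=(n_mc, d))
    # empirical beta copula via the Beta(r, n+1-r) distribution functions
    cb = beta_dist.cdf(u[:, None, :], r, n + 1 - r).prod(axis=2).mean(axis=1)
    pi = u.prod(axis=1)                              # independence copula
    one_minus = 1.0 - u
    g = np.min([np.minimum(u[:, j], np.delete(one_minus, j, axis=1).max(axis=1))
                for j in range(d)], axis=0)          # weight g(u) from Section 2
    return n * np.mean((cb - pi) ** 2 / g ** gamma)
```

Critical values under $H_0$ can then be obtained, as in the experiments below, by recomputing the statistic on samples drawn from the uniform distribution on the unit cube.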
A comprehensive simulation study comparing the performance of the weighted Cram\'er--von Mises statistic against all competitors and for a wide range of tuning parameters and data-generating processes is out of this paper's scope. We limit ourselves to the case identified as the most difficult one in \citet{belalia+b+l+t:2017}, the bivariate \emph{t} copula with zero correlation parameter. We copy the settings in their Section~5: the degrees-of-freedom parameter is $\nu = 2$ and we consider independent random samples of size $n \in \{100, 200, 400, 500\}$. We compare the power of our statistic $T_{n,\gamma}$ with the powers of their statistics $T_n,\delta_n,I_n$ at the $\alpha = 5\%$ significance level based on $1\,000$ replications.
We implemented our estimator in the statistical software environment \textsf{R} \citep{Rlanguage} using the package \textsf{copula} \citep{KojYan10R}. The critical values were computed by a Monte Carlo approximation based on $10\,000$ random samples from the uniform distribution on the unit square. For the statistics in \cite{belalia+b+l+t:2017}, we copied the relevant values from their Tables~4, 5, and~6. Their statistics depend on the degree, $k$, of the Bernstein polynomials, which they selected in $\{5, 10, \ldots, 30\}$. Note that for $\gamma = 0$ and $k = n$, our statistic $T_{n,\gamma}$ coincides with their statistic $T_n$. Their statistics $\delta_n$ and $I_n$ are based on the Bernstein copula density estimator in \cite{bouezmarni+r+t:2010}.
The results are presented in Table~\ref{tab:power}. The unweighted Cram\'er--von Mises statistic $T_n$ does a poor job in detecting the alternative. The novel statistics $\delta_n$ and $I_n$ in \cite{belalia+b+l+t:2017} are more powerful, especially the statistic $I_n$, which is a Cram\'er--von Mises statistic based on the Bernstein copula density estimator. For the weighted Cram\'er--von Mises statistic $T_{n,\gamma}$, the power increases with $\gamma$. For the largest considered value, $\gamma = 1.75$, the power is higher than the one of $T_n$, $\delta_n$ and $I_n$ for any value of $k$ considered.
\begin{table}
\begin{center}
\begin{tabular}{r|r@{\qquad}rrr|r@{\qquad}r}
\toprule
&$k$& $T_n$ & $\delta_n$ & $I_n$ & $\gamma$ & $T_{n,\gamma}$ \\
\midrule\midrule
$n=100$ & $\phantom{0}5$ & 0.056 & 0.114 & 0.094 & $0.25$ & 0.102\\
& $10$ & 0.064 & 0.130 & 0.168 & $0.50$ & 0.091\\
& $15$ & 0.066 & 0.166 & 0.254 & $0.75$ & 0.138\\
& $20$ & 0.070 & 0.132 & 0.270 & $1.00$ & 0.179\\
& $25$ & 0.070 & 0.102 & 0.284 & $1.25$ & 0.216\\
& $30$ & 0.068 & 0.114 & 0.294 & $1.50$ & 0.292\\
& & & & & $1.75$ & 0.401\\
\midrule
$n=200$ & $\phantom{0}5$ & 0.076 & 0.176 & 0.094 & $0.25$ & 0.123\\
& $10$ & 0.080 & 0.222 & 0.308 & $0.50$ & 0.161\\
& $15$ & 0.088 & 0.226 & 0.442 & $0.75$ & 0.233\\
& $20$ & 0.094 & 0.210 & 0.466 & $1.00$ & 0.335\\
& $25$ & 0.096 & 0.176 & 0.472 & $1.25$ & 0.428\\
& $30$ & 0.086 & 0.148 & 0.458 & $1.50$ & 0.605\\
& & & & & $1.75$ & 0.705\\
\midrule
$n=400$ & $\phantom{0}5$ & 0.044 & 0.366 & 0.230 & $0.25$ & 0.278\\
& $10$ & 0.038 & 0.492 & 0.588 & $0.50$ & 0.427\\
& $15$ & 0.048 & 0.472 & 0.702 & $0.75$ & 0.555\\
& $20$ & 0.044 & 0.432 & 0.762 & $1.00$ & 0.777\\
& $25$ & 0.048 & 0.382 & 0.772 & $1.25$ & 0.864\\
& $30$ & 0.050 & 0.354 & 0.780 & $1.50$ & 0.930\\
& & & & & $1.75$ & 0.964\\
\midrule
$n=500$ & $\phantom{0}5$ & 0.072 & 0.398 & 0.192 & $0.25$ &0.406 \\
& $10$ & 0.096 & 0.542 & 0.688 & $0.50$ & 0.588\\
& $15$ & 0.100 & 0.552 & 0.746 & $0.75$ & 0.773\\
& $20$ & 0.110 & 0.506 & 0.806 & $1.00$ & 0.883\\
& $25$ & 0.106 & 0.476 & 0.824 & $1.25$ & 0.966\\
& $30$ & 0.096 & 0.458 & 0.824 & $1.50$ & 0.986\\
& & & & & $1.75$ & 0.992\\
\bottomrule
\end{tabular}
\end{center}
\caption{\label{tab:power}Testing the independence hypothesis when the true copula is equal to the \emph{t} copula with zero correlation parameter and degrees-of-freedom parameter $\nu = 2$. Powers based on $1\,000$ random samples of sizes $n \in \{100, 200, 400, 500\}$ at significance level $\alpha = 5\%$. Comparison between, on the one hand, the statistics $T_n,\delta_n,I_n$ in \citet{belalia+b+l+t:2017} with degree $k$ of the Bernstein polynomials and, on the other hand, the weighted Cram\'er--von Mises statistic $T_{n,\gamma}$ in Eq.~\eqref{eq:CvM} with weight parameter $\gamma$. The values in the columns headed $T_n$, $\delta_n$ and $I_n$ have been copied from Tables~4--6 in \cite{belalia+b+l+t:2017}.}
\end{table}
\section{Application: nonparametric estimation of a Pickands dependence function}
\label{sec:pick}
A $d$-variate copula $C$ is a multivariate extreme-value copula if and only if it can be written as
\[
C(\vect u)
=
\exp \left\{
\left( \sum_{j=1}^d \log u_j \right) \,
A \left( \frac{\log u_1}{\sum_{j=1}^d \log u_j}, \dots, \frac{\log u_{d-1}}{\sum_{j=1}^d \log u_j} \right)
\right\},
\]
for $\vect u \in (0,1]^d \setminus \{ (1, \ldots, 1) \}$. The function $A:\Delta_{d-1} \to [1/d,1]$ is called the Pickands dependence function \citep[after][]{Pic81}, its domain being the unit simplex $\Delta_{d-1} = \{ \vect t=(t_1, \dots , t_{d-1}) \in [0,1]^{d-1}: \sum_{j=1}^{d-1}t_j\leq 1\}$.
Writing $t_d = t_d(\vect t) = 1 - t_1 - \cdots - t_{d-1}$ for $\vect t \in \Delta_{d-1}$, we have $C(u^{t_1}, \ldots, u^{t_d}) = u^{A(\vect t)}$ for $0 < u < 1$, and thus
\[
\log \{ A( \vect t ) \}
=
- \gamma + \int_0^1 \left\{ C(u^{t_1}, \ldots, u^{t_d}) - \operatorname{\mathds{1}}_{[e^{-1}, 1]}(u) \right\} \, \frac{{\,\mathrm{d}} u}{u \log u},
\]
where $\gamma = 0.5772156649\ldots$ is the Euler--Mascheroni constant. The rank-based Cap\'er\`aa--Foug\`eres--Genest (CFG) estimator, $\Acfg(\vect t)$, arises by replacing $C$ in the above formula by the empirical copula, $\Cn$; see \cite{CapFouGen97} for the original estimator and \cite{GenSeg09, gudendorf+s:2012} for the rank-based versions in dimensions two and higher, respectively. We now propose to replace $C$ by the empirical beta copula \eqref{eq:empBetaCop} instead, which gives the estimator
\begin{equation}
\label{eq:CFG:b}
\log \{ \Acfgb(\vect t) \}
=
- \gamma + \int_0^1 \left\{ \Cnb(u^{t_1}, \ldots, u^{t_d}) - \operatorname{\mathds{1}}_{[e^{-1}, 1]}(u) \right\} \, \frac{{\,\mathrm{d}} u}{u \log u}.
\end{equation}
The technique could also be used for other estimators based upon the empirical copula \citep{BucDetVol11,BerBucDet13}.
For the CFG estimator, one usually employs the endpoint-corrected version
\begin{equation}
\label{eq:CFG:c}
\log \{ \Acfgc(\vect t) \}
=
\log \{ \Acfg(\vect t) \}
-
\sum_{j=1}^d t_j \log \{ \Acfg( \vect e_j ) \},
\end{equation}
where $\vect e_j = (0, \ldots, 0, 1, 0, \ldots, 0)$ is the $j$-th canonical unit vector in $\mathbb{R}^d$. For the estimator based on the empirical beta copula the endpoint correction is immaterial, since $\Cnb$ is a copula itself and thus $\log \Acfgb( \vect e_j ) = 0$ for all $j = 1, \ldots, d$.
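As a sanity check on the integral representation above, note that substituting the true copula $C$ of an extreme-value distribution for the estimator must return $\log A(\vect t)$ exactly. A small Python sketch (illustration only) verifies this for the bivariate Gumbel copula, using the substitution $u = e^{-v}$ to tame the singularity at $u = 0$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def gumbel_copula(u1, u2, alpha):
    # C(u1, u2) = exp(-[(-log u1)^(1/a) + (-log u2)^(1/a)]^a)
    x = (-math.log(u1)) ** (1.0 / alpha)
    y = (-math.log(u2)) ** (1.0 / alpha)
    return math.exp(-((x + y) ** alpha))

def log_pickands(copula, t, vmax=60.0, m=120000):
    # log A(t) = -gamma + int_0^1 {C(u^t, u^(1-t)) - 1[u >= 1/e]} du / (u log u).
    # With u = exp(-v) this becomes
    #   log A(t) = -gamma - int_0^inf {C(e^{-tv}, e^{-(1-t)v}) - 1[v <= 1]} dv / v;
    # the midpoint rule below keeps the jump at v = 1 on a cell boundary.
    h = vmax / m
    total = 0.0
    for k in range(m):
        v = (k + 0.5) * h
        f = copula(math.exp(-t * v), math.exp(-(1.0 - t) * v)) - (1.0 if v <= 1.0 else 0.0)
        total += f / v
    return -EULER_GAMMA - total * h

alpha, t = 0.5, 0.3
a_true = (t ** (1 / alpha) + (1 - t) ** (1 / alpha)) ** alpha
a_num = math.exp(log_pickands(lambda x, y: gumbel_copula(x, y, alpha), t))
```

Since $C(u^t, u^{1-t}) = u^{A(t)}$ for an extreme-value copula, the computed `a_num` must agree with the closed-form `a_true` up to quadrature error.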
Thanks to Theorem~\ref{thm:main}, the limit of the beta CFG estimator can be derived by a straightforward application of the continuous mapping theorem. The result does not require serial independence and is not restricted to the bivariate case.
\begin{corollary}
Let $C$ be a $d$-variate extreme-value copula with Pickands dependence function $A : \Delta_{d-1} \to [1/d, 1]$. Under the assumptions of Theorem~\ref{thm:main} we have, as $n \to \infty$,
\[
\sqrt n \left\{ \Acfgb(\,\cdot\,) - A(\,\cdot\,) \right\}
\rightsquigarrow \mathbb{A}(\,\cdot\,) \; \text{ in } \ell^\infty(\Delta_{d-1}),
\]
where, for $\vect t \in \Delta_{d-1}$, we define $\mathbb{A}(\vect t) = A( \vect t ) \int_0^1 \mathbb{C}_C(u^{t_1}, \ldots, u^{t_d}) \, (u \log u)^{-1} \, \mathrm{d}u$.
\end{corollary}
\begin{proof}
Let $0 < \omega < 1/2$. We have
\begin{align*}
\sqrt n \left[ \log \{ \Acfgb(\vect t) \} - \log \{ A(\vect t) \} \right]
=
\int_0^1
\mathbb{C}_n^{\beta}(u^{t_1}, \ldots, u^{t_d})
\frac{\mathrm{d}u}{u \log u}
=
\int_0^1
\frac{\mathbb{C}_n^{\beta}(u^{t_1}, \ldots, u^{t_d})}{\{g(u^{t_1}, \ldots, u^{t_d})\}^\omega} \,
\{g(u^{t_1}, \ldots, u^{t_d})\}^\omega \,
\frac{\mathrm{d}u}{u \log u}.
\end{align*}
The integral
$
\int_0^1 \{g(u^{t_1}, \ldots, u^{t_d})\}^\omega \, (u \log u)^{-1} \, \mathrm{d}u
$
is bounded, uniformly in $\vect t \in \Delta_{d-1}$. Therefore, the linear map that sends a measurable function $f \in \ell^\infty([0, 1]^d)$ to the bounded function $\vect t \mapsto \int_0^1 f(u^{t_1}, \ldots, u^{t_d}) \, \{g(u^{t_1}, \ldots, u^{t_d})\}^\omega \, (u \log u)^{-1} \, \mathrm{d} u$ is continuous. By Theorem~\ref{thm:main} and the continuous mapping theorem, we find, as $n \to \infty$,
\[
\sqrt n \left[\log \{\Acfgb(\,\cdot\,)\} - \log\{A(\,\cdot\,)\}\right]
\rightsquigarrow \left(\int_0^1 \mathbb{C}_C(u^{t_1}, \ldots, u^{t_d}) \, \frac{\mathrm{d}u}{u \log u}\right)_{\vect t \in \Delta_{d-1}}
\; \text{ in } \ell^\infty(\Delta_{d-1}).
\]
Finally, the result follows by an application of the functional delta method.
\end{proof}
We compare the finite-sample performance of the endpoint-corrected CFG estimator with the variant based on the empirical beta copula. As performance criterion for an estimator $\hat{A}$, we use the integrated mean squared error,
\[
\int_{\Delta_{d-1}} \operatorname{E} \left[ \left\{ \hat{A}(\vect t) - A(\vect t) \right\}^2 \right] \, {\,\mathrm{d}} \vect t
=
\operatorname{E} \left[ \left\{ \hat{A}(\vect T) - A(\vect T) \right\}^2 \right],
\]
where the random variable $\vect T$ is uniformly distributed on $\Delta_{d-1}$ and is independent of the sample from which $\hat{A}$ was computed. We approximate the integrated mean squared error through a Monte Carlo procedure: for a large integer $M$, we generate $M$ random samples of size $n$ from a given copula and we calculate
\[
\frac{1}{M} \sum_{m=1}^M \left\{ \hat{A}_n^{(m)}(\vect T^{(m)}) - A(\vect T^{(m)}) \right\}^2
\]
where $\hat{A}_n^{(m)}$ denotes the estimator based upon sample number $m$, and where the random variables $\vect T^{(1)}, \ldots, \vect T^{(M)}$ are uniformly distributed on $\Delta_{d-1}$ and are independent of each other and of the copula samples. The approximation error is $\mathrm{O}_{\Prob}(1/\sqrt{M})$, aggregating both the sampling error and the integration error. A similar trick was used in \cite{SegSibTsu17} and is more efficient than first estimating the pointwise mean squared error through a Monte Carlo procedure and then integrating it out via numerical integration.
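The aggregation trick is easy to demonstrate on a toy example (Python; the ``estimator'' below is artificial and only serves to make the integrated mean squared error analytically available): with one fresh uniform $T$ per replicate, a single average estimates the integrated mean squared error in one pass.

```python
import random

def integrated_mse(draw_estimator, target, M, rng):
    # Average of squared errors at a fresh uniform T per replicate:
    # estimates  int_0^1 E[{Ahat(t) - A(t)}^2] dt  in a single pass.
    acc = 0.0
    for _ in range(M):
        a_hat = draw_estimator(rng)   # random function t -> Ahat(t)
        t = rng.random()              # T ~ Uniform(0, 1), independent of a_hat
        acc += (a_hat(t) - target(t)) ** 2
    return acc / M

rng = random.Random(42)
sigma = 0.2
target = lambda t: t * t
def draw_estimator(rng):
    eps = rng.gauss(0.0, sigma)       # toy "estimation error", scaled by t below
    return lambda t: target(t) + eps * t

imse = integrated_mse(draw_estimator, target, M=200_000, rng=rng)
# Pointwise MSE is sigma^2 t^2, so the integrated MSE is sigma^2 / 3.
```

The Monte Carlo average converges to $\sigma^2/3$ at the rate $\mathrm{O}_{\Prob}(1/\sqrt{M})$, exactly as described above.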
We considered the following data-generating processes:
\begin{itemize}
\item[(M1)]
independent random sampling from the bivariate Gumbel copula \cite{gumbel:1961}, which has Pickands dependence function $A(t) = \{t^{1/\alpha} + (1-t)^{1/\alpha}\}^{\alpha}$ for $t \in [0, 1]$, with parameter $\alpha \in [0, 1]$, for which Kendall's tau is $\tau = 1-\alpha$. We also considered independent random samples from the bivariate Galambos, H\"usler--Reiss and t-EV copula families; the results were similar to those for the bivariate Gumbel copula and are not shown, to save space. See, e.g., \cite{GudSeg2010} for the definitions of these copulas;
\item[(M2)]
independent random sampling from a special case of the trivariate asymmetric logistic extreme-value copula \cite{tawn:1990}, with Pickands dependence function $A(t_1, t_2) = \sum_{(i,j) \in \{(1, 2), (2, 3), (3, 1)\}} \{(\theta t_i)^{1/\alpha} + (\phi t_j)^{1/\alpha} \}^\alpha + 1 - \theta - \phi$ for $(t_1, t_2) \in \Delta_2$ and $t_3 = 1-t_1-t_2$. As in \citep[Section~5]{GudSeg12}, we set $\phi = 0.3$ and $\theta = 0.6$, and $\alpha$ varies between $0$ and $1$;
\item[(M3)]
sampling from the strictly stationary bivariate moving maximum process $(U_{t1}, U_{t2})_{t \in \mathbb{Z}}$ given by
\[
U_{t1} = \max \left\{ W_{t-1,1}^{1/a}, W_{t1}^{1/(1-a)} \right\}
\qquad
\text{and}
\qquad
U_{t2} = \max \left\{ W_{t-1,2}^{1/b}, W_{t2}^{1/(1-b)} \right\},
\]
where $a, b \in [0, 1]$ are two parameters and where $(W_{t1}, W_{t2})_{t \in \mathbb{Z}}$ is an iid sequence of bivariate random vectors whose common distribution is an extreme-value copula with some Pickands dependence function $B$. By Eq.~(8.1) in \cite{bucher+s:2014}, the stationary distribution of $(U_{t1}, U_{t2})$ is an extreme-value copula too, and its Pickands dependence function is easily calculated to be
\[
A(t)
=
\{ a(1-t)+bt \} \, B \left( \frac{bt}{a(1-t)+bt} \right)
+
\{ (1-a)(1-t) + (1-b)t \} \, B \left( \frac{(1-b)t}{(1-a)(1-t) + (1-b)t} \right),
\]
for $t \in [0, 1]$. We let $B(t) = \{t^{1/\alpha} + (1-t)^{1/\alpha}\}^{\alpha}$ for $t \in [0, 1]$ and $\alpha \in [0, 1]$ (the bivariate Gumbel copula as above, with Kendall's tau $\tau = 1-\alpha$) and we set $a = 0.1$ and $b = 0.7$, so that $A$ is asymmetric.
\end{itemize}
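As a quick numerical check of the displayed formula for (M3) (Python, illustration only): for the parameter choices used above, the resulting $A$ satisfies $A(0) = A(1) = 1$ and $\max(t, 1-t) \le A(t) \le 1$, as any Pickands dependence function must, and it is visibly asymmetric.

```python
def pickands_gumbel(t, alpha):
    # B(t) = {t^(1/a) + (1-t)^(1/a)}^a, the Gumbel Pickands dependence function
    if t <= 0.0 or t >= 1.0:
        return 1.0
    return (t ** (1.0 / alpha) + (1.0 - t) ** (1.0 / alpha)) ** alpha

def pickands_moving_max(t, a, b, alpha):
    # Stationary Pickands dependence function of the moving-maximum process (M3)
    s1 = a * (1.0 - t) + b * t
    s2 = (1.0 - a) * (1.0 - t) + (1.0 - b) * t
    return (s1 * pickands_gumbel(b * t / s1, alpha)
            + s2 * pickands_gumbel((1.0 - b) * t / s2, alpha))
```

With $a = 0.1$, $b = 0.7$ the two convex combinations are unequal, so $A(t) \neq A(1-t)$ in general.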
The results are shown in Figure~\ref{fig:Pickands}. Each plot is based on $10\,000$ samples of size $n \in \{20, 50, 100\}$. For weak dependence (small $\tau$, large $\alpha$), the beta variant \eqref{eq:CFG:b} is the more efficient one, whereas for strong dependence (large $\tau$, small $\alpha$), it is the usual CFG estimator \eqref{eq:CFG:c} which is more accurate.
In order to gain a better understanding, we have also traced some trajectories of estimated Pickands dependence functions for independent random samples of the bivariate Gumbel copula at $\tau \in \{0.3, 0.9\}$ and $n \in \{20, 50, 100\}$; see Figure~\ref{fig:Pickands:A}. For each trajectory of the CFG estimator, there is a corresponding trajectory of the new estimator that is based on the same sample. For large $\tau$, the true extreme-value copula $C$ is close to the Fr\'echet--Hoeffding upper bound, $M(u_1, u_2) = \max(u_1, u_2)$. As a result, $C$ is strongly curved around the main diagonal $u_1 = u_2$, and this implies a strong curvature of the Pickands dependence function $A$ around $t = 1/2$. The empirical beta copula can be seen as a smoothed version of the empirical copula with an implicit bandwidth of the order $1/\sqrt{n}$ \citep[p.~47]{SegSibTsu17}. For smaller $n$, oversmoothing occurs, producing a negative bias for the empirical beta copula around the diagonal $u_1 = u_2$ and thus a positive bias for the beta variant of the CFG estimator around $t = 1/2$.
\begin{figure}
\begin{center}
\begin{tabular}{@{}ccc}
\includegraphics[width=0.32\textwidth]{art/gumbel_n=20}&
\includegraphics[width=0.32\textwidth]{art/gumbel_n=50}&
\includegraphics[width=0.32\textwidth]{art/gumbel_n=100}\\
\includegraphics[width=0.32\textwidth]{art/asymmetric_logistic_n=20}&
\includegraphics[width=0.32\textwidth]{art/asymmetric_logistic_n=50}&
\includegraphics[width=0.32\textwidth]{art/asymmetric_logistic_n=100}\\
\includegraphics[width=0.32\textwidth]{art/mm_gumbel_n=20}&
\includegraphics[width=0.32\textwidth]{art/mm_gumbel_n=50}&
\includegraphics[width=0.32\textwidth]{art/mm_gumbel_n=100}
\end{tabular}
\end{center}
\caption{\label{fig:Pickands}Integrated mean squared error (vertical axis) of the endpoint-corrected CFG-estimator \eqref{eq:CFG:c} (dashed, blue) and the empirical beta variant \eqref{eq:CFG:b} (solid, red) based on samples of the data-generating process (M1), (M2) and (M3) (top to bottom) for various choices of the parameter $\alpha$ or $\tau = 1-\alpha$. Each point is based on $10\,000$ random samples of size $n \in \{20, 50, 100\}$ (left to right).}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{tabular}{@{}ccc}
\includegraphics[width=0.32\textwidth]{art/gumbel_n=20_tau=03_A}&
\includegraphics[width=0.32\textwidth]{art/gumbel_n=50_tau=03_A}&
\includegraphics[width=0.32\textwidth]{art/gumbel_n=100_tau=03_A}\\
\includegraphics[width=0.32\textwidth]{art/gumbel_n=20_tau=09_A}&
\includegraphics[width=0.32\textwidth]{art/gumbel_n=50_tau=09_A}&
\includegraphics[width=0.32\textwidth]{art/gumbel_n=100_tau=09_A}
\end{tabular}
\end{center}
\caption{\label{fig:Pickands:A}Plots of trajectories of the CFG-estimator \eqref{eq:CFG:c} (dashed blue) and the empirical beta variant \eqref{eq:CFG:b} (solid red) of the Pickands dependence function (solid black) based on samples from the Gumbel copula with Kendall's $\tau \in \{0.3, 0.9\}$ (top to bottom) and $n \in \{20, 50, 100\}$ (left to right).}
\end{figure}
\section{Proof of Theorem~\ref{thm:main}}
\label{sec:proof}
Recall the empirical copula process $\hat{\mathbb{C}}_n = \sqrt{n} (\hat{C}_n - C)$ and the empirical beta copula process $\mathbb{C}_n^\beta = \sqrt{n} (\Cnb - C)$. The link between the empirical copula $\hat{C}_n$ and the empirical beta copula $\Cnb$ is given in \eqref{eq:empCop2empBetaCop}. In the derivation of the limit of the weighted empirical beta copula process, the following decomposition plays a central role:
\begin{multline}\label{eq:decomp}
\frac {\mathbb{C}_n^\beta(\vect u )}{g(\vect u)^\omega}
~=~
\frac{\hat{\mathbb{C}}_n(\vect u)}{g(\vect u )^\omega}\int_{[0,1]^d} \frac{g(\vect w)^\omega}{g(\vect u)^\omega} {\,\mathrm{d}} \mu_{n,\vect u} (\vect w) \\
~+ ~
\int_{[0,1]^d} \left\{ \frac{\hat {\mathbb{C}}_n(\vect w)}{g(\vect w )^\omega} - \frac{\hat{\mathbb{C}}_n(\vect u)}{g(\vect u)^\omega} \right\} \frac{g(\vect w)^\omega}{g(\vect u)^\omega} {\,\mathrm{d}} \mu_{n,\vect u} (\vect w)
~ +~
\int_{[0,1]^d} \sqrt n \frac{C(\vect w) - C(\vect u)}{g(\vect u)^\omega} {\,\mathrm{d}} \mu_{n,\vect u} (\vect w).
\end{multline}
Heuristically, the last two terms on the right-hand side vanish as $n \to \infty$: the measure $\mu_{n,\vect u}$ concentrates around its mean $\vect u$ as the sample size grows, and both integrands are small when $\vect w$ is close to $\vect u$. For the same reason, the integral in the first term should be close to one. The decomposition can thus be used to obtain weak convergence of $\mathbb{C}_n^\beta / g^\omega$ on the interior of the unit cube. The boundary of the unit cube has to be treated separately.
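For completeness, the decomposition \eqref{eq:decomp} is exact. Writing (I) and (II) for the first two terms on its right-hand side, the contribution $\hat{\mathbb{C}}_n(\vect u)/g(\vect u)^\omega$ cancels when the two are added, and by the link \eqref{eq:empCop2empBetaCop} between $\hat{C}_n$ and $\Cnb$,

```latex
\begin{align*}
\mathrm{(I)} + \mathrm{(II)}
&=
\int_{[0,1]^d} \frac{\hat{\mathbb{C}}_n(\vect w)}{g(\vect w)^\omega} \,
\frac{g(\vect w)^\omega}{g(\vect u)^\omega} {\,\mathrm{d}} \mu_{n,\vect u}(\vect w)
=
\frac{\sqrt n}{g(\vect u)^\omega}
\int_{[0,1]^d} \{ \hat C_n(\vect w) - C(\vect w) \} {\,\mathrm{d}} \mu_{n,\vect u}(\vect w),
\end{align*}
```

and adding the third term yields $\sqrt n \, \{ \Cnb(\vect u) - C(\vect u) \}/g(\vect u)^\omega = \mathbb{C}_n^\beta(\vect u)/g(\vect u)^\omega$, as required.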
The case $\omega = 0$ corresponds to the unweighted case, so we assume henceforth that $0 < \omega < 1/2$. Fix a scalar $\gamma$ such that $1 / \{2(1-\omega)\} < \gamma < 1$. Consider the abbreviations $\{ g \ge n^{-\gamma} \} = \{ \vect v \in [0, 1]^d \mid g(\vect v) \ge n^{-\gamma} \}$ and similarly $\{ g < n^{-\gamma} \}$. By Lemma~\ref{lem:boundary}, we have
\begin{align*}
\mathbb{C}_n^\beta/g^\omega
&=
\mathbb{C}_n^\beta/g^\omega \, \operatorname{\mathds{1}}_{\{ g \geq n^{-\gamma} \}} + \mathbb{C}_n^\beta/g^\omega \, \operatorname{\mathds{1}}_{\{ g < n^{-\gamma} \}} \\
&=
\mathbb{C}_n^\beta/g^\omega \, \operatorname{\mathds{1}}_{\{ g \geq n^{-\gamma} \}} + \mathrm{o}(1),
\qquad n \to \infty, \quad \text{a.s.}
\end{align*}
The three terms on the right-hand side of \eqref{eq:decomp} are treated in Lemmas~\ref{lem:bias}, \ref{lem:int} and \ref{lem:bias2}. We find
\begin{equation}
\label{eq:CbnhatCbn}
\mathbb{C}_n^\beta/g^\omega
=
\hat{\mathbb{C}}_n/g^\omega \, \operatorname{\mathds{1}}_{\{ g \geq n^{-\gamma} \}} (1 + \mathrm{o}(1)) + \mathrm{o}_{\Prob}(1),
\qquad n \to \infty.
\end{equation}
Recall $\vect U_i = (U_{i,1}, \ldots, U_{i,d})$ with $U_{i,j} = F_j(X_{i,j})$. The empirical distribution function and the empirical process associated to the unobservable sample $\vect U_1, \ldots, \vect U_n$ are
\begin{align*}
C_n(\vect u)
&= \frac{1}{n} \sum_{i=1}^n \operatorname{\mathds{1}}_{\{\vect U_i \le \vect u\}}, &
\alpha_n(\vect u)
&= \sqrt n \{ C_n(\vect u) - C(\vect u) \},
\end{align*}
respectively, for $\vect u \in [0, 1]^d$. Consider the process
\[
\bar \mathbb{C}_n(\vect u)
=
\alpha_n(\vect u)
- \sum_{j=1}^d \dot C_j(\vect u) \, \alpha_n(1,\dots,1,u_j,1,\dots,1),
\qquad \vect u \in [0, 1]^d,
\]
with $u_j$ appearing at the $j$-th coordinate. Note the slight but convenient abuse of notation in the definition of $\bar{\mathbb{C}}_n$: if $\vect u$ is such that $u_j \in \{0, 1\}$, then $\alpha_n(1,\dots,1,u_j,1,\dots,1) = 0$ almost surely, so that the fact that for such $\vect u$, the partial derivative $\dot{C}_j(\vect u)$ has been left undefined in Condition~\ref{cond:second} plays no role.
By \eqref{eq:CbnhatCbn} above and by Theorem~2.2 in \cite{BerBucVol17} (see also Remark~\ref{rem:proof} below),
\begin{align*}
\mathbb{C}_n^\beta/g^\omega
&=
\{ \bar{\mathbb{C}}_n/g^\omega + \mathrm{o}_{\Prob}(1) \} \operatorname{\mathds{1}}_{\{ g \geq n^{-\gamma} \}} (1 + \mathrm{o}(1)) + \mathrm{o}_{\Prob}(1) \\
&=
\bar{\mathbb{C}}_n/g^\omega \, \operatorname{\mathds{1}}_{\{ g \geq n^{-\gamma} \}} + \mathrm{o}_{\Prob}(1),
\qquad n \to \infty.
\end{align*}
In view of Lemma~4.9 in \cite{BerBucVol17}, the indicator function can be omitted, and, applying Theorem~2.2 in the same reference again, we obtain
\[
\mathbb{C}_n^\beta/g^\omega
=
\bar{\mathbb{C}}_n/g^\omega
+ \mathrm{o}_{\Prob}(1)
\rightsquigarrow
\mathbb{C}_C/g^\omega,
\qquad n \to \infty,
\]
as required. This finishes the proof of Theorem~\ref{thm:main}.
\begin{remark}
\label{rem:proof}
Some of the results in \cite{BerBucVol17} have to be adapted to the present situation.
\begin{itemize}
\item
In the latter reference, the pseudo-observations are defined as $\hat U_{i,j} = (n+1)^{-1} R_{i,j}$ rather than $n^{-1} R_{i,j}$. However, this does not affect the asymptotics, since the difference of the two empirical copulas is at most $d/n$, almost surely. For $\vect u \in \{ g \geq n^{-\gamma} \}$, this modification makes a difference of the order $\mathrm{O}_\Prob(n^{\gamma\omega+1/2-1}) = \mathrm{o}_\Prob(1)$, as $n \to \infty$.
\item
In Theorem~2.2 in \cite{BerBucVol17}, the approximation of $\hat \mathbb{C}_n$ by $\bar \mathbb{C}_n$ is stated on the set $[c/n, 1-c/n]^d$ for any $c \in (0,1)$. It can be seen from the proof of that statement, however, that the result extends easily to the set $\{ g \geq c/n \}$. See Section~\ref{sec:BerBucVol17} below for details.
\end{itemize}
\end{remark}
\section{Auxiliary results}
\label{sec:aux}
Throughout and unless otherwise stated, we assume the conditions of Theorem~\ref{thm:main}.
\subsection{Negligibility of the boundary regions}
\begin{lemma}
\label{lem:boundary}
For $\gamma > 1 / \{2(1-\omega)\}$, we have
\[
\sup_{\vect u \in \{g \leq n^{-\gamma}\}} \lvert \mathbb{C}_n^\beta(\vect u)/g(\vect u)^\omega \rvert = \mathrm{o}(1),
\qquad n \to \infty,
\quad \text{a.s.}
\]
\end{lemma}
\begin{proof}
Let $\gamma > 1 / \{2(1-\omega)\}$ and $\vect u \in \{ g \leq n^{-\gamma}\}$. Without loss of generality, we only need to consider the cases $g (\vect u) = u_1$ and $g(\vect u)= 1-u_1$. The remaining cases can be treated analogously.
Let us start with the case $g(\vect u)=u_1 \leq n^{-\gamma}$. Since $\Cnb$ is a copula almost surely, we have $\Cnb (\vect u) \leq u_1$. This in turn gives us
\[
\lvert \mathbb{C}_n^\beta(\vect u)/g(\vect u)^\omega \rvert
\leq
\sqrt n \, u_1^{-\omega} \lvert \Cnb(\vect u) + C(\vect u) \rvert
\leq
2 \sqrt n \, u_1^{1-\omega}
\leq
2 n^{1/2 +\gamma\omega -\gamma},
\qquad \text{a.s.},
\]
an upper bound which vanishes as $n \to \infty$ by the choice of $\gamma$.
Now suppose that $g(\vect u)=1-u_1 \leq n^{-\gamma}$. By the definition of $g(\vect u)$, we can assume without loss of generality that $1-u_j \leq 1-u_1$ for $j=3,\dots ,d$. Again, we will use the fact that $\Cnb$ is a copula almost surely. Note that $\Cnb(1,u_2,1,\dots ,1)= u_2$. Hence, by the Lipschitz continuity of copulas we obtain, almost surely,
\begin{align*}
\lvert \mathbb{C}_n^\beta(\vect u)/g(\vect u)^\omega \rvert
&\leq
\sqrt n (1-u_1)^{-\omega}
\{ \lvert \Cnb(\vect u) -u_2 \rvert + \lvert C(\vect u)-u_2 \rvert \} \\
&\leq
\textstyle{2 \sqrt n (1-u_1)^{-\omega} \sum_{j\ne 2} (1-u_j)} \\
&\leq
\textstyle{2 \sqrt n \sum_{j \ne 2} (1-u_j)^{1-\omega}} \\
&\leq
2(d-1) n^{1/2+\gamma\omega-\gamma}
= \mathrm{o}(1),
\qquad n \to \infty.
\end{align*}
The upper bounds do not depend on $\vect u \in \{ g \le n^{-\gamma} \}$, whence the uniformity in $\vect u$.
\end{proof}
\subsection{The three terms in the decomposition (\ref{eq:decomp})}
The following lemma is to be compared with Proposition~3.5 in \cite{SegSibTsu17}, where a pointwise approximation rate of $\mathrm{O}(n^{-1})$ was established. Here, we state a slightly slower rate, $\mathrm{O}(n^{-1} \log n)$, which however holds uniformly in $\vect u$.
\begin{lemma}
\label{lem:bias}
If $C$ satisfies Condition \ref{cond:second}, then
\[
\sup_{\vect u \in [0, 1]^d}
\left\lvert
\int_{[0, 1]^d} \{ C(\vect w) - C(\vect u) \} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\right\rvert
=
\mathrm{O}( n^{-1} \log n ),
\qquad n \to \infty.
\]
\end{lemma}
\begin{proof}
Put $\varepsilon_n = n^{-1} \log n$. First, we show that we can ignore those $\vect u$ for which $u_j \le \varepsilon_n$ for some $j \in \{1, \ldots, d\}$. Indeed, for such $\vect u$, the absolute value in the statement is bounded by
\[
\int_{[0, 1]^d} w_j {\,\mathrm{d}} \mu_{n, \vect u}(\vect w) + u_j = 2u_j \le 2\varepsilon_n.
\]
Let $\vect u \in [\varepsilon_n, 1]^d$. We show how to reduce the analysis to the case where $\vect u \in [\varepsilon_n, 1-\varepsilon_n]^d$. Let $J = J(\vect u)$ denote the set of indices $j = 1, \ldots, d$ such that $u_j > 1-\varepsilon_n$ and suppose that $J$ is not empty. Consider the vector $\vect e \in \{0, 1\}^d$ which has components $e_j = 1$ for $j \in J$ and $e_j = 0$ otherwise. For $\vect v \in [0, 1]^d$, the vector $\vect v \vee \vect e$ has components $(\vect v \vee \vect e)_j$ equal to $v_j$ if $j \not\in J$ and to $1$ if $j \in J$. Recall that copulas are Lipschitz continuous with respect to the $L^1$ norm with Lipschitz constant $1$. It follows that
\begin{multline*}
\left\lvert
\int_{[0, 1]^d} \{ C(\vect w) - C(\vect u) \} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\right\rvert
\le
\left\lvert
\int_{[0, 1]^d} \{ C(\vect w \vee \vect e) - C(\vect u \vee \vect e) \} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\right\rvert \\
+ \int_{[0, 1]^d} \lvert C(\vect w) - C(\vect w \vee \vect e) \rvert {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
+ \lvert C(\vect u \vee \vect e) - C(\vect u) \rvert.
\end{multline*}
\begin{itemize}[leftmargin=*]
\item
The first integral on the right-hand side does not depend on the variables $w_j$ for $j \in J$. It can therefore be reduced to an integral as in the statement of the lemma with respect to the variables in the set $\{1, \ldots, d\} \setminus J$. The copula of those variables is a multivariate margin of the original copula and Condition~\ref{cond:second} applies to it as well. By construction, all remaining $u_j$ are in the interval $[\varepsilon_n, 1-\varepsilon_n]$, as required.
\item
We have $\lvert C(\vect w) - C(\vect w \vee \vect e) \rvert \le \sum_{j \in J} \lvert w_j - 1 \rvert \le \sum_{j \in J} (\lvert w_j - u_j \rvert + \varepsilon_n)$. By the Cauchy--Schwarz inequality, $\int_{[0, 1]^d} \lvert w_j - u_j \rvert {\,\mathrm{d}} \mu_{n, \vect u}(\vect w) \le \{n^{-1} u_j(1-u_j)\}^{1/2} \le n^{-1/2} \varepsilon_n^{1/2} \le \varepsilon_n$. Hence $\int_{[0,1]^d}\lvert C(\vect w) - C(\vect w \vee \vect e) \rvert {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)\le 2d\varepsilon_n$.
\item
Finally, $\lvert C(\vect u \vee \vect e) - C(\vect u) \rvert \le \sum_{j \in J} (1 - u_j) \le d \varepsilon_n$.
\end{itemize}
It remains to consider the case $\vect u \in [\varepsilon_n, 1-\varepsilon_n]^d$. As in the proof of Proposition~3.4 in \cite{SegSibTsu17}, we have
\[
\int_{[0, 1]^d} \{ C(\vect w) - C(\vect u) \} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
=
\sum_{j=1}^d
\int_0^1
\left[
\int_{[0, 1]^d}
(w_j - u_j)
\bigl\{ \dot{C}_j(\vect u + t(\vect w - \vect u)) - \dot{C}_j(\vect u) \bigr\}
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\right]
{\,\mathrm{d}} t.
\]
It is sufficient to show that the absolute value of the integral in square brackets is $\mathrm{O}(\varepsilon_n)$, uniformly in $j \in \{1, \ldots, d\}$ and $t \in (0, 1)$ and $\vect u \in [\varepsilon_n, 1-\varepsilon_n]^d$.
The integral over $[0, 1]^d$ can be reduced to an integral over $(0, 1)^d$: indeed, the integrand is bounded in absolute value by $1$ (recall $0 \le \dot{C}_j \le 1$), and the mass on the boundary is $\mu_{n, \vect u}([0, 1]^d \setminus (0, 1)^d) = \Prob[\exists j : S_j \in \{0, n\}] \le 2d (1 - \varepsilon_n)^n \le 2d \exp(-n\varepsilon_n) = 2d n^{-1} = \mathrm{o}(\varepsilon_n)$ as $n \to \infty$.
In view of the second part of Condition~\ref{cond:second}, we have
\begin{multline*}
\int_{(0, 1)^d}
(w_j - u_j)
\bigl\{ \dot{C}_j(\vect u + t(\vect w - \vect u)) - \dot{C}_j(\vect u) \bigr\}
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w) \\
=
t
\sum_{k=1}^d
\int_0^1
\left[
\int_{(0, 1)^d}
(w_j - u_j)(w_k - u_k)
\ddot{C}_{jk} (\vect u + st(\vect w - \vect u))
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\right]
{\,\mathrm{d}} s.
\end{multline*}
It is sufficient to show that the absolute value of the integral in square brackets is $\mathrm{O}(\varepsilon_n)$, uniformly in $j,k \in \{1, \ldots, d\}$ and $s, t \in (0, 1)$ and $\vect u \in [\varepsilon_n, 1-\varepsilon_n]^d$.
We apply the bound in \eqref{eq:second} to $\ddot{C}_{jk}( \vect u + st (\vect w - \vect u) )$. We have $\min(a^{-1}, b^{-1}) \le (ab)^{-1/2}$, and the latter is a convex function of $(a, b) \in (0, \infty)^2$. The point $\vect u + st (\vect w - \vect u)$ is located on the line segment connecting $\vect u$ and $\vect w$. Therefore,
\begin{align*}
\bigl\lvert \ddot{C}_{jk}( \vect u + st (\vect w - \vect u) ) \bigr\rvert
\le
K \left[ \frac{1}{\{u_j(1-u_j)u_k(1-u_k)\}^{1/2}} + \frac{1}{\{w_j(1-w_j)w_k(1-w_k)\}^{1/2}} \right].
\end{align*}
We obtain
\begin{multline*}
\left\lvert
\int_{(0, 1)^d}
(w_j - u_j)(w_k - u_k)
\ddot{C}_{jk} (\vect u + st(\vect w - \vect u))
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\right\rvert \\
\le
K
\int_{(0, 1)^d}
\biggl[
\frac{\lvert (w_j - u_j) (w_k - u_k) \rvert}{\{u_j(1-u_j)u_k(1-u_k)\}^{1/2}}
+
\frac{\lvert (w_j - u_j) (w_k - u_k) \rvert}{\{w_j(1-w_j)w_k(1-w_k)\}^{1/2}}
\biggr]
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w).
\end{multline*}
First, by the Cauchy--Schwarz inequality and the fact that $\operatorname{E}[ (S_i/n - u_i)^2 ] = n^{-1} u_i(1-u_i)$ for all $i \in \{1, \ldots, d\}$, we have
\[
\int_{(0, 1)^d}
\frac{\lvert (w_j - u_j) (w_k - u_k) \rvert}{\{u_j(1-u_j)u_k(1-u_k)\}^{1/2}}
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\le
\prod_{i \in \{j,k\}} \left\{ \int_{(0, 1)^d} \frac{(w_i - u_i)^2}{u_i(1-u_i)} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w) \right\}^{1/2}
\le n^{-1} \le \varepsilon_n.
\]
Second, again by the Cauchy--Schwarz inequality,
\[
\int_{(0, 1)^d}
\frac{\lvert (w_j - u_j) (w_k - u_k) \rvert}{\{w_j(1-w_j)w_k(1-w_k)\}^{1/2}}
{\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
\le
\prod_{i \in \{j,k\}} \left\{ \int_{(0, 1)^d} \frac{(w_i - u_i)^2}{w_i(1-w_i)} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w) \right\}^{1/2}.
\]
Each of the two integrals ($i = j$ and $i = k$), and therefore their geometric mean, will be bounded by the same quantity. Note that $\tfrac{1}{w_i(1-w_i)} = \tfrac{1}{w_i} + \tfrac{1}{1-w_i}$ and that the integral involving $\tfrac{1}{1-w_i}$ is equal to the one involving $\tfrac{1}{w_i}$ when $u_i$ is replaced by $1-u_i$, which we are allowed to do since $\vect u \in [\varepsilon_n, 1-\varepsilon_n]^d$ anyway. Therefore, we can replace $w_i(1-w_i)$ by $w_i$ in the denominator at the cost of a factor two. Further,
\begin{align*}
\int_{(0, 1)^d} \frac{(w_i - u_i)^2}{w_i} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
&\le
\int_{[0, 1]^d} \operatorname{\mathds{1}}_{(0, 1]}(w_i) \, \frac{(w_i - u_i)^2}{w_i} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w) \\
&=
\int_{[0, 1]^d} \operatorname{\mathds{1}}_{(0, 1]}(w_i) \bigl( w_i - 2u_i + \tfrac{u_i^2}{w_i} \bigr) {\,\mathrm{d}} \mu_{n, \vect u}(\vect w) \\
&=
u_i - 2u_i \Prob[S_i/n > 0] + u_i^2 \operatorname{E}\bigl[ \tfrac{1}{S_i/n} \operatorname{\mathds{1}}_{\{ S_i/n > 0 \}} \bigr] \\
&\le
-u_i + 2\Prob[S_i = 0] + n u_i^2 \operatorname{E}\bigl[ \tfrac{1}{S_i} \operatorname{\mathds{1}}_{\{ S_i \ge 1 \}} \bigr].
\end{align*}
Recall that $u_i \in [\varepsilon_n, 1-\varepsilon_n]$ and thus $\Prob[S_i = 0] \le (1-\varepsilon_n)^n \le \exp(-n\varepsilon_n) = n^{-1} = \mathrm{o}(\varepsilon_n)$. Further, the expectation of the reciprocal of a binomial random variable is treated in Lemma~\ref{lem:binom}. Note that $n \varepsilon_n = \log n \to \infty$. We find
\[
\sup_{\vect u \in [\varepsilon_n, 1 - \varepsilon_n]^d} \max_{i=1,\ldots,d}
\int_{(0, 1)^d} \frac{(w_i - u_i)^2}{w_i(1-w_i)} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
= \mathrm{O}(n^{-1}) = \mathrm{o}(\varepsilon_n), \qquad n \to \infty.
\]
The proof is complete.
\end{proof}
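The last display above appeals to Lemma~\ref{lem:binom} for the expected reciprocal of a binomial random variable. A classical crude bound in the same spirit is $\operatorname{E}[S^{-1} \operatorname{\mathds{1}}\{S \geq 1\}] \leq 2/\{(n+1)u\}$ for $S \sim \operatorname{Bin}(n, u)$, which follows from $1/s \leq 2/(s+1)$ for $s \geq 1$ together with $\operatorname{E}[(S+1)^{-1}] = \{1 - (1-u)^{n+1}\}/\{(n+1)u\}$; this is not the precise statement of the lemma (which provides the sharper control needed above), but it can be verified by exact summation:

```python
import math

def inv_moment(n, u):
    # E[ 1/S ; S >= 1 ]  for S ~ Binomial(n, u), computed by exact summation
    return sum(math.comb(n, s) * u**s * (1.0 - u)**(n - s) / s
               for s in range(1, n + 1))

# Crude bound: E[1/S; S >= 1] <= 2/{(n+1)u}, hence n u^2 E[1/S; S >= 1] <= 2u.
```

In particular, $n u_i^2 \operatorname{E}[S_i^{-1} \operatorname{\mathds{1}}\{S_i \geq 1\}]$ is at most $2 u_i$, consistent with the cancellation against $-u_i$ exploited in the proof.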
\begin{lemma}
\label{lem:int}
For any $1/\{2(1-\omega)\} < \gamma < 1$, we have
\[
\sup_{\vect u \in \{g \ge n^{-\gamma}\}}
\left\lvert
\int_{[0,1]^d}
\frac{g(\vect w)^\omega}{g(\vect u)^\omega}
{\,\mathrm{d}} \mu_{n,\vect u} (\vect w)
- 1
\right\rvert
=
\mathrm{O} \left\{ n^{-(1-\gamma)/2} \log(n) \right\},
\qquad n \to \infty.
\]
\end{lemma}
\begin{proof}
Since $g(\frac{S_1}{n}, \dots, \frac{S_d}{n})$ is a random variable taking values in $[0, 1]$, we can write
\begin{align}
\nonumber
\int_{[0,1]^d}
\frac{g(\vect w)^\omega}{g(\vect u)^\omega}
{\,\mathrm{d}} \mu_{n,\vect u} (\vect w)
&=
\frac{1}{g(\vect u)^\omega}
\operatorname{E} \Bigl[
g \bigl( \tfrac{S_1}{n}, \dots, \tfrac{S_d}{n} \bigr)^\omega
\Bigr] \\
\label{eq:int:aux}
&=
\frac{1}{g(\vect u)^\omega}
\int_0^1
\Prob \Bigl\{
g \bigl(\tfrac{S_1}{n}, \dots, \tfrac{S_d}{n} \bigr) > t^{1/\omega}
\Bigr\}
{\,\mathrm{d}} t.
\end{align}
Write $\varepsilon_n = n^{-(1-\gamma)/2} \log n$ and split the integral into two pieces, $\int_0^{a_{n, \pm}} + \int_{a_{n, \pm}}^1$, where $a_{n, \pm} = a_{n, \pm}(\vect u) = g(\vect u)^\omega (1 \pm \varepsilon_n)^\omega$. Recall that $0 < \omega < 1/2$.
On the one hand, we find
\begin{align*}
\int_{[0,1]^d}
\frac{g(\vect w)^\omega}{g(\vect u)^\omega}
{\,\mathrm{d}} \mu_{n,\vect u} (\vect w)
&\le
\frac{a_{n, +}}{g(\vect u)^\omega}
+
\frac{1 - a_{n,+}}{g(\vect u)^\omega}
\Prob \Bigl\{
g \bigl(\tfrac{S_1}{n}, \dots, \tfrac{S_d}{n} \bigr) > a_{n,+}^{1/\omega}
\Bigr\} \\
&\le
(1 + \varepsilon_n)^\omega
+
g( \vect u )^{-\omega}
\Prob \Bigl\{
g \bigl(\tfrac{S_1}{n}, \dots, \tfrac{S_d}{n} \bigr)
> g(\vect u) (1 + \varepsilon_n)
\Bigr\} \\
&\le
1 + \varepsilon_n
+
g( \vect u )^{-\omega}
2d \exp \{ - n g(\vect u) h(1+\varepsilon_n) \}
\end{align*}
where we used \eqref{eq:upper} in the last step. Since $h(1+\varepsilon_n) \ge \frac{1}{3} \varepsilon_n^2$ for $0 \le \varepsilon_n \le 1$ and since $g(\vect u) \ge n^{-\gamma}$, the right-hand side is at most
\[
1 + \varepsilon_n + 2d n^{\gamma \omega} \exp \{ - \tfrac{1}{3} (\log n)^2 \}
=
1 + \varepsilon_n + \mathrm{o}( \varepsilon_n ),
\qquad n \to \infty.
\]
On the other hand, restricting the integral in \eqref{eq:int:aux} to $[0, a_{n,-}]$, we have
\begin{align*}
\int_{[0,1]^d}
\frac{g(\vect w)^\omega}{g(\vect u)^\omega}
{\,\mathrm{d}} \mu_{n,\vect u} (\vect w)
&\ge
\frac{a_{n,-}}{g(\vect u)^\omega}
\Prob \Bigl\{
g \bigl( \tfrac{S_1}{n}, \ldots, \tfrac{S_d}{n} \bigr)
>
a_{n,-}^{1/\omega}
\Bigr\} \\
&=
(1 - \varepsilon_n)^\omega
\Prob \Bigl\{
g \bigl( \tfrac{S_1}{n}, \ldots, \tfrac{S_d}{n} \bigr)
>
g(\vect u) (1 - \varepsilon_n)
\Bigr\} \\
&\ge
(1-\varepsilon_n)^\omega [1 - 4d\exp \{ - n g(\vect u) h(1+\varepsilon_n) \}],
\end{align*}
where we used \eqref{eq:lower} in the last step. Since $0 \le \varepsilon_n \to 0$ and $g(\vect u) \ge n^{-\gamma}$, the right-hand side is bounded from below by
\begin{align*}
(1-\varepsilon_n)^\omega [1 - 4d\exp \{ - n g(\vect u) h(1+\varepsilon_n) \}]
&\ge (1-\varepsilon_n) [1 - 4d \exp \{ - \tfrac{1}{3} (\log n)^2 \} ] \\
&\ge 1 - \varepsilon_n - \mathrm{o}( \varepsilon_n ),
\qquad n \to \infty.
\end{align*}
Combining the upper and the lower bound yields the assertion.
\end{proof}
\begin{lemma}
\label{lem:bias2}
For any $1/\{2(1-\omega)\} < \gamma < 1$, we have, as $n \to \infty$,
\[
\sup_{\vect u \in \{g \ge n^{-\gamma}\}}
\left\lvert
\int_{[0,1]^d}
\biggl\{
\frac{\hat {\mathbb{C}}_n(\vect w)}{g(\vect w )^\omega}
-
\frac{\hat{\mathbb{C}}_n(\vect u)}{g(\vect u)^\omega}
\biggr\}
\frac{g(\vect w)^\omega}{g(\vect u)^\omega}
{\,\mathrm{d}} \mu_{n,\vect u} (\vect w)
\right\rvert
= \mathrm{o}_\Prob(1).
\]
\end{lemma}
\begin{proof}
Let $\delta_n = 1 / \log(n)$. Write $\lVert \hat{\mathbb{C}}_n \rVert_\infty = \sup \{ \lvert \hat{\mathbb{C}}_n( \vect v) \rvert : \vect v \in [0, 1]^d \}$. We have $\lVert \hat{\mathbb{C}}_n \rVert_\infty = \mathrm{O}_\Prob(1)$ as $n \to \infty$ by weak convergence of $\hat{\mathbb{C}}_n$ in $\ell^\infty([0, 1]^d)$.
We split the integral over $\vect w \in [0, 1]^d$ into two pieces: the integral over the domain
\[
A_{n, \vect u} =
\{ \vect w \in [0, 1]^d : \lvert \vect w - \vect u \rvert_\infty > \delta_n \} \cup \{ g < n^{-\gamma}(1-\delta_n) \}
\]
and the integral over its complement; here $\lvert \vect x \rvert_\infty = \max\{ \lvert x_j \rvert : j = 1, \ldots, d \}$.
For all $\vect w \in [0, 1]^d$ and all $\vect u \in \{g \ge n^{-\gamma} \}$, we have
\begin{align*}
R_n(\vect u, \vect w)
:=
\biggl\lvert
\frac{\hat{\mathbb{C}}_n(\vect w)}{g(\vect w)^\omega}
-
\frac{\hat{\mathbb{C}}_n(\vect u)}{g(\vect u)^\omega}
\biggr\rvert
\frac{g(\vect w)^\omega}{g(\vect u)^\omega}
\le
\frac{\lvert \hat{\mathbb{C}}_n(\vect w) \rvert}{g(\vect u)^\omega}
+
\frac{ \lvert \hat{\mathbb{C}}_n(\vect u) \rvert}{g(\vect u)^{2\omega}}
\le
2 \lVert \hat{\mathbb{C}}_n \rVert_\infty n^{2\gamma\omega}.
\end{align*}
Moreover, for all $\vect u \in \{g \ge n^{-\gamma}\}$, using Chebyshev's inequality and the concentration inequality \eqref{eq:lower}, we have
\begin{align*}
\mu_{n,\vect u} \bigl( A_{n, \vect u} \bigr)
&\le
\sum_{j=1}^d \Prob \Bigl\{ \bigl\lvert \tfrac{S_j}{n} - u_j \bigr\rvert > \delta_n \Bigr\}
+
\Prob \Bigl\{
g \bigl( \tfrac{S_1}{n}, \ldots, \tfrac{S_d}{n} \bigr)
< g(\vect u) (1 - \delta_n)
\Bigr\} \\
&\le
d n^{-1} \delta_n^{-2} + 4d \exp \{ - n^{1-\gamma} h(1+\delta_n) \}.
\end{align*}
Since $0 < \omega < 1/2$, $0 < \gamma < 1$, $\delta_n = 1/\log(n)$ and $h(1+\delta_n) \ge \tfrac{1}{3} \delta_n^2$, it follows that
\[
\sup_{\vect u \in \{g \ge n^{-\gamma}\}}
\int_{A_{n, \vect u}}
R_n( \vect u, \vect w )
{\,\mathrm{d}} \mu_{n, \vect u}( \vect w )
\le
n^{2\gamma\omega} [n^{-1} \delta_n^{-2} + \exp \{ - n^{1-\gamma} h(1+\delta_n) \}] \, \mathrm{O}_\Prob(1)
=
\mathrm{o}_\Prob(1), \qquad n \to \infty.
\]
It remains to consider the integral over $\vect w \in [0, 1]^d \setminus A_{n, \vect u}$, i.e., $\lvert \vect w - \vect u \rvert_\infty \le \delta_n$ and $g( \vect w ) \ge n^{-\gamma}(1 - \delta_n) > n^{-1}$, at least for sufficiently large $n$. By Lemma~4.1 in \cite{BerBucVol17}, we have
\begin{equation}
\label{eq:modcont}
\sup_{\substack{ \vect u, \vect w \in \{ g \ge n^{-1} \} \\ \lvert \vect u - \vect w \rvert_\infty \le \delta_n }}
\left|
\frac{\hat{\mathbb{C}}_n(\vect w)}{g(\vect w )^\omega}
-
\frac{\hat{\mathbb{C}}_n(\vect u)}{g(\vect u)^\omega}
\right|
= \mathrm{o}_{\Prob}(1), \qquad n \to \infty.
\end{equation}
In view of Lemma~\ref{lem:int}, we obtain that
\[
\sup_{\vect u \in \{g \ge n^{-\gamma}\}}
\int_{[0, 1]^d \setminus A_{n, \vect u}}
R_n( \vect u, \vect w )
{\,\mathrm{d}} \mu_{n, \vect u}( \vect w )
\le
\mathrm{o}_{\Prob}(1) \int_{[0, 1]^d} \frac{g(\vect w)^\omega}{g(\vect u)^\omega} {\,\mathrm{d}} \mu_{n, \vect u}(\vect w)
= \mathrm{o}_{\Prob}(1),
\]
as $n \to \infty$. The stated limit relation follows by combining the assertions on the integral over $A_{n, \vect u}$ and the one over its complement.
Note that in Lemma~4.1 in \cite{BerBucVol17}, the supremum in \eqref{eq:modcont} is taken over $[1/n, 1-1/n]^d$ rather than over $\{ g \ge n^{-1} \}$, but an inspection of the proof of that statement shows that the result extends to the set $\{ g \ge n^{-1} \}$. Furthermore, in the latter reference, the pseudo-observations are defined as $\hat U_{i,j} = \frac{1}{n+1} R_{i,j}$. This does not affect the above proof, since the difference between the two empirical copulas is at most $d/n$, almost surely, which gives an additional error term on the set $\{ g \ge n^{-1} \}$ of the order $\mathrm{O}_\Prob(n^{\omega + 1/2 - 1}) = \mathrm{o}_\Prob(1)$, as $n \to \infty$.
\end{proof}
\subsection{On the expectation of the reciprocal of a binomial random variable}
\begin{lemma}
Let $0 < u \le 1$ and let $n \ge 2$ be integer. If $S \sim \operatorname{Bin}(n, u)$ and $T \sim \operatorname{Bin}(n-1, u)$, then
\begin{equation}
\label{eq:Bin:invMoment}
\operatorname{E} \left[ \frac{1}{S} \operatorname{\mathds{1}}_{ \{ S \ge 1 \} } \right]
=
nu \operatorname{E} \left[ \frac{1}{(1+T)^2} \right]
=
nu \int_0^1 (1-u+us)^{n-1} (-\log s) \, {\,\mathrm{d}} s.
\end{equation}
\end{lemma}
\begin{proof}
For $k\in\{1,\dots,n\}$, we have
\[
\frac{\Prob(S=k)}{\Prob(T+1=k)}
=
\frac{\binom{n}{k}u^k(1-u)^{n-k}}{\binom{n-1}{k-1}u^{k-1}(1-u)^{(n-1)-(k-1)}}
=
\frac{nu}{k}.
\]
We obtain that
\begin{align*}
\operatorname{E} \left[ \frac{1}{S} \operatorname{\mathds{1}}_{\{S\geq 1\}} \right]
=
\sum_{k=1}^n \frac{1}{k} \Prob(S = k)
=
\sum_{k=1}^n \frac{1}{k} \frac{nu}{k} \Prob(T+1=k)
=
(nu) \operatorname{E} \left[ \frac{1}{(1+T)^2} \right].
\end{align*}
Now we apply a trick due to \cite{ChaStr72}: we have
\[
\frac{1}{(1+T)^2}
=
\int_{t=0}^1 \frac{1}{t} \int_{s=0}^t s^T \, {\,\mathrm{d}} s \, {\,\mathrm{d}} t
=
\int_{s=0}^1 s^T \int_{t=s}^1 \frac{{\,\mathrm{d}} t}{t} \, {\,\mathrm{d}} s
=
\int_0^1 s^T (- \log s) \, {\,\mathrm{d}} s.
\]
Taking expectations and using Fubini's theorem, we obtain
\[
\operatorname{E} \left[ \frac{1}{(1+T)^2} \right]
=
\int_0^1 \operatorname{E}(s^T) \, (- \log s) \, {\,\mathrm{d}} s
=
\int_0^1 (1-u+us)^{n-1} (-\log s) \, {\,\mathrm{d}} s,
\]
as required.
\end{proof}
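As a numerical sanity check, outside the formal proof, the identity \eqref{eq:Bin:invMoment} can be verified directly; in the Python snippet below, the parameters $n = 20$ and $u = 0.3$ are arbitrary, and the integral is approximated by a simple midpoint rule.

```python
# Numerical verification of
#   E[(1/S) 1{S >= 1}] = n u E[1/(1+T)^2]
#                      = n u * integral_0^1 (1-u+u*s)^(n-1) * (-log s) ds
# for S ~ Bin(n, u) and T ~ Bin(n-1, u); parameters are illustrative.
from math import comb, log

def lhs(n, u):                     # E[(1/S) 1{S >= 1}], computed exactly
    return sum(comb(n, k) * u**k * (1 - u)**(n - k) / k
               for k in range(1, n + 1))

def middle(n, u):                  # n u E[1/(1+T)^2], T ~ Bin(n-1, u)
    return n * u * sum(comb(n - 1, k) * u**k * (1 - u)**(n - 1 - k) / (1 + k)**2
                       for k in range(n))

def rhs(n, u, m=200_000):          # midpoint-rule approximation of the integral
    h = 1.0 / m
    return n * u * h * sum((1 - u + u * (i + 0.5) * h)**(n - 1)
                           * (-log((i + 0.5) * h)) for i in range(m))

n, u = 20, 0.3
a, b, c = lhs(n, u), middle(n, u), rhs(n, u)
print(a, b, c)
```

The first two expressions agree to machine precision, while the quadrature approximation of the integral matches up to the discretization error.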
\begin{lemma}
\label{lem:binom:aux}
Let $0 < u_n \le 1$ and let $S_n \sim \operatorname{Bin}(n, u_n)$. If $nu_n \to \infty$, then
\[
(nu_n^2) \operatorname{E} \left[ \frac{1}{S_n} \operatorname{\mathds{1}}_{ \{ S_n \ge 1 \} } \right]
=
u_n+ \mathrm{O} \bigl( n^{-1} \bigr),
\qquad n \to \infty.
\]
\end{lemma}
\begin{proof}
We start from \eqref{eq:Bin:invMoment}:
\[
(nu_n^2) \operatorname{E} \left[ \frac{1}{S_n} \operatorname{\mathds{1}}_{ \{ S_n \ge 1 \} } \right]
=
n^2 u_n^3 \int_0^1 (1-u_n+u_ns)^{n-1} (-\log s) \, {\,\mathrm{d}} s.
\]
We split the integral in two parts, cutting at $s = 1/2$.
First we consider the case $s \le 1/2$. For some positive constant $K$, we have
\begin{align*}
n^2u_n^3 \int_0^{1/2} (1-u_n+u_ns)^{n-1} (-\log s) \, {\,\mathrm{d}} s
&\le
n^2u_n^3 \, (1-u_n/2)^{n-1} \int_0^{1/2} (-\log s) \, {\,\mathrm{d}} s \\
&\le
K \, n^2u_n^3 \, (1-u_n/2)^n \\
&\le
K \, n^2u_n^3 \exp(-nu_n/2).
\end{align*}
For any $m > 0$, this expression is $\mathrm{O}(u_n (nu_n)^{-m})$ as $n \to \infty$; choosing $m = 1$ shows that it is $\mathrm{O}(n^{-1})$ as $n \to \infty$.
Second we consider the case $s \ge 1/2$. The substitution $s = 1 - v / (nu_n)$ yields
\begin{equation}
\label{eq:Bin:invMoment:aux}
n^2 u_n^3 \int_{1/2}^1 (1-u_n+u_ns)^{n-1} (-\log s) \, {\,\mathrm{d}} s
=
u_n \int_0^{(nu_n/2)} (1-v/n)^{n-1} [-(nu_n) \log \{ 1 - v/(nu_n) \}] \, {\,\mathrm{d}} v.
\end{equation}
We need to show that this integral is $u_n + \mathrm{O}(n^{-1})$ as $n \to \infty$.
For ease of notation, put $k_n = nu_n$. Recall that $k_n \to \infty$ as $n \to \infty$ by assumption. The inequalities $x \le - \log(1-x) \le x/(1-x)$ for $0 \le x < 1$ imply that
\[
0 \le -k_n \log(1 - v/k_n) - v \le \frac{v^2}{k_n-v} \le \frac{2 v^2}{k_n}, \qquad v \in [0, k_n/2].
\]
As $(1-v/n)^{n-1} \le \{1-k_n/(2n)\}^{-1} (1-v/n)^{n} \le 2 \exp(-v)$ for $v \in [0, k_n/2]$, we find
\begin{align*}
u_n \int_0^{k_n/2} (1-v/n)^{n-1} \, \left\lvert -k_n \log ( 1 - v/k_n ) - v \right\rvert \, {\,\mathrm{d}} v
&\le
\frac{4 u_n}{k_n} \int_0^{k_n/2} \exp(-v) \, v^2 \, {\,\mathrm{d}} v \\
&=
\mathrm{O}( n^{-1} ),
\qquad n \to \infty.
\end{align*}
As a consequence, replacing $-k_n \log(1-v/k_n)$ by $v$ in \eqref{eq:Bin:invMoment:aux} produces an error of the required order $\mathrm{O}(n^{-1})$.
It remains to consider the integral
\[
u_n\int_0^{k_n/2} (1-v/n)^{n-1} \, v \, {\,\mathrm{d}} v.
\]
Via the substitution $x = 1-v/n$, this integral can be computed explicitly. After some routine calculations, we find it is equal to
\[
u_n \frac{n}{n+1} [ 1 - \{1 - k_n/(2n)\}^{n} (1 + k_n/2) ].
\]
Since $\{1 - k_n/(2n)\}^{n} \le \exp(-k_n/2)$, the previous expression is
\[
u_n + \mathrm{O}(u_n n^{-1}) + \mathrm{O}(u_n \exp(-k_n/2) k_n ), \qquad n \to \infty.
\]
Both error terms are $\mathrm{O}(n^{-1})$, as required.
\end{proof}
\begin{lemma}
\label{lem:binom}
If $0 < u_n \le 1$ is such that $nu_n \to \infty$ as $n \to \infty$, then
\[
\sup_{u_n \le u \le 1} \left\lvert nu^2 \operatorname{E} \left[ \frac{1}{S} \operatorname{\mathds{1}}_{ \{ S \ge 1 \} } \right] - u \right\rvert
=
\mathrm{O}( n^{-1} ),
\qquad n \to \infty,
\]
where the expectation is taken for $S \sim \operatorname{Bin}(n,u)$.
\end{lemma}
\begin{proof}
The function sending $u \in [u_n, 1]$ to $\lvert nu^2 \operatorname{E} [ S^{-1} \operatorname{\mathds{1}}_{ \{ S \ge 1 \} } ] - u \rvert$, with $S \sim \operatorname{Bin}(n, u)$, is continuous and therefore attains its supremum at some $v_n \in [u_n, 1]$. Since $nv_n \ge nu_n \to \infty$ as $n \to \infty$, we can apply Lemma~\ref{lem:binom:aux} to find that the supremum is $\mathrm{O}(n^{-1})$ as $n \to \infty$.
\end{proof}
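The uniform rate in Lemma~\ref{lem:binom} can be illustrated numerically. The following Python snippet, again purely illustrative and with arbitrary grids of $n$ and $u$, computes the truncated inverse moment exactly and confirms that $n$ times the error stays bounded.

```python
# Exact check that n u^2 E[(1/S) 1{S >= 1}] - u = O(1/n) for S ~ Bin(n, u),
# uniformly over a grid of u; the grids of n and u are illustrative only.
from math import comb

def inv_moment(n, u):              # E[(1/S) 1{S >= 1}]
    return sum(comb(n, k) * u**k * (1 - u)**(n - k) / k
               for k in range(1, n + 1))

errs = {n: max(abs(n * u * u * inv_moment(n, u) - u)
               for u in (0.1, 0.3, 0.5, 0.7, 0.9))
        for n in (50, 100, 200)}
for n, e in errs.items():
    print(n, n * e)                # n times the error should stay bounded
```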
\subsection{Inequalities for binomial random variables}
If $S \sim \operatorname{Bin}(n, u)$ is a binomial random variable with success probability $0 < u < 1$, then Bennett's inequality states that
\[
\Prob \Bigl(
\sqrt{n} \bigl\lvert \tfrac{S}{n} - u \bigr\rvert \ge \lambda
\Bigr)
\le
2 \exp \Bigl\{
- \frac{\lambda^2}{2u}
\psi \Bigl( \frac{\lambda}{\sqrt{n} u} \Bigr)
\Bigr\}
=
2 \exp \Bigl\{
- nu \, h \Bigl( 1 + \frac{\lambda}{\sqrt{n} u} \Bigr)
\Bigr\}
\]
for $\lambda > 0$, where $\psi(x) = 2 \, h(1+x)/x^2$ and $h(x) = x(\log x - 1) + 1$; see for instance \citet[Proposition~A.6.2]{VanWel96}. Setting $\lambda = \sqrt{n} u \delta$, we find
\begin{equation}
\label{eq:bennett}
\Prob \Bigl(
\big\lvert \tfrac{S}{n} - u \big\rvert \ge u \delta
\Bigr)
\le
2 \exp \{ - nu \, h(1+\delta) \},
\qquad \delta > 0.
\end{equation}
Note that $h(1+\delta) = \int_0^\delta \log(1+t) {\,\mathrm{d}} t \ge \int_0^\delta (t - \tfrac{1}{2} t^2) {\,\mathrm{d}} t = \tfrac{1}{2} \delta^2 (1 - \tfrac{1}{3} \delta)$ for $\delta \ge 0$ and thus $h(1+\delta) \ge \tfrac{1}{3} \delta^2$ for $0 \le \delta \le 1$.
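Both the elementary inequality $h(1+\delta) \ge \tfrac{1}{3}\delta^2$ on $[0, 1]$ and the tail bound \eqref{eq:bennett} can be checked numerically against the exact binomial tail. The Python snippet below is a sanity check only; the parameters $n$, $u$ and $\delta$ are arbitrary.

```python
# Checks h(1+d) = (1+d)(log(1+d) - 1) + 1 >= d^2/3 for d in (0, 1], and
# P(|S/n - u| >= u d) <= 2 exp(-n u h(1+d)) for S ~ Bin(n, u).
# The chosen n, u and d are illustrative only.
from math import comb, exp, log

def h(x):
    return x * (log(x) - 1) + 1

# minimum of h(1+d) - d^2/3 over a grid of d in (0, 1]; should be >= 0
gap = min(h(1 + i / 100) - (i / 100)**2 / 3 for i in range(1, 101))

n, u, d = 100, 0.3, 0.5
tail = sum(comb(n, k) * u**k * (1 - u)**(n - k)
           for k in range(n + 1) if abs(k / n - u) >= u * d)
bound = 2 * exp(-n * u * h(1 + d))
print(gap, tail, bound)
```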
We extend \eqref{eq:bennett} to a vector of independent binomial random variables and in terms of the weight function $g$ in \eqref{eq:g}.
\begin{lemma}
\label{lem:bounds}
If $S_1, \dots, S_d$ are independent random variables with $S_j \sim \operatorname{Bin}(n,u_j)$ and $0 < u_j < 1$ for all $j \in \{1, \ldots, d\}$, then, for $\delta > 0$,
\begin{align}
\label{eq:upper}
\Prob \Bigl\{
g \bigl( \tfrac{S_1}{n}, \ldots, \tfrac{S_d}{n} \bigr)
\geq g(\vect u ) ( 1 + \delta )
\Bigr\}
&\leq
2d \exp\bigl\{ - n g(\vect u) h(1+\delta) \bigr\}, \\
\label{eq:lower}
\Prob \Bigl\{
g \bigl( \tfrac{S_1}{n}, \ldots, \tfrac{S_d}{n} \bigr)
\leq g(\vect u ) ( 1 - \delta )
\Bigr\}
&\leq
4d \exp \bigl\{ - n g(\vect u) h(1+\delta) \bigr\},
\end{align}
with $h$ as above; in particular, $h(1+\delta) \ge \tfrac{1}{3} \delta^2$ for $0 < \delta \le 1$.
\end{lemma}
\begin{proof}
Let us start with \eqref{eq:lower}. The definition of the weight function $g$ in \eqref{eq:g} yields
\[
\Prob
\Bigl\{
g \bigl( \tfrac{S_1}{n}, \dots , \tfrac{S_d}{n} \bigr)
\leq g(\vect u ) ( 1 - \delta )
\Bigr\}
\leq
\sum_{j=1}^d
\Bigl[
\Prob
\Bigl\{
\tfrac{S_j}{n} \leq g(\vect u ) (1 - \delta)
\Bigr\}
+
\Prob
\Bigl\{
\max_{k \ne j} \bigl( 1 - \tfrac{S_{k}}{n} \bigr)
\leq g(\vect u ) ( 1 - \delta)
\Bigr\}
\Bigr].
\]
Let us first consider the first term on the right-hand side, i.e., $\Prob \{ \tfrac{S_j}{n} \leq g(\vect u ) ( 1 - \delta ) \}$. By definition of the weight function we have $g (\vect u) \leq u_j$. By Bennett's inequality~\eqref{eq:bennett},
\begin{align*}
\Prob \Bigl\{ \tfrac{S_j}{n} \leq g(\vect u) ( 1 - \delta ) \Bigr\}
&\leq
\Prob \Bigl\{ \tfrac{S_j}{n} \leq u_j ( 1 - \delta ) \Bigr\} \\
&\le
\Prob \Bigl\{ \big\lvert \tfrac{S_j}{n} - u_j \big\rvert
\geq u_j \delta \Bigr\} \\
&\leq
2 \exp \{ - n u_j h(1 + \delta) \} \\
&\leq
2 \exp \{ - n g(\vect u) h(1 + \delta) \} .
\end{align*}
Second, consider the term $\Prob \{ \max_{k \ne j} ( 1 - \tfrac{S_{k}}{n} )\leq g(\vect u ) ( 1 - \delta) \} $. Suppose $j=1$; the other cases can be treated exactly along the same lines. We have $g(\vect u) \leq \max_{k \neq 1}(1 - u_k)$. Assume without loss of generality that $\max_{k \neq 1} (1 - u_k) = 1 - u_2$. Then we obtain $g(\vect u) \le 1 - u_2$ and, by Bennett's inequality \eqref{eq:bennett} applied to $n - S_2 \sim \operatorname{Bin}(n, 1-u_2)$,
\begin{align*}
\Prob
\Bigl\{
\max_{k \ne 1} \bigl(1-\tfrac{S_{k}}{n}\bigr)
\leq g(\vect u ) (1 - \delta)
\Bigr\}
&\leq
\Prob
\Bigl\{
\max_{k \ne 1} \bigl(1-\tfrac{S_{k}}{n}\bigr)
\leq (1 - u_2) (1 - \delta)
\Bigr\} \\
&\leq
\Prob
\Bigl\{
1 - \tfrac{S_{2}}{n}
\leq (1- u_2) (1 - \delta)
\Bigr\} \\
&\leq
\Prob
\Bigl\{
\big\lvert 1 - \tfrac{S_2}{n} - (1- u_2) \big\rvert
\geq (1- u_2 ) \delta
\Bigr\} \\
&\leq
2 \exp \{ -n (1-u_2) h(1 + \delta) \} \\
&\leq
2 \exp \{ -n g(\vect u) h(1 + \delta) \}.
\end{align*}
Let us now show \eqref{eq:upper}. First suppose $g(\vect u) = u_1$. Since $g \bigl( \tfrac{S_1}{n}, \dots, \tfrac{S_d}{n} \bigr) \leq \tfrac{S_1}{n}$ we have, by Bennett's inequality~\eqref{eq:bennett},
\begin{align*}
\Prob
\Bigl\{
g \bigl( \tfrac{S_1}{n}, \dots, \tfrac{S_d}{n} \bigr)
\geq g(\vect u) (1 + \delta)
\Bigr\}
&\leq
\Prob
\Bigl\{
\tfrac{S_1}{n} \geq u_1 (1 + \delta)
\Bigr\} \\
&\le
\Prob
\Bigl\{
\bigl\lvert \tfrac{S_1}{n} - u_1 \bigr\rvert \ge u_1 \delta
\Bigr\} \\
&\le
2 \exp \bigl\{ - nu_1 h(1+\delta) \bigr\}
=
2 \exp \bigl\{ - n g(\vect u) h(1+\delta) \bigr\}.
\end{align*}
Finally, suppose that $g(\vect u) = 1 - u_1 \geq 1 - u_k$, for $k = 3, \ldots, d$. Note that $g \bigl(\tfrac{S_1}{n}, \dots, \tfrac{S_d}{n}\bigr) \leq \max_{k \neq 2} \bigl(1 - \tfrac{S_k}{n}\bigr)$, which yields
\begin{align*}
\Prob \Bigl\{
g \bigl( \tfrac{S_1}{n}, \ldots, \tfrac{S_d}{n} \bigr)
\geq g(\vect u ) (1 + \delta)
\Bigr\}
&\leq
\Prob \Bigl\{
\max_{k \neq 2} (1- \tfrac{S_k}{n}) \geq (1- u_1) (1 + \delta)
\Bigr\} \\
&\leq
\sum_{k \neq 2} \Prob \Bigl\{
1 - \tfrac{S_k}{n} \geq (1- u_1)(1 + \delta)
\Bigr\}.
\end{align*}
By Bennett's inequality \eqref{eq:bennett} applied to $n - S_k \sim \operatorname{Bin}(n, 1-u_k)$ for every $k \ne 2$, we have, since $(1-u_1)/(1-u_k) \ge 1$,
\begin{align*}
\Prob \Bigl\{
1 - \tfrac{S_k}{n} \geq (1- u_1)(1 + \delta)
\Bigr\}
&\le
\Prob \Bigl\{
\bigl\lvert 1 - \tfrac{S_k}{n} - (1-u_k) \bigr\rvert
\geq
(1 - u_1)(1+\delta) - (1 - u_k)
\Bigr\} \\
&\le
2 \exp \Bigl\{
- n (1 - u_k) h \Bigl( \tfrac{1 - u_1}{1 - u_k} (1 + \delta) \Bigr)
\Bigr\}.
\end{align*}
For $a \ge 1$ and $\delta \ge 0$, a direct calculation\footnote{Or, since $h(x) = \int_1^x \log(t) \, {\,\mathrm{d}} t$, we have $h(a(1+\delta)) = \int_1^{a(1+\delta)} \log(t) \, {\,\mathrm{d}} t = a \int_{1/a}^{1+\delta} \log(as) \, {\,\mathrm{d}} s \ge a \int_1^{1+\delta} \log(s) \, {\,\mathrm{d}} s = a \, h(1+\delta)$ for $a \ge 1$ and $\delta \ge 0$.} shows that $h(a(1+\delta)) - a \, h(1+\delta) \ge h(a) \ge 0$ and thus $h(a(1+\delta)) \ge a \, h(1+\delta)$. Apply this inequality to $a = (1-u_1) / (1-u_k)$ to find
\begin{align*}
\Prob \Bigl\{
1 - \tfrac{S_k}{n} \geq (1- u_1)(1 + \delta)
\Bigr\}
&\le
2 \exp \Bigl\{ - n (1 - u_k) \tfrac{1 - u_1}{1 - u_k} h(1+\delta) \Bigr\} \\
&=
2 \exp \{ - n (1 - u_1) h(1+\delta) \}
=
2 \exp \{ - n g(\vect u) h(1+\delta) \}. \qedhere
\end{align*}
\end{proof}
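As a sanity check on the upper-tail bound \eqref{eq:upper}, the following Monte Carlo sketch compares the empirical exceedance probability with the bound $2d \exp\{-n g(\vect u) h(1+\delta)\}$. Since the definition \eqref{eq:g} of $g$ is not reproduced here, the code assumes, for illustration only, $g(\vect v) = \min_j \min\bigl(v_j, \max_{k \ne j}(1-v_k)\bigr)$, which is consistent with the bounds $g(\vect u) \le u_j$ and $g(\vect u) \le \max_{k \ne j}(1-u_k)$ used in the proof; all numerical parameters are arbitrary.

```python
import math
import random

def h(x):
    # h(x) = x*log(x) - x + 1 = integral of log(t) from 1 to x
    return x * math.log(x) - x + 1

def g(v):
    # Assumed weight function (illustration only; see the remark above).
    d = len(v)
    return min(min(v[j], max(1 - v[k] for k in range(d) if k != j))
               for j in range(d))

random.seed(0)
n, u, delta, trials = 200, (0.3, 0.6), 0.5, 5000
d = len(u)
hits = 0
for _ in range(trials):
    # S_j ~ Bin(n, u_j), independently across coordinates.
    s = tuple(sum(random.random() < p for _ in range(n)) / n for p in u)
    hits += g(s) >= g(u) * (1 + delta)
empirical = hits / trials
bound = 2 * d * math.exp(-n * g(u) * h(1 + delta))
```

For these parameters the exponential bound is already well below one, and the simulated exceedance frequency stays below it.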
\subsection{Extensions of results in \cite{BerBucVol17}}
\label{sec:BerBucVol17}
For any sequence $\delta_n > 0$ that converges to zero as $n \to \infty$, Lemma~4.10 in \cite{BerBucVol17} can be extended to
\begin{equation}
\label{eq:extend1}
\sup \left\{
\left|
\frac{\mathbb{C}_n(\vect u )}{g(\vect u)^\omega}
-
\frac{\mathbb{C}_n(\vect u')}{g(\vect u')^\omega}
\right|
\; : \;
g(\vect u) \ge c/n, \,
g(\vect u') \ge c/n, \,
\lvert \vect u - \vect u' \rvert \leq \delta_n
\right\}
= \mathrm{o}_\Prob(1), \qquad n \to \infty.
\end{equation}
Here, $\mathbb{C}_n = \sqrt{n} ( \tilde{C}_n - C )$ and $\tilde{C}_n$ is the empirical copula based on the generalized inverse function of the marginal empirical distribution functions \citep[beginning of Section~4.2]{BerBucVol17}.
Furthermore, Theorem~4.5 in the same reference can be extended to
\begin{equation}
\label{eq:extend2}
\sup \left\{
\left|
\frac{\hat{\mathbb{C}}_n(\vect u)}{g(\vect u)^\omega} - \frac{\bar{\mathbb{C}}_n(\vect u)}{g(\vect u)^\omega}
\right|
\; : \;
g(\vect u) \geq c/n
\right\}
= \mathrm{o}_\Prob(1), \qquad n \to \infty.
\end{equation}
\begin{proof}
Let us start with \eqref{eq:extend1}. The result is similar to Equation (4.1) in Lemma 4.10 of \cite{BerBucVol17}. A look at the proof shows that the restriction $\vect u , \vect u' \in [c/n, 1-c/n]^d$ instead of $\vect u , \vect u' \in \{ g \geq c/n \}$ is not needed: the proof of Equation (4.1) in Lemma 4.10 in \cite{BerBucVol17} is based on Lemmas 4.7 and 4.8 and Equations (4.8) and (4.8), which are all valid on sets of the form $N(c_{n1},c_{n2}) = \{ g \in (c_{n1}, c_{n2}] \}$. Hence, in the proof, all suprema can be taken over $\vect u , \vect u' \in \{ g \geq c/n \}$ instead of $\vect u , \vect u' \in [c/n, 1-c/n]^d$, which gives exactly \eqref{eq:extend1}.
For the proof of \eqref{eq:extend2}, note that for any $\vect u \in \{ g \geq c/n \}$ we can find $\vect u' \in \{ g \geq n^{-1/2} \}$ such that $\lvert \vect u - \vect u' \rvert \leq d n^{-1/2}$. Finding such a $\vect u'$ is all that is needed to extend the proof of Theorem 4.5 in \cite{BerBucVol17} to obtain \eqref{eq:extend2}.
\end{proof}
\section*{Acknowledgments}
The authors gratefully acknowledge the editor-in-chief, the associate editor, and the referees for additional references, for suggesting the idea of a weighted test of independence, and for various suggestions concerning the numerical experiments on the estimation of the Pickands dependence function.
Betina Berghaus gratefully acknowledges support by the Collaborative Research Center ``Statistical modeling of nonlinear dynamic processes'' (SFB 823) of the German Research Foundation (DFG).
Johan Segers gratefully acknowledges funding by contract ``Projet d'Act\-ions de Re\-cher\-che Concert\'ees'' No.\ 12/17-045 of the ``Communaut\'e fran\c{c}aise de Belgique'' and by IAP research network Grant P7/06 of the Belgian government.
\section{Introduction}
We consider finite simple graphs $G$ with vertex set $V(G)$ and edge set $E(G)$.
The vertex-degree of $v \in V(G)$ is denoted by $d_G(v)$, and $\Delta(G)$ denotes the maximum vertex-degree of $G$.
If the graph is clear from the context, we simply write $\Delta$.
A graph is planar if it is embeddable into the Euclidean plane. A plane graph $(G,\Sigma)$ is a planar graph $G$ together with an embedding $\Sigma$ of $G$ into the Euclidean plane.
If $(G,\Sigma)$ is a plane graph, then $F(G)$ denotes the set of faces of $(G,\Sigma)$.
The degree $d_{G}(f)$ of a face $f$ is the length of its facial circuit. A face $f$ is a $k$-face if $d_{G}(f)=k$,
and it is a $k^+$-face if $d_{G}(f) \geq k$.
The edge-chromatic number $\chi'(G)$ of a graph $G$ is the minimum $k$ such that $G$ admits a proper $k$-edge-coloring.
Vizing \cite{Vizing_1964} proved that $\chi'(G) \in \{\Delta(G), \Delta(G)+1\}$. If $\chi'(G)= \Delta(G)$, then $G$ is a class 1 graph, and it is a
class 2 graph otherwise. A class 2 graph $H$ is $k$-critical if $\Delta(H)=k$ and $\chi'(H') < \chi'(H)$ for every proper subgraph $H'$ of $H$.
Vizing \cite{Vizing_1964} showed for each $k \in \{2,3,4,5\}$ that there is a planar class 2 graph $G$ with $\Delta(G) = k$. He proved that
every planar graph with $\Delta\geq 8$ is a class 1 graph, and conjectured that every planar graph with $\Delta \in \{6,7\}$ is a class 1 graph. Vizing's conjecture is
proved for planar graphs with $\Delta=7$ by Gr\"unewald \cite{Gruenewald_2000}, Sanders, Zhao \cite{Sanders_Zhao_2001}, and Zhang \cite{Zhang_2000} independently.
It is still open for the case $\Delta=6$. This paper provides short proofs of the following statements.
\begin{theorem} \label{th_main}
There is no 6-critical plane graph $(G, \Sigma)$, such that every vertex of $G$ is incident to at most three 3-faces.
\end{theorem}
If Vizing's conjecture is not true, then every 6-critical graph has the following property.
\begin{corollary}
Let $(G,\Sigma)$ be a plane graph. If $G$ is $6$-critical, then there is a vertex of $G$ which is incident to at least four $3$-faces.
\end{corollary}
\begin{theorem} \label{th_main 1}
Let $(G,\Sigma)$ be a plane graph. If $G$ is $5$-critical, then $(G,\Sigma)$ has a $3$-face which is adjacent to a $3$-face or to a $4$-face.
\end{theorem}
A significantly longer proof of Theorem \ref{th_main} is given in \cite{Wang_Xu_2013}, where the statement is formulated for plane graphs in general.
However, that proof works for critical graphs only, and the assumption that a minimal counterexample is critical is unjustified: a subgraph of a minimal counterexample $G$ need not fulfill the pre-condition of the statement. For example, if $G$ has a triangle $[vxyv]$ and a bivalent vertex $u$ such that $u$ is the unique vertex inside $[vxyv]$ and $u$ is adjacent to $x$ and $y$, then the removal of $u$ increases the number of 3-faces containing $v$ (see Figure \ref{mistake_example}).
\begin{figure}[h]
\centering
\includegraphics[width=4.5cm]{mistake_example.pdf}\\
\caption{An example}\label{mistake_example}
\end{figure}
\section{Proofs of Theorems \ref{th_main} and \ref{th_main 1}}
We will use the following two lemmas.
\begin{lemma} [\cite{Luo_Miao_Zhao_2009}] \label{lem_m_n}
If $G$ is a $6$-critical graph, then $|E(G)|\geq \frac{1}{2}(5|V(G)|+3)$.
\end{lemma}
\begin{lemma} [\cite{Woodall_2008}] \label{lem_m_n 1}
If $G$ is a $5$-critical graph, then $|E(G)| \geq \frac{15}{7}|V(G)|$.
\end{lemma}
\subsection*{Proof of Theorem \ref{th_main}}
Suppose to the contrary that there is a counterexample to the statement. Then there is a $6$-critical graph $G$ which has an embedding $\Sigma$
such that every $v \in V(G)$ is incident to at most three 3-faces.
With Euler's formula and Lemma \ref{lem_m_n} we deduce $\sum_{f\in F(G)}(d_G(f)-4) = 2|E(G)|-4|F(G)| = 2|E(G)|-4(|E(G)|+2-|V(G)|) \leq -|V(G)|-11$.
Therefore,
$ |V(G)| + \sum_{f\in F(G)}(d_{G}(f)-4) \leq -11$.
Give initial charge 1 to each $v \in V(G)$ and $d_{G}(f)-4$ to each $f \in F(G)$. Discharge the elements of $V(G) \cup F(G)$ according to the following rule:\\
\textbf{R1}: Every vertex sends $\frac{1}{3}$ to its incident 3-faces.
The rule only moves charge around and does not affect the sum. Furthermore, the final charge of every vertex and face is at least 0.
Therefore, $0\leq\sum_{v\in V(G)}1 + \sum_{f\in F(G)}(d_{G}(f)-4) = |V(G)| + \sum_{f\in F(G)}(d_{G}(f)-4) \leq -11$, a contradiction.
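The counting step above can be checked mechanically: plugging the extremal edge count $|E(G)| = \tfrac{1}{2}(5|V(G)|+3)$ of Lemma \ref{lem_m_n} into Euler's formula reproduces $-|V(G)|-11$ exactly, and any larger edge count only decreases the total. The sketch below verifies this with exact rational arithmetic (the half-integer edge count is used only as a bound, not as an actual graph):

```python
from fractions import Fraction

def face_degree_excess(V, E):
    """sum_f (d(f) - 4) = 2E - 4F, where F = E + 2 - V by Euler's formula."""
    F = E + 2 - V
    return 2 * E - 4 * F

# Extremal edge bound from Lemma 1 for a 6-critical graph: E = (5V + 3)/2.
results = {V: V + face_degree_excess(V, Fraction(5 * V + 3, 2))
           for V in range(7, 100)}
```

Every entry equals $-11$, and increasing $E$ by one drops the total by two, confirming the inequality.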
\subsection*{Proof of Theorem \ref{th_main 1}}
Suppose to the contrary that there is a counterexample to the statement. Then there is a $5$-critical graph $G$ which has an embedding $\Sigma$
such that every 3-face is adjacent to $5^+$-faces only. Hence, every vertex of $G$ is incident to at most two 3-faces, and every vertex which is incident to a 3-face is also incident
to a $5^+$-face.
By Lemma \ref{lem_m_n 1}, we have $\sum_{f\in F(G)}(d_{G}(f)-4) \leq -\frac{2}{7}|V(G)|-8$. Therefore,
$\frac{2}{7}|V(G)| + \sum_{f\in F(G)}(d_{G}(f)-4) \leq -8$.
Give initial charge of $\frac{2}{7}$ to each vertex and $d_G(f)-4$ to each face of $G$. Discharge the elements of $V(G) \cup F(G)$ according to the following rules:\\
\textbf{R1}: Every vertex sends $\frac{1}{3}$ to its incident 3-faces.\newline
\textbf{R2}: Every $5^+$-face sends $\frac{d_G(f)-4}{d_G(f)}$ to its incident vertices.
Denote the final charge by $ch^*$. Rules R1 and R2 imply that $ch^*(f)\geq 0$ for every $f \in F(G)$.
Let $n \leq 2$ and $v$ be a vertex which is incident to $n$ 3-faces.
If $n=0$, then $ch^*(v)\geq \frac{2}{7}>0$.
If $n=1$, then $v$ is incident to at least one $5^+$-face, and therefore,
$ch^*(v)\geq \frac{2}{7}+\frac{1}{5}-\frac{1}{3}>0$ by rule R2. If $n=2$, then $v$ is incident to at least two $5^+$-faces,
and therefore $ch^*(v)\geq \frac{2}{7}+2\times\frac{1}{5}-2\times\frac{1}{3}=\frac{2}{105}>0$, by rule R2.
Hence, $0\leq\sum_{v\in V(G)}\frac{2}{7} + \sum_{f\in F(G)}(d_G(f)-4) \leq -8$, a contradiction.
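The final-charge computations and the counting step can again be verified with exact rational arithmetic; the sketch below checks the two vertex-charge bounds under rules R1 and R2 and the $-\tfrac{2}{7}|V(G)|-8$ estimate for an arbitrary choice of $|V(G)|$.

```python
from fractions import Fraction as F

# Final charge of a vertex incident to n = 1 resp. n = 2 3-faces (rules R1/R2):
charge_n1 = F(2, 7) + F(1, 5) - F(1, 3)          # one 3-face, >= one 5+-face
charge_n2 = F(2, 7) + 2 * F(1, 5) - 2 * F(1, 3)  # two 3-faces, >= two 5+-faces

# Counting step with the extremal edge bound E = (15/7) V of Lemma 2:
# sum_f (d(f) - 4) = 2E - 4(E + 2 - V) = -(2/7) V - 8.
V = 100
excess = 2 * F(15, 7) * V - 4 * (F(15, 7) * V + 2 - V)
```

Both charges are positive ($\tfrac{16}{105}$ and $\tfrac{2}{105}$), matching the values in the proof.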
\section{Introduction}
\label{sec:intro}
\noindent Language, reflecting users' psychology, has been used as an effective tool to understand and predict mental health conditions (i.e., \citealp{de2013predicting}). While language analyses widely utilize social media platforms, like Facebook~\cite{eichstaedt2018facebook,seabrook2018predicting}, text messages (SMS) have just recently been demonstrated as a new platform to detect translational linguistic markers for mental health conditions, such as depression and schizophrenia ~\cite{liu2021relationship,barnett2018relapse}.
When new and different platforms emerge, researchers face the question of {\it whether and how these platforms differ in language use patterns}.
As demonstrated in psycho-linguistic research, language use is a social behavior whose characteristics are closely tied to and adjusted based on social contexts and communication channels~\cite{forgas2012language}. The use of Facebook and SMS might differ because people could selectively share content, engage in different social activities, and choose different communication styles on different platforms~\cite{harari2020sensing}; moreover, SMS contains denser information and
is used by broader populations than Facebook~\cite{liu2021relationship}. However, many sentiment analysis studies focus on the conclusions and model development (see reviews in \citealp{guntuku2017detecting}), and apply models which are pre-trained on one platform (e.g., Facebook) to language from another (e.g., SMS, \citealp{liu2021relationship}), without considering the cross-platform differences. It is important to be aware of the different linguistic characteristics between Facebook language and SMS to properly process and analyze the data, handle the machine learning models, and interpret the results.
The present work aims to understand \textbf{the difference between Facebook and SMS language from the same users}. In this paper, we first compare Facebook and SMS in language use; we then illustrate the drop in performance of Facebook-derived depression models applied to SMS and propose a correction.
\paragraph{Contributions} Our contributions are: 1) we provide evidence that Facebook and SMS language are psycho-linguistically different for the same users;
2) with a focus on depression prediction, we show that a naive application of a Facebook-trained model suffers from accuracy degradation on SMS due to cross-platform differences in language, not demographics;
3) we derive a domain adaptation correction to bridge the linguistic differences between Facebook and SMS, and demonstrate significant improvement in depression-prediction model performance;
4) the implications and generalized impacts of cross-platform language model selection and adaptation are discussed.
\section{Background}
\label{sec:background}
Preliminary work has shown that linguistic model predictions should be platform-aware. For example, \citet{seabrook2018predicting} examined the association between depression and emotion word expressions on both Facebook and Twitter, and found different patterns: instability of negative emotion words predicts depression on Facebook but not on Twitter, whereas the variability of negative emotion words reflects depression severity on Twitter. \citet{jaidka2018facebook} showed the difference between Facebook and Twitter in predicting users' traits. By qualitatively comparing the linguistic and demographic features underlying the differences between Facebook and Twitter, they found that users prefer to talk about family, personal concerns, and emotions on Facebook, but more about ambitions and needs on Twitter.
The variation of language on different platforms may be attributed to users' psychological differences during communication and the anticipated function of each platform. Although no study to our knowledge has compared SMS with Facebook public posts, some work has been done to compare Facebook status updates with direct messages, a private communication form on Facebook that is similar to SMS.
\citet{doi:10.1177/0261927X12456384} observed that sharing positive emotions is associated with self-presentational concerns in Facebook status updates, but not private messages, noting the difference between communication on public and private channels.
\citet{10.1111/jcom.12106} further identified that self-disclosures in Facebook status updates and private messages are associated with different strategic goals and motivations. Status updates are associated with higher odds of social validation, self-expression, and relief, whereas private messages are related to higher odds of relational development, social maintenance, and information sharing. Our focus is to demonstrate the social-psychological differences between Facebook and SMS in language use, and then to provide an example from the language prediction of depression.
\section{Data}
\label{sec:data}
Participants were recruited online via the Qualtrics Panel between Sept 2020 and July 2021 for a larger national survey focused on COVID-19, mental health, and substance use. To qualify, consenting participants must have been 18 years or older, U.S. residents, and Facebook users. Specifically, participants must have posted at least 500 words across their status updates over the lifetime of the account and posted at least 5 posts within the past 180 days, to ensure that they are active users~\cite{eichstaedt2021closed}. 2,796 participants were paid \$30 to finish an initial survey, which consisted of multiple items centered on socio-demographics, physical and mental health (including depression), substance use, and COVID-19. This pool of participants has been used to study loneliness and alcohol use~\cite{bragard2021loneliness} and COVID-related victimization~\cite{fishercovid}.
After completing this survey, participants were invited to install the open-source mobile sensing application AWARE on their mobile phones~\cite{ferreira2015aware,nishiyama2020ios}. This application collects mobile sensor information, such as movement, app usage, and, importantly for the current study, keystroke data. Participants were paid \$4 per day to keep the AWARE app running for at most 30 days. A total of 300 participants completed this phase of the study. We note that the mobile sensor data depends on the phone manufacturer (i.e., iPhone vs. Android) and, in particular, keystroke data is only available for Android users. We collected keystroke data from a total of 192 Android users, out of which 123 wrote at least 500 words within the 30-day study period\footnote{Extensive cleaning was automatically applied (i.e., no human in the loop) to the keystroke data in order to remove any sensitive PII data. See the Supplemental Materials for complete details. In order to fairly compare the Facebook data to the keystroke data, we applied the same cleaning pipeline to the Facebook data.}. We note that while keystroke data is collected across all applications, we only consider the Google, Verizon, and Samsung messaging apps, hereafter referred to as SMS data. Finally, three participants were removed because of their text-based depression estimates (see below) being outliers due to mostly Spanish Facebook status updates. Thus, the final sample consisted of 120 participants who posted at least 500 words across their Facebook status updates, 500 words across their SMS, and answered a standard depression screener (PHQ-9; see below).
\paragraph{Text-Based Depression Estimates} We employ an off-the-shelf text-based depression estimation model, which was trained on Facebook status updates to predict self-reported depression~\cite{schwartz2014towards}. This model was built on roughly 28,000 Facebook users who consented to share their Facebook data and answered the depression facet of neuroticism in the ``Big 5" personality inventory, a 100-item personality questionnaire (the International Personality Item Pool proxy to the
NEO-PI-R~\citep{goldberg1999broad}). This model resulted in prediction accuracy (Pearson \emph{r}) of 0.386. Please see the original paper for full details.
\paragraph{Survey-Based Depression} The Patient Health Questionnaire (PHQ-9) is a 9-item questionnaire developed based on DSM-IV criteria, which has been widely used to assess depression in both clinical and non-clinical settings~\cite{kroenke2001phq}. We utilize this scale to assess the severity of individuals' depression symptoms and as a ``gold standard" measure of depression in our participants.
\begin{table}[!htb]
\centering
\resizebox{\linewidth}{!}{%
\renewcommand{\arraystretch}{1.25} %
\begin{tabular}{lcccccc}
\hline
\multirow{2}{*}{N = 120} & \multicolumn{3}{c}{Words} & \multicolumn{3}{c}{Posts} \\
\cline{2-4}\cline{5-7}
& Median & Mean & SD & Median & Mean & SD \\
\hline
Facebook & 12,800 & 26,652 & 37,924 & 1,279 & 2,193 & 2,599 \\
SMS & 3,607 & 7,881 & 11,693 & 331 & 711 & 961 \\
\hline
\end{tabular}
}
\caption{Posts and word count statistics for each platform.}
\label{table stats}
\end{table}
\section{Methods}
\label{sec:methods}
In order to assess how Facebook and SMS use differs and how these differences drive downstream analyses, we proceed in three parts: (1) examine how language use (as measured through standard dictionary approaches) differs across the two platforms, (2) show that depression estimates derived from a model trained in a single domain are less accurate when applied out of domain, and (3) quantify platform differences and use them to correct the out-of-domain depression estimates.
\paragraph{Task 1: Cross-platform differences} We begin by tokenizing both the Facebook status updates and SMS data, using a tokenizer designed for social media data~\cite{schwartz2017dlatk}. Given the small sample size of the study (N = 120), we do not have sufficient power to explore cross-platform differences in a large feature space. We, therefore, use the
Linguistic Inquiry and Word Count (LIWC) dictionary, which consists of 73 manually curated categories (e.g., both function and content categories such as positive emotions, sadness, and pronouns;~\citealp{pennebaker2015development}). This dictionary has a rich history in psychological sciences with over 8,800 citations as of April 2020~\cite{eichstaedt2021closed} and can, thus, aid in interpreting cross-platform differences. For each of the 120 participants, we separately extract the 73 LIWC categories for both the Facebook and SMS data. Next, to calculate differences in LIWC usage, we compute a dependent \emph{t}-test for paired samples (i.e., one sample, repeated measures) for each category and adjust the overall significance threshold using a Benjamini-Hochberg False Discovery Rate (FDR) correction.
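A minimal sketch of this analysis step, assuming hypothetical per-user LIWC scores (the actual study data cannot be shared): a dependent-samples \emph{t}-test followed by a Benjamini-Hochberg correction, implemented here with the standard library for transparency. In practice, library routines such as \texttt{scipy.stats.ttest\_rel} and \texttt{statsmodels}' \texttt{multipletests} serve the same purpose.

```python
import math
import random
from statistics import mean, stdev

def paired_t(x, y):
    """Dependent-samples t statistic for paired observations x, y."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: return reject decisions in the input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * alpha / m:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

# Hypothetical per-user scores for one LIWC category on the two platforms.
random.seed(1)
fb = [random.gauss(5.0, 1.0) for _ in range(120)]
sms = [f - random.gauss(0.8, 0.5) for f in fb]  # SMS systematically lower here
t_stat = paired_t(fb, sms)
```

In the study, this computation is repeated for all 73 LIWC categories and the resulting p-values are passed jointly through the BH step.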
\paragraph{Task 2: In vs. Out of Domain Depression Estimates} Next, for each participant we estimate depression from both Facebook and SMS text using a preexisting text-based depression model described in the previous section. We then correlated the depression estimates with the PHQ-9 depression screener survey responses. To quantify which features drive the depression estimates in both domains we examine feature importance $i$, which is defined as:
\begin{equation}
i(f) = w_f\big(freq_{FB}(f) - freq_{SMS}(f)\big).
\label{eq feat import}
\end{equation}
Here $w_f$ is the weight of the feature $f$ in the depression model, $freq_{*}(f)$ is the frequency of feature $f$ in either the Facebook (FB) or SMS domain.
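The importance measure is a direct per-feature computation; the sketch below translates Equation \ref{eq feat import} literally, with hypothetical word weights and relative frequencies rather than values from the fitted model.

```python
def feature_importance(weights, freq_fb, freq_sms):
    """i(f) = w_f * (freq_FB(f) - freq_SMS(f)) for every feature f."""
    return {f: w * (freq_fb.get(f, 0.0) - freq_sms.get(f, 0.0))
            for f, w in weights.items()}

# Hypothetical model weights and relative frequencies for three words.
weights = {"i'll": 0.4, "family": 0.7, "happy": -0.5}
freq_fb = {"i'll": 0.001, "family": 0.004, "happy": 0.003}
freq_sms = {"i'll": 0.006, "family": 0.001, "happy": 0.002}
imp = feature_importance(weights, freq_fb, freq_sms)
```

A positively weighted word that is more frequent on Facebook (here ``family'') gets a positive importance, while one that is more frequent in SMS (here ``i'll'') gets a negative one, matching the four panels of Figure \ref{fig:bar plots}.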
\paragraph{Task 3: Domain Adaptation Correction} Here we show that a simple domain adaptation algorithm can be applied to the SMS data in order to increase the predictive accuracy of the depression model. To do this, we multiply each participant's SMS word frequency by a ratio of the global mean Facebook to global mean SMS frequency. We note that this method is informed by the feature importance measure defined above in Equation \ref{eq feat import}: if we apply this correction factor, the importance measure reduces to 0:
\begin{equation}
w_f\big(freq_{FB}(f) - \frac{freq_{FB}(f)}{freq_{SMS}(f)}freq_{SMS}(f)\big) = w_f\times0.
\nonumber
\end{equation}
Since we use a ratio of word frequencies, we only adjust those words which are not rare, since rare word frequencies are noisy. As such, we only adjust words used by at least 5 users in each text data set (i.e., Facebook and SMS). This also stops single users from dominating the frequency measures.
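A sketch of the correction under stated assumptions: all identifiers and example statistics below are hypothetical, and the per-word user counts implement the $\geq 5$-user rarity filter described above.

```python
def adapt_sms_frequencies(sms_freq, mean_fb, mean_sms, user_counts, min_users=5):
    """Rescale a user's SMS word frequencies by the ratio of global mean
    Facebook frequency to global mean SMS frequency. Words used by fewer
    than `min_users` users on either platform are left unadjusted."""
    adapted = {}
    for word, freq in sms_freq.items():
        fb_users, sms_users = user_counts.get(word, (0, 0))
        if (fb_users >= min_users and sms_users >= min_users
                and mean_sms.get(word, 0.0) > 0.0):
            adapted[word] = freq * mean_fb.get(word, 0.0) / mean_sms[word]
        else:
            adapted[word] = freq
    return adapted

# Hypothetical global statistics: (fb_user_count, sms_user_count) per word.
user_counts = {"family": (80, 40), "lol": (30, 90), "zyzzyva": (1, 2)}
mean_fb = {"family": 0.004, "lol": 0.001, "zyzzyva": 0.0001}
mean_sms = {"family": 0.001, "lol": 0.005, "zyzzyva": 0.0001}
user_sms = {"family": 0.002, "lol": 0.004, "zyzzyva": 0.0002}
corrected = adapt_sms_frequencies(user_sms, mean_fb, mean_sms, user_counts)
```

The corrected frequencies are then fed to the Facebook-trained depression model in place of the raw SMS frequencies; the rare word is passed through unchanged.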
\section{Results}
\label{sec:results}
\begin{figure}[!htb]
\centering
\includegraphics[width=1\columnwidth]{figures/barchart_singlecol.pdf}
\caption{Feature importance results, as defined by the product of the depression model word weight and the difference in Facebook vs SMS word usage frequency. The top row (red bars; A and B) shows positively weighted words in the depression model, while the bottom row (blue bars; C and D) shows negatively weighted words. The left column (A and C) contains words more frequent on Facebook (i.e., positive frequency difference), while the right column (B and D) contains words more frequent on SMS (i.e., negative frequency difference).}
\label{fig:bar plots}
\end{figure}
\paragraph{Task 1: Cross-platform differences}
Table \ref{table stats} shows usage differences between Facebook and SMS.
Within-user psycho-linguistic differences between Facebook and SMS were assessed using paired \textit{t}-tests for each LIWC category extracted from both platforms. As shown in Table~\ref{table LIWC}, people prefer to talk about personal concerns (leisure activities, religion) and drives (power, achievement), describe what they see, and use certain grammatical features (common adjectives, quantifiers) on Facebook. In SMS, people use more differentiation and discrepancy words, personal pronouns (I, you), informal language (assent), more auxiliary and common verbs, and more present and future temporal focus.
\paragraph{Task 2: In vs. Out of Domain Depression Estimates} We see that in-domain depression estimates from Facebook data correlate (Pearson \emph{r}) with PHQ-9 at 0.38, which is an equivalent prediction accuracy to that listed in the original paper from which the depression model was derived. When correlating SMS-based depression estimates (i.e., out-of-domain) with PHQ-9 scores, we see a drop in prediction accuracy (Pearson \emph{r} = 0.29), showing that the model does not work as well when applied to out-of-domain data. Results are listed in Table \ref{tab:depression-corr}.
Linguistic features driving the discrepancies between Facebook- and SMS-based depression predictions are shown in Figure \ref{fig:bar plots}, which lists the top-weighted features behind the differences in positive and negative depression estimates between Facebook and SMS. For example, style features such as contractions (``i'll", ``i'm", ``they're", ``she's", ``haven't"), which are more frequent in SMS, and content features (``family", ``sick", ``chicago", ``anniversary", ``year"), which are more frequent on Facebook, both contribute to the discrepancies in the positively weighted depression estimates. As shown in Figure \ref{fig:bar plots}, we argue that domain-specific adaptation corrections based on these features are needed.
\paragraph{Task 3: Domain Adaptation Correction} We then performed the domain adaptation correction on the SMS data for predicting depression. Results are provided in Table \ref{tab:depression-corr}. We see an improvement in the correlation with PHQ-9 scores after the correction, suggesting the necessity of domain adaptation in cross-platform language analysis.
\begin{table}[tb]
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{lclc}
\toprule
LIWC Category & \emph{t} & LIWC Category & \emph{t} \\ \hline
Prepositions & 13.71 & Common Adverbs & -9.85 \\
Leisure & 12.27 & Discrepancy & -10.21 \\
Common adjectives & 11.75 & Future focus & -11.45 \\
Achievement & 8.89 & 2nd person pronouns & -11.58 \\
Certainty & 8.88 & Assent & -12.12 \\
Power & 8.80 & Total pronouns & -14.43 \\
See & 7.26 & Present focus & -14.56 \\
Time & 7.20 & Personal pronouns & -15.68 \\
Biological Processes & 6.80 & Auxiliary verbs & -17.58\\
Perception & 5.82 & Common verbs & -20.16 \\
\bottomrule
\end{tabular}
}
\caption{Paired \emph{t}-tests results of the LIWC categories which differ across Facebook and SMS. The left hand side (positive \emph{t} values) are more frequent in Facebook, the right hand side (negative \emph{t} values) are more frequent in SMS. All results are statistically significant with $p < 0.001$ after Benjamini-Hochberg FDR correction.}
\label{table LIWC}
\end{table}
\section{Conclusions}
\label{sec:conclusions}
To the best of our knowledge, this is the first-to-date study investigating Facebook vs. SMS language use differences. We show that, \textit{for the same users}: (1) Facebook and SMS contain different linguistic features, (2) a Facebook-derived language model of depression performs worse on SMS, and (3) corrections based on word use frequencies improve Facebook-derived depression estimates on SMS.
We found that the same user uses Facebook and SMS for different purposes. In line with psychology research, Facebook usage links to the need to belong and self-presentation~\cite{nadkarni2012people}, leading to more content sharing and opinion expression. SMS, in contrast, is used for playful forwarding, for phatic communication that maintains rather than impacts social relationships via pointless texts, and for intimate and informal discussions~\cite{fibaek2016s}. Our findings on LIWC categories in Task 1 and the feature importance results in Task 2 confirm these variations, with more content-related features on Facebook and more style-related features in SMS. By using data from the same users, we showed that discrepancies in Facebook vs. SMS language and model prediction accuracy are not due to demographic differences but to varied language patterns.
Our findings are important for future language analysis research. Facebook and SMS contain significantly different linguistic features reflecting social and psychological attributes. Future studies should explore more downstream applications along this line. Researchers in computational social science should be aware of such differences between Facebook and SMS in model selection and adaptation. Domain-specific corrections based on user preference in language are needed for prediction accuracy.
\begin{table}[]
\centering
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lcc}
\toprule
& No Correction & Domain Adaptation \\ \hline
Facebook & 0.38 & - \\
SMS & 0.29 & 0.32 \\
\hline
\end{tabular}
\caption{Pearson correlations between text-based depression estimates and survey-based PHQ-9 scores, before and after domain adaptation. Depression model was trained on Facebook data, so Domain Adaptation was only applied to the SMS data.}
\label{tab:depression-corr}
\end{table}
\paragraph{Limitation} One limitation is the small sample size, which limits generalization to a broader context. However, all our comparisons are within-person, which strengthens the power of the analysis. Another limitation comes from the unbalanced word and post counts from the two platforms, with Facebook language containing more words and posts in total than SMS texts (due to the data being collected over a longer time span). Instead of matching the samples by word or post counts, we choose to include all possible texts from both platforms to maximize the sample size, noting that a minimum of 500 words per text-domain is needed for inclusion. To ensure our cross-platform differences are not caused by the unbalanced word counts, we further create a subset of Facebook language to match the number of words from both platforms and generate new depression estimates. Depression estimates from the newly matched Facebook language correlate with the original Facebook estimates at 0.86 and with PHQ-9 at 0.36, indicating that our findings are driven by cross-platform differences per se, not by the word count difference.
\paragraph{Ethics Statement} This study involves human subjects and was approved by an Institutional Review Board (IRB). The methods and types of data used in this study open a number of ethical issues. First, social media, keystroke, and mobile sensing data are highly sensitive and can contain PII. We took extreme care to store, clean, and analyze the data (see Supplemental Materials for exact data cleaning methods). As such, data sharing is unavailable for this study. We also estimate sensitive attributes like depression using social media data, SMS data, and machine learning methods. This can be problematic for many reasons, including biases in training data and misclassifications in downstream tasks which can further marginalize vulnerable populations, among other issues. Due to the sensitive nature of this study, data cannot be shared publicly.
\section{Keystroke Data and Text Cleaning}
The AWARE mobile sensing app logs each non-password keystroke on Android phones across all apps (e.g., text messages and search engine entries). These logs are stored one character at a time and include modifications such as deletions and autocorrect. For example, if a user searched ``Taylor Swift'' in a search engine, AWARE would log separate database entries for ``T'', ``Ta'', ``Tal'', etc. If the same user misspelled ``Talyor'' while typing, AWARE would also log the misspelling and the delete key; for example ``T'', ``Ta'', ``Tai'', ``Ta'' (i.e., a \emph{backspace} occurred), ``Tal'', etc. This presents a unique challenge when dealing with possibly sensitive information.
While the main goal of cleaning Personally Identifiable Information (PII) was to enable non-trusted sources to access the collected data by removing PII, a secondary goal was to replace the PII with a tag indicating what kind of data was removed, to allow deeper analysis. Basic cleaning of each string was done in several stages. The first was to remove PII data that was structurally identified by the device itself as either a password field or a phone number. The second stage was to use \emph{spaCy's} Named Entity Recognizer (NER) and replace all flagged entities with their category label. The third stage was to check against a list of common data formats with regular expressions, using a modified version of CommonRegex\footnote{\url{https://github.com/madisonmay/CommonRegex}}. We note that these category labels are ignored by our tokenizer and not used in the downstream analysis in the present study.
Cleaning keystroke data, which changes one character at a time, however, poses an extra challenge over standard complete-string cleaning: detection of partial PII data that does not yet match a known form (but eventually will) is required. We accomplished this by rolling future data back through the previous data in two stages. In the first stage, each time the completion of a new token at the end of a string was detected, we applied the replacement information, or lack thereof, back through the previous strings until the beginning of that token (there may be incomplete tokens that match NER entities but that do not need to be replaced given subsequent characters). This allowed us to clean data that might be removed via deletion before the entry is complete. In the second stage, once the whole entry was complete, we rolled all of the change data back through all of the incomplete string items for that entry. This involved overlaying the replacement item information for individual strings that was wholly contained by the completed entry information, or, where the replacement data fields only overlap, merging the possible replacement item information together to create a compound tag. This process was executed automatically on the study data, with no human intervention, so as to minimize the risk of leaking sensitive information. Finally, we note that while we collected full keystroke data, only the final text data sent via SMS was analyzed (i.e., no partial text messages were considered).
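A toy, single-entry version of the roll-back idea can be sketched as follows. This is a simplification of the second stage only: the full pipeline also handles token-level roll-back, multiple overlapping replacement spans, and compound-tag merging, none of which is shown here, and the phone pattern is an illustrative stand-in:

```python
import re

PHONE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def roll_back_clean(increments):
    """Toy second-stage roll-back: once the full entry is known, overlay its
    PII replacements onto every partial (incremental) string that preceded it."""
    final = increments[-1]
    cleaned_final = PHONE.sub("<PHONE>", final)
    out = []
    for partial in increments:
        s = partial
        for m in PHONE.finditer(final):
            if len(partial) > m.start():
                # The partial string already contains part of the eventual PII:
                # truncate at the match start and tag the in-progress match.
                s = s[: m.start()] + "<PHONE>"
        out.append(s)
    out[-1] = cleaned_final
    return out

logs = ["call 5", "call 55", "call 555-867-5309 now"]
print(roll_back_clean(logs))
# -> ['call <PHONE>', 'call <PHONE>', 'call <PHONE> now']
```

This illustrates why partial strings must be cleaned retroactively: a prefix such as ``call 5'' matches no PII pattern on its own, yet leaks the beginning of the phone number once the entry completes.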
\section{Acknowledgments}
This study was funded by the Intramural Research Program of the National Institutes of Health (NIH), National Institute on Drug Abuse (NIDA). Dr. Brenda Curtis is the corresponding author of the paper. Dr. Brenda Curtis and Dr. Lyle Ungar share the senior authorship.
\bibliographystyle{unsrt}
\end{document}
\section{Introduction}
Object detection in aerial images is one of the most fundamental yet challenging research topics in the community of computer vision~\cite{zhang2020feature, zhang2019positional}. Like many image recognition techniques~\cite{ye2021multiview, fu2020learning, ye2019nonpeaked}, it is the task of recognizing each object category with a precise bounding box, and it is the foundation of several potential application scenarios, e.g., forest disturbance dynamic monitoring~\cite{yang2012landsat}, land resource management~\cite{mou2018learning}, and urban environmental assessment~\cite{liang2010assessing}.
Recently, deep learning based object detection has made dramatic advances with the significant development of deep Convolutional Neural Networks (CNNs)~\cite{ding2019learning, liang2021learning}.
\input{fig1}
Since the geoclimatic objects in aerial images exhibit large density variations, intricate backgrounds, and arbitrary orientations, it is challenging for standard deep CNNs to yield accurate category and location predictions, especially for overlapping and vague objects~\cite{zhang2021self,zhang2020causal}. This phenomenon can be explained by the information bottleneck principle~\cite{saxe2019information}, which explores the information flow between individual layers. The information bottleneck compresses the input information as much as possible, which may cause the transmitted features to lose some task-related information, especially non-discriminative information that is critical to localization. As a result, the trained classification network pays close attention only to small discriminative parts of the target object, which is a fatal flaw for positioning.
To this end, a great number of state-of-the-art approaches have been proposed, devoted to label assignment strategies and adaptive anchor selection. \textbf{For the first category} (e.g., Dynamic R-CNN~\cite{zhang2020dynamic}, CSL~\cite{yang2020arbitrary}), the training process is explicitly optimized by dynamically fine-tuning the label assignment standard (i.e., the IoU threshold) and the parameters of the loss function to focus on high-quality samples.
\textbf{For the second category} (e.g., DAL~\cite{ming2020dynamic}, Aabo~\cite{ma2020aabo}),
the most classic among them is Aabo~\cite{ma2020aabo}. It proposes a new hyperparameter optimization method to adjust the anchor configurations automatically, tailoring more suitable anchors for the specified data sets.
Researchers attach importance to directly developing a new head (e.g., R$^3$Det~\cite{yang2019r3det}, S$^2$A-Net~\cite{han2021align}) or backbone (e.g., DetNet~\cite{li2018detnet}, SpineNet~\cite{du2020spinenet}) network, while ignoring that the neck network plays a considerable role in object detection. As illustrated in Figure~\ref{fig1} (a), most of the above models utilize the conventional neck network, which is generally based on a coarse accumulation of multiple convolutions.
To address this issue, in this letter, we present a Global Semantic Network (GSNet) and a Fusion Refinement Module (FRM), which are based on the Feature Pyramid Network (FPN)~\cite{lin2017feature}. Concretely, as shown in Figure~\ref{fig1} (b), GSNet can obtain affluent backbone feature cues via a global bilateral scanning operation, thus making the model more suitable for dense prediction tasks. Then, FRM is a light yet active architecture which increases the model's representational power by propagating semantically abstract features. To demonstrate their superiority, experiments are implemented on two baselines (i.e., Faster R-CNN~\cite{ren2015faster} and RetinaNet~\cite{lin2017focal}) and two commonly used data sets (i.e., DOTA~\cite{xia2018dota} and HRSC2016~\cite{liu2017high}) for oriented object detection. Results show that our GSNet + FRM model achieves the top performance (i.e., 79.37$\%$ and 74.49$\%$ mAP on DOTA, 90.50$\%$ and 90.47$\%$ on HRSC2016) with high efficiency.
Our main contributions are summarized as follows: 1) we reveal that the information bottleneck causes a trained classification network to concentrate merely on small discriminative parts of the object; 2) based on the above observation, we propose GSNet and FRM to reduce the information bottleneck by reconstructing the neck network in a simple manner; 3) we report competitive 79.37$\%$ and 90.50$\%$ mAP on two challenging data sets for the OBB task.
\input{fig2}
\section{Background}
This paper briefly introduces the tasks and problems of remote sensing image target detection.
\section{Our Approach}
\label{Ⅱ}
\subsection{Preliminaries}
Training a classification network requires extracting maximally compressed features of the input while preserving as much information as possible about the target object. The information bottleneck, however, compresses the information contained in the input image, so that only a subset of the task-relevant features is passed to the output; in particular, most of the non-discriminative information is ignored. Therefore, we seek to reduce the information bottleneck to resolve the contradiction between the two tasks of object detection: classification and localization.
Our overall framework is shown in Figure~\ref{fig2}. Concretely, the pre-trained ResNet~\cite{he2016deep} is adopted as the backbone network following~\cite{ding2019learning}. Then, we construct the enhanced feature pyramid, which is based on GSNet (in Figure~\ref{fig2} (b)) and FRM (in Figure~\ref{fig2} (c)). At last, the final prediction results are output by the head network.
\subsection{Global Semantic Network (GSNet)}
The classification task requires invariance to translation and scale; that is, the learned features should not change with the position and shape of the target. Localization, in contrast, is a position-sensitive task, which requires equivariance to translation and scale. When performing both the classification and localization tasks, we consider the following two points.
First of all, fully connected layers are spatially sensitive and thus not suitable for localization~\cite{wu2020rethinking}. Convolutional layers, by contrast, share parameters, and features obtained by convolutions with the same kernel have stronger spatial correlation. The network is therefore fully convolutional~\cite{long2015fully}, with no fully connected or global pooling layers.
Secondly, in general, the classification network is able to identify a few small discriminative parts with high response. A large receptive field allows low-response regions to be recognized by sensing the high-response context around them. Moreover, a large kernel size enables the classification model to form a densely connected structure that copes with different transformations. Therefore, the network uses convolution kernels that are as large as possible, or even global convolutions, to significantly expand the receptive field.
Motivated by the above observations, we present the Global Semantic Network (GSNet) in Figure~\ref{fig2} (b) to explicitly alleviate the inconsistency between classification and regression. GSNet employs the combined convolutions $1 \times k$ + $k \times 1$ and $k \times 1$ + $1 \times k$ rather than large $k \times k$ kernel convolutions directly. Meanwhile, only linear operations are applied between the convolution layers. These symmetric, depth-separable combined convolutions not only incorporate context but also control the number of parameters and the computational cost, which makes them more practical. The specific operation can be formulated as:
\begin{equation}
\mathrm{Y}=\operatorname{Conv1D}\left(\operatorname{Conv1D}(\mathrm{X})^{T}\right)+\operatorname{Conv1D}\left(\operatorname{Conv1D}(\mathrm{X})\right)^{T}
\end{equation}
where the input $\mathrm{X}$ denotes the feature maps extracted from the backbone feature pyramid.
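To see why the paired $1 \times k$ and $k \times 1$ convolutions cover a full $k \times k$ receptive field at much lower cost, note that composing them is equivalent to convolving with a rank-1 $k \times k$ kernel. A small numerical check (random kernels serve as stand-ins for learned weights; this is an illustration, not GSNet's implementation):

```python
import numpy as np

def conv2d_valid(x, k):
    # Naive 'valid' 2D cross-correlation of a single-channel map x with kernel k.
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 16))
row = rng.standard_normal((1, 5))    # a 1 x k kernel (k = 5)
col = rng.standard_normal((5, 1))    # a k x 1 kernel

sep = conv2d_valid(conv2d_valid(x, row), col)   # 1 x k followed by k x 1
full = conv2d_valid(x, col @ row)               # equivalent rank-1 5 x 5 kernel
print(np.allclose(sep, full))  # True
```

The separable pair needs $2k$ weights per channel pair instead of $k^2$, which is where the parameter and computation savings come from for large $k$.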
Furthermore, since the localization maps obtained by the classification network cannot precisely represent the boundary of the target object, we refine the bounding box by modeling the boundary alignment as a residual structure to increase accuracy. GSNet is introduced into the feature pyramid structure, closely linked to the feature maps and trained in an end-to-end manner, which effectively improves the regression ability of the model. Formally,
\begin{equation}
\text { GSNet }(Y)=Y+R(Y)+X
\end{equation}
\begin{equation}
R(Y)=\operatorname{Conv2D}\left(\sigma\left(\operatorname{Conv2D}(Y)\right)\right)
\end{equation}
where $R(\cdot)$ is the residual branch, and $\sigma$ is the ReLU activation function.
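A compact sketch of the residual refinement above, with $1 \times 1$ convolutions (per-pixel linear maps) standing in for the generic Conv2D layers; all shapes and weights here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def relu(a):
    return np.maximum(a, 0.0)

rng = np.random.default_rng(1)
C, H, W = 64, 16, 16
X = rng.standard_normal((C, H, W))   # backbone input feature map
Y = rng.standard_normal((C, H, W))   # output of the combined convolutions (Eq. 1)

# Stand-in 1x1 convolutions for the two Conv2D layers of the residual branch R(.)
W1 = rng.standard_normal((C, C)) * 0.05
W2 = rng.standard_normal((C, C)) * 0.05

R = np.einsum("oc,chw->ohw", W2, relu(np.einsum("oc,chw->ohw", W1, Y)))
out = Y + R + X                      # GSNet(Y) = Y + R(Y) + X  (Eq. 2)
print(out.shape)  # (64, 16, 16)
```

The two skip terms keep both the combined-convolution output $Y$ and the original backbone features $X$ in the output, so the residual branch only has to learn the boundary correction.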
\subsection{Fusion Refinement Module (FRM)}
We propose a novel Fusion Refinement Module (FRM), shown in Figure~\ref{fig2} (c). Since the semantic information of different stages is dissimilar, direct addition for cross-scale fusion causes aliasing problems, which may confuse the localization and classification tasks.
Consequently, we use concatenation to replace the original addition in the feature pyramid. Compared with addition, concatenation retains more feature information, especially channel information, but increases the number of model parameters and the computation. To this end, $1 \times 1$ convolutions are then adopted to reduce the dimensionality.
Note that a residual branch is introduced to inject different spatial context information. There are three feature maps for concatenation, which come from the backbone network, GSNet at the same stage, and GSNet at the next stage after up-sampling, respectively. The residual structure superimposes deep features on top of the original features, realizing the fusion of global and local information.
After that, we apply stacked convolutional layers to remove the aliasing effect caused by the interpolation, reducing information loss across channels and enhancing the feature representational ability. $1 \times 1$ convolutions are adopted at intervals as dimensionality reduction modules to reduce convolution bottlenecks. The enhanced feature pyramid contains more high-level semantic information. Formally, it can be expressed as
\begin{equation}\label{eq4}
\mathrm{Z}=\mathrm{f}^{1 \times 1}\left(\mathrm{f}^{3 \times 3}\left(\mathrm{f}^{1 \times 1}\left(\mathrm{f}^{3 \times 3}\left(\mathrm{f}^{1 \times 1}\left(\left[\mathrm{X}, \mathrm{Y}_{3}, \mathrm{Y}_{4}\right]\right)\right)\right)\right)\right)
\end{equation}
where $[\cdot]$ denotes channel-wise concatenation, and $\mathrm{X}$, $\mathrm{Y}_{3}$, and $\mathrm{Y}_{4}$ are feature maps from the backbone network and from the third and fourth stages of the feature pyramid, respectively. $f^{1 \times 1}$ and $f^{3 \times 3}$ denote standard $1 \times 1$ and $3 \times 3$ convolutions.
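The channel bookkeeping of Eq.~\ref{eq4} can be sketched as follows, with a $1 \times 1$ convolution implemented as a per-pixel linear map over channels; the $3 \times 3$ convolutions and the repeated reduction steps are omitted, and all shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 256, 32, 32
# Three inputs to FRM: backbone map X, same-stage GSNet output Y3, and
# (already up-sampled) next-stage GSNet output Y4.
X, Y3, Y4 = (rng.standard_normal((C, H, W)) for _ in range(3))

Z = np.concatenate([X, Y3, Y4], axis=0)       # channel-wise concat -> 3C channels
W1 = rng.standard_normal((C, 3 * C)) * 0.01   # a 1x1 conv is a per-pixel linear map
Z = np.einsum("oc,chw->ohw", W1, Z)           # reduce back to C channels
print(Z.shape)  # (256, 32, 32)
```

This makes explicit why the $1 \times 1$ reductions are needed: concatenation triples the channel count, and the per-pixel linear map restores the original dimensionality before the heavier $3 \times 3$ convolutions.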
\section{Experiments}
\subsection{Data sets and Evaluation Metric}
{DOTA}~\cite{xia2018dota} contains 2,806 aerial images with 15 classes, whose size varies from 800 × 800 to 4000 × 4000.
{HRSC2016}~\cite{liu2017high} contains 1061 images with high resolution, whose size ranges from 300 × 300 to 1500 × 900.
For both data sets, we randomly take 3 / 6 for training, 1 / 6 and 2 / 6 for validation and testing, respectively. All images are cropped into 1024 × 1024 patches with a stride of 824.
We use mean average precision (mAP) as the primary metric. In addition, two commonly used metrics are taken into consideration to verify model efficiency: giga floating-point operations (GFLOPs) and the number of model parameters (Params). Our prediction results are submitted to the official DOTA evaluation server to obtain the final mAP and the AP of each class.
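For reference, a common all-point interpolated AP recipe (VOC-style; not necessarily the DOTA server's exact implementation) looks like:

```python
def average_precision(recalls, precisions):
    """All-point interpolated AP: area under the precision-recall curve
    after making precision monotonically non-increasing in recall."""
    mrec = [0.0] + list(recalls) + [1.0]
    mpre = [0.0] + list(precisions) + [0.0]
    # Interpolate: each precision becomes the max over all higher recalls.
    for i in range(len(mpre) - 2, -1, -1):
        mpre[i] = max(mpre[i], mpre[i + 1])
    # Sum precision x recall-step over the curve.
    ap = 0.0
    for i in range(1, len(mrec)):
        ap += (mrec[i] - mrec[i - 1]) * mpre[i]
    return ap

print(average_precision([0.5, 1.0], [1.0, 0.5]))  # 0.75
```

mAP is then the mean of this per-class AP over all object categories.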
\subsection{Experimental Setup}
\myparagraph{Baselines.}
We deploy the standard two-stage detector Faster R-CNN~\cite{ren2015faster} and the one-stage detector RetinaNet~\cite{lin2017focal} as baselines. ResNet101 and ResNet50 (both pre-trained on ImageNet~\cite{russakovsky2015imagenet}) are adopted as the respective backbone networks, and FPN is utilized to produce the enhanced neck network.
As in~\cite{ren2015faster,lin2017focal}, the rotated heads are developed from RoI-Transformer~\cite{ding2019learning} and RotatedRetinaNet~\cite{lin2017focal}, respectively. For a fair comparison, all settings strictly follow those reported in the official codes.
\myparagraph{Training Details.}
During training, we use standard SGD~\cite{krizhevsky2012imagenet} as the optimizer, where the learning rate is initialized to 0.005 and 0.0025 for the two baselines, respectively, and the weight decay and momentum are set to 0.0001 and 0.9, respectively. Models are trained on DOTA and HRSC2016 for 12 epochs on an RTX 3060 GPU with the batch size set to 2.
\input{tab1}
\subsection{Ablation Study}
Our ablation studies aim to verify the effectiveness and efficiency of the proposed modules on different baselines and data sets. For this purpose, we conduct a series of experiments and show some visual comparisons.
\myparagraph{Effectiveness of the proposed modules.}
We implement each module and their combination on the DOTA data set. Table~\ref{tab1} shows the results of the comparison. Specifically, we take Faster R-CNN based on ResNet101 FPN as the baseline, which is applied to rotational regression tasks. It is observed that GSNet and FRM improve bounding box mAP by 3.52$\%$ and 5.89$\%$, respectively. Combining GSNet and FRM, our method achieves 79.37$\%$ mAP, which is about 6.28$\%$ higher than the baseline. As for model efficiency, it is evident that when GSNet is deployed, there is an increment of only 0.49 M Params and 4.79 GFLOPs. This proves that the symmetric combined convolutions in GSNet effectively control the model parameters and computational cost.
\myparagraph{Effectiveness on different baselines.}
Table~\ref{tab1} shows the results of our modules deployed on two baselines (i.e., the two-stage detector Faster R-CNN and the one-stage detector RetinaNet) on DOTA. For RetinaNet, comparing row 9 to row 6, we observe that the proposed modules bring remarkable performance enhancements (i.e., 2.82$\%$ mAP). This is because our GSNet and FRM encourage each layer to preserve more features so as to reduce the information bottleneck. This phenomenon is consistent across the HRSC2016 data set.
As mentioned under Eq.~\ref{eq4}, the main source of computational overhead is the additional convolutional layers introduced in constructing FRM.
\input{fig3}
\input{tab3}
\input{tab4}
\myparagraph{Visualizations.}
Figure~\ref{fig3} shows some visual comparisons on DOTA between the baseline (blue boxes) and Faster R-CNN + Ours (red boxes). Faster R-CNN + Ours refers to our proposed network based on ResNet101 FPN following~\cite{ding2019learning}. It can be seen intuitively that our proposed methods yield an obvious accuracy improvement in object location and boundaries, especially for large objects such as roundabouts, harbors, and planes. From the last two rows, we clearly observe that they also enhance the recall of small targets.
\subsection{Comparisons with State-of-the-art Methods}
\myparagraph{Results on DOTA.}
The existing state-of-the-art methods are mainly divided into two-stage methods and one-stage methods. As shown in Table~\ref{tab3}, compared with the previous best results of 76.81$\%$ by SCRDet++~\cite{yang2020scrdet++} (i.e., a two-stage method) and 73.50$\%$ by CFC-Net~\cite{ming2021cfc} (i.e., a one-stage method), our GSNet + FRM model ranks first, with improvements of 2.56$\%$ and 0.99$\%$ mAP, respectively. Concretely, some hard categories (e.g., Ship, Large vehicle, Harbor, and Helicopter) show notable mAP increments. These results indicate that our model further enhances the feature representation capabilities.
\myparagraph{Results on HRSC2016.}
The result comparisons on the test set of HRSC2016 are shown in Table~\ref{tab3}. Our models markedly prevail, achieving the top performance (i.e., 90.50$\%$ and 90.47$\%$ mAP) among all the state-of-the-art methods, surpassing the previous best model by 0.33$\%$ and 0.30$\%$, respectively, with fewer anchors. This indicates that strong performance can be achieved without a large number of heuristically defined anchors.
\section{Conclusion}
In this letter, we first analyze the existing problems in classification networks through the theory of the information bottleneck. Then, we propose a simple yet valid Global Semantic Network and Fusion Refinement Module for reducing the information bottleneck on aerial images. Extensive experiments on two data sets confirm the superiority of our GSNet + FRM model. In the future, we will consider adapting our modules to various computer vision tasks (e.g., semantic segmentation).
\section{Introduction}
Object detection in aerial images is one of the most fundamental yet challenging research topics in the community of computer vision~\cite{zhang2020feature, zhang2019positional}. Like many image recognition techniques~\cite{ye2021multiview, fu2020learning, ye2019nonpeaked}, it is the task of recognizing each object category with a precise bounding box, which is the foundation of potential application scenarios, e.g., forest disturbance dynamic monitoring~\cite{yang2012landsat}, land resource management~\cite{mou2018learning}, and urban environmental assessment~\cite{liang2010assessing}.
Recently, deep learning based object detection has made dramatic advances with the significant development of deep Convolutional Neural Networks (CNNs)~\cite{ding2019learning, liang2021learning}.
\input{fig1}
Since the geoclimatic objects in aerial images have the characteristics of large density variations, intricate background, and arbitrary orientation, it is challenging for standard deep CNNs to yield accurate category and location prediction results, especially for some overlapping and vague ones~\cite{zhang2021self,zhang2020causal}. This phenomenon can be explained by the information bottleneck principle~\cite{saxe2019information}, which explores the information flow between individual layers. The information bottleneck compresses the input information as much as possible, which may cause the transmitted features to lose some task-related information, especially non-discriminative information that is critical to localization. As a result, the trained classification network pays close attention only to small discriminative parts of the target object, which is a fatal flaw for positioning.
To this end, a great deal of state-of-the-art approaches have been proposed, which are devoted to introducing label assignment strategies and adaptive anchor selection. \textbf{For the first category} (e.g., Dynamic R-CNN~\cite{zhang2020dynamic}, CSL~\cite{yang2020arbitrary}), the training process is explicitly optimized by fine-tuning the label assignment standard (i.e., the IoU threshold) and the parameters of the loss function dynamically to focus on high-quality samples.
\textbf{For the second category} (e.g., DAL~\cite{ming2020dynamic}, Aabo~\cite{ma2020aabo}),
the most classic among them is Aabo~\cite{ma2020aabo}. It proposes a new hyperparameter optimization method to adjust the anchor configurations automatically, tailoring more suitable anchors for the specified data sets.
Researchers attach importance to directly developing a new head (e.g., R$^3$Det~\cite{yang2019r3det}, S$^2$A-Net~\cite{han2021align}) or backbone (e.g., DetNet~\cite{li2018detnet}, SpineNet~\cite{du2020spinenet}) network, while ignoring that the neck network plays a considerable role in object detection. As illustrated in Figure~\ref{fig1} (a), most of the above models utilize the conventional neck network, which is generally based on the coarse accumulation of multiple convolutions.
To address this issue, in this letter, we present a Global Semantic Network (GSNet) and Fusion Refinement Module (FRM), which are based on the Feature Pyramid Network (FPN)~\cite{lin2017feature}. Concretely, as shown in Figure~\ref{fig1} (b), GSNet can obtain affluent backbone feature cues via a global bilateral scanning operation, thus making the model more suitable for dense prediction tasks. Then, FRM is a lightweight yet effective architecture which increases the model's representational power by propagating semantically abstract features. To demonstrate their superiority, experiments are implemented on two baselines (i.e., Faster R-CNN~\cite{ren2015faster} and RetinaNet~\cite{lin2017focal}) and two commonly used data sets (i.e., DOTA~\cite{xia2018dota} and HRSC2016~\cite{liu2017high}) for oriented object detection. Results show that our GSNet + FRM model achieves the top performance (i.e., 79.37$\%$ and 74.49$\%$ mAP on DOTA, 90.50$\%$ and 90.47$\%$ on HRSC2016) with high efficiency.
Our main contributions are summed up as follows: 1) we reveal that the information bottleneck causes a trained classification network to concentrate merely on small discriminative parts of the object; 2) based on this observation, we propose GSNet and FRM to reduce the information bottleneck by reconstructing the neck network in a simple manner; 3) we report competitive 79.37$\%$ and 90.50$\%$ mAP on two challenging data sets for the OBB task.
\input{fig2}
\section{Background}
This section briefly introduces the task of object detection in remote sensing images and its main challenges.
\section{Our Approach}
\label{Ⅱ}
\subsection{Preliminaries}
Training a classification network requires extracting maximally compressed features of the input while preserving as much information as possible about the target object.
However, the information bottleneck compresses the information contained in the input image, so that only a subset of task-relevant features is passed to the output, and most of the non-discriminative information is ignored.
Therefore, we seek to reduce the information bottleneck to resolve the contradiction between the two sub-tasks of object detection: classification and localization.
Our overall framework is shown in Figure~\ref{fig2}. Concretely, the pre-trained ResNet~\cite{he2016deep} is adopted as the backbone network following~\cite{ding2019learning}. Then, we construct the enhanced feature pyramid, which is based on GSNet (in Figure~\ref{fig2}(b)) and FRM (in Figure~\ref{fig2}(c)). At last, the final prediction results are output by the head network.
\subsection{Global Semantic Network (GSNet)}
The classification task requires invariance to translation and scale; that is, the learned features should not change with the position and shape of the target. Localization, by contrast, is a position-sensitive task that requires equivariance to translation and scale. When performing both classification and localization tasks, we consider the following two points.
Above all, the fully connected layer is spatially sensitive, so it is not suitable for localization~\cite{wu2020rethinking}. The convolutional layer has the property of parameter sharing, and the features obtained by convolutions with the same kernel have stronger spatial correlation. Therefore, the network should be fully convolutional~\cite{long2015fully}, without fully connected layers or global pooling layers.
Secondly, a classification network is, in general, only able to identify a few small discriminative parts with high response. A large receptive field allows low-response regions to be recognized by sensing the high-response environment around them. Moreover, a large kernel size enables the classification model to have a tightly connected structure that copes with different transformations. Therefore, the network should use convolution kernels as large as possible, or even global convolutions, to significantly expand the receptive field.
Motivated by the above observation, we present a Global Semantic Network (GSNet) in Figure~\ref{fig2} (b)
to explicitly alleviate the inconsistency between classification and regression.
GSNet employs the combined convolutions of $1 \times k + k \times 1$ and $k \times 1 + 1 \times k$ rather than large kernel convolutions of $k \times k$ directly. Meanwhile, only linear operations are applied between the convolution layers. These symmetric and depth-separable combined convolutions not only incorporate the context, but also control the number of parameters and the computational cost, which makes the design more practical. The specific operation can be formulated as:
\begin{equation}
Y=\operatorname{Conv1D}\left(\operatorname{Conv1D}(X)^{T}\right)+\operatorname{Conv1D}\left(\operatorname{Conv1D}(X)\right)^{T}
\end{equation}
where $X$ denotes the input, i.e., the feature maps extracted from the backbone feature pyramid.
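As an illustration, the combined convolution of Eq. (1) can be sketched on a single-channel feature map in plain NumPy. This is a toy emulation only: the learned Conv1D layers are replaced by one fixed 1D kernel shared between branches, and the input is assumed square so that the transposes keep the shapes consistent.

```python
import numpy as np

def conv1d_rows(x, kernel):
    # "1 x k" convolution: filter every row with a 1D kernel, 'same' padding
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, x)

def combined_conv(x, kernel):
    # Eq. (1): Y = Conv1D(Conv1D(X)^T) + Conv1D(Conv1D(X))^T
    # branch A: 1 x k, transpose, 1 x k again (i.e. 1 x k followed by k x 1)
    branch_a = conv1d_rows(conv1d_rows(x, kernel).T, kernel)
    # branch B: 1 x k twice, then transpose (i.e. k x 1 followed by 1 x k)
    branch_b = conv1d_rows(conv1d_rows(x, kernel), kernel).T
    return branch_a + branch_b
```

Because every step is a convolution or a transpose, the whole operator is linear in $X$, consistent with the remark above that only linear operations are applied between the convolution layers.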
Furthermore, since the localization maps obtained by the classification network cannot precisely represent the boundary of the target object, we refine the bounding box by modeling the boundary alignment as a residual structure to increase the accuracy. GSNet is introduced into the feature pyramid structure, which is closely linked to the feature maps and trained in an end-to-end manner, improving the
regression ability of the model effectively. Formally,
\begin{equation}
\text { GSNet }(Y)=Y+R(Y)+X
\end{equation}
\begin{equation}
R(Y)=\operatorname{Conv} 2 D(\sigma(\operatorname{Conv} 2 D(Y)))
\end{equation}
where $R(\cdot)$ is the residual branch, and $\sigma$ is the ReLU activation function.
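The residual refinement of Eqs. (2)-(3) can be sketched in NumPy as follows; the two learned Conv2D layers are replaced by a fixed $3 \times 3$ box filter purely for illustration.

```python
import numpy as np

def box3x3(x):
    # stand-in for a learned 3x3 Conv2D: a 3x3 mean filter with zero padding
    h, w = x.shape
    p = np.pad(x, 1)
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def residual_branch(y):
    # Eq. (3): R(Y) = Conv2D(sigma(Conv2D(Y))), with sigma = ReLU
    return box3x3(np.maximum(box3x3(y), 0.0))

def gsnet_output(y, x):
    # Eq. (2): GSNet(Y) = Y + R(Y) + X  (skip connection back to the input X)
    return y + residual_branch(y) + x
```

The skip connection back to $X$ lets the module learn only the boundary correction on top of the original backbone features.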
\subsection{Fusion Refinement Module (FRM)}
We propose a novel Fusion Refinement Module (FRM), shown in Figure~\ref{fig2} (c). Since the semantic information of different stages is dissimilar, direct addition for cross-scale fusion causes aliasing problems, which may confuse the localization and classification tasks.
Consequently, we use concatenation to replace the original addition in the feature pyramid. Compared with addition, concatenation retains more feature information, especially channel information, but increases the number of model parameters and the computation. To this end, $1 \times 1$ convolutions are then adopted to reduce the dimension.
Note that a residual branch is introduced to inject different spatial context information. There are three feature maps for concatenation, which come from the backbone network, GSNet at the same stage, and GSNet at the next stage after up-sampling, respectively. The residual structure superimposes deep features on the basis of the original features, realizing the fusion of global and local information.
After that, we implement stacked convolutional layers to remove the aliasing effect caused by the interpolation, reducing information loss in the channels and enhancing the feature representational ability. $1 \times 1$ convolutions are adopted at intervals as dimensionality-reduction modules to reduce convolution bottlenecks. The enhanced feature pyramid contains more high-level semantic information. Formally, it can be expressed as
\begin{equation}\label{eq4}
\mathrm{Z}=\mathrm{f}^{1 \times 1}\left(\mathrm{f}^{3 \times 3}\left(\mathrm{f}^{1 \times 1}\left(\mathrm{f}^{3 \times 3}\left(\mathrm{f}^{1 \times 1}\left(\left[\mathrm{X}, \mathrm{Y}_{3}, \mathrm{Y}_{4}\right]\right)\right)\right)\right)\right)
\end{equation}
where $[\cdot]$ is channel-wise concatenation, and $X$, $\mathrm{Y}_{3}$ and $\mathrm{Y}_{4}$ are feature maps from the backbone network and the third and fourth stages of the feature pyramid, respectively. $f^{1 \times 1}$ and $f^{3 \times 3}$ denote the standard $1 \times 1$ and $3 \times 3$ convolutions.
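A shape-level NumPy sketch of Eq. (4), assuming (C, H, W) feature maps: the learned $3 \times 3$ convolutions are replaced by per-channel box filters, and the matrices w1, w2, w3 stand in for the learned $1 \times 1$ kernels.

```python
import numpy as np

def upsample2(x):
    # nearest-neighbour 2x up-sampling of a (C, H, W) feature map
    return x.repeat(2, axis=1).repeat(2, axis=2)

def conv1x1(x, w):
    # 1 x 1 convolution = per-pixel channel mixing; w has shape (C_out, C_in)
    return np.einsum("oc,chw->ohw", w, x)

def conv3x3_box(x):
    # stand-in for a learned 3 x 3 convolution: per-channel box filter
    c, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    return sum(p[:, i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def frm(x, y3, y4, w1, w2, w3):
    # Eq. (4): Z = f1x1(f3x3(f1x1(f3x3(f1x1([X, Y3, Y4])))))
    z = np.concatenate([x, y3, upsample2(y4)], axis=0)  # channel-wise concat
    z = conv1x1(z, w1)                                  # dimension reduction
    z = conv1x1(conv3x3_box(z), w2)
    z = conv1x1(conv3x3_box(z), w3)
    return z
```

The first $1 \times 1$ layer immediately brings the tripled channel count back down, which is why the concatenation stays affordable.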
\section{Experiments}
\subsection{Data sets and Evaluation Metric}
{DOTA}~\cite{xia2018dota} contains 2,806 aerial images with 15 classes, whose size varies from $800 \times 800$ to $4000 \times 4000$.
{HRSC2016}~\cite{liu2017high} contains 1,061 high-resolution images, whose size ranges from $300 \times 300$ to $1500 \times 900$.
For both data sets, we randomly take 3/6 of the images for training, and 1/6 and 2/6 for validation and testing, respectively. All images are cropped into $1024 \times 1024$ patches with a stride of 824.
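The cropping along one image axis can be sketched as follows; patch_origins is a hypothetical helper, with only the patch size of 1024 and the stride of 824 taken from the text.

```python
def patch_origins(size, patch=1024, stride=824):
    # top-left coordinates of the crops along one image axis; the last
    # window is shifted back so it ends exactly at the image border
    if size <= patch:
        return [0]
    origins = list(range(0, size - patch, stride))
    origins.append(size - patch)
    return origins
```

Adjacent windows overlap by $1024 - 824 = 200$ pixels, so objects cut by one patch border are usually whole in the neighbouring patch.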
We use the mean average precision (mAP) as the primary metric. In addition, two commonly used metrics are taken into consideration to verify the model efficiency: giga floating-point operations (GFLOPs) and the number of model parameters (Params). Our prediction results are submitted to the official DOTA evaluation server to obtain the final mAP and the APs of each class.
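As a reminder of the per-class AP underlying mAP, here is a generic all-point interpolated AP routine in the PASCAL VOC 2010+ style; this is an illustrative sketch, not necessarily the exact implementation of the DOTA evaluation server.

```python
import numpy as np

def average_precision(recall, precision):
    # all-point interpolated AP: area under the monotonized P-R curve
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(p) - 2, -1, -1):       # make precision non-increasing
        p[i] = max(p[i], p[i + 1])
    idx = np.where(r[1:] != r[:-1])[0]        # recall change points
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```

mAP is then the mean of this quantity over the object categories.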
\subsection{Experimental Setup}
\myparagraph{Baselines.}
We deploy the standard two-stage detector Faster R-CNN~\cite{ren2015faster} and the one-stage detector RetinaNet~\cite{lin2017focal} as the baselines. ResNet101 and ResNet50 (both pre-trained on ImageNet~\cite{russakovsky2015imagenet}) are adopted as the backbone networks separately, and FPN is utilized to produce the enhanced neck network.
As in~\cite{ren2015faster,lin2017focal}, the rotated head is developed in RoI-Transformer~\cite{ding2019learning} and RotatedRetinaNet~\cite{lin2017focal} individually. For fair comparison, all settings strictly follow those reported in the official codes.
\myparagraph{Training Details.}
During training, we use standard SGD~\cite{krizhevsky2012imagenet} as the optimizer, where the learning rate is initialized to 0.005 and 0.0025 for the two baselines individually, and the weight decay and momentum are 0.0001 and 0.9, respectively. Models are trained on DOTA and HRSC2016 for 12 epochs on an RTX 3060 with the batch size set to 2.
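The SGD update with momentum and weight decay used above can be written out explicitly; this is a generic sketch of the standard rule, not tied to any particular framework.

```python
import numpy as np

def sgd_step(w, grad, velocity, lr=0.005, momentum=0.9, weight_decay=1e-4):
    # standard SGD with momentum and coupled L2 weight decay:
    #   v <- mu * v + (grad + wd * w);  w <- w - lr * v
    g = grad + weight_decay * w
    velocity = momentum * velocity + g
    return w - lr * velocity, velocity
```

With the settings above, lr = 0.005 (Faster R-CNN) or 0.0025 (RetinaNet), mu = 0.9 and wd = 1e-4.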
\input{tab1}
\subsection{Ablation Study}
Our ablation studies aim to verify the effectiveness and efficiency of our proposed modules on different baselines and data sets. For this purpose, we conduct a series of experiments and show some visual comparisons.
\myparagraph{Effectiveness of the proposed modules.}
We implement each module and their combination on the DOTA data set. Table~\ref{tab1} shows the results of the comparison. Specifically, we take Faster R-CNN based on ResNet101 FPN as the baseline, which is applied to rotational regression tasks. It is observed that GSNet and FRM improve the bounding box mAP by 3.52$\%$ and 5.89$\%$, respectively. Combining GSNet and FRM, our method achieves 79.37$\%$ mAP, which is about 6.28$\%$ higher than the baseline. For model efficiency, it is evident that when GSNet is deployed, there is only a 0.49 M increment in model Params and 4.79 in GFLOPs. This proves that the symmetric combined convolution in GSNet effectively controls model parameters and computational cost.
\myparagraph{Effectiveness on different baselines.}
Table~\ref{tab1} shows the results of our modules deployed on the two baselines (i.e., the two-stage detector Faster R-CNN and the one-stage detector RetinaNet) on DOTA. For RetinaNet, comparing row 9 to row 6, we observe that the proposed modules bring remarkable performance enhancements (i.e., 2.82$\%$ mAP). This is because our GSNet and FRM encourage each layer to preserve more features so as to reduce the information bottleneck. This phenomenon is consistent across the HRSC2016 data set.
As we mentioned under Eq.~\ref{eq4}, the main reason for the computational overhead is the introduction of additional convolutional layers in constructing FRM.
\input{fig3}
\input{tab3}
\input{tab4}
\myparagraph{Visualizations.}
Figure~\ref{fig3} shows some visual comparisons on DOTA between the baseline (blue boxes) and Faster R-CNN+Ours (red boxes). Faster R-CNN+Ours refers to our proposed network based on ResNet101 FPN following~\cite{ding2019learning}. It can be seen intuitively that our proposed methods bring obvious accuracy improvements in object location and boundary, especially for large objects such as roundabouts, harbors, and planes. From the last two rows, we clearly observe that our approach also enhances the recall of small targets.
\subsection{Comparisons with State-of-the-art Methods}
\myparagraph{Results on DOTA.}
The existing state-of-the-art methods are mainly divided into two-stage and one-stage methods. As shown in Table~\ref{tab3}, compared with the previous best results of 76.81$\%$ by SCRDet++~\cite{yang2020scrdet++} (i.e., a two-stage method) and 73.50$\%$ by CFC-Net~\cite{ming2021cfc} (i.e., a one-stage method), our GSNet + FRM model ranks first and improves mAP by 2.56$\%$ and 0.99$\%$, respectively. Concretely, some hard categories (e.g., Ship, Large vehicle, Harbor, and Helicopter) show notable mAP increments. These results indicate that our model further enhances the feature representation capabilities.
\myparagraph{Results on HRSC2016.}
The result comparisons on the test set of HRSC2016 are shown in Table~\ref{tab3}. Our models markedly prevail, achieving the top performance (i.e., 90.50$\%$ and 90.47$\%$ mAP) among all the state-of-the-art methods and surpassing the previous best model by 0.33$\%$ and 0.30$\%$, respectively, with fewer anchors. This shows that strong performance can be achieved without relying heavily on heuristically defined anchors.
\section{Conclusion}
In this letter, we first analyze the existing problems in classification networks with the theory of the information bottleneck. Then, we propose a simple yet effective Global Semantic Network and Fusion Refinement Module for reducing the information bottleneck on aerial images. Extensive experiments on two data sets confirm the superiority of our GSNet+FRM model. In the future, we plan to adapt our modules to other computer vision tasks (e.g., semantic segmentation).
\section{Introduction}
The amazing physical properties of graphene\cite{novo04} have triggered the search for structures with superior performance among the graphene allotropes. One of their families, the graphynes, results from insertion of acetylenic groups $-$C$\equiv$C$-$ into carbon-carbon bonds of graphene, which can be done in a variety of ways.\cite{baug87,nari98} The graphynes are semiconductors with a finite gap\cite{nari98,colu03,kim12} or are, similarly to graphene, zero-gap semiconductors with linear electronic dispersion at the Fermi energy.\cite{malk12,kim12} It has been predicted that these structures share the unique mechanical,\cite{cran12} thermal,\cite{ouya12} and electrical\cite{chen13} properties of graphene. Except for $\gamma$-graphyne, the vibrational properties of the graphynes have not been investigated yet.\cite{ouya12} Along with the theoretical study of the graphynes, there is some progress in the synthesis of fragments of graphene allotropes.\cite{malk12} Even though extended graphynes have not been synthesized yet, it is important to investigate theoretically their response to external perturbations for the needs of sample characterization for future nanoelectronics applications. A cheap and non-destructive characterization method is Raman spectroscopy, which has been applied with success to a number of all-carbon structures. The application of this method requires, however, the knowledge of the Raman bands of the graphynes.
Here, we present a theoretical study of the Raman spectra of $\alpha$-, $\beta$-, and $\gamma$-graphyne within an \textit{ab-initio}-based non-orthogonal tight-binding (NTB) model. First, we calculate the electronic band structure and minimize the total energy to obtain the relaxed structure of the three graphynes. Then, the phonon dispersion and the Raman intensity of the Raman-active phonons are derived using perturbation theory within the NTB model. Lastly, the simulated Raman spectra of the graphynes are discussed. The computational details are given in Sec. II. The obtained results are compared to available theoretical data in Sec. III. The paper ends with conclusions (Sec. IV).
\section{Computational details}
The NTB model\cite{popo04} utilizes Hamiltonian and overlap matrix elements, determined from \textit{ab-initio} data on carbon dimers.\cite{pore95} The total energy of the atomic structure is split into the sum of band structure and repulsive contributions. The latter is modeled by pair potentials with \textit{ab-initio}-derived parameters. The expression of the total energy is used for relaxation of the atomic structure, which is mandatory for the calculation of the phonon dispersion. The dynamical matrix explicitly accounts for the electronic response to the atomic displacements in first-order perturbation theory with electron-phonon matrix elements, calculated within the NTB model.\cite{popo10} Both the structure relaxation and phonon calculation require summations over the Brillouin zone of the graphynes. The convergence of the phonon frequencies to within 1 cm$^{-1}$ is reached on increasing the size of the Monkhorst-Pack mesh of points for the summations up to $40\times40$ ($\alpha$-graphyne), $15\times15$ ($\beta$-graphyne), and $20\times20$ ($\gamma$-graphyne).
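For reference, the fractional coordinates of an $n \times n$ Monkhorst-Pack mesh can be generated as follows; this is a generic sketch of the standard construction, independent of the actual NTB code used here.

```python
import numpy as np

def monkhorst_pack_2d(n):
    # fractional coordinates of an n x n Monkhorst-Pack mesh:
    # u_i = (2i - n - 1) / (2n),  i = 1..n  (Monkhorst & Pack, 1976)
    u = (2.0 * np.arange(1, n + 1) - n - 1) / (2.0 * n)
    kx, ky = np.meshgrid(u, u, indexing="ij")
    return np.stack([kx.ravel(), ky.ravel()], axis=1)
```

The $40\times40$ mesh quoted above for $\alpha$-graphyne thus contains 1600 k-points, symmetric about $\Gamma$.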
The Raman intensity of graphyne is calculated in fourth-order quantum-mechanical perturbation theory\cite{popo12} with electron-photon and electron-phonon matrix elements, derived within the NTB model. The expression for the Raman intensity includes summations over the Brillouin zone. The integrated intensity of the Raman bands is found to converge to within a few percent by increasing the size of the Monkhorst-Pack mesh of points for the summations up to $200\times200$ for the three graphynes. The obtained Raman intensity depends on the laser photon energy. As a consequence, the Raman signal is resonantly enhanced for laser photon energies close to the optical transitions of the structure. Such a resonant behavior of the Raman scattering is normally observed in many all-carbon periodic materials, e.g., carbon nanotubes. In graphene, the scattering is resonant at all laser energies, which is manifested as a monotonous increase of the intensity with increasing laser photon energy. The resonant scattering makes it possible to observe Raman signal even from nano-size structures.
\section{Results and Discussion}
\begin{figure}[t]
\includegraphics[width=80mm]{fig1-popov-lambin.eps}
\caption{Schematic representation of the crystal structure of (a) graphene, (b) $\alpha$-graphyne, (c) $\beta$-graphyne, and (d) $\gamma$-graphyne. The atoms and bonds are denoted by solid circles and connecting lines, respectively; the rhombs are the unit cells of the structures.}
\end{figure}
\begin{table}[b]
\caption{\label{tab:table1}
Optimized unit cell parameter $a$ and bond lengths $r_{1}$, $r_{2}$, and $r_{3}$ (in $\text{\AA}$), graphyne unit cell area per atom relative to that of graphene, $RA$, and relative excess of binding energy per atom (in eV) with respect to graphene, $\Delta E_{b}$, in comparison with available reported \textit{ab-initio} data. $r_{1}$ is the length of the bond between a triply-coordinated atom and its doubly-coordinated neighbors, $r_{2}$ is the length of the bond between two triply-coordinated atoms, and $r_{3}$ is the length of the bond between doubly-coordinated atoms (triple bond). For each graphyne, available reported \textit{ab-initio} data are provided.}
\begin{ruledtabular}
\begin{tabular}{rlllllll}
&$a$&$r_{1}$&$r_{2}$&$r_{3}$&$RA$&$\Delta E_{b}$&Ref.\\
\hline
$\alpha$&$6.992$&$1.402$&$-$&$1.232$&$2.02$&$0.96$&\\
&$6.997$&$1.4$&$-$&$1.244$&&&[\onlinecite{colu03}]\\
&$6.981$&$1.400$&$-$&$1.232$&&&[\onlinecite{kim12}]\\\
$\beta$&$9.507$&$1.415$&$1.398$&$1.225$&$1.72$&$0.87$&\\
&$9.46$&$1.43$&$1.34$&$1.2$&&&[\onlinecite{colu03}]\\
&$9.50$&$1.463$&$1.392$&$1.234$&&&[\onlinecite{kim12}]\\
$\gamma$&$6.894$&$1.430$&$1.407$&$1.220$&$1.31$&$0.63$&\\
&$6.86$&$1.42$&$1.40$&$1.22$&&&[\onlinecite{nari98}]\\
&$6.883$&$1.424$&$1.407$&$1.221$&&&[\onlinecite{kim12}]\\
\end{tabular}
\end{ruledtabular}
\end{table}
The $\alpha$-, $\beta$-, and $\gamma$-graphyne have the hexagonal symmetry of graphene itself with two-dimensional space group $p6m$ (Fig. 1). The three graphynes can be derived from graphene by insertion of an acetylenic group into every carbon bond, two-thirds of the carbon bonds, and one-third of the carbon bonds of graphene, respectively. Other, less symmetric graphynes can also be constructed by insertion of acetylenic groups. Nevertheless, the calculations for them can be performed in a similar way as for the considered graphynes.
The optimized unit cell parameter and carbon-carbon bond lengths are given in Table I along with the relative area per atom of the graphynes with respect to that of graphene, $RA$, and the relative excess of binding energy per atom of the graphynes with respect to graphene, $\Delta E_{b}$. The lengths of the bonds between two triply-coordinated atoms and between a triply-coordinated atom and its doubly-coordinated neighbors are close to that of graphene of $\approx1.42$ $\text{\AA}$ with a few exceptions (see Table I). The bond between doubly-coordinated atoms is $\approx1.2$ $\text{\AA}$, as expected for triple carbon bonds. The three graphynes have larger binding energy compared to graphene. Amongst the three graphynes, $\gamma$-graphyne has the smallest $\Delta E_{b}$ and the smallest $RA$, while these two quantities are the largest for $\alpha$-graphyne. For each graphyne, the optimized parameters agree well with the available \textit{ab-initio} ones in Table I. The deviation of the NTB structural parameters from the \textit{ab-initio} ones is comparable to the deviation between the two sets of \textit{ab-initio} parameters. The reasons for the disagreement between the optimized parameters are unclear.
\begin{figure}[t]
\includegraphics[width=80mm]{fig2-popov-lambin.eps}
\caption{Electronic band structure and the density of states (DOS) of (a) $\alpha$-graphyne, (c) $\beta$-graphyne, and (d) $\gamma$-graphyne. Panel (b) shows the Brillouin zone of the three graphynes with high-symmetry points. The symbols are reported \textit{ab-initio} data. In panel (c), the empty circle along the $\Gamma$M direction marks the predicted \textit{ab-initio} crossing of the conduction and valence bands and those along the $\Gamma$K direction mark the gap.\cite{malk12} The predictions of Ref.~\onlinecite{kim12} are similar to those of Ref.~\onlinecite{malk12} and are not shown on this panel.}
\end{figure}
Panels a), c), and d) of Fig. 2 show the calculated band structure of the three graphynes close to the Fermi energy (chosen as zero) and along three high-symmetry directions in the Brillouin zone (Fig. 2b). Since graphene and the three graphyne structures share the same space group, the symmetry of the $k$ points in reciprocal space is the same as for graphene. According to symmetry, singly- and doubly-degenerate bands coexist at the K point of the first Brillouin zone for all these structures. Both $\alpha$-graphyne and graphene are honeycomb networks of carbon atoms and are characterized by similar band structures. Their Fermi level coincides with a doubly-degenerate band at the K point that is formed by the crossing of valence and conduction bands having linear dispersion when approaching the K point (Dirac cone). At the M point, the splitting between these two bands in $\alpha$-graphyne is $1.58$ eV, which agrees well with the reported value of $\approx1.6$ eV.\cite{colu03,malk12} In $\beta$- and $\gamma$-graphyne, the Fermi level no longer coincides with a particular band at the K point. In $\beta$-graphyne, there is a band crossing (at 0 energy) along $\Gamma$M that is allowed by the $C_{2v}$ symmetry of the Bloch vectors along this line. Additionally, a tiny gap of $0.16$ eV opens at a point along the $\Gamma$K direction, in agreement with the reported one of $\approx 0.15$ eV.\cite{colu03} By comparison, the corresponding bands in $\gamma$-graphyne are pushed away and do not cross. The calculated gap at the M point of $1.45$ eV corresponds well with the reported tight-binding result of $1.3$ eV (Ref.~\onlinecite{colu03}) but is about three times larger than the \textit{ab-initio} one of $0.39$ eV (Ref.~\onlinecite{colu03}) and $0.47$ eV (Ref.~\onlinecite{kim12}).
This disagreement, partly due to density functional theory (DFT) underestimating the electronic band gap, affects the conditions for resonant Raman scattering for laser photon energy below the higher-energy gap.
The Fermi level of $\beta$-graphyne coincides with the energy of the band crossing along the $\Gamma$M direction discussed above. The Fermi surface is reduced to six points in the first Brillouin zone and, like graphene and $\alpha$-graphyne, $\beta$-graphyne is a zero-gap semiconductor.\cite{colu03,kim12,malk12} The shape of the Dirac cone is slightly more complex than in graphene and in $\alpha$-graphyne\cite{malk12} due to the lowering of symmetry, from threefold at the K point (trigonal wrapping) to mirror symmetry at the actual crossing point $k_F$ along the $\Gamma$M line. The exact location of $k_F$ depends on the details of the calculations. In a $\pi$ orthogonal tight-binding description,~\cite{kim12} $k_F$ depends on the relative values of the hopping parameters attributed to the three different C-C bonds that the $\beta$-graphyne structure has. In the present work, $k_F$ = $0.43 k_M$ where $k_M$ is the length of the $\Gamma$M segment. By comparison, H\"uckel-type calculations predict $k_F/k_M$ = 0.42,\cite{colu03} whereas \textit{ab-initio} calculations yield $k_F/k_M$ = 0.37,\cite{colu03} and 0.73.~\cite{kim12,malk12} The three cited \textit{ab-initio} calculations use DFT in the generalized gradient approximation and the Perdew-Burke-Ernzerhof exchange-correlation functional. The difference between the three calculations is in the program package used: SIESTA on the one hand\cite{colu03} and VASP on the other hand.\cite{kim12,malk12} In particular, the former package uses Troullier-Martins pseudopotentials, while the latter is based on the projector augmented waves method. The reasons for the disagreement between the available \textit{ab-initio} results should be elucidated by further sophisticated band structure studies. The reliability of the \textit{ab-initio} programs and the NTB model depends on the clarification of this point.
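The Dirac-cone physics discussed here can be illustrated with the simplest related model, the nearest-neighbour $\pi$-band tight-binding Hamiltonian of graphene itself; the hopping value $t = 2.7$ eV and unit bond length are illustrative textbook choices, not the NTB parametrization of this work.

```python
import numpy as np

def graphene_pi_bands(kx, ky, t=2.7):
    # nearest-neighbour pi-band dispersion of a honeycomb lattice:
    # E(k) = +/- t |f(k)|, f(k) summed over the three bond vectors d
    d = np.array([[1.0, 0.0],
                  [-0.5,  np.sqrt(3.0) / 2.0],
                  [-0.5, -np.sqrt(3.0) / 2.0]])
    f = np.exp(1j * (kx * d[:, 0] + ky * d[:, 1])).sum()
    return -t * abs(f), t * abs(f)
```

The two bands touch at the K point (the Dirac point) and are split by $2t|f(k)|$ elsewhere, reaching $6t$ at $\Gamma$; in $\beta$-graphyne the analogous crossing simply moves from K to a point $k_F$ on the $\Gamma$M line.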
\begin{figure}[t]
\includegraphics[width=80mm]{fig3-popov-lambin.eps}
\caption{Phonon dispersion of (a) $\alpha$-graphyne, (c) $\beta$-graphyne, and (d) $\gamma$-graphyne. Panel (b) shows the Brillouin zone of the three graphynes with high-symmetry points. The solid arrows, labeled by $1$, $2$, and $3$, denote electron scattering paths between Dirac points at two points along the $\Gamma$M direction, which give rise to anomaly of the phonon dispersion of $\beta$-graphyne at three wavevectors. In $\alpha$-graphyne, the anomaly is observed at the K point due to electron scattering between Dirac points at two K points.}
\end{figure}
Next, we calculate the phonon dispersion of the graphynes. Before proceeding with the results, we note that the NTB model overestimates the phonon frequencies in comparison with the experimental ones. This has been demonstrated and discussed in the case of C$_{60}$.\cite{pore95a} The relative deviation of the NTB phonon frequencies with respect to the reported ones for C$_{60}$ (Ref.~\onlinecite{mene00}) increases with increasing frequency up to about $1000$ cm$^{-1}$ but remains almost constant in the high-frequency region above $1000$ cm$^{-1}$. Agreement of the calculated frequencies with the reported ones can be achieved by appropriate scaling of the former. We focus on the high-frequency region, which contains the intense Raman lines for the three graphynes, and derive a single scaling factor of $0.90$ with a standard deviation of $0.01$. The same factor has been found in graphene.\cite{popo10} We assume that this scaling factor is also applicable to the high-frequency phonons in the graphynes and scale all phonon frequencies by $0.9$. The accuracy of the scaled NTB phonon frequencies in the high-frequency region, deduced from the standard deviation of the scaling factor, is $\approx1\%$, which is an acceptable accuracy for the NTB model.
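The scaling procedure amounts to a one-parameter least-squares fit of the calculated frequencies to the reference ones; the frequency pairs in the example below are hypothetical, not the actual C$_{60}$ data.

```python
import numpy as np

def scaling_factor(calc, ref):
    # single multiplicative factor s minimizing sum((s * calc - ref)^2)
    calc = np.asarray(calc, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return float(np.dot(calc, ref) / np.dot(calc, calc))

# hypothetical high-frequency pairs (cm^-1) for illustration only
calc = np.array([1200.0, 1500.0, 1800.0])
ref = 0.9 * calc
s = scaling_factor(calc, ref)
```

The standard deviation of the per-mode ratios then gives the quoted $\pm 0.01$ uncertainty of the factor.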
\begin{figure}[t]
\includegraphics[width=50mm]{fig4-popov-lambin.eps}
\caption{Atomic displacement of the Raman-active phonons of $\alpha$-graphyne, their symmetry species and frequency (in parenthesis, in cm$^{-1}$).}
\end{figure}
Panels a), c), and d) of Fig. 3 show the NTB phonon dispersion of the graphynes, scaled by $0.9$. As for graphene, there are three acoustic branches: a longitudinal and a transverse one with linear dispersion and atomic displacements in the graphyne plane (in-plane phonons), and a transverse branch with quadratic dispersion and displacements perpendicular to the graphyne plane (out-of-plane phonons). The latter branch determines the linear temperature dependence of the phonon heat capacity at low temperatures, as in the case of graphene. The optical branches have in-plane and out-of-plane atomic displacements with bond-bending (displacements perpendicular to the bond) and bond-stretching (displacements along the bond) character. The bond-stretching phonons of the triple bonds form a strip of high-frequency branches. The calculated phonon dispersion of $\gamma$-graphyne corresponds very well to the \textit{ab-initio} one,\cite{ouya12} which justifies the use of the scaling factor of $0.9$.
According to their optical activity, the zone-center phonons can be Raman-active, infrared-active, or inactive (silent). Here, we are concerned with the Raman-active phonons and their contribution to the Raman spectra. In the parent structure graphene, there is a single Raman-active phonon, the so-called $G$ mode, characterized by in-plane displacements of the carbon atoms. The Raman line associated with this phonon is usually termed the $G$ band and is observed at $1582$ cm$^{-1}$. The atomic displacements of the Raman-active phonons of the graphynes are given in Figs. 4, 5, and 6, labeled by their symmetry in the hexagonal group $p6m$ and the calculated frequency (in parentheses). There are altogether 4, 9, and 6 such phonons of the two symmetry species A$_{1g}$ and E$_{2g}$ in $\alpha$-, $\beta$-, and $\gamma$-graphyne, respectively. In all cases, the atomic displacements of the Raman-active phonons are in-plane ones. The phonons with frequencies below $\approx 1000$ cm$^{-1}$ have predominantly bond-bending character; the phonons with frequencies above $\approx 1000$ cm$^{-1}$ are predominantly bond-stretching ones. Among the latter, the phonons with frequencies above $\approx 2000$ cm$^{-1}$ are bond-stretching modes of the atoms involved in the triple bonds.
\begin{figure}[t]
\includegraphics[width=80mm]{fig5-popov-lambin.eps}
\caption{Atomic displacements of the Raman-active phonons of $\beta$-graphyne, their symmetry species, and frequencies (in parentheses, in cm$^{-1}$).}
\end{figure}
The calculated Raman spectra of the graphynes for in-plane parallel scattering geometry at the commonly used laser photon energy $E_{L}=2.4$ eV are shown in Fig. 7. The spectra exhibit a few intense lines. In $\alpha$-graphyne, the two intense lines are due to phonons of E$_{2g}$ symmetry. The lower-frequency one at $1012$ cm$^{-1}$ arises from a G mode-like phonon. Its frequency is lower than that of the $G$ mode of graphene by a factor of $\approx1.5$, apparently due to softening of the restoring force by the replacement of $sp^{2}$ graphene bonds by single C$-$C ones. The higher-frequency line at $2022$ cm$^{-1}$ is due to a bond-stretching phonon with displacement of the atoms of the triple bonds. The other two Raman-active phonons give a negligible contribution to the Raman spectra. In $\beta$-graphyne, the two intense lines come from phonons of A$_{1g}$ symmetry. The lower-frequency line at $1196$ cm$^{-1}$ is due to a phonon with breathing motion of the carbon hexagons. In contrast to $\alpha$-graphyne, the higher-frequency line is less intense than the lower-frequency one and arises from a phonon with bond-stretching movements of the triple-bond atoms. The Raman spectrum of $\gamma$-graphyne exhibits three intense lines, coming from the two phonons of A$_{1g}$ symmetry and one phonon of E$_{2g}$ symmetry. The Raman line at $1221$ cm$^{-1}$ is due to a phonon with breathing motion of the carbon hexagon and that at $2258$ cm$^{-1}$ is due to a phonon with displacements of the triple-bond carbon atoms. The Raman line at $1518$ cm$^{-1}$ is a G mode-like one but is softened to a lesser extent than in $\alpha$-graphyne, because of the replacement of only one $sp^{2}$ graphene bond by a single bond. The comparison of the three Raman spectra in Fig. 7 shows that the lowest-frequency lines are located in a narrow interval of Raman shifts around $1000$ cm$^{-1}$ and the highest-frequency ones have Raman shifts around $2000$ cm$^{-1}$.
In order to distinguish between samples of the three graphynes, Raman measurements in the in-plane crossed scattering geometry should be performed. In this geometry, only the lines of the E$_{2g}$ phonons in Fig. 7 should be observed (spectra not shown).
\begin{figure}[t]
\includegraphics[width=80mm]{fig6-popov-lambin.eps}
\caption{Atomic displacements of the Raman-active phonons of $\gamma$-graphyne, their symmetry species, and frequencies (in parentheses, in cm$^{-1}$).}
\end{figure}
Finally, similarly to graphene, defect-induced and two-phonon Raman bands are expected in the Raman spectra of $\alpha$- and $\beta$-graphyne.\cite{popo12} Such bands can originate from double resonant Raman scattering processes. In such a process, an incident photon is absorbed with creation of an electron-hole pair. The electron (or hole) is scattered by a phonon and a defect or by two phonons between the states of one (or two) Dirac cones before annihilation of the electron-hole pair with emission of a photon.\cite{popo12} The Raman shift of the most intense defect-induced and two-phonon bands can be deduced from the phonon branches with the most prominent Kohn anomaly, which for both graphynes lie along the $\Gamma$K direction. The length of the wavevector of the scattering phonons for double-resonant processes is given by $q=q_{DD}\pm E_{L}/v_{F}$, where $q_{DD}$ is the length of the wavevector connecting two Dirac points and $v_{F}$ is the electron Fermi velocity at the Dirac point along the $\Gamma$K direction. The frequency of the scattering phonon is determined from the phonon dispersion, Figs. 3a and 3c. Thus, at $E_L=2.4$ eV, two-phonon overtone bands should be observed at $\approx2030$ cm$^{-1}$ and $\approx4000$ cm$^{-1}$ for $\alpha$-graphyne, and at $\approx2450$ cm$^{-1}$ and $\approx4360$ cm$^{-1}$ for $\beta$-graphyne. In the presence of defects, defect-induced bands should be seen at about half of the Raman shift of the two-phonon bands. A thorough calculation of the defect-induced and the two-phonon bands is prohibitively time-consuming and is not performed here. This leaves open the question of the intensity of these bands.
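To make the selection rule concrete, the following sketch evaluates $q = q_{DD} \pm E_{L}/v_{F}$ with $\hbar$ restored in SI units (the text above uses units with $\hbar = 1$); the values of $q_{DD}$ and $v_{F}$ used in the example are placeholders, not the graphyne parameters obtained in this work.

```python
# Hypothetical evaluation of the double-resonance selection rule
# q = q_DD +/- E_L / (hbar * v_F); in the text, hbar = 1 is implied.
HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

def scattering_wavevectors(q_dd, e_laser_ev, v_fermi):
    """Return the two allowed phonon wavevector lengths (in 1/m).

    q_dd    : distance between the two Dirac points (1/m), placeholder value
    v_fermi : electron Fermi velocity at the Dirac point (m/s), placeholder
    """
    shift = e_laser_ev / (HBAR_EV_S * v_fermi)
    return q_dd + shift, abs(q_dd - shift)
```

The frequencies of the scattering phonons are then read off the phonon dispersion at these two wavevectors.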
\begin{figure}[t]
\includegraphics[width=80mm]{fig7-popov-lambin.eps}
\caption{Raman spectra of $\alpha$-, $\beta$-, and $\gamma$-graphyne at $E_L=2.4$ eV. The most intense lines of $\alpha$-graphyne, $\beta$-graphyne, and the $G$ band of graphene (not shown) have comparable intensity. The intensity of the lines of $\gamma$-graphyne strongly depends on $E_L$.}
\end{figure}
\section{Conclusions}
In conclusion, we have performed calculations of the electronic band structure, phonon dispersion, and Raman spectra of $\alpha$-, $\beta$-, and $\gamma$-graphyne. We have shown that among the many Raman-active phonons for each graphyne, only two or three of them give rise to intense Raman bands. The one-phonon Raman lines between $2000$ and $2300$ cm$^{-1}$ should be characteristic features of the graphynes. The predicted Raman bands can be used for assignment of the lines of the Raman spectra of graphyne samples to particular graphynes.
\acknowledgments
V.N.P. acknowledges financial support from the Universit{\'e} de Namur (FUNDP), Namur, Belgium.
\section{Introduction}\label{Introduction}
Learning on Graphs (LoG) in multi-client systems has extensive applications such as Graph Neural Networks (GNNs) for recommendation \citep{fan2019graph, ge2020graph, wang2020global, wu2021fedgnn}, finance \citep{wang2019semi, liu2018heterogeneous}, and traffic \citep{yu2017spatio, diehl2019graph}.
The key to the success of LoG is sharing local raw data between clients. For example, when recommending items to users with insufficient local data, data sharing from their friends with similar preferences in a social network can improve the performance of recommendation models.
On the other hand, Federated Learning (FL) is widely explored due to its protection of data privacy, especially in medical fields \citep{xu2020federated}, mobile device fields \citep{hard2018federated, lim2020federated}, and Internet of Things (IoT) fields \citep{lu2019blockchain}. In FL, models are trained without data sharing among clients to protect clients' local data privacy. As a consequence, combining FL and LoG in multi-client systems faces a fundamental conflict in data sharing.
Considerable work combines Federated Learning and Graph Machine Learning. One attractive research line uses FL to train GNNs \citep{wang2020graphfl,zhang2021subgraph}. In addition, \citep{meng2021cross, wu2021fedgnn} use FL to train GNN-based models to address specific real-world applications. \citep{zhang2021federated, he2021fedgraphnn} summarize current efforts on FL over graphs, including the above literature. However, most current works do not utilize the network of clients in the system and fail to protect the privacy of the nodes in the network. In other words, previous literature never models FL clients as nodes in GNNs on multi-client systems. Besides, all these works are application-oriented without a theoretical guarantee. Therefore, the fundamental data-sharing conflict remains unsolved.
Such significant conflict motivates our investigation of the construction of Graph Federated Learning (GFL) in multi-client systems: \textbf{Can we formulate a GFL framework to address the data sharing conflict, paired with theoretical and empirical supports?} We aim to deliver a generic framework of GFL. Our work focuses on the centralized federated learning setting while data collected by clients are Non-IID distributed.
\textbf{Contributions.}
We formulate the GFL problem for a graph-based model in multi-client systems. To address the data sharing conflict, we propose an FL solution with the hidden representation sharing technique, which only requires sharing hidden representations rather than the raw data from the neighbors, to protect data privacy on multi-client systems. A technical challenge arises since the hidden representations are only exchanged during communication between clients and the central server, making unbiased gradient estimation impractical. As a remedy, we provide a practical gradient estimation method.
Moreover, a convergence analysis with non-convex objectives of the proposed algorithm is provided.
To the best of our knowledge, this is the first theoretical analysis for FL with a graph-based model.
We propose \texttt{GFL-APPNP} and empirically evaluate the proposed method for several classification tasks on graphs, including deterministic node classification, stochastic node classification, and supervised classification.
Our experiments show that the proposed method converges and achieves competitive performance.
Additionally, the results provide a good match between our theory and practice. The contributions in this paper are summarized as follows:
\begin{itemize}[leftmargin=10pt, itemsep = -3pt]
\item Formulate the GFL problem to model FL clients as nodes in LoG on multi-client systems.
\item Propose FL solution with hidden representation sharing for GFL problem to resolve data sharing conflict.
\item Provide theoretical non-convex convergence analysis for GFL.
\item Propose \texttt{GFL-APPNP} and empirically show the proposed algorithm is valid and has competitive performance on classification tasks.
\end{itemize}
\section{Related Works}\label{Related_works}
\textbf{Federated Learning for GNNs.} How to utilize the FL technique to train GNNs is an interesting topic that attracts much attention from researchers. For instance, \cite{wang2020graphfl} focuses on graph semi-supervised learning via meta-learning and handles testing nodes with new label domains as well as leverages unlabeled data. \citep{zhang2021subgraph} proposes federated learning to train GNNs by dividing a large graph into subgraphs. \citep{xie2021federated} considers an FL solution to train GNNs for entire-graph classification. \citep{he2021spreadgnn} proposes decentralized periodic SGD to solve serverless Federated Multi-Task Learning for GNNs. \cite{meng2021cross} proposes a GNN-based federated learning architecture for spatio-temporal data modeling. \citep{wu2021fedgnn} puts forward a decentralized federated framework for privacy-preserving GNN-based recommendations.
However, \citep{zhang2021subgraph, wang2020graphfl, xie2021federated, he2021fedgraphnn} assume each client has its own graphs, and \citep{meng2021cross, wu2021fedgnn} use federated learning to train GNN-based models. None of these works applies federated learning to train GNNs on multi-client systems with protection of node-level privacy, which is what our work addresses.
\textbf{Personalized Federated Learning.} The conventional FL approach faces a fundamental challenge of poor performance on highly heterogeneous clients. Previous works \citep{li2019convergence, li2020federated} provided solutions to tackling Non-IID data across clients. Recently, inspired by personalization, research on personalized federated learning has developed rapidly \citep{fallah2020personalized, t2020personalized, mansour2020three}. In particular, personalization with graph structure to tackle heterogeneity in FL is highly related to our work. For example, \citep{smith2017federated} proposes \texttt{MOCHA}, which uses a graph-type regularization to control the parameters and solves the problem in a primal-dual framework, and \citep{hanzely2020federated} provides a similar regularizer for multitask learning. \citep{t2020personalized} considers an implicit model where personalized parameters come from a Moreau envelope, and this idea has recently been generalized to graph Laplacian regularization \citep{dinh2021new}. \citep{lalitha2019peer, lalitha2018fully} assume that there is a common parameter shared across the network when each node of the graph is viewed as a federated learning client that generates independent data. All these works are model-level personalization based on graphs, such as graph regularization, while LoG encourages data-level sharing.
\textbf{Notations.}
Let $[n]$ be the set $\{1,2,...,n\}$. Vectors are assumed to be column vectors. ${\boldsymbol 1}$ is a vector with all ones. ${\boldsymbol I}$ is the identity matrix with appropriate dimension. $\norm{\cdot}$ is assumed to be the $2$-norm. For a matrix ${\boldsymbol A}$, $\lambda_{max}({\boldsymbol A})$ is the maximum eigenvalue of ${\boldsymbol A}$ and ${\boldsymbol A}^{\dagger}$ is the Moore–Penrose inverse of ${\boldsymbol A}$. ${\cal O}(\cdot)$ is the big-O notation.
\section{Graph Federated Learning}\label{Problem}
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.3]{figures&tables/GFL_framework}
\caption{(a) four steps in each communication round: (i) Uploading models. (ii) Broadcasting aggregated model. (iii) Uploading hidden representations. (iv) Broadcasting hidden representations. (b) global encoder $\Psi_h$ is shared since the same task is shared among clients. This is why \texttt{FedAvg} can be utilized in this multi-client system. The personalized aggregator $\Psi_{{\cal G}}$ accounts for the statistical heterogeneity across clients.}
\label{fig: GFL_framework}
\vspace{-4mm}
\end{figure}
\subsection{Preliminaries}
\label{Sec: preliminaries}
\textbf{Federated Learning.} In typical FL, multiple clients collectively solve a global task. Our work focuses on the centralized setting with a central server, and we consider the following consensus optimization problem:
\begin{equation}
\begin{aligned}
\label{FL_problem}
\min_{{\boldsymbol W}} \mbox{ } F({\boldsymbol W}):=\frac{1}{N} \sum_{k=1}^N F_k({\boldsymbol W})
\end{aligned}
\end{equation}
where $N$ is the number of clients, and ${\boldsymbol W}$ is the model parameter. $F({\boldsymbol W})$ is the global loss function and $F_k({\boldsymbol W})$ is the local loss function. Client $k$ has access only to its local data and conducts the local update ${\boldsymbol W}_{k}^{t+1} \leftarrow {\boldsymbol W}_k^t - \eta {\boldsymbol g}^t_k$, where ${\boldsymbol g}^t_k := \nabla \Hat{F}_k({\boldsymbol W}_k^t)$ is the stochastic gradient estimator of $\nabla F_k({\boldsymbol W}_k^t)$ and $\eta$ is the learning rate. Throughout our work, $\nabla$ denotes the gradient w.r.t. the model parameter ${\boldsymbol W}$. Denote $I$ as the number of local updates between two communication rounds. During the FL process, after $I$ steps of local update, the central server aggregates the latest models from clients according to \texttt{FedAvg} \citep{konevcny2016federated}: $\Bar{{\boldsymbol W}}^t = \frac{1}{N} \sum_{k=1}^N {\boldsymbol W}_k^t$.
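The local-update and aggregation steps above can be sketched as follows; the gradient oracles, the default step-size values, and the sequential loop over clients are illustrative simplifications, not part of the \texttt{FedAvg} specification.

```python
import numpy as np

# Minimal sketch of FedAvg with I local SGD steps per communication round.
# Each entry of grad_oracles is a stochastic-gradient oracle g_k(W) for one
# client; names and default hyperparameters are illustrative.
def fedavg_round(W_bar, grad_oracles, eta=0.1, I=5):
    """One round: broadcast W_bar, run I local updates per client, average."""
    local_models = []
    for grad_k in grad_oracles:           # clients run in parallel conceptually
        W = W_bar.copy()
        for _ in range(I):
            W = W - eta * grad_k(W)       # local update W <- W - eta * g_k(W)
        local_models.append(W)
    return np.mean(local_models, axis=0)  # server aggregation (FedAvg)
```

For example, with quadratic local losses $F_k({\boldsymbol W}) = \frac{1}{2}\norm{{\boldsymbol W} - {\boldsymbol c}_k}^2$, repeated rounds drive $\Bar{{\boldsymbol W}}$ toward the minimizer of the average loss.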
\textbf{Statistical Heterogeneity.} The goal of FL is to minimize the global loss on the average data distribution across clients, as shown in Eq.\eqref{FL_problem}. However, in most substantial applications of FL, clients collect data in a Non-IID distributed manner, leading to a fundamental statistical heterogeneity/shift problem in FL \citep{li2020federated, karimireddy2020scaffold}. \citep{fallah2020personalized, laguel2021superquantile} have suggested quantification of statistical heterogeneity. In this paper, we use the term ``level of statistical heterogeneity'' to describe how large the statistical shift is across clients.
\subsection{GFL Problem Formulation}
\label{Sec: graph_federated_optimization}
The topological structure describing the Non-IID relationship among clients' distributions is an undirected graph denoted as ${\cal G} = ({\cal V}, {\cal E})$, where ${\cal V}$ is a set of $N$ clients and ${\cal E}$ is a set of edges. The adjacency matrix of ${\cal G}$ is denoted as ${\boldsymbol A} \in \{0,1\}^{N \times N}$. Throughout this paper, nodes in graph ${\cal G}$ are referred to as clients. Furthermore, denote ${\boldsymbol \Xi}_{{\cal G}} = ({\boldsymbol X}, {\boldsymbol Y}) \in \mathbb{R}^{N \times (d+c)}$ as the data matrix, where ${\boldsymbol X} \in \mathbb{R}^{N \times d}$ is the feature matrix with the number of features $d$ and ${\boldsymbol Y} \in \mathbb{R}^{N \times c}$ is the label matrix with the dimension of label $c$. To formulate the GFL problem generally, consider ${\boldsymbol \Xi}_{{\cal G}}$ as a random matrix from a distribution ${\cal D}_{{\cal G}}$ which depends on ${\cal G}$, that is, ${\boldsymbol \Xi}_{{\cal G}} \sim {\cal D}_{{\cal G}}$. More specifically, we define the $k$-th row vector of ${\boldsymbol \Xi}_{{\cal G}}$ as ${\boldsymbol \xi}_k:=({\boldsymbol x}_k, {\boldsymbol y}_k)$, where ${\boldsymbol x}_k$ is the feature vector and ${\boldsymbol y}_k$ is the vector of labels of client $k$. Thus ${\boldsymbol \Xi}_{{\cal G}}$ is a random data matrix whose rows are correlated, and the relationships among the ${\boldsymbol \xi}_k$ are described by the graph ${\cal G}$.
Here we assume that graph structure ${\cal G}$ is deterministic. With these notations, the GFL problem is defined as:
\begin{equation}
\begin{aligned}
\label{GFL_problem}
\min_{{\boldsymbol W}} \mbox{ } \frac{1}{N} \sum_{k=1}^N
F_k({\boldsymbol W}), \text{where } F_k({\boldsymbol W}) := \mathbb{E}[f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})],
\end{aligned}
\end{equation}
and $f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ is the local loss after observing the data matrix ${\boldsymbol \Xi}_{{\cal G}}$, indicating that the local objective function of client $k$ depends not only on the data collected by the $k$-th client but also on the data from other clients. This is the key difference between GFL and conventional FL. As discussed in $\S$\ref{Introduction}, this crucial difference induces the data-sharing conflict. Therefore, we propose the following hidden representation sharing method to address this challenge in the GFL problem.
\subsection{Hidden Representation Sharing}
\label{Sec: hidden_representation_sharing}
Our proposal is to use hidden representations. The hidden representations are allowed to be shared across clients in the network ${\cal G}$, and a neighborhood aggregator is applied to the hidden representations of all nodes. For client $k$, define its hidden representation ${\boldsymbol h}_k$ as follows:
\begin{equation}
\label{hidden_representations}
\begin{aligned}
{\boldsymbol h}_k &= \Psi_{h}({\boldsymbol x}_k;{\boldsymbol W}_{h})
\end{aligned}
\end{equation}
where $\Psi_{h}(\cdot;{\boldsymbol W}_{h})$ is a hidden encoder such as the multi-layer perceptron (MLP) parametrized by ${\boldsymbol W}_{h}$. The hidden representation matrix is denoted as ${\boldsymbol H} \in \mathbb{R}^{N \times d_h}$ where the $k$-th row vector of ${\boldsymbol H}$ is ${\boldsymbol h}_k$ and $d_h$ is the dimension of the hidden representations. Graph representations are defined by neighborhood representation aggregation of ${\boldsymbol H}$:
\begin{equation}
\label{graph_representations}
\begin{aligned}
{\boldsymbol Z} &= \Psi_{{\cal G}} ({\boldsymbol H}; {\boldsymbol W}_{{\cal G}})
\end{aligned}
\end{equation}
where $\Psi_{{\cal G}}(\cdot;{\boldsymbol W}_{{\cal G}})$ is a neighborhood aggregator parametrized by ${\boldsymbol W}_{{\cal G}}$. ${\boldsymbol Z} \in \mathbb{R}^{N \times d_z}$ is the graph representation matrix whose $k$-th row vector is denoted as ${\boldsymbol z}_k$ and $d_z$ is the dimension of the graph representations. In most classification tasks, $d_z=c$. Model parameter ${\boldsymbol W} = ({\boldsymbol W}_{h}, {\boldsymbol W}_{{\cal G}})^{\top}$ is the concatenate of ${\boldsymbol W}_{h}$ and ${\boldsymbol W}_{{\cal G}}$. With these privacy-preserving representations, the loss function for the corresponding graph federated optimization can be expressed as,
$f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}}) := \ell({\boldsymbol y}_k, {\boldsymbol z}_k)$ where $\ell$ is the pre-specified loss function such as cross-entropy for classification task.
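A minimal sketch of the forward pass in Eqs.~\eqref{hidden_representations} and \eqref{graph_representations}; the one-layer ReLU encoder for $\Psi_h$ and the row-normalized one-hop aggregator for $\Psi_{{\cal G}}$ are illustrative choices, as are all shapes and initializations.

```python
import numpy as np

# Sketch of the forward pass: a shared encoder Psi_h followed by a
# neighborhood aggregator Psi_G.  The one-layer ReLU encoder and the
# row-normalized (A + I) aggregation are illustrative choices only.
rng = np.random.default_rng(0)

def encode(X, W_h):
    """Psi_h: hidden representation matrix H with rows h_k = relu(x_k W_h)."""
    return np.maximum(X @ W_h, 0.0)

def aggregate(H, A, W_G):
    """Psi_G: graph representations Z = D^{-1} (A + I) H W_G."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    deg_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return (deg_inv * A_hat) @ H @ W_G
```

A pre-specified loss such as cross-entropy is then applied row-wise to ${\boldsymbol Z}$ against the labels.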
\begin{remark}
The illustration in Figure \ref{fig: GFL_framework} shows that the hidden encoder is a global model which facilitates the involvement of FL, while the neighborhood aggregator is the personalized model which accounts for statistical heterogeneity. Intuitively,
$\Psi_{h}$ contributes to privacy protection and representation extraction. Meanwhile, $\Psi_{{\cal G}}$ serves as modeling the heterogeneity using graph ${\cal G}$.
Note that if we set $\Psi_{{\cal G}}$ as the identity mapping (ignore the graph information), our solution reduces to the conventional FL solution to learn a global model $\Psi_{h}$. In addition, when ${\cal G}$ does not fully capture the relationship across clients, ${\boldsymbol W}_{{\cal G}}$ serves as weights for adjusting the neighborhood aggregation level using ${\cal G}$. See Appendix \ref{Appendix: mechanism_GFL} for further discussion.
\end{remark}
\subsection{Gradient Estimation}
\label{Sec: gradient_estimation}
In practice, to solve the GFL problem by gradient-based methods, the unbiased stochastic gradient $\nabla f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ of client $k$ depends on data from all nodes in the network ${\cal G}$ ($\mathbb{E}[f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})] = F_k({\boldsymbol W})$). However, since FL restricts data sharing, $\nabla f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ is inaccessible, and another estimator of $\nabla F_k({\boldsymbol W})$ for local updates must be devised. In the proposed hidden representation sharing method, local information is exchanged as a function of $\{{\boldsymbol h}_j, \nabla {\boldsymbol h}_j\}_{j=1}^N$ during the interactions between clients and the central server. In other words, if client $k$ could access $\{{\boldsymbol h}_j, \nabla {\boldsymbol h}_j\}_{j=1}^N$, the unbiased estimator $\nabla f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ would be accessible. Formally, with the shared hidden representations, $\nabla f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ can be expressed as a function of the hidden representations: $\nabla f_k({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}}) = \phi_k ({\boldsymbol h}_1,...,{\boldsymbol h}_N)$.
Note that $\nabla {\boldsymbol h}_j$ is also a function of ${\boldsymbol h}_j$, and we consider the case that estimating $\nabla {\boldsymbol h}_j$ is completely based on an estimator of ${\boldsymbol h}_j$. Furthermore, define $\Hat{{\boldsymbol h}}_{j \rightarrow k}$ as the estimation of the hidden representation ${\boldsymbol h}_j$ for client $k$. Then the biased estimator of $\nabla F_k({\boldsymbol W})$ is,
\begin{equation}
\begin{aligned}
\label{gradient_estimator_single}
\nabla \Hat{f}_k ({\boldsymbol W}; {\boldsymbol \xi}_{k}) = \phi_k (\Hat{{\boldsymbol h}}_{1 \rightarrow k},...,\Hat{{\boldsymbol h}}_{N \rightarrow k}) \mbox{, }\forall k \in [N].
\end{aligned}
\end{equation}
The strategy to design estimator $\Hat{{\boldsymbol h}}_{j \rightarrow k}$ depends on the concrete scenario. In $\S$\ref{Scenario}, we provide a gradient compensation strategy with theoretical analysis in Appendix \ref{Appendix: analysis_gradient_compensation} and the empirical results of this biased estimation strategy are provided in $\S$\ref{Experiments}. In practice, the estimator of $\nabla F_k({\boldsymbol W})$ is the batch mean of biased stochastic gradients. Formally, suppose ${\cal B}_k : =\{{\boldsymbol \xi}_{k,s}\}_{s=1}^{|{\cal B}_k|}$ is the mini-batch with batch size $|{\cal B}_k|$ for some local update in client $k$. $\nabla \Hat{f}_k ({\boldsymbol W}; {\boldsymbol \xi}_{k,s})$ is the estimated gradient which depends on the example ${\boldsymbol \xi}_{k,s}$. The batch mean of biased stochastic gradients is defined as
$
\nabla \Hat{F}_k({\boldsymbol W})
:= \frac{1}{|{\cal B}_k|} \sum_{s \in {\cal B}_k}
\nabla \Hat{f}_k ({\boldsymbol W}; {\boldsymbol \xi}_{k,s})
$.
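As a concrete, deliberately simplified instance of the estimator in Eq.~\eqref{gradient_estimator_single}, the sketch below lets client $k$ reuse cached neighbor representations from the last communication round and refresh only its own row; the linear encoder, fixed aggregation weights, and squared loss are illustrative assumptions, not the method's specification.

```python
import numpy as np

# Hypothetical instance of the biased gradient estimator: client k reuses the
# cached neighbor representations received at the last sync (H_cache) and
# refreshes only its own row.  A linear encoder h_k = x_k W_h, fixed
# aggregation weights a_row, and a squared loss are illustrative choices.
def biased_local_grad(k, W_h, x_k, y_k, H_cache, a_row):
    """Gradient of 0.5 * ||z_k - y_k||^2 w.r.t. W_h, through h_k only."""
    H = H_cache.copy()
    H[k] = x_k @ W_h                  # only h_k uses the current local model
    z_k = a_row @ H                   # neighborhood aggregation for node k
    resid = z_k - y_k                 # d loss / d z_k
    # cached rows carry no gradient, so only the h_k path contributes
    return a_row[k] * np.outer(x_k, resid)
```

The bias of this estimator comes entirely from the staleness of the cached rows, matching the role of $\Hat{{\boldsymbol h}}_{j \rightarrow k}$ above.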
\textbf{Privacy in GFL.}
FL and GFL require the protection of node-level privacy: clients cannot share their collected data directly with other clients or with the central server. However, directly sharing $\{{\boldsymbol h}_j, \nabla {\boldsymbol h}_j\}_{j=1}^N$ raises the concern of raw data recovery by untrustworthy clients or the central server. Our proposed solution does not violate node-level privacy even though we allow sharing hidden representations and the corresponding gradients during the communication between clients and the central server. By using the personalized neighborhood aggregator, clients do not receive $\{{\boldsymbol h}_j, \nabla {\boldsymbol h}_j\}_{j=1}^N$ directly, making raw data recovery infeasible. In addition, the concern about an unreliable server can be addressed by applying the Differential Privacy (DP) method in GFL. Detailed discussion and experiments for DP are given in Appendix \ref{Appendix: privacy_GFL} and Appendix \ref{appendix: noisy_gradient}.
\subsection{Graph Federated Learning Procedure}
\label{Sec: GFL_procedure}
A framework of communications in GFL with hidden representation sharing is described in Figure \ref{fig: GFL_framework}. A concrete example is Algorithm \ref{alg: GFL-APPNP} introduced in $\S$ \ref{sec: GFL_APPNP}. The steps at each communication round are:
\vspace{-2mm}
\begin{itemize}[leftmargin=18pt, itemsep = -0.6pt]
\item[\textbf{(1)}] \textbf{Uploading Models:} Clients parallelly upload the latest models to the central server.
\item[\textbf{(2)}] \textbf{Centralizing Models:} Central server aggregates models by \texttt{FedAvg} and broadcasts the aggregated result.
\item[\textbf{(3)}] \textbf{Uploading Hidden Representations:} Clients compute estimated hidden representations and their gradients using the aggregated model received in step (2) and then parallelly upload them to the central server.
\item[\textbf{(4)}] \textbf{Broadcasting Hidden Representations:} The central server aggregates the estimated hidden representations and their gradients for each client and broadcasts the aggregated ones to the clients.
\item[\textbf{(5)}] \textbf{Local Updates:} Clients parallelly perform local updates for $I$ times.
\end{itemize}
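The round described above can be mocked end-to-end in a toy, runnable form; the scalar models, one-dimensional representations, and local-update rule in the \texttt{Client} class below are illustrative stand-ins for the real system, not the actual algorithm.

```python
import numpy as np

# Toy runnable mock of one GFL communication round (steps (1)-(5)).  Models
# are scalars, hidden representations are w * x, and the local update nudges
# w toward a private target; all of this is an illustrative stand-in.
class Client:
    def __init__(self, w, x, target):
        self.w, self.x, self.target = w, x, target
        self.cache = None              # aggregated neighbor reps from the server

    def hidden_rep(self):
        return self.w * self.x

    def local_update(self, eta=0.1):
        self.w -= eta * (self.w - self.target)

def gfl_round(clients, I=3):
    W_bar = np.mean([c.w for c in clients])      # (1)+(2) FedAvg and broadcast
    for c in clients:
        c.w = W_bar
    reps = [c.hidden_rep() for c in clients]     # (3) upload hidden reps
    for c in clients:
        c.cache = np.mean(reps)                  # (4) broadcast aggregated reps
    for c in clients:
        for _ in range(I):
            c.local_update()                     # (5) I local updates
    return W_bar
```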
\section{Theoretical Analysis}\label{Theory}
\begin{figure}[t!]
\centering
\includegraphics[scale = 0.2]{figures&tables/test_fig.jpeg}
\vspace{-1mm}
\caption{(a) the markers represent accuracy for graphs with different connectivity measured by $\lambda_{\max} ({\boldsymbol B}_N {\boldsymbol L}^{\dagger})$ discussed in Remark \ref{remark: theorem}.
(b) box plot of average accuracy on $20$ synthetic graphs over our methods and baseline models.
More details are in Appendix \ref{Appendix: additional_experiments}.
}
\label{fig: graph and connectivity}
\vspace{-5mm}
\end{figure}
\subsection{Assumptions}
\label{sec:assumptions}
\begin{assumption}
\label{assumption: smoothness}
(Smoothness) Local loss function $F_k$ is differentiable and assumed to be smooth with constant $\rho_f$, $\forall k \in [N]$. Formally, $\forall {\boldsymbol W}, {\boldsymbol W}^{\prime}$, $\exists \rho_{f} > 0$ such that
$
\norm{\nabla F_k({\boldsymbol W}) - \nabla F_k({\boldsymbol W}^{\prime})}
\leq
\rho_{f} \norm{{\boldsymbol W} - {\boldsymbol W}^{\prime}}
$.
\end{assumption}
\begin{assumption}
\label{assumption: bound_hidden_representation_estimation}
(Bound for Hidden Representation Estimation) Hidden Representation ${\boldsymbol h}_j$ of client $j$ is estimated by $\Hat{{\boldsymbol h}}_{j \rightarrow k}$ for local updates at client $k$. The mean squared error of estimation is bounded in the following sense: $\forall j,k \in [N]$, $\exists \sigma_j^2 > 0$ and $\sigma_H^2 : = \sum_{j=1}^N \sigma_j^2$ such that,
$
\mathbb{E}[\norm{\Hat{{\boldsymbol h}}_{j\rightarrow k} - {\boldsymbol h}_j }^2] \leq \sigma_j^2
$.
\end{assumption}
\begin{assumption}
\label{assumption: graph_smoothing}
(Graph Smoothing on Gradients)
Graph ${\cal G} = ({\cal V}, {\cal E})$ is a connected graph and $\exists \kappa^2 > 0$ such that $\forall {\boldsymbol W}$,
$
\sum_{(i,j)\in {\cal E}} \norm{\nabla F_i({\boldsymbol W}) - \nabla F_j({\boldsymbol W})}^2 \leq \kappa^2
$.
\end{assumption}
\begin{assumption}
\label{assumption: bound_stochastic_gradient}
(Bounds for Stochastic Gradient)\\
(i) (Bounded Variance) Variance of unbiased stochastic gradient $\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ is bounded. Formally, $\exists \sigma_{{\cal G}}^2 >0$ such that $\forall {\boldsymbol W}$,
\begin{equation}
\begin{aligned}
\sum_{k=1}^N \mathbb{E} [\norm{\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}}) - \nabla F_k({\boldsymbol W})}^2]
\leq \sigma_{{\cal G}}^2.
\end{aligned}
\end{equation}
(ii) (Smoothness) Denote $\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}}) = \phi_k ({\boldsymbol h}_1,...,{\boldsymbol h}_N)$. Assume for any $k \in [N]$, $\phi_k$ satisfies that $\forall {\boldsymbol h}_i, {\boldsymbol h}_i^{\prime}$ and $i \in [N]$, $\exists \rho_{\phi} > 0$ such that,
\begin{equation}
\begin{aligned}
\norm{\phi_k ({\boldsymbol h}_1,...,{\boldsymbol h}_N) - \phi_k ({\boldsymbol h}_1^{\prime},...,{\boldsymbol h}_N^{\prime})}
\leq
\rho_{\phi} \big(\sum_{i=1}^N \norm{{\boldsymbol h}_i - {\boldsymbol h}_i^{\prime}}\big)^{1/2}.
\end{aligned}
\end{equation}
\end{assumption}
\textbf{Interpretation of Assumptions.}
(i) Assumption \ref{assumption: smoothness} is commonly assumed in the literature on nonconvex optimization and FL.
(ii) Assumption \ref{assumption: bound_hidden_representation_estimation} bounds the quality of the hidden-representation estimation: $\sigma_j^2$ represents the estimation error of $\Hat{{\boldsymbol h}}_{j \rightarrow k}$ and $\sigma_H^2$ quantifies the total estimation error.
(iii) $\kappa^2$ in Assumption \ref{assumption: graph_smoothing} quantifies the statistical heterogeneity among clients through the network structure that captures the relationship among clients' distributions. A previous work \citep{fallah2020personalized} shows that there is a connection between the data distribution shift among clients and the gradient shift among clients. Lemma \ref{appendix: gradient_graph_smoothing} provides an upper bound for the level of statistical heterogeneity based on $\kappa^2$ and the connectivity of ${\cal G}$.
(iv) Assumption \ref{assumption: bound_stochastic_gradient} ensures that $\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ satisfies two properties. First, it has a bounded variance ($\sigma_{{\cal G}}^2$), which is commonly assumed in previous works. Second, $\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})$ is smooth as a function of the hidden representations, with smoothness quantified by $\rho_{\phi}$.
\subsection{Convergence Analysis}
\label{sec: convergence}
\begin{theorem}
\label{theorem: nonconvex_main}
Consider the GFL optimization problem \eqref{GFL_problem} under Assumptions \ref{assumption: smoothness}, \ref{assumption: bound_hidden_representation_estimation}, \ref{assumption: graph_smoothing} and \ref{assumption: bound_stochastic_gradient}, and use the federated learning procedure described in $\S$ \ref{Problem}. Suppose $\eta$ and $I$ satisfy $I \eta^2 < 1/(13 \rho_f^2)$; then for all $T \geq 1$, we have
\begin{equation}
\begin{aligned}
\frac{1}{T}
\sum_{t=0}^{T-1} \mathbb{E}[\norm{\nabla F(\Bar{{\boldsymbol W}}^t)}^2]
\leq
{\cal O}(\frac{1}{\eta T}) +
{\cal O}(\frac{I^2 \eta^2 \sigma_{{\cal G}}^2 }{N}) +
{\cal O}(I^2 \eta^2 \sigma_H^2)
+
{\cal O}(\frac{I^2 \eta^2 \kappa^2 \lambda_{\max} ({\boldsymbol B}_N {\boldsymbol L}^{\dagger})}{N})
\end{aligned}
\end{equation}
where ${\boldsymbol B}_N := \frac{1}{N} {\boldsymbol I} - \frac{1}{N^2}{\boldsymbol 1} {\boldsymbol 1}^{\top}$ and ${\boldsymbol L}$ is the Laplacian matrix of ${\cal G}$.
\end{theorem}
\begin{proof}
See Appendix \ref{appendix : proof_nonconvex_main}.
\end{proof}
\begin{corollary}
\label{corollary: rate}
Under the setting of Theorem \ref{theorem: nonconvex_main}, suppose the learning rate is chosen as $\eta = \frac{\sqrt{N}}{\sqrt{T}}$; then, omitting the smoothness constants $\rho_f$ and $\rho_{\phi}$, we have
\begin{equation}
\begin{aligned}
\frac{1}{T}
\sum_{t=0}^{T-1} \mathbb{E}[\norm{\nabla F(\Bar{{\boldsymbol W}}^t)}^2]
=
{\cal O}(\frac{1}{\sqrt{N T}}) + {\cal O}(\frac{N I^2}{T}) + {\cal O}(\frac{\lambda_{\max} ({\boldsymbol B}_N {\boldsymbol L}^{\dagger}) I^2}{T})
\end{aligned}
\end{equation}
\end{corollary}
\begin{remark}
\label{remark: theorem} According to Theorem \ref{theorem: nonconvex_main} and Corollary \ref{corollary: rate}, when $\sigma_H^2=0$, that is, when the node-level privacy issue is ignored and unbiased stochastic gradients are available, our convergence result matches the rate of previous works \citep{jiang2018linear, yu2019parallel, stich2018local}, with the gradient deviation among clients described by a graph structure.
Regarding the effect of the graph structure: $\lambda_{\max} ({\boldsymbol B}_N {\boldsymbol L}^{\dagger})$ indicates the connectivity of the graph ${\cal G}$, normalized by the averaged aggregation ${\boldsymbol B}_N$ (a large $\lambda_{\max} ({\boldsymbol B}_N {\boldsymbol L}^{\dagger})$ means poor connectivity and a high level of statistical heterogeneity), so we can expect a graph with good connectivity to ensure better performance, as shown in Figure \ref{fig: graph and connectivity}. This observation matches the intuition that a smaller level of statistical heterogeneity in FL secures better performance. Moreover, Corollary \ref{corollary: rate} keeps the linear speedup (w.r.t. the number of clients) when $I = 1$ \citep{yu2019linear}.
\end{remark}
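To make the connectivity term in the bound concrete, the following sketch (an illustration only, assuming NumPy; the graphs and the size $N=20$ are chosen for demonstration and are not from our experiments) computes $\lambda_{\max}({\boldsymbol B}_N {\boldsymbol L}^{\dagger})$ for a poorly connected path graph and a well-connected complete graph:

```python
import numpy as np

def lambda_max_connectivity(A):
    """lambda_max(B_N L^+): the graph-dependent factor in the convergence bound."""
    N = A.shape[0]
    L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian L = D - A
    L_pinv = np.linalg.pinv(L)                     # Moore-Penrose pseudoinverse L^+
    B_N = np.eye(N) / N - np.ones((N, N)) / N**2   # B_N = I/N - 1 1^T / N^2
    return np.linalg.eigvals(B_N @ L_pinv).real.max()

N = 20
path = np.zeros((N, N))
idx = np.arange(N - 1)
path[idx, idx + 1] = path[idx + 1, idx] = 1.0      # path graph: poor connectivity
complete = np.ones((N, N)) - np.eye(N)             # complete graph: best connectivity

print(lambda_max_connectivity(path))               # large value -> looser bound
print(lambda_max_connectivity(complete))           # small value -> tighter bound
```

On these examples the path graph yields a much larger value than the complete graph, matching the remark that poor connectivity (high statistical heterogeneity) loosens the bound.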
\section{\texttt{GFL-APPNP} for Classification}\label{Scenario}
\input{algorithms/GFL-APPNP_practice}
\subsection{GFL for Classification Tasks on Graphs}
\label{sec: clissification_tasks_graph}
\textbf{Deterministic Node Classification (DNC).} Graph-based semi-supervised node classification is the most popular classification task on graphs. In this paper, we call it deterministic node classification since ${\boldsymbol \xi}_k$ for each node is deterministic, with one feature vector and one label. This task can be formulated as the GFL problem in $\S$ \ref{Problem} by assuming ${\boldsymbol \xi}_k$ follows a degenerate distribution.
\textbf{Stochastic Node Classification (SNC).} An extension of deterministic node classification is the setting where the local distribution at each node is not degenerate, namely stochastic node classification. This task is a semi-supervised node classification that classifies the nodes by learning from local distributions. Similarly, it can be formulated as the GFL problem in $\S$ \ref{Problem} by assuming the randomness of ${\boldsymbol \Xi}_{{\cal G}}$ comes only from ${\boldsymbol X}$. An important real-world application of this task is user demographic label prediction in social networks.
\textbf{Supervised Classification (SC).} Consider the supervised learning task on clients, which is another classification task on graphs assuming that the label of a client also follows a distribution; we call it supervised classification. The objective of this task is to classify the feature vectors in all clients. It assumes the randomness of ${\boldsymbol \Xi}_{{\cal G}}$ comes from both ${\boldsymbol X}$ and ${\boldsymbol Y}$, so that each client might have examples with different labels. One practical application is patient classification in hospitals with insufficient medical records.
More details about the classification tasks are provided in Appendix \ref{Appendix: experiment_task}. Moreover, our GFL setting introduced in $\S$ \ref{Problem} covers not only standard supervised learning but can also easily be extended to semi-supervised client classification such as DNC and SNC.
\subsection{\texttt{GFL-APPNP} Algorithm}
\label{sec: GFL_APPNP}
Approximate Personalized Propagation of Neural Predictions (\texttt{APPNP}) \citep{klicpera2018predict} is one of the state-of-the-art GNN models. With the notation and context of Section \ref{Problem}, \texttt{APPNP} has the hidden encoder $\Psi_h$ and neighborhood aggregator $\Psi_{{\cal G}}$ defined as follows,
\begin{subequations}
\label{APPNP}
\begin{align}
{\boldsymbol h}_k &= \Psi_h ({\boldsymbol x}_k;{\boldsymbol W})= {\mbox{MLP}}({\boldsymbol x}_k;{\boldsymbol W}), \\
{\boldsymbol Z} &= \sum_{i=0}^M (1 - \alpha I\{i\neq M\}) \alpha^i (\Hat{{\boldsymbol D}}^{-1/2} {\boldsymbol{\Hat{A}} } \Hat{{\boldsymbol D}}^{-1/2})^i {\boldsymbol H}
=
{\boldsymbol{\Tilde{A}} } {\boldsymbol H},
\end{align}
\end{subequations}
where $\alpha$ is the teleport probability \citep{klicpera2018predict} and $M$ is the total number of steps for personalized propagation. ${\boldsymbol{\Hat{A}} }$ is the adjacency matrix with self-loops, and $\Hat{{\boldsymbol D}}$ is the degree matrix with self-loops. ${\boldsymbol{\Tilde{A}} }$ is defined as ${\boldsymbol{\Tilde{A}} }:= \sum_{i=0}^M (1 - \alpha I\{i\neq M\}) \alpha^i (\Hat{{\boldsymbol D}}^{-1/2} {\boldsymbol{\Hat{A}} } \Hat{{\boldsymbol D}}^{-1/2})^i$. It can be interpreted as the ``adjacency matrix'' after an $M$-step random walk, which expresses the reachability between two nodes after the propagations. The loss function $\ell$ in \texttt{APPNP} is the cross-entropy loss. In \texttt{APPNP}, ${\boldsymbol W}_h$ discussed in Eq.~\eqref{hidden_representations} refers to ${\boldsymbol W}$ since the neighborhood aggregator in \texttt{APPNP} is not parametrized. Note that the original \texttt{APPNP} was proposed to solve the deterministic node classification task. Denote the predicted one-hot vectors by $\Hat{{\boldsymbol Y}} = {\mbox{Softmax}}({\boldsymbol Z})$, where ${\mbox{Softmax}}(\cdot)$ is a row-wise softmax function. The gradients can be expressed explicitly as follows,
\begin{equation}
\begin{aligned}
\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})
= ({\boldsymbol y}_k - \Hat{{\boldsymbol y}}_k) \sum_{i=1}^N {\boldsymbol{\Tilde{A}} }_{ki} \nabla {\boldsymbol h}_i,
\end{aligned}
\end{equation}
where ${\boldsymbol y}_k$ is the one-hot vector for the true label of client $k$ and $\Hat{{\boldsymbol y}}_k$ is the predicted probability vector for the label of client $k$. ${\boldsymbol{\Tilde{A}} }_{ki}$ is the $(k,i)$ element of the matrix ${\boldsymbol{\Tilde{A}} }$. Note that hidden representation sharing contributes to two parts of the gradient of the local loss $f_k$: one is $\Hat{{\boldsymbol y}}_k$ and the other is $\{\nabla {\boldsymbol h}_i\}_{i \neq k}$. A useful property of using personalized propagation as the neighborhood aggregator is its linearity, which means
\begin{subequations}
\begin{align}
\Hat{{\boldsymbol y}}_k &
= {\mbox{Softmax}}({\boldsymbol{\Tilde{A}} }_{kk} {\boldsymbol h}_k + {\boldsymbol C}_k), \\
\nabla f_k ({\boldsymbol W}; {\boldsymbol \Xi}_{{\cal G}})
& =
({\boldsymbol y}_k - \Hat{{\boldsymbol y}}_k) ({\boldsymbol{\Tilde{A}} }_{kk} \nabla {\boldsymbol h}_k + \nabla {\boldsymbol C}_k),
\end{align}
\end{subequations}
where ${\boldsymbol C}_k := \sum_{i \neq k} {\boldsymbol{\Tilde{A}} }_{ki} {\boldsymbol h}_i$ and $\nabla {\boldsymbol C}_k:= \sum_{i \neq k} {\boldsymbol{\Tilde{A}} }_{ki} \nabla {\boldsymbol h}_i$. Clearly, ${\boldsymbol C}_k$ and its gradient $\nabla {\boldsymbol C}_k$ are aggregated information for client $k$. Therefore, in practice, the central server only needs to broadcast ${\boldsymbol C}_k$ and its gradient to client $k$ in the communication round, which provides private communication since the hidden representations are not shared directly.
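For intuition, the propagation matrix above can be checked numerically. The sketch below (a toy $4$-cycle graph, assuming NumPy; not part of the paper's implementation) builds ${\boldsymbol{\Tilde{A}}}$ from the sum exactly as written in Eq.~\eqref{APPNP} and verifies that it agrees with the equivalent fixed-point iteration ${\boldsymbol Z}^{(i+1)} = \alpha \hat{{\boldsymbol A}}_{\mathrm{sym}} {\boldsymbol Z}^{(i)} + (1-\alpha) {\boldsymbol H}$, ${\boldsymbol Z}^{(0)} = {\boldsymbol H}$, and that the aggregate ${\boldsymbol C}_k$ reproduces client $k$'s output row:

```python
import numpy as np

def appnp_propagation_matrix(A, alpha, M):
    """A_tilde = sum_{i=0}^M (1 - alpha*1{i != M}) alpha^i (D^-1/2 (A+I) D^-1/2)^i."""
    N = A.shape[0]
    A_hat = A + np.eye(N)                       # adjacency with self-loops
    d = A_hat.sum(axis=1)
    A_sym = A_hat / np.sqrt(np.outer(d, d))     # D^-1/2 A_hat D^-1/2
    A_tilde = np.zeros((N, N))
    P = np.eye(N)                               # holds A_sym^i, starting at i = 0
    for i in range(M + 1):
        coeff = alpha**i if i == M else (1 - alpha) * alpha**i
        A_tilde += coeff * P
        P = P @ A_sym
    return A_tilde, A_sym

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)    # toy 4-cycle
H = rng.standard_normal((4, 2))                            # hidden representations
alpha, M = 0.1, 10
A_tilde, A_sym = appnp_propagation_matrix(A, alpha, M)

# Equivalent fixed-point iteration: Z <- alpha * A_sym @ Z + (1 - alpha) * H
Z = H.copy()
for _ in range(M):
    Z = alpha * A_sym @ Z + (1 - alpha) * H
assert np.allclose(A_tilde @ H, Z)

# Aggregated message C_k = sum_{i != k} A_tilde[k, i] h_i (what the server sends)
k = 0
C_k = A_tilde[k] @ H - A_tilde[k, k] * H[k]
assert np.allclose(Z[k], A_tilde[k, k] * H[k] + C_k)
```

The last check mirrors the linearity property: client $k$ only needs ${\boldsymbol C}_k$ (and its gradient), never the individual hidden representations of the other clients.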
We propose the \texttt{GFL-APPNP} algorithm for the GFL problem on classification tasks, using hidden representation sharing. In addition, we use the latest aggregated model to compute hidden representations at each communication round as our gradient compensation strategy. As a concrete example, suppose client $j$ has $n_j$ local feature vectors $\{{\boldsymbol x}_{j,s}\}_{s=1}^{n_j}$, and let $t_0 < t$ be the largest multiple of $I$; then
\begin{equation}
\label{gradient_compensation}
\begin{aligned}
\Hat{{\boldsymbol h}}^t_{j \rightarrow k}
=
\begin{cases}
\Psi_{h} ({\boldsymbol x}_{k}; {\boldsymbol W}^t_{h,k}) & j=k\\
\frac{1}{n_j} \sum_{s=1}^{n_j} \Psi_{h}({\boldsymbol x}_{j,s};\Bar{{\boldsymbol W}}^{t_0}_h)
& j \neq k
\end{cases},
\end{aligned}
\end{equation}
where $\Hat{{\boldsymbol h}}^t_{j \rightarrow k}$ is the estimate of ${\boldsymbol h}_{j \rightarrow k}$ at time $t$. Our compensation strategy satisfies the guarantee discussed in Assumption \ref{assumption: bound_hidden_representation_estimation} under additional assumptions; see Appendix \ref{Appendix: analysis_gradient_compensation} for a detailed discussion of gradient compensation. The \texttt{GFL-APPNP} procedure is summarized in Algorithm \ref{alg: GFL-APPNP}. Our proposed \texttt{GFL-APPNP} is an FL version of \texttt{APPNP}, which fulfills FL for a GNN model. It is noteworthy that \texttt{GFL-APPNP} with $I=1$ is \textit{equivalent} to FL for the vanilla \texttt{APPNP} on deterministic node classification tasks.
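The compensation rule above can be sketched as follows (illustrative only: the encoder \texttt{psi}, the dictionary layout, and the shapes are our assumptions for the example, not the paper's implementation; $t_0$ is taken as the largest multiple of $I$ not exceeding $t$):

```python
import numpy as np

def estimate_hidden(k, t, I, local_feats, W_local, W_bar_hist, psi):
    """Sketch of the gradient compensation: a fresh encoding for client k itself,
    stale encodings (from the aggregated model at the last sync t0) for j != k."""
    t0 = (t // I) * I                           # last synchronization round t0 <= t
    rows = []
    for j, X_j in enumerate(local_feats):       # X_j: (n_j, d) features of client j
        W = W_local[k] if j == k else W_bar_hist[t0]
        rows.append(psi(X_j, W).mean(axis=0))   # average over the n_j local examples
    return np.stack(rows)

psi = lambda X, W: np.tanh(X @ W)               # stand-in hidden encoder Psi_h
rng = np.random.default_rng(1)
feats = [rng.standard_normal((3, 4)) for _ in range(5)]   # 5 clients, n_j = 3 each
W_loc = {2: rng.standard_normal((4, 2))}                  # client 2's current weights
W_hist = {0: rng.standard_normal((4, 2)),                 # aggregated models at syncs
          10: rng.standard_normal((4, 2))}
H_hat = estimate_hidden(k=2, t=13, I=10, local_feats=feats,
                        W_local=W_loc, W_bar_hist=W_hist, psi=psi)
assert H_hat.shape == (5, 2)
```

Only client $k$'s own row is computed with its current local model; every other row is frozen at the last aggregation, which is exactly the staleness the analysis in Assumption \ref{assumption: bound_hidden_representation_estimation} has to control.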
\section{Experiments}\label{Experiments}
\input{figures&tables/table_DNC}
\begin{figure*}[t!]
\centering
\includegraphics[scale = 0.2]{figures&tables/grid_main_result.jpeg}
\caption{Train and validation loss for DNC (first column), subCora (second column), SNC (third column), and SC (fourth column). For different lines, the number of points differs given the same number of updates
(e.g., if $T=3000$ and $I=10$, the line contains $3000/10 = 300$ points).
The shaded areas represent $95\%$ CIs. See Appendix \ref{appendix: experiments} for more variants.}
\label{fig: main_figure}
\end{figure*}
\subsection{Deterministic Node Classification}
\label{experiment: deterministic_node_classification}
We compare the proposed \texttt{GFL-APPNP} to baseline models including \texttt{GCN}, \texttt{GAT}, and \texttt{GraphSAGE} under the DNC setting described in $\S$\ref{sec: clissification_tasks_graph}.
For synthetic data, we use contextual Stochastic Block Models (cSBMs) \citep{deshpande2018contextual} to generate synthetic graphs with two approximately equal-size classes. For real-world data, we use subgraphs of Cora \cite{sen2008collective}, namely subCora, due to limited computational resources. The details of the generation of synthetic graphs by cSBMs and of the subCora graphs are provided in Appendix \ref{Appendix: data_generation}.
For the proposed \texttt{GFL-APPNP}, we use a two-layer \texttt{MLP} with $64$ hidden units. $\alpha$ is chosen to be $0.1$ and the total number of steps for personalized propagation $M$ is set to $10$, following the same configuration as in \citep{chien2020adaptive} for the \texttt{APPNP} model. $I$ is set to $\{1, 10, 20, 50\}$. SGD is applied as our optimizer with a tuned learning rate. The baseline models \texttt{GCN}, \texttt{GAT}, and \texttt{GraphSAGE} follow the well-optimized hyperparameter designs from \citep{kipf2016semi, velivckovic2017graph, hamilton2017inductive}. The details for all models are provided in Appendix \ref{Appendix: model_description}.
Table \ref{table: DNC_results} and the first two columns of Figure \ref{fig: main_figure} show that our method, for the different values of $I$, matches the performance of the vanilla \texttt{APPNP} on both synthetic graphs and subCora graphs. Our method also rivals the baseline models, as Table \ref{table: DNC_results} shows.
\subsection{Stochastic Node Classification}
\label{experiment: stochastic_node_classification}
We also conduct experiments under the SNC setting described in $\S$\ref{sec: clissification_tasks_graph} to test the robustness of \texttt{GFL-APPNP}. Existing graph machine learning models are not designed for the SNC task; therefore, our experiments focus on the proposed \texttt{GFL-APPNP}.
The original cSBMs cannot be used to generate data for this task since they are not designed to generate graphs whose nodes have multiple features. We therefore modify the original cSBMs to generate a distribution for each client; details about the modifications for the SNC task are available in Appendix \ref{Appendix: data_generation}. As in $\S$\ref{experiment: deterministic_node_classification}, we use the same hyperparameters and the same number of updates. The third column of Figure \ref{fig: main_figure} and the additional Table \ref{table: SNC_results} provided in Appendix \ref{Appendix: additional_experiments} show that our method is valid for solving the SNC task.
\subsection{Supervised Classification }
\label{experiment: supervised_classification}
We compare \texttt{GFL-APPNP}, \texttt{MLPs}, and \texttt{FedMLP} under the SC setting described in $\S$\ref{sec: clissification_tasks_graph}. As for the SNC task, we modify the original cSBMs to generate a distribution for each client; details about the modifications for the SC task are given in Appendix \ref{Appendix: data_generation} as well. Both baseline models \texttt{MLPs} and \texttt{FedMLP} share the same model structure, a two-layer \texttt{MLP} with $64$ hidden units. As in $\S$\ref{experiment: deterministic_node_classification}, for the proposed \texttt{GFL-APPNP} we use the same hyperparameters and the same number of updates. More details and results are provided in Appendix \ref{appendix: experiments}.
The last column of Figure \ref{fig: main_figure} and Table \ref{table: table_all} provided in Appendix \ref{Appendix: additional_experiments} show that our method is valid for solving the SC task, and it demonstrates the necessity of the graph structure in the GFL problem, as shown in Figure \ref{fig: graph and connectivity}.
\section{Conclusion}\label{Conclusion}
In this paper, we formulate Graph Federated Learning on multi-client systems.
To tackle the fundamental data sharing conflict between LoG and FL, we propose an FL solution with hidden representation sharing.
In theory, we provide a non-convex convergence analysis.
Empirically, by experimenting with several classification tasks on graphs, we validate the proposed method on both real-world and synthetic data. Our experimental results show that the proposed method provides an FL solution for GNNs and works for different practical tasks on graphs with a competitive performance that matches our theory.
\bibliographystyle{unsrtnat}
\section{Introduction}
A word $w$ of \emph{length} $|w| = r$ over an alphabet $\mathbb{A}$ is a
sequence $w_1\dots w_r$ of $r$ letters, i.e. $r$ elements of $\mathbb{A}$.
A \textit{prefix} of a word $w = w_1\dots w_r$
is a word $p = w_1\dots w_s$, for some $s \le r$.
A \emph{repetition} in a word $w$ is a pair of words $p$ (called the
\emph{period}) and $e$ (called the \emph{excess}) such that $pe$ is a
factor of $w$, $p$ is non-empty, and $e$ is a prefix of $pe$.
The \emph{exponent} of a repetition $pe$ is $\exp(pe) = \tfrac{|pe|}{|p|}$. A
\emph{$\beta$-repetition} is a repetition of exponent $\beta$. A word is
\emph{$\alpha^+$-free} (resp. \textit{$\alpha$-free}) if it
contains no $\beta$-repetition such that $\beta>\alpha$ (resp. $\beta\ge\alpha$).
\bigskip
Given $k \ge 2$, Dejean~\cite{Dej72} defined the repetition threshold
$\RT(k)$ for $k$ letters as the smallest $\alpha$ such that there
exists an infinite $\alpha^+$-free word over a $k$-letter alphabet.
Dejean initiated the study of $\RT(k)$ in 1972 for $k=2$ and $k=3$.
Her work was followed by a series of papers which determine the exact
value of $\RT(k)$ for any $k\ge 2$.
\begin{theorem}[\cite{Car07,CurRam09b,CurRam09,CurRam11,Dej72,MohCur07,Oll92,Pan84,Rao11}]
\label{thm:path}
~
\begin{enumerate}[$(i)$]
\item $\RT(2) = 2$~\cite{Dej72}; \label{thm:path_2}
\item $\RT(3) = \tfrac{7}{4}$~\cite{Dej72};
\item $\RT(4) = \tfrac{7}{5}$~\cite{Pan84};
\item $\RT(k) = \tfrac{k}{k-1}$, for $k \ge 5$~\cite{Car07,CurRam09b,CurRam09,CurRam11,MohCur07,Oll92,Pan84,Rao11}.
\end{enumerate}
\end{theorem}
\bigskip
The notions of $\alpha$-free word and $\alpha^+$-free word have been
generalized to graphs. A graph $G$ is determined by a set
of vertices $V(G)$ and a set of edges $E(G)$. A mapping
$c \ : \ V(G) \to \set{1,\dots,k}$ is a \textit{$k$-coloring} of $G$.
A sequence of colors on a non-intersecting path in a $k$-colored graph
$G$ is called a \textit{factor}. A coloring is said to be $\alpha^+$-free
(resp. $\alpha$-free) if
every factor is $\alpha^+$-free (resp. $\alpha$-free).
The notion of repetition threshold
can be generalized to graphs as follows.
Given a graph $G$ and $k$ colors,
$$
\RT(k,G)=\inf_{k\textrm{-coloring}\ c}\sup\acc{\exp(w)\,|\, w\ \textrm{is~a~factor~in}\ c}\,.
$$
When considering the repetition threshold over a whole class of graphs $\mathcal{G}$, it is defined as
$$
\RT(k,\mathcal{G})=\sup_{G\in\mathcal{G}}\RT(k,G)\,.
$$
In the remainder of this paper, $\mathcal{P}$, $\mathcal{C}$,
$\mathcal{S}$, $\mathcal{T}$, $\mathcal{T}_k$, $\mathcal{CP}$, and
$\mathcal{CP}_k$ respectively denote the classes of paths, cycles,
subdivisions\footnote{A subdivision of a graph $G$
is a graph obtained from $G$ by a sequence of edge subdivisions. Note
that by a graph subdivision, a ``large enough''
subdivision is always meant.}, trees, trees of maximum degree $k$, caterpillars and
caterpillars of maximum degree $k$.
Since $\alpha^+$-free words are closed under reversal, we clearly have
$\RT(k,\mathcal{P}) = \RT(k)$, and thus Theorem~\ref{thm:path}
completely determines $\RT(k,\mathcal{P})$.
In 2004, Aberkane and Currie~\cite{AbeCur04} initiated the study of
the repetition threshold of cycles for $2$ letters. Another result of
Currie~\cite{Cur02} on ternary circular square-free words allows us to
determine the repetition threshold of cycles for $3$ letters. In 2012,
Gorbunova~\cite{Gor12} determined the repetition threshold of cycles for $k\ge 6$ letters.
\begin{theorem}[\cite{AbeCur04,Cur02, Gor12}]
\label{thm:cycle}
~
\begin{itemize}
\item[$(i)$] $\RT(2,\mathcal{C}) = \tfrac{5}{2}$~\cite{AbeCur04};
\item[$(ii)$] $\RT(3,\mathcal{C}) = 2$~\cite{Cur02};
\item [$(iii)$] $\RT(k,\mathcal{C}) = 1+\tfrac1{\ceil{\nicefrac{k}{2}}}$, for $k\ge 6$~\cite{Gor12}.
\end{itemize}
\end{theorem}
Gorbunova~\cite{Gor12} also conjectured that $\RT(4,\mathcal{C}) = \tfrac32$ and $\RT(5,\mathcal{C}) = \tfrac 43$.
For the classes of graph subdivisions and trees, the bounds are
completely determined~\cite{OchVas12}.
\begin{theorem}[\cite{OchVas12}]
\label{thm:sub}
~
\begin{itemize}
\item[$(i)$] $\RT(2,\mathcal{S}) = \tfrac{7}{3}$;
\item[$(ii)$] $\RT(3,\mathcal{S}) = \tfrac{7}{4}$;
\item[$(iii)$] $\RT(k,\mathcal{S}) = \tfrac{3}{2}$, for $k \ge 4$.
\end{itemize}
\end{theorem}
\newpage
\begin{theorem}[\cite{OchVas12}]
\label{thm:tree}
~
\begin{itemize}
\item[$(i)$] $\RT(2,\mathcal{T}) = \tfrac{7}{2}$;
\item[$(ii)$] $\RT(3,\mathcal{T}) = 3$;
\item[$(iii)$] $\RT(k,\mathcal{T}) = \tfrac{3}{2}$, for $k \ge 4$.
\end{itemize}
\end{theorem}
In this paper, we continue the study of repetition thresholds of trees
under additional assumptions. In particular, we completely determine
the repetition thresholds for caterpillars of maximum degree 3
(\Cref{thm:cp2,thm:cp3,thm:cp4,th:cp3_k}) and for caterpillars of
unbounded maximum degree (\Cref{thm:cp2,thm:cp3}) for every alphabet
of size $k\ge 2$. We determine the repetition thresholds for trees of
maximum degree 3 for every alphabet of size $k\in\{4,5\}$
(\Cref{th:tree3_5}). We finally give a lower and an upper bound on the
repetition threshold for trees of maximum degree 3 for every alphabet
of size $k\ge 6$ (\Cref{th:tree}). We summarize the results in
Table~\ref{tbl:sum} (shaded cells correspond to our results).
\begin{table}[htp]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& $ |\mathbb{A}| = 2 $ & $|\mathbb{A}| = 3$ & $|\mathbb{A}| = 4$ & $|\mathbb{A}| = 5$ & $|\mathbb{A}| = k$, $k \ge 6$ \\
\hline
$\mathcal{P}$ & $2$ & $\nicefrac{7}{4}$ & $\nicefrac{7}{5}$ & $\nicefrac{5}{4}$ & $\nicefrac{k}{k-1}$ \\
\hline
$\mathcal{C}$ & $\nicefrac{5}{2}$ & $2$ & $?$ & $?$ & $1+\tfrac1{\ceil{\nicefrac{k}{2}}}$ \\
\hline
$\mathcal{S}$ & $\nicefrac{7}{3}$ & $\nicefrac{7}{4}$ & $\nicefrac{3}{2}$ & $\nicefrac{3}{2}$ & $\nicefrac{3}{2}$ \\
\hline
$\mathcal{CP}_3$ & \cc $3$ & \cc $2$ & \cc $\nicefrac{3}{2}$ & \cc $\nicefrac{4}{3}$ & \cc $1+\tfrac1{\ceil{\nicefrac{k}{2}}}$ \\
\hline
$\mathcal{T}_3$ & $?$ & $?$ & \cc $\nicefrac{3}{2}$ & \cc $\nicefrac{3}{2}$ & \cc $1 + \tfrac1{2\log k} + o\left(\tfrac1{\log k}\right)$\\
\hline
$\mathcal{CP}$ & \cc $3$ & \cc $2$ & \cc $\nicefrac{3}{2}$ & \cc $\nicefrac{3}{2}$ & \cc $\nicefrac{3}{2}$ \\
\hline
$\mathcal{T}$ & $\nicefrac{7}{2}$ & $3$ & $\nicefrac{3}{2}$ & $\nicefrac{3}{2}$ & $\nicefrac{3}{2}$ \\
\hline
\end{tabular}
\end{center}
\caption{Summary of repetition thresholds for different classes of graphs. }
\label{tbl:sum}
\end{table}
\section{Caterpillars}
A \textit{caterpillar} is a tree such that the graph induced by the
vertices of degree at least $2$ is a path, called the \textit{backbone}.
\begin{theorem}
\label{thm:cp2}
$
\RT(2,\CP)=\RT(2,\CP_3)=3.
$
\end{theorem}
\begin{proof}
First, we show that the repetition threshold is at least $3$. Note
that it suffices to prove it for the class of caterpillars with
maximum degree $3$. Suppose, to the contrary, that $\RT(2,\CP_3)<3$.
Then, the factor $xxx$ is forbidden for any $x\in\A$.
Therefore, in any $3$-free $2$-coloring, every vertex colored with
$x$ has at most one neighbor colored with $x$.
It follows that four consecutive backbone vertices of degree $3$
cannot be colored $xyxy$ for any $x,y\in\A$,
since the $3$-repetition $yxyxyx$ appears.
The factor $xyx$ is also forbidden.
Indeed, $xyx$ must extend to $xxyxx$ on the backbone
since $xyxy$ is forbidden. Then, $xxyxx$ must extend to $yxxyxxy$ on
the backbone since $xxx$ is forbidden.
Finally, $yxxyxxy$ must extend to the $3$-repetition $xyxxyxxyx$ in the
caterpillar. Thus, the binary word on the backbone must avoid $xxx$
and $xyx$. So, this word must be $(0011)^\omega$ which is not
$3$-free, a contradiction. Hence, $\RT(2,\CP_3)\ge3$.
Now, consider a $2$-coloring of an arbitrary caterpillar such that the
backbone induces a $2^+$-free word (which exists by
\Cref{thm:path}$(i)$) and every pendant vertex gets the
color distinct from the color of its neighbor. Clearly, this
$2$-coloring is $3^+$-free, and so
$\RT(2,\CP)\le3$.
\end{proof}
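The case analysis at the end of this proof can be verified mechanically (a small sketch; the word and the forbidden patterns are exactly those of the proof):

```python
w = "0011" * 5                              # a prefix of (0011)^omega
assert "000" not in w and "111" not in w    # avoids every factor xxx
assert "010" not in w and "101" not in w    # avoids every factor xyx
assert w[:12] == "0011" * 3                 # yet contains the 3-repetition (0011)^3
```

So $(0011)^\omega$ is indeed the only candidate backbone word, and it fails $3$-freeness.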
\begin{theorem}
\label{thm:cp3}
$
\RT(3,\CP)=\RT(3,\CP_3)=2.
$
\end{theorem}
\begin{proof}
We start by proving $\RT(3,\CP_3)\ge2$. So, suppose, for a
contradiction, that there is a $2$-free $3$-coloring for any
caterpillar with maximum degree $3$. In every $2$-free $3$-coloring,
the factor $xyx$ appears on the backbone, since otherwise the word
on the backbone would be $(012)^\omega$, which is not $2$-free. Then,
the colors extending the factor $xyx$ to the right are forced
(see~\Cref{fig:k3cat}). This induces a $2$-repetition $yxzyxz$.
\begin{figure}
\centering
\includegraphics[scale=0.7]{k3cat}
\caption{After a factor $xyx$, the remaining colors are forced.}
\label{fig:k3cat}
\end{figure}
Now, we show that $\RT(3,\CP)\le2$ by constructing a $2^+$-free
$3$-coloring of an arbitrary caterpillar. Take a $2^+$-free
$2$-coloring of the backbone (which exists by Theorem~\ref{thm:path}),
and color the pendent vertices with the third color.
\end{proof}
\begin{theorem}
\label{thm:cp4}
$
\RT(4,\CP_3)=\tfrac32.
$
\end{theorem}
\begin{proof}
By \Cref{thm:tree}$(iii)$, we have $\RT(4,\CP_3)\le\tfrac32$.
Let us show that any $4$-coloring $c$ of the caterpillar of maximum degree
3 in which every backbone vertex has a pendant neighbor contains a repetition of
exponent at least $\tfrac32$. Consider six consecutive
vertices $u_0,u_1,u_2,u_3,u_4,u_5$ of the backbone. Let $v_i$ be the pendant
neighbor of $u_i$. In any $\tfrac32$-free coloring, the vertices
$u_1,u_2,u_3,v_2$ must get distinct colors: say $c(u_1)=x, c(u_2)=y,
c(u_3)=z, c(v_2)=t$. Either $u_0$ or $u_4$ must be colored with
color $t$, since otherwise $u_0u_1u_2u_3u_4$ would be colored $zxyzx$, a
$\tfrac53$-repetition; w.l.o.g. assume $c(u_4)=t$. Then, either $u_5$ or $v_4$
must be colored by $y$, and we obtain the $\tfrac53$-repetition $tyzty$ on the
path starting at $v_2$.
\end{proof}
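The exponent computation for the final repetition can be checked directly from the definitions in the introduction (a small sketch):

```python
from fractions import Fraction

f = "tyzty"                         # the repetition found in the proof
p = 3                               # period tyz; excess ty
assert f[p:] == f[:len(f) - p]      # the excess is a prefix of the repetition
assert Fraction(len(f), p) == Fraction(5, 3)
assert Fraction(5, 3) >= Fraction(3, 2)   # so the coloring is not 3/2-free
```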
\begin{lemma}
\label{lem:cp35low}
For every integer $k\ge5$, we have
$\RT(k,\CP_3)\ge1+\tfrac1{\ceil{\nicefrac k2}}$.
\end{lemma}
\begin{proof}
\begin{figure}[htp!]
\centering
\subfloat[\label{fig:k5catodd}There exist $2\eta$
vertices at distance at most~$\eta$ from each other.]{\includegraphics[scale=0.7]{k5catodd}}
$\qquad\qquad$\subfloat[Even case.\label{fig:k5cateven}]{\includegraphics[scale=0.7]{k5cateven}}
\caption{Illustrations of~\Cref{lem:cp35low}.}
\end{figure}
Let $\eta=\ceil{\tfrac k2}$. Suppose, to the contrary, that there
exists a $(1+\tfrac1\eta)$-free $k$-coloring $c$ for any caterpillar
with maximum degree $3$. Then, every two vertices at distance at
most $\eta$ must be colored differently. In caterpillars with
maximum degree $3$, we can have $2\eta$ vertices being pairwise at
distance at most $\eta$ (see~\Cref{fig:k5catodd}). If $k$ is odd,
then $2\eta>k$, and thus $c$ is not $(1+\tfrac1\eta)$-free. If $k$
is even, the vertices $x_i$ of~\Cref{fig:k5cateven} necessarily get
distinct colors, say $x_i$ gets color $i$. Then, we have
$c(y)\in\{1,3\}$ and w.l.o.g. $c(y)=1$. We also have
$2\in\{c(z_1),c(z_2)\}$ and w.l.o.g. $c(z_1)=2$. Then we obtain a
$\paren{1+\tfrac2{\eta+1}}$-repetition with excess $c(y)c(z_1)=12$,
a contradiction.
\end{proof}
\begin{lemma}
\label{lem:cp35up}
$
\RT(5,\CP_3)\le\tfrac43.
$
\end{lemma}
\begin{proof}
We start from a right infinite $\tfrac54^+$-free word
$w=w_0w_1\ldots$ on $5$ letters. We associate to $w$ its Pansiot
code $p$ such that $p_i=0$ if $w_i=w_{i+4}$ and $p_i=1$ otherwise,
for every $i\ge0$~\cite{Pan84}. Let us construct a
$\tfrac43^+$-free $5$-coloring $c$ of the infinite caterpillar such
that every vertex on the backbone has exactly one pendant vertex.
For every $i\ge0$, $c[0][i]$ is the color of the $i$-th backbone
vertex and $c[1][i]$ is the color of the $i$-th pendant vertex.
We define below the mapping $h[t][\ell]$ such that $t\in\acc{0,1}$
corresponds to the type of transition in the Pansiot code and
$\ell\in\acc{0,1}$ corresponds to the type of vertex ($\ell=0$ for
backbone, $\ell=1$ for leaf):
\begin{center}
\begin{tabular}{c}
$h[0][0]=150251053150352053$\\
$h[0][1]=033332322221211110$\\
$h[1][0]=143123021324123103$\\
$h[1][1]=000044440400004444$\\
\end{tabular}
\end{center}
Notice that the length of $h[t][\ell]$ is $18$. Given
$t\in\acc{0,1}$ and $\ell\in\acc{0,1}$, let us denote
$h[t][\ell][j]$, for $j\in\acc{0,\ldots,17}$, the $j^{\rm{th}}$
letter of $h[t][\ell]$ (e.g. $h[0][0][3] = 2$).
The coloring is defined by $c[\ell][18i+j]=w_{i+h[p_i][\ell][j]}$
for every $\ell\in\acc{0,1}$, $i\ge0$, and $j\in\acc{0,\ldots,17}$.
Let us prove that this coloring is $\tfrac43^+$-free.
We check exhaustively that there exists no forbidden repetition of
length at most 576 in the caterpillar. Now suppose for contradiction
that there exists a repetition $r$ of length $n>576$ and exponent
$\tfrac n d>\tfrac43$ in the caterpillar. This implies that there
exists a repetition of length $n'\ge n-2$ and period of length $d$ in
the backbone. This repetition contains a repetition $r'$ consisting of
full blocks of length 18 having length at least
$n'-2\times({18-1})\ge n-36$ and period length $d$. Given $n>576$
and $\tfrac n d>\tfrac43$, the repetition $r'$ has exponent at least $\tfrac{n-36}{d}>\tfrac54$.
The repetition $r'$ in the backbone
implies a repetition of exponent greater than $\tfrac54$ in $w$, which is a
contradiction.
\end{proof}
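The Pansiot code used in this proof can be illustrated concretely. The sketch below (Python; the sample word is generated arbitrarily so that every $4$ consecutive letters are pairwise distinct, as holds for $\tfrac54^+$-free words over $5$ letters, but it is not a genuine Dejean word) encodes a word, decodes it back losslessly, and checks that the four blocks $h[t][\ell]$ above indeed have length $18$:

```python
import random

def pansiot_encode(w):
    """Pansiot code: p_i = 0 if w_i == w_{i+4}, else p_i = 1."""
    return [0 if w[i] == w[i + 4] else 1 for i in range(len(w) - 4)]

def pansiot_decode(prefix, code, alphabet="01234"):
    """Rebuild w from its first 4 letters and its Pansiot code; valid whenever
    every 4 consecutive letters of w are pairwise distinct."""
    w = list(prefix)
    for i, p in enumerate(code):
        if p == 0:
            w.append(w[i])                                     # w_{i+4} = w_i
        else:
            w.append((set(alphabet) - set(w[i:i + 4])).pop())  # the absent letter
    return "".join(w)

random.seed(0)
w = "0123"
while len(w) < 60:                     # grow a word with distinct 4-windows
    w += random.choice([c for c in "01234" if c not in w[-3:]])
assert pansiot_decode(w[:4], pansiot_encode(w)) == w           # lossless roundtrip

h_blocks = ["150251053150352053", "033332322221211110",
            "143123021324123103", "000044440400004444"]
assert all(len(b) == 18 for b in h_blocks)                     # block length 18
```

When the windows are distinct, $w_{i+4}$ is either $w_i$ (code $0$) or the unique letter absent from $w_i\ldots w_{i+3}$ (code $1$), which is why the code determines the word.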
\begin{lemma}
\label{lem:cp36up}
For every integer $k\ge6$, we have
$
\RT(k,\CP_3)\le1+\tfrac1{\ceil{\nicefrac k2}}.
$
\end{lemma}
\begin{proof}
\begin{figure}[htp!]
\centering
\includegraphics[scale=0.7]{k6catup}
\caption{A $(1+\tfrac16)^+$-free $11$-coloring of a caterpillar
with maximum degree $3$.}
\label{fig:k6catup}
\end{figure}
First notice that it suffices to construct colorings for odd $k$'s, since
$1+\tfrac1{\ceil{\nicefrac k2}}=1+\tfrac1{\ceil{\nicefrac{(k-1)}2}}$
for $k$ even. So, let $k$ be odd and let $\eta=\ceil{\tfrac k2}$.
By~\Cref{thm:path}$(iv)$, we can color the vertices of the backbone
by a $(1+\tfrac1\eta)^+$-free $(\eta+1)$-coloring. Then, it remains
to color the pendant vertices: let us color them cyclically using
the remaining $k-(\eta + 1) = \eta-2$ unused colors
(see~\Cref{fig:k6catup} for an example with $k=11$). Clearly, the
repetitions which do not contain a pendant vertex are
$(1+\tfrac1\eta)^+$-free. Moreover, for a repetition
containing a pendant vertex, the length of the excess is at most $1$
and the period length is at least $\eta$. Thus, its exponent is at
most $\tfrac{\eta+1}{\eta}=1+\tfrac1{\eta}$.
This shows that this $k$-coloring is $(1+\tfrac1\eta)^+$-free.
\end{proof}
\Cref{lem:cp35low,lem:cp35up,lem:cp36up} together imply the following theorem.
\begin{theorem}\label{th:cp3_k}
For every integer $k$, with $k\ge5$, we have
$
\RT(k,\CP_3)=1+\tfrac1{\ceil{\nicefrac k2}}.
$
\end{theorem}
Observe that for all $k\ge4$, we have $\RT(k,\CP)=\tfrac32$.
Indeed, caterpillars are trees and thus $\RT(k,\CP)\le\RT(k,\T)=\tfrac32$.
On the other hand, we have $\RT(k,\CP)\ge \RT(k,K_{1,k}) = \tfrac32$ (where $K_{1,k}$ is the star of degree $k$).
\section{Trees of maximum degree 3}
The class of trees of maximum degree 3 is denoted by
$\mathcal{T}_3$. Let $T\in\mathcal{T}_3$ be the infinite embedded rooted tree
whose vertices have degree 3, except the root which has degree 2.
Thus, every vertex of $T$ has a left son and a right son. The \emph{level}
of a vertex of $T$ is the distance to the root (the root has level 0).
Since every tree of maximum degree 3 is a subgraph of $T$, we only
consider $T$ while proving that $\RT(k,\mathcal{T}_3)\le \alpha$ for
some $k$ and $\alpha$.
\bigskip
Note first that $\RT(4,\mathcal{CP}_3) \le \RT(4,\mathcal{T}_3) \le \RT(4,\mathcal{T})$ and thus $\RT(4,\mathcal{T}_3)=\tfrac32$.
\begin{figure}[htp!]
\centering
\includegraphics[scale=0.7]{tree}
\caption{Construction for Theorem~\ref{th:tree3_5}}
\label{fig:tree}
\end{figure}
\begin{theorem}\label{th:tree3_5}
$\RT(5,\mathcal{T}_3) = \tfrac32$.
\end{theorem}
\begin{proof}
Note first that
$\RT(5,\mathcal{T}_3) \le \RT(5,\mathcal{T}) = \tfrac32$. Let us
show that this bound is best possible.
Let $G\in\mathcal{T}_3$ be the graph depicted in
Figure~\ref{fig:tree} and let $v\in V(G)$ be the squared vertex.
In every $5$-coloring of $G$, at least two among the six vertices at distance
$2$ from $v$ get the same color. In every $\tfrac32$-free
$5$-coloring, the distance between these two vertices is
four. W.l.o.g., the two triangle vertices of Figure~\ref{fig:tree}
are colored with the same color, say color $1$. Then, we color the
other vertices following the labels in alphabetical order (a vertex
labelled $x$ is called an $x$-vertex). The $a$-vertices have to get
three distinct colors (and distinct from~$1$), say $2$, $3$, and
$4$. The $b$-vertices can only get colors $3$ or $5$ and they must
have distinct colors in every $\tfrac32$-free $5$-coloring. The same
holds for the $c$-vertices. The $d$-vertex must then get color $5$. Therefore
the $e$-vertices can only get colors $2$ or $4$. The $f$-vertex must get
color $4$. Finally, the $g$-vertex cannot be colored without
creating a forbidden factor. Thus $\RT(5,\mathcal{T}_3)\ge\tfrac32$,
and that concludes the proof.
\end{proof}
\begin{theorem}\label{th:tree}
For every $t\ge 4$, we have $$\RT((t+1)2^{\left\lfloor(t+1)/2\right\rfloor},\mathcal{T}_3)\le1+\tfrac1{t}\le\RT\paren{3\paren{2^{\floor{t/2}}-1},\mathcal{T}_3}.$$
\end{theorem}
\begin{proof}
To prove that
$\RT((t+1)2^{\left\lfloor(t+1)/2\right\rfloor},\mathcal{T}_3)\le1+\tfrac1{t}$,
we color $T$ as follows. Let $f$ be the
coloring of $T$ mapping every vertex $v$ to a color of the form
$(\gamma,\lambda)$ with $0\le \gamma\le 2t+1$
and $\lambda\in\{0,1\}^{\floor{(t-1)/2}}$. Let us describe each
component of a color:
\begin{description}
\item[$\gamma$-component:] Let us consider a Dejean word $w$ over
$t+1$ letters which is $(1+\tfrac 1t)^+$-free since $t\ge 4$. We apply to
$w$ the morphism $m$ which doubles every letter, that is for every
letter $i$ such that $0\le i\le t$, $m(i) = i_0i_1$. Let $w'=m(w)$
and, given $\ell\ge 0$, let $w'[\ell]$ be the $\ell$-th letter of
$w'$. Every vertex at level $\ell$ gets $w'[\ell]$ as
$\gamma$-component.
\item[$\lambda$-component:] Given a vertex $v$, let $u$ be its ancestor at distance
$\floor{\tfrac{t-1}2}$ (or the root if $v$ is at level
$\ell<\floor{\tfrac{t-1}2}$). Let
$u=u_0,u_1,u_2,u_3,\ldots,u_{\floor{\tfrac{t-1}2}}=v$ be the path
from $u$ to $v$. The $\lambda$-component of $v$ is the binary word built
as follows: if $u_{i+1}$ is the left son of $u_i$, then $\lambda[i]=0$;
otherwise, $\lambda[i]=1$. If $v$ is at level $\ell<\floor{\tfrac{t-1}2}$,
add $\floor{\tfrac{t-1}2} - \ell$ 0's as prefix of $\lambda$.
\end{description}
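For concreteness, the assembly of the two color components can be sketched in code. The word \texttt{w} below is only a placeholder standing in for a genuine Dejean word over $t+1$ letters, and the choice $t=5$ is an arbitrary sample value; only the way the $\gamma$- and $\lambda$-components are built follows the construction above.

```python
# Sketch of the coloring f from the proof (illustration only).
# `w` is a PLACEHOLDER, not a real Dejean word over t+1 letters.

T_PARAM = 5                      # the parameter t >= 4 (sample value)
HALF = (T_PARAM - 1) // 2        # floor((t-1)/2)

w = [0, 1, 2, 3, 4, 5, 0, 2, 4, 1, 3, 5]   # placeholder word

def m_image(word):
    """Apply the doubling morphism m: every letter i becomes i_0 i_1."""
    out = []
    for i in word:
        out.append((i, 0))
        out.append((i, 1))
    return out

W_PRIME = m_image(w)             # w' = m(w)

def color(vertex):
    """Color of a vertex given as a binary string: the path from the
    root ('' = root, '0' = left son, '1' = right son, ...)."""
    level = len(vertex)
    gamma = W_PRIME[level]                 # gamma-component: w'[level]
    lam = vertex[-HALF:] if level >= HALF else vertex
    lam = '0' * (HALF - len(lam)) + lam    # pad with 0's as prefix
    return (gamma, lam)
```

For example, two sibling vertices always differ in the last bit of their $\lambda$-component, and vertices on adjacent levels always differ in their $\gamma$-component.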
Let us prove that $f$ is a $(1+\tfrac 1t)^+$-free coloring.
Suppose that there exists a forbidden repetition such that the
repeated factor is a single letter $a$, that is a factor $axa$ where
$|ax|<t$. Let $u$ and $v$ be the two vertices colored
$f(u)=f(v)=(\gamma,\lambda)$. The vertices $u$ and $v$ must lie on the
same level, since otherwise they would be at distance at least $2t$
due to $\gamma$. Since $u$ and $v$ are distinct and have
the same $\lambda$, their common ancestor is at distance at least
$\floor{\tfrac{t-1}2}+1$ from each of them.
Thus $u$ and $v$ are at distance at least
$2\paren{\floor{\tfrac{t-1}2}+1}\ge t$, which contradicts $|ax|<t$.
Suppose now that there exists a forbidden repetition such that the
length of the repeated factor is at least $2$. Suppose first that the
path supporting the repetition is of the form
$l_{i},l_{i-1},l_{i-2},\ldots,l_1,l_0=u=r_0,r_1,r_2,\ldots,r_j$
where $l_1$ and $r_1$ are the left son and the right son of $u$,
respectively. Let $l_i,l_{i-1},\ldots,l_1,l_0$ be the
\emph{left branch} and $r_0,r_1,\ldots,r_{j-1},r_j$ be the
\emph{right branch} of the path.
W.l.o.g., assume that $1\le j \le i$.
Therefore, there exist two vertices $l_k$ and $l_{k-1}$ of the left
branch such that $f(l_kl_{k-1})=f(r_{j-1}r_j)$. Let us show that,
given the $\gamma$-components of the colors of two adjacent
vertices, it is possible to determine which vertex is the
father. W.l.o.g. the two $\gamma$-components are $i_0$ and $j_1$. If
$i=j$, then the father is the vertex with $\gamma$-component $i_0$;
otherwise, $i\neq j$ and the father is the vertex with
$\gamma$-component $j_1$. This is a contradiction since $l_{k-1}$
is the father of $l_k$ and $r_{j-1}$ is the father of $r_j$.
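The parent-recovery rule used in this argument can be written as a small function; this is a sketch under the assumption that each $\gamma$-component is represented as a pair (letter, subscript):

```python
def father(c1, c2):
    """Given the gamma-components (i, subscript) of two ADJACENT
    vertices, return the component of the father, following the rule
    in the proof: with components i_0 and j_1, the father is i_0 when
    i == j, and j_1 otherwise."""
    (i, si), (j, sj) = c1, c2
    assert {si, sj} == {0, 1}         # adjacent levels carry i_0 and j_1
    if si == 1:                        # normalize so c1 = i_0, c2 = j_1
        (i, si), (j, sj) = c2, c1
    return (i, 0) if i == j else (j, 1)
```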
Suppose finally that the path supporting the repetition does not
contain two brothers. This is equivalent to saying that $m(w)$ is not
$(1+\tfrac 1t)^+$-free. It is clear that if the $m$-image of a word
contains an $e$-repetition, then this word necessarily
contains an $e$-repetition. This implies that $w$ is not $(1+\tfrac
1t)^+$-free, a contradiction.
Therefore, we have $\RT((t+1)2^{\left\lfloor(t+1)/2\right\rfloor},\mathcal{T}_3)\le1+\tfrac1{t}$.
\bigskip
To prove that $\RT\paren{3\paren{2^{\floor{t/2}}-1},\mathcal{T}_3}\ge1+\tfrac1{t}$,
we consider the tree $T\in\mathcal{T}_3$ consisting of a vertex together with all vertices
at distance at most $\floor{t/2}$ from it. The distance between any two vertices in $T$ is at most $t$.
Thus, no two vertices of $T$ have the same color in a $(1+\tfrac1{t})$-free coloring.
Since $T$ contains $3\paren{2^{\floor{t/2}}-1}+1$ vertices,
it admits no $(1+\tfrac1{t})$-free coloring with $3\paren{2^{\floor{t/2}}-1}$ colors,
which gives $\RT\paren{3\paren{2^{\floor{t/2}}-1},\mathcal{T}_3}\ge1+\tfrac1{t}$.
\end{proof}
Note that Theorem~\ref{th:tree} can be generalized to trees of bounded
maximum degree $\Delta$. This would give the following:
$$\RT(2(t+1)(\Delta-1)^{\left\lfloor(t-1)/2\right\rfloor},\mathcal{T}_\Delta)\le1+\tfrac1{t}\le\RT\paren{\frac{\Delta\paren{(\Delta-1)^{\floor{t/2}}-1}}{\Delta-2},\mathcal{T}_\Delta}$$
\section{Conclusion}
In this paper, we continued the study of repetition thresholds in colorings of various subclasses of trees.
We completely determined the repetition thresholds for caterpillars and caterpillars of maximum degree $3$,
and presented some results for trees of maximum degree $3$.
There are several open questions in the latter class for which it appears that more advanced methods of analysis
should be developed. In particular, our bounds show that
$$
3 \le \RT\paren{2,\mathcal{T}_3} \le \frac{7}{2} \quad \mbox{and} \quad 2 \le \RT\paren{3,\mathcal{T}_3} \le 3\,,
$$
however, we have not been able to determine the exact bounds yet.
Additionally, the repetition thresholds in trees of bounded degrees
for alphabets of size at least $6$ remain unknown.
\bigskip
\noindent
{\bf Acknowledgement.}
The research was partially supported by Slovenian research agency ARRS program no.\ P1--0383 and project no.\ L1--4292
and by French research agency ANR project COCOGRO.
\section{Novel Prediction-Aware Online Algorithm}
\label{sec:CHASE-pp}
\subsection{Intuitions}
Consider the first and second type-1 critical segments in Fig.~\ref{fig:An-example-of-CHASE}. For both of these segments, the previous algorithm \textsf{CHASElk($w$)} detects the segment type at $t=\tilde{T_{1}^{c}}-w$ and $t=\tilde{T_{3}^{c}}-w$, respectively, and turns on the generator.
Using these two examples, we explain two intuitions that motivate us to design better prediction-aware online algorithms.
\begin{itemize}
\item The first intuition is that in the first type-1 segment the cumulative cost difference in the window $[\tilde{T_{1}^{c}}-w, \tilde{T_{1}^{c}}]$ is large ($\Delta(\tilde{T_{1}^{c}})-\Delta(\tilde{T_{1}^{c}}-w)=\lambda $). This means there is substantial demand in the look-ahead window, and it is cost-effective to turn on the generator, because spending the startup cost is worthwhile when we can enjoy the significant benefit of using the generator. Meanwhile, in the second type-1 segment, this cumulative cost difference is almost zero, which indicates only sporadic demand in the look-ahead window. Hence, it is better to keep the generator off and avoid spending the high startup cost.
\item The second intuition is that in the second type-1 segment, when $\Delta(\tilde{T_{3}^{c}})$ reaches zero, $\Delta(\tilde{T_{3}^{c}}-w)$ is already almost zero, which means we have already suffered from not turning on the generator earlier. Hence, turning on the generator at the current time $\tilde{T_{3}^{c}}-w$, when there is not enough demand in the look-ahead window, is not beneficial. On the other hand, in the first type-1 segment at time $\tilde{T_{1}^{c}}-w$, we have $\Delta(\tilde{T_{1}^{c}}-w)= -\lambda$, which means that we are still at the beginning of the type-1 segment and turning on the generator at this moment is worthwhile.
\end{itemize}
Following these intuitions, in the online setting, we turn on the generator only if we detect entering a type-1 critical segment and there is a substantial benefit in using the generator in the look-ahead window. Meanwhile, as soon as we detect a type-2 critical segment, we should turn off the generator. Otherwise, the online algorithm spends $c_{\mathrm{m}}$ per time slot in idling cost, while the offline algorithm has already turned off the generator and incurs no cost. This is similar to a ski-rental problem: keeping the generator on is like the skier who keeps renting skis, whose online cost keeps increasing, while the offline algorithm has bought the skis and its cost is fixed. To capture the benefit of using the generator, we define an important parameter named the cumulative differential cost in the prediction window.
\begin{defn}
For any $\tau \in [t,t+w]$, we define $\Delta_{t}^{\tau}$ as the {\em cumulative differential cost} between using or not using the generator in the time interval $[t,\tau]$ as follows: \begin{equation}
\Delta_{t}^{\tau} \triangleq \sum_{s=t}^{\tau} \delta(s).
\end{equation}
\end{defn}
This parameter utilizes all the predicted information, and it is critical for our novel online algorithm design.
\subsection{Algorithm Description}
We denote our prediction-aware online algorithm as \textsf{CHASEpp($w$)}, as presented in Algorithm~\ref{alg:CHASE-pr}. In line 3 of the algorithm, if we detect being in a type-2 critical segment, we turn off the generator. Meanwhile, if we detect being in a type-1 critical segment (lines 5 to 13), we check whether we can detect the next type-2 critical segment. If we cannot detect it by the end of the window (line 7), we turn on the generator if we have $\Delta_{t}^{t+w} \geq \lambda $, which means there is enough benefit in using the generator for the prediction window. Similarly, if we can detect the next type-2 critical segment and $\Delta_{t}^{\tau_2} \geq 0$ (line 9), we turn on the generator, since there is enough benefit to compensate for the startup cost. Otherwise, we just keep the generator status unchanged.
\begin{algorithm}[htb!]
{\caption{ ${\sf CHASEpp}(w) [t,\sigma(\tau)_{\tau = t}^{t+w}, y(t-1)]$} \label{alg:CHASE-pr}
\begin{algorithmic}[1]
\STATE find $ \Delta(\tau)_{\tau = t}^{t+w}$
\STATE set $\tau_1 \leftarrow \min\big\{\tau =t, ...,t+w \mid \Delta(\tau) = 0 \mbox{\ or\ }-\beta \big\}$
\IF{ $\Delta(\tau_1)=-\beta$ (type-2)}
\STATE {$y(t) \leftarrow 0$}
\ELSIF{$\Delta(\tau_1)=0$ (type-1)}
\STATE set $\tau_2 \leftarrow \min\big\{\tau =t, ...,t+w \mid \Delta(\tau)=-\beta \big\}$
\IF {$\Delta(\tau) > -\beta, \forall \tau \in [t,t+w], \mbox{\ and \ } \Delta_{t}^{t+w} \geq \lambda$}
\STATE {$y(t) \leftarrow 1$}
\ELSIF {$\Delta_{t}^{\tau_2} \geq 0$}
\STATE {$y(t) \leftarrow 1$}
\ELSE
\STATE {$y(t) \leftarrow y(t-1)$}
\ENDIF
\ELSE
\STATE {$y(t) \leftarrow y(t-1)$}
\ENDIF
\STATE set $u(t)$, $v(t)$, and $s(t)$ according to~\eqref{lem:fMCMP}
\end{algorithmic}
}
\end{algorithm}
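One decision step of the algorithm can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the values $\Delta(\tau)$ and $\delta(\tau)$ over the window are passed in directly as lists, and the exact-equality tests mirror the pseudocode.

```python
def chasepp_step(delta_cum, delta, y_prev, beta, lam):
    """One decision of CHASEpp(w) at time t (sketch).
    delta_cum[k] = Delta(t+k), k = 0..w   (cumulative cost difference)
    delta[k]     = delta(t+k)             (per-slot differential cost)
    Returns y(t)."""
    w = len(delta_cum) - 1
    # line 2: first tau in the window where Delta hits 0 or -beta
    tau1 = next((k for k in range(w + 1)
                 if delta_cum[k] in (0, -beta)), None)
    if tau1 is None:
        return y_prev                      # neither type detected
    if delta_cum[tau1] == -beta:           # type-2 detected: turn off
        return 0
    # type-1 detected; look for the next type-2 within the window
    tau2 = next((k for k in range(w + 1)
                 if delta_cum[k] == -beta), None)
    if tau2 is None:                       # no type-2 in the window
        if sum(delta) >= lam:              # enough benefit: turn on
            return 1
    elif sum(delta[:tau2 + 1]) >= 0:       # benefit covers startup cost
        return 1
    return y_prev                          # keep the status unchanged
```

For instance, with $\beta=3$ and $\lambda=2$, a window whose $\Delta$ trace crosses $0$ and carries cumulative benefit $\ge\lambda$ turns the generator on, while a trace that hits $-\beta$ turns it off.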
We note that the online algorithm design space explored in this paper is new, in which turning on the generator depends on satisfying two conditions at the same time. First, we need to make sure the offline algorithm has turned on the generator. Second, we turn on the generator only if there is a significant benefit in using it over the future window. Comparing \textsf{CHASEpp($w$)} with the existing algorithms, one can see that the state-of-the-art algorithm \textsf{CHASElk($w$)} is a simple extension of \textsf{CHASE}, which tracks the offline solution in an online manner. By checking the cumulative differential cost in the prediction window, however, \textsf{CHASEpp($w$)} makes smarter decisions when tracking the offline optimal, ensuring better competitiveness. This new and effective design space improves the state-of-the-art competitive ratio.
\section{Proof of Theorem~\ref{lem:crfunction}}
\label{sec:appendix A}
\begin{proof}
Without loss of generality, let us consider that in a type-1 critical segment the algorithm turns on the generator at time $t= \tilde{T}_{i}^{c}+kw-\theta$, where $k \in [0,\infty)$ and $\theta \in [1,w]$. Setting $k = 0$ and $\theta = w$ gives the case that we turn on the generator at $t=\tilde{T}_{i}^{c}-w$. Here we first consider the case with $k=0$, and then we extend the result to general $k$. In the case with $k=0$ we turn on the generator at time $t=\tilde{T}_{i}^{c}-\theta$, which means we keep the generator off from $t=\tilde{T}_{i}^{c}-w$ to $t=\tilde{T}_{i}^{c}-\theta$, where $\theta \in [1,w]$.
We denote the outcome of ${\sf CHASEpp}(w)$ by $\big(y_{{\sf CHASE}(w)}(t)\big)_{t=1}^{T}$, and the outcome of the optimal offline algorithm by $\big(y_{OFA}(t)\big)_{t=1}^{T}$. Let us define ${\mathcal K}_{j}$ as the set of indices of type-$j$ critical segments i.e.,
\begin{equation}
{\mathcal K}_{j} \triangleq \{ \, i \, |\, \, [T_i^c+1 , T_{i+1}^c] \textit{is a type-j critical segment in } [0, T] \}. \notag
\end{equation}
Denote the sub-cost for type-$j$ by
\begin{eqnarray}
{\rm Cost}^{{\rm ty\mbox{-}}j}(y) \triangleq \sum_{i\in{\mathcal K}_{j}}\sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}}\psi\big(\sigma(t),y(t)\big)+\beta\cdot[y(t)-y(t-1)]^{+}
\end{eqnarray}
Hence, ${\rm Cost}(y)=\sum_{j=0}^{3}{\rm Cost}^{{\rm ty\mbox{-}}j}(y)$.
We prove the result by comparing the sub-cost for each type-$j$.
(\textbf{Type-0}): Note that both $y_{{\rm OFA}}(t)=y_{{\rm CHASE(w)}}(t)=0$
for all $t\in[1,T]$. Hence,
\begin{equation}
{\rm Cost}^{{\rm ty\mbox{-}}0}(y_{{\rm OFA}})={\rm Cost}^{{\rm ty\mbox{-}}0}(y_{{\rm CHASE(w)}})
\end{equation}
(\textbf{Type-1}): Based on the definition of critical segment,
we recall that there is an auxiliary point $\tilde{T_{i}^{c}}$, such that either \big($\Delta(T_{i}^{c})=0$ and $\Delta(\tilde{T_{i}^{c}})=-\beta$\big)
or \big($\Delta(T_{i}^{c})=-\beta$ and $\Delta(\tilde{T_{i}^{c}})=0$\big).
We assumed we turn on the generator at $t= \Tilde{T_i^c}-\theta$. Now we focus on the segment $ [T_i^c+1, T_{i+1}^c]$. We observe
\begin{equation}
y_{{\sf CHASE}(w)}(t)=\begin{cases}
0, & \mbox{for all\ } t\in[T_{i}^{c}+1,\tilde{T}_{i}^{c}-\theta),\\
1, & \mbox{for all\ } t\in[\tilde{T}_{i}^{c}-\theta,T_{i+1}^{c}].
\end{cases}
\end{equation}
We consider a particular type-1 critical segment: $[T_{i}^{c}+1,T_{i+1}^{c}]$. Note that by the definition
of type-1, at the beginning of this segment for both online and offline algorithms the generator status is off $y_{{\rm OFA}}(T_{i}^{c})=y_{{\sf CHASE}(w)}(T_{i}^{c})=0$. For the offline algorithm
$y_{{\rm OFA}}(t)$ switches from $0$ to $1$ at time $T_{i}^{c}+1$,
while for the online algorithm $y_{{\sf CHASE}(w)}$ switches at time $\tilde{T}_{i}^{c}-\theta$,
both incurring startup cost $\beta$. Hence, in the interval $[T_{i}^{c}+1,\tilde{T}_{i}^{c}-\theta-1]$, the online and offline algorithms have a different status, while in the interval $[\tilde{T}_{i}^{c}-\theta, T_{i+1}^{c}]$ they have the same status. The cost difference between $y_{{\sf CHASE}(w)}$ and $y_{{\rm OFA}}$ within $[T_{i}^{c}+1,T_{i+1}^{c}]$
is
\begin{eqnarray} \label{eq31}
&&\sum_{t=T_{i}^{c}+1}^{\tilde{T}_{i}^{c}-\theta-1}\Big(\psi\Big(\sigma(t),0\Big)-\psi\Big(\sigma(t),1\Big)\Big)+\beta-\beta = \sum_{t=T_{i}^{c}+1}^{\tilde{T}_{i}^{c}-\theta-1}\delta(t) \notag \\
&&= \Delta(\tilde{T}_{i}^{c}-\theta-1)-\Delta(T_{i}^{c})= -q_i^1+\beta,
\end{eqnarray}
where $q_i^1 \triangleq - \Delta(\tilde{T}_{i}^{c}-\theta-1)$.
If we repeat this process for all type-1 critical segments, and we have $m_{1}\triangleq|{\mathcal K}_{1}|$ type-1 critical segments, we obtain
\begin{equation} \label{eq32}
{\rm Cost}^{{\rm ty\mbox{-}}1}(y_{{\sf CHASE}(w)})\le{\rm Cost}^{{\rm ty\mbox{-}}1}(y_{{\rm OFA}})+m_{1}\cdot\beta-\sum_{i\in {\mathcal K}_{1}}q_{i}^{1}.
\end{equation}
(\textbf{Type-2}) and (\textbf{Type-3}):
Now, we repeat the same process for type-2 and type-3 (type-end) critical segments. Let the number of type-$j$ critical segments be $m_{j}\triangleq|{\mathcal K}_{j}|$. We derive similarly for $j=2$ or $3$ as
\begin{equation}
{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\sf CHASE}(w)})\le{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\rm OFA}})+m_{j}\cdot \beta-\sum_{i\in {\mathcal K}_{j}}q_{i}^{j},
\end{equation}
where $q_{i}^{j} \triangleq \beta+ \Delta(\tilde{T}_{i}^{c}-w-1)$.
Note that $|q_{i}^{j}|\leq\beta$ for all $i,j$.
Furthermore, we note $m_{1}=m_{2}+m_{3}$, because it takes equal
numbers of critical segments for increasing $\Delta(\cdot)$ from
$-\beta$ to 0 and for decreasing from 0 to $-\beta$. We obtain
\begin{eqnarray}
&&{\displaystyle \frac{{\rm Cost}(y_{{\sf CHASE}(w)})}{{\rm Cost}(y_{{\rm OFA}})}}= {\displaystyle \frac{\sum_{j=0}^{3}{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\sf CHASE}(w)})}{\sum_{j=0}^{3}{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\rm OFA}})}} \leq \notag\\
&& {\displaystyle 1+ \frac{2m_{1}\beta-\sum_{i\in {\mathcal K}_{1}}q_{i}^{1}-\sum_{i\in {\mathcal K}_{2}}q_{i}^{2}-\sum_{i\in {\mathcal K}_{3}}q_{i}^{3} }{\sum_{j=0}^{3}{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\rm OFA}})}}
\end{eqnarray}
It should be noted that in the calculation a type-3 critical segment is exactly the same as a type-2 critical segment, and hence in the rest of the calculation we just consider type-2 critical segments. As a result, we can write $m_{1}=m_{2}$ for ease of calculation. We have
\begin{eqnarray}
& &{\displaystyle \frac{{\rm Cost}(y_{{\sf CHASE}(w)})}{{\rm Cost}(y_{{\rm OFA}})}} \leq {\displaystyle 1+\frac{2m_{1}\beta-\sum_{i\in {\mathcal K}_{1}}q_{i}^{1}-\sum_{i\in {\mathcal K}_{2}}q_{i}^{2}}{\sum_{j=0}^{2}{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\rm OFA}})}}\notag \\
& & \leq 1+\begin{cases}
0 & \mbox{if\ }m_{1}=0,\\
{\displaystyle \frac{2m_{1}\beta-\sum_{i\in {\mathcal K}_{1}}q_{i}^{1} -\sum_{i\in {\mathcal K}_{2}}q_{i}^{2}}{\sum_{j=0}^{2}{\rm Cost}^{{\rm ty\mbox{-}}j}(y_{{\rm OFA}})}} & \mbox{otherwise}
\end{cases}
\end{eqnarray}
By Lemma \ref{lem:lower bound type-1}, and Lemma \ref{lem:lower bound type-2} and simplifications, we obtain:
\begin{eqnarray}
&& \frac{{\rm Cost}(y_{{\sf CHASE}(w)})}{{\rm Cost}(y_{{\rm OFA}})} \leq 1+ \big(1- \frac{Lc_{\mathrm{o}}+c_{\mathrm{m}}}{L(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}})} \big) \cdot \max\limits_{{q \in \{0,wc_{\mathrm{m}}\}}} \notag \\
&& \big\{ \frac{ (2\beta-q)}{ \beta+\big(2wc_{\mathrm{m}}-q+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda\big) \big(1-\frac{c_{\mathrm{m}}}{L(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}})} \big) } \big\}
\end{eqnarray}
We denote this value as $R_{\mathrm{on}}(\lambda)$. This is the performance ratio for the case with $k=0$. Now if we increase $k$, by the same process we can see that both the online and offline costs increase. Using the result from Lemma \ref{lem:Roff}, for general $k$ the increment ratio is bounded as follows:
\begin{eqnarray}
\frac{\text{online cost increment}}{\text{offline cost increment}} \leq \frac{k\big( wc_{\mathrm{m}}+\lambda \big)}{k \big( wc_{\mathrm{m}}+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda \big)}
\end{eqnarray}
We denote this ratio as $R_{\mathrm{off}}(\lambda)$. If this ratio is larger than the previous one, $R_{\mathrm{off}}(\lambda) > R_{\mathrm{on}}(\lambda)$, then by increasing $k$ the value of the performance ratio keeps increasing, and as $k$ goes to $\infty$, this competitive ratio goes to $R_{\mathrm{off}}(\lambda)$. On the other hand, if $R_{\mathrm{off}}(\lambda)\leq R_{\mathrm{on}}(\lambda)$, increasing the value of $k$ does not increase the competitive ratio, which is still upper bounded by $R_{\mathrm{on}}(\lambda)$. This shows that the competitive ratio is upper bounded by the maximum of $R_{\mathrm{on}}(\lambda)$ and $R_{\mathrm{off}}(\lambda)$. In Lemma~\ref{lem:incresing} we show that $R_{\mathrm{off}}(\lambda)$ is always an increasing function while $R_{\mathrm{on}}(\lambda)$ is always a decreasing function. This completes the proof.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:lower bound type-1}}
\begin{lem} For (\textbf{type-1}), we have
\label{lem:lower bound type-1}
\begin{eqnarray}
& & {\rm Cost}^{{\rm ty\mbox{-}}1}(y_{{\rm OFA}}) \geq m_{1}\beta+\sum_{i\in {\mathcal K}_{1}}\Big(\frac{(\beta-q_{i}^{1})(Lc_{\mathrm{o}}+c_{\mathrm{m}})}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}} + \notag\\
& & \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \big( wc_{\mathrm{m}} + \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}} \lambda \big) \Big).
\end{eqnarray}
\end{lem}
\begin{proof}
Consider a particular type-1 segment $[T_{i}^{c}+1,T_{i+1}^{c}]$. We denote its offline cost as $ \mathrm{Cost^{\rm t1}} $. We have:
\begin{eqnarray}
&& \mathrm{Cost^{\rm t1}} = \beta+\sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}}\psi\big(\sigma(t),1\big) = \beta+\notag \\
&& (T_{i+1}^{c}-T_{i}^{c})c_{\mathrm{m}}\label{eq:type-1 cost eq1-1} +\sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}}\big(\psi\big(\sigma(t),1\big)-c_{\mathrm{m}}\big).
\end{eqnarray}
By \cite[Lemma~4]{Minghua2013SIG} and simplification we obtain
\begin{eqnarray}
& & \mathrm{Cost^{\rm t1}} \geq \beta+(T_{i+1}^{c}-T_{i}^{c})c_{\mathrm{m}}+ \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \cdot \notag \\
&& \Big(\sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t)+(T_{i+1}^{c}-T_{i}^{c})c_{\mathrm{m}}\Big) = \beta+ \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \cdot \notag \\
& & (T_{i+1}^{c}-T_{i}^{c})c_{\mathrm{m}} + \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t)
\end{eqnarray}
Now we need to find the lower bound of both $(T_{i+1}^{c}-T_{i}^{c})$ and $\sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t)$ in the following two steps.
{\bf Step 1:} We write the lower bound of $\sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t)$ as follows:
\begin{eqnarray}
&& \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t) = \sum_{t=T_{i}^{c}+1}^{\Tilde{T_i^c}-\theta -1} \delta(t)+ \sum_{\Tilde{T_i^c}-\theta }^{T_{i+1}^{c}} \delta(t) = \Delta(\Tilde{T_i^c}-\theta -1)- \Delta(T_{i}^{c})\notag \\
&&+ \sum_{\Tilde{T_i^c}-\theta }^{T_{i+1}^{c}} \delta(t) = \beta - q_i^1 + \sum_{\Tilde{T_i^c}-\theta }^{T_{i+1}^{c}} \delta(t)\geq \beta - q_i^1 +\lambda \notag
\label{eq:w type-1 delta lowebound}
\end{eqnarray}
{\bf Step 2:} To find the lower bound of the length of the interval $[T_{i}^{c}+1, T_{i+1}^{c}]$, we consider two cases, $\theta = w$ and $\theta < w$, and calculate the lower bound as follows:
{\bf Case 1:} If $\theta = w$, we can see that $[T_{i}^{c}+1, T_{i+1}^{c}]$ has two parts, $[T_{i}^{c}+1, \Tilde{T_i^c}-w-1]$ and $[\Tilde{T_i^c}-w, T_{i+1}^{c}]$. We note that $\big(\Tilde{T_i^c}-w-1-T_{i}^{c}\big)$
is lower bounded by the steepest descent when $p(t)=p_{\mathrm{max}}$, $a(t)=L$
and $h(t)=\eta L$,
\begin{eqnarray}
\Tilde{T_i^c}-w-1-T_{i}^{c} \geq \frac{\beta - q_i^1}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}},
\label{eq:w type-1 cost eq7}
\end{eqnarray}
and for the second part we have $ T_{i+1}^{c}- \Tilde{T_i^c}+w +1 \geq w,$ which means
\begin{eqnarray}
T_{i+1}^{c}- T_{i}^{c} \geq \frac{\beta - q_i^1}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}}+ w
\end{eqnarray}
{\bf Case 2:} On the other hand, when $\theta < w$, we know the length of the interval $[\Tilde{T_i^c}-w,\Tilde{T_i^c}]$ is $w+1$ time slots and its cost difference is less than $\lambda$; hence, to lower bound the length of $[T_{i}^{c}+1, T_{i+1}^{c}]$, we have
\begin{eqnarray}
&& \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t) \geq \beta - q_i^1 +\lambda \implies \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t) - \sum_{\Tilde{T_i^c}-w}^{\Tilde{T_i^c}} \delta(t) + \\
&& \sum_{\Tilde{T_i^c}-w}^{\Tilde{T_i^c}} \delta(t)\geq \beta - q_i^1 +\lambda \implies \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}} \delta(t) - \sum_{\Tilde{T_i^c}-w}^{\Tilde{T_i^c}} \delta(t) \geq \beta - q_i^1, \notag
\end{eqnarray}
where the last inequality comes from the fact that $\sum_{\Tilde{T_i^c}-w}^{\Tilde{T_i^c}} \delta(t) \leq \lambda$. We note that $\big(T_{i+1}^{c}-T_{i}^{c} - (w+1)\big)$
is lower bounded by the steepest descent when $p(t)=p_{\mathrm{max}}$, $a(t)=L$
and $h(t)=\eta L$,
\begin{eqnarray}
&& T_{i+1}^{c}-T_{i}^{c} - (w+1) \geq \frac{\beta - q_i^1}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}} \notag \\
&& \implies T_{i+1}^{c}-T_{i}^{c} \geq \frac{\beta - q_i^1}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}}+w
\end{eqnarray}
So one can see that in both of these cases we always have
\begin{eqnarray}
T_{i+1}^{c}-T_{i}^{c} \geq \frac{\beta - q_i^1}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}}+w
\label{eq:w type-1 length lowebound}
\end{eqnarray}
By Eqns.~\eqref{eq:w type-1 delta lowebound}-\eqref{eq:w type-1 length lowebound}, we obtain
\begin{eqnarray}
&& \mathrm{Cost^{\rm t1}} \geq \beta+ \notag \\
&&\frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \big( \frac{\beta - q_i^1}{L(p_{\mathrm{max}}+\eta c_{\mathrm{g}}-c_{\mathrm{o}}-\frac{c_{\mathrm{m}}}{L})}+w \big)c_{\mathrm{m}} + \notag \\
&& \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \big(\beta - q_i^1+\lambda \big)= \beta+ \frac{(\beta - q_i^1)(Lc_{\mathrm{o}}+c_{\mathrm{m}})}{L(p_{\mathrm{max}}+\eta c_{\mathrm{g}}-c_{\mathrm{o}}-\frac{c_{\mathrm{m}}}{L})} \notag \\
&& + \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \big(wc_{\mathrm{m}} + \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}} \lambda \big)
\label{eq:w type-1 cost eq6}
\end{eqnarray}
Since there are $m_{1}$ type-1 critical segments, according to Eqn.~\eqref{eq:w type-1 cost eq6}, we obtain
\begin{eqnarray}
& & {\rm Cost}^{{\rm ty\mbox{-}}1}(y_{{\rm OFA}}) \geq m_{1}\beta+\sum_{i\in {\mathcal K}_{1}}\Big(\frac{(\beta- q_i^1)(Lc_{\mathrm{o}}+c_{\mathrm{m}})}{L\big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}\big)-c_{\mathrm{m}}} \notag\\
&&+\frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \big( wc_{\mathrm{m}} + \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}} \lambda \big) \Big).
\end{eqnarray}
\end{proof}
\subsection{Proof of Lemma~\ref{lem:lower bound type-2}}
\begin{lem} For (\textbf{type-2}), we have
\label{lem:lower bound type-2}
\begin{eqnarray}
{\rm Cost}^{{\rm ty\mbox{-}}2}(y_{{\rm OFA}}) \geq \sum_{i\in {\mathcal K}_{2}}\Big(\frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} (w \cdot c_{\mathrm{m}}-q_{i}^{2}) \Big)
\end{eqnarray}
\end{lem}
\begin{proof}
Consider a particular type-2 segment $[T_{i}^{c}+1,T_{i+1}^{c}]$. We denote its offline cost as $ \mathrm{Cost^{\rm t2}} $. We have:
\begin{eqnarray}
\mathrm{Cost^{\rm t2}} = \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}}\psi\big(\sigma(t),0\big)
\label{eq:w type-2 cost eq1}
\end{eqnarray}
By \cite[Lemma~4]{Minghua2013SIG} and simplification we obtain
\begin{eqnarray}
& & \mathrm{Cost^{\rm t2}} \geq \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \bigg( \sum_{t=T_{i}^{c}+1}^{T_{i+1}^{c}}\delta(t)+ (T_{i+1}^{c}-T_{i}^{c})c_{\mathrm{m}} \bigg)\notag \\
&& = \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \bigg( -\beta + (T_{i+1}^{c}-T_{i}^{c})c_{\mathrm{m}} \bigg)
\label{eq:w type-2 cost eq2}
\end{eqnarray}
Furthermore, we note that $\big(T_{i+1}^{c}-T_{i}^{c}\big)$
is lower bounded by the steepest descent when $\min \{a(t),h(t)\}=0$,
\begin{equation}
T_{i+1}^{c}-T_{i}^{c}\geq w + \frac{\beta - q_{i}^{2}}{c_{\mathrm{m}}}
\label{eq:w type-2 cost eq3}
\end{equation}
By Eqns.~(\ref{eq:w type-2 cost eq2})-(\ref{eq:w type-2 cost eq3}), we obtain
\begin{eqnarray}
\mathrm{Cost^{\rm t2}} & \geq & \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} ( w \cdot c_{\mathrm{m}} -q_{i}^{2})
\label{eq:w type-2 cost roof-2}
\end{eqnarray}
Since there are $m_{2}$ type-2 critical segments, according to Eqn.~(\ref{eq:w type-2 cost roof-2}), we obtain
\begin{eqnarray}
{\rm Cost}^{{\rm ty\mbox{-}}2}(y_{{\rm OFA}}) \geq \sum_{i\in {\mathcal K}_{2}}\Big(\frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} ( w \cdot c_{\mathrm{m}} -q_{i}^{2})\Big).
\end{eqnarray}
\end{proof}
\subsection{Proof of Lemma~\ref{lem:Roff}}
\label{sec:appendix B}
\begin{lem}
Consider a window $[t,t+w]$, in the type-1 critical segment. If we have $\Delta_{t}^{t+w} \leq \lambda $, then the cost of the online algorithm over the cost of the optimal offline algorithm in this window is upper bounded by the following:
\label{lem:Roff}
\begin{eqnarray}
\frac{{\rm Cost}(y_{{\sf CHASEpp}(w)})}{{\rm Cost}(y_{{\rm OFA}})}\leq \frac{wc_{\mathrm{m}}+\lambda}{wc_{\mathrm{m}}+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda}.
\end{eqnarray}
\end{lem}
\begin{proof}
By \cite[Lemma~4]{Minghua2013SIG} and simplification we know that in a type-1 critical segment for a window with $\Delta_{t}^{t+w}=\lambda$, for the offline cost we have
\begin{eqnarray}
{\rm Cost}(y_{{\rm OFA}}) \geq \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}}\big( wc_{\mathrm{m}}+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda \big) \notag\\
\end{eqnarray}
On the other hand, if in the type-1 critical segment the online algorithm keeps the generator off in this window, the cost difference between the online and the offline algorithms is equal to $\lambda$, which means
\begin{eqnarray}
{\rm Cost}(y_{{\sf CHASEpp}(w)}) - {\rm Cost}(y_{{\rm OFA}}) =\lambda
\end{eqnarray}
Hence we have
\begin{eqnarray}
\frac{{\rm Cost}(y_{{\sf CHASEpp}(w)})}{{\rm Cost}(y_{{\rm OFA}})} = 1+\frac{\lambda}{{\rm Cost}(y_{{\rm OFA}})} \leq \frac{wc_{\mathrm{m}}+\lambda}{wc_{\mathrm{m}}+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda}
\end{eqnarray}
This completes the proof.
\end{proof}
\subsection{Proof of Lemma~\ref{lem:incresing}}
\begin{lem} $R_{\mathrm{on}}(a)$ is always a decreasing function of $a$ and $R_{\mathrm{off}}(a)$ is always an increasing function of $a$.
\label{lem:incresing}
\end{lem}
\begin{proof}
$R_{\mathrm{on}}(\lambda)$: To prove that $R_{\mathrm{on}}(\lambda)$ is always a decreasing function, we first take the derivative with respect to $\lambda$. To compute the derivative we only consider the maximization part of $R_{\mathrm{on}}(\lambda)$. The denominator of the derivative is always positive, and its numerator is given by
\begin{eqnarray}
- \big( \frac{(2\beta - q)c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}} \big) \big(1-\frac{c_{\mathrm{m}}}{L(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}})} \big).
\end{eqnarray}
Note that we have $q \in \{0,wc_{\mathrm{m}}\}$. If $wc_{\mathrm{m}} < 2 \beta$, the derivative is always negative. If $wc_{\mathrm{m}} \geq 2 \beta$, we show that $q=0$ attains the maximum in $R_{\mathrm{on}}(\lambda)$, which again shows that the derivative is negative. To show that for $wc_{\mathrm{m}} \geq 2 \beta$ we have $q=0$, we first take the derivative with respect to $q$; its denominator is always positive and its numerator is given by
\begin{eqnarray}
-\beta - \big( 2(wc_{\mathrm{m}}-\beta) + \frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda \big) \big(1-\frac{c_{\mathrm{m}}}{L(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}})} \big).
\end{eqnarray}
We can see that for $wc_{\mathrm{m}} \geq \beta$ the value of the derivative is always negative, which means that for $w c_{\mathrm{m}} \geq 2\beta$ the maximum value of $R_{\mathrm{on}}(\lambda)$ is attained at $q=0$. This proves that in both cases the derivative is negative, and hence $R_{\mathrm{on}}(\lambda)$ is always a decreasing function.
$R_{\mathrm{off}}(\lambda)$: Now we show that $R_{\mathrm{off}}(\lambda)$ is an increasing function. We take the derivative with respect to $\lambda$ and observe that its denominator is always positive and its numerator is given by
\begin{eqnarray}
\frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}wc_{\mathrm{m}},
\end{eqnarray}
which is always positive, and hence $R_{\mathrm{off}}(\lambda)$ is always an increasing function. This completes the proof.
\end{proof}
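The two monotonicity claims can also be sanity-checked numerically with a finite-difference test. All parameter values below are arbitrary samples (chosen only so that every constant is positive and $q<2\beta$); the symbol \texttt{s} stands in for $p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}$.

```python
def R_on(lam, q, beta=10.0, w=5, c_m=1.0, c_o=2.0, s=8.0, L=4.0):
    """R_on(lambda) for a fixed q; s = p_max + eta*c_g (sample values)."""
    C = c_o / s
    D = c_m / (L * (s - c_o))
    return (2*beta - q) / (beta + (2*w*c_m - q + C*lam) * (1 - D))

def R_off(lam, w=5, c_m=1.0, c_o=2.0, s=8.0):
    """R_off(lambda); s = p_max + eta*c_g (sample values)."""
    C = c_o / s
    return (w*c_m + lam) / (w*c_m + C*lam)
```

A small increase of $\lambda$ should decrease $R_{\mathrm{on}}$ (for both candidate values of $q$) and increase $R_{\mathrm{off}}$, in agreement with the derivative signs computed above.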
\section{Proof of Theorem~\ref{ref: competitive ratio}}
\label{sec:appendix C}
\begin{proof}
From Theorem~\ref{lem:crfunction} we have
\begin{eqnarray}
{\sf CR}({\mathcal A}(\lambda^*)) = \max \{R_{\mathrm{on}}(\lambda^*), R_{\mathrm{off}}(\lambda^*)\}
\end{eqnarray}
and from the definition of the optimal threshold $\lambda^*$ in~\eqref{C_a_optimal} we have
\begin{eqnarray}
\max \{R_{\mathrm{on}}(\lambda^*), R_{\mathrm{off}}(\lambda^*)\} = R_{\mathrm{on}}(\lambda^*)
\end{eqnarray}
Therefore for the competitive ratio we have:
\begin{eqnarray}
{\sf CR}({\mathcal A}(\lambda^*)) = R_{\mathrm{on}}(\lambda^*).
\end{eqnarray}
By using the definition of $\alpha$ in~\eqref{def:alpha} and simplification we obtain the result which completes the proof.
\end{proof}
\section{Proof of Theorem~\ref{thm:nOFA-optimal}}
\label{sec:appendix M}
\begin{proof}
This theorem includes two parts as follows:
\begin{itemize}
\item \textbf{Offline algorithm:} As shown in \cite[Theorems~5 and~6]{Minghua2013SIG}, when the generators are homogeneous, the offline algorithm based on the layering approach produces an optimal offline solution for \textbf{MCMP}. For the case with multiple heterogeneous generators, assigning the bottom layers to the generators with larger capacities minimizes the start-up cost, while the operational cost does not depend on the capacity $L_n$ and is the same for all generators. Hence, the layering approach again produces an optimal offline solution.
\item \textbf{Online algorithm:} In this case, each generator solves its own sub-problem with a given sub-demand. Hence, the competitive ratio of the algorithm is upper bounded by the largest competitive ratio among all generators:
\begin{equation}
{\sf CR} \leq \max\limits_{{n \in [1,N] }} 3-2g(\alpha_n, w),
\end{equation}
where
\begin{equation}
\alpha_n= \frac{c_{\mathrm{o}}+c_{\mathrm{m}}/L_n}{p_{\mathrm{max}}+\eta c_{\mathrm{g}}}.
\end{equation}
Since the competitive ratio is an increasing function of $L$ and $L_1 \geq L_2 \geq \cdots \geq L_N$, we have:
\begin{equation}
{\sf CR} \leq 3-2g(\alpha_1, w).
\end{equation}
\end{itemize}
This completes the proof.
\end{proof}
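To illustrate why the bound is governed by the largest-capacity generator, the following Python sketch evaluates the per-generator bound in the no-prediction special case, where we take $g(\alpha,0)=\alpha$ as an assumption; all numeric parameters below are illustrative toy values, not the paper's trace data:

```python
def alpha_n(L, c_o, c_m, p_max, eta, c_g):
    # price-discrepancy parameter of a generator with capacity L
    return (c_o + c_m / L) / (p_max + eta * c_g)

# Toy parameters (illustrative only).  With g(alpha, 0) = alpha assumed,
# the per-generator bound reduces to 3 - 2 * alpha_n.
caps = [5000.0, 3000.0, 1000.0]  # L_1 >= L_2 >= L_3
bounds = [3 - 2 * alpha_n(L, 0.05, 110.0, 0.2, 1.8, 0.018) for L in caps]
# the binding (largest) bound comes from the largest capacity L_1
```

Since $\alpha_n$ shrinks as $L_n$ grows, the bound $3-2g(\alpha_n,w)$ is largest for $L_1$, consistent with the final inequality of the proof.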
\section{Proof of Theorem~\ref{CRLOB}}
\label{sec:appendix L}
\begin{proof}
Finding the lower bound of the competitive ratio is equivalent to constructing a special input $\sigma(t) \triangleq (a(t), h(t), p(t))$ such that for any deterministic online algorithm $\mathcal A$, we have:
\begin{equation}
\frac{{\rm Cost}({\sf y}_{\mathcal A};\sigma)}{{\rm Cost}({\sf y}_{\rm OFA};\sigma)} \geq {\sf cr}(w).
\end{equation}
In \cite{Minghua2013SIG}, it is shown that when there is no prediction ($w=0$), the following input gives us the lower bound:
\begin{eqnarray}
\delta(t)=
\begin{cases}
\delta_{max}, & \mathrm{if } \quad y(t-1)=0 ,\\
\delta_{min}, & \mathrm{if } \quad y(t-1)=1.
\end{cases}
\end{eqnarray}
As one can see in this input, as long as the algorithm keeps the generator off, the adversary keeps giving full demand ($\delta_{max}$) as the input, and as soon as the algorithm turns on the generator, the adversary starts giving zero demand ($\delta_{min}$) as the input.
This simple input is designed so that it always tries to hurt the algorithm the most. In creating the lower bound for the case with perfect prediction, we follow the same logic, but we need to design a different input.
If we keep giving full demand until the algorithm turns on the generator, then at some point we have already given a lot of full demand to the algorithm, and by turning on the generator, the algorithm can enjoy a window of full demand. In this way, we cannot really hurt the algorithm. Therefore, we need to carefully choose the demand in the future window so that it gives the algorithm some incentive to turn on the generator, but at the same time does not give it a lot of demand to enjoy in the coming window. By carefully adjusting this demand, we can find the lower bound.
At any time $t$, we need to construct the input of time $t$, while the algorithm has already decided the generator status for the interval $[1,t-w-1]$. We need to know for how many consecutive time slots the algorithm has kept the generator off. To this end, we define a counter $c(t)$. This counter resets any time the generator is turned on and keeps increasing while the algorithm keeps the generator off. We define it as follows:
\begin{eqnarray}
c(t)=
\begin{cases}
0, & \mathrm{if } \quad y(t-w-1)=1,\\
c(t-1)+1, & \mathrm{if } \quad y(t-w-1)=0,
\end{cases}
\end{eqnarray}
where for the initial values we have $c(0)=0$ and $y(t)=0$ for $t \in [-w, 0]$.
We construct the worst-case input as follows:
\begin{eqnarray}
\delta(t)=
\begin{cases}
\delta_1, & \mathrm{if } \quad c(t) \leq \frac{\beta -w \delta_2}{\delta_1} ,\\
\delta_2, & \mathrm{if } \quad c(t) > \frac{\beta -w \delta_2}{\delta_1}.
\end{cases}
\end{eqnarray}
Now we need to calculate the proper values for $\delta_1$ and $\delta_2$. We use Lemma~\ref{prlb} toward this end.
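The adversarial construction above can be sketched in Python; the function and argument names are illustrative, and the algorithm's decisions are supplied as a pre-recorded on/off sequence:

```python
def adversary_deltas(y, w, beta, d1, d2):
    """Worst-case input of the proof: give demand d1 while the off-counter
    c(t) is at most (beta - w*d2)/d1, then drop to d2.  y[t] is the online
    algorithm's on/off decision at slot t (index 0 unused); slots t <= 0
    count as off, matching c(0)=0 and y(t)=0 for t in [-w, 0]."""
    T = len(y) - 1
    thresh = (beta - w * d2) / d1
    c_prev, deltas = 0, []
    for t in range(1, T + 1):
        # the decision y(t-w-1) was fixed w+1 slots ago
        past = y[t - w - 1] if t - w - 1 >= 1 else 0
        c = 0 if past == 1 else c_prev + 1  # counter c(t)
        deltas.append(d1 if c <= thresh else d2)
        c_prev = c
    return deltas
```

For instance, with $w=1$, $\beta=4$, $\delta_1=2$, $\delta_2=1$, an algorithm that never turns on sees the demand drop from $\delta_1$ to $\delta_2$ once the counter passes the threshold, while turning on resets the counter and restores $\delta_1$.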
\subsection{Proof of Lemma~\ref{prlb}}
\label{sec:lemma1proof}
\begin{proof}
Consider the input shown in Fig.~\ref{fig:lbexample}. If the algorithm turns on the generator at some point $s\in [1,\frac{\beta -w\delta_2}{\delta_1}]$, we can calculate the performance ratio as the online cost over the offline cost as follows:
\begin{eqnarray}
PR(s)=1+ \frac{\beta-(q(s+w)-q(s))+ \max(q(s+w)- wc_{\mathrm{m}},0) }{ \frac{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}} \big((s+w)c_{\mathrm{m}} + q(s+w) \big) },
\end{eqnarray}
where $q(s)= \Delta(s)+\beta$.
Since we are looking for the lower bound, we find the minimum across all possible values of $s$. We call this value $R_{\mathrm{on}}(\delta_1,\delta_2)$ and define it as follows:
\begin{eqnarray}
R_{\mathrm{on}}(\delta_1,\delta_2)= \min_{s\in [1,\frac{\beta -w\delta_2}{\delta_1}]} PR(s).
\end{eqnarray}
By using the same logic in the proof of Theorem~\ref{lem:crfunction}, if the algorithm does not turn on the generator in $s\in [1,\frac{\beta -w\delta_2}{\delta_1}]$, and keeps the generator off, the ratio of the online cost increment over the offline cost increment is
\begin{eqnarray}
R_{\mathrm{off}}(\delta_2)=\frac{c_{\mathrm{m}}+\delta_2}{c_{\mathrm{m}}+\frac{Lc_{\mathrm{o}} \alpha }{Lc_{\mathrm{o}}+c_{\mathrm{m}}}\delta_2}.
\end{eqnarray}
Hence the lower bound can be calculated by finding the minimum of these two values:
\begin{eqnarray}
{\sf CR}_{\mathrm{l}}(\delta_1,\delta_2) = \min \{R_{\mathrm{on}}(\delta_1,\delta_2), R_{\mathrm{off}}(\delta_2)\}.
\end{eqnarray}
This completes the proof of Lemma~\ref{prlb}.
\end{proof}
Similar to Theorem~\ref{lem:crfunction}, we want to find $(\delta_1^*,\delta_2^*)$ that maximizes the lower bound ${\sf CR}_{\mathrm{l}}(\delta_1,\delta_2)$. First, for each $\delta_2$, we find a corresponding $\delta_1$ such that $\delta_1= \text{arg}\max\limits_{ \delta }\, R_{\mathrm{on}}(\delta,\delta_2)$. This reduces $R_{\mathrm{on}}(\delta_1,\delta_2)$ to a single variable function of $\delta_2$.
\begin{subequations}\label{d1}
\begin{eqnarray}
&& \delta_1= \text{arg}\max\limits_{ \delta }\, R_{\mathrm{on}}(\delta,\delta_2) \\
&\mbox{s.t.}& \delta_2 \leq \delta \leq (\beta - w\delta_2 )/w,\\
&& \delta_2 \leq \delta \leq L \big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}-\frac{c_{\mathrm{m}}}{L}\big).
\end{eqnarray}
\end{subequations}
For a given $\delta_2$, the function $R_{\mathrm{on}}(\delta_1,\delta_2)$ is concave in $\delta_1$. Therefore, we can easily find the corresponding $\delta_1$ for each $\delta_2$.
Now both $R_{\mathrm{on}}$ and $R_{\mathrm{off}}$ are functions of $\delta_2$, and we find the maximum of the minimum of the two single-variable functions. We know that for $\delta_2=0$ we have $R_{\mathrm{on}}(\delta_1,0) \geq R_{\mathrm{off}}(0)=1$, and $R_{\mathrm{off}}(\delta_2)$ is an increasing function. Hence, similar to \eqref{eq:finda}, we keep increasing $\delta_2$ until we find the intersection of the two functions. Therefore, $\delta_2^*$ can be obtained by solving the following optimization problem:
\begin{subequations}\label{deltaa2}
\begin{eqnarray}
&& \delta_2^*= \text{arg}\max\limits_{ \delta_2 }\, {\sf CR}_{\mathrm{l}}(\delta_1,\delta_2) \\
&\mbox{s.t.}& 0 \leq \delta_2 \leq \beta/(2w),\\
&& 0 \leq \delta_2\leq L \big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}-\frac{c_{\mathrm{m}}}{L}\big),\\
&& R_{\mathrm{on}}(\delta_1,\delta_2) \geq R_{\mathrm{off}}(\delta_2), \\
&& \delta_1 ~\text{is obtained from}~\eqref{d1}.
\end{eqnarray}
\end{subequations}
Therefore, we always have $R_{\mathrm{on}}(\delta_1^*,\delta_2^*) \geq R_{\mathrm{off}}(\delta_2^*)$ and ${\sf CR}_{\mathrm{l}}(\delta_1^*,\delta_2^*) = R_{\mathrm{off}}(\delta_2^*)$, which completes the proof of Theorem~\ref{CRLOB}.
\end{proof}
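The intersection search behind \eqref{deltaa2} (increase $\delta_2$ until the increasing $R_{\mathrm{off}}$ meets the decreasing $R_{\mathrm{on}}$) can be sketched generically with a bisection; the surrogate functions below are toy stand-ins for illustration, not the paper's exact expressions:

```python
def crossing(increasing, decreasing, lo, hi, iters=60):
    """Bisection for the point where an increasing and a decreasing
    function intersect; the max of their pointwise min is attained there."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if increasing(mid) < decreasing(mid):
            lo = mid  # crossing lies to the right
        else:
            hi = mid  # crossing lies to the left
    return 0.5 * (lo + hi)

# Toy surrogates standing in for R_off and R_on (not the paper's formulas):
R_off = lambda d: 1.0 + d  # increasing, with R_off(0) = 1
R_on = lambda d: 3.0 - d   # decreasing
d_star = crossing(R_off, R_on, 0.0, 3.0)
# intersection at d = 1, where both ratios equal 2
```

At the returned point the two ratios coincide, which is exactly where $\min\{R_{\mathrm{on}}, R_{\mathrm{off}}\}$ is maximized when one function increases and the other decreases.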
\section{Preliminary on Offline and Online Solutions}
\label{sec:chase}
We first review state-of-the-art online solutions and the optimal offline solution for $\textsf{MCMP}_{\mathrm{s}}$, providing necessary understandings towards designing a new algorithm later.
\subsection{Optimal Offline Algorithm Design}
In the offline setting the input $\big[\sigma(t)\big]_{t=1}^{T}$ is given at the beginning. We define
\begin{equation}
\delta(t)\triangleq\psi(\sigma(t),0)-\psi(\sigma(t),1),\label{eqn:delta-definition}
\end{equation}
to capture the single-slot cost difference between using or not using the local generation. When $\delta(t)>0$ (resp. $\delta(t)<0$), we tend to turn on (resp. off) the generator. However, to avoid turning on/off the generator too frequently, the cumulative cost difference $\Delta(t)$ is defined as
\begin{equation}
\Delta(t)\triangleq\min\Big\{0,\max\{-\beta,\Delta(t-1)+\delta(t)\}\Big\},\label{eqn:Delta-definition}
\end{equation}
where the initial value is $\Delta(0)=-\beta$. Now that $\Delta(t)$ is defined, we divide the time horizon $\mathcal{T}$ into several disjoint sets called critical segments. As shown in Fig.~\ref{fig:An-example-of-CHASE}, each critical segment is identified by a pair of critical points $(T_{i}^{c},\tilde{T_{i}^{c}})$ and corresponds to an interval where $\Delta(t)$ goes from $-\beta$ to $0$ or from $0$ to $-\beta$, without touching the boundaries in between.
Based on the boundary values of these critical segments, we classify them into four categories as follows:
\begin{itemize}
\item \textbf{type-start}: $[1,T_{1}^{c}]$
\item \textbf{type-$1$}: $[T_{i}^{c}+1,T_{i+1}^{c}]$, if $\Delta(T_{i}^{c})=-\beta$
and $\Delta(T_{i+1}^{c})=0$
\item \textbf{type-$2$}: $[T_{i}^{c}+1,T_{i+1}^{c}]$, if $\Delta(T_{i}^{c})=0$
and $\Delta(T_{i+1}^{c})=-\beta$
\item \textbf{type-end}: $[T_{k}^{c}+1,T]$.
\end{itemize}
In \cite{Minghua2013SIG}, the optimal offline solution of $\textsf{MCMP}_{\mathrm{s}}$ is given by
\begin{equation}\label{thm:OFA-optimal}
y^{\star}(t)\triangleq
\begin{cases}
1, & \text{if $t$ is in a type-1 segment},\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
After getting the generator on/off status $y^{\star}(t)$, we apply~\eqref{lem:fMCMP} to obtain the optimal $u(t)$, $v(t)$, and $s(t)$.
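For concreteness, the recursion \eqref{eqn:Delta-definition} and the type-1 marking rule \eqref{thm:OFA-optimal} can be sketched in Python. This is a simplified sketch: it marks $y^{\star}(t)=1$ on each maximal interval running from a slot with $\Delta=-\beta$ up to the next slot with $\Delta=0$, and the cost-difference sequence is a toy input:

```python
def offline_schedule(delta, beta):
    """Clipped cumulative recursion for Delta(t) and a simplified
    type-1 marking rule for the offline on/off schedule y*(t)."""
    T = len(delta)
    Delta = [-beta]  # Delta(0) = -beta
    for t in range(1, T + 1):
        Delta.append(min(0.0, max(-beta, Delta[t - 1] + delta[t - 1])))
    y = [0] * (T + 1)
    last_bottom = 0
    for t in range(1, T + 1):
        if Delta[t] == -beta:
            last_bottom = t  # segment boundary at -beta
        elif Delta[t] == 0.0:
            for s in range(last_bottom + 1, t + 1):
                y[s] = 1  # type-1 segment: generator on
    return Delta, y
```

For example, with $\beta=3$ and the toy sequence $\delta=(1,1,1,-1,-1,-1)$, $\Delta(t)$ rises from $-\beta$ to $0$ and back, so $y^{\star}$ is on exactly during the rising (type-1) interval.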
\subsection{Online Algorithm Without Prediction} \label{onlinealg}
To evaluate the performance of the online algorithm, the competitive ratio is defined as follows:
\begin{defn}
Let $\mathcal A$ be an online algorithm for $\textsf{MCMP}_{\mathrm{s}}$. Define
\begin{equation}
{\sf CR}(\mathcal A)\triangleq \max\limits_{\sigma} \frac{{\rm Cost}({\sf y}_{\mathcal A})}{{\rm Cost}({\sf y}_{\rm OFA})}.
\end{equation}
\end{defn}
It is the worst-case ratio of the online cost over the offline cost.
We proceed by explaining the online algorithm \textsf{CHASE} \cite{Minghua2013SIG} that is later used in designing our new prediction-aware online algorithm. Recall that in the offline setting, we can detect the beginning of each critical segment right after the process enters it and set $y(t)$ accordingly. However, in the online setting, with no future information, it is impossible to do so. But, as shown in Fig.~\ref{fig:An-example-of-CHASE}, when $\Delta(t)=0$ (resp. $\Delta(t)=-\beta$), we are sure that we have entered a type-1 (resp. type-2) segment. Hence, $\textsf{CHASE}$ sets $y(t)=1$ (resp. $y(t)=0$).
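This online rule can be sketched as follows; it is a minimal sketch of the thresholding logic only, with a toy cost-difference stream, not the full \textsf{CHASE} implementation:

```python
def chase(delta_stream, beta):
    """Prediction-oblivious CHASE: turn on when Delta(t) reaches 0,
    turn off when it reaches -beta, otherwise keep the previous state."""
    Delta, y, history = -beta, 0, []
    for d in delta_stream:
        Delta = min(0.0, max(-beta, Delta + d))
        if Delta == 0.0:
            y = 1  # surely inside a type-1 segment
        elif Delta == -beta:
            y = 0  # surely inside a type-2 segment
        history.append(y)
    return history
```

On the toy sequence $\delta=(1,1,1,-1,-1,-1)$ with $\beta=3$, \textsf{CHASE} turns on only once $\Delta$ reaches $0$, lagging the offline optimal by the time $\Delta$ takes to climb from $-\beta$, which is the source of its competitive-ratio loss.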
Intuitively, $\textsf{CHASE}$ tracks the offline optimal in an online manner, and its competitive ratio satisfies
\begin{equation}
{\sf CR}(\textsf{CHASE}) \leq 3-2\alpha,
\label{thm:chase}
\end{equation}
where
\begin{equation}
\alpha \triangleq \frac{c_{\mathrm{o}}+c_{\mathrm{m}}/L}{p_{\mathrm{max}}+\eta c_{\mathrm{g}}} \in (0,1]
\label{def:alpha}
\end{equation}
and no other deterministic online algorithm can achieve a better competitive ratio. The adversarial input $\big[\sigma(t) \triangleq (a(t), h(t), p(t)) \big]_{t=1}^{T}$ that results in this worst-case competitive ratio for $\textsf{CHASE}$ is the input that always tries to make the online algorithm incur the maximum cost. Therefore, when the generator is on, the adversary gives zero demand $a(t)=0$, and when the generator is off, the adversary gives the maximum demand $a(t)=L$, as follows:
\begin{eqnarray}
a(t)=
\begin{cases}
L, & \mathrm{if } \, \, \, y(t-1)=0 ,\\
0, & \mathrm{if } \, \, \, y(t-1)=1,
\end{cases}
\quad h(t)= \eta a(t), ~\text{and}~ p(t)= p_{\mathrm{max}}.
\end{eqnarray}
Later in Sec.~\ref{subsec:worst}, when analyzing the competitive ratio of our new prediction-aware online algorithm, we also present its corresponding worst-case input.
\begin{figure}[t]
\centering{}\includegraphics[ width=0.85\columnwidth]{fig/deltadef.png}\caption{\label{fig:An-example-of-CHASE}An example of $\Delta(t)$ and the online algorithms \textsf{CHASE}, \textsf{CHASElk}, and \textsf{CHASEpp}. The prediction-aware online algorithms detect the segment type $w$ time slots before the prediction-oblivious \textsf{CHASE}.}
\vspace{-4mm}
\end{figure}
\subsection{Online Algorithm With Prediction}
\textsf{CHASE} can be extended straightforwardly to the setting with prediction, where at each time slot $t$ the precise prediction of the input for a window of $w$ time slots, $\big[\sigma(\tau)\big]_{t}^{t+w}$, is available. As one can see from Fig.~\ref{fig:An-example-of-CHASE}, the algorithm \textsf{CHASElk$(w)$} \cite{Minghua2013SIG} behaves exactly the same as \textsf{CHASE}, except that it can detect the critical segment type and turn on/off the generator $w$ time slots ahead of \textsf{CHASE}. Hence, \textsf{CHASElk$(w)$} achieves a better competitive ratio that satisfies
\begin{equation}
{\sf CR}(\textsf{CHASElk}(w)) \leq 3-2f(\alpha, w)\leq 3-2\alpha , \label{eq:crchaselk}
\end{equation}
where
\begin{equation}
f(\alpha, w)=\alpha+ \frac{(1-\alpha)}{1+ \beta \frac{Lc_{\mathrm{o}}+c_{\mathrm{m}}/(1-\alpha)}{wc_{\mathrm{m}}(Lc_{\mathrm{o}}+c_{\mathrm{m}})}}.
\end{equation}
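As a quick numerical check, the bound $3-2f(\alpha,w)$ can be evaluated for a range of window sizes; the parameter values below are toy choices for illustration, not the paper's trace data:

```python
def f(alpha, w, beta, L, c_o, c_m):
    """Competitive-ratio exponent of CHASElk(w); f(alpha, 0) = alpha."""
    if w == 0:
        return alpha
    denom = 1 + beta * (L * c_o + c_m / (1 - alpha)) / (w * c_m * (L * c_o + c_m))
    return alpha + (1 - alpha) / denom

# Illustrative (toy) parameters; alpha must lie strictly below 1 here.
beta, L, c_o, c_m, alpha = 1400.0, 1000.0, 0.051, 110.0, 0.5
bounds = [3 - 2 * f(alpha, w, beta, L, c_o, c_m) for w in range(0, 11)]
# the bound shrinks monotonically from 3 - 2*alpha toward 1 as w grows
```

The bound starts at $3-2\alpha$ for $w=0$ and decreases monotonically toward $1$ as the prediction window grows, matching \eqref{eq:crchaselk}.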
As discussed in Sec.~\ref{sec:relwork}, \textsf{CHASElk$(w)$} achieves the best competitive ratio with prediction prior to our study.
Next, we tackle this problem from a different perspective and propose a new threshold-based online algorithm that is substantially different from the existing algorithms.
This prediction-aware online algorithm attains the best competitive ratio to date by exploiting a new design space.
\section{Conclusion and Future Work}
\label{sec:conc}
We investigate how to leverage the prediction of the near future for online energy generation scheduling in microgrids. We tackle this problem from a new perspective and propose an effective online algorithm design that is fundamentally different from the existing algorithms. Our novel threshold-based online algorithm attains the best competitive ratio to date, which is upper-bounded by $3-2/(1+\mathcal{O}(\frac{1}{w}))$, where $w$ is the prediction window size. We also characterize a non-trivial lower bound of the competitive ratio and show that the competitive ratio of our algorithm is only $9\%$ away from the lower bound, when a few hours of prediction is available. Our theoretical and empirical evaluations demonstrate that our online algorithm outperforms the state-of-the-art ones. An interesting future direction is to exploit our new design space in developing competitive algorithms for general MTS problems with limited prediction. We also plan to incorporate energy storage system into the problem setting and algorithm design.
\vspace{-1mm}
\section{Acknowledgement}
The work presented in this paper was supported in part by a Start-up Grant from School of Data Science (Project No. 9380118), City University
of Hong Kong, and a General Research Fund from Research Grants Council, Hong Kong (Project No. CityU 11206821).
\section{NUMERICAL EXPERIMENTS}
\label{sec:exp}
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/difference.jpg}
\par\end{center}
\caption{\label{fig:alphaomegadif} Competitive ratio improvement as a function of $\alpha$ and $w$.}
\end{minipage}\hfill{
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/Lower.jpg}
\par\end{center}
\caption{\label{fig:crlb} Lower bound of the competitive ratio as a function of $\alpha$ and $w$.}
\end{minipage}
\vspace{-3mm}
\end{figure}
We carry out numerical experiments using real-world traces to evaluate the performance of \textsf{CHASEpp}. We calculate the cost incurred by using only external electricity, heating, and wind energy (when no generator is utilized) as a benchmark, and we report the cost reduction of different algorithms compared to this benchmark. We compare the performance of the optimal offline algorithm ${\sf OPT}$, ${\sf CHASE}$, ${\sf CHASElk}^+$, ${\sf CHASEpp}^+$, and ${\sf RHC}$, which is a popular algorithm widely used in the control literature \cite{Kwon1977modified}, with both perfect and noisy prediction. The length of each time slot is 1 hour and the total cost incurred during one week ($T = 168$) is reported. \vspace{-3mm}
\subsection{Experiment Setting} \label{expsetting}
We obtain the electricity and heat demand traces from \cite{CEUS}, which belongs to a college in San Francisco, with yearly electricity consumption of around $154GWh$, and gas consumption of around $5.1 \times 10^6$ therms. We use wind power traces from \cite{NREL}, which are collected from a wind farm outside San Francisco with an installed capacity of $12MW$.
We obtain the electricity and natural gas price from PG\&E \cite{PGE}, and we deploy generators with the same specifications as the one in \cite{tecogen}, with heat recovery efficiency $\eta$ set to be $1.8$. The incremental cost $c_{\mathrm{o}}$ and running cost $c_{\mathrm{m}}$ per unit time are set to be $\$0.051\ensuremath{/KWh}$ and $\$110/h$ respectively. We consider a heating system with the unit heat generation cost of $c_{\mathrm{g}}=\$0.0179/KWh$, according to \cite{greenenergy} and the startup cost $\beta$ is set to be $\$1400$. The peak for the electricity demand is $30MW$, so we adopt $10$ generators with maximum capacity $1MW \times 3$, $3MW \times 4$, and $5MW \times 3$ to fully satisfy the demand. All the experiments are modeled and implemented in Matlab \cite{MATLAB:2021} using Gurobi optimization tools \cite{gurobi}.
\vspace{-1mm}
\subsection{Theoretical Ratio}
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1.01\columnwidth]{fig/new1.png}
\par\end{center}
\caption{\label{fig:comparecr} Competitive ratio as a function of prediction window size ($w$).}
\end{minipage}\hfill{
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/costreduction.jpg}
\par\end{center}
\caption{\label{fig:costreduction} Cost reduction as a function of prediction window size ($w$). }
\end{minipage}
\vspace{-3mm}
\end{figure}
In Fig.~\ref{fig:alphaomegadif}, we plot the competitive ratio improvement of our algorithm $\textsf{CHASEpp}^+(w)$ over $\textsf{CHASElk}^+(w)$ as a function of $\alpha$ and $w$. We can see that our algorithm improves the competitive ratio by up to $20\%$. As expected, by decreasing the value of $\alpha$ or increasing the window size, the competitive ratio improvement increases. But if we keep increasing the window size, both competitive ratios approach $1$, and the competitive ratio gap starts to decrease.
Similarly, the lower bound of the competitive ratio is depicted in Fig.~\ref{fig:crlb}. We can see that when $w$ approaches 0, the lower bound approaches $3-2\alpha$, which is the lower bound of the prediction-oblivious \textsf{CHASE}. We also plot the competitive ratio and the lower bound for different values of $w$ in Fig.~\ref{fig:comparecr}. Our algorithm's competitive ratio is always better than that of the previous algorithm, and it is not far from the lower bound. With a 3-hour prediction, our algorithm's competitive ratio is within $9\%$ of the lower bound. In the worst case with $w=10$ hours, it is away by at most $22\%$. One should note that, in practice, 3 hours is a more typical prediction window size \cite{santhosh2020current}.
\subsection{The Effect of Prediction Window}
In this section, we change the window size from $0$ to $15$, and we show the results in Fig.~\ref{fig:costreduction}. We observe that when the window size is large, all the algorithms perform very well and approach the optimal offline solution. On the other hand, when the window size is small, \textsf{RHC} performs very poorly while our online algorithm $\textsf{CHASEpp}^+(w)$ performs better than the previous algorithm. It is important to note that depending on the input structure, there may be a large performance discrepancy between $\textsf{CHASElk}^+(w)$ and $\textsf{CHASEpp}^+(w)$. In the following section, we show how our new online algorithm can improve the previous algorithm's performance by exploiting the structure of the predicted information.
{\bf Effect of the cumulative differential cost}: In this section, we build two inputs and depict the cost reduction of the two algorithms for these inputs in Fig.~\ref{fig:crdifference1} and Fig.~\ref{fig:crdifference2}. For the first input, shown in Fig.~\ref{fig:crdifference1} with $a(t)=L$, when $\Delta(t+w)$ reaches zero the cumulative differential cost in the window is large; hence both algorithms turn on the generator and have the same performance. For the second input, shown in Fig.~\ref{fig:crdifference2} with $a(t)=L/4$, when $\Delta(t+w)$ reaches zero the cumulative differential cost is small, so the new algorithm does not turn on the generator and does not spend the additional startup cost, which leads to its better performance.
Therefore, for a general input, if a large demand $a(t)=L$ arrives in every time slot, both algorithms perform very well. But if the demand is small, e.g., $a(t)=L/4$, the cumulative differential cost is also small, and using our new algorithm significantly improves the performance.
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=.98\columnwidth]{fig/crex1.jpg}
\par\end{center}
\begin{center}
\includegraphics[width=1\columnwidth]{fig/crexample1.jpg}
\par\end{center}
\caption{\label{fig:crdifference1}Input with $a(t)\in\{0,L\}$, $h(t)=\eta a(t)$, and $p(t)=p_{\mathrm{max}}$, where both algorithms perform the same.}
\end{minipage}\hfill{
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=.98\columnwidth]{fig/crex2.jpg}
\par\end{center}
\begin{center}
\includegraphics[width=1\columnwidth]{fig/crexample2.jpg}
\par\end{center}
\caption{\label{fig:crdifference2}Input with $a(t)\in\{0,L/4,L\}$, $h(t)=\eta a(t)$, and $p(t)=p_{\mathrm{max}}$, where the new algorithm performs better.}
\end{minipage}
\vspace{-4mm}
\end{figure}
\subsection{The Effect of Prediction Error} \label{predictionerror}
While the day-ahead electricity demand prediction has an error range of $2$-$3\%$, the highly fluctuating nature of wind power makes the next hour's prediction error usually around $20$-$40\%$ \cite{Wang2018MultistepAW}. Therefore, it is important to see how the prediction error affects our online algorithm's performance. We obtain real-world wind power forecasting error distributions from \cite{hodge2012wind}, where the mean and standard deviation of the errors are based on typical forecasts in the U.S., and a hyperbolic distribution is used to represent the error. In \cite{hodge2012characterizing}, it has been shown that the hyperbolic distribution is superior to the normal distribution in capturing wind power forecasting error. Still, to compare the two, we generate wind power forecasting errors from both. We start with the real-world hyperbolic distribution and a zero-mean Gaussian, and in each time slot we add the errors to the actual values. We also increase the standard deviation by $0$ to $100\%$ of the total installed capacity and the total peak demand for the wind power error and the heat demand error, respectively. In Fig.~\ref{fig:crerror1} and~\ref{fig:crerror3}, we report the average cost reduction of the algorithms over $100$ runs for two different lookahead window sizes of $1$ and $3$ hours with Gaussian errors. In Fig.~\ref{fig:hype1} and~\ref{fig:hype3}, the results of the simulation for the real-world prediction errors with hyperbolic distribution are shown. It is important to note that for a $3$-hour prediction window size, the errors are often in the $20$-$40\%$ range \cite{Wang2018MultistepAW}. Therefore, by increasing the standard deviation up to $100\%$, we are stress-testing the algorithm. When $w$ is small, both algorithms are robust to the prediction error. When the window size increases, however, the previous algorithm becomes more sensitive, and its performance starts deteriorating.
This is because, for a large window size, the prediction error of each time slot aggregates over the window, and if the window size becomes too large, the prediction can even worsen the algorithm's performance. On the other hand, the new online algorithm keeps its performance even for large prediction errors. The reason is that instead of only detecting the segment type, our algorithm checks the cumulative differential cost and only turns on the generator when it sees enough benefit in the look-ahead window.
\vspace{-1mm}
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/norm1.jpg}
\par\end{center}
\caption{\label{fig:crerror1}Cost reduction for different sizes of the normal prediction error ($w=1$).}
\end{minipage}\hfill{
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/norm3.jpg}
\par\end{center}
\caption{\label{fig:crerror3}Cost reduction for different sizes of the normal prediction error ($w=3$).}
\end{minipage}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/hype1.jpg}
\par\end{center}
\caption{\label{fig:hype1}Cost reduction for different sizes of the hyperbolic prediction error ($w=1$).}
\end{minipage}\hfill{
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/hype3.jpg}
\par\end{center}
\caption{\label{fig:hype3}Cost reduction for different sizes of the hyperbolic prediction error ($w=3$).}
\end{minipage}
\vspace{-4mm}
\end{figure}
\section{Introduction}
Central to online decision-making problems is the presence of future information, which, if available, determines the optimal decisions taken currently. Without knowledge of future information, competitive online algorithms are robust decision-making algorithms that can offer a worst-case guarantee to their sub-optimal decisions, against the optimal offline decisions with complete future information. In microgrids, it is essential to serve the fluctuating demands using local generators, intermittent renewable energy sources, and an external grid with time-varying tariffs. This is a well-studied class of problems in smart grid literature (including economic dispatching \cite{gaing2003particle} and unit commitment problems \cite{kazarlis1996genetic}). It is appealing to employ competitive online algorithms for efficient energy management in microgrids \cite{narayanaswamy2012online}. In practice, however, prediction is often plausible within a limited time window. For example, in smart grid, the advances of machine learning and big data analytics enable relatively accurate renewable energy forecasting with several hours ahead \cite{Wang2018MultistepAW}. The availability of predicted future information will certainly enhance the design of online decision-making algorithms, providing the missing information for optimal decisions. Nonetheless, prediction is never perfect. When we consider only a limited time window of accurate prediction, it may not be sufficient to determine the current optimal decisions. Thus, a worst-case guarantee is still desirable to benchmark an online algorithm's sub-optimal decisions using limited future information against the optimal decisions with complete future information.
In this paper, a novel prediction-aware online algorithm is provided for energy generation scheduling in microgrids that considers a prediction window. We note that it is non-trivial to design a competitive prediction-aware online algorithm. A straightforward approach is to use receding horizon control (RHC), which determines the best possible decisions based on only the predicted future information, but does not consider any future events beyond the prediction window. Hence, RHC is not robust against the uncertainty beyond prediction. Therefore, a more robust algorithm is needed that can both harness the predicted information and accommodate the uncertainty beyond prediction. Such an algorithm should be sufficiently general to consider a parameterized prediction window with any window size. In this paper, we consider the energy generation scheduling problem for microgrids, where one needs to decide when to switch energy supply between a cheaper local generator with startup cost and the costlier on-demand external grid, considering intermittent renewable generation and fluctuating demands. There have been a number of recent studies about online energy generation scheduling. In the previous study \cite{Minghua2013SIG}, a prediction-oblivious algorithm called \textsf{CHASE}
has been proposed to solve this problem. It is shown that \textsf{CHASE} achieves a competitive ratio of $3$, which is the best among all deterministic online algorithms. More generally, there is an abstract framework called the Metrical Task System (MTS) problem \cite{borodin2005online}, which considers general online decision-making processes for state changes with uncertain future switching costs among the states. We note that the online energy generation scheduling problem belongs to a class of scalar MTS problems, where the states are the number of generators being on (or off). However, there is no prediction-aware online algorithm for MTS in the literature so far, to the best of our knowledge. In this paper, we present a novel prediction-aware online algorithm with the best competitive ratio to date; see Sec.~\ref{sec:relwork} for the discussion. Our algorithm not only solves the online energy generation scheduling problem, but also paves the way for tackling more general MTS problems with limited predicted information. MTS is capable of modeling many problems arising in a wide range of applications, including embedded systems and data centers \cite{DBLP:journals/tecs/IraniSG03}, transportation systems \cite{coester2018online}, and online learning \cite{Blum97on-linelearning}. As another novelty considered in this work, the previous study \cite{Minghua2013SIG} focuses on a homogeneous setting of local generators. In practice, however, microgrids may employ different types of generators with heterogeneous operating constraints. In this paper, we consider a more general setting where local generators can be heterogeneous with different capacities. We summarize our main contributions as follows: \vspace{-1.5mm} \begin{enumerate}
\item We propose \textsf{CHASEpp} as a novel prediction-aware online algorithm that further improves the competitive ratio of the state-of-the-art \textsf{CHASElk}. This algorithm achieves a competitive ratio of $ 3-\big(2\alpha +2(1-\alpha)/(1+\mathcal{O}(\frac{1}{w})) \big) \leq 3-2/(1+\mathcal{O}(\frac{1}{w})) $, where $\alpha \in [0,1]$ is the system parameter that captures the price discrepancy between using local generation and external sources to supply energy. Our algorithm achieves the best competitive ratio to date, with up to $20\%$ improvement over the state-of-the-art \textsf{CHASElk}. This competitive ratio also decreases twice as fast with respect to $w$ as that of \textsf{CHASElk}. We explore a new design space in our algorithm, called the cumulative differential cost in the prediction window, to better utilize the prediction information in making more competitive decisions. Our approach proactively monitors the possible online-to-offline cost ratio in the prediction window and makes intelligent online decisions.
\item
We also characterize a non-trivial lower bound of the competitive ratio. To obtain the lower bound, we create an adversary that progressively generates a worst-case input for any algorithm. We assume that at any time, an accurate prediction of a future window is available to the algorithm. This means the adversary needs to build a window of input ($\big[\sigma(\tau)\big]_{t}^{t+w}$) without knowing the algorithm's behavior in the upcoming window, which makes it difficult to establish the lower bound. In Sec.~\ref{sec:exp}, we show that the competitive ratio of \textsf{CHASEpp} is close to the lower bound. For example, they only differ by $9\%$ (i.e., 1.94 vs. 1.75) when we have a few hours of predictions.
\item
We use both theoretical analysis and trace-driven experiments to evaluate the performance of our algorithm by comparing it with the state-of-the-art algorithms. We also elaborate on how the input structure and system parameters can affect its performance. We note that our approach, with both perfect and noisy prediction information, can be extended to the online algorithm design for a general class of MTS problems with a similar structure.
\end{enumerate}
\section{Lower bound of the Competitive Ratio}
\label{sec:lowerbound}
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/lowerbound.jpg}
\par\end{center}
\caption{\label{fig:lbexample} An example of the worst-case input with $(\delta_1,\delta_2)$.}
\end{minipage}\hfill{}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/a1a2.jpg}
\par\end{center}
\caption{\label{fig:a1a2}Value of $(\delta_1^*,\delta_2^*)$, for different window sizes.}
\end{minipage}
\vspace{-3mm}
\end{figure}
\begin{thm}\label{CRLOB} Let $\epsilon > 0$ be the length of each time slot. As $\epsilon$ goes to zero and the discrete-time setting approaches the continuous-time setting, the competitive ratio of any prediction-aware deterministic online algorithm
${\mathcal A}$ for $\textsf{MCMP}_{\mathrm{s}}$ is lower bounded by
\begin{equation} \label{bounddelta}
{\sf CR}({\mathcal A})\ge {\sf cr}(w)= \frac{c_{\mathrm{m}}+\delta_2^*}{c_{\mathrm{m}}+\frac{Lc_{\mathrm{o}} \alpha }{Lc_{\mathrm{o}}+c_{\mathrm{m}}}\delta_2^*},
\end{equation}
where $\delta_2^*>0$ is the optimal objective of an optimization problem that takes the system parameters as input.
\end{thm}
\begin{proof}
Refer to Appendix~\ref{sec:appendix L}.
\end{proof}
\begin{proof}[Sketch of Proof] The key idea is that given any deterministic online algorithm $\mathcal A$, we progressively construct a particular worst-case input $\sigma(t) \triangleq (a(t), h(t), p(t))$ such that the performance ratio satisfies
\begin{equation}
\frac{{\rm Cost}({\sf y}_{\mathcal A};\sigma)}{{\rm Cost}({\sf y}_{\rm OFA};\sigma)} \geq {\sf cr}(w).
\end{equation}
In what follows, we explain this input and its corresponding lower bound in Lemma~\ref{prlb}. Consider an example of the input in Fig.~\ref{fig:lbexample}. Starting from the beginning $t=0$, by giving a large differential cost ($\delta(t)=\delta_1$), the adversary tries to encourage the algorithm to turn on the generator. As soon as the algorithm turns on the generator at $s$, the adversary starts to penalize the algorithm as severely as possible by giving it $\delta(s+w)=-c_{\mathrm{m}}$. Hence, the $\Delta(t)$ function keeps decreasing until the algorithm turns off the generator at time $t=T_{i+1}^{c}-w $.
We denote the performance ratio for this input by $PR(s)$. At the beginning, the adversary gives a larger differential cost ($\delta(t)=\delta_1$) as the input; as time goes by and the offline cost changes, the adversary continues by giving a smaller differential cost $\delta_2$ as input.
\begin{lem} \label{prlb} The lower bound of the competitive ratio is
\begin{eqnarray}
{\sf CR}_{\mathrm{l}}(\delta_1,\delta_2) = \min \{R_{\mathrm{on}}(\delta_1,\delta_2), R_{\mathrm{off}}(\delta_2)\},
\end{eqnarray}
where $R_{\mathrm{on}}(\delta_1,\delta_2)$ is given by
\begin{eqnarray}
R_{\mathrm{on}}(\delta_1,\delta_2) = \min_{s\in [1,(\beta-w\delta_2)/\delta_1]} PR(s),
\end{eqnarray}
and $R_{\mathrm{off}}(\delta_2)$ is given by
\begin{eqnarray}
R_{\mathrm{off}}(\delta_2)=\frac{c_{\mathrm{m}}+\delta_2}{c_{\mathrm{m}}+\frac{Lc_{\mathrm{o}} \alpha }{Lc_{\mathrm{o}}+c_{\mathrm{m}}}\delta_2}.
\end{eqnarray}
\end{lem}
\begin{proof}
Refer to Appendix~\ref{sec:lemma1proof}.
\end{proof}
Similar to Theorem~\ref{lem:crfunction}, we find $(\delta_1^*,\delta_2^*)$ that maximizes the lower bound ${\sf CR}_{\mathrm{l}}(\delta_1,\delta_2)$. First, for each $\delta_2$, we find a corresponding $\delta_1$ such that $\delta_1= \arg\max\limits_{ \delta }\, R_{\mathrm{on}}(\delta,\delta_2)$. This reduces $R_{\mathrm{on}}(\delta_1,\delta_2)$ to a single-variable function of $\delta_2$. Now we find the maximum of the minimum of two single-variable functions. We know that for $\delta_2=0$, we have $R_{\mathrm{on}}(\delta_1,0) \geq R_{\mathrm{off}}(0)=1 $, and $R_{\mathrm{off}}(\delta_2) $ is an increasing function. Hence, similar to \eqref{eq:finda}, we keep increasing $\delta_2$ until we find the intersection of the two functions. Therefore, we always have $R_{\mathrm{on}}(\delta_1^*,\delta_2^*) \geq R_{\mathrm{off}}(\delta_2^*)$ and ${\sf CR}_{\mathrm{l}}(\delta_1^*,\delta_2^*) = R_{\mathrm{off}}(\delta_2^*)$, which completes the proof.
\end{proof}
In Fig.~\ref{fig:a1a2}, we plot $(\delta_1^*,\delta_2^*)$ for different window sizes. As we can see, by increasing the window size, both $\delta_1^*$ and $\delta_2^*$ keep decreasing. As a result, the lower bound, which is a function of $\delta_2^*$, keeps decreasing.
\vspace{-2mm}
\section{Multiple Generator Scenario}
\label{sec:multigenerator}
In this section, we solve the multi-generator \textbf{MCMP} problem presented in Sec.~\ref{sec:pf}, where we have $N$ units of heterogeneous generators. Without loss of generality, we assume that $L_1 \geq L_2 \geq \dots \geq L_{\mathrm{N}}$. The key is to slice the demand into multiple layers, assign each sub-demand to a different generator, and solve an optimization problem for a single generator. For partitioning, we start from the bottom up, and we slice the demand such that the $n$-th layer has at most $a^{\mathrm{ly-n}}=L_{\mathrm{n}}$ units of electricity demand and $h^{\mathrm{ly-n}}=\eta \cdot
L_{\mathrm{n}}$ units of heat demand. Once all the local generation capacities are used, we serve the remaining demands $(a^{\rm top}, h^{\rm top})$ with the external sources. In this way, the bottom layers have the least frequent variations of demand, and we assign these layers to the generators with larger capacities. As a result, we use fewer generators, and these generators observe less variation, which helps to reduce the startup cost.
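The bottom-up slicing described above can be sketched in a few lines. The helper below is our own illustration (the name \texttt{slice\_demand} and the representation of a layer as an (electricity, heat) pair are not part of the formal algorithm); it assumes the capacities are already sorted in decreasing order.

```python
def slice_demand(a_t, h_t, capacities, eta):
    """Slice instantaneous demand (a_t, h_t) into per-generator layers.

    capacities: generator output limits L_1 >= L_2 >= ... >= L_N.
    The n-th layer receives at most L_n units of electricity demand
    and eta * L_n units of heat demand; the residual ("top" layer)
    is served by the external grid and natural gas.
    """
    layers = []
    remaining_a, remaining_h = a_t, h_t
    for L_n in capacities:
        a_layer = min(remaining_a, L_n)
        h_layer = min(remaining_h, eta * L_n)
        layers.append((a_layer, h_layer))
        remaining_a -= a_layer
        remaining_h -= h_layer
    top = (remaining_a, remaining_h)  # served by external sources
    return layers, top
```

Each layer is then scheduled by the single-generator algorithm for the generator it is assigned to.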
The following theorem captures the performance of offline and online algorithms.
\begin{thm} \label{thm:nOFA-optimal}
The offline algorithm that uses the layering approach produces an optimal offline solution for \textbf{MCMP}, and the online algorithm achieves the following competitive ratio:
\begin{equation}
{\sf CR} \leq 3-2g(\alpha_1, w),
\end{equation}
where $\alpha_1= \frac{c_{\mathrm{o}}+c_{\mathrm{m}}/L_1}{p_{\mathrm{max}}+\eta c_{\mathrm{g}}}$, and $g(\alpha, w) $ is defined in Theorem~\ref{ref: competitive ratio}.
\end{thm}
\begin{proof}
Refer to Appendix~\ref{sec:appendix M}.
\end{proof}
\begin{figure}[t]
\centering{}\includegraphics[width=.48\columnwidth]{fig/CRalpha.jpg}\caption{\label{fig:alphaomega}Competitive ratio of the algorithm $\textsf{CHASEpp}^+(w)$ as a function of $\alpha$ and $w$.}
\vspace{-3mm}
\end{figure}
In the heterogeneous setting, each generator has its own prediction-aware online algorithm with a different optimal threshold that depends on its generation capacity $L_{\mathrm{n}}$.
The significance of this result is that it extends the applicability of our online algorithm beyond the homogeneous setting.
\section{Performance Analysis}
\label{sec:Performance_analysis}
\subsection{Competitive Ratio}
\label{subsec:worst}
Let us denote the algorithm with $\lambda \geq 0$ as its threshold to be $\textsf{CHASEpp}(w,\lambda)$. We have:
\begin{thm} The competitive ratio of Algorithm~\ref{alg:CHASE-pr} with $0 \leq \lambda \leq \beta$ is
\label{lem:crfunction}
\begin{equation}
{\sf CR}(\textsf{CHASEpp}(w,\lambda)) = \max \{R_{\mathrm{on}}(\lambda), R_{\mathrm{off}}(\lambda)\},
\end{equation}
where $R_{\mathrm{on}}(\lambda)$ is a decreasing function given by
\begin{eqnarray}
\label{eq:RON}
&& R_{\mathrm{on}}(\lambda) = 1+ \big(1- \frac{Lc_{\mathrm{o}}+c_{\mathrm{m}}}{L(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}})} \big) \cdot \\
&& \max\limits_{{q \in \{0,wc_{\mathrm{m}}\}}} \big\{ \frac{ 2\beta-q}{ \beta+\big(2wc_{\mathrm{m}}-q+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda \big) \big(1-\frac{c_{\mathrm{m}}}{L(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}})} \big) } \big\}, \notag
\end{eqnarray}
and $R_{\mathrm{off}}(\lambda)$ is an increasing function given by
\begin{equation}
\label{eq:ROFF}
R_{\mathrm{off}}(\lambda)=\frac{wc_{\mathrm{m}}+\lambda}{wc_{\mathrm{m}}+\frac{c_{\mathrm{o}}}{p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}}\lambda}.
\end{equation}
\end{thm}
All the results presented in this paper have rigorous proofs, but due to the page limit, we only provide sketches of the main ideas and present the details in Appendix~\ref{sec:appendix A}.
\begin{proof}[Sketch of Proof] One can show that this algorithm has exactly two possible worst-case inputs. We explain each of these inputs; the competitive ratio is then the maximum of the two corresponding performance ratios.
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/ron.jpg}
\par\end{center}
\caption{\label{fig:Ron} First worst-case input for the type-1 and type-2 critical segments.}
\end{minipage}\hfill{}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/roff.jpg}
\par\end{center}
\caption{\label{fig:Roff} Second worst-case input appears by keeping the generator off for a window.}
\end{minipage}
\vspace{-4mm}
\end{figure}
First worst-case input: Consider the input in Fig.~\ref{fig:Ron}.
The algorithm turns on the generator at $t=T_{i+1}^{c}-w $ and turns it off at $t=T_{i+2}^{c}-w $. We calculate the performance ratio for this input as $R_{\mathrm{on}}(\lambda)$ presented in~\eqref{eq:RON}, where $q= \Delta(T_{i+2}^{c}-w)- \Delta(T_{i+2}^{c})$. This performance ratio is calculated by taking the maximum over the possible values of $q$.
Second worst-case input: Consider the example in Fig.~\ref{fig:Roff}. At time $t$ we have $\Delta_{t}^{t+w}<\lambda$ and the algorithm keeps the generator off, but at time $t+w$ we have $\Delta_{t+w}^{t+2w}\geq \lambda$ and the algorithm turns on the generator. When calculating the performance ratio for this input, we observe that the ratio of the online cost increment over the offline cost increment is upper bounded by $R_{\mathrm{off}}(\lambda)$ presented in~\eqref{eq:ROFF}. Hence, the competitive ratio is equal to the maximum of these two values.
\end{proof}
\subsection{The Optimal Threshold}
We find the optimal threshold $\lambda^*$ that minimizes the competitive ratio ${\sf CR}$:
\begin{equation}
\lambda^*= \arg\min\limits_{{ \lambda }}\max \{R_{\mathrm{on}}(\lambda), R_{\mathrm{off}}(\lambda)\}.
\end{equation}
To understand how to find $\lambda^*$ and its corresponding ${\sf CR}$, we consider the example given in Fig.~\ref{fig:findaplot}. In this example we observe that at $\lambda=0$, we have $R_{\mathrm{off}}(0) \leq R_{\mathrm{on}}(0) $. Meanwhile, from Theorem~\ref{lem:crfunction} we know that $R_{\mathrm{on}}(\lambda)$ is always a decreasing function and $R_{\mathrm{off}}(\lambda)$ is always an increasing function. Hence, $\lambda^*$ can be computed by finding the intersection of these two functions:
\begin{subequations}
\label{eq:finda}
\begin{eqnarray}
\lambda^*= & \max & \lambda \\
&\mbox{s.t.}& 0 \leq \lambda \leq \beta,\label{C_a_beta}\\
&& 0 \leq \lambda \leq L \big(p_{\mathrm{max}}+\eta\cdot c_{\mathrm{g}}-c_{\mathrm{o}}- \frac{c_{\mathrm{m}}}{L}\big)w,\label{C_a_window}\\
&& R_{\mathrm{on}}(\lambda) \geq R_{\mathrm{off}}(\lambda),\label{C_a_optimal}
\end{eqnarray}
\end{subequations}
where~\eqref{C_a_beta} ensures the threshold is not larger than the startup cost; otherwise, it is always optimal to turn on the generator. The constraint~\eqref{C_a_window} ensures that the threshold is within the maximum possible value, and~\eqref{C_a_optimal} ensures that we find the threshold that gives us the minimum competitive ratio.
We can find the intersection of the two functions by using a simple binary search. As we can see in Fig.~\ref{fig:threshold}, by increasing the window size, the value of the threshold over the startup cost ($\lambda^*/\beta$) monotonically increases and approaches 1.
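Since $R_{\mathrm{on}}$ is non-increasing and $R_{\mathrm{off}}$ is non-decreasing, the intersection search in~\eqref{eq:finda} is a one-dimensional bisection. The sketch below is our own (the function name and tolerance are arbitrary choices); the two ratio functions are treated as abstract callables, and \texttt{lam\_max} encodes the caps in~\eqref{C_a_beta}--\eqref{C_a_window}.

```python
def find_threshold(r_on, r_off, lam_max, tol=1e-9):
    """Bisection for the largest lambda in [0, lam_max] with
    r_on(lambda) >= r_off(lambda), where r_on is non-increasing
    and r_off is non-decreasing."""
    lo, hi = 0.0, lam_max
    if r_on(hi) >= r_off(hi):   # the curves never cross: take lam_max
        return hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if r_on(mid) >= r_off(mid):
            lo = mid            # intersection is to the right of mid
        else:
            hi = mid            # intersection is to the left of mid
    return lo
```

At the returned $\lambda^*$ the two curves meet, so ${\sf CR} = R_{\mathrm{off}}(\lambda^*)$, matching the construction in the text.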
Now that the optimal threshold is calculated, we present the competitive ratio of our proposed online algorithm:
\begin{thm}
\label{ref: competitive ratio}
The competitive ratio of the algorithm \textsf{CHASEpp$(w)$} satisfies
\begin{eqnarray}
{\sf CR} \leq 3-2g(\alpha, w), \label{eq:competitive-ratio}
\end{eqnarray}
where
\begin{align}
\label{eq:g-function}
&g(\alpha, w) = \alpha + (1-\alpha) \Big( 1 - \tfrac{1}{2} \cdot \max\limits_{{q \in \{0,wc_{\mathrm{m}}\}}} \big\{ \\
&\frac{ (2\beta-q)}{ \beta+\big((2wc_{\mathrm{m}}-q)(Lc_{\mathrm{o}}+c_{\mathrm{m}})+ \alpha L c_{\mathrm{o}} \lambda^*\big) /\big(Lc_{\mathrm{o}}+c_{\mathrm{m}}/(1-\alpha)\big) } \big \} \Big) \notag
\end{align}
which is $\alpha +(1-\alpha)/(1+\mathcal{O}(\frac{1}{w})) $ and monotonically increases from $\alpha$ to $1$ as $w$ increases.
\end{thm}
\begin{figure}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/lambda.jpg}
\par\end{center}
\caption{\label{fig:findaplot}The value of $\lambda^*$ is found at the intersection of the two functions $R_{\mathrm{on}}(\lambda)$ and $R_{\mathrm{off}}(\lambda)$.}
\end{minipage}\hfill{}
\begin{minipage}[b][1\totalheight][t]{0.48\columnwidth}%
\begin{center}
\includegraphics[width=1\columnwidth]{fig/threshold.jpg}
\par\end{center}
\caption{\label{fig:threshold}The value of the threshold over the startup cost $\beta$ for different window sizes ($w$).}
\end{minipage}
\vspace{-4mm}
\end{figure}
\begin{proof}
Refer to Appendix~\ref{sec:appendix C}.
\end{proof}
If there is no prediction ($w=0$), we have $g(\alpha,0)=\alpha$, and~\eqref{eq:competitive-ratio} reduces to the competitive ratio of \textsf{CHASE}. Meanwhile, if $w$ is large,~\eqref{eq:g-function} reduces to
\begin{align}\label{eq:crfunction}
& g(\alpha, w)=\alpha+
\\
&\frac{(1-\alpha)}{1+ \beta(Lc_{\mathrm{o}}+c_{\mathrm{m}}/(1-\alpha))/ \underbrace{(2wc_{\mathrm{m}}(Lc_{\mathrm{o}}+c_{\mathrm{m}})+\alpha Lc_{\mathrm{o}}\lambda^*)}_{(\dagger)} }. \notag \end{align}
One can see that by increasing the value of $\lambda^*$ or $w$, the value of the function $g(\alpha, w)$ keeps increasing; thus, the competitive ratio keeps decreasing.
To understand how the new algorithm improves the competitive ratio of \textsf{CHASElk$(w)$}, we compare $g(\alpha, w)$ with $f(\alpha,w)$ presented in~\eqref{eq:crchaselk}. One can see that in $g(\alpha,w)$, the term denoted as $(\dagger)$ is larger than the corresponding part of $f(\alpha,w)$ in~\eqref{eq:crchaselk} by $(wc_{\mathrm{m}}(Lc_{\mathrm{o}}+c_{\mathrm{m}})+\alpha Lc_{\mathrm{o}}\lambda^*)$. This means that the competitive ratio decreases twice as fast in $w$ as that of the previous algorithm. Therefore, our new algorithm always achieves a better competitive ratio than the state-of-the-art algorithm \textsf{CHASElk$(w)$}, with up to $20\%$ improvement, as shown in Fig.~\ref{fig:alphaomegadif} and discussed later.
\begin{algorithm}[htb!]
{\caption{ ${\sf CHASEpp}^+(w) [t,\sigma(\tau)_{\tau = t}^{t+w}, y(t-1)]$} \label{alg:CHASE-pr+}
\begin{algorithmic}[1]
\IF{ $\tfrac{1}{\alpha}< {\sf CR}({\sf CHASEpp}(w))$ }
\STATE {$y(t) \leftarrow 0, \, \, \, u(t) \leftarrow 0, \, \, \, v(t) \leftarrow a(t), \, \, \, s(t) \leftarrow h(t) $}
\STATE {return $(y(t), u(t), v(t), s(t))$}
\ELSE
\STATE {return ${\sf CHASEpp}(w) [t,\sigma(\tau)_{\tau = t}^{t+w}, y(t-1)]$}
\ENDIF
\end{algorithmic}
}
\end{algorithm}
Note that by definition, the maximum value of $R_{\mathrm{off}}(\lambda)$ is $1/\alpha$, which is the competitive ratio of the algorithm that only uses the external supply. Hence, when ${\sf CR}= \max \{R_{\mathrm{on}}(\lambda^*), R_{\mathrm{off}}(\lambda^*)\}\geq 1/\alpha$, instead of using our algorithm it is better to never turn on the generator. We summarize this result in Algorithm~\ref{alg:CHASE-pr+}, denoted by ${\sf CHASEpp}^+(w)$. \begin{cor}
The competitive ratio of ${\sf CHASEpp}^+(w)$ satisfies
\begin{equation}
{\sf CR}({\sf CHASEpp}^+(w))= \min \{{\sf CR}({\sf CHASEpp}(w)), \tfrac{1}{\alpha}\}.
\end{equation}
\end{cor} By the same logic, ${\sf CHASElk}^+(w)$ can be defined. In the rest of this paper, we evaluate the performance of these new algorithms. In Fig.~\ref{fig:alphaomega}, this competitive ratio is depicted as a function of $\alpha$ and $w$.
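For concreteness, the safeguard of Algorithm~\ref{alg:CHASE-pr+} can be sketched as a thin wrapper around \textsf{CHASEpp}. In this illustration (all identifiers are ours), \textsf{CHASEpp} is abstracted as a callable and its competitive ratio is assumed to be precomputed.

```python
def chasepp_plus(t, window, y_prev, alpha, cr_chasepp, chasepp):
    """Never turn on the generator when the trivial 1/alpha ratio
    beats CHASEpp's competitive ratio; otherwise delegate to CHASEpp.

    window: input slots sigma(t), ..., sigma(t+w); each is (a, h, p).
    Returns (y, u, v, s) for the current slot.
    """
    a_t, h_t, p_t = window[0]          # current slot of the input window
    if 1.0 / alpha < cr_chasepp:
        # rely solely on external sources: generator off,
        # all electricity from the grid, all heat from gas
        return 0, 0.0, a_t, h_t
    return chasepp(t, window, y_prev)
```

This mirrors the corollary: the resulting ratio is the minimum of ${\sf CR}({\sf CHASEpp}(w))$ and $1/\alpha$.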
When $\alpha$ is large, it means the economic advantage of using local generation over external sources is small. Hence, both online and offline algorithms tend to use local generation less often. This improves the competitiveness of online algorithms. Thus, by increasing $\alpha$ from $0$ to $1$, the competitive ratio decreases from $3$ to $1$ monotonically.
\section{Energy Generation Scheduling Problem}
\label{sec:pf}
The objective of energy generation scheduling in microgrids is to coordinate various energy sources such as local generation units and renewable sources to fulfill both electricity and heat demands while minimizing the total energy cost. It can be formulated as a microgrid cost minimization problem (\textsf{MCMP}). We consider a system that operates in a time-slotted fashion, where $\mathcal{T}$ is the set of time slots, and the total length of the time horizon is $T$ time slots ($T\triangleq |\mathcal{T}|$).
The key notations are presented in Table~\ref{tbl:not}.
\vspace{-1mm}
\subsection{\label{sec:sysmod}System Model}
\textbf{Energy demand}:
The energy demand profile includes two types of energy demand, namely electricity demand and heat demand. Let $a(t)$ and $h(t)$ be the net electricity demand (i.e., the residual electricity demand not covered by renewable generation) and the heat demand at time $t$, respectively.
\textbf{External grid and heating}: We assume the microgrid operates in the ``grid-connected'' mode, and the unbalanced electricity demand can be acquired from the external grid in an on-demand manner. We denote $p(t)$ as the spot price from the electricity grid at time $t$, where $p(t)\in [P_{\mathrm{min}},P_{\mathrm{max}}]$. To keep the generality of the problem, we do not assume any specific stochastic model for the input profile $\sigma(t) \triangleq (a(t), h(t), p(t))$. Finally, to cover the heating demand, we can use external natural gas, costing $c_{\mathrm{g}}$ per unit of demand.
\textbf{Local generation}:
We consider a heterogeneous setting where the power output capacity of generator $n\in [1, N]$ is $L_n$, and these capacities can differ from each other. This generalizes the homogeneous setting considered in \cite{Minghua2013SIG}, where local generators have identical capacities. Adopting the widely-used generator model \cite{kazarlis1996genetic}, we denote $\beta$ as the startup cost of turning on a generator, $c_{\mathrm{m}}$ as the sunk cost per unit time of running a generator in its active state, and $c_{\mathrm{o}}$ as the incremental operational cost per unit time for an active generator to output one unit of energy. In a more realistic model of generators, two additional operational constraints are considered, namely minimum on/off periods and ramping up/down rates. In \cite{Minghua2013SIG}, a general problem that includes these additional constraints is considered, and an approach to solve it is also proposed. In this paper, we focus on ``fast-responding'' generators whose minimum on/off period and ramping up/down constraints are negligible. Our solution can then be extended to the case of general generators using the same approach as in \cite{Minghua2013SIG}. Finally, we assume the local generators are CHP generators that can generate both electricity and heat simultaneously. We denote $\eta$ as the heat recovery efficiency for co-generation, \textit{i.e.}, for each unit of electricity generated, $\eta$ units of useful heat can be supplied for free. Thus, $\eta c_{\mathrm{g}}$ is the cost saving due to using co-generation to supply heat, provided that there is sufficient heat demand. Note that by setting $\eta=0$, the problem reduces to the case of a system with no co-generation. We assume $c_{\mathrm{o}} \geq \eta \cdot c_{\mathrm{g}} $, which means it is cheaper to obtain heat by using natural gas than purely by generators.
To keep the problem interesting, we assume that $c_{\mathrm{o}}+ \frac{c_{\mathrm{m}}}{L} \leq p_{\mathrm{max}}+ \eta \cdot c_{\mathrm{g}} $. This assumption ensures that the minimum co-generation energy cost is cheaper than the maximum external energy price. If this assumption does not hold, the optimal decision is to always acquire power and heat externally and separately. In this paper, we do not consider
using energy storage in the generation scheduling problem. The reason is that for the typical size of microgrids, e.g., a college campus, existing energy storage systems are rather expensive and not widely available \cite{menati2021preliminary}.
\vspace{-1mm}
\begin{table}[!t]
\begin{center}
\begin{tabular}{|c|c|p{6.5cm}|}
\hline
\multicolumn{2}{|c|}{\textbf{Notation}} & \textbf{Definition} \\
\hline \hline
\multirow{10}{*}{\rotatebox[origin=c]{90}{\hspace{10mm} \textbf{Generator} }}
&$\beta$ & The startup cost of local generator (\$)\tabularnewline
&$c_{m}$ & The sunk cost per interval of running local generator (\$)\tabularnewline
&$c_{o}$ & The incremental operational cost per interval of running local generator
to output an additional unit of power
(\$/Watt)\tabularnewline
&$L$ & The maximum power output of generator (Watt)\tabularnewline
&$\eta$ & The heat recovery efficiency of co-generation \tabularnewline\hline
\multirow{12}{*}{\rotatebox[origin=c]{90}{ \hspace{9mm} \textbf{Demand}}}&$\mathcal{T}$ & The set of time slots ($T\triangleq |\mathcal{T}|$)\tabularnewline
&$c_{g}$ & The price per unit of heat obtained externally using natural gas
(\$/Watt)\tabularnewline
&$a(t)$ & The net electricity demand minus the instantaneous renewable supply at time $t$ (Watt)\tabularnewline
&$h(t)$ & The heat demand at time $t$ (Watt)\tabularnewline
&$p(t)$ & The spot price per unit of power obtained from the electricity grid
($P_{\min}\leq p(t)\leq P_{\max}$) (\$/Watt)\tabularnewline
&$\sigma(t)$ & The joint input at time $t$: $\sigma(t) \triangleq (a(t), h(t), p(t))$ \tabularnewline\hline
\multirow{7}{*}{\rotatebox[origin=c]{90}{ \hspace{4mm} \textbf{Opt. Var}}} & $y(t)$ & The on/off status of the local generator (on as ``$1$'' and off as ``$0$'') \tabularnewline
&$u(t)$ & The power output level when the generator is on (Watt) \tabularnewline
&$s(t)$ & The heat level obtained externally by natural gas (Watt)\tabularnewline
&$v(t)$ & The power level obtained from electricity grid (Watt)\tabularnewline\hline
\end{tabular}
\end{center}
\caption{Key Notations. Brackets indicate the unit. We denote a vector by a single symbol, {\em e.g.,} ${a\triangleq\big[a(t)\big]_{t=1}^{T}}$. }
\label{tbl:not}
\vspace{-5mm}
\end{table}
\subsection{Problem Formulation} \label{ssec:problem_definition}
Let $v(t)$ and $s(t)$ be the amount of electricity and heat obtained from the external grid and the external natural gas at time $t$, respectively. Let $y_{\mathrm{n}}(t)$ be the binary on/off status of the $n$-th generator ($1$ as on and $0$ as off), and $u_{\mathrm{n}}(t)$ be its power output level. The microgrid aggregated operational cost over the time horizon $\mathcal{T}$ is given by
\begin{eqnarray} \label{mcmpproblem}
& \textsf{cost}(y,u,v,s) \triangleq \sum_{t\in\mathcal{T}}\Big( p(t)v(t)+c_{\mathrm{g}}s(t)+ \\
& \sum_{n=1}^{N}[c_{\mathrm{o}}u_{\mathrm{n}}(t)+ c_{\mathrm{m}}y_{\mathrm{n}}(t)+\beta[y_{\mathrm{n}}(t)-y_{\mathrm{n}}(t-1)]^{+} ] \Big), \notag
\vspace{-7mm}
\end{eqnarray}
which includes the grid electricity cost, the external gas cost, and the cost of the local generators, calculated by adding their operational and switching costs over the entire time horizon $\mathcal{T}$. In this paper, we assume the initial status of all generators is off, \textit{i.e.,} $y_{\mathrm{n}}(0) = 0$. Given the cost function and decision variables, we formulate the \textbf{Microgrid Cost Minimization Problem} (\textsf{MCMP}) as follows:
\begin{subequations} \label{prob2}
\begin{eqnarray}
&\underset{{y,u,v,s}}{\min}& \textsf{cost}(y,u,v,s) \\
&\mbox{s.t.}\;& 0 \leq u_{\mathrm{n}}(t)\leq L_{\mathrm{n}} y_{\mathrm{n}}(t), \label{C_max_output}\\
&& \textstyle{\sum}_{n=1}^N u_{\mathrm{n}}(t)+v(t) = a(t), \label{C_e-demand}\\
&& \eta \cdot \textstyle{\sum}_{n=1}^N u_{\mathrm{n}}(t)+s(t)\geq h(t), \label{C_h-demand}\\
&\mbox{vars.}\;& y_{\mathrm{n}}(t)\in \{0,1\},u_{\mathrm{n}}(t),v(t),s(t)\in \mathbb{R}_0^{+}, n\in [1,N], t\in \mathcal{T}, \nonumber
\end{eqnarray}
\end{subequations}
where the constraint~\eqref{C_max_output} captures the capacity limit of the generators and the constraints~\eqref{C_e-demand}-\eqref{C_h-demand} ensure the electricity and heat demands are covered using the grid, natural gas, and the generators. It should be noted that the constraint~\eqref{C_e-demand} is in the form of equality, which means that the electricity power-balance constraint is strictly satisfied. To ensure this, we run the local CHP generators, which might produce a heating supply that is more than the demand. There are various mechanisms to manage the excessive generated heat, including thermal storage systems coupled with CHP units, which allow storing energy and reusing it later by lowering the temperature of a substance, such as water \cite{chpheat}.
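To fix ideas, the objective in~\eqref{mcmpproblem} can be evaluated mechanically from any candidate schedule. The helper below is a direct transcription in our own notation (the function and variable names are not part of the formulation), with $[\cdot]^{+}$ realized as $\max(0,\cdot)$ and the initial status $y_{\mathrm{n}}(0)=0$.

```python
def total_cost(y, u, v, s, p, c_g, c_o, c_m, beta):
    """Evaluate the MCMP objective for a candidate schedule.

    y[n][t] in {0,1}: on/off status, u[n][t]: generator output,
    v[t]: grid power, s[t]: external heat, p[t]: spot price.
    The initial status of every generator is taken to be off.
    """
    T = len(v)
    N = len(y)
    cost = 0.0
    for t in range(T):
        cost += p[t] * v[t] + c_g * s[t]       # external electricity + gas
        for n in range(N):
            prev = y[n][t - 1] if t > 0 else 0
            startup = beta * max(0, y[n][t] - prev)  # [y(t) - y(t-1)]^+
            cost += c_o * u[n][t] + c_m * y[n][t] + startup
    return cost
```

Note that the startup term charges $\beta$ only on off-to-on transitions, which is exactly what couples the decisions across time.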
We note that the AC Optimal Power Flow (OPF) constraints in the microgrid are not considered in \textsf{MCMP}, which is a joint unit commitment and economic dispatch problem. For certain microgrids, the ACOPF and economic dispatch should be coupled. However, if the microgrid is relatively large, similar to the conventional electric grids, to reduce the computational complexity, unit commitment and economic dispatch are first solved on hour-ahead time-scales, and then optimal power flow is solved minutes ahead of real-time \cite{OPF}. On the other hand, for relatively small-scale microgrids, because of the short distances, negligible losses, and large line capacities, the constraints of the ACOPF problem will not be activated, and usually, the generation cost has the dominant impact on microgrid planning. Therefore, although in some microgrids with fast-responding generators the economic dispatch and the ACOPF are solved together, for relatively small or relatively large microgrids, this is not the case. In a general setting, the minimum on/off periods and the ramping up/down rates can also be formulated as additional constraints. In \cite{Minghua2013SIG}, the authors propose an approach that obtains the solution to the general problem from the solution for the ``fast-responding'' generators setting. A simple heuristic is to first compute solutions using the online and offline algorithms without the constraints and then modify the solutions to respect the switching constraints. In this paper, we also focus on ``fast-responding'' generators, but our offline and online algorithms can easily be updated to incorporate the switching constraints of the general case using the same approach.

Note that this minimization problem is challenging to solve for several reasons. First, even in the offline setting, this problem is a mixed-integer linear program, which is generally difficult to solve.
Second, the startup cost $\beta[y_{\mathrm{n}}(t)-y_{\mathrm{n}}(t-1)]^+$ term in the objective function couples the decisions across time; hence, the problem cannot be decomposed. Finally, the input profile $\sigma(t) \triangleq (a(t), h(t), p(t))$ arrives online, and we may not know the complete future input. In this paper, we first consider a microgrid with a single generator and solve the \textsf{MCMP}. Later, in Sec.~\ref{sec:multigenerator}, we extend the solution to the multiple-generator scenario. Therefore, we drop the subscript $n$, and the problem $\textsf{MCMP}$ reduces to the problem $\textsf{MCMP}_{\mathrm{s}}$ for a single generator. We also utilize a useful observation to simplify the formulation: if the on/off status is given, the startup cost is determined, and $\textsf{MCMP}_{\mathrm{s}}$ reduces to a timewise-decoupled linear program. According to \cite{Minghua2013SIG}, given a fixed on/off status $\big(y(t)\big)_{t=1}^{T}$, the solution that minimizes $\textsf{cost}(y,u,v,s)$ is
\begin{equation}
u(t)\mbox{=}\begin{cases}
0, & \mathrm{if } \quad p(t)+\eta \cdot c_{\mathrm{g}}\leq c_{\mathrm{o}},\\
\min\Big\{\frac{h(t)}{\eta},a(t),L y(t)\Big\}, & \mathrm{if } \quad p(t)<c_{\mathrm{o}}<p(t)\mbox{+}\eta \cdot c_{\mathrm{g}},\label{eq:optimal_u}\\
\min\Big\{a(t),L y(t)\Big\}, & \mathrm{if } \quad c_{\mathrm{o}}\leq p(t), \notag
\end{cases}
\end{equation}
\begin{equation}
v(t)=\left[a(t)-u(t)\right]^{+}, \, \mathrm{and } \quad s(t)=\left[h(t)-\eta \cdot u(t)\right]^{+}.
\label{lem:fMCMP}
\end{equation}
By using~\eqref{lem:fMCMP}, the problem $\textsf{MCMP}_{\mathrm{s}}$ can be further simplified to the following problem with a single decision variable to turn on ($y(t) = 1$) or off ($y(t) = 0$) the generator.
\begin{eqnarray}
\textsf{MCMP}_{\mathrm{s}}: & \underset{y}{\min} \; \sum_{t\in\mathcal{T}}\Big\{ \psi\big(\sigma(t),y(t)\big) +\beta[y(t)-y(t-1)]^{+}\Big\} \notag\\
& \mbox{vars.}\;\; y(t)\in \{0,1\}, t\in \mathcal{T}, \notag
\end{eqnarray}
where $\psi\big(\sigma(t),y(t)\big)\triangleq p(t)v(t)+c_{\mathrm{g}}s(t)+c_{\mathrm{o}}u(t)+ c_{\mathrm{m}}y(t)$, and $(u(t),v(t),s(t))$ are defined based on the result in~\eqref{lem:fMCMP}.
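The closed-form dispatch in~\eqref{lem:fMCMP} and the resulting single-slot cost $\psi$ translate directly into code. The sketch below is ours (identifiers and the absence of input validation are simplifications), not an implementation from the paper; it assumes $\eta>0$ in the middle branch.

```python
def dispatch(a_t, h_t, p_t, y_t, L, eta, c_o, c_g):
    """Cost-minimizing (u, v, s) for one slot, given on/off status y_t."""
    if p_t + eta * c_g <= c_o:
        u = 0.0                      # local generation is never worthwhile
    elif p_t < c_o:                  # c_o between p(t) and p(t) + eta*c_g
        u = min(h_t / eta, a_t, L * y_t)
    else:                            # c_o <= p(t)
        u = min(a_t, L * y_t)
    v = max(0.0, a_t - u)            # residual electricity from the grid
    s = max(0.0, h_t - eta * u)      # residual heat from natural gas
    return u, v, s

def psi(a_t, h_t, p_t, y_t, L, eta, c_o, c_g, c_m):
    """Single-slot operating cost psi(sigma(t), y(t))."""
    u, v, s = dispatch(a_t, h_t, p_t, y_t, L, eta, c_o, c_g)
    return p_t * v + c_g * s + c_o * u + c_m * y_t
```

With this per-slot oracle, only the binary sequence $y(t)$ remains to be decided, which is precisely the $\textsf{MCMP}_{\mathrm{s}}$ formulation above.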
\section{Related Work}
\label{sec:relwork}
Generation scheduling problems have attracted considerable attention. In large-scale power systems, with a high aggregation effect on demand and a small percentage of erratic renewable generation, predicting the demand over the entire time horizon with a good level of accuracy is possible. Therefore, energy generation scheduling is essentially an offline problem. Two main forms of this problem are economic dispatching \cite{chen1995large,selvakumar2007new,gaing2003particle} and unit commitment \cite{kazarlis1996genetic,guan2003optimization,padhy2004unit}. In the literature, researchers have tackled this problem in different ways, including dynamic programming \cite{snyder1987dynamic}, stochastic programming \cite{takriti1996stochastic}, and mixed-integer programming
\cite{carrion2006computationally}. In recent years, with the increasing integration of the highly fluctuating renewable sources and deployment of small-scale microgrids, the uncertainty and intermittency have increased substantially on both supply and demand sides, and local supply-demand matching has become an essential part of the microgrid operation. Therefore, the previous approaches for the traditional grid are not applicable to this new scenario, where we do not know all the information on the time horizon \cite{kroposki2017integrating, yang2018economic}.
The microgrid operator is the party responsible for local power balancing, and determines the optimal power generation and scheduling of all on-site
resources \cite{microgrid}. To address the supply-demand matching problem in microgrids, researchers have proposed approaches that schedule either dispatchable generation on the supply side \cite{narayanaswamy2012online} or flexible load on the demand side (e.g., \cite{chang2013real, huang2014adaptive}). Other works have combined these two with energy storage management \cite{chen2013heterogeneous, guo2013decentralized} in order to achieve power balance in the microgrid.
In recent years, online optimization has emerged as a foundational topic across a variety of computer systems, attracting considerable attention in areas such as networking and distributed systems \cite{tu2013dynamic,urgaonkar2011optimal,neely2010stochastic}.
\begin{table}
\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}||c|c|c|c|}
\hline \textbf{Reference} & \vtop{\hbox{\strut \textbf{Structure} }\hbox{\strut \textbf{Exploitation}}} & \vtop{\hbox{\strut \textbf{Competitive} }\hbox{\strut \textbf{Ratio}}} & \vtop{\hbox{\strut \textbf{Lower} }\hbox{\strut \textbf{Bound}}} & \vtop{\hbox{\strut \textbf{Heterogeneous} }\hbox{\strut \textbf{Generators}}} \\
\hline\hline Lin \textit{et al.} \cite{ocoprediction} & \xmark & \vtop{\hbox{\strut arbitrarily large (in a }\hbox{\strut more general setting)}} & \xmark & \xmark \\
\hline Hajiesmaili \textit{et al.} \cite{hajiesmaili2016rand} & \xmark & heuristic & \xmark & \xmark \\
\hline Lu \textit{et al.} \cite{Minghua2013SIG} & \xmark & \vtop{\hbox{\strut sub-optimal (partial }\hbox{\strut use of the information)}} & \xmark & \xmark \\
\hline\hline \textbf{This work} & \checkmark & \vtop{\hbox{\strut decreases twice as fast }\hbox{\strut as \cite{Minghua2013SIG} in $w$} } & \checkmark & \checkmark \\
\hline
\end{tabular}
\caption{Summary and comparison of existing works.}\vspace{-10pt}
\label{tbl:sum}
\end{center}
\vspace{-7mm}
\end{table}
In \cite{narayanaswamy2012online}, the online convex optimization (OCO) framework \cite{zinkevich2003online} is used to design algorithms for microgrid economic dispatch. OCO is a prominent paradigm that is increasingly applied in different domains \cite{caoVirtual2018,Cao2018OnTT,Chen2016Prediction,Chen2015OCO, 6322266, 8486362}. There are some similarities between OCO with switching cost for dynamic scaling in datacenters \cite{5934885} and the energy generation problem \cite{Minghua2013SIG}. However, the inherent structures of the two problems, and of their solutions, differ significantly. First, OCO considers a continuous feasible region, while in our setting with energy generators the decision variables can only take discrete values. Second, their solution applies only to the homogeneous setting, while ours can utilize multiple heterogeneous local generation units. Finally, in their recent work \cite{ocoprediction}, the competitive ratio is sub-optimal compared to existing solutions and to our new algorithm: as the switching cost increases, their competitive ratio grows linearly, while our algorithm's competitive ratio is always upper bounded by a constant that is independent of the switching cost. Other prediction-aware online algorithms, such as the one in \cite{li2018online}, also have competitive ratios that grow unbounded as the switching cost increases. Some recent works \cite{Xiaojun2019, Shi2021CombiningRW} address this issue by designing online algorithms with bounded competitive ratios; still, their algorithms can only leverage prediction for sufficiently large window sizes $w \geq r_{co}$, where $r_{co}$ is a constant that grows unbounded as the switching cost increases. For example, in the real-world setting used in our numerical experiments (Sec.~\ref{sec:exp}), for $w<11$ hours their competitive ratio is a constant that is independent of the window size. 
By contrast, the competitive ratio of our algorithm keeps decreasing as the window size increases. In \cite{zhang2015peak}, a competitive algorithm design approach is used to solve the online economic dispatching problem under a peak-based charging model, which does not take the startup cost into account. The study in \cite{Minghua2013SIG} incorporates the startup cost and turns the problem into a joint unit commitment and economic dispatch problem, and in \cite{hajiesmaili2016rand} a randomized online algorithm is proposed for this problem. In this paper, we aim to solve this problem given accurate prediction of near-future demand. A prediction-aware online algorithm was proposed to this end in \cite{Minghua2013SIG}, but it fails to utilize all of the given predicted information. Here we propose a novel competitive online algorithm that further improves both the theoretical and the practical performance over the previous algorithm. Several aspects of both the algorithm design and the theoretical analysis distinguish our work from other online solutions. We compare the most important aspects of these works and our work in Table~\ref{tbl:sum}.
\vspace{-1mm}
\section{\label{sec:level0}Introduction}
For more than 70 years, the behaviour of rotating superfluids has been of considerable experimental and theoretical interest to several generations of physicists. Initially, it was recognized by Onsager~\cite{nuovocimento_6_supp2_279-287_1949} and Feynman~\cite{proglowtempphys_1_2_17-53_1955} that superfluid flow is characterized by a quantized circulation, which was subsequently experimentally verified by Hall and Vinen \cite{procrsoca_238_1213_204-214_1956,procrsoca_238_1213_215-234_1956}. Since then, it has been recognized that the presence of a nonzero angular momentum in a superfluid leads to complex, nontrivial behaviour, spurring discoveries such as the recent realization of negative-temperature Onsager vortex clusters in two-dimensional Bose gases~\cite{science_364_6447_1264-1267_2019, science_364_6447_1267-1271_2019}. In particular, Bose-Einstein condensates (BECs) offer a uniquely flexible platform for the study of superfluid rotation and have been the focus of intense study for several years~\cite{pitaevskiistringaribec, advphys_57_6_539-616_2008, rmp_81_2_647-691_2009}.
Consider a superfluid in a rotating bucket. This superfluid cannot support rigid-body rotation if the bucket is symmetric about the rotation axis, due to the absence of shear forces, and so its angular momentum manifests in the form of quantum vortices above a certain critical rotation frequency~\cite{leggettquantumliquids}. However, when the symmetry of the bucket about the rotation axis is broken, it is able to transfer angular momentum to the superfluid even in the absence of shear forces, and the condensate exhibits solid-body rotation at slow rotation frequencies and quantized vortices above a critical rotation frequency. For Bose-Einstein condensates, in which the `bucket' is replaced by an atomic trap generated by electromagnetic fields, angular momentum may be transferred from the trap to the condensate by modulating the applied fields such that the condensate is confined by a rotating potential that is asymmetrical about the rotation axis~\cite{pitaevskiistringaribec}. At sufficiently high rotation frequencies, this method induces vorticity in the condensate~\cite{prl_88_1_010405_2001}. Alternate methods for producing vortices in BECs also exist, such as stirring with a Gaussian laser beam~\cite{prl_84_5_806-809_2000}, dragging a laser configuration through a trapped condensate (or, equivalently, a condensate through a laser configuration)~\cite{prl_104_16_160401_2010, prl_117_24_245301_2016}, applying oscillatory perturbations to the trapping~\cite{pra_79_4_043618_2009,prl_103_4_045301_2009}, condensing a rotating thermal (non-condensed) atomic vapor~\cite{prl_87_23_210403_2001}, and utilizing the Kibble-Zurek mechanism by quenching a thermal vapor across the BEC critical temperature~\cite{nature_455_7215_948-951_2008}. 
The theoretical and analytical study of the resulting vortices has uncovered phenomena rich in variety; some examples that are relevant to scalar, nondipolar, single-component condensates at zero temperature include Kelvin waves~\cite{pra_62_6_063617_2000, prl_90_10_100403_2003, prl_101_2_020402_2008}, Abrikosov vortex lattices and their Tkachenko modes~\cite{science_292_5516_476-479_2001, prl_91_11_110402_2003, prl_91_10_100402_2003}, quantum Hall-like physics~\cite{prl_87_6_060403_2001, prl_87_12_120405_2001, prl_91_3_030402_2003, prl_92_4_040404_2004}, vortex reconnections~\cite{physfluids_24_1_125108_2012, prx_7_2_021031_2017}, quantum analogs of classical fluid instabilities~\cite{prl_104_15_150404_2010, prl_117_24_245301_2016, pra_97_5_053608_2018}, and hysteresis~\cite{pra_63_4_041603r_2001, pra_74_4_043618_2006}.
By contrast, previous studies relating to tilting effects in rotating BECs have mainly focussed on the collective modes of vortices in response to tilting perturbations of the trap~\cite{prl_86_21_4725-7428_2001, prl_91_9_090403_2003, prl_93_8_080406_2004, prl_113_16_165303_2014}, while the literature concerning the steady rotation of the external confinement about a non-principal axis is chiefly limited to the stability of the centre-of-mass oscillations in the rotating frame~\cite{pra_65_6_063606_2002, pra_71_4_043610_2005}. Given that such \emph{tilted} rotating traps may be experimentally generated in a similar manner to the excitation of the tilting modes, and that roughly analogous systems such as dipolar BECs with tilted rotating dipole moments have been realized experimentally~\cite{prl_89_13_130401_2002, prl_120_23_230401_2018}, a systematic study of BECs confined by a tilted rotating trap is warranted. In this paper, we analytically obtain stationary solutions of the Gross-Pitaevskii equation in the Thomas-Fermi limit for a condensate subject to a range of different tilting angles and harmonic trapping regimes. For all but the most trivial cases, the stationary solution densities are found to be tilted about the rotation axis by a different angle than the trap itself. One of the consequences of this additional degree of freedom is the existence of two previously unknown branches of stationary solutions. These exist even when the trap is not tilted away from the rotation axis, a result that is analogous to the tilted triaxial ellipsoids that are rotating-frame stationary solutions for self-gravitating irrotational classical fluids~\cite{physfluids_8_12_3414-3422_1996}. 
Focusing on the stationary solution branch existing in the nonrotating limit, we semi-analytically linearize the condensate's fluctuations in response to small perturbations and thus predict a dynamical instability at higher rotation frequencies, where the amplitude of one or more collective modes is expected to amplify exponentially in time. In the regions of dynamical instability we show via numerical Gross-Pitaevskii simulations that untilted vortices are nucleated from a tilted condensate, despite the background condensate density still being tilted. The theoretical formalism utilized represents a generalization of existing theoretical methods for studying BECs in asymmetric, untilted rotating traps~\cite{prl_86_3_377-380_2001, prl_87_19_190402_2001, prl_92_2_020403_2004, prl_95_14_145301_2005, pra_73_6_061603r_2006, jphysb_40_18_3615-3628_2007} and are readily conducive to experimental investigation along the lines of previous studies that have probed the untilted regime~\cite{prl_86_20_4443-4446_2001, prl_88_1_010405_2001, prl_88_7_070406_2002}.
This paper is structured as follows. Section \ref{sec:level1} defines the concept of a tilted rotating trap and introduces the relevant coordinate reference frames, while Sec.~\ref{sec:level2} discusses the methodology for solving for the vortex-free stationary solutions in the Thomas-Fermi limit. In Sec.~\ref{sec:level3}, we examine the features of these stationary solutions for two distinct trapping regimes, and in Sec.~\ref{sec:level4}, the time-dependent theory is linearized in order to characterize the dynamical stability of the vortex-free stationary solutions during a quasi-adiabatic rampup of the trapping rotation frequency. Finally, Sec.~\ref{sec:level5} contains a discussion of the outcomes of a series of numerical simulations of such rampups, where the dynamical route to vortex nucleation in a vorticity-free condensate in a tilted rotating trap is demonstrated.
\section{\label{sec:level1}The Tilted, Rotating Harmonic Trap}
In order to describe a dilute, scalar BEC at zero temperature in a rotating, tilted harmonic trap, we utilize the Gross-Pitaevskii equation for the condensate order parameter, $\psi$. We assume that $N$ condensed bosons, each with a mass $m$, are confined in the trap and that the root mean squared harmonic trapping frequency in the $x$-$y$ plane is given by $\omega_{\perp}$. This frequency may be used to rescale $t$ as $t\rightarrow\omega_{\perp}t$ and $\mathbf{r}$ as $\mathbf{r}\rightarrow\mathbf{r}/l_{\perp}$, where $l_{\perp} = \sqrt{\hbar/(m\omega_{\perp})}$ is the in-plane harmonic oscillator length. We also rescale $\psi$ as $\psi\rightarrow\sqrt{l_{\perp}^3/N}\psi$, such that it is normalized as
\begin{equation}
\int\mathrm{d}^3r\,|\psi(\mathbf{r}, t)|^2 = 1. \label{eq:normalization}
\end{equation}
Subsequently, in a reference frame rotating with respect to the inertial laboratory frame with the angular velocity $\mathbf{\Omega}$, $\psi$ obeys the dimensionless Gross-Pitaevskii equation (GPE)~\cite{pitaevskiistringaribec, pethicksmithbecdilutegases, rmp_81_2_647-691_2009, advphys_57_6_539-616_2008}:
\begin{equation}
i\frac{\partial\psi}{\partial t} = -\frac{1}{2}\nabla^2\psi + V_{\text{T}}(\mathbf{r},t)\psi + \tilde{g}|\psi|^2\psi + i\mathbf{\Omega}\cdot(\mathbf{r}\times\nabla)\psi. \label{eq:rescaledgpe}
\end{equation}
Here we define $\tilde{g} = 4\pi Na_{\text{s}}/l_{\perp}$ as the effective strength of a mean-field, two-body interaction, with a corresponding $s$-wave scattering length given by $a_s$, and denote the time-dependent harmonic trapping potential by $V_{\text{T}}$.
Previously, theoretical and experimental studies of the angular momentum of trapped BECs have tended to assume that the rotation axis of the confinement coincides with one of its symmetry axes. The rotation of a harmonic trap about an arbitrary axis can effectively be modeled by fixing $\mathbf{\Omega} = \Omega\hat{z}$, without loss of generality, and assuming that the trapping potential is not symmetric under the transformation $z \rightarrow -z$. In the co-rotating reference frame, this potential can be specified by
\begin{align}
V_{\text{T}}\left(\mathbf{r}\right) &= \frac{1}{2}\left[(1-\varepsilon)\left(x\cos\theta + z\sin\theta\right)^2 + (1+\varepsilon)y^2\right] \nonumber \\
&+ \frac{1}{2}\gamma^2\left(x\sin\theta - z\cos\theta\right)^2, \label{eq:tiltedtrapuprightcoord}
\end{align}
where $\varepsilon \in (-1, 1)$ and $\gamma\in\mathbb{R}$. This external potential is equivalent to
\begin{equation}
V_{\text{T}}(\mathbf{R}) = \frac{1}{2}\left[(1-\varepsilon)X^2 + (1+\varepsilon)Y^2 + \gamma^2Z^2\right], \label{eq:tiltedtrapnocrossterm}
\end{equation}
via a rotation of the co-rotating coordinates as given by
\begin{equation}
\begin{pmatrix}
X \\
Y \\
Z
\end{pmatrix}
=
\begin{pmatrix}
\cos\theta & 0 & \sin\theta \\
0 & 1 & 0 \\
-\sin\theta & 0 & \cos\theta
\end{pmatrix}
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}. \label{eq:traptilttransform}
\end{equation}
By inspection, Eq.~\eqref{eq:tiltedtrapuprightcoord} is equivalent to Eq.~\eqref{eq:tiltedtrapnocrossterm} when, for integer $n$, the tilting angle obeys $\theta = n\pi$. A similar equivalence, albeit with modified values of $\gamma$ and $\varepsilon$, holds when $n$ takes on half-integer values.
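The equivalence between Eqs.~\eqref{eq:tiltedtrapuprightcoord} and \eqref{eq:tiltedtrapnocrossterm} under the rotation \eqref{eq:traptilttransform} is easy to verify numerically; the sketch below (the helper names are ours) checks the change of coordinates for arbitrary angles, along with the $\theta = n\pi$ identity noted above:

```python
import numpy as np

def V_tilted(r, eps, gamma, theta):
    """Trap potential in the co-rotating frame (x, y, z),
    Eq. (eq:tiltedtrapuprightcoord)."""
    x, y, z = r
    return (0.5 * ((1 - eps) * (x*np.cos(theta) + z*np.sin(theta))**2
                   + (1 + eps) * y**2)
            + 0.5 * gamma**2 * (x*np.sin(theta) - z*np.cos(theta))**2)

def V_principal(R, eps, gamma):
    """Trap potential in the principal-axis frame (X, Y, Z),
    Eq. (eq:tiltedtrapnocrossterm)."""
    X, Y, Z = R
    return 0.5 * ((1 - eps) * X**2 + (1 + eps) * Y**2 + gamma**2 * Z**2)

def to_principal(r, theta):
    """Rotation about the y-axis, Eq. (eq:traptilttransform):
    (x, y, z) -> (X, Y, Z)."""
    x, y, z = r
    return (x*np.cos(theta) + z*np.sin(theta),
            y,
            -x*np.sin(theta) + z*np.cos(theta))
```

Note that the last term of Eq.~\eqref{eq:tiltedtrapuprightcoord} is $(-Z)^2 = Z^2$, so the two forms agree identically for every $\theta$ once the coordinate rotation is performed; the special values $\theta = n\pi$ are those for which they agree without it.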
To simulate the stationary state and dynamics of a BEC in this trap via numerical methods, it is sufficient to use Eqs.~\eqref{eq:rescaledgpe} and \eqref{eq:tiltedtrapuprightcoord}. However, the vorticity-free stationary solutions of Eq.~\eqref{eq:tiltedtrapuprightcoord}, and their linear response to environmental perturbations, are well-described in the $\theta = 0$ limit by purely semi-analytical methods~\cite{prl_86_3_377-380_2001, prl_87_19_190402_2001}. To utilise these methods for an arbitrary value of $\theta$ it is necessary to set up a hydrodynamic formalism. This involves the definition of the condensate's density, $n$, phase, $S$, and superfluid velocity, $\mathbf{v}$, via the relations~\cite{pitaevskiistringaribec}:
\begin{align}
\psi &= \sqrt{n}e^{iS}, \label{eq:nsdef} \\
\mathbf{v} &= \nabla S. \label{eq:vdef}
\end{align}
Substituting Eqs.~\eqref{eq:nsdef} and \eqref{eq:vdef} into Eq.~\eqref{eq:rescaledgpe} yields a pair of hydrodynamic equations given by~\cite{advphys_57_6_539-616_2008, rmp_81_2_647-691_2009}:
\begin{align}
\frac{\partial n}{\partial t} &= -\nabla\cdot\left[n\left(\mathbf{v}-\mathbf{\Omega}\times\mathbf{r}\right)\right], \label{eq:continuity} \\
\frac{\partial\mathbf{v}}{\partial t} &= -\nabla\left\lbrace\frac{\mathbf{v}^2}{2} + V_{\text{T}} + \tilde{g}n - \mathbf{v}\cdot(\mathbf{\Omega}\times\mathbf{r}) - \frac{\nabla^2\left(\sqrt{n}\right)}{2\sqrt{n}}\right\rbrace. \label{eq:euler}
\end{align}
When $Na_s \gg l_{\perp}$ the \emph{quantum pressure} term in Eq.~\eqref{eq:euler}, $\nabla\left[\nabla^2(\sqrt{n})/\sqrt{n}\right]$, is negligible due to the minimal effects of zero-point kinetic energy fluctuations in the condensate~\cite{pra_51_2_1382-1386_1995, prl_76_1_6-9_1996, pitaevskiistringaribec}. In the Thomas-Fermi (TF) limit, where this term may be neglected, Eq.~\eqref{eq:euler} is approximated by the simplified form
\begin{equation}
\frac{\partial\mathbf{v}}{\partial t} = -\nabla\left\lbrace\frac{\mathbf{v}^2}{2} + V_{\text{T}} + \tilde{g}n -\mathbf{v}\cdot(\mathbf{\Omega}\times\mathbf{r})\right\rbrace. \label{eq:tfeuler}
\end{equation}
We also note that the vector $\mathbf{\Omega}\times\mathbf{r}$ lies in the $x$-$y$ plane whereas the principal axes of the trap are given by $\hat{X}$, $\hat{Y}$, and $\hat{Z}$, with $\hat{x}$ and $\hat{X}$ coinciding only when the trap is untilted. The resulting competition between the trapping and rotating-frame transformation terms necessitates the introduction of a second angle, $\xi$, and a third co-rotating coordinate frame, $\tilde{\mathbf{r}}$, in order to find the axes of symmetry of the solutions of Eqs.~\eqref{eq:continuity} and \eqref{eq:tfeuler}. Let us define $\xi$ and $\tilde{\mathbf{r}}$ via the transformation
\begin{align}
\begin{pmatrix}
\tilde{x} \\
\tilde{y} \\
\tilde{z}
\end{pmatrix} &=
\begin{pmatrix}
\cos\xi & 0 & -\sin\xi \\
0 & 1 & 0 \\
\sin\xi & 0 & \cos\xi
\end{pmatrix}
\begin{pmatrix}
X \\
Y \\
Z
\end{pmatrix} \nonumber \\
&=
\begin{pmatrix}
\cos(\theta-\xi) & 0 & \sin(\theta-\xi) \\
0 & 1 & 0 \\
-\sin(\theta-\xi) & 0 & \cos(\theta-\xi)
\end{pmatrix}
\begin{pmatrix}
x \\
y \\
z
\end{pmatrix}. \label{eq:adjusttiltframe}
\end{align}
In this new reference frame, the trapping is given by
\begin{align}
V_{\text{T}}\left(\tilde{\mathbf{r}}\right) &= \frac{1}{2}\left[(1-\varepsilon)\left(\tilde{x}\cos\xi + \tilde{z}\sin\xi\right)^2 + (1+\varepsilon)\tilde{y}^2\right] \nonumber \\
&+ \frac{1}{2}\gamma^2\left(\tilde{x}\sin\xi - \tilde{z}\cos\xi\right)^2, \label{eq:tiltedtraptildecoord}
\end{align}
while the rotating-frame term, $\mathbf{\Omega}\times\mathbf{r}$ transforms to
\begin{equation}
\mathbf{\Omega}\times\tilde{\mathbf{r}} = \Omega\left[\cos(\theta-\xi)\left(-\tilde{y}\hat{\tilde{x}} + \tilde{x}\hat{\tilde{y}}\right) + \sin(\theta-\xi)\left(\tilde{y}\hat{\tilde{z}} - \tilde{z}\hat{\tilde{y}}\right)\right]. \label{eq:galileitiltedtrap}
\end{equation}
To clarify the relationship between the co-rotating reference frames, we overlay the coordinate axes of $\mathbf{r}$, $\mathbf{R}$, and $\tilde{\mathbf{r}}$ at constant $y = Y = \tilde{y} = 0$ on a typical cross-section of a TF surface of constant density in Fig.~\ref{referenceframes}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{referenceframes.pdf}
\caption{Shaded cross-section, at $y = Y = \tilde{y} = 0$, of the ellipsoidal surface of constant density for a Thomas-Fermi stationary state with its semi-axes along the $\tilde{x}$- and $\tilde{z}$-axes, $R_x$ and $R_z$, respectively, illustrated for reference. The Cartesian axes corresponding to the coordinate frames $\mathbf{r}$, $\mathbf{R}$, and $\tilde{\mathbf{r}}$ are overlaid on the cross-section, and $\mathbf{\Omega} \parallel \hat{z}$.}
\label{referenceframes}
\end{figure}
\section{\label{sec:level2}Thomas-Fermi Stationary Solutions}
The stationary solutions of the GPE are specified through the condensate's chemical potential, $\mu$, via~\cite{pitaevskiistringaribec}
\begin{equation}
\psi(\tilde{\mathbf{r}}, t) = \psi(\tilde{\mathbf{r}}, t = 0)\exp(-i\mu t).
\end{equation}
Therefore, the stationary state density, $n_{\text{TF}}$, and velocity, $\mathbf{v}_{\text{TF}}$, obey
\begin{gather}
0 = \nabla\cdot\left[n_{\text{TF}}\left(\mathbf{v}_{\text{TF}}-\mathbf{\Omega}\times\tilde{\mathbf{r}}\right)\right], \label{eq:contstat} \\
\nabla\mu = \nabla\left\lbrace\frac{\mathbf{v}_{\text{TF}}^2}{2} + V_{\text{T}} + \tilde{g}n_{\text{TF}} -\mathbf{v}_{\text{TF}}\cdot(\mathbf{\Omega}\times\tilde{\mathbf{r}})\right\rbrace. \label{eq:tfeulerstat}
\end{gather}
Let us impose the following \emph{Ans{\"a}tze} for $n_{\text{TF}}$ and $\mathbf{v}_{\text{TF}}$:
\begin{align}
n_{\text{TF}}(\tilde{\mathbf{r}}) &= n_0\left(1 - \mathlarger{\sum_{i\in\{x,y,z\}}}\frac{\tilde{r}_i^2}{R_i^2}\right)\Theta\left(1 - \mathlarger{\sum_{i\in\{x,y,z\}}}\frac{\tilde{r}_i^2}{R_i^2}\right), \label{eq:nstat} \\
\mathbf{v}_{\text{TF}}(\tilde{\mathbf{r}}) &= \nabla\left[\alpha_{xy}\tilde{x}\tilde{y} + \alpha_{yz}\tilde{y}\tilde{z} + \alpha_{zx}\tilde{z}\tilde{x}\right]. \label{eq:modifyrecati}
\end{align}
Here, $n_0 = 15/(8\pi R_xR_yR_z)$ is a normalization parameter that ensures that $n_{\text{TF}}$ obeys Eq.~\eqref{eq:normalization}~\cite{pra_51_2_1382-1386_1995, prl_76_1_6-9_1996}. The form of Eq.~\eqref{eq:nstat} shows that the angle $\xi$ in the coordinate transformation given by Eq.~\eqref{eq:adjusttiltframe} is fixed by the requirement that the principal axes of the TF stationary state density coincide with the Cartesian axes of the $\mathbf{r}$ coordinate frame. The parameters $\lbrace R_i\rbrace$ thus denote the semi-axes of the paraboloid TF profile along the $\tilde{r}_i$-axis. We illustrate these features in the TF density cross-section in Fig.~\ref{referenceframes} by labeling the ellipsoid's semi-axes along $\hat{\tilde{x}}$ and $\hat{\tilde{z}}$ as $R_x$ and $R_z$, respectively.
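The normalization constant $n_0 = 15/(8\pi R_xR_yR_z)$ can be checked directly: stretching the ellipsoidal support onto the unit ball reduces the integral of Eq.~\eqref{eq:nstat} to a one-dimensional radial quadrature with Jacobian $R_xR_yR_z$. A minimal sketch (the function name is ours):

```python
import math

def tf_norm(Rx, Ry, Rz, n_s=100000):
    """Numerically integrate the Thomas-Fermi density
    n0 * (1 - x^2/Rx^2 - y^2/Ry^2 - z^2/Rz^2) over its ellipsoidal support.

    Substituting u_i = r_i / R_i maps the support onto the unit ball, so the
    integral becomes Rx*Ry*Rz * 4*pi * int_0^1 (1 - s^2) s^2 ds, evaluated
    here by a midpoint rule.  The exact radial integral is 2/15, which
    cancels n0 = 15/(8*pi*Rx*Ry*Rz) to give unit norm.
    """
    n0 = 15.0 / (8.0 * math.pi * Rx * Ry * Rz)
    ds = 1.0 / n_s
    radial = sum((1.0 - s*s) * s*s * ds
                 for s in (ds * (i + 0.5) for i in range(n_s)))
    return n0 * Rx * Ry * Rz * 4.0 * math.pi * radial
```

The result is independent of the semi-axes, as the analytic cancellation requires.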
Equation~\eqref{eq:modifyrecati} is consistent with the quadrupolar flow of a TF stationary state in an untilted harmonic trap ($\theta = \xi = 0$) rotating about the $z$-axis, $\mathbf{v}_{\text{TF}} = \alpha\nabla(xy)$~\cite{prl_86_3_377-380_2001}. An inspection of Eq.~\eqref{eq:contstat} shows that the $k$th component of $\mathbf{v}$, $\sum_{j\neq k}\alpha_{jk}\tilde{r}_j$, is nonzero only if $\epsilon_{ijk}\Omega_i\tilde{r}_j \neq 0$, which in turn shows that $\alpha_{ij} \neq 0$ only if $\epsilon_{ijk}\Omega_k \neq 0$. This suggests that for the problem at hand, we have $\alpha_{zx} = 0$ since $\Omega_y = 0$. By substituting Eq.~\eqref{eq:nstat} and~\eqref{eq:modifyrecati} into Eq.~\eqref{eq:contstat} and equating the coefficients of the spatial coordinates, we can verify the property that $\alpha_{zx}$ is null and also derive the relations
\begin{align}
\alpha \equiv \alpha_{xy} &= \left(\frac{\kappa_x^2 - \kappa_y^2}{\kappa_x^2 + \kappa_y^2}\right)\Omega\cos(\theta-\xi), \label{eq:alphadefn} \\
\delta \equiv \alpha_{yz} &= \left(\frac{\kappa_y^2 - 1}{\kappa_y^2 + 1}\right)\Omega\sin(\theta-\xi), \label{eq:deltadefn}
\end{align}
where $\kappa_x = R_x/R_z$ and $\kappa_y = R_y/R_z$. Thus the trial solution employed for the velocity field is
\begin{equation}
\mathbf{v}_{\text{TF}}(\tilde{\mathbf{r}}) = \alpha\nabla(\tilde{x}\tilde{y}) + \delta\nabla(\tilde{y}\tilde{z}). \label{eq:vstat}
\end{equation}
This quadrupolar profile for the velocity field, and thereby the spatial dependence of the condensate's phase, may be considered as the quantum analog of the classical velocity potential for an inviscid fluid inside an ellipsoid container rotating about a non-principal axis of the ellipsoid~\cite{lambhydrodynamics, landaulifshitzvol6fluidmechanics}. In both systems, solid-body rotation is possible only when the density is asymmetric about the rotation axis. We also note that Eqs.~\eqref{eq:alphadefn} and \eqref{eq:deltadefn} are formally similar to the equations of motion appearing in the context of the rotational energy bands in the tilted-axis cranked shell model of rotating triaxial nuclei~\cite{prc_65_5_054304_2002}.
The problem of determining the stationary solutions of Eqs.~\eqref{eq:contstat} and \eqref{eq:tfeulerstat} may now be reduced to solving a set of five self-consistency relations for $\lbrace \kappa_x, \kappa_y, \alpha, \delta, \xi\rbrace$. These are obtained by substituting Eqs.~\eqref{eq:nstat} and \eqref{eq:vstat} into Eq.~\eqref{eq:tfeulerstat} and subsequently reading off the coefficients of like terms. Firstly, from the coefficients of $\tilde{x}^2$, $\tilde{y}^2$ and $\tilde{z}^2$, we find that the TF semi-axes are given by
\begin{equation}
R_i^2 = \frac{2\tilde{g}n_0}{\tilde{\omega}_i^2}. \label{eq:tfradii}
\end{equation}
In Eq.~\eqref{eq:tfradii}, we make use of generalized harmonic trapping frequencies, $\tilde{\omega}_i^2$, that are defined as
\begin{align}
\tilde{\omega}_x^2 &= (1-\varepsilon)\cos^2\xi + \gamma^2\sin^2\xi + \alpha^2 - 2\Omega\alpha\cos(\theta - \xi), \label{eq:omegaxeff} \\
\tilde{\omega}_y^2 &= 1+\varepsilon + \alpha^2 + \delta^2 + 2\Omega[\alpha\cos(\theta-\xi)-\delta\sin(\theta-\xi)], \label{eq:omegayeff} \\
\tilde{\omega}_z^2 &= \gamma^2\cos^2\xi + (1-\varepsilon)\sin^2\xi + \delta^2 + 2\Omega\delta\sin(\theta-\xi). \label{eq:omegazeff}
\end{align}
This implies that the quantities $\kappa_x$ and $\kappa_y$ obey
\begin{equation}
\kappa_i^2 = \frac{\tilde{\omega}_z^2}{\tilde{\omega}_i^2}\,:\,i = x,y. \label{eq:tfratiosols}
\end{equation}
By recognizing that there is no $\tilde{x}\tilde{z}$ term in Eq.~\eqref{eq:nstat}, we also obtain the condition that
\begin{equation}
(1 - \varepsilon - \gamma^2)\sin\xi\cos\xi + \alpha\delta + \Omega[\alpha\sin(\theta - \xi) - \delta\cos(\theta - \xi)] = 0. \label{eq:xzcoeffzero}
\end{equation}
The two final self-consistency relations are obtained via substituting Eq.~\eqref{eq:tfradii} into Eqs.~\eqref{eq:alphadefn} and \eqref{eq:deltadefn}, which yields
\begin{align}
[\alpha+\Omega\cos(\theta-\xi)]\tilde{\omega}_x^2 &+ [\alpha-\Omega\cos(\theta-\xi)]\tilde{\omega}_y^2 = 0, \label{eq:alphaeqn} \\
[\delta+\Omega\sin(\theta-\xi)]\tilde{\omega}_y^2 &+ [\delta-\Omega\sin(\theta-\xi)]\tilde{\omega}_z^2 = 0. \label{eq:deltaeqn}
\end{align}
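After eliminating $\kappa_x$ and $\kappa_y$ via Eq.~\eqref{eq:tfratiosols}, Eqs.~\eqref{eq:xzcoeffzero} -- \eqref{eq:deltaeqn} form a closed system of three equations in $(\alpha, \delta, \xi)$, which can be solved by standard root finding. The sketch below (our own helper names; not the paper's code) uses a Newton iteration with a finite-difference Jacobian. Which root it converges to depends on the initial guess, so tracing a full solution branch requires continuation in $\Omega$:

```python
import numpy as np

def residuals(v, Omega, theta, eps, gamma):
    """Residuals of Eqs. (eq:xzcoeffzero), (eq:alphaeqn), (eq:deltaeqn),
    with the generalized frequencies of Eqs. (eq:omegaxeff)-(eq:omegazeff)."""
    alpha, delta, xi = v
    c, s = np.cos(theta - xi), np.sin(theta - xi)
    wx2 = ((1 - eps)*np.cos(xi)**2 + gamma**2*np.sin(xi)**2
           + alpha**2 - 2*Omega*alpha*c)
    wy2 = 1 + eps + alpha**2 + delta**2 + 2*Omega*(alpha*c - delta*s)
    wz2 = (gamma**2*np.cos(xi)**2 + (1 - eps)*np.sin(xi)**2
           + delta**2 + 2*Omega*delta*s)
    return np.array([
        (1 - eps - gamma**2)*np.sin(xi)*np.cos(xi) + alpha*delta
        + Omega*(alpha*s - delta*c),
        (alpha + Omega*c)*wx2 + (alpha - Omega*c)*wy2,
        (delta + Omega*s)*wy2 + (delta - Omega*s)*wz2,
    ])

def solve_branch(Omega, theta, eps, gamma, guess=(0.0, 0.0, 0.0), tol=1e-13):
    """Newton iteration with a forward-difference Jacobian."""
    v = np.asarray(guess, dtype=float)
    for _ in range(100):
        F = residuals(v, Omega, theta, eps, gamma)
        if np.max(np.abs(F)) < tol:
            break
        J = np.empty((3, 3))
        for j in range(3):
            vp = v.copy()
            vp[j] += 1e-7
            J[:, j] = (residuals(vp, Omega, theta, eps, gamma) - F) / 1e-7
        v = v - np.linalg.solve(J, F)
    return v
```

For $\theta = 0$ and $\varepsilon = 0$ the trivial root $\alpha = \delta = \xi = 0$ satisfies all three relations exactly, while for $\theta \neq 0$ a Newton step from the zero guess picks up the small nonzero $\delta$ and $\xi$ of the slowly rotating branch.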
Equations \eqref{eq:tfratiosols} --~\eqref{eq:deltaeqn} describe branches of stationary solutions as functions of $\Omega$ that terminate when one or more of $\tilde{\omega}_x, \tilde{\omega}_y, \tilde{\omega}_z$ equal zero. The locations of these endpoints determine the number of real stationary solutions for a given value of $\Omega$. We identify four such limits which are of use to us, noting that a rotation of the rotating frame by $\pi/2$ about the $\tilde{y}$-axis transforms $\tilde{x}$ to $\tilde{z}$:
\begin{enumerate}[label=\alph*)]
\item \begin{center}$\tilde{\omega}_x \rightarrow 0$ and $\tilde{\omega}_y,\text{}\tilde{\omega}_z \neq 0$,\end{center} \label{en:endpointx}
\item \begin{center}$\tilde{\omega}_y \rightarrow 0$ and $\tilde{\omega}_x,\text{}\tilde{\omega}_z \neq 0$,\end{center} \label{en:endpointy}
\item \begin{center}$\tilde{\omega}_x,\text{}\tilde{\omega}_y \rightarrow 0$ and $\tilde{\omega}_z \neq 0$,\end{center} \label{en:endpointxy}
\item \begin{center}$\tilde{\omega}_y,\text{}\tilde{\omega}_z \rightarrow 0$ and $\tilde{\omega}_x \neq 0$.\end{center} \label{en:endpointyz}
\end{enumerate}
For the remainder of this paper, the subscripts $xc$, $yc$, $xyc$, and $yzc$ are used to denote the values of quantities such as $\xi$ in the limits~\ref{en:endpointx}, \ref{en:endpointy}, \ref{en:endpointxy} and \ref{en:endpointyz}, respectively. A detailed description of the self-consistency relations satisfied by $\Omega$, $\alpha$, $\delta$ and $\xi$ at each of these limits is provided in Appendix~\ref{sec:level7}, while a description of how the shape of the TF distribution can be understood via inspection of the signs of $\alpha$, $\delta$ and $\theta - \xi$ can be found in Appendix~\ref{sec:level8}.
\section{\label{sec:level3}Stationary Solution Branches}
Keeping in mind the possible limits of the stationary solution branches, $\Omega \rightarrow \lbrace\Omega_{xc}, \Omega_{yc}, \Omega_{xyc}, \Omega_{yzc}\rbrace$, we proceed to solve Eqs.~\eqref{eq:tfratiosols} -- \eqref{eq:deltaeqn} and plot the resulting values of $\alpha$, $\delta$, and $\xi$ as functions of $\Omega$ for fixed values of $\theta$, $\gamma$ and $\varepsilon$. To provide a representative sample of the variety of trapping regimes, we analyze the following cases:
\begin{enumerate}
\item $\gamma = 3/4$, $\varepsilon = 0$, $\theta \in \lbrace 0, \pi/8, \pi/4, 3\pi/8\rbrace$,
\item $\gamma = 4/3$, $\varepsilon = 0.05$, $\theta \in \lbrace 0, \pi/8, \pi/4, 3\pi/8\rbrace$.
\end{enumerate}
When $\Omega = 0$, stationary states in the harmonic trap described by case $1$ are prolate and are axially symmetric about $\hat{z}$, while those for case $2$ are oblate and do not exhibit this axial symmetry. We do not analyze the trap tilting angle $\theta = \pi/2$ as this limit is easily transformed to an untilted trap with a different set of trapping frequencies by rotating the coordinate frame about $\hat{y}$ by $\pi/2$.
\subsection{\label{sec:level3.1}Prolate, Symmetric Trapping}
We initially focus on the prolate, symmetric trap, where $\gamma = 3/4$ and $\varepsilon = 0$. For this trap, we list in Table~\ref{tab:omcrit1} the critical rotation frequencies corresponding to the limits a) -- d) defined in Sec.~\ref{sec:level2}:
\begin{table}[h]
\caption{Endpoints of the stationary solution branches for $\gamma = 3/4$, $\varepsilon = 0$. \label{tab:omcrit1}}
\begin{ruledtabular}
\begin{tabular}{c||c|c|c|c}
& $\Omega_{xc}/\omega_{\perp}$ & $\Omega_{yc}/\omega_{\perp}$ & $\Omega_{xyc}/\omega_{\perp}$ & $\Omega_{yzc}/\omega_{\perp}$ \\ \hline
$\theta = 0$ & 1 & 1 & 1.75 & 1.75 \\
$\theta = \pi/8$ & 0.9475 & 1 & 1.8958 & 1.6495 \\
$\theta = \pi/4$ & 0.8485 & 1 & 1.9899 & 1.6578 \\
$\theta = 3\pi/8$ & 0.7752 & 1 & 1.9833 & 1.7563 \\
\end{tabular}
\end{ruledtabular}
\end{table}
In Fig.~\ref{epsilon0gamma0point75thetaalltf}(a), we plot $\alpha$ as a function of $\Omega$ for the values of $\theta$ listed in Table~\ref{tab:omcrit1}. Here we see that there exist five distinct stationary solution branches, four of which exhibit the endpoints defined by cases a) -- d). Initially we describe the limit $\theta = 0$, as explored in previous theoretical studies. For a trap with axial symmetry about the rotation axis, i.e. $\varepsilon = 0$, an $\alpha = 0$ stationary solution exists for all $\Omega \geq 0$, while two further solutions emerge at the rotational bifurcation frequency $\Omega = \Omega_{\text{b}1} \equiv \omega_{\perp}/\sqrt{2}$~\cite{prl_86_3_377-380_2001, pra_73_6_061603r_2006, jphysb_40_18_3615-3628_2007}. We note that the position of this bifurcation is attributable to an energetic instability of the $l = 2,\,m = 2$ quadrupolar surface mode, which has a frequency $\omega(l = 2,\,m = 2) = \sqrt{2}\omega_{\perp} - 2\Omega$ and is thus energetically favourable for $\Omega \geq \Omega_{\text{b}1}$~\cite{prl_86_3_377-380_2001}. These additional stationary solutions are symmetric about the $\Omega$ axis and terminate in the limit $\Omega \rightarrow \Omega_{xc} = \Omega_{yc} = \omega_{\perp}$, where $\alpha \rightarrow \omega_{\perp}$ as well. Furthermore, we find evidence for the existence of a second bifurcation where two more stationary solutions emerge from the stationary solution defined by $\alpha = 0$ and terminate when $\Omega \rightarrow \Omega_{xyc} = \Omega_{yzc} = 1.75\omega_{\perp}$. We attribute the existence of this bifurcation to the energetic instability of the $l = 2,\,m = 1$ quadrupole mode, which boasts the frequency $\omega(l = 2,\,m = 1) = \omega_{\perp}\sqrt{1 + \gamma^2} - \Omega$~\cite{pitaevskiistringaribec} and is therefore associated with the bifurcation frequency $\Omega_{\text{b}2} = 5\omega_{\perp}/4$ when $\gamma = 3/4$. 
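The two bifurcation frequencies quoted above follow directly from the zeros of the surface-mode dispersions $\omega(l=2,\,m=2) = \sqrt{2}\omega_{\perp} - 2\Omega$ and $\omega(l=2,\,m=1) = \omega_{\perp}\sqrt{1+\gamma^2} - \Omega$; in units of $\omega_{\perp}$, this is a two-line check (the function name is ours):

```python
import math

def bifurcation_frequencies(gamma):
    """Rotation frequencies (in units of omega_perp) at which the l=2, m=2
    and l=2, m=1 surface modes become energetically unstable, i.e. the zeros
    of sqrt(2) - 2*Omega and of sqrt(1 + gamma**2) - Omega."""
    omega_b1 = math.sqrt(2.0) / 2.0       # m = 2 mode: Omega_b1 = 1/sqrt(2)
    omega_b2 = math.sqrt(1.0 + gamma**2)  # m = 1 mode: Omega_b2 = sqrt(1+gamma^2)
    return omega_b1, omega_b2
```

For $\gamma = 3/4$ this reproduces $\Omega_{\text{b}1} = \omega_{\perp}/\sqrt{2} \approx 0.707\,\omega_{\perp}$ and $\Omega_{\text{b}2} = 5\omega_{\perp}/4$, consistent with the branch structure in Fig.~\ref{epsilon0gamma0point75thetaalltf}(a).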
Unlike those emerging from the $m = 2$ bifurcation, these new branches are not symmetric about the $\Omega$ axis, and their existence had not previously been predicted in the context of rotating BECs due to the omission of the additional degrees of freedom given by $\delta$ and $\xi$. However, we note that the energetic instability of an $l = 2,\,m = 1$ surface mode causes similar bifurcations in other systems. For instance, in a rotating reference frame, the equilibrium density of an irrotational gravitationally-bound fluid can undergo just such a bifurcation from a Maclaurin spheroid to a tilted Riemann ellipsoid~\cite{physfluids_8_12_3414-3422_1996}.
The new class of stationary solutions described by $\theta \neq 0$ behaves markedly differently to those for the untilted trap. When $\Omega = 0$ we have a solution defined by $\alpha = 0$ and this solution, which we denote as Branch I, persists for $\Omega < \min\lbrace\Omega_{xc}, \Omega_{yc}\rbrace$. From Table~\ref{tab:omcrit1}, this rotation frequency is given by $\Omega = \Omega_{xc}$ for all of the values of $\theta$ that we consider in this case. Branch I is, in general, the solution that the condensate will follow in response to a quasi-adiabatic acceleration of the trap's rotation frequency from zero. Two additional, connected, branches emerge at a bifurcation frequency, denoted as $\Omega_{\text{b}1}$, and initially have values of $\alpha$ with the opposite sign to the first solution. One of these solutions, denoted here as Branch II, terminates at $\Omega = \max\lbrace\Omega_{xc}, \Omega_{yc}\rbrace \equiv \omega_{\perp}$. The other solution, denoted here as Branch III, persists until the endpoint defined by $\Omega = \min\lbrace\Omega_{xyc}, \Omega_{yzc}\rbrace$, which is equivalent to $\Omega = \Omega_{yzc}$ for this trap. The behavior of Branch III contrasts with that of the solutions for $\theta = 0$, where it is possible for a condensate to follow the same solution branch from $\Omega = \Omega_{\text{b}1}$ until $\Omega \rightarrow \infty$. A second bifurcation frequency, $\Omega_{\text{b}2}$, heralds the emergence of an additional pair of connected branches that exhibit the same sign of $\alpha$. One of these, denoted here as Branch IV, terminates when $\Omega \rightarrow \max\lbrace\Omega_{xyc}, \Omega_{yzc}\rbrace \equiv \Omega_{xyc}$, while the other solution, denoted here as Branch V, exists for $\Omega \in [\Omega_{\text{b}2}, +\infty)$ and is the only solution that exists for $\Omega > \max\lbrace\Omega_{xyc}, \Omega_{yzc}\rbrace$.
\begin{figure}[h]
\includegraphics[width=\linewidth]{epsilon0gamma0point75thetaalltf.pdf}
\vspace*{-5mm}
\caption{Stationary solutions as a function of $\Omega$ for $\alpha$ (a), $\theta - \xi$ (b), and $\delta$ (c), when $\gamma = 3/4$, $\varepsilon = 0$, $\theta \in \lbrace 0, \pi/8, \pi/4, 3\pi/8\rbrace$.}
\label{epsilon0gamma0point75thetaalltf}
\end{figure}
We also present the corresponding solutions of $\theta - \xi$, as a function of $\Omega$, in Fig.~\ref{epsilon0gamma0point75thetaalltf}(b), where we observe that both of the bifurcations are clearly evident in the behavior of $\theta - \xi$ as well as in that of $\alpha$. Furthermore, for $\theta = 0$, the solutions that emerge at $\Omega = \Omega_{\text{b}2}$ and terminate at $\Omega = \Omega_{xyc} = \Omega_{yzc} = 1.75\omega_{\perp}$ are closely related to each other; they correspond to density profiles with identical TF semi-axes but opposite tilting angles about the rotation axis. As such, their respective values of $\xi$ are symmetric about the value $\xi = -\pi/4$. In Fig.~\ref{epsilon0gamma0point75thetaalltf}(c), where $\delta$ is plotted as a function of $\Omega$, we find that for the $\theta = 0$ branches emerging when $\Omega = \Omega_{\text{b}2}$, the values of $\alpha$ for one branch are equivalent to those of $-\delta$ for the other branch. We also note that, unlike the corresponding behavior of $\alpha$ and $\xi$, a qualitative discrepancy in $\delta$ along Branch I for $\theta = \pi/8$ is evident when compared to the angles $\theta = \pi/4$ and $\theta = 3\pi/8$. Specifically, $\delta$ is a monotonically increasing function of $\Omega$ when $\theta = \pi/4$ and $\theta = 3\pi/8$ but exhibits a maximum at $\Omega\approx 0.78\omega_{\perp}$ when $\theta = \pi/8$. However, such qualitative differences with respect to the trap tilting angle are not exhibited by Branches II--V.
\subsection{\label{sec:level3.2}Oblate, Asymmetric Trapping}
We proceed to discuss the condensate's behavior in the oblate, asymmetric trap where $\gamma = 4/3$ and $\varepsilon = 0.05$. Here, the lack of axial symmetry of the trapping along any axis results in the features of the stationary solutions being qualitatively different to those described in Sec.~\ref{sec:level3.1}. In Table~\ref{tab:omcrit2}, we specify the rotation frequencies that correspond to the termination cases a) -- d):
\begin{table}[h]
\caption{Endpoints of the branches -- case 2 \label{tab:omcrit2}}
\begin{ruledtabular}
\begin{tabular}{c||c|c|c|c}
& $\Omega_{xc}/\omega_{\perp}$ & $\Omega_{yc}/\omega_{\perp}$ & $\Omega_{xyc}/\omega_{\perp}$ & $\Omega_{yzc}/\omega_{\perp}$ \\ \hline
$\theta = 0$ & 0.9747 & 1.0247 & 2.3334 & 2.3334 \\
$\theta = \pi/8$ & 1.0097 & 1.0247 & 2.1311 & 2.4914 \\
$\theta = \pi/4$ & 1.1128 & 1.0247 & 1.9924 & 2.5176 \\
$\theta = 3\pi/8$ & 1.2556 & 1.0247 & 2.0012 & 2.3892 \\
\end{tabular}
\end{ruledtabular}
\end{table}
When the rotating trap is untilted, i.e. $\theta = 0$, the stationary solutions corresponding to Branches I, II and III are also untilted, i.e. $\theta = \xi = 0$. We find that Branch I, for which $\alpha \geq 0$, terminates when $\Omega = \Omega_{xc} = \omega_{\perp}\sqrt{1-\varepsilon}$. Branches II and III, which both exhibit $\alpha < 0$, are connected at the bifurcation frequency $\Omega = \Omega_{\text{b}1}$ but are disconnected from Branch I. While Branch II terminates when $\Omega = \Omega_{yc} = \omega_{\perp}\sqrt{1+\varepsilon}$, Branch III is characterized by $\alpha$ monotonically tending to zero as $\Omega \rightarrow \infty$~\cite{prl_86_3_377-380_2001, pra_73_6_061603r_2006, jphysb_40_18_3615-3628_2007}. The extra degrees of freedom that are represented by $\delta$ and $\xi$ manifest themselves when $\theta = 0$ through the presence of the additional, previously unknown, Branches IV and V, which are connected at $\Omega = \Omega_{\text{b}2}$ and terminate at the same rotation frequency, $\Omega = \Omega_{xyc} = \Omega_{yzc}$. However, when the rotating trap is tilted the stationary solutions more closely resemble those in Sec.~\ref{sec:level3.1}, except that $\Omega_{xc} < \Omega_{yc} = \omega_{\perp}\sqrt{1+\varepsilon}$ when $\theta \in \lbrace 0, \pi/8\rbrace$ and $\Omega_{xc} > \omega_{\perp}\sqrt{1+\varepsilon}$ when $\theta \in \lbrace\pi/4, 3\pi/8\rbrace$; the crossover, where $\Omega_{xc} = \Omega_{yc}$, occurs when $\theta \approx 0.4693\,\text{rad} \approx 26.89^{\circ}$. As a result, when $\Omega_{xc} > \Omega_{yc}$, i.e. $\theta \in \lbrace\pi/4, 3\pi/8\rbrace$, Branches I--III possess signs of $\alpha$ opposite to those of the corresponding solutions when $\Omega_{xc} < \Omega_{yc}$, a feature not seen in Sec.~\ref{sec:level3.1}. This behavior is demonstrated in Fig.~\ref{epsilon0point05gamma4over3thetaalltf}(a), where we have plotted $\alpha$ as a function of $\Omega$ for the angles $\theta \in \lbrace 0, \pi/8, \pi/4, 3\pi/8\rbrace$.
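The quoted crossover angle can be roughly cross-checked by linearly interpolating the tabulated values of $\Omega_{xc}(\theta)$ from Table~\ref{tab:omcrit2}. This is only a crude estimate of our own (not the method used to obtain $0.4693\,\text{rad}$), since $\Omega_{xc}$ is not linear in $\theta$:

```python
import math

# Omega_xc / omega_perp at theta = pi/8 and pi/4 (Table II values)
theta1, om1 = math.pi / 8, 1.0097
theta2, om2 = math.pi / 4, 1.1128
om_yc = 1.0247  # Omega_yc / omega_perp is theta-independent here

# Linearly interpolate for the angle where Omega_xc = Omega_yc
theta_cross = theta1 + (om_yc - om1) / (om2 - om1) * (theta2 - theta1)
print(theta_cross, math.degrees(theta_cross))  # ~0.45 rad, i.e. ~26 deg
```

The linear estimate lands within a few hundredths of a radian of the quoted crossover, consistent with a mildly nonlinear $\Omega_{xc}(\theta)$.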
\begin{figure}[h]
\includegraphics[width=\linewidth]{epsilon0point05gamma4over3thetaalltf.pdf}
\vspace*{-5mm}
\caption{Stationary solutions as a function of $\Omega$ for $\alpha$ (a), $\theta - \xi$ (b), and $\delta$ (c), when $\gamma = 4/3$, $\varepsilon = 0.05$, $\theta \in \lbrace 0, \pi/8, \pi/4, 3\pi/8\rbrace$.}
\label{epsilon0point05gamma4over3thetaalltf}
\end{figure}
We have also plotted $\theta - \xi$ as a function of $\Omega$ in Fig.~\ref{epsilon0point05gamma4over3thetaalltf}(b) for this oblate, axially asymmetric trap, and find that, along Branch I, $\theta - \xi$ again behaves differently for $\theta = \pi/8$ than for the angles $\theta = \pi/4$ and $\theta = 3\pi/8$. For instance, the behavior of Branch I for $\theta = \pi/8$ is not monotonic but has a maximum at $\Omega \approx 0.65\omega_{\perp}$. This contrasts sharply with the monotonic behavior of $\theta - \xi$ as a function of $\Omega$ for $\theta = \pi/4$ and $\theta = 3\pi/8$. However, Branches II--V exhibit merely quantitative differences with respect to the tilting angle. As in Sec.~\ref{sec:level3.1}, the values of $\xi$ for the branches that emerge when $\Omega = \Omega_{\text{b}2}$ are symmetric about the value $\xi = -\pi/4$, suggesting that the density profiles for these two branches are physically equivalent, with the same TF semi-axes but opposite tilting angles about the rotation axis. Thus the values of $\alpha$ for one branch are equivalent to those of $-\delta$ for the other branch, which may be inferred from the corresponding plots of $\delta$, as a function of $\Omega$, that are provided in Fig.~\ref{epsilon0point05gamma4over3thetaalltf}(c). Interestingly, the maximum of $\theta - \xi$ for $\theta = \pi/8$ along Branch I when $\Omega \approx 0.65\omega_{\perp}$ is reflected in a similar maximum in $\delta$, which eventually attains negative values as $\Omega \rightarrow \Omega_{xc}$.
\section{\label{sec:level4}Linearized Time-Dependent Hydrodynamics}
Via the hydrodynamic formalism elucidated in Secs.~\ref{sec:level2} and \ref{sec:level3}, we have shown in Sec.~\ref{sec:level3} that the tilting of a rotating harmonic trap induces a nontrivial tilting angle of the condensate's TF stationary state density. The hydrodynamic formalism may also be used to determine the parametric domain of dynamical stability against environmental perturbations, a procedure that has been achieved in the $\theta = 0$ limit~\cite{prl_87_19_190402_2001}. Let us specifically address the scenario where the rotation frequency, $\Omega$, is quasi-adiabatically accelerated from zero for a fixed choice of $\varepsilon$, $\gamma$, and $\theta$. In the TF limit, the condensate will follow the stationary solution Branch I and therefore we solely investigate the dynamical stability of Branch I.
In general, perturbing a trapped BEC in a stationary state can excite one or more of its collective modes. For perturbations of sufficiently small magnitude, the condensate's response may be assumed to be linear and the collective excitations may be obtained by linearizing Eqs.~\eqref{eq:continuity} and \eqref{eq:tfeuler} about the TF stationary state. In this formalism the collective modes are expressed as time-dependent fluctuations of the density and phase that are equivalent to linear combinations of the solutions of the Bogoliubov--de Gennes equations~\cite{pra_54_5_4204-4212_1996, pra_58_4_3168-3179_1998, pitaevskiistringaribec}. To determine the spectrum of collective modes, we write:
\begin{align}
n(\tilde{\mathbf{r}}, t) &= n_{\text{TF}}(\tilde{\mathbf{r}}) + \delta n(\tilde{\mathbf{r}}, t), \label{eq:densitypert} \\
S(\tilde{\mathbf{r}}, t) &= S_{\text{TF}}(\tilde{\mathbf{r}}, t) + \delta S(\tilde{\mathbf{r}}, t). \label{eq:phasepert}
\end{align}
Here, $S_{\text{TF}}(\tilde{\mathbf{r}}, t) = -\mu t + \alpha\tilde{x}\tilde{y} + \delta\tilde{y}\tilde{z}$. The subsequent linearization of Eqs.~\eqref{eq:continuity} and \eqref{eq:tfeuler} is equivalent to neglecting contributions from terms that are quadratic in the density and phase fluctuations, $\delta n$ and $\delta S$, respectively. This results in a coupled set of first-order equations for the time evolution of the fluctuations that is given by~\cite{prl_87_19_190402_2001, pra_73_6_061603r_2006}
\begin{gather}
\frac{\partial}{\partial t}
\begin{pmatrix}
\delta S \\
\delta n
\end{pmatrix}
=
\mathcal{M}
\begin{pmatrix}
\delta S \\
\delta n
\end{pmatrix}, \label{eq:perteqns} \\
\mathcal{M} = -\begin{pmatrix}
\mathbf{v}_c\cdot\nabla & \tilde{g} \\
\nabla\cdot\left(n_{\text{TF}}\nabla\right) & \mathbf{v}_c\cdot\nabla
\end{pmatrix}, \label{eq:pertmatrix} \\
\mathbf{v}_c = \nabla S_{\text{TF}} - \mathbf{\Omega}\times\tilde{\mathbf{r}}. \label{eq:labvel}
\end{gather}
Hence we can express each collective mode, indexed by $\nu$, as a combination of a density fluctuation, $\delta n_{\nu}(\tilde{\mathbf{r}})e^{\lambda_{\nu}t}$, and a phase fluctuation, $\delta S_{\nu}(\tilde{\mathbf{r}})e^{\lambda_{\nu}t}$, that satisfies Eq.~\eqref{eq:perteqns} if the constant $\lambda_{\nu}$ is an eigenvalue of the operator $\mathcal{M}$.
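To make Eq.~\eqref{eq:labvel} concrete, the velocity field can be written out explicitly from $S_{\text{TF}} = -\mu t + \alpha\tilde{x}\tilde{y} + \delta\tilde{y}\tilde{z}$, taking the rotation axis along $\tilde{z}$ (our reading of the upright co-rotating frame). The following symbolic sketch, which is illustrative rather than part of the analysis code, confirms that the resulting linear field is divergence-free:

```python
import sympy as sp

x, y, z, alpha, delta, Omega = sp.symbols('x y z alpha delta Omega', real=True)

# Spatial part of the stationary phase S_TF (the -mu*t term has no gradient)
S = alpha * x * y + delta * y * z
grad_S = sp.Matrix([sp.diff(S, q) for q in (x, y, z)])
Omega_cross_r = sp.Matrix([-Omega * y, Omega * x, 0])  # Omega z-hat cross r

v = grad_S - Omega_cross_r  # laboratory-frame velocity, Eq. (labvel)
print(v.T)                  # ((alpha+Omega)*y, (alpha-Omega)*x + delta*z, delta*y)

div_v = sum(sp.diff(v[i], q) for i, q in enumerate((x, y, z)))
print(sp.simplify(div_v))   # 0: the ansatz velocity field is divergence-free
```

The $\alpha$ and $\delta$ terms generate the quadrupolar flows in the $\tilde{x}$-$\tilde{y}$ and $\tilde{y}$-$\tilde{z}$ planes, respectively, on top of the rigid-body counter-rotation.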
Since the time-dependence of the collective modes is exponential, it is evident that, to linear order, the dynamical stability of a stationary state is determined by the set of all eigenvalues of $\mathcal{M}$, $\lbrace\lambda_{\nu}\rbrace$. If a given eigenvalue has a positive real component, the amplitude of the corresponding collective mode will grow exponentially in time and will overwhelm the stationary state, rendering the stationary state dynamically unstable. Conversely, we have dynamical stability only if all of the eigenvalues of $\mathcal{M}$ have a negative real component, while purely imaginary eigenvalues are characteristic of excitations with an infinite lifetime. To diagonalize $\mathcal{M}$, we expand $\delta n$ and $\delta S$ as polynomials in $\mathbb{R}^3$~\cite{prl_87_19_190402_2001, pra_73_6_061603r_2006}. Since it is not possible to consider all possible collective modes, we truncate the polynomial expansion of the fluctuations such that the maximum allowed order of the polynomials is $N_{\text{max}} = 10$. This proves to be a sufficiently high order to explore the dynamical stability of the stationary states in the linearized regime. However, we note that even if no unstable modes are found from this procedure for a given stationary state, this is not a guarantee of dynamical stability, as a higher value of $N_{\text{max}}$ may admit a collective mode whose eigenvalue has a positive real component. Furthermore, by limiting our analysis to the linearized regime, we neglect nonlinear effects that could destabilize modes that are stable at linear order in the fluctuations.
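For a sense of the numerical scale involved (our estimate, assuming the expansion contains all monomials $\tilde{x}^a\tilde{y}^b\tilde{z}^c$ with $a + b + c \leq N_{\text{max}}$ for each field), the truncation at $N_{\text{max}} = 10$ yields $\binom{13}{3} = 286$ monomials per field, and the stability verdict amounts to checking the real parts of the resulting finite spectrum:

```python
import math
import numpy as np

N_max = 10
n_basis = math.comb(N_max + 3, 3)  # monomials x^a y^b z^c with a+b+c <= N_max
print(n_basis, 2 * n_basis)        # 286 per field -> a 572-dimensional eigenproblem

def dynamically_unstable(M, tol=1e-12):
    """Linear instability criterion: some eigenvalue of the
    linearized operator has a positive real part."""
    return bool(np.any(np.linalg.eigvals(M).real > tol))

# Toy 2x2 illustration (not the physical operator): a purely rotational
# block is marginally stable; adding growth on the diagonal destabilizes it
stable = np.array([[0.0, -1.0], [1.0, 0.0]])    # eigenvalues +/- i
unstable = np.array([[0.1, -1.0], [1.0, 0.1]])  # eigenvalues 0.1 +/- i
print(dynamically_unstable(stable), dynamically_unstable(unstable))  # False True
```

The toy matrices only illustrate the criterion; in practice the full truncated matrix representation of $\mathcal{M}$ is diagonalized at every point of the $(\Omega, \theta)$ grid.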
We now proceed to describe the eigenvalues of the collective modes for Branch I of the stationary solutions presented in Sec.~\ref{sec:level3}. An inspection of $\mathcal{M}$ shows that every collective mode features the same maximum polynomial order for both $\delta n(\tilde{\mathbf{r}})$ and $\delta S(\tilde{\mathbf{r}})$, except for a spatially uniform phase fluctuation, without a corresponding density fluctuation, that is associated with a null eigenvalue. This is a manifestation of the Goldstone mode and is a consequence of the broken $\mathbb{U}(1)$ symmetry that characterizes Bose-Einstein condensation~\cite{pra_54_5_4204-4212_1996, pitaevskiistringaribec}. Fixing $N_{\text{max}} = 10$, we diagonalize Eq.~\eqref{eq:pertmatrix} over the discretely binned parameter space specified by the domain of Branch I of the stationary solutions described in Sec.~\ref{sec:level3.1} ($\varepsilon = 0$, $\gamma = 3/4$ and $\theta\in [0^{\circ}, 90^{\circ}]$). In Fig.~\ref{epsilon0gamma0point75upperheaviside}, we have shaded the bins where the respective Branch I solutions are associated with at least one eigenvalue of $\mathcal{M}$ with a positive real component. To linear order, these points in parameter space comprise a domain of guaranteed dynamical instability. A similar diagonalization of $\mathcal{M}$ with respect to the stationary solutions in Sec.~\ref{sec:level3.2}, i.e. $\varepsilon = 0.05$, $\gamma = 4/3$ and $\theta\in [0^{\circ}, 90^{\circ}]$, yields the stability diagram depicted in Fig.~\ref{epsilon0point05gamma4over3upperheaviside}.
\begin{figure}[h]
\includegraphics[width=\linewidth]{epsilon0gamma0point75upperheaviside.pdf}
\vspace*{-5mm}
\caption{Phase diagram of the dynamical stability of Branch I for $\varepsilon = 0,\,\gamma = 3/4$, with $N_{\text{max}} = 10$; Branch I is dynamically unstable at the shaded points of parameter space. The red dashed lines and markers denote the trajectories of the GPE simulations and the corresponding instability frequency, respectively. The red unbroken curve denotes the endpoints of Branch I, $\Omega = \min\lbrace\Omega_{xc},\Omega_{yc}\rbrace$.}
\label{epsilon0gamma0point75upperheaviside}
\end{figure}
\begin{figure}[h]
\includegraphics[width=\linewidth]{epsilon0point05gamma4over3upperheaviside.pdf}
\vspace*{-5mm}
\caption{Phase diagram of the dynamical stability of Branch I for $\varepsilon = 0.05,\,\gamma = 4/3$, with $N_{\text{max}} = 10$; Branch I is dynamically unstable at the shaded points of parameter space. The red dashed lines and markers denote the trajectories of the GPE simulations and the corresponding instability frequency, respectively. The red unbroken curve denotes the endpoints of Branch I, $\Omega = \min\lbrace\Omega_{xc},\Omega_{yc}\rbrace$.}
\label{epsilon0point05gamma4over3upperheaviside}
\end{figure}
From Figs.~\ref{epsilon0gamma0point75upperheaviside} and \ref{epsilon0point05gamma4over3upperheaviside}, we can see that Branch I is stable for small rotation frequencies and becomes dynamically unstable as $\Omega\rightarrow\omega_{\perp}$. In both cases, we find that the first rotation frequency of instability is lower for larger trap tilt angles, which we attribute to the effective ellipticity of the trapping in the upright co-rotating frame becoming larger as $\theta \rightarrow \pi/2$. Due to the high order of polynomial perturbations that is required to realize unstable collective modes in the limit $\Omega\rightarrow\min\lbrace\Omega_{xc},\Omega_{yc}\rbrace$, Figs.~\ref{epsilon0gamma0point75upperheaviside} and \ref{epsilon0point05gamma4over3upperheaviside} erroneously predict a region of dynamical stability. This limit is represented in Figs.~\ref{epsilon0gamma0point75upperheaviside} and \ref{epsilon0point05gamma4over3upperheaviside} by the red lines that plot $\min\lbrace\Omega_{xc},\Omega_{yc}\rbrace$ as a function of $\theta$; for a sufficiently large value of $N_{\text{max}}$, these red lines would be the boundary of the domain of dynamical instability.
\section{\label{sec:level5}Gross-Pitaevskii Equation Simulations}
In the preceding two sections we have found that the rotation of a tilted harmonic trap induces a nontrivial tilting angle of the condensate's density profile and that the stationary solutions become dynamically unstable as $\Omega\rightarrow\min\lbrace\Omega_{xc}, \Omega_{yc}\rbrace$. However, the Thomas-Fermi approximation does not provide information about the behavior of the condensate after the dynamical instability has manifested itself, nor does it predict whether or not the narrow regions of instability that extend to lower rotation frequencies in Figs.~\ref{epsilon0gamma0point75upperheaviside} and \ref{epsilon0point05gamma4over3upperheaviside} are negligible during a quasi-adiabatic rampup of $\Omega$. In order to address these questions, we have also explored this system directly by numerically solving the GPE and thereby simulating a quasi-adiabatic rampup of a harmonic trap's rotation frequency from zero. In this section, we employ the same set of trapping parameters that were specified in the discussion of the TF stationary states and their dynamical stability, i.e. $\theta \in \lbrace\pi/8,\pi/4,3\pi/8\rbrace$ with either $\lbrace\gamma = 3/4,\,\varepsilon = 0\rbrace$ or $\lbrace\gamma = 4/3,\,\varepsilon = 0.05\rbrace$, and discuss the results of the GPE simulations.
Our procedure for solving Eq.~\eqref{eq:rescaledgpe} in the upright, co-rotating coordinate frame (denoted by $\mathbf{r}$) is as follows. We set the rescaled two-body interaction strength as $\tilde{g} = 10^4$ and specify a $200\times 200\times 200$ spatial grid with the intervals $\Delta x = \Delta y = \Delta z = 0.25l_{\perp}$; these parameters are sufficient for the ground state at $\Omega = 0$ to be well-described by the TF stationary solution. Initially, the backward Euler method is utilized to simulate Eq.~\eqref{eq:rescaledgpe} in imaginary time, with a suitable trial state as the initial condition, and the converged solution is taken as the ground state solution at zero rotation~\cite{kinetrelatmod_6_1_1-135_2013}. Before propagating this resulting solution in real time, the local value of the condensate density is randomly perturbed by up to $5\%$ of the original value in order to represent the environmental noise or experimental imperfections that would seed any potentially unstable collective modes. This perturbed state is used as the initial condition for the real-time evolution of the GPE, which is achieved using the alternating direction implicit time-splitting pseudospectral (ADI-TSSP) Strang scheme~\cite{bao_wang_2006}. We employ a timestep of $\Delta t = 0.004\omega_{\perp}^{-1}$ and an angular acceleration $\frac{\Delta\Omega}{\Delta t} = 0.0005\omega_{\perp}^2$, where the resulting increase in $\Omega$ at each timestep, $\Delta\Omega = 2\times 10^{-6}\,\omega_{\perp}$, is sufficiently small that the condition of adiabaticity holds. Therefore, the condensate is expected to smoothly follow Branch I of the TF stationary solutions during the rampup procedure.
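The rampup bookkeeping implied by these parameters is simple to verify (a back-of-the-envelope sketch in the dimensionless code units above, not the simulation itself):

```python
dt = 0.004          # timestep, in units of 1/omega_perp
dOmega_dt = 0.0005  # angular acceleration, in units of omega_perp**2

dOmega = dOmega_dt * dt  # increment of Omega per timestep
steps = 1.0 / dOmega     # timesteps needed to ramp Omega from 0 to omega_perp
print(dOmega)            # 2e-06 omega_perp per step
print(steps, steps * dt) # 5e5 steps, i.e. a ramp time of 2000 / omega_perp
```

Reaching $\Omega = \omega_{\perp}$ therefore takes of order $5\times 10^{5}$ timesteps, which sets the computational cost of each rampup simulation.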
During the real-time evolution of the GPE, we extract the observables $R_x$, $R_y$, $R_z$, and $\xi$ by fitting the density at $x = z = 0$ to the $1$D TF density profile, $n(y) = n_0(1 - y^2/R_y^2)$, and similarly the density at $y = 0$ to the $2$D TF density profile, $n(x, z) = n_0\lbrace 1 - x^2[\cos^2(\theta - \xi)/R_x^2 + \sin^2(\theta-\xi)/R_z^2] - z^2[\sin^2(\theta - \xi)/R_x^2 + \cos^2(\theta-\xi)/R_z^2] - (1/R_x^2-1/R_z^2)\sin[2(\theta-\xi)]xz\rbrace$. Note that the form of these density cross-sections can be found by applying the transformation in Eq.~\eqref{eq:adjusttiltframe} to Eq.~\eqref{eq:nstat}. Subsequently, we may determine $\alpha$ and $\delta$ via Eqs.~\eqref{eq:alphadefn} and \eqref{eq:deltadefn}.
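The fitting step can be illustrated schematically for the one-dimensional cut (a synthetic example of our own, with an assumed noise level; the production analysis also fits the full two-dimensional profile above):

```python
import numpy as np
from scipy.optimize import curve_fit

def tf_1d(y, n0, Ry):
    # 1D Thomas-Fermi cross-section n(y) = n0*(1 - y^2/Ry^2), zero outside
    return n0 * np.clip(1.0 - (y / Ry) ** 2, 0.0, None)

# Synthetic noisy cross-section with known parameters n0 = 1, Ry = 4
rng = np.random.default_rng(0)
y = np.linspace(-6.0, 6.0, 200)
data = tf_1d(y, 1.0, 4.0) + 0.01 * rng.standard_normal(y.size)

# Least-squares fit, started from a reasonable initial guess
(n0_fit, Ry_fit), _ = curve_fit(tf_1d, y, data, p0=[0.9, 3.5])
print(n0_fit, Ry_fit)  # recovers values close to 1.0 and 4.0
```

In the actual analysis the same least-squares idea is applied to the simulated density cuts at each value of $\Omega$, with $\xi$ entering through the two-dimensional profile.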
\subsection{\label{sec:level5.1}Prolate, Symmetric Trapping}
In Fig.~\ref{case_1_alphatmx} we compare $\alpha$ and $\xi$ as obtained from the GPE simulations of a quasi-adiabatic rampup of $\Omega$, when $\varepsilon = 0$ and $\gamma = 3/4$, to the TF results in Fig.~\ref{epsilon0gamma0point75thetaalltf}(a) and (b). Here, the first (a, c, e) and second (b, d, f) columns correspond to $\alpha$ and $\theta - \xi$, respectively, as functions of $\Omega$, while the rows correspond to distinct tilting angles: $\theta = \pi/8$ in the first row (a, b), $\theta = \pi/4$ in the second row (c, d), and $\theta = 3\pi/8$ in the third row (e, f). Figure~\ref{case_1_alphatmx} demonstrates that the condensate initially follows the TF stationary state closely during the quasi-adiabatic acceleration of the rotation frequency, which confirms the prediction in Fig.~\ref{epsilon0gamma0point75upperheaviside} that the TF stationary states are dynamically stable for low rotation frequencies. However, as $\Omega\rightarrow\omega_{\perp}$, each of the trajectories from the numerical simulations diverges dramatically from the TF-based predictions. This indicates the onset of a dynamical instability, as predicted in Fig.~\ref{epsilon0gamma0point75upperheaviside}, where the condensate has been forced away from the TF stationary state due to the uncontrolled growth of collective modes. Similar behavior is seen in the analogous comparison of the TF- and GPE-derived values of $\delta$, which we have included in Appendix~\ref{sec:level9} for the reader's reference.
\begin{figure}
\includegraphics[width=\linewidth]{case_1_alphatmx.pdf}
\vspace*{-5mm}
\caption{Comparison of the TF stationary solutions along Branch I to GPE simulations, for $\varepsilon = 0$ and $\gamma = 3/4$, during a quasi-adiabatic rampup of $\Omega$. The first column (a, c, e) plots $\alpha$ as a function of $\Omega$ and the second column (b, d, f) plots $\theta - \xi$ as a function of $\Omega$, where $\theta$ is equal to $\pi/8$ (a, b), $\pi/4$ (c, d), or $3\pi/8$ (e, f).}
\label{case_1_alphatmx}
\end{figure}
The rotation frequencies at which the condensate densities in each of the three simulations diverge from the corresponding TF stationary state densities are depicted as red circular markers in Fig.~\ref{epsilon0gamma0point75upperheaviside}. When $\theta = \pi/8$ or $3\pi/8$, the onset of dynamical instability agrees well with the predictions of the linearized hydrodynamical formalism. However, when $\theta = \pi/4$ the critical rotation frequency is approximately $0.55\omega_{\perp}$, whereas Fig.~\ref{epsilon0gamma0point75upperheaviside} predicts that the stationary solution is always unstable when $\Omega \gtrsim 0.7\omega_{\perp}$. We attribute this discrepancy to the existence of the small fringes of dynamical instability that intersect the trajectory of the $\theta = \pi/4$ simulation at $\Omega \approx 0.50\omega_{\perp}$ and $0.51\omega_{\perp}$, which are sufficient to destabilize the stationary solution. The fringes at $\Omega \approx 0.57\omega_{\perp}$ and $\Omega \approx 0.44\omega_{\perp}$ that are crossed by the trajectories of the simulations for $\theta = \pi/8$ and $\theta = 3\pi/8$, respectively, seem to be too narrow to sufficiently destabilize the stationary states. While another such fringe is crossed by the simulation for $\theta = \pi/8$ when $\Omega \approx 0.51\omega_{\perp}$, and a set of fringes is crossed by the $\theta = 3\pi/8$ simulation when $\Omega \approx 0.72\omega_{\perp}$, they are very close to the continuous domain of dynamical instability and thus their effect is relatively minimal. Furthermore, the GPE simulations also capture nonlinear effects that are ignored in the linearized hydrodynamic formalism.
The deviation of the simulations from the respective TF stationary states at higher rotation frequencies can be better understood by examining the cross-sections of the condensate densities at $y = 0$ and $z = 0$, which provide us with information about the density profiles' tilting angles, $\xi$, and semi-axes, $\lbrace R_x,\,R_y,\,R_z\rbrace$, during the acceleration of the rotation frequency. These cross-sections are presented in Fig.~\ref{case1_45} for the trap tilt $\theta = \pi/4$ when $\Omega$ equals $0.25\omega_{\perp}$ (first row) and $0.575\omega_{\perp}$ (second row). When $\Omega = 0.575\omega_{\perp}$, we halt the rampup of $\Omega$ and then evolve the GPE at constant rotation frequency for a duration of $500\omega_{\perp}^{-1}$; the cross-sections at the end of this procedure are given in the third row of Fig.~\ref{case1_45}. We include the results for the analogous procedures performed with the same trapping geometry but with $\theta \in \lbrace\pi/8,3\pi/8\rbrace$ in Appendix~\ref{sec:level9}. In order to aid the reader's visualization of how the density's principal axes do not generally coincide with those of either the trapping frame or the rotation axis, the $X$-$Z$ Cartesian axes, i.e. the principal axes of the trapping, are overlaid in white upon the cross-sections at $y = 0$.
\begin{figure}
\includegraphics[width=\linewidth]{case1_45.pdf}
\vspace*{-5mm}
\caption{Cross-sections of the condensate density in the co-rotating $x$-$y$ (first column) and $x$-$z$ (second column) planes for $\varepsilon = 0$, $\gamma = 3/4$, and $\theta = \pi/4$, during a quasi-adiabatic rampup of $\Omega$ at $\Omega = 0.25\omega_{\perp}$ (first row) and $\Omega = 0.575\omega_{\perp}$ (second row), and after $500\omega_{\perp}^{-1}$ at constant $\Omega = 0.575\omega_{\perp}$ (third row). The white lines represent the co-rotating $X$-$Z$ axes.}
\label{case1_45}
\end{figure}
In Fig.~\ref{case1_45} we can see that the density profile is smooth when the condensate is dynamically stable against the initially seeded perturbation and, as predicted by the TF theory, its symmetry axes in the $x$-$z$ plane are slightly tilted away from those of the trap. However, when the condensate initially enters the regime of dynamical instability, the density develops surface ripples and a surrounding cloud as some of the atoms are ejected from the centre of the condensate, as seen in the second column of Fig.~\ref{case1_45}. Moreover, we see that after evolution over a period of $500\omega_{\perp}^{-1}$ at constant rotation frequency, $\Omega = 0.575\omega_{\perp}$, the condensate does not resemble a smooth TF distribution but has been subject to quantum vortex nucleation after further atoms have been ejected from the centre of the condensate. This behaviour is a well-known phenomenon that occurs in the rotation of an upright, anisotropic harmonic trap containing a BEC~\cite{prl_87_19_190402_2001, prl_86_20_4443-4446_2001, pra_65_2_023603_2002, pra_67_3_033610_2003, prl_92_2_020403_2004, prl_95_14_145301_2005, pra_73_6_061603r_2006, jphysb_40_18_3615-3628_2007} and thus it is not surprising that it occurs in this system. Crucially, an inspection of Fig.~\ref{case1_45}(f) shows that the vortex lines that intersect the $x$-$z$ plane are almost completely aligned along the rotation axis. This is in contrast to the background condensate density, whose symmetry axes are tilted with respect to both $\lbrace\hat{x}, \hat{z}\rbrace$ and $\lbrace\hat{X}, \hat{Z}\rbrace$.
While the vortices that are seen in Fig.~\ref{case1_45} are not ordered in a lattice, we expect that after a considerably longer period of evolution of the GPE at a constant rotation frequency, the final state of the system is a triangular Abrikosov vortex lattice, as is seen in BECs subject to rotation about a principal axis of the trapping~\cite{science_292_5516_476-479_2001, prl_92_2_020403_2004, prl_95_14_145301_2005, pra_73_6_061603r_2006}.
\subsection{\label{sec:level5.2}Oblate, Asymmetric Trapping}
We now describe the results of the analogous GPE simulations for a trap with the parameters $\gamma = 4/3$ and $\varepsilon = 0.05$. In Fig.~\ref{case_2_alphatmx} we compare $\alpha$ and $\theta - \xi$ from these GPE simulations to the TF results in Fig.~\ref{epsilon0point05gamma4over3thetaalltf}(a) and (b). Here, the first (a, c, e) and second (b, d, f) columns correspond to $\alpha$ and $\theta - \xi$, respectively, as functions of $\Omega$, while the rows correspond to distinct tilting angles: $\theta = \pi/8$ in the first row (a, b), $\theta = \pi/4$ in the second row (c, d), and $\theta = 3\pi/8$ in the third row (e, f). Just as in the simulations described in Sec.~\ref{sec:level5.1}, the condensate is seen to be unstable at higher rotation frequencies against collective modes seeded by the random perturbation at $t = 0$. This agrees with the behavior seen in a comparison of the semi-analytically and numerically obtained values of $\delta$, which we have included in Appendix~\ref{sec:level9}. A comparison may also be made with the prediction of dynamical instability in Fig.~\ref{epsilon0point05gamma4over3upperheaviside}, where we have indicated the rotation frequencies at which the GPE states diverge considerably from the TF states via red circular markers. When $\theta \in \lbrace\pi/8, \pi/4\rbrace$, these rotation frequencies are greater than the respective threshold frequencies above which the stationary states are always dynamically unstable. However, for $\theta = 3\pi/8$, the rotation frequency where the GPE solution diverges markedly from the TF prediction occurs at $\Omega \approx 0.69\omega_{\perp}$, which is considerably lower than the prediction of Fig.~\ref{epsilon0point05gamma4over3upperheaviside} that the stationary state is dynamically unstable when $\Omega \gtrsim 0.80\omega_{\perp}$.
This may be attributed to the fact that the trajectory of the quasi-adiabatic rampup crosses a fringe of dynamical instability when $\Omega \approx 0.63\omega_{\perp}$. We note that a similar fringe is crossed when $\theta = \pi/4$ and $\Omega \approx 0.65\omega_{\perp}$, but this fringe is narrower than the one that destabilizes the $\theta = 3\pi/8$ stationary state. While the quasi-adiabatic trajectory for $\theta = \pi/8$ crosses several narrow fringes when $\Omega\in(0.75\omega_{\perp},0.78\omega_{\perp})$, their effect is relatively minimal as they are closely followed by the threshold for dynamical instability at $\Omega \approx 0.8\omega_{\perp}$.
\begin{figure}
\includegraphics[width=\linewidth]{case_2_alphatmx.pdf}
\vspace*{-5mm}
\caption{Comparison of the TF stationary solutions along Branch I to GPE simulations, for $\varepsilon = 0.05$ and $\gamma = 4/3$, during a quasi-adiabatic rampup of $\Omega$. The first column (a, c, e) plots $\alpha$ as a function of $\Omega$ and the second column (b, d, f) plots $\theta - \xi$ as a function of $\Omega$, where $\theta$ is equal to $\pi/8$ (a, b), $\pi/4$ (c, d), or $3\pi/8$ (e, f).}
\label{case_2_alphatmx}
\end{figure}
We can also visualize the GPE solutions by plotting the cross-sections of the density in the $x$-$y$ and $x$-$z$ planes for $\theta = \pi/4$ in Fig.~\ref{case2_45}, with the corresponding plots for $\theta \in \lbrace\pi/8, 3\pi/8\rbrace$ included for reference in Appendix~\ref{sec:level9}. Just as in Sec.~\ref{sec:level5.1}, the density cross-sections are smooth and ellipsoidal at low rotation frequencies during a quasi-adiabatic rampup of $\Omega$. This is evident in the first row of Fig.~\ref{case2_45}, where $\Omega = 0.4\omega_{\perp}$, which also shows that the condensate density's principal axes are slightly tilted away from those of the trap. Similarly, we again observe that the onset of the dynamical instability is marked by the presence of a high-density core with surface rippling, surrounded by a low-density halo-like cloud, in the second row of Fig.~\ref{case2_45} where $\Omega = 0.85\omega_{\perp}$. Upon halting the acceleration of the rotation frequency when $\Omega = 0.85\omega_{\perp}$ and then evolving the GPE at constant rotation frequency for the duration $500\omega_{\perp}^{-1}$, the condensate is subject to the nucleation of a large number of vortices, as seen in the third row of Fig.~\ref{case2_45}. More vortices are found in Fig.~\ref{case2_45} than in Fig.~\ref{case1_45}, which is likely due to the higher rotation frequency at which the quasi-adiabatic rampup was halted. In both cases, however, we find that the vortex lines coincident upon the $x$-$z$ plane are almost completely aligned along the $z$-axis and that the background condensate density profile is tilted with respect to both the rotating trap and the rotation axis.
\begin{figure}
\includegraphics[width=\linewidth]{case2_45.pdf}
\vspace*{-5mm}
\caption{Cross-sections of the condensate density in the co-rotating $x$-$y$ (first column) and $x$-$z$ (second column) planes for $\varepsilon = 0.05$, $\gamma = 4/3$, and $\theta = \pi/4$, during a quasi-adiabatic rampup of $\Omega$ at $\Omega = 0.4\omega_{\perp}$ (first row) and $\Omega = 0.85\omega_{\perp}$ (second row), and after $500\omega_{\perp}^{-1}$ at constant $\Omega = 0.85\omega_{\perp}$ (third row). The white lines represent the co-rotating $X$-$Z$ axes.}
\label{case2_45}
\end{figure}
\section{\label{sec:level6}Conclusion}
In this work, we have extended the Thomas-Fermi theory for slowly rotating Bose-Einstein condensates in anisotropic harmonic traps to account for rotations of the trap about an axis that is not one of its three principal axes. In traps subject to tilted rotation, the stationary state density profile's principal axes are generally tilted with respect to those of both the confinement and the rotation. The quadrupolar irrotational velocity profile describing the vorticity-free flow of the condensate is also modified as a consequence of the tilting of the rotating harmonic trap. Our analysis of the resulting stationary solutions demonstrates the existence of previously unknown tilted solution branches (Branches III and IV) that exist when $\Omega > \omega_{\perp}$. Although we have only conducted a systematic study of the dynamical stability of one of the five stationary solution branches, Branch I, it is nonetheless interesting to consider whether Branch III, in particular, becomes dynamically unstable immediately upon reaching $\Omega = \omega_{\perp}$ or if its stability persists for a larger window. When $\theta = 0$ and $\varepsilon \neq 0$, a method that has been proposed for accessing the branch defined for $\Omega \in [\omega_{\text{b1}},\infty)$ is to start from the $\alpha = 0$ stationary solution when $\varepsilon = 0$ and then quasi-adiabatically tune $\varepsilon$ to the desired final value whilst keeping $\Omega$ fixed~\cite{prl_86_3_377-380_2001}. In principle, a similar method could be utilized to explore the stationary solution along Branch III for $\Omega\in\left[\max\lbrace\Omega_{xc},\Omega_{yc}\rbrace, \Omega_{\text{b}2}\right]$ and $\theta \neq 0$, in an anisotropic harmonic trap, by starting from an isotropic trap rotating at a fixed frequency and adiabatically tuning its anisotropy as desired.
Our work also suggests that vortices are nucleated in response to a tilted rotating trap and are aligned along the rotation axis, and not along one of the tilted principal axes of the trap. Although we expect the condensate's final state in the dynamically unstable domain to be a triangular vortex lattice, further work in this direction is needed to resolve this, as well as the tilting angle of the background condensate density and the response of the vortices to perturbations~\cite{pra_62_6_063617_2000, prl_86_21_4725-7428_2001, prl_90_10_100403_2003, prl_91_9_090403_2003, prl_91_10_100402_2003, prl_91_11_110402_2003, prl_93_8_080406_2004, prl_101_2_020402_2008, prl_113_16_165303_2014}. The formalism outlined here for finding rotating frame stationary solutions with a tilting of the trap's symmetry axes can be extended to more exotic condensates than the scalar one we have considered. Notably, in the field of dipolar quantum gases, we expect that dipolar Bose-Einstein condensates in the TF limit can be described in a similar manner, based on previous work on rotating either the trapping or the dipole polarization about a principal axis of the trapping~\cite{prl_98_15_150401_2007, pra_80_3_033617_2009, jphyscondesmatter_29_10_103004_2017, prl_122_5_050401_2019, pra_100_2_023625_2019}. Similarly, we would expect that spin-orbit-coupled BECs subject to an artificial gauge field that induces a synthetic rotation about a non-principal axis would be described analogously, in the TF limit, to the formalism we have introduced here~\cite{pra_84_2_021604r_2011, prl_120_18_183202_2018}.
\begin{acknowledgments}
S. B. P. is supported by an Australian Government Research Training Program Scholarship and by the University of Melbourne. The numerical simulations were conducted on the Spartan HPC cluster~\cite{meade2017spartan}, and we thank Research Computing Services at the University of Melbourne for access to this resource. We also thank Nick Parker and Thomas Bland for several stimulating discussions that motivated this work.
\end{acknowledgments}
\section{LL BFKL}
In the limit of center-of-mass energy much greater than
the momentum transfer, $\hat s\gg|\hat t|$, any scattering process is dominated
by gluon exchange in the cross channel. Building upon this fact,
the BFKL theory~\cite{bal} models
strong-interaction processes with two large and disparate scales,
by resumming the radiative corrections to parton-parton
scattering. This is achieved
to leading logarithmic (LL) accuracy, in
$\ln(\hat s/|\hat t|)$, through the BFKL equation, i.e. a two-dimensional
integral equation which describes the evolution in transverse momentum
space and moment space of the gluon propagator exchanged in the cross
channel,
\begin{eqnarray}
&&\hspace{-7mm} \omega\, f_{\omega}(k_a,k_b)\, = \label{bfklb}\\
&&\hspace{-7mm} {1\over 2}\,\delta^2(k_a-k_b)\, +\, {\a\over \pi}
\int {d^2k_{\perp}\over k_{\perp}^2}\, K(k_a,k_b,k)\, ,\nonumber
\end{eqnarray}
with $\a= N_c\alpha_s/\pi$, $N_c=3$ the number of colors, $k_a$ and $k_b$
the transverse
momenta of the gluons at the ends of the propagator, and with kernel $K$,
\begin{eqnarray}
&&\hspace{-7mm} K(k_a,k_b,k) = \label{kern}\\
&&\hspace{-7mm} f_{\omega}(k_a+k,k_b) - {k_{a\perp}^2\over k_{\perp}^2 +
(k_a+k)_{\perp}^2}\, f_{\omega}(k_a,k_b)\, ,\nonumber
\end{eqnarray}
where the first term accounts for the emission of a gluon of momentum
$k$ and the second for the virtual radiative corrections, which {\sl reggeize}
the gluon. Eq.~(\ref{bfklb}) has been derived in the
{\sl multi-Regge kinematics}, which presumes that the produced gluons are
strongly ordered in rapidity and have comparable transverse momenta.
The solution of eq.~(\ref{bfklb}), transformed from moment space to
$y$ space, and averaged over the azimuthal angle
between $k_a$ and $k_b$, is
\begin{eqnarray}
&&\hspace{-7mm} f(k_a,k_b,y)\, = \int {d\omega\over 2\pi i}\, e^{\omega y}\,
f_{\omega}(k_a,k_b)\label{solc}\\ &&= {1\over k_{a\perp}^2 }
\int_{{1\over 2}-i\infty}^{{1\over 2}+i\infty} {d\gamma\over 2\pi i}\,
e^{\omega(\gamma)y}\, \left(k_{a\perp}^2\over k_{b\perp}^2
\right)^{\gamma}\, ,\nonumber
\end{eqnarray}
with $\omega(\gamma)=\a\chi(\gamma)$ the leading eigenvalue of
the BFKL equation, with the characteristic function
\begin{equation}
\chi(\gamma) = 2\psi(1) - \psi(\gamma) - \psi(1-\gamma)\, .\label{llchi}
\end{equation}
In eq.~(\ref{solc}) the evolution parameter $y$ of the propagator is
$y=\ln(\hat s/\tau^2)$. The precise definition of the {\sl reggeization}
scale $\tau$
is immaterial to LL accuracy; the only requirement is that it is much
smaller than any of the $s$-type invariants, in order to guarantee that
$y\gg 1$.
The maximum of the leading eigenvalue, $\omega(1/2)=4\ln{2}\a$,
yields the known power-like growth of $f$ in energy~\cite{bal}.
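The value $\chi(1/2)=4\ln 2$ controlling this power growth, together with the curvature $\chi''(1/2)=28\zeta(3)$ that enters the usual saddle-point evaluation of eq.~(\ref{solc}), is easy to verify numerically. The following standalone sketch (our own illustration; the digamma function is hand-rolled to avoid external dependencies) evaluates $\chi(\gamma)$ from eq.~(\ref{llchi}):

```python
import math

def digamma(x):
    """psi(x) via the recurrence psi(x+1) = psi(x) + 1/x and an asymptotic
    series applied once x >= 10; accurate to roughly 1e-10 for x > 0."""
    acc = 0.0
    while x < 10.0:
        acc -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - inv2 * (1.0/12 - inv2 * (1.0/120 - inv2 / 252))

def chi(gamma):
    # LL BFKL characteristic function, eq. (llchi)
    return 2.0 * digamma(1.0) - digamma(gamma) - digamma(1.0 - gamma)

print(chi(0.5), 4.0 * math.log(2.0))   # both ~ 2.7726
print(chi(0.25) - chi(0.75))           # ~ 0: chi is symmetric about gamma = 1/2
```

The symmetry $\chi(\gamma)=\chi(1-\gamma)$ and the value at $\gamma=1/2$ follow directly from $\psi(1/2)=-\gamma_E-2\ln 2$.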
What does the BFKL theory have to do with reality? There is no evidence,
as yet, of the necessity of a BFKL resummation either in the scaling
violations to the evolution of the
$F_2$ structure function in DIS (for a summary of the theoretical status, see
ref.~\cite{cat}), or in dijet production at large rapidity intervals \cite{mn}.
The most
promising BFKL footprint, as of now, seems to be forward jet production in DIS,
where the data~\cite{hera} show a better agreement with the BFKL resummation
\cite{hot} than with a NLO calculation \cite{mz}
(for a summary of dijet and forward jet production, see ref.~\cite{vdd}).
In a phenomenological analysis,
the BFKL resummation is plagued by several deficiencies;
the most relevant is that energy and longitudinal momentum
are not conserved, and since the momentum fractions of the incoming partons
are reconstructed from the kinematic variables of the outgoing partons,
the BFKL prediction for a production rate may be affected by large numerical
errors~\cite{ds}. However, energy-momentum conservation at each gluon
emission in the BFKL ladder can be achieved through a Monte Carlo
implementation~\cite{bfklmc} of the BFKL equation~(\ref{bfklb}).
Besides, because of the strong rapidity ordering between the gluons
emitted along the ladder, any two-parton invariant mass is large. Thus there
are no collinear divergences, no QCD coherence and no soft emissions
in the BFKL ladder.
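The kinematics underlying such a Monte Carlo treatment can be illustrated with a toy model (our own sketch, not the implementation of ref.~\cite{bfklmc}): at fixed coupling, emissions are generated ordered in rapidity with a density $\a\ln(k_{\perp\rm max}^2/k_{\perp\rm min}^2)$ per unit rapidity, and transverse momenta are drawn from $dk_\perp^2/k_\perp^2$ above a resolution scale, so that in this toy the mean multiplicity grows linearly with the rapidity span:

```python
import math
import random

def toy_ladder(y_span, abar, kt_min, kt_max, rng):
    """Toy multi-Regge ladder (illustration only): emissions are uniform
    in rapidity with density abar * ln(kt_max^2/kt_min^2), and k_T is
    sampled from dk_T^2/k_T^2 between the two cutoffs."""
    density = abar * math.log(kt_max**2 / kt_min**2)
    emissions, yy = [], 0.0
    while True:
        yy += rng.expovariate(density)   # rapidity gap to the next emission
        if yy > y_span:
            break
        # dk_T^2/k_T^2 is flat in ln k_T^2
        kt2 = kt_min**2 * (kt_max**2 / kt_min**2) ** rng.random()
        emissions.append((yy, math.sqrt(kt2)))
    return emissions

rng = random.Random(1)
mean_n = sum(len(toy_ladder(4.0, 0.15, 10.0, 50.0, rng)) for _ in range(2000)) / 2000
print(mean_n)   # ~ abar * y_span * ln(kt_max^2/kt_min^2) ~ 1.9
```

Energy-momentum conservation, as imposed in the actual Monte Carlo of ref.~\cite{bfklmc}, would correlate the emissions and is deliberately omitted here.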
Accordingly jets are determined only to leading order and have no
non-trivial structure. Other resummations in the high-energy limit,
like the CCFM equation \cite{ccfm} which has QCD coherence and soft emissions,
seem thus better suited to describe
more exclusive quantities, like multi-jet rates \cite{marche}.
However, it has been shown that, provided the jets are resolved, i.e.
their transverse energy is larger than a given resolution scale, the
BFKL and the CCFM multi-jet rates coincide to LL accuracy \cite{salam}.
\section{NLL BFKL and NNLO}
In addition to the problems mentioned above, the BFKL equation
is determined at a fixed value of $\alpha_s$ (as a consequence, the
solution (\ref{solc}) is scale invariant). All these problems can
be partly alleviated by computing the next-to-leading logarithmic (NLL)
corrections to the BFKL equation~(\ref{bfklb}). In order to do that,
the real~\cite{real} and the one-loop~\cite{1loop,1loopeps,1loopds}
corrections to the gluon emission in the kernel~(\ref{kern}) had to be
computed, while the reggeization term in eq.~(\ref{kern}) needed
to be determined to NLL accuracy~\cite{2loop}.
From the standpoint of a fixed-order
calculation, the NLL corrections \cite{real,1loop,1loopeps,1loopds,2loop}
present features of
both NLO and NNLO calculations. Namely, they contain
only the one-loop running of the coupling; on the other hand, in order to
extract the NLL reggeization term, an approximate evaluation of
two-loop parton-parton scattering amplitudes had to be performed \cite{2loop}.
In addition, the one-loop corrections to the gluon emission in the
kernel~(\ref{kern}) had to be evaluated to higher order in the dimensional
regularization parameter $\epsilon$, in order to generate correctly all
the singular and finite contributions to the interference term between the
one-loop amplitude and its tree-level counterpart~\cite{1loopeps,1loopds}.
This turns out to be a general feature in the construction of the
infrared and collinear phase space of an exact NNLO calculation~\cite{bds},
and can be tackled in a partly model-independent way by using one-loop
eikonal and splitting functions evaluated to higher order in $\epsilon$
\cite{bdks}.
Building upon the NLL corrections \cite{real,1loop,1loopeps,1loopds,2loop}
to the terms in
the kernel~(\ref{kern}), the BFKL equation was evaluated to NLL
accuracy~\cite{bfklnl,cc}. Applying the NLL kernel to the LL eigenfunctions,
$(k_\perp^2)^\gamma$, the solution has still the form of eq.~(\ref{solc}),
with leading eigenvalue,
\begin{eqnarray}
\omega(\gamma) &=& \a(\mu) \bigl[1-b_0\a(\mu)
\ln(k_{a\perp}^2/\mu^2) \bigr] \chi_0(\gamma) \nonumber\\
&+& \a^2(\mu) \chi_1(\gamma) \, \label{nllsol}
\end{eqnarray}
where $b_0=11/12 - n_f/(6N_c)$ is proportional to the one-loop coefficient
of the $\beta$ function, with $n_f$ active flavors, and $\mu$ is
the $\overline{MS}$ renormalization scale. $\chi_0(\gamma)$ is given in
eq.~(\ref{llchi}), and $\chi_1(\gamma)$ in ref.~\cite{bfklnl}. In
eq.~(\ref{nllsol}),
the running coupling term, which breaks the scale invariance, has
been singled out.
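As a small consistency check on the running-coupling term, the coefficient $b_0$ above matches the standard one-loop normalization, $\a\,b_0 = \alpha_s\beta_0/(4\pi)$ with $\beta_0=(11N_c-2n_f)/3$, since $\a=N_c\alpha_s/\pi$. An exact-arithmetic verification:

```python
from fractions import Fraction

def b0(nf, Nc=3):
    # b_0 as it appears in eq. (nllsol)
    return Fraction(11, 12) - Fraction(nf, 6 * Nc)

def beta0(nf, Nc=3):
    # standard one-loop beta-function coefficient: beta_0 = (11*Nc - 2*nf)/3
    return Fraction(11 * Nc - 2 * nf, 3)

# abar * b0 = (Nc/pi) * b0 must equal beta0/(4*pi), i.e. Nc * b0 = beta0/4
for nf in range(7):
    assert 3 * b0(nf) == beta0(nf) / 4

print(b0(4), b0(5))   # 25/36 23/36
```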
Both the running-coupling and the scale-invariant
terms in eq.~(\ref{nllsol}) present problems that could undermine the
whole resummation program (for a summary of its status see ref.~\cite{carl}).
Firstly, the NLL corrections at $\gamma=1/2$ are negative and
large~\cite{bfklnl} (however, eq.~(\ref{nllsol}) no longer has a maximum
at $\gamma=1/2$~\cite{ross}). Secondly, double transverse
logarithms of the type $\ln^2(k_{a\perp}^2/k_{b\perp}^2)$, which are not
included in the NLL resummation, can give a large contribution and need
to be resummed~\cite{doublelog}. Double transverse logarithms appear
because the NLL resummation is sensitive to the choice of reggeization
scale $\tau$; e.g. the choices $\tau^2=k_{a\perp}k_{b\perp}$, $k_{a\perp}^2$
or $k_{b\perp}^2$, which are all equivalent at LL, introduce double transverse
logarithms one with respect to the others at NLL. An alternative, but related,
approach is to introduce a cut-off $\Delta$ as the lower limit of
integration over the rapidity of the gluons emitted along the
ladder~\cite{lip,delta}. This has the advantage of being similar in spirit
to the dependence of a fixed-order calculation on the factorization scale,
namely in a NLL resummation the dependence on the rapidity scale $\Delta$
is moved on to the NNLL terms~\cite{delta}, just like in a NLO exact
calculation the
dependence on the factorization scale is moved on to the NNLO terms.
Finally, we remark that so far the activity has mostly been concentrated
on the NLL corrections to
the Green's function for a gluon exchanged in the cross channel.
However, in a scattering amplitude this is convoluted with process-dependent
impact factors, which must be determined to the required accuracy.
In a NLL production rate, the impact factors must be computed at NLO.
For dijet production at large rapidity intervals, they are given in
ref.~\cite{dsif}.
\section{Introduction}
Nowadays it is generally accepted that a huge number of real processes arising in physics, biology, chemistry, etc.\ can be described by nonlinear PDEs. The most powerful methods for the construction of exact solutions of a wide range of nonlinear PDEs are symmetry-based methods. These methods originated from the infinitesimal method of Sophus Lie~\cite{Lie}, which rests on the connection between continuous transformation groups and the algebras of their infinitesimal generators, and they lead to systematic techniques for finding group-invariant solutions and conservation laws of differential equations~\cite{O1,Ib3,Ov1}.
In fact, the method we employ here, the method of preliminary group classification, is an outgrowth of the Lie infinitesimal method and belongs to the theory of group classification of differential equations. This method was proposed in~\cite{Akhatov} and was further developed for differential equations in~\cite{Ib2,Bihlo}.
The main idea of preliminary group classification is based on the extension of the kernel of admitted Lie groups by transformations from the corresponding equivalence Lie group. The problem of finding inequivalent cases of such symmetry extensions reduces to the classification of inequivalent subgroups of the equivalence Lie group (in particular, if the Lie group is finite-parameter, one can use an optimal system of its subgroups). We use equivalence transformations and the theory of classification of finite-dimensional Lie algebras. In this paper we study the point symmetry and equivalence classification of the $\mathrm{HESI}$ equation, leading to a number of new interesting nonlinear invariant models associated with non-trivial invariance algebras.
A complete list of these models is given for a finite-dimensional equivalence algebra derived for the $\mathrm{HESI}$ equation. To achieve these goals we apply the algorithms explained in~\cite{Ov1,Bila,Ib4,Cherniha}, and we follow similar works in~\cite{Karn,Ib2,mah1,Dr1,Dr2,Ib1,Song}.
\textbf{For the local solution:} The existence of $\textit{C}^{\infty}$ local solutions of the $\mathrm{HESI}$ equation in $\mathbb{R}^3$ is studied in~\cite{Tian}, and the solution has the following form:
\begin{equation*}
u(x,y,z)=\frac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)+\varepsilon^5\omega(\varepsilon^{-2}(x,y,z)),
\end{equation*}
where $\varepsilon$ and $\tau_i$ are arbitrary constants, and $\omega$ is a given smooth function. At the end of this work, the symmetry group of the equation transforms this solution into other solutions of the $\mathrm{HESI}$ equation.
\textbf{About the $\mathrm{HESI}$ equation:} Based on references~\cite{fros,Tian}, $k$-Hessian equations are a family of PDEs in $n$-dimensional space that can be written as ${\cal S}_k[u]=f$, where $1\leqslant k \leqslant n$, ${\cal S}_k[u]=\sigma_k(\lambda({\cal D}^2u))$, and $\lambda({\cal D}^2u)=(\lambda_1,\cdots,\lambda_n)$ are the eigenvalues of the \textit{Hessian matrix} ${\cal D}^2u$ $\big((\partial_i \partial_ju)_{1\leqslant i,j \leqslant n}\big)$, while $\sigma_k(\lambda)=\sum_{i_1<\cdots<i_k}\lambda_{i_1}\cdots\lambda_{i_k}$ is the $k$-th elementary symmetric polynomial.
The $k$-Hessian equations include the Laplace equation, when $k=1$, and the Monge-Ampere equation, when $k=n$.
Here we study the $2$-Hessian equation in three dimensions, where $f$ is an arbitrary function of $x,y,z$ (the $\mathrm{HESI}$ equation):
\begin{equation}
{\cal S}_2[u] := u_{xx}u_{yy}+u_{xx}u_{zz}+u_{yy}u_{zz}-u_{xy}^2-u_{yz}^2-u_{xz}^2,
\end{equation}
this equation is a fully nonlinear elliptic partial differential equation, which is related to the intrinsic curvature of three-dimensional manifolds.
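The identity ${\cal S}_2[u]=\sigma_2(\lambda({\cal D}^2u))$, i.e.\ that the sum of the three $2\times2$ principal minors of the Hessian equals the second elementary symmetric polynomial of its eigenvalues, can be checked symbolically. The sketch below (assuming SymPy is available; the test function is an arbitrary choice of ours) uses the matrix identity $\sigma_2=((\operatorname{tr}H)^2-\operatorname{tr}H^2)/2$ and also evaluates ${\cal S}_2$ on the quadratic part of the local solution quoted above:

```python
import sympy as sp

x, y, z, t1, t2, t3 = sp.symbols('x y z tau1 tau2 tau3')

def S2(w):
    # S_2[u]: sum of the three 2x2 principal minors of the Hessian of w
    H = sp.hessian(w, (x, y, z))
    return (H[0, 0]*H[1, 1] + H[0, 0]*H[2, 2] + H[1, 1]*H[2, 2]
            - H[0, 1]**2 - H[1, 2]**2 - H[0, 2]**2)

# sigma_2 of the eigenvalues of a symmetric 3x3 matrix is ((tr H)^2 - tr(H^2))/2
u = x**3*y + sp.sin(z)*x*y + y**2*z**2    # arbitrary smooth test function
H = sp.hessian(u, (x, y, z))
sigma2 = ((H.trace())**2 - (H*H).trace()) / 2
print(sp.expand(S2(u) - sigma2))          # 0

# the quadratic part of the local solution gives a constant S_2
quad = (t1*x**2 + t2*y**2 + t3*z**2) / 2
print(sp.expand(S2(quad)))                # tau1*tau2 + tau1*tau3 + tau2*tau3
```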
In fact, the $2$-Hessian equation is unfamiliar outside Riemannian geometry and elliptic regularity theory; it is closely related to the scalar curvature operator, which provides an intrinsic curvature for a three-dimensional manifold. Geometric PDEs have been used widely in image analysis~\cite{sapiro}. In particular, the Monge-Ampere equation in the context of optimal transportation has been used in three-dimensional volume-based image registration~\cite{haker}.
The $2$-Hessian operator also appears in conformal mapping problems. Conformal surface mapping has been used for two-dimensional image registration~\cite{angenent,Gu}, but does not generalize directly to three dimensions. Quasi-conformal maps have been used in three dimensions~\cite{wang,zeng}. However, these methods are still being developed.
\section{Principal Lie Algebra}
The symmetry approach to the classification of admissible partial differential equations depends heavily on a useful way of describing transformation groups that keep invariant the form of a given partial differential equation. This is done via the well-known infinitesimal method developed by Sophus Lie~\cite{O1,O2,Ov1}. Given a partial differential equation, the problem of finding its maximal (in some sense) Lie invariance group reduces to solving the \textit{determining equations}, an over-determined system of linear partial differential equations.
We consider the $2$-Hessian equation in the form:
\begin{equation}
\mathrm{HESI}:\; {\cal S}_2[u]=f(x,y,z),\label{Hesi}
\end{equation}
where $u=u(x,y,z)$ is the dependent variable, $x,y,z$ are the independent variables, and $f$ is an arbitrary function. Consider the total space $E=X\times U$ with local coordinates $(x,y,z,u)$, where $x,y,z\in X$ and $u\in U$. The solution space of equation (\ref{Hesi}) is a subvariety $S_{\Delta}\subset J^2(\mathbb{R}^3,\mathbb{R})$ of the second-order jet bundle of $3$-dimensional submanifolds of $E$. The $1$-parameter Lie group of infinitesimal transformations on $E$ is as follows:
\begin{equation}
\begin{aligned}
\tilde{x}&=x+t\xi(x,y,z,u)+O(t^2), & \tilde{y}&=y+t\zeta(x,y,z,u)+O(t^2), \\
\tilde{z}&=z+t\eta(x,y,z,u)+O(t^2), & \tilde{u}&=u+t\phi(x,y,z,u)+O(t^2),
\end{aligned}
\end{equation}
where $t$ is the group parameter and $\xi,\zeta,\eta$ and $\phi$ are the infinitesimals of the transformations for the independent and dependent variables, respectively. The corresponding infinitesimal generators thus have the following general form:
\begin{equation}
V=\xi(x,y,z,u)\partial_x +\zeta\partial_y +\eta\partial_z +\phi\partial_u.
\end{equation}
So, based on Theorem 2.31 of~\cite{O2}, $V$ generates a point symmetry if ${\bf pr}^{(2)}V[\mathrm{HESI}]=0$, where ${\bf pr}^{(2)}V$ is the second-order prolongation of the vector field $V$; that is:
\begin{equation}
{\bf pr}^{(2)}V = V+{\phi}^x(x,y,z,u^{(2)})\,{\partial}_{u_x}+
\cdots+{\phi}^{xz}(x,y,z,u^{(2)})\,{\partial}_{u_{xz}},
\end{equation}
in which $u^{(2)}=(u,u_x,u_y,u_z,u_{xx},u_{xy},u_{xz},u_{yy},u_{yz},u_{zz})$ and
\begin{align}
{\phi}^J(x,y,z,u^{(2)})=\textbf{D}_J\bigg(\phi-\sum_{i=1}^{3}{\xi}^i\frac{\partial u}{\partial x_i}\bigg)+\sum_{i=1}^{3}{\xi}^i\frac{\partial {u_J}}{\partial x_i}
\end{align}
where $({\xi}^1,{\xi}^2,{\xi}^3)=(\xi,\zeta,\eta)$ and $(x_1,x_2,x_3)=(x,y,z)$; further, $J=(j_1,\cdots,j_k)$ is a $k$-th order multi-index, where each $j_i$ is $x$, $y$ or $z$ for $1\leqslant i \leqslant k$. Then $\textbf{D}_J$ denotes the total derivative for the multi-index $J$, given by
$\textbf{D}_J=\textbf{D}_{j_1}\cdots\textbf{D}_{j_k}$, where
\begin{equation*}
\textbf{D}_i={\partial}_{x_i}+\sum_{J}\frac{\partial {u_J}}{\partial x_i}\,{\partial}_{u_J}, \qquad (x_1,x_2,x_3)=(x,y,z).
\end{equation*}
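As a worked illustration of these prolongation formulas (our own example, assuming SymPy), the snippet below computes the first-order coefficient ${\phi}^x$ for the rotation generator $y\partial_x - x\partial_y$ via the characteristic $Q=\phi-\sum_i \xi^i u_{x_i}$, recovering the expected result ${\phi}^x=u_y$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.Function('u')(x, y, z)

# infinitesimals of the rotation generator Y = y d/dx - x d/dy (phi = 0)
xi, zeta, eta, phi = y, -x, sp.Integer(0), sp.Integer(0)

# characteristic Q = phi - xi*u_x - zeta*u_y - eta*u_z
Q = phi - xi*u.diff(x) - zeta*u.diff(y) - eta*u.diff(z)

# first-order coefficient: phi^x = D_x Q + xi*u_xx + zeta*u_xy + eta*u_xz
phi_x = sp.expand(Q.diff(x) + xi*u.diff(x, 2) + zeta*u.diff(x, y) + eta*u.diff(x, z))
print(phi_x)   # Derivative(u(x, y, z), y), i.e. phi^x = u_y
```

Here `Q.diff(x)` realizes the total derivative $\textbf{D}_x$ automatically, since $u$ is declared as a function of $x,y,z$.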
So ${\bf pr}^{(2)}V$ acts on Eq.~\eqref{Hesi}, and after replacing $u_{yy}$ with its equivalent expression from the $\mathrm{HESI}$ equation, we have the following system of determining equations:
\begin{eqnarray}
\begin{aligned}
&{\phi}_{xx}= {\phi}_{xy}= {\phi}_{xz}={\phi}_{xu}= {\phi}_{yy}= {\phi}_{yz}={\phi}_{yu}= {\phi}_{zz}= {\phi}_{zu}=0,\\
&{\phi}_{uu}= {\xi}_{u}= {\xi}_{yy}={\xi}_{yz}= {\xi}_{zz}= {\zeta}_{u}={\zeta}_{zz}= {\eta}_{u}= {\eta}_{zz}=0,\\
&{\xi}_{x}={\eta}_{z}, \;\; {\zeta}_{x}=-{\xi}_{y}, \;\; {\zeta}_{y}={\eta}_{z},\;\;{\eta}_{x}=-{\xi}_{z}, \;\; {\eta}_{y}=-{\zeta}_{z}, \\
&f_x{\xi}+f_y{\zeta}+f_z{\eta}+2f(2{\eta}_z-{\phi}_u)=0.
\end{aligned}
\end{eqnarray}
where $f$ is an arbitrary function. Solving the above relations, we have:
\begin{eqnarray} \label{con}
\begin{aligned}
&\xi=c_6x+c_7y+c_8z+c_9, \qquad \zeta=c_{10}z+c_6y-c_7x+c_{11},\\
&\eta=-c_{10}y+c_6z-c_8x+c_{12}, \qquad \phi=c_1x+c_2u+c_3y+c_4z+c_5, \\
&f_x{\xi}+f_y{\zeta}+f_z{\eta}+2f(2{\eta}_z-{\phi}_u)=0.
\end{aligned}
\end{eqnarray}
where $c_i$, $i=1,\cdots,12$, are arbitrary constants.
So if $f(x,y,z)=0$, the last equation of relations \eqref{con} is removed and, based on the first four equations in relations \eqref{con}, we have a $12$-dimensional symmetry group; but if $f(x,y,z)\neq0$, we substitute the first four equations of \eqref{con} into the last one and obtain the following \textit{condition}:
\begin{eqnarray} \label{con2}
\begin{aligned}
(c_6x+c_7y&+c_8z+c_9)f_x+( c_{10}z+c_6y-c_7x+c_{11})f_y \\
&+(-c_{10}y+c_6z-c_8x+c_{12})f_z+(-2c_2+4c_6)f=0.
\end{aligned}
\end{eqnarray}
The coefficients $c_1,c_3,c_4,c_5$ do not appear in condition \eqref{con2}, so they remain free; this means that equation \eqref{Hesi} admits at least a $4$-dimensional symmetry group. We thus conclude the following theorem from the above relations:
\begin{theorem}:
The $\mathrm{HESI}$ equation (Eq.~\eqref{Hesi}) admits a symmetry group of dimension 4 to 12, depending on the choice of the given function $f(x,y,z)$. These equations have the following common vectors as infinitesimal generators:
\begin{align}\label{principle}
V_1&=\partial_u , & V_2&=x\partial_u , & V_3&=y\partial_u , & V_4&=z\partial_u.
\end{align}
\end{theorem}
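The generators \eqref{principle} simply add an affine function to $u$, which leaves the Hessian, and hence ${\cal S}_2[u]$, unchanged, so they are symmetries for any $f$. A quick symbolic check (assuming SymPy; the test function is an arbitrary choice of ours):

```python
import sympy as sp

x, y, z, c0, c1, c2, c3 = sp.symbols('x y z c0 c1 c2 c3')

def S2(w):
    # S_2[u]: sum of the three 2x2 principal minors of the Hessian of w
    H = sp.hessian(w, (x, y, z))
    return (H[0, 0]*H[1, 1] + H[0, 0]*H[2, 2] + H[1, 1]*H[2, 2]
            - H[0, 1]**2 - H[1, 2]**2 - H[0, 2]**2)

u = x**2*y*z + sp.exp(x)*y**2 + z**3      # arbitrary test function
shift = c0 + c1*x + c2*y + c3*z           # finite flow generated by V_1,...,V_4
print(sp.expand(S2(u + shift) - S2(u)))   # 0: S_2 is unchanged for any f
```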
Then the Lie algebra $\mathfrak{g}$ generated by the vectors \eqref{principle} is called the principal Lie algebra of Eq.~\eqref{Hesi}. Now we want to specify the coefficient $f$ such that Eq.~\eqref{Hesi} admits an extension of the principal algebra $\mathfrak{g}$. Therefore, we do not solve the determining equations; instead we obtain a partial group classification of Eq.~\eqref{Hesi} via the so-called method of \textit{preliminary group classification}.
This method was suggested in~\cite{Akhatov} and is applied when an equivalence group is generated by a finite-dimensional Lie algebra $\mathfrak{g}_{\mathscr{E}}$. The essential part of the method is the classification of all nonsimilar subalgebras of $\mathfrak{g}_{\mathscr{E}}$; the classification is thus based on the finite-dimensional equivalence algebra $\mathfrak{g}_{\mathscr{E}}$.
\section{Equivalence Transformations}
With a nondegenerate change of the variables $x,y,z$, an equation of the form of the $\mathrm{HESI}$ equation converts to an equation of the same form, but with a different $f(x,y,z)$. The set of all equivalence transformations forms an equivalence group $E$. We shall find a subgroup $E_c$ of it via the infinitesimal method.
We suppose that an operator of the group $E_c$ has the form:
\begin{equation}
Y=\xi(x,y,z,u)\partial_x +\zeta\partial_y +\eta\partial_z +\phi\partial_u +\psi(x,y,z,u,f)\partial_f.\label{Y}
\end{equation}
The operators of $E_c$ are found from the invariance conditions of Eq.~\eqref{Hesi}, written as the following system:
\begin{align}
{\cal S}_2[u]=f(x,y,z),\qquad f_u=0.
\end{align}
Note that $f$ and $u$ are considered as differential variables: $u$ on the space $(x,y,z)$ and $f$ on the space $(x,y,z,u)$. The coordinates $\xi,\zeta,\eta,\phi$ of operator \eqref{Y} are functions of $x,y,z,u$, while the coordinate $\psi$ is a function of $x,y,z,u,f$. As usual, we solve the following system obtained from the invariance conditions:
\begin{align}
{\bf pr}^{(2)}Y\big({\cal S}_2[u]-f(x,y,z)\big)=0,\qquad {\bf pr}^{(2)}Y(f_u)=0.
\end{align}
where ${\bf pr}^{(2)}Y$ is the second-order prolongation of the vector field $Y$.
\textbf{But}, to obtain the operator $Y$ of the group $E_c$ we use \textit{N. Bila's method} of ref.~\cite{Bila}. The basis of our procedure is Theorem (1) of~\cite{Bila}; this theorem and its results can be summarized as the following three-step procedure:
\textbf{step 1:} Find the determining equations of the extended classical symmetries related to Eq.~\eqref{Hesi}. Regarding the meaning of the extended classical symmetries, a vector field
\begin{equation}
V=\xi(x,y,z,u,f)\partial_x +\zeta\partial_y +\eta\partial_z +\phi\partial_u +\psi\partial_f.\label{v}
\end{equation}
is called the extended classical symmetry operator associated with the $\mathrm{HESI}$ equation, and the determining equation of the extended classical symmetries related to the $\mathrm{HESI}$ equation is the following:
\begin{equation}
{\bf pr}^{(2)}V[\mathrm{HESI}]=0,\label{sys}
\end{equation}
where $\xi$, $\zeta$, $\eta$, $\phi$ and $\psi$ are functions of $x$, $y$, $z$, $u$ and $f$, and ${\bf pr}^{(2)}V$ is
\begin{align}
V+\sum_{J}{\phi}^J(x,y,z,u^{(2)},f^{(2)})\,{\partial}_{u_J}+\sum_{J}{\psi}^J(x,y,z,u^{(2)},f^{(2)})\,{\partial}_{f_J}
\end{align}
where $u^{(2)}=(u, u_x,\cdots,u_{zz})$ and $f^{(2)}=(f, f_x,\cdots, f_{zz})$, and the coefficients are obtained from:
\begin{align}
{\phi}^J(x,y,z,u^{(2)},f^{(2)})=\textbf{D}_J\bigg(\phi-\sum_{i=1}^{3}{\xi}^i\frac{\partial u}{\partial x_i}\bigg)+\sum_{i=1}^{3}{\xi}^i\frac{\partial {u_J}}{\partial x_i}\nonumber\\
{\psi}^J(x,y,z,u^{(2)},f^{(2)})=\textbf{D}_J\bigg(\psi-\sum_{i=1}^{3}{\xi}^i\frac{\partial f}{\partial x_i}\bigg)+\sum_{i=1}^{3}{\xi}^i\frac{\partial {f_J}}{\partial x_i}
\end{align}
where $({\xi}^1,{\xi}^2,{\xi}^3)=(\xi,\zeta,\eta)$ and $(x_1,x_2,x_3)=(x,y,z)$; further, $J=(j_1,\cdots,j_k)$ is a $k$-th order multi-index, where each $j_i$ is $x$, $y$ or $z$ for $1\leqslant i \leqslant k$. Then $\textbf{D}_J$ denotes the total derivative for the multi-index $J$, given by
$\textbf{D}_J =\textbf{D}_{j_1}\cdots\textbf{D}_{j_k}$, where the total derivative operator with respect to $x_i$ is
\begin{equation*}
\textbf{D}_i=\partial_{x_i}+\sum_{J}\frac{\partial {u_J}}{\partial x_i}\,{\partial}_{u_J}+\sum_{J}\frac{\partial {f_J}}{\partial x_i}\,{\partial}_{f_J},
\end{equation*}
\textbf{Note:} ${\bf pr}^{(2)}V$ is determined by taking into account that $u$ and $f$ are both dependent variables, exactly as one would proceed in finding the classical Lie symmetries for a system without arbitrary functions.
So, solving equation \eqref{sys}, we obtain:
\begin{equation}\label{sys2}
\begin{aligned}
& \xi=c_6x+c_9y+c_7z+c_8, & & \zeta=c_{10}z+c_6y-c_9x+c_{11}, \\
& \psi=2f(-2c_6+c_3), & & \eta=-c_{10}y+c_6z-c_7x+c_{12}, \\
& \phi=c_1x+c_3u+c_2y+c_5z+c_4,
\end{aligned}
\end{equation}
where $c_i$, $i=1,\cdots,12$, are arbitrary constants.
\textbf{step 2:} Augment the system of step 1 with the following conditions:
\begin{equation}
\frac{\partial {\xi}}{\partial u}=0,\qquad \frac{\partial {\zeta}}{\partial u}=0,\qquad\frac{\partial {\eta}}{\partial u}=0,\qquad \frac{\partial {\psi}}{\partial u}=0.
\end{equation}
As seen in relations \eqref{sys2}, the above conditions are satisfied.
\textbf{step 3:} Augment the system of steps 1 and 2 with the following conditions:
\begin{equation}
\frac{\partial {\xi}}{\partial f}=0,\qquad \frac{\partial {\zeta}}{\partial f}=0,\qquad\frac{\partial {\eta}}{\partial f}=0,\qquad \frac{\partial {\psi}}{\partial f}=0.
\end{equation}
As seen in relations \eqref{sys2}, the above conditions are satisfied too.
Ultimately, the class of equations \eqref{Hesi} has a finite continuous group of equivalence transformations generated by the following infinitesimal operators:
\begin{equation} \label{vectors2}
\begin{aligned}
Y_1 &=\partial_x , & Y_2 &=\partial_y , & Y_3 &=\partial_z ,\\
Y_4 &=\partial_u , & Y_5 &=x\partial_u , & Y_6 &=y\partial_u ,\\
Y_7 &=z\partial_u , & Y_8 &=z\partial_x -x\partial_z , & Y_9 &=y\partial_x -x\partial_y ,\\
Y_{10} &=z\partial_y -y\partial_z , & Y_{11} &=u\partial_u +2f\partial_f, & Y_{12} &=x\partial_x +y\partial_y +z\partial_z -4f\partial_f.
\end{aligned}
\end{equation}
Moreover, in the group of equivalence transformations are included also discrete transformations, i.e., reflections
$(x,y,z,u,f)\mapsto-(x,y,z,u,f)$.
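For orientation, here is a hedged reading of how the generators \eqref{vectors2} are read off from \eqref{sys2}, assuming the equivalence generator is written as $Y=\xi\partial_x+\zeta\partial_y+\eta\partial_z+\phi\partial_u+\psi\partial_f$ (a convention consistent with the operators listed above). Setting one constant to $1$ and the rest to $0$ gives, for example,

```latex
% c_3=1, all other c_i=0:
\begin{align*}
\xi=\zeta=\eta=0,\quad \phi=u,\quad \psi=2f
&\;\Longrightarrow\; Y_{11}=u\partial_u+2f\partial_f,\\
% c_6=1, all other c_i=0:
\xi=x,\quad \zeta=y,\quad \eta=z,\quad \phi=0,\quad \psi=-4f
&\;\Longrightarrow\; Y_{12}=x\partial_x+y\partial_y+z\partial_z-4f\partial_f.
\end{align*}
```

The remaining generators $Y_1,\dots,Y_{10}$ are obtained from the other constants $c_i$ in the same way.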
\section{Sketch of the method of preliminary group classification}
In many applications of group analysis, most extensions of the principal Lie algebra admitted by an equation are obtained from the equivalence algebra $\mathfrak{g}_{\mathscr{E}}$. We call these extensions ${\mathscr{E}}$-\textit{extensions of the principal Lie algebra}. The classification of all nonequivalent equations admitting an ${\mathscr{E}}$-extension of the principal Lie algebra is called \textit{preliminary group classification}. What we obtain is not necessarily the largest equivalence group; it can be any subgroup of the group of all equivalence transformations.
The application of this method is effective and simple when it is based on a finite-dimensional equivalence algebra $\mathfrak{g}_{\mathscr{E}}$. We therefore take the finite-dimensional algebra $\mathfrak{g}_{12}$ spanned by the basis \eqref{vectors2} and use it for the preliminary group classification.
The function $f$ of Eq.~\eqref{Hesi} depends on the variables $x,y,z$, so we do not construct any prolongations of the operators \eqref{Y}; instead, we take their projections on the space $(x,y,z,f)$.
The nonzero projections of \eqref{vectors2} are:
\begin{align}
Z_1 &={\bf pr}(Y_1)=\partial_x , & Z_2 &={\bf pr}(Y_2)=\partial_y ,\nonumber\\
Z_3 &={\bf pr}(Y_3)=\partial_z , & Z_4 &={\bf pr}(Y_8)=z\partial_x -x\partial_z , \nonumber\\
Z_5 &={\bf pr}(Y_9)=y\partial_x -x\partial_y , & Z_6 &={\bf pr}(Y_{10})=z\partial_y -y\partial_z ,\nonumber\\
Z_7 &={\bf pr}(Y_{11})=2f\partial_f, & Z_8 &={\bf pr}(Y_{12})=x\partial_x +y\partial_y +z\partial_z -4f\partial_f.\label{vectors3}
\end{align}
It is clear that the minimal infinitesimal generators \eqref{principle} do not appear among the above vectors.
The Lie algebra generated with the vectors in \eqref{vectors3} is denoted by $\mathfrak{g}_{8}$.
The essence of the preliminary method is based on the following two propositions:
\begin{proposition}:\label{proposition1}
Let $\mathfrak{g}_{m}$ be an $m$-dimensional subalgebra of $\mathfrak{g}_{8}$. Suppose $Z^{(i)}$, $i=1,\cdots,m$, is a basis of $\mathfrak{g}_{m}$ and $Y^{(i)}$ are the elements of the algebra $\mathfrak{g}_{12}$ such that $Z^{(i)}={\bf pr}(Y^{(i)})$; that is, if
\begin{equation}\label{1}
Z^{(i)}=\sum_{\alpha=1}^{8}e^\alpha_iZ_{\alpha},
\end{equation}
then with respect to \eqref{vectors2} and \eqref{vectors3}:
\begin{align}\label{2}
Y^{(i)}=e^{1}_iY_{1}+e^{2}_iY_{2}+e^{3}_iY_{3}+e^{4}_iY_{8}+e^{5}_iY_{9}+e^{6}_iY_{10}+
e^{7}_iY_{11}+e^{8}_iY_{12}.
\end{align}
If the function $f=f(x,y,z)$ is invariant with respect to the algebra $\mathfrak{g}_{m}$, then the $\mathrm{HESI}$ equation admits the operators
\begin{equation}\label{3}
X^{(i)}=\mbox{projection of}\;\;Y^{(i)} \mbox{on}\;\; (x,y,z,u).
\end{equation}
\end{proposition}
\begin{proposition}:\label{proposition2}
Let equations
\begin{eqnarray}
{\cal S}_2[u]=f(x,y,z), \label{eq1} \\
{\cal S}_2[u]=f^{\prime}(x,y,z),\label{eq2}
\end{eqnarray}
be constructed according to proposition \eqref{proposition1} with subalgebras $\mathfrak{g}_{m}$ and $\mathfrak{g}_{m^{\prime}}$, respectively. If $\mathfrak{g}_{m}$ and $\mathfrak{g}_{m^{\prime}}$ are similar subalgebras in $\mathfrak{g}_{12}$ then equations \eqref{eq1} and \eqref{eq2} are equivalent with respect to the equivalence group $G_{12}$ generated by $\mathfrak{g}_{12}$.
\end{proposition}
According to the above propositions, the continuation of the preliminary group classification of Eq.~\eqref{Hesi} with respect to the finite-dimensional algebra $\mathfrak{g}_{12}$ is reduced to the algebraic problem of constructing nonsimilar subalgebras of $\mathfrak{g}_{8}$, i.e., an optimal system of subalgebras.
\textbf{Note:} In this paper we solve the problem of preliminary group classification only with respect to one-dimensional subalgebras.
\section{Adjoint group for algebra $\mathfrak{g}_{8}$ }
We determine a list, or optimal system, of conjugacy inequivalent subalgebras with the property that any other subalgebra is equivalent to a unique member of the list under some element of the adjoint representation, i.e., $\bar{\mathfrak{h}}={\bf Ad }(g)\mathfrak{h}$ for some $g$ of the considered Lie group; see~\cite{O1,O2,Ov1}.
The adjoint action is given by the Lie series
\begin{equation}
{\bf Ad }(\exp(\varepsilon Y_{i}))Y_{j}=Y_{j}-\varepsilon [Y_{i},Y_{j}]+\frac{\varepsilon^2}{2}[Y_{i},[Y_{i},Y_{j}]]-\cdots.
\end{equation}
The commutator table and the adjoint representation of $\mathfrak{g}_{8}$ are listed in tables \ref{tab1} and \ref{tab2}.
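As a spot-check of table \ref{tab2}, one representative entry follows directly from the Lie series and the commutators in table \ref{tab1}: since $[Z_1,Z_8]=Z_1$ and $[Z_1,[Z_1,Z_8]]=[Z_1,Z_1]=0$,

```latex
\begin{align*}
{\bf Ad}(\exp(\varepsilon_1 Z_1))Z_8
&=Z_8-\varepsilon_1[Z_1,Z_8]+\frac{\varepsilon_1^2}{2}[Z_1,[Z_1,Z_8]]-\cdots\\
&=Z_8-\varepsilon_1 Z_1,
\end{align*}
```

in agreement with the last entry of the first row of table \ref{tab2}; the trigonometric and exponential entries arise in the same way from non-terminating series.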
\begin{table}[ht]
\caption{Commutators table for $\mathfrak{g}_{8}$: $[Z_{i},Z_{j}]$} \label{tab1}
\begin{center}
\begin{tabular}{c|cccccccc}
& $Z_{1}$ & $Z_{2}$ & $Z_{3}$ & $Z_{4}$ & $Z_{5}$ & $Z_{6}$ & $Z_{7}$ & $Z_{8}$\\
\hline
$Z_{1}$ & 0 & 0 & 0 & $-Z_{3}$ & $-Z_{2}$ & 0 & 0 & $Z_{1}$\\
$Z_{2}$ & 0 & 0 & 0 & 0 & $Z_{1}$ & $-Z_{3}$ & 0 & $Z_{2}$\\
$Z_{3}$ & 0 & 0 & 0 & $Z_{1}$ & 0 & $Z_{2}$ & 0& $Z_{3}$\\
$Z_{4}$ & $Z_{3}$ & 0 & $-Z_{1}$ & 0 & $-Z_{6}$ & $Z_{5}$ & 0 & 0\\
$Z_{5}$ & $Z_{2}$ & $-Z_{1}$ & 0 & $Z_{6}$ & 0 & $-Z_{4}$ & 0 & 0\\
$Z_{6}$ & 0 & $Z_{3}$ & $-Z_{2}$ & $-Z_{5}$ & $Z_{4}$ & 0 & 0 & 0\\
$Z_{7}$ & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
$Z_{8}$ & $-Z_{1}$ & $-Z_{2}$ & $-Z_{3}$ & 0 & 0 & 0 & 0 & 0\\
\end{tabular}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\caption{Adjoint table for $\mathfrak{g}_{8}$: ${\bf Ad }(\exp(\varepsilon_i Y_{i}))Y_{j}$} \label{tab2}
\scalebox{0.8}{
\begin{tabular}{c|cccccccc}
& $Z_{1}$ & $Z_{2}$ & $Z_{3}$ & $Z_{4}$ & $Z_{5}$ & $Z_{6}$ & $Z_{7}$ & $Z_{8}$\\
\hline
$Z_{1}$ & $Z_{1}$ & $Z_{2}$ & $Z_{3}$ & $\varepsilon_1Z_{3}+Z_{4}$ & $\varepsilon_1Z_{2}+Z_{5}$ & $Z_{6}$ & $Z_{7}$ & $-\varepsilon_1Z_{1}+Z_{8}$\\
\hline
$Z_{2}$ & $Z_{1}$ & $Z_{2}$ & $Z_{3}$ & $Z_{4}$ & $-\varepsilon_2Z_{1}+Z_{5}$ & $\varepsilon_2Z_{3} +Z_{6}$ & $Z_{7}$ & $-\varepsilon_2Z_{2}+Z_{8}$\\
\hline
$Z_{3}$ & $Z_{1}$ & $Z_{2}$ & $Z_{3}$ & $-\varepsilon_3Z_{1}+Z_{4}$ & $Z_{5}$ & $-\varepsilon_3Z_{2} +Z_{6}$ & $Z_{7}$ & $-\varepsilon_3Z_{3}+Z_{8}$\\
\hline
$Z_{4}$ & $\displaystyle \cos(\varepsilon_4)Z_{1} \atop \displaystyle -\sin(\varepsilon_4)Z_{3}$ & $Z_{2}$ & $\displaystyle \sin(\varepsilon_4)Z_{1} \atop \displaystyle +\cos(\varepsilon_4)Z_{3}$ & $Z_{4}$ & $\displaystyle \cos(\varepsilon_4)Z_{5} \atop \displaystyle +\sin(\varepsilon_4)Z_{6}$ & $\displaystyle -\sin(\varepsilon_4)Z_{5} \atop \displaystyle +\cos(\varepsilon_4)Z_{6}$ & $Z_{7}$ & $Z_{8}$ \\
\hline
$Z_{5}$ & $\displaystyle \cos(\varepsilon_5)Z_{1} \atop \displaystyle -\sin(\varepsilon_5)Z_{2}$ & $\displaystyle \sin(\varepsilon_5)Z_{1} \atop \displaystyle +\cos(\varepsilon_5)Z_{2}$ & $Z_{3}$ & $\displaystyle \cos(\varepsilon_5)Z_{4} \atop \displaystyle -\sin(\varepsilon_5)Z_{6}$ & $Z_{5}$ & $\displaystyle \sin(\varepsilon_5)Z_{4} \atop \displaystyle +\cos(\varepsilon_5)Z_{6}$ & $Z_{7}$ & $Z_{8}$ \\
\hline
$Z_{6}$ & $Z_{1}$ & $\displaystyle \cos(\varepsilon_6)Z_{2} \atop \displaystyle -\sin(\varepsilon_6)Z_{3}$ & $\displaystyle \sin(\varepsilon_6)Z_{2} \atop \displaystyle +\cos(\varepsilon_6)Z_{3}$ & $\displaystyle \cos(\varepsilon_6)Z_{4} \atop \displaystyle +\sin(\varepsilon_6)Z_{5}$ & $\displaystyle -\sin(\varepsilon_6)Z_{4} \atop \displaystyle +\cos(\varepsilon_6)Z_{5}$ & $Z_{6}$ & $Z_{7}$ & $Z_{8}$ \\
\hline
$Z_{7}$ & $Z_{1}$ & $Z_{2}$ & $Z_{3}$ & $Z_{4}$ & $Z_{5}$ & $Z_{6}$ & $Z_{7}$ & $Z_{8}$\\
\hline
$ Z_{8}$ & $e^{\varepsilon_8}Z_{1}$ & $e^{\varepsilon_8}Z_{2}$ & $e^{\varepsilon_8}Z_{3}$ & $Z_{4}$ & $Z_{5}$ & $Z_{6}$ & $Z_{7}$ & $Z_{8}$
\end{tabular}}
\end{center}
\end{table}
\section{Construction of the optimal system of one-dimensional subalgebras of $\mathfrak{g}_{8}$}
\begin{theorem}:\label{opti}
An optimal system of one-dimensional subalgebras of $\mathfrak{g}_{8}$ for the $\mathrm{HESI}$ equation is given by:
\begin{equation}\label{op}
\begin{aligned}
& A^1=Z_{7}, & A^2&=\pm Z_{1}+Z_{7},\\
& A^3=\gamma_1Z_{6}+Z_{7}, & A^4&=\pm Z_{1}+\gamma_2Z_{6}+Z_{7},\\
& A^5=\alpha_1Z_{4}+Z_{7}, & A^6&=\pm Z_{2}+\alpha_2Z_{4}+Z_{7},\\
&A^7=\alpha_3Z_{4}+\gamma_3Z_{6}+Z_{7}, & A^8&=\pm Z_{1}+\alpha_4Z_{4}+\gamma_4Z_{6}+Z_{7},\\
& A^9=\alpha_5Z_{4}+\beta_1Z_{5}+Z_{7}, & A^{10}&=\pm Z_{3}+\alpha_6Z_{4}+\beta_2Z_{5}+Z_{7},\\
& A^{11}=\alpha_7Z_{4}+\beta_3Z_{5}+\gamma_5Z_{7}+Z_{8}, & A^{12}&=\pm Z_{2}+\alpha_8Z_{4}+\beta_4Z_{5}+\gamma_6Z_{7}+Z_{8},
\end{aligned}
\end{equation}
where $\alpha_i$, $i=1,\cdots,8$, $\beta_i$, $i=1,\cdots,4$, and $\gamma_i$, $i=1,\cdots,6$, are arbitrary constants.
\end{theorem}
\medskip \noindent {\it Proof:}
We start with a nonzero vector field $Z=\sum_{i=1}^{8}a_iZ_{i}$ of $\mathfrak{g}_{8}$ and simplify as many of the coefficients $a_i$, $i=1,\cdots,8$, as possible by applying suitable adjoint maps to $Z$. We proceed through the following cases:
\textbf{Note:} The coefficients $a_7$ and $a_8$ do not change under the adjoint action.
\textbf{Case 1:} First, assume that $a_8=0$; by scaling $Z$ we can suppose that $a_7=1$, so $Z=\sum_{i=1}^{6}a_iZ_{i}+Z_7$. Depending on whether $a_5$ is zero or nonzero, we have cases 1.1 and 1.2.
\textbf{Case 1.1:} If $a_8=a_5=0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_3Z_{3}+a_4Z_{4}+a_6Z_{6}+Z_{7}$. Depending on whether $a_4$ is zero or nonzero, we have cases 1.1.a and 1.1.b.
\textbf{Case 1.1.a:} If $a_8=a_5=a_4=0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_3Z_{3}+a_6Z_{6}+Z_{7}$. Depending on whether $a_6$ is zero or nonzero, we have cases 1.1.a1 and 1.1.a2.
\textbf{Case 1.1.a1:} If $a_8=a_5=a_4=a_6=0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_3Z_{3}+Z_{7}$. The coefficient $a_3$ either vanishes or, when $a_3\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp(\cot^{-1}(a_1/{a_3})Z_4))$ to $Z$. Then we have $Z=a_1Z_{1}+a_2Z_{2}+Z_{7}$.
Now either $a_2=0$ or $a_2\neq0$; in the latter case, applying ${\bf Ad }(\exp(\cot^{-1}(a_1/{a_2})Z_5))$ to $Z$ makes the coefficient of $Z_2$ vanish. Then we have $Z=a_1Z_{1}+Z_{7}$.
If $a_1=0$, then $Z=Z_7$, and we have $A^1$.
If $a_1\neq 0$, applying ${\bf Ad }(\exp(\ln(\pm1/{a_1})Z_8))$ sets the coefficient of $Z_1$ to $\pm 1$, so $Z=\pm Z_1+Z_7$; therefore we have $A^2$.
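The rotation used in case 1.1.a1 can be verified explicitly from table \ref{tab2}:

```latex
\begin{align*}
{\bf Ad}(\exp(\varepsilon_4 Z_4))(a_1Z_1+a_3Z_3)
=(a_1\cos\varepsilon_4+a_3\sin\varepsilon_4)Z_1
+(-a_1\sin\varepsilon_4+a_3\cos\varepsilon_4)Z_3,
\end{align*}
```

and the coefficient of $Z_3$ vanishes precisely when $\tan\varepsilon_4=a_3/a_1$, i.e., $\varepsilon_4=\cot^{-1}(a_1/a_3)$; the other adjoint actions used in the proof are checked in the same way.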
\textbf{Case 1.1.a2:} If $a_8=a_5=a_4=0$ but $a_6\neq0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_3Z_{3}+a_6Z_{6}+Z_{7}$. The coefficient $a_3$ either vanishes or, when $a_3\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp((-a_3/{a_6})Z_2))$ to $Z$. So we have $Z=a_1Z_{1}+a_2Z_{2}+a_6Z_{6}+Z_{7}$.
Similarly, the coefficient $a_2$ either vanishes or is made to vanish by applying ${\bf Ad }(\exp((a_2/{a_6})Z_3))$ to $Z$. Then we have $Z=a_1Z_{1}+a_6Z_{6}+Z_{7}$.
Now either $a_1=0$ or $a_1\neq0$; if $a_1=0$, then $Z=a_6Z_{6}+Z_7$, and we have $A^3$.
If $a_1\neq 0$, applying ${\bf Ad }(\exp(\ln(\pm1/{a_1})Z_8))$ sets the coefficient of $Z_1$ to $\pm 1$, so $Z=\pm Z_1+a_6Z_{6}+Z_7$; therefore we have $A^4$.
\textbf{Case 1.1.b:} If $a_8=a_5=0$ but $a_4\neq0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_3Z_{3}+a_4Z_{4}+a_6Z_{6}+Z_{7}$. The coefficient $a_3$ either vanishes or, when $a_3\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp((-a_3/{a_4})Z_1))$ to $Z$. Then we have $Z=a_1Z_{1}+a_2Z_{2}+a_4Z_{4}+a_6Z_{6}+Z_{7}$.
Therefore, depending on whether $a_6$ is zero or nonzero, we have cases 1.1.b1 and 1.1.b2.
\textbf{Case 1.1.b1:} If $a_8=a_5=a_3=a_6=0$ but $a_4\neq0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_4Z_{4}+Z_{7}$. The coefficient $a_1$ either vanishes or, when $a_1\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp((a_1/{a_4})Z_3))$ to $Z$. Then we have $Z=a_2Z_{2}+a_4Z_{4}+Z_{7}$.
Ultimately, either $a_2=0$ or $a_2\neq0$; if $a_2=0$, then $Z=a_4Z_{4}+Z_{7}$, and we have $A^5$.
If $a_2\neq 0$, applying ${\bf Ad }(\exp(\ln(\pm1/{a_2})Z_8))$ sets the coefficient of $Z_2$ to $\pm 1$, so $Z=\pm Z_2+a_4Z_{4}+Z_7$; therefore we have $A^6$.
\textbf{Case 1.1.b2:} If $a_8=a_5=a_3=0$ but $a_4\neq0$ and $a_6\neq0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_4Z_{4}+a_6Z_{6}+Z_{7}$. The coefficient $a_2$ either vanishes or, when $a_2\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp((a_2/{a_6})Z_3))$ to $Z$. So we have $Z=a_1Z_{1}+a_4Z_{4}+a_6Z_{6}+Z_{7}$.
Similarly, the coefficient $a_1$ is either zero or nonzero; if $a_1=0$, then $Z=a_4Z_{4}+a_6Z_{6}+Z_7$, and we have $A^7$.
If $a_1\neq 0$, applying ${\bf Ad }(\exp(\ln(\pm1/{a_1})Z_8))$ sets the coefficient of $Z_1$ to $\pm 1$, so $Z=\pm Z_1+a_4Z_{4}+a_6Z_{6}+Z_7$; therefore we have $A^8$.
\textbf{Case 1.2:} If $a_8=0$ but $a_5\neq0$, then $Z=a_1Z_{1}+a_2Z_{2}+a_3Z_{3}+a_4Z_{4}+a_5Z_{5}+a_6Z_{6}+Z_{7}$. The coefficient $a_2$ either vanishes or, when $a_2\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp((-a_2/a_5)Z_1))$ to $Z$. Again either $a_1=0$ or $a_1\neq0$; if $a_1\neq0$, applying ${\bf Ad }(\exp((a_1/a_5)Z_2))$ to $Z$ makes it vanish. So we have $Z=a_3Z_{3}+a_4Z_{4}+a_5Z_{5}+a_6Z_{6}+Z_{7}$.
Again either $a_6=0$ or $a_6\neq0$; if $a_6\neq0$, applying ${\bf Ad }(\exp(\cot^{-1}(a_4/{a_6})Z_5))$ to $Z$ makes it vanish. So we have $Z=a_3Z_{3}+a_4Z_{4}+a_5Z_{5}+Z_{7}$.
Ultimately, if $a_3=0$, then $Z=a_4Z_{4}+a_5Z_{5}+Z_{7}$, and we have $A^9$.
If $a_3\neq0$, applying ${\bf Ad }(\exp(\ln(\pm1/{a_3})Z_8))$ sets the coefficient of $Z_3$ to $\pm 1$, so $Z=\pm Z_3+a_4Z_{4}+a_5Z_{5}+Z_{7}$, and we have $A^{10}$.
\textbf{Case 2:} If $a_8\neq0$, then by scaling $Z$ we can suppose that $a_8=1$, so $Z=\sum_{i=1}^{7}a_iZ_{i}+Z_8$. The coefficient $a_1$ either vanishes or, when $a_1\neq 0$, can be made to vanish by applying ${\bf Ad }(\exp(a_1Z_1))$ to $Z$. So we reduce $Z$ to $Z=a_2Z_{2}+a_3Z_{3}+a_4Z_{4}+a_5Z_{5}+a_6Z_{6}+a_7Z_{7}+Z_{8}$.
Now either $a_3=0$ or $a_3\neq0$; in the latter case, applying ${\bf Ad }(\exp(\cot^{-1}(a_2/{a_3})Z_6))$ to $Z$ makes the coefficient of $Z_3$ vanish. So we have $Z=a_2Z_{2}+a_4Z_{4}+a_5Z_{5}+a_6Z_{6}+a_7Z_{7}+Z_{8}$.
Again either $a_6=0$ or $a_6\neq0$; if $a_6\neq0$, applying ${\bf Ad }(\exp(-\cot^{-1}(a_5/{a_6})Z_4))$ to $Z$ makes it vanish. So we have $Z=a_2Z_{2}+a_4Z_{4}+a_5Z_{5}+a_7Z_{7}+Z_{8}$.
Ultimately, if $a_2=0$, then $Z=a_4Z_{4}+a_5Z_{5}+a_7Z_{7}+Z_{8}$, and we have $A^{11}$.
If $a_2\neq0$, applying ${\bf Ad }(\exp(\ln(\pm1/{a_2})Z_8))$ sets the coefficient of $Z_2$ to $\pm 1$, so $Z=\pm Z_2+a_4Z_{4}+a_5Z_{5}+a_7Z_{7}+Z_{8}$, and we have $A^{12}$.
There are no further possible cases, and the proof is complete. \hfill\ $\Box$
\section{Equations admitting an extension by one of the principal Lie algebra}
Now, based on propositions \eqref{proposition1} and \eqref{proposition2} and on the optimal system \eqref{op}, we obtain all nonequivalent equations of the form \eqref{Hesi} that admit an extension of the principal Lie algebra $\mathfrak{g}$ by one operator $V_5$; that is, every such equation admits a symmetry group of dimension $4$ with infinitesimal generators \eqref{principle} together with a fifth operator $V_5$. For every case in which this extension occurs, we indicate the corresponding coefficient $f$ and the additional operator $V_5$.
The algorithm is clarified by the following examples:
%
\paragraph{First example:} Consider the operator $A^3=\gamma_1Z_{6}+Z_{7}$, where $\gamma_1\neq 0$, from \eqref{op}; thus \begin{equation}
\begin{aligned}
A^3=\gamma_1z\partial_y -\gamma_1y\partial_z +2f\partial_f.
\end{aligned}
\end{equation}
The invariants are found from the following characteristic system (see~\cite{O2}):
\begin{equation}
\begin{aligned}
\frac{dy}{\gamma_1z}=-\frac{d z}{\gamma_1y}=\frac{d f}{2f},
\end{aligned}
\end{equation}
and are the following functions:
\begin{equation}
\begin{aligned}
I_1=x,\qquad I_2=y^2+z^2,\qquad I_3=f\exp(-\frac{2}{\gamma_1}{\tan}^{-1}(\frac{y}{z})).
\end{aligned}
\end{equation}
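One can check directly that these functions are annihilated by $A^3$; for example,

```latex
\begin{align*}
A^3(y^2+z^2)&=\gamma_1 z\,(2y)-\gamma_1 y\,(2z)=0,\\
A^3 I_3&=2f\,e^{-\frac{2}{\gamma_1}\tan^{-1}(\frac{y}{z})}
-\frac{2f}{\gamma_1}\,e^{-\frac{2}{\gamma_1}\tan^{-1}(\frac{y}{z})}
\Big(\gamma_1 z\,\frac{z}{y^2+z^2}+\gamma_1 y\,\frac{y}{y^2+z^2}\Big)=0,
\end{align*}
```

using $\partial_y\tan^{-1}(y/z)=z/(y^2+z^2)$ and $\partial_z\tan^{-1}(y/z)=-y/(y^2+z^2)$.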
It follows that
\begin{equation}
\begin{aligned}
f={\exp(\frac{2}{\gamma_1}{\tan}^{-1}(\frac{y}{z}))}{H(x,y^2+z^2)},
\end{aligned}
\end{equation}
where $H$ is an arbitrary function.
By applying the formulas \eqref{1}, \eqref{2} and \eqref{3} on the operator $A^3$ we obtain the additional operator $V_5=\gamma_1z\partial_y -\gamma_1y\partial_z +u\partial_u$. Thus, the equation
\begin{equation}
\mathrm{HESI}: \; {\cal S}_2[u]={\exp(\frac{2}{\gamma_1}{\tan}^{-1}(\frac{y}{z}))}{H(x,y^2+z^2)},
\end{equation}
admits the five-dimensional algebra ${\mathfrak{g}}_5$, generated by the following vectors
\begin{align}
V_1&=\partial_u , & V_2&=x\partial_u , &V_3&=y\partial_u , \nonumber\\
V_4&=z\partial_u , & V_5&=\gamma_1z\partial_y -\gamma_1y\partial_z +u\partial_u.
\end{align}
\paragraph{Second example:} Consider the operator $A^1=Z_{7}$ from \eqref{op}, so $A^1=2f\partial_f$.
Invariants are the following functions:
\begin{equation}\label{inv}
\begin{aligned}
I_1=x,\qquad I_2=y,\qquad I_3=z.
\end{aligned}
\end{equation}
So, there are no invariant functions $f=f(x,y,z)$, because the necessary condition for the existence of invariant solutions (see ref.~\cite{Ov1}, section 19.3) is not satisfied; that is, the invariants \eqref{inv} cannot be solved with respect to $f$.
We continue the calculations for some operators of \eqref{op} and show the results in table \ref{tab3}, which gives the preliminary group classification of equation \eqref{Hesi}, i.e., the equations admitting an extension ${\mathfrak{g}}_5$ of the principal Lie algebra ${\mathfrak{g}}$.
The results of classification:
\begin{longtable}{c|cc}
\caption{The equation ${\cal E}:\;{\cal S}_2[u]=f$ has $V_5$ as its additional operator w.r.t.\ $A^s$} \label{tab3}\\
& $f$ & $V_5$ \\ \hline
$A^2$ & $ {\exp(\pm 2x)}{H(y,z)}$ & $ \pm\partial_x +u\partial_u $\\
\hline
${A^3}_{({\gamma_1}\neq 0)}$ & $\displaystyle \exp(\frac{2}{\gamma_1}{\tan}^{-1}(\frac{y}{z})) H(x,y^2+z^2)$ & $ \gamma_1z\partial_y -\gamma_1y\partial_z +u\partial_u $\\
\hline
${A^4}_{({\gamma_2}\neq 0)}$ & $\displaystyle \exp(\frac{2}{\gamma_2}{\tan}^{-1}(\frac{y}{z})) \atop \displaystyle H(y^2+z^2,x\mp \frac{1}{\gamma_2}{\tan}^{-1}(\frac{y}{z}))$ & $ \pm\partial_x +\gamma_2z\partial_y -\gamma_2y\partial_z +u\partial_u $\\
\hline
${A^5}_{({\alpha_1}\neq 0)}$ & $ \exp(\frac{2}{\alpha_1}{\tan}^{-1}(\frac{x}{z}))H(y,x^2+z^2)$ & $ \alpha_1z\partial_x -\alpha_1x\partial_z +u\partial_u $\\
\hline
${A^6}_{({\alpha_2}=0)}$ & $ {\exp(\pm 2y)}{H(x,z)}$ & $ \pm\partial_y +u\partial_u $\\
\hline
${A^6}_{({\alpha_2}\neq 0)}$ & $\displaystyle \exp(\frac{2}{\alpha_2}{\tan}^{-1}(\frac{x}{z})) \atop \displaystyle H(x^2+z^2,y\mp \frac{1}{\alpha_2}{\tan}^{-1}(\frac{x}{z}))$ & $ \pm\partial_y +\alpha_2z\partial_x -\alpha_2x\partial_z +u\partial_u $\\
\hline
${A^7}_{({\alpha_3}\neq 0,{\gamma_3}\neq 0)}$ & $\displaystyle \exp(\frac{2}{\sqrt{{\alpha_3}^2+{\gamma_3}^2}}{\tan}^{-1}{(\frac{\alpha_3x+\gamma_3y}{z\sqrt{\alpha_3^2+\gamma_3^2}})}) \atop \displaystyle H(y-\frac{\gamma_3}{\alpha_3}x,(
(1-\frac{{\gamma_3}^2}{{\alpha_3}^2})x^2+\frac{2\gamma_3}{\alpha_3}xy+z^2))$ & $\displaystyle \alpha_3z\partial_x +\gamma_3z\partial_y \atop \displaystyle -(\alpha_3x+\gamma_3y)\partial_z +u\partial_u $\\
\hline
${A^9}_{({\alpha_5}=0,{\beta_1}\neq 0)}$ & $ \exp(\frac{2}{\beta_1}{\tan}^{-1}(\frac{x}{y}))H(z,x^2+y^2)$ & $ \beta_1y\partial_x -\beta_1x\partial_y +u\partial_u$\\
\hline
${A^9}_{(\alpha_5\neq 0,\beta_1\neq 0)}$ & $\displaystyle \exp(\frac{-2}{\sqrt{\beta_1^2+\alpha_5^2}}\tan^{-1}(
\frac{\beta_1y+\alpha_5z}{x\sqrt{\alpha_5^2+\beta_1^2}})) \atop \displaystyle H(z-\frac{\alpha_5}{\beta_1}y, (x^2+(1-\frac{\alpha_5^2}{\beta_1^2})y^2+\frac{2\alpha_5}{\beta_1}yz))$ & $\displaystyle (\alpha_5z+\beta_1y)\partial_x \atop \displaystyle -\beta_1x\partial_y -\alpha_5x\partial_z +u\partial_u $\\
\hline
${A^{10}}_{(\alpha_6=\beta_2=0)}$ & $ {\exp(\pm 2z)}{H(x,y)}$ & $ \pm\partial_z +u\partial_u $\\
\hline
${A^{10}}_{(\alpha_6= 0,\beta_2\neq 0)}$ & $\displaystyle \exp(\frac{2}{\beta_2}{\tan}^{-1}(\frac{x}{y})) \atop \displaystyle H(x^2+y^2,z\mp \frac{1}{\beta_2}{\tan}^{-1}(\frac{x}{y}))$ & $ \pm\partial_z +\beta_2y\partial_x -\beta_2x\partial_y +u\partial_u $\\
\hline
${A^{11}}_{(\alpha_7=\beta_3=\gamma_5=0)}$ & $ x^{-4}H(\frac{y}{x},\frac{z}{x})$ & $ x\partial_x +y\partial_y +z\partial_z $\\
\hline
${A^{11}}_{(\alpha_7=\beta_3=0,\gamma_5\neq 0)}$ & $ x^{2\gamma_5-4}H(\frac{y}{x},\frac{z}{x})$ & $ x\partial_x +y\partial_y +z\partial_z +\gamma_5u\partial_u$\\
\hline
${A^{12}}_{(\alpha_8=\beta_4=\gamma_6=0)}$ & $ x^{-4}H(\frac{y\pm 1}{x},\frac{z}{x})$ & $ x\partial_x +(y\pm 1)\partial_y +z\partial_z $\\
\hline
${A^{12}}_{(\alpha_8=\beta_4=0,\gamma_6\neq 0)}$ & $ x^{2\gamma_6-4}H(\frac{y\pm 1}{x},\frac{z}{x})$ & $ x\partial_x +(y\pm 1)\partial_y +z\partial_z +\gamma_6u\partial_u$
\end{longtable}
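As a spot-check of table \ref{tab3}, consider the row $A^2=\pm Z_1+Z_7$. On the space $(x,y,z,f)$ this operator reads $\pm\partial_x+2f\partial_f$, and its characteristic system gives

```latex
\begin{equation*}
\frac{dx}{\pm 1}=\frac{df}{2f}
\;\Longrightarrow\;
I_1=y,\qquad I_2=z,\qquad I_3=f\,e^{\mp 2x},
\end{equation*}
```

so that $f=e^{\pm 2x}H(y,z)$, in agreement with the first row of the table.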
\section{Some Local Solutions }
Based on the following theorem from~\cite{Tian}, the $2$-Hessian equation in $\mathbb{R}^3$,
\begin{equation}\label{2hesi}
{\cal S}_2[u]=f(x,y,z,u,{\cal D}u) \qquad \mbox{on } \Omega\subset\mathbb{R}^3,
\end{equation}
where $f\in\textit{C}^{\infty}(\Omega\times\mathbb{R}\times\mathbb{R}^3)$ and ${\cal D}u=(\partial_1u,\cdots,\partial_nu)$, has $\textit{C}^{\infty}$ local solutions in $\mathbb{R}^3$ of the following form:
\begin{equation}\label{solution}
u(x,y,z)=\frac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)+\varepsilon^5\omega(\varepsilon^{-2}(x,y,z)),
\end{equation}
where $\varepsilon$ and $\tau_i$ are arbitrary constants, and $\omega$ is a given smooth function.
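For the quadratic part of \eqref{solution} the value of ${\cal S}_2$ is immediate; assuming, as is standard for the $2$-Hessian, that ${\cal S}_2[u]$ is the second elementary symmetric function of the eigenvalues of the Hessian (the sum of its $2\times 2$ principal minors), the Hessian of the quadratic part is $\mathrm{diag}(\tau_1,\tau_2,\tau_3)$, so

```latex
\begin{equation*}
{\cal S}_2\Big[\tfrac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)\Big]
=\tau_1\tau_2+\tau_1\tau_3+\tau_2\tau_3,
\end{equation*}
```

while the second derivatives of the perturbation $\varepsilon^5\omega(\varepsilon^{-2}(x,y,z))$ are of order $\varepsilon$, so the correction to the Hessian is small for small $\varepsilon$.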
\begin{theorem}
Assume that $f\in\textit{C}^{\infty}(\Omega\times\mathbb{R}\times\mathbb{R}^3)$, then for any $Z_0=(x_0,u_0,p_0)\in\Omega\times\mathbb{R}\times\mathbb{R}^3$, we have that
$(1)$ If $f(Z_0)=0$, then \eqref{2hesi} admits a $1$-convex $\textit{C}^{\infty}$ local solution which is not convex.
$(2)$ If $f\geq0$ near $Z_0$, then \eqref{2hesi} admits a $2$-convex $\textit{C}^{\infty}$ local solution which is not convex.
If $f(Z_0)>0$, \eqref{2hesi} admits a convex $\textit{C}^{\infty}$ local solution.
$(3)$ If $f(Z_0)<0$, then \eqref{2hesi} admits a $1$-convex $\textit{C}^{\infty}$ local solution which is not $2$-convex. \\
Moreover, the equation \eqref{2hesi} is uniformly elliptic with respect to the above local solutions.
\end{theorem}
In this part we obtain the one-parameter groups generated by some of the operators; since these groups are symmetry groups of the $\mathrm{HESI}$ equation, they transform the above solution into further solutions of the $\mathrm{HESI}$ equation.
\begin{itemize}
\item\textbf{Case 1:} The operator $V_1=\partial_u$ produces the one-parameter group $G_1=(x,y,z, t+u)$, $t\in\mathbb{R}$; for the equation ${\cal S}_2[u]=f(x,y,z)$ it transforms solution \eqref{solution} to the following solution
\begin{align*}
u(x)=\frac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)-t+\varepsilon^5\omega(\varepsilon^{-2}(x,y,z)),
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 2:} The operator $V_2=x\partial_u$ produces the one-parameter group $G_2=(x,y,z, tx+u)$, $t\in\mathbb{R}$; for the equation ${\cal S}_2[u]=f(x,y,z)$ it transforms solution \eqref{solution} to the following solution
\begin{align*}
u(x)=\frac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)-tx+\varepsilon^5\omega(\varepsilon^{-2}(x,y,z)),
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
The following cases proceed in the same manner:
\item\textbf{Case 3:} The operator $V_3=y\partial_u$, $G_3=(x,y,z, ty+u)$, $t\in\mathbb{R}$, for the equation ${\cal S}_2[u]=f(x,y,z)$; so
\begin{align*}
u(x)=\frac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)-ty+\varepsilon^5\omega(\varepsilon^{-2}(x,y,z)),
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 4:} The operator $V_4=z\partial_u$, $G_4=(x,y,z, tz+u)$, $t\in\mathbb{R}$, for the equation ${\cal S}_2[u]=f(x,y,z)$; so
\begin{align*}
u(x)=\frac{1}{2}(\tau_1x^2+\tau_2y^2+\tau_3z^2)-tz+\varepsilon^5\omega(\varepsilon^{-2}(x,y,z)),
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
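Cases 1--4 share a common mechanism: adding a function that is affine in $(x,y,z)$ leaves the Hessian, and hence ${\cal S}_2$, unchanged,

```latex
\begin{equation*}
\partial_i\partial_j\big(u+t_0+t_1x+t_2y+t_3z\big)=\partial_i\partial_j u
\;\Longrightarrow\;
{\cal S}_2[u+t_0+t_1x+t_2y+t_3z]={\cal S}_2[u],
\end{equation*}
```

which is why $V_1,\dots,V_4$ generate symmetries of \eqref{Hesi} for arbitrary $f(x,y,z)$.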
\item\textbf{Case 5:} The operator $V_5=\pm\partial_x +u\partial_u$ produces the one-parameter group $G_5=(x\pm t,y,z, e^tu)$, $t\in\mathbb{R}$,\\for the equation ${\cal S}_2[u]=\exp(\pm2x)H(y,z)$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\big[\tau_1(x\pm{t})^2+\tau_2y^2+\tau_3z^2+2\varepsilon^5\omega(\varepsilon^{-2}(x\pm t,y,z))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 6:} The operator $V_5=\gamma_1z\partial_y -\gamma_1y\partial_z +u\partial_u$ produces the one-parameter group \\$G_5=(x,z\sin(\gamma_1t)+y\cos(\gamma_1t),z\cos(\gamma_1t)-y\sin(\gamma_1t), e^tu)$, $t\in\mathbb{R}$,\\
for the equation ${\cal S}_2[u]=\displaystyle \exp(\frac{2}{\gamma_1}{\tan}^{-1}(\frac{y}{z})) H(x,y^2+z^2)$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\big[\tau_1x^2+\tau_2(z\sin(\gamma_1t)+y\cos(\gamma_1t))^2+\tau_3(z\cos(\gamma_1t)-y\sin(\gamma_1t))^2\\
&+2\varepsilon^5\omega(\varepsilon^{-2}(x,z\sin(\gamma_1t)+y\cos(\gamma_1t),z\cos(\gamma_1t)-y\sin(\gamma_1t)))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 7:} The operator $V_5=\pm\partial_x +\gamma_2z\partial_y -\gamma_2y\partial_z +u\partial_u$ produces the one-parameter group\\
$G_5=(x\pm t,z\sin(\gamma_2t)+y\cos(\gamma_2t),z\cos(\gamma_2t)-y\sin(\gamma_2t), e^tu)$, $t\in\mathbb{R}$,\\ for the equation ${\cal S}_2[u]=\displaystyle \exp(\frac{2}{\gamma_2}{\tan}^{-1}(\frac{y}{z}))\displaystyle H(y^2+z^2,x\mp \frac{1}{\gamma_2}{\tan}^{-1}(\frac{y}{z}))$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\big[\tau_1(x\pm t)^2+\tau_2(z\sin(\gamma_2t)+y\cos(\gamma_2t))^2+\tau_3(z\cos(\gamma_2t)-y\sin(\gamma_2t))^2\\
&+2\varepsilon^5\omega(\varepsilon^{-2}(x\pm t,z\sin(\gamma_2t)+y\cos(\gamma_2t),z\cos(\gamma_2t)-y\sin(\gamma_2t)))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 8:} The operator $V_5=\alpha_1z\partial_x -\alpha_1x\partial_z +u\partial_u $ produces the one-parameter group\\ $G_5=(z\sin(\alpha_1t)+x\cos(\alpha_1t),y,z\cos(\alpha_1t)-x\sin(\alpha_1t), e^tu)$, $t\in\mathbb{R}$,\\
for the equation ${\cal S}_2[u]=\exp(\frac{2}{\alpha_1}{\tan}^{-1}(\frac{x}{z}))H(y,x^2+z^2)$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\big[\tau_1(z\sin(\alpha_1t)+x\cos(\alpha_1t))^2+\tau_2y^2+\tau_3(z\cos(\alpha_1t)-x\sin(\alpha_1t))^2\\
&+2\varepsilon^5\omega(\varepsilon^{-2}(z\sin(\alpha_1t)+x\cos(\alpha_1t),y,z\cos(\alpha_1t)-x\sin(\alpha_1t)))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 9:} The operator $V_5=\pm\partial_y +u\partial_u$ produces the one-parameter group $G_5=(x,y\pm t,z, e^tu)$, $t\in\mathbb{R}$,\\ for the equation ${\cal S}_2[u]=\exp(\pm2y)H(x,z)$; so
\begin{align*}
u(x)=\frac{e^{-t}}{2}\big[\tau_1x^2+\tau_2(y\pm t)^2+\tau_3z^2+2\varepsilon^5\omega(\varepsilon^{-2}(x,y\pm t,z))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 10:} The operator $V_5= \pm\partial_y +\alpha_2z\partial_x -\alpha_2x\partial_z +u\partial_u$ produces the one-parameter group\\ $G_5=(z\sin(\alpha_2t)+x\cos(\alpha_2t),y\pm t,z\cos(\alpha_2t)-x\sin(\alpha_2t), e^tu)$, $t\in\mathbb{R}$,\\
for the equation ${\cal S}_2[u]=\displaystyle \exp(\frac{2}{\alpha_2}{\tan}^{-1}(\frac{x}{z}))\displaystyle H(x^2+z^2,y\mp \frac{1}{\alpha_2}{\tan}^{-1}(\frac{x}{z}))$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\big[\tau_1(z\sin(\alpha_2t)+x\cos(\alpha_2t))^2+\tau_2(y\pm t)^2+\tau_3(z\cos(\alpha_2t)-x\sin(\alpha_2t))^2\\
&+2\varepsilon^5\omega(\varepsilon^{-2}(z\sin(\alpha_2t)+x\cos(\alpha_2t),y\pm t,z\cos(\alpha_2t)-x\sin(\alpha_2t)))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 11:} The operator $V_5=\beta_1y\partial_x -\beta_1x\partial_y +u\partial_u $ produces the one-parameter group\\
$G_5=(y\sin(\beta_1t)+x\cos(\beta_1t),y\cos(\beta_1t)-x\sin(\beta_1t),z, e^tu)$, $t\in\mathbb{R}$,\\
for the equation ${\cal S}_2[u]= \exp(\frac{2}{\beta_1}{\tan}^{-1}(\frac{x}{y}))H(z,x^2+y^2)$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\Big[\tau_1(y\sin(\beta_1t)+x\cos(\beta_1t))^2+\tau_2(y\cos(\beta_1t)-x\sin(\beta_1t))^2+\tau_3z^2\\
&+2\varepsilon^5\omega(\varepsilon^{-2}(y\sin(\beta_1t)+x\cos(\beta_1t),y\cos(\beta_1t)-x\sin(\beta_1t),z))\Big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 12:} The operator $V_5=\pm\partial_z +u\partial_u$ produces the one-parameter group $G_5=(x,y,z\pm t, e^tu)$, $t\in\mathbb{R}$,\\ for the equation ${\cal S}_2[u]= {\exp(\pm 2z)}{H(x,y)}$; so
\begin{align*}
u(x)=\frac{e^{-t}}{2}\Big[\tau_1x^2+\tau_2y^2+\tau_3(z\pm t)^2+2\varepsilon^5\omega(\varepsilon^{-2}(x,y,z\pm t))\Big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 13:} The operator $V_5=\pm\partial_z +\beta_2y\partial_x -\beta_2x\partial_y +u\partial_u$ produces the one-parameter group\\ $G_5=(y\sin(\beta_2t)+x\cos(\beta_2t),y\cos(\beta_2t)-x\sin(\beta_2t),z\pm t, e^tu)$, $t\in\mathbb{R}$,\\
for the equation ${\cal S}_2[u]=\displaystyle \exp(\frac{2}{\beta_2}{\tan}^{-1}(\frac{x}{y}))\displaystyle H(x^2+y^2,z\mp \frac{1}{\beta_2}{\tan}^{-1}(\frac{x}{y}))$; so
\begin{align*}
u(x)&=\frac{e^{-t}}{2}\big[\tau_1(y\sin(\beta_2t)+x\cos(\beta_2t))^2+\tau_2(y\cos(\beta_2t)-x\sin(\beta_2t))^2+\tau_3(z\pm t)^2\\
&+2\varepsilon^5\omega(\varepsilon^{-2}(y\sin(\beta_2t)+x\cos(\beta_2t),y\cos(\beta_2t)-x\sin(\beta_2t),z\pm t))\big],
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 14:} The operator $V_5=x\partial_x+y\partial_y+z\partial_z$,\\
$G_5=(e^tx,e^ty,e^tz,u)$, $t\in\mathbb{R}$, for the equation ${\cal S}_2[u]= x^{-4}H(\frac{y}{x},\frac{z}{x})$; so
\begin{align*}
u(x)=\frac{1}{2}(\tau_1e^{2t}x^2+\tau_2e^{2t}y^2+\tau_3e^{2t}z^2)+\varepsilon^5\omega(\varepsilon^{-2}(e^tx,e^ty,e^tz)),
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\item\textbf{Case 15:} The operator $V_5=x\partial_x+(y\pm 1)\partial_y+z\partial_z$, \\
$G_5=(e^tx,e^ty\pm e^t\mp1,e^tz,u)$, $t\in\mathbb{R}$, for the equation ${\cal S}_2[u]= x^{-4}H(\frac{y\pm 1}{x},\frac{z}{x})$; so
\begin{align*}
u(x)&=\frac{1}{2}(\tau_1e^{2t}x^2+\tau_2(e^ty\pm e^t\mp1)^2+\tau_3e^{2t}z^2)+\varepsilon^5\omega(\varepsilon^{-2}(e^tx,e^ty\pm e^t\mp1,e^tz)),
\end{align*}
where $\varepsilon,\tau_i\in\mathbb{R}$.
\end{itemize}
\section{Conclusion}
In this paper, we performed a preliminary group classification of the class of $3$-dimensional nonlinear $2$-Hessian equations ${\cal S}_2[u]=f(x,y,z)$ by investigating the algebraic structure of the symmetry groups of the equation. We then obtained an optimal system of one-dimensional Lie subalgebras of this equation with the aid of propositions \eqref{proposition1} and \eqref{proposition2}. The result of this work is a wide class of equations, summarized in table \ref{tab3}. Finally, some exact solutions of the $2$-Hessian equation were presented.
Of course, the results in table \ref{tab3} can be extended to the remaining vectors, and the corresponding reduced equations for all cases in the classification may be obtained in upcoming work.
\section{Introduction}
The development of small electronic storage devices has been going on for years and
the possibility to store information in magnetic nanostructures is an active topic of research.
Using exchange coupling between atomic spins, a recent experiment has shown that information
can in principle be stored in just a few antiferromagnetically aligned Fe atoms~\cite{loth12}.
This experiment involved imaging and manipulation of individual atomic spins by spin-polarized
scanning tunneling microscopy (STM), a technique which over the last decade has developed into a
powerful tool for studying spin dynamics of engineered atomic structures. In a series of seminal STM experiments
inelastic tunneling spectroscopy (IETS) has been used to investigate
spin excitation spectra of individual magnetic atoms~\cite{hein04}, to probe the exchange interaction between spins
in chains of Mn atoms and the orientation and strength of their magnetic anisotropy~\cite{hirj06,hirj07},
and to study the effect of this anisotropy on Kondo screening of magnetic atoms~\cite{otte08}. A few years later, experimental studies of
tunneling-induced spin dynamics in atomically assembled magnetic structures were performed: In 2012 Loth {\it et al.}
measured the voltage-induced switching rate between the two N\'eel ground states of an antiferromagnetically coupled chain
of Fe atoms~\cite{loth12}
and recently spin waves (magnons) have been imaged in chains of ferromagnetically aligned atoms, including the demonstration of
switching between the two oppositely aligned ground states and
local tuning of spin state mixing by exchange coupling~\cite{spin14,khat13,yan15}.
\vspace*{0.2cm} \\
The tunnel-current-induced spin dynamics of single magnetic atoms and engineered atomic chains in these experiments can be well described
by a spin Heisenberg Hamiltonian (see section~\ref{sec-Hamiltonian}),
which contains the magnetic anisotropy and exchange coupling between neighbouring atomic spins as phenomenological parameters.
This model has been successfully
used to analyze, among others, the $I(V)$ characteristics of an electron interacting via exchange coupling
with a magnetic atom~\cite{hirj06,hirj07,otte08,fern09}, to explain step heights in inelastic conductance
measurements of adsorbed Fe atoms~\cite{lore09}, to provide a theoretical description based on rate equations of
spin dynamics in one-dimensional chains of magnetic atoms~\cite{delg10}, to analyze magnetic switching in terms of
the underlying quantum processes in ferromagnetic chains~\cite{gauy13} and to calculate the
electron-induced switching rate between N\'eel states in antiferromagnetic chains of Fe atoms~\cite{gauy13PRL,li15}.
\vspace*{0.2cm} \\
In this paper we investigate the effect of single-spin transverse magnetic anisotropy on spin-flip assisted tunneling and spin transition rates in chains
of magnetic atoms.
Understanding the role played by magnetic anisotropy in tunneling spectroscopy is of great importance, both fundamentally
and for being able to engineer magnetic properties of atomic chains and clusters on surfaces.
Compared with the longitudinal (easy-axis) magnetic anisotropy, the quantitative effect of transverse magnetic anisotropy on tunneling spectroscopy of
magnetic chains has so far received little attention. In experiments involving antiferromagnetically coupled atoms
transverse anisotropy has often been small (too small to be observable) or altogether negligible,
because the easy-axis anisotropy energy is much larger than the transverse exchange energy~\cite{hirj06,hirj07,otte08,loth12}.
However, such a uniaxial model does not always apply. Transverse anisotropy, together with the parity of the atomic spin, influences the degeneracy
of the energy spectrum~\cite{hirj07,delg12,jaco15}. Recent studies have demonstrated that the presence of
non-zero transverse anisotropy modifies the switching frequency of few-atom magnets when atoms are directly adsorbed on the substrate~\cite{khat13}.
It has also been predicted that finite values of transverse anisotropy lead to the appearance of peaks in the
differential conductance when using spin-polarized STM~\cite{misi13} and a recent experiment has
demonstrated that the strength of the magnetocrystalline anisotropy can be controllably enhanced or reduced by manipulating its local strain environment~\cite{brya13,multiaxial}.
In addition, ferromagnetically coupled atomic chains (nanomagnets) usually
exhibit non-negligible values of transverse anisotropy~\cite{spin14,khat13,yan15}.
From an engineering point of view, transverse anisotropy could be used to tune dynamic properties such as spin switching
in antiferromagnetic chains, since it breaks the degeneracy of the
N\'eel ground states and transforms them into N\'eel-like states that contain a larger number of different spin configurations~\cite{switching_vs_E}.
Recent experiments have investigated the three-dimensional distribution of the magnetic anisotropy of single Fe atoms and demonstrated
the electronic tunability of
the relative magnitude of longitudinal and transverse anisotropy~\cite{yan15_2}. This provides
further evidence for the potential importance of tunability of magnetic anisotropy for enhancing or weakening spin tunneling phenomena~\cite{ober14}. Given all this,
it is interesting and important to obtain direct and intuitive insight
in the effect of transverse anisotropy on inelastic tunneling transport and STM-induced spin
transition rates in chains of magnetic atoms. The aim of this paper is to provide a first step in this direction on a phenomenological level.
\vspace*{0.2cm} \\
Using a perturbative approach and including the strength of the transverse anisotropy up to first order, we analytically calculate the inelastic current $I(V)$,
differential conductance $dI/dV$ and corresponding IETS spectra $d^2I/dV^2$ for atomic chains with nearest-neighbour Ising exchange coupling. We also
perform numerical simulations of spin transition rates of an
antiferromagnetically coupled atomic spin chain. We find that finite transverse anisotropy introduces: 1) additional steps in the differential
conductance $dI/dV$ and corresponding sharp peaks in $d^2I/dV^2$ and 2) a
substantial increase of the spin transition rate between atomic levels.
We show that both are due to transverse anisotropy-induced coupling between additional atomic spin levels and provide a quantitative explanation of the
position and heights of the conductance steps and the dependence of the spin transition rates on the strength of the transverse anisotropy.
Our perturbative approach is valid for single-spin transverse anisotropy strengths
corresponding to typical experimental values.
\vspace*{0.2cm} \\
The outline of the paper is as follows. In Sec.~II A we discuss the phenomenological spin Hamiltonian and its energy spectrum and eigenfunctions
with the transverse anisotropy energy included up to first-order perturbation theory. We then derive analytical expressions for the inelastic tunneling
current $I(V)$, differential conductance $dI/dV$ and IETS spectra $d^2I/dV^2$ of an $N$-atomic spin chain (Sec. II B),
and for the tunneling-induced transition rates (Sec. II C). Application of these results to chains of antiferromagnetically coupled atoms
are presented and analyzed in Secs.~III and IV. Sec.~V contains conclusions and a discussion of open questions.
\section{Theory}
\subsection{Hamiltonian}
\label{sec-Hamiltonian}
In this section we first briefly discuss the spin Hamiltonian used to describe the atomic chain and then derive its eigenvalues and
eigenfunctions up to first order in the strength of the transverse magnetic anisotropy.
\\
The eigenenergies and spin eigenstates of a chain of $N$ magnetic atoms can be described by a phenomenological Heisenberg spin Hamiltonian,
consisting of a single-spin part and nearest-neighbour exchange interaction~\cite{hirj06,spin14,fern09,delg10,gatt06}:
\begin{equation}
{\cal H} = \sum_{i=1}^{N} \hat{\cal H}_{i,S} + \sum_{i=1}^{N-1} J\, \hat{\bf S}_i \cdot \hat{\bf S}_{i+1}
\label{TotalHamiltonian}
\end{equation}
with
\begin{equation}
\hat{\cal H}_{i,S} = D\hat{S}^2_{i,z} + E(\hat{S}^2_{i,x} - \hat{S}^2_{i,y}) - g^*{\mu_B} {\bf B} \cdot \hat{\bf S}_{i}.
\label{SinglespinHamiltonian}
\end{equation}
Here $D$ represents the single-spin longitudinal magnetocrystalline anisotropy, $E$ the transverse magnetic anisotropy, $g^*$ the Land\'{e} g-factor,
$\mu_B$ the Bohr magneton, ${\bf B}$ the external magnetic field and $J$ the exchange energy between neighbouring atoms~\cite{uniform}.
The parameters $D$, $E$ and $J$ can be inferred from experiments, see e.g. Refs.~\cite{loth12,hein04,hirj06,hirj07,otte08,spin14}.
$\hat{S}_{i,x}$, $\hat{S}_{i,y}$ and $\hat{S}_{i,z}$ are, respectively, the $x$-, $y$-, and $z$-components of the spin operator of the atom at site $i$
along the chain. Assuming ${\bf B} = B \hat{z}$, $E=0$ and exchange coupling between the $z$-components of the spin only (Ising coupling),
the eigenvalues $E_{m_1,..., m_N}$ of the Hamiltonian~(\ref{TotalHamiltonian}) are given by
\begin{equation}
E_{m_1,..., m_N} = D \sum_{i=1}^{N} m_i^2 + J \sum_{i=1}^{N-1} m_i m_{i+1} - E_Z \sum_{i=1}^{N} m_i,
\label{eq:energies}
\end{equation}
with $m_i$ the quantum number labeling the angular momentum in the $z$-direction of the $i^{th}$ atom and $E_Z\equiv g^{*} \mu_B B$ the Zeeman energy.
When adding the single-spin transverse anisotropy
$E(\hat{S}^2_{i,x} - \hat{S}^2_{i,y})$ as a perturbation, the corresponding eigenfunctions $\psi_{m_1,..,m_N}$ up to first order in $E$ are given by
\begin{equation}
\psi_{m_1,\ldots,m_N} = |m_1,..,m_N\rangle + |m_1,..,m_N\rangle^{(1)},
\label{eigenstate-gestoord}
\end{equation}
with
\begin{eqnarray}
|m_1,...m_N\rangle^{(1)} & = &
\frac{1}{2} E\, \sum_{i=1}^{N} \left( A_{m_{i-1},m_i,m_{i+1}} |m_1,..,m_i\! +\! 2,..,m_N\rangle \right. \nonumber \\
& & \left. - A_{m_{i-1},m_i-2,m_{i+1}} |m_1,..,m_i\! - \! 2,..,m_N\rangle \right)
\label{eq:corr_psi_1}
\end{eqnarray}
and
\begin{equation}
A_{m_{i-1},m_i,m_{i+1}} \equiv \frac{ \sqrt{36 - 12 (m_i + 1)^2 + m_i (m_i + 1)^2 (m_i + 2)}}{- 4D(m_i + 1) - 2J (m_{i-1} + m_{i+1})
+ 2 E_Z} .
\label{eq:Ami}
\end{equation}
Here $A_{m_{j-1},m_j,m_{j+1}} = 0$ for $j\leq 1$ or $m_{j-1},m_j,m_{j+1} \notin [-2,..,2]$.
Since the first-order correction of the eigenvalues (\ref{eq:energies}) is zero, Eqns.~(\ref{eq:energies}) and (\ref{eigenstate-gestoord}) thus represent the eigenvalues and eigenfunctions of the Hamiltonian (\ref{TotalHamiltonian})
(for ${\bf B} = B \hat{z}$ and Ising coupling) up to first order in $E$.
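As a quick numerical cross-check of Eq.~(\ref{eq:Ami}): the square root in its numerator should reproduce the spin-2 ladder-operator matrix element $\langle m_i+2|\hat{S}_+^2|m_i\rangle$ that generates the admixtures in Eq.~(\ref{eq:corr_psi_1}). A minimal Python sketch (the function names are ours; $\hbar=1$):

```python
import math

S = 2  # spin quantum number of each Fe atom

def ladder(m):
    """Matrix element <m+1| S_+ |m> for spin S (hbar = 1)."""
    return math.sqrt(S * (S + 1) - m * (m + 1))

def A_numerator(m):
    """Square root in the numerator of Eq. (6)."""
    return math.sqrt(36 - 12 * (m + 1) ** 2 + m * (m + 1) ** 2 * (m + 2))

# The numerator must equal <m+2| S_+^2 |m> = ladder(m+1) * ladder(m)
for m in (-2, -1, 0, 1):
    assert abs(A_numerator(m) - ladder(m + 1) * ladder(m)) < 1e-12
```

The check passes for all allowed $m_i$; for $m_i=1$ both sides vanish, reflecting that $|m_i+2\rangle$ would leave the spin-2 multiplet.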
\\
Figure~\ref{fig:AFMspec2D} shows the energy spectrum (\ref{eq:energies}) for a chain consisting of four Fe atoms (spin $s=2$) with antiferromagnetic coupling.
The ground state of the chain consists of the two degenerate N\'eel states $|2,-2,2,-2\rangle$ and $|-2,2,-2,2\rangle$ with eigenenergy $16 D - 12 J$.
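The Ising energies~(\ref{eq:energies}) and the quoted ground-state energy are straightforward to verify numerically; a short sketch (parameter values as in Fig.~\ref{fig:AFMspec2D}; the helper name is ours):

```python
D, J = 1.3, -1.7  # meV, values used in Fig. 1

def energy(ms, E_Z=0.0):
    """Unperturbed eigenenergy E_{m_1,...,m_N} of Eq. (3), Ising coupling."""
    return (D * sum(m * m for m in ms)
            + J * sum(a * b for a, b in zip(ms, ms[1:]))
            - E_Z * sum(ms))

neel_a, neel_b = (2, -2, 2, -2), (-2, 2, -2, 2)
assert abs(energy(neel_a) - (16 * D - 12 * J)) < 1e-12   # = 41.2 meV
# the two Neel states stay degenerate even at finite field (total S_z = 0)
assert energy(neel_a, E_Z=0.116) == energy(neel_b, E_Z=0.116)
```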
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{ES}
\caption{The energy spectrum of an antiferromagnetic chain consisting of four Fe atoms ($s=2$). Each cross represents an eigenstate. Parameters used are
$D=1.3$\,meV, $J=-1.7$\,meV, and $B=1$\,T.}
\label{fig:AFMspec2D}
\end{figure}
\subsection{Current}
\label{sec-current}
A powerful technique to probe the spin dynamics of single magnetic atoms or small atomic chains deposited on a surface (typically a thin insulating layer on top of a metallic surface) is
inelastic tunneling spectroscopy (IETS). In IETS, the spin of an electron tunneling from the tip of an STM interacts via exchange with the spin of an atom. When the energy provided
by the bias voltage matches the energy of an atomic spin transition, the latter can occur and a new conduction channel opens~\cite{jakl66}.
For a chain consisting of $N$ magnetic atoms with spin $s=2$ (such as Fe or Mn) and the STM tip located above atom $j$ the inelastic current $I(V)$ in an IETS experiment is given by
\begin{eqnarray}
I(V) & = & G_S\, \prod_{i=1}^N \sum_{\stackrel{m_i,m_i^{\prime}=-2}{\alpha=x,y,z;s=\pm}}^{2} \! \! \! P_M\, |\langle m_1,..,m_N | S_{\alpha}^{(j)} | m_1^{\prime},..,m_N^{\prime} \rangle |^2\,
\nonumber \\
& & \ \ \times\, F_{M,M^{\prime},s}(V)
\label{eq:I_V}
\end{eqnarray}
with
\begin{equation}
F_{M,M^{\prime},s}(V) \equiv \frac{eV - s \Delta_{M^{\prime},\!M}}{ 1 - e^{- s\beta (eV - s\Delta_{M^{\prime},\!M}
)} }.
\label{eq:FactorF}
\end{equation}
Here $M$ denotes the state $M\equiv |m_1,m_2,...,m_N\rangle$, $\Delta_{M^{\prime},\!M} \equiv E_{M^{\prime}} - E_{M}$ and $E_M$ is given
by Eq.~(\ref{eq:energies}). $P_M$ is the nonequilibrium occupation of eigenstate $|m_1,..,m_N\rangle$ (see also the appendix), $\beta \equiv (k_B T)^{-1}$
and $S_{\alpha}^{(j)}$ with $\alpha=x, y, z$
is the local spin operator acting on atom $j$. The conductance scale $G_S \sim \frac{2e^2}{h} \rho_T \rho_S T_S^2$ with
$\rho_T, \rho_S$ the density of states at the Fermi energy of the STM tip and surface electrodes
and $T_S^2$ the tunneling probability between the local atomic spin and the transport electrons~\cite{delg10}. Eq.~(\ref{eq:I_V})
has been derived (for a single atomic spin) starting from a microscopic tunneling Hamiltonian that describes the exchange interaction between the spin of the tunneling
electron and the atomic spin assuming short-range exchange interaction~\cite{fern09,alternative}. The STM tip then only couples to the atom at site $j$ and the matrix element
$|\langle m_1,...,m_N | S_{\alpha}^{(j)} | m_1^{\prime},...m_N^{\prime} \rangle |^2$ describes the exchange spin interaction between the spin of the tunneling electron and this atomic spin~\cite{generalization}.
The function $F_{M,M^{\prime},s}(V)$ on the right-hand side of Eq.~(\ref{eq:I_V}) is the temperature-dependent activation factor for opening a new
conduction channel: At energies where the
applied bias voltage $eV$ matches the energy that is required for an atomic spin transition $\Delta_{M^{\prime},\!M}$ a step-like increase in the differential conductance $dI/dV$ occurs.
At these same voltages the second derivative of the current $d^2I/dV^2$ exhibits a peak
(of approximately Gaussian shape). The area under these peaks corresponds to the relative transition intensity and is equal to the corresponding step height of the differential conductance~\cite{yan15_2}.
Analyzing $d^2 I/dV^2$ data, commonly called IETS spectra, thus probes the transition probability between atomic spin levels~\cite{spin14,yan15}.
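The voltage and temperature dependence of each conduction channel is carried entirely by the activation factor~(\ref{eq:FactorF}); a minimal Python sketch of its behaviour (energies in meV, names are ours):

```python
import math

k_B = 0.08617  # Boltzmann constant in meV/K

def activation(eV, delta, s=+1, T=1.0):
    """F_{M,M',s}(V) of Eq. (8); eV and delta in meV, T in kelvin."""
    beta = 1.0 / (k_B * T)
    x = eV - s * delta
    a = s * beta * x
    if abs(a) < 1e-12:          # l'Hopital limit of x / (1 - e^{-a})
        return 1.0 / (s * beta)
    return x / (1.0 - math.exp(-a))

# a channel with threshold delta is frozen out below eV = delta and
# contributes linearly (slope 1) above it, thermally smeared over ~k_B T
delta = 7.4
assert activation(delta - 1.0, delta) < 1e-3
assert abs(activation(delta + 1.0, delta) - 1.0) < 1e-3
```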
We now evaluate the inelastic current (\ref{eq:I_V}) for a nonmagnetic STM tip by calculating the spin exchange matrix element and the
occupation probabilities $P_M$ for the eigenstates in Eq.~(\ref{eigenstate-gestoord}). The populations $P_M$ are obtained by solving the master equation~\cite{delg10}
\begin{equation}
dP_M/dt = \sum_{M^{\prime}} P_{M^{\prime}} W_{M^{\prime},M} - P_M \sum_{M^{\prime}} W_{M, M^{\prime}},
\label{eq:master}
\end{equation}
with $W_{M,M^{\prime}}$ the transition rate from atomic spin state $M$ to $M^{\prime}$
(see for the calculation Sec.~\ref{subsec-transitionrates} below). For small tunneling current, i.e. small tip (electrode)-atom coupling, the
atomic chain is approximately in equilibrium and $P_M$ can be approximated by the equilibrium population~\cite{delg10}, which is given by
the stationary solution of the master equation (\ref{eq:master}) (see Eq.~(\ref{eq:Psol}) in the appendix). Substitution into Eq.~(\ref{eq:I_V}) yields
the current $I(V)$:
\begin{equation}
I(V) = G_S\, ( I^{(0)} (V) + I^{(1)}(V) ).
\label{eq:current}
\end{equation}
Here
\begin{eqnarray}
I^{(0)}(V) & = & \prod_{i=1}^N \sum_{\stackrel{m_i=-2}{s=\pm}}^2 P_M\, \left\{ (m_j)^2\, F_{0,0,s}(V) +
\frac{C_{m_j}}{2}\, F_{1,0,s}(V) \right. \nonumber \\
& & \left. +\, \frac{C_{m_j-1}}{2}\, F_{-1,0,s}(V) \right\}
\label{eq:current0}
\\
I^{(1)}(V) & = & \frac{1}{8}\, E^2\, \prod_{i=1}^N \sum_{\stackrel{m_i=-2}{s=\pm}}^2 P_M\, \left\{
8\, (A_{m_j}^2 + A_{m_j-2}^2)\, F_{0,0,s}(V) \right. \nonumber \\
& + &
C_{m_j}\, B_{m_j,2}^2\, F_{1,0,s}(V) +
C_{m_j-1}\, B_{m_j,0}^2\, F_{-1,0,s}(V) \nonumber \\
& + & A_{m_j-2}^2 \left( C_{m_j-3}\, F_{-3,-2,s}(V) +
C_{m_j-2}\, F_{-1,-2,s}(V) \right) \nonumber \\
& + & \left. A_{m_j}^2 \left(
C_{m_j+1}\, F_{1,2,s}(V) +
C_{m_j+2}\, F_{3,2,s}(V) \right)
\right\},
\label{eq:currentE}
\end{eqnarray}
with
\begin{eqnarray}
A_{m_j+n}^2 & \equiv & A_{m_{j-1},m_j+n,m_{j+1}}^2
\label{eq:Amj2} \\
B_{m_j,n}^2 & \equiv & A_{m_{j-2},m_{j-1},m_j}^2 + A_{m_{j-2},m_{j-1},m_j-1+n}^2 + \nonumber \\
& & A_{m_{j-2},m_{j-1}-2,m_j}^2 + A_{m_{j-2},m_{j-1}-2,m_j-1+n}^2 + \nonumber \\
& & A_{m_{j-1},m_j-1+n,m_{j+1}}^2 + A_{m_{j-1},m_j-3+n,m_{j+1}}^2 + \nonumber \\
& & A_{m_j,m_{j+1},m_{j+2}}^2 + A_{m_j-1+n,m_{j+1},m_{j+2}}^2 + \nonumber \\
& & A_{m_j,m_{j+1}-2,m_{j+2}}^2 + A_{m_j-1+n,m_{j+1}-2,m_{j+2}}^2 \\
C_{m_j} & \equiv & 6 - m_j (m_j + 1)
\label{eq:Cmj} \\
F_{n_1,n_2,s} & \equiv & \frac{eV - s \Delta_{n_1,n_2}}{ 1 - e^{- s\beta (eV - s\Delta_{n_1,n_2})} }
\label{eq:Fn1n2} \\
\Delta_{n_1,n_2} & \equiv & E_{m_1,..,m_j+n_1,..,m_N} - E_{m_1,..,m_j+n_2,..m_N} \nonumber \\
& = & (n_1 - n_2)\, \left( (2 m_j + n_1 + n_2)D\ + \right. \nonumber \\
& & \left. (m_{j-1} + m_{j+1})J - E_Z \right).
\end{eqnarray}
$A_{m_{j-1},m_j,m_{j+1}}$ is given by Eq.~(\ref{eq:Ami}) and $P_{M}$ given by Eq.~(\ref{eq:Psol}).
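The stationary solution of the master equation~(\ref{eq:master}) that underlies $P_M$ can be illustrated on a toy system; a sketch with three levels and made-up rates (not the physical rates of Sec.~II~C):

```python
# Toy stationary solution of the master equation (12) for three levels:
# dP_M/dt = sum_M' P_M' W_{M',M} - P_M sum_M' W_{M,M'}
W = {(0, 1): 0.2, (1, 0): 1.0,    # made-up rates W[(M, M')]
     (1, 2): 0.1, (2, 1): 0.8,
     (0, 2): 0.05, (2, 0): 0.4}

def dPdt(P):
    return [sum(P[a] * W.get((a, b), 0.0) for a in range(3))
            - P[b] * sum(W.get((b, a), 0.0) for a in range(3))
            for b in range(3)]

P = [1.0, 0.0, 0.0]               # start in level 0
dt = 2e-3
for _ in range(100_000):          # crude Euler integration to t = 200
    P = [p + dt * d for p, d in zip(P, dPdt(P))]

assert abs(sum(P) - 1.0) < 1e-9               # probability conserved
assert max(abs(d) for d in dPdt(P)) < 1e-9    # stationary state reached
assert P[0] > P[1] > P[2]                     # downward rates dominate
```

The same linear-algebra problem, with the rates of Sec.~II~C, yields the populations $P_M$ of Eq.~(\ref{eq:Psol}).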
\subsection{Transition Rates}
\label{subsec-transitionrates}
In this section we derive expressions for the transition rates $W_{M,M^{\prime}}$ between eigenstates
$M$ and $M^{\prime}$ of the atomic spin chain up to first order in the transverse anisotropy energy $E$.
These rates are also used to calculate the equilibrium occupation $P_M$ of the energy levels, see the appendix.
When an electron tunnels from the STM tip to the surface (or vice versa, the case considered below) and interacts with the atomic spin chain three types of spin transitions can
occur~\cite{delg10,fourthterm}, denoted by the rates $W_{M,M}^{S\to T}$, $W_{M,M^{\prime}}^{S\to S}$ and $W_{M,M^{\prime}}^{S\to T}$: \\
1. {\it Elastic tunneling} - $W_{M,M}^{S\to T}$ denotes the rate for an electron tunneling from surface (S) to tip (T) without interacting with the atomic spin, i.e. without inducing a spin transition.
This rate contributes to the elastic tunneling current. \\
2. {\it Substrate-induced relaxation} - $W_{M,M^{\prime}}^{S\to S}$ corresponds to the simultaneous creation of an electron-hole pair in the surface electrode and a flip of the atomic spin from state $M$ to state $M^{\prime}$.
This rate thus does not contribute to the current but does contribute to the equilibrium population $P_M$ at voltages that are sufficiently high for the atomic spin chain to be in an excited state.
At low bias voltages $(W_{M,M^{\prime}}^{S\to S})^{-1}$ is a measure of $T_1$, the atomic
spin relaxation time~\cite{delg10}. \\
3. {\it Spin-flip assisted inelastic tunneling} - $W_{M,M'}^{S \rightarrow T}$ describes the transfer of an electron from surface to tip combined with a transition of the spin chain from spin state $M$ to state $M^{\prime}$.
This process thus both contributes to the atomic spin dynamics and to the inelastic tunneling current.
For an unpolarized STM tip the three rates can be calculated to lowest order in the electrode-chain coupling using Fermi's golden rule. This results in:
\begin{eqnarray}
W_{M, M}^{{\rm S}\, \to \, {\rm T}}(V)
& = & \frac{4\pi}{\hbar} W_1 \left| \sum_{\alpha=0}
\langle M | S_{\alpha}^{(j)} | M \rangle \right|^2 \, \frac{eV}{1 - e^{-\beta eV}} \nonumber \\
& = & \frac{4\pi}{\hbar} W_1 \frac{eV}{1 - e^{-\beta e V}}
\label{eq:WW1}\\
W_{M, M^{\prime}}^{{\rm S}\, \to \, {\rm S}}
& = & \frac{4\pi}{\hbar} W_2 \left| \sum_{\alpha=x,y}
\langle M | S_{\alpha}^{(j)} | M^{\prime} \rangle \right|^2 \, \frac{\Delta_{M,M^{\prime}}}{1 - e^{-\beta \Delta_{M,M^{\prime}}}}
\label{eq:WW2}\\
W_{M, M^{\prime}}^{{\rm S}\, \to \, {\rm T}}(V)
& = & \frac{4\pi}{\hbar} W_3 \left| \sum_{\alpha=x,y,z}
\langle M | S_{\alpha}^{(j)} | M^{\prime} \rangle \right|^2 \, \times \nonumber \\
& & \hspace*{1.cm} \frac{eV + \Delta_{M,M^{\prime}}}{1 - e^{-\beta (eV + \Delta_{M,M^{\prime}})}},
\label{eq:WW3}
\end{eqnarray}
with $W_1 \equiv \rho_S \rho_T T_0^2$, $W_2 \equiv \rho_S^2 T_J^2$ and $W_3 \equiv \rho_S \rho_T T_J^2$.
$T_0$ and $T_J$ correspond to the direct (spin-independent) tunnel coupling and the tunneling-induced exchange coupling, respectively~\cite{delg10,zhan13}.
Calculating the total spin transition rates $W_{M}^{{\rm S}\, \to \, {\rm S}}$ $\equiv$ $\sum_{M^{\prime}}\, W_{M, M^{\prime}}^{{\rm S}\, \to \, {\rm S}}$ and
$W_{M}^{{\rm S}\, \to \, {\rm T}}(V)$ $\equiv$ $\sum_{M^{\prime}}\, W_{M, M^{\prime}}^{{\rm S}\, \to \, {\rm T}}$ for state $M$ we obtain,
up to first order in $E$ and for the STM tip coupled to atom $j$,
\begin{eqnarray}
W_{M}^{{\rm S}\, \to \, {\rm S}} & = &
\frac{2\pi}{\hbar} W_2\, \left\{ C_{m_j} F_{1,0,+}(0)\, + C_{m_j-1} F_{-1,0,+}(0) \right. \nonumber \\
& + & \frac{E^2}{4}\, \left(
C_{m_j} B_{m_j,2}^2\, F_{1,0,+}(0) +
C_{m_j-1} B_{m_j,0}^2\, F_{-1,0,+}(0) \right. \nonumber \\
& + &A_{m_j-2}^2\, (C_{m_j-3} F_{-3,-2,+}(0) +
C_{m_j-2} F_{-1,-2,+}(0)) \nonumber \\
& + & \left. \left.
A_{m_j}^2\, ( C_{m_j+1} F_{1,2,+}(0) +
C_{m_j+2} F_{3,2,+}(0) ) \right) \right\}
\label{eq:matrixelement6}
\end{eqnarray}
\begin{eqnarray}
W_{M}^{{\rm S}\, \to \, {\rm T}} & = &
\frac{2\pi}{\hbar} W_3\, \left\{ 2 m_j^2 F_{0,0,+}(V) + \right. \nonumber \\ & &
C_{m_j} F_{1,0,+}(V)\, +\, C_{m_j-1} F_{-1,0,+}(V) \nonumber \\
& + & \frac{E^2}{4}\, \left(
8\, (A_{m_j}^2 + A_{m_j-2}^2)\, F_{0,0,+}(V)
\right. \nonumber \\
& + & C_{m_j} B_{m_j,2}^2\, F_{1,0,+}(V) +
C_{m_j-1} B_{m_j,0}^2\, F_{-1,0,+}(V) \nonumber \\
& + & A_{m_j-2}^2\, ( C_{m_j-3} F_{-3,-2,+}(V) +
C_{m_j-2} F_{-1,-2,+}(V)) \nonumber \\
& + & \left. \left.
A_{m_j}^2\, ( C_{m_j+1} F_{1,2,+}(V) +
C_{m_j+2} F_{3,2,+}(V) ) \right)
\right\}.
\label{eq:matrixelement5}
\end{eqnarray}
Here $A_{m_j+n}^2$, $B_{m_j,n}^2$, $C_{m_j}$ and $F_{n_1,n_2,s}$
are given by Eqns.~(\ref{eq:Amj2})-(\ref{eq:Fn1n2}).
\section{$I(V)$, $dI/dV$ and $d^2I/dV^2$}
We now calculate the inelastic tunneling current [Eq.~(\ref{eq:current})], the corresponding differential conductance and the IETS spectra for
the ground state of a chain consisting of $N$ atoms with antiferromagnetic coupling and analyze
the effect of the transverse anisotropy energy $E$. For the N\'eel state $|-2,2,\ldots,-2,2\rangle$ and the STM tip located above the first atom we
obtain:
\begin{equation}
I_{\text{N\'eel}}(V) = G_S\, \left( I^{(0)}_{\text{N\'eel}}(V) + I^{(1)}_{\text{N\'eel}}(V) \right)
\label{eq:currentNeel}
\end{equation}
with
\begin{eqnarray}
I^{(0)}_{\text{N\'eel}}(V) & = & 2\, \sum_{s=\pm} \left( 2\, F_{0,0,s}(V) + F_{1,0,s}(V) \right)
\label{eq:current0Neel} \\
I^{(1)}_{\text{N\'eel}}(V) & = & \frac{1}{4}\, E^2\, \sum_{s=\pm} \left\{ 4 A_{0,-2,2}^2 F_{0,0,s}(V) + 2 B_{-2,2}^2 F_{1,0,s}(V) \right. \nonumber \\
& & \left. +\, 3\, A_{0,-2,2}^2 (F_{1,2,s}(V) + F_{3,2,s}(V)) \right\}.
\label{eq:currentENeel}
\end{eqnarray}
The tunneling currents (\ref{eq:current0Neel}) and (\ref{eq:currentENeel}) depend on three energy gaps, corresponding to transitions between atomic spin levels with different values of $m_1$:
\begin{eqnarray}
\Delta_1 & \equiv & \Delta_{1,0} = -3D + 2J - E_Z \ \ (m_1\! =\! -1 \rightarrow m_1\! =\! -2) \nonumber \\
\Delta_2 & \equiv & \Delta_{1,2} = D - 2J + E_Z \ \ (m_1\! =\! -1 \rightarrow m_1\! =\! 0)
\label{eq:Neel2} \\
\Delta_3 & \equiv & \Delta_{3,2} = D + 2J - E_Z \ \ (m_1\! =\! 1 \rightarrow m_1\! =\! 0). \nonumber
\end{eqnarray}
In the derivation of Eq.~(\ref{eq:currentNeel}) we have taken $P_M=1$ for $|M\rangle = |-2,2,..-2,2\rangle$ and zero otherwise, since at low temperatures and voltages ($k_BT, eV \ll |\Delta_1|$)
the equilibrium population of the excited states is negligible (see also Fig.~\ref{fig:PM} and discussion thereof
in the text)~\cite{validityPM}. Inspection of Eq.~(\ref{eq:currentNeel}) using the requirement $I^{(1)}(V) \ll I^{(0)}(V)$ shows that our perturbative approach is valid for values
of transverse anisotropy $E^2 \ll J^2, (D-J)^2$.
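With the parameter values of Fig.~\ref{fig:IVAFM} and $B=1$\,T as in Fig.~\ref{fig:AFMspec2D}, the three gaps~(\ref{eq:Neel2}) can be evaluated directly (we assume $g^*=2$ for the Zeeman energy, since $g^*$ is not quoted in the captions):

```python
mu_B = 0.05788                     # Bohr magneton in meV/T
D, J, B, g = 1.3, -1.7, 1.0, 2.0   # meV, meV, T; g* = 2 is our assumption
E_Z = g * mu_B * B

delta_1 = -3 * D + 2 * J - E_Z     # Eq. (20)
delta_2 = D - 2 * J + E_Z
delta_3 = D + 2 * J - E_Z

# |delta_1| ~ 7.4 meV, |delta_2| ~ 4.8 meV, |delta_3| ~ 2.2 meV,
# consistent with the step positions quoted in the text (~7.5, 4.9, 2.3)
assert abs(abs(delta_1) - 7.416) < 0.01
assert abs(abs(delta_2) - 4.816) < 0.01
assert abs(abs(delta_3) - 2.216) < 0.01
```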
\begin{figure}[h]
\includegraphics[width=0.5\textwidth]{FIGURETOTDIFF0}
\caption{Inelastic tunneling current $I$ [Eq.~(\ref{eq:currentNeel})] normalized to the zero-bias conductance $G_S$ for an antiferromagnetic chain consisting of four atoms in the ground state $|-2,2,-2,2\rangle$ with the
STM tip coupled to the first atom.
Parameters used are $D=1.3$ meV, $E=0.3$ meV, $J=-1.7$ meV, and $T=1$ K.}
\label{fig:IVAFM}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{FIGURETOT3}
\caption{(color online) $dI/dV$ normalized to the zero-bias conductance $G_S$ for the chain in Fig.~\ref{fig:IVAFM}.
Curves are shifted for clarity.}
\label{fig:TOTAFM}
\end{figure}
\begin{figure*}[ht]
\centering
\begin{minipage}{0.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FIGURE1groot}
\label{fig:test1}
\end{minipage}%
\begin{minipage}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FIGURE2groot}
\label{fig:test2}
\end{minipage}
\begin{minipage}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FIGURE3groot}
\label{fig:test3}
\end{minipage}
\begin{minipage}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FIGURE4groot}
\label{fig:test4}
\end{minipage}
\begin{minipage}{.19\textwidth}
\centering
\includegraphics[width=1.\linewidth]{FIGURE5groot}
\label{fig:test5}
\end{minipage}
\caption{$dI/dV$ (normalized to the zero-bias conductance) of the five separate states in the sums of Eqns.~(\ref{eq:current0}) and (\ref{eq:currentE}) for an antiferromagnetic chain in the
ground state $|-2,2,-2,2\rangle$ with the STM tip coupled to the first atom. Parameters used are the same as in Fig.~\ref{fig:IVAFM}.}
\label{fig:DIDVAFM}
\end{figure*}
Fig.~\ref{fig:IVAFM} shows the current $I(V)$ for typical experimental values~\cite{khat13,spin14,yan15} of the transverse anisotropy strength $E$.
As expected, the current increases linearly with $V$. It shows a kink (change of slope) at $eV = \pm |\Delta_1| \approx \pm 7.5$\, meV, which corresponds to the energy gap
between the ground state $|-2,2,..-2,2\rangle$ and the first excited state $|-1,2,..-2,2\rangle$ of the atomic spin chain. The increase in slope is given by the coefficients of the corresponding
activation energy terms $F_{1,0,s}(V)$ in Eqns.~(\ref{eq:current0Neel}) and (\ref{eq:currentENeel}).
The finite transverse anisotropy energy $E$ introduces additional kinks in the voltage region $-|\Delta_1| < eV < |\Delta_1|$. This can be seen more
clearly in Fig.~\ref{fig:TOTAFM}, which shows the differential conductance $dI/dV$ for the same chain for several values of the transverse anisotropy energy $E$. The
large stepwise increase in $dI/dV$ at $eV = \pm |\Delta_1| \approx \pm 7.5$ meV in the figure corresponds to the kinks at these energies in Fig.~\ref{fig:IVAFM}. In addition,
however, also steps in $dI/dV$ occur at voltages $eV = \pm |\Delta_2|$ and $eV = \pm |\Delta_3|$. These correspond to transitions between higher-lying excited states:
The step in $dI/dV$ at $eV = |\Delta_3| \approx 2.3$\, meV corresponds to the excitation
from spin state $|0,2,..,-2,2\rangle$ to state $|1,2,..,-2,2\rangle$. Then,
around $eV \sim 4.9$\, meV, a second step occurs, corresponding to the excitation from state $|-1,2,..,-2,2\rangle$ to $|0,2,..,-2,2\rangle$. At this energy the first excited state
$|-1,2,..,-2,2\rangle$ has become somewhat populated allowing for this transition to occur~\cite{step}. At slightly higher voltage, however, a steplike decrease occurs, corresponding
to decay from state $|0,2,..-2,2\rangle$ to state $|-1,2,..-2,2\rangle$. Here the spin chain thus undergoes a transition from a higher-lying to a lower-lying state
and an electron tunnels from drain (the STM tip) to source (the surface)~\cite{decays}, thereby lowering the rate of increase of $I(V)$.
Fig.~\ref{fig:DIDVAFM} provides a more detailed illustration of the competition between these two processes. This figure shows each of the five terms $m_1 \in [-2,\ldots,2]$ that
contribute to $dI/dV$ in Fig.~\ref{fig:TOTAFM} separately (the current (\ref{eq:currentNeel}) is the sum of these five terms weighted by the equilibrium population $P_M$ for each state).
When inspecting the figure, we see that in the panels corresponding to $m_1=-1$ and $m_1=0$
a sharp increase of $dI/dV$ occurs at, resp., energies $eV=|\Delta_2| \sim 4.9$ meV and $eV=|\Delta_3| \sim 2.3$ meV. Here the chain undergoes a transition to the next higher-lying state
(from $m_1=-1$ to $m_1=0$ and from $m_1=0$ to $m_1=1$, resp.) In the same two panels the differential conductance subsequently decreases at voltages $eV = |\Delta_1| \sim 7.5$ meV
and $eV = |\Delta_2| \sim 4.9$ meV,
when the spin chain decays to the next lower-lying state. Similar analysis applies for the steps in the other panels.
\\
Note that the positions at which steps in $dI/dV$ occur do not depend on the strength of the transverse anisotropy, since the energy gaps $\Delta_1$--$\Delta_3$ in Eq.~(\ref{eq:Neel2}) are
independent of $E$ (up to first order in $E$). Furthermore, the height of the steps at $eV=\Delta_{n_1,n_2}$ scales with $E^2$ and is
given by the (sum of the) prefactors of the corresponding terms $F_{n_1,n_2,s}$ in Eq.~(\ref{eq:currentNeel}); for example, the step height at $eV= \pm |\Delta_3|$ is given by $(3/4) E^2 A_{0,-2,2}^2 \approx E^2/(D-J)^2$ (for small
Zeeman energy). This step height is a direct measure for the spin excitation transition intensity. Finally, in the limit $|eV| \gg |\Delta_1|$, the differential conductance saturates at
\begin{eqnarray}
\frac{dI_{\text{N\'eel}}}{dV} & \stackrel{|eV| \gg |\Delta_1|}{\rightarrow} & 6 e G_S \left(1 + \frac{E^2}{12}\, (5 A_{0,-2,2}^2 + 2 B_{-2,2}^2) \right) \nonumber \\
& \approx & 6 e G_S \left(1 + \frac{E^2}{8}\, \left( \frac{5}{(D-J)^2} + \frac{3}{J^2} \right) \right).
\label{eq:limitingvalue}
\end{eqnarray}
The second line in Eq.~(\ref{eq:limitingvalue}) is valid when the Zeeman energy is small, $|E_Z| \ll |J|, |D-J|$.
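The saturation value~(\ref{eq:limitingvalue}) can be recovered numerically from the zeroth-order current~(\ref{eq:current0Neel}); a sketch in units where $e = G_S = 1$ (the guarded exponentials are ours, to avoid overflow far above threshold):

```python
import math

k_B, T = 0.08617, 1.0       # meV/K, K
beta = 1.0 / (k_B * T)
delta_1 = -7.4              # meV, the gap entering F_{1,0,s}

def F(eV, delta, s):
    """Activation factor F_{n1,n2,s} with threshold delta (meV)."""
    x = eV - s * delta
    a = s * beta * x
    if a > 50:               # channel fully open
        return x
    if a < -50:              # channel frozen out
        return 0.0
    if abs(a) < 1e-12:       # thermal limit
        return 1.0 / (s * beta)
    return x / (1.0 - math.exp(-a))

def I0(eV):
    """Zeroth-order Neel current, Eq. (17), in units e = G_S = 1."""
    return 2 * sum(2 * F(eV, 0.0, s) + F(eV, delta_1, s) for s in (+1, -1))

# numerical slope far above all thresholds -> 6, cf. Eq. (22) with E = 0
slope = (I0(50.05) - I0(49.95)) / 0.1
assert abs(slope - 6.0) < 1e-9
```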
Fig.~\ref{fig:D2IDV2AFM} shows the IETS spectra $d^2I/dV^2$ corresponding to the differential conductance $dI/dV$ in Fig.~\ref{fig:TOTAFM}. The additional peaks
and valleys induced
by the finite transverse anisotropy strength in the voltage region between -7.5 meV and 7.5 meV can clearly be seen.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{FIGURETOTDIFF2}
\caption{$d^2I/dV^2$ spectra corresponding to the differential conductance $dI/dV$ shown in Fig.~\ref{fig:TOTAFM}
for $E=0.3$\, meV.}
\label{fig:D2IDV2AFM}
\end{figure}
\section{Transition Rates}
\noindent In this section we analyze the transition rates $W^{S\to S}$ and $W^{S\to T}$ [Eqns.~(\ref{eq:matrixelement6}) and (\ref{eq:matrixelement5})] for the ground state $|-2,2,..,-2,2\rangle$
of an antiferromagnetic $N$-atomic spin chain. Evaluating the matrix elements in Eqns.~(\ref{eq:matrixelement6}) and (\ref{eq:matrixelement5}) yields, for the STM tip coupled to the first atom,
\begin{eqnarray}
W_{\text{N\'eel}}^{{\rm S}\, \to \, {\rm S}} & = &
\frac{8\pi}{\hbar} W_2\, \left\{ F_{1,0,+}(0) + \frac{E^2}{8} \left( 2 B_{-2,2}^2\, F_{1,0,+}(0) \right. \right. \nonumber \\
& & \left. \left. +\, 3 A_{0,-2,2}^2
(F_{1,2,+}(0) + F_{3,2,+}(0)) \right) \right\}
\label{eq:Neel4}
\end{eqnarray}
and
\begin{eqnarray}
W_{\text{N\'eel}}^{{\rm S}\, \to \, {\rm T}}(V) & = &
\frac{8\pi}{\hbar} W_3\, \left\{ 2\, F_{0,0,+}(V)\ + F_{1,0,+}(V)\, + \right. \nonumber \\
& & \frac{E^2}{8} \left( 4 A_{0,-2,2}^2 F_{0,0,+}(V) + 2 B_{-2,2}^2\, F_{1,0,+}(V) \right. \nonumber \\
& & \left. \left. +\ 3 A_{0,-2,2}^2
(F_{1,2,+}(V) + F_{3,2,+}(V)) \right) \right\},
\label{eq:Neel3}
\end{eqnarray}
with $\Delta_1$, $\Delta_2$ and $\Delta_3$ given by Eqns.~(\ref{eq:Neel2}).
\begin{figure}[ht]
\centering
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=1\textwidth]{RATES1}
\caption{(Color online) The spin transition rate $W\equiv W_{\text{N\'eel}}^{S \rightarrow T}$ [Eq.~(\ref{eq:Neel3})] for an antiferromagnetic atomic chain
in the ground state with the STM tip coupled to the first atom and $E=0.3$ meV or $E=0.6$ meV (blue dot-dashed line and red solid line, respectively) and with the tip coupled to the second atom
and $E=0.3$ meV or $E=0.6$ meV (blue dashed line and red dotted line, respectively). Other parameters used are $D=1.3$\, meV, $J=-1.7$\, meV, $B=1$\, T and $W_3=1.1 \times 10^{-5}$ (the latter value is taken from the experiment in Ref.~\cite{spin14}).}
\label{fig:Rates}
\end{minipage}
\hfill
\centering
\begin{minipage}[b]{0.47\textwidth}
\includegraphics[width=1\textwidth]{RATES2}
\caption{(Color online) First derivative of the transition rates $W_{\text{N\'eel}}^{S \rightarrow T}$ in Fig.~\ref{fig:Rates}.}
\label{fig:RatesDWDV}
\end{minipage}
\end{figure}
Fig.~\ref{fig:Rates} shows $W^{S\to T}_{\text{N\'eel}}$ for the STM tip coupled to either the first or the second atom along the chain. As expected,
when the tip interacts with the first atom, $W^{S\to T}_{\text{N\'eel}}$ exhibits a clearly visible kink (change of slope) at the same voltage $eV=|\Delta_1| \sim 7.5$ meV
as the inelastic current in Fig.~\ref{fig:IVAFM}, which corresponds to the energy gap between the ground state and the first excited state of the chain
(for the tip coupled to the second atom this gap is larger, given by $3D-4J-E_Z \approx 10.5$\, meV).
In addition, the finite transverse anisotropy energy also here induces additional kinks at $eV = |\Delta_2|$
and $eV = |\Delta_3|$. The positions of these kinks can be seen more clearly in the graph of the derivative $dW^{S\to T}_{\text{N\'eel}}/dV$ in Fig.~\ref{fig:RatesDWDV}.
The onset of $W^{S\to T}_{\text{N\'eel}}$ at $V=0$\,meV is due to thermally activated elastic tunneling.
\\
Fig.~\ref{fig:Rates} also shows that finite transverse anisotropy energy increases the spin transition rates $W^{S\to T}_{\text{N\'eel}}$ for any value of the voltage $V$.
From Eq.~(\ref{eq:Neel3}), for the tip coupled to the first atom, we find that this relative increase scales as
$E^2/(D-J)^2$ for energies $eV \ll |\Delta_1|$ and as $\frac{E^2}{8}\, \left( \frac{5}{(D-J)^2} + \frac{3}{J^2} \right)$ for energies $eV \gtrsim |\Delta_1|$.
For the voltage-independent relaxation rate $W^{S\to S}_{\text{N\'eel}}$ (not
shown in Fig.~\ref{fig:Rates}) we obtain from
Eq.~(\ref{eq:Neel4}) for $|E_Z| \ll |D-J|, |J|$
\begin{equation}
W^{S\to S}_{\text{N\'eel}} \approx \frac{8\pi}{\hbar} W_2 \left| \Delta_1 \right| \left( 1 + \frac{9}{16}\, \frac{E^2}{J^2} \right).
\nonumber
\end{equation}
Since $T_1 \sim 1/W^{S\to S}_{\text{N\'eel}}$ at low bias voltages, the presence of finite transverse anisotropy energy thus leads to a decrease of the spin relaxation time which scales as $E^2/J^2$.
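These scaling factors are straightforward to evaluate. The sketch below (ours, for illustration only) uses the parameter values quoted in the caption of Fig.~\ref{fig:Rates} ($E=0.6$\,meV, $D=1.3$\,meV, $J=-1.7$\,meV); the overall prefactors $W_2$, $W_3$ cancel in these relative quantities.

```python
# Relative effect of the transverse anisotropy E on the rates, using the
# parameter values quoted in the caption of Fig. "Rates" (all in meV).
E, D, J = 0.6, 1.3, -1.7

# Relative increase of W^{S->T} at low bias, eV << |Delta_1|:
low_bias = E**2 / (D - J) ** 2

# Relative increase for eV >~ |Delta_1|:
high_bias = (E**2 / 8.0) * (5.0 / (D - J) ** 2 + 3.0 / J**2)

# Relative decrease of the relaxation time T_1 ~ 1/W^{S->S}:
t1_shift = (9.0 / 16.0) * E**2 / J**2

print(low_bias, high_bias, t1_shift)  # roughly 0.040, 0.072, 0.070
```

For these parameters the transverse anisotropy thus modifies both rates at the few-percent level.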
\section{Summary and Conclusions}
We have presented a perturbative theory for the effect of single-spin transverse magnetic anisotropy
on tunneling-induced spin transitions in atomic chains with Ising exchange coupling. We quantitatively predict the dependence of the inelastic tunneling current $I$
and the transition rates between atomic spin levels on the transverse anisotropy energy $E$ and show that a finite value of $E$ leads to additional steps in the differential conductance
$dI/dV$ and to higher spin transition rates. For an antiferromagnetically coupled chain in the N\'eel ground state both the heights of the additional steps and the increase in spin
transition rates at low bias voltage scale as $E^2/(D-J)^2$, while the latter crosses over to $E^2/J^2$ scaling for higher voltages $eV\gtrsim |\Delta_1|$.
Our model is relevant for materials in which the easy-axis exchange interaction dominates over the transverse exchange interaction (justifying the use of the Ising Hamiltonian),
measurements at low current with a non-magnetic STM tip and for values of transverse anisotropy $E^2 \ll J^2, (D-J)^2$. The latter requirement is in agreement with typical values
of $E$, $D$ and $J$ measured in chains
of, for example, Fe or Mn atoms~\cite{loth12,hein04,hirj06,hirj07,otte08,spin14,khat13,yan15} and we therefore expect our results to be applicable for antiferromagnetically coupled
chains consisting of these and similar magnetic atoms.
Although this requires further investigation, our results suggest that tuning of transverse anisotropy energy may potentially be interesting
for storage of information in atomic chains. Interesting questions for future research in this direction are to study the effect of transverse anisotropy on
non-equilibrium spin dynamics in chains of magnetic atoms, on dynamic spin phenomena such as the formation of e.g. magnons, spinons and domain walls,
and on switching of N\'eel states in antiferromagnetically coupled chains.
We acknowledge valuable discussions with F. Delgado and A.F. Otte.
This work is part of the research programme of the Foundation for Fundamental Research on Matter (FOM),
which is part of the Netherlands Organisation for Scientific Research (NWO).
In a semi-Riemannian manifold the Ricci-Yamabe soliton is defined by
\begin{equation}\label{1.1}
\pounds_V g + (2\lambda-\beta r)g + 2\alpha S = 0,
\end{equation}
where $\pounds_V$ denotes the Lie derivative along the vector field $V$, $S$ is the Ricci tensor, $r$ is the scalar curvature and $\lambda, \alpha,\beta \in \mathbb{R}$. Ricci-Yamabe solitons are special solutions of the Ricci-Yamabe flow
\begin{equation}
\frac{\partial g}{\partial t} = -2\alpha S + \beta r g,
\end{equation}
which was introduced by Guler and Crasmareanu\cite{guler}. Equation (\ref{1.1}) is called almost Ricci-Yamabe soliton provided $\lambda$ is a smooth function.\\
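As an illustrative aside (ours, not part of the original analysis), on an Einstein background $g(t) = c(t)\,g_0$ with $\mathrm{Ric}(g_0) = k\,g_0$ in dimension $n$, the Ricci tensor is scale-invariant and $r = nk/c$, so the Ricci-Yamabe flow collapses to the linear ODE $\dot c = k(\beta n - 2\alpha)$. A minimal numerical sketch, with all parameter values chosen arbitrarily:

```python
# Toy sketch (illustrative assumption): run the Ricci-Yamabe flow
#   dg/dt = -2*alpha*S + beta*r*g
# on g(t) = c(t)*g0 with Ric(g0) = k*g0 in dimension n. Since Ric is
# scale-invariant, S = k*g0 and r = n*k/c, so c(t) is linear in t.
n, k = 3, 2.0                  # assumed dimension and Einstein constant
alpha, beta = 1.0, 0.5         # Ricci-Yamabe interpolation constants
c, dt, steps = 1.0, 1e-3, 500

for _ in range(steps):
    r = n * k / c              # scalar curvature of g = c*g0
    dc = -2.0 * alpha * k + beta * r * c   # g0-component of dg/dt
    c += dt * dc

expected = 1.0 + k * (beta * n - 2.0 * alpha) * dt * steps
print(c, expected)             # both close to 0.5
```

In particular, $\beta n = 2\alpha$ makes such an Einstein background a fixed point of the flow.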
In particular, for $\alpha = 1$ and $\beta =0$, (\ref{1.1}) reduces to
\begin{equation}
\pounds_V g + 2 S + 2\lambda g = 0,
\end{equation}
which is a Ricci soliton for $\lambda \in \mathbb{R}$. Thus almost Ricci-Yamabe solitons (Ricci-Yamabe solitons) are natural generalizations of almost Ricci solitons (Ricci solitons). Several generalizations of Ricci solitons have been studied, such as almost Ricci solitons (\cite{des1}, \cite{des2}, \cite{wan}, \cite{wan2}, \cite{wan1}, \cite{w7}), $\eta$-Ricci solitons (\cite{bla1}, \cite{bla2}, \cite{bla3}, \cite{de}, \cite{sar}, \cite{sar1}), $\ast$-Ricci solitons (\cite{dai}, \cite{ham}, \cite{kai}, \cite{ven}, \cite{wan3}) and many others.\\
Recently, Gomes et al.\cite{gom} extended the concept of almost Ricci soliton to h-almost Ricci soliton on a complete Riemannian manifold by
\begin{equation}\label{1.2}
\frac{h}{2}\pounds_V g + \lambda g + S = 0,
\end{equation}
where $h: M \rightarrow \mathbb{R}$ is a smooth function. In particular, a Ricci soliton is a 1-almost Ricci soliton with constant $\lambda$.\\
We now introduce a new type of soliton, named the $h$-almost Ricci-Yamabe soliton (briefly, h-ARYS), which extends the notion of almost Ricci-Yamabe solitons and is given by
\begin{equation}\label{1.3}
\frac{h}{2}\pounds_V g + \alpha S + (\lambda-\frac{\beta}{2}r) g = 0,
\end{equation}
where $h$ is a smooth function.\\
If $V$ is the gradient of a smooth function $f$ on the manifold, then the foregoing notion is called an $h$-almost gradient Ricci-Yamabe soliton (briefly, h-AGRYS) and (\ref{1.3}) takes the form
\begin{equation}\label{1.4}
h\nabla^2 f + (\lambda -\frac{\beta}{2}r)g + \alpha S = 0.
\end{equation}
An $h$-AGRYS is named $h$-gradient Ricci-Yamabe soliton if $\lambda$ is constant.\\
An h-ARYS (or h-AGRYS) turns into\\
(i) $h$-almost Ricci soliton (or $h$-almost gradient Ricci soliton) if $\beta = 0$ and $\alpha = 1$,\\
(ii) $h$-almost Yamabe soliton (or $h$-almost gradient Yamabe soliton) if $\beta = 1$ and $\alpha = 0$,\\
(iii) $h$-almost Einstein soliton (or $h$-almost gradient Einstein soliton) if $\beta = -1$ and $\alpha = 1$.\\
The $h$-ARYS (or $h$-AGRYS) is called proper if $\alpha \neq 0,1$.\\
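For quick bookkeeping, the special cases listed above can be encoded in a small helper (a hypothetical illustration; the function name and return strings are ours, not standard terminology):

```python
def classify_h_arys(alpha, beta):
    """Name the special case of an h-ARYS for given constants (alpha, beta),
    following the list above; 'proper' means alpha not in {0, 1}."""
    special = {
        (1, 0): "h-almost Ricci soliton",
        (0, 1): "h-almost Yamabe soliton",
        (1, -1): "h-almost Einstein soliton",
    }
    if (alpha, beta) in special:
        return special[(alpha, beta)]
    return "proper h-ARYS" if alpha not in (0, 1) else "h-ARYS"

print(classify_h_arys(1, 0))    # h-almost Ricci soliton
print(classify_h_arys(2, -3))   # proper h-ARYS
```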
Recently, in (\cite{sar2}, \cite{sar3}), the first author and Sarkar studied Ricci-Yamabe solitons in Kenmotsu 3-manifolds and generalized Sasakian space forms. Also, Singh and Khatri\cite{kha} studied Ricci-Yamabe solitons in perfect fluid spacetimes.\\\\
The present article is organized as follows: after the introduction, the required preliminaries are given in Section 2. In Section 3, we investigate $h$-ARYS and $h$-AGRYS in para-Kenmotsu manifolds. Next, we classify para-Sasakian manifolds admitting $h$-ARYS and $h$-AGRYS in Sections 4 and 5. We then investigate $h$-ARYS and $h$-AGRYS in para-cosymplectic manifolds in Sections 6 and 7. Finally, we construct two examples to illustrate our results.
\section{\textsf{Preliminaries}}
An almost paracontact structure on a manifold $M^{2n+1}$ consists of a (1,1)-tensor field $\phi$, a vector field $\zeta$ and a one-form $\eta$ obeying the subsequent conditions:
\begin{equation}\label{2.1}
\phi^2 = I - \eta\otimes \zeta, \hspace{.4cm} \eta(\zeta) = 1
\end{equation}
and the tensor field $\phi$ induces an almost paracomplex structure on each fibre of $\mathcal{D} = ker(\eta)$, that is, the $\pm 1$-eigendistributions, $\mathcal{D}^\pm = \mathcal{D}_\phi (\pm 1)$ of $\phi$ have equal dimension $n$.
The manifold $M$ with an almost paracontact structure is named an almost paracontact manifold. From the definition it can be established that $\phi \zeta = 0$, $\eta\circ \phi = 0$ and rank of $\phi$ is $2n$. If the Nijenhuis tensor vanishes identically, then this manifold is said to be normal. $M$ is named an almost paracontact metric manifold if there exists a semi-Riemannian metric $g$ such that
\begin{equation}\label{2.2}
g(\phi Z_1, \phi Z_2) = -g(Z_1,Z_2) + \eta(Z_1)\eta(Z_2)
\end{equation}
for all $Z_1,Z_2 \in \chi(M)$.\\
$(M,\phi,\zeta,\eta,g)$ is named a paracontact metric manifold if $d\eta(Z_1,Z_2) = g(Z_1,\phi Z_2) = \Phi(Z_1,Z_2)$, $\Phi$ being the fundamental 2-form of $M$.\\
An almost paracontact metric manifold $M^{2n+1}$, with a structure $(\phi,\zeta,\eta,g)$ is said to be an almost $\gamma$-paracosymplectic manifold, if
\begin{equation}\label{2.3}
d\eta = 0, d\Phi = 2\gamma \eta\wedge \Phi,
\end{equation}
where $\gamma$ is a constant or a function on $M$. Setting $\gamma = 1$ in (\ref{2.3}), we obtain almost para-Kenmotsu manifolds. A para-Kenmotsu manifold satisfies \cite{erk}
\begin{equation}\label{2.4}
R(Z_1,Z_2)\zeta = \eta(Z_1)\;Z_2 - \eta(Z_2)\;Z_1,
\end{equation}
\begin{equation}\label{2.5}
R(Z_1,\zeta)\;Z_2= g(Z_1,\;Z_2)\zeta -\eta(Z_2)\;Z_1,
\end{equation}
\begin{equation}\label{2.6}
R(\zeta,Z_1)Z_2 = -g(Z_1,Z_2)\zeta +\eta(Z_2)Z_1,
\end{equation}
\begin{equation}\label{2.7}
\eta(R(Z_1,Z_2)Z_3) = - g(Z_2,Z_3)\eta(Z_1)+g(Z_1,Z_3)\eta(Z_2) ,
\end{equation}
\begin{equation}\label{2.8}
(\nabla_{Z_1} \phi)Z_2 = g(\phi Z_1,Z_2)\zeta -\eta(Z_2)\phi Z_1,
\end{equation}
\begin{equation}\label{2.9}
\nabla_{Z_1} \zeta = Z_1 - \eta(Z_1)\zeta,
\end{equation}
\begin{equation}\label{2.10}
S(Z_1,\zeta) = -2n \eta(Z_1).
\end{equation}
\begin{lem}(\cite{erk})
In a para-Kenmotsu manifold $M^3$, we have
\begin{equation}\label{2.11}
\zeta r = -2(r+6).
\end{equation}
\end{lem}
Also, in a 3-dimensional para-Kenmotsu manifold $M^3$, we have
\begin{equation}\label{2.12}
QZ_1 = (\frac{r}{2}+1)Z_1 -(\frac{r}{2}+3)\eta(Z_1)\zeta,
\end{equation}
which implies
\begin{equation}\label{2.13}
S(Z_1, Z_2) =-(\frac{r}{2}+3)\eta(Z_1)\eta(Z_2)+ (\frac{r}{2}+1)g(Z_1, Z_2) ,
\end{equation}
where $Q$ denotes the Ricci operator defined by $S(Z_1,Z_2) = g(QZ_1,Z_2)$.
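The expression (\ref{2.12}) can be sanity-checked numerically (our sketch; $Q$ is written as a matrix in a frame $(e_1,e_2,\zeta)$): its trace must equal $r$, and $Q\zeta = -2\zeta$, in agreement with (\ref{2.10}) for $n=1$.

```python
# Sanity check of the 3-dimensional Ricci operator (2.12):
#   Q Z = (r/2 + 1) Z - (r/2 + 3) eta(Z) zeta.
# In a frame (e1, e2, zeta) this is diag(r/2+1, r/2+1, -2); its trace is r
# and Q zeta = -2 zeta, matching S(Z, zeta) = -2n eta(Z) for n = 1.
def Q_matrix(r):
    a, b = r / 2 + 1, r / 2 + 3
    Q = [[a if i == j else 0.0 for j in range(3)] for i in range(3)]
    Q[2][2] -= b    # the -(r/2+3) eta(Z) zeta term acts only on zeta
    return Q

for r in (-6.0, 0.0, 4.0):
    Q = Q_matrix(r)
    assert abs(Q[0][0] + Q[1][1] + Q[2][2] - r) < 1e-12   # tr Q = r
    assert abs(Q[2][2] + 2.0) < 1e-12                     # Q zeta = -2 zeta
print("ok")
```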
\vspace{.6cm}
{\section{\textsf{$h$-ARYS on para-Kenmotsu manifolds}}}
We assume that the manifold $M^{2n+1}$ admits an $h$-ARYS $(g,\zeta, \lambda, \alpha, \beta)$. Then from (\ref{1.3}), we get
\begin{equation}\label{3.1}
\frac{h}{2}(\pounds_{\zeta} g)(Z_1,Z_2) + \alpha S(Z_1,Z_2) + (\lambda -\frac{\beta}{2}r)g(Z_1,Z_2) = 0,
\end{equation}
which implies
\begin{equation}\label{3.2}
\frac{h}{2}[g(\nabla_{Z_1} \zeta ,Z_2) + g(Z_1,\nabla_{Z_2} \zeta)] + \alpha S(Z_1,Z_2) + (\lambda -\frac{\beta}{2}r)g(Z_1,Z_2) = 0.
\end{equation}
Using (\ref{2.9}) in (\ref{3.2}), we infer
\begin{equation}\label{3.3}
\alpha S(Z_1,Z_2) = h\eta(Z_1)\eta(Z_2)-(h+\lambda -\frac{\beta}{2}r)g(Z_1,Z_2) .
\end{equation}
Putting $Z_1 = Z_2 = \zeta$ in the foregoing equation entails that
\begin{equation}\label{3.4}
\frac{\beta}{2}r = \lambda - 2n\alpha.
\end{equation}
Equations (\ref{3.3}) and (\ref{3.4}) together give
\begin{equation}
\alpha S(Z_1,Z_2) = -(h+2n\alpha)g(Z_1,Z_2) + h\eta(Z_1)\eta(Z_2),
\end{equation}
which shows that the manifold is $\eta$-Einstein. Hence we have:\\
\begin{theo}
If a para-Kenmotsu manifold $M^{2n+1}$ admits a proper $h$-ARYS, then the manifold is an $\eta$-Einstein manifold.
\end{theo}
\vspace{1cm}
Let a 3-dimensional para-Kenmotsu manifold $M^3$ admit an $h$-AGRYS. Then (\ref{1.4}) implies
\begin{equation}\label{4.1}
h\nabla_{Z_1} Df = -\alpha Q Z_1 - (\lambda -\frac{\beta}{2}r)Z_1.
\end{equation}
Taking the covariant derivative of (\ref{4.1}), we get
\begin{eqnarray}\label{4.2}
h\nabla_{Z_2} \nabla_{Z_1} Df &=& \frac{1}{h}(Z_2 h)[\alpha Q Z_1 + (\lambda-\frac{\beta}{2}r)Z_1] - \alpha \nabla_{Z_2} QZ_1\\ \nonumber
&& -(Z_2\lambda)Z_1 -(\lambda-\frac{\beta}{2}r)\nabla_{Z_2} Z_1 + \frac{\beta}{2}(Z_2 r)Z_1.
\end{eqnarray}
Interchanging $Z_1$ and $Z_2$ in (\ref{4.2}) entails that
\begin{eqnarray}\label{4.3}
h\nabla_{Z_1} \nabla_{Z_2} Df &=& \frac{1}{h}(Z_1 h)[\alpha QZ_2 + (\lambda-\frac{\beta}{2}r)Z_2] - \alpha \nabla_{Z_1} QZ_2\\ \nonumber
&& -(Z_1\lambda)Z_2 -(\lambda-\frac{\beta}{2}r)\nabla_{Z_1} Z_2 + \frac{\beta}{2}(Z_1 r)Z_2.
\end{eqnarray}
Equation (\ref{4.1}) implies
\begin{equation}\label{4.4}
h\nabla_{[Z_1, Z_2]} Df = -\alpha Q([Z_1, Z_2]) - (\lambda -\frac{\beta}{2}r)([Z_1, Z_2]).
\end{equation}
Equations (\ref{4.2}), (\ref{4.3}) and (\ref{4.4}) reveal that
\begin{eqnarray}\label{4.5}
hR(Z_1,Z_2)Df &=& \frac{1}{h}(Z_1 h)[\alpha QZ_2 + (\lambda-\frac{\beta}{2}r)Z_2]\\ \nonumber
&& - \frac{1}{h}(Z_2 h)[\alpha QZ_1 + (\lambda-\frac{\beta}{2}r)Z_1]\\ \nonumber
&& -\alpha[(\nabla_{Z_1} Q)Z_2 - (\nabla_{Z_2} Q)Z_1]\\ \nonumber
&& + \frac{\beta}{2}[(Z_1 r)Z_2-(Z_2 r)Z_1] - [(Z_1 \lambda)Z_2 - (Z_2 \lambda)Z_1].
\end{eqnarray}
Equation (\ref{2.12}) implies
\begin{eqnarray}\label{4.6}
(\nabla_{Z_1} Q)Z_2 &=& \frac{Z_1 r}{2}[Z_2-\eta(Z_2)\zeta]\\ \nonumber
&& -(3+\frac{r}{2})[g(Z_1,Z_2)\zeta -2\eta(Z_1)\eta(Z_2)\zeta + \eta(Z_2)Z_1].
\end{eqnarray}
Using (\ref{4.6}) in (\ref{4.5}), we get
\begin{eqnarray}\label{4.7}
hR(Z_1,Z_2)Df &=& \frac{1}{h}(Z_1 h)[\alpha QZ_2 + (\lambda-\frac{\beta}{2}r)Z_2]\\ \nonumber
&& - \frac{1}{h}(Z_2 h)[\alpha QZ_1 + (\lambda-\frac{\beta}{2}r)Z_1]\\ \nonumber
&& -\alpha \frac{(Z_1 r)}{2}[Z_2 -\eta(Z_2)\zeta] + \alpha \frac{(Z_2 r)}{2}[Z_1 -\eta(Z_1)\zeta]\\ \nonumber
&& + \alpha(3+\frac{r}{2})[\eta(Z_2)Z_1 -\eta(Z_1)Z_2] - [(Z_1\lambda)Z_2-(Z_2\lambda )Z_1]\\\nonumber
&& + \frac{\beta}{2}[(Z_1 r)Z_2 - (Z_2 r)Z_1].
\end{eqnarray}
If $h$ is constant, then (\ref{4.7}) implies
\begin{eqnarray}\label{4.8}
hR(Z_1,Z_2)Df &=& -\alpha \frac{(Z_1 r)}{2}[Z_2 -\eta(Z_2)\zeta] + \alpha \frac{(Z_2 r)}{2}[Z_1 -\eta(Z_1)\zeta]\\ \nonumber
&& + \alpha(3+\frac{r}{2})[\eta(Z_2)Z_1 -\eta(Z_1)Z_2] - [(Z_1\lambda)Z_2-(Z_2\lambda )Z_1]\\\nonumber
&& + \frac{\beta}{2}[(Z_1 r)Z_2 - (Z_2 r)Z_1].
\end{eqnarray}
Contracting (\ref{4.8}), we infer
\begin{equation}\label{4.9}
hS(Z_2,Df) = (\frac{\alpha}{2}-\beta)Z_2 r + 2(Z_2 \lambda).
\end{equation}
Replacing $Z_1$ by $Df$ in (\ref{2.13}) and comparing with (\ref{4.9}), we get
\begin{equation}\label{4.10}
h[(1+\frac{r}{2})Z_2 f -(3+\frac{r}{2})(\zeta f)\eta(Z_2)] = (\frac{\alpha}{2}-\beta)Z_2 r + 2(Z_2\lambda).
\end{equation}
Putting $Z_2=\zeta$ in (\ref{4.10}) entails that
\begin{equation}\label{4.11}
h(\zeta f) = (\frac{\alpha}{2}-\beta)(r+6) - (\zeta \lambda).
\end{equation}
Taking inner product of (\ref{4.8}) with $\zeta$ gives
\begin{eqnarray}\label{4.12}
h[\eta(Z_2)Z_1f - \eta(Z_1)Z_2f] &=& -[(Z_1\lambda)\eta(Z_2)-(Z_2\lambda)\eta(Z_1)]\\ \nonumber
&& + \frac{\beta}{2}[(Z_1r)\eta(Z_2)-(Z_2r)\eta(Z_1)].
\end{eqnarray}
Setting $Z_2 =\zeta$ in (\ref{4.12}) and using (\ref{4.11}), we get
\begin{equation}\label{4.13}
h(Z_1f) = \frac{\beta}{2}(Z_1 r) + \frac{\alpha}{2}(r+6)\eta(Z_1) - (Z_1\lambda).
\end{equation}
Let us assume that the scalar curvature $r$ is constant. Then from (\ref{2.11}) we get $r = -6$, and hence the above equation implies
\begin{equation}\label{4.14}
h(Z_1 f) = -(Z_1 \lambda),
\end{equation}
which implies
\begin{equation}\label{4.15}
h (Df) = - (D\lambda).
\end{equation}
Using (\ref{4.15}) in (\ref{4.1}) reveals that
\begin{equation}\nonumber
-\nabla_{Z_1} D\lambda = -\alpha QZ_1 - (\lambda -\frac{\beta}{2}r)Z_1,
\end{equation}
which shows that the soliton is an almost gradient Ricci-Yamabe soliton with potential function $-\lambda$. Hence we have:\\
\begin{theo}
If a 3-dimensional para-Kenmotsu manifold $M^3$ with constant scalar curvature admits an $h$-AGRYS with constant $h$, then the soliton becomes an almost gradient Ricci-Yamabe soliton with potential function $-\lambda$.
\end{theo}
\vspace{.9cm}
{\section{\textsf{para-Sasakian manifolds}}}
A para-Sasakian manifold is a normal paracontact metric manifold. It is to be noted that every para-Sasakian manifold is $K$-paracontact, while the converse holds only in dimension 3. In a para-Sasakian manifold the following relations hold:
\begin{equation}\label{6.1}
R(Z_1,Z_2)\zeta = \eta(Z_1)Z_2 - \eta(Z_2)Z_1,
\end{equation}
\begin{equation}\label{6.2}
(\nabla_{Z_1} \phi)Z_2 = -g(Z_1,Z_2)\zeta + \eta(Z_2)Z_1,
\end{equation}
\begin{equation}\label{6.3}
\nabla_{Z_1} \zeta = -\phi Z_1,
\end{equation}
\begin{equation}\label{6.4}
R(\zeta,Z_1)Z_2 = -g(Z_1,Z_2)\zeta + \eta(Z_2)Z_1,
\end{equation}
\begin{equation}\label{6.5}
S(Z_1,\zeta) = -2n\eta(Z_1).
\end{equation}
In a 3-dimensional semi-Riemannian manifold the curvature tensor $R$ is of the form
\begin{eqnarray}\label{6.6}
R(Z_1,Z_2)Z_3 &=& g(Z_2,Z_3)QZ_1 - g(Z_1,Z_3)QZ_2 + S(Z_2,Z_3)Z_1 \\ \nonumber
&&- S(Z_1,Z_3)Z_2 - \frac{r}{2}[g(Z_2,Z_3)Z_1 - g(Z_1,Z_3)Z_2].
\end{eqnarray}
Equation (\ref{6.6}) implies
\begin{equation}\label{6.7}
QZ_1 = (\frac{r}{2}+1)Z_1-(\frac{r}{2}+3)\eta(Z_1)\zeta,
\end{equation}
which implies
\begin{equation}\label{6.8}
S(Z_1,Z_2) = (\frac{r}{2}+1)g(Z_1,Z_2) -(\frac{r}{2}+3)\eta(Z_1)\eta(Z_2).
\end{equation}
\begin{lem}(\cite{erk})
For a para-Sasakian manifold $M^3$, we have
\begin{equation}\label{6.9}
\zeta r = 0.
\end{equation}
\end{lem}
\vspace{.6cm}
{\section{\textsf{$h$-ARYS on para-Sasakian manifolds}}}
Let us assume that a para-Sasakian manifold $M^{2n+1}$ admits an $h$-ARYS $(g, \zeta, \lambda, \alpha, \beta)$. Then equation (\ref{1.3}) implies
\begin{equation}\label{7.1}
\frac{h}{2}(\pounds_{\zeta} g)(Z_1,Z_2) + \alpha S(Z_1,Z_2) + (\lambda-\frac{\beta}{2}r)g(Z_1,Z_2) = 0,
\end{equation}
which implies
\begin{equation}\label{7.2}
\frac{h}{2}[g(\nabla_{Z_1} {\zeta} ,Z_2) + g(Z_1,\nabla_{Z_2} {\zeta})] + \alpha S(Z_1,Z_2) + (\lambda-\frac{\beta}{2}r)g(Z_1,Z_2) = 0.
\end{equation}
Using (\ref{6.3}) in (\ref{7.2}) entails that
\begin{equation}\label{7.3}
\alpha S(Z_1,Z_2) = (\frac{\beta}{2}r - \lambda)g(Z_1,Z_2).
\end{equation}
Putting $Z_1 = Z_2 = \zeta$ in (\ref{7.3}), we get
\begin{equation}\label{7.4}
\beta r = 2\lambda - 4n\alpha.
\end{equation}
Hence from (\ref{7.3}), we infer
\begin{equation}\nonumber
S(Z_1,Z_2) = -2n g(Z_1,Z_2),
\end{equation}
since $\alpha \neq 0$ for a proper $h$-ARYS. Hence the manifold is an Einstein manifold. Therefore we state:\\
\begin{theo}
If a para-Sasakian manifold $M^{2n+1}$ admits a proper $h$-ARYS, then the manifold becomes an Einstein manifold.
\end{theo}
If we take $\alpha = 1$ and $\beta = 0$, then (\ref{7.4}) implies $\lambda = 2n$. Hence we get:
\begin{cor}
If a para-Sasakian manifold $M^{2n+1}$ admits an $h$-almost Ricci soliton, then the soliton is expanding.
\end{cor}
Suppose that a 3-dimensional para-Sasakian manifold $M^3$ admits an h-AGRYS. Then equation (\ref{1.4}) implies
\begin{equation}\label{8.1}
h\nabla_{Z_1} Df = -\alpha QZ_1 - (\lambda -\frac{\beta}{2}r)Z_1.
\end{equation}
Using (\ref{6.7}) in the above equation entails that
\begin{eqnarray}\label{8.2}
h\nabla_{Z_1} Df &=& -[\frac{(\alpha-\beta)}{2}r + \alpha + \lambda]Z_1\\ \nonumber
&& + \alpha(\frac{r}{2}+3)\eta(Z_1)\zeta.
\end{eqnarray}
Taking covariant differentiation of (\ref{8.2}), we get
\begin{eqnarray}\label{8.3}
h\nabla_{Z_2}\nabla_{Z_1} Df &=& \frac{1}{h}(Z_2 h)[\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace Z_1 - \alpha(\frac{r}{2}+3)\eta(Z_1)\zeta ]\\ \nonumber
&&- [\frac{(\alpha-\beta)}{2}Z_2 r + Z_2 \lambda]Z_1 -[\frac{(\alpha-\beta)}{2}r + \alpha + \lambda]\nabla_{Z_2} Z_1 \\ \nonumber
&&+ \frac{\alpha}{2}(Z_2 r)\eta(Z_1)\zeta + \alpha(\frac{r}{2}+3)[(\nabla_{Z_2} \eta(Z_1))\zeta - \eta(Z_1)\phi Z_2].
\end{eqnarray}
Swapping $Z_1$ and $Z_2$ in (\ref{8.3}), we infer
\begin{eqnarray}\label{8.4}
h\nabla_{Z_1}\nabla_{Z_2} Df &=& \frac{1}{h}(Z_1 h)[\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace Z_2 - \alpha(\frac{r}{2}+3)\eta(Z_2)\zeta ]\\ \nonumber
&&- [\frac{(\alpha-\beta)}{2}Z_1 r + Z_1 \lambda]Z_2 -[\frac{(\alpha-\beta)}{2}r + \alpha + \lambda]\nabla_{Z_1} Z_2 \\ \nonumber
&&+ \frac{\alpha}{2}(Z_1 r)\eta(Z_2)\zeta + \alpha(\frac{r}{2}+3)[(\nabla_{Z_1} \eta(Z_2))\zeta - \eta(Z_2)\phi Z_1].
\end{eqnarray}
Equation (\ref{8.2}) implies
\begin{eqnarray}\label{8.5}
h\nabla_{[Z_1,Z_2]} Df &=& -[\frac{(\alpha-\beta)}{2}r + \alpha + \lambda]([Z_1,Z_2])\\ \nonumber
&& + \alpha(\frac{r}{2}+3)\eta([Z_1,Z_2])\zeta.
\end{eqnarray}
With the help of (\ref{8.3})-(\ref{8.5}), we get
\begin{eqnarray}\label{8.6}
h R(Z_1,Z_2)Df &=& \frac{1}{h}(Z_1 h)[\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace Z_2 - \alpha(\frac{r}{2}+3)\eta(Z_2)\zeta ]\\ \nonumber
&& - \frac{1}{h}(Z_2 h)[\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace Z_1 - \alpha(\frac{r}{2}+3)\eta(Z_1)\zeta ]\\ \nonumber
&& - [\frac{(\alpha-\beta)}{2}Z_1 r + Z_1\lambda]Z_2 + [\frac{(\alpha-\beta)}{2}Z_2 r + Z_2\lambda]Z_1\\ \nonumber
&& + \frac{\alpha}{2}[(Z_1r)\eta(Z_2)\zeta - (Z_2 r)\eta(Z_1)\zeta]\\ \nonumber
&& + \alpha(\frac{r}{2}+3)[2g(Z_1,\phi Z_2)\zeta -\eta(Z_2)\phi Z_1 + \eta(Z_1)\phi Z_2].
\end{eqnarray}
Contracting the foregoing equation entails that
\begin{eqnarray}\label{8.9}
hS(Z_2,Df) &=& -\frac{1}{h}(Z_2h)[2\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace -\alpha(\frac{r}{2}+3)]\\ \nonumber
&& -\frac{\alpha}{h}(\frac{r}{2}+3)(\zeta h)\eta(Z_2) + (\frac{\alpha}{2}-\beta)(Z_2 r) + 2(Z_2 \lambda).
\end{eqnarray}
Replacing $Z_1$ by $Df$ in (\ref{6.8}) and comparing with the above equation, we get
\begin{eqnarray}\label{8.10}
h[(\frac{r}{2}+1)Z_2 f -(\frac{r}{2}+3)(\zeta f) \eta(Z_2)] &=& -\frac{1}{h}(Z_2 h)[2\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace -\alpha(\frac{r}{2}+3)]\\ \nonumber
&& -\frac{\alpha}{h}(\frac{r}{2}+3)(\zeta h)\eta(Z_2) + (\frac{\alpha}{2}-\beta)(Z_2 r) + 2(Z_2 \lambda).
\end{eqnarray}
Setting $Z_2 = \zeta$ in (\ref{8.10}) reveals that
\begin{eqnarray}\label{8.11}
h(\zeta f) &=& \frac{1}{h}[\frac{(\alpha-\beta)}{2}r + \alpha + \lambda ](\zeta h) - (\zeta \lambda).
\end{eqnarray}
Taking inner product of (\ref{8.6}) with $\zeta$, we get
\begin{eqnarray}\label{8.12}
h[\eta(Z_2)Z_1 f - \eta(Z_1)Z_2 f] &=&\frac{1}{h}(Z_1 h)[\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace - \alpha(\frac{r}{2}+3)]\eta(Z_2)\\ \nonumber
&& - \frac{1}{h}(Z_2 h)[\lbrace \frac{(\alpha-\beta)}{2}r + \alpha + \lambda \rbrace - \alpha(\frac{r}{2}+3)]\eta(Z_1)\\ \nonumber
&& -[\frac{(\alpha-\beta)}{2}Z_1 r + Z_1\lambda]\eta(Z_2)\\ \nonumber
&& + [\frac{(\alpha-\beta)}{2}Z_2 r + Z_2\lambda]\eta(Z_1)\\ \nonumber
&&+ \frac{\alpha}{2}[(Z_1 r)\eta(Z_2)-(Z_2 r)\eta(Z_1)] + 2\alpha (\frac{r}{2}+3)g(Z_1,\phi Z_2).
\end{eqnarray}
Substituting $Z_1$ by $\phi Z_1$ and $Z_2$ by $\phi Z_2$ in (\ref{8.12}) gives
\begin{equation}
\alpha(r+6)g(\phi Z_1,Z_2) = 0.
\end{equation}
Since $\alpha \neq 0$ for a proper soliton, the above equation implies $ r = -6$. Therefore from (\ref{6.8}), we get
\begin{equation}\label{8.14}
S(Z_1,Z_2) = -2 g(Z_1,Z_2),
\end{equation}
which shows that the manifold is Einstein. In view of (\ref{6.6}), equation (\ref{8.14}) reveals that
\begin{equation}
R(Z_1,Z_2)Z_3 = -[g(Z_2,Z_3)Z_1 -g(Z_1,Z_3)Z_2],
\end{equation}
which shows that the manifold is a space of constant sectional curvature $-1$. Hence we have:\\
\begin{theo}
If a 3-dimensional para-Sasakian manifold $M^3$ admits a proper $h$-AGRYS, then the manifold is locally isometric to $\mathbb{H}^3(1)$.
\end{theo}
\vspace{.9cm}
{\section{\textsf{para-cosymplectic manifolds}}}
An almost paracontact metric manifold $M^{2n+1}$ with a structure $(\phi,\zeta,\eta,g)$ is named an almost $\gamma$-paracosymplectic manifold if
\begin{equation}
d\eta = 0, d\Phi = 2\gamma \eta\wedge \Phi.
\end{equation}
In particular, if $\gamma = 0$, we obtain almost para-cosymplectic manifolds. The manifold is called para-cosymplectic if it is normal. We refer to (\cite{dak}, \cite{erk1}) for more details. Any para-cosymplectic manifold satisfies
\begin{equation}\label{9.1}
R(Z_1,Z_2)\zeta = 0,
\end{equation}
\begin{equation}\label{9.2}
(\nabla_{Z_1} \phi )Z_2 = 0,
\end{equation}
\begin{equation}\label{9.3}
\nabla_{Z_1} \zeta = 0,
\end{equation}
\begin{equation}\label{9.4}
S(Z_1,\zeta) = 0.
\end{equation}
\begin{lem}(\cite{erk})
For a 3-dimensional para-cosymplectic manifold $M^3$, we have
\begin{equation}\label{9.5}
Q Z_1 = \frac{r}{2}[Z_1 - \eta(Z_1)\zeta],
\end{equation}
\begin{equation}\label{9.6}
S(Z_1,Z_2) = \frac{r}{2}[g(Z_1,Z_2)-\eta(Z_1)\eta(Z_2)].
\end{equation}
\end{lem}
\begin{lem}(\cite{erk})
In a para-cosymplectic manifold $M^3$, we get
\begin{equation}\label{9.7}
\zeta r = 0.
\end{equation}
\end{lem}
{\section{\textsf{$h$-ARYS on para-cosymplectic manifolds}}}
Assume that a para-cosymplectic manifold admits an $h$-ARYS $(g,\zeta,\lambda,\alpha,\beta)$. Then (\ref{1.3}) implies
\begin{equation}\label{a.1}
\frac{h}{2}(\pounds_{\zeta} g)(Z_1,Z_2) + \alpha S(Z_1,Z_2) + (\lambda -\frac{\beta}{2}r)g(Z_1,Z_2) = 0,
\end{equation}
which turns into
\begin{equation}\label{a.2}
\frac{h}{2}[g(\nabla_{Z_1} {\zeta},Z_2) + g(Z_1,\nabla_{Z_2} {\zeta})] + \alpha S(Z_1,Z_2) + (\lambda -\frac{\beta}{2}r)g(Z_1,Z_2) = 0.
\end{equation}
Using (\ref{9.3}) in (\ref{a.2}) gives
\begin{equation}
\alpha S(Z_1,Z_2) = -(\lambda-\frac{\beta}{2}r)g(Z_1,Z_2),
\end{equation}
which shows that the manifold is Einstein, since $\alpha \neq 0$ for a proper $h$-ARYS. Hence we have:\\
\begin{theo}
If a para-cosymplectic manifold admits a proper $h$-ARYS, then the manifold becomes an Einstein manifold.
\end{theo}
Let a 3-dimensional para-cosymplectic manifold $M^3$ admit an $h$-AGRYS. Then from (\ref{1.4}), we get
\begin{equation}\label{b.1}
h\nabla_{Z_1} Df = -\alpha QZ_1 - (\lambda-\frac{\beta}{2}r)Z_1.
\end{equation}
Hence we have
\begin{eqnarray}\label{b.2}
hR(Z_1,Z_2)Df &=& \frac{1}{h}(Z_1 h)[\alpha QZ_2 + (\lambda-\frac{\beta}{2}r)Z_2]\\ \nonumber
&& - \frac{1}{h}(Z_2h)[\alpha QZ_1 + (\lambda-\frac{\beta}{2}r)Z_1]\\ \nonumber
&& -\alpha[(\nabla_{Z_1} Q)Z_2 - (\nabla_{Z_2} Q)Z_1] - (Z_1\lambda)Z_2 + (Z_2\lambda)Z_1\\ \nonumber
&& + \frac{\beta}{2}[(Z_1 r)Z_2 - (Z_2 r)Z_1].
\end{eqnarray}
Using (\ref{9.5}) in (\ref{b.2}) reveals that
\begin{eqnarray}\label{b.3}
hR(Z_1,Z_2)Df &=& \frac{1}{h}(Z_1h)[\alpha QZ_2 + (\lambda-\frac{\beta}{2}r)Z_2]\\ \nonumber
&& - \frac{1}{h}(Z_2 h)[\alpha QZ_1 + (\lambda-\frac{\beta}{2}r)Z_1]\\ \nonumber
&& -\frac{\alpha}{2}(Z_1 r)[Z_2-\eta(Z_2)\zeta] + \frac{\alpha}{2}(Z_2r)[Z_1-\eta(Z_1)\zeta]\\ \nonumber
&& - (Z_1\lambda)Z_2 + (Z_2\lambda)Z_1 + \frac{\beta}{2}[(Z_1 r)Z_2 - (Z_2 r)Z_1].
\end{eqnarray}
If we take $h$ = constant, then the above equation implies
\begin{eqnarray}\label{b.4}
hR(Z_1,Z_2)Df &=& -\frac{\alpha}{2}(Z_1 r)[Z_2-\eta(Z_2)\zeta] + \frac{\alpha}{2}(Z_2 r)[Z_1-\eta(Z_1)\zeta]\\ \nonumber
&& - (Z_1 \lambda)Z_2 + (Z_2 \lambda)Z_1 + \frac{\beta}{2}[(Z_1 r)Z_2 - (Z_2 r)Z_1].
\end{eqnarray}
Contracting the foregoing equation entails that
\begin{equation}\label{b.5}
hS(Z_2,Df) = (\frac{\alpha}{2}-\beta)Z_2 r + 2(Z_2 \lambda).
\end{equation}
Substituting $Z_1$ by $Df$ in (\ref{9.6}) and equating with (\ref{b.5}), we get
\begin{equation}\label{b.6}
\frac{hr}{2}[Z_2f -(\zeta f)\eta(Z_2)] = (\frac{\alpha}{2}-\beta)Z_2 r + 2(Z_2 \lambda).
\end{equation}
Putting $Z_2 = \zeta$ and using (\ref{9.7}), we infer
\begin{equation}\label{b.7}
\zeta \lambda = 0.
\end{equation}
Taking inner product of (\ref{b.4}) with $\zeta$ and using (\ref{9.1}) gives
\begin{equation}\label{b.8}
-(Z_1\lambda)\eta(Z_2) + (Z_2\lambda)\eta(Z_1) + \frac{\beta}{2}[(Z_1 r)\eta(Z_2) - (Z_2 r)\eta(Z_1)] = 0.
\end{equation}
Setting $Z_2 = \zeta$ in (\ref{b.8}), we get
\begin{equation}\label{b.9}
-(Z_1\lambda) + \frac{\beta}{2}(Z_1 r) = 0.
\end{equation}
If $r$ is constant, then (\ref{b.9}) implies
\begin{equation}
Z_1 \lambda = 0,
\end{equation}
which implies $\lambda$ is constant. Therefore we have:\\
\begin{theo}
If a 3-dimensional para-cosymplectic manifold $M^3$ with constant scalar curvature admits an $h$-AGRYS with constant $h$, then the soliton becomes an $h$-gradient Ricci-Yamabe soliton.
\end{theo}
In particular, if we take $\alpha =1$ and $\beta = 0$, then (\ref{b.9}) implies $Z_1\lambda = 0$, so $\lambda$ is constant. Hence we have:
\begin{cor}
An $h$-almost gradient Ricci soliton on a 3-dimensional para-cosymplectic manifold $M^3$ with constant $h$ becomes an $h$-gradient Ricci soliton.
\end{cor}
\vspace{.9cm}
{\section{\textsf{Examples}}}
{\bf{Example 1.}}
Let us consider $M^3 = \lbrace (x,y,z)\in \mathbb{R}^3 : (x,y,z) \neq (0,0,0)\rbrace$, where $(x,y,z)$ are the standard co-ordinates of $\mathbb{R}^3$.\\
We consider three linearly independent vector fields
\begin{equation} \nonumber
u_1 = e^{-z}\frac{\partial}{\partial x},\hspace{.4cm} u_2 = e^{-z}\frac{\partial}{\partial y},\hspace{.4cm} u_3 = \frac{\partial}{\partial z}.
\end{equation}
Let $g$ be the semi-Riemannian metric defined by
\begin{equation}\nonumber
g(u_1,u_1) = 1,\hspace{.4cm} g(u_2,u_2) = -1,\hspace{.4cm} g(u_3,u_3) = 1,
\end{equation}
\begin{equation}\nonumber
g(u_1,u_2) = 0,\hspace{.4cm} g(u_1,u_3) = 0,\hspace{.4cm} g(u_2,u_3) = 0.
\end{equation}
Let $\eta$ be the 1-form defined by $\eta(Z_1) = g(Z_1,u_3)$ for any $Z_1 \in \chi(M)$.\\
Let $\phi$ be the (1,1)-tensor field defined by
\begin{equation}\nonumber
\phi u_1 = u_2,\hspace{.4cm} \phi u_2 = u_1,\hspace{.4cm} \phi u_3 = 0.
\end{equation}
Using the above relations, we acquire
\begin{equation}\nonumber
\phi^2 Z_1 = Z_1 -\eta(Z_1)u_3,\hspace{.4cm} \eta(u_3) = 1,
\end{equation}
\begin{equation}\nonumber
g(\phi Z_1, \phi Z_2) = -g(Z_1,Z_2) + \eta(Z_1)\eta(Z_2)
\end{equation}
for any $Z_1,Z_2 \in \chi(M)$. Hence for $u_3 = \zeta$, the structure $(\phi,\zeta,\eta,g)$ is an almost paracontact structure on $M$.\\
Using Koszul's formula, we have
\begin{equation}\nonumber
\nabla_{u_1} u_1 = -u_3,\hspace{.4cm} \nabla_{u_1} u_2 = 0,\hspace{.4cm} \nabla_{u_1} u_3 = u_1,
\end{equation}
\begin{equation}\nonumber
\nabla_{u_2} u_1 = 0,\hspace{.4cm} \nabla_{u_2} u_2 = u_3,\hspace{.4cm} \nabla_{u_2} u_3 = u_2,
\end{equation}
\begin{equation}\nonumber
\nabla_{u_3} u_1 =0,\hspace{.4cm} \nabla_{u_3} u_2 = 0,\hspace{.4cm} \nabla_{u_3} u_3 =0.
\end{equation}
Hence the manifold is a para-Kenmotsu manifold. \\
With the help of the above results we can easily obtain
\begin{equation}\nonumber
R(u_1,u_2)u_3 = 0,\hspace{.4cm} R(u_2,u_3)u_3 = - u_2,\hspace{.4cm} R(u_1,u_3)u_3 = -u_1,
\end{equation}
\begin{equation}\nonumber
R(u_1,u_2)u_2 = u_1,\hspace{.4cm} R(u_2,u_3)u_2 = u_3,\hspace{.4cm} R(u_1,u_3)u_2 = 0,
\end{equation}
\begin{equation}\nonumber
R(u_1,u_2)u_1 = u_2,\hspace{.4cm} R(u_2,u_3)u_1 = 0,\hspace{.4cm} R(u_1,u_3)u_1 = u_3
\end{equation}
and
\begin{equation}\nonumber
S(u_1,u_1) = -2,\hspace{.4cm} S(u_2,u_2) = 2,\hspace{.4cm} S(u_3,u_3) = -2.
\end{equation}
From the above results we get $r = -6$.\\
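The value $r=-6$ can be double-checked by a direct finite-difference computation (our sketch, assuming the coordinate form $g = e^{2z}\,dx^2 - e^{2z}\,dy^2 + dz^2$ compatible with the covariant derivatives listed above; pure Python, no external libraries):

```python
import math

# Finite-difference check that the example metric has scalar curvature -6.
# Assumed coordinate form: g = e^{2z} dx^2 - e^{2z} dy^2 + dz^2,
# i.e. diagonal and z-dependent only.
def g_diag(z):
    return [math.exp(2 * z), -math.exp(2 * z), 1.0]

def christoffel(z, h=1e-6):
    g = g_diag(z)
    dg = [(p - m) / (2 * h) for p, m in zip(g_diag(z + h), g_diag(z - h))]
    G = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
    for k in range(3):
        for i in range(3):
            for j in range(3):
                t = 0.0
                if i == 2 and j == k: t += dg[k]   # d_i g_{kj}
                if j == 2 and i == k: t += dg[k]   # d_j g_{ki}
                if k == 2 and i == j: t -= dg[i]   # d_k g_{ij}
                G[k][i][j] = 0.5 * t / g[k]        # diagonal metric inverse
    return G

def scalar_curvature(z, h=1e-4):
    g, G = g_diag(z), christoffel(z)
    Gp, Gm = christoffel(z + h), christoffel(z - h)
    def dG(a, l, i, j):   # d Gamma^l_{ij} / d x^a (only x^a = z is nonzero)
        return (Gp[l][i][j] - Gm[l][i][j]) / (2 * h) if a == 2 else 0.0
    r = 0.0
    for j in range(3):    # Ric_{jj} = R^i_{ijj}, then contract with g^{jj}
        ric = 0.0
        for i in range(3):
            ric += dG(i, i, j, j) - dG(j, i, i, j)
            for m in range(3):
                ric += G[i][i][m] * G[m][j][j] - G[i][j][m] * G[m][i][j]
        r += ric / g[j]
    return r

print(scalar_curvature(0.0))   # close to -6
```

The computed value is independent of $z$, as expected for a space of constant scalar curvature.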
Again, suppose that $-f = \lambda = e^z$ and $2\alpha - 3\beta = 0$. Therefore ${-Df = D\lambda = e^z u_3}$. Hence we get
\begin{eqnarray} \nonumber
&&\nabla_{u_1} Df = -e^z u_1,\\ \nonumber
&&\nabla_{u_2} Df = -e^z u_2,\\ \nonumber
&&\nabla_{u_3} Df =-e^z u_3.
\end{eqnarray}
Therefore, for $2\alpha - 3\beta = 0$ and $h = 1$, equation (\ref{4.1}) is satisfied. Thus $g$ is an $h$-AGRYS with the soliton vector field $V = Df$, where $-f = \lambda = e^z$ and $2\alpha - 3\beta = 0$. Since $-f = \lambda = e^z$ and $-Df = D\lambda = e^z u_3$, Theorem 3.2 is verified. \\\\
{\bf{Example 2.}}
Let $M^3 = \lbrace (x,y,z)\in \mathbb{R}^3 : (x,y,z) \neq (0,0,0)\rbrace$, where $(x,y,z)$ are the standard co-ordinates of $\mathbb{R}^3$.\\
We consider
\begin{equation} \nonumber
v_1 = e^z\frac{\partial}{\partial x} + e^{-z}\frac{\partial}{\partial y},\hspace{.4cm} v_2 = e^z\frac{\partial}{\partial x} - e^{-z}\frac{\partial}{\partial y},\hspace{.4cm} v_3 = \frac{\partial}{\partial z}
\end{equation}
which are linearly independent vector fields. Let the semi-Riemannian metric $g$ be defined by
\begin{equation}\nonumber
g(v_1,v_1) = 1,\hspace{.4cm} g(v_2,v_2) = -1,\hspace{.4cm} g(v_3,v_3) = 1,
\end{equation}
\begin{equation}\nonumber
g(v_1,v_2) = 0,\hspace{.4cm} g(v_1,v_3) = 0,\hspace{.4cm} g(v_2,v_3) = 0.
\end{equation}
The 1-form $\eta$ is defined by $\eta(Z_1) = g(Z_1,v_3)$ for any $Z_1 \in \chi(M)$ and $\phi$ is defined by
\begin{equation}\nonumber
\phi v_1 = v_2,\hspace{.4cm} \phi v_2 = v_1,\hspace{.4cm} \phi v_3 = 0.
\end{equation}
Using the above relations, we acquire
\begin{equation}\nonumber
\phi^2 Z_1 = Z_1 -\eta(Z_1)v_3,\hspace{.4cm} \eta(v_3) = 1,
\end{equation}
\begin{equation}\nonumber
g(\phi Z_1, \phi Z_2) = -g(Z_1,Z_2) + \eta(Z_1)\eta(Z_2)
\end{equation}
for any $Z_1,Z_2 \in \chi(M)$. Hence for $v_3 = \zeta$, the structure $(\phi,\zeta,\eta,g)$ is an almost paracontact structure on $M$.\\
Using (\ref{6.3}), we have
\begin{equation}\nonumber
\nabla_{v_1} v_1 = 0,\hspace{.4cm} \nabla_{v_1} v_2 = v_3,\hspace{.4cm} \nabla_{v_1} v_3 = -v_2,
\end{equation}
\begin{equation}\nonumber
\nabla_{v_2} v_1 = v_3,\hspace{.4cm} \nabla_{v_2} v_2 = 0,\hspace{.4cm} \nabla_{v_2} v_3 = -v_1,
\end{equation}
\begin{equation}\nonumber
\nabla_{v_3} v_1 =0,\hspace{.4cm} \nabla_{v_3} v_2 = 0,\hspace{.4cm} \nabla_{v_3} v_3 =0.
\end{equation}
Hence the manifold is a para-Sasakian manifold. The components of the curvature tensor and Ricci tensor are
\begin{equation}\nonumber
R(v_1,v_2)v_3 = 0,\hspace{.4cm} R(v_2,v_3)v_3 = - v_2,\hspace{.4cm} R(v_1,v_3)v_3 = -v_1,
\end{equation}
\begin{equation}\nonumber
R(v_1,v_2)v_2 =v_1,\hspace{.4cm} R(v_2,v_3)v_2 = v_3,\hspace{.4cm} R(v_1,v_3)v_2 = 0,
\end{equation}
\begin{equation}\nonumber
R(v_1,v_2)v_1 = -v_2,\hspace{.4cm} R(v_2,v_3)v_1 = 0,\hspace{.4cm} R(v_1,v_3)v_1 = v_3
\end{equation}
and
\begin{equation}\nonumber
S(v_1,v_1) = -2,\hspace{.4cm} S(v_2,v_2) = 2,\hspace{.4cm} S(v_3,v_3) = -2.
\end{equation}
From the above equations, we obtain $r = -6$.\\
From (\ref{7.3}) we obtain $\alpha S(v_1,v_1)= -3\beta - \lambda$, $\alpha S(v_2,v_2) = 3\beta+\lambda$ and $\alpha S(v_3,v_3) = -3\beta - \lambda$; hence $\lambda = 2\alpha-3\beta$. The data $(g, \zeta, \lambda, \alpha, \beta)$ define an $h$-ARYS soliton on the para-Sasakian manifold.\\
\section{Introduction}\label{sec:intro}
Since the discovery of the Higgs boson~\cite{Aad_2012,Chatrchyan_2012,CMS:2013btf} at the LHC, an extensive program of measurements~\cite{PDG2020} has been undertaken to determine its properties and couplings to different types of particles and to assess whether these properties are consistent with those predicted by the standard model (SM).
With the successful running of the LHC, large data samples of proton-proton (\ensuremath{\Pp\Pp}\xspace) collisions at $\sqrt s = 13\TeV$ have been accumulated, increasing the sensitivity to Higgs boson decays with small branching fractions.
Such decays also provide probes for possible contributions arising from physics beyond the SM (BSM) and include the process \ensuremath{\PH\to\zg}\xspace~\cite{Abba96, Chen12, Htollg-FB-Sun, Passarino, Campbell_2013hz, Degrassi:2019yix, Low:2011gn,Hue:2017cph,Dedes:2019bew,Hammad:2015eca,Liu:2020nsm}.
Figure~\ref{fig:fey} shows Feynman diagrams for the key SM contributions to the \ensuremath{\PH\to\zg}\xspace decay process.
Experimentally, the final state resulting from \ensuremath{\PZ\to\lplm}\xspace ($\Pell = \Pe$ or \PGm) is the most accessible, since the leptons are highly distinctive, well-measured, and provide a means to trigger the recording of the events.
In the SM, the expected branching fraction for \ensuremath{\PH\to\zg}\xspace is $\ensuremath{\mathcal{B}(\hzg)}\xspace = (1.57 \pm 0.09) \times 10^{-3}$, assuming a Higgs boson mass of $\ensuremath{m_{\PH}}\xspace = 125.38 \pm 0.14\GeV$, taken from the most recent CMS Higgs boson mass measurement~\cite{CMS:2020xrn}.
While this branching fraction is comparable to $\ensuremath{\mathcal{B}(\hgg)}\xspace = (2.27 \pm 0.04) \times 10^{-3}$~\cite{LHC-YR4,CMS:2021kom}, the \ensuremath{\PZ\to\lplm}\xspace branching fraction reduces the relative predicted signal yield.
The ratio $\ensuremath{\brzg/\brgg}\xspace = 0.69 \pm 0.04$ is potentially sensitive to BSM physics, such as supersymmetry and extended Higgs sectors~\cite{Djouadi:1996yq,Zg_theory_extension,Zg_theory_decaywidth,Chen:2013vi,Hung:2019jue,Archer-Smith:2020gib}.
The effects from these models can shift the \ensuremath{\PH\to\zg}\xspace and \ensuremath{\PH\to\PGg\PGg}\xspace branching fractions by different amounts, making the ratio a sensitive observable.
The impact on the ratio can be $\mathcal{O}(10\%)$, depending on the model. The \ensuremath{\PH\to\zg}\xspace branching fraction is sensitive to a potential anomalous trilinear Higgs self-coupling~\cite{Degrassi:2019yix}, and a precise measurement of the branching fraction could help to test the SM prediction for this fundamental quantity.
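As a quick consistency check of the quoted value $\ensuremath{\brzg/\brgg}\xspace = 0.69 \pm 0.04$, the ratio of the two branching fractions above can be reproduced with a minimal sketch; the helper name is illustrative, and propagating the uncertainties in quadrature assumes they are uncorrelated, which may not hold exactly for the quoted SM predictions.

```python
import math

# Hedged sketch: ratio of two branching fractions with uncorrelated
# uncertainties propagated in quadrature (an assumption).
def ratio_with_error(a, da, b, db):
    r = a / b
    dr = r * math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return r, dr

# B(H -> Z gamma) and B(H -> gamma gamma) as quoted in the text.
r, dr = ratio_with_error(1.57e-3, 0.09e-3, 2.27e-3, 0.04e-3)
print(f"B(H->Zg)/B(H->gg) = {r:.2f} +/- {dr:.2f}")  # prints 0.69 +/- 0.04
```

The relative uncertainty is dominated by the \ensuremath{\PH\to\zg}\xspace prediction, whose fractional error is about three times larger than that of \ensuremath{\PH\to\PGg\PGg}\xspace.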
\begin{figure*}[b]
\includegraphics[width=0.8\textwidth]{Figure_001.pdf}
\caption{Feynman diagrams for \ensuremath{\PH\to\zg}\xspace decay.} \label{fig:fey}
\end{figure*}
The ATLAS and CMS Collaborations have performed searches for the decay $\ensuremath{\PH\to\zg}\xspace\to\ensuremath{\Pellp\Pellm}\xspace\PGg$~\cite{atl-HZG,cms-HZG,Sirunyan:2018tbk,Aad:2020plj} at $\sqrt{s}=7$, 8, and 13\TeV in the $\ensuremath{\Pep\Pem}\xspace\PGg$ and $\ensuremath{\PGmp\PGmm}\xspace\PGg$ final states.
The most stringent bound has been set by the ATLAS Collaboration using a data sample at $\sqrt{s} = 13\TeV$ with an integrated luminosity of 139\fbinv~\cite{Aad:2020plj}. The observed (expected) upper limit at 95\% confidence level (\CL) on \ensuremath{\sigppH\brzg}\xspace relative to the SM is 3.6\,(2.6), assuming $\ensuremath{m_{\PH}}\xspace=125.09\GeV$.
The region with lower dilepton invariant mass (\ensuremath{m_{\lplm}}\xspace) has also been explored.
The ATLAS experiment has reported evidence for the decay $\PH\to\ensuremath{\lplm\PGg}\xspace$ with $\ensuremath{m_{\lplm}}\xspace < 30\GeV$ using both dilepton channels~\cite{atlas_llgrun2}.
The CMS Collaboration has also searched for the $\PH\to\ensuremath{\lplm\PGg}\xspace$ process with $\ensuremath{m_{\lplm}}\xspace < 50\GeV$ in the dimuon channel at $\sqrt{s}=8\TeV$~\cite{2016341} and 13\TeV~\cite{Sirunyan:2018tbk}.
This paper describes a search for the decay \ensuremath{\PH\to\zg}\xspace, where \ensuremath{\PZ\to\lplm}\xspace.
The data sample corresponds to an integrated luminosity of 138\fbinv of \ensuremath{\Pp\Pp}\xspace collisions at $\sqrt{s} = 13\TeV$ accumulated between 2016 and 2018.
The region at small dilepton invariant mass, $\ensuremath{m_{\lplm}}\xspace < 50\GeV$, is excluded from the analysis. This region contains a contribution from an additional process, $\PH\to\PGg^{\ast}\PGg\to\ensuremath{\lplm\PGg}\xspace$~\cite{Htollg-FB-Sun}.
The sensitivity of the analysis is enhanced by searching for Higgs boson production in a variety of mechanisms, including gluon-gluon fusion (\ensuremath{\Pg\Pg\PH}\xspace); vector boson fusion (VBF); and the associated production of a Higgs boson with either a weak vector boson (\ensuremath{\PV\PH}\xspace, where $\PV = \PZ$ or \PW) or a top quark pair (\ensuremath{\ttbar\PH}\xspace).
The dominant backgrounds arise from Drell--Yan production in association with an initial-state photon~({\ensuremath{\PZ/\PGg^{\ast}}\xspace}+\PGg) and Drell--Yan production in association with jets, where a jet or additional lepton is misidentified as a photon ({\ensuremath{\PZ/\PGg^{\ast}}\xspace}+jets).
After using a set of discriminating variables to suppress background in the different production mechanisms, the signal is identified as a narrow resonant peak around \ensuremath{m_{\PH}}\xspace in the distribution of the \ensuremath{\lplm\PGg}\xspace invariant mass (\ensuremath{m_{\llg}}\xspace).
The data sample is divided into mutually exclusive categories according to (i) the presence of an additional lepton produced by $\PZ(\to\ensuremath{\Pellp\Pellm}\xspace)$ or $\PW(\to\Pell\PGn)$ decay, indicating the possible associated production of a Higgs boson with a \PW or \PZ boson, or \ensuremath{\ttbar\PH}\xspace production with a leptonic top quark decay; (ii) the value of a multivariate analysis (MVA) discriminant characterizing the kinematic properties of a dijet system together with the \ensuremath{\lplm\PGg}\xspace candidate, indicating possible VBF production; and (iii) the value of an MVA discriminant characterizing the kinematic properties of the \ensuremath{\lplm\PGg}\xspace system.
A simultaneous maximum likelihood fit is performed to the \ensuremath{m_{\llg}}\xspace distribution in each category.
Tabulated results are provided in the HEPData record for this analysis~\cite{hepdata}.
This paper is organized as follows.
The CMS detector and event reconstruction are described in Sections~\ref{sec:cms} and~\ref{sec:reco}, respectively, and the data and simulated event samples are described in Section~\ref{sec:samples}.
Section~\ref{sec:preselection} outlines the event selection, and Section~\ref{sec:class} discusses the event categorization using the MVA discriminants described above.
The statistical procedure, including the modeling of signal and background shapes in the \ensuremath{m_{\llg}}\xspace distributions, is presented in Section~\ref{sec:modeling}. Systematic uncertainties are discussed in Section~\ref{sec:systematics}.
The final results obtained from the fits are discussed in Section~\ref{sec:results}, followed by a summary in Section~\ref{sec:summary}.
\section{The CMS detector}\label{sec:cms}
The CMS apparatus~\cite{CMS:2008xjf} is a multipurpose, nearly hermetic detector, designed to trigger on~\cite{CMS:2020cmk,CMS:2016ngn} and identify electrons, muons, photons, and (charged and neutral) hadrons~\cite{CMS:2015xaf,CMS:2018rym,CMS:2015myp,CMS:2014pgm}.
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}.
Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections.
The ECAL consists of 75\,848 lead tungstate crystals, which provide coverage in pseudorapidity $\abs{\eta} < 1.48$ in a barrel region (EB) and $1.48 < \abs{\eta} < 3.0$ in two endcap regions (EE).
Preshower detectors consisting of two planes of silicon sensors interleaved with a total of 3 radiation lengths of lead are located in front of each EE detector.
Forward calorimeters extend the pseudorapidity coverage provided by the barrel and endcap detectors.
Muons are measured in the pseudorapidity range $\abs{\eta} < 2.4$ by gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid.
These detectors are arranged in planes made using three technologies: drift tubes, cathode strip chambers, and resistive plate chambers.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{CMS:2008xjf}.
Events of interest are selected using a two-tiered trigger system.
The first level (L1), composed of custom hardware processors, uses information from the calorimeters and muon detectors to select events at a rate of around 100\unit{kHz} within a fixed latency of about 4\mus~\cite{CMS:2020cmk}.
The second level, known as the high-level trigger, consists of a farm of processors running a version of the full event reconstruction software optimized for fast processing, and reduces the event rate to around 1\unit{kHz} before data storage~\cite{CMS:2016ngn}.
\section{Event reconstruction}\label{sec:reco}
The candidate vertex with the largest value of summed squared physics-object transverse momentum (\pt) is taken to be the primary \ensuremath{\Pp\Pp}\xspace interaction vertex.
The physics objects used in the calculation of this quantity are (i) jets, clustered using the jet finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma}, with the tracks assigned to candidate vertices as inputs, and (ii) the associated missing \pt, taken as the negative vector sum of the \pt of those jets.
The global event reconstruction (also called particle-flow (PF) event reconstruction~\cite{CMS:2017yfk}) aims to reconstruct and identify each individual particle in an event, with an optimized combination of all subdetector information.
In this process, the identification of the particle type (photon, electron, muon, charged hadron, neutral hadron) plays an important role in the determination of the particle direction and energy.
Photons are identified as ECAL energy clusters not linked to the extrapolation of any charged particle trajectory to the ECAL.
Electrons are identified as a primary charged particle track and potentially multiple ECAL energy clusters, which correspond to the extrapolation of this track to the ECAL and to possible bremsstrahlung photons emitted along the way through the tracker material.
Muons are identified as tracks in the central tracker consistent with either a track or several hits in the muon system, and associated with calorimeter deposits compatible with the muon hypothesis.
Charged hadrons are identified as charged particle tracks that are neither identified as electrons nor as muons.
Finally, neutral hadrons are identified as HCAL energy clusters not linked to any charged hadron trajectory, or as a combined ECAL and HCAL energy excess with respect to the expected charged hadron energy deposit.
For this analysis, the detector performance for photons, electrons, and muons is critical, because the energy and momentum resolutions for these particles determine the resolution of the Higgs boson signal peak in the \ensuremath{m_{\llg}}\xspace distribution.
The energy of photons is obtained from the ECAL measurement. In the EB, for photons that have energies in the range of tens of \GeVns, an energy resolution of about 1\% is achieved for unconverted or late-converting photons, \ie, photons converting near the inner face of the ECAL.
The energy resolution of the remaining barrel photons is about 1.3\% up to $\abs{\eta} = 1$, rising to about 2.5\% at $\abs{\eta} = 1.4$.
In the EE, the energy resolution for unconverted or late-converting photons is about 2.5\%, while the remaining endcap photons have a resolution between 3 and 4\%~\cite{CMS:2015myp}.
The energy of electrons is determined from a combination of the track momentum at the main interaction vertex, the corresponding ECAL cluster energy, and the energy sum of all bremsstrahlung photons attached to the track.
The measured energy resolution for electrons produced in \PZ boson decays in \ensuremath{\Pp\Pp}\xspace collision data ranges from 2 to 5\%, depending on electron pseudorapidity and energy loss through bremsstrahlung in the detector material~\cite{CMS:2020uim}.
The momentum of muons is obtained from the corresponding track momentum. Matching muons to tracks measured in the silicon tracker results in a \pt resolution, for muons with \pt up to 100\GeV, of 1\% in the barrel and 3\% in the endcaps.
The energy of charged hadrons is determined from a combination of the track momentum and the corresponding ECAL and HCAL energies, corrected for the response function of the calorimeters to hadronic showers.
Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
For tagging the VBF production mechanism, which produces an additional dijet system, jet reconstruction is important.
For each event, hadronic jets are clustered from PF particles using the infrared and collinear-safe anti-\kt algorithm~\cite{Cacciari:2008gp, Cacciari:2011ma} with a distance parameter of 0.4.
Jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be, on average, within 5--10\% of the true momentum over the entire \pt spectrum and detector acceptance. Additional \ensuremath{\Pp\Pp}\xspace interactions within the same or nearby bunch crossings (pileup) can contribute additional tracks and calorimetric energy depositions to the jet momentum.
To mitigate this effect, charged particles found to originate from pileup vertices are discarded and an offset correction is applied to correct for remaining contributions~\cite{CMS:2020ebo}.
Jet energy corrections are derived from simulation to bring the measured response of jets to that of particle-level jets on average.
In situ measurements of the momentum balance in dijet, photon+jet, {\PZ}+jet, and multijet events are used to account for residual differences in the jet energy scale and resolution between data and simulation~\cite{CMS:2016lmd}.
The jet energy resolution typically amounts to 15--20\% at 30\GeV, 10\% at 100\GeV, and 5\% at 1\TeV~\cite{CMS:2016lmd}.
Additional selection criteria are applied to each jet to remove jets potentially dominated by anomalous contributions from various subdetector components or reconstruction failures~\cite{CMS-PAS-JME-16-003}.
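The anti-\kt distance measure underlying the clustering described above can be illustrated with a short sketch. This is not the FastJet implementation used by CMS, only the defining formulas: particles are combined according to $d_{ij} = \min(p_{\mathrm{T}i}^{-2}, p_{\mathrm{T}j}^{-2})\,\Delta R_{ij}^2/R^2$ and promoted to jets via $d_{iB} = p_{\mathrm{T}i}^{-2}$, so clustering proceeds outward from the hardest particles.

```python
import math

R = 0.4  # distance parameter used in this analysis

def delta_phi(p1, p2):
    # Azimuthal difference wrapped into [0, pi].
    d = abs(p1 - p2) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def d_ij(pt1, eta1, phi1, pt2, eta2, phi2):
    # Pairwise anti-kt distance: soft particles cluster onto hard cores.
    dr2 = (eta1 - eta2) ** 2 + delta_phi(phi1, phi2) ** 2
    return min(pt1 ** -2, pt2 ** -2) * dr2 / R ** 2

def d_iB(pt):
    # Particle-beam distance; a particle becomes a jet when this is smallest.
    return pt ** -2
```

For a hard particle near a soft one, $d_{ij}$ is far smaller than the soft particle's beam distance, which is why anti-\kt jets grow as regular cones around hard seeds.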
\section{Data and simulated samples}
\label{sec:samples}
The data sample corresponds to a total integrated luminosity of 138\fbinv and was collected over a data-taking period spanning three years: 36.3\fbinv in 2016, 41.5\fbinv in 2017, and 59.8\fbinv in 2018~\cite{CMS-LUM-17-003,LUM-17-004,LUM-18-002}.
To be considered in the analysis, events must satisfy the high-level trigger requirements for at least one of the dielectron or dimuon triggers.
The dielectron trigger requires a leading (subleading) electron with $\pt > 23\,(12)\GeV$, while the dimuon trigger requires a leading (subleading) muon with $\pt > 17\,(8)\GeV$.
The efficiencies of these dilepton triggers, which depend on both the lepton \pt and $\eta$, are measured to be in the ranges 86--97 and 93--95\% for the electron and muon channels, respectively.
Signal samples for \ensuremath{\Pg\Pg\PH}\xspace, VBF, \ensuremath{\PV\PH}\xspace, and \ensuremath{\ttbar\PH}\xspace production, with \ensuremath{\PH\to\zg}\xspace and \ensuremath{\PZ\to\lplm}\xspace ($\Pell = \Pe$, \PGm, or \PGt), are generated at next-to-leading order (NLO) using \POWHEG v2.0~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd,cite:powheg1,cite:powheg2,Luisoni:2013cuh,Hartanto:2015uka}.
Samples are produced for \ensuremath{m_{\PH}}\xspace of 120, 125, and 130\GeV.
The SM Higgs boson production cross sections and branching fractions recommended by the LHC Higgs Working Group~\cite{LHC-YR4} are considered for each mass point.
The SM value of \ensuremath{\mathcal{B}(\hzg)}\xspace is calculated at LO in QCD. The effect of higher-order QCD corrections has been studied~\cite{Spira:1991tj,Bonciani:2015eua,Gehrmann:2015dua}, and found to be small.
The dominant backgrounds, $\ensuremath{\PZ/\PGg^{\ast}}\xspace(\to\ensuremath{\Pellp\Pellm}\xspace)$+\PGg and $\ensuremath{\PZ/\PGg^{\ast}}\xspace(\to\ensuremath{\Pellp\Pellm}\xspace)$+jets, are generated at NLO using the \MGvATNLO v2.6.0 (v2.6.1) generator~\cite{Alwall:2014hca} for 2016 (2017 and 2018) samples.
Events arising from \ttbar production~\cite{Frixione:2007nw} are a relatively minor background and are generated at NLO with \POWHEG v2.0~\cite{cite:powheg1,cite:powheg2}.
The background from vector boson scattering (VBS) production of {\ensuremath{\PZ/\PGg^{\ast}}\xspace}+\PGg pairs, with the \PZ boson decaying to a pair of leptons, is simulated at leading order using the \MGvATNLO generator.
The decay \ensuremath{\PH\to\mpmm}\xspace is considered as a resonant background and is generated for the \ensuremath{\Pg\Pg\PH}\xspace, VBF, \ensuremath{\PV\PH}\xspace, and \ensuremath{\ttbar\PH}\xspace production mechanisms. The SM predicted value of the \ensuremath{\PH\to\mpmm}\xspace branching fraction~\cite{LHC-YR4} is assumed.
The \ensuremath{\Pg\Pg\PH}\xspace production cross section is computed at next-to-next-to-NLO precision in QCD and at NLO in electroweak (EWK) theory~\cite{Anastasiou:2016cez}.
The cross sections for Higgs boson production in the VBF~\cite{PhysRevLett.115.082002} and \ensuremath{\PV\PH}\xspace~\cite{BREIN2004149} mechanisms are calculated at next-to-NLO in QCD, including NLO EWK corrections, while the \ensuremath{\ttbar\PH}\xspace cross section is computed at NLO in QCD and EWK theory~\cite{PhysRevD.68.034022}.
All simulated events are interfaced with \PYTHIA v8.226~(v8.230)~\cite{Sjostrand:2014zea} with the CUETP8M1~\cite{Khachatryan:2015pea} (CP5~\cite{Sirunyan:2019dfx}) underlying event tune for 2016\,(2017--2018) for the fragmentation and hadronization of partons and the internal bremsstrahlung of the leptons.
The NLO parton distribution function (PDF) set, NNPDF v3.0~\cite{nnpdf30}~(NNPDF v3.1)~\cite{nnpdf_new}, is used to produce these samples in 2016\,(2017--2018).
The response of the CMS detector is modeled using the \GEANTfour program~\cite{AGOSTINELLI2003250}.
The simulated events are reweighted to correct for differences between data and simulation in the number of additional \ensuremath{\Pp\Pp}\xspace interactions, trigger efficiencies, selection efficiencies, and efficiencies of isolation requirements for photons, electrons, and muons.
\section{Event selection}\label{sec:preselection}
Events are required to have at least one good primary vertex (PV) with a reconstructed longitudinal position within 24\cm of the geometric center of the detector and a transverse position within 2\cm of the nominal beam collision point.
Lepton candidates are required to have impact parameters with respect to the PV of less than 5\mm in the plane transverse to the beam and less than 10\mm along the beam direction.
This analysis focuses on promptly produced signal processes.
To reduce the contributions from photons or leptons arising from hadron decays within jets, isolation requirements are imposed.
For each photon and lepton candidate, a set of isolation variables is defined.
The quantity \ensuremath{\sum \pt^\text{charged}}\xspace is the scalar sum of the \pt of charged hadrons originating from the PV, and \ensuremath{\sum \pt^\text{neutral}}\xspace and \ensuremath{\sum \pt^{\PGg}}\xspace are the scalar sums of the \pt of neutral hadrons and photons, respectively.
The sums are over all PF candidates within a cone of radius $\DR = \sqrt{\smash[b]{(\Delta\phi)^2 + (\Delta\eta)^2}} = 0.3$ around the photon or lepton direction at the PV.
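The angular distance defining the isolation cone can be sketched directly from the formula above; the function name is illustrative, and the azimuthal difference must be wrapped so that the cone is well defined across the $\phi = \pm\pi$ boundary.

```python
import math

# Minimal sketch of the Delta R distance used for the isolation cone.
def delta_r(eta1, phi1, eta2, phi2):
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi  # wrap across the phi boundary
    return math.hypot(eta1 - eta2, dphi)

# PF candidates with delta_r(...) < 0.3 relative to the photon or lepton
# direction at the PV enter the isolation sums.
```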
Photons are selected with an MVA discriminant that uses, as inputs, the isolation variables \ensuremath{\sum \pt^\text{charged}}\xspace, \ensuremath{\sum \pt^\text{neutral}}\xspace, and \ensuremath{\sum \pt^{\PGg}}\xspace; the ratio of the HCAL energy to the sum of the ECAL and HCAL energies associated with the cluster; and the transverse width of the electromagnetic shower.
The imperfect modeling of the input variables in the simulation is corrected to match the data using a chained quantile regression method~\cite{2012arXiv1211.6581S} based on studies of \ensuremath{\PZ\to\epem}\xspace events.
In this method, a set of boosted decision tree (BDT) discriminants is trained to predict the cumulative distribution function for a given input.
Its prediction is conditional upon the three kinematic variables (\pt, $\eta$, $\phi$) and the global event energy density~\cite{CMS:2018piu}, which are the input variables to the BDTs.
The corrections are then applied to the simulated photons such that the cumulative distribution function of each simulated variable matches that observed in data.
A conversion-safe electron veto~\cite{CMS:2015myp} is applied to avoid misidentifying an electron as a photon.
This veto suppresses events that have a charged particle track with a hit in the inner layer of the pixel detector that points to the photon cluster in the ECAL, unless that track is matched to a conversion vertex.
Photons are required to lie in the geometrical region $\abs{\eta} < 2.5$.
The efficiency of the photon identification is measured from \ensuremath{\PZ\to\epem}\xspace events using the ``tag-and-probe'' technique~\cite{cite:tagandprobe}.
The efficiency is measured to be in the range 76--90 (72--90)\% in the barrel (endcaps) depending on the photon \pt, after including the electron veto~\cite{CMS:2015myp} inefficiencies measured with $\ensuremath{\PZ\to\mpmm}\xspace\PGg$ events, where the photon is produced by final-state radiation (FSR).
Electrons are selected using an MVA discriminant that includes observables sensitive to the shape of the electromagnetic shower in the ECAL, the geometrical and momentum-energy matching between the electron trajectory and the energy of the associated cluster in the ECAL, the presence of bremsstrahlung along the electron trajectory, isolation, and variables that discriminate against electrons originating from photon conversions~\cite{bib:htozz2016}.
The electron MVA discriminant includes the isolation sums described above (\ensuremath{\sum \pt^\text{charged}}\xspace, \ensuremath{\sum \pt^\text{neutral}}\xspace, and \ensuremath{\sum \pt^{\PGg}}\xspace).
Electron candidates must satisfy $\abs{\eta} <2.5$.
The optimized electron selection criteria give an efficiency of approximately 85--93\,(81--92)\% in the barrel (endcaps) for electrons from \PW or \PZ bosons.
Muons are selected from the reconstructed muon track candidates by applying minimal requirements on the track in both the muon system and inner tracker system and by taking into account compatibility with small energy deposits in the calorimeters.
A muon isolation requirement is used to veto potential muon candidates that are produced in the decays of heavy quarks.
We define the muon relative isolation
\begin{linenomath*}
\begin{equation}
\label{eqn:pfiso}
\mathcal{I}^{\PGm} \equiv \Big[ \ensuremath{\sum \pt^\text{charged}}\xspace +
\max\big(0, \ensuremath{\sum \pt^\text{neutral}}\xspace
+
\ensuremath{\sum \pt^{\PGg}}\xspace
- \ensuremath{\pt^{\PGm,\text{PU}}}\xspace \big) \Big]
/ \pt^{\PGm}
\end{equation}
\end{linenomath*}
and require $\mathcal{I}^{\PGm} < 0.35$.
Since the isolation variable is particularly sensitive to energy deposits from pileup interactions, a \ensuremath{\pt^{\PGm,\text{PU}}}\xspace contribution is subtracted, defined as $\ensuremath{\pt^{\PGm,\text{PU}}}\xspace\equiv0.5\sum_{i} \pt^{\text{PU},i}$, where $i$ runs over the momenta of the charged hadron PF candidates not originating from the PV, and the factor of 0.5 corrects for the different fraction of charged and neutral particles in the cone~\cite{CMS:2020ebo}.
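The isolation definition in Eq.~(\ref{eqn:pfiso}), including the pileup subtraction just described, can be written as a short sketch; the variable names are illustrative, not the CMS software API.

```python
# Hedged sketch of the muon relative isolation of Eq. (1).
def muon_rel_iso(sum_pt_charged, sum_pt_neutral, sum_pt_photon,
                 sum_pt_pu_charged, pt_mu):
    # Pileup correction: half the scalar pt sum of charged PF candidates
    # not originating from the PV approximates the neutral pileup energy.
    pt_pu = 0.5 * sum_pt_pu_charged
    # The neutral component is clipped at zero after the subtraction.
    neutral = max(0.0, sum_pt_neutral + sum_pt_photon - pt_pu)
    return (sum_pt_charged + neutral) / pt_mu

# A muon passes the isolation requirement if muon_rel_iso(...) < 0.35.
```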
The combined identification and isolation efficiency for single muons is measured using \ensuremath{\PZ\to\mpmm}\xspace decays and is found to be 87--98\% in the barrel region and 88--98\% in the endcaps.
We accept muons with $\abs{\eta}<2.4$~\cite{bib:htozz2016}.
To suppress backgrounds in which muons are produced in the decays of hadrons and electrons from photon conversions, we require each muon track to have a three-dimensional impact parameter with respect to the PV that is less than four times its uncertainty.
An FSR recovery procedure is performed for the selected muons, following a similar approach to that used in Ref.~\cite{bib:htozz2016}.
An FSR photon is identified and associated to its radiating muon based on the following criteria.
The photon must satisfy $\pt>2\GeV$, $\abs{\eta} < 2.4$, $\ensuremath{\DRgm/p_{\mathrm{T}\PGg}^{2}}\xspace < 0.012 \GeV^{-2}$, $\ensuremath{\DR(\PGg,\PGm)}\xspace < 0.4$, and relative isolation smaller than 1.8, where $\pt^{\text{PU}}$ is excluded from the isolation calculation.
If multiple FSR photons are associated to one muon, the photon with the smallest value of \ensuremath{\DRgm/p_{\mathrm{T}\PGg}^{2}}\xspace is selected.
The FSR recovery procedure improves the \ensuremath{m_{\llg}}\xspace resolution by 1\% in the muon channel.
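The FSR association criteria listed above can be collected into a compact sketch; the dictionary keys and helper names are assumptions for illustration, not CMS data-format names.

```python
import math

# Illustrative sketch of the FSR photon association criteria.
def delta_r(eta1, phi1, eta2, phi2):
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    dphi = min(dphi, 2 * math.pi - dphi)
    return math.hypot(eta1 - eta2, dphi)

def select_fsr_photon(muon, photons):
    candidates = []
    for ph in photons:
        dr = delta_r(ph["eta"], ph["phi"], muon["eta"], muon["phi"])
        if (ph["pt"] > 2.0 and abs(ph["eta"]) < 2.4
                and dr < 0.4
                and dr / ph["pt"] ** 2 < 0.012      # GeV^-2
                and ph["rel_iso_no_pu"] < 1.8):
            candidates.append((dr / ph["pt"] ** 2, ph))
    # If several photons are associated, keep the smallest dR/pt^2.
    return min(candidates, key=lambda c: c[0])[1] if candidates else None
```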
A kinematic fit procedure is used to improve the dilepton mass and \ensuremath{m_{\llg}}\xspace resolutions, following a similar approach to that used in Ref.~\cite{bib:htozz2016}.
A maximum likelihood fit is performed, taking into account the true \PZ boson line shape, obtained from \ensuremath{\PH\to\zg}\xspace simulation, the \pt of each lepton,
and the \pt resolution of each lepton.
The outputs of this fit are the corrected \pt values for each lepton.
The corrected \pt values are used to recalculate the dilepton mass and \ensuremath{m_{\llg}}\xspace.
The improvement in \ensuremath{m_{\llg}}\xspace resolution varies with the data-taking year and is in the range 20--27\% in the electron channel and 10--12\% in the muon channel. The effect of the kinematic fit is larger for the electron channel because of the poorer momentum resolution for electrons compared to muons.
The jets used in dijet-tagged event categories, discussed in Section~\ref{sec:class}, are required to have $\pt > 30\GeV$ and $\abs{\eta}<4.7$ and to be separated by at least 0.4 in $\DR$ from leptons and photons passing the selection requirements described above.
Events are required to contain a photon and at least two same-flavor, opposite-sign leptons ($\Pell = \Pe$ or \PGm) with $\ensuremath{m_{\lplm}}\xspace>50\GeV$.
The latter requirement, although relatively loose, is sufficient to suppress backgrounds that do not contain \PZ boson decays while retaining high signal efficiency.
The particles used to reconstruct the \ensuremath{\PZ\PGg}\xspace candidate system are required to have $\pt > 25\,(15)\GeV$ for the leading (subleading) electron, $\pt > 20\,(10)\GeV$ for the leading (subleading) muon, and $\pt > 15\GeV$ for the photon.
In events with multiple dilepton pairs, the pair with mass closest to the nominal \PZ boson mass~\cite{PDG2020} is selected.
Additional electrons (muons) with \pt greater than 7\,(5)\GeV are also used for categorization, as described in the next section.
The invariant mass of the \ensuremath{\lplm\PGg}\xspace system is required to be in the range $105 < \ensuremath{m_{\llg}}\xspace < 170\GeV$, which provides a broad range around the Higgs boson mass in which to perform the fit.
Events are required to have a photon satisfying $\pt^{\PGg} / \ensuremath{m_{\llg}}\xspace > 0.14$, which suppresses the {\ensuremath{\PZ/\PGg^{\ast}}\xspace}+jets background
without significantly reducing signal efficiency, with minimal bias in the \ensuremath{m_{\llg}}\xspace spectrum.
Each lepton is required to have $\DR>0.4$ with respect to the photon to reject events with FSR.
To further reject FSR from {\ensuremath{\PZ/\PGg^{\ast}}\xspace}+\PGg processes, we require $\ensuremath{m_{\llg}}\xspace + \ensuremath{m_{\lplm}}\xspace> 185$\GeV.
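The candidate requirements described in this section can be summarized in a short sketch; the numerical thresholds are taken from the text, while the function and field names are illustrative.

```python
# Compact sketch of the Z gamma candidate selection (values from the text).
Z_MASS = 91.19  # GeV, nominal Z boson mass (approximate)

def pick_dilepton(pairs):
    # pairs: list of (m_ll, pair_label) for same-flavor, opposite-sign
    # lepton pairs; keep the one closest to the nominal Z mass.
    return min(pairs, key=lambda p: abs(p[0] - Z_MASS))

def passes_zg_selection(m_ll, m_llg, pt_gamma, dr_lep_gamma_min):
    return (m_ll > 50.0                      # suppress non-Z backgrounds
            and 105.0 < m_llg < 170.0        # fit window around m_H
            and pt_gamma / m_llg > 0.14      # suppress Z+jets
            and dr_lep_gamma_min > 0.4       # reject FSR photons
            and m_llg + m_ll > 185.0)        # reject FSR from Z+gamma
```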
\section{Event categorization}
\label{sec:class}
To maximize the sensitivity of the analysis to Higgs boson signals arising from different production mechanisms, each with its own final-state properties, we divide the event sample into mutually exclusive categories.
The signal candidates from the \ensuremath{\PV\PH}\xspace and \ensuremath{\ttbar\PH}\xspace production mechanisms are targeted using a lepton-tagged category, in which at least one electron or muon is present beyond those used to reconstruct the \ensuremath{\PZ\PGg}\xspace system.
The signal candidates from the VBF production mechanism are targeted by identifying events that have an additional dijet system.
A BDT classifier (referred to as the VBF BDT) uses the properties of this dijet system to divide such events into a set of dijet categories.
The VBF BDT discriminant value, transformed such that the VBF signal distribution is uniform, is denoted by \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace.
The signal candidates from the \ensuremath{\Pg\Pg\PH}\xspace production mechanism are targeted with events that do not fall within the lepton-tagged or dijet categories.
A BDT classifier (referred to as the kinematic BDT), trained on a set of kinematic variables, is used to further discriminate between signal and background events, defining a set of untagged categories.
The kinematic BDT discriminant value, transformed such that the total signal distribution is uniform, is denoted by \ensuremath{\mathcal{D_{\text{kin}}}}\xspace.
The procedure used for event categorization is described below.
\begin{enumerate}
\item Events with at least one additional electron (muon) with $\pt>7\,(5)\GeV$ are assigned to the lepton-tagged category.
\item Events not assigned to the lepton-tagged category, but which contain two jets satisfying the selection requirements described in Section~\ref{sec:preselection}, are classified as dijet events, indicative of possible VBF production.
If multiple dijet pairs exist within an event, the two jets with highest \pt are considered.
The subdivision of dijet events into a set of three dijet categories is described later in this section.
A VBF BDT classifier is trained to separate VBF signal events from {\ensuremath{\Pg\Pg\PH}\xspace}+jets and background events.
The following variables are used in the VBF BDT training:
(i)~the difference in pseudorapidity between the two jets;
(ii)~the difference in azimuthal angle between the two jets;
(iii)~the Zeppenfeld variable~\cite{Rainwater:1996ud} ($\eta_{\PGg} - (\eta_{\mathrm{j_1}}+\eta_{\mathrm{j_2}})/2$), where $\eta_{\PGg} ,\eta_{\mathrm{j_1}}$ and $\eta_\mathrm{{j_2}}$ are the pseudorapidities of the photon, leading jet, and subleading jet, respectively;
(iv)~the ratio between the \pt of the $\ensuremath{\PZ\PGg}\xspace{\mathrm{j_1}}{\mathrm{j_2}}$ system and the corresponding scalar sum of momenta
($\abs{\sum_{\PZ,\PGg,\mathrm{j_1},\mathrm{j_2}}{\ptvec}} / \sum_{\PZ,\PGg,\mathrm{j_1},\mathrm{j_2}}p_\mathrm{T}$);
(v)~the difference in azimuthal angle between the dijet system and the \ensuremath{\PZ\PGg}\xspace system;
(vi)~the \pt of each jet;
(vii)~$\pt^{t}$, defined as $\abs{\ptvec^{\ensuremath{\PZ\PGg}\xspace}\times\hat{t}}$, where $\hat{t}=(\ptvec^{\PZ}-\ptvec^{\PGg})/\abs{\ptvec^{\PZ}-\ptvec^{\PGg}}$~\cite{Ackerstaffetal.1998,VESTERINEN2009432}, the \pt of the \ensuremath{\PZ\PGg}\xspace system that is perpendicular to the difference of the three-momenta of the \PZ boson and the photon, a quantity that is strongly correlated with the \pt of the \ensuremath{\lplm\PGg}\xspace system;
(viii)~the $\DR$ separation between each jet and the photon, and (ix) \ensuremath{\mathcal{D_{\text{kin}}}}\xspace, described below. The distribution of \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace is shown in Fig.~\ref{fig:BDT} (left) for both simulated event samples and data.
\item Events not assigned to the lepton-tagged or dijet categories are classified as untagged events.
The subdivision of untagged events into a set of four untagged categories is described later in this section.
A kinematic BDT classifier is trained to distinguish signal events from background events based on the kinematics of the leptons and photon in the \ensuremath{\PZ\PGg}\xspace candidate system, as well as on the measured properties of these physics objects.
The following variables are used in the kinematic BDT training:
(i)~the pseudorapidity of each lepton and the photon;
(ii)~the $\DR$ separation between each lepton and the photon;
(iii)~the \pt to mass ratio of the \ensuremath{\lplm\PGg}\xspace system;
(iv)~the production angle of the \PZ boson in the Higgs boson center-of-mass frame~\cite{HZg_angle1,HZg_angle2};
(v)~the polar and azimuthal decay angles of the leptons in the \PZ boson center-of-mass frame~\cite{HZg_angle1,HZg_angle2};
(vi)~the photon MVA discriminant score; and (vii) the photon energy resolution.
The distribution of \ensuremath{\mathcal{D_{\text{kin}}}}\xspace is shown in Fig.~\ref{fig:BDT} (right) for both simulated samples and data.
\end{enumerate}
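As an illustration of variable (vii), the $\pt^{t}$ quantity can be evaluated from the transverse-momentum two-vectors of the \PZ boson and the photon. The following Python sketch (the function name and inputs are illustrative, not part of the analysis software) computes $\abs{\ptvec^{\ensuremath{\PZ\PGg}\xspace}\times\hat{t}}$ via the magnitude of the two-dimensional cross product:

```python
import numpy as np

def pt_t(pt_z, pt_gamma):
    """p_T^t = |p_T(Z+gamma) x t_hat|, with t_hat the unit vector along
    p_T(Z) - p_T(gamma); inputs are 2D transverse-momentum vectors."""
    pt_zg = pt_z + pt_gamma           # p_T vector of the Z+gamma system
    t = pt_z - pt_gamma
    t_hat = t / np.linalg.norm(t)     # unit "thrust-like" axis
    # magnitude of the 2D cross product: component of p_T(Z+gamma)
    # perpendicular to t_hat
    return abs(pt_zg[0] * t_hat[1] - pt_zg[1] * t_hat[0])
```

For back-to-back-like configurations this quantity is small, which is what makes it a useful discriminant against the steeply falling background.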
\begin{figure}[tbp]
\centering
\includegraphics[width=0.47\textwidth]{Figure_002-a.pdf}
\includegraphics[width=0.47\textwidth]{Figure_002-b.pdf}
\caption{The \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace (left) and \ensuremath{\mathcal{D_{\text{kin}}}}\xspace (right) distributions for signal, simulated background, and data.
The \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace distribution includes only dijet-tagged events, and the \ensuremath{\mathcal{D_{\text{kin}}}}\xspace distribution includes only untagged events.
The sum of contributions from all signal production mechanisms is shown by the blue line, while the contribution from only the VBF mechanism is shown by the red line. Both contributions are scaled by a factor of 10.
The uncertainty band incorporates all statistical and systematic uncertainties in the expected background.
The dashed lines indicate the boundaries for the dijet and untagged categories.
The gray shaded region in the \ensuremath{\mathcal{D_{\text{kin}}}}\xspace distribution is excluded from the analysis.}\label{fig:BDT}
\end{figure}
The subdivision of dijet and untagged events into categories is based on the VBF BDT and kinematic BDT discriminants.
Category boundaries are defined as mutually exclusive regions of \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace and \ensuremath{\mathcal{D_{\text{kin}}}}\xspace.
The locations of the boundaries defining the categories are optimized by iterating over all possible combinations of boundaries using $\sum_{i=1}^{n} {S_i}^2/B_i$ as a figure-of-merit.
The variables $S_i$ and $B_i$ represent the number of expected signal and background events in the $i$th category, and $n$ is the total number of categories.
We consider categories with boundaries corresponding to signal efficiencies between 0 and 100\% in 10\% increments.
The optimization procedure results in three dijet categories for the VBF BDT and four untagged categories for the kinematic BDT.
The lowest \ensuremath{\mathcal{D_{\text{kin}}}}\xspace boundary corresponds to the 10\% point in integrated signal efficiency, and events below the 10\% point are excluded from the analysis to preserve the stability of the background model.
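The boundary optimization described above can be illustrated with a brute-force scan. In the following Python sketch (function and variable names are illustrative; the actual analysis scans BDT-score boundaries in 10\% signal-efficiency steps), the discriminant is pre-binned so that each bin holds an equal share of signal, and all boundary placements are enumerated to maximize $\sum_i S_i^2/B_i$:

```python
from itertools import combinations
import numpy as np

def optimize_boundaries(s_counts, b_counts, n_cat):
    """Brute-force scan of category boundaries on a discriminant binned
    in equal-signal-efficiency steps, maximizing sum_i S_i^2 / B_i.
    s_counts/b_counts: per-bin signal and background yields."""
    nbins = len(s_counts)
    best, best_edges = -np.inf, None
    # interior boundaries chosen among the internal bin edges
    for cuts in combinations(range(1, nbins), n_cat - 1):
        edges = (0,) + cuts + (nbins,)
        fom = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            s = sum(s_counts[lo:hi])
            b = sum(b_counts[lo:hi])
            if b <= 0:            # skip degenerate categories
                fom = -np.inf
                break
            fom += s * s / b
        if fom > best:
            best, best_edges = fom, edges
    return best_edges, best
```

With a steeply rising background across the bins, the scan naturally isolates a small, nearly background-free category, mimicking the behavior of the highest-purity categories in the analysis.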
The full categorization and optimization procedure results in the following eight mutually exclusive categories: one lepton-tagged category, three dijet categories, and four untagged categories.
The category definitions are summarized in Table~\ref{tab:category}.
\begin{table}[tbp]
\centering
\topcaption{Summary of the category definitions. The lepton-tagged category requires at least one additional electron or muon. Dijet categories are defined by regions of \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace and untagged categories are defined by regions of \ensuremath{\mathcal{D_{\text{kin}}}}\xspace.}
\cmsTable{
\begin{tabular}{c@{\hskip 0.3in}ccc@{\hskip 0.3in}cccc}
\hline
Lepton & Dijet 1 & Dijet 2 & Dijet 3& Untagged 1 & Untagged 2 & Untagged 3 & Untagged 4 \\
\hline
\multirow{2}{*}{$\geq$ 1 $\Pe,\mu$} &\multicolumn{3}{c}{\ensuremath{\mathcal{D}_{\text{VBF}}}\xspace selection}&\multicolumn{4}{c}{\ensuremath{\mathcal{D_{\text{kin}}}}\xspace selection} \\
&0.5--1.0&0.3--0.5&0.0--0.3&0.9--1.0&0.8--0.9&0.4--0.8&0.1--0.4\\
\hline
\end{tabular}}\label{tab:category}
\end{table}
Table~\ref{tab:yield} lists the event categories used in the analysis, along with the expected event yields for an $\ensuremath{m_{\PH}}\xspace=125.38\GeV$ signal arising from \ensuremath{\Pg\Pg\PH}\xspace, VBF, \ensuremath{\PV\PH}\xspace, and \ensuremath{\ttbar\PH}\xspace production, as well as the resonant background contribution from FSR from \ensuremath{\PH\to\mpmm}\xspace, which is 3--8\% of the \ensuremath{\PH\to\zg}\xspace yield, depending on category.
Event yields from other Higgs boson backgrounds such as $\PH\to\PGtp\PGtm$ and \ensuremath{\PH\to\PGg\PGg}\xspace are estimated to be below the 1\% level relative to the \ensuremath{\PH\to\zg}\xspace yield and are neglected.
The dominant contribution to the signal yield is generally from \ensuremath{\Pg\Pg\PH}\xspace production, except in the lepton-tagged category, in which \ensuremath{\PV\PH}\xspace and \ensuremath{\ttbar\PH}\xspace events dominate, and in the dijet 1 category, in which VBF events dominate.
The categorization procedure increases the sensitivity of the analysis by 24\% with respect to an inclusive event selection.
The product of signal acceptance and efficiency for $\ensuremath{\Pp\Pp}\xspace\to\ensuremath{\PH\to\zg}\xspace\to\ensuremath{\lplm\PGg}\xspace$ for $m_{\PH} =125.38\GeV$ is 23\,(29)\% in the electron (muon) channel.
\begin{table}[tbp]
\centering
\topcaption{Yields and approximate significance ($S/\sqrt{B}$) for each category, where $S$ and $B$ are the expected number of signal and background events in the narrowest \ensuremath{m_{\llg}}\xspace interval containing 95\% of the expected signal distribution.
Also shown is the \ensuremath{m_{\llg}}\xspace resolution, computed using the narrowest interval containing 68\% of the expected signal distribution.
}
\cmsTable{
\begin{tabular}{c @{\hskip 0.2in}ccccccccc}
\hline
138\fbinv & Lepton & & Dijet 1 & Dijet 2 & Dijet 3& Untagged 1 & Untagged 2 & Untagged 3 & Untagged 4\\\hline
SM signal && & & & & & & & \\
yield && & & & & & & & \\
\multirow{2}{*}{\ensuremath{\Pg\Pg\PH}\xspace} & \multirow{2}{*}{0.51} & \ensuremath{\Pep\Pem}\xspace & 1.10 & 1.62 & 9.44 & 6.89 & 7.35 & 29.8 & 22.5 \\
& & \ensuremath{\PGmp\PGmm}\xspace & 1.41 & 2.05 & 12.1 & 8.52 & 9.17 & 38.0 & 29.0 \\ [\cmsTabSkip]
\multirow{2}{*}{VBF} & \multirow{2}{*}{0.09} & \ensuremath{\Pep\Pem}\xspace & 1.94 & 0.76 & 1.13 & 0.71 & 0.35 & 0.92 & 0.51 \\
& & \ensuremath{\PGmp\PGmm}\xspace & 2.40 & 0.97 & 1.43 & 0.89 & 0.43 & 1.18 & 0.65 \\ [\cmsTabSkip]
\multirow{2}{*}{$\ensuremath{\PV\PH}\xspace+\ensuremath{\ttbar\PH}\xspace$} & \multirow{2}{*}{1.84} & \ensuremath{\Pep\Pem}\xspace & 0.04 & 0.13 & 1.89 & 0.31 & 0.17 & 0.45 & 0.27 \\
& & \ensuremath{\PGmp\PGmm}\xspace & 0.05 & 0.16 & 2.36 & 0.39 & 0.21 & 0.57 & 0.33 \\ [\cmsTabSkip]
SM resonant &&&&&&&&&\\
background &&&&&&&&&\\
\ensuremath{\PH\to\mpmm}\xspace & 0.14& \ensuremath{\PGmp\PGmm}\xspace &0.27 & 0.27 & 0.43& 0.62& 0.49 & 2.02& 1.78\\ \hline
Mass resolution & \multirow{2}{*}{2.12} & \ensuremath{\Pep\Pem}\xspace & 1.91 & 2.06 & 2.15 & 1.80 & 1.97 & 2.12 & 2.33 \\
(\GeVns) & & \ensuremath{\PGmp\PGmm}\xspace & 1.52 & 1.61 & 1.72 & 1.37 & 1.42 & 1.62 & 1.83 \\ [\cmsTabSkip]
Data yield & 1485 && 168 & 589 & 11596& 1485 & 1541 & 2559 & 17608 \\ [\cmsTabSkip]
$S/\sqrt{B}$ & 0.06 && 0.54 & 0.24 & 0.26& 0.45 & 0.35 & 0.53 & 0.30 \\
\hline
\end{tabular}
}
\label{tab:yield}
\end{table}
\section{Statistical procedure}
\label{sec:modeling}
The signal search is performed using a simultaneous fit to the \ensuremath{m_{\llg}}\xspace distribution in the eight event categories described in Section~\ref{sec:class}.
Figures~\ref{fig:3} and \ref{fig:4} show the \ensuremath{m_{\llg}}\xspace distributions of the data events in each category.
The expected SM \ensuremath{\PH\to\zg}\xspace distributions, scaled by a factor of 10, are also shown.
The fit uses a binned maximum likelihood method in the range $105 < \ensuremath{m_{\llg}}\xspace < 170\GeV$.
In each category, a likelihood function is defined using analytic models of signal and background events, along with nuisance parameters for systematic uncertainties.
The combined likelihood function is the product of the likelihood functions in each category.
The parameter of interest in the maximum likelihood fit is the signal strength $\mu$, defined as the product of the cross section and the branching fraction [$\ensuremath{\sigppH\brzg}\xspace$], relative to the SM prediction. The fit results shown in Figs.~\ref{fig:3} and \ref{fig:4} are discussed further in Section~\ref{sec:results}.
The signal model is defined as the sum of Crystal Ball~\cite{CB-Oreglia} and Gaussian functions.
The signal shape parameters are determined by fitting this model to simulated signal events in each category.
To account for differences in mass resolution, these fits are performed separately for the event samples used to model each data-taking year, as well as for muon and electron channel events.
This results in six signal models that are summed to give the total signal expectation in a given category.
Table~\ref{tab:yield} gives these mass resolutions for \ensuremath{\PH\to\zg}\xspace, summed over the three years, as obtained from simulation.
The mass resolutions range from 1.4 to 2.3\GeV, depending on the category.
Separate sets of parameter values are found by fitting simulated events with \ensuremath{m_{\PH}}\xspace of 120, 125, and 130\GeV.
Using linear interpolation, parameter values are also determined at 1\GeV intervals in \ensuremath{m_{\PH}}\xspace from 120 to 130\GeV, as well as at 125.38\GeV.
In the fit to data, the mean and resolution parameters are allowed to vary subject to constraints from several systematic uncertainties, described in Section~\ref{sec:systematics}, while the remaining parameters are held fixed.
The resonant background contribution from \ensuremath{\PH\to\mpmm}\xspace is also modeled with the sum of Crystal Ball and Gaussian functions, using an analogous procedure.
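The interpolation of signal-shape parameters between the simulated mass points can be sketched as follows; the fitted parameter values here are invented for illustration only:

```python
import numpy as np

# Hypothetical shape-parameter values fitted at the three simulated
# mass points (e.g. the Gaussian mean of the signal model, in GeV).
mass_points = np.array([120.0, 125.0, 130.0])
fitted_mean = np.array([119.8, 124.9, 129.9])

# Linear interpolation to intermediate mass hypotheses,
# including m_H = 125.38 GeV.
grid = np.append(np.arange(120.0, 131.0, 1.0), 125.38)
interp_mean = np.interp(grid, mass_points, fitted_mean)
```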
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_003-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_003-b.pdf}\\
\includegraphics[width=0.45\textwidth]{Figure_003-c.pdf}
\includegraphics[width=0.45\textwidth]{Figure_003-d.pdf}\\
\caption{Fits to the \ensuremath{m_{\llg}}\xspace data distribution in the lepton-tagged (upper left), dijet 1 (upper right), dijet 2 (lower left), and dijet 3 (lower right) categories.
In the upper panel, the red solid line shows the result of a signal-plus-background fit to the given category.
The red dashed line shows the background component of the fit.
The green and yellow bands represent the 68 and 95\% \CL uncertainties in the fit.
Also plotted is the expected SM signal, scaled by a factor of 10.
In the lower panel, the data minus the background component of the fit is shown.}\label{fig:3}
\end{figure*}
\begin{figure*}[tbp]
\centering
\includegraphics[width=0.45\textwidth]{Figure_004-a.pdf}
\includegraphics[width=0.45\textwidth]{Figure_004-b.pdf}\\
\includegraphics[width=0.45\textwidth]{Figure_004-c.pdf}
\includegraphics[width=0.45\textwidth]{Figure_004-d.pdf}
\caption{Fits to the \ensuremath{m_{\llg}}\xspace data distribution in the untagged 1 (upper left), untagged 2 (upper right), untagged 3 (lower left), and untagged 4 (lower right) categories.
In the upper panel, the red solid line shows the result of a signal-plus-background fit to the given category.
The red dashed line shows the background component of the fit.
The green and yellow bands represent the 68 and 95\% \CL uncertainties in the fit.
Also plotted is the expected SM signal, scaled by a factor of 10.
In the lower panel, the data minus the background component of the fit is shown.}\label{fig:4}
\end{figure*}
The background model in each category is obtained from the data using the discrete profiling method~\cite{Dauncey:2014xga}.
This technique accounts for the systematic uncertainty associated with choosing an analytic functional form to fit the background.
The background function is chosen from a set of candidate functions via a discrete nuisance parameter in the fit.
These functions are derived from the data in each category, with muon and electron events from all data-taking years combined.
As shown in Figs.~\ref{fig:3} and \ref{fig:4}, the \ensuremath{m_{\llg}}\xspace spectrum consists of a turn-on peak around 110--115\GeV, driven by the photon \pt selection, and a monotonically falling spectrum in the higher \ensuremath{m_{\llg}}\xspace region.
These features are modeled by the convolution of a Gaussian function, which is used to describe the lower-mass (turn-on) portion of the spectrum, with a step function that is multiplied by one of several functions, which are used to describe the higher-mass (tail) portion of the spectrum.
The complete function has the general form:
\begin{linenomath*}
\begin{equation}
\mathcal{F}(\ensuremath{m_{\llg}}\xspace; \mu_{\mathrm{G}}, \sigma_{\mathrm{G}}, s, \vec{\alpha}) = \int_{m_{\text{min}}}^{m_{\text{max}}}\mathcal{N}(\ensuremath{m_{\llg}}\xspace-t;\mu_{\mathrm{G}},\sigma_{\mathrm{G}})\Theta(t; s)f(t; \vec{\alpha})\rd t,
\end{equation}
\end{linenomath*}
where $t$ is the integration variable for the convolution, $m_{\text{min}}=105\GeV$ and $m_{\text{max}}=170\GeV$ are the limits of integration, $\mathcal{N}(\ensuremath{m_{\llg}}\xspace-t;\mu_{\mathrm{G}},\sigma_{\mathrm{G}})$ is the Gaussian function with mean $\mu_{\mathrm{G}}$ and standard deviation $\sigma_{\mathrm{G}}$, $\Theta(t; s)$ is the Heaviside step function with step location $s$, and $f(t; \vec{\alpha})$ is the falling spectrum function with shape parameters $\vec{\alpha}$.
The falling spectrum function families considered include exponential functions, power law functions, Laurent series, and Bernstein polynomials.
Functions from each family are selected based on a chi-squared goodness-of-fit criterion ($p\text{-value} > 0.01$) as well as an $\mathcal{F}$-test~\cite{Fisher:1922saa}, which determines the highest order function to be used.
A penalty term is added to the final likelihood to take into account the number of parameters in each function, ensuring that higher-order functions will not be preferred a priori.
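A numerical sketch of this background parametrization, taking the exponential family as one representative of the candidate functions, is given below; the implementation and parameter values are illustrative, not taken from the analysis code:

```python
import numpy as np

def background_model(m, mu_g, sigma_g, s, alpha, m_min=105.0, m_max=170.0):
    """Numerical evaluation of F(m_llg): a Gaussian (mean mu_g, width
    sigma_g) convolved with a step function at t = s times a falling
    spectrum, here an exponential f(t) = exp(-alpha * t)."""
    t, dt = np.linspace(m_min, m_max, 20001, retstep=True)
    gauss = np.exp(-0.5 * ((m - t - mu_g) / sigma_g) ** 2) / (
        np.sqrt(2.0 * np.pi) * sigma_g)
    step = (t >= s).astype(float)
    falling = np.exp(-alpha * t)
    return float(np.sum(gauss * step * falling) * dt)  # Riemann sum
```

Well above the step location, the Gaussian smearing is negligible and the model reduces to the falling spectrum itself, while near the step it produces the smooth turn-on seen in the \ensuremath{m_{\llg}}\xspace distributions.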
The set of profiled background functions in each category is checked to ensure that any bias introduced into the fit results is small and that the associated \CL intervals have the appropriate frequentist coverage.
For each function, pseudo-data sets are generated under a fixed signal strength hypothesis.
A signal-plus-background fit is performed on each pseudo-data set, with the choice of background function profiled.
The average bias, expressed as a fraction of the signal strength uncertainty, is typically 2--10\%, depending on the category and choice of function, and the corresponding coverage for the 68\% \CL interval is 66--69\%.
The best fit value of the signal strength, $\hat{\mu}$, is determined by maximizing the likelihood, accounting for all nuisance parameters.
The uncertainty in $\hat{\mu}$ and the observed significance are derived from the profile likelihood test statistic~\cite{cite:l3},
\begin{linenomath*}
\begin{equation}
q(\mu) = -2\ln\Bigg(\frac{\mathcal{L}(\mu, \hat{\vec{\theta_{\mu}}})}{\mathcal{L}(\hat{\mu},\hat{\vec{\theta}})}\Bigg),
\end{equation}
\end{linenomath*}
where $\vec{\theta}$ is the set of nuisance parameters, $\hat{\mu}$ and $\hat{\vec{\theta}}$ are unconditional best fit values, and $\hat{\vec{\theta_{\mu}}}$ is the set of conditional best fit values of the nuisance parameters for a given value of $\mu$.
An upper limit on $\mu$ is determined using the profile likelihood statistic with the \CLs criterion.
The asymptotic approximation for the sampling distribution of $q(\mu)$ is assumed in the derivation of these results~\cite{cite:l1,cite:l2,cite:l3,Cowan:2010js}.
The expected significance under the SM hypothesis and the expected upper limits under the background-only hypothesis are also reported.
These are obtained by fitting to the corresponding Asimov data sets~\cite{Cowan:2010js}.
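For intuition, the profile likelihood test statistic can be evaluated in closed form for a single-bin Poisson counting experiment with known background, a toy case with no nuisance parameters so that the conditional and unconditional fits are trivial (the function and its inputs are illustrative):

```python
import numpy as np

def q_mu(mu, n_obs, s_exp, b_exp):
    """Profile-likelihood test statistic q(mu) for n ~ Poisson(mu*s + b)
    with known background b and no nuisance parameters."""
    def nll(m):
        lam = m * s_exp + b_exp
        return lam - n_obs * np.log(lam)  # negative log-likelihood (up to const.)
    mu_hat = max(0.0, (n_obs - b_exp) / s_exp)  # analytic MLE, bounded at 0
    return 2.0 * (nll(mu) - nll(mu_hat))
```

By construction $q(\hat{\mu})=0$, and larger values of $q(\mu)$ indicate greater incompatibility of the data with the hypothesized $\mu$.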
In addition, a combined maximum likelihood fit with the CMS measurement~\cite{CMS:2021kom} of \ensuremath{\PH\to\PGg\PGg}\xspace using the same data sample is performed to determine the ratio \ensuremath{\brzg/\brgg}\xspace.
The \ensuremath{\PH\to\PGg\PGg}\xspace analysis obtained a signal strength for $\ensuremath{\sigma(\pp\to\PH)}\xspace\ensuremath{\mathcal{B}(\hgg)}\xspace$ of $1.12\pm0.09$.
In this combined fit, the branching fraction \ensuremath{\mathcal{B}(\hgg)}\xspace is an additional free parameter.
The uncertainty in the measured ratio of the two branching fractions is dominated by statistical uncertainty.
Common sources of theoretical and experimental uncertainty in the two measurements, described in the next section, are treated as correlated in the fit.
The combination is performed at $\ensuremath{m_{\PH}}\xspace = 125.38\GeV$, and the discrete profiling method is used for the background modeling in both cases.
\section{Systematic uncertainties}
\label{sec:systematics}
The uncertainties associated with the choice of background shape are incorporated into the fit to the data through the use of the discrete profiling method.
They are, therefore, reflected in the statistical uncertainties obtained from the fit.
The systematic uncertainties, affecting either the normalization or the shape of the signal expectation, are listed below, and the numerical values are summarized in Table~\ref{tab:syst}, which also indicates whether the effect is correlated between the data-taking periods.
\begin{itemize}
\item Theoretical cross section calculations: These include the effects of the choice of PDFs, the value of the strong coupling constant (\alpS), and the effect of missing higher orders in the perturbative cross section calculations, evaluated from variations of the renormalization and factorization scales (\ensuremath{\mu_{\mathrm{R}}}\xspace, \ensuremath{\mu_{\mathrm{F}}}\xspace)~\cite{cite:cs1,cite:cs2,Butterworth:2015oua}. The uncertainties are treated as independent for each Higgs boson production mechanism. The uncertainty in \ensuremath{\mathcal{B}(\hzg)}\xspace is also considered~\cite{LHC-YR4}.
\item Underlying event and parton shower modeling: The uncertainty associated with the choice and tuning of the generator is estimated with dedicated samples which are generated by varying the parameters of the tune used to generate the original signal samples. The uncertainties are treated as correlated for the 2017 and 2018 samples, which use the CP5 tune~\cite{Sirunyan:2019dfx}, while being uncorrelated with the 2016 sample, which uses the CUETP8M1 tune~\cite{Khachatryan:2015pea}.
\item Integrated luminosity: The integrated luminosities for the 2016, 2017, and 2018 data-taking years have uncertainties of 1.2\%, 2.3\%, and 2.5\%~\cite{CMS-LUM-17-003,LUM-17-004,LUM-18-002}, respectively, corresponding to an overall uncertainty for the 2016--2018 period of 1.6\%, the improvement in precision reflecting the (uncorrelated) time evolution of some systematic effects.
\item L1 trigger: During the 2016 and 2017 data-taking periods, a gradual shift in the timing of the inputs of the ECAL L1 trigger in the $\abs{\eta} > 2.4$ region led to a specific inefficiency. A correction of approximately 1\% is applied to the simulation along with the corresponding uncertainty in the inefficiency measurement.
\item Trigger: Uncertainties are evaluated for the corrections applied to the simulation to match the trigger efficiencies measured in data with \ensuremath{\PZ\to\epem}\xspace and \ensuremath{\PZ\to\mpmm}\xspace events.
\item Photon identification and isolation: Uncertainties are evaluated for the corrections applied to the simulation to match the selection efficiencies in data measured with \ensuremath{\PZ\to\epem}\xspace events.
\item Lepton identification and isolation: Uncertainties are evaluated for the corrections applied to the simulation to match electron and muon selection efficiencies in data measured with \ensuremath{\PZ\to\epem}\xspace and \ensuremath{\PZ\to\mpmm}\xspace events.
\item Pileup modeling: The uncertainty in the description of the pileup in the signal simulation is estimated by varying the total inelastic cross section by $\pm4.6$\%~\cite{Sirunyan:2018nqx}.
\item Kinematic BDT: The uncertainties in the photon and lepton energy and the correction of the photon MVA discriminant are propagated to \ensuremath{\mathcal{D_{\text{kin}}}}\xspace. Changes in \ensuremath{\mathcal{D_{\text{kin}}}}\xspace cause the migration of signal events across category boundaries.
\item VBF BDT: The uncertainties in the jet energy and the uncertainty in \ensuremath{\mathcal{D_{\text{kin}}}}\xspace are propagated to \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace. Changes in \ensuremath{\mathcal{D}_{\text{VBF}}}\xspace cause the migration of signal events across category boundaries.
\item Photon energy scale and resolution:
The photon energy in the simulation is varied due to the ECAL energy scale and resolution uncertainties, and the effects on the signal mean and resolution parameters are propagated to the fits.
\item Lepton momentum scale and resolution:
The lepton momentum in the simulation is varied due to the lepton momentum scale and resolution uncertainties, and the effects on signal mean and resolution parameters are propagated to the fits.
\end{itemize}
In the \ensuremath{\brzg/\brgg}\xspace measurement, the common sources of theoretical and systematic uncertainty in the two analyses are treated as correlated in the fit.
These are the theoretical uncertainties in the Higgs production cross section calculations, and the systematic uncertainties in the underlying event and parton shower modeling, the integrated luminosity, and the L1 trigger inefficiency.
The remaining uncertainties are treated as uncorrelated.
\begin{table}[tbp]
\centering
\topcaption{Sources of systematic uncertainty affecting the simulated signal. The normalization effect on the expected yield, or the effect on the signal shape parameters, is given as indicated, with the values averaged over all event categories. The third column shows the uncertainties that have a correlated effect across the three data-taking periods.}
\begin{tabular}{l@{\hskip 0.3in}c@{\hskip 0.3in}c}
\hline
Sources & Uncertainty (\%) & Year-to-year correlation \\ \hline
\multicolumn{3}{c}{\textit{Normalization}}\\
Theoretical & & \\
-- \ensuremath{\mathcal{B}(\hzg)}\xspace & 5.7 & Yes\\
-- \ensuremath{\Pg\Pg\PH}\xspace cross section (\ensuremath{\mu_{\mathrm{F}}}\xspace, \ensuremath{\mu_{\mathrm{R}}}\xspace) & 3.9 & Yes\\
-- \ensuremath{\Pg\Pg\PH}\xspace cross section (\alpS)& 2.6& Yes\\
-- \ensuremath{\Pg\Pg\PH}\xspace cross section (PDF)& 1.9 &Yes\\
-- VBF cross section (\ensuremath{\mu_{\mathrm{F}}}\xspace, \ensuremath{\mu_{\mathrm{R}}}\xspace)& 0.4 & Yes\\
-- VBF cross section (\alpS)& 0.5 & Yes\\
-- VBF cross section (PDF)& 2.1 &Yes\\
-- \ensuremath{\PW\PH}\xspace cross section (\ensuremath{\mu_{\mathrm{F}}}\xspace, \ensuremath{\mu_{\mathrm{R}}}\xspace) & $^{+0.6}_{-0.7}$ & Yes\\
-- \ensuremath{\PW\PH}\xspace cross section (PDF)& 1.7 & Yes\\
-- \ensuremath{\PZ\PH}\xspace cross section (\ensuremath{\mu_{\mathrm{F}}}\xspace, \ensuremath{\mu_{\mathrm{R}}}\xspace) & $^{+3.8}_{-3.1}$ &Yes \\
-- \ensuremath{\PZ\PH}\xspace cross section (PDF)& 1.3 & Yes\\
-- \ensuremath{\PW\PH}\xspace/\ensuremath{\PZ\PH}\xspace cross section (\alpS) & 0.9 &Yes\\
-- \ensuremath{\ttbar\PH}\xspace cross section (\ensuremath{\mu_{\mathrm{F}}}\xspace, \ensuremath{\mu_{\mathrm{R}}}\xspace)& $^{+5.8}_{-9.2}$ & Yes\\
-- \ensuremath{\ttbar\PH}\xspace cross section (\alpS)& 2.0 & Yes \\
-- \ensuremath{\ttbar\PH}\xspace cross section (PDF)& 3.0 &Yes \\
Underlying event and parton shower & 3.7--4.4 & Partial \\
Integrated luminosity & 1.2--2.5 & Partial \\
L1 trigger & 0.1--0.4 & No \\
Trigger & & \\
-- Electron channel &0.9--1.9 & No \\
-- Muon channel &0.1--0.4 & No \\
Photon identification and isolation & 0.2--5.0 & Yes \\
Lepton identification and isolation & & \\
-- Electron channel &0.5--0.7 & Yes \\
-- Muon channel &0.3--0.4 & Yes \\
Pileup & 0.4--1.0 & Yes \\
Kinematic BDT & 2.5--3.7 & Yes\\
VBF BDT & 5.9--14.0 & Yes\\
\multicolumn{3}{c}{\textit{Shape parameters}}\\
Photon energy and momentum && \\
-- Signal mean & 0.1--0.4 & Yes \\
-- Signal resolution & 3.1--5.9 & Yes \\
Lepton energy and momentum && \\
-- Signal mean & 0.007 & Yes \\
-- Signal resolution & 0.007--0.010 & Yes \\\hline
\end{tabular}\label{tab:syst}
\end{table}
\section{Results}\label{sec:results}
Figure~\ref{fig:SignalBackground} shows the signal-plus-background fit to the data and the corresponding distribution after background subtraction for the sum of all categories.
Each category is weighted by the factor $S/(S+B)$, where $S$ is the measured signal yield and $B$ is the background yield in the narrowest mass interval containing 95\% of the signal distribution.
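This weighting can be sketched as follows; the histograms and yields are invented for illustration, and in the analysis $S$ and $B$ are evaluated in the window containing 95\% of the signal:

```python
import numpy as np

def weighted_sum(hists, s_yields, b_yields):
    """Combine per-category m(llg) histograms with S/(S+B) weights."""
    w = np.array(s_yields) / (np.array(s_yields) + np.array(b_yields))
    return sum(wi * h for wi, h in zip(w, np.asarray(hists, dtype=float)))
```

Categories with higher signal purity therefore contribute more strongly to the combined distribution, which is why the weighted plot emphasizes the most sensitive categories.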
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.5\textwidth]{Figure_005.pdf}
\caption{Sum over all categories of the data points and signal-plus-background model after the simultaneous fit to each \ensuremath{m_{\llg}}\xspace distribution.
The contribution from each category is weighted by $S/(S+B)$, as defined in the text.
In the upper panel, the red solid line shows the signal-plus-background fit. The red dashed line shows the background component of the fit. The green and yellow bands represent the 68 and 95\% \CL uncertainties in the fit. Also plotted is the expected SM signal weighted by $S/(S+B)$ and scaled by a factor of 10. In the lower panel, the data minus the background component of the fit is shown.}\label{fig:SignalBackground}
\end{figure*}
The best fit value of the signal strength is $\ensuremath{2.4^{+0.8}_{-0.9}}\stat\,^{+0.3}_{-0.2}\syst$ at $\ensuremath{m_{\PH}}\xspace=125.38\GeV$.
The corresponding measured value of \ensuremath{\sigppH\brzg}\xspace is $\ensuremath{0.21^{+0.07}_{-0.08}\stat\,^{+0.03}_{-0.02}\syst}\xspace\unit{pb}$.
This measurement is consistent with the SM prediction of $0.09 \pm 0.01\unit{pb}$ at the 1.6\xspace standard deviation level.
Figure~\ref{fig:lim-combo125} shows the signal strengths obtained for each category separately, corresponding to the fit results shown in Figs.~\ref{fig:3} and~\ref{fig:4}, as well as from simultaneous fits to the dijet categories, the untagged categories, and all categories combined.
Among the eight categories, dijet 1 is the most sensitive.
A category compatibility $p$-value, under the hypothesis of a common signal strength in all categories, is calculated from the likelihood ratio between the nominal combined fit, in which all categories have the same signal strength parameter, and a separate fit, in which each category has its own signal strength parameter.
This $p$-value is found to be 0.02\xspace, corresponding to 2.3\xspace standard deviations, and is driven by the dijet 3 category, which has a signal strength of $\hat{\mu}=\ensuremath{12.3^{+3.7}_{-3.5}}$.
The observed (expected) local significance is 2.7\,(1.2) standard deviations.
Upper limits on $\mu$ are calculated at 1\GeV intervals in the mass range of $120 < \ensuremath{m_{\llg}}\xspace < 130\GeV$ and at $\ensuremath{m_{\PH}}\xspace=125.38\GeV$, as shown in Fig.~\ref{fig:lim}.
The observed (expected) limit at 95\% \CL relative to the SM prediction for $\ensuremath{m_{\PH}}\xspace=125.38\GeV$ is 4.1\,(1.8).
The measured value of \ensuremath{\brzg/\brgg}\xspace from the combined fit with the \ensuremath{\PH\to\PGg\PGg}\xspace analysis is \ensuremath{1.5^{+0.7}_{-0.6}}\xspace.
This measurement is consistent with the SM prediction for the ratio at the 1.5\xspace standard deviation level.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{Figure_006.pdf}
\caption{
Observed signal strength ($\mu$) for an SM Higgs boson with $\ensuremath{m_{\PH}}\xspace=125.38\GeV$.
The labels ``untagged combined,'' ``dijet combined,'' and ``combined'' represent the results obtained from simultaneous fits of the untagged categories, dijet categories, and full set of categories, respectively.
The black solid line shows $\mu=1$, and the red dashed line shows the best fit value $\hat{\mu}=\ensuremath{2.4\pm0.9}$ of all categories combined.
The category compatibility $p$-value, described in the text, is 0.02\xspace, corresponding to 2.3\xspace standard deviations.}\label{fig:lim-combo125}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{Figure_007.pdf}
\caption{
Upper limit (95\%~\CL) on the signal strength ($\mu$) relative to the SM prediction, as a function of the assumed value of the Higgs boson mass used in the fit.
\label{fig:lim}}
\end{figure}
\section{Summary}
\label{sec:summary}
A search is performed for a standard model (SM) Higgs boson decaying into a lepton pair (\ensuremath{\Pep\Pem}\xspace or \ensuremath{\PGmp\PGmm}\xspace) and a photon with $\ensuremath{m_{\lplm}}\xspace>50\GeV$.
The analysis is performed using a sample of proton-proton (\ensuremath{\Pp\Pp}\xspace) collision data at $\sqrt{s}=13\TeV$, corresponding to an integrated
luminosity of 138\fbinv.
The main contribution to this final state is from Higgs boson decays to a \PZ boson and a photon ($\ensuremath{\PH\to\zg}\xspace\to\ensuremath{\lplm\PGg}\xspace$).
The best fit value of the signal strength $\hat{\mu}$ for $\ensuremath{m_{\PH}}\xspace=125.38\GeV$ is $\hat{\mu}=\ensuremath{2.4^{+0.8}_{-0.9}}\stat\,^{+0.3}_{-0.2}\syst=\ensuremath{2.4\pm0.9}$.
This measurement corresponds to $\ensuremath{\sigppH\brzg}\xspace=\ensuremath{0.21\pm0.08}\xspace\unit{pb}$.
The measured value is 1.6\xspace standard deviations higher than the SM prediction.
The observed (expected) local significance is 2.7\,(1.2) standard deviations, where the expected significance is determined for the SM hypothesis.
The observed (expected) upper limit at 95\% confidence level on $\mu$ is 4.1\,(1.8).
In addition, a combined fit with the \ensuremath{\PH\to\PGg\PGg}\xspace analysis of the same data set~\cite{CMS:2021kom} is performed to measure
the ratio $\ensuremath{\brzg/\brgg}\xspace=\ensuremath{1.5^{+0.7}_{-0.6}}\xspace$, which is consistent with the ratio of $0.69 \pm 0.04$ predicted by the SM at the 1.5\xspace standard deviation level.
\section{Introduction}
Multi-agent systems have attracted widespread attention in recent years because of their flexibility, scalability, and excellent computing performance \cite{Bullo F}. Consensus is one of the most common tasks in multi-agent systems \cite{LiuQ}, with applications in distributed estimation and optimization \cite{asqualetti}, sensor fusion \cite{Xiao}, distributed energy management \cite{Zhao}, sensing scheduling \cite{He}, and time synchronization \cite{Schenato}.
Under the traditional discrete-time average consensus algorithm, at each time step every agent updates its state to a weighted average of its own previous state and those of its neighbors.
Since there is no fusion center that can monitor the behavior of all agents at every time step, such systems are highly vulnerable to internal and external attacks \cite{Michiardi2002}. Attackers can cause a range of serious problems, including threats to system security and internal privacy.
``Malicious'' and ``curious'' attackers are two common types.
``Malicious'' attackers do not follow the average consensus algorithm but add extra input signals to the system in order to perturb the final consensus value or prevent consensus altogether. ``Curious'' attackers try to infer the initial states of other agents from the update rule of the average consensus algorithm, which is highly undesirable in privacy-sensitive applications.
To address the pressing need for security and privacy, a number of security and privacy protection methods related to the average consensus algorithm have been proposed. To ensure the security of consensus, Sundaram and Hadjicostis \cite{Sundaram2008_1}\cite{Sundaram2008_2} used parity-space fault-detection techniques to show the resilience of linear consensus networks from the perspective of network topology. Pasqualetti et al.\ discussed the relationship between consensus computation in unreliable networks and the fault detection and isolation problem for linear systems, and gave attack detection and identification algorithms based on the unknown input observer method \cite{Bullo F}.
On the other hand, to protect privacy, Huang et al.\ proposed an average consensus algorithm that adds exponentially decaying Laplacian noise to the computation, but the resulting convergence value is random \cite{Huang2012}.
In \cite{Man 2013}, Manitara and Hadjicostis proposed a privacy-preserving average consensus protocol and showed that the privacy of the initial states can be guaranteed when the network topology satisfies certain conditions, but they did not quantify how well the initial states can be estimated. Mo and Murray proposed a privacy-preserving average consensus algorithm and proved that, under mild conditions, the initial state privacy of every benign agent is effectively protected \cite{Mo 17 }. In \cite{Wang 2019}, Wang proposed a privacy-preserving protocol in which the state of every agent is randomly decomposed into two substates such that the mean remains the same but only one substate is revealed to neighboring agents. Hadjicostis and Dom\'inguez-Garc\'ia addressed privacy-preserving asymptotic average consensus in the presence of curious agents by using homomorphic encryption \cite{Had2020}.
However, the security and privacy problems become harder to handle when attackers are both ``malicious'' and ``curious''. \cite{Ruan2017} proposed a homomorphic-cryptography-based approach with high computational complexity that can guarantee privacy and security in decentralized average consensus; nevertheless, the security problem considered in \cite{Ruan2017} concerns the security of communication rather than resilience against malicious attackers.
In \cite{LiuQ}, Liu et al.\ proposed a privacy-preserving average consensus algorithm equipped with a malicious attack detector based on state estimation and used a reachable set to characterize the maximum disturbance that attackers can introduce to the system.
In this paper, we consider the case where the system is threatened by a set of unknown attackers that are both ``malicious'' and ``curious''. The main differences between this paper and \cite{LiuQ} are as follows. (1) We construct the residual vectors from an orthogonal projection matrix of the system's observation matrix and use them to design an attack detector, whereas \cite{LiuQ} used state estimation.
(2) We give a necessary and sufficient condition for the absence of undetectable inputs. Furthermore, under this condition, we show that the system achieves asymptotic consensus and give an upper bound on the convergence rate; the corresponding results are missing in \cite{LiuQ}.
(3) We give quantitative theoretical results on the error of the final consensus value when asymptotic consensus is reached, whereas \cite{LiuQ} characterized the maximum disturbance via an ellipsoidal approximation of the reachable set, whose estimated error region may be unbounded.
The main contributions of this paper are as follows.
Based on the privacy-preserving consensus algorithm proposed in \cite{Mo 17 }, we design a privacy-preserving average consensus algorithm equipped with an attack detector with a time-varying, exponentially decreasing threshold for every benign agent, which guarantees the initial state privacy of every benign agent under mild conditions.
The detector triggers an alarm if it detects the presence of malicious attackers.
We give an upper bound on the false alarm rate in the absence of malicious attackers and a necessary and sufficient condition for the absence of undetectable inputs.
Under this condition, we show that the system achieves asymptotic consensus almost surely when no alarm is triggered from beginning to end, give an upper bound on the convergence rate, and provide quantitative theoretical results on the error of the final consensus value.
The rest of this paper is organized as follows. Section 2 briefly reviews the average consensus algorithm and introduces two kinds of attack models.
Sections 3 and 4 present the results on privacy protection against curious attackers and security protection against malicious attackers, respectively, including precise definitions and detailed proofs. Section 5 gives a numerical example to illustrate the effectiveness of some of the theoretical results, and Section 6 concludes the paper.
\textbf{Notations:} $ \mathbb{N} $ is the set of all non-negative integers. $\mathbb{R}^{n}$ is the set of $n\times 1$ real vectors. $\mathbb{R}^{n\times m}$ is the set of $n\times m$ real matrices. ${\rm tr}(M)$ is the trace of the square matrix $M$. $\mathbf{1}$ is an all-one vector of proper dimension. $\mathbf{0}$ is an all-zero matrix of proper dimension. $\|v\|$ denotes the 2-norm of the vector $v$, while $\|M\|$ is the induced 2-norm of the matrix $M$. $X^+$ is the Moore--Penrose pseudoinverse of the matrix $X$. $ \{a(k)\}_{k=0}^{n} $ stands for the finite set $\{ a(0), a(1), \cdots, a(n)\}$ and $ \{a(k)\}_{k=0}^{\infty} $ stands for the infinite set $ \{a(0), a(1), \cdots \}$.
\section{Problem Formulation}
\subsection{Average Consensus}
In this subsection, we briefly introduce the average consensus algorithm.
Consider a network composed by $n$ agents as an undirected connected graph $G=(V,E)$, where $V=\{1,2,\cdots,n\}$ is the set of agents, and $E\subseteq V\times V$ represents the communication relationship among the agents. An edge between $i$ and $j$, denoted by $(i,j)\in E$, implies that $i$ and $j$ can communicate with each other. The set of neighbors of $i$ is denoted by $\mathcal{N}_i=\{j\in V:(i,j)\in E, j \neq i \}$.
Suppose that each agent $i\in V$ has an initial state $x_i(0)$. At any time $k$, agent $i$ first broadcasts its state to all of its neighbors and then updates its own state in the following linear combination manner:
\begin{equation}
\label{eq1}
x_i(k+1)= a_{ii}x_i(k) + \sum_{j\in \mathcal{N}_i}a_{ij}x_j(k),
\end{equation}
where $a_{ij}\ne 0$ if and only if $i$ and $j$ are neighbors. Define $x(k) \triangleq [x_1(k), x_2(k), \cdots, x_n(k)]^\top \in \mathbb{R}^n$ and $A\triangleq [a_{ij}]\in \mathbb{R}^{n\times n}$, where $ A $ is called \textit{weight matrix}. The state updating rule can be written in the following matrix form:
\begin{equation}
\label{eq2}
x(k+1)=Ax(k).
\end{equation}
We say the agents reach a consensus if $
\lim_{k\rightarrow \infty}x(k)= \gamma \mathbf{1}_{n \times 1}$
for some scalar constant $\gamma$. If $\gamma=\frac{1}{n}\sum_{i=1}^nx_i(0)$, then we say average consensus is reached.
Assume the eigenvalues of $A$ are arranged as $\lambda_1\ge \lambda_2 \ge \cdots \ge \lambda_n$. It is well known that the necessary and sufficient conditions for average consensus are as follows:
(A1) $\lambda_1=1$, and $|\lambda_i|<1$, $i=2,\cdots,n$;
(A2) $A\mathbf{1}_{n \times 1}=\mathbf{1}_{n \times 1}$.
In the rest of this paper, we assume that $ A $ is \textbf{symmetric} and satisfies Assumptions (A1) and (A2) above.
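As a concrete illustration (not part of the paper's setup), the update rule \eqref{eq2} can be simulated for a hypothetical four-agent ring network whose symmetric, doubly stochastic weight matrix satisfies (A1) and (A2):

```python
import numpy as np

# Hypothetical 4-agent ring; the weight matrix is symmetric, doubly
# stochastic, and its spectrum {1, 0.5, 0.5, 0} satisfies (A1) and (A2).
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.array([1.0, 3.0, 5.0, 7.0])  # initial states; their average is 4.0
for _ in range(200):                # iterate x(k+1) = A x(k)
    x = A @ x

# every state converges to the average of the initial states
```

Since the second-largest eigenvalue modulus is $0.5$, the disagreement contracts geometrically and all states approach $4.0$.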
\subsection{Attack Models}
In this subsection, we introduce two kinds of attack models.
\textit{Malicious Attack:} Some agents intend to disrupt the average consensus or prevent consensus by adding arbitrary input signals instead of following the updating rule (\ref{eq1}) of average consensus algorithm, i.e.,
\begin{equation}
\label{eq4}
x_i(k+1)= a_{ii}x_i(k) + \sum_{j\in \mathcal{N}_i}a_{ij}x_j(k)+u_i(k),
\end{equation}
where $u_i(k)$ is the attack signal added by agent $i$ at time step $k$. Agent $ i $ is said to be a \textit{malicious} attacker if $ u_i(k) $ is nonzero for at least one time step $ k \in \mathbb{N} $. The model for malicious attackers considered here is quite general: the attack signal at every time step can be an arbitrary deterministic value.
This kind of malicious attackers can potentially either prevent benign agents, who follow the standard update rule \eqref{eq1} of
the average consensus algorithm, from reaching a consensus or manipulate the final consensus value to be arbitrary.
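As a simple hypothetical illustration of how an attacker manipulates the final value: since $\mathbf{1}^\top x(k+1) = \mathbf{1}^\top x(k) + \mathbf{1}^\top B u(k)$ for a doubly stochastic $A$, a one-shot attack of size $u$ at a single agent shifts the limit by $u/n$:

```python
import numpy as np

# Hypothetical 4-agent ring weight matrix (symmetric, rows sum to 1).
A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

x = np.array([1.0, 3.0, 5.0, 7.0])  # honest average would be 4.0
for k in range(300):
    u = np.zeros(4)
    if k == 0:
        u[2] = 4.0                  # one-shot malicious input at agent 3
    x = A @ x + u                   # x(k+1) = A x(k) + B u(k)

# the attack shifts the limit from 4.0 to 4.0 + 4.0/4 = 5.0
```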
\textit{Curious Attack:} Some agents intend to infer the initial states of other agents, which may not be desirable when the initial state privacy is of concern. Such agents are called \textit{curious} attackers.
In this paper, we deal with a set of unknown agents that are both ``malicious'' and ``curious''. We assume that the set of these unknown agents that are both ``malicious'' and ``curious'' is $ \{i_1,\cdots,i_p\} $. Meanwhile, assume that each agent knows the weight matrix $ A $ defined in the previous subsection.
\section{Privacy Protection Against Curious Attackers}
In this section, we address the problem of curious attackers inferring other benign agents' initial states.
\subsection{Privacy Preserving Consensus Algorithm}
In order to protect each benign agent's privacy, we adopt the privacy-preserving algorithm proposed in \cite{Mo 17 }.
For the sake of completeness, we briefly describe the algorithm as follows.
\begin{algorithm}\label{algorithm}
Let $v_i(k)$ $(i=1,2,\cdots,n;\ k=0,1,\cdots)$ be standard normally distributed random variables, independent across $i$ and $k$. Denote $v(k) \stackrel{\bigtriangleup}{=}[v_1(k),v_2(k),\cdots,v_n(k)]^\top $. Based on $v(k)$, we construct the following noisy signals
\begin{equation}
\label{eq5}
w(k)=\left\{
\begin{array}{ll}
v(0), & \hbox{if} ~~k=0; \\
\varphi^kv(k)-\varphi^{k-1}v(k-1), & \hbox{otherwise};
\end{array}
\right.
\end{equation}
where $0 < \varphi < 1$ is a constant.
To protect the true value of states, the agents add noisy signals $w(k)$ into their states $x(k)$ and form a new state vector $x^+(k)$, before sharing with their neighbors, i.e., $x^+(k) = x(k) + w(k)$.
\end{algorithm}
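A minimal numerical sketch of the noise construction \eqref{eq5} (illustrative parameter values only) makes the key property visible: the partial sums of $w(k)$ telescope to $\varphi^K v(K)$, which vanishes geometrically, so the injected noise does not bias the average:

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi, K = 4, 0.5, 60              # illustrative parameters, 0 < phi < 1

# w(0) = v(0);  w(k) = phi^k v(k) - phi^{k-1} v(k-1) for k >= 1
v = rng.standard_normal((K + 1, n))
w = np.empty_like(v)
w[0] = v[0]
for k in range(1, K + 1):
    w[k] = phi**k * v[k] - phi**(k - 1) * v[k - 1]

# the sum telescopes: sum_{k=0}^{K} w(k) = phi^K v(K) -> 0 as K grows,
# so the cumulative perturbation to the running sum of states vanishes
total = w.sum(axis=0)
```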
\begin{remark}
According to \cite{Mo 17 }, the privacy-preserving average consensus algorithm above guarantees the initial state privacy of every benign agent under mild conditions, and the random noise introduced into the consensus process does not affect the consensus result.
\end{remark}
Under this privacy-preserving algorithm, since the set of unknown agents that are both ``malicious'' and ``curious'' is $ \{i_1,\cdots,i_p\} $, the state updating rule becomes:
\begin{equation}
\begin{aligned}
x(k+1) =Ax^+(k)+B u(k)
=A(x(k)+w(k))+B u(k) \label{x(k+1)},
\end{aligned}
\end{equation}
where $B=[e_{i_1},e_{i_2},\cdots,e_{i_p}]$ with $e_i$ being the $i$th canonical basis vector in $\mathbb{R}^n$, and $u(k)=[u_{i_1}(k), u_{i_2}(k), \cdots, u_{i_p}(k)]^\top $ is the attack input signal at time step $ k $.
\begin{theorem}[\cite{Mo 17 }]\label{privacy}
The initial state value $ x_j(0) $ of agent $ j $ is kept private from these curious attackers $ \{i_1,\cdots,i_p\} $ if and only if $ \mathcal{N}_j \cup \{j\} \nsubseteq \mathcal{N}_{i_1} \cup \cdots \cup \mathcal{N}_{i_p} \cup \{i_1,\cdots, i_p\}$.
\end{theorem}
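The topological condition of Theorem \ref{privacy} can be checked mechanically. The sketch below (a hypothetical helper, not from \cite{Mo 17 }) evaluates it for a four-agent ring $1{-}2{-}3{-}4{-}1$:

```python
# Neighbor sets for a hypothetical 4-agent ring 1-2-3-4-1.
N = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}

def initial_state_private(j, attackers):
    """True iff N_j U {j} is NOT covered by the attackers' closed neighborhoods."""
    closed_j = N[j] | {j}
    covered = set(attackers)
    for a in attackers:
        covered |= N[a]
    return not closed_j <= covered

# a single curious agent 2 cannot infer x_1(0): agent 4 is outside its view
print(initial_state_private(1, [2]))      # True
# if both neighbors 2 and 4 are curious, agent 1's privacy is lost
print(initial_state_private(1, [2, 4]))   # False
```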
\section{Security Protection Against Malicious Attackers}
In this section, we address the problem of malicious attackers disrupting the average consensus or preventing consensus.
\subsection{Attack Detector}
In order to deal with malicious attacks, we will design an attack detector for each benign agent.
Without loss of generality, assume that \textit{agent $1$ is benign}; we focus on designing an attack detector for agent $ 1 $.
Suppose the neighbors of agent $ 1 $ are $\{j_1,j_2,\cdots,j_{m-1}\}$.
The values available to agent $ 1 $ at time step $ k $ are denoted by
\begin{equation}
y(k)=C(x(k)+w(k)) \label{y(k)},
\end{equation}
where $C=[e_1, e_{j_1},e_{j_2},\cdots,e_{j_{m-1}}]^\top $.
We first propose a residual generator, which uses the measurement sequence $\{y(k)\}_{k = 0}^{\infty}$ to generate a residual vector sequence $\{r(k)\}_{k=0}^{\infty}$ that is identically zero when there is neither privacy-preserving noise nor a malicious attacker in the system.
The response of the linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}) over $ n+1 $ time steps starting at time step $ k $ is given by
\begin{multline}\label{response}
\underbrace{\begin{bmatrix}
y(k) \\
y(k+1) \\
\vdots \\
y(k+n)
\end{bmatrix}}_{Y_{[k,k+n]}} =\underbrace{\begin{bmatrix}
C \\
C A \\
\vdots \\
C A^{n}
\end{bmatrix}}_{\mathcal{O}_{n}} x(k) \\
+
\underbrace{
\begin{bmatrix}
C & \mathbf{0} & \cdots & \mathbf{0} \\
CA & C& \cdots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
CA^{n} & CA^{n-1} & \cdots & C
\end{bmatrix}
}_{\mathcal{H}_{n}}
\underbrace{\begin{bmatrix}
w(k) \\
w(k+1) \\
\vdots \\
w(k+n)
\end{bmatrix}}_{W_{[k,k+n]}} \\
+
\underbrace{
\begin{bmatrix}
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} \\
CB & \mathbf{0} & \cdots & \mathbf{0} \\
\vdots & \vdots & \ddots & \vdots \\
CA^{n-1}B & CA^{n-2}B & \cdots & \mathbf{0}
\end{bmatrix}
}_{\mathcal{J}_{n}}
\underbrace{\begin{bmatrix}
u(k) \\
u(k+1) \\
\vdots \\
u(k+n)
\end{bmatrix}}_{U_{[k,k+n]}}.
\end{multline}
Now we are ready to construct the residual generator used to design the attack detector. In order to make the residual generator independent of the state $ x(k) $, we multiply both sides of \eqref{response} on the left by the orthogonal projection matrix $ \mathcal{P} = I_{m(n+1)} - \mathcal{O}_{n}\mathcal{O}_{n}^+$, which satisfies $\mathcal{P}\mathcal{O}_{n} = \mathbf{0}$, so that \eqref{response} simplifies to the following form:
\begin{equation}\label{response_1}
\mathcal{P}Y_{[k,k+n]} = \mathcal{P}\mathcal{H}_{n} W_{[k,k+n]}+ \mathcal{P}\mathcal{J}_{n} U_{[k,k+n]}.
\end{equation}
Based on the above results, we can construct the following residual generator and use it to design an attack detector.
\begin{definition}[Residual Vector]\label{attack detector}
Define the residual vector $ r(k) $ at each time step $ k $ as follows:
\begin{equation}\label{r(k)}
r(k) \stackrel{\bigtriangleup}{=} \mathcal{P} Y_{[k,k+n]}.
\end{equation}
Then a malicious attack detector is obtained, which compares $\|r(k)\|$ with a threshold $ c\rho^k $ decreasing exponentially over time and triggers an alarm if and only if $\|r(k)\|$ is greater than the given threshold $ c\rho^k $ at some time step $ k $,
where $c > 0 $ , $\varphi < \rho < 1 $ are two fixed constants selected by agent $ 1 $.
\end{definition}
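A small numerical sketch (illustrative only: the four-agent ring above, with agent 1 observing itself and its neighbors 2 and 4) confirms that the projection annihilates the contribution of $x(k)$, so the residual vanishes when there is neither noise nor attack:

```python
import numpy as np

A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
n = A.shape[0]
C = np.eye(n)[[0, 1, 3]]            # agent 1 sees agents 1, 2, 4

# stacked observability matrix O_n = [C; CA; ...; CA^n]
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n + 1)])
P = np.eye(O.shape[0]) - O @ np.linalg.pinv(O)  # orthogonal projector, P O = 0

# without noise or attack, Y_[k,k+n] = O x(k), hence r(k) = P Y = 0
x0 = np.array([1.0, 3.0, 5.0, 7.0])
r = P @ (O @ x0)
```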
If an alarm is triggered at some time step $ k $, we conclude that there are malicious attackers in the system.
\begin{remark}
It can be seen that there is a delay of $ n $ time steps in detection at each time step $ k $. In general, this delay is inevitable, because at any time step $ k $ the observations available up to that moment do not contain enough information about the attack input at the current moment.
\end{remark}
Note that, by the definition of the residual generator, in the absence of privacy-preserving noise the absence of malicious attackers implies that the residual vector $ r(k) \equiv \mathbf{0}_{ m(n+1) \times 1} $ for every $ k \in \mathbb{N} $, but the converse is not necessarily true. This situation is undesirable because, in that case, there may exist some attack input $ u $ whose presence the attack detector cannot detect.
We discuss in detail how to avoid this situation in the subsection on detectability below.
\subsection{False Alarm Rate}
In this subsection, we will focus on the situation where there is no malicious attacker in the system.
According to the privacy-preserving consensus algorithm, the noisy signal $ w(k) $ is added to the agents' states $ x(k) $ at each time step $ k $; therefore, even if there is no malicious attacker, an alarm may be triggered at some time step $ k $.
The false alarm rate of the attack detector is characterized here.
By the linearity of the linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}) and the definition of the residual vector $ r(k) $, $r(k)$ can be decomposed into the following two parts:
\begin{equation}
r(k) = r^a(k) + r^n(k),
\end{equation}
where $ r^a(k) $ and $ r^n(k) $ are generated by the malicious attackers' input and by the noisy signals, respectively.
By using \eqref{response_1}, we can directly get the following equivalent relationship:
\begin{equation}\label{r(k)_0_a_n}
r^a(k) = \mathcal{P}\mathcal{J}_{n} U_{[k,k+n]}, \ \ \ \
r^n(k) = \mathcal{P}\mathcal{H}_{n} W_{[k,k+n]}.
\end{equation}
Now we define the false alarm rate as follows.
\begin{definition}[False Alarm Rate] \label{False Alarm Rate}
Define the false alarm rate $\alpha$ as the probability of triggering a false alarm at least once from the initial time step
onward when there is no malicious attacker in the linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}); in other words,
\begin{equation}
\alpha \stackrel{\bigtriangleup}{=} \mathbb{P} \left[ \bigcup_{k=0}^{\infty} \left\{ \| r^n(k) \| > c\rho^k
\right\} \right] \label{rate}.
\end{equation}
\end{definition}
\begin{remark}
Under the same conditions, a smaller $ \alpha $ generally means better performance of the corresponding attack detector.
\end{remark}
Before characterizing the false alarm rate $ \alpha $, we first focus on $ r^n(k) $. For notational convenience, the matrix $ \mathcal{P}\mathcal{H}_{n} $ is uniformly partitioned by columns as $ \begin{bmatrix}
\mathcal{P}_0 & \mathcal{P}_1 & \cdots & \mathcal{P}_{n}
\end{bmatrix}$,
where each $ \mathcal{P}_i $ is of dimension $ m(n+1) \times n $ and $ \mathcal{P}_0 = \mathcal{P} \mathcal{O}_{n} = \mathbf{0}_{m(n+1) \times n}$.
Combined with the definition of $ w(k) $, for any time step $ k $, $ r^n(k) $ can then be expressed again as
\begin{equation}\label{r^n(k)}
r^n(k) = \sum_{i = 0}^{n-1} \varphi^{k+i} (\mathcal{P}_i - \mathcal{P}_{i+1}) v(k+i) + \varphi^{k+n} \mathcal{P}_{n} v(k+n).
\end{equation}
\begin{theorem}[An Estimation Of False Alarm Rate] \label{An Estimation of False Alarm Rate}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), false alarm rate $ \alpha $ of the attack detector above satisfies
\begin{multline}\label{alpha_upper}
\alpha \leq \frac{1}{c^2} \frac{\rho^2}{\rho^2 - \varphi^2} \left(
\sum_{i = 0}^{n-1} \varphi^{2i} {\rm tr}\left[(\mathcal{P}_i - \mathcal{P}_{i+1})^\top (\mathcal{P}_i - \mathcal{P}_{i+1}) \right] \right.\\
+ \varphi^{2n} {\rm tr}\left[ \mathcal{P}_{n}^\top \mathcal{P}_{n} \right]
\Bigg) .
\end{multline}
\end{theorem}
\begin{proof}
According to the definition of the false alarm rate $ \alpha $, we can bound $ \alpha $ as follows:
\begin{equation}\label{alpha_proof}
\begin{aligned}
\alpha &= \mathbb{P} \left[ \bigcup_{k=0}^{\infty} \left\{ \|r^n(k) \| > c\rho^k
\right\} \right] \leq \sum_{k=0}^{\infty} \mathbb{P} \left[ \|r^n(k) \| > c\rho^k \right] \\
&\leq \sum_{k=0}^{\infty} \frac{\mathbb{E}\left[r^n(k)^\top r^n(k) \right]}{c^2 \rho^{2k}}
= \frac{\eta}{c^2} \sum_{k=0}^{\infty} \frac{\varphi^{2k}}{\rho^{2k}}
= \frac{\eta}{c^2}\frac{\rho^2}{\rho^2 - \varphi^2},
\end{aligned}
\end{equation}
where $ \eta $ denotes the term in parentheses on the right-hand side of \eqref{alpha_upper}. The first inequality holds by the countable subadditivity of probability measures and the second by Chebyshev's inequality.
\end{proof}
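The bound \eqref{alpha_upper} can be evaluated numerically. The sketch below (for the same illustrative four-agent ring; the detector parameters are arbitrary) builds $\mathcal{P}\mathcal{H}_n$, extracts the blocks $\mathcal{P}_i$, and cross-checks the constant $\eta$ against a direct computation of $\mathbb{E}[\|r^n(0)\|^2]$ from the covariance of $W_{[0,n]}$:

```python
import numpy as np

A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
n = A.shape[0]
C = np.eye(n)[[0, 1, 3]]
m = C.shape[0]
phi, rho, c = 0.5, 0.8, 10.0        # illustrative detector parameters

mpow = np.linalg.matrix_power
O = np.vstack([C @ mpow(A, i) for i in range(n + 1)])
P = np.eye(m * (n + 1)) - O @ np.linalg.pinv(O)

# block lower-triangular H_n with blocks C A^{j-i}
H = np.zeros((m * (n + 1), n * (n + 1)))
for j in range(n + 1):
    for i in range(j + 1):
        H[j*m:(j+1)*m, i*n:(i+1)*n] = C @ mpow(A, j - i)

PH = P @ H
blk = [PH[:, i*n:(i+1)*n] for i in range(n + 1)]   # blocks P_0, ..., P_n

eta = sum(phi**(2*i) * np.trace((blk[i] - blk[i+1]).T @ (blk[i] - blk[i+1]))
          for i in range(n)) + phi**(2*n) * np.trace(blk[n].T @ blk[n])

# cross-check: W_[0,n] = M V with V = [v(0); ...; v(n)] standard normal,
# so E||r^n(0)||^2 = ||P H_n M||_F^2, which must equal eta
M = np.zeros((n * (n + 1), n * (n + 1)))
M[:n, :n] = np.eye(n)
for k in range(1, n + 1):
    M[k*n:(k+1)*n, k*n:(k+1)*n] = phi**k * np.eye(n)
    M[k*n:(k+1)*n, (k-1)*n:k*n] = -phi**(k-1) * np.eye(n)
direct = np.linalg.norm(PH @ M, 'fro')**2

alpha_bound = eta / c**2 * rho**2 / (rho**2 - phi**2)
```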
\subsection{Detectability}
In this subsection, we will focus on the detectability of the attack detector.
\begin{definition}[Undetectable Input]
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), the attack input $ u $ introduced by
these unknown malicious agents $ \{i_1,\cdots,i_p\} $ is undetectable if
\[ \exists x_1,x_2 \in \mathbb{R}^n ~{\rm s.t.}~ \forall k \in \mathbb{N},\ y^{0+a}(x_1,u,k) = y^{0+a}(x_2,0,k),
\]
where $ y^{0+a}(x_1,u,k) $ is the part of $ y(x_1,u,k) $ generated by the initial state $ x_1 $ and the attack input $ u $ at time step $ k $.
\end{definition}
\begin{definition}[Undetectable Input By The Attack Detector]
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), the attack input $ u $ introduced by
these unknown malicious agents $ \{i_1,\cdots,i_p\} $ is undetectable by the attack detector
if
\[ \exists x_1,x_2 \in \mathbb{R}^n ~{\rm s.t.}~ \forall k \in \mathbb{N},\ r^{a}(x_1,u,k) = r^{a}(x_2,0,k).
\]
\end{definition}
Before giving the necessary and sufficient condition for the absence of inputs undetectable by the attack detector in the linear consensus system of the form \eqref{x(k+1)} and \eqref{y(k)}, we need the following three lemmas.
\begin{lemma}\label{Detect_before}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), if the first $ n $ columns of $ \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} $ are linearly independent of each other and of the last $np$ columns,
i.e., $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n$,
then the first $ p $ columns of $ \mathcal{J}_{n} $ are linearly independent of each other and of the last $ np $ columns,
i.e., $
{\rm rank} [ \mathcal{J}_n ]
- {\rm rank} [ \mathcal{J}_{n-1}] = p$,
where the specific relationship between $ \mathcal{J}_{n} $ and $ \mathcal{J}_{n-1} $ can be expressed as
$ \mathcal{J}_{n} = \begin{bmatrix}
\mathbf{0}_{m \times p} & \mathbf{0}_{m \times np} \\
\mathcal{O}_{n-1}B & \mathcal{J}_{n-1}
\end{bmatrix} $.
\end{lemma}
\begin{proof}
Just note that the hypothesis implies that $\mathcal{O}_{n-1}$ has full column rank and that its column space intersects that of $\mathcal{J}_{n-1}$ only trivially, and that $ B = [e_{i_1},e_{i_2},\cdots,e_{i_p}] $ has full column rank.
\end{proof}
\begin{lemma}[\cite{Sundaram2010}]\label{strongly observable}
A linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}) is said to be strongly observable if $ y^0(k) + y^a(k) \equiv \mathbf{0}_{m \times 1} $ for all $ k \in \mathbb{N} $ implies $ x(0) = \mathbf{0}_{n \times 1} $ (regardless of the values of the input $ u $), where
$ y^0(k) $ and $ y^a(k) $ are the parts of $ y(k) $ generated by the initial state $ x(0) $ and by the malicious input $ u $, respectively. The following statements are equivalent.
\begin{enumerate}
\item $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n$;
\item the system is strongly observable.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{observable}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), it is observable, i.e., $ {\rm rank}[\mathcal{O}_{n-1}] = n $, almost surely.
\end{lemma}
\begin{proof}
This is a direct corollary of Theorem 2 in \cite{Sundaram2008_3}.
\end{proof}
\begin{theorem} \label{Detect}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), the following statements are equivalent almost surely:
\begin{enumerate}
\item there is no undetectable input;
\item there is no undetectable input by the attack detector;
\item $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n.$
\end{enumerate}
\end{theorem}
\begin{proof}
($ 2 \Rightarrow 3 $): Suppose that $ {\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
<n $, according to Lemma \ref{strongly observable}, the system is not strongly observable. Therefore, there exists a nonzero $ x(0) \in \mathbb{R}^n $ and an attack input $ u $ such that $ y^0(k) + y^a(k) \equiv \mathbf{0}_{m \times 1} $ for all $ k \in \mathbb{N} $. According to the definition of $ r(k) $, there must be that $ r^a(k) \equiv \mathbf{0} $ for all $ k \in \mathbb{N} $. Since there is no undetectable input by the attack detector, it follows that
\begin{multline}\label{key}
r^a(k) \equiv \mathbf{0}, \forall k \in \mathbb{N}
\Rightarrow u(k) \equiv \mathbf{0}, \forall k \in \mathbb{N} \\
\Rightarrow y^a(k) \equiv \mathbf{0}, \forall k \in \mathbb{N}
\Rightarrow y^0(k) \equiv \mathbf{0}, \forall k \in \mathbb{N},
\end{multline}
where the last step holds because $ y^0(k) + y^a(k) \equiv \mathbf{0}_{m \times 1} $ for all $ k \in \mathbb{N} $.
Combined with Lemma \ref{observable}, we can get the following relationship almost surely:
\begin{equation}\label{key}
y^0(k) \equiv \mathbf{0}, \forall k \in \mathbb{N} \stackrel{{\rm rank}[\mathcal{O}_{n-1}] = n}{\Longrightarrow}
x(0) = \mathbf{0},
\end{equation}
but this contradicts the fact that $ x(0) $ is a nonzero vector.
($ 1 \Rightarrow 3 $): The proof is similar to that of ($ 2 \Rightarrow 3 $).
($ 3 \Rightarrow 2 $):
First, we assert that there is a $ p \times m(n+1) $ matrix $ \mathcal{Q}_B $ \footnote{The subscript $B$ here means that the matrix $ \mathcal{Q}_B $ is related to $B$.
} that satisfies $ \mathcal{Q}_B\mathcal{P}\mathcal{J}_{n} =
\left[
\begin{array}{c|c}
I_p & \mathbf{0}_{p \times np}
\end{array}
\right]$.
Otherwise, the statement that the first $ p $ columns of $ \mathcal{P}\mathcal{J}_{n} $ are linearly independent of each other and of the last $ np $ columns would be false. Therefore, there would exist a nonzero input sequence $ \{u(i)\}_{i=0}^{n} $ with $ u(0) \neq \mathbf{0}_{p \times 1} $ such that $\mathcal{P}\mathcal{J}_{n}U_{[0,n]} = \mathbf{0}_{m(n+1) \times 1} $.
\begin{itemize}
\item If $ \mathcal{J}_{n}U_{[0,n]} \neq \mathbf{0}_{m(n+1) \times 1} $, then by the definition of $ \mathcal{P} $ there must exist a nonzero initial state $ x(0) \in \mathbb{R}^n $ such that $\mathcal{O}_{n} x(0) + \mathcal{J}_{n} U_{[0,n]} = \mathbf{0}_{m(n+1) \times 1} $, but this contradicts the result $ {\rm rank} \begin{bmatrix}
\mathcal{O}_{n} & \mathcal{J}_{n}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n}]
= n $, which is a direct corollary of the condition ${\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n$.
\item If $ \mathcal{J}_{n}U_{[0,n]} = \mathbf{0}_{m(n+1) \times 1} $, this contradicts Lemma \ref{Detect_before}, since $ u(0) \neq \mathbf{0}_{p \times 1} $.
\end{itemize}
The above result shows that the initial assertion is correct.
Since $ r^a(k) = \mathcal{P}\mathcal{J}_{n} U_{[k,k+n]} $, we can get that
\begin{equation}\label{Q_B}
\mathcal{Q}_Br^a(k) = u(k) .
\end{equation}
Therefore, $ r^a(k) \equiv \mathbf{0}, \forall k \in \mathbb{N} \Rightarrow u(k) \equiv \mathbf{0}, \forall k \in \mathbb{N}$, i.e., there is no undetectable input by the attack detector.
($ 3 \Rightarrow 1 $): Suppose there is a nonzero attack input $ u $ and an initial state $ x(0) $
such that $ y^{0+a}(k) \equiv \mathbf{0} $ for all $ k \in \mathbb{N} $. Since $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n$, there must be that $ x(0) = \mathbf{0} $. Further, it follows that
\begin{multline}\label{key}
x(0) = \mathbf{0} \Rightarrow y^0(k) \equiv \mathbf{0}, \forall k \in \mathbb{N} \Rightarrow y^a(k) \equiv \mathbf{0}, \forall k \in \mathbb{N} \\
\Rightarrow r^a(k)\equiv \mathbf{0}, \forall k \in \mathbb{N} \Rightarrow u(k)\equiv \mathbf{0}, \forall k \in \mathbb{N},
\end{multline}
where \eqref{Q_B} from ($ 3 \Rightarrow 2 $) is used in the last step. This contradicts the fact that $ u $ is nonzero. Therefore, there is no undetectable input.
\end{proof}
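Condition 3 of Theorem \ref{Detect} is straightforward to check numerically. In the sketch below (illustrative: the four-agent ring, agent 1 observing agents 1, 2, 4, and a hypothetical single attacker at agent 3), the rank gap equals $n$, so every attack input is detectable:

```python
import numpy as np

A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
n = A.shape[0]
C = np.eye(n)[[0, 1, 3]]            # observed agents: 1, 2, 4
B = np.eye(n)[:, [2]]               # hypothetical attacker: agent 3
m, p = C.shape[0], B.shape[1]

mpow = np.linalg.matrix_power
# O_{n-1} = [C; CA; ...; CA^{n-1}]
O1 = np.vstack([C @ mpow(A, i) for i in range(n)])

# J_{n-1}: block (j, i) equals C A^{j-i-1} B for j > i, zero otherwise
J1 = np.zeros((m * n, p * n))
for j in range(1, n):
    for i in range(j):
        J1[j*m:(j+1)*m, i*p:(i+1)*p] = C @ mpow(A, j - i - 1) @ B

gap = (np.linalg.matrix_rank(np.hstack([O1, J1]))
       - np.linalg.matrix_rank(J1))
# gap == n means the system is strongly observable: no undetectable input
```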
\subsection{Asymptotic Consensus and Error}
In order to protect the security and privacy of the system, we have designed a privacy-preserving average consensus algorithm equipped with an attack detector with a time-varying exponentially decreasing threshold for every benign agent.
This naturally raises three questions:
\begin{itemize}
\item When there exist no undetectable inputs and no alarm is triggered from beginning to end, will the system eventually achieve consensus?
\item If the system achieves consensus, what is the rate of convergence?
\item How large will the error of the final consensus value be?
\end{itemize}
We answer these three questions in turn in this subsection.
\begin{lemma}\label{decomposition}(\cite{Mo 17 })
Define a matrix $\mathcal{A} \stackrel{\bigtriangleup}{=} A - \frac{\mathbf{1}\mathbf{1}^\top }{n}$.
For all $k \geq 1$, $ \mathcal{A}^k = A^k - \frac{\mathbf{1}\mathbf{1}^\top }{n} $.
\end{lemma}
\begin{proof}
Just notice that by assumption (A1) and (A2), we can get that
$ \frac{\mathbf{1}\mathbf{1}^\top }{n}A = \frac{\mathbf{1}\mathbf{1}^\top }{n} = A\frac{\mathbf{1}\mathbf{1}^\top }{n}$.
\end{proof}
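Lemma \ref{decomposition} is easy to verify numerically for the illustrative ring matrix used earlier:

```python
import numpy as np

A = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
n = A.shape[0]
J = np.ones((n, n)) / n             # the averaging matrix (1 1^T)/n
Acal = A - J

# A J = J A = J by (A1)-(A2), so the cross terms collapse and
# Acal^k = A^k - J for every k >= 1
for k in range(1, 8):
    lhs = np.linalg.matrix_power(Acal, k)
    rhs = np.linalg.matrix_power(A, k) - J
    assert np.allclose(lhs, rhs)
```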
\begin{lemma}\label{u<infty}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), the following inequality holds almost surely
\begin{equation}\label{u(k)_conver}
\sum_{k=0}^{\infty} \frac{\| u(k) \|}{(\rho + \varepsilon)^k} < \infty ,
\end{equation}
where $ \varepsilon $ is a fixed constant satisfying $ 0 < \varepsilon < 1 - \rho $, if
\begin{enumerate}
\item no alarm is triggered;
\item $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n
$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since no alarm is triggered, by the definition of ``not triggering an alarm'', it follows that $ \|r^n(k) + r^a(k) \| \leq c \rho^k$. By using triangle inequality, we can get that $ \|r^a(k)\| \leq c\rho^k + \|r^n(k) \| $. Now we first focus on $ r^n(k) $. According to \eqref{r^n(k)}, there exists a fixed constant $ d > 0$ such that
$\|r^n(k)\| \leq d \varphi^k \sum_{i=0}^{n} \| v(k+i)\|$,
where $ d$ can be selected as $ \sum_{i = 0}^{n-1} \varphi^i \| \mathcal{P}_i - \mathcal{P}_{i+1}\| + \varphi^n \| \mathcal{P}_n \| $.
Therefore, it is not difficult to get that
\begin{equation}\label{r^n(k)_upper}
\sum_{k=0}^{\infty} \frac{\|r^n(k)\|}{(\rho+ \varepsilon)^k} \leq (n+1)d \sum_{k=0}^{\infty} \left(\frac{\varphi}{\rho+\varepsilon}\right)^k \|v(k)\|.
\end{equation}
By Chebyshev's inequality, for any positive integer $ k $, we have
\begin{equation}
\mathbb{P} \left[ \|v(k)\| \geq k \right]
\leq \frac{\mathbb{E}[v(k)^\top v(k) ]}{k^2} = \frac{n}{k^2}
\end{equation}
and consequently it implies that
\begin{equation}
\sum_{k=1}^{\infty} \mathbb{P} \left[ \|v(k)\| \geq k \right] \leq \sum_{k=1}^{\infty} \frac{n}{k^2} = \frac{n\pi^2}{6} < \infty.
\end{equation}
By the Borel--Cantelli Lemma, it follows that
\begin{equation}\label{borel}
\mathbb{P} \left[ \limsup_{k \rightarrow \infty} \left\{ \|v(k)\| \geq k \right\} \right] = 0.
\end{equation}
Since a point belongs to $ \limsup_{k} \left\{ \|v(k)\| \geq k \right\} $ if and only if it belongs to infinitely many terms of the sequence $ \left\{ \|v(k)\| \geq k \right\}_{k = 1}^{\infty} $, the events $ \left\{ \|v(k)\| \geq k \right\}$ occur at most finitely many times almost surely. Therefore, there exists a positive integer $ k_1 $ such that $ \forall k > k_1 $, $ \| v(k) \| < k $ holds almost surely\footnote{Hereafter, ``almost surely" will sometimes be abbreviated as ``a.s.".}.
It follows that
\begin{equation}\label{||v(k)||}
\begin{aligned}
&\sum_{k= 0}^{\infty} \left( \frac{\varphi}{\rho+ \varepsilon} \right)^k \|v(k)\| = \left(
\sum_{k= 0}^{k_1} + \sum_{k= k_1+1}^{\infty}
\right) \left( \frac{\varphi}{\rho+ \varepsilon} \right)^k \|v(k)\| \\
&\stackrel{{\rm a.s.}}{<} \sum_{k= 0}^{k_1} \left( \frac{\varphi}{\rho+ \varepsilon} \right)^k \|v(k)\| + \sum_{k= k_1+1}^{\infty} k \left( \frac{\varphi}{\rho+ \varepsilon} \right)^k < \infty,
\end{aligned}
\end{equation}
where the last inequality holds because $0< \varphi < \rho $.
Combined with \eqref{r^n(k)_upper}, we have
$ \sum_{k=0}^{\infty} \frac{\| r^n(k) \|}{(\rho + \varepsilon)^k} \stackrel{{\rm a.s.}}{<} \infty $.
Since $ \|r^a(k)\| \leq c\rho^k + \|r^n(k) \| $ holds for all $ k \in \mathbb{N} $ when no alarm is triggered, then we have
\begin{equation}
\sum_{k=0}^{\infty} \frac{\| r^a(k) \|}{(\rho + \varepsilon)^k}
\leq \sum_{k=0}^{\infty} \left( c \left(\frac{\rho}{\rho+\varepsilon}\right)^k +
\frac{\| r^n(k) \|}{(\rho + \varepsilon)^k}
\right) \stackrel{{\rm a.s.}}{<}
\infty.
\end{equation}
Note that for any $ k $ we have $ u(k) = \mathcal{Q}_Br^a(k) $, where $ \mathcal{Q}_B $ is defined in Theorem \ref{Detect}; this implies that $ \|u(k)\| \leq \| \mathcal{Q}_B\| \|r^a(k)\| $.
Therefore, \eqref{u(k)_conver} holds almost surely.
\end{proof}
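The Borel--Cantelli step above can also be checked numerically: with $n=4$ i.i.d. standard normal components, the event $\|v(k)\| \geq k$ is observed only for a handful of small $k$, in line with the summable Chebyshev bound $\sum_k n/k^2$. A short illustrative Python sketch (the seed is arbitrary):

```python
import math
import random

random.seed(0)                      # arbitrary seed, for reproducibility
n = 4
exceed = [k for k in range(1, 1001)
          if math.sqrt(sum(random.gauss(0.0, 1.0) ** 2
                           for _ in range(n))) >= k]

# The summable Chebyshev bound behind the Borel-Cantelli argument:
partial = sum(n / k ** 2 for k in range(1, 1001))
assert partial < n * math.pi ** 2 / 6
# Exceedances happen only for a handful of small k:
assert len(exceed) <= 20 and all(k <= 50 for k in exceed)
```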
We now define the convergence rate.
\begin{definition}[Convergence Rate]
Define the convergence rate $ \varrho $ of consensus algorithm as
\begin{equation}
\varrho \stackrel{\bigtriangleup}{=} \limsup_{k \rightarrow \infty} \left\| x(k) - \overline{x}(k) \right\|^{\frac{1}{k}},
\end{equation}
whenever the limit on the right-hand side exists, where $ \overline{x}(k) = \frac{\mathbf{1}\mathbf{1}^\top }{n}x(k)$ denotes the average state vector at time step $ k $.
\end{definition}
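For a noiseless, unattacked system the definition can be evaluated directly. In the Python sketch below (the $2\times 2$ doubly stochastic matrix is illustrative, with eigenvalues $1$ and $0.2$), the empirical estimate $\|x(K)-\overline{x}(K)\|^{1/K}$ approaches $|\lambda_2| = 0.2$:

```python
A = [[0.6, 0.4], [0.4, 0.6]]     # eigenvalues: 1 and 0.2
x = [1.0, 0.0]
K = 15
for _ in range(K):
    x = [sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

xbar = sum(x) / 2.0                                  # average is preserved
dev = sum((xi - xbar) ** 2 for xi in x) ** 0.5       # ||x(K) - xbar(K)||
rho_hat = dev ** (1.0 / K)                           # empirical rate
assert abs(xbar - 0.5) < 1e-9
assert abs(rho_hat - 0.2) < 0.01                     # approaches |lambda_2|
```

The horizon $K$ is kept moderate so that the geometric decay of the deviation still dominates floating-point round-off.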
\begin{theorem} \label{converage}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), an asymptotic consensus will be reached almost surely, i.e. $\lim_{k\rightarrow \infty} \left( x(k) - \overline{x}(k) \right) \stackrel{{\rm a.s.}}{=} \mathbf{0}_{n \times 1}$, and the convergence rate $ \varrho $ satisfies $
\varrho \leq \max\{\rho,|\lambda_2|,|\lambda_n| \}$,
if
\begin{enumerate}
\item no alarm is triggered;
\item $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n
$.
\end{enumerate}
\end{theorem}
\begin{proof}
Using the result of Lemma \ref{decomposition}, we can get that
\begin{equation}\label{x-x_ave}
x(k) - \overline{x}(k) = \mathcal{A}^kx(0) + \sum_{i=0}^{k-1} \mathcal{A}^{k-i}w(i) + \sum_{i=0}^{k-1} \mathcal{A}^{k-1-i} B u(i).
\end{equation}
Now, we analyze the three terms on the right-hand side of the equality above in turn.
(1) ``$ \mathcal{A}^kx(0) $'' :
For any initial state value $ x(0) $, we have
$ \| \mathcal{A}^kx(0) \| \leq \max\{|\lambda_2|, |\lambda_n| \}^k \|x(0) \| \rightarrow 0$ as $ k \rightarrow \infty$.
(2) ``$ \sum_{i=0}^{k-1} \mathcal{A}^{k-i}w(i) $'': According to Algorithm \ref{algorithm}, we can re-express $ \sum_{i=0}^{k-1} \mathcal{A}^{k-i}w(i) $ as follows:
\begin{multline}
\sum_{i=0}^{k-1} \mathcal{A}^{k-i}w(i) = \mathcal{A}\varphi^{k-1}v(k-1) \\
+ \sum_{i=0}^{k-2}\varphi^i \mathcal{A}^{k-1-i}(\mathcal{A}-I)v(i).
\end{multline}
As before, we have $ \| \mathcal{A}\varphi^{k-1}v(k-1) \| \leq \max \{|\lambda_2|,|\lambda_n|\} \varphi^{k-1}
\|v(k-1)\|
\stackrel{{\rm a.s.}}{\rightarrow} 0 $ as $ k \rightarrow \infty$.
Based on the result given by \eqref{borel} and the triangle inequality, there exists a constant $ d_1 > 0$ such that for any time step $ k $,
\begin{equation}\label{36}
\left\| \sum_{i=0}^{k-2}\varphi^i \mathcal{A}^{k-1-i}(\mathcal{A}-I)v(i) \right\|
\stackrel{\text{a.s.}}{\leq } d_1 k^2 \max \{\varphi,|\lambda_2|,|\lambda_n|\}^k.
\end{equation}
Since $ 0 < \max\{\varphi, |\lambda_2|,|\lambda_n|\} < 1 $, letting $ k $ approach infinity yields
$ \lim_{k \rightarrow \infty} \left\| \sum_{i=0}^{k-2}\varphi^i \mathcal{A}^{k-1-i}(\mathcal{A}-I)v(i) \right\| \stackrel{{\rm a.s.}}{=} 0 $,
i.e., $\sum_{i=0}^{k-2}\varphi^i \mathcal{A}^{k-1-i}(\mathcal{A}-I)v(i)$ converges to $\mathbf{0}_{n \times 1}$ a.s.
Combining this with the previous results, we conclude that $ \sum_{i=0}^{k-1} \mathcal{A}^{k-i}w(i)$ converges to $\mathbf{0}_{n \times 1} $ a.s.
(3) ``$ \sum_{i=0}^{k-1} \mathcal{A}^{k-1-i} B u(i) $'': According to the definition of $ B $ and the triangle inequality, it is not difficult to get that
\begin{equation}
\left\| \sum_{i=0}^{k-1} \mathcal{A}^{k-1-i} B u(i) \right\| \stackrel{{\rm a.s.}}{\leq} \sum_{i=0}^{k-1} \max\{|\lambda_2|,|\lambda_n|\}^{k-1-i} \|u(i)\|.
\end{equation}
According to the result of Lemma \ref{u<infty}, for any $ 0 < \varepsilon < 1- \rho $, there exists a constant $ d_{\varepsilon} > 0 $ depending on $ \varepsilon $ such that $ \|u(k)\| \leq d_{\varepsilon} (\rho+\varepsilon)^k $ holds a.s. for every time step $ k $. Combining the inequalities above, we can get that
\begin{multline}\label{u_sum}
\left\| \sum_{i=0}^{k-1} \mathcal{A}^{k-1-i} B u(i) \right\| \stackrel{{\rm a.s.}}{\leq} d_{\varepsilon} \sum_{i=0}^{k-1} \max\{\rho+\varepsilon,|\lambda_2|,|\lambda_n|\}^{k-1} \\
= d_{\varepsilon} k \max\{\rho+\varepsilon,|\lambda_2|,|\lambda_n|\}^{k-1} .
\end{multline}
Letting $ k $ approach infinity, we directly get that $ \sum_{i=0}^{k-1} \mathcal{A}^{k-1-i}Bu(i) $ converges to $ \mathbf{0}_{n \times 1} $ a.s.
Combining all results above, for any initial state $x(0)$, $ x(k) - \overline{x}(k) $ converges to $\mathbf{0}_{n \times 1}$ almost surely.
Now we analyze the convergence rate whenever an asymptotic consensus can be reached. For convenience, we analyze the convergence rates of the three terms on the right-hand side of \eqref{x-x_ave} in turn; note that the overall convergence rate is the largest of the three.
(1) ``$ \mathcal{A}^kx(0) $'' : Obviously, the convergence rate of the first term is $ \max \{|\lambda_2|,|\lambda_n|\} $.
(2) ``$ \sum_{i=0}^{k-1} \mathcal{A}^{k-i}w(i) $'': Similar to the above, the convergence rate of $ \mathcal{A}\varphi^{k-1}v(k-1) $ is $ \varphi $.
According to \eqref{36}, the convergence rate of the term on the left-hand side of \eqref{36} is no more than $ \max \{\varphi,|\lambda_2|,|\lambda_n|\} $ whenever the inequality holds.
(3) ``$ \sum_{i=0}^{k-1} \mathcal{A}^{k-1-i} B u(i) $'': According to \eqref{u_sum}, the convergence rate of the last term on the right-hand side of \eqref{x-x_ave} is no more than $ \max\{\rho+\varepsilon,|\lambda_2|,|\lambda_n|\} $ for any $ 0 < \varepsilon < 1-\rho $ whenever an asymptotic consensus is reached. This implies that its convergence rate is no more than $ \max\{\rho,|\lambda_2|,|\lambda_n|\} $.
Since $ 0 < \varphi < \rho $, combining all the results above, we conclude that when an asymptotic consensus is reached, the convergence rate $ \varrho $ satisfies $ \varrho \leq \max\{\rho,|\lambda_2|,|\lambda_n| \}$.
\end{proof}
Now we come to answer the last of these three problems raised at the beginning of this subsection, namely how to estimate the error of the final consensus value.
\begin{definition}[Error of the Final Consensus Value]
The error $e$ of the final consensus value of the system of the form \eqref{x(k+1)} and \eqref{y(k)} is defined as follows:
\begin{equation}
e \stackrel{\bigtriangleup}{=} \frac{1}{n}\mathbf{1}_{1 \times p} \sum_{i = 0}^{\infty}u(i).
\end{equation}
\end{definition}
\begin{remark}
The definition of the error above is well-defined, since the noise signals have no effect on the final consensus value, as proved in Theorem \ref{converage}.
\end{remark}
It is worth noting that the estimation of the error needs to be carried out under the premise that no alarm is triggered in the system from beginning to end. According to the definition of ``not triggering an alarm'', we have $ \|r(k) \| \leq c \rho^k$ for every time step $ k $. By using \eqref{r(k)_0_a_n}, \eqref{Q_B}, Algorithm \ref{algorithm} and summing over time steps $ k $ from $0$ to infinity, we have
\begin{equation}
\sum_{k=0}^{\infty} u(k) = \mathcal{Q}_B\footnote{Note that $\mathcal{Q}_B$ may not be unique. Here we select the one such that $ \|\mathbf{1}_{1 \times p}\mathcal{Q}_B\| $ achieves the minimum.} \left(\sum_{k=0}^{\infty} r(k) + \sum_{i=1}^{n}\varphi^{i-1}\mathcal{P}_i v(i-1)\right).
\end{equation}
Substituting the above equality into the definition of error $ e $, we can get
\begin{equation}\label{e}
e = \frac{\mathbf{1}_{1 \times p} \mathcal{Q}_B }{n}
\left(\sum_{k=0}^{\infty} r(k) + \sum_{i=1}^{n}\varphi^{i-1}\mathcal{P}_i v(i-1)\right).
\end{equation}
For convenience and simplicity of notation, let $ s_B $ and $ T_B $ denote $ \frac{\mathbf{1}_{1 \times p} \mathcal{Q}_B }{n} \sum_{k=0}^{\infty} r(k) $ and $ \sum_{i=1}^{n}\frac{\varphi^{i-1}}{n}\mathbf{1}_{1 \times p} \mathcal{Q}_B \mathcal{P}_i v(i-1) $, respectively. Since the $v_i(k)$ $(i=1,2,\cdots,n;\ k=0,1,\cdots)$ are standard normally distributed random variables, independent across $i$ and $k$, $ T_B $ is also a normally distributed random variable with $ \mathbb{E}[T_B] = 0 $ and $ {\rm Var}[T_B] = \sum_{i = 1}^n \frac{\varphi^{2(i-1)}}{n^2} \left\| \mathbf{1}_{1 \times p} \mathcal{Q}_B\mathcal{P}_i \right\|^2$. For any $ 0 < \beta < 1 $, let the point $ z_{B,\beta/2} $ satisfy $ \mathbb{P}\left[T_B > z_{B,\beta/2} \right] = \beta/2$. Therefore, we have $ \mathbb{P}\left[|T_B| \leq z_{B,\beta/2} \right] = 1-\beta $. Since $ e = s_B + T_B $, it follows that $ \mathbb{P}_e \left[|e -s_B| \leq z_{B,\beta/2} \right] = 1-\beta $ for any $ e \in \mathbb{R} $.
Now let $ \mu_B = \frac{c}{n(1-\rho)} \left\| \mathbf{1}_{1 \times p} \mathcal{Q}_B \right\|$.
Under the detectable condition $\operatorname{rank}\left[\mathcal{O}_{n-1} \quad \mathcal{J}_{n-1}\right]-\operatorname{rank}\left[\mathcal{J}_{n-1}\right]=n$, ``no alarm is triggered'' implies that $ |s_B| \leq \mu_B $ holds, by the Cauchy--Schwarz inequality. Therefore, we can get that $ \mathbb{P}_e \left[ -\mu_B-z_{B,\beta/2} \leq e \leq \mu_B + z_{B,\beta/2} \right] \geq 1-\beta $ for any $ e \in \mathbb{R} $, i.e., $ [-\mu_B-z_{B,\beta/2}, \mu_B+z_{B,\beta/2}] $ is a confidence interval for $ e $ with confidence coefficient not less than $ 1- \beta $.
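The interval half-width $\mu_B + z_{B,\beta/2}$ is straightforward to compute once the norms $\|\mathbf{1}_{1\times p}\mathcal{Q}_B\|$ and $\|\mathbf{1}_{1\times p}\mathcal{Q}_B\mathcal{P}_i\|$ are known. The Python sketch below uses made-up placeholder values for these norms (they are system-specific), together with the threshold parameters used in the numerical example later:

```python
import statistics

# Threshold parameters; the norms below are made-up placeholders
# (they depend on the system-specific Q_B and P_i matrices).
n, c, rho, phi, beta = 4, 16.2, 0.7, 0.2, 0.001
norm_1QB = 1.0                       # ||1_{1xp} Q_B||    (assumed)
norms_1QBPi = [1.0, 0.8, 0.5, 0.3]   # ||1_{1xp} Q_B P_i|| (assumed)

mu_B = c / (n * (1.0 - rho)) * norm_1QB
var_TB = sum(phi ** (2 * (i - 1)) / n ** 2 * norms_1QBPi[i - 1] ** 2
             for i in range(1, n + 1))
# z_{B,beta/2}: upper beta/2 quantile of the normal variable T_B
z = statistics.NormalDist(0.0, var_TB ** 0.5).inv_cdf(1.0 - beta / 2)
half_width = mu_B + z                # CI: [-half_width, half_width]
assert half_width > mu_B > 0.0
```

With these placeholder norms, $\mu_B = \frac{16.2}{4\cdot 0.3} = 13.5$ dominates the half-width, since $\varphi^{2(i-1)}$ makes the variance of $T_B$ small.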
Based on the results above and Theorem \ref{False Alarm Rate}, we can state the following theorem.
\begin{theorem}\label{error}
For a linear consensus system of the form (\ref{x(k+1)}) and (\ref{y(k)}), when an asymptotic consensus is reached, for any $ 0< \beta < 1 $, $
\bigcup_{B}\footnote{$\bigcup_{B}$ traverses all $B$s that meet the detectable condition $ {\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n $. } [-\mu_B-z_{B,\beta/2}, \mu_B+z_{B,\beta/2}] $
is a confidence interval for $ e $ with confidence coefficient not less than $ 1- \beta $,
if the following statements hold:
\begin{enumerate}
\item no alarm is triggered;
\item $
{\rm rank} \begin{bmatrix}
\mathcal{O}_{n-1} & \mathcal{J}_{n-1}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{n-1}]
= n
$.
\end{enumerate}
\end{theorem}
\section{Numerical Examples}
Consider the following network composed of $ 4 $ agents:
\begin{figure}[!htp]
\centering
\begin{tikzpicture}
[scale=1 ,auto=left,every node/.style={circle,fill=blue!30}]
\node (n3) at (1,1) {3};
\node (n4) at (-1,1) {4};
\node (n1) at (-1,-1) {1};
\node (n2) at (1,-1) {2};
\foreach \from/\to in {n1/n2,n1/n4,n2/n3,n3/n4}
\draw (\from) -- (\to);
\end{tikzpicture}
\caption{Network Topology}
\end{figure}
Suppose that the weight matrix is
\[A=
\begin{bmatrix}
0.136 & 0.461 & 0 & 0.403 \\
0.461 & 0.153 & 0.386 & 0 \\
0 & 0.386 & 0.278 & 0.336 \\
0.403 & 0 & 0.336 & 0.261
\end{bmatrix},
\]
which is generated randomly. Suppose that the initial state values of agents are $ x(0) = \begin{bmatrix}
100 & -50 & 50 & -100
\end{bmatrix}^\top. $
Without loss of generality, assume that agent $ 1 $ is benign and it is running an attack detector.
Then the matrix $ C $ is
$
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}.
$
Suppose that agent $ 3 $ is both malicious and curious and that all other agents are benign. Since for every agent $ j $, $ j = 1,2,4 $, $ \mathcal{N}_j \cup \{j\} \nsubseteq \mathcal{N}_3 \cup \{3\} $, according to Theorem \ref{privacy}, the initial state privacy of every benign agent is guaranteed.
Since $ {\rm rank} \begin{bmatrix}
\mathcal{O}_{3} & \mathcal{J}_{3}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{3}]
= 4 $, according to Theorem \ref{Detect}, there is no undetectable input. In order to avoid being detected, agent $ 3 $ inputs the attack signals $ u_3(k) = -24\times 0.2^{k} $ at every time step $ k $ into the system.
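The resulting bias of the final consensus value can be reproduced with a short noise-free Python simulation of $x(k{+}1)=Ax(k)+Bu(k)$ with $B=e_3$ (the noise terms of the algorithm are omitted here for clarity): the cumulative injected offset is $\frac{1}{n}\sum_{k}u_3(k)=\frac{1}{4}\cdot\frac{-24}{1-0.2}=-7.5$.

```python
A = [[0.136, 0.461, 0.0,   0.403],
     [0.461, 0.153, 0.386, 0.0],
     [0.0,   0.386, 0.278, 0.336],
     [0.403, 0.0,   0.336, 0.261]]
x = [100.0, -50.0, 50.0, -100.0]     # average of x(0) is 0

for k in range(2000):
    u3 = -24.0 * 0.2 ** k            # attack injected by agent 3
    x = [sum(A[i][j] * x[j] for j in range(4)) for i in range(4)]
    x[2] += u3                       # B = e_3

# All agents settle at -7.5 instead of the true average 0.
assert all(abs(xi + 7.5) < 1e-6 for xi in x)
```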
\begin{figure}[htbp]
\centering
\input{fig_1.tex}
\caption{One snapshot of the comparison between $ \|r(k)\| $ and $ c\rho^k $}
\end{figure}
\begin{figure}[htbp]
\centering
\input{fig_2.tex}
\caption{The trajectory of each state value $ x_i(k) $. The blue, red, yellow and purple lines correspond to $ x_1(k), x_2(k), x_3(k), x_4(k) $ respectively. The black dashed line corresponds to the average value of the initial state $ x(0) $. The number ``$-7.5000$'' above these lines corresponds to the final convergence value of asymptotic consensus.}
\end{figure}
Suppose that agent $ 1 $ selects the parameters as follows: $ c = 16.2, \rho = 0.7, \varphi = 0.2$. According to Theorem \ref{An Estimation of False Alarm Rate}, the false alarm rate $ \alpha $ is no more than $ 0.01$. Since no alarm is triggered after $ 2000 $ time steps have been run, and the state values of the neighbors of agent $ 1 $ and its own state value have remained $ -7.5000 $ since the $ 30 $-th time step, it can be considered that the system has achieved an asymptotic consensus. According to Theorem \ref{converage}, when an asymptotic consensus is achieved, the convergence rate satisfies $ \varrho \leq \max\{\rho,|\lambda_2|,|\lambda_n| \} = \max\{0.7, |0.2229|,|-0.6057|\} = 0.7$. One snapshot of the comparison between $ \|r(k)\| $ and $ c\rho^k $ and the trajectories of the agents' state values are shown in Fig.~2 and Fig.~3, respectively. From Fig.~3, it can be seen that although an asymptotic consensus is achieved, the final convergence value $ -7.5000 $ is not the average value $ 0 $ of the initial state $ x(0) $.
Now, agent $ 1$ begins to estimate the error of the final convergence value.
Since agent $ 1 $ does not know which agents are malicious attackers, according to Theorem \ref{error}, it needs to consider all cases that meet the detectable condition $ {\rm rank} \begin{bmatrix}
\mathcal{O}_{3} & \mathcal{J}_{3}
\end{bmatrix} - {\rm rank} [ \mathcal{J}_{3}]
= 4 $.
According to Theorem \ref{error}, setting $ \beta = 0.001 $, $ [-57.9926,57.9926] $
is a confidence interval for $ e $ with confidence coefficient not less than $ 0.999 $. If agent $ 1 $ knows that there is at most one malicious attacker in the system, $ [-29.5478,29.5478] $
is a confidence interval for $ e $ with confidence coefficient not less than $ 0.999 $.
\section{Conclusion}
In this paper, we deal with the case in which the consensus system is threatened by a set of unknown agents that are both ``malicious'' and ``curious''. We propose a privacy-preserving average consensus algorithm equipped, for every benign agent, with an attack detector with a time-varying exponentially decreasing threshold, which guarantees the initial state privacy of every benign agent under mild conditions.
An upper bound on the false alarm rate and a necessary and sufficient condition for the absence of inputs that are undetectable by the attack detector are given. We prove that the system achieves asymptotic consensus almost surely, give an upper bound on the convergence rate, and
provide estimates of the error.
\section{Introduction}
6-Mercaptopurine (6-MP) is one of the important chemotherapy drugs used for treating acute lymphocytic leukemia (ALL). It belongs to the class of medications called purine antagonists and works by stopping the growth of cancer cells. 6-MP undergoes extensive intracellular metabolic transformations that result in the production of thionucleotides and active metabolites, which have cytotoxic and immunosuppressive properties leading to various acute side-effects such as kidney injury, hepatotoxicity, pancreatitis and neuropathy.
The conversion of 6-MP according to the metabolic scheme presented in Fig.~\ref{scheme} involves several small pathways \cite{Cheo}. The desired pathway results in the formation of 6-Thioguanosine monophosphate (TGMP), which can be incorporated (via some metabolic transformations) into DNA and RNA, leading to tumor cell death \cite{Cheo1, Dev,Hed} in the case of successful treatment of ALL. The catabolic pathways regulated by the enzyme thiopurine methyltransferase (TPMT) lead to the production of various methyl-mercaptopurines affecting purine biosynthesis \cite{Cheo1}, which leads to treatment failure in most cases \cite{Dev}. The transformation (see Fig.~\ref{scheme}) of 6-Thioinosine-5'-monophosphate (TIMP) to 6-Thioinosine-5'-triphosphate (TITP) is also an additional pathway, which results in the accumulation of cytotoxic products (TITP, TDTP) and slow production of TGMP. Since the realization of each pathway depends on enzyme properties, which are considered the main regulators of the ratio of activated and inactivated metabolites, a polymorphism in the corresponding genes can, in particular, lead to drug tolerance during ALL therapy \cite{Cheo1,Dor}. On the other hand, the disturbance of the energetic balance connected with mitochondrial dysfunction can play a crucial role in the appearance of side-effects and treatment failure \cite{Dae,FernandezRamos2016}.
Besides experimental studies, enzyme activity in 6-MP metabolism and regulation effects have been the subject of numerical simulations and mathematical modelling \cite{Dev,Cheo1,Kay}. These detailed semi-mechanistic models involve various compartments of the human organism, from cells to organs, to describe side-effects as a function of the drug dose, which allows forecasting the optimal dose for successful treatment. However, these models have focused more on the properties of the regulating enzymes and exclude energy metabolism, which may play a crucial role in the occurrence of side-effects \cite{FernandezRamos2016}. Moreover, the large-scale networks of interacting components require adjusting an enormous number of kinetic constants, which prevents understanding of the principal mechanisms and key parameters of switching between the pathways of 6-MP metabolism.
This is why another approach, proposed by L.~Glass and S.~Kauffman \cite{Glass1973}, has gained popularity (see, e.g., \cite{Karlebach2008,Wang2012,LeNovere2015} for reviews of the recent state of the art): Boolean networks. A Boolean network is a graph whose nodes can take the values 0 (inactive) or 1 (active) and whose edges are matched to rules of Boolean logic. Evaluating these rules on the previous logical states of the nodes determines the subsequent state of the network's nodes.
Certainly, while ODE-based models are often over-complicated, Boolean networks are often over-simplified. For example, their over-simplicity sometimes requires introducing tricks that are artificial to a certain extent, like over- and under-expressed self-nodes with value $1\pm0.5$ \cite{Davidich2013}. This situation calls for hybrid models, which should exhibit the best sides of both approaches \cite{LeNovere2015,Fisher2007}.
This problem is closely connected with the question of the interplay between ODEs modelling a scheme of kinetic reactions and Boolean networks simulating the activity of reactants. This challenge has motivated a number of works; one can mention the pioneering article \cite{Davidich2008} as well as recent developments \cite{Stoetzel2015,Menini2016}. However, the approaches considered in these works deal with processes that exhibit sharp transitions. In other words, such ODEs correspond, as a rule, to high-order Hill kinetics, and the extraction of fast processes is possible. This is a natural situation for gene/protein networks, but the kinetics of the components of biochemical metabolic networks is smoother.
Thus, one of the goals of the present work is an attempt to overcome this difficulty by utilizing a certain freedom provided by probabilistic Boolean networks \cite{Shmulevich2002}: a set of Boolean networks, each of which corresponds to a different pathway, with the choice between them determined by potential interactions between the underlying biological components and their uncertainties.
\begin{figure}
\includegraphics[width=\columnwidth]{scheme}
\caption{Simplified scheme of 6-MP metabolism. $k_i/k_{-i}$ -- kinetic constants of the forward/backward reactions; 6-MP$_{ex}$, 6-MP$_{in}$ -- mercaptopurine outside and inside the cell; TIMP, TITP -- 6-Thioinosine-5'-monophosphate and -triphosphate; TXMP -- 6-Thioxanthine 5'-monophosphate; TGMP -- 6-Thioguanosine monophosphate; meTGMP -- 6-Methylthioguanosine monophosphate; ATP, ADP, AMP -- adenosine tri-, di- and monophosphates; $V_D$, $V_{PUR}$, $V_{OUT}$ -- common fluxes describing incorporation into the DNA and RNA of cells, inhibition of purine biosynthesis {\it de novo}, and outflux to the environment}
\label{scheme}
\end{figure}
\section{Kinetic ODE model}
To describe the principal dynamics of 6-MP metabolic transformations and to single out the key nodes of this ``metabolic chain'', we have proposed a model describing the simplified kinetic scheme shown in Fig.~\ref{scheme}. The dimensional model does not detail the dynamics of each enzyme but involves the ATP concentration as a key player in energy metabolism.
As a result, the system of ODEs corresponding to the simplified kinetic model can be written as follows:
\begin{align*}
\frac{d}{dt}&MP_{ex}&=&-k_0MP_{ex},\\
\frac{d}{dt}&MP_{in}&=&-(V_{PUR}+k_1)MP_{in}+k_0MP_{ex}+k_{-1}TIMP,\\
\frac{d}{dt}&TIMP&=&k_1MP_{in}+k_{-8}TITP-(k_2+k_7ATP+k_{-1}+k_8PP)TIMP\\
&&&+k_{-2}TXMP+k_{-7}TITP{\cdot}AMP,\\
\frac{d}{dt}&TXMP&=&k_2TIMP-k_3TXMP{\cdot}ATP-k_{-2}TXMP+k_{-3}TGMP{\cdot}AMP{\cdot}PP,\\
\frac{d}{dt}&TGMP&=&k_3TXMP{\cdot}ATP-(k_4+V_{D})TGMP-k_{-3}TGMP{\cdot}AMP{\cdot}PP+k_{-4}meTGMP,\\
\frac{d}{dt}&meTGMP&=&k_4TGMP-V_{OUT}meTGMP-k_{-4}meTGMP,\\
\frac{d}{dt}&TITP&=&k_8TIMP{\cdot}PP-k_{-8}TITP+k_7TIMP{\cdot}ATP-k_{-7}TITP{\cdot}AMP,\\
\frac{d}{dt}&ATP&=&-k_7TIMP{\cdot}ATP+k_{-3}TGMP{\cdot}AMP{\cdot}PP-k_3TXMP{\cdot}ATP+k_{-7}TITP{\cdot}AMP,\\
\frac{d}{dt}&AMP&=&-k_{-3}TGMP{\cdot}AMP{\cdot}PP+k_3TXMP{\cdot}ATP+k_7TIMP{\cdot}ATP-k_{-7}TITP{\cdot}AMP,\\
\frac{d}{dt}&PP&=&-k_8TIMP{\cdot}PP+k_{-8}TITP-k_{-3}TGMP{\cdot}AMP{\cdot}PP+k_3TXMP{\cdot}ATP.
\end{align*}
In our simulations, the kinetic constants corresponding to the biophysically relevant dynamics were determined as
$k_0=5~d^{-1}$,
$k_1=10~d^{-1}$,
$k_2=10~d^{-1}$,
$k_3=5~M^{-1}d^{-1}$,
$k_4=0.00001~d^{-1}$,
$k_7=0.01~d^{-1}$,
$k_8=0.5~M^{-1}d^{-1}$,
$k_{-7}=1~M^{-1}d^{-1}$,
$k_{-1}=0.01~d^{-1}$,
$k_{-2}=4~d^{-1}$,
$k_{-3}=0.01~M^{-2}d^{-1}$,
$k_{-4}=0.1~d^{-1}$,
$k_{-8}=0.01~d^{-1}$,
$V_{PUR}=0.01~d^{-1}$,
$V_{D}=0.9~d^{-1}$,
$V_{OUT}=0.0001~d^{-1}$,
where {\it M} means $\mu$M/mL, and d means days.
The initial concentrations were equal to zero for all variables except the fixed value $MP_{ex}(0)=0.68$ $\mu$M/mL and $ATP(0)$, whose value plays the role of a control parameter.
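For reference, a minimal explicit-Euler integration of the system can be sketched in Python (the step size and horizon are illustrative, not taken from the original simulations). The sketch also makes a built-in conservation law visible: the right-hand sides of the ATP and AMP equations are exact negatives of each other, so $ATP+AMP$ is invariant:

```python
# Rate constants as listed above (km_i denotes k_{-i}; units omitted).
k0, k1, k2, k3, k4, k7, k8 = 5.0, 10.0, 10.0, 5.0, 1e-5, 0.01, 0.5
km1, km2, km3, km4, km7, km8 = 0.01, 4.0, 0.01, 0.1, 1.0, 0.01
VPUR, VD, VOUT = 0.01, 0.9, 1e-4

def derivs(s):
    MPex, MPin, TIMP, TXMP, TGMP, meTGMP, TITP, ATP, AMP, PP = s
    dATP = (-k7 * TIMP * ATP + km3 * TGMP * AMP * PP
            - k3 * TXMP * ATP + km7 * TITP * AMP)
    return [
        -k0 * MPex,
        -(VPUR + k1) * MPin + k0 * MPex + km1 * TIMP,
        (k1 * MPin + km8 * TITP
         - (k2 + k7 * ATP + km1 + k8 * PP) * TIMP
         + km2 * TXMP + km7 * TITP * AMP),
        k2 * TIMP - k3 * TXMP * ATP - km2 * TXMP + km3 * TGMP * AMP * PP,
        (k3 * TXMP * ATP - (k4 + VD) * TGMP
         - km3 * TGMP * AMP * PP + km4 * meTGMP),
        k4 * TGMP - VOUT * meTGMP - km4 * meTGMP,
        k8 * TIMP * PP - km8 * TITP + k7 * TIMP * ATP - km7 * TITP * AMP,
        dATP,
        -dATP,                       # the AMP equation is the exact negative
        -k8 * TIMP * PP + km8 * TITP
        - km3 * TGMP * AMP * PP + k3 * TXMP * ATP,
    ]

# MPex(0) = 0.68, ATP(0) = 0.7; everything else starts at zero.
s = [0.68, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.7, 0.0, 0.0]
dt = 1e-3                            # days (illustrative step size)
for _ in range(2000):                # integrate over 2 days
    s = [si + dt * di for si, di in zip(s, derivs(s))]

assert abs(s[7] + s[8] - 0.7) < 1e-9     # ATP + AMP is conserved
assert s[0] < 1e-4                       # MPex has decayed (k0 = 5 / day)
```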
\begin{figure}
\includegraphics[width=0.29\textwidth]{1phase}
\includegraphics[width=0.33\textwidth]{3phase}
\includegraphics[width=0.33\textwidth]{2phase}
\caption{Dependence of the dynamics of the ``metabolic chain'' on the initial concentration of ATP; the red curve denotes the TXMP concentration, the blue curve the TIMP concentration}
\label{phase}
\end{figure}
\section{A Boolean network mimicking the key dynamical processes}
\subsection{Network construction}
The simplified metabolic network described above admits a representation in terms of a probabilistic Boolean network, which consists of five nodes $\{y_i\}$, $i=1..5$, and a threshold-based rule $A(\alpha_j)$ for the choice between possible pathways. The value of the continuous control parameter $\alpha_j$ can be non-stationary, depending on the iteration number $j$. The correspondence of these nodes to the metabolites and the transition rules for a parallel update of states are presented in Table~\ref{booltab}.
\begin{table}[b]
\begin{center}
\begin{tabular}{ccp{0.6\textwidth}}
\hline
Node&Metabolite&Rules of interactions and updating\\
\hline
$y_1$&6MPin&Starting node, activated when 6-mercaptopurine enters the cell. It activates TIMP and is then deactivated.\\
$y_2$&TIMP&This node is activated by 6MPin {\it or} by TITP and can activate the nodes TXMP or TITP depending on the chosen pathway (the choice is governed by the variable $\alpha$); it is deactivated after this.\\
$y_3$&TXMP&This node is activated by TIMP and can activate TGMP or TIMP depending on the chosen pathway (the choice is governed by the variable $\alpha$); it is deactivated after this.\\
$y_4$&TGMP&This node indicates the target output; it is activated by TXMP and deactivated after the completed output.\\
$y_5$&TITP&This node is activated by TIMP within one of the possible pathways and activates TIMP; it is deactivated after this.\\
\hline
$\alpha$&ATP&The continuous parameter, which governs the choice of pathways as follows: {\it if} $\alpha<0.5$, the reversible transition TIMP$\rightleftharpoons$TXMP holds and the chain is blocked before TGMP; otherwise, the irreversible activation TXMP$\to$TGMP is chosen; {\it if} $\alpha<0.75$, the pathway through TXMP is chosen; the pathway through TITP holds otherwise. The parameter $\alpha$ is non-stationary and satisfies the decay kinetics $\dot{\alpha}=-\kappa\alpha$ if the process goes through the TITP pathway.\\
\hline
\end{tabular}
\end{center}
\caption{The nodes and transition rules for the considered Boolean network}
\label{booltab}
\end{table}
The realisation of these rules via Boolean and conditional operators reads as follows (here the states of the nodes are grouped into the matrix $y(i,j)$, where $i$ indexes the node and $j$ the iteration):
\begin{verbatim}
y(:,1)=[1 0 0 0 0]';            % 6-MPin is present at the start
for j=2:M;
  if y(1,j-1)==1;               % 6-MPin activates TIMP
    y(2,j)=1;
  end
  if alpha>0.5
    if (y(2,j-1)==1)|(y(5,j-1)==1)
      if alpha<0.75             % pathway through TXMP
        y(3,j)=y(2,j-1);
        y(2,j)=y(5,j-1);        % TITP reactivates TIMP
      else                      % ATP-consuming TITP branch
        y(5,j)=1;
        alpha=(1-kappa)*alpha;
      end
    end
    y(4,j)=y(3,j-1);            % TXMP activates TGMP
  else                          % low ATP: TIMP <-> TXMP cycling
    y(3,j)=y(2,j-1);
    y(2,j)=1;
  end;
end
\end{verbatim}
Here the ``\verb!=!'', ``\verb!==!'', and ``\verb!|!'' operators denote assignment, equality, and OR, respectively. Note that the code above can be evaluated straightforwardly using MATLAB or other software supporting MATLAB-like syntax (e.g. OCTAVE, FreeMat) if supplied with initial conditions and a value of the decay parameter. The latter is introduced via the simplest discretization of the equation $\dot{\alpha}=-\kappa\alpha$ by the Euler scheme with unit time step (i.e., in accordance with the assumed step of updating the network's nodes):
$\alpha_{j+1}=(1-\kappa)\alpha_j$.
For example, they may be stated as
\begin{verbatim}
N=5;
M=8;
y=zeros(N,M);
ATP=0.6;
kappa=0.1;
alpha=ATP;
\end{verbatim}
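Since each iteration spent on the TITP branch multiplies $\alpha$ by $(1-\kappa)$, the number of such iterations before $\alpha$ drops below the threshold $0.75$ can be computed directly. The following Python sketch (a restatement of the decay rule above, not part of the original MATLAB code) reproduces the one-step and two-step delays for $\alpha_0=0.8$ and $\alpha_0=0.9$:

```python
def titp_iterations(atp, kappa=0.1, threshold=0.75):
    """Iterations on the ATP-consuming TITP branch before alpha < threshold."""
    alpha, steps = atp, 0
    while alpha >= threshold:
        alpha *= (1.0 - kappa)     # alpha_{j+1} = (1 - kappa) * alpha_j
        steps += 1
    return steps

assert titp_iterations(0.6) == 0   # TITP branch never taken
assert titp_iterations(0.8) == 1   # one delayed step
assert titp_iterations(0.9) == 2   # two delayed steps
```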
\subsection{Simulation results}
The growth of metabolite concentrations occurs sequentially, one node after another, along the ``metabolic chain'' (see Fig.~\ref{scheme}). It is revealed that TIMP is the key node of the reaction cascade, since it provides two pathways, slow and fast, which also determines the blockage of the slow pathway interacting with ATP. Hence, the concentration of ATP can be regarded as the ``key player'' in 6-MP metabolism, regulating transitions at two main points: the metabolic pathway of TITP production (the chain's branch from TIMP) and the transition TXMP$\to$TGMP.
Simulations of this kinetic model show that small concentrations of ATP lead to the blockage of the metabolic chain at the node TIMP (Fig.~\ref{phase}, left). Large concentrations of ATP result in competition between the production of TITP (the end product of the branch) and TGMP (the end product of the chain), see Fig.~\ref{phase}, middle. The optimal concentration of ATP, which shifts the pathway towards higher production of TGMP, is equal to 0.7 $\mu$M/mL (Fig.~\ref{phase}, right).
The same situation can be observed in the simulation of the Boolean network. Table~\ref{statestab} presents the results of simulations evaluated for a set of increasing initial values (\verb!ATP!) of the control parameter $\alpha$. They capture all the principal features of the dynamics of the simulated network. Note that the first two steps are the same in all cases, since the activation of TIMP by 6-MP$_{in}$ is unconditional. The different pathways are realized only during subsequent iterations.
For \verb!ATP=0.2! the dynamics is blocked at the transition from TXMP to TGMP. Instead of the forward activation, the process goes back, reactivating the node corresponding to TIMP. At the same time, since this reaction is reversible, the reactivation of TXMP occurs, etc. Thus, the system reaches a steady state, which is reflected in the unit values of the nodes $y_2$ and $y_3$ spreading {\it ad infinitum}.
The value \verb!ATP=0.6! corresponds to the situation where the pathway TXMP $\to$ TGMP is allowed but the pathway leading to TITP is blocked. As a result, the transition process is direct and straightforward: the nodes $y_1$--$y_4$ are activated sequentially during four successive iterations. When $y_4$ is activated, the target substance is released, and all nodes switch to zero in the absence of a new influx into $y_1$.
Both values \verb!ATP=0.8! and \verb!ATP=0.9! exceed the threshold value $\alpha=0.75$. Hence, the pathway to TITP is now available. This is reflected as $y_3=0$ but $y_5=1$ at the third iteration, i.e. the pathway is changed. However, there is a difference in the further time evolution of the network's states between these two cases. Namely, the table corresponding to
\verb!ATP=0.8! demonstrates the reactivation of $y_2$ (i.e. the backward transition TITP$\to$TIMP) during the next iteration and the consequent sequential activation of nodes along the pathway TIMP$\to$ TXMP$\to$ TGMP. On the other hand, these steps are delayed in the case of \verb!ATP=0.9!: both the 3rd and the 4th iterations contain $y_5=1$ and $y_i=0$, $i=1..4$, only.
Such behaviour originates from the introduced non-stationarity of the control parameter $\alpha$, which resembles the concentration of ATP. As discussed above, the TIMP$\to$TITP pathway is an ATP-consuming process. Thus, each iteration corresponding to this pathway diminishes $\alpha$ until it crosses the threshold $\alpha=0.75$ from above; afterwards, this pathway is blocked. The cases \verb!ATP=0.8! and \verb!ATP=0.9! require one and two iterations, respectively, for this decay of $\alpha$. Larger values of \verb!ATP! will result in longer delays.
Finally, we should note that the discussed results are quasi-deterministic, since they
correspond to individual realisations. In the general case, i.e. in the strict sense of probabilistic Boolean networks, one can generate an ensemble of realisations with \verb!ATP! randomly drawn from some appropriate probability distribution. Correspondingly, the output will be a distribution of node values during the iterations. However, this procedure is beyond the direct goals of the present work.
\begin{table}[h]
\begin{center}
\begin{tabular}{c|ccccccc}
ATP=0.2&&&&&&&\\
\hline
$j$& 1& 2& 3& 4& 5& 6& 7\\
\hline
$y_1$& 1& 0& 0& 0& 0& 0& 0\\
$y_2$& 0& 1& 1& 1& 1& 1& 1\\
$y_3$& 0& 0& 1& 1& 1& 1& 1\\
$y_4$& 0& 0& 0& 0& 0& 0& 0\\
$y_5$& 0& 0& 0& 0& 0& 0& 0\\
\end{tabular}
\begin{tabular}{c|ccccccc}
ATP=0.6&&&&&&&\\
\hline
$j$& 1& 2& 3& 4& 5& 6& 7\\
\hline
$y_1$& 1& 0& 0& 0& 0& 0& 0\\
$y_2$& 0& 1& 0& 0& 0& 0& 0\\
$y_3$& 0& 0& 1& 0& 0& 0& 0\\
$y_4$& 0& 0& 0& 1& 0& 0& 0\\
$y_5$& 0& 0& 0& 0& 0& 0& 0\\
\end{tabular}
\begin{tabular}{c|ccccccc}
ATP=0.8&&&&&&&\\
\hline
$j$& 1& 2& 3& 4& 5& 6& 7\\
\hline
$y_1$& 1& 0& 0& 0& 0& 0& 0\\
$y_2$& 0& 1& 0& 1& 0& 0& 0\\
$y_3$& 0& 0& 0& 0& 1& 0& 0\\
$y_4$& 0& 0& 0& 0& 0& 1& 0\\
$y_5$& 0& 0& 1& 0& 0& 0& 0\\
\end{tabular}
\begin{tabular}{c|ccccccc}
ATP=0.9&&&&&&&\\
\hline
$j$& 1& 2& 3& 4& 5& 6& 7\\
\hline
$y_1$& 1& 0& 0& 0& 0& 0& 0\\
$y_2$& 0& 1& 0& 0& 1& 0& 0\\
$y_3$& 0& 0& 0& 0& 0& 1& 0\\
$y_4$& 0& 0& 0& 0& 0& 0& 1\\
$y_5$& 0& 0& 1& 1& 0& 0& 0\\
\end{tabular}
\end{center}
\caption{The evolution of network states for various values of the control parameter.}
\label{statestab}
\end{table}
\section{Discussion}
It is known that the methylation of 6-MP, resulting in the formation of intermediate metabolites, occurs at a low concentration of intracellular ATP (0.1~$\mu$mol/ml).
Simultaneously, the concentrations of TIMP and TXMP remain constant for a prolonged time. Under these conditions, the production of the final metabolite, TGMP, slows down. As a result, the therapeutic efficiency also diminishes, while the risk of toxic action grows since intermediate metabolites of 6-MP inhibit the {\it de novo} biosynthesis of purines. Thus, the lowering of the intracellular ATP pool in T-lymphocytes results in higher toxicity and lower efficiency of this drug \cite{FernandezRamos2016,Valente2016}.
Our results model the effect of the initial concentration of ATP on the metabolism of 6-MP. They show that ATP concentrations of 0.1~$\mu$mol/ml produce high concentrations of the intermediate metabolite TIMP, which indicates an incomplete metabolism of the drug accompanied by a production of TGMP insufficient for the therapeutic action. Therefore, we suppose that intensive TIMP formation plays the role of a marker indicating an accumulation of toxic final metabolites at a high level of intracellular ATP.
A change in the ATP concentration is a key factor in the energy exchange deficit accompanied by mitochondrial dysfunction \cite{Beuster2011}. This results in a decreasing therapeutic effect of drugs during tumor treatment. It has been shown that the inhibition of glycolysis by an attenuation of glucose consumption leads to a diminishing ATP level and, finally, results in tumor cell death. However, the process of energy deficiency is reversible since the cell activates another pathway, which supports ATP accumulation, and the cell recovers its function.
The results obtained using our model suggest that the optimal initial ATP concentration is equal to 0.7~$\mu$mol/ml. It corresponds to the situation when 6-MP metabolism is a complete process resulting in both the production of therapeutically active products and a reduction of the pool of toxic intermediate products.
At the administration of cytotoxic drugs according to the protocol BFM ALL 2000 \cite{Flohr2008}, it is expedient to keep the concentration of intracellular ATP within a middle range to prevent risks of adverse drug reactions, instead of an artificial inhibition of energy metabolism \cite{Beuster2011}.
The clinical indication of a low ATP concentration is acidosis caused by lactate accumulation \cite{Beuster2011}. This leads to mitochondrial dysfunction and an additional toxic effect. Higher ATP concentrations inhibit glycolysis, resulting in glucose accumulation, glucose tolerance, and, indirectly, in the development of cardiomyopathy \cite{Guertl2011}.
Thus, we hypothesize that the maintenance of an intermediate ATP level is a necessary condition to reach a complete therapeutic effect and diminish the toxicity of the chemotherapy process.
\section{Conclusion and Outlooks}
In this work, we have analysed the dynamic behaviour of the metabolic pathways of 6-mercaptopurine with a focus on revealing a key parameter which switches between two principal ``branches'', a slow and a fast one. The results of simulations based on the system of ordinary differential equations indicate that ATP is the desired ``key player'' in the 6-MP metabolism. This conclusion is supported by a number of phenomenological observations presented in the modern biomedical literature and allows for a quantitative clarification of the underlying processes.
Based on the results of the ODE modelling, we have reformulated the problem in terms of probabilistic Boolean networks. This approach is much simpler to implement since it does not require knowledge of multiple kinetic parameters but, at the same time, adequately reproduces the key details of switching between principal dynamic regimes as a choice between different possible pathways. Therefore, it can be scaled to a more detailed picture of metabolite interactions in future research of the studied process.
We also need to highlight the crucial feature introduced into the construction of the network: a non-stationary continual parameter, which governs the switching process. Such an approach, which has demonstrated its effectiveness in the considered case study, opens new perspectives for ``hybridising'' continual (ODE-based) and discrete (Boolean) approaches to metabolic modelling. In contrast to previous works \cite{Davidich2008,Stoetzel2015,Menini2016}, which considered Boolean networks only as a limiting case of continuous-time kinetic processes (in fact, as a mimicking of switching between unstable stationary states by node activity), the introduction of non-stationarity into the probabilistic parameter allows the consideration of smoother transitions and, in principle, even the activity of small sub-networks with a small number of kinetic constants considered as building blocks for a large Boolean network.
\section*{Acknowledgement}
The work is supported by the Grant no. 14.575.21.0073, code RFMEFI57514X0073 of the Ministry of Education and Science of the Russian Federation.
This paper investigates connections between fractional viscoelasticity, fractional wave equations, causal models, and power-law attenuation within the framework of elastic wave modeling.
Paying extra attention to medical imaging applications, we intend to convey to the fractional calculus community information on developments related to time-fractional elastic wave equations.
The paper expands upon the conference proceeding paper \cite{Nasholm2012China} and the J.\ Acoust.\ Soc.\ Am.\ papers \cite{Holm2011, Nasholm2011}.
In Section \ref{sec:powerlaw}, the frequently encountered power-law attenuation nature of elastic waves in complex media is considered, especially in medical imaging.
In Section \ref{sec:fractionalzener}, a fractional generalization of the Zener viscoelastic model is reviewed. Parameter value restrictions that keep the model thermodynamically admissible are discussed and
experimental evidence is listed.
The section also demonstrates the important achievement that the fractional Zener model is equivalent to a Maxwell--Wiechert representation consisting of a set of conventional springs and dashpots. Physical principles underlying fractional viscoelasticity are also reflected upon.
Subsequently, Section \ref{sec:wave_equations} derives a fractional Zener wave equation from conservation laws and the fractional Zener constitutive stress-strain model.
Section \ref{sec:properties} then analyzes properties of this wave equation. A connection to the widely acknowledged Nachman--Smith--Waag wave equation for acoustic propagation with relaxation losses \cite{Nachman1990} is demonstrated, attenuation and finite phase speed power-law regimes are evaluated, and causality conditions are considered.
The final section provides conclusions and discussions including parallels to similar models in other fields, such as electromagnetics.
\section{Power-law attenuation in complex media\label{sec:powerlaw}}
Elastic wave attenuation in complex media such as biological tissue, polymers, rocks, and rubber often follows a frequency power law:
\begin{align}
\alpha_k(\omega) \propto \omega^\eta,
\end{align}
with the exponent $\eta \in [0,2]$.
Such power-laws can be valid over many frequency decades and examples are found all the way from infrasound to ultrasound \cite{Duck1990}.
See \eg{} Fig.~1 in \cite{Szabo00} which visualizes experimentally established power-law attenuation examples for both shear and compressional waves.
As summarized in \cite{Holm2010}, compressional wave attenuation in biological tissue commonly manifests a power-law exponent $\eta \in[1,2]$, while for shear waves in tissue one often finds $\eta \in[0,1]$.
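As a simple numerical illustration (with synthetic, noise-free data and made-up parameter values), the exponent $\eta$ can be recovered from attenuation samples as the slope of a straight-line fit in log-log coordinates:

```python
import numpy as np

# synthetic attenuation samples following alpha_k = c * omega**eta
# (parameter values are illustrative, not taken from measurements)
omega = np.logspace(5, 7, 30)        # rad/s, an ultrasound-like band
eta_true, c = 1.3, 2.5e-9
alpha_k = c * omega**eta_true

# a power law is a straight line in log-log coordinates, so eta is
# recovered by a linear fit of log(alpha_k) versus log(omega)
eta_est, log_c_est = np.polyfit(np.log(omega), np.log(alpha_k), 1)
```

With measured (noisy) data the same fit yields a least-squares estimate of $\eta$ over the chosen band.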
\subsection{Power-law attenuation in medical imaging}
The establishment of accurate wave-propagation models in power-law lossy media is important for applications where broadband waves are utilized.
Below we list examples of such applications.
In photoacoustic imaging and tomography, laser pulses are delivered into tissue where an associated thermoelastic expansion induces a wideband ultrasound emission which is used for image reconstruction \cite{beard2011biomedical,kowar2012attenuation, roitner2012experimental, treeby2010reconstruction}.
Magnetic resonance (MR) elastography is another means to estimate soft tissue stiffness, \eg{} in the liver or the breast. This method utilizes the propagation of shear waves which are monitored using MR imaging \cite{2012ehmanreview, muthupillai1995science, papazoglou2012multifrequency, Sinkus2000, Sinkus2007, sinkus2012review, yasar2012wideband}.
Doppler techniques may also be applied to monitor the frequency-dependent tissue shear wave properties \cite{barry2012shearwavesispersion}.
A related elastography method is the acoustic radiation force imaging (ARFI) technique, where tissue is deformed by a short compressional pulse which creates a propagating shear wave. The displacement due to the shear wave, as measured by ultrasound, is then used to quantify the tissue's mechanical properties, \eg{} by estimating the shear wave propagation speed \cite{bercoff2004supersonic, Chen2004, palmeri2011acoustic}.
\subsection{Modeling power-law attenuation}
For acoustic modeling, time-fractional derivative wave equations have been shown to imply power-law attenuation over wide frequency bands \cite{Holm2011, Holm2010}.
As further described in Section \ref{sec:wave_equations}, fractional wave equations can be obtained from linearized conservations of mass and momentum in combination with time-fractional constitutive relations connecting stress and strain.
Related linear wave-propagation simulations are reported \eg{} in \cite{Wismer1995, Liebler2004, Wismer06, Caputo2011}.
For waves at high amplitudes, non-linear effects need to be considered. Such models were developed in \cite{Prieur2011, Prieur2012} where the fractional Kelvin--Voigt constitutive equation was applied. See also the related recent paper \cite{treeby2012modeling}.
From a numerical modeling point of view, when considering the low-frequency or small-attenuation regimes of a power-law attenuating medium, wave equations with a d'Alembertian part and time-fractional terms may conveniently be converted into space-fractional Laplacian models. This can be beneficial due to the reduced time-signal storage needed in the propagation simulation steps \cite{Chen03, Treeby2010, Carcione2010, treeby2012modeling}. Special care needs to be taken to ensure causality of the resulting models.
The authors are not aware of similar valid conversions between fractional temporal derivatives and fractional Laplacians that are applicable for attenuation with $\eta < 1$ or within a high-frequency regime.
The multiple relaxation mechanism framework of \cite{Nachman1990} is widely considered as adequate for acoustic wave
modeling in lossy complex media like those encountered in medical ultrasound.
It relies on thermodynamics and first principles of acoustics. The corresponding wave equation for $N$ relaxation mechanisms is a causal partial differential equation with its highest time derivative order $N+2$. We denote this the Nachman--Smith--Waag (NSW) model.
In order to make the discrete NSW model attenuation adequately follow $\omega^\eta$, either the valid frequency band must be narrow, or the number of assumed mechanisms $N$ must be large, thus implying a partial differential equation of high order.
On the other hand, for a certain continuous distribution of relaxation mechanisms, the NSW and fractional Zener descriptions are equal and power-law attenuation is attained, as further reviewed in Section \ref{sec:link_to_nachman}.
Band-limited fits to power-law acoustic attenuation for relaxation models with $N=2$ and $3$ are exemplified by \cite{Tabei2003, Yang2005}. In the latter, one of the mechanisms is assumed to be of very high relaxation frequency, thus representing a thermoviscous component. The determination of the two other relaxation frequencies and their compressibilities, as well as the compressibility contribution of the thermoviscous component are then decided by numerical minimization of the resulting difference to the desired power-law attenuation.
For a large number of modeled relaxation mechanisms, such numerical optimization of the parameter fit becomes very intricate.
Another approach is used in \cite{Kelly2009}, which is closely related to \cite{Schiessel1993, Schiessel95}. It demonstrates that hierarchical fractal ladder networks of springs and dashpots can lead to power-law acoustical attenuation in a low-frequency regime. This however requires a large number of degrees of freedom which makes parameter fits cumbersome.
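A band-limited variant of such fits becomes much more tractable when the relaxation times are fixed in advance, since the compressibility contributions then enter linearly. The sketch below (all parameter values are illustrative) fits a small set of discrete relaxation mechanisms of the NSW form to a Cole--Cole-type target over two frequency decades by ordinary least squares:

```python
import numpy as np

# target: deviation of a Cole-Cole-type generalized compressibility
# from its zero-frequency value (illustrative parameter values)
omega = np.logspace(4, 6, 200)            # rad/s, fitting band
tau0, a = 1e-5, 0.6
target = 1.0 / (1.0 + (1j * omega * tau0)**a) - 1.0

# relaxation mechanisms with fixed relaxation times spread around the
# band; only the compressibility contributions are fitted (linear problem)
tau = np.logspace(-6.5, -3.5, 8)
A = -1j * np.outer(omega, tau) / (1.0 + 1j * np.outer(omega, tau))

# stack real and imaginary parts -> real-valued least squares
M = np.vstack([A.real, A.imag])
b = np.concatenate([target.real, target.imag])
kappa_nu, *_ = np.linalg.lstsq(M, b, rcond=None)
residual = np.max(np.abs(A @ kappa_nu - target)) / np.max(np.abs(target))
```

Note that unconstrained least squares may return negative (unphysical) compressibility contributions; in practice a non-negativity constraint would be imposed.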
\section{The fractional Zener constitutive relation\label{sec:fractionalzener}}
The history of fractional derivatives in physics goes back to Abel's integral equation from 1826 \cite{Abel1826}, which turns out to correspond to the $1/2$-order derivative.
Early viscoelasticity-related papers are \cite{Caputo1971, meshkov1971integral}, see also historical overviews in \cite{mainardi2012historical, Rossikhin2010B}.
Inclusion of fractional derivatives in the viscoelastic stress-strain relationship is convenient for describing many materials where the response depends on the past history, see \eg{} \cite{Mainardi2010, mainardi2012historical, Podlubny1999chapter10_2} (for reflections on this from an acoustical point of view, see \cite{Treeby2010sectionIIB}).
For a record on the most intuitive fractional generalizations of the conventional (non-fractional) viscoelastic models, see \eg{} the survey \cite{mainardi2011creeprelaxationviscosity}.
In addition, the comprehensive reviews \cite{Rossikhin1997, Rossikhin2010}, summarize research on fractional calculus in dynamic problems of solid mechanics.
Illustrations of the fractional derivative viscoelastic models commonly include the \emph{spring-pot} (or just \emph{pot}) element.
\subsection{Stress--strain relation}
A five-parameter fractional generalization of the Zener model may be expressed as
\begin{align}
\sigma(t) +\tau_{\epsilon}^{\beta} \frac{\partial^{\beta}\sigma(t)}{\partial t^{\beta}} = E_0 \left[\epsilon(t) +\tau_{\sigma}^{\alpha} \frac{\partial^{\alpha}\epsilon(t)}{\partial t^{\alpha}}\right],
\label{Eq:gZener}
\end{align}
where $t$ is the time, $\sigma(t)$ the stress, $\epsilon(t)$ the strain, $\taus$ and $\taue$ positive time constants, and $E_0$ the modulus.
Here the nomenclature for the time-fractional derivatives follows \cite{Bagley83A}; however, many authors use naming conventions where $\alpha$ and $\beta$ are interchanged. To be physically adequate, one requires $\alpha=\beta$, as further investigated in Section \ref{sec:parameter_restrictions} and Section \ref{sec:link_to_nachman}.
The fractional Kelvin--Voigt constitutive relation may be regarded as a low- and intermediate-frequency representation of the fractional Zener model \cite{Holm2011}, corresponding to $\taue \rightarrow 0$ in \eqref{Eq:gZener}.
Other fractional stress-strain relations with either the same or more degrees of freedom may be used to describe material response, as stated \eg{} in \cite{Rossikhin2010}. One example is the five-parameter approach described in~\cite{Dinzart2009}.
Such and other generalized models could equally well be applied in the wave equation derivations described in the following.
\subsection{Parameter value restrictions\label{sec:parameter_restrictions}}
Based on arguments from \cite{Bagley1986, Glockle1991}, a monotonically decreasing stress relaxation requires $\alpha=\beta$ in the stress-strain relation \eqref{Eq:gZener}. Table \ref{tab:constraints} lists the thermodynamical constraints on the parameters of \eqref{Eq:gZener}. See also the related recent paper \cite{atanakovic2011thermodynamical}.
\begin{table}[htb]
\caption{\label{tab:constraints}Fractional Zener stress-strain model \eqref{Eq:gZener} parameter constraints, from \cite{Bagley1986}. }
\centering
\begin{tabular}{l@{\quad} l@{\quad} l@{\quad} l@{\quad} l@{\quad} l@{\quad} l@{\quad} l}
\hline\hline
\rule{0pt}{11pt}
$E_0 \geq 0$, & $E_0\taus^\alpha>0$, & $\taus^\alpha \geq \taue^\beta$, &$\taue^\beta>0$, & $\alpha=\beta$\\
\hline\hline
\end{tabular}
\end{table}
We note that even though the rheological model \eqref{Eq:gZener} is not thermodynamically well-behaved for $\alpha\neq\beta$, the corresponding underlying mechanical model can actually be admissible. However this is only if the instantaneous wave is allowed to propagate at infinite speed \cite{rossikhin2001analysis}.
The fractional Zener model with $\alpha=\beta$, which corresponds to the empirical Cole--Cole rheological relaxation spectrum formulation \cite{Cole1941}, was considered already in \cite{Caputo1971} alongside experimental evidence.
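The constraints in Table \ref{tab:constraints} can be probed numerically through the complex modulus $E^*(\omega) = E_0\,[1+(\taus i\omega)^\alpha]/[1+(\taue i\omega)^\alpha]$ obtained from \eqref{Eq:gZener} with $\alpha=\beta$: when $\taus \geq \taue$, the loss modulus $\operatorname{Im} E^*(\omega)$ is non-negative at all frequencies, while swapping the time constants produces negative dissipation. A minimal check with illustrative parameter values:

```python
import numpy as np

def complex_modulus(omega, E0, tau_s, tau_e, a):
    """E*(omega) of the fractional Zener model with alpha = beta = a."""
    iwa = (1j * omega)**a        # principal branch: omega**a * exp(i*a*pi/2)
    return E0 * (1.0 + tau_s**a * iwa) / (1.0 + tau_e**a * iwa)

w = np.logspace(-3, 6, 200)
admissible = complex_modulus(w, E0=1.0, tau_s=1e-2, tau_e=1e-3, a=0.4)
violating = complex_modulus(w, E0=1.0, tau_s=1e-3, tau_e=1e-2, a=0.4)
# admissible.imag >= 0 everywhere, while the violating parameter choice
# gives a negative loss modulus, i.e. negative dissipation
```

The sign follows analytically, since $\operatorname{Im} E^* \propto (\taus^\alpha-\taue^\alpha)\,\omega^\alpha \sin(\alpha\pi/2)$.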
\subsection{Experimental evidence}\label{sec:experimental_evidences}
Parameter fits of experimental measurements to the fractional Zener model have been reported for biological materials including
brain \cite{Davis2006, Klatt2007, Kohandel2005, Sack2009},
human root dentin \cite{petrovic2005},
cranial bone \cite{liu2006},
liver \cite{Klatt2007},
arteries \cite{Craiem2008},
breast \cite{Coussot2009},
and hamstring muscle \cite{grahovac2010}.
Non-biological materials are exemplified by
metals \cite{Caputo1971},
doped corning glass \cite{Bagley83A},
rubber \cite{Bagley1986},
and polymers \cite{Coussot2009, Metzler2003, Pritz1996,Pritz2001, Pritz2003, sasso2011application}.
For an account of experimental fits to fractional calculus stress-strain models made up to 2009, see Section 2 in \cite{Rossikhin2010}.
The framework of viscoelasticity and acoustics in complex media has significant similarities with the framework of dielectrics and electromagnetic propagation. Not only do the wave equations have similarities in structure, but also the same constitutive relations connected to fractional derivative stress-strain modeling have experimentally been observed in the dielectrical properties of complex media. This is relevant for \eg{} ground-penetrating radar and medical diagnosis using ultrawideband or Terahertz electromagnetic waves. From an electromagnetic modeling point of view, the complex dielectric permittivity plays the same role as the compressibility (the complex compliance) does in acoustics and viscoelasticity.
\subsection{Maxwell--Wiechert representation}\label{sec:maxwellwiechert}
Viscoelastic constitutive stress-strain models can generally be converted into a Prony series of Maxwell elements, that is, a Maxwell--Wiechert description with springs and dashpots in parallel, as illustrated in Fig.~\ref{fig:MW}.
\begin{figure}[th!]
\begin{center}
\includegraphics[width=.7\columnwidth]{maxwellwiechert}
\end{center}
\caption{Illustration of the Maxwell--Wiechert viscoelastic representation. Spring $i$ is denoted by $E_i$ and dashpot $i$ by $\eta_i$. \label{fig:MW} }
\end{figure}
Already in \cite{mainardi1994fractional}, the fractional Zener model was actually interpreted as a relaxation distribution.
Several other authors have published related results where the fractional derivative stress-strain models are expressed without using the exotic \emph{spring-pot} viscoelastic element.
In \cite{Adolfsson2005}, a very large number of weighted Maxwell elements (Debye contributions) evenly distributed on a linear frequency scale are shown to give the same stress response as a fractional order viscoelastic model.
Ref.~\cite{Chatterjee2005} presents examples where viscoelastic damping due to several simultaneously decaying processes with closely spaced exponential decay rates is shown to induce constitutive behavior involving fractional-order derivatives.
Furthermore, Machado and Galhano have shown that averaging over a large population of microelements, each having integer-order nature, gives global dynamics with both integer and fractional components \cite{Machado2008}.
We also note that the rheological fractional spring-pot element was interpreted in terms of weighted springs and dashpots in \cite{Papoulia2010}.
The recent paper \cite{deOliveira2011} considers anomalous relaxation in dielectrics and interestingly provides relaxation distribution functions for non-Debye model flavors such as the Cole--Cole, Davidson--Cole, and Havriliak--Negami models, of which the Cole--Cole one is most tightly connected to the current work because it corresponds to $\alpha=\beta$ in the fractional Zener viscoelastic model.
As further discussed in Section \ref{sec:link_to_nachman}, a Maxwell--Wiechert description of the fractional Zener model was actually implicitly verified also in \cite{Nasholm2011} where it was connected to the multiple relaxation framework of \cite{Nachman1990} via a distribution with fractal properties.
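This relaxation-distribution picture is easy to verify numerically for the Cole--Cole spectral function $1/(1+(i\omega\tau_0)^\alpha)$, which enters the $\alpha=\beta$ fractional Zener compressibility. The sketch below discretizes the classical Cole--Cole distribution of relaxation times (an assumption: this is the textbook Cole--Cole form, not necessarily the exact Mittag-Leffler-related distribution of \cite{Nasholm2011}) and checks that the weighted continuum of Debye contributions reproduces the closed-form expression:

```python
import numpy as np

def cole_cole(omega, tau0, a):
    """Closed-form Cole-Cole spectral function 1/(1 + (i w tau0)^a)."""
    return 1.0 / (1.0 + (1j * omega * tau0)**a)

def cole_cole_from_debye(omega, tau0, a, s_max=40.0, n=20001):
    """Same function as a continuum of Debye (single-relaxation) terms,
    using the classical Cole-Cole distribution of relaxation times
    G(s) = sin(pi a) / (2 pi (cosh(a s) + cos(pi a))), s = ln(tau/tau0)."""
    s, ds = np.linspace(-s_max, s_max, n, retstep=True)
    G = np.sin(np.pi * a) / (2.0 * np.pi * (np.cosh(a * s) + np.cos(np.pi * a)))
    tau = tau0 * np.exp(s)
    kernel = 1.0 / (1.0 + 1j * np.outer(omega, tau))   # Debye contributions
    return kernel @ G * ds                             # quadrature over s
```

For $a=1/2$ the identity can also be checked by hand: the integral reduces to $\int_0^\infty (2/\pi)\,\dd u/[(u^2+1)(1+i\omega\tau_0 u^2)] = 1/(1+(i\omega\tau_0)^{1/2})$.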
\subsection{Physical background}
In \cite{bagley1991thermorheologically}, the author reflects on the surprisingly good fit of the fractional Zener model to measured data, suggesting that this hints at the existence of underlying governing principles.
The proposed model is for internal energy loss at the molecular level of a viscoelastic polymer.
A probability density was postulated to describe the motion of elevated energy states, or ``kinks'', along the molecule.
Such states behave similarly to particles in potential wells which have to overcome barriers. Under certain assumptions on the probability density functions for both an energy transition and a distribution of barrier heights, it is shown that the probability density for an energy transition is fractal.
The fractality leads to a relaxation function described by a power law or a Mittag-Leffler function, which in turn leads to a four-parameter fractional Zener model linking stress and strain.
Another interpretation is due to Mainardi who justifies the four-parameter fractional Zener model from the thermoelastic equations when the temperature change due to diffusion and adiabatic strain change is a fractional derivative. The temperature change is thus a hidden variable in the stress-strain relationship \cite{mainardi1994fractional}.
A third point of view considers propagating waves. Attenuation of both compressional and shear waves is considered to be due to two mechanisms: Absorption
which is the conversion of energy into \eg{} heat, and scattering which is the reflection of energy in
all directions. Despite the different physical explanations, both mechanisms often seem to result in power law attenuation.
In medical ultrasound below about 10 MHz, it is generally considered that absorption is the dominating mechanism \cite{bamber2005attenuationandabsorption}. It is likely that the above models are relevant for this regime.
For elastography in tissue with propagating shear waves in the 10 to 1000 Hz range, recent experimental results indicate that attenuation due to multiple scattering can dominate \cite{chatelin2012measured, juge2012subvoxel}.
A 1-D model for multiple scattering attenuation is the O'Doherty--Anstey model \cite{ODoherty71}; however, there are no well-established 2-D or 3-D equivalents. Assuming a fractal distribution of reflection coefficients, the O'Doherty--Anstey model can explain power-law attenuation \cite{Garnier2010}.
Connections between power-law attenuation, multiple scattering, and fractal geometry need to be further explored in real 2-D and 3-D media.
\section{Deriving the fractional wave equation\label{sec:wave_equations}}
Below we demonstrate that causal wave equations can be constructed from the linearized conservation of momentum and the linearized conservation of mass, combined with a fractional constitutive relation between stress and strain.
The approach outlined was applied already in \cite{Meidav1964} to derive a wave equation based on the non-fractional Zener model, which is often denoted as the \emph{standard linear solid}.
\subsection{Conservation laws}
In the following, we consider an isotropic medium where the only non-negligible stiffness parameters are either the bulk modulus, $c_{11}$, or the shear modulus, $c_{44}$, \cite{Royer00}. Then the linearized conservation of mass corresponds to the strain being defined by
\begin{align}
\epsilon(t) = \nabla u(x,t), \Fourierbackfourth \epsilon(\omega) = -i k\:u(k, \omega),
\label{Eq:strain}
\end{align}
where $u$ is the displacement in the longitudinal (for compressional waves) or transverse (for shear waves) direction, $x$ is the 3-D spatial coordinate, and the symbol $\mathcal F$ denotes transformation into the spatio-temporal frequency domain, where $\omega$ is the angular frequency and $k$ the wavenumber.
This is valid under the assumption of infinitesimal strains and rotations which is adequate \eg{} in acoustical medical imaging.
The linearized conservation of momentum is expressed as
\begin{align}
\nabla \sigma(t) = \rho_0 \frac{\partial^2u(x,t)}{\partial t^2} \Fourierbackfourth \ -i k \sigma(\omega) = \rho_0 (i \omega)^2 u(k,\omega),
\label{Eq:Newton}
\end{align}
where $\rho_0$ is the steady-state mass density and $\sigma$ denotes the stress, which in this context corresponds to the negative of the pressure.
\subsection{Generalized compressibility $\kappa(\omega)$}
The frequency-domain generalized compressibility is defined as the ratio between strain and stress: $\kappa(\omega) \triangleq \epsilon(\omega)/\sigma(\omega)$, therefore being related to the constitutive stress--strain relation.
Combining this definition with the conservation laws \eqref{Eq:strain} and \eqref{Eq:Newton} gives
\begin{align}
\nabla^2u(x,t) = \dfrac{\dd^2}{\dd t^2} \left[\kappa(t) \underset{t}{*} u(x,t)\right]
\Fourierbackfourth
\ k^2(\omega) = \omega^2\rho_0 \kappa(\omega) \label{eq:dispersion_relation}.
\end{align}
Under circumstances where the linearized conservations of mass and momentum are valid, the wave equation is thus completely determined by the generalized compressibility.
From \eqref{Eq:gZener}, the frequency-domain fractional Zener compressibility is obtained through the ratio $\epsilon(\omega)/\sigma(\omega)$:
\begin{align}
\kappa_\text{Z}(\omega) & \triangleq
\kappaz \frac{1 + (\tau_{\epsilon}i \omega)^{\beta}}{1+ (\tau_{\sigma}i \omega)^{\alpha}}\notag\\
&=
\kappaz -i\omega
\kappa_0 \dfrac{ (i\omega)^{\alpha-1} - (\tau_\epsilon^\beta/\tau_\sigma^\alpha)(i\omega)^{\beta-1}}{ \tau_\sigma^{-\alpha} + (i\omega)^\alpha}
\label{Eq:fZener_compressibility}
\end{align}
The generalized compressibility $\kappa(\omega)$ as given above is (\eg{} in viscoelasticity) called complex compliance $J^*(\omega)=1/G^*(\omega)$, where $G^*(\omega)$ is the complex modulus.
\subsection{The fractional Zener wave equation}
Insertion of the generalized compressibility \eqref{Eq:fZener_compressibility} into the dispersion relation \eqref{eq:dispersion_relation} results in the time-domain fractional Zener wave equation \cite{Holm2011}
\begin{align}
\nabla^2 u -\dfrac 1{c_0^2}\frac{\partial^2 u}{\partial t^2} + \taus^\alpha \dfrac{\partial^\alpha}{\partial t^\alpha}\nabla^2 u - \dfrac {\taue^\beta}{c_0^2} \dfrac{\partial^{\beta+2} u}{\partial t^{\beta+2}} = 0.
\label{Eq:wave_equation_zener}
\end{align}
\section{Properties of the fractional wave equation\label{sec:properties}}
Below we explore some properties of the fractional Zener wave equation, first by connecting it to a multiple relaxation wave model, then by studying the attenuation and the phase velocity as functions of frequency, where three characteristic power-law regions are identified. The causality of the model is finally verified.
More mathematics-oriented studies of fractional Zener-related wave equations can be found in \eg{} \cite{Atanackovic2002, Konjik2010, luchko2012fractional}.
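As a numerical companion to this section, the sketch below evaluates the fractional Zener compressibility \eqref{Eq:fZener_compressibility} with $\alpha=\beta$, the dispersion relation \eqref{eq:dispersion_relation}, and the resulting attenuation $-\operatorname{Im}k(\omega)$ and phase velocity $\omega/\operatorname{Re}k(\omega)$. All parameter values are illustrative, not fitted to any medium:

```python
import numpy as np

def kappa_zener(omega, kappa0=1.0, tau_s=1e-3, tau_e=1e-4, a=0.5):
    """Generalized fractional Zener compressibility with alpha = beta = a."""
    iwa = (1j * omega)**a
    return kappa0 * (1.0 + tau_e**a * iwa) / (1.0 + tau_s**a * iwa)

def wavenumber(omega, rho0=1000.0, **kw):
    """k(omega) from k^2 = omega^2 rho0 kappa(omega), forward-going branch."""
    return omega * np.sqrt(rho0 * kappa_zener(omega, **kw))

omega = np.logspace(0, 9, 400)            # rad/s
k = wavenumber(omega)
attenuation = -k.imag                     # Np/m, positive in a lossy medium
phase_speed = omega / k.real              # m/s
```

For these parameter values, the attenuation grows monotonically with frequency and the phase speed rises from the low-frequency limit $c_0 = 1/\sqrt{\rho_0\kappa_0}$ towards the elevated high-frequency plateau $1/\sqrt{\rho_0\kappa_0(\taue/\taus)^\alpha}$, with power-law regimes in between.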
\subsection{Link to the NSW multiple relaxation framework\label{sec:link_to_nachman}}
Under the assumption of a continuum of relaxation mechanisms,
the NSW model \cite{Nachman1990} is linked to fractional derivative modeling
in \cite{Nasholm2011},
where it was shown that the wave equation corresponding to a certain distribution of relaxation contributions is identical to the fractional Zener wave equation.
As summarized in the following, the associated compressibility contributions were shown to be distributed following a function related to the Mittag-Leffler function.
The current section is conceptually tightly connected to Section \ref{sec:maxwellwiechert} where the fractional Zener constitutive model is represented as a set of springs and dashpots.
\subsubsection{The NSW generalized compressibility\label{sec:nachman_generalized_compressibility}}
The NSW model of multiple discrete relaxation processes results in the generalized compressibility (which is equivalent to the rheological complex compliance $J^*(\omega)$)
\begin{align}
\kappao = \kappaz - i\omega \sum_{\nu=1}^N \dfrac{\kappan \taun}{1 + i\omega\taun},
\label{eq:Nachman_kappa_omega}
\end{align}
where the mechanisms $\nu=1\ldots N$, have the relaxation times $\tau_1,\ldots,\tau_N$ and the compressibility contributions $\kappa_1, \ldots, \kappa_N$ \cite{Nachman1990}.
Note that \eqref{eq:Nachman_kappa_omega} corresponds to a sum of $N$ weighted conventional Zener contributions, where each is given by \eqref{Eq:fZener_compressibility} with $\alpha=\beta=1$.
Following \cite{Nasholm2011}, a representation of \eqref{eq:Nachman_kappa_omega} when considering a continuum of relaxation mechanisms distributed in the frequency band $\Omega \in[\Omega_1,\Omega_2]$ with the compressibility contributions described by the distribution $\kappa_\nu(\Omega)$ becomes
\begin{align}
\kappa_\text{N}(\omega) \triangleq \kappaz - i\omega\int_{\Omega_1}^{\Omega_2} \dfrac{ \kappan(\Omega) }{\Omega + i\omega}\, \dd \Omega.
\label{eq:Nachman_kappa_omega_integral_zenertry}
\end{align}
Letting the integration limits be $\Omega_1=0$ and $\Omega_2=\infty$, and instead incorporating any possible relaxation-distribution bandwidth limitation into $\kappa_\nu(\Omega)$, the integral above becomes a Stieltjes transform. Applying the Laplace transform relation
\begin{align}
\mathcal L^{-1}_\Omega\left\{\dfrac 1 {\Omega+i\omega}\right\}(t) = e^{-i\omega t},
\end{align}
by virtue of Fubini's theorem \cite{widder1971introductionchapter5_13} the generalized compressibility \eqref{eq:Nachman_kappa_omega_integral_zenertry} then becomes
\begin{align}
\kappa_\text{N}(\omega)
&= \kappa_0 -i\omega \int_0^\infty \kappan(\Omega) \int_0^\infty e^{-\Omega t} e^{-i\omega t} \dd t\; \dd \Omega\notag\\
\ &= \kappa_0 -i\omega \mathcal F_t \Big\{H(t) \mathcal L_\Omega \left\{ \kappan(\Omega) \right\}\!\! (t) \Big\}(\omega).
\label{eq:nachman_fourier_laplace}
\end{align}
Provided that the conservations of mass \eqref{Eq:strain} and momentum \eqref{Eq:Newton} are valid, and provided that the NSW generalized compressibility $\kappa_\text{N}(\omega)$ of \eqref{eq:nachman_fourier_laplace}
is equal to the fractional Zener generalized compressibility $\kappa_\text{Z}(\omega)$ of \eqref{Eq:fZener_compressibility},
the dispersion relations from \eqref{eq:dispersion_relation} are also equal. Because the dispersion relation is a spatio-temporal Fourier representation of the wave equation,
$\kappa_\text{N}(\omega) = \kappa_\text{Z}(\omega)$ thus implies that the NSW wave equation becomes equal to the fractional Zener wave equation \eqref{Eq:wave_equation_zener}.
Direct comparison of $\kappa_\text{N}(\omega)$ in \eqref{eq:nachman_fourier_laplace} to $\kappa_\text{Z}(\omega)$ in \eqref{Eq:fZener_compressibility} shows that they are equal if the following holds:
\begin{align}
\mathcal F_t &\Big\{H(t) \mathcal L_\Omega \left\{ \kappan(\Omega) \right\}\!\! (t) \Big\}(\omega)
=\notag\\
&
\kappa_0 \dfrac{ (i\omega)^{\alpha-1} - (\tau_\epsilon^\beta/\tau_\sigma^\alpha)(i\omega)^{\alpha-(\alpha-\beta+1)}}{ \tau_\sigma^{-\alpha} + (i\omega)^\alpha}.
\label{eq:compressibilities_are_equal}
\end{align}
\subsubsection{The Cole--Cole equivalent $\alpha=\beta$ case}
First we choose to study the case $\alpha=\beta$ in a similar manner as in \cite{Nasholm2011}. Inverse Fourier transformation of both sides of \eqref{eq:compressibilities_are_equal} then gives
\begin{align}
H(t) \mathcal L_\Omega &\left\{ \kappan(\Omega) \right\}\! (t)
=
\kappa_0 (1-\tau_\epsilon^\alpha/\tau_\sigma^\alpha) \mathcal F_\omega^{-1} \left\{ \dfrac{ (i\omega)^{\alpha-1}}{ \tau_\sigma^{-\alpha} + (i\omega)^\alpha}\right\}\!(t)\notag\\
&=
\kappa_0 (1-\tau_\epsilon^\alpha/\tau_\sigma^\alpha) H(t) E_{\alpha,1} \left(-(t/\tau_\sigma)^\alpha \right),
\label{eq:compressibilities_are_equal_alphaisbeta}
\end{align}
where $E_{a,b}( \cdot )$ is the Mittag-Leffler function (see Appendix \ref{section:ML_appendix}), and $H(t)$ is the Heaviside step function. The Fourier transform relation used in the last step above is given in \eqref{eq:mittagleffler_fourier_transform}.
Moreover, using the inverse Laplace transform relation of \eqref{eq:mittagleffler_integral_relation}, Eq.~\eqref{eq:compressibilities_are_equal_alphaisbeta} hence gives
\begin{align}
\kappan(\Omega) &=
\kappa_0 (1-\tau_\epsilon^\alpha/\tau_\sigma^\alpha) f_{\alpha,1}\left(\Omega, \tau_\sigma^{-\alpha}\right)
\notag\\
& = \dfrac{1}{\pi} \dfrac{\kappaz(\taus^{\alpha}- \taue^\alpha)\Omega^{\alpha-1} \sin (\alpha\pi ) }{ (\taus\Omega)^{2\alpha} + 2(\taus\Omega)^\alpha \cos(\alpha\pi) + 1}
\triangleq \kappa_{\nu\text{ML}}(\Omega),
\label{eq:distribution_for_alphaisbeta}
\end{align}
where $f_{\alpha,1}(\Omega,a)$ was inserted from \eqref{eq:f_alpha_beta_distribution}.
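The identification \eqref{eq:distribution_for_alphaisbeta} can be checked numerically: inserting $\kappa_{\nu\text{ML}}(\Omega)$ into the Stieltjes integral \eqref{eq:Nachman_kappa_omega_integral_zenertry} should reproduce the fractional Zener compressibility, here taken in the closed form $\kappa_\text{Z}(\omega)=\kappaz\left(1+(i\omega\taue)^\alpha\right)/\left(1+(i\omega\taus)^\alpha\right)$, which follows from rearranging \eqref{eq:compressibilities_are_equal} with $\beta=\alpha$. A Python sketch with arbitrary illustrative parameters:

```python
import numpy as np
from scipy.integrate import quad

a, k0, t_sig, t_eps = 0.5, 1.0, 1.0, 0.5   # illustrative values, with t_eps < t_sig

def kappa_nu_ML(W):
    """Distribution of Eq. (distribution_for_alphaisbeta)."""
    num = k0 * (t_sig**a - t_eps**a) * W**(a - 1) * np.sin(a * np.pi)
    den = np.pi * ((t_sig * W)**(2*a) + 2*(t_sig * W)**a * np.cos(a * np.pi) + 1)
    return num / den

def stieltjes(f, w):
    """int_0^inf f(W)/(W + i*w) dW, real and imaginary parts integrated separately."""
    g_re = lambda W: f(W) * W / (W**2 + w**2)
    g_im = lambda W: -f(W) * w / (W**2 + w**2)
    re = quad(g_re, 0, 1)[0] + quad(g_re, 1, np.inf)[0]
    im = quad(g_im, 0, 1)[0] + quad(g_im, 1, np.inf)[0]
    return re + 1j * im

def kappa_N(w):   # Eq. (Nachman_kappa_omega_integral_zenertry)
    return k0 - 1j * w * stieltjes(kappa_nu_ML, w)

def kappa_Z(w):   # closed form inferred from Eq. (compressibilities_are_equal)
    return k0 * (1 + (1j * w * t_eps)**a) / (1 + (1j * w * t_sig)**a)
```

The two evaluations agree to quadrature accuracy, which supports the link claimed above.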
Note that this link between the fractional Zener and the NSW models is valid also outside the small-attenuation regime $\Im\left\{k\right\} \ll \Re\left\{k\right\}$.
Furthermore, it is worth emphasizing that the distribution function $\kappa_{\nu\text{ML}}(\Omega)$ has three distinct power-law regions where the transition is given by the product $\Omega\taus$:
\begin{align}
\kappa_{\nu\text{ML}}(\Omega) \propto \left\{
\begin{array}{ll}
\displaystyle \Omega^{\alpha-1}, & \ \text{for } \Omega\taus \ll 1\\
\displaystyle \Omega^{-1}, & \ \text{for } \Omega\taus \approx 1\\
\displaystyle \Omega^{-\alpha-1}, & \ \text{for } 1 \ll \Omega\taus,
\end{array}
\right.
\label{eq:kappa_n_Omega_regimes}
\end{align}
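The low- and high-frequency exponents in \eqref{eq:kappa_n_Omega_regimes} can be confirmed by measuring the local log-log slope of $\kappa_{\nu\text{ML}}(\Omega)$ numerically; a sketch (the parameter values are illustrative only):

```python
import numpy as np

a, k0, t_sig, t_eps = 0.5, 1.0, 1.0, 1e-3   # illustrative parameters

def kappa_nu_ML(W):
    """Distribution of Eq. (distribution_for_alphaisbeta)."""
    num = k0 * (t_sig**a - t_eps**a) * W**(a - 1) * np.sin(a * np.pi)
    den = np.pi * ((t_sig * W)**(2*a) + 2*(t_sig * W)**a * np.cos(a * np.pi) + 1)
    return num / den

def log_slope(W, h=1e-4):
    """Local power-law exponent d ln(kappa_nu) / d ln(Omega)."""
    return (np.log(kappa_nu_ML(W * (1 + h))) - np.log(kappa_nu_ML(W))) / np.log(1 + h)

# deep in the low regime the slope tends to a-1, deep in the high regime to -a-1
```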
We especially note that the high-frequency asymptote has fractal (self-similar) properties. Such a fall-off also arises for L\'evy $\alpha$-stable distributions.
This might reveal information on the underlying physics.
Keeping in mind that $\taue$ and $\taus$ may be regarded as break-frequencies, we note that estimates of the fractional Zener time parameters do not really represent single relaxation times (frequencies) as for discrete Debye models, but rather break-times (frequencies) around which the model characteristics change.
An inversion recipe to find the analogy of $\kappan(\Omega)$ given some attenuation law $\alpha_k(\omega)$ was presented in \cite{Vilensky2012}, by application of an approach tightly related to \cite{Pauly1971}.
The small-attenuation assumption can at least for low frequencies often be reasonable for compressional wave propagation in biological tissue. By contrast, for shear-wave propagation in tissue, the attenuation is generally much more pronounced \cite{Szabo00}.
\subsubsection{The $\alpha\neq \beta$ case}
For the more general situation when $\alpha\neq \beta$, inverse Fourier transformation of both sides of \eqref{eq:compressibilities_are_equal} instead gives
\begin{align}
H(t) &\mathcal L_\Omega \left\{ \kappan(\Omega) \right\}\! (t) =\notag\\
\kappa_0 \mathcal F_\omega^{-1} \left\{ \dfrac{ (i\omega)^{\alpha-1}}{ \tau_\sigma^{-\alpha} + (i\omega)^\alpha}\right\}\!(t) \notag\\
&
-\dfrac{\kappa_0 \tau_\epsilon^\beta}{\tau_\sigma^\alpha} \mathcal F_\omega^{-1} \left\{ \dfrac{ (i\omega)^{\alpha-(\alpha-\beta+1)}}{ \tau_\sigma^{-\alpha} + (i\omega)^\alpha}\right\}\!(t).
\label{eq:compressibilities_are_equal_alphaisNOTbetafirst}
\end{align}
Subsequently carrying out the inverse Fourier transforms gives
\begin{align}
H&(t) \mathcal L_\Omega \left\{ \kappan(\Omega) \right\}\! (t) =\notag\\
&
H(t) \kappa_0 \Big[ E_{\alpha,1}\left(-(t/\taus)^\alpha\right)
\notag\\
&-(\tau_\epsilon^\beta/\tau_\sigma^\alpha)\, t^{\alpha-\beta} E_{\alpha,\alpha-\beta+1}\left(-(t/\taus)^\alpha\right) \Big].
\label{eq:compressibilities_are_equal_alphaisNOTbeta}
\end{align}
By recognizing in the equation above the Laplace transform relation \eqref{eq:mittagleffler_integral_relation} of the Appendix, the distribution which we choose to denote $\kappa_{\nu\text{ML}}'(\Omega)$ is identified:
\begin{align}
\kappan(\Omega) &=
\kappa_0 f_{\alpha,1}\left(\Omega, \tau_\sigma^{-\alpha}\right)
-\kappa_0(\tau_\epsilon^\beta/\tau_\sigma^\alpha) f_{\alpha,\alpha -\beta+1}\left(\Omega, \tau_\sigma^{-\alpha}\right)
\notag\\
=\ & \dfrac{\kappa_0}{\pi \Omega} \cdot
\dfrac{1}{ (\taus\Omega)^{2\alpha} + 2(\taus\Omega)^\alpha \cos(\alpha\pi) + 1}
\cdot \notag\\
&\Big[ (\tau_\sigma \Omega)^{\alpha} \sin (\alpha\pi)
-(\tau_\epsilon\Omega)^{\beta}\sin(\beta\pi) \notag\\
&\quad - (\tau_\sigma\Omega)^\alpha (\taue\Omega)^\beta \sin ((\beta-\alpha)\pi)
\Big]
\notag\\
\triangleq\ & \kappa_{\nu\text{ML}}'(\Omega).
\label{eq:distribution_for_alphaisNOTbeta}
\end{align}
The fractional Zener wave equation \eqref{Eq:wave_equation_zener} may hence, at first glance, appear to be contained within the NSW framework of multiple relaxation \cite{Nachman1990}, when assuming a continuum of relaxation mechanisms with the compressibility contribution as given by the distribution $\kappa_{\nu\text{ML}}'(\Omega)$ of \eqref{eq:distribution_for_alphaisNOTbeta}.
We note that as a consequence of the $b<1$ criterion for the identity \eqref{eq:mittagleffler_integral_relation} to be valid, the step from \eqref{eq:compressibilities_are_equal_alphaisNOTbeta} to \eqref{eq:distribution_for_alphaisNOTbeta} is only correct for $\alpha \leq \beta$.
On the other hand, for a continuous distribution of relaxation process contributions \eqref{eq:Nachman_kappa_omega_integral_zenertry} to be physically meaningful, the distribution $\kappa_\nu(\Omega)$ must be non-negative for all $\Omega$. A closer investigation of the distribution $\kappa_{\nu\text{ML}}'(\Omega)$ in \eqref{eq:distribution_for_alphaisNOTbeta} above reveals that no matter how the non-negative parameters are set, it cannot be positive for all $\Omega$. This is in accordance with the $\alpha = \beta$ thermodynamical restriction discussed in Section \ref{sec:parameter_restrictions}.
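The sign change can be exhibited numerically. With the illustrative choice $\alpha=0.3$, $\beta=0.7$ (any $\alpha<\beta$ behaves similarly), the distribution of \eqref{eq:distribution_for_alphaisNOTbeta} is positive at low $\Omega$ but turns negative at high $\Omega$:

```python
import numpy as np

k0 = 1.0
a, b = 0.3, 0.7            # alpha < beta, illustrative values
t_sig, t_eps = 1.0, 0.5

def kappa_prime(W):
    """kappa'_{nu ML}(Omega) of Eq. (distribution_for_alphaisNOTbeta)."""
    D = (t_sig * W)**(2*a) + 2*(t_sig * W)**a * np.cos(a * np.pi) + 1
    bracket = ((t_sig * W)**a * np.sin(a * np.pi)
               - (t_eps * W)**b * np.sin(b * np.pi)
               - (t_sig * W)**a * (t_eps * W)**b * np.sin((b - a) * np.pi))
    return k0 / (np.pi * W) * bracket / D

# positive at low Omega, negative at high Omega: not an admissible
# non-negative relaxation distribution
```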
This calls for a modification of the $\alpha \neq \beta$ version of the fractional Zener wave equation \eqref{Eq:wave_equation_zener} in order to make it thermodynamically admissible. Based on arguments presented in \cite{Pritz2003}, we suggest starting out from a modified, five-parameter form of the constitutive relation \eqref{Eq:gZener} which is equivalent to an ansatz analyzed in \cite{friedrich1992generalized} and reviewed in \cite{Rossikhin2010}:
\begin{align}
\sigma(t) +\tau_{\epsilon}^{\beta} \frac{\partial^{\beta}\sigma(t)}{\partial t^{\beta}}
= E_0 \left[\epsilon(t)
+ \tau_{\sigma}^{\alpha} \frac{\partial^{\alpha}\epsilon(t)}{\partial t^{\alpha}}
+ \tau_{\sigma}^{\beta} \frac{\partial^{\beta}\epsilon(t)}{\partial t^{\beta}}
\right],
\label{eq:fractional_zener_modified}
\end{align}
where we have $\alpha \leq \beta$.
A related model has been applied to cell rheology \cite{djordjevic2003cell}.
From the relation \eqref{eq:fractional_zener_modified} it is straightforward to construct a time-fractional wave equation using the methodology of Section \ref{sec:wave_equations} hence leading to
\begin{align}
\nabla^2 u -\dfrac 1{c_0^2}\frac{\partial^2 u}{\partial t^2}
+ \taus^\alpha \dfrac{\partial^\alpha}{\partial t^\alpha}\nabla^2 u
+ \taus^\beta \dfrac{\partial^\beta}{\partial t^\beta}\nabla^2 u
- \dfrac {\taue^\beta}{c_0^2} \dfrac{\partial^{\beta+2} u}{\partial t^{\beta+2}}
= 0.
\label{Eq:wave_equation_modified_zener}
\end{align}
We encourage researchers to carry out the inverse transforms which yield the NSW-framework relaxation distribution $\kappan(\Omega)$ corresponding to the above wave equation.
\subsubsection{Relation to the rheological relaxation time spectrum}
In viscoelasticity, a relaxation time spectrum, $\tilde H(\tau)$, related to $\kappa_\nu(\Omega)$ is commonly studied (see \eg{} \cite{Glockle1991} and references therein). It is related to the complex modulus through
\begin{align}
G(t) = G_\infty + \int_{-\infty}^\infty \tilde H(\tau) e^{-t/\tau} \dd \ln \tau.
\end{align}
It may be shown that for the fractional Zener model when setting $\Omega=\tau^{-1}$, the $\tau$-dependency of $\tilde H(\tau)$ differs by a factor $\tau$ from $\kappa_{\nu\text{ML}}(\Omega)$ of \eqref{eq:distribution_for_alphaisbeta}.
Figs.~5 and 6 of \cite{Glockle1991} illustrate that $\alpha=\beta$ gives symmetric $\tilde H(\tau)$, while $\alpha\neq\beta$ breaks the symmetry, most significantly far away from the peak region.
\subsection{Attenuation and phase velocity}\label{sec:attenauation_and_phase_velocity}
The decomposition of the frequency-dependent wavenumber into its real and imaginary parts gives the phase velocity $c_p(\omega) = \omega/\Re\left\{k(\omega)\right\}$ and the attenuation $\alpha_k(\omega) = -\Im\left\{k(\omega)\right\}$.
In general, the attenuation and the phase velocity are thus given by insertion of the generalized compressibility into the dispersion relation \eqref{eq:dispersion_relation} as
\begin{align}
\begin{array}{l}
\alpha_k(\omega) = -\Im\left\{k\right\} = -\omega\sqrt{\rho_0}\Im\left\{\sqrt{\kappa(\omega)}\right\} \quad\text{and}\\
c_p(\omega) = \omega/\Re\left\{k\right\} = {\rho_0^{-1/2}}/\Re\left\{\sqrt{\kappa(\omega)}\right\}.
\label{eq:atten_and_soundspeed_from_kappa}
\end{array}
\end{align}
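Relations \eqref{eq:atten_and_soundspeed_from_kappa} translate directly into code; a small helper (sign conventions as in the equations above, so that a compressibility with $\Im\{\kappa\}<0$ yields positive attenuation):

```python
import numpy as np

def atten_and_speed(kappa, omega, rho0=1.0):
    """Attenuation and phase velocity from the generalized compressibility,
    Eq. (atten_and_soundspeed_from_kappa)."""
    s = np.sqrt(np.asarray(kappa, dtype=complex))
    alpha_k = -omega * np.sqrt(rho0) * s.imag
    c_p = 1.0 / (np.sqrt(rho0) * s.real)
    return alpha_k, c_p

# lossless sanity check: a real kappa gives zero attenuation and c_p = 1/sqrt(rho0*kappa)
ak0, cp0 = atten_and_speed(4.0, 3.0)
ak1, _ = atten_and_speed(1.0 - 0.2j, 3.0)   # lossy medium: positive attenuation
```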
For the fractional Zener wave equation \eqref{Eq:wave_equation_zener} with $\alpha=\beta$, the attenuation follows from combining the expression \eqref{eq:atten_and_soundspeed_from_kappa} with the fractional Zener compressibility $\kappa_\text{Z}(\omega)$ of \eqref{Eq:fZener_compressibility}.
This results in three distinct frequency regimes of attenuation power-laws determined by the products $\taus\omega$ and $\taue\omega$ \cite{Nasholm2011}:
\begin{align}
\alpha_k(\omega) \propto
\left\{
\begin{array}{ll}
\omega^{1+\alpha} & \text{low-frequencies,} \\
\omega^{1-\alpha/2} & \text{intermediate frequencies,}\\
\omega^{1-\alpha} & \text{high-frequencies.}
\end{array}
\right.
\label{eq:freq_regions}
\end{align}
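The low- and high-frequency exponents of \eqref{eq:freq_regions} can be confirmed numerically from the closed-form compressibility $\kappa_\text{Z}(\omega)=\kappaz(1+(i\omega\taue)^\alpha)/(1+(i\omega\taus)^\alpha)$, inferred from \eqref{eq:compressibilities_are_equal} with $\beta=\alpha$ (parameter values illustrative, with widely separated break times to expose the regimes):

```python
import numpy as np

a_exp, k0, rho0 = 0.5, 1.0, 1.0
t_sig, t_eps = 1.0, 1e-6        # widely separated break times

def alpha_k(w):
    """Attenuation from the fractional Zener compressibility."""
    kz = k0 * (1 + (1j * w * t_eps)**a_exp) / (1 + (1j * w * t_sig)**a_exp)
    return -w * np.sqrt(rho0) * np.sqrt(kz).imag

def slope(w, h=1e-3):
    """Local power-law exponent d ln(alpha_k) / d ln(omega)."""
    return (np.log(alpha_k(w * (1 + h))) - np.log(alpha_k(w))) / np.log(1 + h)

# slope -> 1 + alpha for w*t_sig << 1 and -> 1 - alpha for w*t_eps >> 1
```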
Below, the fractional Zener phase velocities and attenuations are further investigated numerically.
The results from such calculations are compared to what is found by insertion of the distribution $\kappa_{\nu\text{ML}}(\Omega)$ of \eqref{eq:distribution_for_alphaisbeta} into the NSW generalized compressibility integral formula \eqref{eq:Nachman_kappa_omega_integral_zenertry}. This generalized compressibility is finally applied to \eqref{eq:atten_and_soundspeed_from_kappa}, from which $\alpha_k(\omega)$ and $c_p(\omega)$ are found.
We use the latter calculation method to explore the effect of letting the continuum of relaxation mechanisms populate only a bounded frequency interval, rather than the entire $\Omega\in[0,\infty)$ region.
Figure \ref{fig:curves} compares attenuation curves, the frequency-dependent phase velocity, and the distribution $\kappa_{\nu\text{ML}}(\Omega)$.
Note that for many applications, the ratio $\taus/\taue$ is only slightly larger than one. This implies that the intermediate regime becomes negligible.
For attenuation in seawater \cite{Ainslie1998} and air \cite{Bass1995}, one usually considers only three discrete relaxation contributions each with $\alpha=1$, which results in the familiar $\alpha_k \propto \omega^2$ for low frequencies and constant attenuation for high frequencies.
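The quoted Debye limiting behavior is easy to reproduce for a single $\alpha=1$ mechanism of \eqref{eq:Nachman_kappa_omega} (illustrative parameters; the $\omega^2$ low-frequency rise and the constant high-frequency attenuation appear as log-log slopes of $2$ and $0$):

```python
import numpy as np

k0, k1, tau1, rho0 = 1.0, 0.1, 1.0, 1.0   # one alpha=1 (Debye) relaxation mechanism

def alpha_k_debye(w):
    kap = k0 - 1j * w * k1 * tau1 / (1 + 1j * w * tau1)
    return -w * np.sqrt(rho0) * np.sqrt(kap).imag

def slope(w, h=1e-3):
    return (np.log(alpha_k_debye(w * (1 + h))) - np.log(alpha_k_debye(w))) / np.log(1 + h)
```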
\begin{figure}[th!]
\begin{center}
{\includegraphics[width=.81\columnwidth]{attenuation_0_5}}\\
\ \ \ {\includegraphics[width=.75\columnwidth]{soundspeed_0_5}}\\
{\includegraphics[width=.79\columnwidth]{kappa_nu_0_5}}
\end{center}
\caption{\label{fig:curves}
Properties of the fractional Zener model exemplified for $\taus=1000\taue$ and $\alpha=\beta = 0.5$.
Top pane: frequency-dependent attenuation as a function of normalized wave frequency $\omega\cdot\taus$, where the three power-law regimes are distinguishable.
Middle pane: Frequency-dependent phase velocity as a function of normalized wave frequency $\omega\cdot\taus$.
Bottom pane: the corresponding normalized effective compressibilities $\kappa_{\nu\text{ML}}(\Omega)$ of the continuum of relaxation processes as a function of normalized relaxation frequency $\Omega\cdot\taus$.
}
\end{figure}
In the figures we observe that the parameter $\taus$ may be regarded as related to break frequencies, where the distribution of relaxation frequencies crosses over between different power-law regimes of $\kappa_\nu(\Omega)$.
Notably, the low- and high-frequency asymptotes of $\kappa_\nu(\Omega)$, which both have fractal properties, are also clearly visible.
The break frequencies also correspond to where the attenuation crosses over between different power-law regimes.
\subsection{Causality and finite phase speed}
We observe that, according to the NSW paper \cite{Nachman1990}, any physical mechanism that fits into the multiple relaxation framework corresponds to finite phase speed, non-negative attenuation, and causal response at all wave frequencies.
This parallels the conversion of stress-strain models into the Maxwell--Wiechert representation.
The NSW model causality is also verified because it complies with the Kramers--Kronig causality relations. See \cite{Waters2005} for an acoustics-oriented treatment of these relations.
In Section \ref{sec:nachman_generalized_compressibility}, we re-wrote the continuous relaxation process distribution \eqref{eq:Nachman_kappa_omega_integral_zenertry}, which is a generalization of the NSW discrete sum \eqref{eq:Nachman_kappa_omega},
into expression \eqref{eq:nachman_fourier_laplace}: $\kappa_\text{N}(\omega) = \kappa_0 -i\omega \mathcal F_t \Big\{H(t) \mathcal L_\Omega \left\{ \kappan(\Omega) \right\}\!\! (t) \Big\}(\omega)$.
Studying this expression leads to the conclusion that for any distribution of relaxation contributions $\kappan(\Omega)$ for which the successive Laplace and Fourier integrals of \eqref{eq:nachman_fourier_laplace} exist, the corresponding wave equation gives finite phase speed, non-negative attenuation, and causal solutions at all wave frequencies.
Moreover, it is worth emphasizing that some models are causal yet imply unbounded phase speeds. For example, the fractional Kelvin--Voigt wave equation has unbounded phase speed in the infinite-frequency limit \cite{Caputo1967, Wismer06, Holm2010}.
The causality properties of several acoustical attenuation models are investigated in \cite{kowar2012attenuation}.
Regarding restrictions on the attenuation power-law exponents, \cite{Weaver1981} argues that causality is maintained only if the attenuation has a slower than linear rise with frequency in the high-frequency limit. This requirement is met both for the fractional Zener wave attenuation and the NSW attenuation.
See also \cite{seredynska2010relaxationdispersion} for a related study.
\section{Discussion and concluding remarks}
Within the framework of elastic waves, the current work surveys connections between concepts of power-law and multiple relaxation attenuation, causality, and fractional derivative differential equations.
The fractional Zener elastic wave equation model fits attenuation measurements well and is characterized by a small number of parameters. The NSW model \cite{Nachman1990} is widely acknowledged in acoustical modeling and does not comprise fractional derivatives.
Here we have analyzed how the fractional Zener \eqref{Eq:gZener} and NSW \eqref{eq:Nachman_kappa_omega_integral_zenertry} wave equation models can be unified under the assumption of a continuous distribution of relaxation mechanisms \eqref{eq:distribution_for_alphaisbeta} which has fractal properties.
Because NSW-compatible wave equations are causal as well as consistent with non-negative attenuation and finite phase speed, we prefer to consider any such model as eligible from a physically intuitive point of view.
Nevertheless, it is still unclear to the acoustics community which underlying physical mechanisms interplay in complex media to result in power-law attenuation of elastic waves.
The viscoelastic and acoustic characteristics of complex media share many similarities with what is encountered for the dielectric properties of complex media.
We here point out that there are several theories within this field on how to explain \eg{} Cole--Cole behavior by medium disordering, scaling, and geometry as well as more probabilistic approaches. See \cite{weron2000probabilistic, nigmatullin2005theoryofdielectric, stanislavsky2007stochasticnature, feldman2012dielectric} and the references therein. Maybe the search for first principles explanations for the fractional behavior of complex elastic media can be inspired by such findings.
Furthermore, we note that relaxation processes in nuclear magnetic resonance (NMR), as often described by the so-called Bloch equations, can actually also display non-Debye behavior; see \eg{} \cite{bhalekar2012generalizedfractional} and the references therein.
In addition, developments related to dispersion and attenuation of elastic waves have many traits in common with the mathematical descriptions of luminescence decay, see \eg{} \cite{berberiansantos2008}.
We aim to encourage the acoustical community to more frequently adopt fractional calculus descriptions for wave modeling in complex media.
Hopefully we can also stimulate both mathematics-oriented and other researchers to spark further progress within fractional modeling of elastic waves by contributing to advances in e.g.\ model enhancements, existence and Green's function considerations, as well as in analytical and numerical investigations.
We call for further exploration of connections between fractional dynamics, the surprisingly prevalent power-law patterns of nature, and the micromechanical structure of complex materials.
\section{Appendices}
\subsection{Appendix: Mittag-Leffler function properties}
{\label{section:ML_appendix}
\subsubsection{Definition and Fourier Transform Relation}
The one-parameter Mittag-Leffler function was introduced in \cite{Mittag-Leffler1903}. A two-parameter analogy was presented in \cite{Wiman1905}, and may be written as
\begin{align}
\MittagLefflerAlphaBetaT \triangleq \sum_{n=0}^\infty \dfrac{t^n}{\Gamma(a n+ b )},
\label{eq:mittagleffler_definition}
\end{align}
where $\Gamma$ is the Euler Gamma function and the parameters are commonly restricted to $\{a, b \} \in \C,\ \Re{\{a, b \}}>0$, and $t\in\C$.
See~\cite{Haubold2011} for a comprehensive review of Mittag-Leffler function properties.
A useful Fourier transform pair involving the Mittag-Leffler function is \cite{Podlubny1999chapter1-2}
\begin{align}
\Fouriertransform{H(t)\: t^{ b -1}\MittagLeffler{ a }{ b }{-A t^ a }}(\omega) = & \dfrac{(i\omega)^{ a - b }}{A+ (i\omega)^ a }.
\label{eq:mittagleffler_fourier_transform}
\end{align}
\subsubsection{\label{section:ML_appendix_integral_representation}Laplace Transform Integral Representation}
The function $t^{b-1}\MittagLeffler{ a }{ b }{{-A}t^a}$ may for $0< a \leq b < 1$ be written in the integral form \cite{Djrbashian1966,Djrbashian1993chapter1}:
\begin{align}
t^{ b -1}\MittagLeffler{ a }{ b }{-A t^ a } = \int_0^\infty e^{-\Omega t} f_{ a , b }(\Omega, A)\: \dd \Omega,
\label{eq:mittagleffler_integral_relation}
\end{align}
where
\begin{align}
f_{ a , b }(\Omega,A) = \dfrac{\Omega^{ a - b }}{\pi} \dfrac{A \sin [( b - a )\pi ] + \Omega^ a \sin( b \pi)}{ \Omega^{2 a } + 2 A \Omega^ a \cos( a \pi) + A^2}.
\label{eq:f_alpha_beta_distribution}
\end{align}
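The representation \eqref{eq:mittagleffler_integral_relation} lends itself to a numerical check: summing the defining series \eqref{eq:mittagleffler_definition} on the left-hand side and applying quadrature to the right-hand side should agree. A sketch for the admissible illustrative choice $a=b=1/2$, $A=1$, $t=1$ (the series summation is only adequate for moderate $|z|$):

```python
import math
from scipy.integrate import quad

def ml(a, b, z, nterms=80):
    """Two-parameter Mittag-Leffler function via its defining series."""
    return sum(z**n / math.gamma(a * n + b) for n in range(nterms))

def f_ab(W, a, b, A):
    """Spectral function f_{a,b}(Omega, A) of Eq. (f_alpha_beta_distribution)."""
    num = A * math.sin((b - a) * math.pi) + W**a * math.sin(b * math.pi)
    den = W**(2*a) + 2 * A * W**a * math.cos(a * math.pi) + A**2
    return W**(a - b) / math.pi * num / den

a, b, A, t = 0.5, 0.5, 1.0, 1.0
lhs = t**(b - 1) * ml(a, b, -A * t**a)
rhs = quad(lambda W: math.exp(-W * t) * f_ab(W, a, b, A), 0, math.inf, limit=300)[0]
# lhs and rhs should agree
```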
Careful reading of Appendix E in \cite{Mainardi2010} reveals that the above functions $f_{a,b}(\Omega,A)$ may be denoted \emph{spectral functions}, which are non-negative.
\section*{Acknowledgements}
The authors would like to thank the FCAA editor and reviewers for recommending us to submit this Survey paper.
This paper has been partially supported by the ``High Resolution Imaging and Beamforming'' project of the Norwegian Research Council.
\bibliographystyle{jasanum}
\section{Introduction}
\ExecuteMetaData[sections/introduction.tex]{tag}
\section{Methodology}
\ExecuteMetaData[sections/methodology.tex]{tag}
\section{Numerical Results and Analysis}
\ExecuteMetaData[sections/results.tex]{tag}
\section{Conclusion and Future Work}
\ExecuteMetaData[sections/conclusions.tex]{tag}
\section*{Acknowledgment}
The authors would like to acknowledge the stimulating discussions and help from Victor Merchan, Jose Manuel Vera, and Joo Wang Kim, as well as Tiendas Industriales Asociadas Sociedad Anonima (TIA S.A.), a leading grocery retailer in Ecuador, for providing the necessary funding for this research effort.
\subsection{Object Detection System}
\ExecuteMetaData[sections/detections.tex]{tag}
\subsection{Tracking System}
\ExecuteMetaData[sections/tracking.tex]{tag}
\end{document}
\section{Introduction}
Single scalar field inflationary models have solid predictions for the scalar and the tensor power spectra, and the amount of non-gaussianity produced by the interactions. These observable quantities are fixed by a few parameters like the slow-roll parameter of the potential. Moreover, in these models the quantum loop corrections to the standard inflationary predictions turn out to be quite small (see e.g. \cite{s1,s2,s3}). As a result of this firm structure, many single field scalar models are either ruled out or severely constrained by the recent Planck data \cite{pl} (see \cite{ls} for a scan of inflationary scenarios in the light of Planck). For example, the chaotic $m^2\phi^2$ model is ruled out by 95\% confidence level (provided the index is not running) by the contours in the the scalar-to-tensor ratio $r$ vs. the scalar spectral index $n_s$ data plane \cite{pl}.
The constancy of the superhorizon curvature perturbation $\zeta$ is very crucial for the inflationary predictions to hold. This helps to determine the cosmic microwave background (CMB) fluctuations from the correlation functions evaluated at the horizon crossing time. Technically, the conservation of $\zeta$ sets an upper bound for the time integrals that appear in the in-in perturbation theory \cite{mal}.
On the other hand, it is well known that the entropy perturbations can cause superhorizon evolution of the curvature perturbation (see e.g. \cite{mfb}). In reheating, those fields that are unimportant during the exponential expansion are exited by the inflaton decay and they start to dominate the universe. Moreover, in some models the decay can occur violently in a preheating stage \cite{reh1,reh2,reh3,reh4}. Although these entropy perturbations are not produced at cosmologically interesting scales, reheating stage ends with highly nonlinear processes (see e.g. \cite{n1,n2,n3}). While these nonlinearities can be effectively described by fluid dynamics that only affect local quantities \cite{bsz}, some of them are known to have important consequences (see e.g. \cite{coh1,coh2}).
Classical, and similarly quantum, nonlinearities imply that Fourier modes do not evolve independently. As a result of this mode-mode coupling, short distance fluctuations are expected to affect the long wavelength modes. For example, a cubic interaction term in a Lagrangian would allow two modes with nearly equal large momenta to change the amplitude of a mode with small momentum. In quantum theory, there are also virtual modes circulating in the loops that affect the correlation functions. Evidently, it is crucial to determine the size of such effects. In a recent work, one of us has shown that in the chaotic $m^2\phi^2$ model with a period of preheating where the inflaton $\phi$ decays to another scalar $\chi$, the parametrically amplified $\chi$ modes appearing in the loops would meaningfully modify the curvature power spectrum \cite{ali1}. This is an example of the entropy perturbations affecting the superhorizon curvature variable, however not by the real physical fluctuations but because of the virtual entropy modes appearing in the loops (entropy modes are known to give power loop infrared divergences during inflation \cite{ir1,ir2}).
Since $\zeta$ becomes an ill defined dynamical variable during reheating, the calculations in \cite{ali1} have been carried out in the $\zeta=0$ gauge. In that case, one may first calculate the $\chi$ loop corrections to the inflaton fluctuation power spectrum until the coherency of the inflaton oscillations is lost. After that moment, the possible effects on the superhorizon evolution are expected to be averaged out and become negligible. One may then apply a gauge transformation to read the $\zeta$ power spectrum. Since in the first stage of the preheating the background inflaton oscillates coherently, the superhorizon modes are affected without violating causality \cite{ca}. Note that as long as the relativistic equations are treated properly, there should not arise any issue with causality.
In this paper, our first aim is to carry out the calculation of \cite{ali1} in the $\varphi=0$ gauge, i.e. we will use $\zeta$ directly as the main dynamical variable. Because $\zeta$ is only ill defined at isolated times when the inflaton velocity vanishes, the propagator has ``spikes" and it diverges at these moments. We smooth out these spikes by using the time averaged background quantities in the $\zeta$ action. In \cite{ali1}, only the loops arising from the interaction potential have been considered. Here, we determine all cubic interactions involving $\chi$ and $\zeta$, and estimate the total one loop correction to the $\zeta$ power spectrum. Not surprisingly, our computations confirm the findings of \cite{ali1} and show a significant contribution to the $\left<\zeta\zeta\right>$ correlation function.
Our second aim in this work is to determine the $\chi$ loop contributions to the non-gaussianity and to the tensor power spectrum (in single field models, the bi-spectrum is not altered by the parametric resonance effects \cite{yeni1}). Notable modifications to the scalar power spectrum found in \cite{ali1} indicate the existence of similar significant corrections for these observables. We estimate the amount of non-gaussianity from the $\zeta$-three point function by calculating the one loop graphs arising from the cubic interactions. It turns out that these corrections to the three point function can be expressed in terms of the two point function and it is possible to read the shape independent non-gaussianity order parameter $f_{NL}$. Similarly, the $\chi$ field coupled to the tensor fluctuations yield loop corrections to the tensor power spectrum. Since the tensor field behaves like a test field propagating on the background, the tensor calculation is not affected by different time slices of spacetime. We find that the tensor power spectrum is moderately corrected by the loops in reheating.
The organization of the paper is as follows. In the next section, we consider the chaotic $m^2\phi^2$ model with the extra preheating scalar field $\chi$ to which the inflaton decays in the parametric resonance regime.
In section \ref{iii}, we determine the cubic interaction terms involving the curvature perturbation $\zeta$, the tensor mode $\gamma_{ij}$ and the preheating scalar $\chi$. We then calculate one loop corrections to the scalar power spectrum $\left<\zeta\zeta\right>$, the three-point function $\left<\zeta\zeta\zeta\right>$, the tensor power spectrum $\left<\gamma\cc\right>$ and make order of magnitude estimates of these corrections by using the theory of preheating. In section \ref{sec3}, we further consider some higher order interactions involving two $\chi$ fields and determine their loop effects. In section \ref{sec4} we conclude with remarks and future directions.
\section{The model and linearized fluctuations}
\subsection{The background}
Let us consider the chaotic model that has the following potential
\begin{equation}\label{pot}
V(\phi,\chi)=\fr12 m^2\phi^2+\fr12 g^2 \phi^2\chi^2,
\end{equation}
where $\phi$ is the inflaton and $\chi$ is the reheating scalar, which are propagating in a flat FRW background
\begin{equation}
ds^2=-dt^2+a^2(dx^2+dy^2+dz^2).
\end{equation}
This model can be seen to be the prototype of the chaotic inflationary paradigm and preheating. As it is well known, a period of inflation can be realized if initially $\phi>\tilde{M}_p$ and the nearly exponential expansion ends roughly when $\phi\sim \tilde{M}_p/20$ (see e.g. \cite{reh4}), where $\tilde{M}_p^2=1/G$ (we define $M_p$ to be the reduced Planck mass $M_p^2=1/8\pi G$).
During inflation and in the first stage of preheating where the backreaction effects are negligible, the $\chi$ background vanishes
\begin{equation}
\chi=0.
\end{equation}
Following the exponential expansion, $\phi$ starts oscillating about its minimum $\phi=0$. Assuming
\begin{equation}
m\gg H,
\end{equation}
which is generically satisfied in this model, the background field equations can be approximately solved as
\begin{equation}\label{b1}
\phi(t)\simeq\Phi \sin(mt),\hs{5}a=a(t),
\end{equation}
where
\begin{equation} \label{fh}
a\simeq\left(\frac{t}{t_R}\right)^{2/3},\hs{5}\Phi\simeq\frac{\Phi_0}{mt},\hs{5}H\simeq\frac{2}{3t}.
\end{equation}
Note that the amplitude obeys $\dot{\Phi}+3H\Phi/2\simeq0$, where the dot denotes the time derivative.
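In units where $m=1$ and with $H=2/(3t)$ taken as in \eq{fh}, the background equation $\ddot{\phi}+3H\dot{\phi}+\phi=0$ reduces to $\ddot{\phi}+(2/t)\dot{\phi}+\phi=0$, which is solved exactly by $\sin(t)/t$, consistent with \eq{b1} and the $\Phi\propto 1/(mt)$ envelope. A short numerical sketch (the initial time is arbitrary) illustrating this behavior:

```python
import numpy as np
from scipy.integrate import solve_ivp

# In units m = 1:  phi'' + (2/t) phi' + phi = 0  with H = 2/(3t)
def rhs(t, y):
    phi, dphi = y
    return [dphi, -(2.0 / t) * dphi - phi]

t0, t1 = 1.0, 60.0
y0 = [np.sin(t0) / t0, (t0 * np.cos(t0) - np.sin(t0)) / t0**2]  # match sin(t)/t at t0
sol = solve_ivp(rhs, (t0, t1), y0, rtol=1e-10, atol=1e-12, dense_output=True)
# the numerical solution follows sin(t)/t, i.e. coherent oscillations
# with an envelope decaying as 1/t
```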
We define $t_R$ and $t_F$ as follows:
\begin{eqnarray}
&&t_R:\textrm{Beginning of reheating,}\nonumber\\
&&t_F:\textrm{End of the first stage of preheating.}\nonumber
\end{eqnarray}
After the time $t_F$, the $\chi$ particles created out of the vacuum start affecting the background and thus backreaction effects set in. Our aim is to calculate the $\chi$ loop corrections to the cosmological correlation functions, which are effective in the time interval $(t_R,t_F)$.
Some features of preheating depend on the parameters of the model and many cases are discussed numerically in \cite{reh4}. For our estimates, we will use the following canonical set that gives the broad parametric resonance:
\begin{equation}\label{s1}
m=10^{-6}\tilde{M}_p,\hs{10}g=10^{-2}.
\end{equation}
In that case, the first stage of preheating ends after about $11$ inflaton oscillations and one has \cite{reh4}
\begin{equation}\label{s2}
H(t_F)\simeq 10^{-2} m, \hs{5}\Phi(t_F)=5\times 10^{-3}\tilde{M}_p.
\end{equation}
One may also note that
\begin{equation}\label{tn}
mt_R\simeq 1,\hs{7}mt_F\simeq \frac{200}{3},
\end{equation}
which can be determined from $m/H(t_F)=3mt_F/2\simeq 100$ and the fact that the first stage ends after $11$ oscillations. The initial amplitude in \eq{fh} is given by $\Phi_0\simeq \tilde{M}_p/20$.
In the physical momentum space, the first resonance band is given by $q_{phys}\in(0,q_*)$ where
\begin{equation}\label{pi}
q_*=\sqrt{gm\Phi}.
\end{equation}
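The numbers quoted above can be collected in a short bookkeeping sketch (units $\tilde{M}_p=1$; the broad resonance parameter $g^2\Phi^2/4m^2$ is the standard combination of \cite{reh4} and is introduced here only for orientation):

```python
import math

# Parameter bookkeeping for the canonical set (s1)-(s2), units tilde_Mp = 1.
g, m = 1e-2, 1e-6
Phi0, PhiF = 1.0 / 20, 5e-3            # amplitudes at t_R and t_F

mtF = 2.0 / (3.0 * 1e-2)               # from H(t_F) = 2/(3 t_F) = 10^-2 m
n_osc = (mtF - 1.0) / (2.0 * math.pi)  # oscillations between mt_R = 1 and mt_F

q_star = math.sqrt(g * m * PhiF)       # edge of the first resonance band (pi)
ratio = g * PhiF / q_star              # = sqrt(g PhiF/m), i.e. g*Phi vs q_*
q_res = (g * PhiF / m) ** 2 / 4.0      # broad resonance requires q_res >> 1
print(mtF, n_osc, q_star, ratio, q_res)
```

One finds $mt_F=200/3$, about $10$-$11$ oscillations, $q_*\simeq 7\times10^{-6}\tilde{M}_p$ and $g\Phi\simeq 7q_*$, consistent with the estimates used in the text.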
In general, there are other resonance bands which can be important for preheating \cite{reh4}. In the model we are studying, the first instability band gives the largest contribution, and in the following we simply underestimate the loop corrections by neglecting the effects of the other resonance bands. The $\chi$ momentum modes sitting in the band $(0,q_*)$ encounter exponential amplification. In determining $q_*$, we will use the smallest value of $\Phi$, i.e. $\Phi(t_F)$ in \eq{s2}. The first stage ends when the interaction potential energy density $g^2\phi^2\chi^2$ becomes comparable to the inflaton potential energy density $m^2\phi^2$, since after that moment the frequency of the inflaton oscillations is affected by the $\chi$ particles. This implies \cite{reh4}
\begin{equation}\label{ce}
\left< \chi(t_F)^2\right> \simeq \frac{m^2}{g^2}.
\end{equation}
As we will see below, \eq{ce} is important for estimating the $\chi$ loop corrections.
In a generic two-field model, the adiabatic field $\sigma$ and the entropy perturbation $\delta s$ are defined by \cite{ent}
\begin{eqnarray}
&&\dot{\sigma}=(\cos\theta)\dot{\phi}+(\sin\theta)\dot{\chi},\\
&&\delta s=(\cos\theta)\delta\chi-(\sin\theta)\delta \phi,
\end{eqnarray}
where
\begin{equation}
\cos\theta=\frac{\dot{\phi}}{\sqrt{\dot{\phi}^2+\dot{\chi}^2}},\hs{3} \sin\theta=\frac{\dot{\chi}}{\sqrt{\dot{\phi}^2+\dot{\chi}^2}}.
\end{equation}
Since the background value of $\chi$ is zero, we have $\theta=0$, $\sigma=\phi$ and $\delta s=\delta \chi$, which shows that in this model $\phi$ is the adiabatic mode and $\chi$ is the entropy mode.
\subsection{Quadratic actions and mode functions}
The full action governing the dynamics of the system can be written in the ADM form as (we set $M_p=1$)
\begin{equation}\label{a}
S=\fr12 \int \sqrt{h} \left[NA+\frac{B}{N}\right],
\end{equation}
where $N$ and $N^i$ are the standard lapse and shift functions of the metric
\begin{equation}
ds^2=-N^2dt^2+h_{ij}(dx^i+N^i dt)(dx^j + N^j dt),
\end{equation}
$K_{ij}=\fr12(\dot{h}_{ij}-D_i N_j-D_j N_i)$, $K=h^{ij}K_{ij}$, $D_i$ is the derivative operator of $h_{ij}$ and
\begin{eqnarray}
&&A=R^{(3)}-2V-h^{ij}\partial_i\phi\partial_j\phi-h^{ij}\partial_i\chi\partial_j\chi,\label{aa}\\
&&B=K_{ij}K^{ij}-K^2+(\dot{\phi}-N^i\partial_i\phi)^2+(\dot{\chi}-N^i\partial_i\chi)^2.\label{bb}
\end{eqnarray}
We define the perturbations as
\begin{eqnarray}
&&h_{ij}=a^2e^{2\zeta}(e^\gamma)_{ij},\\
&&\chi=0+\chi,\label{cpert}
\end{eqnarray}
where the gauge is completely fixed by imposing
\begin{equation}\label{gauge}
\varphi=0,\hs{5}\partial_i\gamma_{ij}=0,\hs{5} \gamma_{ii}=0.
\end{equation}
Here, $\varphi$ denotes the inflaton fluctuation and in this gauge the inflaton takes its background value $\phi=\phi(t)$ given in \eq{b1}. Note that we use the same letter $\chi$ to denote the reheating scalar fluctuation in \eq{cpert} since the background value of $\chi$ vanishes. As pointed out in \cite{ekw}, the lapse $N$ can be solved exactly as $N^2=B/A$. However, to determine the action up to cubic order it is enough to solve the constraints to linear order, which gives \cite{mal}
\begin{equation}\label{ls}
N=1+\frac{\dot{\zeta}}{H},\hs{5}N^i=\delta^{ij}\partial_j\psi,\hs{5}\psi=-\frac{\zeta}{a^2H}+\frac{\dot{\phi}^2}{2H^2}\partial^{-2}\dot{\zeta}.
\end{equation}
Note that neither $\chi$ nor $\gamma_{ij}$ appear in the solutions of $N$ and $N^i$ to this order. By expanding the action \eq{a}, one may obtain the following well known quadratic actions
\begin{eqnarray}
&&S_\zeta^{(2)}=\fr12\int a^3\frac{\dot{\phi}^2}{H^2}\left[\dot{\zeta}^2-\frac{1}{a^2}(\partial\zeta)^2\right],\nonumber\\
&&S_\chi^{(2)}=\fr12\int a^3\left[\dot{\chi}^2-\frac{1}{a^2}(\partial\chi)^2-g^2\phi^2\chi^2\right],\label{qac}\\
&&S_\gamma^{(2)}=\fr18\int a^3\left[\dot{\gamma}_{ij}^2-\frac{1}{a^2}(\partial\gamma_{ij})^2\right],\nonumber
\end{eqnarray}
which are valid both during inflation and reheating. The $\zeta$ kinetic term vanishes at times when $\dot{\phi}=0$ and the $\zeta$ propagator diverges at these times. This divergence must be cured to make the loop contributions well defined.
The free fields can be expanded as
\begin{eqnarray}
&&\zeta=\frac{1}{(2\pi)^{3/2}}\int d^3k\, e^{i\vec{k}.\vec{x}}\,\zeta_k(t) a_{\vec{k}}+h.c.\label{m1}\\
&&\chi=\frac{1}{(2\pi)^{3/2}}\int d^3k\, e^{i\vec{k}.\vec{x}}\,\chi_k(t) \tilde{a}_{\vec{k}}+h.c.\label{m2}\\
&&\gamma_{ij}=\frac{1}{(2\pi)^{3/2}}\int d^3k\, e^{i\vec{k}.\vec{x}}\,\gamma_k(t) \epsilon_{ij}^s \tilde{a}^s_{\vec{k}}+h.c.\nonumber
\end{eqnarray}
where $s=1,2$ and the ladder operators obey the usual commutator relations, e.g. $[a_k,a^\dagger_{k'}]=\delta^3(k-k')$. The polarization tensor $\epsilon^s_{ij}$ has the following properties
\begin{equation}
k^i\epsilon^s_{ij}=0,\hs{5} \epsilon^s_{ii}=0,\hs{5} \epsilon^s_{ij}\epsilon^{s'}_{ij}=2\delta^{ss'}.
\end{equation}
To satisfy the canonical commutation relations, the mode functions must obey the Wronskian conditions
\begin{eqnarray}
&&\zeta_k\dot{\zeta}_k^*-\zeta_k^*\dot{\zeta}_k=\frac{H^2i}{a^3\dot{\phi}^2},\nonumber\\
&&\chi_k\dot{\chi}_k^*-\chi_k^*\dot{\chi}_k=\frac{i}{a^3},\label{w}\\
&&\gamma_k\dot{\gamma}_k^*-\gamma_k^*\dot{\gamma}_k=\frac{4i}{a^3}.\nonumber
\end{eqnarray}
On the other hand, the linearized mode equations become
\begin{eqnarray}
&&\ddot{\zeta}_k+\left[3H+2\frac{\ddot{\phi}}{\dot{\phi}}-2\frac{\dot{H}}{H}\right]\dot{\zeta}_k+\frac{k^2}{a^2}\zeta_k=0,\nonumber\\
&&\ddot{\chi}_k+3H\dot{\chi}_k+\left[g^2\phi^2+\frac{k^2}{a^2}\right]\chi_k=0,\label{le}\\
&&\ddot{\gamma}_k+3H\dot{\gamma}_k+\frac{k^2}{a^2}\gamma_k=0.\nonumber
\end{eqnarray}
Note that the equation for $\chi_k$ gets a contribution from the potential \eq{pot}, which is responsible for the parametric resonance.
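A useful numerical consistency check is that the $\chi_k$ equation in \eq{le} conserves the Wronskian combination $2a^3\,\textrm{Im}(\chi_k\dot{\chi}_k^*)$, which \eq{w} fixes to unity for properly normalized modes. A sketch in units $m=1$, $t_R=1$; for speed it takes $g\Phi_0/m=100$ instead of the canonical $500$ following from \eq{s1} (an assumption of this sketch only):

```python
import math

# Integrate chi'' + 3H chi' + (g^2 phi^2 + k^2/a^2) chi = 0, eq. (le),
# with a = t^(2/3), 3H = 2/t, g*phi = G sin(t)/t, and check that the
# Wronskian W = 2 a^3 Im(chi * conj(chi')) stays at its initial value 1.
G, kmom = 100.0, 5.0                    # g*Phi0/m and comoving momentum

def acc(t, chi, dchi):
    a2 = t ** (4.0 / 3.0)               # a^2
    gphi = G * math.sin(t) / t
    return -2.0 / t * dchi - (gphi ** 2 + kmom ** 2 / a2) * chi

def wronskian(t, chi, dchi):
    return 2.0 * t ** 2 * (chi * dchi.conjugate()).imag   # a^3 = t^2

t, dt = 1.0, 2e-4
w0 = math.sqrt((G * math.sin(1.0)) ** 2 + kmom ** 2)
chi = complex(1.0 / math.sqrt(2.0 * w0), 0.0)             # WKB data, cf. (ic)
dchi = -1j * w0 * chi
W0 = wronskian(t, chi, dchi)
while t < 1.0 + 4.0 * math.pi:          # two inflaton oscillations
    k1 = (dchi, acc(t, chi, dchi))
    k2 = (dchi + dt/2*k1[1], acc(t + dt/2, chi + dt/2*k1[0], dchi + dt/2*k1[1]))
    k3 = (dchi + dt/2*k2[1], acc(t + dt/2, chi + dt/2*k2[0], dchi + dt/2*k2[1]))
    k4 = (dchi + dt*k3[1], acc(t + dt, chi + dt*k3[0], dchi + dt*k3[1]))
    chi = chi + dt/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    dchi = dchi + dt/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
print(W0, wronskian(t, chi, dchi), abs(chi))
```

The Wronskian is preserved to good accuracy even while $|\chi_k|$ may grow resonantly, which is the numerical counterpart of the commutation relations behind \eq{w}.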
We will be interested in the superhorizon $\zeta_k$ and $\gamma_k$ modes. Neglecting the $k^2/a^2$ terms in \eq{le} one can easily obtain two linearly independent {\it superhorizon solutions} which can be written as
\begin{equation}\label{sh}
\zeta_k\simeq \left[\zeta_k^{(0)}+c_k f(t)\right], \hs{5} \gamma_k\simeq \left[\gamma_k^{(0)}+d_k g(t)\right],
\end{equation}
where $\zeta_k^{(0)}$, $\gamma_k^{(0)}$, $c_k$ and $d_k$ are constants and
\begin{equation}\label{fg}
\frac{df}{dt}=\frac{H^2}{a^3\dot{\phi}^2}, \hs{5} \frac{dg}{dt}=\frac{1}{a^3}.
\end{equation}
As usual, the modes \eq{sh} have the constant and the decaying pieces, and the normalization conditions in \eq{w} imply
\begin{equation}\label{n}
\zeta_k^{(0)} c_k^* -\zeta_k^{(0)}{}^* c_k=i,\hs{5} \gamma_k^{(0)} d_k^* -\gamma_k^{(0)}{}^* d_k=4i.
\end{equation}
One may note the mass dimensions\footnote{Note that the $\gamma_{ij}$ commutation relation has a factor of $1/M_p^2$ on the right hand side, which is set to one. This is why the mass dimensions of $c_k$ and $d_k$ are different.} of the constants as $[\zeta_k^{(0)}]=M^{-3/2}$, $[c_k]=M^{3/2}$, $[\gamma_k^{(0)}]=M^{-3/2}$ and $[d_k]=M^{-1/2}$.
To be able to calculate the $\chi$ loop effects, we need to determine the behavior of the $\chi$ modes, especially the ones in the resonance band, in detail. For that, one may write the mode function in the WKB form as follows
\begin{equation}\label{est}
\chi_q=\frac{1}{\sqrt{2a^3\omega_q}}\left[\alpha_qe^{-i\int \omega_q}+ \beta_qe^{i\int \omega_q}\right],
\end{equation}
where
\begin{equation}\label{wq}
\omega_q^2=g^2\phi^2+\frac{q^2}{a^2}-\fr94 H^2-\fr32 \dot{H}.
\end{equation}
The Wronskian condition is satisfied by imposing $|\alpha_q|^2-|\beta_q|^2=1$. During inflation, $\chi$ becomes a very massive field with mass $g\Phi_0$. As a result, for the modes of interest the Bunch-Davies mode function in the beginning of reheating can be written up to an irrelevant phase as
\begin{equation}\label{ic}
\chi_q(t_R)\simeq \frac{1}{\sqrt{2a^3g\Phi_0}}.
\end{equation}
This shows that at the end of the exponential expansion these individual $\chi_q$ modes are suppressed by $a^{-3/2}$, which is the main reason for the metric preheating scenario of \cite{ca,mph1,mph2} to break down, as discussed in \cite{mpk1,mpk2,mpk3} (it is possible to circumvent this suppression in some models, as shown in \cite{mph4,mph5,mph6}). During preheating, $\chi_q$ changes non-adiabatically as the inflaton passes through the potential minimum $\phi=0$. This process can be formulated as particle creation by parabolic potentials, which gives the exponential increase $\beta_q = e^{\mu_q m t}$ for the modes in the instability bands, where $\mu_q$ is an index characterizing the exponential growth.
From \eq{est} one may find that
\begin{equation}
|\chi_q|^2=\frac{1}{2a^3\omega_q}\left[1+2|\beta_q|^2+2\textrm{Re}\left(\alpha_q\beta_q^*e^{-2i\int \omega_q}\right)\right].
\end{equation}
For $|\alpha_q|\simeq|\beta_q|\gg1$, it is possible to see that $|\chi_q|^2$ oscillates between $1/(2a^3\omega_q)$ and $4|\beta_q|^2/(2a^3\omega_q)$ with the frequency $\omega_q$. To determine the phase of $\chi_q$, one may define $\theta_q$ as
\begin{equation}\label{c-amp}
\chi_q=|\chi_q|e^{-i\theta_q}.
\end{equation}
Then, the Wronskian condition \eq{w} gives
\begin{equation}\label{pep}
\frac{d\theta_q}{dt}=\frac{1}{2a^3|\chi_q|^2},
\end{equation}
i.e. up to an unimportant constant the phase is uniquely fixed by the amplitude $|\chi_q|$.
The growth of the modes in the first instability band can be described by introducing an effective index $\mu_q\simeq \mu$, and for the parameters given in \eq{s1} one has \cite{reh4}
\begin{equation}\label{apin}
\mu\simeq 0.13.
\end{equation}
Since $|\chi_q|^2\propto |\beta_q|^2$, the amplitude $|\chi_q|$ can be seen to be enlarged by a factor of $\exp(0.13\times 2 \pi)\simeq2.26$ after each oscillation.
To estimate the magnitude of the amplitude $|\chi_q|$ at the end of the first stage of preheating, one may look at the expectation value $\left< \chi^2\right>$, which is given by
\begin{equation}\label{34}
\left< \chi^2\right>=\frac{1}{(2\pi)^3}\int d^3 q |\chi_q|^2 \simeq \frac{4\pi}{(2\pi)^3}a^3 q_*^3 |\chi_{q_*}|^2
\end{equation}
where in the last equality we restrict the momentum integral to the first (and the most important) instability band, which is supposed to give the dominant contribution to the vacuum expectation value; we switch to the physical momentum space and introduce $|\chi_{q_*}|$ to denote a mean value for the modes in this instability band. Note that the $4\pi$ factor in \eq{34} comes from the angular directions in the momentum space. Comparing with \eq{ce} one may deduce that at the end of the first stage
\begin{equation}\label{cestonce}
|\chi_{q_*}|_{max}^2\simeq 2\pi^2\frac{m^2}{a^3q_*^3g^2}.
\end{equation}
As pointed out above, the amplitude $|\chi_q|$ is actually an oscillating function that has frequency $\omega_q$. However, one has $\omega_{q_*}\gg m$ and thus $|\chi_{q_*}|$ oscillates much faster than the background inflaton field. As a result, \eq{cestonce} should be divided by 2 to give a time averaged value for the amplitude. We also use the index $\mu$ to obtain the amplitude in the middle of the period and define
\begin{equation}\label{cest}
|\chi_{q_*}|^2\simeq \frac{\pi^2m^2}{a^3q_*^3g^2}\,e^{-2\pi\mu}.
\end{equation}
The phase corresponding to \eq{cest} can be determined from \eq{pep} as
\begin{equation}\label{cestp}
\theta_{q_*}\simeq \frac{q_*^3g^2}{2\pi^2m^2}\,e^{2\pi\mu}\,t.
\end{equation}
These estimates will be crucial in determining the strength of a graph in the in-in perturbation theory.
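To get a feeling for these numbers, note that the phase rate following from \eq{cestp} is independent of the scale factor, since the $a^3$ in \eq{pep} cancels against the one in \eq{cest}. A small numerical sketch for the parameter set \eq{s1}-\eq{s2} (units $\tilde{M}_p=1$):

```python
import math

# d(theta_{q*})/d(mt) from (cestp): dimensionless and a-independent.
g, m, mu, PhiF = 1e-2, 1e-6, 0.13, 5e-3
q_star = math.sqrt(g * m * PhiF)
rate = (q_star / m) ** 3 * g ** 2 * math.exp(2.0 * math.pi * mu) \
       / (2.0 * math.pi ** 2)
dtheta_per_osc = 2.0 * math.pi * rate   # phase gained during one oscillation
print(rate, dtheta_per_osc)
```

The phase advances by only about $0.03$ per inflaton oscillation, i.e. much more slowly than the background oscillates; this slow drift controls the size of the oscillatory factors in the loop estimates.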
We define the scalar and the tensor power spectra in the momentum space, i.e. $P_k^\zeta$ and $P_k^\gamma$, from the two point functions in the form
\begin{eqnarray}\label{sp}
&&\left<\zeta(t,\vec{x})\zeta(t,\vec{y})\right>=\frac{1}{(2\pi)^{3}}\int d^3k \,e^{i\vec{k}.(\vec{x}-\vec{y})}\,P_k^\zeta(t),\\
\label{tp}
&&\left<\gamma_{ij}(t,\vec{x})\gamma_{kl}(t,\vec{y})\right>=\frac{1}{(2\pi)^{3}}\int d^3k \,e^{i\vec{k}.(\vec{x}-\vec{y})}\,P_k^\gamma(t)\Pi_{ijkl},
\end{eqnarray}
where the polarization tensor $\Pi_{ijkl}$, which is defined as
\begin{equation}\label{pt}
\Pi_{ijkl}=\epsilon^s_{ij}\epsilon^s_{kl},
\end{equation}
obeys $\Pi_{ijkl}\Pi_{klmn}=2\Pi_{ijmn}$. The tree level standard results can be read from \eq{sh} as
\begin{equation}\label{3l}
P_k^{\zeta(0)}(t)=|\zeta_k^{(0)}|^2,\hs{5} P_k^{\gamma(0)}(t)=|\gamma_k^{(0)}|^2.
\end{equation}
The constants $\zeta_k^{(0)}$ and $\gamma_k^{(0)}$ can be determined from the mode functions of the free fields during inflation and as it is well known they depend on the horizon crossing time for a given $k$ (see \cite{kemal} for a study of loop corrections to the mode functions during inflation).
\subsection{Smoothing out spikes of $\zeta$}
In finding $f(t)$ from \eq{fg}, an infinity arises when the limits of the integration contain a moment giving $\dot{\phi}=0$. To avoid these singularities one may try to fix $f(t)$ by an indefinite integral, since one only needs a function whose derivative gives \eq{fg}. However, the function obtained in this way is unavoidably singular at times when $\dot{\phi}=0$. Moreover, the loop corrections turn out to involve the time integrals of $f(t)$ or $df/dt$, and these also diverge when $f(t)$ obeys \eq{fg}.
This pathological behavior arises due to the bad choice of gauge.\footnote{To avoid this problem, one can use the inflaton fluctuation $\varphi$ as the main dynamical variable to calculate the loop quantum corrections and gauge transform to $\zeta$ at the end of the reheating stage. See the appendix of \cite{ali1} for an example of how gauge transformations change the time integrals in the loops.} Namely, the $\varphi=0$ gauge breaks down at times when $\dot{\phi}=0$, giving rise to the spikes of $\zeta$. This has already been noted in some earlier work, see e.g. \cite{coh2,brz}. As discussed in \cite{brz}, although $\zeta$ becomes an ill defined variable in reheating, $(1+w)\zeta$ remains well defined, where $w$ is the equation of state parameter. In our model $1+w=2\dot{\phi}^2/(\dot{\phi}^2+m^2\phi^2)$.
To smooth out the spikes of $\zeta$, we first note that the Einstein's equations for the background give
\begin{equation}\label{fav0}
\dot{\phi}^2=-2M_p^2\dot{H},
\end{equation}
where we display the Planck mass dependence for later use. Since we use \eq{fh} to approximate the Hubble parameter $H$, one may define $\dot{\phi}^2_{\textrm{av}}$ by using \eq{fh} in \eq{fav0}, which yields
\begin{equation}\label{fav}
\dot{\phi}^2_{\textrm{av}} =\frac{4M_p^2}{3t^2}.
\end{equation}
It is clear that $\dot{\phi}^2_{\textrm{av}}$ gives the ``time'' average of the oscillating function $\dot{\phi}^2$. To make $\zeta$ well defined, one may now replace $\dot{\phi}^2$ by $\dot{\phi}^2_{\textrm{av}}$ in the free action of $\zeta$ in \eq{qac}. In the context of the discussion carried out in \cite{brz}, this is equivalent to using an average equation of state parameter $w_{\textrm{av}}$ instead of the actual one. Consequently, one simply treats the $\zeta$ variable as if it evolves in a matter dominated universe. In that case, the new function obeys
\begin{equation}\label{fgn}
\frac{df}{dt}=\frac{H^2}{a^3\dot{\phi}^2_{\textrm{av}}}=\frac{1}{3M_p^2a^3}.
\end{equation}
A simple integration then gives
\begin{equation}\label{fs}
f\simeq-\frac{2}{9M_p^2Ha^3}, \hs{5}g\simeq -\frac{2}{3Ha^3},
\end{equation}
where we use \eq{fgn} and \eq{fg} for $f(t)$ and $g(t)$, respectively. As we will see below, the loop contributions turn out to depend on the difference of two $f(t)$ or the difference of two $g(t)$ functions, and therefore there is no need to fix the integration constants in \eq{fs}.
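The second equality in \eq{fgn} can be checked directly: with $H=2/(3t)$ and \eq{fav} one has $\dot{\phi}^2_{\textrm{av}}=3M_p^2H^2$ at any time. A one-line numerical check (units $M_p=1$):

```python
# With H = 2/(3t) and (fav), phidot^2_av = 4/(3 t^2) = 3 H^2 (M_p = 1),
# so H^2/(a^3 phidot^2_av) = 1/(3 a^3) for every t, as stated in (fgn).
t = 7.3                                 # arbitrary sample time
H = 2.0 / (3.0 * t)
phidot2_av = 4.0 / (3.0 * t ** 2)
print(H ** 2 / phidot2_av)
```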
\section{Cubic interactions and loop corrections} \label{iii}
Using \eq{cpert} and \eq{ls} in \eq{a}, a straightforward calculation gives the following cubic action involving two $\chi$ fields:
\begin{eqnarray}
S^{(3)}=&&\fr12\int a^3\left[-3g^2\phi^2\zeta\chi^2-\frac{g^2\phi^2}{H}\dot{\zeta}\chi^2-\frac{1}{a^2}\zeta(\partial\chi)^2-\frac{1}{a^2H}\dot{\zeta}(\partial\chi)^2-\frac{1}{H}\dot{\zeta}\dot{\chi}^2+3\zeta\dot{\chi}^2-2N^i\dot{\chi}\partial_i\chi\right. \nonumber \\
&&\left. +\frac{1}{a^2}\gamma^{ij}\partial_i\chi\partial_j\chi\right]. \label{a3}
\end{eqnarray}
Combining this cubic action with the quadratic ones given in \eq{qac} and switching to the Hamiltonian formulation, one may find the cubic interaction Hamiltonian containing two $\chi$ fields as
\begin{equation}\label{h3}
H_I^{(3)}=\int d^3 x\,a^3 \left[\dot{\zeta} O_1+\zeta O_2+\gamma^{ij}O_{ij}\right],
\end{equation}
where
\begin{eqnarray}
&&O_1=\frac{g^2\phi^2}{2H}\chi^2+\frac{1}{2Ha^2}(\partial\chi)^2+\frac{1}{2H}\dot{\chi}^2-\frac{\dot{\phi}^2}{2H^2}\partial^{-2}\partial_i(\dot{\chi}\partial_i\chi),\label{o1}\\
&&O_2=\frac{3}{2}g^2\phi^2\chi^2+\frac{1}{2a^2}(\partial\chi)^2+\frac{3}{2}\dot{\chi}^2+\frac{1}{a^2H}\partial_i(\dot{\chi}\partial_i\chi),\label{o2}\\
&&O_{ij}=-\frac{1}{a^2}\partial_i\chi\partial_j\chi.
\end{eqnarray}
Although it is not indicated explicitly, all the fields appearing in \eq{h3} can be taken to be the interaction picture fields that enter the in-in perturbation theory as formulated in \cite{w2}. In obtaining \eq{h3} we only spatially integrate by parts the last term in the first line of \eq{a3} to replace the shift $N^i$ by its potential $\psi$ given in \eq{ls}.
For any given operator $O$, the in-in formalism can be applied to obtain the following perturbative expansion for the vacuum expectation value \cite{w2}
\begin{equation}\label{inp}
\left< O(t)\right>=\sum_{N=0}^{\infty} i^N \int_{t_R}^t dt_N\int_{t_R}^{t_N}dt_{N-1}...\int_{t_R}^{t_2}dt_1\left< [H_I(t_1),[H_I(t_2),...[H_I(t_N),O(t)]...]\right>,
\end{equation}
where the lower limit of the time integrals is set to $t_R$ rather than $-\infty$ since we are interested in the loop effects during reheating. In general, the two terms in a given commutator in \eq{inp} have different $i\epsilon$ prescriptions, which would be important for the convergence of the time integrals if they were extended to $-\infty$. In \eq{inp}, this technical problem does not arise since the time integrals span a finite time interval. Because the Hamiltonian contains the products of the fields and their time derivatives (i.e. their momenta), there is an ordering ambiguity in \eq{inp}. Although it is crucial to resolve this ambiguity to obtain exact results (for instance by utilizing a symmetric ordering prescription), this will not be a problem for our order of magnitude estimates.
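The structure of \eq{inp} can be illustrated on a solvable toy example, a single harmonic oscillator with a linear coupling (this is only an illustration of the formalism, not the field theory computation of the text):

```python
import math

# Toy check of the in-in expansion: a unit-mass, unit-frequency oscillator
# with H_I = lam*q switched on at t = 0.  The N = 1 term of the expansion
# is i * int_0^T dt1 <[H_I(t1), q(T)]>, and with [q(t1), q(T)] =
# i sin(T - t1) it reproduces the classical retarded response to the
# constant force -lam.
lam, T, n = 0.3, 5.0, 20000
dt = T / n
in_in = -lam * sum(math.sin(T - (j + 0.5) * dt) for j in range(n)) * dt
classical = -lam * (1.0 - math.cos(T))   # q'' + q = -lam, q(0) = q'(0) = 0
print(in_in, classical)
```

The midpoint quadrature of the commutator integral agrees with the exact classical solution, which is the oscillator analogue of the statement that the lowest order in-in terms capture the retarded response.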
\subsection{The scalar power spectrum}
We first calculate the one loop correction to the scalar power spectrum arising from the cubic interaction Hamiltonian \eq{h3}. Since $H_I^{(3)}$ is linear in $\zeta$, the first nonzero contribution in \eq{inp} appears for $N=2$ and the corresponding terms can be pictured like the graph in Fig. \ref{fig1}. Since $H_I^{(3)}$ contains two $\chi$ fields and a volume factor of $a^3$, the suppression of the $\chi_q$ mode by $a^{-3/2}$ is compensated in the interaction Hamiltonian. On the other hand, the three dimensional loop integral must be converted to the physical momentum space since the instability band is given in the physical scale in \eq{pi}. This yields an extra enlargement factor of $a^3$.
\begin{figure}
\centerline{
\includegraphics[width=6cm]{l3}}
\caption{The 1-loop graph arising from the interaction Hamiltonian \eq{h3} that contributes to the $\left<\zeta\z\right>$ correlation function during reheating. The graph schematically indicates the vertices coming from the interaction Hamiltonian and possible contractions or commutators of the external $\zeta$ fields and the internal $\chi$ fields giving rise to the loop. One may draw similar graphs with time or spatial derivatives acting on the fields. The disconnected graphs, where the two $\chi$ fields in the same interaction vertex are contracted with each other, are suppressed.}
\label{fig1}
\end{figure}
Using \eq{inp} for the operator $\zeta(t,\vec{x})\zeta(t,\vec{y})$ with $N=2$ gives the following vacuum expectation values of the nested commutators:
\begin{eqnarray}
&&\left<\left[\zeta(t_1,\vec{z}_1)O_2(t_1,\vec{z}_1),\left[\zeta(t_2,\vec{z}_2)O_2(t_2,\vec{z}_2),\zeta(t,\vec{x})\zeta(t,\vec{y})\right]\right] \right>, \label{most0}\\
&&\left<\left[\zeta(t_1,\vec{z}_1)O_2(t_1,\vec{z}_1),\left[\dot{\zeta}(t_2,\vec{z}_2)O_1(t_2,\vec{z}_2),\zeta(t,\vec{x})\zeta(t,\vec{y})\right]\right]\right>,\label{most}\\
&&\left<\left[\dot{\zeta}(t_1,\vec{z}_1)O_1(t_1,\vec{z}_1),\left[\zeta(t_2,\vec{z}_2)O_2(t_2,\vec{z}_2),\zeta(t,\vec{x})\zeta(t,\vec{y})\right]\right]\right>,\label{most1}\\
&&\left<\left[\dot{\zeta}(t_1,\vec{z}_1)O_1(t_1,\vec{z}_1),\left[\dot{\zeta}(t_2,\vec{z}_2)O_1(t_2,\vec{z}_2),\zeta(t,\vec{x})\zeta(t,\vec{y})\right]\right]\right>.\label{most2}
\end{eqnarray}
From the identity $[AB,C]=A[B,C]+[A,C]B$, one sees that there are terms either containing $\left< \zeta\z\right> [\zeta,\zeta]$ or $[\zeta,\zeta] [\zeta,\zeta]$ (or similar terms where two of the $\zeta$'s are replaced by $\dot{\zeta}$). Each commutator $[\zeta,\zeta]$ or $[\dot{\zeta},\zeta]$ yields a factor of $a^{-3}$. As pointed out above, there is one $a^3$ factor coming from the loop momentum integral, which may compensate a single $a^{-3}$. This shows that the terms involving two commutators $[\zeta,\zeta] [\zeta,\zeta]$ are suppressed. Similarly, the expectation value $\left<\zeta\dot{\zeta}\right>$ also gives an extra factor of $a^{-3}$ since the time derivative kills the constant piece in \eq{sh}; therefore, these terms are also suppressed.
From \eq{fs} one observes that $df/dt\simeq Hf$. Besides, while the commutator $[\zeta,\zeta]$ gives the function $f(t)$, the commutator $[\zeta,\dot{\zeta}]$ yields the function $df/dt$. Namely, from the mode expansion \eq{m1} one easily calculates
\begin{eqnarray}
&&\left[\zeta(t_2,\vec{z}_2),\zeta(t,\vec{x})\right]=\frac{1}{(2\pi)^3}\int d^3 k e^{i\vec{k}.(\vec{z}_2-\vec{x})}\left[\zeta_k(t_2)\zeta_k^*(t)-\zeta_k^*(t_2)\zeta_k(t)\right],\\
&&\left[\dot{\zeta}(t_2,\vec{z}_2),\zeta(t,\vec{x})\right]=\frac{1}{(2\pi)^3}\int d^3 k e^{i\vec{k}.(\vec{z}_2-\vec{x})}\left[\zeta_k(t_2)\dot{\zeta}_k^*(t)-\zeta_k^*(t_2)\dot{\zeta}_k(t)\right].
\end{eqnarray}
Using \eq{sh} in these commutators, we see that for superhorizon modes the first commutator gives $[f(t_2)-f(t)]$ and the second one yields $df(t_2)/dt$ in the square brackets. From these observations and using $O_1$ and $O_2$ given in \eq{o1} and \eq{o2}, one may conclude that all the terms in \eq{most0}-\eq{most2} have the same order of magnitude. However, since $f(t)$ is a slowly varying function and moreover $[f(t_2)-f(t)]$ vanishes when $t_2=t$, we find that the loop corrections containing the commutator $[\zeta,\dot{\zeta}]$ are larger than the corrections with the commutator $[\zeta,\zeta]$. To sum up, we find that the largest of all the terms that arise in \eq{inp} is the one coming from \eq{most} that has the structure $\left< \zeta\z\right> [\zeta,\dot{\zeta}]$. Defining the function $F(t_1,t_2,k)$ by
\begin{equation}\label{oo}
\left<O_2(t_1,\vec{z}_1)O_1(t_2,\vec{z}_2)\right>=\frac{1}{(2\pi)^3}\int d^3 k\, e^{i\vec{k}.(\vec{z}_1-\vec{z}_2)}\,F(t_1,t_2,k),
\end{equation}
and using \eq{sh}, \eq{n} and \eq{sp}, one can determine the largest correction as
\begin{equation}\label{cor1}
P_k^\zeta(t_F)^{(1)}\simeq P_k^{\zeta(0)}\,(2i)\int_{t_R}^{t_F}dt_2\int_{t_R}^{t_2}dt_1\, a(t_1)^3\,a(t_2)^3\,\frac{df}{dt}(t_2)\left[F(t_1,t_2,k)-c.c.\right],
\end{equation}
where $k$ denotes the comoving cosmological superhorizon scale of interest and $t_F$ marks the end of the first stage of preheating as defined above. It is remarkable that the one-loop correction $P_k^\zeta(t_F)^{(1)}$ becomes a multiple of the tree level function $P_k^{\zeta(0)}$ given in \eq{3l}. From \eq{o1} and \eq{o2}, $F(t_1,t_2,k)$ can be found as
\begin{equation}\label{49}
F(t_1,t_2,k)=\frac{3g^4}{2(2\pi)^3H(t_2)}\phi^2(t_1)\phi^2(t_2)\int d^3 q\, \chi_{q}(t_1)\chi_{k+q}(t_1)\chi_{q}^*(t_2)\chi_{k+q}^*(t_2)+...
\end{equation}
where only the contribution of the first terms in \eq{o1} and \eq{o2} are written explicitly.
The momentum integral in \eq{49}, and similar loop integrals below, do diverge and must be regularized/renormalized before making any order of magnitude estimates. To figure out the contribution of the modes in the resonance band and for regularization, we simply cut off the integral in \eq{49} at $aq_*$, where $q_*$ is given by \eq{pi}. It is easy to see that this procedure corresponds to the adiabatic regularization, where one uses the WKB mode function \eq{est} and discards the pieces with $\alpha_q$ that give infinities.\footnote{The same regularization has been used in \cite{reh4} to determine the parametric resonance effects. Therefore, using the WKB regularization for our loop corrections is crucial for consistency since we heavily use the results of \cite{reh4} in our estimations.} Note that since $|\alpha_k|\to 1$ and $|\beta_k|\to 0$ as $k\to\infty$, adiabatic regularization guarantees the finiteness of the loop integrals. Initially we have $\alpha_q(t_R)=1$ and $\beta_q(t_R)=0$; $\beta_q$ then increases with time in the resonance band and stays vanishingly small for the high energy modes since they propagate adiabatically. Therefore, using the resonance scale for the momentum cutoff is equivalent to the adiabatic regularization.
On the other hand, from \eq{s1} and \eq{s2} one sees that $q_*\simeq 10^{-6}\tilde{M}_p\ll \tilde{M}_p$. Consequently, in a standard renormalization procedure that is more systematic than the simple adiabatic regularization, the UV subtractions should not change our estimates since the cutoff scale $q_*$ corresponds to a relatively low energy scale. Indeed, it is not difficult to convince oneself that the adiabatic subtractions that are automatically performed by our momentum cutoff must be the same as the UV subtractions, i.e. the result obtained with our cutoff must agree with the finite result obtained after the UV subtractions. To see this, imagine that the loop integral is regularized by a UV cutoff $\Lambda\sim M_p$. Then, our method is equivalent to throwing out the momentum range $(q_*,\Lambda)$, which can be thought to be canceled out by the $\Lambda$-dependent counterterms. In this procedure, the finite renormalizations can be fixed by referring to the tree level inflationary results. Note that dimensional regularization is very difficult to implement in this computation since the exact form of the mode function $\chi_q$ is not known.
The correction \eq{cor1} modifies not only the amplitude but also the index of the power spectrum. This nontrivial $k$-dependence ensures that \eq{cor1} cannot be interpreted as a finite renormalization effect. On the other hand, the change in the index turns out to be small for cosmologically interesting scales\footnote{The index is meaningfully modified for the modes entering the horizon during reheating that may change the primordial black hole formation, see \cite{bh}.} since in that case $k\ll a q_*$. Therefore, the $k$ dependence of \eq{cor1} is negligible and to a very good approximation one may ignore it by setting $k=0$.
As discussed above, $a(t_1)^3$ and $a(t_2)^3$ terms cancel out the scale factor suppressions of the four $\chi_q$ modes. The $1/a^3$ factor that appears in $df/dt$ in \eq{fgn} can be used to convert the comoving momentum integral in \eq{49} to the physical scale. Thus, all the scale factors in \eq{cor1} simply cancel out each other.
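The cancellation just described can be summarized by a simple bookkeeping of the powers of $a$, treating $a(t_1)\sim a(t_2)\sim a$ as in the text's order of magnitude estimate (a sketch of the counting only):

```python
# Powers of the scale factor entering the one-loop correction (cor1).
powers = {
    "a(t1)^3 a(t2)^3 from the vertices in (cor1)":  +6,
    "four chi_q mode functions, a^(-3/2) each":     -6,
    "df/dt = 1/(3 Mp^2 a^3), eq. (fgn)":            -3,
    "comoving -> physical loop measure d^3q":       +3,
}
total = sum(powers.values())
print(total)   # net power of a
```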
In what follows we estimate \eq{cor1} to determine the size of the loop effects in reheating. We first focus on the term that is explicitly shown in \eq{49} and then confirm that others give similar contributions. Since the resonant $\chi$ modes encounter most of their growth near the end of the first stage, one may focus on the last inflaton oscillation for the time integrals in \eq{cor1}, namely, the lower and the upper limits can be set to $mt_F-2\pi$ and $mt_F$, respectively. Using \eq{c-amp}, the square brackets in \eq{cor1} yields the following factor
\begin{equation}\label{f5}
\sin\left[\theta_q(t_1)+\theta_{k+q}(t_1)-\theta_q(t_2)-\theta_{k+q}(t_2)\right].
\end{equation}
We see that the leading order contribution does not cancel out since the phase factors have different time arguments. In \eq{49}, there are four $\chi_q$ modes integrated over the first instability band, which can be estimated as $q_*^3|\chi_{q_*}|^4$, where $|\chi_{q_*}|$ is the mean value of the modes introduced in \eq{cest}. The function $df/dt$ can be read from \eq{fgn}. Treating the slowly changing factors like $H$ and $\Phi$ as constants, one finally finds that
\begin{equation}\label{e1}
P_k^\zeta(t_F)^{(1)}\simeq P_k^{\zeta(0)}\left[\frac{24\pi}{(2\pi)^3}\right]\left[\frac{g^4\Phi(t_F)^4}{H(t_F)}\right]\left[q_*^3\left|\chi_{q_*}\right|^4\right]\left[\frac{C_1}{m^2}\right]\left[\frac{1}{3M_p^2}\right]+...
\end{equation}
where the dimensionless constant $C_1$ is given by
\begin{equation} \label{time1}
C_1=\int_{mt_F-2\pi}^{mt_F}mdt_2\int_{mt_F-2\pi}^{mt_2}mdt_1\sin^2(mt_1)\sin^2(mt_2)\sin\left[2\theta_{q_*}(t_1)-2\theta_{q_*}(t_2)\right].
\end{equation}
Recall that the phase $\theta_{q_*}$ is defined in \eq{cestp}.
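The double integral \eq{time1} can be evaluated directly; a numerical sketch with the phase rate obtained from \eq{cestp} for the parameter set \eq{s1}-\eq{s2} (units $\tilde{M}_p=1$; since the overall sign of the phase in \eq{cestp} depends on conventions, only the magnitude of the result is meaningful here):

```python
import math

# Midpoint evaluation of the double integral (time1).  The phase rate
# R = d(theta_{q*})/d(mt) follows from (cestp); since theta is linear in t,
# the sine factor depends only on the difference 2R(x1 - x2).
g, m, mu, PhiF = 1e-2, 1e-6, 0.13, 5e-3
q_star = math.sqrt(g * m * PhiF)
R = (q_star / m) ** 3 * g ** 2 * math.exp(2.0 * math.pi * mu) \
    / (2.0 * math.pi ** 2)

mtF = 200.0 / 3.0
a0 = mtF - 2.0 * math.pi        # lower limit: one oscillation before mt_F
N = 500
h = 2.0 * math.pi / N
C1 = 0.0
for i in range(N):
    x2 = a0 + (i + 0.5) * h
    for j in range(N):
        x1 = a0 + (j + 0.5) * h
        if x1 < x2:             # nested time ordering t1 < t2
            C1 += (math.sin(x1) ** 2 * math.sin(x2) ** 2
                   * math.sin(2.0 * R * (x1 - x2))) * h * h
print(C1, abs(C1))
```

The magnitude comes out close to $0.078$, in agreement with the value quoted below.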
For our set of parameters \eq{s1}, the constant $C_1$ can be determined by a numerical integration that yields $C_1\simeq 0.078$. Using then \eq{s1}, \eq{s2}, \eq{pi} and \eq{cest} in \eq{e1}, we obtain
\begin{equation}\label{e2}
P_k^\zeta(t_F)^{(1)}\simeq (1.1+...)\, P_k^{\zeta(0)},
\end{equation}
which becomes larger than the tree level contribution. The Planck mass suppression of \eq{e1} is compensated by many different factors. The smallest mass scale in the problem, i.e. the Hubble parameter, shows up in the denominator because of the interaction term \eq{o1}. The mass of the inflaton $m$, the second smallest, also appears in the denominator. On the other hand, the background inflaton amplitude $\Phi$, which is moderately smaller than $M_p$, appears in the numerator with power four due to the first two terms in the interactions \eq{o1} and \eq{o2}. Finally, the mode function $\chi_q$ is amplified exponentially, which also helps the growth considerably. Therefore, different ingredients of this chaotic model play crucial roles for overcoming the Planck mass suppression.
Let us now consider the contributions of the other terms in \eq{49}, which can be determined from the definition \eq{oo}. From \eq{o1} and \eq{o2}, these consist of the products of four $\chi$ fields, on which certain time or spatial derivatives act (there is also a nonlocal term with $1/\partial^{2}$ that involves the Green function of the Laplacian). In \eq{cor1}, only the imaginary part of $F(t_1,t_2,k)$ appears in the square brackets. One can easily see that after taking the imaginary part, each product yields a term similar to \eq{f5} and thus the leading order contributions do not cancel out. On the other hand, the time integrals are very similar to \eq{time1} and they can all be estimated to give $C/m^2$. One may also note that a partial derivative $\partial_i$ would produce $q_i$ in momentum space and $\dot{\chi}_q\simeq \omega_q \chi_q$, where $\omega_q$ is given in \eq{wq}. Therefore, to estimate the size of a correction one may simply replace the $g^2\Phi^2$ factor in the second square bracket in \eq{e1}, which arises due to the $g^2\phi^2$ terms in \eq{o1} and \eq{o2}, by $\omega_{q_*}^2$ corresponding to $\dot{\chi}^2$ or by $q_*^2$ corresponding to $(\partial\chi)^2$ (note that the $1/a^2$ factor, which multiplies $(\partial\chi)^2$ in \eq{o1} and \eq{o2}, converts the comoving momentum scale arising from the spatial partial derivative to the physical momentum scale). Similarly, the magnitudes of the nonlocal terms can be estimated by using the Green function for the Laplacian and the correlation length corresponding to the $\chi$ fluctuations, which is roughly equal to $1/q_*$ as shown in \cite{ali2}. In all these different cases one may see that the contributions have the same order of magnitude as \eq{e2}, since for our numerical choice of parameters \eq{s1} one has $g\Phi\simeq \omega_{q_*}\simeq 7q_*$. The sign of each contribution depends on the phases through expressions like \eq{f5}, which are sensitive to the initial conditions \cite{reh4}.
In any case, one deduces from \eq{e2} that
\begin{equation}\label{e222}
P_k^\zeta(t_F)^{(1)}\simeq {\cal O}(10)\, P_k^{\zeta(0)},
\end{equation}
since there are 16 similar contributions. Eq. \eq{e222} is consistent with the estimates given in \cite{ali1}.
Because the one loop correction \eq{e2} is larger than the tree level result, the in-in perturbation theory might break down in this model. Since the modes of the $\chi$ field are exponentially amplified during preheating, the quantum corrections are enhanced when more $\chi$ fields circulate in the loops. As we will see, the results of the next section support this expectation, i.e. the lower order loop corrections that are supposed to give larger contributions than \eq{e2} become smaller due to the smaller number of $\chi$ modes circulating in the loops. A similar situation also arises for $f_{NL}$, as we will discuss in the next section.
\subsection{Non-gaussianity}
To calculate the non-gaussianity arising from the cubic interaction Hamiltonian \eq{h3}, we express the three point function in the position space as
\begin{equation}\label{def3}
\left<\zeta(t,\vec{x})\zeta(t,\vec{y})\zeta(t,\vec{z})\right>=\int d^3k_1 d^3k_2 d^3k_3\delta(k_1+k_2+k_3)e^{i(\vec{k}_1\cdot\vec{x}+\vec{k}_2\cdot\vec{y}+\vec{k}_3\cdot\vec{z})} P(k_1,k_2,k_3).
\end{equation}
The function $P(k_1,k_2,k_3)$ measures the size of the non-gaussianity involving the comoving superhorizon scales $k_1$, $k_2$ and $k_3$ that obey $\vec{k}_1+\vec{k}_2+\vec{k}_3=0$. To pin down the loop corrections one may use \eq{inp} for $\zeta(t,\vec{x})\zeta(t,\vec{y})\zeta(t,\vec{z})$; since $H_I^{(3)}$ is linear in $\zeta$, the first nonzero contribution arises for $N=3$, which gives the diagram in Fig. \ref{fig2}.
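For orientation, we recall that the in-in master formula \eq{inp} used throughout has the standard nested-commutator (Weinberg) form, which schematically reads
\begin{equation*}
\left<Q(t)\right>=\sum_{N=0}^{\infty} i^N\int_{t_0}^{t}dt_N\int_{t_0}^{t_N}dt_{N-1}\cdots\int_{t_0}^{t_2}dt_1\left<\left[H_I(t_1),\left[H_I(t_2),\cdots\left[H_I(t_N),Q^I(t)\right]\cdots\right]\right]\right>,
\end{equation*}
up to conventions; with three external $\zeta$ legs and an interaction Hamiltonian linear in $\zeta$, one indeed needs $N=3$ insertions to saturate the external fields.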
\begin{figure}
\centerline{
\includegraphics[width=3.5cm]{l2}}
\caption{The one loop graph arising from the interaction Hamiltonian \eq{h3} that contributes to the three point function $\left<\zeta\z\zeta\right>$. The external and the internal lines correspond to the $\zeta$ and the $\chi$ fields, respectively. The time and the spatial partial derivatives acting on the fields are not indicated in the graph.}
\label{fig2}
\end{figure}
As in the previous subsection, there is one extra enlargement factor of $a^3$ that appears after converting the comoving loop integral to the physical scale. Since the commutator $[\zeta,\zeta]$ or $[\dot{\zeta},\zeta]$ falls like $1/a^3$, only a single commutator survives the suppression, and all other terms containing two and three $\zeta$ commutators are suppressed by powers of $1/a^3$ and $1/a^6$, respectively (recall that the suppressions of the $\chi$ modes are compensated by $a^3$ factors in the interaction Hamiltonian $H_I^{(3)}$). Moreover, as discussed in detail above, while the $[\zeta,\zeta]$ commutator involves the difference of two $f(t)$ functions, the $[\dot{\zeta},\zeta]$ commutator yields the function $df/dt$, and the latter gives a larger contribution. Therefore, the biggest one loop correction to $P(k_1,k_2,k_3)$ arises when one uses $\dot{\zeta}O_1$ in the first and $\zeta O_2$ in the second and in the third commutators in \eq{inp}. Repeatedly using the commutator identity $[AB,C]=A[B,C]+[A,C]B$ and defining the function $G(k_1,k_2,k_3)$ as
\begin{equation}\label{ngf}
\left<[O_2(t_1,\vec{z}_1),[O_2(t_2,\vec{z}_2),O_1(t_3,\vec{z}_3)]]\right>=\int d^3k_1 d^3k_2 d^3k_3\delta(k_1+k_2+k_3)e^{i(\vec{k}_1\cdot\vec{z}_1+\vec{k}_2\cdot\vec{z}_2+\vec{k}_3\cdot\vec{z}_3)} G(k_1,k_2,k_3),
\end{equation}
one may straightforwardly express the leading order one loop correction in terms of $G(k_1,k_2,k_3) $ as
\begin{equation}\label{ti}
P(k_1,k_2,k_3)^{(1)}\simeq-\int_{t_R}^{t_F} dt_3\,a(t_3)^3\int_{t_R}^{t_3}dt_2\,a(t_2)^3\int_{t_R}^{t_2}dt_1\,a(t_1)^3 \frac{df}{dt}(t_3)G(k_1,k_2,k_3)\,P_{k_1}^{\zeta(0)}P_{k_2}^{\zeta(0)}+\textrm{cyclic},
\end{equation}
where the extra two terms, which can be obtained by cyclic interchange of momenta, are not written explicitly.
Using \eq{o1} and \eq{o2} in \eq{ngf}, it is possible to express $G(k_1,k_2,k_3)$ as a loop momentum integral of the mode functions. Indeed a straightforward calculation gives
\begin{eqnarray}
&&G(k_1,k_2,k_3)=\frac{9g^6}{2H(t_3)}\phi^2(t_1)\phi^2(t_2)\phi^2(t_3)\frac{1}{(2\pi)^9}\int d^3 q \left[\chi_{q+k_2}(t_2)\chi_{q+k_2}^*(t_3)-c.c\right]\label{g50}\\
&&\left[\left(\chi_{q-k_1}(t_1)\chi_{q-k_1}^*(t_3)-c.c.\right)\left(\chi_q(t_1)\chi_q^*(t_2)+c.c.\right)\right.\nonumber\\
&&\hs{20}\left.+\left(\chi_{q-k_1}(t_1)\chi_{q-k_1}^*(t_2)-c.c.\right)\left(\chi_q(t_1)\chi_q^*(t_3)+c.c.\right)\right]+...\nonumber
\end{eqnarray}
where the contributions of the first terms in \eq{o1} and \eq{o2} are expressed explicitly. Since the loop variable $q$ is restricted to the instability band $(0,a q_*)$, one again has $q\gg k_1,k_2,k_3$. Since the modes in the loop integral in \eq{g50} depend on the external momenta only through $q+k_i$, i.e. $\chi_{q+k_i}$, the dependence of $G(k_1,k_2,k_3)$ on its arguments is very weak and one may write $G(k_1,k_2,k_3)\simeq G$. Using \eq{c-amp} we obtain
\begin{eqnarray}\label{50}
&&|G|\simeq \int d^3q\, \frac{36}{(2\pi)^9}\frac{g^6\Phi^6}{H}|\chi_q|^6\sin^2(mt_1)\sin^2(mt_2)\sin^2(mt_3)\sin\left[\theta_q(t_3)-\theta_q(t_2)\right]\nonumber\\
&&\left(\sin[\theta_q(t_3)-\theta_q(t_1)]\cos[\theta_q(t_2)-\theta_q(t_1)]+\sin[\theta_q(t_3)-\theta_q(t_2)]\cos[\theta_q(t_3)-\theta_q(t_1)]\right)+...
\end{eqnarray}
Since the largest contribution to this loop integral comes when $q$ runs near $aq_*$, one may set $q=aq_*$ and use $\int d^3q\to 4\pi q_*^3$ to estimate the integral.
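Schematically, and dropping factors of order unity, this peak approximation amounts to
\begin{equation*}
\int_{|q|\,<\,aq_*} d^3q\, F(q)\simeq 4\pi (aq_*)^3 F(aq_*)=a^3\left[4\pi q_*^3\right]F(aq_*),
\end{equation*}
which makes explicit both the $a^3$ enhancement factor arising from the conversion of the comoving loop integral to the physical scale and the replacement $\int d^3q\to 4\pi q_*^3$.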
It is now possible to use \eq{50} in \eq{ti} to read the three point function. As before, the largest contribution to the time integrals comes from the last oscillation period in which the $\chi$ modes are amplified most. Keeping the slowly changing factors like $\Phi$ and $H$ constant in this last cycle, we obtain
\begin{equation}\label{59}
P^{(1)}(k_1,k_2,k_3)\simeq \frac{144\pi}{(2\pi)^9}\left[q_*^3 \left|\chi_{q_*}\right|^6\right]
\left(\left[\frac{C_2}{m^3}\right]\frac{g^6\Phi(t_F)^6}{H(t_F)}+...\right)\left[\frac{1}{3M_p^2}\right]P_{k_1}^{\zeta(0)}P_{k_2}^{\zeta(0)}+\textrm{cyclic},
\end{equation}
where the dimensionless constant $C_2$ is given by
\begin{eqnarray}
&&C_2= \int_{mt_F-2\pi}^{mt_F}m dt_3\,\int_{mt_F-2\pi}^{mt_3}mdt_2\int_{mt_F-2\pi}^{mt_2}mdt_1\sin^2(mt_1)\sin^2(mt_2)\sin^2(mt_3)\sin[\theta_{q_*}(t_3)-\theta_{q_*}(t_2)]\nonumber\\
&&\left(\sin[\theta_{q_*}(t_3)-\theta_{q_*}(t_1)]\cos[\theta_{q_*}(t_2)-\theta_{q_*}(t_1)]+\sin[\theta_{q_*}(t_3)-\theta_{q_*}(t_2)]\cos[\theta_{q_*}(t_3)-\theta_{q_*}(t_1)]\right).\label{c2}
\end{eqnarray}
We would like to recall that in this expression the scale factors cancel each other out and the time-dependent dimensionful quantities are evaluated at the end of the first stage of preheating.
The non-gaussianity parameter $f_{NL}$ can be defined as \cite{fnl0,mal}
\begin{equation}\label{60}
\zeta=\zeta_g-\frac{3}{5}f_{NL} \zeta_g^2,
\end{equation}
where $\zeta_g$ denotes the corresponding free quantum field. This definition introduces a shape independent parameter that gives an overall order of magnitude estimate for the scalar non-gaussianity. Calculating the three point function by using \eq{60} and comparing with \eq{59} one finds
\begin{equation} \label{fnl0}
f_{NL}\simeq \frac{240\pi}{(2\pi)^3} \left[q_*^3 \left|\chi_{q_*}\right|^6\right]
\left(\left[\frac{C_2}{m^3}\right]\frac{g^6\Phi(t_F)^6}{H(t_F)}+...\right)\left[\frac{1}{3M_p^2}\right].
\end{equation}
For our canonical set \eq{s1}, $C_2$ can be found by a numerical integration that gives $C_2\simeq 0.00057$ (recall that $\theta_{q_*}$ is fixed in \eq{cestp}). Using the values of the other dimensionful parameters in \eq{fnl0} we obtain
\begin{equation}\label{fnl1}
f_{NL}\simeq \, 1.4\times 10^{4}.
\end{equation}
This is a very large amount of non-gaussianity that is produced solely during reheating and it is obviously inconsistent with observations. On the other hand, by comparing \eq{e2} and \eq{fnl1} we observe that although they measure different one loop corrections, the latter has more $\chi$ modes circulating in the loops and it produces a much bigger number. Therefore, the large amount obtained in \eq{fnl1} can be an artifact of perturbation theory, which might become invalid in this model. It is possible to produce large non-gaussianity in inflationary models (see e.g. \cite{fnl1}), but the single scalar field models generically give $f_{NL}={\cal O}(\epsilon)$, where $\epsilon$ is the slow roll parameter. Although we are not capable of making non-perturbative estimates, our computations show that a large non-gaussianity can be produced during reheating.
Using a different approach, namely by looking at local nonlinear terms in field equations generated through interactions, it has also been shown in \cite{yeni2,yeni3,yeni4,yeni5} that parametric resonance effects might generate large non-gaussianity. Specifically, in \cite{yeni5} the chaotic $\lambda \phi^4$ model is considered and it is found that for a certain range of parameters one has $f_{NL}>{\cal O}(1000)$. As long as the parametric resonance effects are taken into account, $\lambda\phi^4$ and $m^2\phi^2$ models are very similar to each other and thus our result \eq{fnl1} perfectly agrees with \cite{yeni5}.
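To make the numerical-integration step behind $C_2$ explicit, the nested oscillatory integral \eq{c2} can be estimated with a simple midpoint Riemann sum. The sketch below is illustrative only: the phase $\theta_{q_*}$ is modeled as a linear function of $mt$ with an assumed frequency ratio, which is merely a placeholder for the actual phase fixed in \eq{cestp}, so the returned number is not expected to reproduce the quoted $C_2\simeq 0.00057$.

```python
import numpy as np

def c2_estimate(phase_ratio=10.0, n=80):
    """Midpoint Riemann-sum estimate of the nested integral C2 over one
    oscillation cycle.  ASSUMPTION: the phase theta_{q*} is modeled as
    theta = phase_ratio * (m t); the true phase must be read off from the
    numerical chi-mode solution, so the number returned here is only
    illustrative."""
    dx = 2.0 * np.pi / n
    x = (np.arange(n) + 0.5) * dx                     # midpoints, x = m t
    x1, x2, x3 = np.meshgrid(x, x, x, indexing="ij")
    ordered = (x1 < x2) & (x2 < x3)                   # nested domain t1 < t2 < t3
    th1, th2, th3 = phase_ratio * x1, phase_ratio * x2, phase_ratio * x3
    integrand = (np.sin(x1)**2 * np.sin(x2)**2 * np.sin(x3)**2
                 * np.sin(th3 - th2)
                 * (np.sin(th3 - th1) * np.cos(th2 - th1)
                    + np.sin(th3 - th2) * np.cos(th3 - th1)))
    return float(np.sum(integrand * ordered) * dx**3)

print("C2 estimate (placeholder phase):", c2_estimate())
```

With the actual phase and oscillation interval taken from the background solution, the same sum would reproduce the quoted value.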
\subsection{The tensor power spectrum}
The interaction Hamiltonian \eq{h3} also modifies the tensor power spectrum due to the last term involving the graviton coupling. One may first think that this interaction is suppressed by $1/a^2$; however, this factor simply converts the two comoving momenta arising from the two partial derivatives to the physical scale. The tensor field $\gamma_{ij}$ is similar to a spectator field since its background value vanishes. As a result, the tensor power spectrum is not affected by the (infinitesimal) changes of the spacetime slicing and the gauge can be fixed in a natural way without giving rise to any complications. Moreover, unlike the $\zeta$ propagator, the tensor propagator does not contain any singularities. The correction corresponding to \eq{h3} can be pictured as in Fig. \ref{fig3}.
\begin{figure}
\centerline{
\includegraphics[width=6cm]{l4}}
\caption{The 1-loop graph arising from the interaction Hamiltonian \eq{h3} that contributes to the graviton two point function $\left<\gamma_{ij}\gamma_{kl}\right>$ during reheating.}
\label{fig3}
\end{figure}
Using \eq{inp} for $\gamma_{ij}(t,\vec{x})\gamma_{kl}(t,\vec{y})$ with $N=2$, which gives the first nonzero contribution, and applying the identity $[AB,C]=A[B,C]+[A,C]B$, one finds terms with one or two graviton commutators. It is easy to see that the terms with two graviton commutators are suppressed by $1/a^3$ and hence they become completely negligible. A straightforward calculation then gives the following one loop correction to the tensor power spectrum in momentum space
\begin{equation}\label{ccl}
P_k^\gamma(t_F)^{(1)}\simeq P_k^{\gamma(0)}\frac{4i}{9M_p^2}\int_{t_R}^{t_F}dt_2\int_{t_R}^{t_2}dt_1\, a(t_1)\,a(t_2)\,\left[g(t_2)-g(t_F)\right]\left[H(t_1,t_2,k)-c.c\right],
\end{equation}
where $g(t)$ is defined in \eq{fg} and
\begin{equation}\label{th}
H(t_1,t_2,k)=\frac{1}{(2\pi)^3}\int d^3q \,q^2(k+q)^2\, \chi_{q}(t_1)\chi_{k+q}(t_1)\chi_{q}^*(t_2)\chi_{k+q}^*(t_2).
\end{equation}
In \eq{ccl} we reintroduce the Planck mass $M_p$, which can be fixed either by dimensional analysis or by keeping track of its presence starting from the action \eq{a}. Once again, the one loop correction in momentum space becomes a multiple of the tree level power spectrum. This is mainly because the expectation value $\left< [O_{ij}(t_2,\vec{z}_2), O_{kl}(t_1,\vec{z}_1)]\right>$, which appears due to the last term of the interaction Hamiltonian \eq{h3}, produces $\delta_{ik}\delta_{jl}+\delta_{il}\delta_{kj}$, and this index structure acting on the polarization tensor $\Pi_{ijkl}$, which is introduced in \eq{pt}, gives back the same tensor.
Converting the comoving integration variable in \eq{th} to the physical scale generates the power $a^7$, and this factor together with $a(t_1)a(t_2)$ in \eq{ccl} completely compensate the suppressions of the mode functions $\chi_q$ and the $1/a^3$ decay of the function $g(t)$. As before, the change in the spectral index is negligible due to the large hierarchy between the superhorizon scale $k$ and the scale $q_*$ characterizing the instability band. Therefore, in \eq{ccl} one may ignore the $k$ dependence, set $q=q_*$ and let $d^3q\to 4\pi q_*^3$. For the $\chi$ modes, one may use \eq{c-amp} and \eq{cestp}. Finally, to estimate the time integral, we introduce the time dependence of the background quantities using \eq{fh}. As a result we find
\begin{equation}\label{corten}
P_k^\gamma(t_F)^{(1)}\simeq P_k^{\gamma(0)} \frac{8}{27\pi^2M_p^2}\left[\frac{C_3}{m^2} \right]\left[\frac{1}{H(t_F)}\right]\left[q_*^7 |\chi_{q_*}|^4\right],
\end{equation}
where
\begin{equation}\label{c3}
C_3=\int_{mt_F-2\pi}^{mt_F}mdt_2\int_{mt_F-2\pi}^{mt_2}mdt_1\frac{t_F^{8/3}}{(t_1t_2)^{4/3}}\left[\frac{t_F}{t_2}-1\right]\sin[2\theta_{q_*}(t_2)-2\theta_{q_*}(t_1)].
\end{equation}
For our canonical set of parameters \eq{s1}, we numerically integrate \eq{c3}, which yields $C_3\simeq 0.029$. Using \eq{s2} for the Hubble parameter one finds
\begin{equation}\label{e3}
P_k^\gamma(t_F)^{(1)}\simeq 5\times10^{-5} P_k^{\gamma(0)}.
\end{equation}
The reason this correction is small compared to the scalar power spectrum correction \eq{e2} is that the factor $g^4\Phi^4$ in \eq{e1} is replaced by $q_*^4$ in \eq{corten} due to the different forms of the interactions in \eq{h3}, and one has $g\Phi\simeq 7q_*$. Nevertheless, the modification \eq{e3} is much larger than the quantum corrections that arise during inflation, which are suppressed by the ratio $H/M_p$ \cite{mal}.
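The numerical integration behind $C_3$ can be sketched in the same way as for $C_2$: a midpoint Riemann sum over the nested two-dimensional domain in \eq{c3}. Both the value of $mt_F$ and the linear phase used below are assumed placeholders, not the values following from the background and mode-function solution, so the output only illustrates the procedure rather than reproducing $C_3\simeq 0.029$.

```python
import numpy as np

def c3_estimate(x_f=50.0, phase_ratio=10.0, n=400):
    """Midpoint Riemann-sum estimate of the double integral C3 over the
    last oscillation cycle [x_f - 2*pi, x_f], with x = m t.
    ASSUMPTIONS: x_f = m t_F is a placeholder value and the phase
    theta_{q*} is modeled as phase_ratio * (m t); both must be taken from
    the actual background and mode-function solution."""
    dx = 2.0 * np.pi / n
    x = x_f - 2.0 * np.pi + (np.arange(n) + 0.5) * dx   # midpoints
    x1, x2 = np.meshgrid(x, x, indexing="ij")
    ordered = x1 < x2                                   # nested domain t1 < t2
    integrand = (x_f**(8.0 / 3.0) / (x1 * x2)**(4.0 / 3.0)
                 * (x_f / x2 - 1.0)
                 * np.sin(2.0 * phase_ratio * (x2 - x1)))
    return float(np.sum(integrand * ordered) * dx**2)

print("C3 estimate (placeholder parameters):", c3_estimate())
```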
\section{Some higher order interactions and loops} \label{sec3}
The results of the previous section show that cubic interactions involving two $\chi$ fields modify the scalar and the tensor power spectra and give rise to non-gaussianity. Although the interaction Hamiltonian \eq{h3} is cubic, the first nonzero contributions come from \eq{inp} with $N=2$ for the scalar and the tensor power spectra, and with $N=3$ for the three point function. The corresponding one loop corrections are sixth and ninth order in fluctuations, respectively.
In this section, we consider some higher order (e.g. fourth and fifth order) interactions, again involving two $\chi$ fields, and calculate the corresponding one loop effects. Our aim in considering such interactions is twofold. First, we would like to use \eq{inp} with $N=1$. Therefore, by a naive counting in perturbation theory the effects are supposed to be more prominent than the ones we have studied in the previous section (although this turns out to be incorrect, as we will see below). Second, the loop effects calculated in the previous section involve the commutators of the $\chi$ fields and thus one must carefully treat the phase factors, as we did in \eq{f5}. The loop corrections we consider in this section demonstrate the modifications more directly.
\subsection{The scalar power spectrum and non-gaussianity}
Starting from the action \eq{a}, one may obtain the following terms in the interaction Hamiltonian
\begin{equation}\label{h4}
H_I=\int d^3 x\, a^3\, e^{3\zeta}\rho_\chi+...=\fr92 \int d^3 x\, a^3\, \left[ \zeta^2+\zeta^3\right]\rho_\chi + ...
\end{equation}
where $\rho_\chi$ is the energy density of the $\chi$ field given by
\begin{equation}
\rho_\chi=\fr12 g^2\phi^2\chi^2+\frac{1}{2a^2}(\partial\chi)^2+\frac{1}{2}\dot{\chi}^2.
\end{equation}
The first term in \eq{h4} contributes to the scalar power spectrum and the second one produces scalar non-gaussianity. Note that the linear $\zeta$ term in \eq{h4} agrees with the cubic Hamiltonian in \eq{h3}.
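For completeness, the numerical coefficients in \eq{h4} follow from the Taylor expansion
\begin{equation*}
e^{3\zeta}=1+3\zeta+\frac{(3\zeta)^2}{2!}+\frac{(3\zeta)^3}{3!}+\cdots=1+3\zeta+\frac{9}{2}\zeta^2+\frac{9}{2}\zeta^3+\cdots,
\end{equation*}
so the quadratic and the cubic couplings to $\rho_\chi$ carry the same factor $9/2$.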
Let us first consider the one loop correction to the scalar power spectrum arising from \eq{h4} that can be pictured as in Fig. \ref{fig4}. Using \eq{inp} for the $\zeta(t,\vec{x})\zeta(t,\vec{y})$ with $N=1$, a straightforward calculation gives
\begin{equation}\label{e4}
P_k^\zeta(t_F)^{(1)}\simeq 18 P_k^{\zeta(0)}\int_{t_R}^{t_F} dt_1 a(t_1)^3\left<\rho_{\chi}(t_1)\right> \left[f(t_1)-f(t_F)\right].
\end{equation}
This equation clearly shows how the correction grows in time during preheating as the energy density $\left<\rho_{\chi}\right>$ increases as a result of $\chi$ particle creation. Note that \eq{e4} only modifies the amplitude of the spectrum since the correction multiplying the tree level result does not depend on the external momentum $k$. At the end of the first stage of preheating the energy density of the created $\chi$ particles catches up with the background energy density, which gives $\left<\rho_{\chi}(t_1)\right>\simeq 3H^2M_p^2$. Reading $f(t)$ from \eq{fs}, it is easy to see that
\begin{equation}\label{e44}
P_k^\zeta(t_F)^{(1)}\simeq {\cal O}\left(\frac{H(t_F)}{m}\right) P_k^{\zeta(0)}.
\end{equation}
Indeed, using \eq{fh} for the background quantities one finds that
\begin{equation}
P_k^\zeta(t_F)^{(1)}\simeq \frac{12H(t_F)}{m}\int_{mt_F-2\pi}^{mt_F} mdt\left[\frac{t_F}{t}-1\right] P_k^{\zeta(0)}\simeq 0.05 P_k^{\zeta(0)},
\end{equation}
where, as before, we restrict the time integral to the last inflaton oscillation cycle.
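The last-cycle integral above, $\int_{mt_F-2\pi}^{mt_F} m\,dt\,[t_F/t-1]$, has the closed form $mt_F\ln[mt_F/(mt_F-2\pi)]-2\pi$, which provides a simple check of the numerical estimate. The sketch below verifies this for an assumed placeholder value of $mt_F$; the actual value follows from \eq{fh} and \eq{s1} and is not used here.

```python
import numpy as np

def cycle_integral(x_f, n=200000):
    """Midpoint-rule value of I(x_f) = int_{x_f-2pi}^{x_f} (x_f/x - 1) dx,
    the last-cycle time integral written in the variable x = m t."""
    dx = 2.0 * np.pi / n
    x = x_f - 2.0 * np.pi + (np.arange(n) + 0.5) * dx
    return float(np.sum(x_f / x - 1.0) * dx)

def cycle_integral_exact(x_f):
    """Closed form: the antiderivative of x_f/x - 1 is x_f*ln(x) - x."""
    return x_f * np.log(x_f / (x_f - 2.0 * np.pi)) - 2.0 * np.pi

x_f = 50.0   # ASSUMED value of m t_F, for illustration only
print(cycle_integral(x_f), cycle_integral_exact(x_f))
```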
One may find other terms in the interaction Hamiltonian that modify the scalar power spectrum. For instance, by introducing the $\exp(3\zeta)$ factor in \eq{h3}, which arises from $\sqrt{h}$, one obtains a fourth order term
\begin{equation}\label{h44}
H^{(4)}_I=3 \int d^3x \,a^3\,\zeta\dot{\zeta}O_1.
\end{equation}
After using \eq{h44} in \eq{inp} with $N=1$, one encounters terms either with $\left< \zeta\z\right>[\dot{\zeta},\zeta]$ or $\left< \zeta\dot{\zeta}\right>[\zeta,\zeta]$. It is easy to see that the latter is suppressed by $1/a^3$ and the former yields
\begin{equation}\label{e5}
P_k^\zeta(t_F)^{(1)}\simeq 6 P_k^{\zeta(0)}\int_{t_R}^{t_F} dt_1 a(t_1)^3\left< O_1(t_1)\right> \frac{df}{dt}(t_1).
\end{equation}
From \eq{o1}, one has $\left< O_1\right>\simeq\left<\rho_\chi\right>/H$ and using \eq{fgn} we obtain
\begin{equation}\label{e55}
P_k^\zeta(t_F)^{(1)}\simeq {\cal O}\left(\frac{H(t_F)}{m}\right) P_k^{\zeta(0)}.
\end{equation}
The main conclusion here is that although the corrections \eq{e44} and \eq{e55} correspond to lower order in perturbation theory, they give smaller contributions compared to \eq{e2}.
\begin{figure}
\centerline{
\includegraphics[width=5cm]{l5}}
\caption{The 1-loop graph arising from the interaction Hamiltonian \eq{h4} that contributes to the scalar power spectrum during reheating.}
\label{fig4}
\end{figure}
The fifth order term $\zeta^3\rho_\chi$ in \eq{h4} corrects the three point function and thus it gives rise to non-gaussianity. The corresponding graph is pictured in Fig. \ref{fig5}. Using \eq{inp} for $\zeta(t,\vec{x})\zeta(t,\vec{y})\zeta(t,\vec{z})$ with $N=1$ and using the definition of the three point function in momentum space given in \eq{def3}, one finds that
\begin{equation}\label{tii}
P(k_1,k_2,k_3)^{(1)}\simeq \frac{27}{(2\pi)^6}
\int_{t_R}^{t_F} dt_1\,a(t_1)^3 \left<\rho_{\chi}(t_1)\right> \left[f(t_1)-f(t_F)\right]\,P_{k_1}^{\zeta(0)}P_{k_2}^{\zeta(0)}+\textrm{cyclic}.
\end{equation}
From \eq{60}, the corresponding $f_{NL}$ parameter can be calculated as
\begin{equation}
f_{NL}\simeq 45 \int_{t_R}^{t_F} dt_1\,a(t_1)^3 \left<\rho_{\chi}(t_1)\right> \left[f(t_1)-f(t_F)\right].
\end{equation}
As in \eq{h44}, introducing the $\exp(3\zeta)$ factor in \eq{h3} gives the following interaction Hamiltonian:
\begin{equation}\label{h444}
H^{(4)}_I=\frac{9}{2} \int d^3x \,a^3\,\zeta^2\dot{\zeta}O_1.
\end{equation}
Its contribution to $f_{NL}$ can be found as
\begin{equation}
f_{NL}\simeq 15 \int_{t_R}^{t_F} dt_1\,a(t_1)^3 \left< O_1(t_1)\right> \frac{df}{dt}(t_1).
\end{equation}
In both of these cases it is easy to estimate the integrals so that
\begin{equation}\label{fc}
f_{NL}\simeq {\cal O}\left(\frac{H(t_F)}{m}\right).
\end{equation}
Therefore a small amount of non-gaussianity is produced by these interactions. As in the case of the power spectrum, the loop corrections to $f_{NL}$ coming from the interactions that can be pictured as in Fig. \ref{fig5} become much smaller than the previous one \eq{fnl1}.
\begin{figure}
\centerline{
\includegraphics[width=3cm]{l1}}
\caption{The 1-loop graph arising from the interaction Hamiltonian \eq{h4} that contributes to the three point function $\left<\zeta\z\zeta\right>$.}
\label{fig5}
\end{figure}
\subsection{Fourth order interactions of the form $\gamma\cc\chi\chi$ and the tensor power spectrum}
So far in this section we have considered some higher order interactions that modify the scalar power spectrum and the $f_{NL}$ parameter. It is clear that in a systematic study one should work out the complete fourth order action to determine the corrections more accurately. In that case, the lapse $N$ and the shift $N^i$ must be solved up to second order. This is a complicated calculation and the complete fourth order action is not very illuminating for the scalar field. However, the interactions studied above are generic enough to indicate that other corrections to the scalar power spectrum and $f_{NL}$ will be similar to the ones found above.
In this subsection, we determine the complete fourth order action involving the interactions of the tensor field $\gamma_{ij}$ and the reheating scalar $\chi$. Our aim is again to compare the corresponding corrections with \eq{e3} to see how the perturbation theory is working. Since we solely concentrate on the tensor modes we set
\begin{equation}
\zeta=0.
\end{equation}
(recall that we have been working in the $\varphi=0$ gauge). The quartic interactions involving $\gamma_{ij}$ and $\chi$ are necessarily of the form $\gamma\cc\chi\chi$ since the background values of $\gamma_{ij}$ and $\chi$ are zero. Similarly, there is no linear term in $N$ and $N^i$ after one sets $\zeta=0$. We define
\begin{eqnarray}
&&N=1+N^{(2)},\\
&&N^i=N^{(2)i}_T+\partial_i\psi^{(2)},
\end{eqnarray}
where $\partial_iN^{(2)i}_T=0$. To determine these second order quantities, one may use the exact solution for the lapse
\begin{equation}
N=\sqrt{\frac{B}{A}},
\end{equation}
where $A$ and $B$ are defined in \eq{aa} and \eq{bb}, and work out the momentum constraint, which reads
\begin{equation}
D_i\left(\frac{1}{N}\left[K^i{}_j-\delta^i_j K\right]\right)=\frac{1}{N}\left(\dot{\chi}-N^i\partial_i\chi\right)\partial_j\chi,
\end{equation}
where $K_{ij}$ and $K$ are defined above \eq{aa}. Up to second order in fluctuations, the Ricci scalar $R^{(3)}$ of the constant time hypersurface can be found as
\begin{equation}
R^{(3)}=-\frac{1}{4a^2}(\partial_i\gamma_{jk})(\partial_i\gamma_{jk}).
\end{equation}
After a relatively long but straightforward calculation we find
\begin{eqnarray}
&&\partial^2N^{(2)}=\frac{1}{8H}\partial_j(\dot{\gamma}_{ik}\partial_j\gamma_{ik})+\frac{1}{2H}\partial_j(\dot{\chi}\partial_j\chi),\label{lsf}\\
&&\partial^2\psi^{(2)}=-\frac{1}{16H}\dot{\gamma}_{ij}\dot{\gamma}_{ij}-\frac{1}{16Ha^2}(\partial_i\gamma_{jk})(\partial_i\gamma_{jk})-\frac{1}{4H}\dot{\chi}^2-\frac{1}{4Ha^2}(\partial_i\chi)(\partial_i\chi)-\frac{g^2\phi^2}{4H}\chi^2-\frac{m^2\phi^2}{2H}N^{(2)}.\nonumber
\end{eqnarray}
Similarly, the transverse part of the shift reads
\begin{equation}
\partial^2N^{(2)i}_T=\fr12\partial_i\frac{1}{\partial^2}\partial_k(\dot{\gamma}_{mn}\partial_k\gamma_{mn})-\fr12 \gamma_{jk}\partial_j\dot{\gamma}_{ki}+\fr12\dot{\gamma}_{jk}\partial_j\gamma_{ki}-\fr12\dot{\gamma}_{jk}\partial_i\gamma_{jk}-2\dot{\chi}\partial_i\chi+\frac{2}{\partial^2}\partial_i(\partial_j(\dot{\chi}\partial_j\chi)).\label{sf}
\end{equation}
In all these expressions the indices are contracted with the Kronecker delta and we set $M_p=1$.
Before discussing the loop corrections, it is interesting to check the validity of the perturbation theory from the quadratic expressions given for the lapse and the shift. As discussed in \cite{coh2}, the perturbation theory is applicable if one has
\begin{equation}\label{vp}
\left< N^{(2)}\right>\ll1,\hs{10}\left<\partial_i N^i\right>=\left<\partial^2\psi^{(2)}\right>\ll H.
\end{equation}
While the first condition is needed to keep the time coordinate proper, the second ensures that the original foliation of the spacetime that is presumed for perturbation theory is not destroyed by the fluctuations. It is obvious that the terms containing the $\chi$ field are dangerous for the conditions \eq{vp}. From \eq{lsf} we find
\begin{equation}
\left< N^{(2)}\right>\simeq \frac{\omega_{q_*}}{2HM_p^2}\left<\chi^2\right>.
\end{equation}
Since near the end of the first stage $\left<\chi^2\right>\simeq m^2/g^2$ and $\omega_{q_*}\simeq g\Phi$, one has
\begin{equation}
\left< N^{(2)}\right>\simeq \frac{m^2\Phi}{gHM_p^2}.
\end{equation}
From this expression it is easy to see that the first condition in \eq{vp} is safe. On the other hand, using \eq{lsf} the second condition in \eq{vp} demands
\begin{equation}
\frac{\left<\rho_\chi\right>}{H^2M_p^2}\ll1.
\end{equation}
It is clear that this condition is invalidated when the energy density of the $\chi$ particles catches up with the background energy density, i.e. when $\left<\rho_\chi\right>\simeq H^2M_p^2$. This result is independent of our loop considerations and separately indicates the failure of the perturbation theory in this model.
Returning to the interactions involving $\chi$ and $\gamma_{ij}$, one can use the solutions for the lapse and the shift in \eq{a} to find
\begin{equation}\label{s4}
S^{(4)}_{\gamma\cc\chi\chi}=\fr12\int a^3\left[-\frac{1}{2a^2}\gamma_{ik}\gamma_{jk}\partial_i\chi\partial_j\chi-2N^{i}_{\gamma}\dot{\chi}\partial_i\chi+(2m^2\phi^2)N_{\gamma}N_{\chi}+(K_{ij}K^{ij}-K^2)^{(4)}\right],
\end{equation}
where a subindex on $N$ or $N^i$ indicates that only the relevant terms must be kept in \eq{lsf} and \eq{sf}. To fix the action completely, one should also determine the fourth order terms in $K_{ij}K^{ij}-K^2$; however, we will not need them for our analysis below. The corresponding corrections can be pictured as in Fig. \ref{fig6}.
\begin{figure}
\centerline{
\includegraphics[width=4.4cm]{l6}}
\caption{The 1-loop graph arising from the action \eq{s4} that contributes to the tensor spectrum $\left<\gamma\cc\right>$.}
\label{fig6}
\end{figure}
It is clear that in \eq{s4} the terms containing $\partial_i\gamma_{jk}$ are suppressed by the factors $\hat{k}/M_p$, $\hat{k}/q_*$ or $\hat{k}/H$, where $\hat{k}=k/a$ is the physical superhorizon scale of interest. Similarly, since a time derivative acting on $\gamma_{ij}$ kills the constant piece in the mode function, the terms containing $\dot{\gamma}\dot{\gamma}$ are completely negligible because they decay like $1/a^3$. Likewise, the terms that have the structure $\dot{\gamma}\gamma$ would be equivalent to $H\gamma\cc$ (note that these appear from the commutator $[\gamma,\dot{\gamma}]$). As a result, we conclude that the first term in \eq{s4} gives the typical correction to the tensor power spectrum and the corresponding interaction Hamiltonian becomes
\begin{equation}\label{hc4}
H_I^{(4)}=\frac{a}{4}\int \gamma_{ik}\gamma_{jk}\partial_i\chi\partial_j\chi.
\end{equation}
A straightforward calculation gives the following one loop correction to the tensor power spectrum:
\begin{equation}
P_k^\gamma(t_F)^{(1)}\simeq P_k^{\gamma(0)}\fr83 \int_{t_R}^{t_F}dt\, a(t)\, \left< \partial_i\chi(t)\partial_i\chi(t) \right> \, [g(t)-g(t_F)].
\end{equation}
To estimate this correction, we first note that $ \left< \partial_i\chi\partial_i\chi \right>\simeq a^2q_*^2\left<\chi^2\right>$. We then focus on the last inflaton oscillation cycle in which $\left<\chi^2\right>$ reaches its maximum value. Treating $\left<\chi^2\right>$ as a constant and using \eq{fh} for the background quantities one may estimate
\begin{equation}
P_k^\gamma(t_F)^{(1)}\simeq P_k^{\gamma(0)}\frac{16q_*^2\left<\chi(t_F)^2\right>}{9mH(t_F)M_p^2}\int_{mt_F-2\pi}^{mt_F}mdt\,\,\frac{t^2}{t_F^2}\left[\frac{t_F}{t}-1\right].
\end{equation}
For our canonical case \eq{s1}, the integral can be evaluated numerically to yield $0.28$. Using \eq{ce} and the values of the other dimensionful parameters we obtain
\begin{equation}\label{e42}
P_k^\gamma(t_F)^{(1)}\simeq 10^{-6}P_k^{\gamma(0)}.
\end{equation}
We see that this correction is two orders of magnitude smaller than \eq{e3}. As before, a correction which is supposed to be larger according to the naive counting in perturbation theory turns out to be smaller. Note that both corrections \eq{e3} and \eq{e42} are still larger than the quantum effects produced during inflation, which are characterized by the ratio $H/M_p\simeq 10^{-8}$ \cite{mal}.
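The oscillation-cycle integral entering the tensor estimate, $\int_{mt_F-2\pi}^{mt_F}(t^2/t_F^2)(t_F/t-1)\,m\,dt$, can likewise be checked against its closed form; for $mt_F\gg 2\pi$ it behaves as $2\pi^2/(mt_F)$. The sketch below uses an assumed placeholder value of $mt_F$, so it illustrates the check rather than reproducing the quoted value $0.28$.

```python
import numpy as np

def tensor_cycle_integral(x_f, n=200000):
    """Midpoint-rule value of int_{x_f-2pi}^{x_f} (x/x_f)^2 (x_f/x - 1) dx,
    the last-cycle integral entering the tensor estimate (x = m t)."""
    dx = 2.0 * np.pi / n
    x = x_f - 2.0 * np.pi + (np.arange(n) + 0.5) * dx
    return float(np.sum((x / x_f)**2 * (x_f / x - 1.0)) * dx)

def tensor_cycle_exact(x_f):
    """Closed form from the antiderivative x^2/(2 x_f) - x^3/(3 x_f^2)."""
    F = lambda x: x**2 / (2.0 * x_f) - x**3 / (3.0 * x_f**2)
    return F(x_f) - F(x_f - 2.0 * np.pi)

x_f = 200.0   # ASSUMED value of m t_F, placeholder only
print(tensor_cycle_integral(x_f), tensor_cycle_exact(x_f), 2.0 * np.pi**2 / x_f)
```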
\section{Conclusions} \label{sec4}
In a recent work \cite{ali1}, one of us has shown that the loop quantum effects during reheating significantly modify the scalar power spectrum. In this paper, in an attempt to extend the findings of \cite{ali1}, we consider how loops in reheating produce non-gaussianity and affect the tensor power spectrum in the chaotic $m^2\phi^2$ model. Based on the tree level results, this model is actually ruled out at the 95\% confidence level by the Planck data (provided the running of the index is neglected); however, our findings show that quantum effects during reheating can change this conclusion, since the corresponding corrections can alter the tree level results appreciably.
In most of the scalar field inflationary models, inflation is followed by a period of coherent inflaton oscillations where the background is still homogeneous and isotropic. This phase continues until backreaction effects set in. As pointed out in \cite{ca}, in such a background causality does not preclude the emergence of superhorizon effects because, by coherency, the same physical influence can appear at different positions at the same time. Therefore, the quantum effects can be important for cosmological variables in the first stage of reheating. On the other hand, it is known that the entropy perturbations can cause nontrivial superhorizon evolution of the curvature perturbation. Consequently, it is not surprising to see that the effects of entropy modes circulating in the loops become significant, especially in the parametric resonance regime. Indeed, we observe that the corrections get larger as the number of $\chi$ modes circulating in the loops increases, which indicates that the perturbation theory might become invalid.
It is well known that in the chaotic model we have studied, the curvature perturbation $\zeta$ becomes an ill defined variable during reheating. For that reason, in \cite{ali1} the calculations have been carried out in the $\zeta=0$ gauge till the end of the first stage of reheating and then a gauge transformation has been applied to read the $\left<\zeta\z\right>$ correlation function. In this paper, we utilize a different strategy and smooth out the spikes of $\zeta$ by using the averaged-out background variables in the quadratic $\zeta$-action. As shown above, the results obtained in this way are consistent with \cite{ali1} and thus our conclusions about the scalar power spectrum (and non-gaussianity) are firm. Note that the tensor calculation is free from the gauge fixing issues.
It is possible to develop the results of this paper in different directions. Due to the importance of the chaotic $m^2\phi^2$ model, it would be valuable to perform a full numerical check of the loop corrections that are estimated in this paper. It would also be crucial to see whether the loops in reheating modify the predictions of the models that are favored by the Planck data, like the Starobinsky model \cite{strm}. Finally, it would be interesting to determine the loop effects when the inflaton decay occurs perturbatively. In that case, while the reheating scalar modes cannot take large values, the decay process is completed over a long time, which might enhance the quantum effects, since according to the in-in formalism \eq{inp}, the quantum corrections are proportional to the duration of the process.
\begin{acknowledgments}
N. Kat\i rc\i \ thanks Bo\u{g}azi\c{c}i University for the financial support provided by the Scientific Research Fund with project no.\ 7128.
\end{acknowledgments}
\section{Introduction}
Modern comparative effectiveness research (CER) questions often require comparing the effectiveness of multiple treatments on a binary outcome \citep{hu2020estimation}. To answer these CER questions, specialized causal inference methods are needed. Methods appropriate for drawing causal inferences about multiple treatments include regression adjustment (RA) \citep{rubin1973use, linden2016estimating}, inverse probability of treatment weighting (IPTW) \citep{ feng2012generalized,mccaffrey2013tutorial}, Bayesian Additive Regression Trees (BART) \citep{hill2011bayesian,hu2020estimation}, regression adjustment with multivariate spline of the generalized propensity score (RAMS) \citep{hu2021estimation}, vector matching (VM) \citep{lopez2017estimation} and targeted maximum likelihood estimation (TMLE) \citep{rose2019double}. Drawing causal inferences using observational data, however, inevitably requires assumptions. A key causal identification assumption is the \emph{positivity} or sufficient overlap assumption, which implies that there are no values of pre-treatment covariates that could occur only among units receiving one of the treatments \citep{hu2020estimation}. Another key assumption requires appropriately conditioning on all pre-treatment variables that predict both treatment and outcome. The pre-treatment variables are known as confounders and this requirement is referred to as the \emph{ignorability} assumption (also as no unmeasured confounding) \citep{hu2021flexible}. An important strategy to handle the positivity assumption is to identify a common support region for retaining inferential units.
The ignorability assumption can be violated in observational studies, and as a result can lead to biased treatment effect estimates. One widely recognized way to address such concerns is sensitivity analysis \citep{erik2007strengthening,hu2021flexible}.
The \pkg{CIMTx} package provides an analysis pipeline to easily implement the new BART-based causal estimation methods developed by \cite{hu2020estimation} and the alternative methods described above. In addition, \pkg{CIMTx} provides strategies to define a common support region to address the positivity assumption and implements a flexible Monte Carlo sensitivity analysis approach \citep{hu2021flexible} for unmeasured confounding to address the ignorability assumption.
Finally, \pkg{CIMTx} offers detailed examples of how to simulate data adhering to the complex structures of the multiple treatment setting. The simulated data can further be used to compare the performance of different causal inference methods designed for multiple treatments. Table \ref{tab:methods-comparison} summarizes key functionalities of \pkg{CIMTx} in comparison to recent \proglang{R} packages designed for drawing causal inferences about treatment effects from observational data. \pkg{CIMTx} provides a comprehensive collection of functionalities, from simulating data to estimating the causal effects to addressing causal assumptions and elucidating their ramifications.
To assist applied researchers who work with observational data and wish to draw inferences about the effects of multiple treatments on a binary outcome, including rare outcomes (see Section~\ref{sec:rams}), this article provides a comprehensive illustration of the \pkg{CIMTx} package. In Section \ref{sec:causal_multiple}, we describe all the methodologies implemented in \pkg{CIMTx}. Section~\ref{sec:simulation} presents ways in which datasets can be simulated that possess the multiple treatment characteristics of observational studies. Section \ref{sec:CIMTx package} outlines the main functions and their arguments. Section \ref{sec:example} demonstrates the use of these functions with simulated data.
\begin{table}[h]
\normalsize
\caption{Comparisons of \proglang{R} packages for causal inference.} \label{tab:methods-comparison}
\centering\label{tb:summary}
\begin{tabular}{lccccc}
\toprule
\textbf{\proglang{R} packages} & \thead{Multiple \\ treatments} & \thead{Binary \\ outcome} & \thead{Sensitivity \\ analysis} & \thead{Identification of\\ common support} & \thead{Design\\ factors} \\
\midrule
CIMTx & \cmark & \cmark & \cmark &\cmark &\cmark \\
PSweight & \cmark & \cmark & \xmark &\cmark &\xmark \\
twang & \cmark & \xmark & \cmark & \xmark &\xmark \\
WeightIt &\cmark& \xmark & \xmark & \cmark &\xmark \\
ATE & \cmark& \xmark & \xmark &\xmark &\xmark \\
CBPS & \cmark & \cmark & \xmark & \xmark &\xmark \\
causalweight & \cmark & \xmark & \xmark &\xmark &\xmark \\
PSW & \xmark & \cmark & \xmark & \xmark & \xmark \\
optweight & \cmark & \xmark & \xmark & \xmark &\xmark \\
sbw & \xmark & \xmark & \xmark & \xmark &\xmark \\
MatchIt & \xmark & \cmark & \xmark &\cmark &\xmark\\
Matching & \xmark & \xmark & \xmark &\cmark &\xmark\\
optmatch & \xmark & \xmark & \xmark &\cmark &\xmark\\
episensr & \xmark & \cmark & \cmark &\xmark &\xmark\\
EValue & \xmark & \cmark & \cmark &\xmark &\xmark\\
drtmle & \xmark & \cmark & \xmark &\xmark&\xmark \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize \item \cmark: the feature is offered in the package; \xmark: the feature is not offered.
\item References: \textbf{PSweight} (Version 1.1.4): \cite{PSweight}; \textbf{twang} (Version 1.6): \cite{twang}; \textbf{WeightIt} (Version 0.10.2): \cite{WeightIt}; \textbf{ATE} (Version 0.2.0): \cite{ATE}; \textbf{CBPS} (Version 0.22): \cite{CBPS}; \textbf{causalweight} (Version 1.0.2): \cite{causalweight}; \textbf{PSW} (Version 1.1.3): \cite{PSW}; \textbf{optweight} (Version 0.2.5): \cite{optweight}; \textbf{sbw} (Version 1.1.5): \cite{sbw}; \textbf{MatchIt} (Version 4.3.0): \cite{MatchIt}; \textbf{Matching} (Version 4.9.9): \cite{Matching}; \textbf{optmatch} (Version 0.9.15): \cite{optmatch}; \textbf{episensr} (Version 1.0.0): \cite{episensr}; \textbf{EValue} (Version 4.1.2): \cite{EValue}; \textbf{drtmle} (Version 1.1.0): \cite{drtmle}.
\end{tablenotes}
\vspace{-0.05in}
\end{table}
\section{Methodology} \label{sec:causal_multiple}
\subsection{Estimation of causal effects} \label{sec:estimation}
Consider an observational study with $N$ individuals, indexed by $i = 1, \ldots, N$, drawn randomly from a target population. Each individual is exposed to a treatment, indexed by $W$. The goal of this study is to estimate the causal effect of treatment $W$ on a binary outcome $Y$. There are a total of $T$ possible treatments, and $W_i = w$ if individual $i$ is observed under treatment $w$, where $w \in \mathscr{W} = \{1, 2, \ldots, T\}$. Pre-treatment measured confounders are denoted by $\bm{X}_i$. Under the potential outcomes framework \citep{rubin1974estimating, holland1986statistics}, individual $i$ has $T$ potential outcomes $\{Y_i(1), \ldots, Y_i(T)\}$, one under each treatment in $\mathscr{W}$. For each individual, at most one of the potential outcomes is observed -- the one corresponding to the treatment to which the individual is exposed. All other potential outcomes are missing, which is known as the fundamental problem of causal inference \citep{holland1986statistics}. In general, three standard causal identification assumptions \citep{rubin1980randomization, hu2020estimation} need to be maintained in order to estimate the causal effects from observational data:
\begin{enumerate}
\item [(A1)] The stable unit treatment value assumption: there is no interference between units and there are no different versions of a treatment.
\item [(A2)] Positivity: the generalized propensity score (GPS) for treatment assignment, $e_w(\bm{X}_i)=P(W_i=w \mid \bm{X}_i)$, is bounded away from 0 and 1 for all $w \in \mathscr{W}$.
\item [(A3)] Ignorability: pre-treatment covariates $\bm{X}_i$ are sufficiently predictive of both treatment assignment and outcome, $p(W_i \mid Y_i(1), \ldots, Y_i(T), \bm{X}_i) = p (W_i \mid \bm{X}_i)$.
\end{enumerate}
The \pkg{CIMTx} package addresses assumption (A2) in Section~\ref{sec:positivity} and assumption (A3) in Section~\ref{sec:sa}.
Causal effects can be estimated by summarizing functionals of individual-level potential outcomes. For dichotomous outcomes, causal estimands can be the risk difference (RD), odds ratio (OR) or relative risk (RR). For purposes of illustration, we define causal effects based on the RD. Let $s_1$ and $s_2$ be two subgroups of treatments such that $s_1,s_2 \subset \mathscr{W}$ and $s_1 \cap s_2 = \emptyset$, and define $|s_1|$ as the cardinality of $s_1$ and $|s_2|$ of $s_2$. Two commonly used causal estimands are the average treatment effect (ATE), $ATE_{s_1,s_2}$, and the average treatment effect on the treated (ATT), for example, among those receiving $s_1$, $ATT_{s_1|s_1,s_2}$. They are defined as:
\begin{equation}
\begin{split}
\label{eq:pop_est}
ATE_{s_1,s_2} &= E \bigg{[} \frac{\sum_{w \in s_1} Y_i(w)}{|s_1|} - \frac{\sum_{w' \in s_2} Y_i(w')}{|s_2|} \bigg{]},\\
ATT_{s_1|s_1,s_2} &= E \bigg{[} \frac{\sum_{w \in s_1} Y_i(w)}{|s_1|} - \frac{\sum_{w' \in s_2} Y_i(w')}{|s_2|} \bigg{|} W_i \in s_1 \bigg{]}.
\end{split}
\end{equation}
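As a concrete illustration of the estimands in equation~\eqref{eq:pop_est}, the following sketch evaluates the risk-difference ATE and ATT on a hypothetical complete table of potential outcomes, which is only available in simulation. Since \pkg{CIMTx} itself is implemented in \proglang{R}, a language-agnostic Python sketch with made-up event probabilities is used here:

```python
import random

# Hypothetical complete table of potential outcomes for T = 3 treatments;
# in real data at most one entry per unit is observed.
random.seed(1)
N, T = 2000, 3
p_true = (0.2, 0.3, 0.5)                    # E[Y(w)] for w = 1, 2, 3
Y_pot = [[int(random.random() < p) for p in p_true] for _ in range(N)]
W = [random.randrange(1, T + 1) for _ in range(N)]  # observed treatment

def ate(table, s1, s2):
    """Risk-difference ATE contrasting treatment subsets s1 and s2."""
    diffs = [sum(y[w - 1] for w in s1) / len(s1)
             - sum(y[w - 1] for w in s2) / len(s2) for y in table]
    return sum(diffs) / len(diffs)

def att(table, W, s1, s2):
    """The same contrast, averaged only over units actually treated in s1."""
    sub = [y for y, w in zip(table, W) if w in s1]
    return ate(sub, s1, s2)

print(round(ate(Y_pot, {1}, {3}), 3))   # near 0.2 - 0.5 = -0.3
```

The methods below differ precisely in how they recover such contrasts when the unobserved entries of the table must be inferred from data.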
We now introduce six methods implemented in \pkg{CIMTx} for estimating the causal effects of multiple treatments: RA, IPTW, BART, RAMS, VM and TMLE.
\subsubsection{Regression adjustment} \label{sec:RA}
Regression adjustment \citep{rubin1973use, linden2016estimating}, also known as model-based imputation \citep{imbens2015causal}, uses a regression model to impute missing potential outcomes: what would have happened to a specific individual had this individual received a treatment to which they were not exposed. RA regresses the outcomes on treatment and confounders,
\begin{equation} \label{eq:RA_mod}
f(w,\bm{X}_i) = E[Y_i {\, \vert \,} W_i =w, \bm{X}_i] = \text{logit}^{-1} \left \{ \beta_0 + \beta_1 w + \bm{\beta}_2^{\top}\bm{X}_i \right \} ,
\end{equation}
where $\beta_0$ is the intercept, $\beta_1$ is the coefficient for treatment and $\bm{\beta}_2$ is a vector of coefficients for covariates $\bm{X}_i$.
From the fitted regression model~\eqref{eq:RA_mod}, the missing potential outcomes for each individual are imputed using the observed data. The causal effects can be estimated by contrasting the imputed potential outcomes between treatment groups. \pkg{CIMTx} implements RA with the Bayesian logistic regression model via the \code{bayesglm} function of the \pkg{arm} package. For the ATE effects, we first average the $L$ predictive posterior draws $\{f^l(w,\bm{X}_i), l =1,\ldots, L\}$ over the empirical distribution of $\{\bm{X}_i\}_{i=1}^N$, and for the ATT effects using $s_1$ as the reference group, over the empirical distribution of $\{\bm{X}_i\}_{i: W_i \in s_1}$. We then take the difference of the averaged values between two treatment groups $w \in s_1$ and $w' \in s_2$.
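The imputation-and-contrast logic of RA can be sketched independently of the package. In the Python sketch below (all data-generating values are hypothetical, and treatment is randomized only to make the example easy to check; the logic is unchanged when treatment depends on the confounder), a logistic model is fit by plain gradient ascent as a stand-in for the Bayesian fit of \code{bayesglm}:

```python
import math, random

random.seed(2)
N = 600

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical data: one confounder x, three treatments.
X = [random.gauss(0, 1) for _ in range(N)]
W = [random.randrange(1, 4) for _ in range(N)]
Y = [int(random.random() < invlogit(-1 + 0.5 * w + 0.8 * x))
     for w, x in zip(W, X)]

# Fit logit E[Y | w, x] = b0 + b1*w + b2*x by plain gradient ascent
# (standing in for the Bayesian fit of arm::bayesglm).
b = [0.0, 0.0, 0.0]
for _ in range(1500):
    g = [0.0, 0.0, 0.0]
    for y, w, x in zip(Y, W, X):
        r = y - invlogit(b[0] + b[1] * w + b[2] * x)
        g = [g[0] + r, g[1] + r * w, g[2] + r * x]
    b = [bi + 0.5 * gi / N for bi, gi in zip(b, g)]

# Impute every unit's potential outcomes from the fitted model and contrast
# the imputed means to estimate the ATE risk difference between w = 1 and 2.
def f(w, x):
    return invlogit(b[0] + b[1] * w + b[2] * x)

ate_12 = sum(f(1, x) - f(2, x) for x in X) / N
print(round(ate_12, 3))
```

In the Bayesian implementation this imputation is repeated for each posterior draw, which is what yields interval estimates rather than the single point estimate above.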
\subsubsection{Inverse Probability of Treatment Weighting}\label{sec:IPTW}
The idea of IPTW was originally introduced by \cite{horvitz1952generalization} in survey research to adjust for imbalances in sampling pools. Weighting methods have been extended to estimate the causal effect of a binary treatment in observational studies, and more recently reformulated to accommodate multiple treatments \citep{imbens2000role, feng2012generalized,mccaffrey2013tutorial}. When interest is in estimating the pairwise ATE for treatment groups $s_1$ and $s_2$, a consistent estimator of $ATE_{s_1,s_2}$ is given by the weighted mean,
\begin{equation}
\widehat{ATE}_{s_1,s_2}=\frac{\sum_{i=1}^{N}Y_{i}\,I(W_{i}\in s_1)\,r(W_i,\bm{X}_{i})}{\sum_{i=1}^N I(W_{i}\in s_1)\,r(W_i,\bm{X}_{i})} -\frac{\sum_{i=1}^{N}Y_{i}\,I(W_{i}\in s_2)\,r(W_i,\bm{X}_{i})}{\sum_{i=1}^N I(W_{i}\in s_2)\,r(W_i,\bm{X}_{i})},
\end{equation}
where $r(w,\bm{X}_i) = 1/P(W_i = w \mid \bm{X}_i)$ is the inverse probability of treatment weight, and $I(\cdot)$ is the indicator function. The \pkg{CIMTx} package provides three ways in which the weights can be estimated: (i) multinomial logistic regression \citep{feng2012generalized}, (ii) generalized boosted model (GBM) \citep{mccaffrey2013tutorial}, and (iii) super learner \citep{van2007super}. A challenge with IPTW is that low GPS values can result in extreme weights, which may yield erratic causal estimates with large sample variances \citep{little1988missing, kang2007demystifying}. This issue becomes increasingly likely as the number of treatments increases. Weight trimming or truncation can alleviate the issue of extreme weights \citep{cole2008constructing, lee2011weight}. \pkg{CIMTx} provides an argument for users to choose the percentile at which the weights should be truncated. We briefly describe the three weight estimators.
\begin{enumerate}
\item [(i)] The multinomial logistic regression model for treatment assignment is as follows:
\begin{align*}
P(W_i=w|\bm{X}_i) & =\frac{e^{\bm{\alpha'}_w\bm{X}_i}}{1+e^{\bm{\alpha'}_1\bm{X}_{i}} + \ldots + e^{\bm{\alpha'}_{T-1}\bm{X}_{i}}},
\end{align*}
where $\bm{\alpha}_w$ is a vector of coefficients for $\bm{X}_i$ corresponding to treatment $w$ (with $\bm{\alpha}_T = \bm{0}$ fixed for the reference treatment $T$), and can be estimated using an iterative procedure such as generalized iterative scaling or iteratively reweighted least squares.
\item[(ii)] GBM uses machine learning to flexibly model the relationships between treatment assignment and covariates. It does this by growing a series of boosted classification trees to minimize an exponential loss function. This process is effective for fitting nonlinear treatment models characterized by curves
and interactions. The procedure of estimating the GPS can be tuned to find the GPS model producing the best covariate balance between treatment groups.
\item[(iii)] Super learner is an algorithm that creates the optimally weighted average of several machine learning models. The machine learning models can be specified via the \code{SL.library} argument of the \pkg{SuperLearner} package.
This approach has been proven to be asymptotically as accurate as the best possible prediction algorithm that is included in the library \citep{van2007super}.
\end{enumerate}
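The weighted contrast and percentile truncation can be sketched in a few lines. The Python sketch below is not part of \pkg{CIMTx}; it uses the true assignment probabilities for brevity (in practice the GPS must be estimated by one of the three approaches above), and all coefficients are hypothetical:

```python
import math, random

random.seed(3)
N = 4000

def gps(x):
    """True assignment probabilities (softmax of linear scores); in practice
    these are estimated by multinomial logit, GBM, or super learner."""
    e = [math.exp(s) for s in (0.0, 0.6 * x, -0.6 * x)]
    z = sum(e)
    return [v / z for v in e]

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulate confounded data: x drives both treatment and outcome.
X = [random.gauss(0, 1) for _ in range(N)]
W, Y = [], []
for x in X:
    p = gps(x)
    r = random.random()
    w = 1 if r < p[0] else (2 if r < p[0] + p[1] else 3)
    W.append(w)
    Y.append(int(random.random() < invlogit(-1 + 0.4 * w + 0.8 * x)))

def iptw_ate(w1, w2, trim=None):
    """Normalized weighted-mean contrast with optional weight truncation
    at a given percentile of the weight distribution."""
    wts = [1.0 / gps(x)[w - 1] for w, x in zip(W, X)]
    if trim is not None:
        cap = sorted(wts)[int(trim * (len(wts) - 1))]
        wts = [min(v, cap) for v in wts]
    def arm(lev):
        num = sum(y * v for y, w, v in zip(Y, W, wts) if w == lev)
        den = sum(v for w, v in zip(W, wts) if w == lev)
        return num / den
    return arm(w1) - arm(w2)

print(round(iptw_ate(1, 3), 3), round(iptw_ate(1, 3, trim=0.95), 3))
```

Truncating at the 95th percentile, as in the last line, trades a small amount of bias for a reduction in variance caused by extreme weights.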
\subsubsection{Bayesian Additive Regression Trees}
BART \citep{chipman2010bart} is a likelihood-based machine learning model that has been adapted to causal inference settings in recent years \citep{hill2011bayesian, hu2020estimation, hu2021estimation, hu2021estimatingsim, hu2021estimating}. For a binary outcome,
BART uses the probit regression
\begin{equation}
f(w,\bm{X}_i) = E[Y_i | W_i = w, \bm{X}_i] = \Phi \bigg{\{} \sum_{j=1}^J g_j(w, \bm{X}_i; T_j, M_j) \bigg{\}},
\end{equation}
where $\Phi$ is the standard normal cumulative distribution function, $(T_j, M_j)$ indexes a single subtree model in which $T_j$ denotes the regression tree and $M_j$ is a set of parameter values associated with the terminal nodes of the $j$th regression tree, $g_j(w,\bm{X}_i; T_j, M_j)$ represents the mean assigned to the node in the $j$th regression tree associated with covariate value $\bm{X}_i$ and treatment level $w$, and the number of regression trees $J$ is considered to be fixed and known. BART uses regularizing priors for $(T_j, M_j)$ to keep the impact of each tree small.
Although the prior distributions can be specified via the \code{ce_estimate} function of \pkg{CIMTx}, the default priors tend to work well and require little modification in many situations \citep{hill2011bayesian,hu2020estimation,hu2020tree}. The details of prior specification and Bayesian backfitting algorithm for posterior sampling can be found in \citet{chipman2010bart}. The posterior inferences about the treatment effects can be drawn in a similar way as described for RA in Section~\ref{sec:RA}.
\subsubsection{Regression adjustment with multivariate spline of GPS} \label{sec:rams}
For a binary outcome, the number of outcome events can be small. The estimation of causal effects is challenging with rare outcomes because the great majority of units contribute no information about the variability in the outcome attributable to the differential treatment regimens \citep{hu2021estimation}. \cite{franklin2017comparing} found that regression adjustment on the propensity score using one nonlinear spline performed best with respect to bias and root-mean-squared error in estimating treatment effects. \cite{hu2021estimation} proposed RAMS, which accommodates multiple treatments by using a nonlinear spline model for the outcome that is additive in the treatment and a multivariate spline function of the GPS:
\begin{equation}
f(W_i,\bm{X}_i) = E[Y_i | W_i, \bm{X}_i] = \text{logit}^{-1} \bigg{\{} \bm{\beta} W_i + h(\bm{R}(\bm{X}_i),\bm{\phi}) \bigg{\}},
\end{equation}
where $h(\cdot)$ is a spline function of the GPS indexed by $\bm{\phi}$ and $\bm{\beta} = [\beta_1, \ldots, \beta_T]^\top$ are regression coefficients associated with the treatment $W_i$. The dimension of the spline function $h(\cdot)$ depends on the number of treatments $T$.
In \pkg{CIMTx}, RAMS is implemented using the \code{gam()} function from the \pkg{mgcv} package, with the tensor product smoother \code{te()} applied to the GPS. Treatment effects can then be estimated by averaging and contrasting the predicted $\hat{f}(w,\bm{X}_i)$ between treatment groups.
\subsubsection{Vector matching}
\cite{lopez2017estimation} proposed the VM algorithm, which matches individuals with similar vectors of the GPS. VM obtains matched sets using a combination of $k$-means clustering and one-to-one matching with replacement within each cluster stratum. Currently, VM is only designed to estimate the ATT effects.
In \pkg{CIMTx}, VM is implemented via \code{method = "VM"}.
\subsubsection{Targeted maximum likelihood estimation}\label{tmle}
TMLE is a doubly robust approach that combines outcome estimation, IPTW estimation, and a targeting step to optimize the parameter of interest with respect to bias and variance. \cite{rose2019double} implemented TMLE to estimate the ATE effects of multiple treatments. Influence curves were used for variance estimation, though bootstrapping is also suggested. \pkg{CIMTx} calls the \proglang{R} package \pkg{tmle} to implement TMLE for the ATE effects.
\subsection{Identification of a common support region} \label{sec:positivity}
We now turn to the causal identification assumptions. If the positivity assumption (A2) is violated, problems can arise when extrapolating over the areas of the covariate space where common support does not exist. It is important to define a common support region to which the causal conclusions can be generalized. In \pkg{CIMTx}, the identification of a common support region is offered in three methods: IPTW, VM and BART. For IPTW, one strategy is weight truncation, by which extreme weights that fall outside a specified range limit of the weight distribution are set to the range limit. This functionality is offered in \pkg{CIMTx} via the \code{trim_perc} argument. For VM, \cite{lopez2017estimation} proposed a rectangular support region defined by the maximum value of the smallest GPS and the minimum value of the largest GPS among the treatment groups. Individuals that fall outside the region are discarded from the causal analysis. This feature is automatically implemented with \code{"VM"} in \pkg{CIMTx}. \cite{hu2020estimation} supplied BART with a strategy to identify a common support region for retaining inferential units, which is to discard individuals with a large variability in their predicted potential outcomes. Specifically, for the ATT effects, any individual $i$ with $W_i = w$ will be discarded if
\begin{equation} \label{eq:discard}
s_i^{f_{w^\prime}} > \text{max}_j \{s_j^{f_w} \}, \forall j: W_j = w, w^\prime \neq w \in \mathscr{W},
\end{equation}
where $s_j^{f_w}$ and $s_i^{f_{w^\prime}}$ denote the standard deviations of the posterior distributions of the potential outcomes under treatment $W = w$ for unit $j$ and under treatment $W = w^\prime$ for unit $i$, respectively. For the ATE effects, the discarding rule in equation~\eqref{eq:discard} is applied to each treatment group. Users can implement the discarding rule by setting the \code{discard} argument in \pkg{CIMTx}.
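The discarding rule in equation~\eqref{eq:discard} amounts to a simple screen on posterior standard deviations. A sketch with hypothetical inputs follows (\pkg{CIMTx} applies the rule internally when \code{discard = "Yes"}; this Python function is only an illustration of the logic):

```python
def discard_att(W, sd, ref):
    """Discarding rule of the display above for the ATT: a unit in the
    reference group is dropped when the posterior sd of any of its
    counterfactual outcomes exceeds the largest posterior sd of the factual
    outcomes within the reference group.  sd[i][w - 1] = posterior sd of
    Y_i(w).  Returns the indices of the retained units."""
    treatments = sorted(set(W))
    cap = max(sd[j][ref - 1] for j in range(len(W)) if W[j] == ref)
    keep = []
    for i, w in enumerate(W):
        if w != ref or all(sd[i][t - 1] <= cap
                           for t in treatments if t != ref):
            keep.append(i)
    return keep

# toy example: unit 1 has an overly uncertain counterfactual and is dropped
print(discard_att([1, 1, 2], [[0.10, 0.05], [0.10, 0.50], [0.20, 0.10]], 1))
```

For the ATE, the same screen would simply be applied within every treatment group rather than only the reference group.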
\subsection{Sensitivity analysis for unmeasured confounding}\label{sec:sa}
The violation of the ignorability assumption (A3) can lead to biased treatment effect estimates. Sensitivity analysis is useful in gauging how much the causal conclusions would be altered in response to different magnitudes of departure from the ignorability assumption. \pkg{CIMTx} implements a new flexible sensitivity analysis approach developed by \cite{hu2021flexible}. This approach first defines a confounding function for any pair of treatments $(w, w')$ as
\begin{eqnarray} \label{eq:cf}
c(w, w', \bm{x}) = E \left[ Y(w) {\, \vert \,} W = w, \bm{X}=\bm{x}\right] - E \left[ Y(w) {\, \vert \,} W = w', \bm{X}=\bm{x} \right].
\end{eqnarray}
The confounding function, also viewed as the sensitivity parameter in a sensitivity analysis, directly represents the difference in the mean potential outcomes $Y(w)$ between those treated with $W=w$ and those treated with $W=w'$, who have the same level of $\bm{x}$. If the ignorability assumption holds, the confounding function will be zero for all pairs $w \neq w' \in \mathscr{W}$. When treatment assignment is not ignorable, unmeasured confounding is present and the causal effect estimates using measured $\bm{X}$ alone will be biased. \cite{hu2021flexible} derived the form of the resultant bias as:
\begin{align} \label{eq:biasform}
\begin{split}
\text{Bias}(w,w') =&-p_{w} c(w', w, \bm{x}) + p_{w'}c(w,w',\bm{x})\\
&-\sum\limits_{l: l \in \mathscr{W}\setminus\{w, w'\}} p_{l} \left \{ c(w', l, \bm{x}) -c(w,l,\bm{x}) \right \} ,
\end{split}
\end{align}
where $p_{w} = P(W= w {\, \vert \,} \bm{X}= \bm{x})$, $w \neq w' \in \mathscr{W} = \{1, \ldots, T\} $.
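Once the GPS and the confounding functions are specified, the bias expression above can be evaluated directly. A small Python sketch with hypothetical inputs (not part of \pkg{CIMTx}):

```python
def naive_bias(w, wp, p, c):
    """Evaluates the bias expression above for the naive contrast between
    treatments w and w'.  p[l - 1] = P(W = l | X = x); c[(a, b)] is the
    confounding function c(a, b, x).  All inputs are hypothetical."""
    b = -p[w - 1] * c[(wp, w)] + p[wp - 1] * c[(w, wp)]
    for l in range(1, len(p) + 1):
        if l not in (w, wp):
            b -= p[l - 1] * (c[(wp, l)] - c[(w, l)])
    return b

# with T = 2 the expression collapses to -p1*c(2,1,x) + p2*c(1,2,x)
print(naive_bias(1, 2, (0.4, 0.6), {(2, 1): 0.1, (1, 2): 0.2}))
```

Setting every confounding function to zero returns a bias of zero, recovering the ignorable case.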
\cite{hu2021flexible} further discussed (i) strategies to specify the confounding functions that represent our prior beliefs about the degrees of unmeasured confounding via the remaining variability in the outcomes unexplained by measured $\bm{X}$ \citep{hogan2014bayesian}; and (ii) ways in which the causal effects can be estimated adjusting for the presumed degree of unmeasured confounding. The proposed methods include the following steps:
\begin{enumerate}
\item Fit a multinomial probit BART model \citep{kindo2016multinomial} $f^{\text{MBART}}(W {\, \vert \,} \bm{X})$ to estimate the GPS, $p_l \equiv P(W = l {\, \vert \,} \bm{X} = \bm{x}) \; \forall l \in \mathscr{W}$, for each individual.
\item
\begin{algorithmic}
\For{$w \gets 1$ to $T$}
\State {Draw $M_1$ GPS $\tilde{p}_{l1}, \ldots, \tilde{p}_{lM_1}, \forall l \neq w \wedge l \in \mathscr{W}$ from the posterior predictive distribution of $f^{\text{MBART}} (W {\, \vert \,} \bm{X})$ for each individual.}
\For{$m \gets 1$ to $M_1$}
\State {Draw $M_2$ values $\eta^*_{lm1}, \ldots, \eta^*_{lmM_2}$ from the prior distribution of each of the confounding functions $c(w, l, \bm{x})$, for each $l \neq w \wedge l \in \mathscr{W}$. }
\EndFor
\EndFor
\end{algorithmic}
\item Compute the adjusted outcomes, $Y^{\text{CF}}_i \equiv Y_i - \sum_{l \neq w}^T P(W_i = l{\, \vert \,} \bm{X}_i= \bm{x}) c(w, l,\bm{x})$, for each treatment $w$, for each of $M_1M_2$ draws of $\{\tilde{p}_{l1}, \eta^*_{l11}, \ldots, \eta^*_{l1M_2}, \ldots, \tilde{p}_{lM_1}, \eta^*_{lM_11}, \ldots, \eta^*_{lM_1M_2}; l \neq w \wedge l \in \mathscr{W}\}$.
\item Fit a BART model to each of $M_1\times M_2$ sets of observed data with the adjusted outcomes $Y^{\text{CF}}$, and estimate the combined adjusted causal effects and uncertainty intervals by pooling posterior samples across model fits arising from the $M_1 \times M_2$ data sets.
\end{enumerate}
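Within this algorithm, the outcome adjustment in step 3 reduces to a one-line correction per unit. A Python sketch with hypothetical inputs (in \pkg{CIMTx}, \code{sa()} performs this over all $M_1 \times M_2$ draws automatically):

```python
def adjusted_outcome(y, w, p, c):
    """Step 3: removes the presumed contribution of unmeasured confounding
    from the observed outcome of a unit treated with w.
    p[l - 1] = P(W = l | x); c[(w, l)] is one draw of c(w, l, x)."""
    return y - sum(p[l - 1] * c[(w, l)]
                   for l in range(1, len(p) + 1) if l != w)

# e.g. y = 1, w = 1, T = 3:  1 - (0.3 * 0.1 + 0.2 * 0.2) = 0.93
print(adjusted_outcome(1, 1, (0.5, 0.3, 0.2), {(1, 2): 0.1, (1, 3): 0.2}))
```

Refitting the outcome model to these adjusted outcomes for every draw of the GPS and confounding functions, and pooling the posteriors, yields causal estimates that reflect both sampling uncertainty and the presumed degree of unmeasured confounding.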
\section{Design factors for data simulation}\label{sec:simulation}
\pkg{CIMTx} provides specific functions to simulate data possessing complex data characteristics of the multiple treatment setting. Seven design factors are considered: (1) sample size, (2) ratio of units across treatment groups, (3) whether the treatment assignment model and the outcome generating model are linear or nonlinear, (4) whether the covariates that best predict the treatment also predict the outcome well, (5) whether the response surfaces are parallel across treatment groups, (6) outcome prevalence, and (7) degree of covariate overlap.
\subsection{Design factors (1)--(5)}\label{raio_of_units}
For the data generating process of treatment assignment, consider a multinomial logistic regression model,
\begin{equation}\label{eq:trt_assign}
\begin{split}
\ln \dfrac{P(W=1)}{P(W=T)} &= \delta_1 + \bm{X}\xi_1^L + \bm{Q}\xi_1^{NL}\\
\ldots &\ldots \ldots \ldots \ldots \\
\ln \dfrac{P(W=T-1)}{P(W=T)} &= \delta_{(T-1)} + \bm{X}\xi_{(T-1)}^{L} + \bm{Q}\xi_{(T-1)}^{NL}
\end{split}
\end{equation}
where $\bm{Q}$ denotes the nonlinear transformations and higher-order terms of the predictors $\bm{X}$. $\xi_1^L,\ldots, \xi_{(T-1)}^{L}$ are vectors of coefficients for the untransformed versions of the predictors $\bm{X}$ and $\xi_1^{NL}, \ldots, \xi_{(T-1)}^{NL}$ for the transformed versions of the predictors captured in $\bm{Q}$.
The intercepts $\delta_1, \ldots,\delta_{(T-1)}$ can be specified to create the corresponding ratio of units across $T$ treatment groups. The $T$ sets of potential response surfaces can be generated as follows:
\begin{equation} \label{eq:outcome_gen}
\begin{split}
E[Y(1) | \bm{X}]& = \text{logit}^{-1} \{ \tau_1+ \bm{X}\gamma_1^{L} + \bm{Q} \gamma_1^{NL} \} \\
\ldots &\ldots \ldots \ldots \ldots \\
E[Y(T) | \bm{X}]& = \text{logit}^{-1} \{ \tau_T+\bm{X}\gamma_T^{L} + \bm{Q} \gamma_T^{NL} \}
\end{split}
\end{equation}
where the coefficient setting $\gamma_1^L = \ldots = \gamma_T^L$, $\gamma_1^{NL} = \ldots = \gamma_T^{NL}$ and $\tau_1 \neq \ldots \neq \tau_T$ corresponds to parallel response surfaces, while assigning different values to $\gamma_w^L$ and $\gamma_w^{NL}$ across $w$ and setting $\tau_1 = \ldots = \tau_T =0$ generates nonparallel response surfaces, which imply treatment effect heterogeneity. Note that the predictors $\bm{X}$ and the transformed versions of the predictors $\bm{Q}$ in the treatment assignment model~\eqref{eq:trt_assign} can differ from those in the outcome generating model~\eqref{eq:outcome_gen} to create various degrees of alignment. The observed outcomes are related to the potential outcomes through $Y_i = \sum_{w \in \mathscr{W}} Y_i(w) I(W_i = w)$. Covariates $\bm{X}$ can be generated from user-specified data distributions.
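The generative scheme of equations~\eqref{eq:trt_assign} and~\eqref{eq:outcome_gen} can be mimicked outside the package in a few lines. In the Python sketch below, all coefficient values, the use of a single squared term for $\bm{Q}$, and the choices of $\delta$ and $\tau$ are arbitrary illustrations (\code{data_sim()} automates this in \proglang{R}):

```python
import math, random

random.seed(4)
N, T = 5000, 3
delta = [0.4, 0.2]        # intercepts controlling the ratio of units
tau = [-1.2, -0.8, -0.4]  # intercepts controlling outcome prevalence

def softmax(scores):
    e = [math.exp(s) for s in scores]
    z = sum(e)
    return [v / z for v in e]

W, Y = [], []
for _ in range(N):
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    q = x1 * x1                 # one nonlinear term, shared (aligned models)
    # treatment model: log-odds of w = 1, 2 against the reference w = T
    p = softmax([delta[0] + 0.5 * x1 + 0.3 * q,
                 delta[1] + 0.2 * x2 - 0.2 * q,
                 0.0])
    r = random.random()
    w = 1 if r < p[0] else (2 if r < p[0] + p[1] else 3)
    # parallel response surfaces: shared slopes, arm-specific intercept tau_w
    m = 1 / (1 + math.exp(-(tau[w - 1] + 0.4 * x1 - 0.3 * x2 + 0.2 * q)))
    W.append(w)
    Y.append(int(random.random() < m))

print([W.count(t) for t in (1, 2, 3)], round(sum(Y) / N, 3))
```

Varying \code{delta} skews the group sizes, while using arm-specific slopes in place of the shared ones would produce nonparallel response surfaces.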
\subsection{Outcome prevalence}\label{outcome_prevalence}
Values for parameters $\tau_1, \ldots, \tau_{T}$ in model~\eqref{eq:outcome_gen} can be chosen to create various outcome prevalence rates. The outcomes are considered rare if the prevalence rate is $< 5\%$.
\subsection{Covariate overlap}\label{covariate_overlap}
With observational data, it is important to investigate how the sparsity of covariate overlap impacts the estimation of causal effects. We can modify the formulation of the treatment assignment model~\eqref{eq:trt_assign} to adjust the sparsity of overlap by including a multiplier parameter $\psi$ \citep{hu2021estimatingsim} as follows:
\begin{equation}\label{eq:trt_assign_overlap}
\begin{split}
\ln \dfrac{P(W=1)}{P(W=T)} &= \delta_1 + \bm{X}\psi\xi_1^L + \bm{Q}\psi\xi_1^{NL}\\
\ldots &\ldots \ldots \ldots \ldots \\
\ln \dfrac{P(W=T-1)}{P(W=T)} &= \delta_{(T-1)} + \bm{X}\psi\xi_{(T-1)}^{L} + \bm{Q}\psi\xi_{(T-1)}^{NL},
\end{split}
\end{equation}
where larger values of $\psi$ correspond to increased sparsity degrees of overlap.
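The role of $\psi$ can be seen in a small numerical experiment: as $\psi$ grows, more units have at least one assignment probability close to zero, i.e., the overlap becomes sparser. The coefficients and the 0.05 cutoff in this Python sketch are hypothetical:

```python
import math, random

random.seed(5)

def frac_low_gps(psi, n=2000):
    """Share of simulated units whose smallest assignment probability is
    below 0.05 -- a crude summary of how sparse the covariate overlap is."""
    low = 0
    for _ in range(n):
        x = random.gauss(0, 1)
        e = [math.exp(psi * s) for s in (0.8 * x, -0.8 * x, 0.0)]
        z = sum(e)
        if min(v / z for v in e) < 0.05:
            low += 1
    return low / n

print(frac_low_gps(1.0), frac_low_gps(3.0))  # sparsity grows with psi
```

Such sparse-overlap scenarios are exactly where the common support strategies of Section~\ref{sec:positivity} become consequential.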
\section{The R package CIMTx}\label{sec:CIMTx package}
\pkg{CIMTx} has three main functions: (i) \code{data_sim()} to simulate data in the multiple treatment setting using the seven design factors described in Section \ref{sec:simulation}, (ii) \code{ce_estimate()} to implement the six methods introduced in Section~\ref{sec:estimation}, and (iii) \code{sa()} to implement the flexible Monte Carlo sensitivity analysis for unmeasured confounding in the context of multiple treatments and binary outcomes. We describe the arguments of each function in detail below.
\subsection{Arguments for data simulation}
\begin{itemize}
\item \code{sample_size} is the total number of units.
\item \code{n_trt} is the number of treatments.
\item \code{X} is a vector of characters representing covariates, with each covariate being generated from the standard probability distributions in the \pkg{stats} package.
\item \code{lp_y} is a vector of characters of length $T$, representing the linear effects $\bm{X}\gamma_w^{L}$ in the outcome generating model~\eqref{eq:outcome_gen}.
\item \code{nlp_y} is a vector of characters of length $T$, representing the nonlinear effects $\bm{Q}\gamma_w^{NL}$ in the outcome generating model~\eqref{eq:outcome_gen}.
\item \code{align} is logical, indicating whether the predictors in the treatment assignment model~\eqref{eq:trt_assign_overlap} are the same as the predictors in the outcome generating model~\eqref{eq:outcome_gen}. The default is \code{TRUE}. If the argument is set to \code{FALSE}, users need to specify two additional arguments, \code{lp_w} and \code{nlp_w}.
\item \code{lp_w} is a vector of characters of length $T-1$, representing $\bm{X}\xi_w^L$ in the treatment assignment model~\eqref{eq:trt_assign_overlap}.
\item \code{nlp_w} is a vector of characters of length $T-1$, representing $\bm{Q}\xi_w^{NL}$ in the treatment assignment model~\eqref{eq:trt_assign_overlap}.
\item \code{tau} is a numeric vector of length $T$ inducing different outcome event probabilities across treatment groups.
\item \code{delta} is a numeric vector of length $T-1$ inducing different ratios of units across treatment groups.
\item \code{psi} is a numeric value for the parameter $\psi$ in the treatment assignment model~\eqref{eq:trt_assign_overlap}, governing the sparsity of covariate overlap.
\end{itemize}
\subsection{Arguments for the estimation of causal effects}
\begin{itemize}
\item \code{y} is a numeric vector $(0, 1)$ representing a binary outcome.
\item \code{x} is a dataframe, including all the covariates but not treatments.
\item \code{w} is a numeric vector $(1, \dots, T)$ representing the treatment groups.
\item \code{estimand} is a character string (\code{"ATT"}, \code{"ATE"}) representing the type of causal estimand. When the \code{estimand = "ATT"}, users also need to specify the reference treatment group by setting the \code{reference_trt} argument.
\item \code{method} is a character string. Users can select from the following methods: \code{"RA", "VM", "BART", "TMLE", "IPTW-Multinomial", "IPTW-GBM", "IPTW-SL", \\ "RAMS-Multinomial", "RAMS-GBM", "RAMS-SL"}.
\end{itemize}
\subsubsection{Additional arguments for RA}
\begin{itemize}
\item \code{ndpost} is the number of posterior samples for Bayesian generalized linear models.
\end{itemize}
\subsubsection{Additional arguments for VM}
\begin{itemize}
\item \code{n_cluster} is a numeric value denoting the number of clusters to form using $k$-means clustering on the logit of GPS. The default value is 5.
\item \code{caliper} is a numeric value denoting the caliper which should be used when matching on the logit of GPS within each cluster formed by $k$-means clustering. The caliper is in standardized units. For example, \code{caliper = 0.25} means that all matches greater than 0.25 standard deviations of the logit of GPS are dropped. The default value is 0.25.
\item \code{boot} is logical, indicating whether or not to use nonparametric bootstrap to calculate the 95\% confidence intervals of the causal effect estimates.
\item \code{nboots} is a numeric value representing the number of bootstrap samples.
\end{itemize}
\subsubsection{Additional arguments for TMLE}
\begin{itemize}
\item \code{SL.library} is a character vector of prediction algorithms. A list of functions included in the \pkg{SuperLearner} package can be found with \code{listWrappers()}.
\end{itemize}
\subsubsection{Additional arguments for BART}
\begin{itemize}
\item \code{discard} provides the option (\code{"No"}, \code{"Yes"}) to use the discarding rules for the BART based method. The default is \code{"No"}.
\item \code{ndpost} is the number of posterior draws with a default value \code{ndpost} = 1000.
\end{itemize}
All tuning parameters of \pkg{BART}\code{::pbart()} are passed through \code{ce_estimate()}.
\subsubsection{Additional arguments for IPTW and RAMS}
\begin{itemize}
\item \code{trim_perc} is the percentile at which the inverse probability of treatment weights should be trimmed.
\item \code{boot} is logical, indicating whether or not to use nonparametric bootstrap to calculate the 95\% confidence intervals of the causal effect estimates.
\item \code{nboots} is a numeric value representing the number of bootstrap samples.
\end{itemize}
For \code{"IPTW-SL"} and \code{"RAMS-SL"}, all arguments of \code{SuperLearner()} are passed through \code{ce_estimate()}. For \code{"IPTW-GBM"} and \code{"RAMS-GBM"}, any tuning parameters and their default values in \pkg{twang}\code{::mnps()} are passed through \code{ce_estimate()}.
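As an illustrative sketch of this pass-through mechanism, a tuning parameter of the underlying \pkg{twang}\code{::mnps()} call (here \code{n.trees}) can be supplied directly; \code{y}, \code{x} and \code{w} are placeholders for the user's data:

\code{R> ce_estimate(y = y, x = x, w = w, method = "IPTW-GBM", \\ estimand = "ATE", n.trees = 1000)}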
\subsection{Arguments for sensitivity analysis}
\begin{itemize}
\item \code{y} is a numeric vector $(0,1)$ representing a binary outcome.
\item \code{x} is a dataframe, including all the covariates but not treatments.
\item \code{w} is a numeric vector $(1,\ldots, T)$ representing the treatment groups.
\item \code{estimand} is a character string (\code{"ATT"}, \code{"ATE"}) representing the type of causal estimand.
\item \code{M1} is a numeric value indicating the number of draws of the GPS from their posterior predictive distribution.
\item \code{M2} is a numeric value indicating the number of draws from the prior distributions of the confounding functions.
\item \code{nCore} is a numeric value indicating the number of cores to use for parallel computing.
\item \code{prior_c_functions} can be (1) a vector of character strings specifying the prior distributions for the confounding functions, where each string contains random number generation code for one of the standard probability distributions in the \pkg{stats} package, (2) a vector of character strings specifying a range of point mass priors to be placed on the confounding functions, or (3) a matrix specifying the point mass priors for the confounding functions.
\end{itemize}
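For example, in a three-treatment setting with six confounding functions, uniform priors could be specified as follows; the ordering matches the contrasts $c(1,2), c(2,1), c(2,3), c(1,3), c(3,1), c(3,2)$ used in Section~\ref{sec:example_sa}, and the ranges here are hypothetical and should be informed by subject-matter knowledge:

\code{R> prior_c_functions <- c("runif(1, -0.6, 0)", \# c(1,2) \\
"runif(1, 0, 0.6)", \# c(2,1) \\
"runif(1, -0.6, 0)", \# c(2,3) \\
"runif(1, -0.6, 0)", \# c(1,3) \\
"runif(1, 0, 0.6)", \# c(3,1) \\
"runif(1, 0, 0.6)") \# c(3,2)}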
\section{Examples}\label{sec:example}
\subsection{Data simulation}\label{sec:example_data_simulation}
We first use the \code{data_sim} function to simulate a dataset with the following characteristics: (1) sample size = 300, (2) ratio of units = 1:1:1 across three treatment groups, (3) nonlinear treatment assignment and outcome generating models, (4) different predictors for the treatment assignment and outcome generating mechanisms, (5) parallel response surfaces, (6) outcome prevalence = $(0.24, 0.48, 0.76)$ in three treatment groups with an overall rate = 0.5 and (7) moderate covariate overlap. Note that for the design factor (6), we can adjust \code{tau} to generate rare outcome events. \\
\code{R> library(CIMTx)}
\code{R> set.seed(111111)}
\code{R> data <- data_sim(\\
sample_size = 300,\\
n_trt = 3,\\
X = c( \\
"rnorm(300, 0, 0.5)", \# x1 \\
"rbeta(300, 2, .4)", \# x2\\
"runif(300, 0, 0.5)", \# x3\\
"rweibull(300,1,2)", \# x4\\
"rbinom(300, 1, .4)"), \# x5\\
\# linear terms in parallel response surfaces \\
lp_y = c(rep(".2*x1 + .3*x2 - .1*x3 - .1*x4 - .2*x5", 3)), \\
\# nonlinear terms in parallel response surfaces \\
nlp_y = c(rep(".7*x1*x1 - .1*x2*x3", 3)),\\
align = F, \# different predictors used in treatment and outcome models\\
\# linear terms in treatment assignment model\\
lp_w = c(\\
".4*x1 + .1*x2 - .1*x4 + .1*x5", \# w = 1\\
".2*x1 + .2*x2 - .2*x4 - .3*x5"), \# w = 2 \\
\# nonlinear terms in treatment assignment model \\
nlp_w = c(\\
"-.5*x1*x4 - .1*x2*x5", \# w = 1 \\
"-.3*x1*x4 + .2*x2*x5"), \# w = 2\\
tau = c(-1.5, 0, 1.5),\\
delta = c(0.5, 0.5),\\
psi = 1)}
The outputs of the simulated data object are: (1) \code{data\$covariates} for $\bm{X}$, (2) \code{data\$w} for treatment indicators, (3) \code{data\$y} for observed binary outcomes, (4) \code{data\$y_prev} for the outcome prevalence rates, (5) \code{data\$ratio_of_units} for the proportions of units in each treatment group, (6) \code{data\$overlap_fig} for the visualization of covariate overlap via boxplots of the distributions of true GPS.
In this simulated dataset, the ratio of units is:
\code{R> data\$ratio_of_units}
\begin{verbatim}
w
1 2 3
0.33 0.33 0.34
\end{verbatim}
The outcome prevalence rates within each treatment group are:
\code{R> data\$y_prev}
\begin{verbatim}
w y_prev
1 1 0.24
2 2 0.48
3 3 0.76
4 Overall 0.50
\end{verbatim}
Figure \ref{fig:covariate_overlap_moderate_plot} shows the distributions of true GPS for each treatment group, suggesting moderate covariate overlap.
\code{R> data\$overlap_fig}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.95\textwidth]{data_simulation_moderate_covariate_overlap.jpg}
\caption{Moderate overlap with \code{psi = 1}. Each panel presents boxplots by treatment group of the true GPS for one of the treatments for every unit in the sample. }
\label{fig:covariate_overlap_moderate_plot}
\end{figure}
We can change the structure of the simulated data by modifying arguments of the \code{data_sim} function. For example, setting \code{delta = c(1.5, 0.5)} yields unequal sample sizes across treatment groups with a ratio of units of $.58:.27:.15$. Assigning smaller values to \code{psi} can increase overlap: \code{psi = 0.1} corresponds to strong covariate overlap, as shown in Figure \ref{fig:covariate_overlap_strong_plot}.
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.95\textwidth]{data_simulation_strong_covariate_overlap.jpg}
\caption{Strong overlap with \code{psi = 0.1}. Each panel presents boxplots by treatment group of the true GPS for one of the treatments for every unit in the sample. }
\label{fig:covariate_overlap_strong_plot}
\end{figure}
\subsection{Methods to estimate causal effects}\label{examples:ca_estimate}
We now implement the 6 methods described in Section~\ref{sec:estimation} to estimate the causal effects using the simulated dataset. We first provide a step-by-step guide of each method, and then demonstrate the functions in the \pkg{CIMTx} package to implement each method.
\subsubsection{Regression adjustment}
Fit a Bayesian generalized linear regression model, $ E( y \mid \bm{x},w)= \text{logit}^{-1} (\beta_0 + \beta_1 w + \bm{\beta}_2^\top \bm{x} )$, using the \pkg{arm}\code{::bayesglm()} function and draw the posteriors using \pkg{arm}\code{::sim()}.
\code{R> x <- data\$covariates; w <- data\$w; y <- data\$y; ndpost <- 100}
\code{R> xwydata <- cbind(x,w,y)}
\code{R> reg_ra <- arm::bayesglm(y\texttildelow ., data = xwydata, family = binomial, x = TRUE)}\\
\code{R> sim_beta <- coef(arm::sim(reg_ra, n.sims = ndpost)) \# Get posterior distribution}\\
To estimate $ATE_{1,2}$, we predict the potential outcomes under each of the three treatments:
\code{R> x_tilde <- as.data.frame(model.matrix(reg_ra))}\\
\code{R> x_tilde1 <- x_tilde2 <- x_tilde3 <- x_tilde; n <- dim(x_tilde)[1]}\\
\code{R> x_tilde1\$w <- 1; x_tilde2\$w <- 2; x_tilde3\$w <- 3}\\
\code{R> y1_tilde <- array(NA, c(ndpost, n)); y2_tilde <- array(NA, c(ndpost, n))}\\
\code{R> for (s in 1:ndpost) \{ \# Predict potential outcomes for all units\\
p1_tilde <- arm::invlogit(as.matrix(x_tilde1) \%*\% sim_beta[s,]) \\
y1_tilde[s,] <- rbinom(n, 1, p1_tilde)\\
p2_tilde <- arm::invlogit(as.matrix(x_tilde2) \%*\% sim_beta[s,]) \\
y2_tilde[s,] <- rbinom(n, 1, p2_tilde)\}}\\
To estimate $ATT_{1|1,2}$, we predict the potential outcomes for individuals in the reference treatment group $w=1$:
\code{R> x_tilde11 <- x_tilde12 <- x_tilde13 <- x_tilde[x_tilde\$w==1,] \# w=1 reference}
\code{R> x_tilde12\$w <- 2; x_tilde13\$w <- 3; n1 <- dim(x_tilde11)[1] \# Switch w=1 to 2 and 3}
\code{R> y11_tilde <- array(NA, c(ndpost, n1)); y12_tilde <- array(NA, c(ndpost, n1))}
\code{R> for (s in 1:ndpost) \{\# Predict potential outcomes for those receiving w = 1 \\
p11_tilde <- arm::invlogit(as.matrix(x_tilde11) \%*\% sim_beta[s,]) \\
y11_tilde[s,] <- rbinom(n1, 1, p11_tilde)\\
p12_tilde <- arm::invlogit(as.matrix(x_tilde12) \%*\% sim_beta[s,]) \\
y12_tilde[s,] <- rbinom(n1, 1, p12_tilde)\}}\\
The causal effect estimates are presented in terms of OR, RR and RD:
\code{R> y1_pred <- y2_pred <- NULL;
RD12_est <- RR12_est <- OR12_est <- NULL}\\
\code{R> for (m in 1:ndpost) \{ \# Demonstrate ATE12 only \\
y1_pred[m] <- mean(y1_tilde[m,]) \\
y2_pred[m] <- mean(y2_tilde[m,]) \\
RD12_est[m] <- y1_pred[m] - y2_pred[m]\\
RR12_est[m] <- y1_pred[m] / y2_pred[m] \\
OR12_est[m] <- (y1_pred[m]/(1-y1_pred[m])) / (y2_pred[m]/(1-y2_pred[m]))\}}\\
In our package \pkg{CIMTx}, we can specify \code{method = "RA"} and \code{estimand = "ATE"} in the \code{ce_estimate()} function to get the ATE effects via RA:
\code{R> ce_estimate_ra_ate <- ce_estimate(y = data\$y, x = data\$covariates, \\ w = data\$w, method = "RA", estimand = "ATE", ndpost = 100)}\\
The output is 3 dataframes listing effect estimates (EST), standard errors (SE) and 95\% credible intervals (LOWER, UPPER):
\code{R> ce_estimate_ra_ate}
\begin{verbatim}
$ATE12
EST SE LOWER UPPER
RD -0.24 0.05 -0.33 -0.13
RR 0.50 0.09 0.32 0.69
OR 0.34 0.09 0.19 0.56
$ATE13
EST SE LOWER UPPER
RD -0.50 0.06 -0.64 -0.39
RR 0.33 0.06 0.20 0.45
OR 0.11 0.04 0.05 0.19
$ATE23
EST SE LOWER UPPER
RD -0.26 0.05 -0.34 -0.17
RR 0.66 0.05 0.56 0.75
OR 0.33 0.07 0.20 0.47
\end{verbatim}
Specifying \code{estimand = "ATT"} and setting \code{reference_trt} will get us the ATT effects:
\code{R> ce_estimate_ra_att <- ce_estimate(y = data\$y, x = data\$covariates, \\ w = data\$w, method = "RA", estimand = "ATT", ndpost = 100, reference_trt = 1)}
\code{R> ce_estimate_ra_att}
\begin{verbatim}
$ATT12
EST SE LOWER UPPER
RD -0.24 0.07 -0.37 -0.09
RR 0.51 0.13 0.31 0.78
OR 0.35 0.13 0.18 0.68
$ATT13
EST SE LOWER UPPER
RD -0.50 0.09 -0.68 -0.33
RR 0.33 0.09 0.18 0.49
OR 0.12 0.05 0.04 0.25
\end{verbatim}
\subsubsection{Vector Matching}
VM is currently designed only for the ATT effects. We first estimate the GPS from the full data using multinomial logistic regression and then discard units that fall outside the rectangular region (common support region) defined by the maximum value of the smallest GPS and the minimum value of the largest GPS in each treatment group.
\code{R> xwdata <- cbind(w, x)}\\
\code{R> gps_fit <- nnet::multinom(w \texttildelow ., data = xwdata) \# Calculate GPS}\\
\code{R> probs_logit1 <- data.frame(fitted(gps_fit))}\\
\code{R> colnames(probs_logit1) <- c("gps1", "gps2", "gps3")}\\
\code{R> probs_logit <- fitted(gps_fit); xwdata <- cbind(xwdata, probs_logit1)}
\code{R> min_max_gps <- xwdata \%>\% \\
dplyr::group_by(w) \%>\% \# Min and max of GPS in each w\\
dplyr::summarise(min1 = min(gps1), max1 = max(gps1), \# w=1 \\
min2 = min(gps2), max2 = max(gps2), \# w=2\\
min3 = min(gps3), max3 = max(gps3)) \# w=3}
\code{R> data\$Eligible <- \# Determine eligibility for w=1; w=2; w=3\\
xwdata\$gps1 >= max(min_max_gps\$min1) \& xwdata\$gps1 <= min(min_max_gps\$max1) \& \\
xwdata\$gps2 >= max(min_max_gps\$min2) \& xwdata\$gps2 <= min(min_max_gps\$max2) \& \\
xwdata\$gps3 >= max(min_max_gps\$min3) \& xwdata\$gps3 <= min(min_max_gps\$max3)}\\
We then recalculate the GPS for the inferential units within the common support region. For vector matching, we perform $k$-means clustering and then 1:1 matching based on the GPS.
\code{R> xwydata <- cbind(xwdata, y)[data\$Eligible, ]; n_cluster <- 2 \# Eligible units}
\code{R> \# Classify all units using $k$-means clustering on the logit GPS}
\code{R> temp1 <- kmeans(logit(xwydata\$gps1), n_cluster); xwydata\$cluster1 <- temp1\$cluster}
\code{R> temp2 <- kmeans(logit(xwydata\$gps2), n_cluster); xwydata\$cluster2 <- temp2\$cluster}
\code{R> temp3 <- kmeans(logit(xwydata\$gps3), n_cluster); xwydata\$cluster3 <- temp3\$cluster}
\code{R> temp12 <- xwydata[xwydata\$w != 3,]; temp13 <- xwydata[xwydata\$w != 2,]; \\ temp23 <- xwydata[xwydata\$w != 1,]}
\code{R> \# Those receiving w = 1 are matched to those receiving w = 2 using \\ logit(temp12\$gps1) within K-means strata of logit(xwydata\$gps3) }
\code{R> match12 <- Matching::Matchby(Y = temp12\$y, Tr = temp12\$w == 1,\\
X = logit(temp12\$gps1), by = temp12\$cluster3,
replace = TRUE)}
\code{R> \# Those receiving w = 1 are matched to those receiving w = 3 using \\ logit(temp13\$gps1) within K-means strata of logit(xwydata\$gps2)}
\code{R> match13 <- Matching::Matchby(Y = temp13\$y, Tr = temp13\$w == 1,\\
X = logit(temp13\$gps1), by = temp13\$cluster2,
replace = TRUE)}\\
After matching, we extract the units receiving $w=1$ who were matched to units receiving $w=2$ and $w=3$ as well as their matches.
\code{R> rownames(xwydata) <- 1:nrow(xwydata); xwydata\$id <- 1:nrow(xwydata)}
\code{R> xwydata\$both_1 <- xwydata\$id \%in\% match12\$index.treated \& \\ xwydata\$id \%in\% match13\$index.treated }
\code{R> temp <- xwydata[xwydata\$both_1 == "TRUE", ] }
\code{R> m12 <- cbind(match12\$index.treated, match12\$index.control)}
\code{R> m13 <- cbind(match13\$index.treated, match13\$index.control)}
\code{R> m12 <- m12[m12[,1] \%in\% rownames(temp), ] \# Identify those matched to w=2}
\code{R> m13 <- m13[m13[,1] \%in\% rownames(temp), ] \# Identify those matched to w=3}
\code{R> triplets <- cbind(m12[order(m12[,1]), ], m13[order(m13[,1]), ])}
\code{R> triplets <- as.matrix(triplets[,c(1, 2, 4)]); n_matched <- nrow(triplets)}\\
Then we can estimate the ATT effects based on the matched individuals:
\code{R> Y1_obs <- xwydata\$y[triplets[,1]]; y1_hat <- mean(Y1_obs) \# w=1}
\code{R> Y2_imp <- xwydata\$y[triplets[,2]]; y2_hat <- mean(Y2_imp) \# w=2}
\code{R> Y3_imp <- xwydata\$y[triplets[,3]]; y3_hat <- mean(Y3_imp) \# w=3}
\code{R> RD12_est <- y1_hat - y2_hat; RD13_est <- y1_hat - y3_hat}
\code{R> RR12_est <- y1_hat / y2_hat; RR13_est <- y1_hat / y3_hat}
\code{R> OR12_est <- (y1_hat / (1 - y1_hat)) / (y2_hat / (1 - y2_hat))}\\
To implement VM in \pkg{CIMTx}, we set the reference group \code{reference_trt = 1}, the number of clusters to form using $k$-means clustering \code{n_cluster = 2}, and use nonparametric bootstrap to estimate the 95\% confidence intervals by setting \code{boot = TRUE} and \code{nboots = 100}.
\code{R> ce_estimate_vm_att <- ce_estimate(y = data\$y, x = data\$covariates, \\ w = data\$w, method = "VM", estimand = "ATT", reference_trt = 1, \\ boot = TRUE, nboots = 100, n_cluster = 2)}
\code{R> ce_estimate_vm_att[1:2]}
\begin{verbatim}
$ATT12
EST SE LOWER UPPER
RD -0.01 0.11 -0.22 0.25
RR 1.01 0.25 0.65 1.62
OR 1.05 0.53 0.40 2.83
$ATT13
EST SE LOWER UPPER
RD -0.01 0.13 -0.27 0.29
RR 1.05 0.33 0.61 1.95
OR 1.14 0.71 0.32 3.34
\end{verbatim}
The numbers of matched individuals are also stored in the output list:
\code{R> summary(ce_estimate_vm_att\$number_matched)}
\begin{verbatim}
Min. 1st Qu. Median Mean 3rd Qu. Max.
41.00 63.00 69.00 67.94 75.00 89.00
\end{verbatim}
\subsubsection{TMLE}
The TMLE method is currently implemented only for the ATE effects. We first create the counterfactual treatment matrix.
\code{R> xwdata <- cbind(w, x); n <- length(w); n_trt <- length(unique(w))}\\
\code{R> xwdata\$w1 <- ifelse(xwdata\$w == 1, 1, 0)}\\
\code{R> xwdata\$w2 <- ifelse(xwdata\$w == 2, 1, 0)}\\
\code{R> xwdata\$w3 <- ifelse(xwdata\$w == 3, 1, 0)}\\
\code{R> w_mat <- xwdata[,c("w1", "w2", "w3")]}\\
\code{R> w_counterfactual <- rbind(xwdata, xwdata, xwdata)[,-1]}\\
\code{R> w_counterfactual\$w1 <- c(rep(1, n), rep(0, 2*n))}\\
\code{R> w_counterfactual\$w2 <- c(rep(0, n), rep(1, n), rep(0,n))}\\
\code{R> w_counterfactual\$w3 <- c(rep(0, 2*n), rep(1, n))}\\
Then run super learner and obtain initial predicted values for all the counterfactual scenarios:
\code{R> sl_fit <- \# Step 1: Estimating the outcome regression using super learner\\
SuperLearner(Y = y,\\
X = xwdata[,-1],\\
newX = w_counterfactual,\\
SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"),\\
family = binomial(), verbose = F)} \\
\code{R> Q0 <- rep(0,n*n_trt); Qtvector <- cbind(Q0,sl_fit\$SL.predict)}\\
\code{R> Qtmat <- matrix(unlist(split(as.data.frame(Qtvector), rep(1:n_trt, each = n))), ncol=2*n_trt)}\\
We then estimate the counterfactual outcome probabilities to each treatment using TMLE. The counterfactuals can be contrasted to get the ATE effects.
\code{R> w_results <- matrix(NA, nrow = n_trt, ncol = 4); start <- 1; end <- 2}\\
\code{R> for(w in 1:n_trt) \{\\
fit <- tmle::tmle( \# Step 2:Super learner fit for EY_w in tmle call\\
Y = y, A = NULL, \# Steps 3-4:tmle:target EY_w using A=NULL and Delta=Tmat[,t]\\
Delta = w_mat[, w], Q = Qtmat[, c(start, end)], \\
W = x, \# Covariates \\
g.SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"), \\
family = "binomial", verbose = TRUE) \# Binomial outcome \\
start <- start+2; end <- end+2 \# Step 5:The EY_w estimates are saved in tmle fit\\
w_results[w, 1] <- fit\$estimates\$EY1\$psi \# EY_w estimates \\
w_results[w, 2] <- fit\$estimates\$EY1\$var.psi \} \# EY_w sd } \\
Calling \code{method = "TMLE"} implements TMLE in \pkg{CIMTx}. We show the point estimates below; nonparametric bootstrap is recommended for obtaining the confidence intervals.
\code{R> ce_estimate_tmle_ate <- ce_estimate(y = data\$y, x = data\$covariates, \\w = data\$w, method = "TMLE", estimand = "ATE", \\ SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"))}
\code{R> ce_estimate_tmle_ate}
\begin{verbatim}
$ATE12
EST
RD -0.22
RR 0.52
OR 0.37
$ATE13
EST
RD -0.52
RR 0.31
OR 0.10
$ATE23
EST
RD -0.30
RR 0.60
OR 0.27
\end{verbatim}
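The recommended bootstrap can be wrapped around \code{ce_estimate()}; the sketch below is illustrative (it assumes the output list structure shown above and uses a reduced learner library for speed):

\code{R> boot_rd12 <- replicate(100, \{ \# Bootstrap RD of ATE12 \\
idx <- sample(nrow(data\$covariates), replace = TRUE) \\
est <- ce_estimate(y = data\$y[idx], x = data\$covariates[idx, ], \\
w = data\$w[idx], method = "TMLE", estimand = "ATE", \\
SL.library = c("SL.glm")) \\
est\$ATE12["RD", "EST"]\})}\\
\code{R> quantile(boot_rd12, c(0.025, 0.975)) \# 95\% percentile interval}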
\subsubsection{BART}
First fit a BART model $E(y \mid \bm{x}, w) = \text{BART} (\bm{x}, w)$:
\code{R> w <- data\$w; xwdata <- cbind(w, x); n <- dim(x)[1]}\\
\code{R> bart_mod <- BART::pbart(x.train = xwdata, y.train = y)}\\
To estimate the ATE effects (e.g., $ATE_{1,2}$), we predict potential outcomes for all individuals to each treatment and contrast the average potential outcomes between two treatment groups:
\code{R> xw2 <- xw1 <- xwdata; xw2[,1] <- 2 \# Everyone received w=2 instead of w=1}
\code{R> bart_pred1 <- predict(bart_mod, newdata = xw1); pred_prop1 <- bart_pred1\$prob.test }
\code{R> bart_pred2 <- predict(bart_mod, newdata = xw2); pred_prop2 <- bart_pred2\$prob.test}
\code{R> for (m in 1:ndpost) \{ \\
y1 <- rbinom(n,1,pred_prop1[m,]);
y2 <- rbinom(n,1,pred_prop2[m,]) \\
y1_pred[m] <- mean(y1); y2_pred[m] <- mean(y2) \# E(Y1); E(Y2) \\
RD12_est[m] <- y1_pred[m] - y2_pred[m] \\
RR12_est[m] <- y1_pred[m] / y2_pred[m] \\
OR12_est[m] <- (y1_pred[m] / (1 - y1_pred[m])) / (y2_pred[m] / (1 - y2_pred[m]))\} }\\
For the ATT effects, the prediction and contrast are for those in the reference group:
\code{R> xw1 <- xwdata[w == 1,]; n1 <- dim(xw1)[1] \# Assuming w=1 is the reference \\}
\code{R> xw2 <- xw1; xw2[,1] <- 2 \# Switch treatment label 1 to 2 }\\
\code{R> bart_pred11 <- predict(bart_mod, newdata = xw1) }\\
\code{R> pred_prop11 <- bart_pred11\$prob.test}\\
\code{R> bart_pred12 <- predict(bart_mod, newdata = xw2)}
\code{R> pred_prop12 <- bart_pred12\$prob.test \# Predicted p if w=1 received w=2}
\code{R> for (m in 1:ndpost) \{ \\
y1 <- c(rbinom(n1,1,pred_prop11[m,])) \\
y2 <- c(rbinom(n1,1,pred_prop12[m,])) \\
y1_pred[m] <- mean(y1); y2_pred[m] <- mean(y2) \\
RD12_est[m] <- y1_pred[m] - y2_pred[m] \\
RR12_est[m] <- y1_pred[m] / y2_pred[m] \\
OR12_est[m] <- (y1_pred[m] / (1 - y1_pred[m])) / (y2_pred[m] / (1 - y2_pred[m]))\} }
\code{R> RD12_mean <- mean(RD12_est); RD12_se <- sd(RD12_est)}
\code{R> RD12_lower <- quantile(RD12_est, probs=0.025, na.rm = TRUE)}
\code{R> RD12_upper <- quantile(RD12_est, probs=0.975, na.rm = TRUE)}\\
Setting \code{method = "BART"} and specifying \code{estimand = "ATE"} or \code{estimand = "ATT"} in the \code{ce_estimate()} function implements the BART method.
\code{R> ce_estimate_bart_att <- ce_estimate(y = data\$y, x = data\$covariates, w = data\$w, method = "BART", estimand = "ATT", ndpost = 100, reference_trt = 1)}
\code{R> ce_estimate_bart_att[1:2]}
\begin{verbatim}
$ATT12
EST SE LOWER UPPER
RD -0.20 0.06 -0.32 -0.08
RR 0.59 0.10 0.42 0.80
OR 0.43 0.12 0.24 0.69
$ATT13
EST SE LOWER UPPER
RD -0.45 0.06 -0.56 -0.34
RR 0.39 0.06 0.29 0.50
OR 0.15 0.04 0.08 0.24
\end{verbatim}
Next we demonstrate the BART-specific discarding rule to identify a common support region \citep{hu2020estimation}. When estimating $ATT_{1|1,2}$, individuals with a large variability in the predicted potential outcomes in the reference group $w=1$ will be discarded: \\
\code{R> xw3 <- xw1; xw3[,1] <- 3 \# Switch treatment label 1 to 3}\\
\code{R> pred_prop13 <- predict(bart_mod, newdata = xw3)\$prob.test}\\
\code{R> post_prop_sd11 <- apply(pred_prop11, 2, sd) \# sd of predicted prop for w=1} \\
\code{R> post_prop_sd12 <- apply(pred_prop12, 2, sd) \# sd of predicted prop for w=2}\\
\code{R> post_prop_sd13 <- apply(pred_prop13, 2, sd) \# sd of predicted prop for w=3}\\
\code{R> threshold1 <- max(post_prop_sd11)}\\
\code{R> \# Discard posterior sd of counterfactual outcomes > threshold1 (ref:w=1)}\\
\code{R> eligible1 <- (post_prop_sd12 <= threshold1) \& (post_prop_sd13 <= threshold1) }
For the ATE effects, the discarding rule is applied to each of the treatment groups, with the posterior standard deviations (e.g., \code{post_prop_sd21}, \code{post_prop_sd23}) computed analogously from the counterfactual predictions for all units:
\code{R> eligible1 <- (post_prop_sd12 <= threshold1) \& (post_prop_sd13 <= threshold1)}
\code{R> eligible2 <- (post_prop_sd21 <= threshold2) \& (post_prop_sd23 <= threshold2)}
\code{R> eligible3 <- (post_prop_sd31 <= threshold3) \& (post_prop_sd32 <= threshold3)}\\
The argument \code{discard = "Yes"} implements the discarding rule. Using $ATT_{1|1,2}$ as an example, 5 individuals in the reference group $w=1$ were discarded from the simulated data.
\code{R> ce_estimate_bart_discard_att <- ce_estimate(y = data\$y, x = data\$covariates, \\ w = data\$w, method = "BART", estimand = "ATT", discard = "Yes", ndpost = 100, \\reference_trt = 1)}
\code{R> ce_estimate_bart_discard_att\$n_discard}
\begin{verbatim}
[1] 5
\end{verbatim}
We re-estimate $ATT_{1|1,2}$ using individuals within the common support region:
\code{R> ce_estimate_bart_discard_att[1:2]}
\begin{verbatim}
$ATT12
EST SE LOWER UPPER
RD -0.24 0.07 -0.34 -0.11
RR 0.66 0.08 0.53 0.83
OR 0.37 0.11 0.22 0.63
$ATT13
EST SE LOWER UPPER
RD -0.43 0.05 -0.52 -0.32
RR 0.52 0.05 0.43 0.62
OR 0.11 0.04 0.05 0.21
\end{verbatim}
\subsubsection{Inverse Probability of Treatment Weighting}\label{IPTW_demo}
We first demonstrate the weight estimators for the IPTW method via the following three approaches: (i) Multinomial logistic regression, (ii) GBM, and (iii) super learner.
\begin{enumerate}
\item [(i)] We use the \code{multinom} function of the \pkg{nnet} package to fit the multinomial logistic regression model with treatment indicator as the outcome. \\
\code{R> psmod <- nnet::multinom(w\texttildelow., data = xwdata, trace = FALSE)}
\code{R> pred_gps <- fitted(psmod)}
\code{R> ate_wt1 <- 1/pred_gps[,1];ate_wt2 <- 1/pred_gps[,2];ate_wt3 <- 1/pred_gps[,3]}
\code{R> att_wt12 <- pred_gps[,1]/pred_gps[,2]; att_wt13 <- pred_gps[,1]/pred_gps[,3]}
\item[(ii)] GBM is implemented via the \code{mnps()} function of the \pkg{twang} package. The weights can be extracted by using the \code{get.weights()} function.
\code{R> psmod_gbm_ate <- twang::mnps(as.factor(w)\texttildelow V1+V2+V3+V4+V5, data=xwdata, \\ estimand = "ATE")} \\
\code{R> wt_gbm_ate <- twang::get.weights(psmod_gbm_ate, estimand = "ATE")}\\
\code{R> psmod_gbm_att <- twang::mnps(as.factor(w)\texttildelow V1+V2+V3+V4+V5, data=xwdata, \\ estimand = "ATT", treatATT = 1)}\\
\code{R> wt_gbm_att <- twang::get.weights(psmod_gbm_att, estimand = "ATT")}
\item[(iii)] To implement super learner, we use \pkg{WeightIt}\code{::weightit()} and set \code{method = "super"}.
\code{R> weightit_sl_ate <- WeightIt::weightit(w\texttildelow., data = xwdata, method = "super", \\ estimand = "ATE", SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"))}\\
\code{R> wt_sl_ate <- weightit_sl_ate\$weights}\\
\code{R> weightit_sl_att <- WeightIt::weightit(as.factor(w)\texttildelow., data = xwdata,\\ method = "super", estimand = "ATT", focal = 1, \\SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"))}\\
\code{R> wt_sl_att <- weightit_sl_att\$weights}
\end{enumerate}
Following the estimation of the weights, we obtain the causal effects as follows:
\code{R> \# Demonstrate ATE12 and ATT12 only: }\\
\code{R> mu1_ate = sum(y[w == 1] * wt_gbm_ate[w == 1]) / sum(wt_gbm_ate[w == 1])}\\
\code{R> mu2_ate = sum(y[w == 2] * wt_gbm_ate[w == 2]) / sum(wt_gbm_ate[w == 2])}\\
\code{R> mu1_att = mean(y[w == 1]) \# trt_reference = 1}\\
\code{R> mu2_att = sum(y[w == 2] * wt_gbm_att[w == 2]) / sum(wt_gbm_att[w == 2])}\\
\code{R> RD12_ate <- mu1_ate - mu2_ate; RR12_ate <- mu1_ate / mu2_ate}\\
\code{R> OR12_att <- (mu1_att / (1 - mu1_att)) / (mu2_att / (1 - mu2_att))}\\
IPTW can be implemented in \pkg{CIMTx} by setting the \code{method} and \code{estimand} arguments accordingly. As a strategy to handle violations of the positivity assumption (A2), the weights can be trimmed at user-specified percentiles via \code{trim_perc}. Figure~\ref{fig:p_est_weights} shows the distributions of the weights estimated by the three methods before and after trimming at the 5th and 95th percentiles of the distribution.
\code{R> ce_estimate_iptw_multinomial_ate <- ce_estimate(y = data\$y, \\ x = data\$covariates, w = data\$w, method = "IPTW-Multinomial", estimand = "ATE")}
\code{R> ce_estimate_iptw_multinomial_trim_ate <- ce_estimate(y = data\$y, \\ x = data\$covariates, w = data\$w, method = "IPTW-Multinomial", estimand = "ATE", trim_perc = c(0.05, 0.95))}
\code{R> ce_estimate_iptw_sl_ate <- ce_estimate(y = data\$y,\\ x = data\$covariates, w = data\$w, method = "IPTW-SL", estimand = "ATE",\\ SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"))}
\code{R> ce_estimate_iptw_sl_trim_ate <- ce_estimate(y = data\$y,\\ x = data\$covariates, w = data\$w, method = "IPTW-SL", trim_perc = c(0.05,0.95),\\SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"), estimand = "ATE")}
\code{R> ce_estimate_iptw_gbm_ate <- ce_estimate(y = data\$y, \\ x = data\$covariates, w = data\$w, method = "IPTW-GBM", estimand = "ATE")}
\code{R> ce_estimate_iptw_gbm_trim_ate <- ce_estimate(y = data\$y, x = data\$covariates, \\ w = data\$w, method = "IPTW-GBM", estimand = "ATE", trim_perc = c(0.05, 0.95))}
\code{R> plot_boxplot(ce_estimate_iptw_multinomial_ate, ce_estimate_iptw_sl_ate,\\
ce_estimate_iptw_gbm_ate, ce_estimate_iptw_multinomial_trim_ate,\\ ce_estimate_iptw_sl_trim_ate, ce_estimate_iptw_gbm_trim_ate)}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.95\textwidth]{p_weight_comparison.jpg}
\caption{Distributions of the inverse probability of treatment weights estimated by multinomial logistic regression, super learner and generalized boosted models. Panel (a) shows results before weight trimming. Panel (b) displays results after trimming the weights at 5\% and 95\% of the distribution. }
\label{fig:p_est_weights}
\end{figure}
We can then estimate the causal effects and bootstrap confidence intervals.
\code{\#ATE effects}
\code{R> ce_estimate_iptw_sl_trim_ate_boot <- ce_estimate(y = data\$y, x = data\$covariates, w = data\$w, method = "IPTW-SL", estimand = "ATE", boot = TRUE, nboots = 100, \\ trim_perc = c(0.05, 0.95), SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"))}
\code{R> ce_estimate_iptw_sl_trim_ate_boot}
\begin{verbatim}
$ATE12
EST SE LOWER UPPER
RD -0.25 0.07 -0.40 -0.12
RR 0.47 0.12 0.25 0.70
OR 0.33 0.11 0.13 0.58
$ATE13
EST SE LOWER UPPER
RD -0.53 0.06 -0.63 -0.41
RR 0.29 0.06 0.17 0.41
OR 0.10 0.03 0.05 0.18
$ATE23
EST SE LOWER UPPER
RD -0.27 0.07 -0.40 -0.13
RR 0.64 0.08 0.52 0.81
OR 0.32 0.10 0.16 0.57
\end{verbatim}
\code{\# ATT effects}
\code{R> ce_estimate_iptw_sl_trim_att_boot <- ce_estimate(y = data\$y, x = data\$covariates, w = data\$w, method = "IPTW-SL", estimand = "ATT", boot = TRUE, reference_trt = 1, nboots = 100, \\ trim_perc = c(0.05, 0.95), SL.library = c("SL.glm", "SL.glmnet", "SL.rpart"))}
\code{R> ce_estimate_iptw_sl_trim_att_boot}
\begin{verbatim}
$ATT12
EST SE LOWER UPPER
RD -0.22 0.06 -0.33 -0.11
RR 0.53 0.10 0.35 0.74
OR 0.39 0.11 0.21 0.61
$ATT13
EST SE LOWER UPPER
RD -0.50 0.07 -0.63 -0.37
RR 0.33 0.06 0.22 0.46
OR 0.11 0.04 0.05 0.21
\end{verbatim}
\subsubsection{Regression adjustment with multivariate spline of GPS}
To implement RAMS, we estimate the GPS and fit a generalized additive model using the treatment indicator and multivariate spline function of the logit of GPS as the predictors.
\code{R> gps1 <- pred_gps[,1]; gps2 <- pred_gps[,2]; \# Use GPS from multinomial to demo}\\
\code{R> logit_gps1 <- stats::qlogis(gps1) \# logit of gps for w = 1 } \\
\code{R> logit_gps2 <- stats::qlogis(gps2) \# logit of gps for w = 2 } \\
\code{R> RAMS_data <- as.data.frame(cbind(w = xwdata\$w, logit_gps1, logit_gps2, y))}\\
\code{R> RAMS_model <- mgcv::gam(y\texttildelow w + te(logit_gps1,logit_gps2), \\
family = binomial, data = RAMS_data)}\\
Using the fitted generalized additive model, we can predict the potential outcomes and estimate the causal effects.
\code{R> \# Demo ATE12 only}
\code{R> counterfactual_w1_ate <- data.frame(w=rep(1,n), logit_gps1, logit_gps2)}
\code{R> RAMS_pred1_ate <- plogis(predict(RAMS_model, newdata = counterfactual_w1_ate))}
\code{R> y1_hat_ate <- mean(RAMS_pred1_ate) \# E(Y) for w=1 (ATE)}
\code{R> counterfactual_w2_ate <- data.frame(w=rep(2,n), logit_gps1, logit_gps2)}
\code{R> RAMS_pred2_ate <- plogis(predict(RAMS_model, newdata = counterfactual_w2_ate))}
\code{R> y2_hat_ate <- mean(RAMS_pred2_ate) \# E(Y) for w=2 (ATE)}
\code{R> RD12_ate <- y1_hat_ate - y2_hat_ate; RR12_ate <- y1_hat_ate / y2_hat_ate }
\code{R> OR12_ate <- (y1_hat_ate / (1 - y1_hat_ate)) / (y2_hat_ate / (1 - y2_hat_ate))} \\
RAMS can be called by setting \code{method = "RAMS-Multinomial"} and specifying \code{estimand = "ATE"} or \code{estimand = "ATT"}.
\code{R> ce_estimate(y = data\$y, x = data\$covariates, w = data\$w, \\ method = "RAMS-Multinomial", estimand = "ATE", boot = TRUE, nboots = 100)}
\begin{verbatim}
$ATE12
EST SE LOWER UPPER
RD -0.25 0.03 -0.30 -0.19
RR 0.50 0.07 0.34 0.63
OR 0.34 0.06 0.23 0.45
$ATE13
EST SE LOWER UPPER
RD -0.50 0.06 -0.61 -0.38
RR 0.34 0.06 0.19 0.46
OR 0.12 0.04 0.05 0.20
$ATE23
EST SE LOWER UPPER
RD -0.25 0.03 -0.33 -0.19
RR 0.67 0.04 0.58 0.73
OR 0.34 0.06 0.23 0.45
\end{verbatim}
\subsection{Sensitivity analysis}\label{sec:example_sa}
We demonstrate the Monte Carlo sensitivity analysis approach for unmeasured confounding \citep{hu2021flexible}. We first simulate a small dataset in a simple causal inference setting to illustrate the key steps of the approach. There are two binary confounders: $X_1$ is measured and $X_2$ is unmeasured.
\code{R> set.seed(111)}
\code{R> data_SA <- data_sim(\\
sample_size = 100,\\
n_trt = 3,\\
X = c("rbinom(100, 1, .5)", \# x1: measured confounder\\
"rbinom(100, 1, .4)"), \# x2: unmeasured confounder \\
lp_y = rep(".2 * x1 + 2.3 * x2", 3), \# parallel response surfaces \\
nlp_y = NULL,\\
align = F, \\
lp_w = c("0.2 * x1 + 2.4 * x2", \# w = 1 \\
"-0.3 * x1 - 2.8 * x2"), \# w = 2 \\
nlp_w = NULL,\\
tau = c(-2, 0, 2),\\
delta = c(0, 0),\\
psi = 1)}\\
The key sensitivity analysis steps described in Section~\ref{sec:sa} are implemented as follows.
\begin{enumerate}
\item Estimate the GPS for each individual.
\code{R> M1 <- 50; sample_gap <- 10 \# Gap for thinning the MCMC draws (illustrative)}
\code{R> w_model <- BART::mbart2(x.train = data_SA\$covariates, y.train = data_SA\$w, \\ ndpost = M1 * sample_gap) \# gap-sampling to reduce the dependence in MCMC}
\code{R> gps <- array(w_model\$prob.train[seq(1, M1 * sample_gap, sample_gap),],\\
dim = c(M1, \# 1st dimension is M1\\
length(unique(data_SA\$w)), \# 2nd dimension is the number of treatments \\
dim(data_SA\$covariates)[1])) \# 3rd dimension is sample size}
\code{R> dim(gps)}
\code{50 3 100}
The output of the posterior GPS is a three-dimensional array. The first dimension is the number of posterior draws for the GPS ($M_1$). The second dimension is the number of treatments $T$, and the third dimension is the total sample size.
\item Specify the prior distributions of the confounding functions and the number of draws ($M_2$) for the confounding functions $c(w,w',\bm{x})$. In this illustrative example, we know the true values of the confounding functions within each stratum of $x_1$, which will be used for the sensitivity analysis.
\code{R> x1 <- data_SA\$covariates[,1]}\\
\code{R> x2 <- data_SA\$covariates[,2] \# x2 as the unmeasured confounder}\\
\code{R> w <- data_SA\$w; y <- data_SA\$y} \\
\code{R> x1w_data <- cbind(x1, w) }\\
\code{R> Y1 <- data_SA\$y_truth[,1]; Y2 <- data_SA\$y_truth[,2]}\\
\code{R> Y3 <- data_SA\$y_truth[,3]}\\
\code{R> \# Calculate the true confounding functions within x1 = 1 stratum}\\
\code{R> c_1_x1_1 <- mean(Y1[w==1\&x1==1])-mean(Y1[w==2\&x1==1])\# c(1,2)} \\
\code{R> c_2_x1_1 <- mean(Y2[w==2\&x1==1])-mean(Y2[w==1\&x1==1])\# c(2,1)} \\
\code{R> c_3_x1_1 <- mean(Y2[w==2\&x1==1])-mean(Y2[w==3\&x1==1])\# c(2,3)}\\
\code{R> c_4_x1_1 <- mean(Y1[w==1\&x1==1])-mean(Y1[w==3\&x1==1])\# c(1,3)}\\
\code{R> c_5_x1_1 <- mean(Y3[w==3\&x1==1])-mean(Y3[w==1\&x1==1])\# c(3,1)} \\
\code{R> c_6_x1_1 <- mean(Y3[w==3\&x1==1])-mean(Y3[w==2\&x1==1])\# c(3,2)}\\
\code{R> c_x1_1 <- cbind(c_1_x1_1, c_2_x1_1, c_3_x1_1, c_4_x1_1, \\ c_5_x1_1, c_6_x1_1) \# True confounding functions among x1 = 1}
The true values of the confounding functions within the stratum $x_1 = 0$ can be calculated in a similar way.
\code{R> c_x1_0 <- cbind(c_1_x1_0, c_2_x1_0, c_3_x1_0, c_4_x1_0, \\ c_5_x1_0, c_6_x1_0) \# True confounding functions among x1 = 0}
\code{R> true_c_functions <- rbind(c_x1_1, c_x1_0)}
\item Calculate the confounding function adjusted outcomes with the drawn values of GPS and confounding functions.
\code{R> y <- data_SA\$y; i <- 1; j <- 1}\\
\code{R> ycf <- ifelse(
x1w_data[, "w"] == 1 \& x1 == 1, \\ \# w = 1, x1 = 1 \\
y - (c_x1_1[i, 1] * gps[j, 2,] + c_x1_1[i, 4] * gps[j, 3,]),\\
ifelse( x1w_data[, "w"] == 1 \& x1 == 0, \\ \# w = 1, x1 = 0 \\
y - (c_x1_0[i, 1] * gps[j, 2,] + c_x1_0[i, 4] * gps[j, 3,]),\\
ifelse( x1w_data[, "w"] == 2 \& x1 == 1, \\ \# w = 2, x1 = 1 \\
y - (c_x1_1[i, 2] * gps[j, 1,] + c_x1_1[i, 3] * gps[j, 3,]),\\
ifelse( x1w_data[, "w"] == 2 \& x1 == 0, \\ \# w = 2, x1 = 0\\
y - (c_x1_0[i, 2] * gps[j, 1,] + c_x1_0[i, 3] * gps[j, 3,]),\\
ifelse(x1w_data[, "w"] == 3 \& x1 == 1, \\ \# w = 3, x1 = 1 \\
y - (c_x1_1[i, 5] * gps[j, 1,] + c_x1_1[i, 6] * gps[j, 2,]),\\
\# w = 3, x1 = 0 \\
y - (c_x1_0[i, 5] * gps[j, 1,] + c_x1_0[i, 6] * gps[j, 2,]) \\
))))) }
\item Use the adjusted outcomes to estimate the causal effects.
\code{R> bart_mod_sa <- wbart(x.train = x1w_data, y.train = ycf, ndpost = 1000)}\\
\code{R> predict_1_ate_sa <- pwbart(cbind(x1, w = 1), bart_mod_sa\$treedraws)} \\
\code{R> predict_2_ate_sa <- pwbart(cbind(x1, w = 2), bart_mod_sa\$treedraws)}\\
\code{R> predict_3_ate_sa <- pwbart(cbind(x1, w = 3), bart_mod_sa\$treedraws)}\\
\code{R> RD_ate_12_sa <- rowMeans(predict_1_ate_sa - predict_2_ate_sa)}\\
\code{R> RD_ate_23_sa <- rowMeans(predict_2_ate_sa - predict_3_ate_sa)}\\
\code{R> RD_ate_13_sa <- rowMeans(predict_1_ate_sa - predict_3_ate_sa)}\\
\code{R> predict_1_att_sa <- pwbart(cbind(x1[w==1], w = 1), bart_mod_sa\$treedraws)} \\
\code{R> predict_2_att_sa <- pwbart(cbind(x1[w==1], w = 2), bart_mod_sa\$treedraws)}\\
\code{R> RD_att_12_sa <- rowMeans(predict_1_att_sa - predict_2_att_sa) \# w=1 is the reference}
\end{enumerate}
Repeat steps 3 and 4 $M_1 \times M_2$ times to form $M_1 \times M_2$ datasets with adjusted outcomes. The uncertainty intervals are estimated by pooling the posteriors across the $M_1 \times M_2$ model fits.
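For concreteness, this repetition can be sketched as a double loop over the GPS posterior draws ($j$) and the confounding-function draws ($i$). The sketch below is illustrative only: \code{adjust\_outcome()} is a hypothetical wrapper around the step-3 \code{ifelse()} block, not a function of the package.
\begin{verbatim}
draws <- vector("list", M1 * M2)
for (j in seq_len(M1)) {        # loop over GPS posterior draws
  for (i in seq_len(M2)) {      # loop over confounding function draws
    ycf <- adjust_outcome(y, gps[j, , ], i)             # step 3
    fit <- wbart(x.train = x1w_data, y.train = ycf,
                 ndpost = 1000)                         # step 4
    draws[[(j - 1) * M2 + i]] <-
      rowMeans(pwbart(cbind(x1, w = 1), fit$treedraws) -
               pwbart(cbind(x1, w = 2), fit$treedraws))
  }
}
RD_ate_12_pooled <- unlist(draws)          # pooled posterior for ATE(1,2)
quantile(RD_ate_12_pooled, c(.025, .975))  # uncertainty interval
\end{verbatim}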
The \code{sa()} function implements the sensitivity analysis approach while fitting the $M_1 \times M_2$ models using parallel computation.
\code{R> SA_adjust_result <- sa(M1 = 50, M2 = 1, y = data_SA\$y, x = x1,\\ w = data_SA\$w, ndpost = 100, prior_c_functions = true_c_functions, nCores = 3, \\estimand = "ATE")}
\code{R> SA_adjust_result}
\begin{verbatim}
EST SE LOWER UPPER
ATE12 -0.44 0.10 -0.64 -0.24
ATE13 -0.58 0.11 -0.80 -0.36
ATE23 -0.14 0.12 -0.38 0.10
\end{verbatim}
We compare the sensitivity analysis results to the naive estimators where we ignore the unmeasured confounder $X_2$, and to the results where we had access to $X_2$.
\code{R> ce_estimate_bart_ate_without_x2 <- ce_estimate(y = data_SA\$y, x = x1, \\ w = data_SA\$w, method = "BART", estimand = "ATE", ndpost = 100)}\\
\code{R> ce_estimate_bart_ate_with_x2 <- ce_estimate(y = data_SA\$y, x = cbind(x1, x2), \\ w = data_SA\$w, method = "BART", estimand = "ATE", ndpost = 100)}\\
Figure \ref{fig:p_SA_demo} compares the estimates of $ATE_{1,2}$, $ATE_{2,3}$ and $ATE_{1,3}$ from the three analyses. The sensitivity analysis estimators are similar to the results that could be achieved had the unmeasured confounder $X_2$ been made available.
\begin{figure}[htbp]
\centering
\includegraphics[width = 1\textwidth]{p_SA_demo.jpg}
\caption{Estimates and 95\% credible intervals for $ATE_{1,2}$, $ATE_{2,3}$ and $ATE_{1,3}$ from the naive analysis ignoring $X_2$, the sensitivity analysis, and the analysis with access to $X_2$.}
\label{fig:p_SA_demo}
\end{figure}
We can also conduct the sensitivity analysis for the ATT effects by setting \code{estimand = "ATT"}.
\code{R> SA_adjust_att_result <- sa(M1 = 50, M2 = 1, y = data_SA\$y, x = x1,\\ w = data_SA\$w, ndpost = 100, prior_c_functions = true_c_functions, nCores = 3, \\estimand = "ATT", reference_trt = 1)}
\code{R> SA_adjust_att_result}
\begin{verbatim}
EST SE LOWER UPPER
ATT12 -0.42 0.10 -0.62 -0.22
ATT13 -0.57 0.11 -0.78 -0.35
\end{verbatim}
Finally, we demonstrate the \code{sa()} function in a more complex data setting with 3 measured confounders and 2 unmeasured confounders.
\code{R> set.seed(1)}
\code{R> data_SA_2 <- data_gen(
sample_size = 100, \\
n_trt = 3, \\
X = c(
"rnorm(100, 0, 0.5)", \# x1 \\
"rbeta(100, 2, .4)", \# x2 \\
"runif(100, 0, 0.5)", \# x3 \\
"rweibull(100, 1, 2)", \# x4 as one of the unmeasured confounders \\
"rbinom(100, 1, .4)"), \# x5 as one of the unmeasured confounders \\
lp_y = rep(".2 * x1 + .3 * x2 - .1 * x3 - 1.1 * x4 - 1.2 * x5", 3), \\
nlp_y = rep(".7 * x1 * x1 - .1 * x2 * x3", 3), \# parallel response surfaces \\
align = FALSE, \\
lp_w = c(".4 * x1 + .1 * x2 - 1.1 * x4 + 1.1 * x5", \# w = 1 \\
".2 * x1 + .2 * x2 - 1.2 * x4 - 1.3 * x5"), \# w = 2 \\
nlp_w = c("-.5 * x1 * x4 - .1 * x2 * x5", \# w = 1 \\
"-.3 * x1 * x4 + .2 * x2 * x5"), \# w = 2 \\
tau = c(0.5, -0.5, 0.5), \\
delta = c(0.5, 0.5), \\
psi = 2)}\\
There are three ways in which the prior for the confounding functions can be used: (i) point mass prior, (ii) re-analysis over a range of point mass priors, (iii) full prior with uncertainty specified. We have demonstrated (i) in the previous illustrative example, and now show how (ii) and (iii) can be used. We will use a range of point mass priors for $c(1,3,\bm{x})$ and $c(3,1,\bm{x})$, and use uniform distributions for the other confounding functions.
\code{R> c_grid <- c(
"runif(-0.6, 0)", \# c(1,2) \\
"runif(0, 0.6)", \# c(2,1) \\
"runif(-0.6, 0)", \# c(2,3) \\
"seq(-0.6, 0, by = 0.15)", \# c(1,3) \\
"seq(0, 0.6, by = 0.15)", \# c(3,1) \\
"runif(0, 0.6)" \# c(3,2) \\
)}
\code{R> SA_grid_result <- sa(y = data_SA_2\$y, w = data_SA_2\$w, estimand = "ATE",\\ x = data_SA_2\$covariates[,-c(4,5)], prior_c_functions = c_grid, nCores = 3)}
The sensitivity analysis results can be visualized via a contour plot. Figure \ref{fig:p_contour} shows how the estimate of $ATE_{1,3}$ would change under different
pairs of values of the two confounding functions $c(1,3,\bm{x})$ and $c(3,1,\bm{x})$.
Suppose the outcome event is death. The specification $c(1,3,\bm{x})<0$ and $c(3,1,\bm{x})>0$ assumes that the unmeasured confounders guiding clinicians' treatment decisions tend to lead them to recommend treatment $w=1$ over $w=3$ for healthier patients, because those receiving $w=1$ would have a lower probability of death under either treatment. Under this assumption, as the effect of unmeasured confounding increases (moving up along the $-45^{\circ}$ line), the beneficial effect of treatment $w=3$ becomes more pronounced, as evidenced by larger estimates of $ATE_{1,3}$.
\code{R> plot_contour(SA_grid_result, ATE = "1,3")}
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.65\textwidth]{p_contour.jpg}
\caption{Contour plot of the confounding function adjusted $ATE_{1,3}$. The blue lines report the adjusted causal effect estimates corresponding to pairs of values for $c(1,3)$ and $c(3,1)$ spaced on a grid over $(-0.6, 0) \times (0, 0.6)$ in increments of 0.15, under the prior distributions $c(1,2), c(2,3) \sim U(-0.6,0)$ and $c(2,1), c(3,2) \sim U(0, 0.6)$.}
\label{fig:p_contour}
\end{figure}
\newpage
\section{Introduction}
\label{intro}
The realm of topological field theories can be divided, according to Witten
\cite{witten}, into two broad classes:
the {\sl cohomological, or semiclassical theories}, whose
prototypes are either the
topological Yang-Mills theory \cite{wittenym} or the
topological $\sigma$-model \cite{wittensigma}
and the {\sl quantum theories}, whose prototype is
the abelian Chern-Simons theory
\cite{chernsimons,report}.
In this paper we deal
with the cohomological theories and present new properties of these models.
In particular, we propose a new formulation of 2D topological
gravity \cite{witten,topgr},
leading to correlation functions apparently different
from those of Witten's theory. Nevertheless, our
considerations
have a more general scope and apply to a wider
class of models.
The new features of cohomological theories that we analyse are the
reduction of moduli space to a constrained submanifold
of the ordinary moduli space and the field
theoretical mechanism that implements such a reduction.
Specifically,
we derive a theory of 2D topological gravity where
the physical correlators are
{\sl intersection numbers} in a proper submanifold ${\cal V}_{g,s}
\subset {\cal M}_{g,s}$
of the moduli space ${\cal M}_{g,s}$ of genus $g$ Riemann surfaces
$\Sigma_{g,s}$ with $s$ marked points.
${\cal V}_{g,s}$ is defined as follows.
Consider the $g$-dimensional vector bundle
${\cal E}_{hol} \, \longrightarrow \, {\cal M}_{g,s}$, whose
sections $s(m)$ are the holomorphic
differentials $\omega$ on the Riemann surfaces
$\Sigma_{g,s}$, $m$
denoting the point of the base-manifold ${\cal M}_{g,s}$
(i.e.\ the {\sl polarized} Riemann surface).
Let $c({\cal E}_{hol})=\det (1+{\cal R})$ be the total Chern class
of ${\cal E}_{hol}$, ${\cal R}$
being the curvature two-form of a holomorphic connection on
${\cal E}_{hol}$. For instance, we can choose the canonical connection
$\Gamma=h^{-1}\partial h$ associated with the natural fiber metric
$h_{jk}={\rm Im}\,
\Omega_{jk}$, $\Omega_{jk}$ being the period matrix of $\Sigma_{g}$.
Then ${\cal V}_{g,s}$ is the Poincar\'e dual
of the top Chern class $c_g({\cal E}_{hol})=\det {\cal R}=
\det\left({1\over \Omega-\bar\Omega}\bar\partial
\bar\Omega {1\over \Omega-\bar\Omega}\partial\Omega\right)$,
$d=\partial+\bar \partial$ being the exterior derivative on the moduli space.
${\cal V}_{g,s}$ is therefore a submanifold
of codimension $g$ described as the locus of
those Riemann surfaces $\Sigma_{g,s}(m)$ where
some section $s(m)$ of ${\cal E}_{hol}$ vanishes \cite{griffithsharris}.
Explicitly, the
topological correlators of our theory
are the intersection numbers of the standard Mumford-Morita
cohomology classes $ c_1 \left ( {\cal L}_i \right )$ on the constrained
moduli space, namely
\begin{eqnarray}
<{\cal O}_1 \left ( x_1 \right ) {\cal O}_2 \left ( x_2 \right )
&\cdots &{\cal O}_n \left ( x_n \right ) > \, =\int_{{\cal V}_{g,s}}
\left [ c_1 \left ( {\cal L}_1 \right ) \right ]^{d_1} \wedge \cdots\wedge
\left [ c_1 \left ( {\cal L}_n \right ) \right ]^{d_n}=\nonumber\\&
=&\int_{{{\cal M}}_{g,s}} c_g ( {\cal E}_{hol})\wedge
\left [ c_1 \left ( {\cal L}_1 \right ) \right ]^{d_1} \wedge \cdots
\wedge\left [ c_1 \left ( {\cal L}_n \right ) \right ]^{d_n}.
\label{intro_0}
\end{eqnarray}
Precisely,
$ c_1 \left ( {\cal L}_i \right )$ are the first Chern-classes
of the line bundles
${\cal L}_i\longrightarrow{\cal M}_{g,s}$ defined
by the cotangent spaces
$T^{ * }_{x_i} \Sigma_{g}(m)$ at the marked points $x_i$.
The above theory will be called {\sl 2D constrained topological gravity}.
We derive the definition of 2D constrained
topological gravity in an algorithmic
way, by performing
the topological twist of N=2 Liouville theory, that we assume as the
correct definition of
N=2 supergravity in two-dimensions. The formal set-up for
twisting an N=2 locally supersymmetric theory was established in ref.\
\cite{ansfre}.
The origin of a constraint on moduli space is due to the presence
of the graviphoton, absent in the existing
formulations of 2D topological
gravity. The graviphoton is initially a physical gauge-field and
after the twist it maintains zero ghost-number. Nevertheless,
in the twisted theory, it is no longer a physical field,
rather it is a Lagrange multiplier (in the BRST sense). Indeed,
it appears in the right-hand side of the BRST-variation of suitable
antighosts. Since this Lagrange multiplier possesses global degrees
of freedom (the $g$ moduli of the graviphoton), it
imposes $g$ constraints on the space ${\cal M}_{g}$, which can be
viewed as the space
of the global degrees of freedom of the metric tensor. The metric
tensor,
on the other hand, is the only
field that remains physical also after twist.
We are led to conjecture that
the inclusion of {\sl Lagrange multiplier gauge-fields}
is a general mechanism producing the appearance of {\sl constrained}
moduli spaces. We recall that
in four dimensions, instead, the role of the graviphoton
$A$ \cite{ansfre,ansfre2} is that of producing ghosts for ghosts {\sl via}
the self-dual and anti-self-dual components $F^{\pm \, ab}$ of the field
strength $F^{ab}$.
To develop further the introductory description of our theory, we begin
with a brief discussion of cohomological theories from our
specific viewpoint.
In Witten's words \cite{witten},
cohomological field theories are concerned with
{\sl sophisticated counting
problems}. The fundamental idea is that
a generic correlation function of $n$ physical observables
$\{ {\cal O}_1 , \ldots, {\cal O}_n \}$
has an interpretation as the {\sl intersection number}
\begin{equation}
<{\cal O}_1 {\cal O}_2 \cdots {\cal O}_n > \,=\,\# \left ( H_1 \cap H_2
\cap \cdots \cap H_n \right )
\label{intro1}
\end{equation}
of $n$ {\sl homology cycles} $H_i \, \subset \, {\cal M}$ in the {\sl moduli
space} ${\cal M}$
of suitable {\sl instanton} configurations $\Im \left [\phi (x) \right ]$ of
the basic
fields $\phi$ of the theory.
For example in the topological $\sigma$-model
\cite{wittensigma,topsigma,calabi,index1,index2}
the basic fields are the maps
\begin{equation}
X:\, \Sigma_g \longrightarrow{\cal N}
\label{intro2}
\end{equation}
from a genus $g$ Riemann surface $\Sigma_g$ into a K\"ahlerian
manifold ${\cal N}$. In this case
the instantons $\Im \left [X (z,{\bar z}) \right ]$ are the holomorphic
maps
$\partial_{\bar z} X\, = \,\partial_{z}{\bar X }\, = \, 0$ and the moduli
space is, for each
homotopy class of holomorphic embeddings of degree $k$, the parameter
space ${\cal M}_k$ of such class
of maps. The degree $k$ is defined by $\int_{\Sigma_g} X^{ * } K =k$,
where $X^{ * } K$
is the pull-back of the K\"ahler two-form $K$
on ${\cal N}$. The observables
${\cal O}_{A_i}(z_i)$ are in
one-to-one correspondence with the de Rham cohomology classes
$A_i \in H^{p} ({\cal N})$ of
the target manifold ${\cal N}$ and the homology
cycles $H_i$ are defined
as the subvarieties of ${\cal M}_k$ that contain all those instantons
such that
$X(z_i) \in [ A_i ]^{ * }$.
In this definition we have denoted by $[ A_i ]^{ * }
\subset
{\cal N }$ the Poincar\'e dual of the cohomology class $A_i \in
H^{p}({\cal N})$.
It is clear that
topological field theories \cite{report}
can been defined in completely geometrical terms.
However, in every topological model,
the right hand side of equation (\ref{intro1}) should admit an
independent definition as a
functional integral in a suitable Lagrangian quantum field theory,
in order to be of physical interest.
The basic feature of the classical Lagrangian
is that of possessing
a very large group of gauge symmetries,
the {\sl topological symmetry}, which is the most general continuous
deformation
of the classical fields. The topological symmetry is treated
through the standard techniques of BRST quantization and
the instanton equations are imposed as a gauge-fixing.
In this
way, eq. (\ref{intro1}), rather than a definition, becomes a map
between a {\sl physical} and
a {\sl mathematical} problem, a correspondence which is the main source
of interest for topological field theories.
{}From the physical point of view, the basic
properties of a topological field
theory are encoded in the BRST algebra ${\cal B}$ \cite{BRST,bonora} and
the anomaly of ghost number.
The moduli space cohomology gives rise to the cohomology
of the BRST operator $s$:
the left-hand side of eq. (\ref{intro1}) is
the vacuum expectation
value of the product of $n$ representatives ${\cal O}_i$
of non-trivial BRST cohomology classes
\begin{equation}
s {\cal O}_i =0,\quad\quad
{\cal O}_i \ne s\{{\rm anything} \}.
\label{intro13}
\end{equation}
Correspondingly, the right-hand side of eq.\ (\ref{intro1}) can be
expressed as an integral of a product of cocycles over the moduli space.
In full generality, the BRST algebra ${\cal B}$ can be decomposed as
\begin{equation}
{\cal B}={\cal B}_{gauge-free}\oplus{\cal B}_{gauge-fixing},
\label{intro3}
\end{equation}
where ${\cal B}_{gauge-free} \subset {\cal B}$ is the subalgebra
that contains only the
physical fields and the ghosts (fields of non negative ghost number),
while ${\cal B}_{gauge-fixing}$ is the extension
of ${\cal B}_{gauge-free}$ by means of antighosts and Lagrange
multipliers (or the corresponding gauge-fixing conditions),
of non positive ghost number. Usually, ${\cal B}_{gauge-fixing}$
is trivial, but this is not the case we deal with, since the
interesting features of our theory come precisely from the
nontrivial nature of ${\cal B}_{gauge-fixing}$.
We postpone the discussion of the structure of ${\cal B}_{gauge-free}$
and ${\cal B}_{gauge-fixing}$, in order to consider the
mathematical meaning of the other basic
aspect of the field theoretical approach, i.e.\ the anomaly of
ghost number.
The left-hand side of eq. (\ref{intro1}) can be non-zero only if
\begin{equation}
\sum_{i} d_i =\Delta U =\int \partial^\mu J^{(ghost)}_\mu d^D x,
\label{intro13b}
\end{equation}
where $J^{(ghost)}_\mu $ is the ghost-number current, $\Delta U$ is its
integrated anomaly and $d_i= gh[{\cal O}_i]$ is the ghost number of
${\cal O}_i$ \cite{index1,index2}.
The divergence of the ghost-current has an interpretation as index-density
for some elliptic
operator $\nabla$ that appears in the quantum action through the kinetic
term of the ghost($C$)-antighost($\bar C$)
system:
\begin{equation}
S_{quantum}=\int ( \cdots + {\bar C } \nabla C + \cdots ).
\label{intro14}
\end{equation}
We have
\begin{equation}
{\rm index}_{\nabla}=\# ~{\rm zero~modes~of~ghosts} ~-~\#~{\rm zero~modes~of~
antighosts}.
\label{intro15}
\end{equation}
On the other hand, the right-hand side of eq. (\ref{intro1}) can be
non-zero only if the sum of the codimensions of the homology
cycles $H_i$
adds up to the total dimension of the moduli space,
\begin{equation}
\sum_{i} {\rm codim} \, H_i ~=~{\rm dim} \, {\cal M}.
\label{intro15b}
\end{equation}
In other words, the physical observables must reduce, after functional
integration on the irrelevant
degrees of freedom, to cocycle forms $\Omega_i$ of degree $d_i$ on
the moduli-space
${\cal M}$ (the Poincar\'e duals of the cycles $H_i$) and their wedge
product must be a top-form.
This means
\begin{equation}
\Delta U={\rm dim} \,{\cal M}.
\label{intro16}
\end{equation}
Such an equation is understood in the following way. In the
background of an instanton,
namely of a gauge-fixed configuration, the zero-modes of the
topological ghosts
correspond to the
residual infinitesimal deformations that preserve the gauge condition.
Their number is therefore
the dimension of the tangent space to the parameter space of the instanton.
The zero-modes
of the antighosts correspond, instead, to potential global obstructions to
the integration
of these infinitesimal deformations \cite{index1,index2}. The index $\Delta U$
is therefore named the formal dimension
of the moduli space ${\cal M}$. The true dimension of
the moduli space
is larger than or equal to its formal dimension,
\begin{equation}
{\rm dim}^{true}{\cal M}\ge{\rm dim}^{formal}{\cal M}=\Delta U,
\label{intro17}
\end{equation}
depending on whether the potential obstructions become real obstructions
or not.
In the case of 2D topological gravity, Witten started \cite{witten}
from the right-hand
side of eq. (\ref{intro1}),
proposing a completely geometrical definition. He
assumed that the
relevant moduli-space is the standard moduli-space ${\cal M}_{g,s}$
of Riemann surfaces of
genus $g$ with $s$ marked points, whose dimension is well known to be
\begin{equation}
{\rm dim}_{\bf C} {\cal M}_{g,s}=3g - 3 + s
\label{intro18}
\end{equation}
and identified the observables ${\cal O}_i$
with the Mumford-Morita cohomology classes, namely the
$2d_i$-forms
$\left [ c_1 \left ( {\cal L}_i \right ) \right ]^{d_i}$ \cite{moritamumford}
on ${\cal M}_{g,s}$ introduced in eq. (\ref{intro_0}).
Correspondingly, Witten obtained the selection rule:
\begin{equation}
\sum_{i=1}^{s} d_i =3g - 3 + s.
\end{equation}
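Two standard low-genus checks of this selection rule may help orient the reader; the values are quoted from the known intersection-number results and are not computed in this paper. Writing ${\cal O}_d$ for an observable with $d_i=d$, one has
\begin{equation}
<{\cal O}_0 \left ( x_1 \right ) {\cal O}_0 \left ( x_2 \right )
{\cal O}_0 \left ( x_3 \right )>_{g=0}\, =\,1,
\quad\quad
<{\cal O}_1 \left ( x_1 \right )>_{g=1}\, =\,{1\over 24},
\end{equation}
consistently with $\sum_i d_i=0=3\cdot 0-3+3$ in the first case (${\cal M}_{0,3}$ is a point) and $\sum_i d_i=1=3\cdot 1-3+1$ in the second.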
For a reason that will be clear in a moment, it is convenient, from
the
field theoretical point of view, to rewrite this condition as
\begin{equation}
\sum_{i=1}^{s} ( d_i - 1) =3g - 3,
\label{intro19}
\end{equation}
where now the right hand side is the dimension of the moduli space
${\cal M}_{g}$ without marked points.
In this way, Witten assumed that in the field-theoretical formulation
of 2D topological
gravity, whatever it might be, the integrated anomaly of the
ghost-number current should be
\begin{equation}
\Delta U =\int \partial^\alpha J^{(ghost)}_\alpha d^2 x=
3g - 3.
\label{intro20}
\end{equation}
To understand this way of formulating the sum rule, we have to recall
the concept of
descent equations for the physical observables.
Every local observable ${\cal O}_i$ of 2D topological gravity can be
written as $\sigma_{d_i}^{(0)}(x_i)=\gamma_0^{d_i}(x_i)$,
$\gamma_0(x)$ being a suitable composite field.
${\cal O}_i$ is a zero form of ghost-number
$2 d_i$ and it is related to a
one-form $\sigma_{d_i}^{(1)}(x_i)$ of ghost number $2d_i-1$ and to
a two-form
$\sigma_{d_i}^{(2)}(x_i)$ of ghost number $2(d_i-1)$
{\sl via} the descent equations
\begin{equation}
s\sigma_{d_i}^{(0)}=0,\quad\quad
s \sigma_{d_i}^{(1)}= d \sigma_{d_i}^{(0)},\quad\quad
s \sigma_{d_i}^{(2)}=d \sigma_{d_i}^{(1)},\quad\quad 0=d\sigma^{(2)}_{d_i}.
\label{intro21}
\end{equation}
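These relations are mutually consistent: using $s^2=0$, $d^2=0$ and the fact that $s$ can be commuted (or anticommuted, depending on conventions) past $d$, one checks
\begin{equation}
s^2 \sigma_{d_i}^{(1)}=s\, d \sigma_{d_i}^{(0)}=\pm \,d \,s \sigma_{d_i}^{(0)}=0,
\quad\quad
s^2 \sigma_{d_i}^{(2)}=s \,d \sigma_{d_i}^{(1)}=\pm\, d\, s \sigma_{d_i}^{(1)}=
\pm \,d^2 \sigma_{d_i}^{(0)}=0.
\end{equation}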
As a consequence, the integrated observables
$\int_{\Sigma_{g}} \sigma_{d_i}^{(2)}$ are BRST-closed,
\begin{equation}
s \int_{\Sigma_{g}} \sigma_{d_i}^{(2)}=0,
\label{intro22}
\end{equation}
and can be traded for the local ones, by an equivalence
\begin{equation}
< {\cal P}(x_1)\sigma_{d_1}^{(0)}(x_1) \cdots
{\cal P}(x_n)\sigma_{d_n}^{(0)}(x_n) >\,\approx\,
< \int_{\Sigma_{g}} \sigma_{d_1}^{(2)}(x_1) \cdots
\int_{\Sigma_{g}} \sigma_{d_n}^{(2)}(x_n) >.
\label{intro23}
\end{equation}
${\cal P}(x_i)$ denote certain {\sl picture changing operators}
\cite{verlindesquare}, that
have ghost number $-2$ and whose mathematical meaning is that
of marking the points $x_i$ where the local operators $\sigma^{(0)}(x_i)$
are inserted.
Both members of (\ref{intro23}) can be calculated as intersection
integrals over ${\cal
M}_g$. The integrations appearing on the right hand side, however, can be
also understood as integrations over the positions of ``marked
points'' $x_i$, so that one is allowed to conjecture the
correspondence
\begin{equation}
\sigma^{(2)}_{d_i}(x_i)\sim \gamma_0^{d_i}(x_i)\sim
[c_1({\cal L}_i)]^{d_i},
\quad \gamma_0(x_i)\sim c_1({\cal L}_i).
\end{equation}
Eq.\ (\ref{intro23}) says that the topological amplitude can also be viewed
as the correlator of BRST cohomology classes of degree $2(d_i-1)$
on the space ${\cal M}_{g}$ and
Witten's conjecture (\ref{intro20}) on the integrated anomaly
of the ghost-current in any field-theoretical
formulation of the theory is explained.
Indeed, in \cite{verlindesquare} Verlinde and Verlinde constructed
an explicit field theory model where eq. (\ref{intro20}) is verified.
On the contrary, the key result of the present paper is the following one.
We present a different field
theoretical model of topological gravity where eq.\ (\ref{intro20}) is
replaced by
\begin{equation}
\Delta U =\int \partial^\alpha J^{(ghost)}_\alpha d^2 x=
2g - 2.
\label{intro21b}
\end{equation}
The geometrical interpretation of this fact has already been anticipated.
Eq. (\ref{intro21b})
indicates that we are dealing with a {\sl constrained} moduli space
${\cal V}_{g}$
whose formal complex dimension is
${\rm dim}^{formal}_{\bf C}\, {\cal V}_{g} =2 g - 2$. Actually
the constrained
moduli-space ${\cal V}_{g}$
is the Poincar\'e dual of $c_g({\cal E}_{hol})$ and
its true dimension turns out to be
${\rm dim}^{true}_{\bf C}{\cal V}_{g} =2g-3$,
which is smaller than
the formal dimension, another apparently puzzling result. However,
if we recall that the effective
moduli space emerges from a constraint on a larger moduli-space,
then the fact that the formal-dimension
is bigger than the actual dimension becomes less mysterious. Indeed,
this time, in the sector of
the BRST-algebra that implements the constraint, usual rules are inverted.
Antighost zero-modes correspond
to local vector fields normal to the constrained surface and ghost
zero-modes correspond to possible obstructions to the
globalization of such local vector fields. As a consequence, the difference,
in the constraint sector of the BRST algebra, of antighost zero-modes
minus ghost zero-modes, expresses the minimum number of
constraints that are imposed. If the potential obstructions do not occur,
then all the
antighosts correspond to actual normal directions to the constrained
surface and the true
dimension of the constraint surface is smaller than its formal dimension.
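In our model this counting can be made explicit if one attributes, as our interpretation of the counting, $g$ antighost zero-modes to the graviphoton moduli and one ghost zero-mode to the constant $U(1)$ gauge ghost:
\begin{equation}
{\rm dim}^{formal}_{\bf C}\, {\cal V}_{g}=(3g-3)-(g-1)=2g-2,
\quad\quad
{\rm dim}^{true}_{\bf C}\, {\cal V}_{g}=(3g-3)-g=2g-3,
\end{equation}
the true dimension being attained when none of the potential obstructions occurs.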
Notice that the constraint imposed is not a ``generic'' constraint,
but a BRST constraint, i.e.\ a gauge-fixing of some symmetry.
This translates geometrically into the fact, already pointed out, that the
constrained moduli space is not a specific hypersurface in ${\cal M}_g$,
rather it is a ``slice choice''
of a representative in a homology class of closed
submanifolds (the Poincar\'e dual of $c_g({\cal E}_{hol})$).
Let us now discuss the general structure of our topological field-theory
in comparison
with that introduced by Verlinde and Verlinde in \cite{verlindesquare}.
The basic idea of \cite{verlindesquare} is
that the moduli-space of Riemann surfaces $\Sigma_g$ can be related
to the moduli-space of
$SL(2,R)$ flat connections on the same surface. This goes back to the
classical Fenchel-Nielsen
parametrization of the Teichm\"uller space. A flat $SL(2,R)$ connection
$\left \{ e^\pm , e^0 \right \}$
contains the zweibein $e^\pm$
and the spin connection of a constant curvature metric on the upper
half-plane $H$.
If that connection is pulled back to the quotient $H/\Gamma_g$, where
$\Gamma_g$ is a Fuchsian
group realizing the homotopy group $\pi_1 \left ( \Sigma_g \right )$ of
a genus $g$
surface, then the connection $\left \{ e^\pm , e^0 \right \}$ realizes a
constant curvature
metric on that surface. In view of this, the authors of \cite{verlindesquare}
identified
the gauge-free BRST algebra ${\cal B}_{gauge-free}^{2D~grav}$ of 2D
topological gravity with
${\cal B}_{gauge-free}^{\bf SL(2,R)}$, namely the {\sl gauge-free}
topological algebra
associated with the Lie-algebra ${\bf SL(2,R)}$.
For a Lie algebra ${\bf g}$ with structure constants $f^I_{\phantom{I}JK}$,
$I=1,\ldots {\rm dim}\, {\bf g}$,
${\cal B}_{gauge-free}$ is
\begin{eqnarray}
s A^I&=& \Psi^I- dC^I-f^I_{\phantom{I}JK}A^JC^K, \nonumber\\
s \Psi^I &=& -d\Gamma^I- f^I_{\phantom{I}JK}A^J\Gamma^K-f^I_{\phantom{I}JK}
C^J\Psi^K, \nonumber\\
s C^I&=&\Gamma^I-{1\over 2}f^I_{\phantom{I}JK}C^JC^K,\nonumber\\
s \Gamma^I &=& f^I_{\phantom{I}JK}C^J\Gamma^K,
\end{eqnarray}
$\Psi$ being the topological ghost, $C$ the ordinary gauge ghost
and $\Gamma$ the ghost for the ghosts.
In the case of ${\bf SL(2,R)}$, the structure constants $f^I_{\phantom{I}JK}$
are encoded in the curvature definitions
\begin{equation}
R^\pm = d e^{\pm} \pm e^0 \wedge \, e^{\pm},\quad\quad
R^0 = d e^0 + a^2 e^+ \wedge e^-,
\end{equation}
where $a^2\in{\bf R}_+$ expresses the size of the
constant negative
curvature.
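Explicitly, setting $A=e^+ T_+ + e^- T_- + e^0 T_0$ and $R^I=dA^I+{1\over 2}f^I_{\phantom{I}JK}\, A^J\wedge A^K$, one reads off $f^{\pm}_{\phantom{\pm}0\pm}=\pm 1$ and $f^{0}_{\phantom{0}+-}=a^2$, i.e.\ the commutation relations
\begin{equation}
\left [ T_0 , T_{\pm} \right ]=\pm \,T_{\pm},
\quad\quad
\left [ T_+ , T_- \right ]=a^2 \,T_0 ,
\end{equation}
which for $a^2 >0$ reproduce ${\bf SL(2,R)}$ in a rescaled (light-cone-type) basis, while in the limit $a^2 \rightarrow 0$ the algebra contracts to the two-dimensional Poincar\'e algebra.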
The gauge-fixing algebra
${\cal B}_{gauge-fixing}^{\bf SL(2,R)}$ introduced by Verlinde and
Verlinde realizes in an obvious
manner the geometrical idea of flat-connections. The primary
topological symmetry is broken
by introducing antighosts whose BRST-variations are the Lagrange
multipliers for the
constraints $R^\pm = R^0=0$. In addition antighosts and Lagrange
multipliers are also
introduced to fix diffeomorphisms, Lorentz invariance
and the gauge-symmetry of the topological ghosts,
namely superdiffeomorphisms.
After gauge-fixing and in the limit
$a^2\rightarrow 0$, the model of ref.\ \cite{verlindesquare}
reduces to the sum of two topological
conformal field theories $Liouville\, \oplus \, Ghost$, that can be
untwisted to N=2 conformal
field-theories of central charges $c_{Liouville}=9$ and $c_{Ghost}=-9$.
\par
As one sees, this construction is in the spirit of the Baulieu-Singer
approach to topological
field-theories, where the gauge-fixing sector is invented {\sl ad hoc}.
On the other hand,
the topological twisting algorithm produces topological theories
where the gauge-fixing part of the BRST-algebra is already encoded in
the original untwisted
N=2 field-theory model. For instance, applying these ideas to the case
of D=4, N=2
$\sigma$-models we discovered the concept of hyperinstantons
\cite{ansfre2}.
In this paper we adopt the
twisting strategy also in the case of 2D topological gravity. The necessary
input is a
definition of two-dimensional N=2 supergravity. This problem is solved by
N=2 supersymmetrizing
a reasonable definition of two-dimensional gravity. Following
\cite{cange,teitel} we assume as Lagrangian of
ordinary 2D gravity
the following one:
\begin{equation}
{\cal L}_{2D-grav}=\Phi(R[g]+a^2) \sqrt { \det g}
\label{intro923}
\end{equation}
where $\Phi$ and the metric $g_{\alpha\beta}$ are treated as
independent fields.
The variation in $\Phi$ imposes the constant curvature constraint
on $g_{\alpha\beta}$.
The Lagrangian (\ref{intro923}) is equivalent,
through the
field redefinition $g_{\mu\nu}\rightarrow g_{\mu\nu}{\rm e}^\Phi$
to the more conventional Liouville Lagrangian
\begin{equation}
{\cal L}_{Liouville} =[\nabla_\alpha \Phi \nabla^\alpha \Phi +
\Phi(R[g]+a^2 {\rm e}^\Phi ) ]
\sqrt { \det g}.
\label{intro24}
\end{equation}
Both Lagrangians (\ref{intro923}) and (\ref{intro24}) can be N=2
supersymmetrized and the results
are related to each other by a field redefinition as in the N=0 case.
Hence, we work with the
N=2 analogue of the simpler form (\ref{intro923}) and to it we apply
the topological twist.
The rest follows, although it requires interpretations that are by no
means straightforward. As anticipated, the
essential new feature is the presence, in the N=2 gravitational
multiplet, of the
graviphoton, a $U(1)$ gauge-connection $A$ that maintains
ghost number $0$ after the twist. Hence,
the geometrical structure we deal with is that of a $U(1)$ bundle
on a Riemann surface.
The possible deformations of this structure are more than the
deformations of a bare
Riemann surface. Indeed, the total number of moduli for this
bundle is $4g - 3$,
$g$ new moduli being contributed by the deformations of ${A}$.
The naive conclusion
would be that the gauge-free topological algebra underlying the
twisted theory is the
direct sum ${\cal B}_{gauge-free}^{\bf SL(2,R)} \oplus
{\cal B}_{gauge-free}^{\bf U(1)}$.
This would lead to intersection theory in the $4g-3$
dimensional moduli-space
of the $U(1)$-bundle over $\Sigma_g$, but this is not the case. What
actually happens is that
the graviphoton belongs to ${\cal B}_{gauge-fixing}$, rather than
to ${\cal B}_{gauge-free}$,
satisfying a BRST-algebra of the type
\begin{equation}
s{\bar\psi}=A-d{\gamma},\quad\quad sA=-dc,\quad\quad s\gamma=c,\quad
\quad sc=0,
\label{intro25}
\end{equation}
where ${\bar \psi}$ is a one-form of ghost number $-1$,
${\gamma}$ is a zero-form of ghost number $0$ and
$c$ is the ordinary gauge ghost (with ghost-number $1$).
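As a simple consistency check (we use the standard convention that the
BRST operator $s$ anticommutes with the exterior derivative $d$), the
algebra (\ref{intro25}) is nilpotent:
\begin{equation}
s^2{\bar\psi}=sA-s(d\gamma)=-dc+d(s\gamma)=-dc+dc=0,\quad\quad
s^2A=-s(dc)=d(sc)=0,\quad\quad s^2\gamma=sc=0.
\end{equation}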
The geometrical meaning of the gauge-fixing
algebra (\ref{intro25}) has already been anticipated. The deformations
of $A$ correspond to constraints on the allowed deformations of the bundle
base-manifold, namely of the Riemann surface.
How this mechanism is implemented by the functional integral
is what we show in the later technical sections of our paper.
We conjecture that BRST-algebras of type (\ref{intro25}), associated
with a principal G-bundle $E \rightarrow M$ always correspond
to constraints on the moduli-space of the base-manifold $M$.
Hence, our formulation of 2D topological gravity is based on the same
{\sl gauge-free}
algebra ${\cal B}_{gauge-free}^{\bf SL(2,R)}$ as the model of Verlinde
and Verlinde, but it has a very different and more subtle
{\sl gauge-fixing} algebra. At the level of conformal
field theories there is also a crucial difference, which
keeps track of the constraint on moduli space.
Indeed, after gauge-fixing
and in the limit $a^2\rightarrow 0$, our model also reduces to the sum
of two topological
conformal field theories $Liouville\, \oplus \, Ghost$;
the central charges, however, are $c_{Liouville}=6$
and $c_{Ghost}=-6$,
rather than $9$ and $-9$. We discuss the structure of these conformal
field theories in more detail later on.
Let us conclude by summarizing our
viewpoint. Equation (\ref{intro1}) possesses two deeply different
meanings,
depending on whether one reads it from the left (= physics)
to the right (= mathematics), or from the right to the left.
i) From the right to the left, equation (\ref{intro1}) means:
``given a well defined mathematical problem (intersection theory on the
moduli space of the instantons of some class of maps), find a quantum
field theory
that represents the intersection forms as physical amplitudes (averages
of products of physical observables)''.
This problem is solved by BRST quantizing the most general continuous
deformations
of the classical fields (which sort of fields depending on the class of maps
that is under consideration) and imposing the instantonic equations as
a gauge-fixing.
ii) From the left to the right, (\ref{intro1}) means
``given a physically well-defined topological quantum field theory
(as is the topological twist of an N=2 supersymmetric theory), find
the mathematical problem (maps, instantons, intersection theory)
that it represents''.
Of course, there is no general recipe for solving this second problem.
We started considering it in four dimensions \cite{ansfre,ansfre2,ansfre3}
and this led to interesting results. The topological twist
of the N=2 supersymmetric $\sigma$-model allowed us to
introduce a concept of instantons ({\sl hyperinstantons})
\cite{ansfre2} that we later
\cite{ansfre3} identified with a triholomorphicity condition on the
embeddings of four dimensional almost quaternionic manifolds into
almost quaternionic manifolds. With this paper, we continue the program
begun in ref.\ \cite{billofre} of considering the same problem
in two dimensions, aiming to uncover the mathematical
meanings of the topological field theories obtained by twisting the D=2
N=2 supersymmetric theories. We feel that the amazing secrets
of N=2 supersymmetry have not yet been fully uncovered.
Our paper is organized in three main parts. The first part (sections
\ref{N2D2}-\ref{twist})
is devoted to the
construction of N=2 Liouville theory and its twist. In section
\ref{N2D2} we present D=2 N=2
supergravity in the rheonomic approach,
while in section \ref{coupling} we couple it to N=2
Landau-Ginzburg chiral matter \cite{LG,topLG}. In section \ref{liouville},
we derive the Lagrangian of N=2 Liouville theory, that involves
a suitable combination of the gravitational multiplet
and a simple chiral multiplet.
We BRST quantize the theory in section \ref{gaugefree} and perform the
topological twist in section \ref{twist}.
The second part (sections \ref{conformal} and \ref{twist1}) is devoted
to conformal field theory.
In section \ref{conformal}, we gauge-fix N=2 Poincar\`e gravity and show
that it corresponds to a conformal field theory
with $c=c_{Liouville}+c_{ghost}$,
$c_{Liouville}=6$ and $c_{ghost}=-6$. We study the
N=2 currents, the BRST current and various conformal properties.
In section \ref{twist1} we study the topological twist of the
gauge-fixed theory, describing the match between the general twist procedure
of section \ref{twist} and the procedure that is well known
in conformal field theories \cite{eguchiyang}.
We study some properties that emphasize
the differences between our model and the Verlinde and Verlinde one.
The third part corresponds to section \ref{geometry}, where
we suggest the mathematical interpretation of the
topological theory and draw
the correspondence between quantum field theory and geometry.
Finally in section \ref{concl} we address some open problems.
\section{N=2 D=2 supergravity}
\label{N2D2}
Following \cite{cange}, we take as the classical Lagrangian of pure gravity
the one displayed in eq.\ (\ref{intro923}).
Alternatively, in view of the equivalence
between (\ref{intro923}) and (\ref{intro24}),
we can also describe the action of
pure 2D gravity as the Polyakov action for a Liouville system.
We insist on the concept of Polyakov formulation, since the key point
in eq.\ (\ref{intro923}) is that both $\Phi$ and $g_{\alpha\beta}$ have
to be treated as independent fields. This being clarified, we define
N=2, D=2 supergravity as the supersymmetrization of eq.\ (\ref{intro923}).
To perform such a supersymmetrization, we need the following
two ingredients.
i) An off-shell representation of the N=2 algebra, that corresponds to the
graviton multiplet containing the metric $g_{\alpha\beta}$.
ii) An off-shell representation of the N=2 algebra that corresponds
to the chiral scalar multiplet containing the field $\Phi$.
The final Lagrangian is obtained by combining
these two multiplets.
For the purpose of this construction, we use the so called rheonomic
formalism. This formalism was
originally proposed in 1978-79
\cite{libro}, as an alternative method with respect to
the superfield formalism in studying
supersymmetric theories.
Its main advantage is the reduction of the
computational effort for constructing supersymmetric
theories to a simple series of geometrically meaningful steps.
{\sl Rheonomy} means ``law of the flux'' and refers to the
fact that the supersymmetrization of a theory can be viewed as a
Cauchy problem, in which spacetime, described by the inner (bosonic)
components $x$ of superspace, represents the boundary, while the outer
(Grassmann) components $\theta$
represent the direction of ``motion'': the rheonomic
principle represents the ``equation of motion''.
At the end of the rheonomic procedure, the only free choice
is the boundary condition, which is the spacetime theory,
projection of the superspace theory onto the inner components.
The fields (or
more generally differential forms) are functions on superspace and
the supersymmetry transformations are viewed as odd diffeomorphisms in
superspace.
We shall give, throughout the paper, many details on the rheonomic
approach, in order to provide enough information to use it.
In a recent paper \cite{billofre},
the rheonomic formalism was applied to the construction
of D=2 globally supersymmetric N=2 systems. In particular,
the rheonomic formulation of N=2 chiral multiplets
coupled to N=2 gauge multiplets
was provided. This involved the solution of Bianchi identities
and the construction of rheonomic actions in the background of a flat
N=2 superspace.
In this section we generalize the construction to curved superspace.
To this effect an important preliminary point to be discussed is the
following. The formulation of supergravity is performed in the
physical Minkowski signature $\eta_{ab}={\rm diag} (+,-)$, yet, after
Wick rotation we shall deal with supersymmetry defined on compact
Riemann surfaces. These have a positive, null or negative
curvature ${\cal R}$ depending on their genus $g$, (${\cal
R} > 0$ for $g=0$, ${\cal R}=0$ for $g=1$ and ${\cal R}<0$ for
$g \ge 2$). The sign of the curvature in the Euclidean theory is
inherited from the sign of the curvature in the Minkowskian formulation.
Since we want to discuss the theory for all genera $g$, we need
formulations of supergravity that can accommodate both signs of the
curvature. From the group-theoretical point of view, curved superspace
with ${\cal R}> 0$ is a continuous deformation of the supersymmetric
version of the de Sitter space, whose isometry group is $SO(1,D)$, $D$
being the space-time dimension. Hence, for ${\cal R}>0$ the
appropriate superalgebra to begin with is, if it exists, the N-extended
supersymmetrization of $SO(1,D)$. Similarly, for ${\cal R}<0$,
curved superspace is a continuous deformation of the supersymmetric
version of the anti de Sitter space, whose isometry group is $SO(2,D-1)$.
Hence, in this second case, the appropriate superalgebra to start from
in the construction of supergravity is the N-extended
supersymmetrization of $SO(2,D-1)$. Alternatively, one can start from
the Poincar\'e superalgebra, which corresponds to an In\"on\"u-Wigner
contraction of either the de Sitter or the anti de Sitter algebra,
and reobtain either one of the decontracted algebras as
vacuum configurations alternative to the Minkowski one,
by giving suitable expectation values to the auxiliary
fields appearing in the rheonomic parametrizations of the Poincar\'e
curvatures. Actually, in space-time dimensions different from D=2, the
supersymmetric extensions of both the de Sitter and the anti de Sitter
algebras are not guaranteed to exist. For instance, in the relevant D=4
case, the real orthosymplectic algebra $Osp(4/N)$ is the
N-superextension of the anti de Sitter algebra $SO(2,3)$ but a
superextension of the de Sitter algebra $SO(1,4)$ does not exist.
This is the group-theoretical rationale behind some otherwise
well-known facts.
In four dimensions all supergravity vacua with a positive sign of
the cosmological constant (de Sitter vacua) break supersymmetry
spontaneously, while the only possible supersymmetric vacua are either
in Minkowski or in anti de Sitter space. Indeed, starting from a
formulation of D=4 supergravity based on the Poincar\'e superalgebra,
one obtains both de Sitter and anti de Sitter vacuum configurations
through suitable expectation values of the auxiliary fields,
but it is only in the
anti de Sitter case that these constant expectation values are
compatible with the Bianchi identities of a superalgebra, namely
respect supersymmetry \cite{libro}.
In the de Sitter case the gravitino develops
a mass and supersymmetry is broken. In other words, in D=4,
supersymmetry chooses a definite sign for the curvature ${\cal R}<0$.
\par
If this were the case also in $D=2$, supersymmetric theories could not
be constructed on all Riemann surfaces, but only either in genus $g\ge
1$ or in genus $g\leq 1$. Fortunately, for D=2 it happens that the de
Sitter group $SO(1,2)$ and the anti de Sitter group $SO(2,1)$ are
isomorphic. Hence, a supersymmetrization of one is also a
supersymmetrization of the other, upon a suitable correspondence.
Once we have fixed the conventions for what we
call the physical zweibein, spin connection and gravitini, we obtain
an off-shell formulation of supergravity where the sign of the
curvature is fixed: it is either non-negative or non-positive.
Through
a field correspondence,
we can however make a transition from one case to
the other, but a continuous deformation of the auxiliary field
vacuum expectation value is not sufficient for this
purpose. Furthermore, as we are going to see, the In\"on\"u-Wigner
contraction of the N=2 algebra also displays some new features
with respect to the $U(1)$ generator associated with the graviphoton.
The most general $D=2$ superalgebra one can write down,
through Maurer-Cartan equations, is obtained by setting to zero the
following curvatures:
\begin{eqnarray}
T^+ &=& de^+ + \omega e^+ -\o{i}{2} \zeta^+ \zeta^-, \nonumber\\
T^- &=& de^- - \omega e^- -\varepsilon\o{i}{2} \tilde\zeta_+ \tilde\zeta_-,
\nonumber\\
\rho^+ &=& d \zeta^+ + \o{1}{2} \omega \zeta^+ +\o{ia_1}{4} A\zeta^+
+a_1 a_2 \tilde\zeta_- e^+, \nonumber\\
\rho^- &=& d \zeta^- + \o{1}{2} \omega \zeta^- -\o{ia_1}{4} A\zeta^-
+a_1 a_2 \tilde\zeta_+ e^+, \nonumber\\
\tilde\rho_+ &=& d \tilde\zeta_+ -\o{1}{2} \omega \tilde\zeta_+ -\o{ia_1}{4} A
\tilde\zeta_+ -\varepsilon a_1 a_2 \zeta^- e^-,\nonumber\\
\tilde\rho_- &=& d \tilde\zeta_- -\o{1}{2} \omega \tilde\zeta_- +\o{ia_1}{4} A
\tilde\zeta_-
-\varepsilon a_1 a_2 \zeta^+ e^-,\nonumber\\
R &=& d \omega -2 \varepsilon a_1^2 a_2^2 e^+ e^- -\o{i}{2} a_1 a_2 (\zeta^+
\tilde\zeta_+ + \zeta^- \tilde\zeta_-),\nonumber\\
F&=& dA -a_2 (\zeta^- \tilde\zeta_- -\zeta^+ \tilde\zeta_+),
\label{rhbi}
\end{eqnarray}
where $e^+$ and $e^-$ denote the two components (left and right
moving) of the world sheet zweibein one form, while $\zeta^+$,
$\tilde\zeta_+$ are the two
components of the gravitino one form, $\zeta^-$,
$\tilde\zeta_-$ are the two components of its complex conjugate.
$\varepsilon$ can take the values $\pm 1$ and distinguishes the de
Sitter ($\varepsilon =1$) and anti de Sitter ($\varepsilon =-1$)
cases. Formally, one can pass from positive to negative curvature
by replacing $e^-$ and $T^-$ with $-e^-$ and $-T^-$.
The algebra (\ref{rhbi}) contains two free (real) parameters $a_1, a_2$ in its
structure constants. Choosing $a_1=a_2=\o{a}{\sqrt{2}}\ne 0$ we have
the usual curvature definitions for a de Sitter algebra with
cosmological constant $\Lambda= \varepsilon a^2$,
namely the superextension of the
$SL (2, R)$ Lie algebra. In the limit $a_2 \to 1$ and $a_1 \to 0$ we
get the usual $D=2$ analogue of the
$N=2$ super Poincar\'e Lie algebra, where, calling $L$ the
$U(1)$ generator dual to the graviphoton, the supercharges $Q^\pm ,
\t Q_\pm$ are neutral under $L$,
\begin{equation}
[L,Q^\pm ]= [ L,\tilde Q_\pm ]=0.
\label{2}
\end{equation}
In this case, the generator $L$ can be interpreted
as a ``central charge'', since
it appears in the supercharge anticommutators:
\begin{equation}
\{ Q^- , \t Q_- \} \sim L, \qquad
\{ Q^+ , \t Q_+ \} \sim L. \end{equation}
Finally, in the limit $a_2 \to 0$, $a_1 \to 1$ we get a new kind of
Poincar\'e superalgebra, named by us ``charged Poincar\'e'', where the
supercharges do rotate under the $U(1)$ action:
\begin{equation}
[ L,Q^\pm ]=\pm Q^\pm,\quad\quad [ L,\tilde Q_\pm ]=\mp \tilde Q_\pm.
\end{equation}
In this case $L$ is not a central charge, since it does not appear in the
supercharge anticommutators. Indeed, one has
\begin{eqnarray}
\{ Q^+ , Q^- \} = P, \quad &\quad & \{ \tilde Q_+ , \tilde Q_- \}=
\tilde P, \nonumber\\
\{ Q^+ , \t Q_+ \} = 0, \quad &\quad & \{ Q^- , \tilde Q_- \}=0,
\end{eqnarray}
$P$ and $\t P$ being the left and right translations, dual to $e^+$ and
$ e^-$ respectively.
In \cite{billofre}
the construction of global $N=2$ supersymmetric theories was
based on the use of the ordinary Poincar\'e superalgebra. In this case
we can always choose the gauge $\omega =A=0$ and we can altogether
forget about these one forms. In the solution of Bianchi identities we
simply have to respect global Lorentz and $U(1)$ symmetries. However,
the flat case is actually unable to distinguish between the ordinary
and charged Poincar\'e algebra. At the level of curved superspace,
on the other hand,
there is a novelty that distinguishes $D=2$ from
higher dimensions. It turns out that the correct algebra is the
charged one.
The field content of
the off-shell graviton multiplet is easily described. The zweibein
describes one bosonic degree of freedom (four components restricted to
one by two diffeomorphisms and by the Lorentz symmetry), while each
gravitino describes two degrees of freedom
(four components restricted by two supersymmetries). Finally, the
graviphoton $A$ yields one bosonic degree of freedom (two components
restricted by the $U(1)$ gauge symmetry). The mismatch of two bosonic degrees
of freedom is filled by a complex scalar auxiliary field $M$ and by
its conjugate $\bar M$. The problem is therefore that of writing a
rheonomic parametrization for the curvatures (\ref{rhbi}) using as free
parameters their space-time components plus an auxiliary complex
scalar $M$.
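Summarizing the counting just described (a bookkeeping we add for clarity):
\begin{equation}
\underbrace{1}_{{\rm zweibein}}+\underbrace{1}_{A}+
\underbrace{2}_{M,\,\bar M}=4=\underbrace{2+2}_{{\rm gravitini}}.
\end{equation}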
As can be easily read from (\ref{rhbi}) the curvature two forms satisfy:
\begin{eqnarray}
\nabla T^+ &=& Re^+ - \o{i}{2} (\rho^+ \zeta^- -\zeta^+ \rho^-),\nonumber\\
\nabla T^- &=& -Re^- - \o{i}{2} \varepsilon
(\tilde\rho_+ \tilde\zeta_- -\tilde\zeta_+ \tilde\rho_-),
\nonumber\\
\nabla \rho^\pm &=& \o{1}{2} R \zeta^\pm \pm\o{ia_1}{4} F \zeta^\pm
+a_1 a_2 (\tilde\rho_\mp e^+ -\tilde\zeta_\mp T^+ ),\nonumber\\
\nabla \tilde\rho_\pm &=&- \o{1}{2} R \tilde\zeta_\pm \mp\o{ia_1}{4}
F \tilde\zeta_\pm
-a_1 a_2 \varepsilon (\rho^\mp e^- - \zeta^\mp T^-),\nonumber\\
\nabla R &=& -2 a_1^2 a_2^2\varepsilon
(T^+e^- -e^+T^- )-\o{i}{2}a_1 a_2
(\rho^+ \tilde\zeta_+ -\zeta^+ \tilde\rho_+ + \rho^- \tilde\zeta_-
-\zeta^-\tilde\rho_- ),\nonumber\\
\nabla F &=& -a_2
( \rho^- \tilde\zeta_- -\zeta^-\tilde\rho_- -\rho^+ \tilde\zeta_+ +\zeta^+
\tilde\rho_+).
\label{bianchigrav}
\end{eqnarray}
The general solution for the above Bianchi identities with vanishing
torsions is
\begin{eqnarray}
\begin{array}{ll}
T^+ = 0, &\quad\quad
T^- = 0, \cr
\rho^+ = \tau^+ e^+ e^- -a_1 (M-a_2) \tilde\zeta_- e^+, & \quad\quad
\tilde\rho_+= \tilde\tau_+ e^+ e^- +a_1 \varepsilon (M -a_2)\zeta^- e^-,\cr
\end{array}\nonumber\\
\begin{array}{l}
R = ({\cal R} -2a_1^2a_2^2\varepsilon )e^+ e^- +
\o{i}{2}\varepsilon e^-(\tau^+ \zeta^- + \tau^- \zeta^+)
+ \o{i}{2} e^+ (\tilde\tau_- \tilde\zeta_+ +\tilde\tau_+
\tilde\zeta_-) \cr
\phantom{R=}+\o{ia_1}{2}
\left [(M-a_2) \zeta^- \tilde\zeta_- + (\bar M -a_2)\zeta^+
\tilde\zeta_+ \right ],\cr
F ={ \cal F} e^+ e^- + (M-a_2) \zeta^- \tilde\zeta_- - (\bar M -a_2)\zeta^+
\tilde\zeta_+ -\o{1}{a_1} \varepsilon (\tau^+ \zeta^- - \tau^- \zeta^+)e^- \cr
\phantom{R=}+\o{1}{a_1}(\tilde\tau_-
\tilde\zeta_+ -\tilde\tau_+ \tilde\zeta_-)e^+.
\end{array}
\label{rheograv}
\end{eqnarray}
The formulae for $\rho^-$ and $\tilde\rho_-$
can be derived from those of $\rho^+$ and $\tilde\rho_+$ by
complex conjugation. In doing this, one has to keep into account that
the complex conjugation reverses the order of the fields in a product
of fermions.
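For instance, for two generic fermions $\chi$ and $\lambda$ (hypothetical
symbols used only for illustration), the rule reads
\begin{equation}
(\chi\lambda)^*=\lambda^*\chi^*.
\end{equation}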
It is immediate to see from eqs.\ (\ref{rheograv})
that the limit $a_1 \to 0$ is singular; this
reflects the fact that we are not able to find the correct
parametrizations in this case.
On the contrary, the limit $a_2 \to 0$ is perfectly
consistent, and we call the resulting algebra the ``charged
Poincar\'e algebra''.
{}From now on, to avoid any confusion in using the formulae
for the curvature
definition, {\it we will always refer to the symbols
$R,F, \rho^\pm,\tilde\rho_\pm$
as those defined in (\ref{rhbi}) with }$a_1=1, a_2=0$.
The general rule for obtaining the solution to the Bianchi identities is the
rheonomic principle. We briefly describe it in three steps.
i) One expands the curvature two-forms $\rho^\pm$,
$\tilde\rho_\pm$, $R$ and $F$ in a basis of superspace two-forms:
the ``spacetime'' form $e^+e^-$ and the ``superspace'' forms, which
can be fermionic, like
$\zeta^\pm e^\pm$ and $\tilde\zeta^\pm e^\pm$,
or bosonic, like $\zeta^\pm\zeta^\pm$,
$\zeta^\pm\tilde\zeta^\pm$ and $\tilde\zeta^\pm\tilde\zeta^\pm$.
ii) The coefficients $\tau^\pm$, $\tilde\tau^\pm$, ${\cal R}$ and ${\cal
F}$ of the spacetime form $e^+e^-$ are the independent ones: the rheonomic
parametrizations (\ref{rheograv}) can be viewed as a definition of them.
They are the supercovariantized derivatives of
the fields. In particular, ${\cal R}$ is the supercurvature
and ${\cal F}$ is the super-field-strength.
iii) The coefficients of the superspace forms, instead, are functions of
the fields and of the supercovariantized derivatives
$\tau^\pm$, $\tilde\tau^\pm$, ${\cal R}$ and ${\cal F}$.
They are determined by solving the Bianchi identities (\ref{bianchigrav})
in superspace. Their form is strongly constrained by Lorentz
invariance, global $U(1)$
invariance and scale invariance.
These restrictions are such that the role of (\ref{bianchigrav}) is
simply that of fixing some numerical coefficients, while providing
also several self-consistency checks.
Moreover, imposition of (\ref{bianchigrav}) also provides the
rheonomic parametrizations of
$\nabla \tau^\pm$, $\nabla \tilde\tau^\pm$, $\nabla {\cal R}$ and
$\nabla{\cal F}$ and
of the covariant derivatives $\nabla M$
and $\nabla \bar M$ of the auxiliary fields $M$ and $\bar M$, namely
\begin{eqnarray}
\nabla\tau^+&=&\nabla_+\tau^+e^++\nabla_-\tau^+e^-+
\left({1\over 2}{\cal R}+{i\over 4}{\cal F} -\varepsilon M\bar M\right)\zeta^+
+\nabla_-M\tilde\zeta_-,\nonumber\\
\nabla\tilde\tau_+&=&\nabla_+\tilde\tau_+e^++\nabla_-\tilde\tau_+e^--
\left({1\over 2}{\cal R}+{i\over 4}{\cal F} -\varepsilon
M\bar M\right)\tilde\zeta_+
+\nabla_+M\zeta^-,\nonumber\\
\nabla M&=&\nabla_+ Me^++\nabla_- Me^--{i\over 2}\varepsilon
(\tilde\tau_+\zeta^++\tau^+\tilde\zeta_+),\nonumber\\
\nabla{\cal R}&=&\nabla_+{\cal R}e^++\nabla_-{\cal R}e^-+
{i\over 2}(\varepsilon \nabla_-\tilde\tau_-\tilde\zeta_++\varepsilon
\nabla_-\tilde\tau_+
\tilde\zeta_- -\nabla_+\tau^-\zeta^+ -\nabla_+\tau^+\zeta^-)\nonumber\\
&-& i [\bar M (\tau^+ \tilde\zeta_+ + \tilde\tau_+
\zeta^+)+M(\tau^-\tilde\zeta_- + \tilde\tau_- \zeta^- )],\nonumber\\
\nabla{\cal F}&=&\nabla_+{\cal F}e^++\varepsilon \nabla_-{\cal F}e^- -
\varepsilon \nabla_-\tilde\tau_-\tilde\zeta_+ +\nabla_-\tilde\tau_+
\tilde\zeta_-+\nabla_+\tau^-\zeta^+-\nabla_+\tau^+\zeta^-.
\label{residuo}
\end{eqnarray}
These equations, in their turn, are the definitions of the
supercovariantized derivatives of $\tau^\pm$, $\tilde\tau^\pm$, ${\cal
R}$, ${\cal F}$, $M$ and $\bar M$.
Finally, the $e^+e^-$ sector of the Bianchi
identities (\ref{bianchigrav}) gives the ``space-time'' counterparts
of the Bianchi identities themselves, i.e.\ the formul\ae\ for
$[\nabla_+,\nabla_-]\Phi$ of any field $\Phi$.
The formal correspondence between de Sitter and anti de Sitter
theories is summarized by
\begin{equation}
e^-\rightarrow -e^-,\quad \tau\rightarrow -\tau,\quad {\cal R}\rightarrow
-{\cal R},\quad {\cal F}\rightarrow -{\cal F},\quad \nabla_-\rightarrow
-\nabla_-.
\end{equation}
{}From (\ref{residuo}) we can confirm that $\varepsilon =1$
corresponds to positive curvature, while $\varepsilon =-1$
corresponds to negative
curvature. Indeed, setting ${\cal R}$=const and ${\cal F}=0$,
the expressions of $\nabla {\cal R}$ and $\nabla {\cal F}$
imply either $M=\bar M=0$
or $\tau^\pm=\tilde\tau^\pm=0$. If $M=\bar M=0$, then $\nabla M$
and $\nabla \bar M$
also imply $\tau^\pm=\tilde\tau^\pm=0$. So, we can conclude that
$\tau^\pm=\tilde\tau^\pm=0$ is in any case true. Finally,
$\nabla \tau$ implies ${\cal R} =2\varepsilon M\bar M$
and $M=$const. This also shows that one cannot move from the de Sitter
to the anti de Sitter case by a continuous deformation of the expectation
value of $M,\bar M$.
For simplicity, from now on we set $\varepsilon=+1$.
\section{Coupling gravity with chiral matter}
\label{coupling}
In the previous section we have derived the first ingredient we need,
namely the off-shell graviton multiplet structure. In the present
section we extend the rheonomic construction of chiral multiplets
\cite{LG} discussed in \cite{billofre} for flat superspace,
to the curved superspace environment.
The field content of an off-shell chiral multiplet is $X^I, \psi^I ,
\tilde\psi^I, H^I $ where $X^I$ is a complex scalar field,
$\psi^I ,\tilde\psi^I$ are complex spin $\pm\o{1}{2}$ fields
and $H^I$ is a complex auxiliary
scalar field. The complex conjugate fields will be denoted by a star.
The index notation is $I=(0,i)$, $i=1,\dots,n$.
The multiplet corresponding to the value $I=0$ plays a
special role in coupling to supergravity, namely it is the multiplet
containing the Lagrange multiplier $\Phi$ introduced in eqs.\
(\ref{intro923}) and (\ref{intro24}).
To start our program we need the
covariant derivatives for the matter fields
\footnotemark\footnotetext{Our notation for the covariant derivative is
$\nabla \phi= d \phi - s\omega \phi -\o{i}{2}q A\phi$, where $s,q$ are
the spin and the $U(1)$ charge of the field $\phi$.}, which are
\begin{eqnarray}
\nabla X^I &=& d X^I, \nonumber\\
\nabla \psi^I &=& d\psi^I -\o{1}{2} \omega \psi^I + \o{i}{4} A \psi^I,
\nonumber\\
\nabla \tilde\psi^I &=& d \tilde\psi^I +
\o{1}{2}\omega \tilde\psi^I -\o{i}{4}A \tilde\psi^I, \nonumber\\
\nabla H^I &=& dH^I.
\label{cova}
\end{eqnarray}
{}From the Bianchi identities, which are easily read off from
(\ref{cova}), we find
the following rheonomic pa\-rame\-tri\-za\-tions:
\begin{eqnarray}
\nabla X^I &=& \nabla_+ X^I e^+ + \nabla_- X^I e^- +
\psi^I \zeta^- +\tilde\psi^I \tilde\zeta_-, \nonumber\\
\nabla\psi^I &=& \nabla_+\psi^I e^+ + \nabla_-\psi^I e^- -\o{i}{2}
\nabla_+ X^I \zeta^+ + H^I \tilde\zeta_-,\nonumber\\
\nabla \tilde\psi^I &=& \nabla_+ \tilde\psi^I e^+ + \nabla_-
\tilde\psi^I e^- -\o{i}{2}
\nabla_- X^I \tilde\zeta_+ - H^I \zeta^-, \nonumber\\
\nabla H^I &=& \nabla_+ H^I e^+ + \nabla_- H^I e^- -
\o{i}{2} \nabla_- \psi^I \tilde\zeta_+
+\o{i}{2} \nabla_+ \tilde\psi^I \zeta^+.
\label{rheomatter}
\end{eqnarray}
The usual choice for the auxiliary field in the Landau-Ginzburg matter is
\begin{equation}
H^I= \eta^{I {J^*}} \partial_{J^*} \bar W,
\label{auxi}
\end{equation}
$\eta_{IJ^*}$ denoting a flat (constant) metric and $W(X)$ being a
(polynomial) chiral potential.
If we explicitly make this choice,
we also find the fermionic equations of
motion from the self-consistency of the parametrization of $\nabla H^I$:
\begin{eqnarray}
&\o{i}{2}& \nabla_- \psi^I - \eta^{I {J^*}} \partial_{M^*} \partial_{J^*}
\bar W \t
\psi^{M^*}=0, \nonumber\\
&\o{i}{2}&\nabla_+ \tilde\psi^I +\eta^{I {J^*}}\partial_{M^*}\partial_{J^*}
\bar W
\psi^{M^*}=0.
\label{auxi2}
\end{eqnarray}
Finally, from the supersymmetric variations of the fermionic field equation
we find the bosonic field equation
\begin{eqnarray}
[ \nabla_- \nabla_+ &+& \nabla_+\nabla_- ]X^I - 8\eta^{I {J^*}} \partial_{M^*}
\partial_{{J^*}} \partial_{L^*} \bar W \psi^{L^*} \tilde\psi^{M^*} +8
\eta^{I {J^*}}\partial_{M^*} \partial_{{J^*}} \bar W \eta^{M^* L}\partial_L W \nonumber\\
&-&4i \bar M \eta^{I {J^*}}\partial_{J^*} \bar W
+ \tau^- \psi^I - \tilde\tau_- \tilde\psi^I=0.
\label{auxi3}
\end{eqnarray}
The coupling of the Landau-Ginzburg matter with N=2 supergravity
is described
by the following Lagrangian, derived from the field equations (\ref{auxi}),
(\ref{auxi2}) and (\ref{auxi3})
\begin{equation}
{\cal L}_{Liouville}= {\cal L}_{kin} + {\cal L}_{W},
\end{equation}
where ${\cal L}_{kin}$ and ${\cal L}_W$ are
the kinetic and superpotential terms
\begin{eqnarray}
{\cal L}_{kin} &=& \eta_{I{J^*}} (\nabla X^I - \psi^I
\zeta^- - \tilde\psi^I \tilde\zeta_-
) (\Pi^{{J^*}}_+ e^+ - \Pi_-^{J^*} e^- )\nonumber\\
&+& \eta_{I{J^*}} (\nabla X^{{J^*}} + \psi^{{J^*}}
\zeta^+ + \tilde\psi^{J^*} \tilde\zeta_+) (\Pi^{I}_+ e^+ - \Pi_-^I e^- )\nonumber
\\
&+& \eta_{I{J^*}} (\Pi^{I}_+ \Pi_-^{{J^*}}+ \Pi_-^I\Pi_+^{J^*} )e^+ e^-
\nonumber\\
&+&2i \eta_{I{J^*}} (-\psi^I \nabla \psi^{J^*} e^+ - \psi^{J^*} \nabla \psi^I e^+
+ \tilde\psi^I \nabla \tilde\psi^{J^*} e^- + \tilde\psi^{J^*} \nabla
\tilde\psi^I e^- )\nonumber\\
&+& \eta_{I{J^*}}(\nabla X^{J^*} \psi^I \zeta^- - \nabla X^I \psi^{J^*} \zeta^+
- \nabla X^{J^*} \tilde\psi^I \tilde\zeta_- + \nabla X^I \tilde\psi^{J^*}
\tilde\zeta_+)\nonumber\\
&+& \eta_{I{J^*}} (\psi^I \tilde\psi^{J^*} \zeta^- \tilde\zeta_+ + \psi^{J^*}
\tilde\psi^I \tilde\zeta_- \zeta^+ ) -8 \eta_{I {J^*}}H^I H^{J^*} e^+ e^-,
\nonumber\\
{\cal L}_W&=& 4i (\psi^I \partial_I W \tilde\zeta_+ e^+
+\psi^{J^*} \partial_{{J^*}} \bar W \tilde\zeta_- e^+
+\tilde\psi^I \partial_I W \zeta^+ e^- + \tilde\psi^{I^*} \partial_{I^*}
\bar W \zeta^- e^-)\nonumber\\
&+& 8 [ \partial_I \partial_J W \psi^I \tilde\psi^J - \partial_{{I^*}}
\partial_{J^*} \bar W
\psi^{I^*}\tilde\psi^{J^*} ]e^+e^-\nonumber\\
&+& 4i(MW-\bar M\bar W)e^+ e^- +2 \bar W \tilde\zeta_- \zeta^- - 2 W
\tilde\zeta_+
\zeta^+ \nonumber\\
&+& (8 H^I \partial_I W + 8H^{I^*} \partial_{I^*} \bar W) e^+ e^-.
\label{matterlagr}
\end{eqnarray}
The fields $\Pi_\pm^I$ and $\Pi_\pm^{I*}$ are auxiliary fields for
the first order formalism: their equation of motion equates them to
the supercovariant derivatives of the $X$-fields,
\begin{equation}
\Pi_\pm^I=\nabla_\pm X^I, \hskip 2truecm
\Pi_\pm^{I*}=\nabla_\pm X^{I*}.
\end{equation}
Substitution of these expressions in ${\cal L}_{kin}$ gives the usual
second order Lagrangian. The rheonomic parametrizations
of $\nabla\Pi_\pm^I$ and $\nabla\Pi_\pm^{I*}$ are derived from the Bianchi
identities and the rheonomic parametrizations (\ref{rheomatter}),
in the same way as (\ref{residuo}) are derived from the Bianchi identities
(\ref{bianchigrav}) and the rheonomic parametrizations (\ref{rheograv}).
\section{N=2 Liouville gravity}
\label{liouville}
Let us consider the chiral multiplet labelled by the index $I=0$.
We call it the ``dilaton'' multiplet.
For convenience, we relabel the dilaton multiplet as
\begin{equation}
(X^0,X^{0^*}, \psi^0, \psi^{0^*},\tilde\psi^0,\tilde\psi^{0^*}, H^0, H^{0^*})
\to (X, \bar X, \lambda_-, \lambda_+, \tilde\lambda^-,
\tilde\lambda^+ , H, \bar H ). \nonumber
\end{equation}
The $N=2$ extension of the Lagrangian $(X +\bar X)R$ is given by
\begin{eqnarray}
{\cal L}_1 &=& (X + \bar X)R - \o{i}{2}(X - \bar X) F
- 2 \lambda_- \rho^- + 2 \lambda_+ \rho^+
+ 2 \tilde\lambda^- \tilde\rho_- - 2\tilde\lambda^+
\tilde\rho_+ \nonumber\\&&
- 4i \bar M H e^+ e^- + 4i M \bar H e^+ e^-.
\label{lagra}
\end{eqnarray}
Let us remind the reader how a supersymmetric Lagrangian is
constructed in the rheonomic
framework. It is sufficient to find an ${\cal L}$
that satisfies
\begin{equation}
\nabla {\cal L}=d{\cal L}=0.
\label{rheocond}
\end{equation}
In checking this equation one has to use
the rheonomic parametrizations
(\ref{rheograv}) and (\ref{rheomatter}) together with the
definitions (\ref{rhbi}) and (\ref{cova}).
${\cal L}_1$ was determined
starting from the first term $(X+\bar X)R$ and guessing the other
ones in order to satisfy (\ref{rheocond}).
One can pass from the second order formalism to the first order one
by adding the term
\begin{equation}
{\cal L}_T=p_+ T^+ +p_- T^- ,
\end{equation}
where $p_+, p_-$ are (bosonic) Lagrange multipliers implementing the
torsion constraint $T^\pm=0$. ${\cal L}_T$ is clearly
supersymmetric (the supersymmetry variation of the spin connection
is still determined from the variations of zweibein and gravitini:
this is the so-called {\sl 1.5 order formalism}).
Moreover, one can add to eq.\ (\ref{lagra}) a ``cosmological constant
term''
compatible with the N=2 local supersymmetry
\begin{eqnarray}
{\cal L}_2 &=& (MX + \bar M\bar X) e^+ e^-
+\lambda_- \tilde\zeta_+ e^+ - \lambda_+ \tilde\zeta_- e^+ + \tilde\lambda^-
\zeta^+ e^- - \tilde
\lambda^+
\zeta^- e^- \nonumber\\
&+& {i\over 2} X
\zeta^+ \tilde\zeta_+ + {i\over 2}\bar X\zeta^- \tilde\zeta_-
+ 2i (\bar H -H)e^+ e^-,
\label{desitter}
\end{eqnarray}
so that the total Lagrangian is ${\cal L}={\cal L}_1 + {\cal L}_2$.
The equations for the auxiliary fields are
\begin{equation}
H= -\o{i}{4} \bar X, \quad\quad \bar H =\o{i}{4}X, \quad\quad
M=\bar M = -\o{1}{2}.
\end{equation}
Using the equations of motion of
$H$, $\bar H$ and $X+\bar X$,
we get precisely a de Sitter supergravity with
cosmological constant $\Lambda=\o{1}{2}$. The field strength $F$, on
the other hand, is set to zero by the $X-\bar X$ field equation.
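This value of the cosmological constant agrees with the vacuum analysis of
section \ref{N2D2}: for $\varepsilon=1$ and constant auxiliary fields the
curvature is
\begin{equation}
{\cal R}=2M\bar M=2\left(-\o{1}{2}\right)\left(-\o{1}{2}\right)=\o{1}{2}.
\end{equation}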
Notice that when in the matter Lagrangian (\ref{matterlagr})
the index $I$ takes the value $0$ and $W=-\o{i}{4} X^0$,
then ${\cal L}_W$ coincides with the de Sitter term ${\cal L}_2$.
To conclude this section, let us show that the kinetic term of the
dilaton multiplet can be produced from the Poincar\'e Lagrangian
${\cal L}_1$ with some field redefinitions.
One can perform the substitutions
\begin{eqnarray}
\begin{array}{ll}
e^+\rightarrow e^+ {\rm e}^{-{1\over 4}(X+\bar X)},\quad & \quad
e^-\rightarrow e^- {\rm e}^{-{1\over 4}(X+\bar X)},\cr
M\rightarrow (M+iH){\rm e}^{{1\over 2}(X+\bar X)},\quad & \quad
\bar M\rightarrow (\bar M-i\bar H){\rm e}^{{1\over 2}(X+\bar X)},\cr
\zeta^+\rightarrow {\rm e}^{-{1\over 4}X}(\zeta^+-i\lambda_- e^+),\quad & \quad
\zeta^-\rightarrow {\rm e}^{-{1\over 4}\bar X}(\zeta^-+i\lambda_+ e^+),\cr
\tilde\zeta_+\rightarrow {\rm e}^{-{1\over 4}X}(\tilde\zeta_+-i\tilde
\lambda^- e^-),\quad & \quad
\tilde\zeta_-\rightarrow {\rm e}^{-{1\over 4}\bar X}(\tilde\zeta_-+i
\tilde\lambda^+ e^-),\cr
\lambda_-\rightarrow \lambda_-{\rm e}^{{1\over 4}\bar X},\quad & \quad
\lambda_+\rightarrow \lambda_+{\rm e}^{{1\over 4}X},\cr
\tilde\lambda^-\rightarrow \tilde\lambda^-{\rm e}^{{1\over 4}\bar X},
\quad & \quad
\tilde\lambda^+\rightarrow \tilde\lambda^+{\rm e}^{{1\over 4}X},\cr
\end{array}
\end{eqnarray}
and
\begin{eqnarray}
\Omega &\rightarrow & \Omega-{1\over 2}[
\nabla_+\bar Xe^+-\nabla_-\bar Xe^--\lambda_+\zeta^+
+\tilde\lambda^+\tilde\zeta_+],\nonumber\\
\bar \Omega &\rightarrow & \bar \Omega-{1\over 2}[
\nabla_+Xe^+-\nabla_-Xe^-+\lambda_-\zeta^-
-\tilde\lambda^-\tilde\zeta_-],
\label{2222}
\end{eqnarray}
where $\Omega=\omega-{i\over 2}A$, $\bar \Omega=
\omega+{i\over 2}A$.
Then, the Poincar\'e Lagrangian ${\cal L}_1$ (\ref{lagra}) goes into
\begin{equation}
{\cal L}_1+{\cal L}_{kin},
\end{equation}
${\cal L}_{kin}$ being the kinetic Lagrangian of the dilaton multiplet,
the first formula of
(\ref{matterlagr}) with $I$ restricted to the value $0$.
As one can easily check,
the replacement of $\omega={1\over 2}(\Omega+\bar \Omega)$
implied by (\ref{2222})
is consistent with the preservation of vanishing torsions.
\section{Gauge free algebra of the N=2 Liouville theory}
\label{gaugefree}
The local symmetries of the N=2 Liouville theory are:
diffeomorphisms, local Lorentz rotations,
supersymmetries and the U(1) gauge symmetry.
The procedure for writing down the
free BRST algebra \cite{ansfre,ansfre2,bonora}
is straightforward, once the curvature
definitions and the rheonomic parametrizations are BRST extended
to ghost forms, by introducing the ghosts of the local symmetries:
\begin{eqnarray}
\begin{array}{ll}
\hat e^+ = e^+ + C^+, \quad & \quad
\hat e^- = e^- + C^-, \\
\hat \zeta^\pm = \zeta^\pm + \Gamma^\pm, \quad & \quad
\hat {\tilde{\zeta}}_\pm = \tilde\zeta_\pm + \tilde\Gamma_\pm, \\
\hat \omega = \omega +C^0, \quad & \quad
\hat A =A + C. \\
\end{array}
\end{eqnarray}
The exterior derivative is BRST extended to $\hat d= d+ s$.
The BRST variations of the fields are easily derived by
selecting out the correct ghost number sector in the BRST extensions
of formul\ae\ (\ref{rhbi}), (\ref{rheograv})
(graviton multiplet) and (\ref{cova}), (\ref{rheomatter})
(dilaton multiplet).
We have, in particular, for the graviton multiplet,
\begin{eqnarray}
s e^+ &=& -\nabla C^+ - C^0 e^+ +\o{i}{2} \zeta^+ \Gamma^-
+\o{i}{2} \Gamma^+ \zeta^-, \nonumber\\
s e^- &=& -\nabla C^- + C^0 e^- +\o{i}{2} \tilde\zeta_+ \tilde\Gamma_-
+\o{i}{2} \tilde\Gamma_+ \tilde\zeta_-,\nonumber\\
s \zeta^+ &=&-\nabla \Gamma^+ - \o{1}{2} C^0 \zeta^+ -\o{i}{4}C
\zeta^+ + \tau^+ (C^+ e^- + e^+ C^-)
- M (\tilde\Gamma_- e^+ + \tilde\zeta_- C^+), \nonumber\\
s \tilde\zeta_+ &=&-\nabla \tilde\Gamma_+ + \o{1}{2}
C^0 \tilde\zeta_+ +\o{i}{4}
C \tilde\zeta_++ \tilde\tau_+( C^+ e^- + e^+ C^-)
+ M (\Gamma^- e^- + \zeta^- C^-), \nonumber\\
s\omega &=&-dC^0+
{\cal R}(C^+ e^-+ e^+ C^-)+
\o{i}{2}C^-(\tau^+\zeta^-+\tau^-\zeta^+)+
\o{i}{2}C^+(\tilde\tau_+\tilde\zeta_-+\tilde\tau_-\tilde\zeta_+)
\nonumber\\&&+\o{i}{2}e^-(\tau^+\Gamma^-+\tau^-\Gamma^+)
+\o{i}{2}e^+(\tilde\tau_+\tilde\Gamma_-+\tilde\tau_-\tilde\Gamma_+)
\nonumber\\&&
+\o{i}{2}M(\zeta^-\tilde\Gamma_-+\Gamma^-\tilde\zeta_-)+
\o{i}{2}\bar M(\Gamma^+
\tilde\zeta_++\zeta^+\tilde\Gamma_+),\nonumber\\
sA &=&-dC+{\cal F}(C^+ e^-+ e^+ C^-)-(\tau^+\zeta^--\tau^-\zeta^+)C^-
\nonumber\\&&
+(\tilde\tau_-\tilde\zeta_+-\tilde\tau_+\tilde\zeta_-)C^+
-(\tau^+\Gamma^--\tau^-\Gamma^+)e^- \nonumber\\&&+
(\tilde\tau_-\tilde\Gamma_+-\tilde\tau_+\tilde\Gamma_-)e^+
\nonumber\\&&+M(\zeta^-\tilde\Gamma_-+\Gamma^-\tilde\zeta_-)-\bar M(\Gamma^+
\tilde\zeta_+ +\zeta^+\tilde\Gamma_+).
\end{eqnarray}
On the other hand,
the BRST transformations of the fields of the dilaton multiplet are
\begin{eqnarray}
sX&=&\nabla_+X C^++\nabla_-X C^-+\lambda_-\Gamma^-+
\tilde\lambda^-\tilde\Gamma_-,\nonumber\\
s\lambda_-&=&\nabla_+\lambda_-C^++\nabla_-\lambda_-C^-
-{i\over 2}\nabla_+X\Gamma^++H\tilde\Gamma_-,\nonumber\\
s\tilde\lambda^-&=&\nabla_+\tilde\lambda^-C^++\nabla_-\tilde\lambda^-C^-
+{i\over 2}\nabla_+X\tilde \Gamma_+-H\Gamma^-,\nonumber\\
sH&=&\nabla_+HC^++\nabla_-HC^--{i\over 2}
\nabla_-\lambda_-\tilde\Gamma_++{i\over 2}\nabla_+\tilde\lambda^-
\Gamma^+.
\end{eqnarray}
\section{Topological Twist of the N=2 Liouville Theory}
\label{twist}
In this section we perform the topological twist of the theory.
The formal set-up is analogous to the one in four
dimensions \cite{ansfre,ansfre2}. One has to change consistently
the spin, the BRST charge and the ghost number.
In particular, the new BRST charge is obtained by the so-called
{\sl topological shift}, which is a simple redefinition of the supersymmetry
ghosts that get ghost number zero. Due to the existence of
Majorana-Weyl spinors in two dimensions, one has two possibilities,
known in the literature as the A and B twists \cite{index1,billofre}.
The geometrical and physical
meaning of the two types of twists was discovered at the
level of globally supersymmetric N=2 matter theories.
As noticed in eq.\ (\ref{matterlagr}), the most general
interaction of a set of chiral multiplets involves
two separately supersymmetric Lagrangian terms,
the kinetic term
${\cal L}_{kin}$ and the superpotential term
${\cal L}_{W}$. The choice between the A and B twists decides
which term is BRST nontrivial and which one
is BRST exact. In the A twist the nontrivial BRST cohomology is
carried by ${\cal L}_{kin}$,
while in the B twist it is carried by ${\cal L}_W$.
In the first case, the topologically meaningful coupling parameters are those
corresponding to the K\"ahler class deformations of the target space metric,
the correlation functions being instead independent of the
deformation parameters of the superpotential. In the B twist
the situation is reversed.
The above considerations apply both to globally
and locally supersymmetric theories. However, in the presence of supergravity
and as far as the A twist is concerned,
${\cal L}_W$ cannot be set to zero from the beginning,
since it contains the
de Sitter term ${\cal L}_2$ (\ref{desitter}) (formally
obtainable from ${\cal L}_W$ with $W=-{i\over 4}X^0$) and
the parameter $a$ that can be put
in front of ${\cal L}_2$ is the cosmological constant,
the sign of which has to
be compatible with the Euler characteristic $\int R=2(1-g)$.
Technically, the A and B twists emerge as follows.
We first
notice that the Lagrangian (\ref{lagra}) of Poincar\'e gravity,
possesses a global $R$-symmetry
[which will be denoted by $U(1)^\prime$], under which
the fields transform with the charges shown in table \ref{topotable}.
$U(1)^\prime$ is not a local symmetry and it is not even a global
symmetry for the de Sitter Lagrangian (\ref{desitter}).
In general R-symmetries and R-dualities play a crucial role
\cite{ansfre2} in the
topological twist of the N=2 theories.
Depending on the choice of the twist (A or B), the new Lorentz group is
defined as a combination of the old one with the $U(1)^\prime$
or $U(1)$ symmetry; vice versa for the ghost number.
For the A twist the new assignments and the topological shift are
\begin{eqnarray}
\begin{array}{ll}
{\rm spin}^\prime = \hbox{spin} + U(1)^\prime, \quad &\quad
\Gamma^+ \to \Gamma^+ + \alpha,\nonumber\\
{\rm ghost}^\prime = {\rm ghost} + 2 U(1),\quad &\quad
\tilde\Gamma_- \to \tilde\Gamma_- +\beta,
\end{array}
\label{spina}
\end{eqnarray}
while for the B twist they are
\begin{eqnarray}
\begin{array}{ll}
{\rm spin}^\prime = \hbox{spin} + U(1), \quad &\quad
\Gamma^+ \to \Gamma^+ + \alpha, \nonumber\\
{\rm ghost}^\prime = {\rm ghost} + 2 U (1)^\prime,
\quad &\quad
\tilde\Gamma_+ \to \tilde\Gamma_+ + \beta.
\end{array}
\label{spinb}
\end{eqnarray}
$\alpha$ and $\beta$ are the so-called {\sl brokers} \cite{ansfre2}.
They are to be treated formally as constant ($d\alpha=d\beta=0$) and
their (purely formal) role is to bring the correct contributions
of spin and ghost number to the fields (see the last column of
table \ref{topotable}). Their quantum numbers are given in the table.
In this paper we focus on the A twist.
The topological theory that emerges from our
analysis is suited to performing the gravitational dressing of the topological
theories dealing with K\"ahler class deformations \cite{billofre}.
The gravitational dressing of the complex structure deformations requires the B
twist of the N=2 Liouville theory, whose analysis is postponed to future work.
The shift produces a new BRST operator $s^\prime$ which
equals $s+\delta_T$, $\delta_T$ being the topological variation
(known as ${\cal Q}_s$ in conformal field theory). On the
graviton multiplet, $\delta_T$ acts as
\begin{eqnarray}
\begin{array}{ll}
\delta_T e^+ ={i\over 2}\alpha \zeta^-, \quad & \quad
\delta_T e^- ={i\over 2} \tilde \zeta_+\beta, \\
\delta_T\zeta^+ = - {1\over 2} \omega \alpha - {i\over 4} A \alpha - M
\beta e^+ \equiv B_1\alpha,
\quad & \quad
\delta_T \zeta^- =0,\\
\delta_T \tilde\zeta_+=0,
\quad & \quad
\delta_T \tilde\zeta_- = {1\over 2} \omega \beta - {i\over 4}
A \beta + \bar M \alpha e^- \equiv B_2\beta,\\
\delta_T M=-{i\over 2}\tilde\tau_+\alpha ,\quad &\quad
\delta_T \bar M=-{i\over 2}\tau^-\beta,
\end{array}\nonumber\\
\begin{array}{l}
\delta_T\omega = {i\over 2}M\zeta^-\beta +
{i\over 2}\bar M\alpha \tilde\zeta_+ +{i\over 2}e^-\tau^-\alpha
+{i\over 2}e^+\tilde\tau_+\beta,
\quad\quad\quad\quad\quad\quad\quad\quad\\
\delta_T A = M\zeta^-\beta -\bar M\alpha
\tilde\zeta_+ +\tau^-\alpha e^--\tilde\tau_+\beta e^+.
\quad\quad\quad\quad\quad\quad\quad\quad
\end{array}
\label{deltaT}
\end{eqnarray}
Taking into account that the BRST algebra closes
off-shell, we see that $B_1$
and $B_2$ play the role of Lagrange multipliers, since they are
the BRST variations of the antighosts $\zeta^+$ and $\tilde\zeta_-$.
$B_1$ and $B_2$ can be considered as redefinitions of $A$, $M$ and
$\bar M$. Indeed, since $M$ and $\bar M$ have spin $1$ and $-1$ after the
twist,
$Me^+$ and $\bar M e^-$ can be considered as one-forms. In particular,
we have shown that the graviphoton $A$ belongs to
${\cal B}_{gauge-fixing}$. On the other hand, it is clear that
the gauge-free topological algebra is that of $SL(2,{\bf R})$,
since the above formul\ae\ show that the topological symmetry
is the most general continuous
deformation of the zweibein.
On the dilaton multiplet, $\delta_T$ is
\begin{eqnarray}
\begin{array}{ll}
\delta_T X = \tilde\lambda^- \beta, \quad & \quad
\delta_T {\bar X} = - \lambda_+ \alpha, \\
\delta_T \lambda_- = -{i\over 2} \nabla_+ X \alpha + H \beta
\equiv H_1\alpha,\quad & \quad
\delta_T \tilde\lambda^- = 0, \\
\delta_T \lambda_+ = 0, \quad & \quad\delta_T \tilde\lambda^+ =
{i\over 2} \nabla_- {\bar X} \beta - \bar H \alpha\equiv H_2\beta, \\
\delta_T H ={i\over 2} \nabla_+ \tilde\lambda^- \alpha, \quad & \quad
\delta_T {\bar H} = - {i\over 2} \nabla_- \lambda_+ \beta.\\
\end{array}
\label{deltaT2}
\end{eqnarray}
$H_1$ and $H_2$ are also Lagrange multipliers, redefinitions
of $H$ and $\bar H$.
Finally, the topological variation of the brokers vanishes, but
nilpotence of $s^\prime$ and $s$ requires
\begin{eqnarray}
\begin{array}{ll}
s^\prime \alpha =- {1\over 2}
C^0 \alpha - {i\over 4} C \alpha=s\alpha, \quad & \quad
s^\prime \beta = {1\over 2} C^0 \beta - {i\over 4} C \beta=s\beta.
\end{array}
\end{eqnarray}
In other words, even if only formally, $\alpha$ and $\beta$ have to be
considered as sections with definite spin and $U(1)$ charge.
{}From the above formul\ae\ it is simple to check that $\delta_T$ is
nilpotent, $\delta_T^2=0$, as expected. Summarizing, we have
\begin{equation}
s^\prime=\delta_T+s,\quad\quad s^{\prime \, 2}=s^2=\delta_T^2=s\delta_T+
\delta_T s=0.
\end{equation}
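As a simple illustration of these relations, nilpotency on the zweibein
component $e^+$ follows in one line from (\ref{deltaT}): since
$\delta_T e^+={i\over 2}\alpha\zeta^-$, $\delta_T\zeta^-=0$ and the
brokers are $\delta_T$-invariant,
\begin{equation}
\delta_T^2\, e^+={i\over 2}\,\alpha\,\delta_T\zeta^-=0.
\end{equation}
The checks on the remaining fields proceed in the same way, using the
variations of $\omega$, $A$, $M$ and $\bar M$ listed in (\ref{deltaT}).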
Using the above formulae and
the notation shown in the last column of
table \ref{topotable}, we can write the
full Lagrangian $\cal L$ as the topological variation of a suitable
gauge fermion $\Psi$ plus a total derivative term. Precisely,
\begin{equation}
{\cal L}_1=\delta_T\Psi_1+2\nabla(XM_-e^-+\bar X M_+e^+),
\,\quad\,
{\cal L}_2=\delta_T\Psi_2+\nabla(\bar Xe^+-X e^-),
\end{equation}
where
\begin{eqnarray}
\Psi_1&=&-2X d\bar{\tilde \xi}+2\bar X d\bar \xi-4i\chi_+M_-e^+e^-
-4i\chi_-M_+e^+e^-+2XB_2\bar{\tilde \xi}-2\bar X B_1\bar \xi,\nonumber\\
\Psi_2&=&(\bar X e^+-Xe^-)(\bar{\tilde \xi}-\bar \xi)-2i(\chi_++\chi_-)
e^+e^-.
\end{eqnarray}
{}From the formula of $\Psi_2$, it is apparent that $U(1)^\prime$
and correspondingly spin$^\prime$
are violated by ${\cal L}_2$. On the other hand, ${\cal L}_2$
is purely a gauge-fixing term, so that this violation does
not affect the Lorentz
symmetry in the physical correlators. It can be thought of
as the choice of a noncovariant gauge-fixing.
Let us analyze the gauge-fixing conditions of the twisted theory.
We take ${\cal L}_{tot}={\cal L}_1-2a{\cal L}_2$, ${\cal L}_1$
and ${\cal L}_2$ being given by (\ref{lagra}) and (\ref{desitter}),
respectively.
The Lagrange multipliers $A$, $M$-$\bar M$ and $H$-$\bar H$
impose the following constraints
\begin{equation}
\nabla_+(X-\bar X)=\nabla_-(X-\bar X)=0,\quad \,\,
H={i\over 2}a\bar X,\quad \,\, \bar H=-{i\over 2} a X,\quad \,\,
M=\bar M=a.
\label{constr}
\end{equation}
A check of consistency is that the $\delta_T$ variations of the constraints
(\ref{constr}) imposed by the Lagrange multipliers are the field
equations of the topological ghosts $\zeta^-$, $\tilde\zeta_+$,
$\lambda_+$ and $\tilde\lambda^-$, obtained from the variations
of ${\cal L}_{tot}$
with respect to the corresponding antighosts
$\lambda_-$, $\tilde\lambda^+$, $\zeta^+$ and $\tilde\zeta_-$, i.e.\
\begin{eqnarray}
\begin{array}{ll}
\tau^-=0,\quad &\quad
\tilde \tau_+=0,\cr
\nabla_+\lambda_-=0,\quad &\quad
\nabla_-\lambda_-=a\tilde\lambda^+,\cr
\nabla_+\tilde\lambda^-=-a\lambda_+,\quad &\quad
\nabla_-\tilde\lambda^-=0.
\end{array}
\end{eqnarray}
To verify this, one has to take into account that the
$\omega$ field equation gives
$p_+=-\nabla_+(X+\bar X)$, and $p_-=\nabla_-(X+\bar X)$.
The observables of the topological theory are easily derived, as in
the case of the Verlinde and Verlinde model, from the descent
equations $\hat d \hat R^n=0$, $\hat R=R+\psi_0+\gamma_0$ being
the BRST extension of the curvature $R$. In particular, the local
observables are
\begin{equation}
\sigma^{(0)}_n(x)=\gamma_0^n(x),
\end{equation}
as anticipated in the introduction. On the other hand, the field
strength $F$ does not provide any new observables, due to the fact
that $A\in {\cal B}_{gauge-fixing}$.
\section{The conformal field theory associated with N=2 Liouville gravity}
\label{conformal}
In this section, we analyse N=2 Liouville gravity in detail. We gauge-fix it
and show that, in the limit $a^2\rightarrow 0$, it reduces to a
conformal field theory of vanishing total central charge,
summarized by formul\ae\ (\ref{6.30}) and (\ref{ghrepr}).
The total
central charge is the sum of the central charge of the Liouville system,
equal to $6$, and that of the ghost system, equal to $-6$.
We start from the rheonomic Lagrangian of Poincar\'e N=2 D=2 supergravity,
that we rewrite here for convenience,
\begin{equation}
{\cal L}_1=(X+\bar X) R-{i\over 2}(X-\bar X)F-2
\lambda_- \rho^-+
2 \lambda_+\rho^++2\tilde\lambda^- {\tilde\rho}_--2
\tilde\lambda^+
{\tilde\rho}_++4i(M\bar H-\bar M H)E^zE^{\bar z}.
\end{equation}
In this section, we use the notation $E^z$ and $E^{\bar z}$ instead of
$e^+$ and $e^-$.
We do this for the sake of a tensorial notation that is useful in
the gauge-fixed theory. We shall have tensor indices $t^{z\cdots z{\bar
z}\cdots{\bar z}}$ that are raised and lowered with the flat metric
${\hat g}_{z{\bar z}} ={\hat g}^{z{\bar z}}=1$ and spinor indices
$s^\pm$ and $\tilde s_\pm$ such that $s^{+-} \sim t^z$ and
$\tilde s_{+-} \sim t^{\bar z}$. In this way it is immediate to read
the spin assignments of the fields.
We also recall that in order to deal with the torsions, we have to
add a term
\begin{equation}
\pi_zT^z+\pi_{\bar z}T^{\bar z},
\label{tors}
\end{equation}
that allows us to treat the spin connection $\omega$ as an independent
variable.
In the case of locally supersymmetric theories, one has to perform
the topological twist
on the BRST quantized version of the theory,
as discussed in detail in ref.\ \cite{ansfre}.
We have developed the full gauge-free BRST algebra
of N=2 Liouville theory in section \ref{gaugefree}. Now we proceed to gauge-fix
diffeomorphisms, Lorentz rotations, supersymmetries and local $U(1)$
gauge transformations. Then we discuss the gauge-fixed BRST theory.
Diffeomorphisms and Lorentz rotations are fixed by choosing the conformal gauge
\begin{equation}
\matrix{E^z\wedge dz=0,&E^z\wedge d\bar z+E^{\bar z}\wedge
dz=0,&E^{\bar z}\wedge d\bar z=0.}
\label{diffgf}
\end{equation}
These conditions permit us to express the zweibein as
\begin{equation}
\matrix{E^z={\rm e}^{\varphi(z,\bar z)} dz,&E^{\bar z}=
{\rm e}^{\varphi(z,\bar z)}d\bar z,}
\label{gfcond1}
\end{equation}
where $\varphi(z,\bar z)$ is the conformal factor, which is to be
identified with the Liouville quantum field.
Supersymmetries are fixed by extending the conformal gauge by means of
the conditions
\begin{equation}
\matrix{\zeta^+\wedge E^z=0,\,\, &\quad
\zeta^-\wedge E^z=0,\,\, &\quad\tilde\zeta_+\wedge E^{\bar z}=0,
\,\, &\quad\tilde\zeta_-\wedge E^{\bar z}=0,}
\label{susygf}
\end{equation}
that, together with (\ref{diffgf}), make up the so-called
superconformal gauge. (\ref{susygf}) corresponds to the usual condition
$\gamma^\mu\zeta^A_\mu=0$, $A=1,2$, and permits us to
express the gravitini as
\begin{equation}
\matrix{\zeta^+=\eta^+_z {\rm e}^\varphi dz,&
\zeta^-=\eta^-_z {\rm e}^\varphi dz,&
\tilde\zeta_+=\eta_{+\bar z} {\rm e}^\varphi d\bar z,&
\tilde\zeta_-=\eta_{-\bar z} {\rm e}^\varphi d\bar z,}
\label{gfcond2}
\end{equation}
where $\eta^\pm_z(z,\bar z)$ and $\eta_{\pm\bar z}(z,\bar z)$
are anticommuting fields of spin $1/2$ and $-1/2$ (the superpartners
of the Liouville field $\varphi$).
The $U(1)$ gauge transformations have to be treated carefully.
The critical N=2 string possesses {\sl two} local $U(1)$
gauge symmetries, which permit one to gauge-fix the graviphoton $A$ to zero
and to introduce the two $b$-$c$ systems {\sl \`a la} Faddeev-Popov,
one in the left moving sector and one in the right
moving one.
In this way, a complete chiral factorization
into two superconformal field theories (left and right moving) is achieved.
The theory that we are now dealing with, on the other hand,
possesses a single local $U(1)$ symmetry, the $U(1)^\prime$ R-symmetry
being only global. From a field theoretical point of view,
it is not immediately clear how a pair of $b$-$c$ systems can
be introduced {\sl \`a la} Faddeev-Popov. Indeed, enforcing the
usual Lorentz gauge $\partial^\mu A_\mu=0$ produces
a second-order ghost-antighost system. That is why we pay
particular attention to this point. We arrange things in the correct way
by using a trick.
Let us introduce an additional trivial BRST system
(a ``one dimensional topological $\sigma$-model'')
$\{\xi,C^\prime\}$, $\xi$ being a ghost number zero scalar
and $C^\prime$ being a ghost number one scalar.
Their BRST algebra is chosen to be
trivial, namely
\begin{equation}
\matrix{s\xi=C^\prime,&sC^\prime=0.}
\label{trivial}
\end{equation}
The meaning of this BRST system is the gauging
of the R-symmetry $U(1)^\prime$.
Indeed, $U(1)^\prime$, which is only a {\sl global} symmetry of the
starting theory, becomes a {\sl local} symmetry
in the gauge-fixed version of the same theory.
This will become clear later on, when the complete factorization
between the left and right moving sectors becomes apparent.
We fix both the $U(1)$ gauge symmetry and the trivial
symmetry (\ref{trivial}) by choosing the following two gauge-fixings
\begin{equation}
\matrix{A_z-\partial_{z}\xi=0,&A_{\bar z}+\partial_{\bar z}\xi=0.}
\label{1.7}
\end{equation}
This corresponds to $A=*d\xi$, where $A=A_zdz+A_{\bar z}d\bar z$.
With obvious notation, the $\bar C$ and $\bar \Gamma$ fields
being antighosts, the gauge-fermion $\Psi$ is
\begin{eqnarray}
\Psi&=&\Psi_{diff}+\Psi_{susy}+\Psi_{gauge}\nonumber\\&&=
\bar C_{zz}E^z\wedge dz +\bar C_{z\bar z}(E^z\wedge d\bar z+E^{\bar z}\wedge
dz)+\bar C_{\bar z\bar z}E^{\bar z}\wedge d\bar z \nonumber\\&&+
\bar\Gamma_{+ z}\zeta^+\wedge E^z+\bar\Gamma_{-z}\zeta^-\wedge E^z
+\bar\Gamma^+_{\bar z}\tilde\zeta_+\wedge E^{\bar z}+
\bar\Gamma^-_{\bar z}\tilde\zeta_-\wedge E^{\bar z}\nonumber\\&&+
\bar C\wedge (A-*d\xi),
\end{eqnarray}
$\bar C$ being a one form, $\bar C=\bar C_z dz + \bar C_{\bar z}d\bar z$.
The gauge-fixed Poincar\'e Lagrangian is thus
\begin{equation}
{\cal L}={\cal L}_1+s\Psi.
\end{equation}
In writing the gauge-fixed Lagrangian we proceed as follows.
The Lagrange multipliers of the algebraic gauge-fixings (i.e.\
diffeomorphisms, Lorentz rotations and supersymmetries) will
be functionally integrated away, thus solving the gauge-fixing conditions
(\ref{diffgf}) and (\ref{susygf}). The
Lagrange multipliers $P_z$ and $P_{\bar z}$ of the $U(1)$ and
$U(1)^\prime$ gauge-fixings ($s\bar C_z=P_z$ and $s\bar C_{\bar z}=P_{\bar z}$)
will be conveniently retained for now,
since the corresponding gauge-fixings (\ref{1.7}) contain
derivatives of the fields.
The remaining part of the Lagrangian can thus be
greatly simplified by using the algebraic gauge-fixing conditions.
Moreover, noticing that $\zeta^+\zeta^-=\tilde\zeta_+\tilde\zeta_-=0$
on the gauge-fixing condition (\ref{gfcond2}), the torsion constraints
imposed by the Lagrange multipliers $\pi_z$ and $\pi_{\bar z}$
become algebraic ``gauge-fixing conditions'' on $\omega$
that are solved by
\begin{equation}
\omega=\partial_{z} \varphi \, dz-\partial_{\bar z} \varphi \, d\bar z.
\end{equation}
Notice that such an $\omega$ does not depend on the gravitini.
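One can verify this directly (with the torsion convention
$T^z=dE^z+\omega\wedge E^z$; the signs depend on the conventions adopted
above): using $E^z={\rm e}^\varphi dz$,
\begin{eqnarray}
dE^z &=& {\rm e}^\varphi\,\partial_{\bar z}\varphi\; d\bar z\wedge dz,
\nonumber\\
\omega\wedge E^z &=& -\,{\rm e}^\varphi\,\partial_{\bar z}\varphi\;
d\bar z\wedge dz,
\end{eqnarray}
so that $dE^z+\omega\wedge E^z=0$, and similarly for $E^{\bar z}$, where
the relative sign of $\omega$ is reversed.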
The Lorentz ghost $C^0$
appears only algebraically. Consequently,
we can eliminate $\bar C_{z\bar z}$ by expressing $C^0$ in terms of the other
fields
\begin{eqnarray}
2{\rm e}^\varphi C^0 dz\wedge d\bar z &=&-\nabla C^z \wedge d\bar z
-\nabla C^{\bar z} \wedge dz\nonumber\\&&+{i\over 2}
[(\Gamma^{+}\zeta^-+\zeta^+\Gamma^{-}) \wedge d\bar z
+(\tilde\zeta_+\tilde\Gamma_{-}+\tilde\Gamma_{+}\tilde\zeta_-)
\wedge dz].
\label{varepsilon}
\end{eqnarray}
Due to this, one finds
\begin{eqnarray}
s\Psi_{diff}+s\Psi_{susy}&=&\bar C_{zz}\nabla C^z \wedge dz+
\bar C_{\bar z\bar z}\nabla C^{\bar z} \wedge d\bar z\nonumber\\&&+
\bar\Gamma_{+z}\nabla (\Gamma^{+} E^z+\zeta^+ C^z )
+\bar\Gamma_{-z}\nabla(\Gamma^{-} E^z+\zeta^- C^z )
\nonumber\\
&&+\bar \Gamma^+_{\bar z}\nabla(\tilde\Gamma_{+} E^{\bar z}+
\tilde \zeta_+ C^{\bar z} )
+\bar\Gamma^-_{\bar z}\nabla(\tilde\Gamma_{-} E^{\bar z}+
\tilde\zeta_- C^{\bar z} ).
\end{eqnarray}
It is convenient to introduce the following substitutions
\begin{eqnarray}
\begin{array}{ll}
\eta^{\pm\prime}_z=\eta^\pm_z{\rm e}^{{1\over 2}\left(\varphi\mp
{i\over 2}\xi\right)},&\quad
\eta^\prime_{\pm\bar z}=\eta_{\pm\bar z}
{\rm e}^{{1\over 2}\left(\varphi\mp{i\over 2}\xi\right)},\\
\lambda^\prime_\pm=\lambda_\pm{\rm e}^{{1\over 2}
\left(\varphi\pm{i\over 2}\xi\right)},&\quad
\tilde\lambda^{\pm\prime}=\tilde\lambda^\pm{\rm e}^{{1\over 2}
\left(\varphi\pm{i\over 2}\xi\right)},\\
C^{z\prime}= C^z {\rm e}^{-\varphi},&\quad
C^{\bar z\prime}= C^{\bar z} {\rm e}^{-\varphi},\\
\bar C_{zz}^\prime=\bar C_{zz} {\rm e}^{\varphi},&\quad
\bar C_{\bar z\bar z}^\prime=\bar C_{\bar z\bar z} {\rm e}^{\varphi},\\
\Gamma^{\pm\prime}=\Gamma^\pm
{\rm e}^{-{1\over 2}\left(\varphi\pm{i\over 2}\xi\right)}
-\eta^{\pm\prime}_zC^{z\prime},&\quad
\tilde\Gamma^\prime_\pm=
\tilde\Gamma_\pm\,
{\rm e}^{-{1\over 2}\left(\varphi\pm{i\over 2}\xi\right)}
-\eta^\prime_{\pm\bar z}C^{\bar z\prime},\\
\bar\Gamma_{\pm z}^\prime=\bar\Gamma_{\pm z}{\rm e}^
{{3\over 2}\varphi\pm{i\over 4}\xi},&\quad
\bar\Gamma_{\bar z}^{\pm\prime}=\bar\Gamma_{\bar z}^\pm{\rm e}^
{{3\over 2}\varphi\pm{i\over 4}\xi},\\
\end{array}
\end{eqnarray}
which are also allowed in the functional integral, since the
Jacobian determinant is one.
The gauge-fixed versions of the gravitini curvatures give
\begin{equation}
\tau^++M\eta_{-\bar z}=-{\rm e}^{-{3\over 2}\varphi+
{i\over 4}\xi}
\partial_{\bar z}\eta^{+\prime}_z,
\end{equation}
and similar relations, which provide gauge-fixed
formul\ae\ for the supercovariantized derivatives.
On the other hand, the field equation of $\omega$ gives expressions
for $\pi_z$ and $\pi_{\bar z}$, which will be useful for computing the BRST
charge ${\cal Q}_{BRST}$. An alternative way of finding $\pi_z$
and $\pi_{\bar z}$ is to impose that ${\cal Q}_{BRST}$ be independent
of $C^0$.
With $\pi=1/2 \, (X+\bar X)$ and
$\chi=i/2 \, (X-\bar X)$, we have
\begin{eqnarray}
{\cal L}_1+s\Psi_{diff}+s\Psi_{susy}&=&
-4\pi\partial_{z} \partial_{\bar z}\varphi
+\chi(\partial_{\bar z} A_z-\partial_{z} A_{\bar z})
-2\lambda^{\prime}_+\partial_{\bar z}\eta^{+\prime}_z
+2\lambda^{\prime}_-\partial_{\bar z}\eta^{-\prime}_z\nonumber\\&&
-2\tilde\lambda^{+\prime}\partial_{z} \eta^{\prime}_{+\bar z}
+2\tilde\lambda^{-\prime}\partial_{z}\eta^{\prime}_{-\bar z}
+\bar C_{zz}^\prime\partial_{\bar z} C ^{z\prime}
-\bar C_{\bar z\bar z}^\prime\partial_{z} C^{\bar z\prime}\nonumber\\&&+
\bar\Gamma_{+z}^\prime\partial_{\bar z}\Gamma^{+\prime}
+\bar\Gamma_{-z}^\prime\partial_{\bar z}\Gamma^{-\prime}
-\bar\Gamma_{\bar z}^{+\prime}\partial_{z} \tilde\Gamma_+^\prime
-\bar\Gamma_{\bar z}^{-\prime}\partial_{z} \tilde\Gamma_-^\prime
\nonumber\\
&+&{i\over 4}(A_z-\partial_{z} \xi)(2\tilde\lambda^{+\prime}
\eta^{\prime}_{+\bar z}+2\tilde\lambda^{-\prime}\eta^{\prime}_{-\bar z}
+\bar\Gamma^{+\prime}_{\bar z}\tilde\Gamma_+^\prime-
\bar\Gamma_{\bar z}^{-\prime}\tilde\Gamma_-^\prime)
\nonumber\\&-&{i\over 4}(A_{\bar z}+\partial_{\bar z}\xi)
(2\lambda^{\prime}_+
\eta^{+\prime}_z+2\lambda^{\prime}_-\eta^{-\prime}_z-
\bar\Gamma_{+z}^\prime\Gamma^{+\prime}+
\bar\Gamma_{-z}^\prime\Gamma^{-\prime}).
\end{eqnarray}
Now, it remains to deal with the term $s\Psi_{gauge}$. We find
\begin{eqnarray}
s\Psi_{gauge}&=&P_z(A_{\bar z}-\partial_{\bar z}\xi)-
P_{\bar z}(A_z-\partial_{z} \xi)\nonumber\\&&
-\bar C_z[\partial_{\bar z}(C+C^\prime)+C^{z\prime}
(\partial_{z} A_{\bar z} -\partial_{\bar z} A_z)
+\partial_{\bar z}\eta^{+\prime}_z\Gamma^{-\prime}-\partial_{\bar z}
\eta^{-\prime}_z
\Gamma^{+\prime}]\nonumber\\&&
+\bar C_{\bar z}[\partial_{z} (C-C^\prime)-C^{\bar z\prime}(
\partial_{z} A_{\bar z}-\partial_{\bar z} A_z)
+\partial_{z}\eta^{\prime}_{-\bar z}\tilde\Gamma_+^\prime
-\partial_{z}
\eta^{\prime}_{+\bar z}\tilde \Gamma^{\prime}_-]
\nonumber\\&&+{i\over 4}\bar C_{\bar z}(A_z-\partial_{z}\xi)(
\eta^{\prime}_{-\bar z}\tilde\Gamma_+^\prime
+\eta^{\prime}_{+\bar z}\tilde\Gamma_-^\prime)
\nonumber\\&&-{i\over 4}\bar C_z
(A_{\bar z}+\partial_{\bar z}\xi)(\eta^{+\prime}_z\Gamma^{-\prime}
+\eta^{-\prime}_z\Gamma^{+\prime}).
\end{eqnarray}
Integrating away both $P_z$, $P_{\bar z}$ and $A_z$, $A_{\bar z}$,
we can use the corresponding field equations.
Defining
\begin{eqnarray}
\begin{array}{ll}
c=C+C^\prime-2C^{z\prime}\partial_{z} \xi+\eta^{+\prime}_z
\Gamma^{-\prime}-\eta^{-\prime}_z\Gamma^{+\prime},&\quad\quad
\bar C_z=b_z,\\
\bar c=C-C^\prime+2C^{\bar z\prime}\partial_{\bar z} \xi-
\eta^{\prime}_{+\bar z}
\tilde\Gamma_-^\prime+\eta^{\prime}_{-\bar z}\tilde\Gamma_+^\prime,&
\quad\quad
\bar C_{\bar z}=b_{\bar z},
\end{array}\nonumber\\
\begin{array}{ll}
\bar C_{zz}^{\prime\prime}=
\bar C_{zz}^\prime-2b_z\partial_{z} \xi,&\quad
\bar C_{\bar z\bar z}^{\prime\prime}=
\bar C_{\bar z\bar z}^\prime+2b_{\bar z}\partial_{\bar z}\xi,\quad\quad\quad\\
\bar\Gamma_{+z}^{\prime\prime}=\bar\Gamma_{+z}^\prime-b_z
\eta^{-\prime}_z,&\quad
\bar\Gamma_{-z}^{\prime\prime}=\bar\Gamma_{-z}^\prime+b_z
\eta^{+\prime}_z,\quad\quad\quad\\
\bar\Gamma_{\bar z}^{+\prime\prime}=\bar\Gamma_{\bar z}^{+\prime}+
b_{\bar z}\eta^{\prime}_{-\bar z},&\quad
\bar\Gamma_{\bar z}^{-\prime\prime}=\bar\Gamma_{\bar z}^{-\prime}-
b_{\bar z}\eta^{\prime}_{+\bar z},\quad\quad\quad\\
\end{array}
\label{s2}
\end{eqnarray}
the total gauge-fixed Poincar\'e Lagrangian takes the form
\begin{eqnarray}
{\cal L}_{Poincar\acute e}&=&-4\pi\partial_{z} \partial_{\bar z}\varphi+
2\chi\partial_{z} \partial_{\bar z}\xi
-2\lambda^{\prime}_+\partial_{\bar z}\eta^{+\prime}_z
+2\lambda^{\prime}_-\partial_{\bar z}\eta^{-\prime}_z\nonumber\\&&
-2\tilde\lambda^{+\prime}\partial_{z} \eta^{\prime}_{+\bar z}
+2\tilde\lambda^{-\prime}\partial_{z} \eta^{\prime}_{-\bar z}
+\bar C_{zz}^{\prime\prime}\partial_{\bar z} C^{z\prime}
-\bar C_{\bar z\bar z}^{\prime\prime}\partial_{z} C^{\bar z\prime}
-b_z\partial_{\bar z} c+b_{\bar z} \partial_{z}\bar c
\nonumber\\&&
+\bar\Gamma_{+z}^{\prime\prime}\partial_{\bar z}\Gamma^{+\prime}
+\bar\Gamma_{-z}^{\prime\prime}\partial_{\bar z}\Gamma^{-\prime}
-\bar\Gamma_{\bar z}^{+\prime\prime}\partial_{z} \tilde\Gamma_+^\prime
-\bar\Gamma_{\bar z}^{-\prime\prime}\partial_{z} \tilde\Gamma_-^\prime.
\label{lapoi}
\end{eqnarray}
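The vanishing of the total central charge announced at the beginning of
this section can be read off directly from (\ref{lapoi}). Recall the
standard result that a first-order pair of weights $(\lambda,1-\lambda)$
contributes $c=\mp 2(6\lambda^2-6\lambda+1)$, with the upper sign for
anticommuting and the lower sign for commuting fields; the supersymmetry
ghosts $\Gamma$ are commuting, all other first-order pairs are
anticommuting, and each off-diagonally coupled scalar pair such as
$(\pi,\varphi)$ contributes $c=2$. In the left-moving sector,
\begin{eqnarray}
\begin{array}{lll}
(\bar C^{\prime\prime}_{zz},C^{z\prime}): &\lambda=2, &c=-26,\cr
(b_z,c): &\lambda=1, &c=-2,\cr
(\bar\Gamma^{\prime\prime}_{\pm z},\Gamma^{\pm\prime}):
&\lambda={3\over 2}, &c=+11\,\,{\rm each},\cr
(\lambda^\prime_\pm,\eta^{\pm\prime}_z):
&\lambda={1\over 2}, &c=+1\,\,{\rm each},\cr
(\pi,\varphi)\,\,{\rm and}\,\,(\chi,\xi): & &c=+2\,\,{\rm each},
\end{array}
\end{eqnarray}
so that the ghosts give $-26-2+22=-6$ and the Liouville sector gives
$2+2+2=+6$, summing to zero.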
Again, the Jacobian determinant
corresponding to (\ref{s2}) is one.
It is natural to conjecture that
Poincar\'e N=2 supergravity corresponds to an N=2 superconformal
field theory. To show that this is indeed the case, we compute
the energy-momentum tensor $T_{zz}$, the supercurrents
$G_{+z}$ and $G_{-z}$, and the $U(1)$ current $J_z$.
In order to do this, we first compute the BRST charge
${\cal Q}^{BRST}=\oint J_z^{BRST}dz$,
$J^{BRST}_z$ denoting the left moving BRST current.
Acting with ${\cal Q}^{BRST}$ on the various antighost fields,
it is then simple to derive the ``gauge-fixings'', which are, in our
case, the N=2 currents. Since the BRST symmetry
is a global symmetry and the gauge-fixed action is BRST-invariant,
the BRST current $J^{BRST}$ can be found by performing a {\sl local}
BRST transformation $\delta_{BRST}=\kappa(x)\, s$, where $s$ is
the BRST operator and $\kappa (x)$ is a point-dependent ghost number
$-1$ scalar parameter. The variation of ${\cal L}$ can
then be expressed as
\begin{equation}
\delta_{BRST}{\cal L}=*d\kappa\wedge J^{BRST}=
(\partial_z\kappa \,\,J_{\bar z}^{BRST}+\partial_{\bar z}\kappa \,\,
J_z^{BRST})\, dz
\wedge d \bar z.
\end{equation}
As anticipated, expressions for $\pi_z$ and $\pi_{\bar z}$ can be found
by requiring the independence of $J^{BRST}$ from $C^0$. In particular,
one finds
\begin{eqnarray}
\pi_z{\rm e}^\varphi &=&-2\partial_z\pi+
\lambda_-^\prime\eta_z^{-\prime}
-\lambda_+^\prime\eta_z^{+\prime}+\bar C_{zz}^\prime C^{z\prime}
-{1\over 2}b_z(\eta_z^{+\prime}\Gamma^{-\prime}-\eta_z^{-\prime}
\Gamma^{+\prime})
\nonumber\\&&
+{1\over 2}(\bar \Gamma_{+z}^\prime\Gamma^{+\prime}+
\bar \Gamma_{-z}^\prime\Gamma^{-\prime})-
(\bar \Gamma_{+z}^\prime\eta^{+\prime}_z+
\bar \Gamma_{-z}^\prime\eta^{-\prime}_z)C^{z\prime}.
\end{eqnarray}
We separate the Liouville and ghost sectors by writing
\begin{equation}
\matrix{T_{zz}=T_{zz}^{grav}+T_{zz}^{gh},&J_z=J_{z}^{grav}+J_{z}^{gh},\cr
G_{+z}=G_{+z}^{grav}+G_{+z}^{gh},&G_{-z}=G_{-z}^{grav}+G_{-z}^{gh},}
\end{equation}
and similarly for the complex conjugates $T_{\bar z\bar z}$, $J_{\bar z}$,
$\bar G_{\bar z}^+$ and $\bar G_{\bar z}^-$.
Let us first focus on the Liouville sector. We have, on shell
and up to total derivative terms,
\begin{eqnarray}
J_z^{BRST\, grav}&=&-C^{z\prime}T_{zz}^{grav}-{i\over 4}
cJ_{z}^{grav}+
{i\over 2}\Gamma^{+\prime}G_{+z}^{grav}+
{1\over 2}\Gamma^{-\prime}G_{-z}^{grav},
\end{eqnarray}
where, after a simple redefinition of the fields
\begin{eqnarray}
\begin{array}{llll}
\varphi\rightarrow \varphi,\quad&\quad \pi\rightarrow {1\over 4}
\pi,\quad&\quad\xi\rightarrow -2i\xi,\quad&\quad\chi\rightarrow
{i\over 4}\chi,\cr
\lambda^{\prime}_+\rightarrow -{i\over 4}\lambda_+,\quad&\quad
\lambda_-^\prime\rightarrow {1\over 4}\lambda_-,\quad&\quad
\eta^{+\prime}_z\rightarrow 2i\eta^+_z,\quad&\quad
\eta^{-\prime}_z\rightarrow 2 \eta^-_z,
\end{array}
\end{eqnarray}
the N=2 currents are written as
\begin{eqnarray}
T_{zz}^{grav}&=&-\partial_{z} \pi\partial_{z}\varphi+
{1\over 2}\partial_{z}^2\pi+\partial_{z}
\chi\partial_{z}\xi+{1\over 2}(\partial_{z}\lambda_-\eta^-_z-
\lambda_-\partial_{z}\eta^-_z)+
{1\over 2}(\lambda_+\partial_{z}\eta^+_z-\partial_{z}\lambda_+
\eta^+_z),\nonumber\\
G_{+z}^{grav}&=&\partial_{z}\lambda_+-\lambda_+\partial_{z}(\varphi+\xi)
+\eta^-_z\partial_{z}(\chi+\pi),\nonumber\\
G_{-z}^{grav}&=&\partial_{z}\lambda_--\lambda_-\partial_{z}(\varphi-\xi)
+\eta^+_z\partial_{z}(\chi-\pi),\nonumber\\
J_{z}^{grav}&=&\partial_{z}\chi-\lambda_-\eta^-_z-\lambda_+\eta^+_z.
\label{6.30}
\end{eqnarray}
The background charge term ${1\over 2}\,\partial_{z}^2\pi$ has
no influence on the central charge, which turns out to be $c_{grav}=6$.
This implies that there is also an N=4 conformal symmetry, according
to the analysis of \cite{ALE}.
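For the reader's convenience, let us spell out the standard free field
bookkeeping behind $c_{grav}=6$: each of the two boson pairs
$(\pi,\varphi)$ and $(\chi,\xi)$ contributes $c=2$, while each of the
two weight ${1\over 2}$ fermion pairs $(\lambda_\pm,\eta^\pm_z)$
contributes $c=1$,
\begin{equation}
c_{grav}=2+2+1+1=6.
\end{equation}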
The fundamental operator product expansions are normalized as follows:
\begin{eqnarray}
\begin{array}{ll}
\partial_{z}\pi(z)\partial_{w}\varphi(w)=-{1\over (z-w)^2},&\quad
\partial_{z}\chi(z)\partial_{w}\xi(w)={1\over (z-w)^2},\\
\lambda_+(z)\eta^+_w(w)=-{1\over z-w},&\quad
\lambda_-(z)\eta^-_w(w)={1\over z-w}.
\end{array}
\end{eqnarray}
It is easy to check that the N=2 operator product
expansions are indeed satisfied by
(\ref{6.30}).
Before going on, let us dwell for a moment on
the above N=2 $c=6$ superconformal algebra and discuss
some of its features. First of all, notice that it
can be decomposed into the
direct sum of two $N=2$ superconformal algebr\ae\ with
central charges $c_1=3/2$ and $c_2=9/2$. We have
\begin{eqnarray}
\matrix{T_{zz}^{grav}=T_{zz}^{(1)}+T_{zz}^{(2)},&
J_{z}^{grav}=J_z^{(1)}+J_z^{(2)},\cr
G_{+z}^{grav}=G_{+z}^{(1)}+G_{+z}^{(2)},&
G_{-z}^{grav}=G_{-z}^{(1)}+G_{-z}^{(2)},}
\end{eqnarray}
where
\begin{eqnarray}
T_{zz}^{(1)}&=&\partial_{z} \varphi_1\partial_{z}
\varphi_1^\prime+{1\over 4}\partial_{z}^2(\varphi_1+
\varphi_1^\prime)+
{1\over 2}(\partial_{z}\lambda_+^{(1)}\lambda_-^{(1)}-\lambda_+^{(1)}
\partial_{z}\lambda_-^{(1)}),\nonumber\\
G_{+z}^{(1)}&=&{1\over \sqrt{2}}
(\partial_{z}\lambda_+^{(1)}+2\lambda_+^{(1)}\partial_{z}\varphi_1),
\quad\quad\quad
G_{-z}^{(1)}={1\over \sqrt{2}}
(\partial_{z}\lambda_-^{(1)}+2\lambda_-^{(1)}\partial_{z}
\varphi_1^\prime),\nonumber\\
J_z^{(1)}&=&{1\over 2}\partial_{z}(\varphi_1-\varphi_1^\prime)
+\lambda_+^{(1)}\lambda_-^{(1)},
\end{eqnarray}
and
\begin{eqnarray}
T_{zz}^{(2)}&=&-\partial_{z} \varphi_2\partial_{z}
\varphi_2^\prime-{1\over 4}\partial_{z}^2
(\varphi_2+
\varphi_2^\prime)-
{1\over 2}(\partial_{z}\lambda_+^{(2)}\lambda_-^{(2)}-\lambda_+^{(2)}
\partial_{z}\lambda_-^{(2)}),\nonumber\\
G_{+z}^{(2)}&=&{1\over \sqrt{2}}
(\partial_{z}\lambda_+^{(2)}+2\lambda_+^{(2)}\partial_{z}\varphi_2),
\quad\quad\quad G_{-z}^{(2)}={1\over \sqrt{2}}
(\partial_{z}\lambda_-^{(2)}+2\lambda_-^{(2)}\partial_{z}\varphi_2^\prime),
\nonumber\\
J_z^{(2)}&=&-{1\over 2}\partial_{z}(\varphi_2-\varphi_2^\prime)
-\lambda_+^{(2)}\lambda_-^{(2)}.
\end{eqnarray}
The correspondence with the previous fields is
\begin{equation}
\matrix{
\varphi_1=-{1\over 2}(\varphi+\xi-\chi-\pi),&
\varphi_1^\prime=-{1\over 2}(\varphi-\xi+\chi-\pi),\cr
\lambda_-^{(1)}={1\over \sqrt{2}}(\lambda_--\eta^+_z),&
\lambda_+^{(1)}={1\over \sqrt{2}}(\lambda_++\eta^-_z),\cr
\varphi_2=-{1\over 2}(\varphi+\xi+\chi+\pi),&
\varphi_2^\prime=-{1\over 2}(\varphi-\xi-\chi+\pi),\cr
\lambda_-^{(2)}={1\over \sqrt{2}}(\lambda_-+\eta^+_z),&
\lambda_+^{(2)}={1\over \sqrt{2}}(\lambda_+-\eta^-_z),}
\end{equation}
Let us recall that unitary representations of the N=2 algebra with $c<
3$ are given by the minimal model series, where $c={3k\over k+2}$
($k\in {\bf N}$), so that our representation $T_{zz}^{(1)}$, $G_{+z}^{(1)}$,
$G_{-z}^{(1)}$,
$J_z^{(1)}$ corresponds to a free field realization of the $k=2$ minimal
model. The construction of these models \cite{N2GKO} as GKO \cite{GKO}
cosets of the $SU(2)$ level-$k$
supersymmetric Ka\v c-Moody algebra is well known. We also recall
that unitary representations of the N=2 algebra with $c>3$ have
been obtained as GKO cosets of the $SL(2,{\bf R})$ supersymmetric
Ka\v c-Moody algebra of level $k$, yielding the series $c={3k\over k-2}$
\cite{bars}.
We nickname these representations ``maximal models''.
Therefore, our $T_{zz}^{(2)}$, $G_{+z}^{(2)}$, $G_{-z}^{(2)}$,
$J_z^{(2)}$ representation is a free field realization of the $k=6$
``maximal model''.
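As a simple consistency check, the two central charges indeed add up
to $c_{grav}=6$:
\begin{equation}
c_1={3k\over k+2}\bigg|_{k=2}={3\over 2},\qquad
c_2={3k\over k-2}\bigg|_{k=6}={9\over 2},\qquad
c_1+c_2=6.
\end{equation}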
Moreover, it is easy to check that the N=2 $c=6$
superconformal algebra (\ref{6.30}) can also be decomposed into the
direct sum of two N=2 $c=3$ superconformal algebr\ae. They correspond
to the subsets of fields $\{(\varphi+\xi)/\sqrt{2},(\chi-\pi)
/\sqrt{2},\lambda_+,\eta^+_z\}$ and
$\{(\varphi-\xi)/\sqrt{2},(\chi+\pi)
/\sqrt{2},\lambda_-,\eta^-_z\}$.
Now, let us come to the ghost currents. We proceed as before, namely,
we first determine the BRST current $J_z^{BRST\, gh}$ from
a local BRST variation of the action and then act with
${\cal Q}^{gh}_{BRST}=\oint J_z^{BRST\, gh}$ on
the antighost fields. One finds
\begin{eqnarray}
J_z^{BRST\, gh}&=&-\partial_{z} C^{z\prime}
\bar C_{zz}^{\prime\prime}C^{z\prime}-{1\over 2}
\partial_{z} C^{z\prime}(\bar\Gamma_{+z}^{\prime\prime}\Gamma^{+\prime}
+\bar\Gamma_{-z}^{\prime\prime}\Gamma^{-\prime})+
C^{z\prime}(\bar\Gamma_{+z}^{\prime\prime}\partial_{z}\Gamma^{+\prime}+
\bar\Gamma_{-z}^{\prime\prime}\partial_{z}\Gamma^{-\prime})
\nonumber\\&&-
{i\over 2}\bar C_{zz}^{\prime\prime}\Gamma^{+\prime}\Gamma^{-\prime}
-{i\over 4}c
(\bar\Gamma_{+z}^{\prime\prime}
\Gamma^{+\prime}-\bar\Gamma_{-z}^{\prime\prime}
\Gamma^{-\prime})+b_zC^{z\prime}\partial_{z} c
\nonumber\\&&
+b_z(\partial_{z}\Gamma^{+\prime}\Gamma^{-\prime}-\Gamma^{+\prime}
\partial_{z}\Gamma^{-\prime}).
\label{arr}
\end{eqnarray}
After the replacements
\begin{equation}
b_z\rightarrow -1/2 \, i b_z,\quad \quad c\rightarrow 2 ic
\end{equation}
and the redefinitions
\begin{equation}
\matrix{\bar C_{zz}^{\prime\prime}= -b_{zz},&C^{z\prime}
= c^z ,\cr
\bar\Gamma_{+z}^{\prime\prime}= -i\beta_{+z},&\Gamma^{+\prime}
= -i\gamma^+,\cr
\bar\Gamma_{-z}^{\prime\prime}=\beta_{-z},&\Gamma^{-\prime}
= - \gamma^-,}
\end{equation}
one gets
\begin{eqnarray}
T_{zz}^{gh}&=&2b_{zz}\partial_{z} c^z +\partial_{z} b_{zz} c^z +{3\over 2}
\beta_{+z}\partial_{z}\gamma^++{1\over 2}\partial_{z}\beta_{+z}\gamma^++
{3\over 2}
\beta_{-z}\partial_{z}\gamma^-+{1\over 2}\partial_{z}\beta_{-z}\gamma^-+
b_z\partial_{z} c,\nonumber\\
G_{+z}^{gh}&=&3\beta_{+z}\partial_{z} c^z +2\partial_{z}\beta_{+z} c^z
-\gamma^-b_{zz}- \gamma^-\partial_{z} b_z-2 \partial_{z}\gamma^-b_z
-\beta_{+z} c,
\nonumber\\
G_{-z}^{gh}&=&3\partial_{z} c^z \beta_{-z}+2 c^z \partial_{z}\beta_{-z}
-b_{zz}\gamma^++\partial_{z} b_z \gamma^++2 b_z \partial_{z}
\gamma^++c\beta_{-z},
\nonumber\\
J_{z}^{gh}&=&\beta_{-z}\gamma^--\beta_{+z}\gamma^+-2\partial_{z}(b_z
c^z ).
\label{ghrepr}
\end{eqnarray}
The fundamental operator product expansions are
\begin{eqnarray}
\begin{array}{ll}
b_{zz}(z) c^w (w)=-{1\over z-w},&\quad
b_z(z)c(w)=-{1\over z-w},\\
\beta_{+z}(z)\gamma^+(w)={1\over z-w},&\quad
\beta_{-z}(z)\gamma^-(w)={1\over z-w}.\\
\end{array}
\end{eqnarray}
The ghost contribution to the central charge is $c_{gh}=-6$,
so that $c_{tot}=c_{grav}+c_{gh}=0$, as claimed.
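For completeness, the value $c_{gh}=-6$ follows from the standard
central charge of a first order system of weight $\lambda$, namely
$c=\mp(12\lambda^2-12\lambda+2)$ for anticommuting (upper sign) and
commuting (lower sign) statistics:
\begin{equation}
c_{gh}=-26+11+11-2=-6,
\end{equation}
where $-26$ comes from the weight 2 $b_{zz}$-$c^z$ system, $+11$ from
each of the weight ${3\over 2}$ $\beta$-$\gamma$ systems and $-2$ from
the weight 1 $b_z$-$c$ system.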
Notice that $\beta$ and $\gamma$ commute among themselves, but
anticommute with $b$ and $c$. This is because they carry an odd ghost number
together with an odd fermion number, while $b$ and $c$ carry
zero fermion number and odd ghost number.
In the usual convention, instead, $\beta$ and $\gamma$
commute with everything. The above
currents satisfy the usual N=2 superconformal algebra with both
conventions. This is because we chose an {\sl ad hoc}
ordering between the fields when writing down the supercurrents,
namely $\beta$-$\gamma$ before $b$-$c$ in $G_{+z}^{gh}$ and {\sl vice
versa} in
$G_{-z}^{gh}$. In all other manipulations the double grading
should be taken into account. Only after the topological twist the
fermion number grading is absent (things are correctly arranged by
the broker \cite{ansfre,ansfre2}).
Notice that the ghost sector is made of two
$b$-$c$-$\beta$-$\gamma$ N=1
systems, namely
a system $b_{zz}$-$c^z$-$\beta_{-z}$-$\gamma^-$ with weight
$\lambda_{\beta_{-z}}=3/2$ and a system
$c$-$b_z$-$\gamma^+$-$\beta_{+z}$
with weight $\lambda_{\gamma^+}=-1/2$. These systems
also possess, as is well known, an accidental N=2 symmetry
\cite{gliozzi}.
However the standard representation of
the N=2 $c=6$ superconformal algebra made of these
$b$-$c$-$\beta$-$\gamma$ N=1 systems does not coincide with
(\ref{ghrepr}).
It is interesting to notice that in the new notation, one has
\begin{equation}
J_z^{BRST\, grav}=-c^zT_{zz}^{grav}+{1\over 2}c J_{z}^{grav}
+{1\over 2}\gamma^+{G}_{+z}^{grav}
-{1\over 2}\gamma^-{G}_{-z}^{grav},
\label{star}
\end{equation}
while (\ref{arr}) gives
\begin{eqnarray}
J_z^{BRST\, gh}&=&\partial_{z} c^zb_{zz}c^z+{1\over 2}\partial_{z} c^z
(\beta_{+z}\gamma^++\beta_{-z}\gamma^-)-c^z
(\beta_{+z}\partial_{z}\gamma^++\beta_{-z}\partial_{z}\gamma^-)
-{1\over 2}b_{zz}\gamma^+\gamma^-\nonumber\\&&
-{1\over 2}c(\beta_{+z}\gamma^+-\beta_{-z}\gamma^-)+b_zc^z\partial_{z} c
+{1\over 2}b_z(\partial_{z}\gamma^+\gamma^--\gamma^+
\partial_{z}\gamma^-)\nonumber\\&=&
{1\over 2}\left(-c^zT_{zz}^{gh}+{1\over 2}c J_{z}^{gh}
+{1\over 2}\gamma^+{G}_{+z}^{gh}
-{1\over 2}\gamma^-{G}_{-z}^{gh}\right),
\end{eqnarray}
a formula that is analogous to (\ref{star}),
with the usual ${1\over 2}$ overall factor.
We have omitted some total derivative terms, which are immaterial as far
as ${\cal Q}_{BRST}$ is concerned. Thus
we can also write
\begin{equation}
J_z^{BRST}=-c^z{\cal T}_{zz}
+{1\over 2}c{\cal J}_{z}
+{1\over 2}\gamma^+{\cal G}_{+z}-{1\over 2}\gamma^-
{\cal G}_{-z},
\end{equation}
where
\begin{equation}
\matrix{
{\cal T}_{zz}=T_{zz}^{grav}+{1\over 2}T_{zz}^{gh},&
{\cal J}_{z}=J_{z}^{grav}+{1\over 2}J_{z}^{gh},\cr
{\cal G}_{+z}={G}_{+z}^{grav}+{1\over 2}{G}_{+z}^{gh},&
{\cal G}_{-z}=G_{-z}^{grav}+{1\over 2}G_{-z}^{gh}.}
\end{equation}
The ghost number charge is
\begin{equation}
{\cal Q}_{gh}=\oint b_{zz}c^z+\beta_{+z}\gamma^++\beta_{-z}\gamma^-+b_zc,
\end{equation}
so that ${\cal Q}_{BRST}=\oint J_z^{BRST}$
has ghost number one:
\begin{equation}
[{\cal Q}_{gh},{\cal Q}_{BRST}]={\cal Q}_{BRST}.
\label{exa}
\end{equation}
In the new notation the Lagrangian (\ref{lapoi}) is written as
\begin{eqnarray}
{\cal L}_{Poincar\acute e}&=&
-\pi\partial_{z}\partial_{\bar z}\varphi+\chi\partial_{z}\partial_{\bar z}\xi+
\lambda_-\partial_{\bar z}\eta^-_z-\lambda_+\partial_{\bar z}\eta^+_z
\nonumber\\&&+\tilde\lambda^-\partial_{z}\eta_{-\bar z}-\tilde
\lambda^+\partial_{z}\eta_{+\bar z}-b_{zz}\partial_{\bar z} c^z
+b_{\bar z\bar z}\partial_{z} c^{\bar z}\nonumber\\&&
-\beta_{+z}\partial_{\bar z}\gamma^+-\beta_{-z}\partial_{\bar z}\gamma^-
+\beta_{+\bar z}\partial_{z}\tilde\gamma_++
\beta_{-\bar z}\partial_{z}\tilde\gamma_-
-b_z\partial_{\bar z} c+b_{\bar z} \partial_{z} \bar c.
\label{poinc}
\end{eqnarray}
Let us make a comment about the addition of the kinetic Lagrangian
for the dilaton supermultiplet (see section \ref{liouville}
for an analogous
comment before gauge-fixing). Formula (\ref{matterlagr})
with the index $I$ restricted
only to the value $0$ and with vanishing superpotential $W$ gives,
after gauge-fixing,
\begin{equation}
{\cal L}_{kin}=-{1\over 2}\partial_z\pi\partial_{\bar z}\pi+{1\over 2}
\partial_z\chi\partial_{\bar z}\chi+\lambda_-\partial_{\bar z}\lambda_+
+\tilde\lambda^-\partial_z\tilde\lambda^+
\end{equation}
(a convenient overall numerical factor has been chosen).
It is immediate to see that the redefinitions
\begin{eqnarray}
\begin{array}{ll}
\varphi\rightarrow \varphi-{1\over 2}\pi,&\quad\xi
\rightarrow\xi-{1\over 2}\chi,\\
\eta^\pm_z\rightarrow\eta_z^\pm\mp{1\over 2}\lambda_\mp,&\quad
\eta_{\pm\bar z}\rightarrow\eta_{\pm\bar z}\mp{1\over 2}\tilde\lambda^{\mp},
\end{array}
\end{eqnarray}
turn ${\cal L}_{Poincar\acute e}$ into
${\cal L}_{Poincar\acute e}+{\cal L}_{kin}$.
This completes the program of studying the N=2 algebra associated with
the gauge-fixed Poincar\'e Lagrangian. Before making the topological
twist, we make some comments on the structure of moduli space, zero
modes
and amplitudes.
The global degrees of freedom of the metric are the $3(g-1)$
moduli $m_i$, that are in one-to-one correspondence with
the $3(g-1)$ zero
modes of the spin 2 antighost $b_{zz}$.
There are $g$ ($={\rm dim}\, H^1$) global degrees of freedom (moduli)
$\nu^j$ of the
graviphoton $A$. These moduli are in one-to-one correspondence with the
$g$ zero modes of the spin 1 antighost $b$.
The total number of moduli is thus $4g-3$.
The supermoduli $\hat m$, $\hat \nu$
can be counted by computing the number of zero modes of the
spin $3/2$ antighosts $\beta_{+z}$ and $\beta_{-z}$, that is $4g-4$, one less
than the total number of moduli.
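These zero mode counts follow from the Riemann--Roch theorem (a
standard fact, recalled here for convenience): on a genus $g\geq 2$
surface, a holomorphic field of weight $\lambda\geq {3\over 2}$ has
$(2\lambda-1)(g-1)$ zero modes, while a weight 1 field has the $g$
holomorphic differentials. Hence
\begin{equation}
\lambda=2:\ 3(g-1),\qquad \lambda={3\over 2}:\ 2(g-1)\ {\rm each},\qquad
\lambda=1:\ g,
\end{equation}
which reproduces the $3(g-1)$ metric moduli, the $4g-4$ supermoduli
and the $g$ graviphoton moduli quoted above.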
The field
$\xi$ describes the local degrees of freedom of $A$
that survive the $U(1)$ gauge-fixing,
in the same way as
$\varphi$ describes the local degrees of freedom of
the metric that survive the gauge-fixing of diffeomorphisms.
Let us write explicitly the form of the amplitudes of the
N=2 Liouville theory:
\begin{eqnarray}
<{\cal O}_1\cdots {\cal O}_n>&=&\int d\Phi
\int_{{\cal M}_g}\prod_{i=1}^{3g-3}dm_id\bar m_i
\int_{{{\bf C}^g / \Lambda}}\prod_{j=1}^gd\nu_jd\bar\nu_j
\int d\hat m d\hat\nu
\nonumber\\&&
\times
\prod_{i=1}^{3g-3}
<\mu^{iz}_{\phantom{i}\bar z}|b_{zz}><\mu^{i\bar z}_{\phantom{i}z}
|b_{\bar z\bar z}>
\prod_{j=1}^g
<\omega^j_{\bar z}|b_{z}><\omega^j_z|b_{\bar z}>\nonumber\\&&\times
\prod_{k=1}^{2g-2}
\delta(<\zeta^{-k}_{\bar z}|\beta_{-z}>)
\delta(<\tilde \zeta^{-k}_z|\beta_{-\bar z}>)
\delta(<\zeta^{+k}_{\bar z}|\beta_{+z}>)
\delta(<\tilde \zeta^{+k}_z|\beta_{+\bar z}>)
\nonumber\\&&
\times M(\lambda,\eta)
c(z_0)\bar c(\bar z_0)\,\prod_i{\rm e}^{{q_i\over 2}\tilde\pi(z_i)}\,
{\rm e}^{-S(m,\hat m,\nu,\hat\nu)}{\cal O}_1\cdots {\cal O}_n.
\label{n=2amplitudes}
\end{eqnarray}
Let us explain the notation.
i) $\mu^{iz}_{\phantom{i}\bar z}$ denote the $3g-3$ Beltrami differentials,
while $\mu^{i\bar z}_z$ are their complex conjugates.
$<\mu^z_{\bar z}|b_{zz}>=\int_{\Sigma_g}
\mu^z_{\bar z}b_{zz}d^2z$ are the correct insertions that
take care of the $b_{zz}$ zero modes \cite{dhoker} and, at the same time,
take into account the Jacobian determinant (Beltrami differential)
coming from the change of variables $dg_{\mu\nu}\rightarrow \prod_i
dm_id\bar m_i$.
ii) $\zeta^{\pm k}_{\bar z}$ and $\tilde \zeta^{\pm k}_z$ are
the super Beltrami differentials.
The integration $\int d\hat m d\hat \nu$
over supermoduli (which usually produces the
supercurrent insertions) is not performed explicitly, since $S(m,
\hat m,\nu,\hat\nu)$ depends nontrivially on them and, moreover,
the observables ${\cal O}_i$ may depend on them.
iii) $\omega^j_z$ ($\omega^j_{\bar z}$) are the (anti-)holomorphic
differentials parametrizing
the global degrees of freedom of the graviphoton $A$.
The $g$-dimensional moduli space
of $A$ is the Jacobian variety \cite{kra}
${{\bf C}^g\over \Lambda}$, $\Lambda={\bf Z}^g+\Omega
{\bf Z}^g$ denoting the lattice $\nu_j\approx
\nu_j+n_j+m_k\Omega_{kj}$, $n_j$ and
$m_k$ being integer numbers and $\Omega_{jk}$ being the period matrix.
The reason why one has to restrict the integration of the $U(1)$
moduli to the unit cell is the same as the one that enforces the
restriction of the integration on metrics to the proper moduli space
$T/\Gamma$, $T$ denoting the Teichm\"uller space and $\Gamma$ the
mapping class group. Indeed, ${\bf C}^g$ is the Teichm\"uller space
parametrizing the deformations of $A$ that are orthogonal to gauge
transformations. There are, however, gauge
transformations that are not connected to the identity,
and their homotopy classes are in one-to-one correspondence
with the Jacobian lattice ${\bf Z}^g+\Omega {\bf Z}^g$.
The $U(1)$ gauge transformations that are
not connected to the identity correspond to
the shifts $\nu_j\rightarrow\nu_j+n_j+m_k\Omega_{kj}$.
For fixed Chern class, two gauge equivalent
$U(1)$ connections $A_1$ and $A_2$
are such that $A_{1}-A_{2}$ can be
written as $U^{-1}dU$ for $U={\rm e}^{i\phi}$, $\phi:\Sigma_g
\rightarrow S^1$ being a map that winds $n_j$ times around the $B_j$
cycles and $m_k$ times around the $A_k$ cycles.
iv) $M(\lambda,\eta)$ generically denotes the insertions that are necessary
in order to remove the zero modes of the $\eta$'s and the $\lambda$'s
(insertions that can be provided by the observables ${\cal O}_i$),
while $c(z_0)\bar c(\bar z_0)$ is the insertion for the (constant)
zero modes of $c$ and $\bar c$. It is understood that
the constant zero modes of $\pi$, $\varphi$, $\chi$ and $\xi$ are also
reabsorbed.
v) $d\Phi$ denotes the functional integration over the local degrees
of freedom of the fields. The action $S$ depends on the local degrees
of freedom of the fields, as well as on the moduli $m_i$,
$\nu_j$ and supermoduli $\hat m$ and
$\hat\nu$. The other fields
(Lagrange multipliers, auxiliary fields and some ghost
fields) are those that we have integrated away.
vi) ${\rm e}^{{q_i\over 2}\tilde\pi(z_i)}$ are the $\delta$-type
insertions that simulate the curvature $R$ such that
$\int_{\Sigma_g}R=2(1-g)$. $\tilde\pi$ is the BRST invariant
extension of $\pi$ and the $q_i$ satisfy the condition
\begin{equation}
\sum_i q_i=2(1-g).
\end{equation}
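In this normalization, the condition on the $q_i$ is simply the
Gauss--Bonnet theorem for $\Sigma_g$; for instance,
\begin{equation}
g=0:\ \sum_i q_i=2,\qquad g=1:\ \sum_i q_i=0,\qquad g=2:\ \sum_i q_i=-2.
\end{equation}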
One finds the solution (left moving part)
\begin{equation}
\tilde\pi=\pi+\chi-{2\over \gamma^+}c^z\lambda_- .
\label{tildepi}
\end{equation}
\section{The topological twist on the gauge-fixed theory}
\label{twist1}
In this section, we perform the topological twist on the N=2
gauge-fixed theory. We know that the formal set-up for the topological
twist is entirely encoded into the broker, which correctly changes the
spin, the ghost number and the BRST charge. However, in the gauge-fixed
conformal theory, as we show in a moment, we have more equivalent
possibilities. In particular, we can
adapt the formalism so as to make more direct
contact with the well-known
procedure \cite{eguchiyang} in conformal field theory
that consists of redefining the energy-momentum tensor by adding to it
the derivative of the $U(1)$ current \cite{topLG}.
To this end, we conveniently
separate the operation of changing the spin from the rest of the
twist procedure (the change of ghost number and BRST charge), which is still
performed by the broker.
In order to produce a twisted energy-momentum tensor equal to
$T_{zz}+{1\over 2}\partial_{z} J_z$ we can make
a redefinition of the ghost $c$ of the form
\begin{equation}
c^\prime=c-\partial_{z} c^z.
\label{redef1}
\end{equation}
Such a replacement, which changes the spin of the fields,
is to be viewed as a
redefinition of the $U(1)^\prime$ ghost $C^\prime$ rather
than the $U(1)$ ghost $C$, since the new spin is defined
(see section \ref{twist}) by adding
the $U(1)^\prime$ charge
(not the $U(1)$ charge) to the old spin.
$c^\prime$ has a nonvanishing operator product expansion with
$b_{zz}$ so that it is also necessary to redefine $b_{zz}$, namely
\begin{equation}
b_{zz}^\prime=b_{zz}-\partial_{z} b_{z},
\label{redef2}
\end{equation}
in order to preserve
the operator product expansions.
The spin changes can be read from table \ref{topotable} and justify
the following change in notation
\begin{equation}
\begin{array}{llll}
\eta_z^+\rightarrow\eta_z,\quad &\quad\lambda_+\rightarrow \lambda,
\quad &\quad \beta_{+z}\rightarrow \beta_z,\quad &\quad
\gamma^+\rightarrow \gamma,\\
\eta^-_{z}\rightarrow\eta,\quad &\quad\lambda_-\rightarrow \lambda_z,
\quad &\quad \beta_{-z}\rightarrow \beta_{zz},\quad &\quad
\gamma^-\rightarrow \gamma^z.
\end{array}
\end{equation}
Similarly, the supercurrents become
\begin{equation}
G_{+z}\rightarrow G_z,\quad\quad G_{-z}\rightarrow G_{zz}.
\end{equation}
Redefinitions (\ref{redef1}) and (\ref{redef2}) produce a new
BRST current $J_z^{\prime\, BRST}$ (equal to the old one apart from a
total derivative term) given by
\begin{equation}
J^{\prime\, BRST}_z=-c^z{\cal T}_{zz}^\prime
+{1\over 2}c^\prime {\cal J}_{z}
+{1\over 2}\gamma{\cal G}_{z}-{1\over 2}\gamma^z{\cal G}_{zz},
\end{equation}
where ${\cal T}_{zz}^\prime={\cal T}_{zz}+{1\over 2}\partial_z {\cal J}_z$.
As anticipated,
$J^{\prime\, BRST}_z$ generates
a new energy-momentum tensor (obtained by acting
with the new BRST charge on $b_{zz}^\prime$) equal to
\begin{eqnarray}
T_{zz}^\prime&=&T_{zz}+{1\over 2}\partial_{z} J_z
=-\partial_{z} \pi\partial_{z}\varphi+{1\over 2}\partial_{z}^2\pi+
\partial_{z}\chi\partial_{z}\xi+{1\over 2}\partial_{z}^2 \chi-\lambda_z
\partial_{z}\eta
-\partial_{z}\lambda \eta_z
\nonumber\\&&+2b^\prime_{zz}\partial_{z} c^z+\partial_{z} b_{zz}^\prime c^z
+2\beta_{zz}\partial_{z}\gamma^z+\partial_{z}\beta_{zz}\gamma^z
+\beta_{z}\partial_{z}\gamma+b_{z}\partial_{z} c^\prime.
\end{eqnarray}
{}From this expression, it is immediate to check the new spin assignments.
It is interesting to note that the total derivative term in
the $U(1)$ current $J_z^{gh}$ (\ref{ghrepr})
combines with redefinitions (\ref{redef1})
and (\ref{redef2}) to give the correct energy-momentum tensor
for the ghosts $T_{zz}^{\prime\, gh}$.
The other ingredient of the topological twist is the topological shift
\cite{ansfre}
\begin{equation}
\gamma\rightarrow \gamma+\alpha.
\label{redef3}
\end{equation}
Since the spin has already been changed by
(\ref{redef1}), (\ref{redef3}) does not change the spin
a second time. Indeed, the new spin of
$\gamma$ is zero, and so is that of $\alpha$.
Moreover, after the twist
$\gamma$ possesses a zero mode (the constant).
In this case, $\alpha$
represents a shift of the zero mode of $\gamma$.
(\ref{redef1}), (\ref{redef2}) and (\ref{redef3}) allow one to
move directly from the N=2 amplitudes to the amplitudes of the topologically
twisted theory\footnotemark
\footnotetext{However, one has to be careful about the zero modes and
the global degrees of freedom. Later on we shall come back to this point.}.
This shows that the amplitudes of the topological theory are a
subset of the amplitudes of the N=2 supersymmetric theory.
Nevertheless, the twisting procedure cannot be
described simply as a change of variables in the functional integral,
as (\ref{redef1}), (\ref{redef2}) and (\ref{redef3})
might seem to suggest,
since a mere change of variables could not alter the physical content
of the theory.
This is made apparent by the fact that before the twist $\gamma$
($\gamma^+$) has spin $-1/2$ and possesses no zero mode, so that
the choice $\alpha=$const is only consistent after redefining the spin
with (\ref{redef1})
and (\ref{redef2}), while in section \ref{twist} it was only
a formal device.
For convenience, as far as the gradings are concerned (ghost number
and fermion number), we use the same conventions as before twist
and leave the broker explicit. Thus $\alpha$ carries odd fermion number
and odd ghost number.
The topological shift (\ref{redef3})
produces a total BRST current equal to
\begin{equation}
J_z^{BRST\, tot}=J_z^{\prime BRST}+{1\over 2}\alpha {G}_{z},
\end{equation}
(again, a total derivative term has been omitted).
If we denote, as usual,
${\cal Q}_{BRST}=\oint J_z^{BRST\, tot}$, ${\cal Q}_v
=\oint J_z^{\prime\, BRST}dz$ and ${\cal Q}_s=\oint G_{z}dz$,
we see that the BRST charge is precisely shifted by the supersymmetry
charge ${\cal Q}_s$,
as explained in section \ref{twist}.
Let us now discuss some properties of the twisted theory.
It is convenient to write down the
${\cal Q}_s$ transformations of the fields,
whose action we denote by $\delta_s$:
\begin{equation}
\matrix{
\delta_s(\xi-\varphi)=2\eta,&
\delta_s\eta=0,&
\delta_s\lambda_z=\partial_{z}(\pi+\chi), &
\delta_s(\pi+\chi)=0,\cr
\delta_s(\pi-\chi)=2\lambda,&
\delta_s\lambda=0,&
\delta_s\eta_z=\partial_{z} (\xi+\varphi),&
\delta_s(\xi+\varphi)=0,\cr
\delta_s b_{zz}^\prime=0,&
\delta_s \beta_{zz}=-b_{zz}^\prime,&
\delta_s c^z=\gamma^z,&
\delta_s\gamma^z=0,\cr
\delta_s b_{z}=\beta_{z},&
\delta_s\beta_{z}=0,&
\delta_s c^\prime=0,&
\delta_s \gamma=c^\prime.}
\label{ation}
\end{equation}
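One can verify directly on (\ref{ation}) that $\delta_s$ is nilpotent;
for instance,
\begin{eqnarray}
&&\delta_s^2(\xi-\varphi)=2\delta_s\eta=0,\qquad
\delta_s^2\lambda_z=\delta_s\partial_{z}(\pi+\chi)=0,\nonumber\\
&&\delta_s^2\beta_{zz}=-\delta_s b_{zz}^\prime=0,\qquad\quad
\delta_s^2\gamma=\delta_s c^\prime=0.
\end{eqnarray}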
These transformations are the analogue, in the gauge-fixed case, of
the $\delta_T$ transformations (\ref{deltaT}) and (\ref{deltaT2}).
As explained in the previous section, there are two
$b$-$c$-$\beta$-$\gamma$ systems, rearranged in the
last two lines of (\ref{ation}). In particular, the last line
represents the sector of ${\cal B}_{gauge-fixing}$ that
is reminiscent of the constraint on the moduli space.
The last but one line represents the usual $b$-$c$-$\beta$-$\gamma$
ghost for ghost system
of topological gravity \cite{verlindesquare}.
It is evident
that the roles of $b$ and $\beta$ and the roles of $c$
and $\gamma$ are inverted in the two cases\footnotemark
\footnotetext{In ref.\ \cite{distler},
it is claimed that the topological twist of N=2 supergravity leads
to the Verlinde and Verlinde model. To obtain this, a certain reduction
mechanism in
the ghost sector is advocated,
corresponding to setting $\gamma=0$ and $c^\prime=0$.
The neglected sector is precisely the sector
that is responsible for the constraint on the moduli
space, i.e.\ the last line of (\ref{ation}), which makes the difference
between our model and the model of \cite{verlindesquare}.}.
The theory is topological, since the energy-momentum tensor $T_{zz}^\prime$
is a physically trivial left moving operator.
Indeed, recalling that $G_{zz}=-2\{{\cal Q}_v,
\beta_{zz}\}$, we have
\begin{equation}
\alpha T_{zz}^\prime=\{{\cal Q}, G_{zz}\}, \quad \quad \{{\cal Q}_v,
G_{zz}\}=0.
\end{equation}
In ref.\ \cite{eguchietal}, it is shown that a ``homotopy'' operator
$U$ can be defined in the Verlinde and Verlinde model of topological
gravity \cite{verlindesquare}, such that $U{\cal Q}_{BRST}U^{-1}={\cal Q}_s$.
This shows that ${\cal Q}_{BRST}$ and ${\cal Q}_s$ have the same spectra
and provides a ``matter'' representation of the gravitational
observables when the theory is coupled to a Landau-Ginzburg model.
We now find the operator $U$ in our case and study some of its
properties. This is not only useful for the future program of coupling
matter to constrained topological gravity, but also allows us
to define a third nilpotent operator, called $S$, that acts only
on the ghost sector and further highlights, in some sense,
the presence of the constraint on moduli space.
To begin with, it is straightforward to prove that
\begin{equation}
J_z^{\prime\, BRST}={1\over 2}\delta_s (\gamma{\cal J}_{z}-c^z{\cal G}_{zz}).
\label{eqo1}
\end{equation}
Moreover, one also has
\begin{equation}
{1\over 2}(\gamma{\cal J}_{z}-c^z{\cal G}_{zz})=\delta_v
(\gamma b_{z}-c^z\beta_{zz})+{1\over 4}\delta_s (c^z\gamma\beta_{zz}
+b_{z}\gamma\gamma),
\label{eqo2}
\end{equation}
where $\delta_v$ denotes the action of ${\cal Q}_v$.
Now, defining
\begin{equation}
\Theta={1\over 2}\oint(\gamma{\cal J}_{z}-c^z{\cal G}_{zz})-
{1\over 4}\delta_s \oint (c^z\gamma\beta_{zz}
+b_{z}\gamma\gamma),
\end{equation}
one can write
\begin{equation}
\matrix{
{\cal Q}_v=\{{\cal Q}_s,\Theta\},\quad &\quad
\{{\cal Q}_v,\Theta\}=0.}
\end{equation}
Thus, the desired ``homotopy'' operator is
\begin{equation}
U={\rm exp}\,\left(-2\alpha^{-1}\Theta\right)
\end{equation}
and we have
\begin{equation}
U{\cal Q}U^{-1}={1\over 2}\alpha{\cal Q}_s.
\end{equation}
In this way, it is possible to turn to the ``matter picture'', which
is also simpler from the computational point of view.
The key point, in the presence of matter coupling, is that
the condition of equivariant cohomology is correspondingly
changed \cite{eguchietal}.
Let us now introduce the operator $S$.
We have, from (\ref{eqo1}) and (\ref{eqo2}),
\begin{equation}
J_z^{\prime\, BRST}=\delta_v\delta_s(\gamma b_{z}-c^z\beta_{zz}).
\label{8.16}
\end{equation}
Due to $\delta_v J_z=0$,
we can also write
\begin{equation}
[{\cal Q}_{gh}^\prime,{\cal Q}_v]={\cal Q}_v,
\end{equation}
where
\begin{equation}
{\cal Q}_{gh}^\prime={\cal Q}_{gh}+\oint J_z=
\oint b_{zz}^\prime c^z+2\beta_{zz}\gamma^z+b_zc^\prime-\lambda_z
\eta
-\lambda\eta_z
\end{equation}
is the ghost number charge of the twisted theory, equal to the
ghost number charge of the initial N=2 theory plus the $U(1)$
charge. This corresponds to eq.\ (\ref{spina}).
Define
\begin{equation}
S=\oint(\gamma
b_{z}-c^z\beta_{zz}).
\end{equation}
Then, eq.\ (\ref{8.16}) implies
\begin{equation}
{\cal Q}_v=[{\cal Q}_v,\{{\cal Q}_s,S\}],\quad\quad
\Theta=[{\cal Q}_v, S],
\end{equation}
while the ghost charge can be expressed as
\begin{equation}
{\cal Q}^\prime_{gh}=\oint J_z-\{{\cal Q}_s,S\}.
\end{equation}
$S$ is a nilpotent operator that acts trivially on the Liouville
sector, while in the ghost sector it gives
\begin{equation}
\matrix{Sb_{zz}^\prime=-\beta_{zz},&
S\beta_{zz}=0,&
S c^z=0,&
S\gamma^z=-c^z,\cr
S b_{z}=0,&
S \beta_{z}=b_z,&
S c^\prime=-\gamma,&
S \gamma=0.}
\end{equation}
These rules should be compared with the last two lines of (\ref{ation}).
In some sense,
the action of $S$ is dual to the action of $\delta_s$ in the ghost sector.
We have already discussed the last two lines of (\ref{ation})
and the difference between the roles of $b$ and $\beta$,
$c$ and $\gamma$, in the two cases. The action of the operator $S$,
compared to that of $\delta_s$, inverts the roles of the two
$b$-$c$-$\beta$-$\gamma$ systems.
Clearly, the existence of the operator $S$
is strictly related to the presence of the
graviphoton and thus to the constraint on moduli space.
Let us give some of the transformations corresponding to the change
of basis due to $U$
\begin{equation}
\begin{array}{cc}
U\pi U^{-1}=\pi+{1\over 1-\gamma/\alpha}{c^z\lambda_z\over \alpha},&\quad
U\varphi U^{-1}=\varphi+{1\over 1-\gamma/\alpha}{c^z\eta_z\over \alpha},\\
U\chi U^{-1}=\chi+{1\over 1-\gamma/\alpha}{c^z\lambda_z\over \alpha},&\quad
U\xi U^{-1}=\xi+{1\over 1-\gamma/\alpha}{c^z\eta_z\over \alpha}+
\ln(1-\gamma/\alpha),\\
U\lambda_z U^{-1}={1\over 1-\gamma/\alpha}\lambda_z,&\quad
U\eta U^{-1}=(1-\gamma/\alpha)\eta+f(\varphi,\xi,\gamma,\eta_z,c^z),\\
U\eta_z U^{-1}={1\over 1-\gamma/\alpha}\eta_z,&\quad
U\lambda U^{-1}=(1-\gamma/\alpha)\lambda+f(\pi,\chi,\gamma,
\lambda_z,c^z),\\
Uc^zU^{-1}={1\over 1-\gamma/\alpha}c^z,&\quad
Ub_{zz}^\prime U^{-1}=b_{zz}^\prime-{1\over \alpha} G_{zz},\\
Uc^\prime U^{-1}={1\over 1-\gamma/\alpha}c^\prime+f(\gamma,c^z),&\quad
Ub_zU^{-1}=b_{z}+{1\over \alpha} (c^z\beta_{zz}-\gamma b_z),\\
U\gamma^z U^{-1}=\gamma^z+f(\gamma,c^z,c^\prime),&\quad
U\beta_{zz}U^{-1}=\beta_{zz},\\
U\gamma U^{-1}={1\over 1-\gamma/\alpha}\gamma,&\quad
U\beta_{z}U^{-1}=(1-\gamma/\alpha)^2
\beta_{z}+f({\rm all\, but}\,\beta_z),\\
UT_{zz}^\prime U^{-1}=T_{zz}^\prime,&\quad
UJ_zU^{-1}=J_z-{2\over \alpha}\theta_z,\\
UG_{zz}U^{-1}=G_{zz},&\quad
UG_{z}U^{-1}=G_{z}-{2\over \alpha}\delta_s\theta_{z},
\end{array}
\label{arra}
\end{equation}
where $\theta_z$ is such that
$\Theta=\oint \theta_zdz$.
$T_{zz}^\prime$, $G_{z}-{2\over \alpha}\delta_s\theta_z$,
$G_{zz}$ and $J_z-{2\over \alpha}\theta_z$ form another
representation of the same topological algebra.
The functions $f$ appearing in (\ref{arra}) are complicated
expressions of their arguments, which we do not report here.
The above information is sufficient to prove
that the operator $U$ defines a change of variables in the functional
integral with unit Jacobian determinant.
Notice that $\pi+\chi$ is the field that permits the insertion
of curvature delta-type singularities. It is ${\cal Q}_s$-closed
and $\tilde\pi=U(\pi+\chi)U^{-1}$ is its ${\cal Q}$-closed generalization.
Moreover, since $UT^\prime_{zz}U^{-1}=T^\prime_{zz}$, it is immediate
to prove that $\tilde\pi$ has the same operator product expansion
with $T^\prime_{zz}$ as
$\pi$. In particular, ${\rm e}^{\tilde\pi}$ is a primary field. The limit
$\alpha\rightarrow 0$ of $\tilde\pi$ is the field (\ref{tildepi})
that allowed the curvature insertions in the N=2 theory (also
called $\tilde\pi$).
\section{Geometrical Interpretation}
\label{geometry}
We now discuss the moduli space of the twisted theory
and the gauge-fixing sector that implements
the constraint defining the submanifold ${\cal V}_g\subset {\cal M}_g$.
The number of moduli of the twisted theory is $4g-3$,
the same as that of
the N=2 theory: $3(g-1)$ moduli $ m_i $ corresponding to the metric
and $g$ moduli $\nu_j$
corresponding to the $U(1)$ connection $A$.
The number of supermoduli,
on the other hand, changes by one: it was $4(g-1)$ for the N=2
theory, while it is $4g-3$ for the topological theory: $3(g-1)$ supermoduli
$\hat m_i $
corresponding
to the zero modes of the spin-2 antighost $\beta_{zz}$ and $g$
supermoduli $\hat \nu_j$
corresponding to the zero modes of $\beta_{z}$.
The mismatch of one supermodulus is filled by the presence of
one super Killing vector field, corresponding to the (constant)
zero mode of $\gamma$.
Thus, comparing the N=2 theory with the twisted one, we can say that
the $2(g-1)+2(g-1)$ zero modes of the $\beta_{\pm z}$ fields rearrange into
the $3(g-1)$ zero modes of $\beta_{zz}$ plus the $g$ zero modes of $\beta_z$
plus one zero mode of $\gamma$.
Similarly, the $2(g-1)+2(g-1)$ supermoduli rearrange into
$3(g-1)$ moduli $\hat m$ of the topological ghosts plus
$g$ moduli $\hat \nu$
of the topological antighosts plus one super Killing vector field.
The zero modes of the $\lambda$ and $\eta$ fields rearrange
among themselves.
In particular, after the twist,
the number of bosonic moduli equals the number of fermionic moduli,
as expected for a topological theory.
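In formulas, the counting above reads
\begin{equation}
\underbrace{3(g-1)}_{m_i}+\underbrace{g}_{\nu_j}\;=\;4g-3\;=\;
\underbrace{3(g-1)}_{\hat m_i}+\underbrace{g}_{\hat\nu_j}\;=\;
\underbrace{4(g-1)}_{N=2\ {\rm supermoduli}}+\underbrace{1}_{\rm super\ Killing},
\end{equation}
so that, for instance, at $g=2$ one finds $3+2=5$ bosonic and as many fermionic moduli.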
However, the two kinds of supermoduli $\hat m $ and $\hat \nu$
do not carry the same ghost number
after the twist. Indeed, $\hat m_i $ carry ghost number $1$, while
$\hat \nu_j$ carry ghost number $-1$.
Thus, we can interpret $\hat m_i $ as the topological variation
of $ m_i $, but we cannot interpret $\hat \nu_j$ as the topological
variation of $\nu_j$; rather, $\nu_j$ is the topological
variation of $\hat \nu_j$:
\begin{equation}
\matrix{
s m_i =\hat m_i ,&\quad s\hat m_i =0,&
\quad s\hat \nu_j=\nu_j,&\quad s \nu_j=0.}
\label{smu}
\end{equation}
This is in agreement with the interpretation of $A$ as a Lagrange
multiplier, so that it is only introduced {\sl via} the gauge-fixing algebra:
$ m $ and $\hat m $ belong to ${\cal B}_{gauge-free}$, while
$\nu$ and $\hat \nu$ belong to ${\cal B}_{gauge-fixing}$.
The amplitudes can be written as
\begin{equation}
<\prod_k\sigma_{n_k}>=\int d\Phi
\int_{{\cal M}_g}\prod_{i=1}^{3g-3}d m_i
\int_{{{\bf C}^g / \Lambda}}\prod_{j=1}^gd\nu_j
\int d\hat m d\hat\nu
\,\prod_i{\rm e}^{q_i\tilde\pi(z_i)}\,
{\rm e}^{-S( m ,\hat m ,\nu,\hat\nu)}\prod_k\sigma_{n_k},
\label{ampltw}
\end{equation}
where $\sigma_{n_k}$ are the observables.
In this expression, the insertions that remove the zero modes of
$b_{zz}$, $\beta_{zz}$, $\beta_{z}$, $b_{z}$, $\eta$, $\lambda$, $\lambda_z$
and $\eta_z$
are understood, but attention has to be paid to the fact that a
super Killing vector field, corresponding to the zero mode of $\gamma$,
forbids one fermionic integration.
The ghost number of the supermoduli measure
adds up to $-2g+3$. Nevertheless, due to the presence
of one super Killing vector field,
the selection rule is that the total ghost number
of $\prod_k\sigma_{n_k}$ must be equal to $2(g-1)$ and not
to $2g-3$. This is the mismatch between true dimension and formal
dimension addressed in the introduction.
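As a bookkeeping check, the $3(g-1)$ differentials $d\hat m_i$ carry
ghost number $-1$ each, while the $g$ differentials $d\hat\nu_j$ carry
ghost number $+1$ each, so that
\begin{equation}
-3(g-1)+g=-2g+3,\qquad (2g-3)+1=2(g-1),
\end{equation}
the extra unit in the selection rule being due to the fermionic
integration forbidden by the super Killing vector field.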
To explain why the graviphoton is responsible for the
constraint, let us rewrite the action making the dependence
on the $U(1)$-moduli $\nu_j$ and the corresponding
supermoduli $\hat \nu_j$ explicit:
\begin{eqnarray}
S( m ,\hat m ,\nu,\hat\nu)&=&S( m ,\hat m ,0,0)
+\nu_j\int_{\Sigma_g}\omega^{j}_{\bar z}J_{z}d^2z+
\hat\nu_j\int_{\Sigma_g}\omega^{j}_{\bar z}G_{z}d^2z\nonumber\\&&+
\bar\nu_j\int_{\Sigma_g}\omega^{j}_{z}J_{\bar z}d^2z+
\hat{\bar\nu}_j\int_{\Sigma_g}\omega^{j}_{z}G_{\bar z}d^2z
+\,\nu\hat\nu{\rm -terms}.
\label{actio}
\end{eqnarray}
The terms that are quadratic in $\nu$ and $\hat\nu$ are due
to the fact that the gravitini are initially $U(1)$-charged. They have
not been reported explicitly, since they can be neglected, as we show
in a moment.
The coefficient of $\nu_j$ is the $U(1)$ current $J_z$
folded with the $j$-th antiholomorphic differential $\omega^j_{\bar z}$.
Similarly, the coefficient of ${\hat \nu}_j$ is the supercurrent
$G_z$ folded with the same differential.
We want to perform the $\nu$-$\hat\nu$ integrals explicitly.
This is allowed, since the observables should not depend
on $\nu$ and $\hat\nu$. Indeed, $\nu$ and $\hat\nu$
belong to ${\cal B}_{gauge-fixing}$ and not to ${\cal B}_{gauge-free}$,
while the observables are constructed entirely
from ${\cal B}_{gauge-free}$. Anyway, since $\nu$ and $\hat\nu$
form a closed BRST subsystem, we can consistently
project down to the subset
$\nu=\hat\nu=0$, while retaining the BRST nilpotence.
The $U(1)$ moduli $\nu$ are not integrated over all of ${\bf C}^g$,
which would be convenient, since the integration would then be very easy,
but rather over the unit cell $L={\bf C}^g/({\bf Z}^g+\Omega {\bf Z}^g)$
defined by the Jacobian lattice. To overcome this problem,
we take the semiclassical limit, which is exact in a topological
field theory. We multiply the action $S$ by a constant $\kappa$
that is eventually sent to infinity. $\kappa$ can
be viewed as a gauge-fixing
parameter, rescaling the gauge-fermion: no physical amplitude depends
on it.
Let us define
\begin{equation}
\nu^\prime_j=\kappa\nu_j\quad\quad \hat \nu^\prime_j=\kappa\hat\nu_j.
\end{equation}
We have
\begin{equation}
\int_L\prod_{j=1}^gd\nu_jd\hat\nu_j=
\int_{\kappa L}\prod_{j=1}^gd\nu^\prime_jd\hat\nu^\prime_j,
\label{p1}
\end{equation}
where $\kappa L$ is the rescaled unit cell.
We see that the $\nu$-$\hat\nu$-terms of (\ref{actio}) are
suppressed in the $\kappa\rightarrow\infty$ limit, as claimed.
We can replace $\kappa L$ with ${\bf C}^g$ in this limit.
Finally, the integration over the $U(1)$ moduli and supermoduli
produces the insertions
\begin{equation}
\prod_{j=1}^g\int_{\Sigma_g} \omega^j_{\bar z}G_zd^2z\cdot\delta\left(
\int_{\Sigma_g}\omega^j_{\bar z}J_zd^2z\right).
\label{2.47}
\end{equation}
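Schematically, for a single mode and up to factors of $i$ and
normalizations, writing $X=\int_{\Sigma_g}\omega_{\bar z}J_zd^2z$ and
$Y=\int_{\Sigma_g}\omega_{\bar z}G_zd^2z$ for brevity, these insertions
arise from the elementary integral
\begin{equation}
\int d\nu^\prime\, d\hat\nu^\prime\;
{\rm e}^{-\nu^\prime X-\hat\nu^\prime Y}\;\propto\;Y\,\delta(X),
\end{equation}
the Grassmann integration over $\hat\nu^\prime$ bringing down one factor
of the supercurrent and the bosonic integration over $\nu^\prime$
producing the delta function.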
The delta-function is the origin of the desired constraint on moduli space.
Indeed, the current $J_z$ can be thought of as a (field dependent)
section of ${\cal E}_{hol}$. The requirement of its vanishing
is equivalent to projecting onto the Poincar\'e dual of the top Chern
class $c_g({\cal E}_{hol})$
of ${\cal E}_{hol}$, due to a theorem that one can find for example in
\cite{griffithsharris}. Changing section only changes the representative
in the cohomology class of $c_g({\cal E}_{hol})$.
Indeed, the Poincar\'e dual of the top Chern class of a holomorphic
vector bundle $E\rightarrow M$ is the submanifold of the base
manifold $M$ where a holomorphic section $a\in \Gamma(E,M)$
vanishes. In other words, the dual of $c_{g}({\cal
E}_{hol})$ is the divisor of some section. For a line bundle
$L\rightarrow M$, this is easily seen. Let $h$ be a fiber metric so
that
$||a||^2=a(z)\bar a(\bar z)h(z,\bar z)$ is the norm of the
section $a$. The top Chern class $c_1(L)$ can be written as the
cohomology class of the curvature $R=\bar \partial\Gamma$
of the canonical holomorphic connection $\Gamma=h^{-1}\partial h$,
so that
\begin{equation}
c_1(L)=\bar \partial\partial\ln ||a(z)||^2.
\end{equation}
Patchwise, the metric $h$ can be reduced to the identity, but then
$c_1(L)$ becomes a de Rham current, namely a singular $(1,1)$ form
with delta-function support on the divisor Div[$a$], i.e.\ the
locus of zeroes and poles of $a(z)$. The divisor Div[$a$]
is the Poincar\'e dual of $c_1(L)$.
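For instance, taking patchwise $a(z)=z$ and $h=1$, the standard
distributional identity (normalization factors left implicit) gives
\begin{equation}
\bar\partial\partial\ln|z|^2=\bar\partial\left({dz\over z}\right)
\;\propto\;\delta^2(z)\,d\bar z\wedge dz,
\end{equation}
so that $c_1(L)$ is concentrated precisely on the zero of the section.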
For a holomorphic vector bundle $E\rightarrow M$ of rank $n$, the
same theorem can be understood using the so-called splitting principle.
For the purpose of calculating Chern classes, $E$ can always be
regarded
as the Whitney sum of $n$ line-bundles $L_i$ corresponding,
naively, to the eigendirections of the curvature matrix two form
${\cal R}^{jk}$,
\begin{equation}
E=L_1\oplus\cdots\oplus L_n.
\end{equation}
Then we have
\begin{equation}c_n(E)=\prod_{i=1}^nc_1(L_i)=
\bar\partial\partial\ln ||a_1||^2\wedge\cdots\wedge
\bar \partial\partial\ln ||a_n||^2,
\end{equation}
where $a_i$ are the components of a section $a$ in a suitable basis.
{}From this formula, we see that $c_n(E)$ has delta-function support
on the divisor of $a$.
That is why in our derivation of the topological correlators from the
functional integral, we do not pay particular
attention to the explicit form of $J_z$ and to its dependence on the
other fields. What matters is that it is a conserved holomorphic one
form, namely a section of ${\cal E}_{hol}$. The functional integral
imposes its vanishing, so that the Riemann surfaces
that effectively contribute lie in the homology class of the
Poincar\'e dual of $c_g({\cal E}_{hol})$.
Summarizing, we argue that the topological observables $\sigma_{n_k}$
correspond to the Mumford-Morita classes, as in the
case of topological gravity \cite{verlindesquare},
but that in constrained topological gravity the
correlation functions are intersection forms
on the Poincar\'e dual ${\cal V}_g$ of $c_g({\cal E}_{hol})$ and not on the
whole moduli space ${\cal M}_g$.
It can be convenient to
represent
$c_g({\cal E}_{hol})$ by introducing the natural
fiber metric $h_{jk}={\rm Im}\, \Omega^{jk}=\int_{\Sigma_g}
\omega^j_z\omega^k_{\bar z}d^2z$ on ${\cal E}_{hol}$.
The canonical connection associated
with this metric is then
\begin{equation}
\Gamma=h^{-1}\partial h={1\over \Omega-\bar \Omega}\partial \Omega,
\end{equation}
which leads to a curvature
\begin{equation}
{\cal R}=\bar\partial\Gamma=
{1\over \Omega-\bar \Omega}\bar\partial\bar\Omega
{1\over \Omega-\bar \Omega}\partial\Omega.
\end{equation}
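As a consistency check, the curvature can be obtained directly: writing
$h=(\Omega-\bar\Omega)/2i$ one finds
\begin{equation}
\bar\partial\Gamma=-h^{-1}\bar\partial h\,h^{-1}\partial h=
{1\over \Omega-\bar \Omega}\,\bar\partial\bar\Omega\,
{1\over \Omega-\bar \Omega}\,\partial\Omega,
\end{equation}
since $\bar\partial h=-\bar\partial\bar\Omega/2i$ and the factors of $2i$
cancel, in agreement with the formula above.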
Let $\{\omega^1,\ldots,\omega^g\}$
denote a basis of holomorphic differentials.
Locally, we can expand $J_z$ in this basis
\begin{equation}
J_z=a_j\omega^j_z.
\end{equation}
The field dependent
coefficients $a_j$ are the components of the section $J_z\in\Gamma(
{\cal E}_{hol},{\cal M}_g)$.
The constraint then reads
\begin{equation}
{\rm Im}\,\Omega^{jk}a_k=0,\quad\quad\forall j,
\end{equation}
which, due to the positive definiteness of ${\rm Im}\,\Omega$, is equivalent
to
\begin{equation}
a_j=0,\quad\quad\forall j.
\end{equation}
These are the equations that (locally) identify the
submanifold ${\cal V}_g\subset {\cal M}_g$.
It is also useful to introduce
the vectors $v_j={\partial\over \partial a_j}$ that
provide a local basis for the normal bundle ${\cal N}({\cal V}_g)$ to
${\cal V}_g$. Of course, the vectors $v_j$ commute among themselves:
\begin{equation}
[v_j,v_k]=0.
\end{equation}
In these explicit local coordinates, the top Chern class
$c_g({\cal E}_{hol})$ admits the following representation as a
de Rham current:
\begin{equation}
c_g({\cal E}_{hol})=\delta({\cal V}_g)\tilde\Omega_g,
\label{9.15}
\end{equation}
where
\begin{equation}
\tilde\Omega_g=\prod_{j=1}^g da_j,\quad\quad
\delta({\cal V}_g)=\prod_{j=1}^g\delta (a_j).
\end{equation}
This explicit notation is useful to trace back the correspondence
between the geometrical and field theoretical definition of the
correlators.
To begin with,
a convenient representation of the BRST operator (\ref{smu})
on the space
$\{ m ,\hat m ,\nu,\hat\nu\}$ is given by
\begin{equation}
{\cal Q}_{global}=\hat m_i {\partial\over \partial m_i }+
\nu_j{\partial\over \partial \hat \nu_j}.
\label{operator}
\end{equation}
${\cal Q}_{global}$
is not the total BRST charge, rather it only represents the
BRST charge on the sector of the
global degrees of freedom. The total BRST
charge is the sum of the above operator plus the usual BRST charge
${\cal Q}={\cal Q}_s+{\cal Q}_v$,
that acts only on the local degrees of freedom.
Since the total BRST charge acts trivially inside the physical
correlation functions, we see that the action of ${\cal Q}$
inside correlation
functions is the opposite of the action of ${\cal Q}_{global}$.
This means that
${\cal Q}$ can be identified, apart from an overall immaterial sign,
with the operator (\ref{operator}).
We know that the supermoduli $\hat m_i$ have the geometrical meaning of
the differentials $d m_i $ on the moduli space ${\cal M}_g$
and that
the ghost number corresponds to the form degree. In view of this, we
argue that the $U(1)$ supermoduli
$\hat \nu_j$ have the geometrical meaning of contraction operators
$i_{v_j}$ with respect to the associated vectors $v_j$.
Since the $U(1)$ moduli $\nu_j$ are the BRST variations of $\hat\nu_j$
and the BRST operation should be
identified with the exterior derivative,
it is natural to conjecture that $\nu_j$ correspond to
the Lie derivatives along the vectors $v_j$.
The correspondence between field theory and geometry is summarized
in table \ref{table1}.
We now give arguments in support of this interpretation.
For instance, since ${\cal Q}\sim d$,
${\rm Im}\,\Omega^{jk}\, a_k\sim \int\omega_{\bar z}^kJ_zd^2z$ and
$[{\cal Q},J_z]=-G_z$, then
the insertions $\int \omega_{\bar z}^jG_zd^2z$ correspond to
$d({\rm Im}\,\Omega^{jk}a_k)$, so that
\begin{equation}
\prod_{j=1}^g\int_{\Sigma_g} \omega^j_{\bar z}G_zd^2z\cdot\delta\left(
\int_{\Sigma_g}\omega^j_{\bar z}J_zd^2z\right)\sim
\tilde\Omega_g\delta({\cal V}_g)=c_g({\cal E}_{hol}).
\end{equation}
If $\alpha_k$ denote the Mumford-Morita classes corresponding
to the observables ${\cal O}_k$, the amplitudes are
\begin{equation}
<{\cal O}_1\cdots {\cal O}_n>=
\int_{{\cal M}_g}\delta({\cal V}_g)\tilde\Omega_g\wedge \alpha_1
\wedge\cdots\wedge \alpha_n=\int_{{\cal V}_g}\alpha_1
\wedge\cdots\wedge \alpha_n.
\end{equation}
{}From the geometrical point of view, it is immediate to
show that the action of (\ref{operator}) on a correlation
function is precisely the exterior derivative, as already advocated.
Indeed, we can write the $d$-form $\omega_d$ corresponding to a
physical amplitude (not necessarily a top form, if we freeze, for the moment,
the integration over the global degrees of freedom) as
\begin{equation}
\omega_d=i_{v_1}\cdots i_{v_g}\Omega_{d+g}=
\left(\prod_{j=1}^g\hat\nu_j\right)\hat m_{i_1}\cdots\hat m_{i_d}
\Omega_{d+g}^{i_1\cdots i_d},
\end{equation}
where $\Omega_{d+g}$ is a suitable $d+g$-form on ${\cal M}_g$
(equal to $\tilde\Omega_g\wedge\omega_d$).
Now, using the representation (\ref{operator}) of the operator
${\cal Q}$, we find
\begin{eqnarray}
\{{\cal Q},\omega_d\}&=&(-1)^g\left(\prod_{j=1}^g\hat\nu_j\right)
\hat m_i \hat m_{i_1}\cdots\hat m_{i_d}
{\partial\Omega_{d+g}^{i_1\cdots i_d}\over\partial m_i }\nonumber\\&&+
\sum_{k=1}^g(-1)^{k+1}\nu_k\left(\prod_{j\neq k}\hat\nu_j\right)
\hat m_{i_1}\cdots\hat m_{i_d}
\Omega_{d+g}^{i_1\cdots i_d}.
\end{eqnarray}
Using the correspondence given in table \ref{table1}, we have
\begin{equation}
\{{\cal Q},\omega_d\}=(-1)^gi_{v_1}\cdots i_{v_g}d\Omega_{d+g}
+\sum_{k=1}^g(-1)^{k+1}{\cal L}_{v_k}
i_{v_1}\cdots i_{v_{k-1}}i_{v_{k+1}}\cdots i_{v_g}\Omega_{d+g}=
d\omega_d.
\end{equation}
The second piece of (\ref{operator}) replaces a
contraction with the vector
$v_j$ with the Lie derivative with respect to the same vector.
Finally, we give a more intuitive description of the
submanifold ${{\cal V}_g}$ \footnotemark\footnotetext{
We would like to thank C.\ Reina for essential private
discussions on this point.}.
${{\cal V}_g}$ is a representative of a homology cycle, so it can be
convenient to make a special choice of this representative,
for example, a ${{\cal V}_g}$ lying on the boundary of the moduli space.
That means that we are considering degenerate Riemann surfaces.
Take $g$ independent and non-intersecting homology
cycles on the Riemann surface, the $A$-cycles,
and pinch them, producing $g$ nodes. Then
separate the two branches of each node: the result is a sphere $S_{2g}$
with $2g$ pairwise identified marked points. We conjecture
that ${{\cal V}_g}$ is representable as the space of such spheres.
The dimensions turn out to match: indeed, $2g$ complex
parameters are the positions of the marked points, but, due to
$SL(2,{\bf C})$ invariance on the sphere, three points can be fixed
to $0$, $1$ and $\infty$, as usual. Thus, the dimension of $S_{2g}$
equals $2g-3$, which is the correct result. The $g$ holomorphic
differentials of $\Sigma_g$ become differentials of the third
kind on $S_{2g}$, with opposite residues on the pairwise
identified points.
\section{Outlook and Open Questions}
\label{concl}
In the present paper we have shown that there exists a new class of
topological field theories whose correlation functions can be interpreted
as intersection numbers of cohomology classes in a constrained moduli space.
The constrained moduli space is, by itself, a homology class of cycles in
an ordinary, unconstrained, moduli space. The specific example that
we have considered is a formulation of 2D topological gravity that is
obtained through the A-twist of N=2 Liouville theory.
Usually, in a topological field theory the space of field
configurations is projected onto a finite dimensional subspace made of
instantons. In two dimensional gravity, however,
the space of configurations is
the moduli space ${\cal M}_g$ of Riemann surfaces of genus $g$ and
ordinary topological gravity deals with the full ${\cal M}_g$.
Constrained topological gravity,
instead, deals with a proper submanifold ${\cal V}_g\subset{\cal M}_g$,
so that also in this case we have a nontrivial concept of instanton
configurations. The ``instantons'' are the solutions to the moduli
space constraint. Our formulation of topological gravity bears the
same relation to 2D gravity as a generic topological field theory
does to its non-topological version, namely the former is a ``proper''
projection of the latter.
Our result raises several questions that are so far unanswered.
In Witten's topological gravity, where the
correlators are the intersection numbers of Mumford-Morita classes in
the ordinary moduli space ${\cal M}_{g,s}$,
the generating function satisfies
the integrable KdV hierarchy \cite{lez}. This result was shown
by Kontsevich \cite{Kontsevich}
from algebraic geometry, through a systematic
triangulation of moduli space leading to an integral \`a la matrix model.
It was also justified, in field theoretical terms,
by the work of Verlinde and Verlinde \cite{verlindesquare}
and Dijkgraaf, Verlinde and Verlinde \cite{deigraf}.
It is clear that the first question
raised by our paper is: {\it which integrable hierarchy is
satisfied by the correlators defined in eq.(\ref{intro_0})?}
The answer to this question is left open by our work and it is not yet clear
whether it can be more easily obtained from field-theoretical or geometrical
considerations: both ways are equally good, since we have established
a correspondence between the topological definition (\ref{intro_0}) and
the field-theoretical one (\ref{ampltw}).
The next open question concerns matter coupling.
One should investigate the effects of the
moduli space constraint when topological gravity is coupled to topological
matter. This involves the study of the topological twist of matter coupled
N=2 Liouville theory. In the present paper we have extended to curved
superspace the construction of \cite{billofre} where N=2 matter coupled to
gauge theories was analysed. The next step of the program is
to write down the most general N=2, D=2 theory that
contains the graviton multiplet, the gauge multiplets and the chiral and
twisted chiral multiplets \cite{kounnas},
interacting through a generalized K\"ahler metric,
a superpotential and a dilaton coupling to the two dimensional curvature. Then,
by investigating the A-twist of such a theory, one obtains the matter coupling
of the present {\it constrained topological gravity} to the topological
$\sigma$-model.
If we are interested in coupling the
topological Landau-Ginzburg model, we should instead perform
the B-twist. Indeed, another question
raised by our paper that should be addressed in the near future is
{\it whether the B-twist of the N=2 Liouville theory produces a similar or
different topological gravity}.
In this paper we have shown that the
cohomology of ${\cal Q}_s$ is equivalent to the cohomology
of the full BRST operator ${\cal Q}_{BRST}$, just as it happens
for the Verlinde and Verlinde theory. This property can be exploited
fruitfully by coupling the (B-twisted)
topological gravity to Landau-Ginzburg matter.
Finally, other open questions are related to conformal
field theory. The splitting into a minimal plus a maximal model
of the N=2 superconformal theory associated
with the gauge-fixed Liouville model deserves attention. It might be
the way to understand better the relation with the Polyakov formulation
in terms of a level $k$ $SL(2,R)$ Kac--Moody algebra \cite{poly}.
\vspace{24pt}
\begin{center}
{\bf Acknowledgements}
\end{center}
\vspace{12pt}
We are very grateful to our friends B.\ Dubrovin, C.\ Reina, C.\ Becchi,
C.\ Imbimbo, L.\ Bonora, F.\ Gliozzi, G.\ Falqui, M.\ Martellini
and A.\ Zaffaroni
for enlightening and essential discussions.
\vspace{24pt}
The X-shaped radio galaxy (XRG) population has been the subject of several recent works that have sought to understand the formation mechanism for these non-classical, symmetric, double-lobed radio sources with peculiar off-axis extended radio structures. The two main contending formation scenarios differ quite starkly: the rapid axis flip and slow axis precession mechanisms require the AGN beam axis to have undergone a rapid or slow rotation by a large angle (e.g.\ due to accretion disk instabilities; \cite{M2002}, \cite{D2002}, \cite{G2011}), whereas the backflow-origin scenario requires a backflow either to have been deflected by the thermal halo surrounding the host elliptical (\cite{L1984}; \cite{W1995}) or to be escaping from a high-pressure region along the steepest pressure gradient, along the minor axis \citep{C2002}. Besides these, there are at least two other models that seek to explain the X-structures of
radio galaxies involving either twin radio-loud AGN with axes oriented at large angles with respect to each other \citep{L2007} or jet-shell interactions \citep{GK2012}. The models are not without drawbacks (e.g.\ see \cite{S2009} and \cite{GK2012}). While it is possible that the proposed mechanisms may all give rise to non-classical radio structures in different instances it might also be that one mechanism is more commonly responsible for off-axis emission than others.
While theoretical modeling studies of the proposed mechanisms are a way forward, progress requires detailed radio-optical-X-ray observations of sufficient quality to allow the much needed
characterization of the population against which the models could be tested. A few studies in this direction (\cite{C2002, S2009, HK2010, M2011}) have been carried out, which reported the relatively high ellipticities of the XRG host ellipticals, host minor axis preference of the wings, host major axis proximity of the main lobes, prevalence of X-ray coronae surrounding XRG hosts
and the relatively young ages of the hosts. In addition, Saripalli \& Subrahmanyan have examined the observational data in the context of the formation of a wider class of radio structures and revealed a connection between XRGs and the parent class of sources with lobe distortions, suggesting the possibility of a more widespread or common phenomenon at work
in creating peculiar radio galaxy morphologies.
We present herein our attempt at resolving this issue using archival data on the sample of XRG candidates compiled by \cite{C2007}, which forms a useful resource for taking the characterization studies further. In this paper we have imaged existing (as yet unanalyzed) archival VLA data of a subsample of his 100 XRG candidates. Here we present new maps of 52 radio galaxies, all of those for which archival L-band A-array and/or C-band B-array data exist. Each of the sources is discussed briefly. Several have been followed up in the optical and have redshifts available \citep{L2010}. We also provide new optical identifications for several of these sources.
In Section~\ref{s:results} we present our images of 52 sources, with overlays on the corresponding optical fields. Section~\ref{s:discussion} discusses the classification of the sources by the location of their deviant off-axis lobes. The goal is to identify those sources that we believe are {\em candidates} for ``true X-shaped morphology.'' {\em We define a ``true X-shaped morphology'' as one where the deviant off-axis emission is not traced to either of the main lobes of the radio galaxy and instead is seen as an independent transverse feature centered on the host.} The sources in our sample that meet this definition are listed in Section~\ref{s:nature}; particularly good examples are J1043+3131 and J1327$-$0203. Our results are summarized in Section~\ref{s:summary}.
In \cite{PaperII} we discuss the implications of our results for the gravitational wave background.
\section{OBSERVATIONS AND DATA ANALYSIS}
The NRAO Data Archive was searched for historical VLA data on the 100 sources in Cheung's sample. We used all existing L band observations in the A-array and C band observations in the B-array. Thus the resulting images had a resolution of about one arcsecond. The typical observation was a snapshot of a few minutes' duration. The data were calibrated in AIPS using standard techniques and self-calibrated in DIFMAP, with final images made using the AIPS task IMAGR. In a few cases no flux calibrator was available, so the flux scale was bootstrapped from observations of the phase calibration sources at nearby times.
\section{RESULTS} \label{s:results}
For each source we provide contour images at each band for which historical VLA data were available, and overlays of the radio structures on optical images from the Sloan Digital Sky Survey (SDSS), or DSS~II images if there is no SDSS image of the field (four cases). The contours are spaced by factors of $\sqrt{2}$ and run from the lowest contour levels to the peaks given in the individual captions. In the overlays the contours are spaced by factors of two and some may be omitted to make the optical ID clearer. Below we provide notes describing interesting aspects for some of the sources.
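As an illustration of the contour scheme used throughout (levels spaced by factors of $\sqrt{2}$ from the lowest level up to the peak), a minimal sketch in Python follows; the function and the numerical values are illustrative only and are not taken from any particular map or from the actual reduction pipeline.

```python
import math

def contour_levels(lowest, peak, ratio=math.sqrt(2)):
    """Contour levels from `lowest` up to (at most) `peak`, spaced by `ratio`."""
    levels = []
    level = lowest
    while level <= peak:
        levels.append(level)
        level *= ratio
    return levels

# Illustrative values in mJy/beam (not from any particular map):
levels = contour_levels(0.1, 2.36)
```

In the overlays, keeping every other level of such a sequence yields contours spaced by factors of two, as described below.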
In the new analysis we detect compact cores in several sources: 30 new radio cores are detected, while for 15 sources the new higher-resolution imaging has failed to detect cores. All detected cores are seen at the IDs suggested by \cite{C2007}, except in J1202+4915, where a core is detected but no ID is seen, and in J0846+3956, where a likely core is present at a newly identified, faint object. Large-scale radio emission seen in the FIRST images is resolved out in several sources in the new imaging. For 14 sources at least 80 percent of the flux is captured in our higher-resolution maps, whereas for 12 sources more than 50 percent of the extended flux is missed. The observed properties of the sources are given in Table~\ref{tab:Observed}.
Below we provide notes on each of the sources, including a description of the morphology.
\subsection{Notes on Individual Sources}
\noindent J0001$-$0033 (Figure~\ref{fig:J0001-0033}). Our new image shows the core clearly and a narrow, jet-like feature that connects it to the eastern lobe. Both lobes, although edge-brightened and well confined, lack compact hotspots, showing instead recessed emission peaks. The diffuse extension to the north from the eastern lobe seen in the FIRST image is completely resolved out in our higher-resolution map.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f1a}
\includegraphics[width=0.45\columnwidth]{f1b}
\caption[J0001$-$0033 (L)]{J0001$-$0033. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. In all of the figures the contours are spaced by factors of $\sqrt{2}$; in the overlays every other contour is skipped. Here the lowest contour = 0.1~mJy/beam and the peak = 2.36~mJy/beam.\label{fig:J0001-0033}}
\end{figure}
\noindent J0045+0021 (Figure~\ref{fig:J0045+0021}). For this source we have presented images at two frequencies. The source, which appears as a classic XRG in the FIRST image, continues to exhibit the diffuse, orthogonally oriented extensions in both our maps. The C-band map clearly shows the connection between the transverse extension and the eastern lobe, which also shows a sharp inner edge. A compact core midway between the lobes is detected in our C-band map.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f2a}
\includegraphics[width=0.45\columnwidth]{f2b}\\
\includegraphics[width=0.45\columnwidth]{f2c}
\includegraphics[width=0.45\columnwidth]{f2d}\\
\caption[J0045+0021 (L \& C)]{J0045+0021. (top) (left) VLA image at L band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 178~mJy/beam. (bottom) (left) VLA image at C band and (right) VLA image overlaid on red DSS II image. Lowest contour = 0.125~mJy/beam, peak = 60.7~mJy/beam. The red DSS\,II plate does not show any candidate sources. \label{fig:J0045+0021}}
\end{figure}
\noindent J0049+0059 (Figure~\ref{fig:J0049+0059}). Although the FIRST map shows an edge-brightened radio galaxy, our new map has revealed a complex structure for this source. Interestingly, the northern lobe is resolved into a source with an edge-brightened double structure. The lobe is extended along a position angle similar to that of the southern lobe; however, it is offset by nearly 10\arcsec\ to the west. The offset northern lobe is connected to the core by a weak, narrow extension. A narrow, linear feature is also seen extending SE from the core in the opposite direction. The southern lobe has a faint optical ID at the location of the peak at the leading end. We speculate whether the radio source is in fact a collection of three extended, independent radio sources.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f3a}
\includegraphics[width=0.45\columnwidth]{f3b}
\caption[J0049+0059 (L)]{J0049+0059. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 2.51~mJy/beam.\label{fig:J0049+0059}}
\end{figure}
\noindent J0113+0106 (Figure~\ref{fig:J0113+0106}). The inversion symmetric transverse extensions seen in the FIRST image are also seen in the L-band image but are completely resolved out in the C-band image. In both the FIRST and L-band image the transverse extensions show inner edges and connect to respective lobes.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f4a}
\includegraphics[width=0.45\columnwidth]{f4b}\\
\includegraphics[width=0.45\columnwidth]{f4c}
\includegraphics[width=0.45\columnwidth]{f4d}\\
\caption[J0113+0106 (L \& C)]{J0113+0106. (top) (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.7~mJy/beam, peak = 18.0~mJy/beam. (bottom) (left) VLA image at C band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 2.56~mJy/beam.\label{fig:J0113+0106} } \end{figure}
\noindent J0143$-$0119 (Figure~\ref{fig:J0143-0119}). Our map reveals impressive structure in this source. The extended diffuse region orthogonal to the source axis visible in the FIRST image is completely resolved out. A central, bright, knotty jet is seen, with the peak at the extreme west identified with a galaxy, making the source extremely asymmetric. The nature of the source is not clear.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f5a}
\includegraphics[width=0.45\columnwidth]{f5b}\\
\includegraphics[width=0.45\columnwidth]{f5c}
\includegraphics[width=0.45\columnwidth]{f5d}\\
\caption[J0143$-$0119 (L \& C)]{J0143$-$0119. (top) (left) VLA image at L band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 44.4~mJy/beam. (bottom) (left) VLA image at C band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 50.3~mJy/beam. \label{fig:J0143-0119}}
\end{figure}
\noindent J0144$-$0830 (Figure~\ref{fig:J0144-0830}). Our map does not reveal any compact features in this source, which is seen as a centrally bright X-shaped source in the FIRST image.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f6a}
\includegraphics[width=0.45\columnwidth]{f6b}
\caption[J0144$-$0830 (L)]{J0144$-$0830. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.09~mJy/beam, peak =1.40~mJy/beam. \label{fig:J0144-0830}}
\end{figure}
\noindent J0145$-$0159 (Figure~\ref{fig:J0145-0159}). Although the FIRST map shows an edge-brightened radio galaxy, our map shows only a central narrow twin-jet feature, with all the extended emission seen in the FIRST map resolved out. The bright galaxy ID is located at the base of the northern jet. The map reveals a feature offset from the southern jet corresponding to the bright extended emission at the end of the southern lobe seen in the FIRST map. Our map reveals no hotspots in this edge-brightened source.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f7a}
\includegraphics[width=0.45\columnwidth]{f7b}
\caption[J0145$-$0159 (L)]{J0145$-$0159. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 2.56~mJy/beam. \label{fig:J0145-0159}}
\end{figure}
\noindent J0211$-$0920 (Figure~\ref{fig:J0211-0920}). The map reveals the NE galaxy constituting the galaxy pair to be the likely host. Two narrow jets are seen leading to the lobes where transverse emission is seen only for the NW lobe.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f8a}
\includegraphics[width=0.45\columnwidth]{f8b}
\caption[J0211$-$0920 (L)]{J0211$-$0920. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 2.81~mJy/beam. \label{fig:J0211-0920}}
\end{figure}
\noindent J0702+5002 (Figure~\ref{fig:J0702+5002}). The new map has revealed two distinct lobes on either side of a previously unseen central core. The lobes are at least as long as the source extent. The two lobes extend orthogonally in opposite directions. Neither lobe is seen to have a compact hotspot.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f9a}
\includegraphics[width=0.45\columnwidth]{f9b}
\caption[J0702+5002 (L)]{J0702+5002. (left) VLA image at L band, (right) VLA image overlaid on red DSS\,II image. Lowest contour = 0.2~mJy/beam, peak = 7.17~mJy/beam. \label{fig:J0702+5002}}
\end{figure}
\noindent J0813+4347 (Figure~\ref{fig:J0813+4347}). Our higher resolution map reveals a central triple structure (two lobes straddling a core), which is itself embedded in a much larger diffuse emission region.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f10a}
\includegraphics[width=0.45\columnwidth]{f10b}
\caption[J0813+4347 (L)]{J0813+4347. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.125~mJy/beam, peak = 12.0~mJy/beam. \label{fig:J0813+4347}}
\end{figure}
\noindent J0821+2922 (Figure~\ref{fig:J0821+2922}). The map shows a compact core at the location of a faint galaxy, straddled by two hotspots. The more compact NE hotspot is connected to the core by a jet. The SW hotspot is offset to the west from the core-jet axis. The hotspots are accompanied by regions of diffuse emission that extend in opposite directions.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f11a}
\includegraphics[width=0.45\columnwidth]{f11b}
\caption[J0821+2922 (L)]{J0821+2922. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 2.89~mJy/beam. \label{fig:J0821+2922}}
\end{figure}
\noindent J0845+4031 (Figure~\ref{fig:J0845+4031}). The map reveals a very inversion-symmetric structure. There is an inner pair of emission peaks along an axis, after which the respective outer (edge-brightened) lobes bend in opposite directions. Both outer lobes are associated with trailing faint emission, again extended in opposite directions. We regard this source as a prime example of an AGN with rotating jets.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f12a}
\includegraphics[width=0.45\columnwidth]{f12b}
\caption[J0845+4031 (L)]{J0845+4031. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.15~mJy/beam, peak = 20.8~mJy/beam. \label{fig:J0845+4031}}
\end{figure}
\noindent J0846+3956 (Figure~\ref{fig:J0846+3956}). The new map hints at a core at the location of a faint galaxy at the center. The transverse extensions are clearly seen to be associated with the individual lobes.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f13a}
\includegraphics[width=0.45\columnwidth]{f13b}
\caption[J0846+3956 (L)]{J0846+3956. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 21.4~mJy/beam. \label{fig:J0846+3956}}
\end{figure}
\noindent J0859$-$0433 (Figure~\ref{fig:J0859-0433}). Our higher resolution map fails to detect a core in this source. The two hotspots are well mapped and the transverse emission to the north is clearly seen to connect to the W hotspot. The diffuse emission feature to the south is seen to have a bounded appearance with clear edges.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f14a}
\includegraphics[width=0.45\columnwidth]{f14b}
\caption[J0859$-$0433 (L)]{J0859$-$0433. (left) VLA image at L band, (right) VLA image overlaid on red DSS\,II image. Lowest contour = 0.2~mJy/beam, peak = 23.4~mJy/beam. \label{fig:J0859-0433}}
\end{figure}
\noindent J0917+0523 (Figure~\ref{fig:J0917+0523}). This source is very similar to J0859$-$0433. No core is detected in our map. The two hotspots are well mapped, although the extended transverse emission is only partially imaged.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f15a}
\includegraphics[width=0.45\columnwidth]{f15b}
\caption[J0917+0523 (L)]{J0917+0523. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.6~mJy/beam, peak = 117~mJy/beam. \label{fig:J0917+0523}}
\end{figure}
\noindent J0924+4233 (Figure~\ref{fig:J0924+4233}). No distinct core is seen in the map. The bright ID is flanked to the east by a core-like feature and to the west by a short, terminated jet. The structure suggests restarted AGN activity. The diffuse lobe emission to the south is traced clearly and seen to be linked with the W lobe. The northern diffuse extension is mostly resolved out.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f16a}
\includegraphics[width=0.45\columnwidth]{f16b}
\caption[J0924+4233 (L)]{J0924+4233. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.15~mJy/beam, peak = 5.40~mJy/beam. \label{fig:J0924+4233}}
\end{figure}
\noindent J0941$-$0143 (Figure~\ref{fig:J0941-0143}). We detect a weak core at the location of a galaxy ID. The source structure is revealed to be inversion-symmetric about the ID. The prominent SE extension to the northern lobe is only partially imaged.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f17a}
\includegraphics[width=0.45\columnwidth]{f17b}
\caption[J0941$-$0143 (L)]{J0941$-$0143. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.60~mJy/beam, peak = 73.1~mJy/beam. \label{fig:J0941-0143}}
\end{figure}
\noindent J1005+1154 (Figure~\ref{fig:J1005+1154}). The map detects compact emission at the location of an optical object. It makes the source highly asymmetric in extent. The northern lobe reaches most of the way to the core along the radio axis before appearing to change direction to the west well ahead of the core. The offset emission shows a well-bounded inner edge.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f18a}
\includegraphics[width=0.45\columnwidth]{f18b}
\caption[J1005+1154 (L)]{J1005+1154. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.09~mJy/beam, peak = 3.77~mJy/beam. \label{fig:J1005+1154}}
\end{figure}
\noindent J1008+0030 (Figure~\ref{fig:J1008+0030}). The halo-like diffuse emission in which a central radio source is embedded (as seen in the FIRST map) is completely resolved out. Instead the map reveals a bright core from which a short bright jet is seen extending toward the NE hotspot.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f19a}
\includegraphics[width=0.45\columnwidth]{f19b}
\caption[J1008+0030 (L)]{J1008+0030. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 63.5~mJy/beam. \label{fig:J1008+0030}}
\end{figure}
\noindent J1015+5944 (Figure~\ref{fig:J1015+5944}). The source is revealed to be a narrow edge-brightened radio galaxy. The northern diffuse extension is detected as a narrow feature with a sharp inner edge.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f20a}
\includegraphics[width=0.45\columnwidth]{f20b}
\caption[J1015+5944 (L)]{J1015+5944. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 98.1~mJy/beam. \label{fig:J1015+5944}}
\end{figure}
\noindent J1043+3131 (Figure~\ref{fig:J1043+3131}). The source structure is well imaged in the new maps, which show a compact core connected by a pair of narrow straight jets to two compact hotspots at the extreme ends of the source. The core coincides with the centrally located and brightest of three galaxies that lie in a line along the source axis. Our map reveals a well-bounded wide feature oriented orthogonal to the source axis. An interesting, narrow jet-like feature is seen to the west, associated with the core along the axis of this broad diffuse feature.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f21a}
\includegraphics[width=0.45\columnwidth]{f21b}\\
\includegraphics[width=0.45\columnwidth]{f21c}
\includegraphics[width=0.45\columnwidth]{f21d}\\
\caption[J1043+3131 (L \& C)]{J1043+3131. (top) (left) VLA image at L band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.4~mJy/beam, peak = 34.0~mJy/beam. (bottom) (left) VLA image at C band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.08~mJy/beam, peak = 44.6~mJy/beam. \label{fig:J1043+3131}}
\end{figure}
\noindent J1054+5521 (Figure~\ref{fig:J1054+5521}). The map does not reveal a compact core, although we note a weak source between the lobes at the location of a very faint object. The edge-brightened lobes extend along axes that are offset from each other. The off-axis emission connected to the well-bounded eastern lobe shows a sharp inner edge.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f22a}
\includegraphics[width=0.45\columnwidth]{f22b}
\caption[J1054+5521 (L)]{J1054+5521. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.175~mJy/beam, peak = 14.1~mJy/beam. \label{fig:J1054+5521}}
\end{figure}
\noindent J1111+4050 (Figure~\ref{fig:J1111+4050}). The source is clearly seen to be a radio galaxy with bent jets, a likely wide-angle tail source.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f23a}
\includegraphics[width=0.45\columnwidth]{f23b}\\
\includegraphics[width=0.45\columnwidth]{f23c}
\includegraphics[width=0.45\columnwidth]{f23d}\\
\caption[J1111+4050 (L \& C)]{J1111+4050. (top) (left) VLA image at L band and (right) VLA image overlaid on red SDSS image. Lowest contour = 1.0~mJy/beam, peak = 73.3~mJy/beam. (bottom) (left) VLA image at C band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.125~mJy/beam, peak = 11.3~mJy/beam. \label{fig:J1111+4050}}
\end{figure}
\noindent J1135$-$0737 (Figure~\ref{fig:J1135-0737}). A weak hotspot is seen associated with the north lobe, which is accompanied by lobe emission to its east, whereas in the south lobe a narrow extended feature is seen accompanied by lobe emission to the west. No core is detected in the new map. However, an optical object is seen along the axis formed by the northern hotspot and the narrow, elongated feature in the south.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f24a}
\includegraphics[width=0.45\columnwidth]{f24b}
\caption[J1135$-$0737 (L)]{J1135$-$0737. (left) VLA image at L band, (right) VLA image overlaid on red DSS\,II image. Lowest contour = 0.2~mJy/beam, peak = 1.99~mJy/beam. \label{fig:J1135-0737}}
\end{figure}
\noindent J1202+4915 (Figure~\ref{fig:J1202+4915}). The edge-brightened lobes are both revealed to have extended hotspot structures at their ends. The new map also reveals a compact radio core although no optical ID is visible. The diffuse extensions to the lobes are mostly resolved out especially that associated with the southern lobe.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f25a}
\includegraphics[width=0.45\columnwidth]{f25b}
\caption[J1202+4915 (L)]{J1202+4915. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.15~mJy/beam, peak = 7.84~mJy/beam. \label{fig:J1202+4915}}
\end{figure}
\noindent J1206+3812 (Figure~\ref{fig:J1206+3812}). The maps have revealed the hotspots in finer detail and the bridge emission leading from them along the source axis towards the compact core.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f26a}
\includegraphics[width=0.45\columnwidth]{f26b}\\
\includegraphics[width=0.45\columnwidth]{f26c}
\includegraphics[width=0.45\columnwidth]{f26d}\\
\caption[J1206+3812 (L)]{J1206+3812. (top) (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 47.0~mJy/beam. (bottom) (left) VLA image at C band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 15.6~mJy/beam. \label{fig:J1206+3812}}
\end{figure}
\noindent J1207+3352 (Figure~\ref{fig:J1207+3352}). This source appears to be one where offset emission is seen originating both from the inner ends (of both lobes in the FIRST map) and from the outer end (of the NW lobe in the figure shown). The two recessed inner peaks in the two lobes are aligned with the core, whereas the two outer peaks are separately aligned with the core. The two axes are separated by less than $10^\circ$.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f27a}
\includegraphics[width=0.45\columnwidth]{f27b}\\
\includegraphics[width=0.45\columnwidth]{f27c}
\includegraphics[width=0.45\columnwidth]{f27d}\\
\caption[J1207+3352 (L \& C)]{J1207+3352. (top) (left) VLA image at L band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 25.2~mJy/beam. (bottom) (left) VLA image at C band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 27.5~mJy/beam. \label{fig:J1207+3352}}
\end{figure}
\noindent J1210$-$0341 (Figure~\ref{fig:J1210-0341}). The new map reveals sharply bounded edge-brightened lobes connected to inversion-symmetric transverse emission features. A weak core is seen associated with the central bright galaxy connected to the SE hotspot by a jet.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f28a}
\includegraphics[width=0.45\columnwidth]{f28b}
\caption[J1210$-$0341 (L)]{J1210$-$0341. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 2.95~mJy/beam. \label{fig:J1210-0341}}
\end{figure}
\noindent J1211+4539 (Figure~\ref{fig:J1211+4539}). This source is similar to J1206+3812 (Figure \ref{fig:J1206+3812}). The two lobes have high axial ratios and short orthogonal extensions close to the center. No core is detected in the new map.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f29a}
\includegraphics[width=0.45\columnwidth]{f29b}
\caption[J1211+4539 (L)]{J1211+4539. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 43.6~mJy/beam. \label{fig:J1211+4539}}
\end{figure}
\noindent J1227$-$0742 (Figure~\ref{fig:J1227-0742}). The map misses much of the extended flux seen in the FIRST image. While a hotspot is detected at the SE lobe end, no core is detected, nor is a hotspot seen in the western lobe. The map shows a transverse emission feature associated with the SE lobe.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f30a}
\includegraphics[width=0.45\columnwidth]{f30b}
\caption[J1227$-$0742 (L)]{J1227$-$0742. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.18~mJy/beam, peak = 0.86~mJy/beam. \label{fig:J1227-0742}}
\end{figure}
\noindent J1227+2155 (Figure~\ref{fig:J1227+2155}). A prominent feature revealed in the map is a narrow, extended arc of emission running orthogonal to another narrow, elongated emission feature with prominent emission peaks, a likely radio galaxy. At its center a radio core is detected at the location of the optical object. The physical association of the curved extended feature with the radio galaxy is unclear.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f31a}
\includegraphics[width=0.45\columnwidth]{f31b}
\caption[J1227+2155 (L)]{J1227+2155. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 3.74~mJy/beam. \label{fig:J1227+2155}}
\end{figure}
\noindent J1228+2642 (Figure~\ref{fig:J1228+2642}). Our map reveals distinct hotspots at the extremities of the source. Although a distinct core is not seen, a local emission peak in the center coincides with a bright elliptical galaxy. The lobes are well-bounded but are not distinct, instead forming a continuous bridge between the two hotspots.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f32a}
\includegraphics[width=0.45\columnwidth]{f32b}
\caption[J1228+2642 (L)]{J1228+2642. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.08~mJy/beam, peak = 4.37~mJy/beam. \label{fig:J1228+2642}}
\end{figure}
\noindent J1253+3435 (Figure~\ref{fig:J1253+3435}). A distinct core and clear hotspots are revealed for this source. The NE lobe has a pair of distinct hotspots at its extremity, whereas the SW lobe has an extended emission peak at the lobe end, indicative of a pair of hotspots. The source is skewed, with both hotspot pairs displaced to the east with respect to the core. The lobes show sharp boundaries.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f33a}
\includegraphics[width=0.45\columnwidth]{f33b}
\caption[J1253+3435 (L)]{J1253+3435. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.09~mJy/beam, peak = 15.8~mJy/beam. \label{fig:J1253+3435}}
\end{figure}
\noindent J1309$-$0012 (Figure~\ref{fig:J1309-0012}). As seen in the FIRST image, transverse emission is seen associated only with the western lobe; this has a sharp inner edge that is almost orthogonal to the source axis.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f34a}
\includegraphics[width=0.45\columnwidth]{f34b}
\caption[J1309$-$0012 (C)]{J1309$-$0012. (left) VLA image at C band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 113~mJy/beam. \label{fig:J1309-0012}}
\end{figure}
\noindent J1310+5458 (Figure~\ref{fig:J1310+5458}). A strong core and hotspots are seen in our high-resolution map. The source has prominent lobes with low axial ratio.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f35a}
\includegraphics[width=0.45\columnwidth]{f35b}
\caption[J1310+5458 (L)]{J1310+5458. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 11.4~mJy/beam. \label{fig:J1310+5458}}
\end{figure}
\noindent J1327$-$0203 (Figure~\ref{fig:J1327-0203}). Our new map reveals a central core. The map shows prominent, well-bounded lobes that are devoid of compact hotspots. The transverse, central protrusions are also seen to be well bounded.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f36a}
\includegraphics[width=0.45\columnwidth]{f36b}
\caption[J1327$-$0203 (L)]{J1327$-$0203. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 20.9~mJy/beam. \label{fig:J1327-0203}}
\end{figure}
\noindent J1342+2547 (Figure~\ref{fig:J1342+2547}). The source shows several signs of projection effects: a strong core, asymmetry in the separation of the hotspots from the core, and the presence of a narrow jet on the side of the more distant hotspot. The southern lobe is traced as a distinct, broad feature that is extended at an angle of nearly $120^\circ$ from the source axis.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f37a}
\includegraphics[width=0.45\columnwidth]{f37b}
\caption[J1342+2547 (L)]{J1342+2547. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 72.5~mJy/beam. \label{fig:J1342+2547}}
\end{figure}
\noindent J1345+5233 (Figure~\ref{fig:J1345+5233}). The source is revealed to have two edge-brightened lobes. A weak core may be detected at the location of the central faint object.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f38a}
\includegraphics[width=0.45\columnwidth]{f38b}
\caption[J1345+5233 (L)]{J1345+5233. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.09~mJy/beam, peak = 0.92~mJy/beam. \label{fig:J1345+5233}}
\end{figure}
\noindent J1348+4411 (Figure~\ref{fig:J1348+4411}). The map reveals a non-collinear, edge-brightened source, where the two hotspots lie along axes that make an angle of nearly $30^\circ$ to each other. The presumed jet to the north appears to bend by nearly $40^\circ$ to the east at a location about halfway to the northern hotspot. The lobes steer away from these respective axes with the northern lobe extended away by more than $90^\circ$.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f39a}
\includegraphics[width=0.45\columnwidth]{f39b}
\caption[J1348+4411 (L)]{J1348+4411. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.07~mJy/beam, peak = 5.96~mJy/beam. \label{fig:J1348+4411}}
\end{figure}
\noindent J1406$-$0154 (Figure~\ref{fig:J1406-0154}). Bright hotspots are seen at the extremities of the lobes. No core is detected, although there is a faint object close to the radio axis between the two lobes. Both lobes are well confined, including the two transverse emission regions, which are also separated by a significant gap in emission.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f40a}
\includegraphics[width=0.45\columnwidth]{f40b}
\caption[J1406$-$0154 (L)]{J1406$-$0154. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 57.6~mJy/beam. \label{fig:J1406-0154}}
\end{figure}
\noindent J1406+0657 (Figure~\ref{fig:J1406+0657}). Our new map reveals two bright hotspots at the lobe ends. Lobes are not seen as distinct components and instead a tapering broad swathe of emission, which is centrally located, is seen across the source axis. A weak core may be present at the location of a galaxy close to the source axis.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f41a}
\includegraphics[width=0.45\columnwidth]{f41b}
\caption[J1406+0657 (L)]{J1406+0657. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 71.5~mJy/beam. \label{fig:J1406+0657}}
\end{figure}
\noindent J1408+0225 (Figure~\ref{fig:J1408+0225}). The almost featureless FIRST source is revealed to have a complex structure in our high-resolution map. Several compact sources and emission peaks are seen embedded within a large emission region. A prominent elongated core is seen at the location of the galaxy at the center. The elongation is along the direction to the hotspot at the northeast. Interestingly, the core is located on an axis formed by another pair of hotspots at an angle of $60^\circ$. Higher resolution observations are required to understand the complex structure.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f42a}
\includegraphics[width=0.45\columnwidth]{f42b}
\caption[J1408+0225 (L)]{J1408+0225. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 14.0~mJy/beam. \label{fig:J1408+0225}}
\end{figure}
\noindent J1430+5217 (Figure~\ref{fig:J1430+5217}). Our high-resolution map has resolved the central emission region into a pair of peaks, one of which (to the SE) coincides with a galaxy. The second peak, which is slightly less compact, could be part of a jet to the west. Further to the west along the axis of the twin peaks lies a compact hotspot, which also shows a partially collimated feature along the axis towards the pair of peaks. The bright region at the leading end of the eastern lobe may be showing a complex hotspot. This hotspot region does not lie on the source axis formed by the core, jet, and western hotspot. However, the eastern lobe shows an elongated feature along the axis, offset from the rest of the lobe. Both lobes deflect away nearly orthogonally from the source axis.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f43a}
\includegraphics[width=0.45\columnwidth]{f43b}
\caption[J1430+5217 (L)]{J1430+5217. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 40.2~mJy/beam. \label{fig:J1430+5217}}
\end{figure}
\noindent J1434+5906 (Figure~\ref{fig:J1434+5906}). There are two bright and compact hotspots at the extremities along with a swathe of broad emission orthogonal to the source axis mostly seen on one side. A weak core may be present at the location of the faint object in the center.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f44a}
\includegraphics[width=0.45\columnwidth]{f44b}
\caption[J1434+5906 (L)]{J1434+5906. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.15~mJy/beam, peak = 47.3~mJy/beam. \label{fig:J1434+5906}}
\end{figure}
\noindent J1456+2542 (Figure~\ref{fig:J1456+2542}). The map shows an edge-brightened morphology for the source. Neither of the lobes, however, is found to have a compact hotspot, nor is a core or jet seen, suggesting a relic nature for the source. The extended diffuse emission region is resolved out in the new map.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f45a}
\includegraphics[width=0.45\columnwidth]{f45b}
\caption[J1456+2542 (L)]{J1456+2542. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.11~mJy/beam, peak = 1.16~mJy/beam. \label{fig:J1456+2542}}
\end{figure}
\noindent J1459+2903 (Figure~\ref{fig:J1459+2903}). The core emission is elongated, with jet-like extensions on either side of a bright optical ID. Although the source is edge-brightened, no compact hotspots are seen. Much of the extended emission is resolved out in our map.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f46a}
\includegraphics[width=0.45\columnwidth]{f46b}\\
\includegraphics[width=0.45\columnwidth]{f46c}
\includegraphics[width=0.45\columnwidth]{f46d}\\
\caption[J1459+2903 (L \& C)]{J1459+2903. (top) (left) VLA image at L band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 8.67~mJy/beam. (bottom) (left) VLA image at C band and (right) VLA image overlaid on red SDSS image. Lowest contour = 0.1~mJy/beam, peak = 10.1~mJy/beam. \label{fig:J1459+2903}}
\end{figure}
\noindent J1600+2058 (Figure~\ref{fig:J1600+2058}). Although the source is edge-brightened, the lobes are found to have only relatively weak and recessed emission peaks. Both lobes are narrow with multiple emission peaks. Both off-axis emission regions are sharply bounded, and there is a clear emission gap between the southern lobe and the core.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f47a}
\includegraphics[width=0.45\columnwidth]{f47b}
\caption[J1600+2058 (L)]{J1600+2058. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 6.89~mJy/beam. \label{fig:J1600+2058}}
\end{figure}
\noindent J1606+0000 (Figure~\ref{fig:J1606+0000}). Our new map fails to detect any of the extended emission that is skewed with respect to the main source in the FIRST map (see \citet{HK2010} for a detailed study of this source). Our map reveals a largely well-confined source (also seen previously), except along a direction (through the core) that is nearly orthogonal to the source axis. At these locations there are two protrusions, one on either side, that may form the base of the elongated feature seen in the FIRST map.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f48a}
\includegraphics[width=0.45\columnwidth]{f48b}
\caption[J1606+0000 (L)]{J1606+0000. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 82.0~mJy/beam. \label{fig:J1606+0000}}
\end{figure}
\noindent J1606+4517 (Figure~\ref{fig:J1606+4517}). The edge-brightened lobes are found to have compact hotspots at their extremities. The transverse inner extension to the southern lobe has a sharp inner edge. Much of the extended emission seen in the FIRST map is resolved out.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f49a}
\includegraphics[width=0.45\columnwidth]{f49b}
\caption[J1606+4517 (L)]{J1606+4517. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.3~mJy/beam, peak = 3.21~mJy/beam. \label{fig:J1606+4517}}
\end{figure}
\noindent J1614+2817 (Figure~\ref{fig:J1614+2817}). A narrow channel is seen to run through the length of the source, ending in weak and bounded emission peaks at the leading ends. The source is well-bounded with two transverse extensions with only the northern extension orthogonal and centered on the core.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f50a}
\includegraphics[width=0.45\columnwidth]{f50b}
\caption[J1614+2817 (L)]{J1614+2817. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 8.02~mJy/beam. \label{fig:J1614+2817}}
\end{figure}
\noindent J1625+2705 (Figure~\ref{fig:J1625+2705}). The quite featureless FIRST source is found to have a wealth of structure in our map. Two compact hotspots are seen. The SE hotspot, at the SE end along the source axis, is isolated, appearing almost as a background source. The bright core and the jet pointing to the farther and stronger SE hotspot reflect the effects of projection in this broad-line AGN.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f51a}
\includegraphics[width=0.45\columnwidth]{f51b}
\caption[J1625+2705 (L)]{J1625+2705. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.7~mJy/beam, peak = 28.0~mJy/beam. \label{fig:J1625+2705}}
\end{figure}
\noindent J1656+3952 (Figure~\ref{fig:J1656+3952}). A classic inversion symmetric structure is revealed for this source. A bright core is seen straddled by two edge-brightened lobes to which it is connected by narrow jets. The two lobes extend away from the source axis in opposite, orthogonal directions.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{f52a}
\includegraphics[width=0.45\columnwidth]{f52b}
\caption[J1656+3952 (L)]{J1656+3952. (left) VLA image at L band, (right) VLA image overlaid on red SDSS image. Lowest contour = 0.2~mJy/beam, peak = 8.06~mJy/beam. \label{fig:J1656+3952}}
\end{figure}
\section{DISCUSSION} \label{s:discussion}
Although a persistent structure associated with extended radio galaxies is that of two lobes straddling an elliptical galaxy (often also associated with a radio core), deviations from the radio axis are common. These deviations are most often associated with the diffuse lobes, although jets are also found to show sharp bends. The sample of 52 sources observed with the VLA, which forms the data presented herein, is a subset of the 100 low axial-ratio FIRST radio sources compiled by \citet{C2007} that were chosen to be candidates for XRG morphologies \citep{C2009}, displaying recognized signs of X-shapes. The sample has also been observed in the optical, and spectra and redshifts have been obtained for most sources \citep{L2010}. The redshift range for this subsample is 0.05--0.8. The new VLA observations were aimed at obtaining structural details. Our analysis of follow-up high resolution observations of these sources, selected on the basis of their low axial ratios, provides an excellent opportunity to (1) examine the types of different structural distortions in radio galaxies, (2) examine their occurrence rates and, most importantly, (3) determine the fraction that might be classified as a genuine X-shaped radio source. The observed properties of the sources are given in Table~\ref{tab:Observed} and derived properties in Table~\ref{tab:Derived}.
\subsection{Classifying Radio Lobe Distortions} \label{s:classfiication}
Although several sources appear, in the imaging of the new VLA data, to have well-resolved structures, in a significant number of the new high resolution images the diffuse large-scale radio emission, which is seen in the FIRST images presented by \citet{C2007}, is completely resolved out (e.g. J0143$-$0119). Among the well-imaged sources the new observations reveal structural distortions where lobe deviations appear to occur at `strategic' locations in the radio galaxy, either at the outer lobe ends (with or without hotspots), extending away from the radio axis at large angles, or at the inner lobe ends closer to the host galaxy. In each case the off-axis emission appears connected to, and a continuation of, one of the two individual radio lobes. The deviations in the two lobes are mostly inversion symmetric (about the source center, i.e. the host galaxy or radio core). In a small fraction of sources the off-axis emission in the central part is in the form of a common band of emission running through the host galaxy or radio core. A few radio galaxies have jets seen to be bent into an ``integral'' (gradual ``S'') sign shape.
These off-axis deviations to radio galaxy structures, which otherwise exhibit basic twin-lobe morphologies, represent influences that need to be understood. Of course, effects of projection also enter and need to be considered in understanding the origin of the distortions. In an early study, \citet{L1984} noted and categorized the different kinds of distortions found in the well-studied 3CR radio galaxy sample and attempted to relate them with thermal halo gas associated with host galaxies at the centers as well as with any previous beam-activity episodes experienced by the AGN. However, lobe deviations seen originating from the outer ends of radio galaxies, far from the host galaxy influence, may have different underlying physical mechanisms at work. In the case of radio sources with a single broad swathe of emission at the centers that appear as a central band of emission a mechanism independent of the ongoing activity has also been proposed (e.g. \cite{M2002}).
\subsection{The Nature of Low Axial Ratio Sources} \label{s:nature}
Examination of the higher resolution images in conjunction with low resolution images allows us to gain insights into factors that may be causing deviations from the simple classical twin-lobe structure (thereby rendering a low axial ratio to the sources at low resolution) that is expected in the beam model. Our advantage in this exercise is that we have a fairly large sample of radio galaxies, all of which were selected in a uniform manner from a single survey and all of which have now been imaged at higher resolution with a similar setup (in frequency and array), allowing us to also note the occurrence rate of different types of structures responsible for the low axial ratios. We have chosen three categories for classifying the structures of the 52 radio sources: sources where the non-collinear structures are caused by features originating at the inner ends of lobes, sources where they originate at the outer ends of lobes, and a third category that includes the rest, where neither condition holds.
Among the 52 low-axial-ratio sources, there are 25 sources where the deviating structures originate at the inner ends of lobes and eight where the origin appears to be at the outer ends of lobes (see Table~\ref{tab:Observed}). The remaining 19 have structures that remain unclassified in this respect. In nearly all of the 33 sources with off-axis deviations connected to the individual lobes (whether at the inner ends or outer ends) the distortions are in opposite directions except in one case, J1434+5906 (with lobe extension at the inner end), where the deviant emission is mostly on one side. The sample sources are predominantly edge-brightened FR-II type and only seven source morphologies are either of FR-I type or blends of multiple radio sources or remain unclear.
Among the 19 radio sources that fall in neither category there are some for which the data do not allow classification of their low axial-ratio structures (e.g., J0001$-$0033, J0049+0059, J0143$-$0119, J0145$-$0159, J0813+4347, J1227$-$0742, J1227+2155, \& J1228+2642). In these cases we will need to pursue further, higher-resolution imaging.
Among the 19 sources there are 11 sources where there is a possibility of an independent transverse feature centered on the host (J0144$-$0830, J1008+0030, J1015+5944, J1043+3131, J1327$-$0203, J1345+5233, J1406+0657, J1408+0225, J1606+0000, J1614+2817, \& J1625+2705). In all these sources there is extended transverse emission seen either in the FIRST image (but not imaged in our high resolution maps) or in our maps which cannot be traced to either of the two lobes in the radio galaxy. Given the lack of association of this extended emission with the radio galaxy components (whether individual lobes or hotspots as seen in the two groups discussed above) we consider these eleven sources as potential candidates for ``genuine'' X-shaped radio galaxies, although better imaging would help clarify the nature of the extended emission in these sources further.
\subsection{Characteristics of Sources Identified with Inner-end and Outer-end Lobe Deviations} \label{s:inner outer}
Having classified the different types of deviations that are revealed when low-axial ratio sources are imaged at higher resolution, we attempt to characterize the properties of sources in the two groups, one with sources having inner-end deviations and the other with sources having outer-end deviations (see Table~\ref{tab:Observed} for the two groups of sources listed separately).
For this we have used different measures and associations like the projected physical separation (where possible) between the locations of deflections in each lobe, the fractional extents of the deflections, the presence or absence of a radio core, fractional core flux, whether the sources have FR-I, FR-II or hybrid-type structures and the presence of broad emission lines in the optical spectra (where available).
Among the 25 sources where the deviations occur at the inner ends of lobes we have measured structural parameters (from radio maps) for 19. For the remainder, the exercise was hampered by the absence of a core and host galaxy or by the poor quality of the radio image (see Table~\ref{tab:Observed}). The variety in radio structures leads to the following inferences:
\begin{enumerate}
\item All 25 sources with inner-end deviations (Table~\ref{tab:Observed}) have FR-II morphologies.
\item Within the limits of sensitivity more often than not the transverse deviations on the two sides are unequal in extent.
\item Emission gaps between the location where the transverse deviation occurs and the cores are common. Fourteen out of 19 show distinct inner edges to the deviations which are also separated by recognizable or significant gaps (see Table~\ref{tab:Observed} for the sources that show sharp edges). The physical extents of the gaps are at most 60~kpc from the center of the host galaxy.
\item In sources with clear hotspots on the two sides, we detect no discernible correlation or anti-correlation between the presence of a stronger hotspot on one side and the size of the gap or the extent of the off-axis emission on that side.
\item In five cases (J0211$-$0920, J0702+5002, J0859$-$0433, J0941$-$0143, \& J1054+5521) the transverse deviations extend as much as or more than their respective lobe extents, at least on one side. In three cases (J0702+5002, J0859$-$0433, \& J0941$-$0143) both transverse deviations have fractional extents exceeding unity. It may be noted that our measurements of the wing extents are mostly based on the lower resolution FIRST survey images.
\end{enumerate}
The fact that the inner-end deviations are not collinear and centered on the host, and are instead separated by clear gaps, implies that they may not represent visible lobes created in a previous activity epoch, or even channels left behind in a previous epoch that are now made visible by new lobe plasma that has flowed into them.
We have also measured the position angles of the major axes of the host elliptical galaxies for a few of the sources in this group.
The purpose was to examine whether sources with inner-end deviations adhered to the same tendency of the radio axis being closer to the host major axis as shown by XRGs \citep{C2002, S2009} and by the subset of 3CR radio galaxies with central distortions to the lobes \citep{S2009}. With the prevalence of central lobe distortions in radio galaxy samples and the adherence to the same correlation in radio-optical axes as shown by the much longer winged XRGs, the latter authors had suggested a generic physical mechanism like deflection of backflows by thermal gaseous halos \citep{L1984}, rather than jet axis flips, as the mechanism that may be causing the commonly seen small-extent central distortions as well as the more extreme ones seen in XRGs. For large-angle flips in the jet axis to be the responsible mechanism, axis flips would have to be commonly occurring and the jet axis would need to flip from the minor axis to the major axis (see the discussion of the contending XRG models by \citet{S2009}).
Measurements were possible for sources whose hosts were bright; values have been noted only for those that were clearly non-circular in appearance. For a total of eight sources it was possible to measure the major axis position angle (we used the ELLIPSE task in IRAF). For each of the sources we also measured the position angle of the radio axis. The axes were defined by the line joining the core and the hotspots in each of the lobes; where the core was not seen the axis was defined to be the line between the (likely) host galaxy and hotspots. For this group of galaxies the radio axes are within 20$\degr$ of their respective host major axes in six out of eight sources. This correspondence between host major axis and radio axis is consistent with that found previously for a subsample of 3CR radio galaxies with off-axis and inversion symmetric lobe distortions \citep{S2009} suggesting the possibility of a generic mechanism like backflow deflection underlying commonly seen off-axis lobe distortions as well as possibly also in the more extreme XRGs.
For the smaller sample of eight sources having outer-end deviations, all of which have FR-II morphologies, this exercise was possible only for two sources and in both cases the radio axes are within 30$\degr$ of the host major axes.
Three out of eight outer-end deviation sources (J0845+4031, J1253+3435, \& J1430+5217) show structures in their lobes that may be attributed to a drift or rotation in axis. In J0845+4031 each lobe has corresponding emission peaks and trailing emission in opposing directions that form a clear ``S'' with the inner peaks forming an axis with the core. In J1253+3435 the edge-brightened lobes have twin or extended hotspots at the lobe ends accompanied by sharp-edged oppositely-extending lobe emission, whereas in J1430+5217 there are several emission peaks and collimated extensions on either side indicating axis change besides the nearly transverse twin-hotspots at the lobe ends and oppositely extended trailing lobe emission. In the ``neither'' category of sources no source shows signs of axis rotation. Interestingly only one of the 25 inner-end deviation sources (J1207+3352) shows circumstantial evidence of axis rotation.
Two of the ``inner-end'' deviation sources, J0924+4233 and J1459+2903, show clear signs that the AGN beam activity has in the past ceased and restarted; their structures display an inner-double source embedded within a large pair of outer lobes that are devoid of compact hotspots.
\subsection{Physical Implications} \label{s:implications}
Whether the transverse structures originate at the inner ends or outer ends of lobes can have quite different physical significance.
Transverse structures originating at the inner ends of lobes are also seen in several known XRGs. Given the observational characteristics displayed by XRGs (described in Section~\ref{s:intro}; \cite{S2009}), the axis-flip model would require the axis to mostly flip from the host minor-axis direction to near the host major-axis direction, would require the minor mergers responsible for axis flips or drifts to also displace the galaxy by several tens of kiloparsecs, and would require the relic emission to remain visible when often there is a deficit of relic radio galaxies, the relic lobes to always have edge-darkened (FR-I) morphology, and the active main lobes to mostly have FR-II morphology. In contrast, the prevalence of inner-end distortions in radio galaxies (although less pronounced in lateral extent than the more extreme wings in XRGs), the continuum of properties related to the orientations of the radio and optical axes \citep{S2009} between the two populations of radio galaxies, the presence of X-ray halos with major axis close to the host major axis \citep{K2005, HK2010}, the mostly FR-II morphology of the main lobes, the clear connection seen between the off-axis distortions and individual lobes, the separation between the off-axis features, and the often sharp inner edges to the off-axis emission are all more simply explained via models in which backflowing lobe synchrotron plasma is deflected in the thermal halos associated with the host galaxies \citep{C2002, S2009, HK2011}. In the case of off-axis emission connected to the outer ends of lobes, the wings might represent relic synchrotron plasma deposited in outer lobe regions as the jets drifted in position angle, perhaps in a precessing beam.
Neither of the two physical mechanisms can at present be supported by more than circumstantial evidence, but simulations have been carried out for the former mechanism \citep{C2002, HK2011} as well as for reproducing the inner-lobe deviations via radio-axis precession, where effects of projection and light-travel-time differences play a major role \citep{G2011}. It is nevertheless important to identify such sources that may be used as test-beds for mechanisms such as backflow deflection and radio-axis rotation, which are fundamental to understanding the AGN central engine and the stability of its black hole spin axis.
\section{SUMMARY AND CONCLUSIONS} \label{s:summary}
We have analyzed 1.4 and 5~GHz archival VLA continuum data on a sample of 52 FIRST radio sources selected on the basis of low-axial ratio radio structures. Our primary results are:
\begin{enumerate}
\item The exercise has allowed examination of features that contribute to off-axis emission in radio sources that is not expected to naturally arise in the standard beam model for radio galaxies.
\item Our higher resolution imaging has aided in classifying low axial-ratio radio sources into ones where the off-axis emission is traced to individual radio lobes and ones where it instead appears as a common swathe of emission through the center and across the source axis.
\item A large fraction of the sample (60\%) comprises sources where the off-axis emission is traced to individual lobes.
\item Eleven sources (20\% of our sample) have been identified as potentially genuine X-shaped radio galaxy candidates. Although the parent sample from which the 52-source subsample used here has been drawn was itself selected from those FIRST fields that had ``sufficient dynamic range in the images to be able to see extended low surface brightness wings'' \citep{C2007}, we cannot discount sources with even fainter extended wings that the FIRST survey may have missed.
\end{enumerate}
The implications of these results for the predicted gravitational wave background are discussed in \cite{PaperII}.
\section{ACKNOWLEDGMENTS}
The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
Funding for the SDSS and SDSS-II has been provided by the Alfred P.\ Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.\ S.\ Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is http://www.sdss.org/.
D.\ H.\ R.\ gratefully acknowledges the support of the William R. Kenan, Jr.\ Charitable Trust.
We thank Anand Sahay who assisted in the derivation of host galaxy position angles. The anonymous referee is thanked for helpful comments that led to an improved presentation.
\bigskip
\noindent {\em Facilities:} Historical VLA (data archives, project codes: AB808, AC406, AC4450, AC572, AC818, AD100, AF91, AF918, AG143B, AJ250, AL252, AM67, AM222, AM364, AO80, AO80D, AP326, AR123, VC35).
\clearpage
\label{sect:intro}
The efficiency $\eta$ of a gamma-ray burst (GRB) has aroused great
interest in theoretical studies \citep{Meszaros1993, Xu2004},
because it is firmly linked to the total energy ejected by the central
engine. The energy of the gamma rays is nearly a standard value of
about $1.3\times 10^{51} \rm{erg}$ \citep{Bloom2001}. Thus, if the
efficiency is too low, the total ejected energy would have to be
implausibly large. If it is too high, however, the required large
velocity diversity of the ejected shells is difficult to understand.
Many previous works have calculated the efficiency, but the results
differ considerably. \citet{Kumar1999} concluded that the
efficiency can only be around 1\%, while \citet{Beloborodov2000}
argued that it can reach about 100\%. Considering different models
for the velocity diversity, \citet{Kobayashi2001} and \citet{Guetta2001}
found that the results are highly model-dependent, and the
efficiency varies in the range of 0.1 to 0.9.
The efficiency can also be obtained by model fitting of
individual bursts \citep[e.g.][]{Freedman2001, Panaitescu2002}. These
studies found that $\eta$ varies among bursts, and that some bursts
have a very high value of about 0.9 \citep{Freedman2001}. Statistical
studies of the efficiency were carried out by
\citet{Lloyd-Ronning2004} and \citet{Eichler2005}, who found that it may be
related to quantities such as the peak energy or the total energy
of the gamma-rays.
Recently, the shallow decay phase of early X-ray afterglows
\citep{Nousek2006} has appeared in many bursts detected by Swift. This
phenomenon is generally interpreted as post-burst energy injection.
Such a model enhances the required efficiency enormously \citep{Eichler2005}.
\citet{Granot2006} suggested alternative models (e.g., the two-component
jet model) to avoid this paradox, while by considering the inverse
Compton scattering effect, \citet{Fan2006} found that the efficiency
may not be too large even with the energy injection.
In the dynamics, the essential problem is the diversity of the
velocities (or Lorentz factors) of the sub-shells ejected from the
central engine. It is not natural for the central engine to eject
shells with a very high diversity. Enlightened by the equally energetic
ejecta coming from a differentially rotating pulsar \citep{Dai2006},
we suggest that the shells ejected from the central engine are all
equal, and that the burst itself and the subsequent X-ray
flares \citep{Nousek2006} all originate from collisions between the
ejected shells and a slowly proceeding shell at the front, which may
originally be the envelope of the massive progenitor and be
accelerated by the rapid overtaking shells. This model has been used
to successfully fit the multi-band observations of the burst GRB
050904 \citep{Zou2006}. In this model, the initial Lorentz factor of
the envelope may be unity at the beginning. In this paper, we
consider the dynamical evolution of the envelope and its collisions
with the upcoming ejected shells, and calculate the emission
efficiency, which is taken as the efficiency of the GRB. In this model,
the emitted energy of photons can be much greater than the final
kinetic energy. We give the dynamics in \S\ref{sec:analysis},
calculate the efficiency numerically in \S\ref{sec:numeric}, and
summarize our results in \S\ref{sec:conclusion}.
\section{Analysis} \label{sec:analysis}
We consider one collision between two shells with masses $m_1$ and
$m_2$ and Lorentz factors $\gamma_1$ and $\gamma_2$, respectively. They
merge into a single shell with mass $m$ (where the material may be
hot) and Lorentz factor $\gamma$. We assume that a fraction
$\epsilon$ of the internal energy is converted into photons. This
process obeys the conservation of energy and momentum:
\begin{eqnarray}
\gamma_1 m_1 c^2 + \gamma_2 m_2 c^2 = \gamma m c^2 + E_{\gamma}, \label{eq:energy_con}\\
\gamma_1 m_1 \beta_1 c +\gamma_2 m_2 \beta_2 c = \gamma m \beta c +E_{\gamma} /c, \label{eq:momentum_con}
\end{eqnarray}
where $\beta$ is the velocity in units of $c$ (corresponding to the
Lorentz factor $\gamma$), $E_{\gamma} = \gamma \epsilon E$ is
radiated as photons and $(1-\epsilon)E$ remains in the merged shell,
while $E$ is the additional internal energy produced in this
collision in the comoving frame of the merged shell. The mass of the
merged shell is $m=m_1 +m_2 +(1-\epsilon)E/c^2$. We define
$\epsilon$ as the radiative efficiency of a single collision, which
should be distinguished from the total efficiency of the gamma-ray
burst, $\eta$. For one collision, the efficiency is $\eta =
E_{\gamma}/(\gamma_1 m_1 c^2+\gamma_2 m_2 c^2)$. For $M$ shells with
$N$ collisions, the total efficiency is then
$N$ collisions, the total efficiency is then
\begin{equation}
\eta = \frac{\sum_{i=1}^N E_{\gamma,(i)}}{\sum_{i=1}^{M}\gamma_{(i)} m_{(i)} c^2}.
\label{eq:effi}
\end{equation}
From these equations, one can see that the evolution depends very
sensitively on the collision history. For example, if all the rapid
shells merge first, which produces few photons because of the low
diversity of velocities, and the whole merged shell then overtakes
a slowly proceeding shell, the final efficiency will be very
small, provided that the slow shell has much less mass than the
whole fast shell. A more efficient strategy is for faster shells
to catch up with slower ones one by one. In our model, where all the
nearly equally energetic and massive shells collide with the
envelope of the progenitor \citep[as in][]{Zou2006}, this efficient
strategy is naturally satisfied. We can therefore expect a high
efficiency in this scenario. As all the shells collide with the
sole foregoing one, we have $M=N+1$. However, to obtain the
final efficiency, Eqs. (\ref{eq:energy_con}) and
(\ref{eq:momentum_con}) must be applied to each collision, so numerical
simulations must be performed.
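For the numerical solution it is convenient to first eliminate $E$. Setting $c=1$ and denoting the total energy and momentum by $W = \gamma_1 m_1 + \gamma_2 m_2$ and $P = \gamma_1 m_1 \beta_1 + \gamma_2 m_2 \beta_2$, Eq. (\ref{eq:energy_con}) gives $E = W/\gamma - (m_1+m_2)$, and substituting this into Eq. (\ref{eq:momentum_con}) leaves a single equation for the merged velocity $\beta$,
\[
W \left[ (1-\epsilon)\beta + \epsilon \right] - \epsilon\, \gamma (1-\beta)(m_1+m_2) = P ,
\]
whose left-hand side increases monotonically with $\beta$ and changes sign between $\beta_1$ and $\beta_2$, so the root can be found by bisection. For $\epsilon = 1$ the solution is explicit, $\gamma(1-\beta) = (W-P)/(m_1+m_2)$.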
\section{Calculations} \label{sec:numeric}
We make the following assumptions: $\gamma_1$ and $m_1$ are the
initial Lorentz factor and mass of the slow shell in the front;
$\gamma_2$ and $m_2$ are the Lorentz factor and mass of the shells
ejected from the central engine, which are taken as constant in the
model; $\gamma$ is the Lorentz factor of the accelerating merged shell,
which is initially $\gamma_1$ and afterwards evolves as the
ejected shells merge into it. We suppose that an ejected shell with mass $m_2$
and Lorentz factor $\gamma_2$ collides with the envelope. They merge
into one shell proceeding with Lorentz factor $\gamma$; then
another ejected shell with the same mass $m_2$ and Lorentz factor
$\gamma_2$ collides with the merged shell, which proceeds with a new
Lorentz factor $\gamma$, and so on.
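The iterative calculation described above can be illustrated with a short numerical sketch. The following Python fragment is our own illustrative implementation (not the code used for the figures; the function names are ours): each call to \texttt{merge} solves Eqs. (\ref{eq:energy_con}) and (\ref{eq:momentum_con}) for one collision by bisection after eliminating the internal energy $E$, and the radiated energies are accumulated to give the total efficiency of Eq. (\ref{eq:effi}).

```python
import math

def beta(g):
    """Velocity in units of c for Lorentz factor g >= 1."""
    return math.sqrt(1.0 - 1.0 / (g * g))

def merge(g1, m1, g2, m2, eps):
    """One collision: a fast shell (g2, m2) hits the slow shell (g1, m1).

    Energy-momentum conservation (c = 1), with E_gamma = g*eps*E and
    merged mass m = m1 + m2 + (1 - eps)*E, reduces to one equation for
    the merged velocity b after eliminating E; it is solved by bisection.
    Returns (g, m, E_gamma) of the merged shell.
    """
    W = g1 * m1 + g2 * m2                        # total energy
    P = g1 * m1 * beta(g1) + g2 * m2 * beta(g2)  # total momentum

    def residual(b):
        g = 1.0 / math.sqrt(1.0 - b * b)
        return W * ((1.0 - eps) * b + eps) - eps * g * (1.0 - b) * (m1 + m2) - P

    lo, hi = beta(g1), beta(g2)   # the merged velocity lies in between
    for _ in range(200):          # residual is monotonic in b
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    b = 0.5 * (lo + hi)
    g = 1.0 / math.sqrt(1.0 - b * b)
    E = W / g - (m1 + m2)         # internal energy in the comoving frame
    return g, m1 + m2 + (1.0 - eps) * E, g * eps * E

def burst_efficiency(g1, m1, g2, m2, eps, n_shells):
    """n_shells identical fast shells hit the slow front shell one by one.

    Returns the total efficiency eta of Eq. (3) and the final Lorentz factor.
    """
    g, m, e_rad = g1, m1, 0.0
    for _ in range(n_shells):
        g, m, e_gamma = merge(g, m, g2, m2, eps)
        e_rad += e_gamma
    return e_rad / (g1 * m1 + n_shells * g2 * m2), g
```

With, e.g., $\gamma_1 = 1$, $\gamma_2 = 1000$, $\epsilon = 1$, $m_2 = m_1/10$ and 100 shells, this sketch reproduces the qualitative behavior discussed below; the identity $\gamma m c^2 + \sum E_\gamma = \mathrm{const}$ provides a check on the root finding at each step.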
Fig. \ref{fig:ef_eps-1_g1-1} shows the efficiency varying with the
total mass of the ejected shells.
The efficiency increases with $(\Sigma m_2)/m_1$ at first,
because when $m_1 > \Sigma m_2$, the denominator of Eq. (\ref{eq:effi})
can be regarded as approximately constant, while
the numerator increases with the number of collisions. As
$(\Sigma m_2)/m_1$ increases further, the Lorentz factor of the merged
shell, $\gamma$, approaches the Lorentz factor of the faster shells,
$\gamma_2$, more and more closely; the collisions then produce less and
less emission, and therefore the efficiency decreases.
Another clear tendency is that, for larger
values of $(\Sigma m_2)/m_1$, the collisions are less efficient but
end with a higher final Lorentz factor, while for smaller values of
$(\Sigma m_2)/m_1$, the collisions are more efficient but end with a
lower Lorentz factor.
For the case $\gamma_2=1000$, the efficiency is greater than
0.95, even when the total mass of the ejected shells is 100 times
that of the envelope. At the same time, one should check that the final
Lorentz factor is reasonable. As this final Lorentz factor is the
initial Lorentz factor for the afterglow, it should be in the range
of tens to hundreds. Fig.
\ref{fig:g_eps-1_g1-1} shows the final Lorentz factor corresponding
to the cases shown in Fig. \ref{fig:ef_eps-1_g1-1}. It shows that, for
these values of $\gamma_2$, $(\Sigma m_2)/m_1$ should not be less
than 20, and a value larger than 100 is also reasonable. For larger
$\gamma_2$, the efficiency and the final Lorentz factor both become
larger, but a very large $\gamma_2$ may be difficult for the central
engine to produce.
These figures have an implication for the evolution history
of the merged shell. The merged shell is initially the envelope at
rest. It is accelerated by the accumulation of ejected fast
shells. The efficiency increases while $(\Sigma m_2)$ is less
than several times $m_1$, and then it decreases.
It is possible that the radius of the envelope is initially too small,
and the number density of electrons too large, for the gamma-rays to
escape, because of the high optical depth. Therefore,
the envelope may at first be accelerated by the collisions while few
photons are radiated. As the collision radius increases, the merged
shell becomes optically thin. We can simply treat this optically
thin shell as the initial foregoing shell, with mass $m_1$ and
$\gamma_1>1$. We calculate the efficiency and the final Lorentz
factor for different $\gamma_1$, with $\epsilon$ and $\gamma_2$ held
constant, as shown in Figs. \ref{fig:ef_eps-1_g2-1000} and
\ref{fig:g_eps-1_g2-1000}. Generally, the efficiency is lower and the
final Lorentz factor larger than in the case of
$\gamma_1=1$, which is also plotted in Figs.
\ref{fig:ef_eps-1_g2-1000} and \ref{fig:g_eps-1_g2-1000} as solid
lines. For the case $\gamma_1=300$, the efficiency is too small and
the final Lorentz factor too large, which should be ruled out for
a normal gamma-ray burst. One may select an intermediate set of
parameters for a real burst.
The microscopic radiative efficiency $\epsilon$ may not be exactly 1,
i.e., not all of the internal energy can be transferred into
radiation. But $\epsilon$ should not be too small either; otherwise,
the internal energy will keep accumulating as the collisions go on,
and a very high proportion of this internal energy will eventually
be emitted. The efficiency and the Lorentz factor as
functions of $(\Sigma m_2)/m_1$ for different $\epsilon$ are plotted
in Figs. \ref{fig:ef_g1-1_g2-1000} and \ref{fig:g_g1-1_g2-1000},
with $\gamma_1=1$ and $\gamma_2=1000$. For simplicity, we take
$\epsilon$ as constant. As a part of the internal energy is not
radiated, the efficiency decreases greatly and the final
Lorentz factor increases appreciably as $\epsilon$ decreases.
In Fig. \ref{fig:ef_g1-1_g2-1000}, for smaller $\epsilon$ the
efficiency appears to decrease with increasing $(\Sigma m_2)/m_1$
from the start. In fact, the increasing stage simply ends at smaller
$(\Sigma m_2)/m_1$: a smaller $\epsilon$ means that more of the
produced internal energy is left in the merged shell, which makes
the total mass of the merged shell comparable with $m_1$
earlier (see Eq. \ref{eq:effi}), so that, correspondingly, the
decreasing stage sets in earlier.
In the calculations, we assume that $m_2=m_1/10$. Test
calculations for different $m_2$ show that the results do not depend
on the particular value of $m_2$, provided $m_2 < m_1$. Therefore,
the masses of the ejected shells are not required to be equal for the
results to be valid.
\section{Conclusions}
\label{sec:conclusion}
Considering the conservation of energy and momentum, we have calculated
the efficiency of a gamma-ray burst in different cases, assuming
that the shells ejected from the central engine are equally
energetic and massive and that they all collide onto a slower shell
proceeding in the front. A general conclusion is that, for a large
diversity between the low and high initial Lorentz factors, the
efficiency of bursts is higher, which is consistent with the above
analysis. We have considered in detail the influence of the values
of the initial Lorentz factors and of the microscopic radiative
efficiency. The final Lorentz factor depends sensitively on the
initial Lorentz factor of the slower shell, and the efficiency
depends more sensitively on the value of the Lorentz factor of the
ejected shells, while both are sensitive to the microscopic radiative
efficiency. The scenario provided here is a possible solution for
the very high efficiency obtained in model fittings. Note that,
because such envelopes exist only in collapsars, these calculations
are suitable only for long-duration bursts.

In this scenario, there are four free parameters: the initial
Lorentz factor of the slow shell, $\gamma_1$, and of the fast shells,
$\gamma_2$, the microscopic radiative efficiency $\epsilon$, and the
mass ratio of the fast shells to the slow shell, $(\Sigma m_2)/m_1$.
These allow the final efficiency to take values over a relatively
large range, say (0.1, 0.9). On the other hand, these parameters are
constrained by other aspects of the observations, e.g., the peak
energy, the Lorentz factor deduced from the afterglow observations,
and so on.
From the viewpoint of efficiency, this scenario is the most efficient
way to produce gamma-rays; in other scenarios, the efficiency is
relatively low. For example, if all the faster shells
merge first, which produces no photons because of their equal
Lorentz factors, and the merged shell then collides with the
foregoing slow one, the emission will be very inefficient.
\section*{Acknowledgements}
The authors thank Y. F. Huang, X. F. Wu and L. Shao for helpful
discussions. This work was supported by the Natural Science
Foundation of China (grants 10233010 and 10221001).
\section{Introduction}
In this work we look at Byzantine consensus in asynchronous systems under the \emph{local broadcast} model.
In the \emph{local broadcast} model \cite{Bhandari:2005:RBR:1073814.1073841, Koo:2004:BRN:1011767.1011807}, a message sent by any node is received identically by \emph{all of its neighbors} in the communication network, preventing a faulty node from transmitting conflicting information to different neighbors.
Our recent work \cite{Undirected_PODC2019} has shown that in the \emph{synchronous} setting, network connectivity requirements for Byzantine consensus are lower under the local broadcast model as compared to the classical point-to-point communication model.
Here we show that the same is not true in the asynchronous setting: the network requirements for Byzantine consensus stay the same under the local broadcast model as under the point-to-point communication model.
A classical result \cite{fischer1982impossibility} shows that it is impossible to reach exact consensus even with a single crash failure in an asynchronous system.
However, despite asynchrony, approximate Byzantine consensus among $n$ nodes in the presence of $f$ Byzantine faulty nodes is possible in networks with vertex connectivity at least $2f+1$ and $n \ge 3f + 1$ \cite{Dolev:1986:RAA:5925.5931}.
Motivated by results in the synchronous setting \cite{Undirected_PODC2019}, one might expect a lower connectivity requirement under the local broadcast model.
In this work we show that, in fact, the network conditions do not change from the point-to-point communication model.
\section{System Model and Notation}
We represent the communication network by an undirected graph $G = (V, E)$.
Each node knows the graph $G$.
Each node $u$ is represented by a vertex $u \in V$.
We use the terms \emph{node} and \emph{vertex} interchangeably.
Two nodes $u$ and $v$ are \emph{neighbors} if and only if $uv \in E$ is an edge of $G$.
Each edge $uv$ represents a FIFO link between two nodes $u$ and $v$.
When a message $m$ sent by node $u$ is received by node $v$, node $v$ knows that $m$ was sent by node $u$.
We assume the \emph{local broadcast} model wherein a message sent by a node $u$ is received identically and correctly by each node $v$ such that $uv \in E$ (i.e., by each neighbor of $u$)\footnote{Our results apply even for the stronger model where messages must be received at the same time by all the neighbors.}.
We assume an asynchronous system where the nodes proceed at varying speeds, in the absence of a global clock, and messages sent by a node are received after an unbounded but finite delay\footnote{Our results apply even for the stronger model where messages are received after a known bounded delay as well as (with slight modifications to the proofs) to the case where message delay is unbounded but nodes have a global clock for synchronization.}.
A \emph{Byzantine} faulty node may exhibit arbitrary behavior.
There are $n$ nodes in the system of which at most $f$ nodes may be Byzantine faulty, where $0 < f < n$\footnote{The case where $f = 0$ is trivial and the case when $n = f$ is not of interest.}.
We consider the \emph{$\epsilon$-approximate Byzantine consensus problem} where each of the $n$ nodes starts with a \emph{real valued input}, with known upper and lower bounds $U$ and $L$ such that $L < U$ and $U - L > \epsilon > 0$.
Each node must output a real value satisfying the following conditions.
\begin{enumerate}[label=\arabic*)]
\item \textbf{$\epsilon$-Agreement:}
For any two non-faulty nodes, their output must be within a fixed constant $\epsilon$.
\item \textbf{Validity:}
The output of each non-faulty node must be in the convex hull of the inputs of non-faulty nodes.
\item \textbf{Termination:}
All non-faulty nodes must decide on their output in finite time which can depend on $U$, $L$, and $\epsilon$.
\end{enumerate}
Once a node terminates, it takes no further steps.
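For intuition, algorithms that do achieve these conditions in sufficiently connected networks (e.g. \cite{Dolev:1986:RAA:5925.5931}) typically have each node repeatedly average the received values after discarding extremes. A minimal sketch of one such update rule follows; it is illustrative only and is not part of the proofs below:

```python
def trimmed_mean_round(values, f):
    """One round of a classical approximate-agreement update:
    drop the f lowest and f highest received values, then average
    the remainder.  Requires len(values) > 2 * f, so that the
    surviving values are bracketed by non-faulty inputs."""
    assert len(values) > 2 * f
    s = sorted(values)
    kept = s[f:len(s) - f]
    return sum(kept) / len(kept)

# three non-faulty values plus one Byzantine outlier (f = 1):
# the update stays inside the non-faulty convex hull [0.2, 0.6]
print(trimmed_mean_round([0.2, 0.4, 0.6, 100.0], f=1))
```

Repeating such rounds shrinks the range of non-faulty values, which yields $\epsilon$-agreement; the trimming is what protects validity against Byzantine outliers.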
\section{Impossibility Results}
In this section we show two impossibility results.
\begin{theorem} \label{theorem number of nodes}
If there exists an $\epsilon$-approximate Byzantine consensus algorithm under the local broadcast model on an undirected graph $G$ tolerating at most $f$ Byzantine faulty nodes, then $n \ge 3f+1$.
\end{theorem}
\begin{theorem} \label{theorem connectivity network}
If there exists an $\epsilon$-approximate Byzantine consensus algorithm under the local broadcast model on an undirected graph $G$ tolerating at most $f$ Byzantine faulty nodes, then $G$ is $(2f+1)$-connected.
\end{theorem}
Both proofs follow the state machine based approach \cite{Attiya:2004:DCF:983102, Dolev:1986:RAA:5925.5931, Fischer1986}.
\begin{proof_of}{Theorem \ref{theorem number of nodes}}
We assume that $G$ is a complete graph; if consensus cannot be achieved on a complete graph consisting of $n$ nodes, then it clearly cannot be achieved on a partially connected graph consisting of $n$ nodes.
Suppose for the sake of contradiction that $n \le 3f$ and there exists an algorithm $\mathcal{A}$ that solves $\epsilon$-approximate Byzantine consensus in an asynchronous system under the local broadcast model.
Then there exists a partition $(A, B, C)$ of $V$ such that $\abs{A}, \abs{B}, \abs{C} \le f$.
Since $n > f \ge 1$, we can ensure that both $A$ and $B$ are non-empty.
Algorithm $\mathcal{A}$ outlines a procedure $\mathcal{A}_u$ for each node $u$ that describes $u$'s state transitions.
We first create a network $\mathcal{G}$ to model behavior of nodes in $G$ in two different executions $E_1$ and $E_2$, which we will describe later.
Figure \ref{figure number of nodes} depicts $\mathcal{G}$.
The network $\mathcal{G}$ consists of two copies of each node in $C$, denoted by $C_{\operatorname{crash}}$ and $C_{\operatorname{slow}}$, and a single copy of each of the remaining nodes.
For each node $u$ in $G$, we have the following cases to consider:
\begin{enumerate}[label=\arabic*)]
\item If $u \in A$, then there is a single copy of $u$ in $\mathcal{G}$.
With a slight abuse of terminology, we denote the copy by $u$ as well.
\item If $u \in B$, then there is a single copy of $u$ in $\mathcal{G}$.
With a slight abuse of terminology, we denote the copy by $u$ as well.
\item If $u \in C$, then there are two copies of $u$ in $\mathcal{G}$.
We denote the two copies by $u_{\operatorname{crash}} \in C_{\operatorname{crash}}$ and $u_{\operatorname{slow}} \in C_{\operatorname{slow}}$.
\end{enumerate}
For each edge $uv \in E(G)$, we create edges in $\mathcal{G}$ as follows:
\begin{enumerate}[label=\arabic*)]
\item If $u, v \in A \cup B$, then there is an edge between the corresponding copy of $u$ and $v$ in $\mathcal{G}$.
\item If $u \not \in C, v \in C$, then there is a single edge $u v_{\operatorname{slow}}$ in $\mathcal{G}$.
\item If $u, v \in C$, then there is an edge $u_{\operatorname{crash}} v_{\operatorname{crash}}$ and an edge $u_{\operatorname{slow}} v_{\operatorname{slow}}$ in $\mathcal{G}$.
\end{enumerate}
Note that the edges in $G$ and $\mathcal{G}$ are both undirected.
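The copy-and-edge rules above are mechanical and can be summarised in a short routine (the function and naming conventions here are ours, for illustration only):

```python
def build_proof_network(E, A, B, C):
    """Construct the node and edge lists of the proof network from
    the original edge list E and the partition (A, B, C) of V.
    Nodes in A and B get one copy; nodes in C get a 'crash' and a
    'slow' copy; edges follow the three cases in the text."""
    nodes = list(A) + list(B)
    nodes += [(u, tag) for u in C for tag in ("crash", "slow")]
    edges = []
    for (u, v) in E:
        if u in C and v in C:
            edges.append(((u, "crash"), (v, "crash")))
            edges.append(((u, "slow"), (v, "slow")))
        elif v in C:                       # u outside C
            edges.append((u, (v, "slow")))
        elif u in C:                       # v outside C
            edges.append(((u, "slow"), v))
        else:                              # both in A or B
            edges.append((u, v))
    return nodes, edges

# complete graph on {1, 2, 3} with A = {1}, B = {2}, C = {3}
nodes, edges = build_proof_network(
    E=[(1, 2), (1, 3), (2, 3)], A={1}, B={2}, C={3})
```

By construction, nodes outside $C$ are connected only to the "slow" copies, so each copy of a node $u$ receives messages from at most one copy of each neighbour $v$, which is the key property used below.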
Observe that the structure of $\mathcal{G}$ ensures the following property.
For each edge $uv$ in the original graph $G$, each copy of $u$ receives messages from at most one copy of $v$ in $\mathcal{G}$.
This allows us to create an algorithm for $\mathcal{G}$ corresponding to $\mathcal{A}$ by having each copy $u_i \in \mathcal{G}$ of node $u \in G$ run $\mathcal{A}_u$.
The nodes in $C_{\operatorname{crash}}$ start off in a crashed state and never take any steps.
The nodes in $C_{\operatorname{slow}}$ are ``slow'' and start taking steps after time $\Delta$, where the value of $\Delta$ will be chosen later.
\begin{figure}
\centering
\begin{tikzpicture}
\node[draw, circle, minimum size=2cm, label={left:$L$}] at (0, 0) (A) {$A$};
\node[draw, circle, minimum size=2cm, label={right:$U$}] at (6, 0) (B) {$B$};
\node[draw, circle, minimum size=2cm, label={above:$U$}] at (3, 3) (C_crash) {$C_{\operatorname{crash}}$};
\node[draw, circle, minimum size=2cm, label={below:$U$}, red] at (3, -3) (C_slow) {$C_{\operatorname{slow}}$};
\draw[-] (A) to (B);
\draw[-] (B) to (C_slow);
\draw[-] (C_slow) to (A);
\end{tikzpicture}
\caption{
Network $\mathcal{G}$ to model executions $E_1$ and $E_2$ in proof of Theorem \ref{theorem number of nodes}.
Edges within the sets are not shown while edges between sets are depicted as single edges.
The labels adjacent to the sets are the corresponding inputs in execution $\mathcal{E}$.
}
\label{figure number of nodes}
\end{figure}
Consider an execution $\mathcal{E}$ of the above algorithm on $\mathcal{G}$ as follows.
Each node in $A$ has input $L$ and each node in $B \cup C_{\operatorname{slow}} \cup C_{\operatorname{crash}}$ has input $U$.
Observe that it is not guaranteed that nodes in $\mathcal{G}$ will satisfy any of the conditions of $\epsilon$-approximate Byzantine consensus, including the termination property.
We will show that the algorithm does indeed terminate but the outputs of the nodes do not satisfy the validity condition, which will give us the desired contradiction.
We use $\mathcal{E}$ to describe two executions $E_1$ and $E_2$ of $\mathcal{A}$ on the original graph $G$ as follows.
\begin{enumerate}[label=$E_{\arabic*}$:,labelindent=0pt]
\item
$C$ is the set of faulty nodes which crash immediately at the start of the execution.
Each node in $A$ has input $L$ while all other nodes have input $U$.
Since $\mathcal{A}$ solves $\epsilon$-approximate Byzantine consensus on $G$, nodes in $A \cup B$ reach $\epsilon$-agreement and terminate within some finite time, without receiving any messages from nodes in $C$.
We set $\Delta$ for the delay above for $C_{\operatorname{slow}}$ to be this value.
Since $U - L > \epsilon$, the outputs of (non-faulty) nodes in $A \cup B$ are either not $U$ or not $L$.
WLOG we assume that the outputs are not $U$\footnote{For the other case, we can switch the faulty set in $E_2$ to $B$ and change the input of $C_{\operatorname{slow}}$ to be $L$.}.
Note that the behavior of non-faulty nodes in $A$ and $B$ \emph{for the first $\Delta$ time period} is modeled by the corresponding (copies of) nodes in $\mathcal{G}$, while the behavior of the (crashed) faulty nodes is captured by $C_{\operatorname{crash}}$.
\item
$A$ is the set of Byzantine faulty nodes.
A faulty node broadcasts the same messages as the corresponding node in $\mathcal{G}$ in execution $\mathcal{E}$.
Each node in $A$ has input $L$ while all other nodes have input $U$.
The output of the non-faulty nodes will be described later.
The behavior of nodes (both faulty and non-faulty) in $A$ and $B$ is modeled by the corresponding (copies of) nodes in $\mathcal{G}$, while the behavior of the (non-faulty) nodes in $C$ is captured by $C_{\operatorname{slow}}$.
\end{enumerate}
Due to the behavior of nodes in $A$ and $B$ in $E_1$, each of the corresponding copies in $\mathcal{G}$ decides on a value distinct from $U$ and terminates within time $\Delta$ in execution $\mathcal{E}$.
Therefore, the behavior of nodes in $A$ and $B$ is completely captured by the corresponding copies in $\mathcal{E}$.
It follows that in $E_2$, nodes in $B$ have outputs other than $U$.
However, all non-faulty nodes have input $U$ in $E_2$.
Recall that, by construction, $B$ is non-empty.
This violates validity, a contradiction.
\end{proof_of}\\
\begin{proof_of}{Theorem \ref{theorem connectivity network}}
Suppose for the sake of contradiction that $G$ is not $(2f+1)$-connected and there exists an algorithm $\mathcal{A}$ that solves $\epsilon$-approximate Byzantine consensus in an asynchronous system under the local broadcast model on $G$.
Then there exists a vertex cut $C$ of $G$ of size at most $2f$ with a partition $(A, B, C)$ of $V$ such that $A$ and $B$ (both non-empty) are disconnected in $G - C$ (so there is no edge between a node in $A$ and a node in $B$).
Since $\abs{C} \le 2f$, there exists a partition $(C^1, C^2)$ of $C$ such that $\abs{C^1}, \abs{C^2} \le {f}$.
Algorithm $\mathcal{A}$ outlines a procedure $\mathcal{A}_u$ for each node $u$ that describes $u$'s state transitions.
We first create a network $\mathcal{G}$ to model behavior of nodes in $G$ in three different executions $E_1$, $E_2$, and $E_3$, which we will describe later.
Figure \ref{figure connectivity network} depicts $\mathcal{G}$.
The network $\mathcal{G}$ consists of three copies of each node in $C^1$, two copies of each node in $A$ and $B$, and a single copy of each node in $C^2$.
We denote the three sets of copies of $C^1$ by $C^1_{\operatorname{crash}}$, $C^1_L$, and $C^1_U$.
We denote the two sets of copies of $A$ (resp. $B$) by $A_L$ and $A_U$ (resp. $B_L$ and $B_U$).
For each edge $uv \in E(G)$, we create edges in $\mathcal{G}$ as follows:
\begin{enumerate}[label=\arabic*)]
\item If $u, v \in A$ (resp. $\in B$), then there are two copies of $u$ and $v$, $u_L, v_L \in A_L$ (resp. $\in B_L$) and $u_U, v_U \in A_U$ (resp. $\in B_U$).
There is an edge $u_L v_L$ and an edge $u_U v_U$ in $\mathcal{G}$.
\item If $u, v \in C^1$, then there are three copies $u_L, v_L \in C^1_L$, $u_U, v_U \in C^1_U$, and $u_{\operatorname{crash}}, v_{\operatorname{crash}} \in C^1_{\operatorname{crash}}$ of $u$ and $v$.
There are edges $u_L v_L$, $u_U v_U$, $u_{\operatorname{crash}} v_{\operatorname{crash}}$ in $\mathcal{G}$.
\item If $u, v \in C^2$, then there is an edge $uv$ between the corresponding copies in $\mathcal{G}$.
\item If $u \in C^1, v \in C^2$, then there are three copies $u_L \in C^1_L$, $u_U \in C^1_U$, and $u_{\operatorname{crash}} \in C^1_{\operatorname{crash}}$ of $u$, and a single copy of $v$.
There is an undirected edge $u_U v$ and a directed edge $\overrightarrow{v u_L}$ in $\mathcal{G}$.
\item If $u \in A, v \in C^1$, then there are two copies $u_L \in A_L$ and $u_U \in A_U$ of $u$, and three copies $v_L \in C^1_L$, $v_U \in C^1_U$, and $v_{\operatorname{crash}} \in C^1_{\operatorname{crash}}$ of $v$.
There are two undirected edges $u_L v_L$ and $u_U v_U$ in $\mathcal{G}$.
\item If $u \in B, v \in C^1$, then there are two copies $u_L \in B_L$ and $u_U \in B_U$ of $u$, and three copies $v_L \in C^1_L$, $v_U \in C^1_U$, and $v_{\operatorname{crash}} \in C^1_{\operatorname{crash}}$ of $v$.
There are two undirected edges $u_L v_L$ and $u_U v_U$ in $\mathcal{G}$.
\item If $u \in A, v \in C^2$, then there are two copies $u_L \in A_L$ and $u_U \in A_U$ of $u$, and a single copy of $v$.
There is an undirected edge $u_L v$ and a directed edge $\overrightarrow{v u_U}$ in $\mathcal{G}$.
\item If $u \in B, v \in C^2$, then there are two copies $u_L \in B_L$ and $u_U \in B_U$ of $u$, and a single copy of $v$.
There is an undirected edge $u_U v$ and a directed edge $\overrightarrow{v u_L}$ in $\mathcal{G}$.
\end{enumerate}
$\mathcal{G}$ has some directed edges.
We describe their behavior next.
We denote a directed edge from $u$ to $v$ as $\overrightarrow{u v}$.
All message transmissions in $\mathcal{G}$ are via local broadcast, as follows.
When a node $u$ in $\mathcal{G}$ transmits a message, the following nodes receive this message identically: each node with whom $u$ has an undirected edge and each node to whom there is an edge directed away from $u$.
Note that a directed edge $e = \overrightarrow{u v}$ behaves differently for $u$ and $v$.
All messages sent by $u$ are received by $v$.
No message sent by $v$ is received by $u$.
Observe that with this behavior of directed edges, the structure of $\mathcal{G}$ ensures the following property.
For each edge $uv$ in the original graph $G$, each copy of $u$ receives messages from at most one copy of $v$ in $\mathcal{G}$.
This allows us to create an algorithm for $\mathcal{G}$ corresponding to $\mathcal{A}$ by having each copy $u_i \in \mathcal{G}$ of node $u \in G$ run $\mathcal{A}_u$.
The nodes in $C^1_{\operatorname{crash}}$ start off in a crashed state and never take any steps.
The nodes in $C^1_L$ and $C^1_U$ are ``slow'' and start taking steps after time $\Delta$, where the value of $\Delta$ will be chosen later.
\begin{figure}
\centering
\begin{tikzpicture}
\node[draw, circle, minimum size=2cm, label={above left:$L$}] at (0, 2) (A_0) {$A_L$};
\node[draw, circle, minimum size=2cm, label={above right:$L$}] at (8, 2) (B_0) {$B_L$};
\node[draw, circle, minimum size=2cm, label={below left:$U$}] at (0, -3) (A_1) {$A_U$};
\node[draw, circle, minimum size=2cm, label={below right:$U$}] at (8, -3) (B_1) {$B_U$};
\node[draw, ellipse, minimum height=2cm, label={above:$U$}] at (4, 4) (C^1_c) {$C^1_{\operatorname{crash}}$};
\node[draw, ellipse, minimum height=2cm, label={above:$L$}, red] at (4, 1) (C^1_0) {$C^1_L$};
\node[draw, ellipse, minimum height=2cm, label={below:$U$}, red] at (4, -2) (C^1_1) {$C^1_U$};
\node[draw, ellipse, minimum height=2cm, label={below:$U$}] at (4, -5) (C^2) {$C^2$};
\draw[dashed] (A_0) to node[cross out,draw,solid]{} (A_1);
\draw[dashed] (B_0) to node[cross out,draw,solid]{} (B_1);
\draw[dashed] (C^1_0) to node[cross out,draw,solid]{} (C^1_1);
\draw[-{Latex[width=3mm,length=3mm]}] (C^2) to (B_0);
\draw[-{Latex[width=3mm,length=3mm]}] (C^2) to (A_1);
\draw[-{Latex[width=3mm,length=3mm]},bend right] (C^2) to (C^1_0);
\draw[-] (C^1_0) to (A_0);
\draw[-] (C^1_0) to (B_0);
\draw[-] (C^2) to (A_0);
\draw[-] (C^2) to (B_1);
\draw[-] (C^1_1) to (A_1);
\draw[-] (C^1_1) to (B_1);
\draw[-,bend right] (C^1_1) to (C^2);
\end{tikzpicture}
\caption{
Network $\mathcal{G}$ to model executions $E_1$, $E_2$, and $E_3$ in proof of Theorem \ref{theorem connectivity network}.
Edges within the sets are not shown while edges between sets are depicted as single edges.
The crossed dotted lines emphasize that there are no edges between the corresponding sets.
The labels adjacent to the sets are the corresponding inputs in execution $\mathcal{E}$.
}
\label{figure connectivity network}
\end{figure}
Consider an execution $\mathcal{E}$ of the above algorithm on $\mathcal{G}$ as follows.
Each node in $A_L \cup B_L \cup C^1_L$ has input $L$ and all other nodes have input $U$.
Observe that it is not guaranteed that nodes in $\mathcal{G}$ will satisfy any of the conditions of $\epsilon$-approximate Byzantine consensus, including the termination property.
We will show that the algorithm does indeed terminate but nodes do not reach $\epsilon$-agreement in $\mathcal{G}$, which will be useful in deriving the desired contradiction.
We use $\mathcal{E}$ to describe three executions $E_1$, $E_2$, and $E_3$ of $\mathcal{A}$ on the original graph $G$ as follows.
\begin{enumerate}[label=$E_{\arabic*}$:,labelindent=0pt]
\item
$C^1$ is the set of faulty nodes which crash immediately at the start of the execution.
Each node in $A$ has input $L$ while all other nodes have input $U$.
Since $\mathcal{A}$ solves $\epsilon$-approximate Byzantine consensus on $G$, nodes in $A \cup B \cup C^2$ reach $\epsilon$-agreement and terminate within some finite time, without receiving any messages from nodes in $C^1$.
We set $\Delta$ for the delay above for $C^1_L$ and $C^1_U$ to be this value.
The output of the non-faulty nodes will be described later.
Note that the behavior of non-faulty nodes in $A$, $B$, and $C^2$ \emph{for the first $\Delta$ time period} is modeled by the corresponding (copies of) nodes in $A_L$, $B_U$, and $C^2$ respectively, while the behavior of the (crashed) faulty nodes is captured by $C^1_{\operatorname{crash}}$.
\item
$C^2$ is the set of faulty nodes.
A faulty node broadcasts the same messages as the corresponding node in $\mathcal{G}$ in execution $\mathcal{E}$.
All non-faulty nodes have input $L$.
The behavior of non-faulty nodes in $A$, $B$, $C^1$ is modeled by the corresponding (copies of) nodes in $A_L$, $B_L$, and $C^1_L$ respectively, while the behavior of the faulty nodes is captured by $C^2$.
Since $\mathcal{A}$ solves $\epsilon$-approximate Byzantine consensus on $G$, nodes in $A \cup B \cup C^1$ decide on output $L$.
\item
$C^2$ is the set of faulty nodes.
A faulty node broadcasts the same messages as the corresponding node in $\mathcal{G}$ in execution $\mathcal{E}$.
All non-faulty nodes have input $U$.
The behavior of non-faulty nodes in $A$, $B$, $C^1$ is modeled by the corresponding (copies of) nodes in $A_U$, $B_U$, and $C^1_U$ respectively, while the behavior of the faulty nodes is captured by $C^2$.
Since $\mathcal{A}$ solves $\epsilon$-approximate Byzantine consensus on $G$, nodes in $A \cup B \cup C^1$ decide on output $U$.
\end{enumerate}
Due to the output of nodes in $A$ and $B$ in $E_1$, the nodes in $A_L$ and $B_U$ decide on an output within time $\Delta$ in execution $\mathcal{E}$.
Therefore, the behavior of nodes in $A$ and $B$ in $E_1$ is completely captured by the corresponding nodes in $A_L$ and $B_U$ in $\mathcal{E}$.
Now, due to the output of nodes in $A$ in $E_2$, the nodes in $A_L$ output $L$ in $\mathcal{E}$.
Similarly, due to the output of nodes in $B$ in $E_3$, the nodes in $B_U$ output $U$ in $\mathcal{E}$.
It follows that in $E_1$, nodes in $A$ have output $L$ while nodes in $B$ have output $U$.
Recall that, by construction, both $A$ and $B$ are non-empty.
This violates $\epsilon$-agreement, a contradiction.
\end{proof_of}
\section{Summary}
In \cite{Undirected_PODC2019} we showed that network requirements are lower for Byzantine consensus in synchronous systems under the local broadcast model, as compared with the point-to-point communication model.
One might expect a lower connectivity requirement in the asynchronous setting as well.
In this work, we have presented two impossibility results in Theorems \ref{theorem number of nodes} and \ref{theorem connectivity network} that show that local broadcast does not help improve the network requirements in asynchronous systems.
\nocite{Khan2019ExactBC}
\bibliographystyle{abbrv}
\section{Introduction}
\begin{figure*}[bhpt]
\begin{tabbing}
\includegraphics[viewport=50 40 500 800,angle=270,width=6.6cm,clip]{Fig_1a.png}\hfill
\includegraphics[viewport=50 110 500 800,angle=270,width=6.0cm,clip]{Fig_1b.png}\hfill
\includegraphics[viewport=50 110 500 800,angle=270,width=6.0cm,clip]{Fig_1c.png}\hfill \\
\includegraphics[viewport=50 40 560 800,angle=270,width=6.6cm,clip]{Fig_1d.png}\hfill
\includegraphics[viewport=50 110 560 800,angle=270,width=6.0cm,clip]{Fig_1e.png}\hfill
\includegraphics[viewport=50 110 560 800,angle=270,width=6.0cm,clip]{Fig_1f.png}\hfill
\end{tabbing}
\caption{
Six examples of quasars with weak emission lines from the WLQ sample.
For comparison the quasar composite spectrum from
Vanden Berk et al. (\cite{VandenBerk2001}; smooth red curve)
is over-plotted, shifted to the redshift
of the quasar and normalised to the total flux density between 3900 \AA\ and 9000 \AA\ (observer frame).
At the top of each panel, the SDSS name, the redshift $z$, and the parameter FFT = {\sc first\_fr\_type}
(0 - not detected by FIRST; 1 - core-dominated radio source) from the Shen catalogue are given.
The dotted vertical lines indicate the usually strong emission lines.
}
\label{fig:examples}
\end{figure*}
Broad emission lines (BELs) are a defining characteristic of type 1 active galactic nuclei (AGN).
The weakness or even absence of BELs is the most remarkable feature
of a class of high-luminosity AGNs called weak-line quasars (WLQs).
The first discovered WLQ was the radio-quiet quasar PG 1407+265 at redshift $z = 0.94$
(McDowell et al. \cite{Mcdowell1995}) with undetectably weak H$\mathrm{\beta}$ and UV
BELs although the continuum properties are similar to those of normal radio-quiet quasars.
Fan et al. (\cite{Fan1999}) discovered the first high-$z$ WLQ, SDSS J153259.96+003944.1
($z = 4.62$) and suggested that it is either the most distant known BL Lac object
with very weak radio emission or a new type of unbeamed quasars whose broad emission line
region (BLR) is very weak or absent.
Based on the multi-colour selection of the Sloan Digital Sky Survey (SDSS;
York et al. \cite{York2000}), about one hundred high-$z$ WLQs have been found with Ly$\alpha$-\ion{N}{v}
rest-frame equivalent width $< 15$ \AA\ (Diamond-Stanic et al. \cite{Diamond2009};
Shemmer et al. \cite{Shemmer2010}; Wu et al. \cite{Wu2012}).
Low values of the equivalent widths of the BELs can be the result of abnormally low
line fluxes or of an unusually strong continuum. Relativistic beaming provides an
example for dilution of the line strength by a boosted continuum. However, such an
interpretation of the WLQ phenomenon is widely considered unlikely because many properties of
the WLQs (e.g. radio-loudness, variability, and polarisation) are different from those of
BL Lac objects
(McDowell et al. \cite{Mcdowell1995};
Shemmer et al. \cite{Shemmer2006};
Diamond-Stanic et al. \cite{Diamond2009};
Plotkin et al. \cite{Plotkin2010};
Lane et al. \cite{Lane2011};
Wu et al. \cite{Wu2012}).
WLQs are also different from type 2 quasars where only the broad line
components are missed in the unpolarised spectra, and the
Eddington ratios $\varepsilon = L/L_{\rm Edd}$\footnote{The Eddington luminosity $L_{\rm Edd}$ is the
luminosity at which the inward gravitational force on the accreting material is exactly
balanced by the outward force of the radiation pressure.}
are usually lower
(Tran et al. \cite{Tran2003};
Shi et al. \cite{Shi2010};
Shemmer et al. \cite{Shemmer2010}).
Factors that can mimic WLQ spectra are line absorption or a strong \ion{Fe}{ii} pseudo-continuum
(e.g. Lawrence et al. \cite{Lawrence1988}). Such explanations may work for some objects but do
not explain the WLQ phenomenon in general (McDowell et al. \cite{Mcdowell1995}). Any scenario based
on dust absorption has to be able to explain the weakening of the BELs
without any reddening of the continuum.
Though a number of ideas have been developed, the WLQ phenomenon remains puzzling.
The hypotheses can be roughly grouped into two families based on
either an extraordinary BLR or unusual properties of the central ionising source.
The former includes, in particular, the ideas of
generally abnormal properties of the BEL emitting clouds (Shemmer et al. \cite{Shemmer2010}),
a low covering factor of the BLR (i.e. a low fraction of the central source covered by
BEL clouds, Niko{\l}ajuk et al. \cite{Nikolajuk2012}),
or a relative shortage of high-energy UV/X-ray photons due to a shielding gas with a high covering factor
that prevents the X-ray photons from reaching the BLR
(Lane et al. \cite{Lane2011};
Wu et al. \cite{Wu2012}).
Abnormal properties of the continuum source may include a high
Eddington ratio as in PHL 1811
(e.g. Leighly et al. \cite{Leighly2007};
but see Hryniewicz et al. \cite{Hryniewicz2010};
Shemmer et al. \cite{Shemmer2010}),
a freshly launched wind from the accretion disk (Hryniewicz et al. \cite{Hryniewicz2010}),
an optically dull AGN (Comastri et al. \cite{Comastri2002}; Severgnini et al. \cite{Severgnini2003}),
or a cold accretion disk around a high-mass ($M \ge 3\cdot 10^9 M_\odot$) black hole
(Laor \& Davis \cite{Laor2011}).
In a previous study (Meusinger et al. \cite{Meusinger2012}, hereafter Paper 1), we used
the database of the $10^5$ quasar spectra from the SDSS Seventh Data Release (DR7,
Abazajian et al. \cite{Abazajian2009}) to select the approximately one per cent
of the quasars with the strongest deviations of their spectra from the ordinary quasar spectrum
as represented by the SDSS quasar composite spectrum (Vanden Berk et al. \cite{VandenBerk2001}).
About one fifth of this sample was classified as uncommon because of remarkably
weak BELs. We found that these WLQs are, on average, more luminous, have a steeper composite
spectrum (i.e. lower value of $\alpha_\lambda, F_\lambda \propto \lambda^{\alpha_\lambda}$) at
$\lambda \ga 2000$\AA, and have a high percentage of radio-loud quasars (26\%). In addition, a
study of the variability of the quasars in the SDSS stripe 82 revealed
that WLQs tend to have lower variability amplitudes (Meusinger et al. \cite{Meusinger2011}).
No effort has been made to create complete WLQ samples in these previous studies.
The present paper is aimed at the construction and analysis of a more voluminous and more
thoroughly selected sample of WLQs and
WLQ-like objects by taking again advantage of the unprecedented spectroscopic data from the SDSS DR7.
This study exploits the compilations of quasar properties by Schneider et al. (\cite{Schneider2010})
and Shen et al. (\cite{Shen2011}) that are based on the SDSS DR7 and include data from the
1.4 GHz radio survey Faint Images of the Radio Sky at Twenty-Centimeters (FIRST; Becker et al. \cite{Becker1995}).
We apply essentially the same selection method as in Paper 1. The selection and the construction of the
WLQ sample is described in Sect.\,\ref{sec:selection}.
Section\,\ref{sec:properties} is concerned with the UV composite spectrum, the luminosities,
black hole masses, Eddington ratios, accretion rates, and variability, where
particular attention is paid to the role of selection effects. The wide band spectral energy
distribution (SED) and the radio properties are the subject of Sect.\,\ref{sec:SED}.
The results are discussed in Sect.\,\ref{sec:discussion} and summarised in
Sect.\,\ref{sec:summary}.
\section{Selection of the WLQ sample}\label{sec:selection}
In Paper 1, we have shown that the selection of unusual spectra from the huge database of the
SDSS can be performed efficiently by the combination of the power of the Kohonen self-organising map
(SOM) algorithm and the eyeball inspection of the resulting SOMs in the form of spectra icon maps.
The Kohonen algorithm (Kohonen \cite{Kohonen2001}), an unsupervised learning process
based on an artificial neural network, generates a low-dimensional
(typically two-dimensional) map of complex input data. The SOM algorithm
and the software tool
ASPECT\footnote{http://www.tls-tautenburg.de/fileadmin/research/meus/ASPECT/ ASPECT.html}
used for the computation were described in detail in a separate paper
(in der Au et al. \cite{inderAu2012}).
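For readers unfamiliar with the method, the core of the Kohonen algorithm fits in a few lines. The following is a deliberately minimal sketch; ASPECT's actual implementation, input preprocessing, and parameter schedules differ:

```python
import numpy as np

def som_train(data, grid=(10, 10), iters=1000, lr0=0.5, sigma0=3.0,
              seed=0):
    """Minimal 2-D Kohonen SOM: for each randomly drawn input
    vector, find the best-matching unit (BMU) and pull it and its
    grid neighbours towards the input, with learning rate and
    neighbourhood radius decaying over the iterations."""
    rng = np.random.default_rng(seed)
    h, w = grid
    n_feat = data.shape[1]
    weights = rng.random((h, w, n_feat))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        # best-matching unit: cell whose weight vector is closest to x
        d2 = ((weights - x) ** 2).sum(axis=-1)
        bmu = np.unravel_index(np.argmin(d2), (h, w))
        # Gaussian neighbourhood update around the BMU
        g2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
        theta = np.exp(-g2 / (2.0 * sigma ** 2))
        weights += lr * theta[..., None] * (x - weights)
    return weights
```

After training, similar input spectra map to nearby grid cells, which is what makes the resulting icon maps suitable for visually clustering unusual objects.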
The present study is based on the 36 SOMs from Paper 1 for the nearly $10^5$ objects classified as quasars
with $z = 0.6$ - 4.2 by the spectroscopic pipeline of the SDSS DR7.
Each SOM consists of the quasars within a $z$ interval of the width $\Delta z = 0.1$, sorted (clustered)
according to the relative similarity of their spectra. For each SOM, an icon map was created
where each object is represented by its SDSS spectrum with largely reduced spectral
resolution. Despite the loss of resolution, the icon maps are well suited to quickly localise objects
with special broad spectral features, such as unusually red or reddened continua, broad absorption lines (BALs),
and unusually strong or unusually weak BELs. In Paper 1, we selected $10^3$ unusual
spectra of different types. This selection was not primarily aimed at high completeness, and
particularly the subsample of quasars with weak BELs was supposed to be substantially incomplete.
In the present study, we re-inspected all 36 spectra icon SOMs with the purpose to select solely
spectra with relatively weak BELs. Unlike Paper 1, the new selection includes only
spectra with clearly recognised quasar-typical spectral features that allow us to estimate the redshift; i.e.,
featureless blue spectra were not included here. In the first step, a total of
nearly 2500 candidates were selected as the initial sample of visually selected candidate WLQs,
among them are 2249 quasars listed in the SDSS DR7 quasar catalogue (Schneider et al. \cite{Schneider2010}).
The selection procedure is of course subjective and the selected sample is contaminated
by other types of objects such as quasars with line absorption (BALs or
associated narrow absorption lines) reducing the BEL flux, some early-type stars,
and also by different types of objects with noisy spectra where the signal-to-noise (S/N) ratio is
too low for a certain classification.
To purify the initial sample, the selected objects were inspected individually
in more detail by fitting the de-redshifted and foreground extinction corrected SDSS spectra
to the SDSS quasar composite spectrum from Vanden Berk et al. (\cite{VandenBerk2001}).
The fitting algorithm is aimed at matching both the positions of the typical quasar emission lines
and the shape of the pseudo-continuum. The $z$ values derived in that way are generally in good agreement
with those from the SDSS DR7 quasar catalogue, with only a few exceptions. Objects not identified in
the quasar catalogue, as well as objects with discrepant $z$ values, were simply rejected from the sample.
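The core of this fitting step is a one-parameter grid search in $z$. The following toy version illustrates the idea; it is not the actual pipeline code, and the analytic least-squares amplitude scaling is an assumed simplification:

```python
import numpy as np

def fit_redshift(wave_obs, flux_obs, wave_tmpl, flux_tmpl,
                 z_grid=np.arange(0.6, 4.2, 0.001)):
    """Grid search for the redshift that best matches a quasar
    template (e.g. a composite spectrum) to an observed spectrum."""
    chi2 = np.full(z_grid.size, np.inf)
    for i, z in enumerate(z_grid):
        # shift the template to redshift z and resample onto the data grid
        model = np.interp(wave_obs, wave_tmpl * (1.0 + z), flux_tmpl,
                          left=np.nan, right=np.nan)
        ok = np.isfinite(model) & (model > 0)
        if ok.sum() < 10:          # require a minimum spectral overlap
            continue
        # free amplitude: analytic least-squares scale factor
        a = np.sum(flux_obs[ok] * model[ok]) / np.sum(model[ok] ** 2)
        chi2[i] = np.mean((flux_obs[ok] - a * model[ok]) ** 2)
    return z_grid[np.argmin(chi2)]
```

In practice the match of both the emission-line positions and the pseudo-continuum shape, as described above, makes the minimum well defined.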
The final WLQ sample consists of 365 quasars with counterparts in the SDSS DR7 quasar catalogue.
The mean redshift is $\overline{z} = 1.50\pm0.45$. Figure\,\ref{fig:examples} displays six typical spectra.
\begin{figure}[bhtp]
\includegraphics[viewport=40 20 560 560,angle=270,width=8.0cm,clip]{Fig_2.png}
\caption{Equivalent width $W_{\ion{C}{iv}}$ of the \ion{C}{iv} $\lambda$1549 line versus equivalent
width $W_{\ion{Mg}{ii}}$ of the \ion{Mg}{ii} $\lambda$2798 line for the quasars from the
WLQ sample with $1.50\le z \le 2.22$ (filled red squares) and for 35137 quasars in the same redshift range
from the Shen catalogue (equally spaced logarithmic local point density contours estimated with a
grid size of $\Delta = 2$\AA\ on both axes).
}
\label{fig:EWs}
\end{figure}
Relevant data for the WLQs were taken from Schneider et al. (\cite{Schneider2010})
and Shen et al. (\cite{Shen2011}). From the catalogue of quasar properties from SDSS DR7
(Shen et al. \cite{Shen2011}; hereafter Shen catalogue) we took in particular the equivalent widths (EWs) of the
prominent BELs, the flux of the continuum close to these lines,
the bolometric luminosity $L$, the fiducial virial black hole mass $M$ from scaling relations, the Eddington
ratio, and radio properties.
The line measurements in the Shen catalogue were made in the rest frame after removal of the
Galactic foreground extinction. For each line, a local power law plus an iron template was fitted and
then subtracted from the spectrum. The resulting line spectrum was modelled by various functions.
Measurements of the EWs for both the \ion{Mg}{ii} $\lambda$2798 line and the \ion{C}{iv} $\lambda$1549 line
are available for 136 of the selected WLQs with redshifts $1.50\le z \le 2.22$.
Figure\,\ref{fig:EWs} shows the distribution of
the WLQs on the $W_{\ion{Mg}{ii}}-W_{\ion{C}{iv}}$ plane in comparison with all
SDSS DR7 quasars in the same redshift interval. The centroid of the WLQs at
$(W_{\ion{Mg}{ii}},W_{\ion{C}{iv}}) =$ (16.6$\pm$6.5\AA,14.2$\pm$14.6\AA) is clearly distant
from that of the ordinary quasars. On the other hand, our sample also contains quasars where
obviously only one of the equivalent widths, $W_{\ion{Mg}{ii}}$ or $W_{\ion{C}{iv}}$,
is low while the other is normal and even some quasars with normal values for both lines.
We did not reject these quasars to clean the WLQ sample further because a wider span of properties
allows us to study some trends of other properties with the EW.
\begin{table}[hbpt]
\caption{32 objects rejected from the initial sample because of featureless spectra. The
type classification and the proper motion (pm) data were taken from the catalogues labelled in
columns 3 and 6 and listed in the footnote to the table.
}
\begin{tabular}{lccrrc}
\hline
SDSS J & Type & Ref. & pm \ \ \ \ & $I_{\rm pm}$& Ref.\\
& & &(mas/yr)\\
\hline
003745.52+074423.2 & WD & 1 & 44 & 5.1 & 9\\
021923.41+010413.4 & WD & 1 & 18 & 2.2 & 9\\
024058.79-003934.5 & WD & 1 & 23 & 2.7 & 9\\
073249.49+354651.5 & WD & 1 & 40 & 5.6 & 9\\
084732.74+172819.1 & WD & 2 & 70 & 9.9 & 9\\
084749.21+183016.8 & WD & 1 & 151 & 21.4 & 9\\
085246.87+100523.0 & WD & 1 & 61 & 5.9 & 9\\
093958.63+340152.4 & pm star& 3 & 58 & 8.4 & 9\\
094857.88+123243.0 & WD & 1 & 188 & 23.4 & 9\\
095933.22+144548.9 & WD & 4 & 70 & 9.3 & 9\\
100149.22+144123.8 & WD & 5 & 346 & 90.7 & 5\\
101509.57+351813.8 & WD & 1 & 65 & 8.3 & 9\\
105430.62+221054.8 & BL Lac & 6,7& 4 & 0.7 & 9\\
113245.62+003427.7 & BL Lac & 6,7& 2 & 0.4 & 9\\
120423.80+230913.3 & WD & 1 & 51 & 7.8 & 9\\
121856.69+414800.2 & WD & 1 & 34 & 4.6 & 9\\
122008.29+343121.7 & BL Lac & 6,7& 8 & 1.3 & 9\\
124510.00+570954.3 & BL Lac & 6,7& 5 & 0.9 & 9\\
124818.78+582028.9 & BL Lac & 6,7& 6 & 0.9 & 9\\
125335.04+163020.5 & WD & 1 & 139 & 18.5 & 9\\
130210.74+454424.3 & pm star& 3 & 46 & 6.2 & 9\\
132232.05+373032.9 & WD & 1 & 49 & 7.4 & 9\\
133040.69+565520.1 & BL Lac & 6,7& 6 & 0.6 & 9\\
141904.67+110306.2 & WD & 4 & 19 & 2.6 & 9\\
145427.13+512433.7 & BL Lac & 6,7& 5 & 0.8 & 9\\
152913.56+381217.5 & BL Lac & 6,7& 4 & 0.7 & 9\\
153324.26+341640.3 & BL Lac & 6,7& 4 & 0.8 & 9\\
160410.22+432614.6 & WLQ & 7 & 9 & 1.4 & 9\\
161315.36+511608.3 & WD & 1 & 112 & 6.6 & 9\\
170108.89+395443.0 & BL Lac & 6,7& 5 & 0.7 & 9\\
220911.31-003543.0 & WD & 1 & 6 & 0.7 & 9\\
224303.81+221456.0 & CV & 8 & 7 & 1.0 & 9\\
\hline
\end{tabular}
\vspace{3mm}
{\bf References.}
1 - Kleinman et al. (\cite{Kleinman2013});
2 - McCook \& Sion (\cite{McCook1999});
3 - SDSS DR10 explorer;
4 - Girven et al. (\cite{Girven2011});
5 - Paper 1 (Tab.\,2);
6 - Massaro et al. (\cite{Massaro2009}, catalogue version August 2012);
7 - Plotkin et al. (\cite{Plotkin2010});
8 - Thorstensen \& Skinner (\cite{Thorstensen2012});
9 - R\"oser et al. (\cite{Roeser2010})
\label{tab:featureless}
\end{table}
As mentioned above, the WLQ selection was confined to spectra with clearly recognisable quasar-typical
features. The advantage of reliable redshifts is however at the expense of the completeness of such a sample
at low EWs. In particular, we rejected 32 objects with blue spectra from our initial sample
because the spectra appeared more or less featureless to us. To estimate how strongly our final sample may
be biased against low-EW WLQs, we checked, a posteriori, several catalogues for additional information
on these 32 rejected objects (Tab.\,\ref{tab:featureless}).
Altogether 18 objects were classified as white dwarfs (WDs) of spectral
type DC or DQ, another one as a cataclysmic variable (CV). Ten sources are classified as high-confidence
radio-loud BL Lacertae objects in the catalogue of optically selected BL Lac objects
from the SDSS DR7 (Plotkin et al. \cite{Plotkin2010}) and also in the Roma-BZCAT
multifrequency catalogue of blazars (Massaro et al. \cite{Massaro2009}).
One object, SDSS J160410.22+432614.6, is listed in the Plotkin et al. catalogue of weak-featured
radio-quiet objects and may be a WLQ that is missed in the present sample. For the remaining two objects,
the SDSS DR10 explorer tool gives high proper motions suggesting that they are nearby stars, most likely WDs.
Absolute proper motions are very helpful in distinguishing extragalactic
objects from nearby white dwarfs. Table\,\ref{tab:featureless} lists the proper motions
taken from the PPMXL catalogue (R\"oser et al. \cite{Roeser2010}). In addition, a proper motion index
$I_{\rm pm}$ is given, defined as the pm in units of the pm error.
For the 10 BL Lac objects, the mean pm index $I_{\rm pm} = 0.8$ is consistent with zero proper motion.
In contrast, the sample of the 18 WDs plus one CV has a mean pm of 79 mas/yr and a mean pm index
$I_{\rm pm} = 12.9$. The inspection of the individual sources shows that the pm index does not indicate
a significant proper motion for any of the BL Lac objects, for the probably missed WLQ SDSS J160410.22+432614.6, or for
two objects classified as stellar in the literature (SDSS J220911.31-003543.0 and SDSS J224303.81+221456.0).
A low pm does not necessarily exclude that these objects are nearby stars. On the other hand, however,
we cannot definitively exclude that they are WLQs.
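The pm index used above is simply the total proper motion divided by its propagated error. The minimal sketch below makes this explicit; the combination of the two components and the assumption of uncorrelated component errors are ours, not necessarily identical to how the catalogued values were derived:

```python
import math

def pm_index(pm_ra, pm_dec, err_ra, err_dec):
    """Proper motion index I_pm = pm / sigma_pm.

    pm_ra, pm_dec   : proper motion components in mas/yr
    err_ra, err_dec : their catalogued errors in mas/yr
    Assumes uncorrelated component errors (a simplification).
    """
    pm = math.hypot(pm_ra, pm_dec)
    if pm == 0.0:
        return 0.0
    # propagate the component errors onto the total proper motion
    sigma = math.sqrt((pm_ra * err_ra) ** 2 + (pm_dec * err_dec) ** 2) / pm
    return pm / sigma
```

Values of $I_{\rm pm}$ below about 3 are then consistent with zero motion at the catalogue precision.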
Only three objects from Tab.\,\ref{tab:featureless} are in the Shen catalogue: the probable WLQ
SDSS J160410.22+432614.6 and the two BL Lac objects SDSS J113245.62+003427.7 and
SDSS J170108.89+395443.0. The EWs of the \ion{Mg}{ii} line are
1.0\,\AA, 5.8\,\AA, and 9.4\,\AA. The mean value of 5.4\,\AA\ is close to the 5\,\AA\ EW limit
for the selection of BL Lac objects (e.g. Plotkin et al. \cite{Plotkin2008}; Ghisellini
et al. \cite{Ghisellini2011}). The EWs of the WLQ sample follow a log-normal distribution with the lower
3$\sigma$ deviation from the mean value at 4.6\,\AA. There are only two WLQs (0.5\%) with lower EWs
in our sample. We assume therefore that the minimum \ion{Mg}{ii} EW threshold of our selection is at
about 5\,\AA. Given that this threshold is exceeded by SDSS J170108.89+395443.0 with $W_{\ion{Mg}{ii}} = 9.4$\,\AA,
we conclude that Tab.\,\ref{tab:featureless} may contain up to altogether four wrongly rejected WLQs.
For some purposes it is useful to restrict the quasar sample to the redshift interval $0.7 \le z \le 1.7$
(Meusinger \& Weiss \cite{Meusinger2013}). First, the formal uncertainties of the
catalogued bolometric luminosities and black hole masses are lowest for such redshifts.
As pointed out by Shen et al. (\cite{Shen2011}), the mass uncertainty was
propagated from the measurement uncertainties of the continuum luminosities and the line width, but includes
neither systematic effects nor the statistical uncertainty from the calibration of the
scaling relations. Secondly, in that $z$ interval, the fiducial black hole masses from the Shen catalogue
are uniformly derived from the scaling relation for the \ion{Mg}{ii} line.
Virial masses derived from the \ion{C}{iv} line are considered to be less reliable than those from
the H$\beta$ or the \ion{Mg}{ii} line (e.g. Shen \& Liu \cite{Shen2012}).
Moreover, the estimation
of the accretion rate (Sect.\,\ref{subsec:lum_etc}) requires the knowledge of the continuum
slope $\alpha_\lambda$ between 3000 and 4800\,\AA, which can be measured from the SDSS spectra
only for sufficiently low $z$. The restricted WLQ sample (hereafter: rWLQ sample) with $0.7 \le z \le 1.7$ contains
261 quasars. It is particularly useful when luminosities, black hole masses, Eddington ratios,
or accretion rates are considered.
Because it is difficult to compare the resulting WLQ sample with known WLQ samples defined
by the measured EWs, it is helpful to consider a subsample that follows a
statistically based selection. Following Diamond-Stanic et al. (\cite{Diamond2009}), we assumed a log-normal EW
distribution and defined WLQs as quasars having EWs below a 3$\sigma$ threshold.
For the quasars from the Shen catalogue in the redshift range of the rWLQ sample
we obtain the mean EWs 34.1\,\AA, 38.7\,\AA\
and the WLQ selection thresholds 11\,\AA, 4.8\,\AA\ for the \ion{Mg}{ii} line and the \ion{C}{iv} line,
respectively.
The subsamples from the entire WLQ sample and the rWLQ sample, respectively, with EWs below these
thresholds are hereafter indicated by the suffix EWS.
The WLQ-EWS subsample consists of 46 quasars with a mean redshift $\overline{z} = 1.48\pm0.56$ and mean
EWs $8.8\pm1.9$\,\AA, $2.6\pm1.2$\,\AA.
Assuming that Tab.\,\ref{tab:featureless} contains at most four wrongly rejected WLQs, the completeness of
this sample, compared with other samples based on the SDSS DR7 spectroscopic quasar catalogue, is expected to be
about 90\%. It must be noted, however, that this catalogue itself is biased against quasars with very
weak emission lines. The rWLQ-EWS subsample consists of 33 quasars with $\overline{z} = 1.24\pm0.24$ and
$9.0\pm1.8$\,\AA, $2.5\pm1.3$\,\AA.
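The 3$\sigma$ thresholds used for the EWS selection follow directly from the log-normal assumption: the cut lies at $\exp(\langle \ln W\rangle - 3\,\sigma_{\ln W})$. A minimal sketch, assuming the input is an array of positive measured EWs:

```python
import numpy as np

def wlq_threshold(ew, nsigma=3.0):
    """Lower EW threshold for WLQ selection, assuming a log-normal
    EW distribution (Diamond-Stanic et al. 2009 style):
    threshold = exp(<ln W> - nsigma * std(ln W))."""
    log_ew = np.log(np.asarray(ew, dtype=float))
    return float(np.exp(log_ew.mean() - nsigma * log_ew.std()))
```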
\vspace{0.2cm}
\begin{figure}[htbp]
\begin{centering}
\includegraphics[viewport=95 0 555 790,angle=270,width=9.0cm,clip]{Fig_3a.png}
\includegraphics[viewport=85 0 555 790,angle=270,width=9.0cm,clip]{Fig_3b.png}
\includegraphics[viewport=85 0 580 790,angle=270,width=9.0cm,clip]{Fig_3c.png}
\end{centering}
\caption{Arithmetic median composite spectrum (thick red curve) and standard deviation (green shaded area)
for (a) the WLQ-EWS sample, (b) the entire WLQ sample, and (c) the radio-loud subsample.
For comparison, the SDSS quasar composite
spectrum (thin blue curve) from Vanden Berk et al. (\cite{VandenBerk2001}) is over-plotted,
normalised to match the WLQ composites at the right hand side.}
\label{fig:composite}
\end{figure}
\section{UV/optical properties of the selected quasars}\label{sec:properties}
In this Section, we analyse the mean properties of our WLQs, considering various subsamples.
Particular attention should be paid to the WLQ-EWS and rWLQ-EWS subsamples because they are closest to traditional
WLQ samples. On the other hand, these subsamples are small and it thus makes sense to also consider the
corresponding parent samples for comparison. In the context of the black hole mass, the Eddington ratio, and
especially the accretion rate, the rWLQ-EWS and the rWLQ sample should be preferred (see above).
For the comparison with normal quasars, the $z$ dependence of quasar properties
must be taken into consideration because the $z$ distribution of the rWLQ sample is different from that of
the entire Shen catalogue. Therefore, we created a comparison sample of ordinary quasars with the
same $z$ distribution following the procedure outlined in Paper 1. The comparison sample is about ten times
larger than the rWLQ sample. For the mean EW of the comparison quasars we find
$(W_{\ion{Mg}{ii}}, W_{\ion{C}{iv}}) =$ (42$\pm$22 \AA, 51$\pm$52 \AA) compared with
(17$\pm$7 \AA, 17$\pm$18 \AA) for the rWLQ sample (Tab.\,\ref{tab:mean-prop}).
\subsection{Composite spectra}\label{subsec:composite}
The individual inspection of the SDSS spectra led to the impression that many WLQs show a steeper continuum
than the SDSS quasar composite spectrum (see Fig.\,\ref{fig:examples}).
This tendency was already reported in Paper 1, but it appears even more pronounced in the present WLQ sample.
We computed composite spectra in the same way as in Paper 1. The procedure is essentially based
on the combining technique described by Vanden Berk et al. (\cite{VandenBerk2001}).
Fig.\,\ref{fig:composite} shows the resulting arithmetic median composite spectra for
(a) the WLQ-EWS subsample and (b) the entire WLQ sample.
For comparison, the SDSS quasar composite spectrum from
Vanden Berk et al. (\cite{VandenBerk2001}) is over-plotted, arbitrarily normalised at 3200 \AA. The comparison
clearly illustrates the weaker BELs and the steeper (i.e. less reddened) ultraviolet continuum
for both (a) the WLQ-EWS subsample and (b) the entire WLQ sample.
We also confirm the finding from Paper 1 that the WLQ spectra flatten at short wavelengths,
approximately below 2000 \AA. We estimated the spectral slopes of the individual spectra by
fitting the $F_\lambda \propto \lambda^{\alpha_\lambda}$
power law in pseudo-continuum windows (see Paper 1) and found a mean value of
$\overline{\alpha_\lambda}=-1.69\pm0.36$ for the rWLQs compared to $-1.53\pm0.39$ for the ordinary
quasars from the comparison sample with S/N$>5$ in both cases.
For the 97 quasars with $1.7 < z < 3$ in the full WLQ sample the mean slope is
$\overline{\alpha_\lambda}=-1.52\pm0.35$.
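The slope estimate amounts to a straight-line fit in log--log space restricted to pseudo-continuum windows. The sketch below illustrates this; the window list is purely illustrative and not the actual set used in Paper 1:

```python
import numpy as np

# illustrative pseudo-continuum windows (rest frame, Angstrom);
# the actual windows of Paper 1 may differ
WINDOWS = [(1350, 1365), (1445, 1465), (1700, 1705),
           (2190, 2230), (3020, 3100), (4150, 4250)]

def continuum_slope(wave, flux):
    """Fit F_lambda ~ lambda**alpha_lambda in the continuum windows
    and return the spectral index alpha_lambda."""
    mask = np.zeros_like(wave, dtype=bool)
    for lo, hi in WINDOWS:
        mask |= (wave >= lo) & (wave <= hi)
    mask &= flux > 0
    # straight line in log-log space: ln F = alpha * ln lambda + const
    alpha, _ = np.polyfit(np.log(wave[mask]), np.log(flux[mask]), 1)
    return alpha
```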
A possible explanation of the steeper continuum could be an additional non-thermal component in WLQ spectra.
Therefore, we also computed the composite spectrum of the subsample of radio-loud WLQs. Usually, quasars
are classified as radio-loud based on the
radio-to-optical flux ratio. Kellermann et al. (\cite{Kellermann1989}) defined the radio-loudness parameter
$R = F_{\rm 5GHz}/F_{\rm 4400\AA}$, where $F_{\rm 5GHz}$ and $F_{\rm 4400\AA}$ are the
flux densities at rest-frame 5 GHz and 4400\AA,
and called quasars with $R$ greater than 10 radio-loud. Since then,
$R=10$ is commonly used to distinguish radio-loud and radio-quiet quasars
(e.g.
Francis et al. \cite{Francis1993};
Urry \& Padovani \cite{Urry1995};
Ivezi\'c et al. \cite{Ivezic2002};
McLure \& Jarvis \cite{McLure2004};
Richards et al. \cite{Richards2011}),
although this value is to some degree rather arbitrary
(e.g. Falcke et al. \cite{Falcke1996};
Wang et al. \cite{Wang2006};
Sect.\,\ref{sec:discussion} below).
We took the radio-loudness parameter from the Shen catalogue. Shen et al. (\cite{Shen2011}) estimated
the radio-loudness $R = F_{\rm 6cm}/F_{\rm 2500\AA}$ following Jiang et al. (\cite{Jiang2007}), where
$F_{\rm 6cm}$ is the flux density at rest-frame 6\,cm determined from
the FIRST integrated flux density at 20\,cm assuming a power law $F_\nu \propto \nu^{\alpha_\nu}$ with
$\alpha_\nu = -0.5$. The rest-frame
2500\,\AA\ flux density $F_{\rm 2500\AA}$ is determined from the power-law continuum fit to the SDSS spectrum.
Following Jiang et al. (\cite{Jiang2007}), we use the criterion $R>10$ to classify radio-loud quasars.
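The conversion from the observed 20\,cm flux density to rest-frame 6\,cm (5\,GHz) is a standard power-law K-correction. The sketch below follows the common convention; it is our transcription, not code taken from the cited works:

```python
def radio_loudness(f20cm, f2500, z, alpha_nu=-0.5):
    """Radio-loudness R = F(6 cm, rest) / F(2500 A, rest).

    f20cm : observed FIRST integrated flux density at 20 cm (1.4 GHz)
    f2500 : rest-frame 2500 A flux density from the continuum fit,
            in the same flux-density units
    Assumes a radio power law F_nu ~ nu**alpha_nu; the observed
    1.4 GHz flux samples rest-frame 1.4*(1+z) GHz, from which the
    rest-frame 5 GHz flux density is extrapolated.
    """
    nu_rest, nu_obs = 5.0, 1.4   # GHz: 6 cm and 20 cm
    f6cm_rest = f20cm * (nu_rest / (nu_obs * (1.0 + z))) ** alpha_nu
    return f6cm_rest / f2500

def is_radio_loud(R):
    return R > 10.0
```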
Figure\,\ref{fig:composite}\,c shows the radio-loud WLQ composite spectrum.
It is very similar to that of the entire WLQ sample, which is dominated by radio-quiet quasars. We conclude
that there is no significant difference between the composites of radio-loud and of radio-quiet WLQs, in
accordance with earlier findings for ordinary quasars (e.g. Francis et al. \cite{Francis1993}).
\vspace{0.2cm}
\begin{figure}[htbp]
\begin{centering}
\includegraphics[viewport=30 20 560 795,angle=270,width=8.8cm,clip]{Fig_4a.png}
\includegraphics[viewport=30 20 570 795,angle=270,width=8.8cm,clip]{Fig_4b.png}
\end{centering}
\caption{
Two interpretations of the steeper slope of the WLQ continuum.
(a): At the bottom of the panel, the WLQ composite spectrum (red) is compared with the slightly
smoothed SDSS quasar composite spectrum (solid blue) fitted by an MTBB model with temperature parameter
$T^\ast = 8\cdot10^4$\,K (dotted blue).
The thick black curve above is a modified version of the SDSS quasar composite with a hotter continuum
fitted to the MTBB model for $T^\ast = 10^5$\,K and compared with the WLQ composite spectrum (red) that
is normalised to match the former at the right hand side.
(b): As (a), but the modification of the SDSS quasar composite spectrum consists of an additional
power-law component instead of a hotter continuum. Vertical lines: continuum windows.
}
\label{fig:fit}
\end{figure}
It is known from both multi-epoch photometry and spectroscopy of SDSS quasars that the variability in the
emission line flux is only $\sim10\%$ of the variability in the underlying continuum
(Wilhite et al. \cite{Wilhite2005}; Meusinger et al. \cite{Meusinger2011}).
Let us assume that the WLQs represent a stage where the continuum is significantly enhanced but
the lines are not.
At the bottom of Fig.\,\ref{fig:fit}\,a, the continuum of the SDSS composite spectrum
between the Ly$\alpha$ and the \ion{Mg}{ii} line is fitted by a simple multi-temperature black body (MTBB) model
with a temperature parameter $T^\ast \approx 8\cdot10^4$\,K that provides a good fit to the observed quasar
composite spectra over a much wider wavelength range (e.g. Meusinger \& Weiss \cite{Meusinger2013}).
The steeper WLQ composite requires a higher temperature.
Indeed, the fit is considerably improved when we subtract the continuum of the $8\cdot 10^4$\,K MTBB from
the SDSS composite and add instead the continuum of the $10^5$\,K model.
An alternative way to achieve a similar result is shown in Fig.\,\ref{fig:fit}\,b.
We simply added a hypothetical further power-law component $F_\lambda \propto \lambda^{-1.7}$,
i.e. $\alpha_\nu = -0.3$, of approximately the same level as the thermal continuum at 3000\,\AA.
In both cases, the WLQ composite spectrum is well matched by the modified SDSS quasar composite spectrum.
The enhancement of the continuum flux dilutes the line flux and reduces the EWs of the lines
correspondingly.
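This dilution can be quantified: since the line flux $F_{\rm line}$ is assumed to stay constant while the local continuum is enhanced from $F_{\rm c}$ to $F'_{\rm c}$, the equivalent width scales as
\begin{displaymath}
W' = \frac{F_{\rm line}}{F'_{\rm c}} = W\,\frac{F_{\rm c}}{F'_{\rm c}} ,
\end{displaymath}
so that, for instance, an additional power-law component of the same level as the thermal continuum ($F'_{\rm c} = 2F_{\rm c}$) halves the EWs.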
\subsection{Luminosity, black hole mass, Eddington ratio, and accretion rate}\label{subsec:lum_etc}
\subsubsection{Mean properties and correlation diagrams}
Table\,\ref{tab:mean-prop} lists the mean values of the bolometric luminosity $L$, the black hole mass $M$,
the Eddington ratio $\varepsilon$, and the accretion rate $\dot{M}$
for the WLQ-EWS subsample, the rWLQ-EWS subsample, the corresponding parent samples,
and the comparison sample. The rWLQ sample is subdivided into the subsamples of radio-loud
and not radio-loud quasars, and the radio-loud subsample is subdivided into core-dominated and lobe-dominated
radio sources. Quasars with $R>10$ are classified as radio-loud (Sect.\,\ref{subsec:composite}).
The not radio-loud subsample
includes both radio-quiet quasars, i.e. $R<10$, and quasars outside the FIRST footprint area.
With the exception of $\dot{M}$, all data were taken from the Shen catalogue.
The distinction of lobe-dominated and core-dominated radio sources is based on the parameter
{\sc first\_radio\_flag}.
\begin{table*}[hbpt]
\caption{
Mean properties of the WLQ, rWLQ, WLQ-EWS, rWLQ-EWS, and comparison sample.
}
\centering
\begin{tabular}{lrcccccccc}
\hline
Sample &
Number &
Mean $z$ &
$W_{\rm \ion{Mg}{ii}}$ (\AA)&
$W_{\rm \ion{C}{iv}}$ (\AA) &
$L (10^{46}$erg/s) &
$M (10^9 M_\odot)$ &
$\varepsilon$ &
$\dot{M} (M_\odot/$yr) \\
\hline
WLQ-EWS\tablefootmark{a} & 46&$1.48\pm0.56$&$ 9\pm2$&$ 3\pm1 $&$5.5\pm3.6$ &$1.8\pm1.8$&$0.47\pm0.46$& - \\
rWLQ-EWS\tablefootmark{a} & 33&$1.24\pm0.24$&$ 9\pm2$&$ 3\pm1 $&$4.9\pm3.5$ &$1.7\pm1.8$&$0.38\pm0.29$&$7.6\pm6.4$\\
WLQ all & 365&$1.50\pm0.45$&$ 17\pm8$&$13\pm14 $&$7.7\pm11.2$&$3.1\pm4.1$&$0.32\pm0.30$& - \\
rWLQ all & 261&$1.32\pm0.24$&$ 17\pm7$&$17\pm18 $&$5.7\pm5.7$ &$2.7\pm2.9$&$0.28\pm0.22$&$6.5\pm7.0$\\
rWLQ nRL\tablefootmark{b} & 202&$1.32\pm0.24$&$ 18\pm7$&$17\pm17 $&$5.3\pm5.2$ &$2.5\pm2.9$&$0.27\pm0.21$&$6.0\pm6.9$\\
rWLQ RL\tablefootmark{c} & 59&$1.31\pm0.22$&$ 14\pm6$&$20\pm23 $&$7.3\pm7.0$ &$3.1\pm2.9$&$0.30\pm0.26$&$8.0\pm7.0$\\
rWLQ RL,c\tablefootmark{d}& 49&$1.33\pm0.22$&$ 14\pm6$&$20\pm24 $&$7.2\pm7.3$ &$2.9\pm2.8$&$0.32\pm0.27$&$7.9\pm7.1$\\
rWLQ RL,l\tablefootmark{e}& 10&$1.22\pm0.21$&$ 17\pm5$&$22\pm00 $&$7.7\pm5.5$ &$4.3\pm2.9$&$0.22\pm0.18$&$8.3\pm7.0$\\
Comparison &2750&$1.32\pm0.24$&$42\pm22$&$51\pm52 $&$1.5\pm1.6$ &$1.1\pm1.1$&$0.17\pm0.17$&$2.6\pm3.3$\\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{equivalent width selected subsample with $W_\ion{Mg}{ii}<11$\AA,$W_\ion{C}{iv}<4.8$\AA;}
\tablefoottext{b}{not radio-loud;}
\tablefoottext{c}{radio-loud;}
\tablefoottext{d}{radio-loud and core-dominated;}
\tablefoottext{e}{radio-loud and lobe-dominated.}
}
\label{tab:mean-prop}
\end{table*}
\begin{table*}[hbpt]
\caption{Test statistics $D_{\rm max}$ from the Kolmogorov-Smirnov two-sample test for the comparison of the
first seven samples from Tab.\,\ref{tab:mean-prop} with the comparison sample.
In the last two columns,
the critical values $D_{\rm crit,\alpha}$ of the one-tailed test are given for
the error probabilities $\alpha =$ 0.01 and $\alpha =$ 0.001.}
\centering
\begin{tabular}{lcccccc}
\hline
Sample &
$D\,(\log L)$ &
$D\,(\log M)$ &
$D\,(\varepsilon)$ &
$D\,(\dot{M})$ &
$D_{\rm crit,0.01}$ &
$D_{\rm crit,0.001}$ \\
\hline
WLQ-EWS & 0.67 & 0.20 & 0.52 & - & 0.23 & 0.28\\
rWLQ-EWS & 0.62 & 0.21 & 0.50 & 0.55 & 0.27 & 0.33\\
WLQ all & 0.66 & 0.32 & 0.33 & - & 0.09 & 0.10\\
rWLQ all & 0.62 & 0.31 & 0.30 & 0.40 & 0.10 & 0.12\\
rWLQ nRL & 0.59 & 0.28 & 0.28 & 0.39 & 0.11 & 0.14\\
rWLQ RL & 0.69 & 0.42 & 0.38 & 0.49 & 0.20 & 0.24\\
rWLQ RL,c& 0.70 & 0.37 & 0.39 & 0.50 & 0.22 & 0.27\\
\hline
\end{tabular}
\label{tab:KS-test}
\end{table*}
We estimated the accretion rate in the same way as in Meusinger \& Weiss (\cite{Meusinger2013}).
This approach is based on the scaling relation for $\dot{M}$ derived by Davis \& Laor (\cite{Davis2011})
from the standard accretion disk model. As was shown by Davis \& Laor (\cite{Davis2011}),
this relation can be used to compute $\dot{M}$ from the optical luminosity and is thus less affected
by the uncertainties of the innermost part of the accretion disk. We slightly modified this relation to
\begin{equation}\label{eqn:AR_DL}
\dot{M} = 3.5\cdot 0.64^{-1.5\cdot(1+\alpha_\lambda)}\, L_{3000, 45}^{1.5}\, M_8^{-0.89},
\end{equation}
by extrapolating the optical luminosity from the continuum luminosity $L_{3000}$ at 3000\,\AA\
adopting a power law with the spectral index $\alpha_\lambda$, where $\dot{M}$ is in units of
$M_\odot\,\mbox{yr}^{-1}$, $L_{3000, 45}$ is $L_{3000}$ in units of $10^{45}\,\mbox{erg}\,\mbox{s}^{-1}$,
and $M_8$ is the black hole mass in units of $10^8 M_\odot$.
We derived the continuum spectral index $\alpha_\lambda$ for each quasar individually
by fitting a power law to the foreground extinction-corrected SDSS spectrum in
the continuum windows. These individual slopes were used for all those
quasars having spectra with $S/N>3$ in at least three windows. For quasars with noisier spectra we simply
adopted the mean slope $\alpha_\lambda = -1.52$ from the composite spectrum of the parent sample.
The average accretion rate for the rWLQ-EWS sample is three times higher than for the
comparison sample. The rWLQ sample shows a higher average accretion rate as well.\footnote{The accretion
rates estimated for the WLQ-EWS sample and the entire WLQ sample, respectively, are even higher, but are
not listed in Tab.\,\ref{tab:mean-prop} because of the anticipated higher uncertainties.}
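Equation (\ref{eqn:AR_DL}) is straightforward to evaluate; the helper below is a direct transcription, with the default slope serving only as a fallback, mirroring the treatment of noisy spectra described above:

```python
def accretion_rate(L3000_45, M8, alpha_lambda=-1.52):
    """Accretion rate in M_sun/yr from the scaling relation
    Mdot = 3.5 * 0.64**(-1.5*(1+alpha)) * L3000_45**1.5 * M8**(-0.89).

    L3000_45     : continuum luminosity at 3000 A in 1e45 erg/s
    M8           : black hole mass in 1e8 M_sun
    alpha_lambda : continuum slope of F_lambda ~ lambda**alpha_lambda;
                   the default is the composite-spectrum mean slope
    """
    return 3.5 * 0.64 ** (-1.5 * (1.0 + alpha_lambda)) \
               * L3000_45 ** 1.5 * M8 ** (-0.89)
```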
\begin{figure*}[htbp]
\begin{tabbing}
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5a.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5b.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5c.png}\hfill \\
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5d.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5e.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5f.png}\hfill \\
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5g.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5h.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5i.png}\hfill \\
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5j.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5k.png}\hfill
\includegraphics[viewport=0 52 550 560,angle=0,width=6.0cm,clip]{Fig_5l.png}\hfill
\end{tabbing}
\caption{Bolometric luminosity (left), Eddington ratio $\varepsilon$ (middle), and accretion rate (right)
as a function of the black hole mass.
Top: rWLQ sample (magenta frames: rWLQ-EWS subsample, filled red squares: FIRST-detected, open red squares:
FIRST undetected) and comparison sample (CS; contours).
Second row: high-luminosity subsample (blue symbols) from a simulated
quasar sample (contours).
Third row:
The subsample of comparison quasars with $W_\ion{Mg}{ii}<15$\AA\ and $W_\ion{C}{iv}<7$\AA\ (black symbols)
compared with the entire comparison sample (contours).
Bottom:
quasars with high-S/N spectra (magenta symbols) from the comparison sample
compared with the entire comparison sample (contours).
}
\label{fig:all_parameters}
\end{figure*}
Table\,\ref{tab:mean-prop} suggests that our WLQs tend to have on average higher luminosities, black hole masses,
Eddington ratios, and accretion rates compared to ordinary quasars. We
tested the null hypothesis $H0$ that the WLQ samples and the comparison sample are drawn from the same
population against the alternative hypothesis $H1$ that the values of the population from which the WLQs were
drawn are statistically higher than the values of the population of ordinary quasars. We applied the
one-tailed Kolmogorov-Smirnov two sample test (e.g. Siegel \& Castellan \cite{Siegel1988}).
$H0$ has to be rejected with an error probability $\alpha$ if the
test statistic $D_{\rm max}$ is larger than a critical value $D_{\rm crit, \alpha} (n_1,n_2)$, where $\alpha$ is the
probability of rejecting $H0$ when it is in fact true, $n_1$ and $n_2$ are the numbers of quasars in the two samples.
Table\,\ref{tab:KS-test} lists the values of $D_{\rm max}$ for
the first seven samples from Tab.\,\ref{tab:mean-prop}; the subsample of lobe-dominated radio-loud WLQs was omitted
because of the small sample size. In the last two columns, the critical values $D_{\rm crit}$ are listed
for $\alpha$ = 0.01 and 0.001, respectively. With two exceptions, $H0$ has to be rejected in favour of $H1$ at
$\alpha = 0.001$. The null hypothesis cannot be rejected for the black hole masses of the EWS subsamples.
There is no strong difference, on the other hand, between WLQs of different radio properties, although the
radio-loud subsample is slightly more extreme than the radio-quiet one.
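The one-tailed statistic and its asymptotic critical value can be computed directly from the empirical distribution functions. The sketch below is an independent transcription of the standard formulae; the asymptotic expression $D_{\rm crit,\alpha}=\sqrt{-\ln\alpha\,(n_1+n_2)/(2\,n_1 n_2)}$ reproduces the tabulated critical values to within rounding:

```python
import numpy as np

def one_sided_D(sample, comparison):
    """Sup of [F_comparison(x) - F_sample(x)] over all x: large when
    `sample` is stochastically larger than `comparison`."""
    s = np.sort(np.asarray(sample, dtype=float))
    c = np.sort(np.asarray(comparison, dtype=float))
    xs = np.concatenate([s, c])
    F_s = np.searchsorted(s, xs, side='right') / s.size
    F_c = np.searchsorted(c, xs, side='right') / c.size
    return float(np.max(F_c - F_s))

def D_crit(alpha, n1, n2):
    """Asymptotic one-tailed critical value of the two-sample KS test."""
    return float(np.sqrt(-np.log(alpha) * (n1 + n2) / (2.0 * n1 * n2)))
```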
Figure\,\ref{fig:all_parameters} displays the distributions in the $L-M-\varepsilon-\dot{M}$ parameter space.
The top row shows the rWLQ-EWS sample and the rWLQ sample as symbols and the comparison sample by contours.
The WLQ sample contains several extremely luminous quasars. The highest luminosity is measured
for SDSS J152156.48+520238.5 with $\log L = 48.1$, which is one of the four most
luminous quasars in the SDSS DR7 quasar catalogue.\footnote{The luminosity $\log L = 48.29$ of the
most luminous quasar, SDSS J074521.78+473436.1, is only marginally higher.}
Also the black hole mass $M = 1.3\cdot10^{10}\,M_\odot$ and the Eddington ratio $\varepsilon = 0.95$ of
SDSS J152156.48+520238.5 are very high, whereas the accretion rate $\dot{M} = 2.1 M_\odot/$yr is
rather normal.
However, these values should be interpreted with caution because the virial mass is derived from the
\ion{C}{iv} line (see Sect. \ref{sec:selection}).
High-quality data provided by Wu et al. (\cite{Wu2011}), based on near-infrared spectroscopy, yield
$M = 6.2\cdot10^{9}\,M_\odot$ and $\varepsilon = 0.81$. With $z = 2.238$ (Wu et al. \cite{Wu2011}),
this quasar does not belong to the
rWLQ sample, but there is also an overabundance of very luminous quasars in the rWLQ sample.
On the other hand, the top left panel of Fig.\,\ref{fig:all_parameters} indicates a lower luminosity threshold
for the rWLQs at $\log L_{\rm low} \approx 46.3$, though there is some scatter.
The higher mass of the rWLQs is probably the consequence of the correlation between $L$ and $M$ in the
parent sample (see also Meusinger \& Weiss \cite{Meusinger2013}).
There is no significant difference between the distributions of the radio-loud subsample and the rest, though
radio-loud WLQs tend to be slightly more luminous.
\begin{figure}[bhtp]
\includegraphics[viewport=30 30 590 800,angle=270,width=9.2cm,clip]{Fig_6.png}
\caption{Left: Signal-to-noise ratio versus i band magnitude for the rWLQ sample (filled red squares;
magenta frames: rWLQ-EWS subsample) and for the WLQ candidates that were rejected as noisy
(black plus signs). For comparison, the contour curves show the distribution of the comparison sample
(equally spaced logarithmic local point density contours estimated with a
grid size of $\Delta x,\Delta y = 0.1,0.05$). Horizontal dashed line: explicit selection
threshold S/N$>2$, dotted line: 12th percentile of the rWLQ sample and 88th percentile of the
unclassified noisy spectra at S/N=9.
Right: Frequency distribution for the rWLQ sample (solid red), the rWLQ-EWS subsample
(dashed magenta), and the WLQ candidates rejected as noisy (solid black). The histogram for the
rWLQ-EWS subsample is stretched by a factor of two for better visibility.
}
\label{fig:snr_i}
\end{figure}
\begin{figure*}[bhtp]
\includegraphics[viewport=0 0 550 780,angle=0,width=5.15cm,clip]{Fig_7a.png}
\includegraphics[viewport=100 0 550 780,angle=0,width=4.2cm,clip]{Fig_7b.png}
\includegraphics[viewport=110 0 550 780,angle=0,width=4.12cm,clip]{Fig_7c.png}
\includegraphics[viewport=100 0 550 780,angle=0,width=4.2cm,clip]{Fig_7d.png}
\caption{Equivalent width of the \ion{Mg}{ii} line versus (a) bolometric luminosity,
(b) black hole mass, (c) Eddington ratio, and (d) accretion rate for the rWLQ sample.
Filled red squares: FIRST detected WLQs, open red squares: FIRST undetected,
magenta frames: rWLQ-EWS subsample.
Straight lines: linear regression curves for the rWLQ sample.
The distributions for the comparison sample are indicated by equally
spaced logarithmic local point density contours estimated with a grid size of
$\Delta x,\Delta y = 0.1,0.05$ dex.}
\label{fig:Edd_ratio}
\end{figure*}
Because the lower luminosity cut obviously does not depend significantly on $M$, it produces $M$-dependent
lower limits of the Eddington ratio with $\log \varepsilon_{\rm low} \propto -\log M$ and of the accretion rate
with $\log \dot{M}_{\rm low} \propto -0.89 \log M$ that are clearly seen in the corresponding panels in the top
row of Fig.\,\ref{fig:all_parameters}. We demonstrate such an effect by a simple simulation.
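Analytically, these boundaries follow directly from the definitions. With an essentially $M$-independent
limit $L_{\rm low}$, $L_{\rm Edd} \propto M$, and an accretion-rate scaling of the assumed form
$\dot{M} \propto L^{3/2} M^{-0.89}$ (our reading of the exponent quoted above), one obtains
\begin{align}
\log \varepsilon_{\rm low} &= \log L_{\rm low} - \log L_{\rm Edd} + {\rm const} \propto -\log M ,\\
\log \dot{M}_{\rm low} &= {\rm const} + \tfrac{3}{2}\log L_{\rm low} - 0.89 \log M \propto -0.89 \log M .
\end{align}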
The approach is described in detail in Meusinger \& Weiss (\cite{Meusinger2013});
here we repeat only the most important steps.
We simulated a quasar sample with a $z$ distribution similar to that of the rWLQ sample and with a
reasonable mass distribution. Then we assigned to each quasar a randomly chosen value of the radiative efficiency
of the accretion process, $\eta = 0.057\ldots 0.321$, spanning the range from non-rotating to maximum-spin black holes.
With $\eta = L/(\dot{M}c^2)$ and the scaling relation for $\dot{M}$, we get a relation between $\eta, L,$
and $M$ and can thus compute $L$ for each quasar
assuming, for simplicity, the same spectral slope $\alpha_\lambda = -1.52$ for all quasars.
We applied a $z$-dependent lower $L$ limit
corresponding to the flux limit of the survey, an upper $L$ limit corresponding to the luminosity
distribution, and Gaussian distributed errors of the ``observed'' mass. This simulated ``observed''
quasar sample shows distributions on the $L - M$, $\varepsilon - M$, and $\dot{M}-M$ planes
similar to those of our comparison sample of ordinary quasars.
The differences may be due primarily to the very rough assumptions on the mass distribution and the $\eta-M$ relation
and are not relevant here. Next, we created a subsample of high-luminosity quasars simply by
arbitrarily selecting a proportion (20 \%) of the quasars with $\log L > 46.3$. The simulated sample and
the high-$L$ subsample are shown in the second row of Fig.\,\ref{fig:all_parameters}.
The distributions in the $\varepsilon-M$ plane and in the $\dot{M}-M$ plane are very similar
to the observed ones in the top row of Fig.\,\ref{fig:all_parameters}. Such an agreement cannot be achieved for
a subsample determined by a threshold of the Eddington ratio. Therefore, we conclude that
the visually selected WLQ sample is characterised primarily by higher luminosities rather than by higher Eddington ratios. The differences
in $\varepsilon$ and $\dot{M}$ are then at least partly a consequence of the higher $L$ in combination
with the intrinsic $L-M$ relation of the parent SDSS quasar sample and the $\varepsilon-L$ or $\dot{M}-M$ relation,
respectively.
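For illustration, the main steps of such a simulation can be sketched as follows. All numerical
constants (distribution parameters, zero points, flux limit) are ad hoc placeholders chosen only to
make the sketch run, not the values actually used; the real simulation matches the $z$ and mass
distributions of the rWLQ sample.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 50_000

# Hypothetical input distributions (placeholders)
z = rng.uniform(0.6, 2.0, N)                  # redshift
logM = rng.normal(9.2, 0.5, N)                # log black-hole mass [M_sun]
eta = rng.uniform(0.057, 0.321, N)            # radiative efficiency

# eta = L/(Mdot c^2) combined with an assumed scaling of the form
#   log Mdot = a + 1.5 log L - 0.89 log M
# fixes L for given (eta, M):
#   log L = const + 1.78 log M - 2 log eta    (const chosen ad hoc)
logL = 28.0 + 1.78 * logM - 2.0 * np.log10(eta)   # [erg/s]

# z-dependent lower L limit mimicking the survey flux limit (ad hoc)
# and an upper limit from the luminosity distribution
logL_min = 45.2 + 1.5 * np.log10(1.0 + z)
sel = (logL > logL_min) & (logL < 47.5)

# Gaussian errors of the "observed" mass
logM_obs = logM + rng.normal(0.0, 0.3, N)

# Eddington ratio of the "observed" quasars (L_Edd ~ 1.26e38 M/M_sun erg/s)
logEps = logL - (38.1 + logM_obs)

# high-luminosity subsample: arbitrary 20% draw of quasars with log L > 46.3
high_L = sel & (logL > 46.3) & (rng.random(N) < 0.2)
```

The selection on luminosity alone then automatically shifts the subsample towards higher Eddington
ratios, without any explicit cut in $\varepsilon$.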
The third row of Fig.\,\ref{fig:all_parameters} shows a subsample of low-EW quasars drawn from the
comparison sample of ordinary quasars by an EW threshold. There are too few quasars in the comparison
sample that satisfy the selection criterion of the rWLQ-EWS subsample. Therefore, the constraint
had to be relaxed to $W_\ion{Mg}{ii}<15$\,\AA\ and $W_\ion{C}{iv}<7$\,\AA. The subsample selected in this way
appears to be characterised by higher Eddington ratios, accretion rates, and luminosities compared
with the parent sample, but clearly not by higher masses.
Finally, the bottom row of Fig.\,\ref{fig:all_parameters} shows another subsample from the
comparison sample. Here, only quasars with high S/N of their spectra, S/N $>9$, were selected.
The distributions are remarkably similar to those of the rWLQ sample in the top row.
\subsubsection{S/N bias}
Is the tendency of WLQs to be more luminous than ordinary quasars at the same $z$ a selection effect?
When measuring the spectral index $\alpha_\lambda$, the mean S/N in the pseudo continuum windows was
estimated and quasars with a mean S/N $<2$ were excluded. However, the number of rejected quasars is very
small and this explicit selection threshold is thus not very important. A much stronger
bias towards higher S/N might be implicitly introduced by the visual selection from the icon maps.
A secure classification as a WLQ candidate requires sufficient S/N, and S/N is tightly correlated
with the flux density.
Figure\,\ref{fig:snr_i} shows the measured S/N versus SDSS i band magnitude for both the rWLQ sample
and the comparison sample. The distribution of the rWLQ-EWS subsample is similar to that of the rWLQ sample.
The rWLQ sample only populates the upper part of the region populated by the comparison sample. There are two
possible explanations for this difference: WLQs either have higher S/N due to a selection effect
or are more luminous than normal quasars. In reality, both effects can be combined, of course.
When we consider the distribution of those quasars that were rejected in the course
of the classification procedure because the spectra appeared too noisy, we find a
dichotomy: 88\% of the rWLQ sample have S/N $> 9$, while about the same percentage of the
objects with noisy spectra lie below that value. Taking S/N = 9 as representative of
the selection threshold and applying this threshold to the comparison sample, we find the mean
luminosity of the high-S/N subsample to be about two times higher than that of the entire comparison
sample. The quasars from the rWLQ sample are on average $\sim 4$ times brighter than normal
(Tab.\,\ref{tab:mean-prop}). About half of the luminosity excess is explained by the S/N
selection bias.
To summarise, Figs.\,\ref{fig:all_parameters} and \ref{fig:snr_i} along with Tab.\,\ref{tab:mean-prop}
suggest that WLQs have higher luminosities, Eddington ratios, and accretion rates than normal quasars, although
our basic sample is strongly affected by a selection bias. The relatively high masses of our WLQs are most
likely a consequence of this selection effect.
\begin{figure*}[bhtp]
\begin{tabbing}
\includegraphics[viewport=15 35 550 800,angle=0,width=4.5cm,clip]{Fig_8a.png}
\includegraphics[viewport=15 35 550 800,angle=0,width=4.5cm,clip]{Fig_8b.png}
\includegraphics[viewport=15 35 550 800,angle=0,width=4.5cm,clip]{Fig_8c.png}
\includegraphics[viewport=15 35 550 800,angle=0,width=4.5cm,clip]{Fig_8d.png}
\end{tabbing}
\caption{
Variability index $V$ in the u, g, r, and i band (left to right) versus luminosity for the 23 WLQs
identified in the variability catalogue (filled red squares, magenta frames:
WLQ-EWS subsample).
For comparison, the entire sample from the variability catalogue
is indicated by the median relation (thick blue curve) with standard deviation (shaded green area)
and by the equally spaced logarithmic local point density contours (estimated with a grid size of
$\Delta x,\Delta y = 0.2,0.2$). Open circle with error bars:
mean value and 1$\sigma$ deviations of the WLQs.
}
\label{fig:var}
\end{figure*}
\subsubsection{Baldwin effect}
The anti-correlation between the luminosity\footnote{More precisely, the rest-frame continuum luminosity at
1450\,\AA.} and the EW of the \ion{C}{iv} line was first discovered by Baldwin (\cite{Baldwin1977}) and has
become known as the Baldwin effect. A number of subsequent studies have confirmed this effect for the
\ion{C}{iv} line and for many other emission lines in the ultraviolet and optical
(e.g. Green et al. \cite{Green2001};
Dietrich et al. \cite{Dietrich2002};
Wu et al. \cite{Wu2009};
Richards et al. \cite{Richards2011};
Bian et al. \cite{Bian2012}).
Several interpretations about the origin of the Baldwin effect have been proposed, but there
is currently no consensus about the fundamental mechanisms behind it. A promising explanation
is that it is related to a luminosity-dependence of the SED shape
(e.g. Netzer et al. \cite{Netzer1992};
Zheng \& Malkan \cite{Zheng1993};
Dietrich et al. \cite{Dietrich2002};
Wu et al. \cite{Wu2009}). In the last decade, several studies proposed that the
Baldwin effect may be driven by the Eddington ratio
(Shang et al. \cite{Shang2003};
Bachev et al. \cite{Bachev2004};
Baskin \& Laor \cite{Baskin2004};
Dong et al. \cite{Dong2009, Dong2011};
Bian et al. \cite{Bian2012}).
In Fig.\,\ref{fig:Edd_ratio}, the \ion{Mg}{ii} equivalent width is plotted versus $L, M, \varepsilon,$ and $\dot{M}$,
respectively. The Baldwin effect is clearly indicated for the comparison sample where we observe
slightly stronger anti-correlations with $\varepsilon$ and $\dot{M}$ than with $L$.
The strongest effect is found for $\dot{M}$, probably indicating that the accretion rate is the main driver.
No significant correlation is indicated with the black hole mass $M$.
As in Fig.\,\ref{fig:all_parameters}, there is no significant difference between
radio-loud and not radio-loud quasars. Similar to Fig.\,\ref{fig:all_parameters},
the areas populated by the rWLQ sample in Fig.\,\ref{fig:Edd_ratio}
are identical with the regions of low $W$ and high $L$, $\varepsilon$, and $\dot{M}$ of normal quasars.
The rWLQ-EWS sample is simply the low-EW part
of the rWLQ sample. (Note that there are a few WLQs with $W_\ion{Mg}{ii}$ below but
$W_\ion{C}{iv}$ above the EWS threshold.) The high mean values of the luminosity, the Eddington ratios,
and the accretion rate in Tab.\,\ref{tab:mean-prop} are thus partly a reflection of the Baldwin effect.
The Baldwin effect is also present in the rWLQ sample
(regression lines). Because of the inherent selection bias it is, however, not useful
to compare the slopes with those from the literature.
\subsection{Variability}\label{subsec:variability}
In a previous study (Meusinger et al. \cite{Meusinger2011}), we exploited the
Light-Motion Curve Catalogue (LMCC; Bramich et al. \cite{Bramich08})
of 3.7 million objects with multi-epoch photometry from the S82 of the
SDSS DR7 to analyse the light curves for about 9\,000 quasars in the five SDSS bands.
We computed the rest-frame first-order structure function (SF) $D(\tau_{\rm rf})$ for
each LMCC light curve in each band.
The SF is a sort of running variance of the magnitudes as a function
of the rest-frame time-lag $\tau_{\rm rf}$, i.e. the time difference between two
arbitrary measurements. The arithmetic mean of all noise-corrected SF data points in the interval
$\tau_{\rm rf} \approx 1-2$\,yr was taken as variability estimator. In other words, the quantity
$V$ used to describe the strength of the variability of a quasar is the variance of the magnitude
differences over all pairs of measurements with rest-frame time lags between 1 and 2 yr.
$V$ was computed for each of the five SDSS bands.
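A minimal sketch of this estimator (omitting the noise correction applied in the actual analysis;
the function name and interface are our own) could look as follows:

```python
import numpy as np

def variability_index(t_obs, mag, z, tau_min=1.0, tau_max=2.0):
    """Mean squared magnitude difference over all epoch pairs whose
    rest-frame time lag falls between tau_min and tau_max (years),
    i.e. a first-order structure function averaged over that interval."""
    t_rf = np.asarray(t_obs, float) / (1.0 + z)   # observed epochs -> rest frame
    m = np.asarray(mag, float)
    i, j = np.triu_indices(len(m), k=1)           # all pairs i < j
    tau = np.abs(t_rf[j] - t_rf[i])
    sel = (tau >= tau_min) & (tau <= tau_max)
    if not sel.any():
        return np.nan
    dm = m[j][sel] - m[i][sel]
    return float(np.mean(dm**2))
```

Applied per object and per band to the LMCC light curves, this yields one index $V$ per SDSS band.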
The sample from our previous study included three quasars from the WLQ sample of Diamond-Stanic et al.
(\cite{Diamond2009}). Variability was found to be close to the median value for one of them
but remarkably lower for the other two.
Also other quasars with relatively weak BELs selected from the same sample
were found to vary by less than the general median dispersion. However, this previous WLQ sample was
very small, and the interpretation of some of these objects remains uncertain.
We identified 23 WLQs from our present sample with entries in the variability catalogue
of S82 quasars. Using the variability estimators from this catalogue, we found again a tendency
towards lower variability for the WLQs compared to the entire sample.
Figure\,\ref{fig:var} shows the variability-luminosity diagrams using the variability indices
$V$ in the u, g, r, and i band, respectively.
It has been known for a long time that variability is anti-correlated with luminosity, in the sense
that, at a given rest-frame wavelength, more luminous AGNs tend to vary with lower fractional
amplitudes than less luminous AGNs (e.g.
Pica \& Smith \cite{Pica1983};
Hook et al. \cite{Hook1994};
Paltani \& Courvoisier \cite{Paltani1997};
Vanden Berk et al. \cite{VandenBerk2004};
De Vries et al. \cite{DeVries2005};
Meusinger et al. \cite{Meusinger2011}).
The main driver behind the $V-L$ relation may be the Eddington ratio or the accretion rate
(e.g. Ai et al. \cite{Ai2010}; Meusinger \& Weiss \cite{Meusinger2013}).
A tendency towards lower variability indices of our WLQs is thus expected as a consequence of their higher
luminosities. However, the mean values for the 23 WLQs are below the mean $V-L$ relation in all SDSS bands.
Figure\,\ref{fig:var} thus indicates that the continuum in WLQs is, on average, at least not more variable than
in ordinary quasars. This conclusion is reinforced by the fact that the variability of the line flux is weaker
than the variability of the underlying continuum (Wilhite et al. \cite{Wilhite2005};
Meusinger et al. \cite{Meusinger2011}).
\section{Wide band spectral energy distribution}\label{sec:SED}
\begin{figure*}[bhtp]
\begin{tabbing}
\includegraphics[viewport=10 100 550 670,angle=0,width=4.6cm,clip]{Fig_9a.png}\hfill
\includegraphics[viewport=10 100 550 670,angle=0,width=4.6cm,clip]{Fig_9b.png}\hfill
\includegraphics[viewport=10 100 550 670,angle=0,width=4.6cm,clip]{Fig_9c.png}\hfill
\includegraphics[viewport=10 100 550 670,angle=0,width=4.6cm,clip]{Fig_9d.png}\hfill \\
\includegraphics[viewport=10 60 550 650,angle=0,width=4.6cm,clip]{Fig_9e.png}\hfill
\includegraphics[viewport=10 60 550 650,angle=0,width=4.6cm,clip]{Fig_9f.png}\hfill
\includegraphics[viewport=10 60 550 650,angle=0,width=4.6cm,clip]{Fig_9g.png}\hfill
\includegraphics[viewport=10 60 550 650,angle=0,width=4.6cm,clip]{Fig_9h.png}\hfill
\end{tabbing}
\caption{SDSS and WISE colour-redshift diagrams for the WLQ sample (filled red squares;
magenta frames: WLQ-EWS subsample).
For comparison, the distributions of all quasars from the Shen catalogue are shown by contours
(equally spaced logarithmic local point density contours estimated with a grid size of
$\Delta x,\Delta y = 0.1,0.1$) and by the median
colour redshift relation (thick blue curve) with standard deviation (shaded green area).
}
\label{fig:SDSS_WISE}
\end{figure*}
\subsection{From Ly$\alpha$ to the mid-infrared}\label{subsec:wide-band}
Information on the SED of the quasars with a much wider wavelength coverage than
the SDSS spectra is available particularly thanks to the sky surveys in the infrared.
We used the photometric data for the J, H, and K band from the Two Micron All-Sky Survey (2MASS; Skrutskie et al.
\cite{Skrutskie2006}) and for the w1 to w4 bands from the Wide-Field Infrared Survey Explorer
(WISE; Wright et al. \cite{Wright2010}) in combination with the u,g,r,i, and z magnitudes from the SDSS.
The SDSS and 2MASS magnitudes were taken from the SDSS DR7 quasar catalogue (Schneider et al. \cite{Schneider2010}).
We identified 95\% of the SDSS DR7 quasars in the WISE All-Sky Source
Catalog\footnote{http://irsa.ipac.caltech.edu/cgi-bin/Gator/nph-dd} within a search radius of $6\arcsec$.
99\% of the identified quasars have SDSS-WISE position differences less than $3\arcsec$.
2MASS J, H, and K band magnitudes are only available for 38, 28, and 28\% of the quasars, respectively.
The colour-redshift diagrams from the SDSS and WISE data are shown in Fig.\,\ref{fig:SDSS_WISE}.
Although at first glance the mean colours of the WLQs are similar to those of the entire quasar
population, the wavy structure is less pronounced for the WLQs and
even less for the WLQ-EWS subsample.
The features in the SDSS colour-$z$ diagrams can be understood as being caused by the strong emission lines
moving in and out of the filters with changing redshift (e.g. Richards et al. \cite{Richards2001}).
For example, normal quasars have a relatively blue $r-i$ at $z \approx 1$, where the \ion{Mg}{ii} line
dominates the r band, and a relatively red $r-i$ at $z \approx 1.7$ where it falls into the i band.
The weaker \ion{Mg}{ii} line of the WLQs results in a redder colour $r-i$ at
$z \approx 1$ and a bluer one at $z \approx 1.7$ compared to the median colour redshift relation.
There are no systematic differences
in the WISE colour-$z$ diagrams between the WLQs and the normal quasars.
For the rWLQ sample, we derived
mean colour indices
$w1-w2 = 1.22\pm0.25, w2-w3 = 2.97\pm0.33, w3-w4 = 2.66\pm0.52$ compared to
$w1-w2 = 1.28\pm0.22, w2-w3 = 2.97\pm0.35, w3-w4 = 2.71\pm0.51$ for the comparison samples.
However, in a colour-$z$ diagram for a combined SDSS-WISE colour, the WLQs appear to be significantly
bluer on average, as shown for $i-w1$ in Fig.\,\ref{fig:SDSS_WISE}\,e.
For $z=0.7\ldots 2$, this colour index reflects the slope of the SED between rest-frame near-ultraviolet
and near-infrared.
After applying Galactic extinction corrections to the photometry in all 12 bands, the magnitudes were transformed
into monochromatic fluxes per wavelength interval, $F_\lambda$, and the effective wavelengths of each filter
were transformed into rest-frame wavelengths.
We arbitrarily normalised the rest-frame SED of each quasar at 3000 \AA. Therefore, the
redshift range had to be restricted to $ 0.3 \le z \le 2$. The sample contains 79\,055 quasars.
After normalisation, we computed the median and
standard deviation of all data points in narrow wavelength intervals.
\begin{figure*}[bhtp]
\begin{tabbing}
\includegraphics[viewport=40 50 570 780,angle=270,width=9.1cm,clip]{Fig_10a.png}
\includegraphics[viewport=40 50 570 780,angle=270,width=9.1cm,clip]{Fig_10b.png}\\
\includegraphics[viewport=40 50 570 780,angle=270,width=9.1cm,clip]{Fig_10c.png}
\includegraphics[viewport=40 50 570 780,angle=270,width=9.1cm,clip]{Fig_10d.png}
\end{tabbing}
\caption{
Wide band (wb) SED.
(a) Shen catalogue with $z<2$ (yellow solid curve: median, green shaded area: $1\sigma$ errors)
and WLQ comparison sample (black dotted curve), normalised at 3000 \AA. Over-plotted (blue): SDSS quasar
composite spectrum from Vanden Berk et al. (\cite{VandenBerk2001}).
The horizontal lines indicate the wavelength intervals covered by the different photometric bands.
The inset shows the wb SED of all SDSS quasars (solid green) and of the comparison sample (black dotted) in the
wavelength range 1300\AA\ $ - 3000$\,\AA\ in linear scale, as well as the SDSS quasar composite spectrum
smoothed with a 300 \AA\ boxcar (blue dashed).
(b) WLQ-EWS sample (red solid curve) and comparison sample
(black dotted curve and green shaded area).
Over-plotted (blue): SDSS WLQ composite spectrum from Fig.\,\ref{fig:composite}.
(c) As (b), but for the entire WLQ sample instead of the WLQ-EWS subsample.
(d) As (b), but the comparison sample SED is boosted by a power-law component (plc) with
$\alpha_\lambda = -1.7$ and re-normalised, the green shaded area indicates the 1$\sigma$ errors of the WLQ sample,
and the SDSS composite spectrum was omitted for the sake of clarity.
}
\label{fig:wide_band}
\end{figure*}
In Fig.\,\ref{fig:wide_band}\,a, we plotted the median wide band SED and the 1$\sigma$ error
interval for the entire quasar sample from the SDSS DR7 quasar catalogue. It is well matched by the
SDSS composite spectrum from Vanden Berk et al. (\cite{VandenBerk2001}). The de-redshifted wavelength intervals
covered by the various photometric bands are indicated by the horizontal lines and labelled in the lower part.
It is evident that the structures seen in the 1$\sigma$ area are related to the edges of the bands in combination with
the different photometric errors. In the near infrared, the strong incompleteness of the 2MASS photometry must be mentioned.
Quasar to quasar variations of the host galaxy fraction contribute to the increase of the scatter from UV to optical
wavelengths. Because the host contamination depends on the quasar luminosity, the WLQ sample with its higher
mean luminosity is expected to have a smaller host contribution compared to the entire quasar sample.
We therefore have to compare the SED of the WLQs with that of a comparison sample with the same luminosity distribution.
We constructed such a control sample in a similar way as mentioned in Sect.\,\ref{sec:selection}, but this time the
two-dimensional distribution in the $L-z$ space is required to match that of the WLQ sample.
The size of this ($L,z$) comparison sample is again ten times that of the WLQ sample.
The wide band SED of the ($L,z$) comparison sample, over-plotted in Fig.\,\ref{fig:wide_band}\,a,
indicates smaller host contributions compared to the parent SDSS quasar sample, as expected.
The wide band SED of the WLQ-EWS sample is shown in Fig.\,\ref{fig:wide_band}\,b along with
that of the ($L,z$) comparison sample. The agreement is relatively good, but the WLQ SED turns out to be
slightly steeper over nearly the entire covered wavelength range. The same trend is seen for the
wide band SED of the entire WLQ sample (Fig.\,\ref{fig:wide_band}\,c), which shows fewer fluctuations
because of the larger sample size. Figure\,\ref{fig:wide_band}\,d
shows that the agreement between the WLQ sample and the comparison sample is improved
after adding a power-law component
$F_{\rm \lambda,\ add} = c \lambda^{\alpha_\lambda}$ to the comparison sample SED. We checked various parameter
combinations $(c,\alpha_\lambda)$ and found a good match for $\alpha_\lambda \approx -1.8\ldots -1.7$, i.e.
$\alpha_\nu = -0.2\ldots -0.3$, and an enhancement of the total flux density at 3000 \AA\ by a factor $\sim 2$.
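The power-law modification can be sketched as follows. The function and its normalisation convention
are our own; only the slope and the flux-doubling at 3000\,\AA\ are taken from the text.

```python
import numpy as np

def boost_sed(wavelength, f_comp, alpha=-1.7, boost_at_3000=1.0):
    """Add a power-law component F_add = c * lambda**alpha to a comparison
    SED and re-normalise the result at 3000 A.  boost_at_3000 gives the
    added flux at 3000 A in units of the original flux there; a value of
    1 doubles the total flux density at 3000 A."""
    wl = np.asarray(wavelength, float)
    f = np.asarray(f_comp, float)
    c = boost_at_3000 * np.interp(3000.0, wl, f) / 3000.0**alpha
    f_mod = f + c * wl**alpha
    return f_mod / np.interp(3000.0, wl, f_mod)
```

Varying $(c, \alpha_\lambda)$ in such a scheme corresponds to the parameter combinations checked above.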
\subsection{Radio emission}\label{subsec:radio}
Table\,\ref{tab:mean-prop} indicates a
relatively high percentage of radio-loud WLQs compared to the entire SDSS DR7 quasar sample.
In general, the distribution of the radio-loudness of quasars appears to
be bimodal with about 10\% being radio-loud (Kellermann et al. \cite{Kellermann1989}; White et al. \cite{White2000};
Ivezi\'c et al. \cite{Ivezic2002}).
\begin{table}[htbp]
\begin{center}
\caption{Radio properties of the WLQs and of the comparison samples.
(\#: number of quasars,
$f_{\rm rl}$: proportion of radio-loud quasars;
$f_{\rm r}$: proportion of radio-detected quasars in FIRST footprint;
$R$: radio-loudness;
$r_{\rm N,cl}$: number ratio of core-dominated to lobe-dominated sources;
$r_{\rm R,cl}$: ratio of the mean radio-loudness of core-dominated to lobe-dominated sources.)
}
\begin{tabular}{lrccccc}
\hline
Sample & \# \ \ & $f_{\rm rl}$ & $f_{\rm r}$ & $\overline{\log R}$ & $r_{\rm N,cl}$ & $r_{\rm R,cl}$\\
\hline
WLQ-EWS\tablefootmark{a} & 46 & 0.35 & 0.42 & 1.70 & 8.0 & 0.04 \\
rWLQ-EWS\tablefootmark{a} & 33 & 0.37 & 0.47 & 1.67 & 6.0 & 0.04 \\
WLQ\tablefootmark{a} & 365 & 0.26 & 0.34 & 1.65 & 9.5 & 0.30 \\
rWLQ\tablefootmark{a} & 261 & 0.22 & 0.33 & 1.58 & 6.9 & 0.22 \\
Comparison\tablefootmark{a} & 2750 & 0.06 & 0.06 & 2.23 & 2.7 & 0.60 \\
WLQ Paper 1\tablefootmark{b} & 161 & 0.25 & 0.34 & 1.52 & 54/0 & - \\
WLQ Lit\tablefootmark{c} & 98 & 0.10 & 0.28 & 1.46 & 26.0 & 0.36 \\
WLQ Lit ($z<3$)\tablefootmark{d} & 25 & 0.24 & 0.36 & - & 9/0 & - \\
Shen EW$<15$\tablefootmark{e} & 1268 & 0.22 & 0.24 & 1.93 & 10.9 & 0.25 \\
\hline
\end{tabular}
\tablefoot{
\tablefoottext{a}{As in Tab.\,\ref{tab:mean-prop}};
\tablefoottext{b}{WLQs from Paper 1 with identification in Shen catalogue};
\tablefoottext{c}{WLQs from literature (see text)};
\tablefoottext{d}{WLQs from literature with $z<3$};
\tablefoottext{e}{EW selected sample from the Shen catalogue in FIRST footprint with $0<W_{\ion{Mg}{ii}}<15$\AA\
and {\sc BAL\_FLAG = 0}}.
}
\label{tab:radio-prop}
\end{center}
\end{table}
Table \,\ref{tab:radio-prop} gives an overview of the radio properties of various samples. The
proportion of radio-loud quasars, i.e. the ratio $f_{\rm rl} = N_{\rm rl}/N_{\rm all}$ of the number
$N_{\rm rl}$ of radio-loud quasars to the number $N_{\rm all}$ of all WLQs in FIRST footprint amounts
to 0.26 for the WLQ sample and 0.22 for the rWLQ sample. The radio-loudness proportion is even higher
for the EWS subsamples (0.35 and 0.37). In the comparison sample, only 6\% of the quasars are radio-loud,
in good agreement with the value of 8\% found for the SDSS quasar sample (Ivezi\'c et al. \cite{Ivezic2002}).
A high proportion of radio-loud quasars was also found for the WLQ sample from Paper 1. In addition, we constructed
a sample of 98 WLQs from the literature (Diamond-Stanic et al. \cite{Diamond2009};
Shemmer et al. \cite{Shemmer2010}; Lane et al. \cite{Lane2011}; Niko{\l}ajuk \& Walter \cite{Nikolajuk2012};
Wu et al. \cite{Wu2012}). The percentage of radio-loud quasars is lower for that sample.
However, the majority of these quasars have higher redshifts than ours and when
we reject those with $z>3$, the remaining subsample has a radio-loudness rate as
high as in our WLQ sample. The lower radio-loudness percentage among the high redshift WLQs from the
literature is mainly due to the fact that many of these sources were selected to be not radio-loud.
Finally, we selected all quasars with \ion{Mg}{ii} equivalent widths
$W_{\ion{Mg}{ii}}<15$\AA\ from Shen et al. (\cite{Shen2011}). Again, the radio-loudness proportion
(0.22) is much higher than typical for the parent quasar sample. On the other hand, though $f_{\rm rl}$
is high, the mean radio-loudness $\overline{\log R}$ for the WLQs is much lower than for
the comparison sample.
\begin{figure}[ht]
\begin{centering}
\includegraphics[viewport=60 20 610 785,angle=270,width=8.8cm,clip]{Fig_11a.png}
\includegraphics[viewport=60 20 610 785,angle=270,width=8.8cm,clip]{Fig_11b.png}
\end{centering}
\caption{Fraction of (a) FIRST radio-detected quasars and (b) radio-loud quasars in luminosity bins for
the quasars from the Shen catalogue (blue asterisks), the subsample of Shen quasars with
$W_\ion{Mg}{ii}<11$\,\AA\ and $W_\ion{C}{iv}<4.8$\,\AA\ (open blue squares), our WLQ sample
(filled red squares), and the WLQ-EWS subsample (magenta framed squares).
All samples restricted to $0.6\le z \le 2$.
}
\label{fig:radio_fraction}
\end{figure}
\begin{figure*}[ht]
\begin{centering}
\includegraphics[viewport=0 40 600 785,angle=0,width=9.0cm,clip]{Fig_12a.png}
\includegraphics[viewport=0 40 600 785,angle=0,width=9.0cm,clip]{Fig_12b.png}
\end{centering}
\caption{Radio loudness and equivalent width of \ion{Mg}{ii} (left) and \ion{C}{iv} (right).
Top: radio-loudness parameter $R$
(filled red squares: rWLQ sample,
magenta framed filled squares: rWLQ-EWS sample,
large open red circle with error bars: mean value and 1$\sigma$ error of rWLQs,
blue plus signs: comparison sample, contours: all quasars from the Shen catalogue with {\sc bal\_flag = 0}).
Middle: proportion $f_{\rm r}$ of FIRST-detected quasars in EW bins
(filled red squares: rWLQ sample,
magenta framed filled square: mean value from the rWLQ-EWS sample,
open blue squares: comparison sample,
black lozenge: WLQ Lit ($z<3$) sample,
black asterisks: all quasars from the Shen catalogue with {\sc bal\_flag = 0},
horizontal bars: binning intervals,
vertical bars: uncertainties from Poisson statistics and error propagation).
Bottom: histogram distribution of the EW
(solid red: rWLQ sample, dashed blue: comparison sample).
}
\label{fig:EW_radio}
\end{figure*}
The proportion $f_{\rm r} \equiv N_{\rm r; R>0}/N_{\rm all}$ of the radio-detected
quasars in the FIRST survey area amounts to 0.34 for the entire WLQ sample,
0.33 for the rWLQ sample, but only 0.06 for the comparison sample.
Again, the ratios are higher for the EWS subsamples (0.42 and 0.47).
The radio proportion is also higher for the other WLQ samples
in Tab.\,\ref{tab:radio-prop}. It is qualitatively expected that the percentage of radio detections
is higher for our WLQ sample than for the comparison sample because of the higher luminosity.
As a consequence of the relatively narrow $z$ range, the luminosity strongly correlates with the flux
density and the WLQ sample is thus expected to also have a higher number of quasars with
1.4 GHz radio flux densities exceeding the FIRST detection limit, even if $R$ is independent of $L$.
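For illustration, such detection fractions and their uncertainties can be computed as below. The error
formula is a simple Poisson propagation and an assumption on our part, not necessarily the exact recipe
used for the figures.

```python
import numpy as np

def detection_fraction(n_det, n_tot):
    """Proportion f = N_det/N_tot of radio-detected quasars with a simple
    Poisson error propagation, sigma_f = f * sqrt(1/N_det + 1/N_tot)."""
    f = n_det / n_tot
    return f, f * np.sqrt(1.0 / n_det + 1.0 / n_tot)
```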
Figure\,\ref{fig:radio_fraction} shows the proportion of (a) the radio-detected quasars
and (b) the radio-loud quasars as functions of the luminosity
for both WLQs and ordinary quasars from the Shen catalogue.
Both samples were restricted to $z = 0.6-2.0$,
where the overwhelming majority (93\%) of the WLQs are found. For the Shen catalogue sample,
the trend with $L$ is only moderate for the proportion of radio-loud quasars but strong
for the proportion of radio detections. On the other hand,
Fig.\,\ref{fig:radio_fraction} clearly shows that the luminosity dependence of the
proportion of radio detections is not nearly strong enough to explain the high rates
of FIRST-detected WLQs.
The WLQ-EWS subsample follows the same trend as our visually selected WLQ sample.
We conclude that the high percentage of radio sources among our WLQs is not primarily a
consequence of the higher luminosities, i.e. of the Baldwin effect in combination with
the S/N selection bias discussed in Sect.\,\ref{subsec:lum_etc}.
In the top panels of Fig.\,\ref{fig:EW_radio}, we plotted $R$ as a function of
$W_{\rm \ion{Mg}{ii}}$ (left) and $W_{\rm \ion{C}{iv}}$ (right), respectively,
for the rWLQ sample and the comparison sample of normal quasars. In addition, the distributions
for all quasars from the Shen catalogue are shown by contour lines.
There is a wide scatter, but the linear regressions for the comparison sample indicate
an increase of $R$ with increasing $W$. The centroid of the WLQ distribution is slightly
below the regression line. The middle panels of Fig.\,\ref{fig:EW_radio} show
the proportions $f_{\rm r}(W)$ of radio-detected quasars in EW bins. There seems to be a
negative correlation between $f_{\rm r}$ and $W$ for the WLQs.
The overwhelming majority of the FIRST-detected WLQs are core-dominant radio sources.
The ratio $r_{N, {\rm cl}} = N_{\rm c}/N_{\rm l}$ of the numbers of core-dominant to
lobe-dominant sources is about three times higher than for the
comparison sample. Again, high ratios $r_{N, {\rm cl}}$ are also found for the other WLQ samples.
On the other hand, the ratio $r_{R, {\rm cl}} = \overline{R}_{\rm c}/\overline{R}_{\rm l}$
of the mean radio-loudness of the core-dominant
to that of the lobe-dominant sources is about $2\ldots 3$ times lower than usual.
The very low value of $r_{R, {\rm cl}} = 0.04$ for the rWLQ-EWS subsample is likely an effect of the
poor statistics with only two lobe-dominated quasars.
\subsection{X-rays}
Finally, we briefly consider the rate of X-ray sources. We use the column
{\sc rass\_offset} from the Shen catalogue to select X-ray detections. For an offset less
then 30 arcsec, we find a percentage of 6.5\% for the rWLQ sample compared to
2.6\% for the comparison sample. If we restrict the offset to less than 10 arcsec, the
corresponding proportions are 1.8\% and 0.6\%, respectively. Thus, the X-ray detection rate
appears to be about three times higher for the WLQs than for ordinary quasars,
but these small-number statistics are very uncertain. The mean redshifts and RASS
count rates are similar for both samples.
\section{Discussion}\label{sec:discussion}
We found that the continuum of the composite SED for the WLQ sample is steeper than for ordinary quasars. The
WLQ composite spectrum is reasonably well matched by a modified composite of normal quasars where the BELs are normal but
the continuum is hotter than usual. We also found that the WLQs have higher luminosities, Eddington ratios,
and accretion rates, while the variability is lower. Finally, the WLQ sample is also significantly different from
the comparison sample of ordinary quasars with respect to the radio properties. In this section, we try to
give a consistent interpretation of these findings.
In the standard picture (Shakura \& Sunyaev \cite{Shakura73}; Frank et al. \cite{Frank02}), the
temperature of the accretion disk around a black hole of given mass is determined by the
accretion rate. Higher accretion rates lead to higher disk temperatures, luminosities, and Eddington ratios.
The variability is known to anti-correlate with the Eddington ratio and the accretion rate
(Wilhite et al. \cite{Wilhite2008}; Bauer et al. \cite{Bauer2009}; Zuo et al. \cite{Zuo2012};
Meusinger \& Weiss \cite{Meusinger2013}).
The accretion process in quasars is probably accompanied by strong local temperature fluctuations
(Dexter \& Agol \cite{Dexter2011}; Dexter et al. \cite{Dexter2012}; Ruan et al. \cite{Ruan2014}) and
perhaps also by global fluctuations of the accretion rate (Pereyra et al. \cite{Pereyra2006};
Li \& Cao \cite{Li2008}; Zuo et al. \cite{Zuo2012}). The variability strength of the
BEL flux is an order of magnitude less than for the underlying continuum (Wilhite et al. \cite{Wilhite2005};
Meusinger et al. \cite{Meusinger2011}). A change of the level of the continuum flux is not
immediately accompanied by a change of the line flux of the same magnitude.
Hryniewicz et al. (\cite{Hryniewicz2010}) proposed a scenario where quasar activity has an
intermittent character with several subphases. Each subphase starts with a slow development of the BLR.
At least in a statistical sense, the WLQs in our sample can be consistently understood as AGNs
at the beginning of a phase of stronger accretion, i.e. with enhanced accretion rate and luminosity
and thus reduced variability, whereas the BLR has not
yet adapted to the level of the disk. In such a scenario, high ionisation BELs, like
\ion{C}{iv} and Ly$\alpha$, are expected to be weaker than low ionisation BELs,
like \ion{Mg}{ii}, because they are formed at a greater distance from the disk. In fact,
our WLQ sample indicates a stronger weakening of the high-ionisation lines compared to the
low-ionisation lines (Tab.\,\ref{tab:mean-prop}; Fig.\,\ref{fig:fit}).
In order to account for the high fraction of radio-detected quasars among our WLQs, we consider
the viewing angle towards a radio jet as another possibly important aspect for the WLQ phenomenon.
We found that our WLQs are, on average, less radio-loud than ordinary quasars but are much more
likely to have FIRST counterparts. About one third of the WLQs in our sample are radio sources
on the FIRST level.
First, however, we have to make sure that the high percentage of radio detections is not a selection effect.
In Paper\,1, we found remarkably high radio detection rates for different types of unusual quasars,
particularly for unusual BAL quasars. The radio detection rate was found there to positively
correlate with the degree of the deviation from the SDSS quasar composite spectrum.
The ratio, $N_{\rm F}/N_{\rm C}$, of the number of quasars with the FIRST target flag set to the number
of quasars with their colour target flag set was found to be significantly higher for the unusual quasars
(0.33) than for the entire quasar catalogue (0.07). In addition, we considered the ratio
$N_{\rm Fs}/N_{\rm Cs}$, where $N_{\rm Fs}$ is the number of quasars selected solely by the FIRST
selection but not the colour selection and $N_{\rm Cs}$ is the number of quasars without FIRST
counterparts but selected solely based on their colours. Again, we found a much higher value (0.22)
for the unusual than for the entire quasar sample (0.01).
For the WLQ sample of the present study, we have $N_{\rm F}/N_{\rm C} = 0.33$, reflecting once again
the high proportion of radio detections, but $N_{\rm Fs}/N_{\rm Cs} = 0.01$, which is in accordance with
the ordinary quasars. The latter is actually not surprising because the SDSS
colour-$z$ relations of the WLQs are not much different from the mean relations (Fig.\,\ref{fig:SDSS_WISE}).
This indicates that, different from e.g. the unusual BAL quasars in Paper 1, the high radio detection rate
of the WLQs cannot be unambiguously attributed to a selection bias in the sense that a high number
were only targeted by SDSS because they had been detected as radio sources. As a consequence, we
have to assume that the high percentage of radio sources is an intrinsic property.
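The selection-bias check above amounts to tallying target flags. A sketch of the two ratios is given below; the boolean keys `first_flag` and `colour_flag` are hypothetical placeholders for the actual SDSS target-flag fields:

```python
def selection_ratios(quasars):
    """Compute N_F/N_C and N_Fs/N_Cs for a list of quasars, each given
    as a dict with boolean entries 'first_flag' (FIRST target flag set)
    and 'colour_flag' (colour target flag set). Key names are
    illustrative, not the real SDSS bitmask fields."""
    n_f = sum(q['first_flag'] for q in quasars)
    n_c = sum(q['colour_flag'] for q in quasars)
    # quasars selected solely by FIRST vs. solely by their colours
    n_fs = sum(q['first_flag'] and not q['colour_flag'] for q in quasars)
    n_cs = sum(q['colour_flag'] and not q['first_flag'] for q in quasars)
    return n_f / n_c, n_fs / n_cs
```

A high $N_{\rm F}/N_{\rm C}$ together with a low $N_{\rm Fs}/N_{\rm Cs}$, as found for the WLQ sample, indicates that the radio detections are not driven by the FIRST-only selection channel.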
Our WLQ sample was visually selected from the Kohonen icon maps (Sect.\,\ref{sec:selection}).
The basic criterion for the inclusion into the final sample was the presence of weak but clearly
identifiable BELs. Moreover, we only accepted objects identified in the SDSS DR7 quasar catalogue
(Schneider et al. \cite{Schneider2010}; Shen et al. \cite{Shen2011}) where the presence of broad
lines in the spectra is required to be included. The presence of BELs is usually considered as
evidence that the continuum radiation is not dominated by a beamed synchrotron component.
Therefore our sample is unlikely to be dominated by BL Lac objects where the beamed synchrotron emission
strongly overpowers the thermal emission. The variability data (Sect.\,\ref{subsec:variability}) are also
inconsistent with a dominance of beamed non-thermal emission.
It has been suggested that the radio-to-optical flux ratio $R_{\rm c}$ from the radio core is a useful
statistical measure of orientation (Baker \& Hunstead \cite{Baker1995}; Wills \& Brotherton \cite{Wills95};
Kimball et al. \cite{Kimball11}). In standard unification theory, orientation has an effect on
the optical spectrum both via obscuration in the plane of the accretion flow and via relativistic
boosting in the line of sight close to the relativistic outflow. High values of $R_{\rm c}$ are
supposed to indicate a low angle to the jet axis. We did not explicitly separate the radio
emission into core and extended components. However, because the majority ($>90$\%) of our radio-detected
WLQs are core dominant, we simply identify $R$ with $R_{\rm c}$. Then, Fig.\,\ref{fig:EW_radio}
would mean that our radio-detected WLQs have higher inclination angles to the jet. A similar trend of $R_{\rm c}$ with
$W_\ion{Mg}{ii}$ and $W_\ion{C}{iv}$ was reported by Kimball et al. (\cite{Kimball11}) who suggested that
this correlation may be caused by anisotropic emission from the BLR. From the study of composite spectra of
quasar samples grouped by the radio core-to-lobe ratio, Baker \& Hunstead (\cite{Baker1995}) found
stronger reddening in lobe-dominated quasars thought to be observed at higher inclination angles to
the jet (i.e. edge-on). Because the core-to-lobe ratio is correlated with $R_{\rm c}$
(Kimball et al. \cite{Kimball11}), we would expect our WLQs to exhibit more strongly reddened spectra
because of their lower mean $R$ value. This, however, is clearly not the case. The median SED of the WLQs is
steeper than that of the comparison sample. If this difference were caused by reddening, the WLQ sample would
have to be less reddened. We conclude that the WLQs in our sample are not seen preferentially edge-on.
The majority (65\%) of the WLQs with $R>10$ have radio-loudness parameters in the range $25\ldots 250$, sometimes
called radio-intermediate. It was proposed that flat-spectrum radio-intermediate quasars are
relativistically boosted radio-quiet quasars (Falcke et al. \cite{Falcke1996}; Wang et al. \cite{Wang2006}).
If this also applies to the radio-intermediate WLQs in our sample, we expect to view these quasars
at low inclination angles to the jet.
We argued (Fig.\,\ref{fig:wide_band}) that the difference between the slopes of the median wide band
SEDs of the WLQ sample and the comparison sample can be explained by an additional power-law component,
or alternatively by a hotter continuum (Fig.\,\ref{fig:fit}), that does not dominate but makes a substantial
contribution to the observed flux of the WLQs. An enhancement of the continuum by a factor of two caused by
an additional component reduces the EW of the lines correspondingly (Fig.\,\ref{fig:fit}).
\section{Summary and conclusions}\label{sec:summary}
We performed a new search for quasars with weak emission lines in the spectroscopic data from the
SDSS DR7. We visually inspected the 36 self-organising maps (Kohonen maps) from Paper 1 for nearly $10^5$ spectra
classified as quasars by the SDSS spectroscopic pipeline and selected a sample of $\sim 2500$ WLQ candidates. After the
thorough individual analysis of all selected spectra we created a final sample of 365 WLQs with mean redshift
$z = 1.50 \pm 0.45$. The mean equivalent widths are $17 \pm 8$ \AA\ for \ion{Mg}{ii} and $13 \pm 14$ \AA\ for
\ion{C}{iv}.
To avoid contamination, featureless spectra were not included. The corresponding incompleteness is estimated to be
about 10\%. The sample includes a subsample of 46 WLQs with EWs below the 3-sigma thresholds defined by the EW
distribution of the ordinary quasars (EWS subsample). Especially for the analysis of the accretion rates,
a subsample restricted to the redshift range $0.7 < z < 1.7$ was considered (rWLQ sample).
The investigation of the properties of the WLQs (WLQ-EWS subsample, rWLQ-EWS subsample, and their parent
samples) and their comparison with corresponding control samples yields the following results:
\begin{itemize}
\item
The SDSS composite spectrum for the WLQs shows significantly weaker BELs and a bluer continuum compared
to the ordinary quasars. It can therefore be excluded that WLQs are more strongly affected by dust reddening
than usual, and it is unlikely that WLQs are
preferentially seen edge-on. No significant differences are found between the composites of the radio-loud and
the non-radio-loud WLQs. (Sect. \ref{subsec:composite})
\item
The wide-band SED constructed from the SDSS, 2MASS, and WISE photometric data
(Sect. \ref{subsec:wide-band}) shows that the trend of a steeper continuum continues towards the mid infrared.
\item
The WLQs have, on average, significantly higher luminosities, Eddington ratios, and accretion rates,
but not significantly higher black hole masses (Sect. \ref{subsec:lum_etc}).
About half of the luminosity excess is produced by an S/N bias in the selection process.
The remaining intrinsic luminosity excess corresponds to the Baldwin effect and also manifests itself in
higher accretion rates and Eddington ratios. The higher mean mass is simply a consequence of the
higher mean luminosity in combination with the positive $M - L$ correlation in the parent sample.
\item
Indicators for the strength of the UV and optical variability are available for 23 WLQs from our sample.
They show a wide scatter with a tendency towards lower variability than
that of the ordinary quasars of comparable luminosities
(Sect. \ref{subsec:variability}).
\item
The WLQ sample has remarkable radio properties (Sect. \ref{subsec:radio}) that are probably not produced by
selection effects. The percentage of radio-detected quasars is more than five times higher than for the
control sample and the ratio of the number of core-dominant to the number of lobe-dominant radio sources
is about three times higher. On the other hand, the mean radio-loudness of the radio-detected WLQs is
much lower than for the ordinary quasars. The proportion of radio sources increases towards lower equivalent
widths while the mean radio loudness decreases.
\end{itemize}
The higher luminosities and Eddington ratios in combination with a bluer SED can be consistently explained by hotter
accretion disks, i.e. by stronger accretion. According to the scenario proposed by Hryniewicz et al. (\cite{Hryniewicz2010}),
a change towards a higher accretion rate is accompanied by an only slow development of the BLR. We have shown
that the composite WLQ spectrum can be reasonably matched by the ordinary quasar composite where the
continuum has been replaced by that of a hotter disk (Fig. \ref{fig:fit}). Thus, at least
a substantial percentage of the WLQs can be normal quasars in an early stage of increased accretion activity.
On the other hand, a similar effect can be achieved by an additional power-law component, perhaps in relativistically
boosted radio-quiet quasars viewed at low inclination angles to the jet.
Our WLQ sample is thus probably a mixture of quasars at the beginning of a stage of increased accretion
activity on the one hand and of beamed radio-quiet quasars on the other. There are hints of a close
link between the accretion process and the relativistic jets (e.g. Falcke \& Biermann \cite{Falcke1995};
Rawlings \& Saunders \cite{Rawlings1991}; Cao \& Jiang \cite{Cao1999}; Dexter et al. \cite{Dexter2012}),
and one may expect that the two scenarios are closely linked to each other, where an
initially high or an enhanced accretion rate is the reason for the relatively high rate of
radio-detected quasars.
\begin{acknowledgements}
We thank the anonymous referee for suggestions that significantly improved our manuscript.
We further thank Martin Haas for valuable comments and tips.
This research has made use of data products from the Sloan Digital Sky Survey (SDSS),
the Two Micron All-Sky Survey (2MASS), and the Wide-field Infrared Survey Explorer (WISE).
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation,
the Participating Institutions (see below), the National Science Foundation, the National
Aeronautics and Space Administration, the U.S. Department of Energy, the Japanese
Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for
England. The SDSS Web site is http://www.sdss.org/. The SDSS is managed by the Astrophysical Research
Consortium (ARC) for the Participating Institutions. The Participating Institutions are: the American
Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge
(Cambridge University), Case Western Reserve University, the University of Chicago, the Fermi National
Accelerator Laboratory (Fermilab), the Institute for Advanced Study, the Japan Participation Group,
the Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for
Particle Astrophysics and Cosmology, the Korean Scientist Group, the Los Alamos National Laboratory,
the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA),
the New Mexico State University, the Ohio State University, the University of Pittsburgh, University
of Portsmouth, Princeton University, the United States Naval Observatory, and the University of
Washington.
The Two Micron All Sky Survey is a joint project of the University of Massachusetts
and the Infrared Processing and Analysis Center/California Institute of Technology,
funded by the National Aeronautics and Space Administration and the National Science
Foundation.
The Wide-field Infrared Survey Explorer is a joint project of the University of California, Los Angeles,
and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National
Aeronautics and Space Administration.
\end{acknowledgements}
\section{Introduction}
\label{sec_intro}
Microfluidic systems have been shown to be very useful
for continuous manipulation and separation of microparticles with increased control and sensitivity, which is important for a wide range
of applications in chemistry, biology, and medicine. Traditional microfluidic techniques of particle manipulation rely on low Reynolds number laminar flow. Under these conditions, when no external forces are applied, particles follow fluid streamlines. Contrary to this, particles migrate across streamlines to some stationary positions in microchannels when inertial aspects of the flow become significant. This migration is attributed to inertial lift forces, which are currently successfully used in microfluidic systems to focus and separate particles of different
sizes continuously, at high flow rate, and without external forces~\citep{dicarlo07,bhagat08}.
The rapid development of inertial microfluidics has raised a considerable interest in the
lift forces on particles in confined flows. We mention below what we believe are the most relevant contributions.
Inertial lift forces on neutrally buoyant particles have been originally reported for
macroscopic channels~\citep{Segre:Silb62b}. This pioneering work has concluded that particles
focus to a narrow annulus at radial position $0.6$ of a pipe radius, and argued that lift forces vanish at this equilibrium position. However, no particle manipulation systems based on macroscale flows have been
explored. Much later this inertial focusing has provided the basis for various methods of particle separation by size
or shape in microfluidics devices (see \citet{martel2014inertial} and \citet{zhang2016} for recent
reviews). In these microfluidic applications the inertial lift has been balanced by the Dean force due to a secondary rotational
flow caused by inertia of the fluid itself, which can be generated in curved channels~\citep{bhagat08}. These Dean drag forces alter
equilibrium positions of particles. The preferred location of particles in microchannels could also be controlled by the
balance between inertial lift and external forces, such as electric~\citep{zhang2014real} or
magnetic~\citep{dutz2017fractionation}.
In recent years extensive efforts have gone into experimental investigations of particle equilibrium positions in cylindrical~\citep{Matas:etal:JFM04,morita2017} and rectangular channels~\citep{choi2011,miura2014,hood2016}. \citet{Matas:etal:JFM04} have shown that the Segr\'{e}-Silberberg annulus for neutrally buoyant particles shifts toward the wall as $\Re$ increases and toward the pipe center as particle size increases. At large $\Re\geq 600$, some particles accumulate in an inner
annulus near the pipe centre. \citet{morita2017} have found that the inner annulus is not a true equilibrium position, but a transient zone, and that in a long enough pipe all particles accumulate within the Segr\'{e}-Silberberg annulus. It has also been found that equilibrium positions of slightly non-neutrally buoyant particles in a horizontal pipe are shifted toward the pipe bottom~\citep{Matas:etal:JFM04}.
During the last several years numerical calculations~\citep{dicarlo2009prl,liu2015,Loisel2015} and computer simulations~\citep{Chun:Ladd06,kilimnik2011} have also addressed the phenomenon of inertial migration. It has been shown that in rectangular channels particles initially migrate rapidly to manifolds, and then slowly focus within the manifolds to stable equilibrium positions near wall centers and channel corners~\citep{Chun:Ladd06,dicarlo2009prl,hood2016}. There could be two, four or eight equilibrium positions depending on the particle size, channel aspect ratio and Reynolds number. Overall, simulations are consistent with experimental results~\citep{choi2011,miura2014,hood2016}.
There is also a large literature describing attempts to provide a theory of inertial lift. An asymptotic approach, which can shed light on these phenomena, has been developed by several authors~\citep{Saffman65,Ho:Leal74,Vass:Cox76,Cox:Hsu77,Schon:Hinch89,
Asmolov99,Matas:etal:JFM04,matas2009}. Most papers have considered a plane Poiseuille flow except the work by~\citet{matas2009} where a pipe flow has been addressed. The approach can be applied when the particle Reynolds number, $\mathrm{Re}_{p}=a^{2}G/\nu$, where $a$ is
the particle radius, $G$ is the characteristic shear rate and $\nu$ is the
kinematic viscosity, is small. If so, to the leading order in $\mathrm{Re}_{p}$, the disturbance
flow is governed by the Stokes equations, and a spherical particle experiences
a drag and a torque, but no lift. The Stokeslet disturbance originates from the particle
translational motion relative to the fluid and is proportional to the slip velocity $V'_s=V'-U'$, where $V'$ and $U'$ are forward velocities of the particle and of the undisturbed flow
at the particle center. The stresslet is induced by free rotation of the sphere in the shear flow and is proportional to $G$.
The lift force has then been deduced from the solution of the
next-order equations, which accounts for a non-linear coupling between the two disturbances~\citep{Vass:Cox76}:
\begin{equation}
F_{l}^{\prime }=\rho a^{2}\left( c_{l0}{a^{2}}G^{2}+c_{l1}aGV_{s}^{\prime
}+c_{l2}V_{s}^{\prime 2}\right) ,
\label{cher}
\end{equation}%
where $\rho $ is the fluid density.
The coefficients $c_{li}$ ($i=0,1,2$) generally depend
on several dimensionless parameters, such as $z/a$, $H/a$, $V_s'/U_{m}'$, and on the channel Reynolds
number, $\mathrm{Re}=U_{m}'H/\nu$, where $z$ is the distance to the closest
wall, $H$ is the channel thickness, and $U'_m$ is the maximum velocity of the
channel flow. Solutions for $c_l$ have been obtained in some limiting cases only,
and no general analytical equations have still been proposed for finite-sized particles in the channel. Thus, \citet{Vass:Cox76} have calculated the coefficients $c_{l0}^{VC},c_{l1}^{VC},c_{l2}^{VC}$ for pointlike particles
at small channel Reynolds numbers, $\Re \ll 1$, which depend on $z/H$ only and are applicable when $z\gg a$.
\label{add2} \citet{Cheruk:Mclau94} have later evaluated the coefficients $c_{li}^{CM}(z/a)$ for finite-size particles near a single wall in a linear shear flow assuming that $z\sim a$ and proposed simple fits for them. However, it remains unclear if and how earlier theoretical results for pointlike particles at $\Re \ll 1$ or for finite-size particles near a single wall can be generalized to predict the lift of finite-size particles at any $z$ and a finite $\Re$ of a microfluidic channel.
According to Equation~(\ref{cher}) the contribution of the slip velocity to the lift forces dominates when $V_s'\gg Ga$. Since the slip velocity is induced by external forces, such as gravity, it is believed that it impacts a hydrodynamic lift only in the case of non-neutrally buoyant particles. For neutrally buoyant particles with density equal to $\rho$, the slip velocity is normally considered to be negligibly small~\citep{Ho:Leal74,hood2015}. A corollary of that would be that the lift of neutrally buoyant particles could be due to the stresslet only. Such a conclusion, however, can be justified theoretically only for small particles far from walls, $z \gg a$, but hydrodynamic interactions at finite distances $z\sim a$ can induce a finite slip, $V_s'\sim Ga$, so that all terms in Equation~(\ref{cher}) become comparable~\citep{Cheruk:Mclau94}. The variation of the slip velocity of neutrally buoyant particles in a thin near-wall layer can impact the lift force, but we are unaware of any previous work that has addressed this question.
The purpose of this introduction has been to show that, in spite of its importance for inertial microfluidics, the lift forces of finite-size particles in a bounded geometry of a microchannel still remain poorly understood. In particular, there is still a lack of simple analytical formulas quantifying the lift, as well as of general solutions valid in the large range of parameters typical for real microfluidic devices.
Given the current upsurge of interest in the inertial hydrodynamic phenomena and their applications to separation of particles in microfluidic devices it would seem timely to provide a more satisfactory theory of a hydrodynamic lift in a microchannel and also to bring some of modern simulation techniques to bear on this problem. In this paper we present some results of a study of a migration of finite-size particles at moderate
channel Reynolds numbers, $\mathrm{Re}\sim 10$, with the special focus on the role of the slip
velocity in the hydrodynamic lift.
Our paper is arranged as follows. In \S 2 we propose a general expression for the lift force on a neutrally buoyant
particle in a microchannel, which reduces to earlier theoretical results~\citep{Vass:Cox76,Cheruk:Mclau94} in relevant limiting cases. We also extend our expression to the case of slightly non-neutrally buoyant particles with the slip velocity smaller than $G_m a$. To assess the validity of the proposed theory we use a simulation method described in \S 3, and the numerical results are presented in \S 4.
We conclude in \S 5 with a discussion of our results and their possible relevance for a fractionation of particles in microfluidic devices. Appendices~\ref{slip} and~\ref{app_b} contain a summary of early calculations of lift coefficients and the derivation of differential equations that determine trajectories of particles.
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig01.eps}\\
\caption{Sketch of a migration of a particle of radius $a$ to an equilibrium position in a pressure-driven flow. The locus of this position is determined by the balance between lift, $F_l'$, and gravity, $F_g'$, forces.}
\label{fig_sketch}
\end{figure}
\section{Theory}
\label{sec_theo}
In this section we propose an analytical expression for the lift force on neutrally buoyant and slightly non-neutrally buoyant particles of radius $a$, which translate parallel to a channel wall. Our expression is valid for $a/H \ll 1$ at any distance $z$ from the channel wall.
We consider a pressure-driven flow in a flat inclined microchannel of thickness $H$. An inclination angle $\alpha\geq0$
is defined relative to the horizontal. The coordinate axis $x$ is
parallel to the channel wall, and the normal to the wall coordinate is denoted by $z$. The geometry is shown in
Figure~\ref{fig_sketch}. The undisturbed velocity profile in such a channel is given by
\begin{equation}
U'(z)=4U_m'z\left( 1-z/H\right) /H.
\label{eq:Uflow}
\end{equation}
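As a minimal numerical sketch (ours, not from the paper), the profile of Equation~(\ref{eq:Uflow}) and the local shear rate $G(z)=\mathrm{d}U'/\mathrm{d}z=G_m(1-2z/H)$ used below can be written as:

```python
def poiseuille_velocity(z, u_max, h):
    """Undisturbed channel velocity U'(z) = 4 U'_m z (1 - z/H) / H."""
    return 4.0 * u_max * z * (1.0 - z / h) / h

def shear_rate(z, u_max, h):
    """Local shear rate G(z) = dU'/dz = G_m (1 - 2 z/H),
    with G_m = 4 U'_m / H the maximum shear rate at the wall."""
    g_m = 4.0 * u_max / h
    return g_m * (1.0 - 2.0 * z / h)
```

The velocity vanishes at both walls and peaks at the centerline, where the shear rate changes sign.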
Let us now introduce a
dimensionless slip velocity $V_s=V_s'/(a G_{m})$, where $G_m=4U'_m/H$ is the maximum shear rate at the
channel wall. We can then rewrite Equation~(\ref{cher}) as
\begin{equation}
F_{l}'={\rho a^{4}} G_m^{2}c_{l},
\label{f_scale}
\end{equation}%
with the lift coefficient
\begin{equation}
c_{l}=c_{l0}+c_{l1}V_{s}+c_{l2}V_{s}^{2},
\label{eq_force1}
\end{equation}
which depends on the slip velocity, $V_s$, which in turn can be determined from the Stokes equations (a zero-order solution). Therefore, to construct a general expression for the lift force acting on finite-size particles in the channel it is necessary to estimate $V_s$ as a function of $z$.
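Equations~(\ref{f_scale}) and (\ref{eq_force1}) translate directly into code. In the sketch below (ours), the three lift coefficients are treated as given numbers:

```python
def lift_coefficient(c_l0, c_l1, c_l2, v_s):
    """Lift coefficient c_l = c_l0 + c_l1 V_s + c_l2 V_s^2,
    Eq. (eq_force1), with V_s = V'_s / (a G_m) the dimensionless slip."""
    return c_l0 + c_l1 * v_s + c_l2 * v_s**2

def lift_force(a, g_m, rho, c_l):
    """Dimensional lift F'_l = rho a^4 G_m^2 c_l, Eq. (f_scale)."""
    return rho * a**4 * g_m**2 * c_l
```

Note the strong size dependence: at fixed $G_m$ and $c_l$, doubling the particle radius increases the lift by a factor of 16.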
We begin by studying the classical case of neutrally buoyant (i.e. force- and a torque-free) particles with a density $\rho_p$ equal to that of liquid, $\rho$. The expression for $V_s$ in a linear shear flow near a single wall has been derived before~\citep{Goldman1967} and can be used to calculate the slip velocity in the near-wall region of our channel. The fits for $V_s$ are given in Appendix~\ref{slip}, Equations~(\ref{V_sh})-(\ref{wak}). We first note that depending on $z/a$ one can distinguish between two different regimes of behavior of $V_s$. In the central part of the channel, i.e. when $z/a \gg 1$, the slip contribution to the lift decays as $(a/z)^{3}$~\citep{wakiya1967}, being always very small, but finite. In contrast, when the gap between the sphere and the wall is small, $z/a-1\ll 1$, the slip velocity varies very rapidly with $z/a$~\citep{Goldman1967}:
\begin{equation}
V_s^{nb}=-1+\frac{0.7431}{0.6376-0.200\log \left( z/a-1\right)}.
\label{log}
\end{equation}
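The near-wall fit of Equation~(\ref{log}) can be sketched as follows (our code; we assume the logarithm is the natural one, as in the original lubrication-type fits):

```python
import math

def slip_velocity_near_wall(z_over_a):
    """Dimensionless slip of a neutrally buoyant sphere near a single
    wall, Eq. (log), valid for small gaps z/a - 1 << 1
    (Goldman et al. 1967). Natural logarithm assumed."""
    gap = z_over_a - 1.0
    return -1.0 + 0.7431 / (0.6376 - 0.200 * math.log(gap))
```

As the gap closes, the second term vanishes only logarithmically, so $V_s$ approaches its contact value $-1$ very slowly.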
As a side note we would like to mention here that the logarithmic singularity in Equation~(\ref{log}) implies that in the near-wall region the lift coefficient, Equation~(\ref{eq_force1}), cannot be fitted by any power law $(a/z)^n$, as has been previously suggested~\citep{dicarlo2009prl,hood2015,liu2016}.
It follows from Equation~(\ref{log}) that for an immobile particle in contact with the wall, $z=a$, the slip velocity is largest, $V_{s}=-1$. In this limiting case the lift coefficient also takes its maximum value, $c_{l}^{KL}\simeq 9.257$~\citep{krishnan1995}. Far from the wall, the slip velocity is much smaller and can be neglected, so that we can consider $c_l \simeq c_{l0}$. Therefore, when $a\ll z\ll H$, the value of $c_l$ in Equation~(\ref{eq_force1}) is equal to $c_{l0}^{VC}|_{z/H\rightarrow 0}=55\pi /96\simeq 1.8$~\citep{Cox:Hsu77}, i.e. it becomes much smaller than for a particle at the wall. This illustrates that $c_l$ varies significantly in the vicinity of the wall due to a finite slip.
We now remark that the Stokeslet contribution (the second and the third terms in Equation~(\ref{eq_force1})) is
finite for $z\sim a$ only and vanishes in the central part of the channel. Within the close proximity to the wall we may neglect the corrections to the slip and the lift of order $a/H$ due to parabolic flow~\citep{Pasol2006,yahiaoui2010}
and due to the second wall. Therefore, in this region one can use the results by \citet{Cheruk:Mclau94} for the lift coefficients $c_{li}^{CM}$.
The stresslet contribution to the lift (first term in Equation~(\ref{eq_force1})) is finite
for any $z$. Close to the wall, the effect of particle size on this term is negligible as the coefficient
$c_{l0}^{CM}(z/a)$ is nearly constant~\citep{Cheruk:Mclau94}. So we may describe the stresslet contribution by the coefficient $c_{l0}^{VC}$ obtained by \citet{Vass:Cox76}.
This enables us to construct the following formula for the lift coefficient:
\begin{equation}
c_{l}=c_{l0}^{VC}(z/H)+\gamma c_{l1}^{CM}(z/a)V_{s}+c_{l2}^{CM}(z/a)V_{s}^{2}, \label{our_fit}
\end{equation}%
where $\gamma =G(z)/G_m= 1-2z/H\leq 1$ is a dimensionless local shear rate at
the particle position. The fitting expressions for three lift coefficients are summarized in Appendix~\ref{slip}. We, therefore, use Equation~(\ref{cl0}) to calculate $c_{l0}^{VC}$, Equation~(\ref{cl1CM}) to calculate $c_{l1}^{CM}$, and Equation~(\ref{cl2CM}) for $c_{l2}^{CM}$. Note that in the second term of Equation~(\ref{our_fit}) we have introduced a correction factor $\gamma$, which takes into account the variation
of $G$ in the second term of Equation~(\ref{cher}) and ensures the lift to remain
zero at the channel centerline.
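The structure of Equation~(\ref{our_fit}) can be sketched in code. Since the appendix fits for the coefficients are not reproduced here, they are passed in as callables (our sketch, not a definitive implementation):

```python
def lift_coefficient_channel(z, a, h, v_s, c_l0_vc, c_l1_cm, c_l2_cm):
    """Lift coefficient of Eq. (our_fit):
        c_l = c_l0^VC(z/H) + gamma * c_l1^CM(z/a) * V_s
              + c_l2^CM(z/a) * V_s^2,
    with gamma = 1 - 2 z/H the dimensionless local shear rate.
    The three coefficient fits (appendix expressions) are supplied
    as functions of z/H or z/a."""
    gamma = 1.0 - 2.0 * z / h
    return (c_l0_vc(z / h)
            + gamma * c_l1_cm(z / a) * v_s
            + c_l2_cm(z / a) * v_s**2)
```

The factor `gamma` guarantees that the Stokeslet term vanishes at the centerline $z=H/2$, as required by symmetry.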
We recall that Equation~(\ref{our_fit}) is asymptotically
valid for any $z$ when $a/H\ll 1$ and $\Re\ll 1$. However, one can argue that it should be accurate enough at moderate Reynolds numbers. \label{add6} Indeed, the contribution of the undisturbed flow to the inertial terms in the Navier-Stokes equations remains relatively small when $\Re\leq 20$. For this reason, regular-perturbation methods constructed for $\Re\ll 1$~\citep{Ho:Leal74,Vass:Cox76,Cheruk:Mclau94} have successfully predicted the lift force on a point-like neutrally buoyant particle at moderate Re. For larger Re, when the contribution of inertial terms becomes significant, the equilibrium positions should be shifted towards the wall with increasing $\Re$~\citep{Schon:Hinch89,Asmolov99}.
We now turn to non-neutrally buoyant particles, whose density differs from that of the liquid, so that they experience an external gravity force, $F'_{g}$, which in dimensionless form can be expressed as
\begin{equation}
F_{g}=\dfrac{F'_{g}}{\rho a^4 G_m^2}=\dfrac{4\pi g}{3aG_m^2}\Delta \rho, \label{eq:Fg}
\end{equation}
where $\Delta\rho=(\rho_p-\rho)/\rho$. The gravity influences both the particle migration and equilibrium position when $F_{g}=O(1)$. It also induces an additional slip velocity which is of the order of the Stokes settling velocity,
\begin{equation}
V^{St}=\dfrac{F'_g}{6\pi\mu a^2G_m}=\dfrac{\Re_p F_g}{6\pi },\label{eq:St}
\end{equation}
where $\mu $ is the dynamic viscosity. The effect of this velocity on the lift is comparable to $F_l^{nb}$ when $V^{St}=O(1)$, i.e., at large gravity, $F_g\sim 6\pi \Re_p^{-1}\gg 1$, and is very important for vertical or nearly vertical channels.
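Equations~(\ref{eq:Fg}) and (\ref{eq:St}) in code form (our sketch; the value $g=9.81\,$m/s$^2$ is an assumption for the default):

```python
import math

def gravity_force_dimensionless(a, g_m, delta_rho, g=9.81):
    """Dimensionless gravity force F_g = 4 pi g Delta_rho / (3 a G_m^2),
    Eq. (eq:Fg), with Delta_rho = (rho_p - rho) / rho."""
    return 4.0 * math.pi * g * delta_rho / (3.0 * a * g_m**2)

def stokes_settling_velocity(re_p, f_g):
    """Dimensionless Stokes settling velocity V^St = Re_p F_g / (6 pi),
    Eq. (eq:St)."""
    return re_p * f_g / (6.0 * math.pi)
```

For neutrally buoyant particles ($\Delta\rho=0$) both quantities vanish, and $V^{St}=O(1)$ requires the large-gravity regime $F_g\sim 6\pi\Re_p^{-1}$.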
For horizontal channels, the slip velocity is equal to that of a neutrally buoyant sphere since $F_x=0$.
Equation~(\ref{our_fit}) can also be applied in this case since the slip velocity remains small far from walls. Equilibrium positions of particles, $z_{eq}$, can then be deduced from the
balance between the lift and the gravity,
\begin{equation}
c_l(z_{eq})=F_g.
\label{eq:Zeq}
\end{equation}
Equation~(\ref{eq:Zeq}) may have two, one or no stable equilibrium points depending on
$F_g$, and the sensitivity of equilibrium positions to the value of $a$ or $\Delta \rho$ is defined by the value $\partial
c_l/\partial z$. Thus, when the derivative is small, small variations in $F_g$ will lead
to a significant shift in focusing positions. We finally note that the range of possible $z_{eq}$
can be tuned by the choice of $U_m'$.
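Equation~(\ref{eq:Zeq}) can be solved numerically for $z_{eq}$. The following bisection sketch is ours and assumes a bracket on which $c_l - F_g$ changes sign exactly once; the actual coefficient fits of the appendix would be supplied as `c_l`:

```python
def equilibrium_position(c_l, f_g, z_lo, z_hi, tol=1e-10):
    """Solve c_l(z_eq) = F_g by bisection on [z_lo, z_hi].
    c_l is a callable lift-coefficient profile; assumes a single
    sign change of c_l(z) - F_g on the bracket."""
    f_lo = c_l(z_lo) - f_g
    for _ in range(200):
        z_mid = 0.5 * (z_lo + z_hi)
        f_mid = c_l(z_mid) - f_g
        if f_lo * f_mid <= 0.0:
            z_hi = z_mid          # root lies in the lower half
        else:
            z_lo, f_lo = z_mid, f_mid
        if z_hi - z_lo < tol:
            break
    return 0.5 * (z_lo + z_hi)
```

Where $\partial c_l/\partial z$ is small near the root, the computed $z_{eq}$ shifts strongly under small changes of $F_g$, which is exactly the sensitivity discussed above.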
\section{Simulation method}
\label{sec_simul}
In this section, we present our simulation method and justify the choice of parameters.
For our computer experiment, we chose a scheme based on the lattice Boltzmann method~\citep{benzi_lattice_1992,Kunert2010random,Dubov2014} which has been successfully employed earlier to simulate the motion of particles in channel flow. We use a simulation cell confined by two impermeable
no-slip walls located at $z=0$ and $z=79\delta$, so that in all simulations $H=79 \delta$, and two
periodic boundaries with $N_{x}=N_{y}=256\delta$, where $\delta$ is the lattice
spacing. \label{add4} Spherical particles of radii $a=4\delta-12\delta$ are implemented as
moving no-slip
boundaries~\citep{ladd_lattice-boltzmann_2001,bib:jens-janoschek-toschi-2010b,HFRRWL14}, where the chosen radii are sufficient to keep discretisation effects of the order of a few percent~\citep{Janoschek2013}.
A Poiseuille flow is generated by applying a body force, which is equivalent to a pressure gradient $-\nabla p$. We use a three-dimensional, 19-velocity, single-relaxation-time implementation of the lattice Boltzmann method, where the relaxation time
$\tau$ is fixed at 1 throughout this paper. Different flow rates are obtained by
changing the fluid forcing. We use two channel Reynolds numbers,
$\mathrm{Re}=11.3$ and $22.6$. To simulate the migration in an inclined
channel we apply the gravity
force directed at an angle
$\alpha$ relative to the $z$-axis at the center of the particle. In our simulations the values of dimensionless $F_{g}$ vary from $0$ (neutrally buoyant particle) to $13.91$.
In our computer experiments we determine the lift by using two different strategies. In the first method we extract the lift from the migration velocity. We measure the $x$- and $z$-components of the particle velocity to find the dimensionless slip, $V_s=(V'_x-U'(z))/(aG_m)$, and migration, $V_m=V_z'/(a G_{m})$, velocities. \label{add5} To suppress the fluctuations arising from discretization artifacts we average the velocities over approximately 4000 timesteps. The error does not exceed $3\%$ for particles of $a=4\delta$ and decreases rapidly with increasing $a$. The lift force can then be found from these measurements by assuming that the particle motion is quasi-stationary. The
lift is balanced by the $z$-component of the drag, $F_l'=-F_{dz}'$. Following~\citet{Dubov2014} we use the expression
\begin{equation}
F_{dz}^{\prime }\approx -6\pi \mu aV_{m}^{\prime }f_z(z/H,a/H), \label{eq_forcefit}
\end{equation}%
\begin{equation}
f_z=1+\dfrac{a}{z-a}+\dfrac{a}{H-a-z},
\label{eq_forcefit2}
\end{equation}%
where the second and the third terms are corrections to the
Stokes drag due to hydrodynamic interactions with two channel walls. In what follows
\begin{equation}
c_{l}=6\pi V_{m}f_z\Re_p^{-1}. \label{eq_cl}
\end{equation}
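As a minimal illustration, Equations~(\ref{eq_forcefit2}) and~(\ref{eq_cl}) translate directly into code. The geometry below ($a=4\delta$, $H=79\delta$) matches the simulation cell, while the velocity values are arbitrary placeholders.

```python
import math

def f_z(z, a, H):
    """Wall correction to the Stokes drag for wall-normal motion, Eq. (f_z)."""
    return 1.0 + a / (z - a) + a / (H - a - z)

def lift_coefficient(v_m, z, a, H, re_p):
    """Lift coefficient from the measured migration velocity, Eq. (c_l):
    c_l = 6*pi*V_m*f_z / Re_p."""
    return 6.0 * math.pi * v_m * f_z(z, a, H) / re_p

a, H = 4.0, 79.0
# The correction is symmetric with respect to the channel midplane and
# grows steeply as the particle approaches either wall (z -> a or z -> H - a).
near_wall = f_z(6.0, a, H)       # strong wall effect
midplane = f_z(H / 2.0, a, H)    # mild correction at the centre
```

The steep growth of $f_z$ near the wall is why a quasi-stationary particle migrates slowly there even when the lift itself is large.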
The second method to calculate the lift (and to check the validity of the
first approach) uses the balance of the lift and gravity forces described by Equation~(\ref{eq:Zeq}). By varying the gravity force $F_g$ one can, therefore, scan the whole
range of equilibrium positions within the channel to obtain $c_l(z)$. The advantage of such an approach is that it does not require the particle motion to be quasi-stationary. However, the disadvantage of this method is that the convergence to
equilibrium can be slow in the central zones of the channel, where the slope of
$c_l(z)$ is small. Therefore, we use this computational strategy only in the near-wall region.
\section{Results and discussion}
In this section, we present the lattice Boltzmann simulation results and
compare them with theoretical predictions.
\subsection{Neutrally buoyant particles}
\label{sec_nb}
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig02.eps}
\caption{(a) Dimensionless migration velocity computed as a function of
$z/H$ for particles of $a=4\delta$ (symbols). The location of a particle in contact with the wall, $z=a$, is shown by a vertical
dotted line. Dashed curve plots theoretical predictions for pointlike particles. Solid curve shows a polynomial fit of simulation data. (b) Dimensionless slip velocities computed for the same particles (symbols). Solid curve plots the
slip velocity in a linear shear flow near a single wall. Dashed line plots the Faxen correction. Vertical
dotted line indicates the location of $z=a$.}
\label{fig_vel}
\end{figure}
We start with neutrally buoyant particles and first calculate their migration, $V_m^{nb}$, and slip, $V_s^{nb}$, velocities as a function of $z/H$. Figure~\ref{fig_vel} plots simulation data obtained for particles of radius $a=4\delta$. Here we show only half of the channel since
the curves are antisymmetric with respect to the channel axis $z =H/2$. These results demonstrate that the migration velocity differs significantly from the velocity $c_{l0}\Re_p/(6\pi )$, where $\Re_p=a^2G_m/\nu$, predicted theoretically for pointlike particles~\citep{Vass:Cox76}. We also see that the equilibrium position, $V_m^{nb}=0$, of finite-size particles is shifted
towards the channel axis compared to that of pointlike particles, which is obviously due to their interactions with the
wall, resulting in a finite slip velocity. Indeed, Figure~\ref{fig_vel}(b) demonstrates that the computed $V_s^{nb}$ grows rapidly near the
wall, being close to the theoretical predictions for a linear shear flow near a
single wall~\citep{Goldman1967}. Unlike the theoretical predictions by~\citet{Goldman1967}, the computed
slip velocity does not vanish in the central part of the channel. \label{add1} Its value is roughly twice the Faxen correction $4U_m' a^2/(3H^2)$~\citep{happel1965}. Note that a similar difference has been obtained in simulations of the migration of finite-size particles based on
the Force Coupling Method~\citep{Loisel2015}. These deviations from the Faxen corrections are likely also caused by hydrodynamic interactions of particles with the wall in a parabolic flow.
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig03.eps}\\
\caption{Lift coefficient, $c_{l}$, for neutrally buoyant particles of $a=4\delta$ (circles) and $8\delta$ (squares) obtained from the migration velocity at $\mathrm{Re}=11.3$ (grey symbols)
and $22.6$ (white symbols). Solid and dash-dotted curves show predictions of Equation~(\ref{our_fit})
for $a=4\delta$ and
$8\delta$, dashed curve plots predictions for point-like particles. Vertical dotted lines show $z=a$. Black symbols show $c_{l}$ obtained for non-neutrally buoyant particles of $a=4\delta$ from the force balance at
$\mathrm{Re}=22.6$. The inset plots $c_{l}$ in the central part of the channel.
}
\label{fig_forcefit}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig04.eps}\\
\caption{Equilibrium positions for neutrally buoyant finite-sized
(gray circles) and point-like (white circle) particles. Dashed curve shows predictions of Equation~(\ref{our_fit}). }
\label{fig_LH}
\end{figure}
Figure~\ref{fig_forcefit} shows $c_l$ for particles of $a=4\delta$ and $8\delta$. The lift coefficient has been obtained from the migration velocity and from the force balance as specified above, and simulations have been made for two moderate Reynolds numbers, $\mathrm{Re}=11.3$ and $22.6$. As discussed above, if $\mathrm{Re} \leq 20$ a potential dependence of $c_l$ on $\mathrm{Re}$ can be ruled out \emph{a priori}, and this is indeed confirmed by our simulations. Therefore, below we provide a detailed comparison of our simulation data with asymptotic solutions obtained for $\mathrm{Re} \ll 1$, which should be applicable at finite moderate $\mathrm{Re}$. Figure~\ref{fig_forcefit} also includes theoretical predictions by \citet{Vass:Cox76} and curves calculated with Equation~(\ref{our_fit}). One can see that the simulation results show a strong discrepancy from the point-particle approximation, especially in the near-wall region, where hydrodynamic interactions are significant. This discrepancy increases with the size of the particles. We can, however, conclude that the predictions of our Equation~(\ref{our_fit}) are generally in good agreement with the simulation results. Thus, for smaller particles, of $a=4\delta$, Equation~(\ref{our_fit}) perfectly fits the simulation data in the near-wall region, where the theory for point-like particles fails. The simulation results slightly deviate from the predictions of Equation~(\ref{our_fit}) near the equilibrium positions and in the central part of the channel. For bigger particles, of $a=8\delta$, these deviations are more pronounced. We emphasize, however, that they are still much smaller than those from the point-particle theory of \citet{Vass:Cox76}.
To examine the significance of the particle size in more detail, we plot in Figure~\ref{fig_LH}(a) the computed equilibrium position, $z_{eq}/H$, as a function of $a/H$. We recall that the lift $c_l^{nb}(z)$ is antisymmetric with respect to the channel midplane, so that neutrally buoyant particles have a second equilibrium position at $H-z_{eq}$. In the point-particle approximation $z_{eq}/H\simeq 0.19$~\citep{Vass:Cox76}. We see that for finite-size particles $z_{eq}/H$ is always larger, and increases with the particle size. Note that the increase in $z_{eq}/H$ is nearly linear when $a/H \leq 0.1$. Also included in Figure~\ref{fig_LH} are the predictions of Equation~(\ref{our_fit}). One can conclude that the theory correctly predicts the trend observed in simulations, but slightly deviates from the simulation data. A possible explanation for this discrepancy could be the effects of parabolic flow (which are of the order of $O(a/H)$) on the slip velocity and the
stresslet~\cite[see][]{yahiaoui2010,hood2015}, which are neglected in our theory.
\subsection{Non-neutrally buoyant particles}
\label{nnb}
We now turn to the particle migration under both inertial lift and gravity forces.
\subsubsection{Horizontal channel}
Let us start with the investigation of particle migration in the experimentally most relevant case of a horizontal channel ($\alpha = 0^{\circ}$).
We first fix a weak gravity force, $F_g=0.694$, and compute the migration velocity of particles of radii $a=4\delta$ in a horizontal channel. Simulation results are plotted in Figure~\ref{fig_vhor1}. We see that $V_m(z)$ is no longer antisymmetric, as it has been in the case of neutrally buoyant particles. The migration velocity can be calculated as
\begin{equation}
V_{m}=V_m^{nb}-V^{St}/f_z,
\label{vm_vert}
\end{equation}
where we use a fit for $V_m^{nb}$ computed for neutrally-buoyant particles (see Figure~\ref{fig_vel}(a)). The agreement between simulation data and calculations using Equation~(\ref{vm_vert}) is excellent, which confirms that Equation~(\ref{our_fit}) remains valid in the case of slightly non-neutrally buoyant particles. We remark that due to gravity $V_m$ is shifted downwards relative to $V_m^{nb}(z)$ shown in Figure~\ref{fig_vel}. As a result, at this value of $F_g$ the second equilibrium position disappears.
We recall that this type of simulations allows one to find values of $c_l(z)$ in the vicinity of the wall by varying $F_g$. We have included these force-balance results in Figure~\ref{fig_forcefit} and can conclude that they agree very well with the data obtained by the other computational method for neutrally buoyant particles. This suggests again that the above results can be used at moderate Reynolds numbers, $\Re \leq 20$, since in this case the lift coefficient does not depend on $\Re$.
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig05.eps}\\
\caption{Migration velocity of non-neutrally buoyant particles in a
horizontal channel. Symbols show simulation data. Solid curve is calculation with Equation~(\ref{vm_vert}) using data for neutrally buoyant particles.}
\label{fig_vhor1}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig06.eps}\\
\caption{(a)~Equilibrium positions of non-neutrally buoyant particles ($a=4\delta$) in a horizontal channel; (b)~trajectories of the same particles released at $z_0=1.125 a$, computed at $F_{g}=0.268$ (squares), $0.804$ (diamonds), $1.340$ (triangles), $2.681$ (circles) and $9.383$ (diamonds). }
\label{fig_traj}
\end{figure}
Figure~\ref{fig_traj}(a) shows $z_{eq}/a$ computed at several $F_g$. It can be seen that as the gravity force becomes larger, the equilibrium positions decrease rapidly. This trend can be used to separate particles even when $\Delta \rho$ is very small. To illustrate this we now fix $\Re=11.3$, inject particles of $a=4 \delta$ close to the bottom of the channel, $z_0=1.125 a$, and simulate their trajectories at different $F_g$. In Figure~\ref{fig_traj}(b) we plot trajectories of particles, $z/a$, as a function of $x G_m a \nu $. The data show that if $F_g$ is large enough, particles sediment to the wall. However, when $F_g$ is relatively small, particles follow different and divergent trajectories, approaching their equilibrium positions. We stress that at given $F_{g}$ and $a/H$ the trajectories shown in Figure~\ref{fig_traj}(b) remain the same for any $\Re \leq 20$ (see Appendix~\ref{app_b}). Therefore, even in the case of a very small $\Delta \rho$, one can always tune the value of $\Re$ to induce the difference in $F_g$ required for separation. Suppose, for example, that we have to separate particles of $a=2~\mu$m and different $\Delta \rho$ in a channel of $H=40~\mu$m. If we choose $\Re=0.3$, the separation length $L=50 x G_m a \nu$ of Figure~\ref{fig_traj}(b) will be ca. $3.3$~cm. By evaluating $\Delta \rho$ with Equation~(\ref{eq:Fg}), we can immediately conclude that the trajectories plotted in Figure~\ref{fig_traj}(b) from top to bottom correspond to $\Delta \rho =0.007$, $0.022$, $0.037$ and $0.073$, which is indeed extremely small.
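The trajectory divergence exploited for separation can be reproduced with a toy model. The migration-velocity profile below is purely illustrative (not the fitted $V_m^{nb}$ of this paper), and a constant settling term stands in for $V^{St}/f_z$; the point is only that a small difference in $F_g$ sends particles released at the same height either to a focusing position or to the wall.

```python
def v_m(z, f_g):
    """Toy migration velocity: an invented lift profile with an equilibrium
    near z = 0.2 (z scaled by H), minus a settling term proportional to f_g."""
    v_nb = 0.5 * (0.2 - z) * z      # hypothetical neutrally buoyant profile
    return v_nb - 0.002 * f_g       # settling contribution (stand-in for V_St/f_z)

def final_position(z0, f_g, dt=0.1, steps=5000):
    """Explicit Euler integration of dz/dt = v_m(z); returns 0.0 on sedimentation."""
    z = z0
    for _ in range(steps):
        z += dt * v_m(z, f_g)
        if z <= 0.0:
            return 0.0              # particle has reached the wall
    return z

z_light = final_position(0.05, 1.0)   # focuses at its equilibrium position
z_heavy = final_position(0.05, 5.0)   # lift cannot balance gravity: sediments
```

With these made-up parameters the light particle converges to a focusing position below $z/H = 0.2$, while the heavy one sediments, which is the qualitative behaviour of Figure~\ref{fig_traj}(b).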
\subsubsection{Inclined channel}
When $F_g$ is large enough, it can also influence the slip velocity and, therefore, change the lift itself. This effect is especially important for vertical channels.
Note that due to the linearity of the Stokes equations, which govern a
disturbance flow at small particle Reynolds numbers, we can decouple the contributions of the particle-wall
interaction and of the gravity force to the slip velocity:
\begin{equation}
V_{s}=V_{s}^{nb}+\Delta V_{s}\sin\alpha,
\label{vslip_vert}
\end{equation}
where $\Delta V_{s}=V^{St}/f_x$ is the gravity-induced slip velocity for a vertical channel ($\alpha = 90^{\circ}$) and $f_{x}\left( z/H,a/H\right)$ is the correction to the drag for a particle
translating parallel to the channel walls.
The slip and the migration velocities of particles of $a=4 \delta$ in a vertical channel computed by using several values of $F_g$ are
shown in Figures~\ref{fig_vslip_vert}(a) and~\ref{fig_vert}(a). Note that the slip velocity, $V_s$, grows with $F_g$ since the Stokes velocity, $V^{St}$, is linearly proportional to $F_g$ (see Eq.(\ref{eq:St})). We now use simulation data presented in Figures~\ref{fig_vel}(a)
and~\ref{fig_vslip_vert}(a) to compute $\Delta V_{s}$, and then $\Delta V_{s}/F_g$. The results for $\Delta V_{s}/F_g$ are shown in
Figure~\ref{fig_vslip_vert}(b), and we see that all data collapse into a single curve, which confirms the validity of Equation~(\ref{vslip_vert}). Figure~\ref{fig_vslip_vert}(b) also shows that $\Delta V_{s}/F_g$ is nearly
constant in the central region of the channel, being smaller than $V^{St}$, but the deviations from $V^{St}$ grow when particles approach the wall. These results again illustrate that hydrodynamic interactions with the walls significantly affect motion of particles in the channel.
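Equation~(\ref{vslip_vert}) is a simple superposition and translates into one line of code; $f_x$ is the parallel-to-wall drag correction, taken below as an arbitrary value since its explicit form is not reproduced in this section. The linearity of the gravity-induced part in $V^{St}$ (and hence in $F_g$) is what produces the collapse of $\Delta V_s/F_g$ in Figure~\ref{fig_vslip_vert}(b).

```python
import math

def slip_velocity(v_s_nb, v_st, f_x, alpha_deg):
    """V_s = V_s^nb + (V_St / f_x) * sin(alpha), Eq. (vslip_vert)."""
    return v_s_nb + (v_st / f_x) * math.sin(math.radians(alpha_deg))

v_s_nb, f_x = 0.05, 1.3          # illustrative values only
# Horizontal channel: gravity adds nothing to the slip.
horizontal = slip_velocity(v_s_nb, 0.4, f_x, 0.0)
# Vertical channel: the full gravity-induced contribution V_St / f_x appears.
vertical = slip_velocity(v_s_nb, 0.4, f_x, 90.0)
# Doubling F_g doubles V_St and hence the gravity-induced part of the slip,
# so (V_s - V_s^nb) / F_g collapses onto a single curve.
twice = slip_velocity(v_s_nb, 0.8, f_x, 90.0)
```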
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig07.eps}\\
\caption{Slip velocities (a) and $\Delta V_{s}/F_g$ (b) computed for non-neutrally buoyant particles of $a=4\delta$ in a
vertical channel. The data sets correspond to $F_g=3.475$ (circles), $6.956$ (triangles), $10.44$ (squares) and $13.91$ (diamonds). Dashed line shows $V^{St}/F_g$, vertical dotted lines plot $z=a$.
}
\label{fig_vslip_vert}
\end{figure}
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig08.eps}\\
\caption{Migration velocities (a) and $\Delta V_{m}/F_g$ (b) computed for non-neutrally buoyant particles of $a=4\delta$ in a vertical channel. The data sets correspond to $F_g=3.475$ (circles), $6.956$ (triangles), $10.44$ (squares) and $13.91$ (diamonds). Vertical dotted lines plot $z=a$. Solid curve shows a polynomial fit of data.}
\label{fig_vert}
\end{figure}
We recall that the variation of the slip velocity caused by gravity is small for slightly
non-neutrally buoyant particles (see Figure~\ref{fig_vslip_vert}), so that Eq.(\ref{our_fit}) can be linearized with respect to $\Delta V_{s}$:
\begin{equation}
c_l\simeq c_{l}^{nb}+\Delta V_{s}\frac{\partial c_{l}(V_{s}^{nb})}{\partial V_{s}},
\label{eq_force3}
\end{equation}
where $c_{l}^{nb}=c_{l}(V_{s}^{nb})$ is the lift coefficient for neutrally
buoyant particles. By using Eq.(\ref{eq_cl}) we can then calculate the migration velocity
\begin{equation}
V_{m}=V_{m}^{nb}+\Delta V_{m}=V_{m}^{nb}+\Delta V_{s}\frac{\partial c_{l}(V_{s}^{nb})}{\partial V_{s}}\frac{\Re_p}{6\pi f_z}. \label{eq_vm}
\end{equation}
The computed migration velocity is shown in Figure~\ref{fig_vert}(a). We see that it decreases with $F_g$, and the equilibrium position shifts towards the wall, which is because $\Delta V_{s}/F_g$ is positive while $\partial c_{l}/\partial V_{s}$ is negative.
We can now evaluate $\Delta V_{m}/F_{g}$
by using simulation data presented in Figures~\ref{fig_vel} and~\ref{fig_vert}(a), and these
results are presented in Figure~\ref{fig_vert}(b). As one can see, the data collapse into a single curve, thus confirming the validity of our linearization, Equation~(\ref{eq_vm}).
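Equation~(\ref{eq_vm}) can be checked in the same spirit: with all inputs fixed except $\Delta V_s \propto F_g$, the gravity-induced part $\Delta V_m/F_g$ is independent of $F_g$, which is exactly the collapse seen in Figure~\ref{fig_vert}(b). The numerical values below are illustrative, not fitted.

```python
import math

def migration_velocity(v_m_nb, dv_s, dcl_dvs, re_p, f_z):
    """Eq. (V_m): V_m = V_m^nb + dV_s * (dc_l/dV_s) * Re_p / (6*pi*f_z)."""
    return v_m_nb + dv_s * dcl_dvs * re_p / (6.0 * math.pi * f_z)

# Illustrative inputs at some fixed position z (all values are made up).
v_m_nb, dcl_dvs, re_p, f_z_val = 0.004, -2.5, 0.8, 1.3
dvs_per_fg = 0.03                      # gravity-induced slip per unit F_g

collapse = []
for f_g in (3.475, 6.956, 10.44, 13.91):
    v_m = migration_velocity(v_m_nb, dvs_per_fg * f_g, dcl_dvs, re_p, f_z_val)
    collapse.append((v_m - v_m_nb) / f_g)   # = dV_m / F_g

# dV_s/F_g > 0 and dc_l/dV_s < 0, so V_m decreases with F_g and the
# equilibrium position shifts towards the wall.
```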
\begin{figure}
\centering
\includegraphics[height=5.6cm]{Asmolov_Fig09.eps}\\
\caption{Equilibrium positions $z_{eq}/H$ for $a=4\delta$ and $F_g=3.475$. Circles show simulation data. Solid curve plots results obtained using $V_m = 0$, where $V_m$ is calculated with Eq.~(\ref{eq_vma}). }
\label{fig_vvert2}
\end{figure}
Finally, we briefly discuss the case of an arbitrary inclination angle $\alpha$, where the $z$-component of the force can be written as
\begin{equation}
F_z= c_{l}(V_s)+F_g\cos\alpha.
\label{eq_force4}
\end{equation}
By using Eqs.(\ref{vslip_vert}), (\ref{eq_vm}) and (\ref{eq_force4}), we can express the migration velocity as
\begin{equation}
V_{m}=V_{m}^{nb}+\Delta V_{m}\sin\alpha+F_g\cos\alpha\frac{\Re_p}{6\pi f_z}, \label{eq_vma}
\end{equation}
where $\Delta V_{m}$ is evaluated for a vertical channel (see Figure~\ref{fig_vert}(b)). The equilibrium positions can be found by using the condition $V_m = 0$, where $V_m$ is calculated with Eq.~(\ref{eq_vma}). The results of these calculations made at a fixed $F_g=3.475$ and different $\alpha$ are plotted in Figure~\ref{fig_vvert2} together with direct simulation data, and one can see that they practically coincide. Our results show that in a vertical channel two stable equilibrium positions coexist. \label{add3} They are symmetric relative to the midplane and are located close to the walls. A third equilibrium position is located at the midplane, but it is unstable. A similar result has been obtained earlier~\citep{Vass:Cox76,Asmolov99}. If we slightly reduce $\alpha$, both stable equilibrium positions are shifted towards the lower wall due to gravity, as seen in Figure~\ref{fig_vvert2}. These two positions coexist only for $\alpha\geq 85.7^{\circ}$. On reducing $\alpha$ further
the upper equilibrium position disappears, and only the lower one remains. This indicates that the inertial lift can no longer balance gravity. We note that this remaining single equilibrium position becomes insensitive to the inclination angle when $\alpha\leq 60^\circ$.
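The qualitative picture, two stable near-wall equilibria in a vertical channel that merge into a single lower one below a critical inclination, can be reproduced with a toy profile. The lift shape and gravity scale below are invented for illustration (the toy critical angle does not match the $85.7^{\circ}$ found in the simulations), and the $\Delta V_m\sin\alpha$ correction is omitted for brevity.

```python
import math

def v_m(z, alpha_deg, g0=0.02):
    """Toy wall-normal velocity: an antisymmetric lift profile with stable
    zeros at z = 0.2 and 0.8, plus a settling term ~ F_g * cos(alpha)."""
    lift = -(z - 0.2) * (z - 0.5) * (z - 0.8)
    return lift - g0 * math.cos(math.radians(alpha_deg))

def count_stable_equilibria(alpha_deg, n=4000):
    """Count downward (+ -> -) zero crossings of v_m on a grid over (0, 1):
    these are the stable equilibrium positions."""
    zs = [0.01 + (0.98 * i) / n for i in range(n + 1)]
    vals = [v_m(z, alpha_deg) for z in zs]
    return sum(1 for a, b in zip(vals, vals[1:]) if a > 0.0 >= b)

vertical = count_stable_equilibria(90.0)   # two near-wall equilibria
tilted = count_stable_equilibria(70.0)     # still two, shifted downwards
shallow = count_stable_equilibria(50.0)    # upper equilibrium has vanished
```

Below the toy critical angle the settling term exceeds the maximum of the upper lift branch, so only the lower equilibrium survives, as in Figure~\ref{fig_vvert2}.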
\section{Concluding remarks}
In this paper we have studied the inertial migration of finite-size particles in a plane channel flow at moderate Reynolds numbers, $\Re\leq 20$. We have shown that the slip velocity, $V_s$, which is finite even for neutrally buoyant particles, contributes to the lift and determines the equilibrium positions in the channel. We have proposed an expression for the lift which generalizes theories, originally derived for several limiting cases, to finite-size particles in a channel flow. When the particle size tends to zero, our formula recovers the known expression of the point-particle approximation~\citep{Vass:Cox76}. For particles close to the walls we recover earlier predictions for finite-size particles in a linear shear flow~\citep{Cheruk:Mclau94}.
Our theoretical model, which is probably the simplest realistic model for the lift in a channel that one might contemplate, provides considerable insight into the inertial migration of finite-size particles in microchannels. In particular, it provides a simple explanation of the significant increase in the lift near walls. It also allows one to predict the number of equilibrium positions and determine their location in various situations.
To check the validity of our theory, we have employed lattice Boltzmann simulations. Generally, the simulation results have fully confirmed the theory, and have shown that many of our theoretical results have validity beyond initial restrictions of our model. Thus, it has been confirmed that predictions of our theory do not depend on Reynolds number when $\Re \leq 20$, that equilibrium positions of heavy particles in a horizontal channel can be accurately determined by using data for the neutrally buoyant case, and more.
Several of our theoretical predictions could be tested in experiments. In particular, we have shown that particles of a very small density contrast should follow divergent trajectories, so that channel flows with low Reynolds numbers, $\Re\sim1$, can be used to separate such particles. \label{add7}We stress that our theory should correctly predict the lift in near-wall regions also in pipes or square channels, and we expect that for these geometries it could be accurate even at $\Re\geq 20$, since the lengthscale of the disturbance flow would rather be the distance to the wall than the channel width. For this reason it would be possible to neglect the effects of other distant walls and of the parabolic flow on the lift. Note, however, that these effects should be taken into account in the central part of the channel.
Our model and computational approach can be extended to more complex situations, which include, for example, hydrophobic walls or particles allowing hydrodynamic slip at their surfaces~\citep{vinogradova1999,neto.c:2005}. In this case the hydrodynamic interaction in the near-wall region changes significantly~\citep{davis1994,vinogradova1996}, so that we expect that the lift force can be also dramatically modified. It would also be interesting to consider a case of an anisotropic superhydrophobic wall, which could induce secondary flows transverse to the direction of applied pressure gradient~\citep{feuillebois.f:2010b,vinogradova.oi:2011,harting.j:2012}. It has been recently shown~\citep{asmolov2015,pimponi.d:2014} that
particles translating in a superhydrophobic channel can be laterally displaced due to such a transverse flow. The use of this effect in combination
with the inertial migration should be a fruitful direction, which could allow one to separate particles of different size or density contrast
not only by their vertical but also by their transverse positions.
~\\
\begin{acknowledgements}
We thank Sebastian Schmieschek and Manuel Zellh\"ofer for their help on
technical aspects of the simulations. This research was partly supported by the
Russian Foundation for Basic Research (grant 15-01-03069).
\end{acknowledgements}
\label{sec::Introduction}
The CMS detector is a general-purpose detector at the LHC at CERN~\cite{CMS_exp}. The task of the CMS Tracker \cite{Tracker1} is to measure the trajectories of charged particles with high accuracy and reconstruction efficiency \cite{Tracker3}.
The pixel detector is constructed in hybrid pixel technology. The barrel
pixel detector (BPIX) consists of three cylindric layers of pixel modules oriented coaxially with respect to the beamline and centred around the
collision point. The forward pixel detector (FPIX) includes a pair of disks per side composed of pixel modules, which are situated orthogonally to the beamline on each side of BPIX.
The sensors of the pixel detector are segmented into a total of 66 million pixels, each with the size of 100\,$\mathrm{{\mu}m}$ $\times$ 150\,$\mathrm{{\mu}m}$. A 52\,$\times$\,80 pixel array is processed by one read-out chip.
\section{Track reconstruction}
The tracks of electrically charged particles are determined via a multi-stage process.
In the first step, traversing particles are detected through hit pixels, i.e., pixels which have collected deposited charge above a certain threshold. Subsequently, these pixels are combined into clusters. Clusters, characterised by their total charge and their 2D position, are referred to as hits. Hit reconstruction is based on the use of pre-computed, projected cluster shapes~\cite{Swartz}.
The hits are combined into tracks using a tracking algorithm. The trajectory-building algorithm extrapolates track candidates layer by layer, searching for hits compatible with the prediction obtained from the hits already assigned to the track. The algorithm stops if no hit is found on two consecutive layers along the projected trajectory.
\section{Detector calibration}
\label{sec::DetectorCalibration}
\subsection{Bad component database}
\label{sec::Bad component database}
The bad component database contains the list of permanently or temporarily bad modules. It enables the trajectory building algorithm to skip a tracking layer on which a hit is expected to be missing due to a faulty module.
Bad modules are determined by measuring their occupancy. A map is created, in which the faulty sensors are identified as those with significantly lower occupancy compared to the average of the modules surrounding them. Figure~\ref{fig::bad component} shows the map of the sensors where the white areas correspond to the bad modules. White horizontal and vertical stripes in the middle of the maps are due to the fact that the coordinate 0 does not designate any ladders or modules.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.15]{SiPixelQuality.png}
\caption{Occupancy map of the sensors in 2012. Layer 1 and Layer 2 in BPIX (upper plots) and the two disks in FPIX (lower plots). Color code shows the occupancy of the sensors. The coordinates x,y,z refer to the local coordinate system of the CMS \citep{CMS_exp}.}
\label{fig::bad component}
\end{figure}
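The selection described above, comparing each sensor's occupancy with the average of its neighbours, can be sketched as follows. The neighbourhood definition and the cut value are illustrative choices, not the ones used in the actual CMS calibration.

```python
def find_bad_modules(occupancy, threshold=0.5):
    """Flag modules whose occupancy is far below their neighbours' average.

    `occupancy` is a 2D list indexed by (ladder, module); a module is marked
    bad when its occupancy falls below `threshold` times the mean occupancy
    of its (up to 8) surrounding modules. Both the neighbourhood and the
    threshold are illustrative assumptions.
    """
    rows, cols = len(occupancy), len(occupancy[0])
    bad = []
    for i in range(rows):
        for j in range(cols):
            neigh = [occupancy[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))
                     if (a, b) != (i, j)]
            avg = sum(neigh) / len(neigh)
            if avg > 0.0 and occupancy[i][j] < threshold * avg:
                bad.append((i, j))
    return bad
```

Applied to a map like Figure~\ref{fig::bad component}, such a scan returns the coordinates of the white (dead or inefficient) regions.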
\subsection{Single Event Upsets}
Temporary faults in modules are caused by ionising radiation that can flip the memory state of the read-out chip.
This effect is called Single Event Upset (SEU).
The bad component list is updated every 23 seconds in order to reflect not only permanently bad modules, but also temporarily faulty ones which undergo a SEU.
The SEUs are fixed by reprogramming the read-out chips. Using an automatic online monitoring and recovery system, we were able to recover the efficiency loss caused by SEUs, which amounted to about 0.05\% per hour.
\section{Efficiency}
\label{sec::Efficiency}
Hit efficiency is defined as the detected fraction of all expected clusters in a fiducial region. After excluding components which are listed in the bad component database, the efficiency was measured to be above 99.5\%, except for the first layer (Fig.~\ref{fig::Eff}).
The efficiency of the first layer is lower because hit information is lost during the trigger latency period due to buffer overflows in the read-out chips.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{Eff.png}
\caption{Average hit finding efficiency of the pixel detector in 2012.}
\label{fig::Eff}
\end{figure}
\section{Resolution and cluster size}
\label{sec::Resolution and cluster size}
\subsection{Hit resolution}
The resolution of the cluster position measurement determines the accuracy of the track reconstruction. The resolution is measured from the hit residuals obtained in the track fitting: the tracks are re-fitted without the hit in the examined layer.
The distribution of the differences between the hit and the
interpolated track positions is shown in Fig.~\ref{fig::BPIX_res}. A Student-t function is fitted to the distribution. The resolution is derived from the width of the function using a method described in Ref.~\cite{Burgmeier}.
Hits can be reconstructed with an accuracy of about 10\,$\mathrm{{\mu}m}$ in Layer 2 in the azimuthal direction.
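Ref.~\cite{Burgmeier} extracts the intrinsic hit resolution from the fitted residual width. A common ingredient of such methods, assumed here for illustration and not a substitute for the actual procedure, is that the residual width combines the hit error and the track-interpolation error in quadrature, so the latter must be subtracted.

```python
import math

def hit_resolution(residual_width, track_uncertainty):
    """Quadrature subtraction: sigma_res^2 = sigma_hit^2 + sigma_track^2,
    assuming independent hit and track-interpolation errors (illustrative)."""
    if residual_width <= track_uncertainty:
        raise ValueError("residual width must exceed the track uncertainty")
    return math.sqrt(residual_width**2 - track_uncertainty**2)

# With a 13 um residual width and an 8 um interpolation uncertainty
# (made-up numbers), the intrinsic resolution would be close to 10 um.
sigma_hit = hit_resolution(13.0, 8.0)
```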
\begin{figure}
\centering
\includegraphics[scale=0.25]{BPIX_res.png}
\caption{The residual difference between the hit position and the
interpolated track, and the Student-t function fit, in Layer 2 in the transverse plane.}
\label{fig::BPIX_res}
\end{figure}
\subsection{Lorentz angle measurement}
The charge carriers, created by the traversing particles inside the silicon bulk, are deflected by the Lorentz force due to the 3.8~T magnetic field of the CMS magnet.
This drift is characterised by the angle of deflection, which is known as the Lorentz angle. It is defined as the angle between the direction perpendicular to the surface of the sensor and the path of the electrons inside the sensor. Detailed information can be found in Ref.~\cite{Henrich}.
The Lorentz angle was determined as a function of integrated luminosity for 2012 data (Fig.~\ref{fig::LA}). It increases with integrated luminosity as a result of irradiation effects.
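The geometric effect of the Lorentz angle can be illustrated with a one-line relation: a carrier drifting through a bulk of thickness $d$ is displaced laterally by $d\tan\theta_L$. The thickness and angle values below are illustrative only, not CMS specifications; the point is that a $\theta_L$ growing with irradiation (Fig.~\ref{fig::LA}) widens the lateral charge spread.

```python
import math

def lorentz_displacement(thickness_um, theta_l_deg):
    """Lateral displacement of a charge carrier drifting across the full
    sensor thickness: d * tan(theta_L)."""
    return thickness_um * math.tan(math.radians(theta_l_deg))

d = 285.0                                   # illustrative thickness, um
before = lorentz_displacement(d, 22.0)      # early in the run (made-up angle)
after = lorentz_displacement(d, 24.0)       # after irradiation (made-up angle)
```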
\begin{figure}[!h]
\centering
\includegraphics[scale=0.25]{LA.png}
\caption{Lorentz angle as a function of integrated luminosity in 2012.}
\label{fig::LA}
\end{figure}
\subsection{Cluster size}
The average number of pixels in a cluster was also monitored as a function of integrated luminosity (Fig.~\ref{fig::cluster_size}).
The sudden enhancements at $6.5 \mathrm{~fb^{-1}}$ and $15 \mathrm{~fb^{-1}}$ correspond to threshold readjustments during LHC technical stops.
\begin{figure}
\centering
\includegraphics[scale=0.25]{cluster_size.png}
\caption{Cluster size as a function of integrated luminosity.}
\label{fig::cluster_size}
\end{figure}
\section{Conclusion}
\label{sec::Conclusion}
The pixel detector worked reliably with high efficiency in 2012. Its efficiency was preserved by the online monitoring and recovery system. Its resolution is of the order of 10~$\mathrm{{\mu}m}$ in the transverse plane.
\nocite{*}
\label{sec:intro}
The encapsulation of a set of functionalities as services of a software module offers
strong advantages in software development, since it promotes the reuse of code and
eases maintenance efforts.
If a programmer is unacquainted with the implementation details of a particular
set of services, she may fail to identify correlations that exist across those services,
such as data and code dependencies, leading to an inappropriate usage.
This is particularly relevant in a concurrent setting, where it is hard to account
for all the possible interleavings between threads and the effects of these
interleaved calls to the module's internal state.
One of the requirements for the correct behavior of a module is to respect its
\emph{protocol}, which defines the legal sequences of invocations to its methods.
For instance, a module that offers an abstraction to deal with files
typically will demand that the programmer start by calling the method
\lstinline{open()}, followed by an arbitrary number of \lstinline{read()} or
\lstinline{write()} operations, and concluding with a call to \lstinline{close()}.
A program that does not follow this protocol is incorrect and should be fixed.
A way to enforce a program to conform to such well defined behaviors is to use the design
by contract methodology~\cite{Meyer1992}, and specifying contracts that regulate
the module usage protocol.
In this setting, the contract not only serves as useful documentation, but may
also be automatically verified, ensuring the client's program obeys the module's
protocol~\cite{Cheon2007,Hurlin2009}.
The development of concurrent programs brings new challenges on how to define
the protocol of a module.
Not only is it important to respect the module's protocol, but it is also
necessary to guarantee the atomic execution of sequences of calls that are
susceptible of causing atomicity violations.
These atomicity violations are possible, even when the individual methods in the module are
protected by some concurrency control mechanism.
Figure~\ref{list:atomviolationexample} shows part of a program that schedules
tasks.
The \lstinline{schedule()} method gets a task, verifies if it is ready to run,
and executes it if so.
This program contains a potential atomicity violation since the method may
execute a task that is not marked as ready. This may happen when another
thread concurrently schedules the same task, despite the fact the methods
of \lstinline{Task} are atomic.
In this case the \lstinline{isReady()} and \lstinline{run()} methods should
be executed in the same atomic context to avoid atomicity violations.
Atomicity violations are one of the most common sources of bugs in concurrent
programming~\cite{Lu2008} and are particularly likely to occur when
composing calls to a module, as the developer may not be aware of the
implementation and internal state of the module.
\begin{figure}[t]
\centering
\begin{minipage}{0.68\linewidth}
\begin{lstlisting}[frame=trbl]
void schedule() {
Task t=taskQueue.next();
if (t.isReady())
t.run();
}
\end{lstlisting}
\end{minipage}\hfill
%
\caption{Program with an atomicity violation.}
\label{list:atomviolationexample}
\end{figure}
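To make the required fix concrete, the check-then-act sequence must run in one atomic context. Below is a minimal sketch in Python (the paper's examples are Java-like; the class layout and lock names here are our own illustration, not the paper's API):

```python
import threading

class Task:
    """A task whose individual methods are atomic, as in the example."""
    def __init__(self):
        self._lock = threading.Lock()
        self._ready = True
        self.runs = 0

    def is_ready(self):
        with self._lock:          # each method is individually atomic
            return self._ready

    def run(self):
        with self._lock:
            self._ready = False
            self.runs += 1

# Wrapping both calls in one critical section removes the window in which
# another thread can run (and un-ready) the same task between the check
# and the act.
_schedule_lock = threading.Lock()

def schedule(task):
    with _schedule_lock:
        if task.is_ready():
            task.run()
```

Without the outer lock, two threads may both observe the task as ready and run it twice, even though every individual method of the task is atomic; that is precisely the composition problem the contract is meant to rule out.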
In this paper we propose to extend module usage protocols with
a specification of the sequences of calls that should be executed atomically.
We will also present an efficient static analysis to verify these protocols.
The contributions of this paper can be summarized as:
\begin{enumerate}
\item A proposal of \emph{contracts for concurrency} addressing the issue of atomicity violations;
\item A static analysis methodology to extract the behavior of
a program with respect to the sequence of calls it may execute;
\item A static analysis to verify if a program conforms to a module's
contract, hence that the module's correlated services are correctly invoked
in the scope of an atomic region.
\end{enumerate}
The remainder of this paper is organized as follows.
In Section~\ref{sec:contract} we provide a specification and the semantics for
the contract. Section~\ref{sec:method} contains the general methodology of the
analysis.
Section~\ref{sec:programpattern} presents the phase of the analysis that extracts
the behavior of the client program while Section~\ref{sec:verification} shows how to
verify a contract based on the extracted information.
Section~\ref{sec:validation} follows with the presentation and discussion of the
results of our experimental validation.
The related work is presented in Section~\ref{sec:relwork}, and we conclude with
the final remarks in Section~\ref{sec:conclusion}.
\section{Contract Specification}
\label{sec:contract}
The contract of a module must specify which sequences of calls to its non-private
methods must be executed atomically, so as to avoid atomicity violations in the
module's client program.
In the spirit of the \emph{programming by contract} methodology, we assume that the
definition of the contract, including the identification of the sequences of
methods that should be executed atomically, is the responsibility of the module's
developer.
\\\*
\line(1,0){250}\vspace{-3mm}
\begin{definition}[Contract]
\label{def:contract}
The contract of a module with public methods $m_1, \cdots\!, m_n$ is of the
form,
\begin{equation*}
\begin{aligned}
1. & \; e_1 \\
2. & \; e_2 \\
& \vdots \\
k. & \; e_k.
\end{aligned}
\end{equation*}
\noindent where each clause~$i$ is described by $e_i$, a star-free regular expression over the
alphabet $\{ m_1, \cdots\!, m_n \}$.
Star-free regular expressions are regular expressions without the Kleene star,
using only the alternative ($|$) and the (implicit) concatenation operators.
\end{definition}
\vspace{-5mm}\hspace{-3mm}\*\line(1,0){250}
\vspace{1.5mm}
Each sequence defined in $e_i$ must be executed atomically by the program
using the module; otherwise there is a violation of the contract.
The contract specifies a finite number of sequences of calls, since it is the
union of star-free languages. Therefore, it is possible to have the same expressivity
by explicitly enumerating all sequences of calls, i.e., without using the
alternative operator.
We chose to offer the alternative operator so the programmer can group similar
scenarios under the same clause.
Our verification analysis assumes the contract defines a finite number of
call sequences.
\parag{Example}
Consider the array implementation offered by \emph{Java}
standard library, \lstinline{java.util.ArrayList}.
For simplicity we will only consider the methods
\lstinline{add(obj)}, \lstinline{contains(obj)}, \lstinline{indexOf(obj)},
\lstinline{get(idx)}, \lstinline{set(idx, obj)}, \lstinline{remove(idx)}, and
\lstinline{size()}.
The following contract defines some of the clauses for this class.
\begin{equation*}
\label{eq:speceg1}
\begin{aligned}
1. & \; \mmeth{contains} \; \mmeth{indexOf} \\
2. & \; \mmeth{indexOf} \; (\mmeth{remove} \; | \; \mmeth{set} \; | \; \mmeth{get})\\
3. & \; \mmeth{size} \; (\mmeth{remove} \;
| \; \mmeth{set} \;
| \; \mmeth{get}) \\
4. & \; \mmeth{add} \; \mmeth{indexOf}. \\
\end{aligned}
\end{equation*}
Clause~$1$ of \lstinline{ArrayList}'s contract states that the
execution of \lstinline{contains()} followed by \lstinline{indexOf()} should
be atomic; otherwise the client program may confirm the
existence of an object in the array, but fail to obtain its index due to a
concurrent modification.
Clause~$2$ represents a similar scenario where, in addition, the position of
the object is modified.
In clause~$3$ we deal with the common situation where the program verifies if a
given index is valid before accessing the array.
To make sure the size obtained by \lstinline{size()} is still valid when
accessing the array, these calls should be executed atomically.
Clause~$4$ represents scenarios where an object is added to the array and
then the program tries to obtain information about that object by querying the
array.
Another relevant clause is
$\mmeth{contains} \; \mmeth{indexOf} \; (\mmeth{set} \; | \; \mmeth{remove}),$
but the contract's semantics already enforces the atomicity of this clause as a
consequence of the composition of clauses~$1$~and~$2$, as they overlap in the
\lstinline{indexOf()} method.
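Because each clause is star-free, it denotes a finite set of call sequences, which the analysis can enumerate explicitly. A small sketch of this expansion (the expression encoding is our own, not part of the contract language):

```python
def words(e):
    """Expand a star-free regular expression into its finite set of words.

    An expression is a method name (str), ('cat', e1, ..., ek) for
    concatenation, or ('alt', e1, ..., ek) for alternatives.
    """
    if isinstance(e, str):
        return {(e,)}
    op, *args = e
    if op == 'alt':                       # union of the alternatives
        return set().union(*(words(a) for a in args))
    if op == 'cat':                       # pairwise concatenation
        out = {()}
        for a in args:
            out = {u + v for u in out for v in words(a)}
        return out
    raise ValueError(f"unknown operator: {op}")

# Clause 2 of the ArrayList contract: indexOf (remove | set | get)
clause2 = ('cat', 'indexOf', ('alt', 'remove', 'set', 'get'))
```

Expanding \lstinline{clause2} yields the three sequences the verifier must check, which illustrates why the alternative operator adds convenience but no expressive power.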
\section{Methodology}
\label{sec:method}
The proposed analysis verifies statically if a client program complies with the
contract of a given module, as defined in Section~\ref{sec:contract}.
This is achieved by verifying that the threads launched by the program always execute
atomically the sequences of calls defined by the contract.
\\
This analysis has the following phases:
\begin{enumerate}
\item Determine the entry methods of each thread launched by the program.
\item Determine which of the program's methods are atomically executed.
We say that a method is \emph{atomically executed} if it is
atomic\footnote{An atomic method is a method that explicitly applies a
concurrency control mechanism to enforce atomicity.}
or if the method is always called by atomically executed methods.
\item Extract the behavior of each of the program's threads with respect to the
usage of the module under analysis.
\item For each thread, verify that its usage of the module respects the contract
as defined in Section~\ref{sec:contract}.
\end{enumerate}
In Section~\ref{sec:programpattern} we introduce the algorithm that extracts the
program's behavior with respect to the module's usage.
Section~\ref{sec:verification} defines the methodology that verifies whether the
extracted behavior complies to the contract.
\section{Extracting the Behavior of a Program}
\label{sec:programpattern}
The behavior of the program with respect to the module usage can be seen as the
individual behavior of any thread the program may launch.
The usage of a module by a thread $t$ of a program can be described by a language
$L$ over the alphabet $\{ m_1, \cdots\!, m_n \}$, the public methods of the module.
A word $m_1 \cdots \, m_n \in L$ if some execution of $t$ may run the sequence of
calls $m_1, \cdots\!, m_n$ to the module.
To extract the usage of a module by a program, our analysis generates a context-free
grammar that represents the language $L$ of a thread~$t$ of the client program,
which is represented by its control~flow~graph~(CFG)~\cite{Allen1970}.
The CFG of the thread~$t$ represents every possible path the control flow
may take during its execution.
In other words, the analysis generates a grammar $G_t$ such that, if there is an
execution path of $t$ that runs the sequence of calls $m_1, \cdots\!, m_n$, then
$m_1 \cdots \, m_n \in \mathcal{L}(G_t)$.
(The language represented by a grammar $G$ is denoted by $\mathcal{L}(G)$.)
A context-free grammar is especially suitable to capture the structure of the CFG,
since it expresses the call relations between methods, which cannot be captured
by a weaker class of languages such as regular languages.
The first example below shows how this is done.
Another advantage of using context-free grammars
(as opposed to another static analysis technique) is that we can use efficient
algorithms for parsing to explore the language it represents.
\*
\hspace{-6mm}\line(1,0){250}\vspace{-1.5mm}
\begin{definition}[Program's Thread Behavior Grammar]
\label{def:ppgrammar}
%
The grammar $G_t=(N,\Sigma,P,S)$ is built from the CFG of the client's program
thread~$t$.
We define,
\begin{itemize}
\item $N$, the set of non-terminals, as the set of nodes of the CFG.
Additionally we add non-terminals that represent each method of the
client's program (represented in calligraphic font);
\item $\Sigma$, the set of terminals,
as the set of identifiers of the public methods of the module under analysis
(represented in bold);
\item $P$, the set of productions, as described below, by
rules~\ref{eq:grammar0}--\ref{eq:grammar4};
\item $S$, the grammar initial symbol, as the non-terminal that represents
the entry method of the thread~$t$.
\end{itemize}
For each method \lstinline{f()} that thread~$t$ may run we add to $P$ the
productions respecting the rules~\ref{eq:grammar0}--\ref{eq:grammar4}.
Method \lstinline{f()} is represented by $\mathcal{F}$.
A CFG node is denoted by $\alpha : \llbracket v \rrbracket$, where $\alpha$ is
the non-terminal that represents the node and $v$ its type. We distinguish the
following types of nodes:
\emph{entry}, the entry node of method $\mathcal{F}$;
\emph{mod.h()}, a call to method \lstinline{h()} of the module \emph{mod} under
analysis;
\emph{g()}, a call to another method \lstinline{g()} of the client program;
and \emph{return}, the return point of method $\mathcal{F}$.
The $succ : N \rightarrow \mathcal{P}(N)$ function is used to obtain the successors
of a given CFG node.
\vspace{-2mm}
\begin{align}
\text{if } \alpha : \dsb{entry}, & \quad \{ \mathcal{F} \rightarrow \alpha \}
\cup \{ \alpha \rightarrow \beta \; | \; \beta \in succ(\alpha) \} \subset P
\label{eq:grammar0}\\
%
\text{if } \alpha : \dsb{mod.h()}, & \quad
\{ \alpha \rightarrow {\bf h } \, \beta \; | \; \beta \in succ(\alpha) \} \subset P
\label{eq:grammar1}\\
%
\text{if } \alpha : \dsb{g()}, & \quad
\{ \alpha \rightarrow \mathcal{G} \, \beta \; | \; \beta \in succ(\alpha) \} \subset P \nonumber\\
& \qquad \qquad \quad \text{where $\mathcal{G}$ represents \texttt{g()}}
\label{eq:grammar2}\\
%
\text{if } \alpha : \dsb{return}, & \quad \{ \alpha \rightarrow \epsilon \} \subset P
\label{eq:grammar3}\\
%
\text{if } \alpha : \dsb{otherwise}, & \quad \{ \alpha \rightarrow \beta \; | \;
\beta \in succ(\alpha) \} \subset P
\label{eq:grammar4}
\end{align}
No more productions belong to $P$.
\end{definition}
\vspace{-4mm}\hspace{-5mm}\*\line(1,0){250}
\vspace{1.5mm}
Rules~\ref{eq:grammar0}--\ref{eq:grammar4} capture the structure of the CFG
in the form of a context-free grammar.
Intuitively this grammar represents the control flow of the thread~$t$ of the
program, ignoring everything not related with the module's usage.
For example, if ${ \bf f } \; { \bf g } \in \mathcal{L}(G_t)$ then the thread~$t$
may invoke the module's method \lstinline{f()}, followed by \lstinline{g()}.
Rule~\ref{eq:grammar0} adds a production that relates the non-terminal
$\mathcal{F}$, representing method \lstinline{f()}, to the entry node of the
CFG of \lstinline{f()}.
Calls to the module under analysis are recorded in the grammar by the
Rule~\ref{eq:grammar1}.
Rule~\ref{eq:grammar2} handles calls to another method \lstinline{g()}
of the client program (method \lstinline{g()} will have its non-terminal
$\mathcal{G}$ added by Rule~\ref{eq:grammar0}).
The return point of a method adds an $\epsilon$ production to the grammar
(Rule~\ref{eq:grammar3}).
All other types of CFG nodes are handled uniformly, preserving the CFG structure
by making them reducible to the successor non-terminals (Rule~\ref{eq:grammar4}).
Notice that only the client program code is analyzed.
The $G_t$ grammar may be ambiguous, i.e., offer several different derivations
for the same word. Each ambiguity in the parsing of a sequence of calls
$m_1 \cdots \, m_n \in \mathcal{L}(G_t)$ represents different contexts
where these calls can be executed by thread~$t$.
Therefore we need to allow such ambiguities so that the verification
of the contract can cover all the occurrences of the sequences of calls in the
client program.
The language $\mathcal{L}(G_t)$ contains every sequence of calls the program
may execute, i.e., it produces no false negatives. However, $\mathcal{L}(G_t)$ may
contain sequences of calls the program never executes
(for instance, calls performed inside a block of code that is never executed),
which may lead to false positives.
\parag{Examples}
\begin{figure*}[t]
\centering
\begin{minipage}{0.175\linewidth}
\centering
\begin{lstlisting}[frame=trbl]
void f() {
m.a();
if (cond)
g();
m.b();
}
void g() {
m.c();
if (cond) {
g();
m.d();
f();
}
}
\end{lstlisting}
\end{minipage}
%
\hspace{8mm}
%
\begin{minipage}{.45\linewidth}
\begin{minipage}{\linewidth}
\hspace{6mm} \lstinline{f()} \hspace{26mm} \lstinline{g()}
\end{minipage}
\hspace{-1mm}
\begin{minipage}{0.20\linewidth}
\centering
\begin{tikzpicture}[scale=1]
\node [cfgnode, name=A, node distance=10mm] {entry};
\node [cfgnode, name=B, below of=A] {m.a()};
\node [cfgnode, name=C, below of=B] {cond};
\node [cfgnode, name=D, below of=C] {g()};
\node [cfgnode, name=E, below of=D] {m.b()};
\node [cfgnode, name=F, below of=E] {return};
\node [above of=A, left of=A, node distance=5mm] {A};
\node [above of=B, left of=B, node distance=5mm] {B};
\node [above of=C, left of=C, node distance=5mm] {C};
\node [above of=D, left of=D, node distance=5mm] {D};
\node [above of=E, left of=E, node distance=5mm] {E};
\node [above of=F, left of=F, node distance=5mm] {F};
\draw [arrowthicktip] (A) -- (B);
\draw [arrowthicktip] (B) -- (C);
\draw [arrowthicktip] (C) -- (D);
\draw [arrowthicktip] (D) -- (E);
\draw [arrowthicktip] (E) -- (F);
\draw [arrowthicktip] (C) -- +(1.5,0) |- (E);
\end{tikzpicture}
\end{minipage}
%
\hspace{20mm}
%
\begin{minipage}{0.20\linewidth}
\begin{tikzpicture}[scale=1]
\node [cfgnode, name=G] {entry};
\node [cfgnode, name=H, below of=G] {m.c()};
\node [cfgnode, name=I, below of=H] {cond};
\node [cfgnode, name=J, below of=I] {g()};
\node [cfgnode, name=K, below of=J] {m.d()};
\node [cfgnode, name=L, below of=K] {f()};
\node [cfgnode, name=M, below of=L] {return};
\node [above of=G, left of=G, node distance=5mm] {G};
\node [above of=H, left of=H, node distance=5mm] {H};
\node [above of=I, left of=I, node distance=5mm] {I};
\node [above of=J, left of=J, node distance=5mm] {J};
\node [above of=K, left of=K, node distance=5mm] {K};
\node [above of=L, left of=L, node distance=5mm] {L};
\node [above of=M, left of=M, node distance=5mm] {M};
\draw [arrowthicktip] (G) -- (H);
\draw [arrowthicktip] (H) -- (I);
\draw [arrowthicktip] (I) -- (J);
\draw [arrowthicktip] (J) -- (K);
\draw [arrowthicktip] (K) -- (L);
\draw [arrowthicktip] (L) -- (M);
\draw [arrowthicktip] (I) -- +(1.5,0) |- (M);
\end{tikzpicture}
\end{minipage}
\hspace{8mm}
\end{minipage}
%
\caption{Program with recursive calls using the module \lstinline{m} (left)
and respective CFG (right).}
\label{fig:ppprogexample}
\end{figure*}
Figure~\ref{fig:ppprogexample}~(left) shows a program that consists of two
methods that call each other mutually.
Method~\lstinline{f()} is the entry point of the thread and
the module under analysis is represented by object~\lstinline{m}.
The control flow graphs of these methods are shown in
Figure~\ref{fig:ppprogexample}~(right).
According to Definition~\ref{def:ppgrammar}, we construct the grammar
$G_1=(N_1,\Sigma_1,P_1,S_1)$, where
\begin{align*}
N_1 & = \{ \mathcal{F}, \mathcal{G}, A, B, C, D, E, F, G, H, I, J, K, L, M \}, \\
\Sigma_1 & = \{ { \bf a, b, c, d } \}, \\
S_1 & = \mathcal{F},
\end{align*}
and $P_1$ has the following productions:
\begin{equation*}
\begin{aligned}
\mathcal{F} & \rightarrow A \quad\qquad\qquad & \mathcal{G} & \rightarrow G \\
A & \rightarrow B & H & \rightarrow {\bf c } \, I \\
B & \rightarrow {\bf a } \, C & I & \rightarrow J \; | \; M \\
C & \rightarrow D \; | \; E & J & \rightarrow \mathcal{G} \, K \\
D & \rightarrow \mathcal{G} \, E & K & \rightarrow {\bf d } \, L \\
E & \rightarrow {\bf b } \, F & L & \rightarrow \mathcal{F} \, M \\
F & \rightarrow \epsilon & M & \rightarrow \epsilon. \\
\end{aligned}
\end{equation*}
\begin{figure*}[t]
\centering
\begin{minipage}{0.225\textwidth}
\begin{lstlisting}[frame=trbl]
void f() {
while (m.a()) {
if (cond)
m.b();
else
m.c();
count++;
}
m.d();
}
\end{lstlisting}
\end{minipage}
%
\hspace{10mm}
%
\begin{minipage}{0.45\textwidth}
\begin{tikzpicture}[scale=1]
\node [cfgnode, name=A] {entry};
\node [cfgnode, name=B, below of=A] {m.a()};
\node [cfgnode, name=C, below of=B] {cond};
\node [cfgnode, name=D, below of=C, left of=C] {m.b()};
\node [cfgnode, name=E, below of=C, right of=C] {m.c()};
\node [cfgnode, name=F, below of=C, node distance=22mm] {count++};
\node [cfgnode, name=G, below of=F] {m.d()};
\node [cfgnode, name=H, below of=G] {return};
\node [above of=A, left of=A, node distance=5mm] {A};
\node [above of=B, left of=B, node distance=5mm] {B};
\node [above of=C, left of=C, node distance=5mm] {C};
\node [above of=D, left of=D, node distance=5mm] {D};
\node [above of=E, right of=E, node distance=5mm] {E};
\node [right of=F, node distance=13mm] {F};
\node [above of=G, left of=G, node distance=5mm] {G};
\node [above of=H, left of=H, node distance=5mm] {H};
\draw [arrowthicktip] (A) -- (B);
\draw [arrowthicktip] (B) -- (C);
\draw [arrowthicktip] (C) -- (D);
\draw [arrowthicktip] (C) -- (E);
\draw [arrowthicktip] (D) -- (F);
\draw [arrowthicktip] (E) -- (F);
\draw [arrowthicktip] (F) -- +(-2.5,0) |- (B);
\draw [arrowthicktip] (B) -- +(2.5,0) |- (G);
\draw [arrowthicktip] (G) -- (H);
\end{tikzpicture}
\end{minipage}
%
\caption{Program using the module \lstinline{m} (left) and respective CFG (right).}
\label{fig:ppprogexample2}
\end{figure*}
A second example, shown in Figure~\ref{fig:ppprogexample2}, exemplifies how
Definition~\ref{def:ppgrammar} handles control flow with loops.
In this example we have a single function \lstinline{f()}, which is assumed to be
the entry point of the thread. We have $G_2=(N_2,\Sigma_2,P_2,S_2)$, with
\begin{align*}
N_2 & = \{ \mathcal{F}, A, B, C, D, E, F, G, H\}, \\
\Sigma_2 & = \{ { \bf a, b, c, d }\}, \\
S_2 & = \mathcal{F}.
\end{align*}
The set of productions $P_2$ is,
%
\begin{equation*}
\begin{aligned}
\mathcal{F} & \rightarrow A \quad\qquad\qquad & E & \rightarrow { \bf c } \, F \\
A & \rightarrow B & F & \rightarrow B \\
B & \rightarrow { \bf a } \, C \; | \; { \bf a } \, G
& G & \rightarrow { \bf d } \, H \\
C & \rightarrow D \; | \; E
& H & \rightarrow \epsilon\\
D & \rightarrow { \bf b } \, F.
\end{aligned}
\end{equation*}
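As a sanity check of rules~\ref{eq:grammar0}--\ref{eq:grammar4}, the translation from CFG to productions is mechanical. The sketch below (a Python encoding of CFG nodes of our own design, not part of the paper's formalism) rebuilds $P_2$ from the CFG of this second example:

```python
def make_productions(cfg):
    """Build the production set of Definition 2 from a CFG.

    cfg maps a node name to (kind, payload, successors).  kind is one of
    'entry' (payload = the method's non-terminal), 'mod_call' (payload =
    the module method, a terminal), 'call' (payload = the callee's
    non-terminal), 'return', or 'other'.  A production is a pair
    (lhs, rhs) with rhs a tuple of symbols; () encodes epsilon.
    """
    P = set()
    for node, (kind, payload, succ) in cfg.items():
        if kind == 'entry':                          # rule 1
            P.add((payload, (node,)))
            P.update((node, (s,)) for s in succ)
        elif kind in ('mod_call', 'call'):           # rules 2 and 3
            P.update((node, (payload, s)) for s in succ)
        elif kind == 'return':                       # rule 4
            P.add((node, ()))
        else:                                        # rule 5
            P.update((node, (s,)) for s in succ)
    return P

# CFG of f() in the second example; 'Fm' stands for the non-terminal F.
cfg2 = {
    'A': ('entry', 'Fm', ['B']),
    'B': ('mod_call', 'a', ['C', 'G']),   # while condition: loop or exit
    'C': ('other', None, ['D', 'E']),     # if (cond)
    'D': ('mod_call', 'b', ['F']),
    'E': ('mod_call', 'c', ['F']),
    'F': ('other', None, ['B']),          # count++, back to the loop head
    'G': ('mod_call', 'd', ['H']),
    'H': ('return', None, []),
}
```

Running this over \lstinline{cfg2} reproduces exactly the eleven productions of $P_2$ listed above, including the loop $F \rightarrow B$ induced by the back edge of the \lstinline{while}.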
\section{Contract Verification}
\label{sec:verification}
The verification of a contract must ensure that all sequences of calls specified
by the contract are executed atomically by all threads the client
program may launch.
Since there is a finite number of call sequences defined by the contract we can
verify each of these sequences to check if the contract is respected.
\begin{algorithm}[t]
\caption{Contract verification algorithm.}
\label{algo:verification}
\begin{algorithmic}[1]
\REQUIRE{$P$, client's program;
\\\hspace{9.95mm}$C$, module contract (set of allowed sequences).}
\FOR{$t \in \text{threads}(P)$}
\STATE{$G_t \gets \text{make\_grammar}(t)$}
\STATE{$G_t' \gets \text{subword\_grammar}(G_t)$}
\FOR{$w \in C$}
\STATE{$T \gets \text{parse}(G_t',w)$}
\FOR{$\tau \in T$}
\STATE{$N \gets \text{lowest\_common\_ancestor}(\tau,w)$}
\IF{$\neg \text{run\_atomically}(N)$}
\RETURN{ERROR}
\ENDIF
\ENDFOR
\ENDFOR
\ENDFOR
\RETURN{OK}
\end{algorithmic}
\end{algorithm}
The idea of the algorithm is to generate a grammar that captures the behavior of
each thread with respect to the module's usage. Any sequence of calls
contained in the contract can then be found by parsing the word
(i.e., the sequence of calls) in that grammar. This creates a parsing tree
for each place where the thread can execute that sequence of calls.
The parsing tree can then be inspected to determine the atomicity of the sequence
of calls discovered.
Algorithm~\ref{algo:verification} presents the pseudo-code of the algorithm
that verifies a contract against a client's program.
For each thread~$t$ of a program~$P$, it is necessary to determine if (and where)
any of the sequences of calls $w = m_1, \cdots\!, m_n$ defined by the contract
occurs in $P$ (line 4).
To do so, each of these sequences is parsed in the grammar $G_t'$ (line 5),
which includes all words and sub-words of $G_t$.
Sub-words must be included since we want to take into account partial traces of the
execution of thread~$t$, i.e., if we have a program
\lstinline{m.a(); m.b(); m.c(); m.d();} we are able to verify the word
${ \bf b } \; { \bf c }$ by parsing it in $G_t'$.
Notice that $G_t'$ may be ambiguous. Each different parsing tree represents different
locations where the sequence of calls $m_1, \cdots\!, m_n$ may occur in thread~$t$.
Function $\mmeth{parse()}$ returns the set of these parsing trees.
Each parsing tree contains information about the location of each method call
of $m_1, \cdots\!, m_n$ in program $P$
(since non-terminals represent CFG nodes).
Additionally, by going upwards in the parsing tree, we can find the node that
represents the method under which all calls to $m_1, \cdots\!, m_n$ are performed.
This node is the lowest common ancestor of terminals $m_1, \cdots\!, m_n$ in
the parsing tree (line 7). Therefore we have to check that the lowest common ancestor
is always executed atomically (line 8) to make sure the whole sequence of calls is
executed under the same atomic context.
Since it is the \emph{lowest} common ancestor, we are sure to require only the
minimal synchronization from the program.
A parsing tree contains information about the location in the program
where a contract violation may occur, therefore we can offer detailed
instructions to the programmer on where this violation occurs and
how to fix it.
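The lowest-common-ancestor step of line 7 admits a compact formulation: starting at the root of the parsing tree, descend while exactly one child subtree contains terminals. A sketch of this descent (parsing trees encoded as nested tuples, a representation of our own):

```python
def lowest_common_ancestor(tree):
    """Label of the deepest node covering all terminal leaves of the tree.

    A tree node is (nonterminal, child, ...); a terminal leaf is a plain
    string; (nonterminal,) encodes an epsilon derivation.
    """
    def has_terminal(t):
        return isinstance(t, str) or any(has_terminal(c) for c in t[1:])

    node = tree
    while True:
        direct = [c for c in node[1:] if isinstance(c, str)]
        carriers = [c for c in node[1:]
                    if not isinstance(c, str) and has_terminal(c)]
        # Stop when this node directly produces a terminal, or when the
        # terminals are split over several children; either way no deeper
        # node covers the whole sequence of calls.
        if direct or len(carriers) != 1:
            return node[0]
        node = carriers[0]
```

The returned non-terminal corresponds to a CFG node, and hence to a method of the client program, whose atomic execution is then checked on line 8.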
Grammar $G_t$ can use all the expressivity offered by context-free languages.
For this reason it is not sufficient to use the $LR(\cdot)$ parsing
algorithm~\cite{Knuth1965}, since it does not handle ambiguous grammars.
To deal with the full class of context-free languages a GLR parser
(Generalized $LR$ parser) must be used. GLR parsers explore all the ambiguities
that can generate different derivation trees for a word.
A GLR parser was introduced by Tomita~\cite{Tomita1987}, who presents a
non-deterministic version of the $LR(0)$ parsing algorithm with optimizations
in the representation of the parsing stack that improve the time and space
complexity of the parsing phase.
Another important point is that the number of parsing trees may be infinite.
This is due to loops in the grammar, i.e., derivations from a non-terminal to itself
($A \Rightarrow \cdots \Rightarrow A$), which often occur in $G_t$
(every loop in the control flow graph will yield a corresponding loop in the grammar).
For this reason the parsing algorithm must detect and prune parsing branches
that will lead to redundant loops, ensuring a finite number of parsing trees is
returned.
To achieve this, the parsing algorithm must detect a loop in the list of reductions
it has applied in the current parsing branch, and abort that branch if the loop did
not contribute to parsing a new terminal.
\parag{Examples}
\begin{figure*}[t]
\centering
\begin{minipage}{0.225\linewidth}
\begin{lstlisting}[frame=trbl]
void atomic run() {
f();
m.c();
}
void f() {
m.a();
g();
}
void g() {
while (cond)
m.b();
}
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.25\linewidth}
\begin{equation*}
\begin{aligned}
\mathcal{R} & \rightarrow \mathcal{F} \; { \bf c } \\
\mathcal{F} & \rightarrow { \bf a } \; \mathcal{G} \\
\mathcal{G} & \rightarrow A \\
A & \rightarrow B \; | \; \epsilon \\
B & \rightarrow { \bf b } \; A \\
\end{aligned}
\end{equation*}
\end{minipage}
%
\begin{minipage}{0.25\linewidth}
\begin{tikzpicture}[inner sep=.5mm]
\node[anchor=south] (a) at (1, 0.5) {$ \bf a$};
\node[anchor=south] (b1) at (2, 0.5) {$ \bf b$};
\node[anchor=south] (b2) at (3, 0.5) {$ \bf b$};
\node[anchor=south] (c) at (4, 0.5) {$ \bf c$};
\node[anchor=south] (B1) at (3, 1.5) {$B$};
\node[anchor=south] (A1) at (3, 2.25) {$A$};
\node[anchor=south] (A2) at (3.5, 1) {$A$};
\node[anchor=south] (e) at (3.5, .5) {$\epsilon$};
\node[anchor=south] (G) at (2, 4.5) {$\mathcal{G}$};
\node[anchor=south] (A3) at (2, 3.75) {$A$};
\node[anchor=south] (B2) at (2, 3) {$B$};
\node[anchor=south] (F) at (1, 5.25) {$\mathcal{F}$};
\node[anchor=south] (R) at (2.5, 6) {$\mathcal{R}$};
\path[-] (R) edge (F);
\draw (R) -- +(1.5,-.75) -- (c);
\path[-] (F) edge (a);
\path[-] (F) edge (G);
\path[-] (G) edge (A3);
\path[-] (A3) edge (B2);
\path[-] (B2) edge (A1);
\path[-] (B2) edge (b1);
\path[-] (A1) edge (B1);
\path[-] (B1) edge (A2);
\path[-] (B1) edge (b2);
\path[-] (A2) edge (e);
\end{tikzpicture}
\end{minipage}
%
\caption{Program (left), simplified grammar (center) and parsing tree of
${\bf a \; b \; b \; c }$ (right).}
\label{fig:parse}
\end{figure*}
Figure~\ref{fig:parse} shows a program (left) that uses the module
\lstinline{m}. The method \lstinline{run()} is the entry point of the thread~$t$
and is atomic.
In the center of the figure we show a simplified version of the $G_t$ grammar.
(The $G_t'$ grammar is not shown for the sake of brevity.)
The \lstinline{run()}, \lstinline{f()}, and \lstinline{g()} methods are
represented in the grammar by the non-terminals $\mathcal{R}$, $\mathcal{F}$,
and $\mathcal{G}$ respectively.
If we apply Algorithm~\ref{algo:verification} to this program with the
contract $C=\{ { \bf a \; b \; b \; c } \}$ the resulting parsing tree,
denoted by $\tau$ (line $6$ of Algorithm~\ref{algo:verification}), is
represented in Figure~\ref{fig:parse} (right).
To verify that all calls represented in this tree are executed atomically, the
algorithm determines the lowest common ancestor of
${ \bf a \; b \; b \; c }$ in the parsing tree (line $7$), in this
example $\mathcal{R}$.
Since $\mathcal{R}$ is always executed atomically (\lstinline{atomic} keyword),
the program complies with the contract of the module.
Figure~\ref{fig:parse2} exemplifies a situation where the generated grammar is
ambiguous. In this case the contract is $C = \{ { \bf a \; b } \}$.
The figure shows the two distinct ways to parse the word
${\bf a \; b}$ (right).
Both these trees will be obtained by our verification algorithm
(line 5 of Algorithm~\ref{algo:verification}).
The first tree (top) has $\mathcal{F}$ as the lowest common ancestor of
${\bf a \; b}$. Since $\mathcal{F}$ corresponds to the method \lstinline{f()},
which is executed atomically, this tree respects the contract.
The second tree (bottom) has $\mathcal{R}$ as the lowest common ancestor of
${\bf a \; b }$, corresponding to the execution of the \lstinline{else} branch
of method \lstinline{run()}.
This non-terminal ($\mathcal{R}$) does not correspond to an atomically executed
method; therefore the contract is not met and a violation is detected.
\begin{figure}[t]
\centering
\begin{minipage}{0.37\linewidth}
\begin{lstlisting}[frame=trbl]
void run() {
if (...)
f();
else {
m.a();
g();
}
}
void atomic f() {
m.a();
g();
}
void atomic g() {
m.b();
}
\end{lstlisting}
\end{minipage}
\begin{minipage}{0.35\linewidth}
\centering
\begin{equation*}
\begin{aligned}
\mathcal{R} & \rightarrow { \bf a } \; \mathcal{G} \; | \; \mathcal{F} \\
\mathcal{F} & \rightarrow { \bf a } \; \mathcal{G} \\
\mathcal{G} & \rightarrow { \bf b }
\end{aligned}
\end{equation*}
\end{minipage}
%
\begin{minipage}{0.15\linewidth}
\centering
\begin{tikzpicture}[inner sep=.5mm]
\node[anchor=south] (a) at (1, 0) {$ \bf a$};
\node[anchor=south] (b) at (2, 0) {$ \bf b$};
\node[anchor=south] (G) at (2, .75) {$\mathcal{G}$};
\node[anchor=south] (R) at (1.5, 2.25) {$\mathcal{R}$};
\node[anchor=south] (F) at (1.5, 1.5) {$\mathcal{F}$};
\path[-] (R) edge (F);
\path[-] (F) edge (G);
\path[-] (G) edge (b);
\draw (F) -- +(-.5,-.75) -- (a);
\end{tikzpicture}
\vspace{10mm}
\begin{tikzpicture}[inner sep=.5mm]
\node[anchor=south] (a) at (1, 0) {$ \bf a$};
\node[anchor=south] (b) at (2, 0) {$ \bf b$};
\node[anchor=south] (G) at (2, .75) {$\mathcal{G}$};
\node[anchor=south] (R) at (1.5, 1.5) {$\mathcal{R}$};
\path[-] (R) edge (G);
\path[-] (G) edge (b);
\draw (R) -- +(-.5,-.75) -- (a);
\end{tikzpicture}
\end{minipage}
%
\caption{Program (left), simplified grammar (center) and parsing tree of
${\bf a \; b }$ (right).}
\label{fig:parse2}
\end{figure}
\section{Analysis with Points-to}
\label{sec:pointsto}
In an object-oriented programming language the module is often represented as an
object, in which case we should differentiate between the instances of the module's
class. This section explains how the analysis is extended to handle multiple
instances of the module by using \emph{points-to} information.
\begin{algorithm}[t]
\caption{Contract verification algorithm with points-to information.}
\label{algo:verificationpointsto}
\begin{algorithmic}[1]
\REQUIRE{$P$, client's program;
\\\hspace{9.95mm}$C$, module contract (set of allowed sequences).}
\FOR{$t \in \text{threads}(P)$}
\FOR{$a \in \text{mod\_alloc\_sites}(t)$}
\STATE{$G_{t_a} \gets \text{make\_grammar}(t,a)$}
\STATE{$G_{t_a}' \gets \text{subword\_grammar}(G_{t_a})$}
\FOR{$w \in C$}
\STATE{$T \gets \text{parse}(G_{t_a}',w)$}
\FOR{$\tau \in T$}
\STATE{$N \gets \text{lowest\_common\_ancestor}(\tau,w)$}
\IF{$\neg \text{run\_atomically}(N)$}
\RETURN{ERROR}
\ENDIF
\ENDFOR
\ENDFOR
\ENDFOR
\ENDFOR
\RETURN{OK}
\end{algorithmic}
\end{algorithm}
To extend the analysis with points-to information, a different grammar is generated
for each allocation site of the module. Each allocation site represents an instance of
the module, and the verification algorithm verifies the contract words for
each allocation site and thread (whereas the previous algorithm verified the
contract words for each thread only).
The new algorithm is shown in Algorithm~\ref{algo:verificationpointsto}.
It generates the grammar $G_{t_a}$ for a thread $t$ and module instance $a$.
This grammar can be seen as the behavior of thread $t$ with respect to
the module instance $a$, ignoring every other instance of the module.
To generate the grammar $G_{t_a}$ we adapt Definition~\ref{def:ppgrammar}
to take into account only the instance $a$.
The grammar generation is extended in the following way:
\*
\hspace{-6mm}\line(1,0){250}\vspace{-1.5mm}
\begin{definition}[Program's Thread Behavior Grammar with points-to]
\label{def:ppgrammarpointsto}
%
The grammar $G_{t_a}=(N,\Sigma,P,S)$ is built from the CFG of the client's program
thread~$t$ and an object allocation site $a$, which represents an instance of the
module.
We define $N$, $\Sigma$, $P$ and $S$ in the same way as
Definition~\ref{def:ppgrammar}.
The rules remain the same, except for rule~\ref{eq:grammar1}, which becomes:
\begin{align}
& \text{if } \alpha : \dsb{mod.h()} \text{ and mod can only point to } a
\label{eq:grammarpt0}\\
& \quad
\{ \alpha \rightarrow {\bf h } \, \beta \; | \; \beta \in succ(\alpha) \} \subset P
\nonumber\\
& \text{if } \alpha : \dsb{mod.h()} \text{ and mod can point to } a
\label{eq:grammarpt1}\\
& \quad
\{ \alpha \rightarrow {\bf h } \, \beta \; | \; \beta \in succ(\alpha) \} \subset P
\nonumber\\
& \quad
\{ \alpha \rightarrow \beta \; | \; \beta \in succ(\alpha) \} \subset P
\nonumber\\
& \text{if } \alpha : \dsb{mod.h()} \text{ and mod cannot point to } a
\label{eq:grammarpt2}\\
& \quad
\{ \alpha \rightarrow \beta \; | \; \beta \in succ(\alpha) \} \subset P \nonumber
\end{align}
\end{definition}
\vspace{-4mm}\hspace{-5mm}\*\line(1,0){250}
\vspace{1.5mm}
Here we use the points-to information to generate the grammar, considering the
allocation sites each variable may point to.
If a variable may point to our instance $a$ or to another instance, we consider
both possibilities in Rule~\ref{eq:grammarpt1} of
Definition~\ref{def:ppgrammarpointsto}.
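The three cases of the definition can be sketched as a small rule generator. This is an illustrative Python encoding, not gluon's actual code: a production $(\alpha, \mathit{body})$ stands for $\alpha \rightarrow \mathit{body}$, and the node, method, and successor representations are assumptions.

```python
def pointsto_rules(alpha, method, succ, may_point_to, a):
    """Productions for a call node `alpha: mod.h()` in the grammar with
    points-to. `may_point_to` is the set of allocation sites the receiver
    `mod` may reference; `a` is the module instance under analysis."""
    rules = []
    if may_point_to == {a}:
        # mod can only point to a: the call always emits the terminal.
        rules += [(alpha, (method, b)) for b in succ]
    elif a in may_point_to:
        # mod may point to a or to another instance: keep both possibilities.
        rules += [(alpha, (method, b)) for b in succ]
        rules += [(alpha, (b,)) for b in succ]
    else:
        # mod cannot point to a: the call is invisible for this instance.
        rules += [(alpha, (b,)) for b in succ]
    return rules
```

The middle case is what makes the analysis conservative: when the points-to set is ambiguous, the grammar accepts traces both with and without the call.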
\section{Contracts with Parameters}
\label{sec:contractwithargs}
\begin{figure}[t]
\begin{lstlisting}
void replace(int o, int n)
{
if (array.contains(o))
{
int idx=array.indexOf(o);
array.set(idx,n);
}
}
\end{lstlisting}
\caption{Example of atomicity violations with data dependencies.}
\label{list:atomviolationdf}
\end{figure}
Frequently, contract clauses can be refined by considering the flow of data
across calls to the module. For instance, Listing~\ref{list:atomviolationdf}
shows a procedure that replaces an item in an array with another.
This listing contains two atomicity violations: the element might not exist
when \lstinline{indexOf()} is called; and the index obtained might be outdated
when \lstinline{set()} is executed.
Naturally, we can define a clause that forces the atomicity of this sequence
of calls as $\mmeth{contains} \; \mmeth{indexOf} \; \mmeth{set}$, but this can
be substantially refined by explicitly requiring that a correlation exists
between the \lstinline{indexOf()} and \lstinline{set()} calls.
To do so we extend the contract specification to capture the arguments and
return values of the calls, which allows the user to establish the relation of
values across calls.
The contract can therefore be extended to accommodate these relations;
in this case the clause might be
\begin{equation*}
\mmeth{contains(X)} \;\;\; \mmeth{Y=indexOf(X)} \;\;\; \mmeth{set(Y,\_)}.
\end{equation*}
This clause contains variables ($\mmeth{X}, \mmeth{Y}$) that must
satisfy unification for the clause to be applicable. The underscore
symbol ($\mmeth{\_}$) represents a variable that will not be used
(and therefore requires no binding).
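The exact-match unification the clause relies on can be sketched as follows. The encoding is an assumption for illustration (not gluon's representation): each clause element is a tuple of terms, with a return value written as a trailing term, so $\mmeth{Y=indexOf(X)}$ becomes `("indexOf", "X", "Y")`; terms starting with an uppercase letter are variables and `"_"` matches without binding.

```python
def unifies(clause, calls):
    """Check whether a concrete call sequence satisfies a clause's
    variable bindings using exact-match unification."""
    env = {}
    def match(pat, term):
        if pat == "_":
            return True                       # wildcard: no binding needed
        if pat[0].isupper():                  # clause variable
            if pat in env and env[pat] != term:
                return False                  # conflicting binding
            env[pat] = term
            return True
        return pat == term                    # constant (method name, literal)
    return (len(clause) == len(calls) and
            all(len(p) == len(c) and all(map(match, p, c))
                for p, c in zip(clause, calls)))
```

A parsing tree whose calls fail this check is simply discarded, which is the filtering step described below.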
Algorithm~\ref{algo:verification} can easily be modified to filter out the
parsing trees that correspond to calls that do not satisfy the unification
required by the clause in question.
In our implementation we require an exact match between the terms of the program
to satisfy the unification, since this was sufficient for most scenarios.
It can however be advantageous to generalize the unification relation.
For example, the calls
\begin{lstlisting}
array.contains(o);
idx=array.indexOf(o+1);
array.set(idx,n);
\end{lstlisting}
also imply a data dependency between the first two calls.
Ideally, we would say that $A$ unifies with $B$ if, and only if, the value of $A$
depends on the value of $B$, which can occur due to value manipulation
(data dependency) or control-flow dependency (control dependency).
This can be obtained by an information flow analysis, such as presented
in~\cite{Bergeretti1985}, which can statically infer the variables that
influence the value a variable holds at a specific point of the program.
This extension of the analysis can be a great advantage
for some types of modules. As an example we rewrite the contract
for the \emph{Java} standard library class, \lstinline{java.util.ArrayList},
presented in Section~\ref{sec:contract}:
\begin{equation*}
\label{eq:speceg2}
\begin{aligned}
1. & \; \mmeth{contains(X)} \; \mmeth{indexOf(X)} \\
2. & \; \mmeth{X=indexOf(\_)} \; (\mmeth{remove(X)} \;
| \; \mmeth{set(X,\_)} \;
| \; \mmeth{get(X)})\\
3. & \; \mmeth{X=size()} \; (\mmeth{remove(X)} \;
| \; \mmeth{set(X,\_)} \;
| \; \mmeth{get(X)}) \\
4. & \; \mmeth{add(X)} \; \mmeth{indexOf(X)}.
\end{aligned}
\end{equation*}
This contract captures in detail the relations between calls that may be
problematic, and excludes from the contract sequences of calls that do not
constitute atomicity violations.
\section{Prototype}
\label{sec:prototype}
A prototype was implemented to evaluate our methodology. This tool analyses
\emph{Java} programs using Soot~\cite{Vallee-Rai1999}, a Java static analysis
framework.
This framework directly analyses \emph{Java bytecode}, allowing us to analyse a
compiled program, without requiring access to its source code.
In our implementation a method can be marked atomic with a \emph{Java} annotation.
The contract is also defined as an annotation of the class representing the
module under analysis. The prototype is available at
\url{https://github.com/trxsys/gluon}.
\subsection{Optimizations}
To achieve a reasonable time performance we implemented a few optimizations.
Some of these optimizations reduced the analysis run time by a few orders of
magnitude in some cases, without sacrificing precision.
A simple optimization was applied to the grammar to reduce its size.
When constructing the grammar, most control flow graph nodes will have a single
successor.
Rule~\ref{eq:grammar4} (Definition~\ref{def:ppgrammar}) will always be
applied to this kind of node, since such nodes represent an instruction
that does not call any function. This creates redundant ambiguities in the grammar
due to the multiple control flow paths that never use the module under analysis.
To avoid exploration of redundant parsing
branches we rewrite the grammar to transform productions of the form
$A \rightarrow \beta B \delta, \, B \rightarrow \alpha$ to $A \rightarrow \beta \alpha \delta$, if
no other rule with head $B$ exists.
For example, an \lstinline{if else} that does not use the module will create
the productions $A \rightarrow B, \, A \rightarrow C, \, B \rightarrow D, \, C \rightarrow D$.
This transformation will reduce it to $A \rightarrow D$, leaving no ambiguity for the
parser to explore here.
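This transformation is essentially the inlining of single-production non-terminals. A minimal sketch, assuming a grammar encoded as a dict from non-terminal to a list of bodies (tuples of symbols) and no cycles among single-production non-terminals:

```python
def inline_unit_nonterminals(rules, start="S"):
    """Rewrite A -> beta B delta into A -> beta alpha delta whenever B has
    the single production B -> alpha (and B is not the start symbol),
    then drop the dead rules for B."""
    # Non-terminals defined by exactly one production.
    single = {nt: bodies[0] for nt, bodies in rules.items()
              if len(bodies) == 1 and nt != start}
    changed = True
    while changed:                      # iterate to a fixpoint
        changed = False
        for nt in rules:
            new_bodies = []
            for body in rules[nt]:
                new = []
                for sym in body:
                    if sym in single:   # inline B -> alpha at its use site
                        new.extend(single[sym])
                        changed = True
                    else:
                        new.append(sym)
                new_bodies.append(tuple(new))
            # Duplicate bodies (e.g. A -> D obtained twice) collapse into one.
            rules[nt] = list(dict.fromkeys(new_bodies))
    for nt in single:                   # the inlined rules are now dead
        del rules[nt]
    return rules
```

On the \lstinline{if else} example, $A \rightarrow B, \, A \rightarrow C, \, B \rightarrow D, \, C \rightarrow D$ collapses to a single production for $A$, removing the spurious ambiguity.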
This optimization reduced the analysis time by at least one order of magnitude
considering the majority of the tests we performed.
For instance, the Elevator test could not be analyzed in a reasonable time prior
to this optimization.
Another optimization was applied during the parsing phase. Since the $GLR$
parser builds the derivation tree bottom-up, we can find the lowest
common ancestor of the terminals as early as possible. The lowest common
ancestor will be the first non-terminal in the tree covering all the terminals
of the parse tree.
This is easily determined if we propagate, bottom-up, the number of terminals
each node of the tree covers.
Whenever a lowest common ancestor is determined we do not need further parsing
and can immediately verify if the corresponding calls are in the same atomic
context. This avoids completing the rest of the tree, which can contain
ambiguities, and therefore avoids a possibly large number of new branches.
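The search for the lowest common ancestor can be sketched as a descent guided by terminal counts. In the real parser the counts are propagated bottom-up during reduction; recomputing them here keeps the sketch short. Nodes are illustratively encoded as `(symbol, children)` tuples, with terminals as leaves.

```python
def lowest_common_ancestor(root, total):
    """Descend from the root to the deepest node that still covers all
    `total` terminals of the parsed word: that node is the LCA."""
    def covers(node):
        sym, children = node
        return 1 if not children else sum(covers(c) for c in children)
    node = root
    while True:
        # At most one child can cover every terminal; descend into it.
        full = [c for c in node[1] if covers(c) == total]
        if not full:
            return node
        node = full[0]
```

Once this node is found, parsing of the remaining (possibly ambiguous) part of the tree can stop, and the atomicity check of Algorithm~\ref{algo:verification} runs on it directly.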
Another key aspect of the parsing algorithm implementation is the loop detection.
To achieve good performance we should prune parsing branches that generate
unproductive loops as soon as possible. Our implementation guarantees
that the same non-terminal never appears twice in a parsing tree without
contributing to the recognition of a new terminal.
To achieve a better performance we also do not explicitly compute the subword
grammar ($G_t'$). We have modified our $GLR$ parser to parse subwords as
described in~\cite{rekers1991}. This greatly improves the parser performance
because constructing $G_t'$ introduces many irrelevant ambiguities that the
parser would otherwise have to explore.
\begin{table*}[t]
\centering
\caption{Optimization Improvements.}
\label{tab:opt}
\begin{tabular}{l c@{\hspace{6mm}}}
\toprule
{\bf Optimization} & {\bf Improvement} \\
\midrule
Grammar Simplification & 428\% \\
Stop Parsing at LCA & ? \\
Subword Parser & 3\% \\
\bottomrule
\end{tabular}
\end{table*}
Table~\ref{tab:opt} shows how much each of the optimizations improves
the analysis performance. These results were obtained from a test made to
stress the performance of gluon, but are consistent with real applications.
The \emph{Improvement} column shows how much that particular
optimization contributes to the analysis.
\emph{Stop Parsing at LCA} yields an improvement we were not able to
measure, since the test could not complete in reasonable time without this
optimization.
\subsection{Class Scope Mode}
\label{sec:classscopemode}
Gluon normally analyzes the entire program, taking into account any sequence of
calls that can spread across the whole program (as long as they are consecutive
calls to a module). However, this is infeasible for very large programs so, for
these programs, we ran the analysis for each class, ignoring calls to other
classes. This will detect contract violations where the control flow does not
escape the class, which is reasonable since code locality indicates a stronger
correlation between calls.
This mode of operation can be useful to analyze large programs as they might
have very complex control flow graphs and thus are infeasible to analyze with
the scope of the whole program.
In this mode the grammar is built for each class instead of each thread.
The methods of the class will create non-terminals
$\mathcal{F}_1, \cdots\!, \mathcal{F}_n$, just as before.
The only change in creating this grammar is that we create the productions
$S \rightarrow \mathcal{F}_1 \; | \; \cdots \; | \; \mathcal{F}_n$ as the starting
production of the grammar ($S$ being the initial symbol).
This means that we consider the execution of all methods of the class being
analyzed.
\section{Validation and Evaluation}
\label{sec:validation}
To validate the proposed analysis we analyzed a few real-world programs
(Tomcat, Lucene, Derby, OpenJMS and Cassandra) as well as small programs
known to contain atomicity violations. These small programs were adapted
from the literature~\cite{Praun2004,Artho03,Artho04,Teixeira2011,IBM,PessanhaMSc,
Beckman2008} and are typically used to evaluate atomicity violation detection
techniques.
We modified these small programs to employ a modular design and we wrote
contracts to enforce the correct atomic scope of calls to that module.
Some additional clauses were added that may represent atomicity violations
in the context of the module usage, even if the program does not violate those
clauses.
For the large benchmarks analyzed we aimed to discover new, unknown, atomicity
violations. To do so we had to create contracts in an automated manner, since
the code base was too large.
To automate the generation of contracts we employ a very simplistic approach
that tries to infer the contract's clauses based on what is already synchronized
in the code. The idea is that most sequences of calls that should be atomic
are correctly used \emph{somewhere}. With this in mind, we look for sequences
of calls to a module that are executed atomically in at least two points of the
program. If a sequence of calls is executed atomically in two places of the code,
that might indicate that these calls are correlated and should be atomic.
We used these sequences as our contracts, after manually filtering a few
irrelevant contracts.
This is a very simple way to generate contracts, which should ideally be written
by the module's developer to capture common cases of atomicity violations, so we
can expect the contracts to be more fine-tuned to better target atomicity violations
if the contracts are part of the regular project development.
Since these programs load classes dynamically it is impossible to obtain complete
points-to information, so we are pessimistic
and assumed every module instance could be referenced by any type-compatible
variable.
We also used the class scope mode described in Section~\ref{sec:classscopemode}
because it would be impractical to analyze such large programs with the scope of
the whole program.
These restrictions did not apply to the small programs analyzed.
\begin{table*}[t]
\centering
\caption{Validation results.}
\label{tab:valresults}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{} l c@{\hspace{6mm}} c@{\hspace{6mm}} c@{\hspace{6mm}}
c@{\hspace{6mm}} c@{\hspace{6mm}} c@{\hspace{6mm}} c@{\hspace{6mm}}
| c@{\hspace{6mm}} c@{}}
\toprule
& & {\bf Contract}
& {\bf False} & {\bf Potential} & {\bf Real} &
& & {\bf ICFinder} & {\bf ICFinder} \\
{\bf Benchmark} & {\bf Clauses} & {\bf Violations}
& {\bf Positives} & {\bf AV} & {\bf AV} & {\bf SLOC}
& {\bf Time (s)} & {\bf Static} & {\bf Final} \\
\midrule
Allocate Vector~\cite{IBM} & 1 & 1 & 0 & 0 & 1 & 183 & 0.120 & - & - \\
Coord03~\cite{Artho03} & 4 & 1 & 0 & 0 & 1 & 151 & 0.093 & - & - \\
Coord04~\cite{Artho04} & 2 & 1 & 0 & 0 & 1 & 35 & 0.039 & - & - \\
Jigsaw~\cite{Praun2004} & 1 & 1 & 0 & 0 & 1 & 100 & 0.044 & 121 & 2 \\
Local~\cite{Artho03} & 2 & 1 & 0 & 0 & 1 & 24 & 0.033 & - & - \\
Knight~\cite{Teixeira2011} & 1 & 1 & 0 & 0 & 1 & 135 & 0.219 & - & - \\
NASA~\cite{Artho03} & 1 & 1 & 0 & 0 & 1 & 89 & 0.035 & - & - \\
Store~\cite{PessanhaMSc} & 1 & 1 & 0 & 0 & 1 & 621 & 0.090 & - & - \\
StringBuffer~\cite{Artho04} & 1 & 1 & 0 & 0 & 1 & 27 & 0.032 & - & - \\
UnderReporting~\cite{Praun2004} & 1 & 1 & 0 & 0 & 1 & 20 & 0.029 & - & - \\
VectorFail~\cite{PessanhaMSc} & 2 & 1 & 0 & 0 & 1 & 70 & 0.048 & - & - \\
%
Account~\cite{Praun2004} & 4 & 2 & 0 & 0 & 2 & 42 & 0.041 & - & - \\
Arithmetic DB~\cite{Teixeira2011} & 2 & 2 & 0 & 0 & 2 & 243 & 0.272 & - & - \\
Connection~\cite{Beckman2008} & 2 & 2 & 0 & 0 & 2 & 74 & 0.058 & - & - \\
Elevator~\cite{Praun2004} & 2 & 2 & 0 & 0 & 2 & 268 & 0.333 & - & - \\
\midrule
OpenJMS 0.7 & 6 & 54 & 10 & 28 & 4 & 163K & 148 & 126 & 15 \\
Tomcat 6.0 & 9 & 157 & 16 & 47 & 3 & 239K & 3070 & 365 & 12 \\
Cassandra 2.0 & 1 & 60 & 24 & 15 & 2 & 192K & 246 & - & - \\
Derby 10.10 & 1 & 19 & 5 & 7 & 1 & 793K & 522 & 122 & 16 \\
Lucene 4.6 & 3 & 136 & 21 & 76 & 0 & 478K & 151 & 391 & 2 \\
\bottomrule
\end{tabular}
}
\end{table*}
Table~\ref{tab:valresults} summarizes the results that validate the
correctness of our approach.
The table contains both the micro benchmarks (above) and the macro benchmarks
(below). The columns represent the number of clauses of the contract (Clauses);
the number of violations of those clauses (Contract Violations);
the number of false positives, i.e. sequences of calls that in fact the program
will never execute (False Positives);
the number of potential atomicity violations, i.e. atomicity violations that
could happen \emph{if} the object was concurrently accessed by multiple
threads (Potential AV); the number of real atomicity violations that can in
fact occur and compromise the correct execution of the program (Real AV);
the number of lines of code of the benchmark (SLOC); and the time it took for the
analysis to complete (the analysis run time excludes the Soot initialization
time, which was always less than $179 \mathrm{s}$ per run).
Our tool was able to detect all violations of the contract by the client
program in the microbenchmarks, so no false negatives occurred, which supports
the soundness of the analysis.
Since some tests include additional contract clauses with call sequences
not present in the test programs we also show that, in general,
the analysis does not detect spurious violations, i.e.,
false positives.\footnote{In these tests no false positives were detected.
However it is possible to create situations where false positives occur.
For instance, the analysis assumes a loop may iterate an arbitrary number
of times, which makes it consider execution traces that may not be possible.}
A corrected version of each test was also verified and the prototype correctly
detected that all contract's call sequences in the client program were
now atomically executed. Correcting a program is trivial since the prototype
pinpoints the methods that must be made atomic, and ensures the synchronization
required has the finest possible scope, since it is the method that corresponds to
the \emph{lowest} common ancestor of the terminals in the parse tree.
The large benchmarks show that gluon can be applied to large scale programs
with good results.
Even with a simple automated contract generation we were able to detect $10$
atomicity violations in real-world programs.
Six of these bugs were reported
(\href{https://issues.apache.org/bugzilla/show_bug.cgi?id=56784}{Tomcat}
\footnote{\url{https://issues.apache.org/bugzilla/show_bug.cgi?id=56784}},
\href{https://issues.apache.org/jira/browse/DERBY-6679}{Derby}
\footnote{\url{https://issues.apache.org/jira/browse/DERBY-6679}},
\href{https://issues.apache.org/jira/browse/CASSANDRA-7757}{Cassandra}
\footnote{\url{https://issues.apache.org/jira/browse/CASSANDRA-7757}}),
with two bugs already fixed in Tomcat 8.0.11.
The false positives incorrectly reported by gluon were all due to
conservative points-to information, since the program loads and calls
classes and methods dynamically (leading to an incomplete points-to graph).
ICFinder~\cite{liu2013} uses a static analysis to detect two types of
common incorrect composition patterns. This is then filtered with a dynamic
analysis.
Of the atomicity violations detected by gluon none of them was captured by
ICFinder, since they failed to match the definition of the patterns.
It is hard to compare the two tools directly since they follow very different
approaches. Loosely speaking, in Table~\ref{tab:valresults} the ICFinder Static
column corresponds to Contract Violations, since both represent the
static part of each approach. The ICFinder Final column cannot be
directly compared with the Real AV column because ``ICFinder Final'' may
contain scenarios that do not represent atomicity violations (in particular if
ICFinder does not correctly identify the atomic sets). ``ICFinder Final'' also
cannot be compared with ``Potential AV'', since ``Potential AV'' is manually
obtained from ``Contract Violations'', whereas ``ICFinder Final'' is the
dynamic filtering of ``ICFinder Static''. In the end, the number of bugs
reported by gluon was $6$, with $2$ bugs confirmed and with fixes already applied,
$1$ bug considered highly unlikely, and $3$ bugs pending confirmation;
ICFinder has $3$ confirmed and fixed bugs on
Tomcat.~\footnote{\url{https://issues.apache.org/bugzilla/show_bug.cgi?id=53498}}
The performance results show our tool can run efficiently.
For larger programs we have to use class scope mode, sacrificing precision
for performance, but we can still capture interesting contract violations.
The performance of the analysis depends greatly on the number of branches
the parser explores.
This high number of parsing branches is due to the complexity of the program's
control flow, which offers a huge number of distinct control flow paths.
In general the parsing phase will dominate the time complexity of the analysis,
so the analysis run time will be proportional to the number of explored
parsing branches.
Memory usage is not a problem for the analysis, since the asymptotic space complexity
is determined by the size of the parsing table and the largest parsing tree.
Memory usage is not affected by the number of parsing trees because
our $GLR$ parser explores the parsing branches in-depth instead of in-breadth.
In-depth exploration is possible because we never have infinite height parsing
trees due to our detection of unproductive loops.
\subsection{ICFinder}
ICFinder tries to infer automatically what a module is, and incorrect compositions
of pairs of calls to modules.
Two patterns are used to detect potential atomicity violations in method call
compositions:
\begin{itemize}
\item USE: Detects stale value errors. This pattern detects data or control flow
dependencies between two calls to the module.
\item COMP: If a call to method \lstinline{a()} dominates \lstinline{b()} and
\lstinline{b()} post-dominates \lstinline{a()} in some place, that is captured by
this pattern. This means that, for each piece of code involving two calls to the
module (\lstinline{a()} and \lstinline{b()}), if \lstinline{a()} is always executed
before \lstinline{b()} and \lstinline{b()} is always executed after \lstinline{a()},
it is a COMP violation.
\end{itemize}
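The COMP pattern reduces to a check over dominator and post-dominator sets. A sketch, assuming `dom[n]` and `postdom[n]` are the dominator and post-dominator sets produced by a standard CFG analysis (ICFinder's actual implementation is richer):

```python
def comp_violation(dom, postdom, a, b):
    """COMP pattern for a pair of module calls at CFG nodes a and b:
    a dominates b (a always executes before b) and b post-dominates a
    (b always executes after a)."""
    return a in dom.get(b, set()) and b in postdom.get(a, set())
```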
Both these patterns are extremely broad and produce many false positives.
To deal with this, the authors filter these results with a dynamic analysis that
only considers violations as defined in~\cite{Vaziri2006}.
This analysis assumes that the notion of \emph{atomic set} was correctly inferred by
ICFinder.
\section{Related Work}
\label{sec:relwork}
The methodology of design by contract was introduced by Meyer~\cite{Meyer1992}
as a technique to write robust code, based on contracts between programs and
objects.
In this context, a contract specifies the necessary conditions the program must
meet in order to call the object's methods, whose semantics is ensured if
those pre-conditions are met.
Cheon et al. propose the use of contracts to specify protocols for accessing
objects~\cite{Cheon2007}.
These contracts use regular expressions to describe the sequences of calls
that can be executed for a given \emph{Java} object.
The authors present a dynamic analysis for the verification of the contracts.
This contrasts with our analysis, which statically validates the contracts.
Beckman et al. introduce a methodology based on \emph{typestate} that statically
verifies if a protocol of an object is respected~\cite{Beckman2008}.
This approach requires the programmer to explicitly \emph{unpack} objects
before they can be used.
Hurlin~\cite{Hurlin2009} extends the work of Cheon to support protocols in concurrent
scenarios.
The protocol specification is extended with operators that allow methods to be
executed concurrently, and pre-conditions that have to be satisfied before the
execution of a method.
This analysis is statically verified by a theorem prover. Theorem proving,
in general, is limited, since automated theorem proving tends to be inefficient.
Peng Liu et al. developed a way to detect atomicity violations caused by
method composition~\cite{liu2013}, much like the ones we describe in this
paper.
They define two patterns that are likely to cause atomicity violations, one
capturing stale value errors and the other trying to infer a correlation
between method calls by analyzing the control flow graph
(if \lstinline{a()} is executed before \lstinline{b()}
and \lstinline{b()} is executed after \lstinline{a()}).
These patterns are captured statically and then filtered with a dynamic analysis.
Many works can be found about atomicity violations.
Artho et al. in \cite{Artho03} define the notion of \emph{high-level data races},
that characterize sequences of atomic operations that should be executed
atomically to avoid atomicity violations.
The definition of high-level data races does not fully capture the
violations that may occur in a program.
Praun and Gross~\cite{Praun2004} extend Artho's approach to detect potential
anomalies in the execution of methods of an object and increase the precision of
the analysis by distinguishing between read and write accesses to variables shared
between multiple threads.
An additional refinement to the notion of high-level data races was
introduced by Pessanha in~\cite{Pessanha2012}, relaxing the properties
defined by Artho, which results in a higher precision of the analysis.
Farchi et al.~\cite{Farchi2012} propose a methodology to detect atomicity
violations in the usage of modules based on the definition of high-level data races.
Another common type of atomicity violations that arise when sequencing several
atomic operations are \emph{stale value errors}.
This type of anomaly is characterized by the usage of values obtained atomically
across several atomic operations. These values can be outdated and compromise the
correct execution of the program. Various analyses have been developed to detect
these types of anomalies~\cite{Artho04,Burrows2004,Pessanha2012}.
Several other analyses to verify atomicity violations can be found in the
literature, based on access patterns to shared
variables~\cite{Vaziri2006, Teixeira2011},
type systems~\cite{Caires2013}, semantic invariants~\cite{Demeyer2012},
and other specific methodologies~\cite{Flanagan2004,Flanagan2008,Flanagan2010}.
\section{Concluding Remarks}
\label{sec:conclusion}
In this paper we present the problem of atomicity violations when using a
module, even when their methods are individually synchronized by some
concurrency control mechanism.
We propose a solution based on the design by contract methodology.
Our contracts define which call sequences to a module should be executed in
an atomic manner.
We introduce a static analysis to verify these contracts. The proposed
analysis extracts the behavior of the client's program with respect to the module
usage, and verifies whether the contract is respected.
A prototype was implemented and the experimental results show that the analysis
is highly precise and can run efficiently on real-world programs.
\bibliographystyle{plain}
Proteins contain one or more domains, each of which
could have evolved independently from the rest of the protein structure
and which could have unique functions \cite{Domain2011a,Domain2011b}.
Because of molecular evolution,
proteins with similar sequences often share similar folds and structures.
Retrieving and ranking protein domains that are similar to a query protein domain from a protein domain database
are critical
tasks for the analysis of
protein structure, function, and evolution
\cite{Zhang2010,Stivala2009,Stivala2010}.
The similar protein domains retrieved by a ranking system may help researchers infer the functional properties of a query domain from the functions of the returned protein domains.
The output of a
ranking procedure is usually a list of database protein domains
that are ranked in descending order according to a measure of their similarity to the query domain.
The choice of a similarity measure largely defines the performance of a ranking system as argued previously \cite{BaiGT2010}.
A large number of algorithms for computing similarity as a ranking score have been developed:
\begin{description}
\item[Pairwise protein domain comparison algorithms] compute the similarity between a pair of protein domains either by protein domain structure alignment or
by comparing protein domain features.
\emph{Protein structure alignment based methods} compare protein domain structures at the level of residues, and sometimes even atoms, to detect structural
similarities with high sensitivity and accuracy.
For example,
Carpentier et al. proposed YAKUSA \cite{YAKUSA2005}
which compares protein structures using one-dimensional
characterizations based on protein backbone
internal angles,
while
Jung and Lee proposed SHEBA \cite{SHEBA1990}
for
structural database scanning based on environmental profiles.
\emph{Protein domain feature based methods} extract structural features from protein domains and compute their similarity using a similarity
or distance function.
For example, Zhang et al.
used the 32-D tableau feature vector in a comparison procedure called IR tableau
\cite{Zhang2010}, while Lee and Lee
introduced a measure called WDAC (Weighted Domain Architecture Comparison) that is used in the protein domain comparison context \cite{Cosine2009}. Both these methods use cosine similarity for comparison purposes.
\item[Graph-based similarity learning algorithms]
use the traditional protein domain comparison methods mentioned above
that focus on detecting pairwise sequence
alignments while
neglecting all other protein domains in the database and their distributions.
To tackle this problem,
a graph-based transductive similarity learning algorithm
has been proposed \cite{BaiGT2010,Weston2006}.
Instead
of computing pairwise similarities for protein domains,
graph-based methods take advantage
of the graph formed by the existing protein domains.
By propagating similarity measures between the query protein domain and the database protein domains via
graph transduction (GT), a better metric for ranking database protein domains can be learned.
\end{description}
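Several of the pairwise methods above (IR tableau, WDAC) score domains with the cosine similarity of their feature vectors. A minimal ranker in that style can be sketched as follows (illustrative Python; 2-D vectors stand in for the 32-D tableau vectors):

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def rank_by_cosine(query, database):
    """Indices of database domains, most similar to the query first."""
    return sorted(range(len(database)),
                  key=lambda i: cosine(query, database[i]),
                  reverse=True)
```

A graph-based method refines exactly this pairwise score list by propagating similarity over the database graph, which is what the framework below formalizes.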
The main component of graph-based ranking is the construction of a graph as the estimation of
intrinsic
manifold of the database.
As argued by Cai et al. \cite{Cai2011},
there are many ways to define different graphs with different models and parameters.
However, up to now,
there are, in general, no explicit rules for the choice of graph models and parameters.
In \cite{BaiGT2010}, the graph parameters were determined by a grid-search of different pairs of parameters.
In \cite{Cai2011}, several graph models were considered for graph regularization, and exhaustive experiments were carried out for the selection of a graph model and its parameters.
However, these kinds of grid-search strategies select parameters from discrete values in the
parameter space, and thus lack the ability to approximate an optimal solution.
At the same time, cross-validation \cite{CrossValidation2006,CrossValidation2009} can be used for parameter selection, but it
does not always scale up very well for many of the graph parameters, and sometimes it might
over-fit the training
and validation set while not generalizing well on the query set.
In \cite{Geng2012}, Geng et al. proposed an ensemble manifold
regularization (EMR) framework that combines the automatic intrinsic manifold approximation
and semi-supervised learning (SSL) \cite{SSL2010,SSL2011} of a support vector machine (SVM) \cite{SVM2011a,SVM2011b}.
Based on the EMR idea, we attempted to solve the problem of graph model and parameter selection
by fusing multiple graphs to obtain a ranking score learning framework for protein domain ranking.
We first outlined the graph regularized ranking score learning framework
by optimizing ranking score learning with both relevance and graph constraints, and then
generalized it to the multiple graph case.
First a pool of initial guesses of the graph Laplacian with different graph models and parameters is computed,
and then they are combined linearly to approximate the intrinsic manifold.
The optimal graph model(s) with optimal parameters are selected by assigning larger weights to them.
Meanwhile, ranking score learning is also restricted to be smooth
along the estimated graph.
Because the graph weights and ranking scores are learned
jointly, a unified
objective function is obtained.
The objective function is optimized alternately and conditionally with respect to multiple graph weights
and ranking scores in an iterative algorithm.
We have named our \textbf{Multi}ple \textbf{G}raph regularized \textbf{Rank}ing method \textbf{MultiG-Rank}. It is composed of an off-line graph weights learning algorithm and an on-line ranking algorithm.
\section*{Methods}
\subsection*{Graph model and parameter selection}
Consider a data set of protein domains represented by their 32-D tableau feature vectors \cite{Zhang2010}, $\mathcal{X} = \{x_1, x_2, \cdots, x_N\}$,
where $x_i\in \mathbb{R}^{32}$ is the tableau feature vector of the $i$-th protein domain, $x_q$ is the query protein domain, and the others are database protein domains.
We define the ranking score vector as $\textbf{f} = [f_1, f_2, ..., f_N]^\top \in \mathbb{R}^N$ in which $f_i$ is the
ranking score of $x_i$ to the query domain.
The problem is to rank the protein domains in $\mathcal{X}$ in descending order according to their
ranking scores and return several of the top ranked domains as the ranking results so that the returned protein domains are as relevant to the query as possible.
Here we define two types of protein domains: \emph{relevant} when they belong to the same SCOP fold type \cite{SCOP2009}, and \emph{irrelevant} when they do not.
We denote the SCOP-fold labels of protein domains in $\mathcal{X}$ as $\mathcal{L} = \{l_1, l_2, ..., l_N\}$, where
$l_i$ is the label of the $i$-th protein domain and $l_q$ is the query label.
The optimal ranking scores of the relevant protein domains $\{x_i\}, l_i=l_q$ should be larger than those of the irrelevant ones
$\{x_i\}, l_i\neq l_q$, so that the
relevant protein domains are returned to the user.
\subsection*{Graph regularized protein domain ranking}
We imposed
two constraints on the ranking score vector $\textbf{f}$ to learn the optimal ranking scores:
\begin{description}
\item[Relevance constraint]
Because the query protein domain reflects the search intention of the user, $\textbf{f}$ should be consistent with the protein domains that are relevant to the query.
We also define a relevance vector of the protein domains as $\textbf{y} = [y_1, y_2, \cdots, y_N]^\top \in \{1,0\}^N$, where
$y_i = 1$ if $x_i$ is relevant to the query and $y_i = 0$ otherwise.
Because the type label $l_q$ of a query protein domain $x_q$ is usually unknown,
we know only that the query is relevant to itself and
have no prior knowledge of whether or not others are relevant;
therefore, we can only set $y_q=1$, while $y_i,~i\neq q$, is left unknown.
To assign different weights to different protein domains in $\mathcal{X}$,
we define a diagonal matrix $U$
as $U_{ii}=1$ when $y_i$ is known, otherwise $U_{ii}=0$.
To impose the relevance constraint on the learning of $\textbf{f}$,
we aim to minimize the following
objective function:
\begin{equation}
\label{equ:Qr}
\begin{aligned}
\underset{\textbf{f}}{min}~ O^r(\textbf{f})
&=\sum_{i=1}^N (f_i-y_i)^2 U_{ii}\\
&=(\textbf{f} - \textbf{y})^\top U(\textbf{f} - \textbf{y})
\end{aligned}
\end{equation}
\item[Graph constraint]
$f$ should also be consistent with the local distribution found in the protein domain database.
The local distribution was embedded into a $K$ nearest neighbor graph $\mathcal{G}=\{\mathcal{V},\mathcal{E},W\}$.
For each protein domain $x_i$, its $K$ nearest neighbors, excluding itself, are denoted by $\mathcal{N}_i$.
The node set $\mathcal{V}$ corresponds to $N$ protein domains in $\mathcal{X}$, while
$\mathcal{E}$ is the edge set, and
$(i,j)\in \mathcal{E}$ if $x_j\in \mathcal{N}_i$ or $x_i\in \mathcal{N}_j$.
The weight of an edge $(i,j)$ is denoted by $W_{ij}$, which can be computed using different graph definitions and parameters, as described in the next section.
The edge weights are organized in a weight matrix $W = [W_{ij}] \in \mathbb{R}^{N\times N}$.
We expect that if two protein domains
$x_i$ and $x_j$ are close (i.e., $W_{ij}$ is large), then $f_i$ and $f_j$ should also
be close.
To impose the graph constraint on the learning of $\textbf{f}$,
we aim to minimize the following
objective function:
\begin{equation}
\label{equ:Og}
\begin{aligned}
\underset{\textbf{f}}{min}~ O^g(f)
&=\frac{1}{2}\sum_{i,j=1}^N (f_i-f_j )^2 W_{ij}\\
&=\textbf{f}^\top D \textbf{f} - \textbf{f}^\top W \textbf{f}\\
&=\textbf{f}^\top L \textbf{f}
\end{aligned}
\end{equation}
where $D$ is a
diagonal matrix with entries $D_{ii}=\sum_{j=1}^N W_{ij}$, and $L = D -W$ is the
graph Laplacian matrix.
This is a basic identity in spectral graph theory and it provides some
insight into the remarkable properties of the graph Laplacian.
\end{description}
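The last equality in (\ref{equ:Og}) is the standard Laplacian quadratic-form identity, $\textbf{f}^\top L \textbf{f} = \frac{1}{2}\sum_{i,j}(f_i-f_j)^2 W_{ij}$, and it is easy to verify numerically. A minimal sketch on arbitrary toy data (all values below are hypothetical):

```python
import numpy as np

# Arbitrary random symmetric weight matrix for a small graph (toy data).
rng = np.random.default_rng(0)
N = 6
A = rng.random((N, N))
W = (A + A.T) / 2.0
np.fill_diagonal(W, 0.0)

# Degree matrix D (row sums of W) and graph Laplacian L = D - W.
D = np.diag(W.sum(axis=1))
L = D - W

# Any score vector f satisfies f^T L f = (1/2) sum_ij (f_i - f_j)^2 W_ij.
f = rng.random(N)
quad = f @ L @ f
pairwise = 0.5 * sum((f[i] - f[j]) ** 2 * W[i, j]
                     for i in range(N) for j in range(N))
```

The identity also shows that $\textbf{f}^\top L \textbf{f} \geq 0$, i.e., the Laplacian is positive semi-definite, with equality for score vectors that are constant on connected components.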
When the two constraints are combined,
the learning of $\textbf{f}$ is based on the minimization of the
following objective
function:
\begin{equation}
\label{equ:Of}
\begin{aligned}
\underset{\textbf{f}}{min}~ O(\textbf{f})&=O^r(\textbf{f}) + \alpha O^g(\textbf{f})\\
&=(\textbf{f} - \textbf{y})^\top U(\textbf{f} - \textbf{y}) + \alpha \textbf{f}^\top L \textbf{f}
\end{aligned}
\end{equation}
where $\alpha$ is a trade-off parameter of the smoothness penalty.
The solution is obtained by setting the derivative of $O(\textbf{f})$ with respect to $\textbf{f}$ to zero as
$\textbf{f}=(U+\alpha L)^{-1}U\textbf{y}$.
In this way, information from both
the query protein domain provided by the user and
the relationships among all the protein domains in $\mathcal{X}$
is used to
rank the protein domains in $\mathcal{X}$.
The query information is embedded in $y$ and $U$, while the protein domain relationship information is embedded in $L$.
The final ranking results are obtained by balancing the two
sources of information.
In this paper, we call this method \textbf{G}raph regularized \textbf{Rank}ing (G-Rank).
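The closed-form G-Rank solution $\textbf{f}=(U+\alpha L)^{-1}U\textbf{y}$ is a single linear solve. A toy numpy sketch on a 4-node chain graph follows; the graph and $\alpha$ are hypothetical, and, as an assumption of this sketch only, one extra item is marked as known-irrelevant so that $U$ has two nonzero entries (on-line, only the query's entry of $U$ is set):

```python
import numpy as np

# Toy 4-node chain graph 0-1-2-3 (hypothetical data).
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(axis=1)) - W           # graph Laplacian L = D - W

# Node 0 is the query (y_0 = 1). As an assumption of this sketch only,
# node 3 is treated as known-irrelevant (y_3 = 0).
y = np.array([1.0, 0.0, 0.0, 0.0])
U = np.diag([1.0, 0.0, 0.0, 1.0])        # U_ii = 1 where y_i is known

alpha = 0.5                              # smoothness trade-off parameter
f = np.linalg.solve(U + alpha * L, U @ y)
# Scores decay with graph distance from the query:
# f = [0.875, 0.625, 0.375, 0.125]
```

The relevance term anchors the known scores while the Laplacian term diffuses them smoothly along the graph edges.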
\subsection*{Multiple graph learning and ranking: MultiG-Rank}
Here we describe the multiple graph learning
method used to directly learn a self-adaptive graph for ranking regularization. The graph is assumed to be a linear
combination of multiple predefined graphs (referred
to as base graphs).
The graph weights are learned in a supervised way by considering the
SCOP fold types of the protein domains in the database.
\subsubsection *{Multiple graph regularization}
The main component of graph regularization is the construction of a graph.
As described previously, there are many ways to find the neighbors $\mathcal{N}_i$ of $x_i$ and to define the weight matrix $W$ on
the graph \cite{Cai2011}.
Several of them are as follows:
\begin{itemize}
\item
\textbf{Gaussian kernel weighted graph}: $\mathcal{N}_i$ of $x_i$ is found by comparing the squared Euclidean distance
as,
\begin{equation}
\begin{aligned}
||x_i-x_j||^2=x_i^\top x_i - 2 x_i^\top x_j +x_j^\top x_j
\end{aligned}
\end{equation}
and the weighting is computed using a Gaussian kernel as,
\begin{equation}
\begin{aligned}
W_{ij}=
\left\{\begin{matrix}
e^{-\frac{||x_i-x_j||^2}{2\sigma^2}}, &if~(i,j)\in \mathcal{E}\\
0, & else
\end{matrix}\right.
\end{aligned}
\end{equation}
where $\sigma$ is the bandwidth of the kernel.
\item
\textbf{Dot-product weighted graph}: $\mathcal{N}_i$ of $x_i$ is found by comparing the squared Euclidean distance
and the weighting is computed as the dot-product as,
\begin{equation}
\begin{aligned}
W_{ij}=
\left\{\begin{matrix}
x_i^\top x_j, &if~(i,j)\in \mathcal{E}\\
0, & else
\end{matrix}\right.
\end{aligned}
\end{equation}
\item \textbf{Cosine similarity weighted graph}:
$\mathcal{N}_i$ of $x_i$ is found by comparing
cosine similarity as,
\begin{equation}
\begin{aligned}
C(x_i,x_j)=\frac{x_i^\top x_j}{||x_i||||x_j||}
\end{aligned}
\end{equation}
and the weighting is also assigned as cosine similarity as,
\begin{equation}
\begin{aligned}
W_{ij}=
\left\{\begin{matrix}
C(x_i,x_j), &if~(i,j)\in \mathcal{E}\\
0, & else
\end{matrix}\right.
\end{aligned}
\end{equation}
\item \textbf{Jaccard index weighted graph}:
$\mathcal{N}_i$ of $x_i$ is found by comparing
the Jaccard index \cite{Jaccard2011} as,
\begin{equation}
\begin{aligned}
J(x_i,x_j)=\frac{|x_i\bigcap x_j|}{|x_i\bigcup x_j|}
\end{aligned}
\end{equation}
and the weighting is assigned as,
\begin{equation}
\begin{aligned}
W_{ij}=
\left\{\begin{matrix}
J(x_i,x_j), &if~(i,j)\in \mathcal{E}\\
0, & else
\end{matrix}\right.
\end{aligned}
\end{equation}
\item \textbf{Tanimoto coefficient weighted graph}:
$\mathcal{N}_i$ of $x_i$ is found by comparing
the Tanimoto coefficient as,
\begin{equation}
\begin{aligned}
T(x_i,x_j)=\frac{x_i^\top x_j}{||x_i||^2+||x_j||^2-x_i^\top x_j}
\end{aligned}
\end{equation}
and the weighting is assigned as,
\begin{equation}
\begin{aligned}
W_{ij}=
\left\{\begin{matrix}
T(x_i,x_j), &if~(i,j)\in \mathcal{E}\\
0, & else
\end{matrix}\right.
\end{aligned}
\end{equation}
\end{itemize}
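These graph constructions are straightforward to implement. The sketch below builds four of the weight matrices for a small feature matrix (the Jaccard graph is analogous on binary feature vectors and is omitted); neighbor selection follows the comparison measure named in each definition. Function names and parameter values are illustrative only:

```python
import numpy as np

def symmetrize_knn(score, K, largest=True):
    """Edge set E: (i,j) in E if j is among the K best-scoring
    neighbors of i, or i is among those of j (self excluded)."""
    s = score.astype(float).copy()
    np.fill_diagonal(s, -np.inf if largest else np.inf)
    order = np.argsort(-s if largest else s, axis=1)[:, :K]
    E = np.zeros(s.shape, dtype=bool)
    rows = np.repeat(np.arange(s.shape[0]), K)
    E[rows, order.ravel()] = True
    return E | E.T

def graph_weights(X, K=3, sigma=1.0):
    """Weight matrices of four of the base graphs described in the text.

    Neighbors are chosen by squared Euclidean distance for the Gaussian
    and dot-product graphs, and by the respective similarity measure for
    the cosine and Tanimoto graphs.
    """
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # ||x_i - x_j||^2
    G = X @ X.T                                           # dot products
    nrm = np.linalg.norm(X, axis=1)
    C = G / np.outer(nrm, nrm)                            # cosine similarity
    sq = np.diag(G)
    T = G / (sq[:, None] + sq[None, :] - G)               # Tanimoto coefficient
    E_euc = symmetrize_knn(d2, K, largest=False)
    return {
        "gaussian": np.where(E_euc, np.exp(-d2 / (2 * sigma**2)), 0.0),
        "dot":      np.where(E_euc, G, 0.0),
        "cosine":   np.where(symmetrize_knn(C, K), C, 0.0),
        "tanimoto": np.where(symmetrize_knn(T, K), T, 0.0),
    }

# Toy demo on random non-negative 5-D features (hypothetical data).
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(8, 5)))
Ws = graph_weights(X, K=3)
```

Each resulting matrix is symmetric with zero diagonal, as required by the edge definition $(i,j)\in\mathcal{E}$ if $x_j\in\mathcal{N}_i$ or $x_i\in\mathcal{N}_j$.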
With so many possible choices of graphs,
the
most suitable graph with its parameters for the protein domain ranking task is often not known
in advance;
thus, an exhaustive search on a predefined
pool of graphs is necessary.
When the size
of the pool becomes large, an
exhaustive search will be quite time-consuming and sometimes not possible.
Hence, a method for efficiently learning
an appropriate graph to make the performance
of the employed graph-based ranking method robust or
even improved is crucial for graph regularized ranking.
To tackle this problem we propose a multiple graph
regularized ranking framework
that provides a series of initial guesses of the graph Laplacian
and combines them to approximate the intrinsic manifold in a conditionally
optimal way, inspired by a previously reported method \cite{Geng2012}.
Given a set of $M$ graph candidates $\{\mathcal{G}_1,\cdots,\mathcal{G}_M\}$,
we denote their corresponding candidate graph Laplacians as $\mathcal{T}=\{L_1,\cdots,L_M\}$.
By assuming that the optimal graph Laplacian
lies in the convex hull of the pre-given graph Laplacian candidates,
we constrain the search space of possible graph Laplacians to linear combinations of the $L_m$ in $\mathcal{T}$ as,
\begin{equation}
\label{equ:L}
\begin{aligned}
L=\sum_{m=1}^M \mu_m L_m
\end{aligned}
\end{equation}
where $\mu_m$ is the weight of the $m$-th graph.
To avoid any negative contribution,
we further constrain
$\sum_{m=1}^M \mu_m=1,~\mu_m\geq 0.$
To use the information from
data distribution approximated by the new composite graph Laplacian $L$ in (\ref{equ:L}) for protein domain ranking,
we introduce a new multi-graph regularization term.
By substituting (\ref{equ:L}) into (\ref{equ:Og}), we get the augmented
objective function term in an enlarged parameter space as,
\begin{equation}
\label{equ:Omultig}
\begin{aligned}
\underset{\textbf{f},\mu}{min}~ &O^{multig}(\textbf{f},\mu)
=\sum_{m=1}^M \mu_m (\textbf{f}^\top L_m \textbf{f})\\
s.t.~&\sum_{m=1}^M \mu_m=1,~\mu_m\geq 0.
\end{aligned}
\end{equation}
where $\mu=[\mu_1,\cdots,\mu_M]^\top$ is the graph weight vector.
\subsubsection *{Off-line supervised multiple graph learning}
In the on-line querying procedure, the relevance of the query $x_q$ to the database protein domains is unknown, and
thus the optimal graph weights $\mu$ cannot be learned in a supervised way.
However, the SCOP-fold labels of all the protein domains in the database are known, which makes supervised learning of $\mu$ possible in an off-line
way.
We treat each database protein domain $x_q\in \mathcal{D},~q=1,\cdots,N$, as a query in the off-line learning,
and treat all the entries of its relevance vector $\textbf{y}_q=[{y}_{1q},\cdots,{y}_{Nq}]^\top$ as known, because the
SCOP-fold labels of all the database protein domains are known as,
\begin{equation}
\begin{aligned}
\textbf{y}_{iq}=
\left\{\begin{matrix}
1 &, if~l_i=l_q\\
0 &, else
\end{matrix}\right.
\end{aligned}
\end{equation}
Therefore, we set $U=I^{N\times N}$, an $N\times N$ identity matrix.
The ranking score vector of the $q$-th database protein domain is defined as $\textbf{f}_q=[f_{1q},\cdots,f_{Nq}]^\top \in \mathbb{R}^N$.
Substituting $\textbf{f}_q$, $\textbf{y}_q$ and $U$ to (\ref{equ:Qr}) and (\ref{equ:Omultig}) and combining them, we have the optimization
problem for the $q$-th database
protein domain as,
\begin{equation}
\label{equ:fq}
\begin{aligned}
\underset{\textbf{f}_q,\mu}{min}~ &O(\textbf{f}_q,\mu)
=(\textbf{f}_q-\textbf{y}_q)^\top (\textbf{f}_q-\textbf{y}_q)+\alpha \sum_{m=1}^M \mu_m (\textbf{f}_q^\top L_m \textbf{f}_q) + \beta ||\mu||^2\\
s.t.~&\sum_{m=1}^M \mu_m=1,~\mu_m\geq 0.
\end{aligned}
\end{equation}
To prevent the parameter $\mu$ from over-fitting to
a single graph, we also introduce the $l_2$ norm
regularization term $||\mu||^2$ into the objective function.
The difference between $\textbf{f}_q$ and $\textbf{y}_q$ should be noted:
$\textbf{y}_q \in \{1,0\}^N$ plays the role of the given ground truth in the supervised learning procedure,
while $\textbf{f}_q \in \mathbb{R}^N$ is the variable to be solved.
While $\textbf{y}_q$ is the ideal solution for $\textbf{f}_q$, it is not always achieved after the learning.
Thus, we introduce the first term in (\ref{equ:fq}) to make $\textbf{f}_q$ as similar to $\textbf{y}_q$ as possible during the learning procedure.
\textbf{Objective function}:
Using all protein domains in the database $q=1,\dots,N$ as queries to learn $\mu$, we obtain the final objective function of
supervised multiple graph weighting and protein domain ranking as,
\begin{equation}
\label{equ:OFmu}
\begin{aligned}
\underset{F,\mu}{min}~ O(F,\mu)
&=\sum_{q=1}^N \left[ (\textbf{f}_q-\textbf{y}_q)^\top (\textbf{f}_q-\textbf{y}_q)
+\alpha \sum_{m=1}^M \mu_m (\textbf{f}_q^\top L_m \textbf{f}_q) \right]+ \beta ||\mu||^2\\
&=Tr \left[ (F-Y)^\top (F-Y)\right]+\alpha \sum_{m=1}^M \mu_m Tr(F^\top L_m F) + \beta ||\mu||^2\\
s.t.~&\sum_{m=1}^M \mu_m=1,~\mu_m\geq 0.
\end{aligned}
\end{equation}
where $F=[\textbf{f}_1,\cdots,\textbf{f}_N]$ is the ranking score matrix with the $q$-th column as the ranking score vector of $q$-th
protein domain, and $Y=[\textbf{y}_1,\cdots,\textbf{y}_N]$ is the relevance matrix with the $q$-th column as the relevance vector of the $q$-th
protein domain.
\textbf{Optimization}:
Because direct optimization of (\ref{equ:OFmu}) is difficult, we instead
adopt an iterative, two-step strategy to alternately optimize
$F$ and $\mu$. At each iteration, either $F$ or $\mu$
is optimized while the other is fixed, and then the
roles are switched. Iterations are repeated
until
a maximum number of iterations is
reached.
\begin{itemize}
\item \emph{Optimizing $F$}:
By fixing $\mu$, the analytic solution for
(\ref{equ:OFmu}) can be easily obtained by setting the
derivative of $O(F,\mu)$ with respect to $F$ to zero. That is,
\begin{equation}
\label{equ:F}
\begin{aligned}
\frac{\partial O(F,\mu)}{\partial F}&=2 (F-Y) + 2 \alpha \sum_{m=1}^M \mu_m (L_m F)=0\\
F&=( I+ \alpha \sum_{m=1}^M \mu_m L_m)^{-1} Y
\end{aligned}
\end{equation}
\item \emph{Optimizing $\mu$}:
By fixing $F$ and removing items irrelevant to $\mu$ from (\ref{equ:OFmu}),
the optimization
problem (\ref{equ:OFmu}) is reduced to,
\begin{equation}
\label{equ:LP}
\begin{aligned}
\underset{\mu}{min}~
&\alpha\sum_{m=1}^M \mu_m Tr (F^\top L_m F)+\beta ||\mu||^2\\
&=\alpha\sum_{m=1}^M \mu_m e_m + \beta \sum_{m=1}^M \mu_m^2\\
&=\alpha e^\top \mu + \beta \mu^\top \mu\\
s.t.~&\sum_{m=1}^M \mu_m=1,~\mu_m\geq 0.
\end{aligned}
\end{equation}
where $e_m=Tr (F^\top L_m F)$ and $e=[e_1,\cdots,e_M]^\top$.
The
optimization of (\ref{equ:LP}) with respect to the graph weight $\mu$
can then be solved as a standard quadratic programming (QP) problem \cite{Stivala2009}.
\end{itemize}
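Since the objective in (\ref{equ:LP}) equals $\beta\|\mu + \frac{\alpha}{2\beta}e\|^2$ up to an additive constant, the $\mu$-step can also be solved as a Euclidean projection of $-\frac{\alpha}{2\beta}e$ onto the simplex rather than with a general QP solver. A sketch of the full alternating loop under that observation (all names and parameter values are hypothetical):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {mu : sum(mu) = 1, mu >= 0}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u - (css - 1.0) / idx > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def multig_rank_offline(laplacians, Y, alpha=0.1, beta=1.0, T=20):
    """Alternate the F-step (linear solve) and the mu-step (projection)."""
    M, N = len(laplacians), Y.shape[0]
    mu = np.full(M, 1.0 / M)                  # uniform initial graph weights
    for _ in range(T):
        # F-step: F = (I + alpha * sum_m mu_m L_m)^{-1} Y
        L = sum(w * Lm for w, Lm in zip(mu, laplacians))
        F = np.linalg.solve(np.eye(N) + alpha * L, Y)
        # mu-step: min_mu alpha*e^T mu + beta*||mu||^2 on the simplex,
        # solved as the projection of -(alpha / (2*beta)) * e.
        e = np.array([np.trace(F.T @ Lm @ F) for Lm in laplacians])
        mu = project_simplex(-alpha / (2.0 * beta) * e)
    return F, mu

# Toy demo: a chain graph and a complete graph on 5 nodes.
def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

W_chain = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
W_full = np.ones((5, 5)) - np.eye(5)
F, mu = multig_rank_offline([laplacian(W_chain), laplacian(W_full)], np.eye(5))
```

Since each $e_m=Tr(F^\top L_m F)\geq 0$, the $\mu$-step shifts weight toward the graphs on which the current scores are smoothest, with $\beta$ pulling the weights back toward uniform.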
\textbf{Off-line algorithm}: The off-line $\mu$ learning algorithm is summarized as Algorithm \ref{alg:offline}.
\begin{algorithm}[h!]
\caption{MultiG-Rank: off-line graph weights learning algorithm.}
\label{alg:offline}
\begin{algorithmic}
\REQUIRE Candidate graph Laplacians set $\mathcal{T}$;
\REQUIRE SCOP type label set of database protein domains $\mathcal{L}$;
\REQUIRE Maximum iteration number $T$;
\STATE Construct the relevance matrix $Y=[y_{iq}]\in \{1,0\}^{N\times N}$ where $y_{iq}=1$ if $l_i=l_q$, and $y_{iq}=0$ otherwise;
\STATE Initialize the graph weights as $\mu_m^0=\frac{1}{M}$, $m=1,\cdots,M$;
\FOR{$t=1,\cdots,T$}
\STATE Update the ranking score matrix $F^t$ according to previous $\mu_m^{t-1}$ by (\ref{equ:F});
\STATE Update the graph weight $\mu^t$ according to updated $F^t$ by (\ref{equ:LP});
\ENDFOR
\STATE Output the graph weight $\mu=\mu^{T}$.
\end{algorithmic}
\end{algorithm}
\subsubsection *{On-line ranking regularized by multiple graphs}
Given a newly discovered protein domain submitted by a user as query $x_0$,
its SCOP type label $l_0$ will be unknown and the domain will not be in the database $\mathcal{D}=\{x_1,\cdots,x_N\}$.
To compute the ranking scores of $x_i\in \mathcal{D}$ to query $x_0$, we
extend the size of database to $N + 1$ by adding $x_0$ into the database and
then solve the ranking score vector
for $x_0$, which is defined as $\textbf{f} =[f_0,\cdots,f_N]^\top \in \mathbb{R}^{N+1}$, using (\ref{equ:Of}).
The parameters in (\ref{equ:Of}) are constructed as follows:
\begin{itemize}
\item \textbf{Laplacian matrix $L$}:
We first compute the $M$ graph weight
matrices $\{W_m\}_{m=1}^{M}\in \mathbb{R}^{(N+1)\times(N+1)}$ with their corresponding Laplacian matrices
$\{L_m\}_{m=1}^{M}\in \mathbb{R}^{(N+1)\times(N+1)}$ for the extended database $\{x_0,x_1,\cdots,x_N\}$.
Then with the graph weight $\mu$ learned by Algorithm \ref{alg:offline}, the new Laplacian matrix $L$
can be computed as in (\ref{equ:L}).
\emph{On-line graph weight computation}:
When a new query $x_0$ is added to the database, we calculate its $K$ nearest
neighbors in the database $\mathcal{D}$ and the corresponding weights $W_{0j}$ and $W_{j0},j=1,\cdots,N$.
If adding this new query to the database does not affect the
graph in the database space, the neighbors and weights $W_{ij},~i,j=1,\cdots,N,$
for the protein domains in the database are fixed and can be pre-computed off-line.
Thus, we only need to compute $N$ edge weights for each graph instead of $(N+1)\times (N+1)$.
\item \textbf{Relevance vector $y$}:
The relevance vector for $x_0$ is defined as $\textbf{y} =[y_0,\cdots,y_N]^\top \in \{1,0\}^{N+1} $ with only $y_{0}=1$ known and $y_i$,
$i=1,\cdots,N$ unknown.
\item \textbf{Matrix $U$}:
In this situation, $U$ is an $(N+1)\times (N+1)$ diagonal matrix with $U_{00}=1$ and $U_{ii}=0$, $i=1,\cdots,N$.
\end{itemize}
Then the ranking score vector $f$ can be solved as,
\begin{equation}
\label{equ:f}
\begin{aligned}
\textbf{f}=(U+\alpha L)^{-1} U \textbf{y}
\end{aligned}
\end{equation}
The on-line ranking algorithm is summarized as Algorithm \ref{alg:inline}.
\begin{algorithm}[h!]
\caption{MultiG-Rank: on-line ranking algorithm.}\label{alg:inline}
\begin{algorithmic}
\REQUIRE protein domain database $\mathcal{D}=\{x_1,\cdots,x_N\}$;
\REQUIRE Query protein domain $x_0$;
\REQUIRE Graph weight $\mu$;
\STATE Extend the database to size $(N+1)$ by adding $x_0$, and compute the $M$
graph Laplacians of the extended database;
\STATE Obtain multiple graph Laplacian $L$ by linear combination of $M$ graph Laplacians with weight $\mu$ as in (\ref{equ:L});
\STATE Construct the relevance vector $\textbf{y}\in \mathbb{R}^{(N+1)}$
where $y_{0}=1$ and diagonal matrix $U\in \mathbb{R}^{(N+1)\times (N+1)}$ with $U_{ii}=1$ if $i=0$ and 0 otherwise;
\STATE Solve the ranking vector $\textbf{f}$ for $x_0$ as in (\ref{equ:f});
\STATE Rank the protein domains in $\mathcal{D}$ according to the ranking scores $\textbf{f}$ in descending order.
\end{algorithmic}
\end{algorithm}
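The on-line cost saving noted above (computing only $N$ new edge weights per graph) amounts to bordering the precomputed $N\times N$ weight matrix with the query's row and column. A minimal sketch (names and toy values are illustrative):

```python
import numpy as np

def extend_weight_matrix(W_db, w_query):
    """Border a precomputed (N, N) database weight matrix with a query.

    w_query holds the N on-line-computed edge weights between the query
    and the database items (zero outside the query's K-NN set); the
    (N, N) database block is reused unchanged, so only N weights per
    graph are computed on-line instead of (N+1) x (N+1).
    The query is placed at index 0.
    """
    N = W_db.shape[0]
    W = np.zeros((N + 1, N + 1))
    W[0, 1:] = w_query
    W[1:, 0] = w_query
    W[1:, 1:] = W_db
    return W

# Toy example: 3 database items, query connected to item 1 only.
W_db = np.array([[0.0, 0.2, 0.0],
                 [0.2, 0.0, 0.5],
                 [0.0, 0.5, 0.0]])
W_ext = extend_weight_matrix(W_db, np.array([0.0, 0.9, 0.0]))
```

The extended Laplacians are then combined with the off-line-learned weights $\mu$ exactly as in (\ref{equ:L}).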
\subsection*{Protein domain database and query set}
We used the
SCOP 1.75A database \cite{ASTRAL2004} to construct the database and query set.
In the SCOP 1.75A database, there are
49,219 protein domain PDB entries and 135,643 domains,
belonging to 7 classes and 1,194 SCOP fold types.
\subsubsection*{Protein domain database}
Our protein domain database was selected from the \emph{ASTRAL SCOP 1.75A} set \cite{ASTRAL2004}, a subset of the SCOP (Structural Classification of Proteins) 1.75A database, which was released on March 15, 2012 \cite{ASTRAL2004}.
ASTRAL SCOP 1.75A 40\% \cite{ASTRAL2004}, a genetic domain sequence subset,
was used as our protein domain database $\mathcal{D}$.
This database was selected from SCOP 1.75A database so that the selected domains have less than 40\% identity to each other.
There are a total of 11,212 protein domains in the ASTRAL SCOP 1.75A 40\% database belonging to 1,196 SCOP fold types. The ASTRAL database is available on-line at
http://scop.berkeley.edu.
The number of protein domains in each SCOP fold varies from 1 to 402.
The distribution of protein domains with the different fold types is shown in Fig. \ref{fig:FigProNum}.
Many previous studies evaluated ranking performances
using the older version of the ASTRAL SCOP dataset (ASTRAL SCOP 1.73 95\%) that was released in 2008 \cite{Zhang2010}.
\begin{figure}[h!]
\centering
\caption{Distribution of protein domains with different fold types in the ASTRAL SCOP 1.75A 40\% database.}
\label{fig:FigProNum}
\end{figure}
\subsubsection*{Query set}
We also randomly selected 540 protein domains from the SCOP 1.75A database to construct a query set.
For each query protein domain that we selected we ensured that there was at least one protein domain belonging to the same SCOP fold type
in the
ASTRAL SCOP 1.75A 40\% database, so that for each query there was at least one ``positive'' sample in
the protein domain database.
However, it should be noted that the 540 protein domains in the query data set
were randomly selected and do not necessarily represent 540 different folds.
Here we call our query set the \emph{540 query} dataset because it contains 540 protein domains from the
SCOP 1.75A database.
\subsection*{Evaluation metrics}
A ranking procedure is run against the protein domain database using a query domain. A list of all matching protein domains
along with their ranking scores is returned.
We adopted the same evaluation metric framework as was described previously \cite{Zhang2010}, and
used the receiver operating characteristic (ROC) curve,
the area under the ROC curve (AUC),
and the recall-precision curve
to evaluate the ranking accuracy.
Given a query protein domain $x_q$ belonging to the SCOP fold $l_q$, a list of
protein domains is returned from the database by the on-line MultiG-Rank algorithm or other ranking
methods. For a database protein domain $x_r$ in the returned list, if its fold label $l_r$ is the same as that of
$x_q$, i.e., $l_r = l_q$, it is identified as a true positive (TP); otherwise, it is identified as a false positive (FP).
For a database protein domain $x_{r'}$ not in the returned list, if its fold label $l_{r'}= l_q$, it is identified
as a false negative (FN); otherwise, it is a true negative (TN). The true positive rate (TPR),
false positive rate (FPR), recall, and precision can then be computed based on the above statistics as follows:
\begin{equation}
\begin{aligned}
&TPR =\frac{ TP}{ TP+ FN},~&FPR= \frac{ FP}{ FP+ TN}\\
&recall=\frac{ TP}{ TP+ FN},~&precision =\frac{TP}{ TP+ FP}
\end{aligned}
\end{equation}
By varying the length of the returned list, different $TPR$, $FPR$, recall, and precision values are obtained.
\begin{description}
\item[ROC curve] Using $FPR$ as the abscissa and $TPR$ as the ordinate, the ROC curve can be plotted. For a
high-performance ranking system, the ROC curve should be as close to the top-left corner as possible.
\item[Recall-precision curve]
Using recall as the abscissa and precision as the ordinate, the recall-precision curve
can be plotted. For a high-performance ranking system, this curve should be close to the top-right
corner of the plot.
\item[AUC]
The AUC is computed as a single-figure measurement of the quality of an ROC curve. AUC is averaged over all the queries to evaluate the performances of different ranking methods.
\end{description}
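All of these metrics can be computed directly from the ranking scores by sweeping the returned-list length. A minimal sketch (the trapezoidal AUC computation and the toy data are illustrative):

```python
import numpy as np

def ranking_metrics(scores, relevant):
    """TPR, FPR, and precision at each returned-list length, plus AUC.

    scores   : (N,) ranking scores of the database protein domains
    relevant : (N,) boolean, True where the item shares the query's fold
    """
    order = np.argsort(-np.asarray(scores))          # descending scores
    rel = np.asarray(relevant, dtype=float)[order]
    tp = np.cumsum(rel)                              # TP in the top-k list
    fp = np.cumsum(1.0 - rel)                        # FP in the top-k list
    tpr = tp / rel.sum()                             # TPR equals recall here
    fpr = fp / (1.0 - rel).sum()
    precision = tp / (tp + fp)
    # Trapezoidal area under the ROC curve, anchored at (0, 0).
    x, y = np.r_[0.0, fpr], np.r_[0.0, tpr]
    auc = float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))
    return tpr, fpr, precision, auc

# Toy demo: a perfect ranking puts both relevant items first.
tpr, fpr, prec, auc = ranking_metrics([4.0, 3.0, 2.0, 1.0],
                                      [True, True, False, False])
```

A perfect ranking yields an AUC of 1.0, and a fully inverted ranking yields 0.0, which matches the ROC interpretation given above.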
\section*{Results}
We first compared our MultiG-Rank against
several popular graph-based ranking score learning methods for ranking protein {domains}.
We then evaluated the ranking performance of MultiG-Ranking against other
protein domain ranking methods using different protein domain comparison strategies.
Finally, a case study of a TIM barrel fold is described.
\subsubsection*{Comparison of MultiG-Rank against other graph-based ranking methods}
We compared our MultiG-Rank to two graph-based ranking methods, G-Rank and GT \cite{BaiGT2010}, and
against the pairwise protein domain comparison based ranking method proposed in \cite{Zhang2010} as a baseline method (Fig. \ref{fig:GraphROC}).
The evaluations were conducted with the 540 query domains from the \emph{540 query} set. The average
ranking performance was computed over these 540 query runs.
\begin{figure}[h!]
\centering
\caption{
Comparison of MultiG-Rank against other protein domain ranking methods.
Each curve represents a graph-based ranking score learning algorithm.
MultiG-Rank, the Multiple Graph regularized Ranking algorithm; G-Rank, Graph regularized Ranking; GT, graph transduction; Pairwise Rank, pairwise protein domain ranking method \cite{Zhang2010}.
(a) ROC curves of the different ranking methods;
(b) Recall-precision curves of the different ranking methods.
}\label{fig:GraphROC}
\end{figure}
The figure shows the ROC and the recall-precision curves obtained using the different graph ranking
methods.
As can be seen, the MultiG-Rank algorithm
significantly outperformed the other graph-based ranking algorithms;
the precision difference became larger
as the recall value increased, and then tended to converge as the precision approached zero (Fig. \ref{fig:GraphROC} (b)).
The G-Rank
algorithm outperformed GT in most cases;
however, both G-Rank and GT were much better than the
pairwise ranking which neglects the global distribution of the protein domain database.
The AUC results for the
different ranking methods on the \emph{540 query} set are tabulated in Table \ref{tab:AUCgraph}.
As shown,
the proposed MultiG-Rank consistently outperformed the
other three methods on the 540 query set against our protein domain
database, achieving a gain in AUC of 0.0155, 0.0210
and 0.0252 compared
with G-Rank, GT and Pairwise Rank, respectively.
Thus, we have shown that the ranking precision can be improved significantly using our algorithm.
We have made three observations from the results
listed in Table \ref{tab:AUCgraph}:
\begin{enumerate}
\item
G-Rank and GT produced similar performances
on our protein domain database,
indicating that there is no significant difference
in performance between the graph \emph{transduction} based and the
graph \emph{regularization} based single-graph ranking methods for unsupervised
learning of the ranking scores.
\item
Pairwise ranking produced the worst performance even though the method uses a carefully selected
similarity function as reported in \cite{Zhang2010}.
One reason for the poorer performance is that the similarity computed
by pairwise ranking
focuses on detecting statistically significant pairwise differences only, while more subtle sequence similarities are missed.
Hence, the variance among different fold types cannot be accurately estimated when the global distribution is neglected and only the protein domain pairs are considered.
Another possible reason is that pairwise ranking usually
produces a better performance when there is only a small number of
protein domains in the database; therefore, because our database contains a large
number of protein domains, the ranking
performance of the pairwise ranking method was poor.
\item
MultiG-Rank produced the best ranking performance, implying that both
the discriminant and geometrical information in the protein domain database are important for accurate ranking.
In MultiG-Rank, the geometrical information is estimated by multiple graphs and the discriminant information
is included by using the SCOP-fold type labels to learn the graph weights.
\end{enumerate}
\subsubsection*{Comparison of MultiG-Rank with other protein domain ranking methods}
We compared MultiG-Rank against several other popular protein domain ranking methods:
IR Tableau \cite{Zhang2010}, QP tableau \cite{Stivala2009}, YAKUSA \cite{YAKUSA2005}, and
SHEBA\cite{SHEBA1990}. For the query domains and the protein domain database we used the \emph{540 query} set and the ASTRAL SCOP 1.75A 40\% database, respectively.
The YAKUSA software source code was downloaded from http://wwwabi.snv.jussieu.fr/YAKUSA, compiled, and used for ranking.
We used the ``make Bank" shell script (http://wwwabi.snv.jussieu.fr/YAKUSA) which calls the phipsi program (Version 0.99 ABI, June 1993) to format the database. YAKUSA compares a query domain to a database and returns a list of the protein domains along with ranks and ranking scores.
We used the default parameters of YAKUSA to perform the ranking of the protein domains in our database.
The SHEBA software (version 3.11) source code was downloaded from https://ccrod.cancer.gov/confluence/display/CCRLEE/SHEBA,
compiled, and used for ranking.
The protein domain database was converted to ``.env" format and the
pairwise alignment was performed between each query domain and
each database domain to obtain the alignment scores.
First, we ran the different
protein domain ranking methods and computed their protein domain-protein domain similarities or dissimilarities.
An ordering technique was devised to detect
hits by taking the similarities
between data pairs as input.
For our MultiG-Rank, the ranking score was used as the measure of protein domain-protein domain similarity.
The
ranking results were evaluated based on
the ROC and recall-precision curves as shown in Fig. \ref{fig:RankingROC}.
The AUC values are
given in Table \ref{tab:AUCranking}.
\begin{figure}[h!]
\centering
\caption{
Comparison of the performances of protein domain ranking algorithms.
(a) ROC curves for
different field-specific protein domain ranking algorithms.
TPR, true positive rate; FPR, false positive rate.
(b) Recall-precision curves for different field-specific protein domain ranking algorithms.}
\label{fig:RankingROC}
\end{figure}
The results in Table \ref{tab:AUCranking} show that with the
advantage of exploring data characteristics from various
graphs, MultiG-Rank can achieve significant improvements
in the ranking outcomes; in particular, the AUC is increased from 0.9478
to 0.9730 by MultiG-Rank, which uses the same Tableau feature as IR Tableau.
MultiG-Rank also outperforms
QP Tableau, SHEBA, and YAKUSA; the AUC improves from
0.9364, 0.9421, and 0.9537, respectively, to 0.9730 with MultiG-Rank.
Furthermore, because of its better use of effective
protein domain descriptors, IR Tableau outperforms QP Tableau.
To evaluate the effect of using protein domain descriptors for ranking instead of
direct protein domain structure comparisons, we compared IR Tableau with YAKUSA
and SHEBA. The main differences between them are
that IR Tableau considers both protein domain feature extraction and comparison
procedures, while YAKUSA
and SHEBA compare only pairs of protein domains directly.
The quantitative results in Table \ref{tab:AUCranking} show
that, even by using the additional information from the
protein domain descriptor, IR Tableau does not outperform YAKUSA.
This result strongly suggests that ranking performance improvements are achieved mainly by
graph regularization and not by using the power of a protein domain descriptor.
Plots of TPR versus
FPR obtained using MultiG-Rank and various field-specific protein domain ranking methods as
the ranking algorithms are shown in Fig. \ref{fig:RankingROC} (a) and the recall-precision curves obtained using them are shown in Fig. \ref{fig:RankingROC} (b). As can be seen from the figure, in most cases, our MultiG-Rank algorithm
significantly outperforms the other protein domain ranking algorithms.
The performance differences get larger
as the length of the returned protein domain list increases.
The YAKUSA
algorithm outperforms SHEBA, IR Tableau and QP Tableau in most cases.
When only a few protein domains are returned to the query,
the numbers of both true positive and false positive samples are small, so that, in this case, all the algorithms yield
low FPR and TPR.
As the number of returned protein domains
increases, the TPR of all of the algorithms
increases. However, MultiG-Rank tends to converge
when the FPR is more than 0.3,
whereas the other ranking algorithms seem
to converge only when the FPR is more than 0.5.
\subsubsection*{Case Study of the TIM barrel fold}
Besides considering the results obtained for the whole database, we also
studied an important protein fold, the TIM beta/alpha-barrel fold (c.1).
The TIM barrel is a conserved protein fold that consists of eight $\alpha$-helices and
eight parallel $\beta$-strands that alternate along the peptide backbone \cite{TMI2010}.
TIM barrels are one of the most common protein folds.
In the ASTRAL SCOP 1.75A 40\% database, there are a total of 373 proteins that have TIM beta/alpha-barrel SCOP fold type domains, belonging to
33 different superfamilies and 114 families.
In this case study, the TIM beta/alpha-barrel domains from the query set
were used to rank
all the protein domains in the database.
The ranking was evaluated both at the fold level of the SCOP classification
and at lower levels of the SCOP classification (i.e., the superfamily and family levels).
To evaluate the ranking performance, we defined ``true positives''
at three levels:
\begin{description}
\item[Fold level] When the returned database protein domain is from the same fold type as the query protein domain.
\item[Superfamily level] When the returned database protein domain is from the same superfamily as the query protein domain.
\item[Family level] When the returned database protein domain is from the same family as the query protein domain.
\end{description}
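These three relevance levels can be read directly off SCOP sccs identifiers such as c.1.12.7 (class.fold.superfamily.family). A minimal illustrative sketch, assuming that dotted notation:

```python
def relevance_level(query_sccs, db_sccs):
    """Deepest of the three levels at which two SCOP sccs codes agree.

    Codes use the class.fold.superfamily.family notation, e.g. 'c.1.12.7'.
    Returns 'family', 'superfamily', 'fold', or None if even the fold differs.
    """
    q, d = query_sccs.split("."), db_sccs.split(".")
    if q[:4] == d[:4]:
        return "family"
    if q[:3] == d[:3]:
        return "superfamily"
    if q[:2] == d[:2]:
        return "fold"
    return None
```

A database hit that is a true positive at the family level is then automatically a true positive at the superfamily and fold levels as well, which is why the relevant sets shrink as one moves down the hierarchy.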
The ROC and the recall-precision plots of the protein domain ranking results of MultiG-Rank
for the query TIM beta/alpha-barrel domain
at the three levels are
given in Fig. \ref{fig:ResultsTMI}.
The graphs were learned using the labels at the family, superfamily and fold levels.
The results show that
the ranking performance at the fold level is better than at the other two levels;
however, the performances at the lower superfamily and family levels,
while not as good as at the fold level, are still reasonable.
One important factor is that, when relevance was measured at the lower levels, far fewer protein domains
in the database were relevant to the queries,
making it more difficult to retrieve the relevant protein domains precisely.
For example, a query belonging to the
family of phosphoenolpyruvate mutase/Isocitrate lyase-like (c.1.12.7) matched 373 database protein domains at the fold level because this fold
has 373 protein domains in the ASTRAL SCOP 1.75A 40\% database.
On the other hand, only 14 and 4 protein domains were relevant to the query at the superfamily and family levels respectively.
\begin{figure}[h!]
\centering
\caption{
{
Ranking results for the case study using the TIM beta/alpha-barrel domain as the query.
(a)
ROC curves of the ranking results for the TIM beta/alpha-barrel domain at the
fold, superfamily, and family levels.
TPR, true positive rate; FPR, false positive rate.
(b)
Recall-precision curves of the ranking results for the TIM beta/alpha-barrel domain
at the fold, superfamily, and family levels.}
}
\label{fig:ResultsTMI}
\end{figure}
\section*{Conclusion}
\label{sec:conclu}
The proposed MultiG-Rank method introduces a new paradigm that
extends the scope of existing graph-based ranking
techniques. The main advantage of MultiG-Rank lies in its ability
to learn a unified space of ranking scores for a protein domain database
from multiple graphs.
Such flexibility
is important in tackling complicated protein domain ranking problems,
because it allows more prior knowledge to be exploited when
analyzing a given protein domain database: a proper set of graphs can be chosen to better characterize
diverse databases, and a multiple graph-based ranking method can be adopted to appropriately
model the relationships among the protein domains.
Here, MultiG-Rank has been
evaluated comprehensively on a carefully selected subset of the ASTRAL SCOP 1.75 A protein domain database.
The promising experimental results that were obtained further
confirm the usefulness of our ranking score learning approach.
\bigskip
\section*{Competing interests}
The authors declare no competing interests.
\section*{Authors' contributions}
JW invented the algorithm, performed the experiments and
drafted the manuscript.
HB drafted the manuscript.
XG
supervised the study and made critical changes to the manuscript.
All the authors have
approved the final manuscript.
\section*{Acknowledgements}
\ifthenelse{\boolean{publ}}{\small}{}
The study was supported by grants from National Key Laboratory for Novel
Software Technology, China (Grant No. KFKT2012B17), 2011 Qatar Annual
Research Forum Award (Grant No. ARF2011), and King Abdullah University of
Science and Technology (KAUST), Saudi Arabia. We appreciate the valuable
comments from Prof. Yuexiang Shi, Xiangtan University, China.
\section{Introduction}\label{sec:intro}
The India-based Neutrino Observatory (INO)~\cite{ino} is a mega science project aimed at building a large underground laboratory to study atmospheric neutrinos. The observatory will be a multi-experiment facility, with the Iron CALorimeter (ICAL) detector as one of the experiments. The entire ICAL detector will sit under a rock cover of approximately 1 km and will be magnetized to determine the charge of muons produced by neutrinos. Neutrinos are fundamental particles belonging to the lepton family of the standard model of particle physics, in which they are assumed to be massless. However, recent experiments indicate that neutrinos oscillate among their flavors and therefore have finite but tiny masses that are yet to be measured. The determination of neutrino masses and oscillation parameters is one of the most important open problems in physics today. The INO-ICAL experiment, with resistive plate chambers as its active detector elements, is designed to address these problems by detecting the passage of charged particles, mainly muons produced in neutrino interactions with the ICAL detector.
\section{Resistive Plate Chambers}\label{sec:rpc}
Resistive Plate Chamber (RPC) detectors~\cite{Santonico1981, Santonico1988} are
gaseous detectors pioneered in the 1980s. Since then they have found use in various
high energy, nuclear and astroparticle physics experiments, as well as in applications such as cargo imaging and medical diagnostic imaging.
RPC detectors are known for their long-term stability and the large detection area they can cover. When operated in avalanche mode, these detectors also produce large pulse heights, in the range of 2--5 mV, without any external amplification. In addition, RPCs have excellent time resolution (a few $ns$) and good detection efficiency for minimum ionising particles. An RPC can also be used to build a fast, efficient and robust triggering system by exploiting its excellent time resolution, good efficiency and high granularity~\cite{Bencze}. RPCs are also simple and low-cost detectors. However, it is very important to choose the various design parameters, such as the electrode material, gas composition and environmental factors, carefully in order to fully exploit all the advantages of RPC detectors.
The electrode material plays a crucial
role in the functioning of RPC detectors, so it is very important to select it carefully. The electrode material should have high resistivity and
high surface smoothness to avoid localisation of excess charge and to prevent
alternative leakage paths during post-streamer recovery. Bakelite and glass are the most suitable and commonly used electrode materials for RPC detectors. We are exploring both of these materials for use in the INO RPC detectors. In this paper, we present the results of our studies of the glass electrodes only; the details of our bakelite studies can be found elsewhere~\cite{ourbakelite}.
We selected three types of glass, viz. Saint Gobain, Asahi and Modi, which are available in the local market, for our R\&D. In order to determine the electrical properties of the glasses,
we performed bulk resistivity and surface resistivity measurements.
To determine the smoothness of the surface, we performed Scanning Electron
Microscopy (SEM) and Atomic Force Microscopy (AFM) measurements. We also
performed X-Ray Diffractometry (XRD) measurements to determine the crystal
structure of the glass samples. An Ultra Violet (UV) transmittance test was
performed to find out the
reflectance and transmittance capabilities of these glasses.
The bulk resistivity of the Saint Gobain glass is found to be the highest, of the order of $8 \times 10^{12}$ $\Omega$-cm. Modi and Asahi have similar bulk resistivities, about 3 times smaller than that of Saint Gobain. The surface resistivities of Asahi and Modi are also similar, of the order of
$2\times10^{11}\,\Omega/\Box$ on average, while that of Saint Gobain is slightly smaller, about $1.6\times 10^{11}\,\Omega/\Box$ on average.
A rough inner surface of the electrodes can cause variations in the electric
field inside the RPC, so it is very important to have the inner surface as smooth as
possible for efficient operation of the RPC. Asahi is found to have the smoothest surface, followed by Saint Gobain and Modi. The XRD measurements show an amorphous structure for all the glass samples. We performed the transmittance test on a Perkin Elmer Lambda 3B spectrophotometer over the wavelength range 0--1200 $nm$ and found Saint Gobain to be the most reflective and Modi the least, with Asahi in between.
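As a rough illustration of the scale of these measurements, the bulk resistivity follows from the measured leakage current through a slab as $\rho = (V/I)\,A/t$. A sketch with purely illustrative numbers (not our measured values):

```python
def bulk_resistivity(voltage_v, current_a, area_cm2, thickness_cm):
    """Bulk resistivity rho = (V/I) * A / t, in Ohm-cm.

    Assumes a uniform leakage current through a slab of the given
    electrode area and thickness (standard two-electrode measurement).
    """
    resistance_ohm = voltage_v / current_a
    return resistance_ohm * area_cm2 / thickness_cm

# Hypothetical numbers: 1000 V across a 3 mm thick, 25 cm^2 glass sample
# drawing 10 nA of leakage current.
rho = bulk_resistivity(1000.0, 1.0e-8, 25.0, 0.3)   # ~8.3e12 Ohm-cm
```

A nanoampere-scale leakage current at a kilovolt thus corresponds to a resistivity of order $10^{12}$--$10^{13}$ $\Omega$-cm, comparable to the values quoted above.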
\section{RPC Characterisation}\label{sec:char}
Using the three types of glass of thickness 3~$mm$, we fabricated several prototype RPCs of size $30~cm\times 30~cm$. The glass plate electrodes were coated with a conductive layer of graphite on the outer surface. To maintain a uniform gas gap, polycarbonate button spacers of thickness 2~$mm$ were glued between the two electrode plates. The gas gaps were sealed on all sides using side spacers made of polycarbonate. A gas inlet and
a gas outlet nozzle were glued at diagonally opposite corners of the chamber. Honeycomb panels with strips of width 2.8 $cm$ were used as pickup panels. The sealed RPC chambers were then checked for gas leaks using the manometer technique.
The fabricated RPCs were characterised for efficiency, leakage current and noise rate under different operating conditions. We
characterised the RPCs for various gas compositions, environmental temperatures, atmospheric humidities and
discriminator thresholds to obtain the optimum parameters and maximise the detector performance. A muon telescope consisting of three scintillator detectors
connected to a NIM/VME Data Acquisition (DAQ) system was put in place, and the RPC under test was sandwiched between the scintillator detectors and read out by the DAQ system.
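The efficiency extracted from such a telescope is a simple counting estimate: the fraction of 3-fold scintillator triggers accompanied by an in-time RPC hit, with the usual binomial error. A sketch with illustrative numbers:

```python
def rpc_efficiency(n_coincidence, n_triggers):
    """Efficiency of the RPC under test, with its binomial error.

    n_triggers: number of 3-fold scintillator-telescope triggers;
    n_coincidence: triggers accompanied by an in-time RPC hit.
    """
    eff = n_coincidence / n_triggers
    err = (eff * (1.0 - eff) / n_triggers) ** 0.5
    return eff, err

# e.g. 9000 RPC hits out of 10000 muon triggers on the ~90% plateau
eff, err = rpc_efficiency(9000, 10000)   # 0.900 +/- 0.003
```

With $10^4$ triggers per voltage point, the statistical error on a 90\% plateau efficiency is at the few-per-mille level, so the plateau differences discussed below are dominated by systematic effects such as paddle alignment.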
\subsection{Variation of Gas Mixture Composition}
The RPCs were tested in avalanche mode with a gas mixture of $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) at a flow rate of 10 SCCM. This mixture is currently being used to operate the INO-ICAL RPC prototypes~\cite{Pramana}. However, it is important to determine whether
this composition is the optimum one. For this purpose we varied the relative fractions
of these gases and measured the efficiency, noise rate and leakage current of the RPCs. Fig.~\ref{fig:GasMix1} shows the efficiency, leakage current and noise rate
for the $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) gas composition. Fig.~\ref{fig:GasMix2} and Fig.~\ref{fig:GasMix3} show the corresponding results for the gas compositions $R134a$ (67.7\%), $C_{4}H_{10}$ (32.0\%), $SF_{6}$ (0.3\%) and $R134a$ (95.0\%), $C_{4}H_{10}$ (5.0\%), $SF_{6}$ (0.0\%) respectively. From these figures it can be seen that the efficiencies
for all these gas compositions are similar for all types of glasses. In all the chambers the plateau efficiency is about 90\%; the residual inefficiency is due to misalignment of the triggering paddles, and with proper alignment the efficiency increases up to 97\%. As expected, the noise rate is maximum in the absence of $SF_{6}$~\cite{gas}. The noise rate is also maximum for Modi and
minimum for Asahi. The leakage current is much higher for Modi than for Asahi and Saint
Gobain, which both have reasonable leakage currents.
\begin{figure}[H]
\begin{minipage}{\linewidth}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Eff_GasMix1.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Current_GasMix1.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Noise_GasMix1.pdf}
\end{minipage}
\end{minipage}
\caption{Efficiency, Leakage Current and Noise rates for the $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) gas mixture.}
\label{fig:GasMix1}
\end{figure}
\begin{figure}[H]
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Eff_GasMix2.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Current_GasMix2.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Noise_GasMix2.pdf}
\end{minipage}
\end{minipage}
\caption{Efficiency and Leakage Current for the $R134a$ (67.7\%), $C_{4}H_{10}$ (32.0\%), $SF_{6}$ (0.3\%) Gas Mixture.}
\label{fig:GasMix2}
\end{figure}
\begin{figure}[H]
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Eff_GasMix3.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Current_GasMix3.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Noise_GasMix3.pdf}
\end{minipage}
\end{minipage}
\caption{Efficiency and Leakage Current for $R134a$ (95.0\%), $C_{4}H_{10}$ (5.0\%), $SF_{6}$ (0.0\%) Gas Mixture.}
\label{fig:GasMix3}
\end{figure}
\subsection{Variation of Environmental Temperature and Humidity}
Operational conditions such as the environmental temperature and moisture also affect the performance
of the RPC detectors, so it is very important to find suitable environmental
conditions in which to operate the RPC in order to optimise its performance. We varied the environmental
temperature and humidity and measured the efficiency, noise rate and
leakage current of the RPCs. Fig.~\ref{fig:Temp} shows the effect of the temperature and humidity variation on the efficiency,
current and noise rate for the RPC made of Asahi glass. The Saint Gobain and Modi RPCs show similar behaviour.
\begin{figure}[H]
\begin{minipage}{\linewidth}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Temp_Eff_GA2.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Temp_Current_GA2.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Temp_Noise_GA2.pdf}
\end{minipage}
\end{minipage}
\caption{Efficiency, Leakage Current and Noise rates as a function of temperature and humidity for $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) gas mixture. }
\label{fig:Temp}
\end{figure}
\subsection{Variation of Thresholds}
The discriminator threshold value set to reduce the systematic noise may
also affect the performance of the detector by suppressing the signal. To study this, we
varied the threshold values for the RPC and the scintillator paddles and measured the
efficiencies and noise rates. Fig.~\ref{fig:Threshold} shows the effect of the threshold variation on the efficiency and noise rate for the Asahi glass RPC. We did not
observe any significant dependence of the efficiency on the threshold, whereas the noise rate decreases as the threshold is increased
from 30~mV to 50~mV. However, further increasing the threshold
to 70~mV did not affect the noise rate much, so we fixed the threshold
at 50~mV for operating the RPCs.
\begin{figure}[H]
\begin{minipage}{\linewidth}
\centering
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Threshold_Eff_GA2.pdf}
\end{minipage}
\begin{minipage}{0.3\linewidth}
\includegraphics[width=5cm,height=5cm]{Figs/Threshold_Noise_GA2.pdf}
\end{minipage}
\end{minipage}
\caption{Efficiency and Noise Rate as a function of Threshold variation for $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) gas mixture.}
\label{fig:Threshold}
\end{figure}
\section{Results and Conclusions}\label{sec:res}
The INO-ICAL experiment will require approximately 28,000 RPCs of size $2~m \times 2~m$ to be built. Because of this large number of RPCs, it is very important to perform thorough R\&D on all aspects of the detector performance before finalising the various
parameters. We procured three types of glass available in the local
market, viz. Asahi, Saint Gobain and Modi, for our R\&D, and performed various studies to assess
their material and electrical properties. We found Asahi to be the best in terms of surface resistivity as well as smoothness, and Saint Gobain the best in terms of bulk resistivity and transmittance. Asahi and Saint Gobain are comparable in terms of smoothness and transmittance, whereas Modi lags behind in almost all the parameters.
From the detector performance studies, we found that all three makes give almost similar
efficiencies. However, the Asahi RPC has the lowest noise and leakage current,
with Saint Gobain comparable to Asahi, for the $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) gas composition. From the gas composition variation studies we found that the efficiency is almost independent of the gas composition, and the behaviour is
similar for all types of glasses. The gas composition of $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) was found to give the optimum efficiency and noise rate. The variation of temperature and
humidity showed that with an increase in temperature from $17^{\circ}C$ to $21^{\circ}C$,
with the relative humidity varying between approximately 40 and 60\%, the noise increases considerably; however, a further increase in temperature to $23^{\circ}$C did not affect the noise much.
The leakage current also increases slightly with the increase in temperature from $17^{\circ}C$ to
$21^{\circ}C$, but a further increase in temperature to $23^{\circ}C$ did not have any impact
on the current. We did not find any considerable effect of temperature or relative
humidity on the efficiency. On the
other hand, we found that humidity has an unexpected effect on the current: higher humidity
increased the currents considerably, and at 67\% relative
humidity we observed the anomalous behaviour in the current shown in Fig.~\ref{fig:Temp} (centre). We are investigating this unexpected
behaviour further.
From the discriminator threshold variation studies, we found that increasing the threshold from 30~mV
to 50~mV decreased the systematic noise considerably, but further increasing the
threshold from 50~mV to 70~mV had no effect on the noise rate. Therefore, we set the threshold to 50~mV for operating the RPCs.
In conclusion, we found that Asahi glass is best suited for the INO-ICAL RPCs in terms of
almost all the parameters that we studied. Saint Gobain, however, is not far behind
and appears to be equally good for the INO-ICAL RPCs. The gas composition of $R134a$ (95.0\%), $C_{4}H_{10}$ (4.5\%), $SF_{6}$ (0.5\%) appears to be optimum. It is important to maintain the environmental temperature around $21^{\circ}C$ and to keep the atmospheric relative humidity under control when operating the INO-ICAL RPCs. All our results are based on small RPCs of size $30~cm \times 30~cm$; therefore, to complete our studies, we are in the process of fabricating full-size $2~m \times 2~m$ RPCs to confirm our measurements and findings.
\acknowledgments
We acknowledge the financial support from Department of Science and Technology (DST) and University of Delhi $R\&D$ grants for carrying out these studies. Daljeet Kaur would like to thank Council of
Scientific and Industrial Research (CSIR) for the financial support. We would also like
to thank the INO group at Tata Institute of Fundamental Research (TIFR) for providing
some of the raw materials for detector construction. We also thank Prof. J. P. Singh of IIT Delhi for his help in getting AFM measurements done at IIT Delhi.
\section{Properties of the operators $\{\Pi_{\mu}\}$}
Here we show that the operators in Eq.~\eqref{eq:pilambda} are mutually orthogonal projectors that sum to the identity. To this end we use the \emph{grand orthogonality theorem} (a consequence of Schur's lemma) from representation theory.
\begin{theorem}[Grand orthogonality theorem] (See \cite[Sec. 2.2.]{serre1977linear})
Given irreducible representations $\rho_\mu$ and $\rho_\nu$, we have that:
\begin{align}
\sum_{g \in S_n} \rho_\mu(g)^\dag_{ij} \rho_\nu(g)_{k\ell} &= \frac{n!}{d_\mu} \delta_{ik} \delta_{j \ell} \delta_{\mu \nu}.
\end{align}
\label{thm:got}
\end{theorem}
Using Theorem \ref{thm:got} we obtain the following identity for any fixed $h \in S_n$.
\begin{align*}
\sum_{g \in S_n} \chi_\nu(g) \chi_\mu(g^{-1} h) &= \sum_{g \in S_n} \sum_{i,j} \rho_\nu(g)_{ii} \rho_\mu(g^{-1}h)_{jj} \\ &= \sum_{g \in S_n} \sum_{ijk}\rho_\nu(g)_{ii} \rho_\mu(g^{-1})_{jk} \rho_\mu(h)_{kj} \\ &= \sum_{ijk} \left( \sum_{g \in S_n} \rho_\nu(g)_{ii} \rho_\mu(g)^\dag_{jk} \right) \rho_\mu(h)_{kj} \\
&= \delta_{\mu \nu} \sum_{ijk} \delta_{ij} \delta_{ik} \frac{n!}{d_\nu} \rho_\mu(h)_{kj} = \delta_{\mu \nu} \sum_i \frac{n!}{d_\nu} \rho_\mu(h)_{ii} = \delta_{\mu \nu} \frac{n!}{d_\nu} \chi_\mu(h).
\end{align*}
It follows that $\{\Pi_{\mu}\}$ are mutually orthogonal projectors.
\begin{align}
\Pi_\nu \Pi_\mu &= \frac{d_\mu d_\nu}{(n!)^2} \sum_{gh} \chi_\nu(g) \chi_\mu(h) \rho(gh) = \frac{d_\mu d_\nu}{(n!)^2} \sum_{g \ell} \chi_\nu(g) \chi_\mu(g^{-1} \ell) \rho(\ell) = \delta_{\mu \nu} \frac{d_\mu}{n!} \sum_\ell \chi_\mu(\ell) \rho(\ell) = \delta_{\mu \nu} \Pi_\mu.
\end{align}
Using the fact that $d_\mu = \chi_\mu(e)$, where $e \in S_n$ is the identity, we obtain
\begin{align}
\sum_{\mu \vdash n} \Pi_\mu &= \sum_{g \in S_n} \frac{1}{n!} \rho(g) \sum_{\mu \vdash n} d_\mu \chi_\mu(g) = \sum_{g \in S_n} \frac{1}{n!} \rho(g) \sum_{\mu \vdash n} \chi_\mu(e) \chi_\mu(g).
\label{eq:sumpi}
\end{align}
The column-orthogonality of characters (stated below) implies that
\begin{align}
\sum_\mu \chi_\mu(e) \chi_\mu(g) &= n! \delta_{eg},
\end{align}
and substituting this into Eq.~\eqref{eq:sumpi} gives $\sum_\mu \Pi_\mu = I $.
\begin{lemma}[Column-orthogonality of characters] (See \cite[Thm 16.4]{alperin2012groups})
For any $h,g\in S_n$ we have
\begin{align}
\sum_{\mu \vdash n} \chi_\mu(h) \chi_\mu(g) &= \begin{cases}
|Z(g)| &\text{ if $h$ and $g$ are conjugate.}\\
0 & \text{ otherwise. }
\end{cases}
\end{align}
where $Z(g)$ is the centralizer of $g$ in $S_n$ (the set of all group elements that commute with $g$).
\end{lemma}
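These properties can be verified numerically in a small case. The sketch below builds the projectors $\Pi_\mu = \frac{d_\mu}{n!}\sum_g \chi_\mu(g)\rho(g)$ for $n=3$ in the regular representation of $S_3$ (where $\rho(g)$ permutes basis vectors $|h\rangle \mapsto |gh\rangle$; the characters of $S_n$ are real, so no conjugation is needed) and checks idempotence, mutual orthogonality and completeness:

```python
from itertools import permutations

# Regular representation of S_3: rho(g) maps basis vector |h> to |gh>.
# Irreps of S_3: trivial, sign, and the 2-dim standard irrep, whose
# character is chi(g) = (#fixed points of g) - 1.
G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
DIMS = {"trivial": 1, "sign": 1, "standard": 2}

def compose(g, h):                      # (gh)(x) = g(h(x))
    return tuple(g[h[x]] for x in range(3))

def sign(g):                            # parity via inversion count
    inv = sum(g[i] > g[j] for i in range(3) for j in range(i + 1, 3))
    return 1 if inv % 2 == 0 else -1

def character(mu, g):
    if mu == "trivial":
        return 1
    if mu == "sign":
        return sign(g)
    return sum(g[i] == i for i in range(3)) - 1   # standard irrep

def projector(mu):
    # Pi_mu = (d_mu / |G|) * sum_g chi_mu(g) rho(g)
    P = [[0.0] * 6 for _ in range(6)]
    for g in G:
        c = DIMS[mu] / 6.0 * character(mu, g)
        for h in G:                     # rho(g): column idx[h] -> row idx[gh]
            P[idx[compose(g, h)]][idx[h]] += c
    return P

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(6)) for j in range(6)]
            for i in range(6)]

projs = {mu: projector(mu) for mu in DIMS}
```

In the regular representation each $\Pi_\mu$ has trace $d_\mu^2$, so the traces $1,1,4$ add up to $|S_3| = 6$, as they must for $\sum_\mu \Pi_\mu = I$.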
\section{Beals' quantum Fourier transform over the symmetric group}
For completeness, here we review Beals' efficient algorithm for the quantum Fourier transform over the symmetric group \cite{beals1997quantum} (see also \cite{Moore03} for generalizations to other nonabelian groups).
Let $n\geq 2$ be an integer and consider the symmetric group $S_n$ of permutations on $n$ objects. Also let $T=\{\tau_1,\tau_2,\ldots, \tau_n\}$ be a transversal of the left cosets of $S_{n-1}$ in $S_n$. Here $S_{n-1}$ is regarded as the subgroup of $S_n$ that does not permute the $n$-th object, and transversality means that $S_n$ is the disjoint union of the cosets $\tau_1S_{n-1},\ldots,\tau_n S_{n-1}$.
It will be convenient to fix some notation and conventions concerning the function whose Fourier transform we are interested in. We shall fix a unitary matrix representation of each irreducible representation of $S_n$. In particular, we choose the so-called Young Orthogonal representation (more on this below) and write $\rho_{\lambda}(\pi)$ for the matrix corresponding to the irreducible representation labeled by $\lambda \vdash n$ evaluated at $\pi\in S_n$, and $d_{\lambda}$ for its dimension.
For a function $f:S_n \rightarrow \mathbb{C}$ we have
\begin{equation}
\mathrm{QFT}_n\sum_{\sigma\in S_n} f(\sigma)|\sigma\rangle=\sum_{\omega \vdash n} \sum_{i,j=1}^{d_\omega} (\hat{f}(\omega))_{ij} |\omega, i,j\rangle,
\label{eq:ft11}
\end{equation}
where
\begin{equation}
\hat{f}(\omega)=\sqrt{\frac{d_{\omega}}{n!}} \sum_{\sigma\in S_n} f(\sigma)\rho_\omega(\sigma).
\label{eq:ft22}
\end{equation}
For ease of notation, in this section we include the normalization factor $\sqrt{\frac{d_{\omega}}{n!}}$ in Eq.~(\ref{eq:ft22}) rather than in Eq.~\eqref{eq:ft11}, which differs from our convention in the main text (cf. Eqs.~(\ref{eq:ft1},\ref{eq:ft2})). However, the quantum Fourier transform $\mathrm{QFT}_n$ is defined in exactly the same way.
Note that the basis vectors in the Fourier basis can be labeled by triples from the set $\Omega=\{(\omega, i,j): \omega\vdash n, 1\leq i,j\leq d_{\omega}\}$, since $|\Omega|=\sum_{\omega\vdash n} d_{\omega}^2 =|S_n|$.
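The counting identity $\sum_{\omega\vdash n} d_{\omega}^2 = |S_n|$ can be checked directly, since the dimensions $d_\omega$ are given by the hook length formula. A small sketch:

```python
from math import factorial

def partitions(n, maxpart=None):
    """All partitions of n, as weakly decreasing tuples."""
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        return [()]
    return [(k,) + rest
            for k in range(min(n, maxpart), 0, -1)
            for rest in partitions(n - k, k)]

def irrep_dim(la):
    """Dimension d_lambda of the S_n irrep, via the hook length formula."""
    n = sum(la)
    hook_product = 1
    for i, row_len in enumerate(la):
        for j in range(row_len):
            arm = row_len - j - 1                      # boxes to the right
            leg = sum(1 for r in la[i + 1:] if r > j)  # boxes below
            hook_product *= arm + leg + 1
    return factorial(n) // hook_product
```

For $n=5$, for instance, the seven partitions give dimensions $1,4,5,6,5,4,1$, and $1+16+25+36+25+16+1 = 120 = 5!$.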
Since the left cosets of $S_{n-1}$ partition $S_n$, we may write
\begin{equation}
f(g)=\sum_{j=1}^{n} F^{j}(g), \qquad \qquad F^{j}(g)\equiv \begin{cases} f(g), & \text{if } g\in \tau_j S_{n-1},\\ 0, & \text{otherwise.} \end{cases}
\label{eq:defF}
\end{equation}
Also define $f_j:S_{n-1}\rightarrow \mathbb{C}$ for $1\leq j\leq n$ by
\begin{equation}
f_j(h)\equiv f(\tau_j h) \qquad \quad h\in S_{n-1}.
\label{eq:deff}
\end{equation}
Young's orthogonal representation has a certain \textit{adapted} property that allows us to express Eq.~\eqref{eq:ft22} in terms of Fourier transforms of the functions $f_j$ for $1\leq j\leq n$ in a simple way. In particular,
\begin{equation}
\hat{f}(\lambda)=\sum_{k=1}^{n} \rho_{\lambda}(\tau_k) \bigoplus_{\lambda^{-}\in \Phi(\lambda) } \sqrt{\frac{d_{\lambda}}{n\cdot d_{\lambda^{-}}}}\cdot \hat{f_k} (\lambda^{-}),
\label{eq:dirsum}
\end{equation}
where
\[
\Phi(\lambda)=\{\lambda^{-}\vdash (n-1) : \lambda^{-}\leq \lambda\}
\]
is the set of partitions of $n-1$ whose diagram differs from $\lambda$ in exactly one box. Note that the Fourier transforms $\hat{f_k}$ appearing on the right hand side of Eq.~\eqref{eq:dirsum} are over the group $S_{n-1}$.
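The set $\Phi(\lambda)$ is easy to enumerate: a box is removable from row $i$ of the Young diagram exactly when the resulting sequence of row lengths is still weakly decreasing. A sketch:

```python
def phi(la):
    """Phi(lambda): partitions of n-1 obtained by removing one box from la.

    la is a partition as a weakly decreasing tuple; a box is removable
    from row i iff that row is strictly longer than the row below it
    (or is the last row).
    """
    out = []
    for i in range(len(la)):
        if i == len(la) - 1 or la[i] > la[i + 1]:
            shorter = la[:i] + (la[i] - 1,) + la[i + 1:]
            out.append(tuple(p for p in shorter if p > 0))
    return out
```

For example, $\Phi((3,2,2)) = \{(2,2,2),\,(3,2,1)\}$, and the direct sum in Eq.~\eqref{eq:dirsum} runs over exactly these smaller irreps.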
To describe Beals' QFT algorithm it will be convenient to first define some unitary transformations that are used in the construction. In the following we shall assume that we have enough qubits to encode certain classical strings as computational basis states.
Let $W$ denote the unitary which implements a classical reversible circuit that, given a permutation $\pi\in S_n$, computes its factorization:
\[
W|\pi\rangle=|\tau_j\rangle|h\rangle,
\]
where $j\in [n]$ and $h\in S_{n-1}$
satisfy $\pi = \tau_j h$. Note that this factorization is unique.
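The factorization computed by $W$ is straightforward for a concrete choice of transversal. The sketch below (illustrative only; the text does not fix a particular transversal) takes $\tau_j$ to be the transposition exchanging $j$ and $n$, with $\tau_n$ the identity, so that $j$ is determined by $j = \pi(n)$ and $h = \tau_j^{-1}\pi$ fixes $n$; permutations are 0-indexed tuples:

```python
def factorize(pi):
    """Return (j, tau_j, h) with pi = tau_j h and h fixing the last point.

    tau_j is the transposition of positions j and n-1 (the identity when
    j = n-1), so j = pi(n-1) and h = tau_j^{-1} pi (tau_j is an involution).
    Composition convention: (gh)(x) = g(h(x)).
    """
    n = len(pi)
    j = pi[n - 1]                               # image of the last point
    tau = list(range(n))
    tau[j], tau[n - 1] = tau[n - 1], tau[j]     # the transposition (j, n-1)
    h = tuple(tau[pi[x]] for x in range(n))     # h = tau^{-1} pi
    return j, tuple(tau), h
```

Uniqueness of the factorization is immediate, since $h(n-1) = \tau_j(\pi(n-1)) = \tau_j(j) = n-1$ forces $h \in S_{n-1}$ and the cosets $\tau_j S_{n-1}$ are disjoint.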
For $\pi \in S_n$ let $M(\pi)$ be a unitary such that
\[
M(\pi) \sum_{\lambda\vdash n}\sum_{p,q=1}^{d_{\lambda}} (A_{\lambda})_{p,q}|\lambda,p,q\rangle= \sum_{\lambda\vdash n} \sum_{p,q=1}^{d_{\lambda} }(\rho_{\lambda}(\pi)A_{\lambda})_{p,q}|\lambda,p,q\rangle
\]
Here $(A_{\lambda})_{p,q}$ are complex coefficients. Note that this can be implemented via a controlled-$\rho_{\lambda}(\pi)$ operation on the $p$ register (where the control is $\lambda$).
The unitary $M(\pi)$ acts trivially outside the subspace spanned by the basis vectors $\{|\lambda,p,q\rangle: \lambda \vdash n,\; p,q\in [d_{\lambda}]\}$.
For each irreducible representation $\sigma$ of $S_{n-1}$ there is a set of irreps $\lambda$ of $S_n$ such that the restriction of $\rho_{\lambda}$ to $S_{n-1}$ contains $\sigma$. These correspond to the partitions $\lambda$ that differ from $\sigma$ by adding one box. Moreover, each matrix element $p,q$ of $\rho_{\sigma}$ (where $p,q\leq d_{\sigma}$) is identified with a unique matrix element $p',q'$ of $\rho_{\lambda}$ (Here $p',q'$ depend on $p,q$ as well as $\lambda$, though our notation suppresses these dependencies).
Let $U$ be a unitary such that for any $\sigma \vdash (n-1)$ and $p,q\leq d_{\sigma}$ we have
\begin{equation}
U|0\rangle|\sigma,p,q\rangle=|0\rangle\sum_{\lambda \vdash n: \sigma \leq \lambda} \sqrt{\frac{d_{\lambda}}{n\cdot d_{\sigma}}}|\lambda,p',q'\rangle
\label{eq:U}
\end{equation}
and such that
\begin{equation}
U|\tau_i\rangle|\sigma,p,q\rangle=|\tau_i\rangle|\sigma,p,q\rangle \quad \qquad i\in [n].
\label{eq:pkinvariant}
\end{equation}
It is possible to construct such a unitary because the states appearing on the RHS of Eq.~\eqref{eq:U} for different values of $\sigma,p,q$ are orthonormal. However the action of $U$ in the rest of the Hilbert space is constrained by unitarity. The following lemma describes a property of $U$ that follows from this constraint.
\begin{lemma}
Let $P$ be the projector onto the subspace spanned by the basis states that encode partitions of $n-1$, that is, onto
\[
\mathrm{span}\{|w\rangle|\sigma,p,q\rangle: w\in \{0,\tau_1,\ldots, \tau_n\}, \sigma \vdash (n-1), 1\leq p,q\leq d_{\sigma}\}.
\]
Suppose $g:S_n\rightarrow \mathbb{C}$ is any function such that $g(\pi)=0$ for all $\pi\in S_{n-1}$ (regarded as the subgroup of $S_n$ that fixes $n$). Then $PU^{\dagger}|0\rangle|\hat{g}\rangle=0$.
\label{lem:orthog}
\end{lemma}
\begin{proof}
Below we write $P=\sum_{k=0}^{n}P_k$, where $P_0$ projects onto $\mathrm{span}\{|0\rangle|\sigma,p,q\rangle: \sigma \vdash (n-1), 1\leq p,q\leq d_{\sigma}\}$ and, for $k\in [n]$, $P_k$ projects onto $\mathrm{span}\{|\tau_k\rangle|\sigma,p,q\rangle: \sigma \vdash (n-1), 1\leq p,q\leq d_{\sigma}\}$.
First suppose that $\hat{r}$ is the Fourier transform of some (arbitrary) function $r:S_{n-1}\rightarrow \mathbb{C}$. Define $R:S_n\rightarrow \mathbb{C}$ such that $R(\pi)=r(\pi)$ for all $\pi\in S_{n-1}$, and $R(\pi)=0$ otherwise. Then from Eq.~\eqref{eq:U} we have
\begin{equation}
|0\rangle\sum_{\sigma\vdash n-1,p,q} (\hat{r}(\sigma))_{pq}|\sigma,p,q\rangle=U^{\dagger}|0\rangle\sum_{\lambda\vdash n, p',q'} (\hat{R}(\lambda))_{p',q'}|\lambda,p',q'\rangle.
\label{eq:rR}
\end{equation}
Now consider a function $g:S_n\rightarrow \mathbb{C}$ as in the statement of the Lemma. Since
$g(\pi)=0$ for all $\pi\in S_{n-1}$ we have $\langle g|R\rangle=0$; since the Fourier transform is unitary, this gives $\langle \hat{g}|\hat{R}\rangle=0$, and therefore
$U^{\dagger}|0\rangle|\hat{g}\rangle$ is orthogonal to any state of the form on the LHS of Eq.~\eqref{eq:rR}. Noting that the LHS of Eq.~\eqref{eq:rR} is an arbitrary state in the image of $P_0$, we see that
\begin{equation}
P_0U^{\dagger}|0\rangle|\hat{g}\rangle=0.
\label{eq:p0}
\end{equation}
Then
\begin{align*}
PU^{\dagger}|0\rangle|\hat{g}\rangle&=(\sum_{k=1}^{n}P_k)U^{\dagger}|0\rangle|\hat{g}\rangle+P_0U^{\dagger}|0\rangle|\hat{g}\rangle\\
&=(\sum_{k=1}^{n}P_k)U^{\dagger}|0\rangle|\hat{g}\rangle\\&=(\sum_{k=1}^{n}P_k)|0\rangle|\hat{g}\rangle\\
&=0\\
\end{align*}
where in the first step we used Eq.~\eqref{eq:p0} and in the following step we used Eq.~\eqref{eq:pkinvariant}.
\end{proof}
Our definitions of $U, f_k, F^k$ and $M(\tau_k)$ are chosen so that the following holds.
\begin{claim}
For $1\leq k\leq n$ we have $M(\tau_k)U|0\rangle|\hat{f_k}\rangle=|0\rangle|\hat{F^k}\rangle$.
\label{claim:Fkfk}
\end{claim}
\begin{proof}
Follows by combining Eqs.~(\ref{eq:dirsum}, \ref{eq:U}, \ref{eq:defF}, \ref{eq:deff}).
\end{proof}
Finally, for $k\in [n]$ let $V_k$ be the unitary that acts as
\begin{align}
V_k|0\rangle|\sigma,p,q\rangle &=|\tau_k\rangle|\sigma,p,q\rangle \\
V_k|\tau_k\rangle|\sigma,p,q\rangle &=|0\rangle|\sigma,p,q\rangle,
\end{align}
for all $\sigma\vdash n-1$ and $1\leq p,q\leq d_{\sigma}$,
and which acts as the identity on all other computational basis states.
The quantum Fourier transform over the symmetric group is described in Algorithm \ref{alg:qft}.
\begin{algorithm}[H]
\caption{Implements a unitary $\mathrm{QFT_n}$\label{alg:qft}}
\hspace*{\algorithmicindent} \hspace{-22pt}\textbf{Input:} A state $|f\rangle=\sum_{g\in S_n} f(g)|g\rangle$.\\
\hspace*{\algorithmicindent} \hspace{-27pt} \textbf{Output:} The state $|\hat{f}\rangle=\mathrm{QFT}_n|f\rangle$ corresponding to the Fourier transform of $f$ over $S_n$.\\
\begin{algorithmic}[1]
\State{$|\phi\rangle\leftarrow W|\phi\rangle$} \Comment{$|\phi\rangle=\sum_{j=1}^{n} |\tau_j\rangle\sum_{h\in S_{n-1}} f_j(h)|h\rangle$}.
\State{$|\phi\rangle \leftarrow (I\otimes \mathrm{QFT}_{n-1})|\phi\rangle$} \Comment{$|\phi\rangle=\sum_{j=1}^{n} |\tau_j\rangle \sum_{\sigma \vdash n-1}\sum_{p,q} (\hat{f_j}(\sigma))_{pq} |\sigma,p,q\rangle$}
\For{$k=1$ to $n$}
\State{$|\phi\rangle\leftarrow M(\tau_k)^{-1}|\phi\rangle$
}
\State{$|\phi\rangle\leftarrow UV_kU^{\dagger}|\phi\rangle$}
\State{$|\phi\rangle\leftarrow M(\tau_k)|\phi\rangle$ }
\EndFor \Comment{$|\phi\rangle=|0\rangle|\hat{f}\rangle$ (see Lemma \ref{lem:forloop})}
\State{\textbf{return} the second register $|\hat{f}\rangle$}.
\end{algorithmic}
\end{algorithm}
The following lemma shows that the algorithm performs the quantum Fourier transform, as claimed.
\begin{lemma}
For $0\leq k\leq n$, the state after the $k$th iteration of the for loop in Algorithm \ref{alg:qft} is
\begin{equation}
|\phi_k\rangle=|0\rangle\sum_{j=1}^{k}|\hat{F^{j}}\rangle+\sum_{j=k+1}^{n} |\tau_j\rangle |\hat{f_j}\rangle.
\label{eq:phik}
\end{equation}
When $k=0$ this describes the state before the first iteration (in this case the first term is not present). For $k=n$ we have $|\phi_n\rangle=|0\rangle\sum_{j=1}^{n}|\hat{F^{j}}\rangle=|0\rangle|\hat{f}\rangle$.
\label{lem:forloop}
\end{lemma}
\begin{proof}
By induction on $k$. The base case $k=0$ corresponds to the initial state $\sum_{j=1}^{n} |\tau_j\rangle |\hat{f_j}\rangle$.
Now suppose the state after the $(k-1)$-th iteration is given by $|\phi_{k-1}\rangle$ as described by Eq.~\eqref{eq:phik}. Then after line 4 of Algorithm \ref{alg:qft} the state is
\begin{align}
M(\tau_k^{-1})|\phi_{k-1}\rangle&=|0\rangle\sum_{j=1}^{k-1}M(\tau_k^{-1})|\hat{F^{j}}\rangle+\sum_{j=k}^{n} |\tau_j\rangle |\hat{f_j}\rangle.
\end{align}
For each $j\leq k-1$, the state $M(\tau_k^{-1})|\hat{F^j}\rangle$ is the Fourier transform of a function
$g(\pi)=F^j(\tau_k \pi)$ which is zero on $S_{n-1}$.
So for $j\leq k-1$, by Lemma \ref{lem:orthog} we have $P U^{\dagger}|0\rangle M(\tau_k^{-1})|\hat{F^j}\rangle =0$ where $P$ projects onto basis states that encode partitions of $n-1$. Therefore $V_k$ acts trivially on $U^{\dagger}|0\rangle M(\tau_k^{-1})|\hat{F^j}\rangle$ for all $j\leq k-1$, i.e.,
\begin{align}
V_kU^{\dagger}M(\tau_k^{-1})|\phi_{k-1}\rangle&=U^{\dagger}|0\rangle\sum_{j=1}^{k-1}M(\tau_k^{-1})|\hat{F^j}\rangle+\sum_{j=k}^{n} V_k|\tau_j\rangle |\hat{f_j}\rangle\\
&=U^{\dagger}|0\rangle\sum_{j=1}^{k-1}M(\tau_k^{-1})|\hat{F^j}\rangle+|0\rangle|\hat{f_k}\rangle+\sum_{j=k+1}^{n} |\tau_j\rangle |\hat{f_j}\rangle.
\end{align}
Applying $M(\tau_k)U$ to the above gives
\begin{align}
|\phi_k\rangle&=|0\rangle\sum_{j=1}^{k-1}|\hat{F^j}\rangle+M(\tau_k)U|0\rangle|\hat{f_k}\rangle+\sum_{j=k+1}^{n} |\tau_j\rangle |\hat{f_j}\rangle\\
&=|0\rangle\sum_{j=1}^{k}|\hat{F^j}\rangle+\sum_{j=k+1}^{n} |\tau_j\rangle |\hat{f_j}\rangle
\end{align}
where we used Claim \ref{claim:Fkfk}. This completes the induction, and the proof.
\end{proof}
\section{Generalized phase estimation}
In this section we show that the circuits from Figure~\ref{fig:gpe} implement weak Fourier sampling and generalized phase estimation, respectively. The following lemma shows that the circuit in Figure~\ref{fig:gpe}(a) implements weak Fourier sampling. We will make use of the inverse Fourier transform $\mathrm{QFT}_n^{\dagger}$, which acts as:
\begin{equation}
\mathrm{QFT}_n^{\dagger} \sum_{\omega \vdash n}\sum_{1\leq i,j\leq d_{\omega}} (c(\omega))_{ij} |\omega, i,j\rangle=\sum_{\sigma\in S_n}\sum_{\lambda \vdash n} \sqrt{\frac{d_\lambda}{n!}} \trace{\left(c(\lambda) \rho_{\lambda}(\sigma)^{\dagger}\right)}|\sigma\rangle.
\label{eq:invf}
\end{equation}
\begin{lemma}
The POVM $M_L$ can be implemented by applying $\mathrm{QFT}_n$, then performing a projective measurement of the representation label $\omega$, and then applying $\mathrm{QFT}_n^{\dagger}$.
\label{Lemma:partition_measurement}
\end{lemma}
\begin{proof}
Let a partition $\lambda \vdash n$ be given. Let $P_{\lambda}$ denote the projector such that $P_{\lambda}|\omega,i,j\rangle=\delta_{\omega, \lambda} |\omega,i,j\rangle.$ It suffices to show that for any state $|\psi\rangle\in \mathcal{H}$ we have $\Pi^L_{\lambda}|\psi\rangle=\mathrm{QFT}_n^{\dagger} P_{\lambda} \mathrm{QFT}_n|\psi\rangle$. To this end let $|\psi\rangle=\sum_{\sigma \in S_n} f(\sigma)|\sigma\rangle$. Using Eq.~\eqref{eq:pilambda} we get
\begin{align}
\Pi^L_{\lambda}|\psi\rangle& =\frac{d_{\lambda}}{n!} \sum_{\sigma\in S_n}\sum_{\alpha\in S_n}\chi_{\lambda}(\alpha)f(\sigma)|\alpha \sigma\rangle\\
&=\frac{d_{\lambda}}{n!}\sum_{\sigma,\beta\in S_n} \chi_{\lambda}(\beta\sigma^{-1})f(\sigma)|\beta\rangle =\frac{d_{\lambda}}{n!}\sum_{\sigma,\beta\in S_n} \chi_{\lambda}(\sigma \beta^{-1})f(\sigma)|\beta\rangle,
\label{eq:chif1}
\end{align}
where in the last line we used the fact that $\chi_\lambda (g)=\chi_{\lambda}(g^{-1})$ for all $g\in S_n$ (this can be seen for example using the fact that group characters are class functions and $g$ and $g^{-1}$ are always in the same conjugacy class since their cycle structure coincides). Now using Eq.~\eqref{eq:chif1} and the fact that
\[
\chi_\lambda(\sigma \beta^{-1})=\trace{\left(\rho_\lambda(\sigma \beta^{-1})\right)}=\trace{\left(\rho_\lambda(\sigma)\rho_{\lambda}( \beta)^{\dagger}\right)}
\]
gives
\begin{equation}
\Pi^L_{\lambda}|\psi\rangle=\sqrt{\frac{d_{\lambda}}{n!}}\sum_{\beta\in S_n}\trace{\left(\hat{f}(\lambda)\rho_{\lambda}(\beta)^{\dagger}\right)}|\beta\rangle\label{eq:povm1}.
\end{equation}
Below we show this is equal to $\mathrm{QFT}_n^{\dagger}P_\lambda \mathrm{QFT}_n|\psi\rangle$. We have:
\begin{align}
\mathrm{QFT}_n^{\dagger}P_\lambda \mathrm{QFT}_n|\psi\rangle&= \mathrm{QFT}_n^{\dagger}P_\lambda\sum_{\omega\vdash n}\sum_{i,j=1}^{d_{\omega}} (\hat{f}(\omega))_{ij}|\omega,i,j\rangle \\
&=\mathrm{QFT}_n^{\dagger} \sum_{i,j=1}^{d_{\lambda}} (\hat{f}(\lambda))_{ij}|\lambda,i,j\rangle =\sqrt{\frac{d_{\lambda}}{n!}}\sum_{\beta\in S_n}\trace{\left(\hat{f}(\lambda)\rho_{\lambda}(\beta)^{\dagger}\right)}|\beta\rangle,
\end{align}
where we used Eq.~\eqref{eq:invf} in the last line. This coincides with Eq.~\eqref{eq:povm1} and completes the proof.
\end{proof}
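The projector identity above can be sanity-checked numerically for small $n$. The following sketch (illustrative only, with the $S_3$ character table hard-coded; it is not part of the proof) builds $\Pi^L_{\lambda}=\frac{d_{\lambda}}{n!}\sum_{\alpha}\chi_{\lambda}(\alpha)\rho_L(\alpha)$ explicitly, where $\rho_L(\alpha)|\sigma\rangle=|\alpha\sigma\rangle$, and verifies that it is an idempotent with trace $d_{\lambda}^2$, as an isotypic projector must be.

```python
# Illustrative check for S_3 (hard-coded characters; not part of the proof):
# build Pi^L_lambda = (d_lambda/n!) * sum_alpha chi_lambda(alpha) rho_L(alpha),
# where rho_L(alpha)|sigma> = |alpha sigma>, and verify that it is an
# idempotent with trace d_lambda^2.
from itertools import permutations
from fractions import Fraction

group = list(permutations(range(3)))              # elements of S_3
n_fact = len(group)                               # 3! = 6

def compose(a, b):                                # (a b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0] * 3
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def fixed_points(a):                              # labels the conjugacy class in S_3
    return sum(1 for i in range(3) if a[i] == i)

# irreducible characters of S_3, keyed by number of fixed points
# (3 = identity, 1 = transpositions, 0 = 3-cycles)
chars = {
    'trivial':  (1, {3: 1, 1: 1, 0: 1}),
    'sign':     (1, {3: 1, 1: -1, 0: 1}),
    'standard': (2, {3: 2, 1: 0, 0: -1}),
}

for name, (d, chi) in chars.items():
    # matrix element <sigma_i| Pi |sigma_j> = (d/n!) chi(sigma_i sigma_j^{-1})
    P = [[Fraction(d, n_fact) * chi[fixed_points(compose(gi, inverse(gj)))]
          for gj in group] for gi in group]
    PP = [[sum(P[i][k] * P[k][j] for k in range(n_fact)) for j in range(n_fact)]
          for i in range(n_fact)]
    assert PP == P, name                          # Pi^2 = Pi
    trace = sum(P[i][i] for i in range(n_fact))
    assert trace == d * d, name                   # Tr Pi^L_lambda = d_lambda^2
```

The exact rational arithmetic (via `Fraction`) makes the idempotence check an equality rather than a floating-point approximation.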
Next we show that the circuit in Fig.~\ref{fig:gpe}(b) implements the generalized phase estimation measurement $M_{\rho}$:
\begin{align}
|\tau,1,1\rangle\otimes |\psi\rangle \xrightarrow[]{\mathrm{QFT}_n^{\dagger}\otimes I} \frac{1}{\sqrt{n!}}&\sum_{\alpha\in S_n}|\alpha\rangle \otimes |\psi\rangle \\
\xrightarrow[]{C-\rho^{\dagger}}
&\frac{1}{\sqrt{n!}}\sum_{\alpha\in S_n}|\alpha\rangle \otimes \rho^{\dagger}(\alpha)|\psi\rangle \\
\xrightarrow[]{\text{Measure } M_L}
&\frac{1}{\sqrt{n!}}\sum_{\alpha\in S_n}\Pi^L_{\omega}|\alpha\rangle \otimes \rho^{\dagger}(\alpha)|\psi\rangle \quad \quad (\text{conditioned on meas. outcome }\omega)\\
=&\frac{d_{\omega}}{(n!)^{3/2}}\sum_{\alpha,\sigma\in S_n} \chi_{\omega}(\sigma)|\sigma\alpha\rangle \otimes \rho^{\dagger}(\alpha)|\psi\rangle\\
\xrightarrow[]{C-\rho} & \frac{d_{\omega}}{(n!)^{3/2}}\sum_{\alpha,\sigma\in S_n} \chi_{\omega}(\sigma)|\sigma\alpha\rangle \otimes \rho(\sigma \alpha)\rho^{\dagger}(\alpha)|\psi\rangle.\label{eq:outputstate}
\end{align}
Using the fact that $\rho(\sigma\alpha)=\rho(\sigma)\rho(\alpha)$ and that $\rho^{\dagger}(\alpha)=\rho(\alpha)^{-1}$ we see that the output state Eq.~\eqref{eq:outputstate} is equal to
\begin{align}
\frac{d_{\omega}}{(n!)^{3/2}}\sum_{\alpha,\sigma\in S_n} \chi_{\omega}(\sigma)|\sigma\alpha\rangle \otimes \rho(\sigma)|\psi\rangle
=\frac{1}{\sqrt{n!}} \sum_{\beta\in S_n}|\beta\rangle \otimes \frac{d_\omega}{n!}\sum_{\sigma\in S_n} \chi_{\omega}(\sigma) \rho(\sigma)|\psi\rangle.
\end{align}
On the right hand side the state of the second register is $\Pi_{\omega}|\psi\rangle$, as desired.
\section{$\QMA$, $\#\BQP$, and $\QAPC$}
Recall the definition of a verification circuit from the main text. In the following we write $p(\psi)$ for the acceptance probability of a verification circuit with input state $\psi$.
A (promise) problem is in $\QMA$ \textit{with completeness $c$ and soundness $s$}, also denoted $\QMA(c,s)$, if there exists a uniform polynomial-sized family of verification circuits $C_x$ labeled by instances $x$ of the problem, such that (A) If $x$ is a yes instance then there exists a witness $\psi$ such that $p(\psi)\geq c$, and (B) If $x$ is a no instance then $p(\psi)\leq s$ for all $\psi$.
A standard convention is to define $\QMA\equiv \QMA(2/3,1/3)$. The definition is not very sensitive to the choice of completeness $c$ and soundness $s$: it is known that $\QMA(c,s)=\QMA(2/3,1/3)$ whenever $0<s<c\leq 1$ and $c-s=\Omega(1/\mathrm{poly}(n))$. In fact, one can reduce the error of any $\QMA$ verifier exponentially in the input size $n$, so that, e.g., $\QMA(2/3,1/3)=\QMA(1-4^{-n},4^{-n})$ as well \cite{kitaev2002classical, marriott2005quantum}.
We choose to define $\#\BQP$ as a class of relation problems rather than a class of functions, languages (decision problems), or promise problems. Recall from the main text that Eq.~\eqref{eq:POVM} describes the POVM element corresponding to the accept outcome of a quantum verification circuit and $N_q$ denotes the number of eigenvalues of $A$ that are at least $q$. We define $\#\BQP$ as the class of problems that can be expressed as: given a quantum verification circuit with an $n$-qubit input register, total size $\mathrm{poly}(n)$, and two thresholds $0<b<a\leq 1$ with $a-b=\Omega(1/\mathrm{poly}(n))$, compute an estimate $N$ such that $N_a\leq N\leq N_b$. This defines a relation problem because there may be more than one solution for a given instance (this happens whenever $N_a\neq N_b$).
Ref.~\cite{brown2011computational} puts forward a slightly different definition of $\#\BQP$, as the class of promise problems obtained by augmenting and specializing the above definition with the promise that $N_a=N_b$. This has the pleasing feature that it results in a class of functions rather than relations, though at the expense of adding a promise.
One of the main results established in Ref.~\cite{brown2011computational} is that any problem in $\#\BQP$ (as they define it) can be efficiently reduced to computing a $\#\P$ function. We now describe how their proof can be straightforwardly adapted to our setting, establishing the same result for the definition of $\#\BQP$ that we use in this paper. Using the in-place error reduction for $\QMA$ \cite{marriott2005quantum} we may assume without loss of generality that $a=1-2^{-n-2}$ and $b=2^{-n-2}$ (see, e.g., the proof of Lemma 8 in the supplementary material of Ref.~\cite{bravyi2022quantum}).
Then
\begin{equation}
\mathrm{Tr}(A)\leq N_b+2^{-n-2}\cdot (2^n-N_b)\leq N_b+1/4,
\label{eq:trup}
\end{equation}
and
\begin{equation}
\mathrm{Tr}(A)\geq N_a (1-2^{-n-2})\geq N_a-1/4.
\label{eq:trlow}
\end{equation}
Eqs.~(\ref{eq:trup},\ref{eq:trlow}) show that the nearest integer $N$ to $\mathrm{Tr}(A)$ solves the given $\#\BQP$ problem as it satisfies $N_a\leq N\leq N_b$. It then suffices to show that we can compute $\mathrm{Tr}(A)$ to within an additive error of $1/4$ using efficient classical computation along with a single call to a $\#\P$ oracle. This is established in Theorem 14 of Ref.~\cite{brown2011computational} via a proof that is based on expressing $\mathrm{Tr}(A)$ as a sum over paths, see Ref.~\cite{brown2011computational} for details.
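As an illustrative numerical check of the two trace bounds (ours, not from Ref.~\cite{brown2011computational}), the following sketch draws a synthetic spectrum obeying the amplified promise and confirms that rounding $\mathrm{Tr}(A)$ yields a count $N$ with $N_a\leq N\leq N_b$; the register size, eigenvalue counts, and seed are arbitrary hypothetical choices.

```python
# Illustrative numerical check of the two trace bounds: draw a synthetic
# spectrum for a hypothetical amplified verifier (n, counts, and seed are
# arbitrary choices) and confirm that rounding Tr(A) gives N_a <= N <= N_b.
import random

n = 8
a = 1 - 2 ** (-n - 2)                  # amplified completeness threshold
b = 2 ** (-n - 2)                      # amplified soundness threshold

random.seed(0)
accepting = [random.uniform(a, 1.0) for _ in range(37)]               # >= a
rejecting = [random.uniform(0.0, b / 2) for _ in range(2 ** n - 37)]  # < b
eigs = accepting + rejecting

trace = sum(eigs)
N_a = sum(1 for x in eigs if x >= a)
N_b = sum(1 for x in eigs if x >= b)   # here N_a = N_b = 37 by construction

assert trace <= N_b + 0.25             # the upper bound on Tr(A)
assert trace >= N_a - 0.25             # the lower bound on Tr(A)
N = round(trace)
assert N_a <= N <= N_b                 # nearest integer is a valid answer
```

The check mirrors the argument: the small eigenvalues contribute at most $2^n\cdot 2^{-n-2}=1/4$ to the trace, and the large eigenvalues each fall short of 1 by at most $2^{-n-2}$.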
\section{Efficient sampling algorithm}
Now let us describe the algorithm stated in Lemma \ref{lem:samplekron}, which appears implicitly in Ref.~\cite{Moore07}.
The completely mixed state over the image of the projector $\Pi^L_{\mu}$ can be prepared as follows.
\begin{align}
\frac{\Pi^L_\mu}{d_\mu^2} &= \text{QFT}_n \left( \ket{\mu} \bra{\mu} \otimes I/d_{\mu} \otimes I/d_{\mu}\right) \text{QFT}_n^\dag.
\end{align}
Starting from $\frac{\Pi^L_\mu \otimes \Pi^L_\nu}{d_\mu^2 d_\nu^2}$ we then measure the projector $\Pi^L_\lambda$ using weak Fourier sampling. We obtain outcome $\lambda$ with probability:
\begin{align}
p(\lambda) &= \text{Tr} \left(\frac{\Pi^L_\mu \otimes \Pi^L_\nu}{d_\mu^2 d_\nu^2} \Pi^L_\lambda \right) = \frac{d_\lambda}{d_\mu d_\nu n!} \sum_{g\in S_n} \chi_\mu(g) \chi_\nu(g) \chi_\lambda(g) = \frac{d_\lambda}{d_\mu d_\nu} g_{\mu \nu \lambda}.
\end{align}
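For a concrete (illustrative) instance, this distribution can be evaluated for $S_3$ directly from the character table, using the standard expression $g_{\mu\nu\lambda}=\frac{1}{n!}\sum_{g\in S_n}\chi_\mu(g)\chi_\nu(g)\chi_\lambda(g)$; the choice $\mu=\nu=\text{standard}$ below is ours.

```python
# Illustrative evaluation of p(lambda) for S_3 from the character table,
# with g_{mu nu lambda} = (1/n!) sum_g chi_mu(g) chi_nu(g) chi_lambda(g).
from fractions import Fraction

n_fact = 6
class_sizes = [1, 3, 2]                # identity, transpositions, 3-cycles
chars = {                              # (dimension, character values per class)
    'trivial':  (1, [1, 1, 1]),
    'sign':     (1, [1, -1, 1]),
    'standard': (2, [2, 0, -1]),
}

def kron(mu, nu, lam):                 # Kronecker coefficient g_{mu nu lam}
    return sum(Fraction(size) * chars[mu][1][c] * chars[nu][1][c] * chars[lam][1][c]
               for c, size in enumerate(class_sizes)) / n_fact

mu = nu = 'standard'                   # both input projectors in the 2-dim irrep
d_mu, d_nu = chars[mu][0], chars[nu][0]
p = {lam: Fraction(chars[lam][0], d_mu * d_nu) * kron(mu, nu, lam)
     for lam in chars}

assert sum(p.values()) == 1            # a valid probability distribution
assert p == {'trivial': Fraction(1, 4), 'sign': Fraction(1, 4),
             'standard': Fraction(1, 2)}
```

Here the tensor square of the standard representation contains each irrep once, so the outcome probabilities are simply proportional to the dimensions $d_\lambda$.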
\section{Row sums}
\begin{lemma}
The multiplicity of an irreducible representation $\rho_\lambda$ in the conjugation representation $\rho_c$ is given by the row sum $R_{\lambda}$ defined in Eq.~\eqref{eq:rowsum}.
\end{lemma}
\begin{proof}
Recall that the centralizer $Z(\pi)$ of an element $\pi \in S_n$ is defined as the set of elements $g \in S_n$ that commute with $\pi$. The conjugation representation acts on $\mathbb{C}S_n$ as
\[
\rho_c(\pi)|\sigma\rangle = |\pi \sigma \pi^{-1}\rangle
\]
and therefore
\[
\chi_{c}(\pi)\equiv \mathrm{Tr}(\rho_c(\pi))=|Z(\pi)|,
\]
where $Z(\pi)$ is the centralizer of $\pi \in S_n$. Note that all elements $\pi\in S_n$ in the same conjugacy class $C$ have the same centralizer which we denote $Z(C)$.
We can decompose the conjugation representation into irreducibles as $\rho_c \simeq \bigoplus_{\lambda \vdash n} m_{\lambda} \rho_{\lambda}$ for some multiplicities $m_\lambda$. Taking the trace gives
\[
\chi_c(\pi)=\sum_{\lambda \vdash n} m_{\lambda}\chi_{\lambda}(\pi)
\]
for all $\pi\in S_n$. Using orthogonality of the irreducible characters we arrive at
\begin{align}
m_\lambda &= \frac{1}{n!} \sum_{g \in S_n} \chi_\lambda(g) \chi_{c}(g) = \frac{1}{n!} \sum_{C \vdash n} \chi_\lambda(C) |C| |Z(C)| \label{eq:rowsumC},
\end{align}
where $|C|$ is the order of the conjugacy class $C$. By the orbit-stabilizer theorem, $n! / |C| = |Z(C)|$, and substituting this into Eq.~\eqref{eq:rowsumC} shows that $m_{\lambda}=R_{\lambda}$ and completes the proof.
\end{proof}
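The lemma can be checked directly for small $n$. The following sketch (illustrative, with the $S_3$ character table hard-coded) computes $\chi_c(\pi)=|Z(\pi)|$ from the conjugation action itself and verifies $m_\lambda=R_\lambda$ for every irrep:

```python
# Direct check of the lemma for S_3 (illustrative): compute chi_c(pi) = |Z(pi)|
# from the conjugation action and verify that the multiplicity of each irrep
# equals the character-table row sum R_lambda.
from itertools import permutations
from fractions import Fraction

group = list(permutations(range(3)))
n_fact = len(group)

def compose(a, b):                     # (a b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0] * 3
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def chi_c(pi):                         # trace of rho_c(pi): # of sigma fixed by conjugation
    return sum(1 for s in group if compose(compose(pi, s), inverse(pi)) == s)

def fixed_points(a):                   # labels the conjugacy class in S_3
    return sum(1 for i in range(3) if a[i] == i)

# irreducible characters of S_3, keyed by number of fixed points
chars = {
    'trivial':  {3: 1, 1: 1, 0: 1},
    'sign':     {3: 1, 1: -1, 0: 1},
    'standard': {3: 2, 1: 0, 0: -1},
}

for name, chi in chars.items():
    m = sum(Fraction(chi[fixed_points(g)] * chi_c(g)) for g in group) / n_fact
    R = chi[3] + chi[1] + chi[0]       # row sum over the three classes
    assert m == R, name
```

For $S_3$ the multiplicities come out as $3$, $1$, and $1$ for the trivial, sign, and standard representations, matching the row sums of the character table.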
\begin{lemma}
$R_{\lambda}=d_{\lambda}^{-1}\mathrm{Tr}(\Pi_{\lambda}^{c})$.
\end{lemma}
\begin{proof}
By definition,
\[
\Pi_{\lambda}^{c}=\frac{d_{\lambda}}{n!}\sum_{g\in S_n}\chi_{\lambda}(g)\rho_c(g),
\]
and therefore
\[
\mathrm{Tr}(\Pi_{\lambda}^{c})=\frac{d_{\lambda}}{n!}\sum_{g\in S_n}\chi_{\lambda}(g)\chi_c(g) = d_\lambda R_\lambda,
\]
by Eq.~\eqref{eq:rowsumC}.
\end{proof}
\end{document}
\label{sect_intro}
The Large Area Telescope (LAT) on board the \emph{Fermi $\gamma$-ray Space Telescope} (\emph{Fermi}: \citealt{atwood09}) has increased the number of observed solar flares with photon emission above 100 MeV by an order of magnitude compared to all previous instruments \citep{share17}. One prominent characteristic of these flares is the long-duration emission extending hours past the impulsive phase, long after other flare associated electromagnetic emissions at longer wavelengths have decayed (e.g., \citealt{ajello14}). High-energy $\gamma$-rays can be produced by electrons and ions (primarily protons), with somewhat larger energies than the photons, via relativistic electron bremsstrahlung or decay of pions (and their byproducts) produced by interactions of protons with the background ions. Both mechanisms require transport of the accelerated particles deep into the high-density solar photosphere (where the particles lose most of their energy) and through column depths of about $2.5\times 10^{25}$ and $2.5\times 10^{26}$~cm$^{-2}$, respectively. Thus, the accelerated particles, wherever produced, must travel to the photosphere to produce the observed $\gamma$-rays. This transport is guided by the magnetic field lines connecting the acceleration site to the photosphere.
The impulsive phase (duration of $< 10^3$ s) radiations (from microwaves to GeV $\gamma$-rays) are produced by the interactions of the nonthermal electrons and protons with the flaring loop magnetic field and plasma (mainly at the loop footpoints). In a vast majority of flares, the impulsive emission is dominated by nonthermal electrons, rather than protons \citep{shih09}, in which particles are generally believed to be accelerated near the loop-top region (heights $> 10 ^9$ cm; e.g., \citealt{petrosian02, liu13}). This acceleration mechanism may be at work for some of the long-duration $\gamma$-ray flares observed by {\it Fermi}\ (e.g., \citealt{ajello14}). Examples include the 2011 March 7\,--\,8 \citep{ackermann14} and the 2017 September 10 \citep{omodei18} flares, during which the centroid of the $\gamma$-ray source coincided well with the active region (AR) where the flare was initiated. This is also the case for the initial phase of the stronger 2012 March 7 flare, which lasted for about 20 hours, later with a temporal drift of the centroid away from the AR \citep{ajello14}. In these events, the bulk of the hard X-rays (HXRs) and $\gamma$-rays may be explained in terms of thick-target emission from the footpoints of the flaring loops.
However, {\it Fermi}\ also detected $>$100~MeV photons from three other flares which, according to observations by the \emph{Solar TErrestrial RElations Observatory} (\emph{STEREO}), originated from ARs that were located 13$\arcdeg$\,--\,36$\arcdeg$ behind the solar limb seen from the Earth perspective \citep{pesce15, ackermann17}. These flares were also detected in HXRs by the \emph{Reuven Ramaty High Energy Solar Spectroscopic Imager} (\emph{RHESSI}), \emph{Fermi}/Gamma-ray Burst Monitor (GBM), and \emph{Wind}/Konus with similar time profiles, in extreme ultraviolet (EUV) by \emph{Solar Dynamics Observatory} (\emph{SDO}) and \emph{STEREO}, and in microwave by the Radio Solar Telescope Network (RSTN). In addition, \emph{RHESSI} detected HXR emission located just over the limb which is consistent with the top of the (relatively tall) flaring loop rooted at the source AR behind the solar limb. \emph{An important question is whether or not the LAT $\gamma$-rays are coming from this loop-top source as well}. As described below, the LAT observations and some theoretical arguments lead us to consider a different location on the Sun for the $>$$100$~MeV $\gamma$-ray source and perhaps a different site and mechanism for acceleration of particles (either electrons or protons). This would be an important step toward resolving the puzzle of {\it Fermi}\ BTL flares and understanding $\gamma$-ray flares in general.
\citet{cliver93} first proposed that the BTL $\gamma$-ray events are caused by particles that are accelerated at the shock driven by the associated coronal mass ejection (CME) and then propagate back to the visible solar disk. The goal of this study is to explore this scenario by investigating the magnetic connectivity and evolution of the CME-driven shock, and their relationship, in both space and time, with the observed $\gamma$-ray emission during a BTL flare. Specifically, we will evaluate to what extent the CME and CME-driven shock are magnetically connected to $\gamma$-ray emitting areas of the visible disk away from the AR, and track several key shock parameters over those magnetically-connected areas of the shock surface.\footnote{A corollary of this scenario is that we would expect a similar spread of $\gamma$-ray emission over the solar disk for on-disk flares as well. In fact, the new PASS-8 analysis of the X5.4 flare on 2012 March 7 shows hints of the emission centroid migrating away from its host AR over time (Allafort et al., in preparation), although in general it is harder to distinguish between a point source (which was assumed in locating the centroids) and an extended one due to the relatively low number of photons detected by \emph{Fermi}.} To this end, we performed high-fidelity, data-driven magnetohydrodynamic (MHD) simulations to reconstruct the global corona and solar wind environment for the CME eruption associated with the strongest of the three BTL flares: {\it SOL2014-09-01}. We note the pioneering work by \citet{plotnikov17} toward this direction that used a potential magnetic field and a \emph{static} MHD global corona solution. We believe that the inclusion of the dynamic evolution of the CME, as done in the present study, is an important step forward and can shed critical new light on the underlying physics of BTL $\gamma$-ray flares.
This article is organized as follows. In Section~\ref{sect_obs}, we present a summary of relevant observations of this event and theoretical arguments. In Section~\ref{sect_modeling}, we describe our numerical model and present the simulation results with a focus on the magnetic field connectivity and shock evolution, followed by discussions in Section~\ref{sect_discuss} and summary and conclusion in Section~\ref{sect_summary}.
\section{Review of Observations and Theoretical Arguments: the {\it SOL2014-09-01} Flare \& CME Event}
\label{sect_obs}
Here we briefly review the observations of the \emph{SOL2014-09-01} (hereafter Sept14) flare relevant to this work (we refer the reader to \citealt{ackermann17} for more details). Specifically, we give two empirical reasons why we favor the CME-shock origin, rather than the direct flare acceleration, of the particles responsible for the $\gamma$-rays detected by the LAT in this flare.
The first reason is the difficulty of producing strong $\gamma$-rays in the tenuous solar corona. According to \emph{STEREO-B} data, this flare originated from what was named NOAA AR~12158 later. It was located at N14E126, about $36^{\circ}$ behind the east solar limb. \emph{RHESSI} images show a HXR source with a size of about 40$\arcsec$ ($\sim 30$ Mm) just over the limb, which is consistent with (a part of) the loop-top source of a relatively large flaring loop with a height of $\gtrsim$130~Mm above the photosphere. Similar examples have been reported \citep{krucker07}. Since all other $<$100~MeV emissions, as seen by \emph{Fermi}/GBM and \emph{Wind}/Konus, have light curves very similar to the \emph{RHESSI} HXRs, it is reasonable to assume that they also come from the top of the flare loop through thin-target bremsstrahlung emission \citep{chen13, petrosian16, effenberger17}. Since only a small fraction of the particle energy is lost during the {\it thin-target} bremsstrahlung, coronal HXR/$\gamma$-ray emission in general requires a higher number (and energy) of accelerated particles than if we are dealing with a {\it thick-target} footpoint emission, where particles lose all their energy \citep{petrosian73}, and where most of the HXRs and $\gamma$-rays \citep[e.g.,][]{hurford03,hurford06} in on-disk flares originate from. This difference would also be the case for either electrons or protons if the \emph{Fermi}/LAT $\gamma$-rays were also coming from the thin-target loop-top source. However, assuming thick-target emission by protons in the photosphere, one requires a total energy in protons comparable to that calculated for disk flares. Therefore, if the Sept14 \emph{Fermi}/LAT emission were from the loop-top source, it would require a much higher energy of the accelerated protons than any of the other (even stronger) \emph{Fermi}/LAT flares \citep{petrosian18}. 
This difficulty is the first reason for considering a different source and possibly a different acceleration mechanism for the production of the $\gamma$-rays.
The second and more important reason is that the centroid of the \emph{Fermi}/LAT source is about $300\arcsec (\sim 200$ Mm) northwest of the \emph{RHESSI} source and the corresponding light curve is different than that of all the other emissions. Specifically, the LAT light curve decays very gradually with emission detected for almost two hours, while all other emissions last less than one hour before falling below background. Although the possibility of LAT emission being also produced (in part) by particles accelerated near the loop-top source cannot be completely ruled out, the above two reasons lead to a more plausible scenario that this emission is produced at the photosphere by particles (most likely protons) accelerated at the CME-driven shock and escaping from the downstream back to the Sun \citep{cliver93}. In the case of BTL flares, unlike in on-disk flares, these particles must be streaming down to the photosphere along field lines connected to the LAT centroid region located on the visible disk, tens of degrees away from the host AR.
This scenario is further supported by the facts that the Sept14 flare is also associated with: (i) a fast CME observed by both \emph{SOHO}/LASCO and \emph{STEREO-B}/COR1 with a speed $>$1900~km~s$^{-1}$; (ii) a Type II radio burst with an estimated velocity of 2079 km s$^{-1}$ \citep{pesce15}, and (iii) an SEP event with a quick onset and hard spectrum observed by \emph{STEREO} \citep{cohen16, zelina17}. The CME white-light images and height-time history are shown in Figures~\ref{fig:wl} and \ref{fig:htplot}, respectively, and will be further discussed in \S 3.1.
\section{Modeling the {\it SOL2014-09-01} Event}
\label{sect_modeling}
\subsection{Global Coronal \& CME Models}
\label{subsect_model-descr}
To reconstruct the global corona and solar wind environment during the {\it SOL2014-09-01} CME eruption, we used the University of Michigan Alfv\'{e}n Wave Solar Model (AWSoM; \citealt{sokolov13, bart14}) within the Space Weather Modeling Framework (SWMF; \citealt{toth12}). AWSoM is a data-driven global MHD model with the inner boundary specified by observed magnetic maps and the simulation domain extending from the upper chromosphere to the corona and heliosphere. Physical processes implemented in the model include multi-species thermodynamics, electron heat conduction (both collisional and collisionless formulations), optically thin radiative cooling, and Alfv\'{e}n-wave turbulence that energizes the solar wind plasma. The Alfv\'{e}n-wave description is physically self-consistent, including non-Wentzel-Kramers-Brillouin (WKB) reflection \citep{heinemann80, velli93, hollweg07} and physics-based apportioning of turbulence dissipative heating to both electrons and protons. AWSoM has demonstrated its capability of reproducing solar corona conditions with high fidelity \citep[e.g.,][]{sokolov13, bart14, oran13, oran15, jin16, jin17a}.
Based on the steady-state global corona and solar wind solution, we initiate the CME by using an analytical Gibson-Low (GL) flux-rope model \citep{gibson98}, which has been successfully used in numerous modeling studies of CMEs (e.g., \citealt{chip04a, chip04b, lugaz05, chip14, jin16, jin17a}). The GL flux rope is mainly controlled by five parameters: the stretching parameter $a$ determines its shape, the distance $r_{1}$ of the flux rope center from the center of the Sun determines its initial position before being stretched, the radius $r_{0}$ of the flux-rope torus determines its size, $a_1$ determines its magnetic field strength, and a helicity parameter determines its positive (dextral) or negative (sinistral) helicity. Analytical profiles of the GL flux rope are obtained by finding a solution to the magnetohydrostatic equation $(\nabla\times{\bf B})\times{\bf B}-\nabla p-\rho {\bf g}=0$ and the solenoidal condition $\nabla\cdot{\bf B}=0$. This solution is derived by applying a mathematical stretching transformation $r\rightarrow r-a$ to an axisymmetric, spherical ball of twisted magnetic flux with radius $r_0$ centered in the heliospheric coordinate system at $r=r_1$. The transformed flux rope appears as a tear-drop shape. At the same time, Lorentz forces are introduced, which lead to a density-depleted cavity in the upper portion and a dense core at the lower portion of the flux rope, corresponding to a coronal cavity and a dense prominence, respectively. This configuration can thus readily reproduce the typical three-part structure of an observed CME \citep{llling85}. The GL flux rope and contained plasma are then superposed onto the steady-state AWSoM solution of the solar corona: i.e., $\rho=\rho_{0}+\rho_{\rm GL}$, ${\bf B = B_{0}+B_{\rm GL}}$, $p=p_{0}+p_{\rm GL}$. The temperature is then updated from the new density $\rho$ and pressure $p$.
The resulting combined background-flux rope system is in a state of force imbalance, due to the insufficient background plasma pressure to counter the magnetic pressure of the flux rope, and thus erupts immediately when the numerical model advances in time.
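To build intuition for the stretching transformation $r\rightarrow r-a$ described above, the following short sketch (purely illustrative; the parameter values are hypothetical, and it computes only the coordinate mapping of a 2D cross-section, not the full GL magnetohydrostatic solution) applies the stretch to the boundary of the flux-rope sphere:

```python
# Purely geometric illustration of the stretching transformation r -> r - a
# (hypothetical parameter values; this is not the GL magnetohydrostatic
# solution, only the coordinate mapping applied to a 2D cross-section).
import math

a, r1, r0 = 0.6, 1.8, 0.8              # stretch, center distance, radius (R_sun)

def stretch(x, y):
    r = math.hypot(x, y)
    return ((r - a) * x / r, (r - a) * y / r)   # pull each point sunward by a

# boundary of the untransformed flux-rope sphere (a circle in this 2D cut)
boundary = [(r1 + r0 * math.cos(t), r0 * math.sin(t))
            for t in (2 * math.pi * k / 360 for k in range(360))]
stretched = [stretch(x, y) for x, y in boundary]

radii = [math.hypot(x, y) for x, y in stretched]
# the stretched shape spans heliocentric distances [r1 - r0 - a, r1 + r0 - a]
assert abs(min(radii) - (r1 - r0 - a)) < 1e-9
assert abs(max(radii) - (r1 + r0 - a)) < 1e-9
```

Because every point is displaced sunward by the same distance $a$ while the displacement direction varies along the boundary, the circular cross-section deforms into the tear-drop profile described in the text.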
To specify the inner boundary condition of the magnetic field, we utilize a global magnetic map sampled from an evolving photospheric flux transport model \citep{schrijver03}, which assimilates new observations within 60$^{\circ}$ from disk center obtained by the \emph{SDO} Helioseismic and Magnetic Imager \citep[HMI;][]{schou12}. The assimilated magnetogram is updated every 6 hours. The Sept14 flare occurred behind the east limb, where no direct observation of the magnetic field is available. This means that the magnetic field around the flare site at the time of the event relies on the oldest assimilated observations, obtained about half a solar rotation earlier when the region was on the western side of the visible solar disk. Therefore, a large amount of magnetic flux could potentially be missing. From the magnetogram closer in time shown in Figure \ref{fig:magnetogram}a, we find that the flare source region AR~12158 is indeed completely missing. To alleviate this problem, we choose the assimilated magnetogram on 2014 September 8 00:04:00 UT (Figure \ref{fig:magnetogram}b), about a week after the event on September 1, when the magnetic field around the source region was first assimilated into the flux transport model. The missing flare source region AR~12158 and another large AR~12157 to the south of it are now properly included. The rest of the old and new magnetic maps are qualitatively very similar. As such, the 2014 September 8 magnetogram is a reasonable representation of the photospheric magnetic field at the time of the Sept14 flare and is thus used to specify the inner boundary condition of our global magnetic field model.
To configure a proper GL flux rope for initiating the Sept14 CME, we utilize a newly developed tool called the Eruptive Event Generator using Gibson-Low configuration (EEGGL; \citealt{jin17b}), which is designed to determine the GL flux-rope, including its location, orientation, and five key controlling parameters, using the observed magnetogram and CME speed near the Sun. The left panel of Figure~\ref{fig:initiation}a shows a zoom-in view of AR 12158 with weighted centers of positive/negative polarities and the polarity inversion line (PIL) determined by EEGGL. The green asterisk marks the central location to insert the GL flux rope, whose calculated key parameters are also listed. The right panel shows the 3D configuration of the global coronal magnetic field, with the inserted GL flux rope shown in red. The white field lines represent the large-scale helmet streamer structures. The selected field lines from surrounding active regions and open field are marked in green. The GL flux rope erupts due to the force imbalance upon insertion into the active region. The simulation is then evolved forward in time and the MHD equations are solved in conservative forms to guarantee the energy conservation across the CME-driven shock \citep{bart10, chip12, jin13}. To better resolve the shock structure, two more levels of refinement along the CME path are performed, which make the cell size $\sim$0.02 R$_\odot$ at 2 R$_\odot$ and $\sim$0.06 R$_\odot$ at 5 R$_\odot$. We run the simulation for 1 hour after the initiation, until the CME reaches $\sim$10 R$_\odot$.
Since the CME propagation near the Sun is mainly observed by coronagraphs, we generate synthesized white-light images (Thomson-scattered white-light brightness) and compare them with observations. The top panel of Figure~\ref{fig:wl} shows the observations from \emph{SOHO}/LASCO C2 and \emph{STEREO-B}/COR1. The bottom panel shows the synthesized white-light images. The color scale shows the relative total brightness changes with respect to the pre-event level. At the time of the Sept14 event, \emph{STEREO-B} and \emph{SOHO} were separated by $\sim$161$^{\circ}$, therefore observing the Sun from nearly opposite directions. By comparing the observation and simulation from two different viewpoints, we find that the observed CME is adequately simulated in terms of the direction of propagation and the width. Note that the absolute brightness comparison between the observation and simulation requires advanced calibration of the observational data as well as the inclusion of the contribution from the F corona (light scattered by interplanetary dust) in the simulation data, which are beyond the scope of this study. The reader is referred to previous studies for such model validation \citep[e.g.,][]{chip08, jin17a}. Also, there is a distinct feature (marked in Figure~\ref{fig:wl}) around the CME leading edge in the LASCO C2 image, which might imply that the corresponding shock front can deviate from the typical circular or dome-like shape. We speculate that this feature might be related to the complex and dynamically changing
background solar wind environment (e.g., affected by previous CMEs or coronal disturbances), which is not captured by the current simulation. We further compare the observed and simulated CME speeds by tracking the height-time (HT) history of the CME leading edge, as shown in Figure~\ref{fig:htplot}. The black dots show measurements from \emph{SOHO}/LASCO C2/3 (left panel) and \emph{STEREO-B} COR1/2 (right panel), while the red asterisks show corresponding measurements from the synthesized white-light images. We use the moment when the observed and simulated CMEs are around the same height as guidance to calibrate the start time of the simulation in terms of the real observation. With this assumption, the first appearance of the CME in the LASCO C2 field of view (11:12:05 UT) corresponds to $t = 10$~minutes in the simulation. In general, the CME HT history is well reproduced in the simulation, with the simulated CME being slightly slower by about 10\,--\,14\%.
\subsection{Field Connectivity Evolution}
\label{subsect_connect}
In the course of the eruption, the flux rope interacts and reconnects with the magnetic fields of the source AR as well as the global coronal field. As a result, the magnetic field configuration and connectivity can change dramatically, which could significantly influence the transport of the accelerated particles. With this global MHD simulation of the Sept14 event, we now investigate the field connectivity evolution in detail during the first hour of CME evolution.
Figures~\ref{fig:3d_evo}a-d show the 3D magnetic field configurations at selected times (5, 10, 20, and 30 minutes). Magnetic reconnection between the erupting flux rope (red) and the surrounding field lines (green and white) is evident, especially after the first 10 minutes. The interaction between the flux rope and the large-scale helmet streamers significantly changes the global corona configuration around the CME source region. The helmet streamers are opened up by reconnection or stretched by the CME expansion. Specifically, we further examine the field line connectivity around the \emph{Fermi} $\gamma$-ray emission region at $t = 30$~minutes (shown in Figure~\ref{fig:3d_evo}e). The derived \emph{Fermi} $\gamma$-ray emission centroid and 68\% uncertainty circle (adapted from \citealt{ackermann17}) are overlaid on the simulation data. The green field lines are the pre-existing open field connected to the CME-driven shock after $t \sim 6$~minutes. The red field lines are the closed field connected to the flaring AR. These field lines were not present before, but started to develop $\sim$5~minutes after the eruption through magnetic reconnection between the flux-rope magnetic field and the global coronal field.
To investigate the details of these two types of field lines, we further mark their photospheric footpoints on the magnetic field map in the top panel of Figure \ref{fig:open_close}. The closed field line regions (red) are relatively compact, compared with the elongated open field line region (green) to the south. The bottom panel of Figure \ref{fig:open_close} shows the 3D configuration of these two types of field lines. As evident in this plot, the open field lines change direction abruptly due to the rapid expansion of the CME and the CME-driven shock. For the closed field lines, the configuration is more complex, with twisted large-scale loops. It appears that some of these field lines result from reconnection between the erupting flux rope and the nearby helmet streamers.
\subsection{CME-driven Shock Evolution}
\label{subsect_shock}
After the eruption, the flux rope drives a shock in the corona that propagates freely into the heliosphere. CME-driven shocks are believed to be responsible for the acceleration of particles through the diffusive shock acceleration (DSA) mechanism (e.g., \citealt{axford77}) that produces the so-called gradual solar energetic particle (SEP) events \citep{reames99}. Due to the nonuniform background environment, the CME-driven shock evolution is highly spatially dependent. For example, a shock propagating into the fast solar wind could acquire a higher shock speed, therefore leading to a larger stand-off distance from the flux rope driver \citep{jin17a}. Also, the shock parameters could vary significantly over the shock front, which can significantly affect the acceleration process \citep{chip05, li12}. Based on the white-light observations from SOHO and STEREO, several methods have been developed to derive the shock parameters directly from the data (e.g., \citealt{rouillard16, lario17, kwon18}). In this study, with the data-driven MHD simulation of the Sept14 event, we can track the shock location and key parameters (e.g., the compression ratio, shock Alfv\'{e}n Mach number, shock speed, and shock obliquity angle $\theta_{Bn}$) during the CME evolution. The shock obliquity angle $\theta_{Bn}$ refers to the angle between the shock normal (see equation~[\ref{eq_shock}]) and the upstream magnetic field. As shown below, such analysis can provide a more comprehensive picture of the shock configuration and properties over the area linking back to the visible side of the Sun, where LAT $\gamma$-rays were detected.
We first determine the shock location at each time step by using the proton temperature gradient criteria \citep{jin13}. At each shock location, the shock normal is determined by using the magnetic coplanarity ${\bf (B_{d}-B_{u})}\cdot {\bf n}=0$ \citep{abraham72, lepping71}:
\begin{equation}
{\bf n}=\pm\frac{({\bf B_{d}}\times{\bf B_{u}})\times({\bf B_{d}}-{\bf B_{u}})}{|({\bf B_{d}}\times{\bf B_{u}})\times({\bf B_{d}}-{\bf B_{u}})|}
\label{eq_shock}
\end{equation}
where $\bf B_{d}$ and $\bf B_{u}$ represent the downstream and upstream magnetic fields, respectively. Note that this method fails for $\theta_{Bn} = 0^{\circ}$ or $90^{\circ}$, which we found to be very rare in the actual simulations. The $\pm$ sign is determined by assuming a forward-moving shock in heliocentric coordinates. We then determine the upstream/downstream plasma parameters, from which the shock parameters (e.g., the compression ratio, shock speed, shock Alfv\'{e}n Mach number, and shock obliquity angle) can be calculated accordingly.
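As a minimal numerical sketch of equation~(\ref{eq_shock}), the snippet below computes the shock normal and the obliquity angle $\theta_{Bn}$ from a pair of hypothetical upstream/downstream field vectors; the outward direction used to fix the $\pm$ sign is taken along the $x$-axis purely for illustration.

```python
import numpy as np

def shock_normal(B_u, B_d, outward):
    """Shock normal from magnetic coplanarity, (B_d - B_u) . n = 0.

    B_u, B_d : upstream/downstream field vectors (any consistent units).
    outward  : rough propagation direction, used only to fix the +/- sign
               (forward-shock convention).
    Fails when theta_Bn is exactly 0 or 90 deg (the cross products vanish).
    """
    n = np.cross(np.cross(B_d, B_u), B_d - B_u)
    n = n / np.linalg.norm(n)
    if np.dot(n, outward) < 0:          # pick the outward-pointing branch
        n = -n
    return n

def theta_Bn(B_u, n):
    """Shock obliquity angle (deg) between upstream field and shock normal."""
    c = abs(np.dot(B_u, n)) / np.linalg.norm(B_u)
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical field vectors on either side of a shock crossing (arb. units)
B_up = np.array([4.0, 1.0, 0.5])
B_dn = np.array([6.0, 3.5, 0.8])
n = shock_normal(B_up, B_dn, outward=np.array([1.0, 0.0, 0.0]))
print(n, theta_Bn(B_up, n))
```

By construction $({\bf B_d}-{\bf B_u})\cdot{\bf n}=0$, since the final cross product is perpendicular to ${\bf B_d}-{\bf B_u}$.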
Figure \ref{fig:shock_evo} shows the shock geometry evolution during the first hour of the simulation. The color scales on the shock surface represent the four shock parameters (from top to bottom): the compression ratio, shock speed, shock Alfv\'{e}n Mach number, and shock $\theta_{Bn}$. The yellow field lines represent the open field near the \emph{Fermi} emission region (shown in Figures \ref{fig:3d_evo} and \ref{fig:open_close}). Based on the simulation, we found that the CME-driven shock started to intersect the open field lines around $t = 6$~minutes, when the fastest part of the shock reached $\sim$3 R$_{\odot}$. This finding is consistent with the estimate that the CME was located at $\sim$2.5 R$_{\odot}$ at the onset of the \emph{Fermi}-LAT emission \citep{ackermann17}. However, we should note that the part of the shock intersecting the open field is closer to the Sun, at $\sim$1.6 R$_{\odot}$. At $t = 20$~minutes, the CME-driven shock covered the entire open-field region around $\sim$3 R$_{\odot}$ linking to the front side of the Sun. Furthermore, the derived $\theta_{Bn}$ suggests that this part of the shock is a quasi-perpendicular shock with a mean $\theta_{Bn}\sim73^{\circ}$. Another observational fact worth mentioning is the EUV wave observed in this event. The EUV wave from the source region arrived at the open field region (connecting to the CME-driven shock) by 11:20 UT as shown in online movies (http://aia.lmsal.com/AIA\_Waves), an extension of the \citet{nitta13} study. There is a possibility that this EUV wave may trace the low-corona flank of the shock, as seen in several other eruptive events (e.g., \citealt{carley13}), including the recent X8.2 flare on 2017 September 10 \citep{gopal18a, liu18, morosan18}, which was the first (and the only one so far) long-duration \emph{Fermi} flare associated with a ground level enhancement event \citep{omodei18}.
At $t = 30$~minutes, the shock surface starts to deviate from its initial spherical shape due to the non-uniform background solar wind condition. In particular, one part of the shock (as marked by a white arrow in Figure \ref{fig:shock_evo}), with open field lines crossing it, propagates into a fast wind region originating from an on-disk coronal hole and acquires a higher speed. This process may also lead to a ``shock-shock" interaction at the boundary between the two shock surfaces that causes an elevated shock compression ratio and Alfv\'{e}n Mach number at $t = 60$~minutes (marked with a white circle in the upper-right panel of Figure \ref{fig:shock_evo}). Note that the open field lines connected to this shock interaction region are closer to the \emph{Fermi} $\gamma$-ray emission region. Since the compression ratio, shock Alfv\'{e}n Mach number, and shock geometry are key parameters for DSA of SEPs, this part of the shock can be favorable for accelerating particles to higher energies \citep{chip05, sokolov06, li12, zhao14, hu17, hu18}.
We obtain the shock parameters averaged over the portion of the shock surface that is connected back to the visible side of the Sun. The temporal evolution of the resulting four key shock parameters as well as the average upstream local plasma density is shown in Figure \ref{fig:shock_profile} and described as follows:
\begin{enumerate}
\item
The shock compression ratio (Figure \ref{fig:shock_profile}a) increases rapidly from $\sim$1.8 at $t \sim$10~minutes to $\sim$4.6 at $t \sim 20$~minutes\footnote{The maximum compression ratio is slightly larger than 4 (strong shock limit) due to the non-ideal processes (e.g., heat conduction, other compression effects) or merged background density gradient in the simulation.}, and then gradually decreases to $\sim$3.7 at $t = 60$~minutes. This evolution trend closely follows that of the \emph{Fermi}/LAT $\gamma$-ray flux profile (red curve), with a similar $\sim$10~minute duration of the rapid rise phase.
\item
The average local plasma density at the shock front (Figure \ref{fig:shock_profile}b) is another important parameter, which is related to the seed population and the number of particles available for shock acceleration. We also plot an empirical quantity $CR\cdot\rho^{1/3}$ (blue curve), a product of the shock compression ratio (CR) and the ambient density to a $1/3$ power (heuristically selected to match the temporal trend of the \emph{Fermi} $\gamma$-ray flux). The density generally decreases with time as the CME travels away from the Sun, which causes this empirical quantity to decrease after its initial increase during the first $\sim$20 minutes. This could potentially explain the simultaneous decrease in the \emph{Fermi} $\gamma$-ray flux, even though the shock compression ratio and Mach number remain high.
\item
The shock speed (Figure \ref{fig:shock_profile}c) shows a gradual increase from $\sim$400~km~s$^{-1}$ at $t \sim$10~minutes to $\sim$1000~km~s$^{-1}$ at $t \sim$35~minutes and then remains roughly constant.
\item
Likewise, the shock Alfv\'{e}n Mach number (Figure \ref{fig:shock_profile}d) gradually increases from $\sim$1 to $\sim$3 during the $t \sim$10-35~minutes interval and then grows even more slowly to $\sim$4 at $t \sim$60~minutes.
\item
The shock obliquity angle analysis (Figure \ref{fig:shock_profile}e) shows that the shock is originally a quasi-perpendicular shock before $t \sim$30~minutes (with $\theta_{Bn} \sim 75 ^\circ$ at $t \sim$10~minutes) and evolves into a quasi-parallel shock (with $\theta_{Bn} \sim 30 ^\circ$ at $t = 60$~minutes). The most rapid decrease in $\theta_{Bn}$ occurs during $t \sim$22-35~minutes, the onset of which coincides with the peak time ($t=22$~minutes) of the compression ratio and \emph{Fermi} $\gamma$-ray flux shown in Figure \ref{fig:shock_profile}a. Note that the same trend of shock obliquity angle variation during CME evolution was also found by \citet{chip05}.
\end{enumerate}
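As a concrete illustration of the empirical quantity $CR\cdot\rho^{1/3}$ introduced in Item~2 above, the sketch below evaluates it for a hypothetical time series loosely shaped like the profiles in Figure~\ref{fig:shock_profile}; the numbers are illustrative placeholders, not the simulation output.

```python
import numpy as np

# Hypothetical shock-front averages vs. time (illustrative, not simulation data):
t = np.array([10, 15, 20, 30, 45, 60])            # minutes after eruption
CR = np.array([1.8, 3.4, 4.6, 4.4, 4.0, 3.7])     # compression ratio
rho = np.array([8.0, 4.0, 2.0, 0.9, 0.35, 0.18])  # upstream density (arb. units)

# Empirical gamma-ray flux proxy: compression ratio times density^(1/3)
proxy = CR * rho ** (1.0 / 3.0)
print(np.round(proxy, 2))
```

With these placeholder values the proxy peaks near $t\sim20$~minutes and then declines with the falling density even though $CR$ itself stays high, mimicking the behavior of the LAT flux described above.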
\section{Discussions}
\label{sect_discuss}
\subsection{Magnetic Connectivity: CME-Shock vs. Flare Site}
\label{subsect_magconnect}
As noted earlier in Section~\ref{subsect_connect}, the \emph{Fermi} $\gamma$-ray emission centroid is closer to the footpoints of the closed field lines connecting the source AR than those of the open field lines connecting the CME shock. It is possible that the particles accelerated at the flaring site or the areas passed by the coronal shock \citep{hudson18} can be trapped in the closed field through some mechanisms (e.g., \citealt{sheeley04}), re-accelerated, and then transported to the front side of the Sun through the connectivity established by the interaction between the erupting flux rope and the global coronal field. However, we would like to emphasize that, based on the present simulation result, it is difficult to unambiguously distinguish the potential contributions from the two groups of field lines to the observed LAT emission for four reasons. (i) The northern portion of the open-field footpoints is adjacent \emph{in space} to both the closed-field footpoints and the LAT centroid; (ii) The appearances of the closed field ($t \sim 5$~minutes) and open field ($t \sim 6$~minutes) are very close \emph{in time}; (iii) The current localization of the LAT centroid is based on an assumption of a point source for the $\gamma$-ray emission and can change if the actual source shape deviates from this assumption; (iv) Because of the proximity between the open and closed field lines, cross-field diffusion (e.g., \citealt{zhang03}) can allow shock-accelerated energetic particles to access the closed field lines as well.
\subsection{The CME-shock and $\gamma$-ray Connection}
\label{subsect_shockconnect}
However, our simulation result of the Sept14 event, together with our initial inspection of other {\it Fermi}\ BTL flares, does reveal some unique and attractive features of the CME-driven shock linked to the observed $\gamma$-rays:
\begin{enumerate}
\item
Since the shock compression ratio is one of the key parameters in the DSA mechanism that determines the energetic particle production at the shock (e.g., the particle spectral index), the temporal correlation noted in Item~1 (Section~\ref{subsect_shock}) above indicates an intimate relation between the $\gamma$-ray flux and the shock particle production. This provides clear evidence supporting the mechanism that (at least some of) the $\gamma$-ray producing particles are accelerated by the CME-driven shock.
\item
The open field is connected to a quasi-perpendicular shock early on (Item~5 in Section~\ref{subsect_shock}), which is generally believed to be an efficient particle accelerator if the upstream coronal or heliospheric magnetic field is sufficiently turbulent (e.g., \citealt{giacalone05, tylka05}). Furthermore, a recent \emph{in-situ} observation of Saturn's bow shock from the Cassini spacecraft \citep{masters17} shows that energetic electrons were only detected \emph{downstream} of the quasi-perpendicular shock, which suggests the potential importance of a quasi-perpendicular shock in accelerating particles that could escape the \emph{downstream} and propagate \emph{back to the Sun} to produce $\gamma$-rays.
\item
Another piece of supporting evidence for shock acceleration is that in all three identified \emph{Fermi} BTL events (Sept14, {\it SOL2013-10-11}, and {\it SOL2014-01-06}; \citealt{pesce15, ackermann17}), there are pre-existing open field lines (e.g., in on-disk coronal holes) near the $\gamma$-ray emission region, which could be potentially connected to the CME-driven shock.
\end{enumerate}
By using a 3D triangulation technique, \citet{plotnikov17} reconstructed the CME-driven shock structure from white-light observations of the Sept14 event. With the density and magnetic field information obtained from a \emph{static} solar corona constructed with the Magnetohydrodynamic Algorithm outside a Sphere (MAS) model \citep{lionello09}, they approximately derived the time-dependent distribution of the shock Mach number and obliquity angle. They found that the Mach number rapidly increases to supercritical values after the type-II burst onset and that the shock has a quasi-perpendicular geometry during the $\gamma$-ray emission, in general agreement with our results. However, an important distinction between their study and ours is that, instead of using a \emph{static} coronal model, we self-consistently simulated the \emph{dynamic} evolution of the CME and the CME-driven shock. This allows us to track the detailed temporal evolution of the shock and derive the shock compression ratio, which is of critical importance to particle acceleration by shocks. In addition, we found that the shock geometry evolves and changes from quasi-perpendicular to quasi-parallel, instead of remaining quasi-perpendicular all the time.
We also briefly discuss the shock Alfv\'{e}n Mach number evolution derived from the simulation. Note that when this number is around unity (see Item~4 in Section~\ref{subsect_shock}), stochastic acceleration of particles by plasma turbulence (e.g., in the downstream of the shock) is more efficient than DSA \citep{petrosian16}. Stochastically accelerated particles could also serve as the seed population to be further accelerated by the shock. This could be the case early on, when the shock compression ratio is also relatively low, and could be related to the rapid rise in the detected LAT $\gamma$-ray flux. Later on, when the shock Alfv\'{e}n Mach number and compression ratio are sufficiently large, shock acceleration would be more important and can account for the gradual, long-duration $\gamma$-ray emission. Therefore, it is likely that, in addition to DSA, stochastic acceleration could also play a role in the Sept14 event, especially in the early stage. On the other hand, as mentioned in \S 3.1, the simulated CME speed is 10-14\% slower than the observed one. Considering that the actual shock speed is higher than the simulated one, the actual Alfv\'{e}n Mach number will also be higher. The critical Alfv\'{e}n Mach number is estimated between 1 and 1.7 for the solar wind plasma with $\gamma = \frac{5}{3}$ and $\beta = 1.0$ \citep{edmiston84}. Therefore, it is also possible that the actual shock already becomes supercritical in the early stage. In summary, we believe multiple acceleration mechanisms could be important in the beginning, but DSA might be the dominant one after 20 minutes.
\subsection{Effect of Magnetic Mirroring}
\label{subsect_magmirror}
Finally, we discuss the possibility of particle transport back to the Sun from the CME-driven shock. This may appear difficult because of strong magnetic mirroring due to the high degree of convergence of magnetic field lines toward the Sun (with a mirror ratio of $\eta=B_{\odot}/B _{\rm CME}\gg 1$) and thus extremely small loss cones (e.g., a few degrees; \citealt{klein18}, see their Section 8.4.4). In an ideal \emph{scattering-free} environment, this could potentially prevent particles from reaching the photosphere to produce $\gamma$-rays. In reality, however, the CME environment in general, and the shock downstream region in particular, are most likely \emph{highly turbulent}, rendering a sufficiently short scattering mean free path that can continuously scatter particles into the loss cone and thus allow them to precipitate to the Sun. How fast and what fraction of the particles can reach the photosphere depend on the relative values of the duration $\Delta T$ of the emission and the escape time from the trap, $T_{\rm esc}$. The latter depends on the mirror ratio $\eta$, the scattering time $\tau_{\rm sc}$, and the crossing time from the shock back to the Sun, $\tau_{\rm cross}\sim L/v_p$, where $v_p\sim c$ is the velocity of GeV protons responsible for $\gamma$-rays and $L=v_{\rm CME}\Delta T$ is the distance between the CME and the Sun. With some analytical and numerical treatments, \citet{malyshkin01} gave an approximate relationship between the escape time and the three variables $\eta$, $\tau_{\rm sc}$, and $\tau_{\rm cross}$ (see also Figure~2 of \citealt{petrosian16}):
\begin{equation}
T_{\rm esc}=\tau_{\rm cross}\left(2\eta+\frac{\tau_{\rm cross}}{\tau_{\rm sc}}+\ln\eta\frac{\tau_{\rm sc}}{\tau_{\rm cross}}\right).
\end{equation}
Recent numerical simulations by \citet{effenberger18} have confirmed this relation. The upshot of this result is that for an \emph{isotropic} pitch-angle distribution, $T_{\rm esc}\sim 2\eta \tau_{\rm cross}$ for $\tau_{\rm sc}\sim \tau_{\rm cross}$ and $\eta \gg 1$, which means that $T_{\rm esc}/\Delta T\sim 2\eta v_{\rm CME}/c \sim \eta/100$ for a CME speed of $v_{\rm CME}=1,500$ km/s. Therefore, for $\eta\lesssim 100$, we have $T_{\rm esc} \lesssim \Delta T$; i.e., a large fraction of the downstream GeV protons can reach the photosphere within the emission duration $\Delta T$. We have tracked the history of the magnetic field strength along the field lines connecting the shock to the solar surface in our simulation and found that the median value of the mirror ratio $\eta$ increases from $\sim 10$ to $\sim 100$ during the first hour (Figure \ref{fig:shock_profile}f). Thus, for the first few hours, which is the duration of most \emph{Fermi} events, the difficulty of particle transport back to the Sun due to magnetic mirroring can be overcome with a scattering mean free path on the order of a solar radius, or a scattering time of $\tau_{\rm sc}\sim 2$ s. For protons with a gyro-frequency of $\Omega_p\sim15\cdot\left[\frac{B}{mG}\right]$ Hz and a magnetic field of $\sim$100 mG (the typical average value on the shock surface in the first hour of the simulation), this would require a fractional turbulence energy of $(\delta B/B)^2\sim 1/(\Omega_p \tau_{\rm sc})\sim 3\times10^{-4}$. According to \citet{effenberger18}, the situation is similar for a \emph{pancake} pitch-angle distribution. But for a distribution \emph{beamed} along the field lines, the escape time will be shorter, thus facilitating particle precipitation back to the Sun.
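To see how these estimates fit together numerically, the sketch below evaluates the escape-time formula and the turbulence-level estimate quoted above; all inputs (CME speed, emission duration, scattering time, field strength) are the representative values from the text, not simulation output.

```python
import math

C_KM_S = 2.998e5        # speed of light in km/s (GeV protons have v_p ~ c)

def escape_time(eta, tau_sc, tau_cross):
    """Trap escape time for an isotropic pitch-angle distribution
    (Malyshkin & Kulsrud 2001 approximation)."""
    return tau_cross * (2.0 * eta
                        + tau_cross / tau_sc
                        + math.log(eta) * tau_sc / tau_cross)

# Shock-Sun crossing time for L = v_CME * dT with v_CME = 1500 km/s:
v_cme = 1500.0          # km/s
dT = 3600.0             # s, ~1 hour emission duration
tau_cross = v_cme * dT / C_KM_S
for eta in (10.0, 100.0):
    ratio = escape_time(eta, tau_cross, tau_cross) / dT
    print(f"eta = {eta:5.0f}: T_esc / dT = {ratio:.2f}")

# Turbulence level needed for tau_sc ~ 2 s at B ~ 100 mG,
# with proton gyro-frequency Omega_p ~ 15 * (B / mG) Hz:
tau_sc = 2.0
Omega_p = 15.0 * 100.0  # Hz
dB_over_B_sq = 1.0 / (Omega_p * tau_sc)
print(f"(dB/B)^2 ~ {dB_over_B_sq:.1e}")
```

For $\eta=100$ and $\tau_{\rm sc}\sim\tau_{\rm cross}$ this gives $T_{\rm esc}/\Delta T\approx 1$, reproducing the $\eta/100$ scaling quoted above, and the turbulence fraction comes out near $3\times10^{-4}$.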
\section{Summary \& Conclusion}
\label{sect_summary}
In this study, we simulated the CME associated with a well-known \emph{Fermi} BTL flare on September 1, 2014 by using a data-driven global MHD model AWSoM within SWMF. We tracked the dynamic evolution of the global magnetic field and the CME-driven shock and investigated the magnetic connectivity between the shock and the region around the centroid of the \emph{Fermi}-LAT $\gamma$-ray source. We found supporting evidence for the hypothesis that the observed $\gamma$-ray emission is produced by particles that are accelerated in the CME environment and escape the shock downstream region along magnetic field lines connected to regions on the Sun far away from the hosting AR of the flare. Our specific findings are summarized as follows:
\begin{enumerate}
\item
To enable the high-energy particle precipitation and thus $\gamma$-ray emission on the front side of the Sun, certain \emph{magnetic connectivity} must be established between the emission region and the flare source AR or the CME-driven shock. In our simulation, both types of connections are present and appear close in space and time within the first few minutes of the event, as a result of the interaction between the erupting flux-rope magnetic field and the global solar corona. The CME-driven shock is connected to the front side of the Sun by open magnetic field lines that originate from an on-disk coronal hole. This part of the shock surface is away from the flux-rope driver and the shock nose that are propagating in a different direction. Such open-field configurations represent a favorable condition for connecting the CME-driven shock back to the solar surface, and have been identified in all three \emph{Fermi} BTL events reported so far.
\item
Within the shock surface connected to the front side of the Sun, the \emph{shock properties} vary significantly with time and space. The temporal evolution of the compression ratio, and thus the rate of particle acceleration by the shock, is closely correlated with the {\it Fermi}\ $\gamma$-ray flux, suggestive of a causal relationship. In addition, this part of the shock is initially a quasi-perpendicular shock and later evolves into a quasi-parallel shock, the former of which is believed to be an effective particle accelerator.
\item
These findings provide strong support for the aforementioned hypothesis and indicate that the \emph{CME-driven shock} can play an important role in accelerating particles that then travel back to the Sun to produce observed $\gamma$-rays. In addition to DSA, \emph{stochastic acceleration} by plasma turbulence may play a role as well, especially in the shock downstream region and during the early stage of the event.
\end{enumerate}
The present study is among the first attempts to solve the puzzle of {\it Fermi}\ BTL $\gamma$-ray flares. The identified mechanisms, in general, could be at work in on-disk \emph{Fermi} flares as well and can potentially solve another puzzle, i.e., long-duration $\gamma$-ray flares. BTL and long-duration $\gamma$-ray flares could be viewed as two faces of the same puzzle, with the $\gamma$-ray emission being \emph{spatially separated} in the former and \emph{temporally delayed} in the latter from the main flare emission commonly observed at longer wavelengths. In fact, recent statistical studies show that long-duration $\gamma$-ray flares observed by \emph{Fermi}/LAT are always associated with wide, fast CMEs \citep{winter18} and Type II radio bursts \citep{gopal18b}. These results strongly support the scenario that long-duration $\gamma$-rays are produced by shock-accelerated protons precipitating back to the Sun.
Ultimately, one needs to self-consistently couple MHD simulations with particle acceleration, escape, and transport models (e.g., Borovikov et al. 2017; \citealt{hu18}). Furthermore, a comparative study is needed among not only the BTL events but also the on-disk events, by combining observational and simulation efforts, which we plan to pursue in future studies.
\begin{acknowledgements}
We are very grateful to the referee for invaluable comments that helped improve the paper. M.J. and W.L. were supported by NASA's {\it SDO}/AIA contract NNG04EA00C to LMSAL. W.L. was also supported by NASA HGI grants NNX15AR15G and NNX16AF78G and LWS grant NNX14AJ49G; N.V.N. by NSF grant AGS-1259549; F.E. by NASA grant NNX17AK25G; G.L. by NASA grants NNX17AI17G and NNX17AK25G; and W.M. by NASA grant NNX16AL12G and NSF grant AGS-1322543. We are thankful for the use of the NASA Supercomputer Pleiades at Ames and to its supporting staff for making it possible to perform the simulations presented in this paper.
The \emph{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \'{a} l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'{e}aire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. {\it SDO} is the first mission of the NASA's Living With a Star (LWS) Program.
\end{acknowledgements}
\newpage